
Whose Choice Is It? Yours or the Algorithm's?

Ernest N Ezeogu



How We Get Caught in the Web

You have a report to submit by 9 AM, but you missed the Monday night match of your favorite soccer team. You decide to watch the highlights on YouTube before you finish the report. After the first highlight, you begin to see highlights of other games your team has played in the past. You decide to indulge a little more. Before you know it, it is already 10 AM, you have missed the deadline to submit your report, and you wonder how this could have happened: you are a victim of the "rabbit hole effect" of a recommender system.


Most of us have been in similar situations: you engage with a website or an online application and search for an item, and suddenly the site begins recommending similar items to you. Recommender systems are software tools and techniques that suggest items likely to interest a user; the items can range from books and movies to recipes and soccer highlights.


Owing to the datafication of information channels, enormous and complex data sets are now the order of the day. Analyzing these vast data sets manually is arduous and has spiraled beyond human capability. Machine learning (ML) algorithms are therefore developed to analyze these data sets, detect patterns, draw conclusions, and produce specific outputs with increasing speed and accuracy. ML algorithms are a game-changer. According to Essinger and Rosen, machine learning is a subfield of artificial intelligence that evolved from teaching computers to automatically learn a solution to a problem (Essinger & Rosen, 2011).


According to Richert and Coelho, the primary goal of machine learning is to teach algorithms to carry out tasks by providing them with examples of what to do and what not to do (Richert & Coelho, 2013). Machine learning tasks fall into three broad types: supervised, unsupervised, and reinforcement learning. Supervised machine learning algorithms are trained on structured historical data. Through this training, the algorithm learns to distinguish between given concepts, so it can reach more accurate conclusions when it encounters new data in the form of real-life "examples" (Carbonell, 1989). Many recommender systems are built on supervised machine learning.
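To make this concrete, here is a minimal sketch of supervised learning in Python using scikit-learn. The feature encoding, the "liked/skipped" labels, and the scenario are all invented for illustration; a real recommender learns from far richer data.

```python
# A minimal supervised-learning sketch: the model sees labeled examples
# (liked vs. skipped) and learns to classify items it has never seen.
# All features and labels below are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row describes a video: [duration_minutes, is_sports, is_music]
X_train = [
    [10, 1, 0],  # short sports highlight -> liked
    [12, 1, 0],  # short sports highlight -> liked
    [45, 0, 1],  # long music concert     -> skipped
    [60, 0, 0],  # long documentary       -> skipped
]
y_train = [1, 1, 0, 0]  # 1 = liked, 0 = skipped

model = DecisionTreeClassifier().fit(X_train, y_train)

# A new, unseen item: an 8-minute sports clip.
print(model.predict([[8, 1, 0]]))  # -> [1], i.e. predicted "liked"
```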



Image from: Penn Today - University of Pennsylvania


What Are Recommender Systems?


Recommender systems are widely used by social media networks like YouTube, Facebook, Twitter, and Instagram. They are algorithms that suggest relevant items to users, where "items" can mean content, products, solutions, or even other users. The range of items offered to users on the internet is so broad that automated recommender systems became a necessity. Recommender systems provide a personalized selection of relevant items to users (Jeckmans et al., 2013) and help overcome the problem of information overload in online applications. Many domains have adopted them, including e-commerce, healthcare, transportation, agriculture, and media. Recommender systems are of great importance to online vendors like Jumia, Amazon, and Netflix because user information helps them predict demand, build customer loyalty, and increase cross-selling possibilities (Peppers et al., 1999). More details about a user equate to more personalized recommendations (Jeckmans et al., 2013).


Users have come to rely heavily on recommendation systems because they provide convenience and better service matching, save time and effort, and promote an optimal user experience (Jeckmans et al., 2013). However, the benefits of this sophisticated personalization technology inherently come at the cost of some privacy (Dwork, 2006). The convenience attached to personalization services may rely on unsolicited data collection (Lin et al., 2012), and recommendation systems may share data with third parties (Bennett & Lanning, 2007). This phenomenon is known as the "privacy-personalization tradeoff" (Zang et al., 2014).



Recommender systems share a common characteristic: they require information on the user's attributes, demands, or preferences to generate personalized recommendations. The more detailed the information related to the user, the more accurate the recommendations (Jeckmans et al., 2013). Jeckmans also points out that the service providers running recommender systems collect information wherever possible to ensure accurate recommendations. This information is either collected automatically, as users interact with the system and act on its recommendations, or provided explicitly by the user; the collected data is then deployed to make better recommendations, as the sketch below illustrates. Aside from making recommendations, a recommender system is intended to help the user with decision making, especially in cases where the items are already known (Burke, Felfernig, & Göker, 2011).
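As a rough illustration of those two channels, the sketch below separates explicit feedback (ratings the user deliberately provides) from implicit feedback (interactions logged automatically). All class and field names are hypothetical.

```python
# A sketch of the two data channels a recommender can draw on.
# Class and field names are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    explicit_ratings: dict = field(default_factory=dict)  # user-provided
    implicit_clicks: list = field(default_factory=list)   # auto-collected

    def rate(self, item_id: str, stars: int) -> None:
        """Explicit feedback: the user deliberately rates an item."""
        self.explicit_ratings[item_id] = stars

    def record_click(self, item_id: str) -> None:
        """Implicit feedback: logged silently as the user browses."""
        self.implicit_clicks.append(item_id)

profile = UserProfile("user_42")
profile.rate("match_highlights_01", 5)       # the user tells the system directly
profile.record_click("match_highlights_02")  # interest inferred from behavior
print(profile)
```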



A successful recommender system (RS) is achieved by extensively acquiring, storing, and processing user data (Zang et al., 2014), and this rich data store is a potential target for hackers. The recommender systems used by digital platforms keep us engaged; however, they often promote misinformation, abuse, and polarization. These algorithms are curatorial: they can up-rank or down-rank content. Recommendation systems generally function in one of two ways (Jeckmans et al., 2013). The first is the content-based system, in which the algorithm recommends content, products, or solutions similar to what the user previously liked. The second is what's called a collaborative filtering system, which recommends items based on what people it has determined to be similar to the user have liked. Either way, recommender algorithms have an overbearing influence on what the user sees and engages with, thereby shaping behavioral patterns. This ability to keep users engaged for long periods comes with ethical challenges, because lengthened exposure to similar content can affect behavior, alter perception, and influence election results. A recommender system is a double-edged sword; it can become a tool or a weapon.
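The sketch below contrasts the two approaches with toy data: content-based filtering matches item features against what the user liked, while collaborative filtering leans on the ratings of similar users. The items, feature vectors, and ratings are all invented.

```python
# Toy versions of the two approaches described above, both built on
# cosine similarity. All items, features, and ratings are invented.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# --- Content-based: recommend items similar to what the user liked ---
# Item feature vectors: [sports, music, news]
items = {
    "goal_reel":   np.array([1.0, 0.0, 0.0]),
    "concert":     np.array([0.0, 1.0, 0.0]),
    "derby_recap": np.array([0.9, 0.0, 0.1]),
}
liked = items["goal_reel"]
scores = {name: cosine(liked, vec)
          for name, vec in items.items() if name != "goal_reel"}
print(max(scores, key=scores.get))  # -> derby_recap (closest to what was liked)

# --- Collaborative filtering: lean on ratings from similar users ---
# Rows = users, columns = items; 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0],  # our user: the third item is unrated
    [5, 4, 5],  # a user with very similar tastes
    [1, 1, 4],  # a user with different tastes
])
target, others = ratings[0], ratings[1:]
sims = np.array([cosine(target, u) for u in others])
# Predict the unrated item as a similarity-weighted average of others' ratings.
pred = sims @ others[:, 2] / sims.sum()
print(round(pred, 1))  # -> ~4.7: the similar user's high rating dominates
```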


RSs are "black-box" systems: it is difficult to explain how they reach conclusions and make recommendations, and this lack of transparency makes it possible for an RS to be manipulated for ulterior motives. The air of mystery also makes it challenging to hold anyone accountable when RSs produce unwanted results. Because RSs personalize recommendations, users get exposed to a narrower range of content over time, drastically reducing content diversity and causing the "down the rabbit hole" effect, where users are incrementally recommended more extreme content (Albright, 2018). Inaccurate or insufficient data also poses a challenge, reducing the accuracy of recommendations, especially for new users and new items; this problem is known as "cold start." The quality of data an RS receives determines the quality of the recommendations it gives; biased data will inadvertently lead to biased recommendations that can fuel inequality and marginalization: garbage in, garbage out. The metrics used to decide which content gets recommended can be identified and exploited by external actors such as human operators, governments, advertisers, and (big) companies, who may thereby influence the RS (Zoetekouw, 2019). Users generally expect openness and transparency from online applications. When RSs are tampered with, users are fed content that has been heavily influenced by external forces, shaping their views and opinions without their knowledge.
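To see how such narrowing can emerge, here is a deliberately naive simulation: a feed that slightly boosts whatever topic the user just engaged with. The policy and numbers are invented and far simpler than any real system, but the feedback loop is the point.

```python
# A toy feedback loop: every click slightly boosts the clicked topic,
# which makes it more likely to be shown again. Purely illustrative.
import random

random.seed(0)
topics = ["sports", "music", "news", "cooking"]
weights = {t: 1.0 for t in topics}  # start with a fully diverse feed

for _ in range(200):
    # Show a topic in proportion to its current weight ...
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    # ... and assume the user clicks whatever is shown, boosting it.
    weights[shown] *= 1.1

total = sum(weights.values())
for t in topics:
    print(f"{t:8s} {weights[t] / total:6.1%} of the feed")
# Typically one topic ends up dominating: the rabbit hole in miniature.
```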


The Good and Ugly Side of Recommender Systems


1. Recommender systems (RSs) can determine whose resume is viewed by recruiters.

2. RSs can determine who gets employed, promoted, or fired.

3. They can decide who sees advertisements for open positions, housing, and products.

4. They can estimate a person's risk of committing a crime or the likely length of a prison term.

5. They can examine and allocate insurance and benefits, and determine access to credit.

6. They can rank and curate news and information in search engines (Caplan et al., 2018).


Powerful algorithms work behind the scenes and are used for searching, aggregation, surveillance, forecasting, filtering, recommendation, scoring, content production, and allocation (Saurwein, Just, & Latzer, 2015). Recommender systems are the new sheriffs in town.


Recommender Systems and Misinformation

A famous case of recommender systems being tampered with to shape opinions, influence views, and spread misinformation in order to sway an election was reported during the 2016 United States presidential election. Social media platforms served as campaign tools: the candidates' teams used YouTube, Facebook, Twitter, Instagram, and Snapchat to win votes and connect with existing supporters. Voters reported getting stuck in a filter bubble, saying they could not see content from candidates they had not previously searched for (El-Bermawy, 2016); the recommender systems used by tech companies essentially surfaced content from only one of the candidates. Recommender systems induce the "rabbit hole effect": the system adjusts itself to mirror the user's preferences and keeps recommending only related items. In this way, content from one candidate can be curated up while another's is down-ranked, effectively influencing election results. El-Khalili likewise reports how the Egyptian government used recommender systems as a propaganda tool in the wake of the Egyptian revolution of 2011, which ended with the resignation of President Mubarak and scores of casualties (El-Khalili, 2013).


RSs come with challenges such as vulnerability to manipulation; biases and distortions of reality; surveillance; threats to data protection and privacy; social discrimination; violation of intellectual property rights; abuse of market power; effects on cognitive capabilities; growing heteronomy; and loss of human sovereignty and controllability of technology (Bashir, 2020). With the diffusion of algorithmic journalism into mainstream journalism, a governance perspective is needed in the analysis, assessment, and improved regulation of algorithms. Social media platforms like Facebook and Twitter can be used to disseminate fake and misleading information on a massive scale (Ndlela, 2020), and search engines like Google can be manipulated by the wealthy and technologically savvy members of society to produce distorted results for ulterior motives.


For example, the 2015 general elections in Nigeria are described as among the most divisive in our national history, and some of the fake news generated by Cambridge Analytica found its way into mainstream media in Nigeria (Oparah, 2015). The deployment of algorithms to influence political outcomes can endanger our democracy and further heat up the polity. That algorithms can be deployed to manipulate public opinion, with adverse consequences, lends credence to the argument that recommender systems need to be regulated both within the industry and by the authorities.


Recommender systems, however, have also been applied to nobler causes: the language app Duolingo uses one to help people learning a new language stick with the program, and recommender systems power video games that help distract cancer patients as they receive painful treatments. Tech professionals have proposed Nir Eyal's "Regret Test" as a litmus test for building recommender systems. The ethical questions to ask are: "If people knew everything the product designer knows, would they still execute the intended behavior? Are they likely to regret doing this?"







REFERENCES

Bashir, A. S. (2020). Algorithm governance framework for media regulation in the Nigerian media system. https://www.researchgate.net/publication/341047572 [Accessed September 28, 2019]


Albright, J. (2018). Untrue-Tube: Monetizing misery and disinformation. Medium. https://medium.com/@d1gi/untrue-tube-monetizing-misery-and-disinformation-388c4786cc3d


Bennett, J., & Lanning, S. (2007). The Netflix Prize. In Proceedings of KDD Cup and Workshop (p. 35).


Burke, R., Felfernig, A., & Göker, M. H. (2011). Recommender systems: An overview. AI Magazine, 32(3), 13–18. https://doi.org/10.1609/aimag.v32i3.2361


Carbonell, J. G. (1989). Introduction: Paradigms for machine learning. Artificial Intelligence, 40(1–3), 1–9. https://doi.org/10.1016/0004-3702(89)90045-3


Dwork, C. (2006). Differential privacy. In Automata, Languages and Programming: 33rd International Colloquium, ICALP 2006, Venice, Italy, July 10–14, 2006, Proceedings, Part II (pp. 1–12).


Caplan, R., Donovan, J., Hanson, L., & Matthews, J. (2018). Algorithmic accountability: A primer. Data & Society.


El-Bermawy, M. M. (2016). Your filter bubble is destroying democracy. Wired. https://www.wired.com/2016/11/filter-bubble-destroying-democracy/

Essinger, S., & Rosen, G. L. (2011). An introduction to machine learning for students in secondary education. 2011 Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE), 243–248. https://doi.org/10.1109/DSP-SPE.2011.5739219


Jeckmans, A. J. P., Beye, M., Erkin, Z., Hartel, P. H., Lagendijk, R. L., & Tang, Q. (2013). Privacy in recommender systems. In Social Media Retrieval. Springer.


Lin, J., Amini, S., Hong, J. I., Sadeh, N., Lindqvist, J., & Zhang, J. (2012). Expectation and purpose: Understanding users' mental models of mobile app privacy through crowdsourcing. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing (pp. 501–510).


Ndlela, M. N. (2020). Social media and elections in Africa. In M. N. Ndlela & W. Mano (Eds.), Social Media and Elections in Africa, Volume 1 (pp. 13–38). Cham: Palgrave Macmillan.


Oparah, P. C. (2015). So is Buhari indeed a religious fanatic? Sahara Reporters. http://saharareporters.com/2015/01/27/so-buhari-indeed-religious-fanatic-peter-claver-oparah


Peppers, D., Rogers, M., & Dorf, B. (1999). Is your company ready for one-to-one marketing? Harvard Business Review, 77(1), 151–160.


Richert, W., & Coelho, L. P. (2013). Getting started with Python machine learning. In Building Machine Learning Systems with Python (pp. 7–8). Packt Publishing.


Saurwein, F., Just, N., & Latzer, M. (2015). Governance of algorithms: Options and limitations. info, 17(6), 35–49.


Tufekci, Z. (2018). YouTube, the great radicalizer. The New York Times. https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html

Zoetekouw, K. F. A. (2019). A critical analysis of the negative consequences caused by recommender systems.
