Artificial intelligence (AI) is not the issue with recommender systems; any solution to improving online recommendations will likely involve some form of AI. A core problem with current approaches is that they derive recommendations primarily from human behaviour, often simply to keep us engaged on the platform. People's choices provide essential information about what matters to them, but, as Amartya Sen forcefully argued, choices cannot wholly explain motivations. Perhaps the most salient reason for this gap is that choice does not necessarily reflect a maximization of preferences; it can be driven by other motives. For example, engaging with misleading information online may reflect a constraint on the quality of available information rather than a preference for false information.

Recommender systems based on behaviour therefore do not allow people to ground recommendations in broader aspects of what matters to them. This is key for human development, since it relates to how people can exercise their agency and, ultimately, their freedom. From a human development perspective, this concern is fundamental, though perhaps less visible than other problems with behaviour-based recommendations, such as the exploitation of what psychologist Daniel Kahneman called System 1 thinking (behavioural biases that digital platforms exploit for engagement) and the difficulty of accounting for heterogeneity in preferences.