3.1. The MIL Algorithm-Literacy Matrix of Scenarios of Use
Overall, the scenarios of use, as reflected in the podcasts, mimicked four major information search strategies: by notional keywords, by communities of affinity, by influencer accounts and by tool affordances, as the Dashboard grew increasingly agile with the expansion of its database (see Table 1). This implied being able to navigate across social media, mass media and print channels and to validate and modify information across domains (from data to news to documents), as suggested by transliteracy theory. It was thus possible to develop a trajectory for users, from online sources to data traces to evidence-building in real-life circumstances.
In the process, the scenarios of use provided insights into three major roles of algorithms (ranking, recommending and predicting) in a task-oriented way. This did not so much increase the transparency of algorithms as the transparency of their uses, eliciting the notion that even if the algorithms themselves cannot be modified, their uses and the practices around them can. The initial focus on information (rather than disinformation) was equally rewarding, as it became apparent that the point was not to stop algorithms but to stop the amplification of disinformation, thus raising ethical issues among the users.
The findings of the investigations showed the actual workings of the algorithms at the users’ end (not at the platforms’ API end) and, as a consequence, the competences mobilised and the societal issues addressed. The first two podcasts laid the stress on the investigative and search dimensions of the strategies, while the last two also added a reflexive dimension, as journalists, fact-checkers, developers and MIL experts objectified their practices.
Scenario 1, “the keyboard fighters”, showed the mismatch between online calls for action and real-life mobilisations: the “liberty convoy” calls, which seemed threatening online, turned out to be insubstantial in real life. The role of algorithmic ranking was thus elucidated in relation to user search. The MIL lesson drawn was that disinformation did not always work and had to be checked against facts (see podcast 1).
Scenario 2, “algorithms and propaganda: dangerous liaisons”, revealed how algorithms tended to promote state propaganda: when Russia Today was banned by European Union decision (due to the war in Ukraine), algorithms recommended another state-controlled outlet, CGTN, the state channel of the Chinese Communist Party, which relayed Russian propaganda. The role of algorithmic recommendation was thus exposed in relation to user engagement. The MIL lesson drawn was that disinformation was amplified along polarised lines (see podcast 2).
Scenario 3, “how algorithms changed my life”, unveiled how conspiracy theories circulated on influential accounts in “censorship-free”, unmoderated networks like Odysee. It followed an influencer, the far-right political personality Dries Van Langenhove, who promoted racism, violence and anti-COVID stances. The role of algorithmic recommendation was thus unveiled in relation to user echo chambers. The MIL lesson drawn was that information diversity was key to avoiding the rabbit holes of the attention economy (see podcast 3).
Scenario 4, “the algorithm watchers”, demonstrated how Google auto-complete systematically offered users the Donbass Insider recommendation when they typed “Donbass” in the search bar, across all the user-meters. Donbass Insider relayed false Russian messages about the war in Ukraine and was linked to Christelle Néant, a Franco-Russian pro-Kremlin blogger and self-styled journalist. The role of algorithmic prediction was thus revealed in relation to user interactions with tool affordances. The MIL lesson drawn was that queries and prompts can lead to automated bias and manipulation (see podcast 4).
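To make the monitoring principle concrete, the sketch below polls Google’s public (unofficial, though widely documented) suggest endpoint and logs the auto-complete predictions returned for a query term. It is only a minimal illustration at the users’ end, not the Crossover user-meter itself, whose implementation is not detailed here; the endpoint URL and parameters are assumptions based on its commonly documented form, and Google may throttle or change it at any time.

```python
import json
import time
import urllib.parse
import urllib.request

# Public, unofficial Google suggest endpoint (assumed form, widely documented).
SUGGEST_URL = "https://suggestqueries.google.com/complete/search"

def autocomplete(term: str, lang: str = "fr") -> list[str]:
    """Fetch the auto-complete predictions Google offers for a term."""
    params = urllib.parse.urlencode({"client": "firefox", "hl": lang, "q": term})
    with urllib.request.urlopen(f"{SUGGEST_URL}?{params}", timeout=10) as resp:
        charset = resp.headers.get_content_charset() or "utf-8"
        payload = json.loads(resp.read().decode(charset))
    return payload[1]  # response shape: [query, [prediction, prediction, ...]]

if __name__ == "__main__":
    # Log the predictions offered for "donbass" at regular intervals,
    # e.g. to check whether one source is systematically pushed over time.
    for _ in range(3):
        print(time.strftime("%Y-%m-%d %H:%M"), autocomplete("donbass"))
        time.sleep(60)
```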
The scenarios-of-use method proved efficient at describing user interactions with the various social media and online platforms and at unveiling the role of algorithms in their interplay with information and user engagement. The scenarios provided insights into the workings of such systems, yielding some surprises and undermining some “faulty” early hypotheses and predictions, as the developers, fact-checkers and journalists followed through with their real-life investigations. This task-oriented perspective, close to their everyday practice, was further elicited in the conversations held in the podcasts with the MIL experts, especially the last two, which focused on how algorithms changed their working strategies.
The scenarios of use also indicated a shift in the modes of conducting information search, in particular in relation to sources and evidence-building. Users are no longer dealing with secret or opaque sources but with contingent, voluminous amounts of data that require interpretation, with the help of specific tools and with an awareness of how algorithms work. This shift was made visible by the journalists involved in project Crossover, who likened it to a form of “forensics” that required a different way of conceptualising inquiry (podcast 4). They saw a positive use of algorithms as an “early signal” of phenomena that might develop and that are worth monitoring and pursuing (podcast 3). They described a kind of algo-journalism, focused on demand, riding the algorithms in a two-step process: online trend detection followed by the selection of topics worth delving into, as sketched below. This algo-journalism “includes sorting information, reviewing it, presenting it visually or even using robots to write articles… And we almost systematically use algorithms with artificial intelligence to process all that” (podcast 3).
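As a toy illustration of that two-step process, the following sketch flags topics whose latest daily mention count spikes above their recent baseline, leaving the second step, editorial selection, to human judgment. The function name, the z-score rule and the sample counts are illustrative assumptions; the actual detection pipeline behind the Crossover Dashboard is not described in this study.

```python
from statistics import mean, stdev

def early_signals(counts_by_topic: dict[str, list[int]],
                  threshold: float = 3.0) -> list[tuple[str, float]]:
    """Step 1: flag topics whose latest mention count spikes above baseline.

    counts_by_topic maps a topic to its daily mention counts, oldest first.
    A topic is flagged when the latest count exceeds the baseline mean by
    `threshold` standard deviations (a simple z-score anomaly test).
    """
    flagged = []
    for topic, counts in counts_by_topic.items():
        baseline, latest = counts[:-1], counts[-1]
        if len(baseline) < 2:
            continue  # not enough history to estimate a baseline
        mu, sigma = mean(baseline), stdev(baseline)
        score = (latest - mu) / sigma if sigma > 0 else 0.0
        if score >= threshold:
            flagged.append((topic, round(score, 1)))
    # Step 2 -- editorial selection -- stays human: journalists review the
    # flagged list and decide which spikes are worth delving into.
    return sorted(flagged, key=lambda t: t[1], reverse=True)

print(early_signals({
    "donbass": [4, 5, 3, 6, 4, 5, 41],      # sudden spike -> early signal
    "weather": [20, 22, 19, 21, 20, 23, 22],  # steady -> ignored
}))
```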
More broadly, the scenarios of use also made visible the engineering of attention via algorithms. The topics chosen for inquiry, pushed by algorithms, revealed how much this attention is based on emotions, especially fear, which generates traffic even when that traffic is based on propaganda, bias or manipulation (podcast 2). The intricate patterns between engagement and recommendation are particularly telling about how participation, presented as a positive attitude online, can be weaponised to bend offline attitudes (podcast 2), though not always successfully (podcast 1).
Finally, the scenarios of use also pointed to the possibility of new mediations: journalists, developers and MIL experts came together in engaging collaborative work. The Dashboard was improved through an agile method, as the various investigations led to new strategies akin to pre-bunking, befitting the fact-checking mission (podcasts 3 and 4). The Dashboard also introduced a tooled mediation that could offer a counterbalance to the algorithmic mediation captured by the major online platforms (Google and Meta in particular).
3.2. MIL AL-Competence Framework
The scenarios of use enabled the MIL experts to derive a number of valuable “lessons learnt”. They made it possible to understand how online actions (such as queries and prompts) were algorithmically conditioned, shaping access to information and the individualisation of results and outcomes (as verified by the user-meter vs. API analysis). They made it possible to combine awareness of processes with knowledge about functions, in particular ranking, recommending and predicting. The experts could thus derive the competences required for users to deal with algorithms in their daily practices.
More importantly, some meta-competences emerged together with specific micro-competences, pointing to strategies and solutions at both the individual and the collective level. The value of considering developments in journalism (media) alongside the description of platforms’ algorithmic applications (data) and the results yielded (documents) confirmed the usefulness of transliteracy theory for embedding algorithm literacy in Media and Information Literacy (see the last section of podcasts 1, 2, 3 and 4).
For media, the meta-competence was related to understanding the context of production and distribution of algorithms and their cultural and societal implications. The ensuing micro-competences were distributed along areas related to knowledge, skills, attitudes and values:
- Know the new context of news production and amplification via algorithms;
- Pay attention to emotions and how they are stirred by sensationalist content, and take a step back from “hot” news;
- Be suspicious and aware of “weak signals” of disinformation (lack of traffic on some accounts, except on some divisive topics; very little activity among and across followers of a so-called popular website or community, …);
- Fight confirmation bias and other cognitive biases.
For documents, the meta-competence was related to the mastery of information search and platform navigation, in particular the controlled and diversified use of sources as pushed by algorithms. The ensuing micro-competences were distributed along areas related to knowledge, skills, attitudes and values:
- Vary sources of information;
- Be vigilant about divisive issues where opinion dominates and facts and sources are not presented;
- Modify social media uses to avoid filter bubbles and (unsolicited) echo chambers;
- Set limits to tracking so as to reduce targeting (as less data are collected from your devices);
- Regularly deactivate some functionalities and set the parameters of your accounts;
- Browse anonymously (use VPNs).
For data, the meta-competence was related to the control or oversight of algorithmic patterns, in particular for the sake of transparency and accountability. The ensuing micro-competences were distributed along areas related to knowledge, skills, attitudes and values:
- Decipher algorithms, their biases and platform responsibility;
- “Ride” algorithms for specific purposes;
- Pay attention to the GDPR and platforms’ loyalty to data protection;
- Mobilise for more transparency and accountability about their impact;
- Require social networks to delete fake news accounts, ban toxic personalities and moderate content;
- Encourage the creation of information verification sites and use them;
- Use technical fact-checking tools like the Dashboard or InVID-WeVerify;
- Signal or report to platforms or web managers when misuses are detected;
- Comment on and/or rectify “fake news” whenever possible;
- Alert fact-checkers, journalists or the community of affinity.
The MIL experts deemed it important to emphasise user agency and reactivity by adding explicit and implicit actions to curate algorithms and adjust browsing behaviour, as evidenced in the Crossover project. They were intent on elucidating the mechanics of algorithms and the processes at stake, so as to prevent algorithmic risks and empower users to ride algorithms for their own information consumption.
3.3. The Knowledge Base with Pedagogical Pathways and Design Considerations
The competence domains and attendant micro-competences were picked up in the interactive quizzes and their accompanying documents. The four interactive quizzes offered many options, such as “drag and drop”, “fill in the blanks”, etc. They could be played standalone (by youth and adults) or in association with the podcasts (see Table 3 and Table 4).
Quiz 1 was derived from scenario of use 1 and podcast 1.
Apart from understanding how algorithms work, understanding the economic and geopolitical models behind them, and using your critical thinking skills wisely (without becoming paranoid), you can build strategies to better control your information. Here is a list of reasonable goals for reducing the influence of algorithms on your information; it is up to you to find the solution that goes with each one!
Quiz 2 was derived from scenario of use 2 and podcast 2.
The four pedagogical pathways showed educators how to use the quizzes in class while reinforcing their knowledge base. They suggested activities and workshops for interactions with young people, including how to use the Dashboard (pedagogical document 4). The full “Algo-literacy Prebunking toolkit” [29] also summarised the whole experiment with a poster entitled “Algo-literacy for all in 10 key points”, downloadable for educators and the general public and usable in all kinds of workshops (https://savoirdevenir.net/crossover/).
The full “prebunking toolkit” was put together according to MIL design principles, in particular modularity, authenticity of documents, a competence-based framework and a tool embedded in multi-stakeholder activity, in order to understand information and disinformation [27]. The accompanying document, with teaching guidelines, was meant to entice educators into engaging with MIL, so that they could overcome their lack of knowledge and confidence on the topic [17]. The prebunking notion [30,31] seemed fit to be introduced at the end of the process, helping users anticipate the role of algorithms through preparation and education, the best filter against disinformation. The point was to create new heuristics and a kind of educational preparedness that could be pedagogically sustainable, especially if embedded in a larger MIL design encompassing the societal and cultural context [32].