
Precarity and Artificial Intelligence

by David Lewis, anthropologist and historian,
david.lewis@umontreal.ca

Dr. Google doesn’t have a professorship (yet), but might as well – and Siri, Alexa and the rest will form its first student cohort. In just a few short years, Google and company have become ubiquitous in our lives, authoritative sources of knowledge. Google has even become the supreme knowledge authority of the web – an authority that is rarely questioned, even though it rests not only on algorithms with stakes of their own, but also on optimization sold to anyone who can afford it. Yet this is only the most telling example of a much more complex – and consequently even more insidious – reality materializing on our doorstep: that of artificial intelligence.

The world of education is in fact already affected by this new reality, but we are only at the very beginning of a wave that could well turn into a tsunami, from kindergarten to university – a sector currently lagging behind. Clearly the entire community will be affected, but just as clearly some will be more affected than others. For several decades now, the world of education has also been undergoing a casualization of its workforce, notably at the university with the function of lecturer, essentially conceived as a residual category to that of professor. We can reasonably expect the coming transformations driven by artificial intelligence to hit the precarious harder than others. Here is how.

Education and Precarity

Alexander J. Means notes that, although education is quite generally “called forth in official neoliberal discourse as a solution to precarity” (Means 2019, 2), in reality precarity is omnipresent – and, more ironically, it is most evident in the academic world, the very place the most coveted degrees come from. This is particularly true of the status of lecturer, which is mine, as it is for some 10,000 to 15,000 colleagues in Quebec and many others elsewhere. More specifically, I am, like about half of my colleagues, a career lecturer, i.e. someone for whom this is the main long-term occupation. Moreover, although I am an anthropologist by training, I teach primarily in history – one of the many strange effects of our precarity.

Precarious workers in higher education include faculty like myself, but also researchers, adjuncts and others. Precarity and job stability may be relative concepts, but it is particularly easy for us precarious academics to see the differences between our daily lives and those of professors, whose position is, after all, the best example of job stability there is – literally the archetype of the genre.

Academic precarity, including that of lecturers, still comes with better working conditions than those of the majority of precarious workers. In particular, it has little in common with the lot of day laborers, who not only live with precarity on a daily basis – whereas ours is more seasonal – but must also cope with difficult or even dangerous conditions, generally in unrewarding or even humiliating jobs. The fact remains that, by its very nature, precarity makes our situation as lecturers much more fragile than it would be if we occupied the same functions with job stability – that is, in the current paradigm, if we were professors.

Still, we have the privilege of being around passionate colleagues, both professors and lecturers, and university life can indeed be very stimulating and rewarding. It remains difficult, however, to participate fully when everything is decided without us, when we are ignored or even despised. Progress has been made since unionization – notably, at the Université de Montréal, in the last few years, thanks to a revision of the charter (2018) and consequently of the university’s statutes, which now recognize the existence of lecturers and allow our participation in most committees and other formal bodies. Lecturers at other Quebec universities also have representatives on various institutional committees, but now that we have two per committee in most cases (by the logic, which we defended, of ‘one is good, two is better’), our representation is probably among the best in Quebec. Yet our presence in the university structure remains vastly inferior to our participation in the institution’s academic mission – already so in the formal bodies, and even more so if we include all academic functions.

Moreover, even if our representation were at the same level as our academic contribution, the main problem of our precarity would remain: most of us have to deal with significant variability in our workload, and therefore in our income as well – which inevitably has repercussions on our physical health, and even more so on our mental health (a study on burnout among university teachers is sorely needed). This is a reality that fits poorly with the notion of a career, and it makes it very difficult to participate actively in society, especially when it comes to getting married, buying a house or having children.

The fact remains, as Émilie Bernier notes, that lecturers do have a voice (which does not imply that we are always heard, let alone listened to): “Even though most of the time I speak in front of the turned-off cameras of faceless students, I have a voice. I have a sense of my dignity, I have the means.” (Bernier 2022, 106) Indeed, speech is an integral part of our profession – at least in its oral form. In its written form, we are, for essentially systemic reasons, at a great disadvantage compared to professors. Thus, despite all the difficulties of our reality, it is undeniable that we are privileged compared to many precarious people.

Artificial Intelligence

As David Lorge Parnas explains, “‘Artificial Intelligence’ remains a buzzword, a word that many think they understand but nobody can define” (Parnas 2017, 1). There is indeed a wide spectrum of definitions, in both focus and content – but a consensus has emerged in recent years around a set of computational practices replacing functions traditionally performed by human intelligence. This is reflected, for example, in the Digital Competency Framework (2019) of the Ministère de l’Éducation et de l’Enseignement supérieur, in which artificial intelligence is presented as a “field of study concerned with the artificial reproduction of the cognitive faculties of human intelligence, with the aim of creating software or machines capable of performing functions normally falling within its scope” (MEES 2019).

This is of course a rapidly expanding field, with multiple applications in a variety of spheres of society – and, as Yoshua Bengio noted at a 2019 conference, these developments are likely to have multiple impacts on our lives: “the transformative potential of these technologies is incredible,” he said. Many of these impacts will certainly be positive, useful to society as well as to individuals – but it is clear that they will also generate concerns, even potential dangers, particularly, in the world of education, for the capabilities we are trying to develop in our students: “the possibility, well, it’s more than a possibility, in fact it’s already happening, to use artificial intelligence to control minds, in advertising and on social media” (Bengio 2019), whether for commercial, ideological or other reasons. This shock has been widely heralded, especially in education, yet according to François Taddei its intensity has still not been grasped: “Our school programs and our educational systems have not currently become aware of the intensity of the shock that the progress of artificial intelligence is about to bring to our ways of living, working, consuming, living together, questioning our legal norms and […] shaking our ethical standards.” (Taddei 2018, 46) This will inevitably also affect the teacher–student relationship. Indeed, the predicted shock is no longer simply on our doorstep; it has undeniably overtaken us, as Thierry Karsenti notes in Artificial Intelligence in Education: “Artificial intelligence […] is already very present in education, notably with the applications that learners and teachers use daily on their cell phones, or when they carry out research on the Internet.” (Karsenti 2018, 115)

Risks in Education

It is clear that artificial intelligence will allow the development of a panoply of tools, each more wonderful than the last – tools that should help develop the cognitive abilities of our students, and thus support the work of teachers. Its adoption is going to happen whether we like it or not, but this transformation raises a few questions, including the place of the private sector in higher education. After all, as Yoshua Bengio reminded us, the primary objective of companies is to maximize profits, not to train responsible citizens with a sharp critical mind. The private sector is certainly an essential partner in the shift towards artificial intelligence, since it can provide software and other computing tools that are not developed in vitro – but at the same time, these tools are produced by actors external not only to the academic world but also to our social reality. They therefore come with a worldview that can carry multiple biases, or worse. Indeed, as Maria Wood notes, “the data the algorithm is learning from could have structural and historical bias baked into it” (Wood 2021), which could end up reproducing or even amplifying past discrimination. This is all the more insidious because it is a priori invisible: “The algorithm appears impartial because it seemingly does not have biased instructions in it, so its recommendations are perceived to be impartial.” (Wood 2021) The biases Wood notes relate mainly to administrative matters such as admissions or the tracking of student records – but as she points out, they could soon also be found in assessment, as in other aspects of the teacher–student relationship.

However, the more important these tools become in learning, the more the place of teachers risks being marginalized and weakened – reduced to that of simple facilitators, coaches, advisors; perhaps big brothers rather than father figures. This may or may not be a desirable development, but one thing is certain: it can only lead to a questioning of the figure of the teacher, and consequently of his or her place, authority and ability to manage the group. Of course, these transformations are bound to affect the precarious more fundamentally than others, women more than men (and the majority of lecturers are women), and especially those from cultural communities or other marginalized or marginalizable groups – whether in terms of job security, symbolic deficit or otherwise.

In fact, it is not only teachers who will be affected, but the whole of knowledge itself, indeed the whole structure that underpins it: the traditional centers of knowledge are already losing their primacy to a patchwork of variable sources responding to imperatives that are far from being exclusively academic. The current and future transformations in the knowledge economy could weaken not only the teachers that we are, but also the institution that currently sits at the top. The role of the university could eventually be reduced to little more than that of an interface between sources of knowledge … if it even survives.

Still, one can imagine that most applications of artificial intelligence in education will be benevolent (which will certainly not be the case in the wider world). Thus, artificial intelligence as defined by Karsenti – “a field of study whose purpose is the artificial reproduction of the cognitive faculties of human intelligence” (Karsenti 2018, 113) – does not seem particularly threatening in education; on the contrary, even. Far more concerning, perhaps, is big data, which Karsenti sees as “a digital ecosystem that allows for the collection, transfer, archiving, and manipulation of a plethora of data” (Karsenti 2018, 113). In particular, we can think of algorithms that shape human behaviour by steering us in certain directions rather than others, as search engines and social media already do … and this is just the beginning. Indeed, algorithms have become so ubiquitous and so good at molding our behaviours that it has become almost ‘natural’ to compare ‘human’ behaviours to them – as Schwirtz et al. do in the New York Times of December 16, 2022: “A former Putin confidant compared the dynamic to the radicalization spiral of a social media algorithm, feeding users content that provokes an emotional reaction.”

One might also fear a form of formatting of thought and standardization of knowledge – and thus, potentially, another source of minimization of the teacher’s role. Fortunately, this is not yet the case in the classroom, but it is already possible, as Wang, Chang & Li establish, to grade essay-exam answers in a way relatively similar to what a human would do, provided the answers are sufficiently marked up – an option that could become tempting for universities: “To evaluate constructs like creative problem-solving with validity, open-ended questions that elicit students’ constructed responses are beneficial. But the high cost required in manually grading constructed responses could become an obstacle in applying open-ended questions.” (Wang, Chang & Li 2008, 1450) Few teachers, myself included, will complain about no longer having to grade exams and papers – but even when grading is outsourced to auxiliary staff, we teachers remain the authority responsible for assigning grades to students, which involves a series of interactions and judgments. With the advent of artificial intelligence, this may no longer be the case, especially if universities see a cost saving in it. Such a rupture could only weaken the already weakened link between teachers and students.

Artificial intelligence may also soon be found in the classroom itself. This is apparently already the case in China, albeit in a very specific context: so-called ‘shadow education’, in this case the tutoring industry – as reported by the Higher Education Council in Artificial Intelligence in Education (2020): “In China, where competition for access to higher education is particularly fierce, some private learning centers offering extracurricular services have invested heavily in intelligent tutors.” (Higher Education Council 2020, 16) Such a practice, of course, raises a variety of concerns, including the consequences of even partial formatting of student thinking: “Despite improvements in learning, some experts fear that these practices will lead to a standardization of learning and assessment, which could leave future generations unprepared for a constantly changing world.” (Higher Education Council 2020, 16) The expected behaviour of teachers could of course also be affected.

Somewhat similarly, Adam L.-Desjardins and Amy Tran note, in L’intelligence artificielle en éducation (2019), a demobilizing effect that could result from developments in artificial intelligence: “Overconfidence and dependence on the use of these technologies could lead to a certain intellectual laziness” (L.-Desjardins & Tran 2019). Worse, this laziness could eventually be easily exploited outside the walls (physical and virtual) of the university, notably by allowing: “some ill-intentioned powers to use them in order to achieve their political goals” (L.-Desjardins & Tran 2019). We can also fear a lack of initiative, even an intellectual apathy among students – which, I believe, will certainly not facilitate our task, on the contrary.

Indeed, the temptation to be taken by the hand may soon become very strong. In a recent and disturbing article entitled Ceci n’est pas un professeur d’université (2022), Jonathan Durand Folco reflects on the transformative role of content production software, “a ‘natural language processing’ technology that can be used to automate academic writing and research” (Durand Folco 2022). As we read the text, we can see the insidious effects that the growing presence of such technology will have on the development of the tools of thought for future generations. Other tools will be needed to counteract negative effects.

Risks for the Precarious

The coming changes will affect the entire world of education, and it is clear that the university will not escape them. As with everything else, the precarious will bear the brunt of these changes more than others.

The transformations will inevitably affect the job security of many of us – and even those who manage to stay will have to adjust, by modifying the format and content of our courses, undergoing training, constantly updating our computer tools … and all this, almost always at our own expense, in both time and money. We also risk being hit hard by task fragmentation – which could happen in a number of ways, including the automated grading mentioned above.

It is also likely that we will be left out, or at best kept on the margins, of the reflection on integrating artificial intelligence into our classrooms and our teaching. We can therefore predict that the solutions will be tailored to the needs of professors (who, after all, populate the administration), without any regard for our own. This is at least what we can observe these days at the Université de Montréal with the implementation of the CHAL (Création Horaires et Assignation Locaux) software, a course-scheduling and room-assignment system conceived entirely in a professorial logic, which takes no account whatsoever of the reality of lecturers … and we are talking about a level of complexity at least an order of magnitude lower than artificial intelligence.

Artificial intelligence could also threaten our intellectual authority, at least in areas where there are differences of opinion … that is, in almost all areas. Yoshua Bengio mentioned, among the potential impacts of artificial intelligence, the concentration of power and authoritarian drifts. We obviously think first of dictatorships and other despotic governments – but thanks to artificial intelligence, various forms of more or less established powers could find a way to invite themselves into the classroom, whether they be commercial, political, ideological, religious or other actors. One might imagine that, one day, expressing an opinion opposed to Microsoft on Teams, or in a class run by Microsoft software, could have a negative impact on a career; or that expressing an opinion contrary to that of an ‘influencer’ or public figure (real or virtual) could be dangerous.

It could also become difficult to express an opinion that goes against a dominant discourse, whether in education or in any other sphere of society – whether that discourse is conveyed by a state, by a company involved in education, or by any other entity or dynamic with the potential to influence it, including social movements and public opinion. If, for example, I stated in class that the first atomic bomb dropped on Japan could possibly be justified, but certainly not the second, while the truth conveyed to students by artificial intelligence held that both were justified and that there was a consensus on the matter – and if, in addition, that artificial intelligence had a vast arsenal of tools at its disposal (including metaverses and deepfakes) – I wouldn’t stand a chance, unless of course I had an authority as strong as that of a professor … which is essentially a question of status.

In fact, my authority here would be remarkably solid, though only by proxy: this is precisely the position of Edwin Oldfather Reischauer, notably in Histoire du Japon et des Japonais (1970). I wonder how the artificial intelligence would have reacted had it been able to interact with Reischauer – but one thing is sure: he would have lost neither his job nor his ability to express his opinion, whereas there is no guarantee that a lecturer who finds themselves in a conflict of opinion with an artificial intelligence whose powers are poorly defined by their institution would keep their freedom of expression, or even their job. More than freedom of expression, in fact, it is freedom of opinion that is at stake here – and even before the coming wave of artificial intelligence, it is already leading many lecturers to engage in self-censorship, often unconsciously. The growing presence of an unquestionable authority will only make this worse.

For me, as for many colleagues I believe, one of my fundamental roles as a teacher is to dismantle the preconceived notions of my students, in my case about Japan and Japanese people, particularly with regard to samurai and geisha. As we know, the fight against myths and half-truths is at the very heart of the mission of most if not all disciplines in the social sciences and humanities, especially in history and anthropology. Generations of historians and anthropologists have worked hard to dismantle nationalistic, identity-based or other constructs… and today, an algorithm and a few entertainment companies could produce more persistent myths!

Fortunately, we at the Université de Montréal should be protected from many of these excesses (while others will no doubt experience them), at least for the next few years. Indeed, by adopting a statement of principles on freedom of expression in the summer of 2021, the Université de Montréal was a pioneer – and what’s more, the statement was the subject of broad consultation and a fairly strong community consensus. However, the legislator now requires the Université de Montréal, like all Quebec universities, to adopt a policy on academic freedom by the summer of 2023 (as per Bill 32: An Act respecting academic freedom in the university sector). At the Université de Montréal, the exercise is being carried out with the necessary seriousness, but the fact remains that this requirement, in my opinion, puts far too much emphasis on the game of denunciation to the detriment of promoting good practices, making the dynamic unnecessarily confrontational – with, once again, the most immediate risks falling on the precarious people that we are … as the recent past has shown us all too clearly.

The sense of security felt by members of the Université de Montréal community following the adoption of the statement of principles may be short-lived, however, particularly given the potential influence of external actors: one need only think of the interference of the Quebec government in university governance, precisely with the law just mentioned. We can also think of the impact of lawsuits brought by students, a trend that has been on the rise in recent years, including in Canada, as explained by Stephen G. Ross and Colleen Mackeigan, who describe it as an “emerging area in education law” (Ross & Mackeigan 2019). The lawsuits have so far been limited, they say, to questions of contractual relations, but one can imagine that the subjects will eventually expand to other issues, including privacy, image rights and copyright – which could thus also involve teachers, or even private companies dissatisfied with the space allocated to them by universities. As for Bill 32, it may have been adopted with good intentions (and an almost total lack of understanding of the academic world), but it is still undeniably a form of interference – and a future government may well want to use the same kind of tool for much less noble purposes. We can still hope that civic and academic mobilization, such as the one that produced the 2018 Declaration for a Responsible Development of Artificial Intelligence (incidentally, an initiative of the Université de Montréal), can help protect us all from the most harmful effects and the most serious drifts. Indeed, one of its 10 principles, the 4th, the principle of solidarity, suggests that “The development of AIS must be compatible with the maintenance of links of solidarity between people and generations”, while the 6th, the principle of equity, states that “The development and use of AIS must contribute to the realization of a just and equitable society” – and specifies in particular (2nd point) that “The development of AIS must contribute to the elimination of relations of domination between persons and groups based on differences in power, wealth or knowledge.”

Another of the risks mentioned by Yoshua Bengio is that artificial intelligence will one day develop its own selection criteria, without the need for human input … a form, in short, of artificial selection. Fortunately, we are far from this, and humans are and should remain at the heart of universities – but whatever the dangers that lie ahead, professors are still much better protected than lecturers.

In 1994, Claude Lessard, then Dean of the Faculty of Education at the Université de Montréal, declared at a FNEEQ symposium that we must “civilize precarity” (Lessard 1995 (1994), 99) – before giving a portrait of it which, nearly 30 years later, unfortunately remains all too relevant: “Civilizing precarity means finding new mechanisms of integration and developing a sense of belonging to the institution for those among the teachers who will not be able to enjoy a strong and permanent employment relationship. It means giving them a real place in the pedagogical decision-making process and in the decision-making process.” (Lessard 1995 (1994), 99) In education as elsewhere, the most convincing impacts of artificial intelligence will come, says Sahir Dhalla in The problem with AI that acts like you (2020), from a “cooperation between AI and humans”. Let’s hope that lecturers will be invited to that conversation.

Bibliography

Bengio, Yoshua [2019] : « Intelligence artificielle, apprentissage profond, logiciel libre et bien commun », Actes du 6e Colloque libre de l’ADTE (4 juin 2019, Université Laval). Association pour le développement technologique en éducation

https://adte.ca/actes-du-6e-colloque-libre-de-ladte-2019/

Bernier, Émilie [2022] : « À l’orée du trou noir. La perspective des personnes chargées de cours et la démocratisation de l’université ». Les enseignantes et enseignants contractuels dans l’université du XXIe siècle (Acfas)

https://www.acfas.ca/sites/default/files/2022-11/Acfas_Cahier-scientifique_no120_numerique_VF_nov2022.pdf

Conseil supérieur de l’éducation [2020] : L’intelligence artificielle en éducation : un aperçu des possibilités et des enjeux (Document préparatoire pour le Rapport sur l’état et les besoins de l’éducation 2018-2020)

https://www.cse.gouv.qc.ca/wp-content/uploads/2020/11/50-2113-ER-intelligence-artificielle-en-education.pdf

Dhalla, Sahir [2020] : « The problem with AI that acts like you – Human-like AI models raise questions of bias and our right to personal data », The Varsity

https://thevarsity.ca/2022/11/20/ethics-of-human-like-ai/

[2018] : Déclaration de Montréal pour un développement responsable de l’intelligence artificielle

https://www.declarationmontreal-iaresponsable.com/la-declaration

Durand Folco, Jonathan [2022] : « Ceci n’est pas un professeur d’université », Le Devoir 15 décembre 2022

https://www.ledevoir.com/opinion/idees/774674/idees-ceci-n-est-pas-un-professeur-d-universite

L.-Desjardins, Adam & Tran, Amy [2019] : « L’intelligence artificielle en éducation », L’école branchée

https://ecolebranchee.com/dossier-intelligence-artificielle-education/

Lessard, Claude [1995 (1994)] : « La précarisation de l’enseignement » Actes du colloque sur la précarité dans l’enseignement, FNEEQ-CSN.

Karsenti, Thierry [2018] : « Intelligence artificielle en éducation : L’urgence de préparer les futurs enseignants aujourd’hui pour l’école de demain ? », Formation et profession, vol. 26, no. 3 – pp. 112-119

http://formationprofession.org/pages/article/26/21/a159

Means, Alexander J. [2019] : « Precarity and the Precaritization of Teaching » Encyclopedia of Teacher Education. Springer

https://www.researchgate.net/publication/333965390_Precarity_and_the_Precaritization_of_Teaching

Ministère de l’Économie, de la Science et de l’Innovation [2016] : Plan d’action en économie numérique.

https://cdn-contenu.quebec.ca/cdn-contenu/adm/min/economie/publications-adm/plans-action/PL_plan_action_economie_numerique_2016-2021.pdf

Ministère de l’Éducation et de l’Enseignement supérieur [2019] : Cadre de référence de la compétence numérique

http://www.education.gouv.qc.ca/fileadmin/site_web/documents/ministere/Cadre-reference-competence-num.pdf

Parnas, David Lorge [2017] : « Inside Risks – the Real Risks of Artificial Intelligence ». Communications of the ACM, vol. 60, no. 10

http://www.csl.sri.com/users/neumann/cacm242.pdf

Reischauer, Edwin Oldfather [1970] & Dubreuil, Richard, trad. [1973] : Histoire du Japon et des Japonais, vol. 1 – ‘des Origines à 1945’. Seuil.

Ross, Stephen G. & Mackeigan, Colleen [2019] : « Canada: Claims Against Educational Institutions – It’s Not Just Academic Anymore », Mondaq [Rogers Partners LLP]

https://www.mondaq.com/canada/education/795454/claims-against-educational-institutions-it39s-not-just-academic-anymore

Schwirtz, Michael et al. [2022] : « How Putin’s War in Ukraine Became a Catastrophe for Russia », The New York Times, 16 décembre 2022

https://www.nytimes.com/interactive/2022/12/16/world/europe/russia-putin-war-failures-ukraine.html

Taddei, François [2018] : Apprendre au XXIe siècle, Calmann Levy.

Wang, Hao-Chuan ; Chang, Chun-Yen & Li, Tsai-Yen [2008] : « Assessing creative problem-solving with automated text grading », Computers & Education, vol. 51 – pp. 1450-1466

https://doi.org/10.1016/j.compedu.2008.01.006

Wood, Maria [2021] : « What Are the Risks of Algorithmic Bias in Higher Education? », Every Learner, Everywhere

https://www.everylearnereverywhere.org/blog/what-are-the-risks-of-algorithmic-bias-in-higher-education