‘Neurotechnology and emotional AI are creating a new kind of line manager’

Anthony Montgomery and colleagues ask whether we can possibly live up to AI's definition of optimal wellbeing.

17 December 2024

"Fitter happier
More productive
Comfortable
Not drinking too much
Regular exercise at the gym (3 days a week)
Getting on better with your associate employee contemporaries"

Radiohead, Fitter Happier

The opening lyrics to this song, read in a robotic, monotonous and desensitised voice, are from a 1997 album – OK Computer – intended to explore the ever-decreasing standing room that humans occupy in a world dominated by machines and computers (Clarke, 2010). More than a quarter of a century on, there appears to be widespread acceptance that AI is a fait accompli, and therefore that its positives should be embraced and its negatives acknowledged, but not inflated. Neurotechnology and emotional artificial intelligence (AI) represent the cutting edge of this technology. They are the ghosts in the machine that Radiohead songwriter Thom Yorke was worried about.

Neurosomatic technology represents a nascent but rapidly growing industry. Neural interfaces can be found in smartphone apps, earbuds, wearables, headsets, safety helmets, and watches. Companies such as Emotiv, Neuralink, Kernel, and Neurable claim their technologies can optimise daily activities, boost productivity, track attentiveness and engagement, facilitate learning, and create a more harmonious and responsive working environment. Emotional AI technologies, on the other hand, are being used to read a person's affective state. Businesses around the globe now use emotion-recognition technology to monitor employees for engagement, productivity, compliance (Suni Lopez et al., 2019) and, increasingly, well-being (Spataro, 2020). Amazon and Walmart have patented 'performance metric' bracelets that use ultrasonic sensors to measure an employee's productivity and eavesdrop on their communication with customers (Davidson, 2018).

The brain-tech market was worth USD 1.74 billion in 2022 and is projected to reach USD 6.2 billion by 2030 (World Economic Forum, 2024). It is built around the twin goals of intimacy and instrumentality: creating constantly alert, 'happy', and engaged employees via the harvesting of our intimate data. Advocates insist it can contribute positively to productivity, learning, mindfulness, and efficiency in our workplaces.

The flip side of this powerful potential is that, as an invasive form of surveillance capitalism, the managerial adoption of neurotech and emotional AI brings with it myriad legal, ethical, cultural, and scientific issues. It can threaten democracy and erode employee well-being through its valorisation of 'digital' values such as disposability, replaceability, speed of optimisation, scale, and accuracy (Vallor, 2024).

Here, we will argue that the allure of neurotech must be balanced with ethical regulation and an assessment of its impact on well-being and democracy within the workplace.

What is the correct algorithm for well-being?

Historically, employee well-being has played second fiddle to technological innovation. Indeed, initial attempts to improve working conditions during the interwar years were driven by a need for greater productivity. The 21st century has witnessed a wider recognition that well-being should be a strategic issue in its own right, while also recognising its impact on recruitment and retention. However, the emphasis is still on individuals being responsible for their health, albeit supported with appropriate resources and tools. 

Now, with neurotechnology and emotional AI, we have tools that have the potential to damage as well as support. Automated management systems such as those used by Uber and Amazon have already been found to foment higher degrees of anxiety and stress through target setting, time tracking, gamification, ticketing systems, and performance monitoring (La Torre et al., 2019), to lower trust levels (Brougham & Haar, 2018), and to encourage discrimination (Rhue, 2019).

Automated monitoring is supposed to lead to greater transparency, reduced potential for error, and improved efficiency – but ironically, it can result in reduced interaction with colleagues and fewer opportunities for social intelligence to solve organisational problems (Arslan et al., 2022). It does this by triggering a state of hypervigilance, which brings with it constant stress and anxiety. Cognitive and empathic surveillance can make us feel less competent and less in control, and can reduce opportunities for social relatedness – the three key components of healthy functioning in humans (Deci & Ryan, 2012).

To this, we add the risk of escalating inequality and mental health problems. An increasing number of studies demonstrate how biases in affect-recognition algorithms can inadvertently lead to prejudice or false-positive profiling (Cabitza et al., 2022; Lee et al., 2019; Schelenz, 2022). Relatedly, there is an emerging debate about how AI use in business induces complacency in leaders as they place unwarranted trust in technology (De Cremer, 2022; Walsh, 2019).

Employees placed under neurosomatic surveillance may be robbed of their social capital, which evidence suggests is vital for physical health and well-being (Ehsan et al., 2019). There's also a danger, as with previous corporate wellness programmes, that the burden of behaviour modification is placed on the worker. Our agency and democratic participation in an organisation are gradually eroded until a crisis point is reached, at which the resulting burnout is more likely to be interpreted as a personal failure, and the opportunity to reflect on how structures and systems fuelled it is lost.

The truth is, we don't know much about the impact of neurotech and emotional AI in the workplace. How do people feel about AI in terms of their worker identity? Does it represent an experiential threat to their sense of self, relations with colleagues/clients/patients, and relationship with the organisation? And what is the role of industrial relations in this new landscape? 

Deeper fears

A 2023 OECD survey of employers and employees in the manufacturing and financial sectors indicated that both groups tend to be very positive about the impact of AI on productivity and working conditions (Lane et al., 2023). However, the specific concerns raised by employees were indicative of deeper fears concerning control and job security: 57 per cent of workers supported banning AI that would decide which workers would be dismissed, while 40 per cent supported banning AI that would decide which workers would be hired. Moreover, employees reported feeling increased pressure to perform at work due to automated data collection (62 per cent in finance; 56 per cent in manufacturing).

Industrial relations are entering an unknown space in terms of how working conditions are influenced by neurotechnology and emotional AI. The number of apps for auto-chatbot therapy and mindfulness practice dwarfs the handful of apps that facilitate labour organising (Vallor, 2019). But what we do know is that the 'black-box' nature of neurotech and emotional AI has the potential to threaten the fragile relationship between employers, employees, and unions. The lack of knowledge concerning the algorithms will make workplaces feel less democratic and less safe. For example, when scoring systems are coupled with neurosomatic technologies, the result may limit freedom of thought and expression. And the research suggests that empathic surveillance in the workplace may also fuel higher levels of stress, mistrust, and animosity between subordinates and superiors (Bondanini et al., 2020; Brougham & Haar, 2018; Mantello & Ho, 2023).

Towards 'neurorights'

Neurorights can be defined as the ethical, legal, social, or natural principles of freedom or entitlement related to a person's cerebral and mental domain; that is, the fundamental normative rules for the protection and preservation of the human brain and mind (Ienca, 2021). Developing ethical and legal frameworks for neurotechnology has become an urgent and significant task for policymakers, reflected in the OECD Recommendation on Responsible Innovation in Neurotechnology and UNESCO's preparation of a Recommendation on the Ethics of Neurotechnology. The discussion of 'neurorights' has helped highlight the need for bespoke legislation on neurotechnology – as well as strong consideration of who should be involved in such decision making.

The recent León Declaration from the EU calls for the development of a humanist neurotechnology that protects people's fundamental rights and contributes to Europe's competitiveness and open strategic autonomy (Spanish EU Presidency, 2023). However, it's not yet clear how these principles can protect employee rights concerning equitable access, data privacy, the right to object to automated data processing, acceptable limits of cognitive enhancement, and cognitive freedom.

Part of this cognitive freedom is about human indeterminacy: the inherent unpredictability and alterability in human cognition, decision-making, and behaviour (Beerends & Aydin, 2024). It permits a diversity of personalities and behaviours, and is regarded as a formative driver of creativity and innovation. An over-reliance on the deterministic models that drive neurotechnology and emotional AI, which fail to capture the nuances of human behaviour, may threaten this essential human quality. We can easily imagine a worker being penalised or marginalised for their lack of 'conformity' – in behaviour, cognition or even affect.

What can work and organisational psychology do? 

The modern period has been dominated by the issue of autonomy and control at work and its implications for employee well-being, as represented by job-design theories such as the Job Demands-Resources model (Bakker & Demerouti, 2007; Bakker et al., 2016). Policymakers across the US, UK, European Commission, and OECD have called for improvements to job quality to improve individual well-being, firm competitiveness, and national economic growth (Warhurst et al., 2022). However, it's not yet clear whether we have the appropriate theoretical and methodological tools to comprehend the divergent goals of economic growth and employee self-determination in the increasingly digital workplace.

AI has reimagined managerialisation on a new scale. Neurotechnology and emotional AI are creating a new kind of line manager. There is a sense in which AI can be the ultimate manager – assumed to be unbiased, temperate, and efficient – using algorithms for the good of both the worker and the bottom line. According to Leyer and Schneider (2021), AI-enabled software is a technological entity with decision competencies that represents a new form of agent in organisations, in that 'when managers have the option to delegate a decision to AI-enabled software, the software acts as an agent on the managers' behalf'. Machines are becoming the line managers. 

The new digital landscape calls for a different epistemological and ontological perspective. There is a need to deepen understanding of how people relate to each other when AI and other advanced technological innovations enter their workplaces and influence the affective life of organisations (Einola & Khoreva, 2023; Einola et al., 2024). Human–AI interaction is a messy, ambiguous, confusing, contested and affectual phenomenon that must be studied through different lenses and using multiple methods (Sergeeva et al., 2020).

A mirror that reflects back

We need to question the idea that these AI technologies merely 'describe' an existing reality about well-being. Instead, they may be determining a new way of understanding the nature of well-being itself. In short, we need to look at this technology through the lens of technological determinism (Héder, 2021). Through its selection of what to focus on, and what not to focus on, this technology is likely to determine perspectives about what is a 'good' organisational culture. The real power lies not in the technology itself, but in who determines its focus, and who determines what types of data are collected. 

This is particularly concerning given the potential legitimacy that is likely to be conferred on approaches that adopt AI. The organisational reality these approaches construct may be seen as having more legitimacy than other ways of understanding organisations, particularly more humanistic or systemic approaches. To return to Radiohead, technology is a mirror that reflects back to us the deeper values predominant in our organisations. And that worries us.

  • Anthony Montgomery1, Peter Mantello2, Olga Lainidi3, Hiroshi Miyashita4 & Ian Kehoe5

1. Department of Psychology, Northumbria University, UK
2. College of Asia Pacific Studies, Ritsumeikan Asia Pacific University, Japan
3. School of Psychology, University of Leeds, UK
4. Faculty of Policy Studies, Chuo University, Japan
5. American College of Thessaloniki, Greece

References

Arslan, A., Cooper, C., Khan, Z., Golgeci, I., & Ali, I. (2022). Artificial intelligence and human workers interaction at team level: a conceptual assessment of the challenges and potential HRM strategies. International Journal of Manpower, 43(1), 75-88.

Bakker, A., & Demerouti, E. (2007). The job demands-resources model: State of the art. Journal of Managerial Psychology, 22(3), 309–328.

Bakker, A., Rodríguez-Muñoz, A., & Sanz-Vergel, A. (2016). Modelling job crafting behaviours: Implications for work engagement. Human Relations, 69(1), 169–189.

Beerends, S., & Aydin, C. (2024). Negotiating the authenticity of AI: How the discourse on AI rejects human indeterminacy. AI & Society, 1–14.

Bondanini, G., Giorgi, G., Ariza-Montes, A., Vega-Muñoz, A., & Andreucci-Annunziata, P. (2020). Technostress dark side of technology in the workplace: A scientometric analysis. International Journal of Environmental Research and Public Health, 17(21), 8013. https://doi.org/10.3390/ijerph17218013

Brougham, D., & Haar, J. (2018). Smart Technology, Artificial Intelligence, Robotics, and Algorithms (STARA): Employees' perceptions of our future workplace. Journal of Management & Organization, 24(2), 239–257. https://doi.org/10.1017/jmo.2016.55

Cabitza, F., Campagner, A., & Mattioli, M. (2022). The unbearable (technical) unreliability of automated facial emotion recognition. Big Data & Society, 9(2). https://doi.org/10.1177/20539517221129549

Clarke, M. (2010). Radiohead: Hysterical and Useless. London: Plexus

Clarke, J., & Newman, J. (1993). The right to manage: A second managerial revolution. Cultural Studies, 7, 427–441.

Davidson, B.J. (2018). Walmart patents surveillance technology and here is how it could affect its workers. Percento Technologies. https://percentotech.com/bobbyjdavidson/walmartpatents-surveillancetechnology/ (accessed 28 July 2021).

De Cremer, D. (2022). With AI entering organizations, responsible leadership may slip. AI and Ethics, 2, 49–51. https://doi.org/10.1007/s43681-021-00094-9

Deci, E.L., & Ryan, R.M. (2012). Self-determination theory. In Handbook of theories of social psychology (Vol. 1, pp. 416–436).

Dewey, M., & Wilkens, U. (2019). The bionic radiologist: Avoiding blurry pictures and providing greater insights. npj Digital Medicine, 2, 65. https://doi.org/10.1038/s41746-019-0142-9

Ehsan, A., Klaas, H.S., Bastianen, A., & Spini, D. (2019). Social capital and health: A systematic review of systematic reviews. SSM – Population Health, 8, 100425.

Einola, K., & Khoreva, V. (2023). Best friend or broken tool? Exploring the co-existence of humans and artificial intelligence in the workplace ecosystem. Human Resource Management, 62(1), 117–135.

Einola, K., Khoreva, V., & Tienari, J. (2024). A colleague named Max: A critical inquiry into affects when an anthropomorphised AI (ro)bot enters the workplace. Human Relations, 77(11), 1620–1649.

Fournier, V., & Grey, C. (2000). At the critical moment: Conditions and prospects for critical management studies. Human Relations, 53(1), 7–32.

Fuller, J., Raman, M., Sage-Gavin, E., Hines, K., et al. (2021). Hidden workers: Untapped talent. Harvard Business School Project on Managing the Future of Work and Accenture.

Guest, D., Knox, A., & Warhurst, C. (2022). Humanizing work in the digital age: Lessons from socio-technical systems and quality of working life initiatives. Human Relations, 75(8), 1461–1482.

Grote, G., & Guest, D. (2017). The case for reinvigorating quality of working life research. Human Relations, 70(2), 149–167.

Héder, M. (2021). AI and the resurrection of Technological Determinism. Információs Társadalom: Társadalomtudományi Folyóirat, 21(2), 119-130.

Ienca, M. (2021). On neurorights. Frontiers in Human Neuroscience, 15, 701258.

Lane, M., Williams, M., & Broecke, S. (2023). The impact of AI on the workplace: Main findings from the OECD AI surveys of employers and workers. OECD Publishing, Paris.

La Torre, G., Esposito, A., Sciarra, I., & Chiappetta, M. (2019). Definition, symptoms and risk of techno-stress: A systematic review. International Archives of Occupational and Environmental Health, 92, 13–35.

Lee, N.T., Resnick, P., & Barton, G. (2019). Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. Brookings Institution, Washington DC. https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/

Leyer, M. and Schneider, S. (2021). Decision augmentation and automation with artificial intelligence: Threat or opportunity for managers? Business Horizons 64(5): 711–724.

Mantello, P., & Ho, M.-T. (2023). Emotional AI and the future of wellbeing in the post-pandemic workplace. AI & Society. https://doi.org/10.1007/s00146-023-01639-8

Pereira, V., Hadjielias, E., Christofi, M., & Vrontis, D. (2023). A systematic literature review on the impact of artificial intelligence on workplace outcomes: A multi-process perspective. Human Resource Management Review, 33(1), 100857.

Pollitt, C. (1993). Managerialism and the public services. Oxford: Blackwell.

Rhue, L. (2019). Anchored to bias: How AI-human scoring can induce and reduce bias due to the anchoring effect. Available at SSRN 3492129.

Schelenz, L. (2022). Artificial intelligence between oppression and resistance: Black feminist perspectives on emerging technologies. In A. Hanemaayer (Ed.), Artificial intelligence and its discontents: Critiques from the social sciences and humanities (pp. 225–249). Springer International Publishing. https://doi.org/10.1007/978-3-030-88615-8_11

Sergeeva, A.V., Faraj, S., & Huysman, M. (2020). Losing touch: An embodiment perspective on coordination in robotic surgery. Organization Science, 31(5), 1053–1071.

Spanish Presidency of the Council of the European Union (2023). León Declaration on European neurotechnology: A human focused and rights' oriented approach. Press release, 24 October 2023. https://spanish-presidency.consilium.europa.eu/en/news/leon-declaracion-european-neurotechnology-human-rights/

Spataro, J. (2020). The future of work – the good, the challenging & the unknown. Microsoft. https://www.microsoft.com/en-us/microsoft-365/blog/2020/07/08/future-work-good-challenging-unknown/

Suni Lopez, F., Condori-Fernandez, N., & Catala, A. (2019). Towards real-time automatic stress detection for office workplaces. In J.A. Lossio-Ventura, D. Munante, & H. Alatrista-Salas (Eds.), Information Management and Big Data (pp. 273–288). Cham: Springer International Publishing.

Vallor, S. (2024). The AI Mirror. New York: Oxford University Press.

Walsh, M. (2019). When algorithms make managers worse. Harvard Business Review. https://hbr.org/2019/05/when-algorithms-make-managers-worse (accessed 14 August 2020).

Wilkens, U. (2020). Artificial intelligence in the workplace – a double-edged sword. The International Journal of Information and Learning Technology, 37(5), 253–265.

World Economic Forum (2024). The brain computer interface market is growing – but what are the risks? 14 June 2024. https://www.weforum.org/stories/2024/06/the-brain-computer-interface-market-is-growing-but-what-are-the-risks/