
Slowing Down AI with Speculative Friction

An image of lungs depicting the bronchus as branches of a tree flowering wildly through its alveoli. Artwork by Yan Li.1

Meaningful human oversight over AI requires a critical look at the temporal dynamics of how AI enters our lives. I argue that ideologies such as “move fast and break things” do just that—move fast and break things. Instead, what if we could slow down and contribute to crafting empowering futures? 

While working at a number of technology organizations in Silicon Valley, I’ve found that what has kept me true to my own values is counterbalancing the widespread fascination with the science fiction at the center of technology innovation with a notion of speculative friction centered on the margins: on those considered the intended and unintended users of the products of technology innovation. My fascination with speculative friction is not about speculation in the financial sense of investing in stocks, property, or other ventures in the hope of gain and at the risk of loss. Nor is it about friction that results in wasted time, harm to vulnerable populations, value extraction, and disempowerment. Instead, it is about speculation as critical thinking: asking questions and imagining alternatives, not in a distant future but in the present moment.

For example, to prevent viral misinformation from spreading, in 2020 Twitter introduced friction by nudging users to read an article before retweeting it. Other nudges on the platform ask people to pause before they post something potentially harmful. I argue that similar types of friction in the interactions between people and recommender system algorithms could trigger a generative process that empowers new models of engagement among human and algorithmic actors. 
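As a minimal sketch of what such designed-in friction might look like in code, consider gating a reshare action behind a read check and a deliberate pause. This is purely illustrative Python; the function and field names are hypothetical, and this is not Twitter’s implementation:

```python
from dataclasses import dataclass
import time

@dataclass
class ShareRequest:
    user_id: str
    article_url: str
    opened_article: bool               # did the user open the link first?
    flagged_potentially_harmful: bool  # output of an assumed upstream classifier

def reshare_with_friction(request: ShareRequest, confirm) -> bool:
    """Introduce deliberate friction before amplifying content.

    `confirm` is a stand-in for a UI prompt: it shows a message and
    returns True if the user chooses to proceed anyway.
    """
    if not request.opened_article:
        # Nudge: invite the user to read before amplifying.
        if not confirm("You haven't opened this article. Share it anyway?"):
            return False
    if request.flagged_potentially_harmful:
        # Pause: a short delay plus an explicit confirmation step.
        time.sleep(2)
        if not confirm("This post may be harmful. Share it anyway?"):
            return False
    return True  # proceed with the reshare
```

The point of the sketch is not the two-second delay itself but the design choice it encodes: the default path slows down exactly where amplification is cheapest.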

To start with, we encounter many kinds of friction every day: friction makes a wheel move, gives rise to fires, and starts arguments. Anthropologist Anna Lowenhaupt Tsing describes friction as “the awkward, unequal, unstable, and creative qualities of interconnection across difference.”2 Documenting how corporate deforestation in Indonesia was halted, at least temporarily, she describes the process as one of “collaboration not as consensus making but rather an opening for productive confusion,”3 observing that knowledge “grows through multiple layers of collaboration—as both empathy and betrayal.”4

Inspired by her fieldwork, I suggest the need for us to collectively articulate and negotiate the social, political, and environmental aspects of friction in the context of algorithmic systems. Furthermore, in response to the strong economic incentives to move quickly in the field of AI, we can speculate about the benefits of slowing down, learning from educators who’ve leveraged both science fiction and speculative fiction as tools to open up new imaginaries in AI-driven tech and tech policy innovation.5

Understanding friction

Acknowledging the complexities of using the term AI,6 we need to consider the temporal, spatial, and social aspects of how we relate to AI-driven sociotechnical systems. To analyze the friction among stakeholders involved in these relations, including impacted communities, civil society advocates, technology companies, regulatory bodies, and environmental ecosystems, we take inspiration from the fields of anthropology, the critical anthropology of design, and speculative and critical design. 

Lucy Suchman takes us on a journey through her 20 years of experience at Xerox PARC, sharing her reflections on the problems she encountered in the enactment of innovation.7 Innovation as technology production through laborious reconfigurations8 exemplifies the friction between the old and the new. In her critical scholarship, Paola Ricaurte articulates the friction between AI-enabled technology and the territories and bodies that bear its costs, including power asymmetries, unfair labor practices, opaque supply chains, and the historical processes of extractivism and dispossession.9 Furthermore, Sasha Costanza-Chock’s work has inspired communities globally to dismantle structural inequality through a design justice approach led by marginalized communities.10

What unites these interdisciplinary scholars is a critical approach to the tensions between technology innovation, design, power, and social justice, which we now consider in the context of the field of Speculative and Critical Design (SCD). Anthony Dunne and Fiona Raby describe SCD as a type of design practice that creates friction: it aims to challenge norms, values, and incentives, and in this way has the potential to become a catalyst for change.11

In the table below they juxtapose design as it is usually understood with the practice of SCD, highlighting that the two are complementary and that the goal is to facilitate a discussion. SCD (the B side of the A/B comparison below) is not about problem solving but about problem finding: asking questions, provoking, and creating functional and social fictions that “simultaneously sit in this world, the here-and-now, while belonging to another yet-to-exist one.”12

Table 1. A/B design practice comparison by Dunne and Raby in their book “Speculative Everything: Design, Fiction, and Social Dreaming” (p. vii).

For example, together with collaborators Megan Ma and Renee Shelby, we’ve leveraged an SCD approach in our proposal for a Terms-we-Serve-with (TwSw) agreement: a feminist-inspired social, computational, and legal contract for restructuring power asymmetries and center-periphery dynamics to enable improved transparency and human agency in algorithmic decision-making.13 It enables slowing down AI by proposing a forum for accountability in cases of algorithmic harm at the individual and collective levels.

Following Dunne and Raby, we do not intend the TwSw to replace traditional terms-of-service agreements, but instead see the two as complementary. Our goal is to leverage a socio-technical approach and formal verification methods to open up space for new social imaginaries to emerge from the “zones of friction”14 in the context of the use of algorithmic systems. While in the field of AI there is often an unquestioned striving for frictionless technology, we ask: what if we could design specific kinds of friction back in, to enable slowing down, self-reflection, conflict resolution, open collaboration, learning, and care?
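To make one computational ingredient of this concrete, here is a toy sketch of what a machine-readable TwSw clause with a contestability hook could look like. The schema, field names, and routing logic are my own illustrative assumptions, not the structure specified in the TwSw paper:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class HarmReport:
    """A contestation raised by an individual or a community."""
    reporter: str
    description: str
    collective: bool = False  # raised on behalf of a group?
    created_at: datetime = field(default_factory=datetime.now)

@dataclass
class TwSwClause:
    """One machine-readable clause of a hypothetical TwSw agreement."""
    system: str        # the algorithmic system the clause covers
    commitment: str    # what the operator promises to the people served
    contestable: bool  # may users dispute outcomes under this clause?
    reports: list[HarmReport] = field(default_factory=list)

    def contest(self, report: HarmReport) -> str:
        """File a report; contestable clauses route to an accountability forum."""
        self.reports.append(report)
        return "queued-for-accountability-forum" if self.contestable else "logged-only"
```

The extra step of filing and routing a report is itself a form of friction: it slows the system down at precisely the moment when harm is claimed.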

Speculative Friction in Action: The IEEE 7010 Standard

The Institute of Electrical and Electronics Engineers (IEEE) is a global standards-making nonprofit that pioneered the development of common standards for network communication protocols across different devices and technologies; we rely on these standards every time we access the Internet. More recently, it has launched a number of initiatives furthering public understanding of why critical considerations in the design of AI systems matter, within the context of our symbiotic relationship with ecological systems.15

A paradigm shift in how we navigate friction within the dominant AI business model is at the core of the IEEE recommended practice for assessing the impact of AI on human well-being.16 Through an interdisciplinary dialogue, we reached a definition of well-being as “the continuous and sustainable physical, mental, and social flourishing of individuals, communities and populations where their economic needs are cared for within a thriving ecological environment.”17 An AI system is assessed with regard to objective and subjective well-being indicators within the twelve domains visualized in the innermost sections of the sunburst diagram below.

Fig 1. Domains of well-being, corresponding subdomains, and indicators: (1) Affect, (2) Community, (3) Culture, (4) Education, (5) Economy, (6) Environment, (7) Human Settlements, (8) Health, (9) Government, (10) Psychological/Mental Well-Being, (11) Satisfaction with Life, and (12) Work.18
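As a rough illustration of how such an assessment might be operationalized, the sketch below averages indicator scores into per-domain summaries and flags domains that decline after an AI system is deployed. The 0-to-1 scale, the example indicators, and the aggregation rule are my own assumptions; IEEE 7010 defines the domains and indicators but does not prescribe this arithmetic:

```python
from statistics import mean

# Hypothetical indicator scores on a 0-1 scale, grouped by well-being domain.
indicator_scores = {
    "environment": {"air_quality": 0.7, "biodiversity": 0.4},
    "health": {"self_reported_health": 0.8, "access_to_care": 0.6},
    "work": {"job_satisfaction": 0.5, "safe_conditions": 0.9},
}

def domain_summary(scores: dict[str, dict[str, float]]) -> dict[str, float]:
    """Average each domain's indicators into a single per-domain score."""
    return {domain: round(mean(vals.values()), 2) for domain, vals in scores.items()}

def flag_regressions(before: dict[str, float], after: dict[str, float],
                     threshold: float = 0.05) -> list[str]:
    """List domains whose scores dropped by more than `threshold`."""
    return [d for d in before if after.get(d, before[d]) < before[d] - threshold]

baseline = domain_summary(indicator_scores)
print(baseline)  # {'environment': 0.55, 'health': 0.7, 'work': 0.7}
```

Comparing such summaries before and after deployment is one way a well-being assessment could act as friction: a flagged regression is a prompt to pause and investigate rather than ship.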

Key takeaways since the publication of this work include: (1) a systems thinking approach inspires an ecosystemic vision for AI, one that learns from relationships across living systems and serves a broad community composed of diverse human populations, nonhuman beings, and ecosystems;19 (2) there’s a need to broaden the conversation and include diverse voices in enacting just and sustainable futures;20 and (3) technology doesn’t exist in a vacuum but is situated within a socio-ecological-technological context.21

Speculative Friction in Action: Learnings from a MozFest 2022 Workshop

Together with designer and linguist Jana Thompson,22 I organized a speculative fiction workshop23 at the recent Mozilla Festival (MozFest). Our goal was to inspire collective imaginaries about ecosystemic visions for AI and to explore how we could move towards them. My motivation for this stream of work has been: (1) the need for transparency about the broader environmental impacts of AI;24 (2) the need for open models of engagement;25 and (3) my lived experience building AI in Silicon Valley.26

In the workshop we introduced four scenarios where specific AI technologies were used to address environmental justice problems: (1) the water infrastructure crisis in Flint, Michigan; (2) the increased risk of wildfires and illegal logging in a community-managed forest in Manggur, Indonesia; (3) droughts and floods causing harvest failures in the farmlands of Sewua, Ghana; and (4) language loss in Taranaki, New Zealand. In the first part of the activity, participants discussed the values and metaphors reflected in the use of AI in each scenario, questioning the trust relationships among the involved stakeholders, including the intended and unintended users of the technology. Then we introduced artifacts from the future, inviting participants to discuss what values, metaphors, and technologies exist in the speculative futures where these artifacts take center stage. Finally, we explored what paradigm shifts enabled the transition towards these speculative futures.

The goal of the workshop was to investigate the broader environmental justice impacts of how AI is used in particular scenarios and to question what the future would look like if AI development slowed down. Most importantly, we were not trying to predict the future but to explore a wide range of possible, plausible, and probable futures and the frictions among them. As a result, workshop participants envisioned that (1) incentive structures in AI could be deconstructed by bringing visibility, utility, accessibility, and recognition to the relations between diverse people and ecosystems, from the micro to the macro scale; (2) ecology could inspire new governance models where we build together with Nature’s ecosystems across digital and physical worlds; and (3) locally responsive feedback systems, inspired by plant neurobiology, could enable new kinds of response when AI technologies lead to incidents and controversies.

Call-to-active-hope27

I’m called to a vision of speculative friction and its potential to empower open climate communities globally to be a critical voice in how AI is used within sustainability and in how we frame and investigate the broader socio-ecological impacts of AI. What kinds of friction do you see? What does speculative friction make possible in the context of your work and lived experience? Learn more and contribute here: https://speculativefriction.org/

About the Author

Bogdana Rakova is a Senior Trustworthy AI Fellow at Mozilla, working at the intersection of people, trust, transparency, accountability, environmental justice, and technology. Her work explores new kinds of social, legal, and computational agreements that enable improved consent and contestability in our interactions with algorithmic systems.


1. See https://www.yanyanleee.com/
2. Tsing, A. L. (2011). Friction: An Ethnography of Global Connection (p. 4). Princeton University Press.
3. Tsing, A. L. (2011). Friction: An Ethnography of Global Connection (p. 247). Princeton University Press.
4. Tsing, A. L. (2011). Friction: An Ethnography of Global Connection (p. 155). Princeton University Press.
5. Arizona State University, The Applied Sci-Fi Project, Center for Science and the Imagination. See also Fiesler, C. (2021, May). Ethical Speculation in the Computing Classroom. In 2021 Conference on Research in Equitable and Sustained Participation in Engineering, Computing, and Technology (RESPECT) (pp. 1-1). IEEE; and Yoshinaga, I., Guynes, S., & Canavan, G. (Eds.). (2022). Uneven Futures: Strategies for Community Survival from Speculative Fiction. MIT Press.
6. Tucker, E. (2022). Artifice and Intelligence. Center on Privacy & Technology at Georgetown Law.
7. Suchman, L. (2011). Anthropological relocations and the limits of design. Annual Review of Anthropology, 40(1), 1-18.
8. Suchman, L. (2002). Located accountabilities in technology production. Scandinavian Journal of Information Systems, 14, 91-105.
9. Ricaurte, P. (2022). Ethics for the majority world: AI and the question of violence at scale. Media, Culture & Society, 01634437221099612. See also Ricaurte, P. (2019). Data epistemologies, the coloniality of power, and resistance. Television & New Media, 20(4), 350-365.
10. Costanza-Chock, S. (2020). Design Justice: Community-Led Practices to Build the Worlds We Need. MIT Press.
11. Dunne, A., & Raby, F. (2013). Speculative Everything: Design, Fiction, and Social Dreaming. MIT Press.
12. Dunne, A., & Raby, F. (2013). Speculative Everything: Design, Fiction, and Social Dreaming (p. 43). MIT Press.
13. See https://foundation.mozilla.org/en/blog/computer-says-no/ and Rakova, B., Ma, M., & Shelby, R. (2022). Terms-we-Serve-with: A feminist-inspired social imaginary for improved transparency and engagement in AI. arXiv preprint arXiv:2206.02492.
14. Tsing, A. L. (2021). Zones of friction. Culture & Démocratie Special Issue 2020. Eurozine. https://www.eurozine.com/zones-of-friction/
15. See https://ethicsinaction.ieee.org/ and Karachalios, K., Stern, N., & Havens, J. C. (2020). Measuring What Matters in the Era of Global Warming and the Age of Algorithmic Promises (pp. 1-17). IEEE.
16. See https://ieeexplore.ieee.org/document/9084219 and Schiff, D., Ayesh, A., Musikanski, L., & Havens, J. C. (2020, October). IEEE 7010: A new standard for assessing the well-being implications of artificial intelligence. In 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE.
17. IEEE Standards Committee. (2020). IEEE Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being: IEEE Standard 7010-2020 (p. 19). IEEE.
18. IEEE Standards Committee. (2020). IEEE Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being: IEEE Standard 7010-2020. IEEE.
19. HG Solomon, L., & Baio, C. (2020). An Argument for an Ecosystemic AI: Articulating Connections across Prehuman and Posthuman Intelligences. International Journal of Community Well-Being, 3(4), 559-584.
20. Schiff, D., Rakova, B., Ayesh, A., Fanti, A., & Lennon, M. (2020). Principles to practices for responsible AI: Closing the gap. arXiv preprint arXiv:2006.04707.
21. Leach, M., Stirling, A. C., & Scoones, I. (2010). Dynamic Sustainabilities: Technology, Environment, Social Justice (p. 232). Taylor & Francis. See also “The (eco)systemic challenges in AI” workshop, which introduced broader socio-technical and socio-ecological perspectives to the field of Artificial Intelligence at the Hybrid Human-Artificial Intelligence Conference 2022.
22. See https://www.luxzia.ai/
23. “Ecosystemic AI through a speculative fiction approach,” Mozilla MozFest 2022. See a summary at https://twitter.com/bobirakova/status/1500905539638681605
24. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).
25. Arana-Catania, M., Lier, F. A. V., Procter, R., Tkachenko, N., He, Y., Zubiaga, A., & Liakata, M. (2021). Citizen participation and machine learning for a better democracy. Digital Government: Research and Practice, 2(3), 1-22.
26. See https://partnershiponai.org/challenges-for-responsible-ai-practitioners/ and Rakova, B., Yang, J., Cramer, H., & Chowdhury, R. (2021). Where responsible AI meets reality: Practitioner perspectives on enablers for shifting organizational practices. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1-23.
27. I was introduced to the term “active hope” through the work of environmental activist and scholar Joanna Macy. See Macy, J., & Johnstone, C. (2012). Active Hope: How to Face the Mess We’re in Without Going Crazy. New World Library.