Social, Ethical and Legal Issues in Computing Lecture Series
Upcoming Events
Copyright & AI: Legalities and Legal Issues
November 19, 2024, 11:30 AM
Meaghan Shannon – Queen’s University
This lecture will provide an overview of what copyright is and how it works, focusing on the interplay between legislation and case law as well as the relationship between copyright law and contract law. Canadian copyright law is intended to balance the rights conferred upon authors of works with the exceptions that are available to users of works. The Canadian fair dealing exception and the American fair use defense will be explored during a deep dive into the relevant case law. Once an understanding of copyright is established, the current legal landscape will be considered and applied to artificial intelligence so that the legalities (legal obligations) and legal issues can be recognized and perhaps reconciled.
Chatbots for Psychotherapy: Possibilities, Limitations and Moral Responsibility
Nov 26, 2024, 11:30 AM
Catherine Stinson – Queen’s University
Are scientists the people best positioned to make decisions about the ethical impacts of their research and whether that research should be allowed to proceed? Or should regulatory bodies, governments or the public decide where the line should be drawn? Bridgman famously argued that scientists have a special status that confers on them freedom from considering consequences beyond scientific ones. Oppenheimer famously disagreed.
With this background in mind we look at a contemporary case study. For better or for worse, chatbot-based psychotherapy services are actively being offered and used online, with virtually no oversight. There are both obvious potential benefits and potential harms. We draw out some of these benefits and harms, and suggest policy responses that might help encourage more of the former and less of the latter. We then reflect on whether Bridgman’s arguments apply to the current moment.
Past Events
Who the Computer Sees: Race, Gender and AI
Mar 27, 2024
Carla Fehr – University of Waterloo, Wolfe Chair in Scientific and Technological Literacy
Facial recognition systems can do a lot more than open your smartphone. They can sort faces into many categories, including emotional state, age, race, and sex. Most Americans are, without their consent, included in government face recognition databases. This paper develops a case study in which scholar, activist, and public figure Joy Buolamwini diagnoses a now-famous failure of facial recognition systems to ‘see’ and accurately classify Black women’s faces. This case illustrates many important issues in the ethics and politics of AI. In this lecture I highlight how this problem is an EDI problem and caution against ‘easy’ solutions that can both backfire and lead to the exploitation of diverse researchers.
Artificial Intelligence, Artifice, and Art
Mar 20, 2024
Ted Chiang – Science Fiction writer
Does artificial intelligence deserve to be called intelligence? What are the uses of synthetic text and imagery, and what would it take for those to be artistic mediums?
Disability, Social Media and AI: Implications For the Computing Sciences
Mar 13, 2024
Johnathan Flowers – California State University, Northridge
This talk is divided into two parts: the first part talks through some of the social implications of technology and disability with an emphasis on AI and emerging technologies as they intersect with ableism and ableist discourse in society, covering some of the material in my chapter on AI and disability in the Bloomsbury Guide to Philosophy of Disability. The second part will engage with the computing sciences and disability, specifically the cultural environment of the computing sciences and the ways it relies on a “culture of smartness” which maintains ableism within the field.
The Art of Digital Capitalism
Mar 6, 2024
Tung-Hui Hu – University of Michigan, English & Digital Studies
While the ultimate compliment to an AI model is that it can write poetry or create art, this talk looks to actual writers and artists who have worked alongside digital technology. Moving from the 1970s, when a group of artists decided to build a decentralized network, to the present moment, when artists and writers are training their own AI models, this talk is structured around these footnotes to the history of computation. These artistic works aren’t just decorative or speculative, though; instead, they have the potential to teach us how to live with (and perhaps turn away from) the devastating consequences of digital capitalism.
Social Media, Polarization and Conflict
Feb 28, 2024
Jonathan Stray – UC Berkeley Center for Human-compatible AI
It’s now commonplace to say that ranking algorithms used by major social media and news platforms are tearing us apart, but what does this mean, what is the evidence, and what could we do differently? I’ll begin with some frameworks for thinking about conflict and polarization, to define more clearly what the goals of “better” algorithms might be. Then we’ll look at theories of how social media ranking algorithms can affect conflict, and data which might clarify what is actually happening. We’ll conclude by asking the question: if our algorithms are bad, what would better algorithms look like?
Pragmatism vs. Principle and How We Get Both Wrong: Inclusive Design Gifts
Feb 14, 2024
Jess Mitchell – Ontario College of Art and Design
From decision-making and thinking to the creation of everyday things, what are we missing? And what hides in the gaps? An inclusive design perspective gives us an opportunity to approach just about everything differently. Let’s have a chat about approaching things differently.
Virtual Reality as Artistic and Reflexive Media
Feb 7, 2024
Sojung Bahng – Queen’s University, Department of Film & Media and DAN School of Drama and Music
This talk will introduce the use of computational media in artistic and cinematic practices. The primary focus will be on the role of virtual reality as a reflexive material device for storytelling, serving as a lens to reflect our perception and consciousness within socio-cultural contexts.
Algorithmic Bias and Fairness: Exploring Historical Context, Methodological Shortcomings and Future Challenges
Jan 31, 2024
Rina Khan – Queen’s University, School of Computing
AI has seen incredible strides in the past decade and is now ubiquitous in various applications we interact with every day. AI applications have also demonstrated the capacity to perpetuate harmful biases and stereotypes, and even to cause actual harm. This can be observed in facial recognition, law enforcement, hiring screening, automated grading, and natural language processing, among others. In this talk, I will examine the lessons that can be learned from the history of computing and AI in relation to algorithmic fairness. I will explore the methodological factors that lead to algorithmic bias and harm, and the human factors that are intrinsically interwoven. I will finally discuss proposed mitigation strategies, and the challenges that lie ahead towards creating fairer models.
Designing for Coexistence: Adaptability, Equity, and Ethical Pluralism in Sociotechnical Systems
Jan 17, 2024
Mohammad Rashidujjaman Rifat – University of Toronto, Computer Science
Many technologies today are built on Western scientific principles and empirical data. This approach often overlooks or even discriminates against people whose values are deeply rooted in traditional beliefs and ethics. In this talk, putting faith at the center of his analysis, Rifat will examine how the prevailing ethical perspectives in technology development tend to favor some communities while neglecting or marginalizing others worldwide. Rifat will share insights from his diverse research, which spans areas like sustainability, development, privacy, and the prevention of online harms, to highlight how this marginalization occurs. He will then explain his approach to addressing these challenges by integrating theories from postsecular, postcolonial, and decolonial studies with advanced computing techniques, ranging from deep learning to virtual reality. His goal is to develop technologies that are more inclusive, equitable, and plural, especially for those whose ethics and traditions have been overlooked.
Epistemic Corruption and Interested Knowledge
Nov 23, 2023
Sergio Sismondo – Queen’s University, Philosophy
When a system that produces and distributes knowledge importantly loses integrity, ceasing to provide the kinds of trusted knowledge expected of it, we can label this ‘epistemic corruption’. It turns out that such systems are often more fragile than they appear, and they can lose their integrity as a result of internal or external pressures. It also turns out that important actors will often disagree about what constitutes epistemic corruption or which practices are cases – and hence it is important to look at accusations and defences with a measure of neutrality. I will present a small handful of examples of epistemic corruption, in an attempt to understand some of the stakes.
Towards Equitable Language Technologies
Nov 16, 2023
Su Lin Blodgett – Microsoft Research Montreal
Language technologies are now ubiquitous. Yet the benefits of these technologies do not accrue evenly to all people, and they can be harmful; they can reproduce stereotypes, prevent speakers of “non-standard” language varieties from participating fully in public discourse, and reinscribe historical patterns of linguistic discrimination. In this talk, I will take a tour through the rapidly emerging body of research examining bias and harm in language technologies and offer some perspective on the many challenges of this work, ranging from how we anticipate and measure language-related harms to how we grapple with the complexities of where and how language technologies are encountered. I will conclude by discussing some future directions towards more equitable technologies.
An Approach Towards Accessibility and Inclusive Design
Nov 9, 2023
Matt Jacobs and Eric Kellenberger – Queen’s University and San José State University
It is easy to mistake accessibility and inclusive design for auxiliary efforts, where accommodations are simply appended to an existing product. In truth, the most extreme examples of need often provide valuable insight into features that benefit everyone. This concept is the driving force behind ‘Universal Design.’ The goal of the present talk is to provide frameworks that broaden our ability to examine accessibility and inclusion, equipping the audience with tools for more iterative, person-driven design approaches. The emphasis will be more on how to interpret and approach the problem, rather than the exact solution itself.
Terraforming Bits & Carbonivorous Clouds: On the Metabolic Rift of Computation
Nov 2, 2023
Steven Gonzalez Monserrate – Goethe University
In the nineteenth century, Karl Marx formulated the concept of “metabolic rift” to describe capitalism’s unsustainable expansion as chemical fertilizers depleted soil nutrients and smog from factories choked the skies of an industrializing Europe. Today, much of what society describes as the “Cloud” resides in data centers not so unlike Marx’s factories. They are the invisible engines of digital capitalism; their pooled, remote computational power and storage capacity are the informatic backbone of everything from social media to payroll to ChatGPT. Like capitalism, computation is a metabolic process. Drawing on six years of ethnographic research in data centers located in the United States, Puerto Rico, Iceland, and Singapore, this lecture surveys the global and local environmental impacts of cloud computing, including carbon emissions, water footprint, electronic waste output, land use, and noise pollution. Inspired by science and technology studies and speculative fiction, alternative data ecologies are presented as a corrective to digital capitalism’s environmental excess.
A Critical Look at Canada’s Proposed Artificial Intelligence and Data Act
Oct 26, 2023
Teresa Scassa – University of Ottawa
How do we regulate a technology that crosses all sectors and industries, and that presents considerable risks alongside its promises? Canada’s response to this question, the proposed Artificial Intelligence and Data Act (AIDA), is currently before the INDU committee of Parliament. If passed, AIDA will provide for ex ante regulation of commercial AI in Canada. This presentation offers a critical look at AIDA, placing it within the broader context of other governance work in Canada and abroad.
The Artificial Sublime
Oct 5, 2023
Regina Rini – York University
AI tools like Dall-E, Midjourney, and even ChatGPT can produce objects that look like artwork. But is it really art? Here I will argue that AI is surprisingly well-suited to a particular type of artistic value: the sublime. Sublimity, according to Kant, is the experience of encountering something so vast that the human mind cannot comprehend it. Kant thought that this could be found only in nature, not in art made by humans. But, I will argue, he was wrong about that last part – and it turns out that AI can produce sublime experiences too.
The Future of Work in Canada – A Public Policy Perspective
Sep 28, 2023
Sunil Johal – University of Toronto
Industry Presence and Influence in AI
Sep 7, 2023
Will Aitken – Queen’s University
The advent of transformers, higher computational budgets, and big data has engendered remarkable progress in Natural Language Processing (NLP). The impressive performance of industry pre-trained models has garnered public attention in recent years and made news headlines. That these are industry models is noteworthy. Rarely, if ever, are academic institutes producing exciting new NLP models. Using these models is critical for competing on NLP benchmarks and, correspondingly, for staying relevant in NLP research. We surveyed 100 papers published at EMNLP 2022 to determine whether this phenomenon constitutes a reliance on industry for NLP publications. We find that there is indeed a substantial reliance. Citations of industry artifacts and contributions across categories are at least three times greater than industry publication rates per year. Quantifying this reliance does not settle how we ought to interpret the results. We discuss two possible perspectives in our discussion: 1) Is collaboration with industry still collaboration in the absence of an alternative? Or 2) has free NLP inquiry been captured by the motivations and research direction of private corporations?
Topic: Copyright and Fair Use
Mar 31, 2023
John Watkinson – Larva Labs
Ethical Issues in the Mass Collection of Human Rights Documentation
Mar 24, 2023
Yvonne Ng – WITNESS
Investigators, researchers, and archivists around the world are using computing tools and services to collect and preserve large quantities of human rights documentary evidence, often without considering all the potential unintended consequences and harms. We will discuss some of the ethical issues that arise, and ways that some have found to approach this work responsibly.
Light-touch ethics: Responsible AI’s role in government
Mar 17, 2023
Ana Brandusescu – McGill University
As part of the artificial intelligence (AI) ethics movement, responsible AI has become a dominant strategy in governing AI, rooted in corporate social responsibility. One such example is the algorithmic impact assessment (AIA). Created by governments and professional associations, a typical AIA produces a points-based reward system for impact and risk assessment levels for an AI system. This talk will address power and influence in responsible AI and the broader implications for the governance of AI and its ethics.
Why privacy doesn’t matter and understanding what’s really at stake does.
Mar 3, 2023
LLana James – University of Toronto
Artificial Intelligence: Navigating the Intersection of Ethics, Law, and Policy
Feb 17, 2023
Kassandra McAdams-Roy
This lecture will examine the legal, ethical and policy considerations surrounding the development and use of Artificial Intelligence (AI). The widespread adoption of AI has created new challenges for society, including issues related to data privacy, algorithmic bias, accountability, human safety and more. The lecture will explore some of the current and emerging laws, regulations and other normative frameworks governing AI. It will also discuss the ethical considerations surrounding the use of AI, the broader policy implications, and will consider how best to balance the benefits of this technology with the need to protect individual rights and interests.
What is creativity, and what does it have to do with labour and computers?
Feb 10, 2023
Darren Abramson – Dalhousie University
What does it mean to create something? What do we deserve for our labour? I briefly consider the concept of creativity and argue for a particular view with examples from machine learning. Then I consider the value of labour in programming, and contrast a view from the turn of the millennium with perspectives from recent events.
Inclusive Design, Accessibility and the Outlier Challenge
Feb 3, 2023
Jutta Treviranus – OCADU
What is inclusive design? How is it situated with respect to other forms of design and accessibility? What approaches does it offer for complexity, uncertainty, disparity, and wicked decisions?
Anticheat, Antitox, and the Bottom Line
Jan 27, 2023
Kyle Boerstler – Activision
It seems like a no-brainer that keeping cheaters out of games would be a good idea for companies (and it’s why jobs like mine exist). However, tensions appear when the population of cheaters overlaps with the population of spenders in games. This problem becomes even worse for Antitox, because the perceived harm is lower, often goes unreported, and does not have an obvious solution. For these reasons, investment in antitox is often below investment in anticheat, which means the solutions are often more heavy-handed, and less likely to be implemented because it is even more common for toxic players and spenders to overlap. In this talk, I will cover these issues from my standpoint as a data scientist, addressing the tensions and discussing the relative effects on our player populations.
The Problem with Automated Content Moderation
Jan 20, 2023
Zeerak Talat – Simon Fraser University
Online content moderation using machine learning is a task that is necessary yet has failed in its mission to protect marginalized communities, which are disproportionately at risk of harm. Claims have been made that the issue has been available resources, i.e. datasets and adequately advanced machine learning models. I argue in this talk that the fundamental reason is that the power dynamics which govern our social structures have not been adequately subverted to afford protection to marginalized communities. Through a critical reading of machine learning, I show how the task of protecting marginalized communities is at odds with machine learning without an associated restructuring of the power dynamics that govern the technology.
Portrait of the Artist as a Young Algorithm
Nov 21, 2022
Sofie Vlaad – Queen’s University
Sofie’s research is firmly rooted in both feminist philosophy and transgender studies. These twin schools of thought inform her work in ways that are both explicit and implicit. Her current project brings together ethics of artificial intelligence, philosophy of creativity, and digital poetics to explore a series of related questions: Might we consider poetry constructed with the assistance of machine learning to be a product of creativity? If so, how is this form of creativity shaped by algorithmic bias? Does computer generated poetry have aesthetic value?
Currently Sofie is working on an article that posits trans poetics as a way of doing trans philosophy, a co-authored piece exploring how we might epistemically ground diversity projects in AI, and a collaborative arts project exploring queer/mad/trans/femme futures.
What Software Eats: The Banal Violences of Efficiency and How to Bite Back
Nov 14, 2022
Bianca Wylie – Digital Public, Tech Reset Canada
Bianca is a writer with a dual background in technology and public engagement. She is a partner at Digital Public and a co-founder of Tech Reset Canada. She worked for several years in the tech sector in operations, infrastructure, corporate training, and product management. Then, as a professional facilitator, she spent several years co-designing, delivering and supporting public consultation processes for various governments and government agencies. She founded the Open Data Institute Toronto in 2014 and co-founded Civic Tech Toronto in 2015.
Bianca’s writing has been published in a range of publications including: Boston Review, VICE, The Globe and Mail, and Toronto Life. She also posts on Medium. She is currently a member of the advisory boards for the Electronic Privacy Information Centre (EPIC), The Computational Democracy Project and the Minderoo Tech & Policy Lab and is a senior fellow at the Centre for International Governance Innovation.
Darwin’s Animoji: Histories of Racialization in Facial Analyses Past and Present
Oct 31, 2022
Luke Stark – University of Western Ontario
Luke Stark is an Assistant Professor in the Faculty of Information and Media Studies at the University of Western Ontario. His work interrogates the historical, social, and ethical impacts of computing and artificial intelligence technologies, particularly those mediating social and emotional expression. His scholarship highlights the asymmetries of power, access and justice that are emerging as these systems are deployed in the world, and the social and political challenges that technologists, policymakers, and the wider public face as a result.
Computing and Global Development: A Critical Perspective
Oct 17, 2022
Ishtiaque Ahmed – University of Toronto
His research interest is situated at the intersection of computer science and the critical social sciences. His work is often motivated by social justice and sustainability issues, and he puts them in the academic contexts of Human-Computer Interaction (HCI) and Information and Communication Technology and Development (ICTD). He operates through a wide range of technical and methodological apparatuses from ethnography to design, and from NLP to tangible user interface.
Understanding Conflicts of Interest in Ethics of AI Research
Oct 3, 2022
Mohamed Abdalla – University of Toronto
As more governmental bodies look to regulate the application of AI, it is important that the incentives of those consulted be clearly understood and taken into account. This talk will explore the role of industry funding on AI research and the incentives such funding creates. To do this, we will: i) discuss how conflicts of interest are treated in other fields of academia, ii) quantify financial relationships between researchers and industry, and iii) discuss how young professionals and future researchers should approach the issue of corporate funding.
The Automation of Everything
Sep 19, 2022
David Murakami Wood – University of Ottawa
Beginning with factory work and the introduction of the production line, this presentation examines how automation within capitalism has progressed from the workplace through to the liminal spaces between work and not-work towards the full automation of the social. It draws on work from fields as diverse as Political Economy, Surveillance Studies, Science and Technology Studies, Geography and Environmental Studies to trace the implications of automation for work and life in the era of platform capitalism.
Refusing AI Contact: Autism, Algorithms and the Dangers of ‘Technopsyence’
Sep 12, 2022
Os Keyes – University of Washington
Their work focuses on bringing together both the sociology and philosophy of technoscience to examine the interplay of gender, disability, technology and power. Current projects focus on the framing and co-relations between autistic people and artificial intelligence, and the ways trans people are the subject of, and subject to, scientific research.