About
I am a Professor in the School of Communication at Simon Fraser University, where I have been a faculty member since 1994. From 2011 to 2020, I served as Director of the Master of Digital Media Program at the Centre for Digital Media (Great Northern Way Campus). I directed SFU's Centre for Policy Research on Science & Technology (CPROST) for nearly two decades.
My research focuses on communication and collaboration networks in IT innovation, technology futures methodologies, digital scholarly publishing, and innovation clusters. I am co-author of New Media: An Introduction (Oxford University Press), now in its fourth edition, which has become a foundational text in the field.
In 2020, I was appointed Chevalier of the Ordre des Palmes Académiques by the French government in recognition of contributions to education and culture.
Current Projects
The book presents a comprehensive analysis of the impending transition of artificial intelligence from a passive tool to an autonomous actor. The authors argue that while current AI provides significant benefits in medicine and science, the rapid pursuit of superintelligence poses a genuine risk of human extinction. They contend that existing government regulations are structurally inadequate because they treat AI as a controllable product rather than an adaptive, potentially deceptive entity. Through quantitative modeling and historical analogies, the text estimates a disturbing probability of losing human control by the mid-21st century. Consequently, the authors call for an immediate global prohibition on developing superintelligent systems to ensure civilizational safety. The work emphasizes that internal self-control must be architecturally guaranteed before such powerful technologies are integrated into society.
Interactive companion: P(doom) estimates grouped by epistemic position →
Blog: (Somewhat) More Optimistic → March 18, 2026
CASX Companion (1st edition) → April 1, 2026
CASX Companion → April 2, 2026
Research Interests
- Communication & collaboration networks
- IT innovation
- Technology & society futures
- Digital scholarly publishing
- Innovation clusters
- Regional development
Selected Publications
What's New in AI (March/April 2026)
These items are a distillation of my reading on artificial intelligence over the last month (or so). I follow a set of news sites, journals, and blogs using an RSS reader, Feedly, which presents me with hundreds of items every day. I skim through those and mark the ones that seem most interesting; you can see the "AI"-tagged ones in the public version of that list. I later review those and flag the ones that seem important for inclusion in my source/bibliography manager, Zotero. Then, at the beginning of each month, I have Claude CoWork pick the top five stories from that selection, write a short summary of each, and prepare the result for the website. I upload it via SFTP to my server at the university, and you're reading it now.
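The monthly "pick the top five" step can be sketched in miniature. This is a hypothetical illustration only, not my actual scripts: the item fields and the scoring rule are invented stand-ins, and in the real pipeline the ranking is done by Claude CoWork rather than a formula.

```python
# Hypothetical sketch of the monthly selection step.
# Real pipeline: Feedly flags -> Zotero review -> Claude ranking -> SFTP upload.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    flagged: bool   # marked "important" during the Zotero review
    signal: int     # invented stand-in for how notable the story is

def top_five(items: list[Item]) -> list[Item]:
    """Keep flagged items, rank by the stand-in signal, take at most five."""
    flagged = [i for i in items if i.flagged]
    return sorted(flagged, key=lambda i: i.signal, reverse=True)[:5]

inbox = [
    Item("Timelines update", True, 40),
    Item("Minor model release", False, 5),
    Item("Reasoning paper", True, 25),
]
for item in top_five(inbox):
    print(item.title)
```

The shape of the step is what matters here: filter the month's flagged items, rank them, keep a handful, then hand the survivors off for summarizing and upload.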
Timelines Accelerating — Again
The AI Futures Project published its Q1 2026 timelines update, and the direction is unmistakable: shorter. Daniel Kokotajlo's median for "Automated Coder" — the point at which AI companies would rather lay off their human software engineers than stop using AI — has moved from late 2029 to mid 2028. The reason: progress in agentic coding has been faster than expected over the last few months. The METR coding time horizon is doubling roughly every four months, new models (including Claude Opus 4.6 and GPT-5.2) have been notably impressive, and Claude Code has already reached an annualized revenue run rate of $2.5 billion. Timelines for "Top-Expert-Dominating AI" — a system at least as capable as top human experts across virtually all cognitive tasks — have similarly shifted about 18 months earlier. The authors note that several AI company researchers they respect are privately predicting automated AI R&D even sooner than the project itself forecasts.
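The cited doubling rate implies concrete near-term numbers. As a rough illustration (assuming a clean four-month doubling period, which the source gives only as "roughly", and a purely hypothetical one-hour starting horizon), the compounding works out as follows:

```python
# Illustrative extrapolation of a metric that doubles every four months.
# The starting value (1 hour) is hypothetical, not a METR figure.

def extrapolate(start_value: float, months: float, doubling_months: float = 4.0) -> float:
    """Value after `months`, given a fixed doubling period."""
    return start_value * 2 ** (months / doubling_months)

print(extrapolate(1.0, 12))  # 8.0  -> eight-hour tasks within a year
print(extrapolate(1.0, 24))  # 64.0 -> roughly week-and-a-half-long tasks in two years
```

The point of the sketch is simply that a four-month doubling compounds to an eightfold increase per year, which is why a few months of faster-than-expected progress can pull median forecasts in by a year or more.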
Can Reasoning Models Actually Reason?
A significant Apple preprint — "The Illusion of Thinking" — challenges the narrative around large reasoning models (LRMs) such as o3 and DeepSeek-R1. Using controllable puzzle environments, the authors show that LRMs face a complete accuracy collapse beyond certain complexity thresholds. More counterintuitively, reasoning effort increases with problem complexity up to a point, then actually declines — even when token budget remains available. LRMs also fail to use explicit algorithms and reason inconsistently across problem scales. The paper identifies three regimes: at low complexity, standard LLMs outperform reasoning models; at medium complexity, LRMs have an advantage; at high complexity, both collapse entirely. This doesn't invalidate current reasoning models' practical usefulness, but it does complicate claims about genuine planning or reasoning capability — a distinction that matters considerably for arguments about AI risk trajectories.
The UN Enters the Governance Arena
The UN's Independent International Scientific Panel on AI held its first meeting on March 3, with Secretary-General Guterres delivering opening remarks. Mandated by the General Assembly, the panel is explicitly modelled on the IPCC — a standing scientific body intended to provide authoritative, politically independent assessments of AI's risks and opportunities. Guterres framed it in notably urgent terms, stressing that AI governance cannot wait for consensus that may never come. This is a significant institutional development: for the first time, there is a multilateral scientific body with a formal mandate to advise the international community on AI. Whether it can move with sufficient speed relative to the technology is an open question, but its existence changes the governance landscape — particularly for middle powers like Canada that have argued for multilateral approaches. Canada's own Yoshua Bengio is co-chair of the Panel.
Anthropic Updates Its Safety Commitments
Anthropic published Responsible Scaling Policy v3.0, an update to its self-imposed framework for evaluating when to pause AI development. RSP v3 reflects lessons from deploying frontier models including Opus 4.6, and refines the "AI Safety Levels" thresholds at which Anthropic commits to halting or restricting further scaling. Separately, a piece by Celia Ford captures a striking irony in the alignment field: researchers working on aligning superhuman AI are increasingly turning to AI to help automate the alignment research itself, openly acknowledging that manual approaches cannot keep pace. As one researcher put it: "Who the fuck knows how to align superhuman AI?" — which is either reassuring candour or a deeply unsettling admission, depending on your priors. Our book's argument that internal architectural controls must be built in from the start looks increasingly pertinent.
Economic Reality Check
Two pieces from opposite ends of the AI optimism spectrum are worth reading together. Ed Zitron's "The Subprime AI Crisis Is Here" argues that AI product revenue is increasingly built on speculative enterprise contracts rather than demonstrated productivity gains — a structural parallel to the 2008 financial crisis that may be closer than the industry acknowledges. Meanwhile, a careful empirical study of US household internet browsing data (Blank, Schubert & Zhang) finds that adopting generative AI increases leisure browsing while leaving productive digital tasks unchanged — suggesting that at home, people are primarily using AI to save time on chores, then spending that time on entertainment rather than work. The productivity gains are real, but they are accruing to leisure, not output. Together, these pieces complicate the standard narrative that AI is straightforwardly transforming economic productivity.
Curated from my Zotero library and Feedly AI board • Last updated April 2, 2026
Reading timeline (2014–2026) → · Full archive →
Education
PhD, Communication — Simon Fraser University, 1994
MA, Communication — Simon Fraser University, 1986
BA, Mass Communication — Carleton University, 1981
Contact
Email: smith@sfu.ca
Google Scholar: View profile
Office: School of Communication, Simon Fraser University
8888 University Drive, Burnaby BC V5A 1S6, Canada