Schwartz Reisman Institute for Technology and Society / en AI safety, cybersecurity experts take on key roles at Schwartz Reisman Institute for Technology and Society /news/ai-safety-cybersecurity-experts-take-key-roles-u-t-s-schwartz-reisman-institute-technology-and <span class="field field--name-title field--type-string field--label-hidden">AI safety, cybersecurity experts take on&nbsp;key&nbsp;roles at Schwartz Reisman Institute for Technology and Society</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/2024-06/sri-appts.jpg?h=5a646a6b&amp;itok=s3UtdfVL 370w, /sites/default/files/styles/news_banner_740/public/2024-06/sri-appts.jpg?h=5a646a6b&amp;itok=WgLV4SSI 740w, /sites/default/files/styles/news_banner_1110/public/2024-06/sri-appts.jpg?h=5a646a6b&amp;itok=eM9TB1FA 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/2024-06/sri-appts.jpg?h=5a646a6b&amp;itok=s3UtdfVL" alt="&quot;&quot;"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2024-06-14T11:18:33-04:00" title="Friday, June 14, 2024 - 11:18" class="datetime">Fri, 06/14/2024 - 11:18</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item"><p><em>From left to right:&nbsp;David Duvenaud and Roger Grosse have been appointed Schwartz Reisman Chairs in Technology and Society; David Lie has been appointed director of the Schwartz Reisman Institute for Technology and Society (supplied images)</em></p> </div> </div> 
<div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/adina-bresge" hreflang="en">Adina Bresge</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/computer-science" hreflang="en">Computer Science</a></div> <div class="field__item"><a href="/news/tags/faculty-applied-science-engineering" hreflang="en">Faculty of Applied Science &amp; Engineering</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/faculty-law" hreflang="en">Faculty of Law</a></div> <div class="field__item"><a href="/news/tags/research-innovation" hreflang="en">Research &amp; Innovation</a></div> <div class="field__item"><a href="/news/tags/vector-institute" hreflang="en">Vector Institute</a></div> </div> <div class="field field--name-field-subheadline field--type-string-long field--label-above"> <div class="field__label">Subheadline</div> <div class="field__item">Roger Grosse and David Duvenaud named Schwartz Reisman Chairs in Technology and Society, while David Lie becomes the institute’s new director</div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden 
field__item"><p>A leading expert in cybersecurity and two renowned AI safety researchers are set to take on key roles at the University of Toronto’s Schwartz Reisman Institute for Technology and Society.&nbsp;</p> <p><strong>David Lie</strong>, who is known for his seminal work that led to modern trusted execution processor architectures, has been named the new director of the Schwartz Reisman Institute (SRI), which aims to explore and address the ethical and societal implications of artificial intelligence and other emerging technologies.</p> <p>His four-year appointment, which takes effect July 1, coincides with two renowned experts in AI safety –&nbsp;<strong>Roger Grosse&nbsp;</strong>and&nbsp;<strong>David Duvenaud</strong>&nbsp;– being named Schwartz Reisman Chairs in Technology and Society for five-year terms.</p> <p>“I think one of the top priorities is ensuring that SRI and U of T are the primary places in Canada – and perhaps in the world – for AI safety discussion and research,” says Lie, a professor in the Faculty of Applied Science &amp; Engineering’s Edward S. Rogers Sr. department of electrical and computer engineering.</p> <p>“My vision is to make us one of the leaders. Canada has already contributed greatly to machine learning and AI through the contributions of previous scholars like [<a href="https://www.provost.utoronto.ca/awards-funding/university-professors/">University Professor Emeritus</a>]&nbsp;<strong>Geoffrey Hinton</strong>, and I think we have a very strong role to play in this important technology going forward.”</p> <p>The appointments come as inaugural director and chair&nbsp;<strong>Gillian Hadfield</strong>&nbsp;prepares to conclude her term as chair this month (she stepped down as director at the end of last year).
The institute,&nbsp;created following <a href="/news/landmark-100-million-gift-university-toronto-gerald-schwartz-and-heather-reisman-will-power">a historic gift in 2019</a>&nbsp;from business leaders&nbsp;<strong>Gerald Schwartz</strong>&nbsp;and&nbsp;<strong>Heather Reisman</strong>, brings together experts from disciplines across U of T’s three campuses to steer AI development toward safety and human welfare.&nbsp;</p> <p>“We are thrilled to welcome David Lie, Roger Grosse and David Duvenaud to their new roles at the Schwartz Reisman Institute,” says&nbsp;<strong>Melanie Woodin</strong>, dean of the Faculty of Arts &amp; Science. “Their expertise and leadership will be instrumental in fostering the interdisciplinary collaboration needed for the University of Toronto to remain at the forefront of technological innovation that benefits humanity.”</p> <p>Lie, who has served as a research lead at SRI and holds cross-appointments in the department of computer science and the Faculty of Law, says his decades of research on making computer systems more secure and trustworthy – including contributions to computer architecture, formal verification, and techniques spanning operating systems and networking – have equipped him to tackle the complex issues posed by AI, which will require researchers to anticipate and adapt to the unexpected.</p> <p>“As AI systems become more powerful, they may do things – or are already doing things – that we didn’t anticipate or expect,” says Lie. “Bringing cybersecurity skills, thinking and tools into the AI safety discussion will be absolutely critical to solving the problem.”</p> <p>Lie emphasizes that interdisciplinary collaboration is key to addressing potential AI disruption, noting that it has been pivotal in his own research and other roles.&nbsp;</p> <p>His current research focuses on securing mobile platforms, cloud computing security and bridging the divide between technology and policy.
He is also an associate director at the&nbsp;<a href="https://datasciences.utoronto.ca/">Data Sciences Institute</a>, a U of T&nbsp;<a href="https://isi.utoronto.ca/">institutional strategic initiative</a>, a faculty affiliate at the Vector Institute for Artificial Intelligence and a senior fellow at Massey College.</p> <p>“It’s really one of the things that I love about a place like U of T, because it's big and you have experts in every imaginable field to collaborate with,” he says. “I feel very strongly that we can always accomplish way more together than we can individually. That's true for people, but that's also true for disciplines.”</p> <p>As incoming Schwartz Reisman Chairs in Technology and Society, Grosse and Duvenaud have vital roles to play in driving SRI’s research agenda and sharing its findings with the world, says Lie.</p> <p>“One of the main ways universities contribute to society is through research, but we also contribute through discourse; we contribute by translating knowledge and providing that to policymakers, decision-makers and stakeholders,” he says.
“I see SRI playing an important part in these roles.”</p> <p>Both Grosse and Duvenaud are associate professors of computer science in the Faculty of Arts &amp; Science, faculty affiliates at SRI, founding members of the Vector Institute and Canada CIFAR AI chairs – and both are working at <a href="https://www.anthropic.com/">San Francisco-based&nbsp;Anthropic</a>, a research company focused on <a href="/news/achieving-alignment-how-u-t-researchers-are-working-keep-ai-track">AI safety and&nbsp;alignment</a>.</p> <p>Grosse, whose research applies our understanding of deep learning to the safety and alignment of AI systems, says academia has an essential role to play in guiding AI development by looking beyond short-term incentives to ask how these technologies can be safely and ethically integrated for the long-term benefit of humanity.&nbsp;</p> <p>“I'm very excited to be able to understand and mitigate catastrophic risks from AI, to be part of an interdisciplinary community that's especially well positioned to make progress on these issues, and I really appreciate the leadership that donors are showing in supporting this work,” he says.</p> <p>“I think academia is great for being able to ask the more fundamental questions, to carry out maybe more forward-looking research that might not be directly on a company's critical path, but will contribute to safety efforts at many different organizations.”</p> <p>Duvenaud’s research, meanwhile, focuses on&nbsp;probabilistic deep learning, artificial general intelligence governance and dangerous capabilities evaluation.</p> <p>He envisions SRI as a “centre of gravity” where academics, industry members, government leaders and other stakeholders can engage with each other and shape the future of AI technologies.</p> <p>“The idea is that by having this institute dedicated to this direction, we’ll be able to do things like host visitors and engage with academics from all sorts of disciplines – such as law, economics, and other
parts of civil society – so that, ultimately, when policy discussions come up, we’ll be equipped and credible as people who can help governments navigate these decisions,” says Duvenaud, who is cross-appointed to the department of statistical sciences.</p> <p><strong>Sheila McIlraith</strong>, an associate director and research lead at SRI, professor of computer science in the Faculty of Arts &amp; Science, and a Canada CIFAR AI Chair at the Vector Institute, underlines the importance of rallying diverse disciplinary experts from across U of T to address the opportunities and challenges that AI will present in the coming years.</p> <p>“AI is no longer the sole purview of computer scientists. It is reshaping the way we live, work, and interact with each other, and it will take experts from a broad range of disciplines to help ensure that AI is developed and deployed for the benefit of humanity, and that Canada adapts swiftly to protect our institutions,” says McIlraith, who is an expert in AI safety research herself.&nbsp;</p> <p>“Threats are already upon us; now is the time to act.”&nbsp;</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Fri, 14 Jun 2024 15:18:33 +0000 Christopher.Sorensen 308182 at What Now? 
AI, Episode 5: This Is Not Real</span> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>mattimar</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2024-05-23T15:33:08-04:00" title="Thursday, May 23, 2024 - 15:33" class="datetime">Thu, 05/23/2024 - 15:33</time> </span> <div class="field field--name-field-youtube field--type-youtube field--label-hidden field__item"><figure class="youtube-container"> <iframe src="https://www.youtube.com/embed/PsWmUTAfluE?wmode=opaque" width="450" height="315" id="youtube-field-player" class="youtube-field-player" title="Embedded video for What Now? AI, Episode 5: This Is Not Real" aria-label="Embedded video for What Now? AI, Episode 5: This Is Not Real: https://www.youtube.com/embed/PsWmUTAfluE?wmode=opaque" frameborder="0" allowfullscreen></iframe> </figure> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/temerty-faculty-medicine" hreflang="en">Temerty Faculty of Medicine</a></div> <div class="field__item"><a href="/news/tags/what-now-ai" hreflang="en">What Now? 
AI</a></div> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/munk-school-global-affairs-public-policy-0" hreflang="en">Munk School of Global Affairs &amp; Public Policy</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/faculty-information" hreflang="en">Faculty of Information</a></div> <div class="field__item"><a href="/news/tags/political-science" hreflang="en">Political Science</a></div> <div class="field__item"><a href="/news/tags/u-t-mississauga" hreflang="en">U of T Mississauga</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>Artificial intelligence presents new opportunities to strengthen democracy even as it threatens to cast a shadow over election integrity and further the spread of misinformation.</p> <p>In the fifth episode of&nbsp;What Now? AI, U of T hosts <strong>Beth Coleman</strong>&nbsp;and&nbsp;<strong>Rahul Krishnan</strong> are joined by experts <strong>Harper Reed</strong> and <strong>Peter Loewen</strong>, who is also from U of T, to explore the impact of AI on the political realm. 
&nbsp;</p> <p>Listen to episode five on&nbsp;<a href="https://podcasts.apple.com/us/podcast/what-now-ai/id1635579922" target="_blank">Apple</a>,&nbsp;<a href="https://open.spotify.com/show/6E0YlC5Sw59q7Al5UAWOP8?si=795f1fa38c2b4812" target="_blank">Spotify</a>,&nbsp;<a href="https://soundcloud.com/universityoftoronto" target="_blank">SoundCloud</a>,&nbsp;<a href="https://www.iheart.com/podcast/263-what-now-ai-99641114/" target="_blank">iHeartRadio</a>&nbsp;and&nbsp;<a href="https://music.amazon.ca/podcasts/60a0653e-3cd0-410e-b270-2582480b991a/what-now-ai" target="_blank">Amazon</a>. Watch episode five on <a href="https://www.youtube.com/watch?v=PsWmUTAfluE">YouTube</a>.&nbsp;</p> <p>Loewen, director of U of T’s Munk School of Global Affairs &amp; Public Policy and a professor in the department of political science in the Faculty of Arts &amp; Science, explains how AI removes the human touch from politics, potentially making the public uneasy.</p> <p>“We still don't like the fact that it might be a machine that we're talking to,” said Loewen, who is also the associate director of the Schwartz Reisman Institute for Technology and Society.</p> <p>“But then if you layer on this dimension of not knowing if this is actually the campaign that’s doing it, I think that’s probably orders of magnitude worse because what it does is it takes us from the realm of kind of feeling uneasy about something into feeling like this thing is corrupted.”</p> <p>Reed, meanwhile, <a href="https://www.motherjones.com/politics/2012/10/harper-reed-obama-campaign-microtargeting/" target="_blank">spoke about his experience</a> as the chief technology officer on former U.S. 
president Barack Obama’s re-election campaign in 2012.</p> <p>“The technology we built was not about convincing someone at the time that Mitt Romney was a bad person or a good person,” said Reed during a conversation with Coleman about AI and democracy that was filmed live at the Schwartz Reisman Institute for Technology and Society’s annual conference <a href="https://absolutelyinterdisciplinary.com/" target="_blank">Absolutely Interdisciplinary</a>, a portion of which was used in the podcast episode.</p> <p>“The tech was more about making sure you got to vote.”</p> <p>When asked about the biggest threats AI poses to democracy, Reed emphasized that he is less worried about the technology itself and more concerned with ensuring it is used for society’s benefit.&nbsp;</p> <p>“I’m worried about who has access to it and how they are using it.”</p> <h4>About the hosts:&nbsp;</h4> <p><strong>Beth Coleman</strong>&nbsp;is an associate professor at U of T Mississauga’s&nbsp;<a href="https://www.utm.utoronto.ca/iccit/" target="_blank">Institute of Communication, Culture, Information and Technology</a>&nbsp;and the Faculty of Information. She is also a&nbsp;research lead on AI policy and praxis&nbsp;at the&nbsp;<a href="https://srinstitute.utoronto.ca/" target="_blank">Schwartz Reisman Institute for Technology and Society</a>. Coleman authored&nbsp;<a href="https://k-verlag.org/books/beth-coleman-reality-was-whatever-happened/" target="_blank"><em>Reality Was Whatever Happened: Octavia Butler AI&nbsp;and Other Possible Worlds</em></a>&nbsp;using art and generative AI.&nbsp;</p> <p><strong>Rahul Krishnan</strong>&nbsp;is an&nbsp;assistant professor in U of T’s department&nbsp;of computer science in the Faculty of Arts &amp; Science&nbsp;and&nbsp;department of laboratory medicine and pathobiology in the Temerty Faculty of Medicine. 
He is a Canada CIFAR AI Chair at the Vector Institute, a faculty affiliate at the Schwartz Reisman Institute for Technology and Society and a faculty member at the&nbsp;<a href="https://tcairem.utoronto.ca/" target="_blank">Temerty Centre for AI Research and Education in Medicine&nbsp;(T-CAIREM)</a>.&nbsp;</p> <p><em>Note: The artwork in the background of Peter Loewen’s interview belongs to the Mirvish Family’s private collection. The large image, titled&nbsp;Floating Free, is by K.M. Graham. The smaller image is untitled and by the same artist.</em></p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Thu, 23 May 2024 19:33:08 +0000 mattimar 307908 at U of T initiative encourages computer science students to incorporate ethics into their work /news/u-t-initiative-encourages-computer-science-students-incorporate-ethics-their-work <span class="field field--name-title field--type-string field--label-hidden">U of T initiative encourages computer science students to incorporate ethics into their work</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/2024-04/GettyImages-1063724556-crop.jpg?h=81d682ee&amp;itok=8N5uArHf 370w, /sites/default/files/styles/news_banner_740/public/2024-04/GettyImages-1063724556-crop.jpg?h=81d682ee&amp;itok=SuP6_Tgs 740w, /sites/default/files/styles/news_banner_1110/public/2024-04/GettyImages-1063724556-crop.jpg?h=81d682ee&amp;itok=bU01W_QA 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/2024-04/GettyImages-1063724556-crop.jpg?h=81d682ee&amp;itok=8N5uArHf" alt="a woman sits in a computer science classroom"> </div> <span 
class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2024-04-26T15:22:07-04:00" title="Friday, April 26, 2024 - 15:22" class="datetime">Fri, 04/26/2024 - 15:22</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item"><p><em>(photo by urbazon/Getty Images)</em></p> </div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/krystle-hewitt" hreflang="en">Krystle Hewitt</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/computer-science" hreflang="en">Computer Science</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> </div> <div class="field field--name-field-subheadline field--type-string-long field--label-above"> <div class="field__label">Subheadline</div> <div class="field__item">Total enrolment in courses featuring Embedded Ethics Education Initiative modules exceeded 8,000 students this year</div> </div> <div 
class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>Computer science students at the University of Toronto are learning how to incorporate ethical considerations into the design and development of new technologies such as artificial intelligence with the help of a unique undergraduate initiative.</p> <p>The <a href="https://www.cs.toronto.edu/embedded-ethics/">Embedded Ethics Education Initiative</a> (E3I) aims to provide students with the ability to critically assess the societal impacts of the technologies they will be designing and developing throughout their careers. That includes grappling with issues such as AI safety, data privacy and misinformation.</p> <p>Program co-creator<strong> Sheila McIlraith</strong>, a professor in the department of computer science in the Faculty of Arts &amp; Science and an associate director at the <a href="http://srinstitute.utoronto.ca">Schwartz Reisman Institute for Technology and Society</a>&nbsp;(SRI), says E3I aims to help students “recognize the broader ramifications of the technology they’re developing on diverse stakeholders, and to avoid or mitigate any negative impact.”&nbsp;</p> <p>First launched in 2020 as a two-year pilot program, the initiative is a collaborative venture between the&nbsp;department of computer science and SRI in association with the&nbsp;department of philosophy. It integrates ethics modules into select undergraduate computer science courses – and has reached thousands of U of T students in this academic year alone.&nbsp;</p> <p><strong>Malaikah Hussain</strong> is one of the many U of T students who has benefited from the initiative. 
As a first-year student enrolled in <a href="https://artsci.calendar.utoronto.ca/course/csc111h1">CSC111: Foundations of Computer Science II</a>, she participated in an E3I module that explored how a data structure she learned about in class laid the foundation of a contact tracing system and raised ethical issues concerning data collection. &nbsp;</p> <p>“The modules underlined how the software design choices we make extend beyond computing efficiency concerns to grave ethical concerns such as privacy,” says Hussain, who is now a third-year computer science specialist. &nbsp;&nbsp;</p> <p>Hussain adds that the modules propelled her interest in ethics and computing, leading her to pursue upper year courses on the topic. During a subsequent internship, she organized an event about the ethics surrounding e-waste disposal and the company’s technology life cycle. &nbsp;</p> <p>“The E3I modules have been crucial in shaping my approach to my studies and work, emphasizing the importance of ethics in every aspect of computing,” she says. &nbsp;</p> <p>The program, which initially reached 400 students, has seen significant growth over the last four years. This academic year alone, total enrolment in&nbsp;computer science&nbsp;courses with E3I programming has exceeded 8,000 students. 
Another 1,500&nbsp;students participated in E3I programming in courses outside computer science.&nbsp;</p> <figure role="group" class="caption caption-drupal-media align-left"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/2024-04/techdesign-lead.jpg" width="370" height="270" alt="&quot;&quot;"> </div> </div> <figcaption><em>Clockwise from top left: Steven Coyne, Diane Horton, David Liu and Sheila McIlraith&nbsp;(supplied images)</em></figcaption> </figure> <p>In recognition of the program’s impact on the undergraduate student learning experience,&nbsp;McIlraith and her colleagues&nbsp;–&nbsp;<strong>Diane Horton </strong>and&nbsp;<strong>David Liu</strong>, a professor and associate professor, teaching stream, respectively, in the department of computer science,&nbsp;and <strong>Steven Coyne</strong>, an assistant professor who is jointly appointed to the departments of computer science and philosophy&nbsp;–&nbsp;were recently recognized with the <a href="https://alumni.utoronto.ca/events-and-programs/awards/awex/northrop-frye-awards">2024 Northrop Frye Award (Team)</a>, one of the prestigious U of T Alumni Association Awards of Excellence.</p> <p>Horton, who leads the initiative’s assessment efforts, points to the team’s <a href="https://dl.acm.org/doi/abs/10.1145/3626252.3630834" target="_blank">recently published paper</a> showing that after participating in modules in only one or two courses, students are inspired to learn more about ethics and are benefiting in the workplace. 
&nbsp;</p> <p>“We have evidence that they are better able to identify ethical issues arising in their work, and that the modules help them navigate those issues,” she says.&nbsp;</p> <p>Horton adds that the findings build on <a href="https://dl.acm.org/doi/abs/10.1145/3478431.3499407">earlier assessment work</a> showing that after experiencing modules in only one course, students became more interested in ethics and tech, and more confident in their ability to deal with ethical issues they might encounter. &nbsp;</p> <p>The team says the initiative’s interdisciplinary nature is key to delivering both a curriculum and experience with an authentic voice, giving instructors and students the vocabulary and depth of knowledge to engage on issues such as privacy, well-being and harm. &nbsp;</p> <p>“As a philosopher and ethicist, I love teaching in a computer science department,” says Coyne. “My colleagues teach me about interesting ethical problems that they’ve found in their class material, and I get to reciprocate by finding distinctions and ideas that illuminate those problems. And we learn a lot from each other – intellectually and pedagogically – when we design a module for that class together.” &nbsp;&nbsp;</p> <p>E3I is founded upon three key principles: teach students how – not what – to think; encourage ethics-informed design choices as a design principle; and make discussions safe, not personal. &nbsp;</p> <p>“Engaging with students and making them feel safe, not proselytizing, inviting the students to participate is especially important,” says Liu. &nbsp;</p> <p>The modules support this type of learning environment by using stakeholders with fictional character profiles that include names, pictures and a backstory. 
&nbsp;</p> <p>“Fictional stakeholders help add a layer of distance so students can think through the issues without having to say, ‘This is what I think,’” Horton says.&nbsp;“Stakeholders also increase their awareness of the different kinds of people who might be impacted.” &nbsp;</p> <p>McIlraith adds that having students advocate for an opinion that is not necessarily their own encourages empathy, while Liu notes that many have a “real hunger” to learn about the ethical considerations of their work.&nbsp;</p> <p>“An increasing number of students are thinking, ‘I want to be trained as a computer scientist and I want to use my skills after graduation,’ but also ‘I want to do something that I think will make a positive impact on the world,’” he says. &nbsp;&nbsp;</p> <p>Together, the E3I team works with course instructors to develop educational modules that tightly pair ethical concepts with course-specific technical material. In an applied software design course, for example, students learn about accessible software and disability theory; in a theoretical algorithms course, they learn about algorithmic fairness and distributive justice; and in a game design course, they learn about addiction and consent. &nbsp;</p> <p><strong>Steve Engels</strong>, a computer science professor, teaching stream, says integrating an ethics module about addiction into his fourth-year capstone course on video game design felt like a natural extension of his lecture topic on ludology – in particular, the psychological techniques used to make games compelling – instead of something that felt artificially inserted into the course. &nbsp;&nbsp;</p> <p>“Project-based courses can sometimes compel students to focus primarily on the final product of the course, but this module provided an opportunity to pause and reflect on what they were doing and why,” Engels says. 
“It forced them to confront their role in the important and current issue of gaming addiction, so they would be more aware of the ethical implications of their future work and thus be better equipped to handle them.” &nbsp;</p> <p>By next year, each undergraduate computer science student will encounter E3I modules in at least one or two courses every year throughout their program. The team is also exploring the adoption of the E3I model in other STEM disciplines, from ecology to statistics. Beyond U of T, the team plans to share their expertise with other Canadian universities that are interested in developing a similar program.&nbsp;</p> <p>“This initiative is having a huge impact,” McIlraith says. “You see it in the number of students we’re reaching and in our assessment results. But it’s more than that – we’re instigating a culture change.”</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Fri, 26 Apr 2024 19:22:07 +0000 Christopher.Sorensen 307643 at What Now? AI, Episode 4: AI and Creativity /news/what-now-ai-episode-4-ai-and-creativity <span class="field field--name-title field--type-string field--label-hidden">What Now? AI, Episode 4: AI and Creativity</span> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>mattimar</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2024-04-25T17:22:39-04:00" title="Thursday, April 25, 2024 - 17:22" class="datetime">Thu, 04/25/2024 - 17:22</time> </span> <div class="field field--name-field-youtube field--type-youtube field--label-hidden field__item"><figure class="youtube-container"> <iframe src="https://www.youtube.com/embed/WZdGvaE0uIw?wmode=opaque" width="450" height="315" id="youtube-field-player" class="youtube-field-player" title="Embedded video for What Now? 
AI, Episode 4: AI and Creativity" aria-label="Embedded video for What Now? AI, Episode 4: AI and Creativity: https://www.youtube.com/embed/WZdGvaE0uIw?wmode=opaque" frameborder="0" allowfullscreen></iframe> </figure> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/temerty-faculty-medicine" hreflang="en">Temerty Faculty of Medicine</a></div> <div class="field__item"><a href="/news/tags/what-now-ai" hreflang="en">What Now? AI</a></div> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/alumni" hreflang="en">Alumni</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/computer-science" hreflang="en">Computer Science</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/faculty-information" hreflang="en">Faculty of Information</a></div> <div class="field__item"><a href="/news/tags/startups" hreflang="en">Startups</a></div> <div class="field__item"><a href="/news/tags/u-t-mississauga" hreflang="en">ɫֱ Mississauga</a></div> <div class="field__item"><a href="/news/tags/vector-institute" hreflang="en">Vector Institute</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>The creative industry is poised to be forever changed by artificial 
intelligence.&nbsp;</p> <p>Writing tools like ChatGPT and image generators like Midjourney and DALL-E have exploded into the mainstream. Adobe built its own version of generative AI technology for its creative suites and OpenAI announced its text-to-video model, Sora.&nbsp;&nbsp;</p> <p>What impact will these tools and models have on the creative process? How will they change the role of an artist?&nbsp;&nbsp;</p> <p>In the fourth episode of <em>What Now? AI</em>, hosts <strong>Beth Coleman</strong> and <strong>Rahul Krishnan</strong> dive into these questions with AI researchers&nbsp;<strong>Sanja Fidler</strong> of the ɫֱ and&nbsp;<strong>Nick Frosst</strong>, who co-founded the startup <a href="https://cohere.com" target="_blank">Cohere</a>.</p> <p>Listen to episode four on  <a href="https://podcasts.apple.com/us/podcast/what-now-ai/id1635579922" target="_blank">Apple</a>,<a href="https://open.spotify.com/show/6E0YlC5Sw59q7Al5UAWOP8?si=795f1fa38c2b4812" target="_blank"> Spotify</a>, <a href="https://soundcloud.com/universityoftoronto" target="_blank">SoundCloud</a>, <a href="https://www.iheart.com/podcast/263-what-now-ai-99641114/" target="_blank">iHeartRadio</a> and <a href="https://music.amazon.ca/podcasts/60a0653e-3cd0-410e-b270-2582480b991a/what-now-ai" target="_blank">Amazon</a>. Watch episode four on <a href="https://www.youtube.com/watch?v=WZdGvaE0uIw">YouTube</a>. 
&nbsp;</p> <p>Fidler, the vice president of AI research at NVIDIA and an associate professor of mathematical and computational sciences at ɫֱ Mississauga, says that while AI technology is still in its early stage, it has the potential to provide artists with more adaptability and creative control.&nbsp;&nbsp;&nbsp;</p> <p>“When artists see these methods like text-to-x, text-to-image or text-to-video, I feel that they have pushback because now there is only text that allows you to control the content,” says Fidler, an affiliate faculty member at the Vector Institute, which she co-founded.&nbsp;&nbsp;</p> <p>“I think artists do want to have this iterative creative control. They have some idea in their head, and they have all these tools that allowed them to go from that idea into the final product. We want to do the same thing with AI as well.”&nbsp;</p> <p>Frosst, who sings in the band Good Kid, says he doesn’t use large language models to help him write songs –&nbsp;only to help analyze lyrics and themes.&nbsp;&nbsp;</p> <p>“I’m not really looking to optimize my artistic expression,” says Frosst, who completed his undergraduate degree in computer science and cognitive science at ɫֱ.&nbsp;&nbsp;</p> <p>“I don’t really want to write a new Good Kid song and be less involved. I want to be more involved.”&nbsp;&nbsp;</p> <p>Frosst believes AI will change the way art is created, but not to the point where people aren’t interested in the artists who are making it.&nbsp;&nbsp;</p> <p>“We want to know who made it, and that’s mostly what’s enjoyable about it.”&nbsp;&nbsp;</p> <p><strong>About the hosts:&nbsp;</strong></p> <p><strong>Beth Coleman</strong>&nbsp;is an associate professor at ɫֱ Mississauga’s&nbsp;<a href="https://www.utm.utoronto.ca/iccit/" target="_blank">Institute of Communication, Culture, Information and Technology</a>&nbsp;and the Faculty of Information. 
She is also a&nbsp;research lead on AI policy and praxis&nbsp;at the&nbsp;<a href="https://srinstitute.utoronto.ca/" target="_blank">Schwartz Reisman Institute for Technology and Society</a>. Coleman authored&nbsp;<a href="https://k-verlag.org/books/beth-coleman-reality-was-whatever-happened/" target="_blank"><em>Reality Was Whatever Happened: Octavia Butler AI&nbsp;and Other Possible Worlds</em></a>&nbsp;using art and generative AI.&nbsp;</p> <p><strong>Rahul Krishnan</strong>&nbsp;is an&nbsp;assistant professor in ɫֱ’s department&nbsp;of computer science in the Faculty of Arts &amp; Science&nbsp;and&nbsp;department of laboratory medicine and pathobiology in the Temerty Faculty of Medicine. He is a Canada CIFAR Chair at the Vector Institute, a faculty affiliate at the Schwartz Reisman Institute for Technology and Society and a faculty member at the&nbsp;<a href="https://tcairem.utoronto.ca/" target="_blank">Temerty Centre for AI Research and Education in Medicine&nbsp;(T-CAIREM)</a>.&nbsp;</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Thu, 25 Apr 2024 21:22:39 +0000 mattimar 307603 at What Now? AI, Episode 3: Innovation for Good /news/what-now-ai-episode-3-innovation-good <span class="field field--name-title field--type-string field--label-hidden">What Now? 
AI, Episode 3: Innovation for Good</span> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2024-04-18T16:47:11-04:00" title="Thursday, April 18, 2024 - 16:47" class="datetime">Thu, 04/18/2024 - 16:47</time> </span> <div class="field field--name-field-youtube field--type-youtube field--label-hidden field__item"><figure class="youtube-container"> <iframe src="https://www.youtube.com/embed/Pq8hrKLBIjM?wmode=opaque" width="450" height="315" id="youtube-field-player" class="youtube-field-player" title="Embedded video for What Now? AI, Episode 3: Innovation for Good" aria-label="Embedded video for What Now? AI, Episode 3: Innovation for Good: https://www.youtube.com/embed/Pq8hrKLBIjM?wmode=opaque" frameborder="0" allowfullscreen></iframe> </figure> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/temerty-faculty-medicine" hreflang="en">Temerty Faculty of Medicine</a></div> <div class="field__item"><a href="/news/tags/unity-health" hreflang="en">Unity Health</a></div> <div class="field__item"><a href="/news/tags/what-now-ai" hreflang="en">What Now? 
AI</a></div> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/computer-science" hreflang="en">Computer Science</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/faculty-information" hreflang="en">Faculty of Information</a></div> <div class="field__item"><a href="/news/tags/leslie-dan-faculty-pharmacy" hreflang="en">Leslie Dan Faculty of Pharmacy</a></div> <div class="field__item"><a href="/news/tags/research-innovation" hreflang="en">Research &amp; Innovation</a></div> <div class="field__item"><a href="/news/tags/startups" hreflang="en">Startups</a></div> <div class="field__item"><a href="/news/tags/u-t-mississauga" hreflang="en">ɫֱ Mississauga</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>While the news headlines about&nbsp;AI often focus on dangers and risks, the potential for life-saving innovation in fields such as health care is huge.</p> <p>In the third episode of&nbsp;What Now? 
AI, co-hosts&nbsp;<strong>Beth Coleman</strong>&nbsp;and&nbsp;<strong>Rahul Krishnan</strong>&nbsp;of the ɫֱ are joined by experts&nbsp;<strong>Christine Allen</strong>&nbsp;and&nbsp;<strong>Andrew Pinto</strong>&nbsp;to discuss AI’s potential to advance drug development and dramatically improve primary care.&nbsp;</p> <p>Listen to episode three on&nbsp;<a href="https://podcasts.apple.com/us/podcast/what-now-ai/id1635579922" target="_blank">Apple</a>,&nbsp;<a href="https://open.spotify.com/show/6E0YlC5Sw59q7Al5UAWOP8?si=795f1fa38c2b4812" target="_blank">Spotify</a>,&nbsp;<a href="https://soundcloud.com/universityoftoronto" target="_blank">SoundCloud</a>,&nbsp;<a href="https://www.iheart.com/podcast/263-what-now-ai-99641114/" target="_blank">iHeartRadio</a>&nbsp;and&nbsp;<a href="https://music.amazon.ca/podcasts/60a0653e-3cd0-410e-b270-2582480b991a/what-now-ai" target="_blank">Amazon</a>. Watch <a href="https://youtu.be/Pq8hrKLBIjM?si=ds5TrkR8DMc5VZq_" target="_blank">episode three on YouTube</a>.&nbsp;</p> <p>Allen, a professor in ɫֱ’s Leslie Dan Faculty of Pharmacy and an expert in drug formulation and development,&nbsp;co-founded <a href="https://intrepidlabs.tech/" target="_blank">Intrepid Labs Inc.</a>&nbsp;with&nbsp;<strong>Alán Aspuru-Guzik</strong>, a professor in the departments of chemistry and computer science in ɫֱ’s Faculty of Arts &amp; Science.&nbsp;</p> <p>One of the first startups to emerge from the&nbsp;<a href="https://acceleration.utoronto.ca/">Acceleration Consortium</a>&nbsp;at ɫֱ, Intrepid Labs&nbsp;is accelerating pharmaceutical drug development through the integration of AI, automation and advanced computing.</p> <p>“It’s this concept of using AI to explore the unexplored,” Allen says.&nbsp;</p> <p>“What if the formulation that could really transform the properties and performance of your drug is one of those unexplored formulations? 
That will then take that drug through clinical development smoothly and get it to patients faster, which is really the goal.”</p> <p>Meanwhile, Pinto, a family physician at St. Michael’s Hospital and director of the&nbsp;<a href="https://upstreamlab.org/" target="_blank">Upstream Lab</a>, Unity Health Toronto, focuses his research on addressing the social determinants of health by running clinical trials of interventions and using AI tools for surveillance of respiratory illness.&nbsp;</p> <p>“When we started to do this work around AI in primary care, we wanted to be directed by primary care providers and patients,” says Pinto, an&nbsp;associate professor in the department of family and community medicine in ɫֱ’s Temerty Faculty of Medicine and at the Dalla Lana School of Public Health.&nbsp;</p> <p>Reducing health-care inequities is top of mind for Pinto. His lab focuses on implementing AI to prioritize community engagement and bridge socioeconomic gaps to mitigate biases.&nbsp;</p> <p>“We’re using these tools to look at all of the patients in a population and then focus our attention on the people who need it most.”&nbsp;</p> <h4>About the hosts:&nbsp;</h4> <p><strong>Beth Coleman</strong> is an associate professor at ɫֱ Mississauga’s&nbsp;<a href="https://www.utm.utoronto.ca/iccit/" target="_blank">Institute of Communication, Culture, Information and Technology</a>&nbsp;and the Faculty of Information. She is also a&nbsp;research lead on AI policy and praxis&nbsp;at the<a href="http://srinstitute.utoronto.ca/" target="_blank">&nbsp;Schwartz Reisman Institute for Technology and Society</a>. 
Coleman authored&nbsp;<a href="https://k-verlag.org/books/beth-coleman-reality-was-whatever-happened/" target="_blank"><em>Reality Was Whatever Happened: Octavia Butler AI&nbsp;and Other Possible Worlds</em></a>&nbsp;using art and generative AI.&nbsp;</p> <p><strong>Rahul Krishnan</strong>&nbsp;is an&nbsp;assistant professor in ɫֱ’s department&nbsp;of computer science in the Faculty of Arts &amp; Science&nbsp;and&nbsp;department of laboratory medicine and pathobiology in the Temerty Faculty of Medicine. He is a Canada CIFAR Chair at the Vector Institute, a faculty affiliate at the Schwartz Reisman Institute for Technology and Society and a faculty member at the&nbsp;<a href="https://tcairem.utoronto.ca/" target="_blank">Temerty Centre for AI Research and Education in Medicine</a>&nbsp;(T-CAIREM).&nbsp;</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Thu, 18 Apr 2024 20:47:11 +0000 Christopher.Sorensen 307507 at What Now? AI, Episode 2: Safe and Accountable  /news/what-now-ai-episode-2-safe-and-accountable <span class="field field--name-title field--type-string field--label-hidden">What Now? AI,&nbsp;Episode 2: Safe and Accountable&nbsp;</span> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2024-04-11T08:57:17-04:00" title="Thursday, April 11, 2024 - 08:57" class="datetime">Thu, 04/11/2024 - 08:57</time> </span> <div class="field field--name-field-youtube field--type-youtube field--label-hidden field__item"><figure class="youtube-container"> <iframe src="https://www.youtube.com/embed/QDVlINfID_M?wmode=opaque" width="450" height="315" id="youtube-field-player--2" class="youtube-field-player" title="Embedded video for What Now? 
AI,&nbsp;Episode 2: Safe and Accountable&nbsp;" aria-label="Embedded video for What Now? AI,&nbsp;Episode 2: Safe and Accountable&nbsp;: https://www.youtube.com/embed/QDVlINfID_M?wmode=opaque" frameborder="0" allowfullscreen></iframe> </figure> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/mariam-matti" hreflang="en">Mariam Matti</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/temerty-faculty-medicine" hreflang="en">Temerty Faculty of Medicine</a></div> <div class="field__item"><a href="/news/tags/what-now-ai" hreflang="en">What Now? 
AI</a></div> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/computer-science" hreflang="en">Computer Science</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/faculty-information" hreflang="en">Faculty of Information</a></div> <div class="field__item"><a href="/news/tags/faculty-law" hreflang="en">Faculty of Law</a></div> <div class="field__item"><a href="/news/tags/research-innovation" hreflang="en">Research &amp; Innovation</a></div> <div class="field__item"><a href="/news/tags/u-t-mississauga" hreflang="en">ɫֱ Mississauga</a></div> <div class="field__item"><a href="/news/tags/vector-institute" hreflang="en">Vector Institute</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>As AI becomes more integrated into our lives, how do we make sure it aligns with our values while minimizing risks?</p> <p>In the second episode of&nbsp;What Now? 
AI, hosts&nbsp;<strong>Beth Coleman</strong>&nbsp;and&nbsp;<strong>Rahul Krishnan</strong>&nbsp;are joined by ɫֱ experts&nbsp;<strong>Gillian Hadfield</strong>&nbsp;and&nbsp;<strong>Roger Grosse</strong>&nbsp;as they tackle critical questions surrounding AI safety, regulation and alignment.&nbsp;</p> <p>Listen to episode two on <a href="https://podcasts.apple.com/us/podcast/what-now-ai/id1635579922" target="_blank">Apple</a>, <a href="https://open.spotify.com/show/6E0YlC5Sw59q7Al5UAWOP8?si=02e00e2d81bf4a44&amp;nd=1&amp;dlsi=6bdd4d9b9cc147b5" target="_blank">Spotify</a>, <a href="https://soundcloud.com/universityoftoronto" target="_blank">SoundCloud</a>, <a href="https://www.iheart.com/podcast/263-what-now-ai-99641114/">iHeartRadio</a> and <a href="https://music.amazon.ca/podcasts/60a0653e-3cd0-410e-b270-2582480b991a/what-now-ai" target="_blank">Amazon</a>. Watch <a href="https://youtu.be/QDVlINfID_M?si=Kqz4MzFyh9asImYI" target="_blank">episode two on YouTube</a>.&nbsp;</p> <p>Grosse, an associate professor of computer science in the Faculty of Arts &amp; Science and a founding member of the <a href="http://vectorinstitute.ai">Vector Institute</a>, joined the technical staff on the alignment team at Anthropic, an AI safety and research company based in San Francisco, during a sabbatical last year.</p> <p>He calls working on AI research and systems while investigating safety a “difficult needle to thread.”&nbsp;</p> <p>“As you move up the ladder of different AI capabilities, new requirements start kicking in – in terms of keeping the models secure from bad actors and being able to make sure they won’t intentionally carry out harmful plans,” says Grosse, a faculty affiliate at the&nbsp;Schwartz Reisman Institute for Technology and Society.</p> <p>Hadfield,&nbsp;a professor of law and strategic management in the Faculty of Law and the&nbsp;inaugural Schwartz Reisman Chair in Technology and Society, has&nbsp;<a 
href="https://carnegieendowment.org/2023/07/12/it-s-time-to-create-national-registry-for-large-ai-models-pub-90180" target="_blank">proposed a national registry&nbsp;for large AI models</a>. She thinks companies should disclose to governments what they’re building, the data being used and the AI model’s capabilities.&nbsp;</p> <p>“This is a unique moment in human history,” says Hadfield, who holds a CIFAR AI Chair at the Vector Institute for AI and served as a senior policy adviser to OpenAI from 2018 to 2023. “I think this is the first time that you have such a powerful technology that is being developed almost exclusively within private technology companies, so the public and the academic sector don’t have full visibility into how the technology is working.”&nbsp;</p> <h4>About the hosts:&nbsp;</h4> <p>Beth Coleman is an associate professor at ɫֱ Mississauga’s&nbsp;<a href="https://www.utm.utoronto.ca/iccit/">Institute of Communication, Culture, Information and Technology</a> and the Faculty of Information. She is also a&nbsp;research lead on AI policy and praxis&nbsp;at the <a href="http://srinstitute.utoronto.ca">Schwartz Reisman Institute for Technology and Society</a>. Coleman authored&nbsp;<a href="https://k-verlag.org/books/beth-coleman-reality-was-whatever-happened/"><em>Reality Was Whatever Happened: Octavia Butler AI&nbsp;and Other Possible Worlds</em></a>&nbsp;using art and generative AI.&nbsp;</p> <p>Rahul Krishnan&nbsp;is an&nbsp;assistant professor in ɫֱ’s department&nbsp;of computer science in the Faculty of Arts &amp; Science&nbsp;and&nbsp;department of laboratory medicine and pathobiology in the Temerty Faculty of Medicine. 
He is a Canada CIFAR Chair at the Vector Institute, a faculty affiliate at the Schwartz Reisman Institute for Technology and Society and a faculty member at the <a href="https://tcairem.utoronto.ca">Temerty Centre for AI Research and Education in Medicine</a> (T-CAIREM).&nbsp;</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Thu, 11 Apr 2024 12:57:17 +0000 Christopher.Sorensen 307409 at How will AI change our world? ɫֱ podcast explores technology’s impact on society /news/how-will-ai-change-our-world-u-t-podcast-explores-technology-s-impact-society <span class="field field--name-title field--type-string field--label-hidden">How will AI change our world? ɫֱ podcast explores technology’s impact on society</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/2024-04/What-Now-AI-horizontal-story-crop.jpg?h=81d682ee&amp;itok=8k8jKDfW 370w, /sites/default/files/styles/news_banner_740/public/2024-04/What-Now-AI-horizontal-story-crop.jpg?h=81d682ee&amp;itok=mznG8gJH 740w, /sites/default/files/styles/news_banner_1110/public/2024-04/What-Now-AI-horizontal-story-crop.jpg?h=81d682ee&amp;itok=vvgvwd6Z 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/2024-04/What-Now-AI-horizontal-story-crop.jpg?h=81d682ee&amp;itok=8k8jKDfW" alt="&quot;&quot;"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2024-04-04T11:17:45-04:00" title="Thursday, April 4, 2024 - 11:17" 
class="datetime">Thu, 04/04/2024 - 11:17</time> </span> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/mariam-matti" hreflang="en">Mariam Matti</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/temerty-faculty-medicine" hreflang="en">Temerty Faculty of Medicine</a></div> <div class="field__item"><a href="/news/tags/unity-health" hreflang="en">Unity Health</a></div> <div class="field__item"><a href="/news/tags/what-now-ai" hreflang="en">What Now? AI</a></div> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/alumni" hreflang="en">Alumni</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/cifar" hreflang="en">CIFAR</a></div> <div class="field__item"><a href="/news/tags/computer-science" hreflang="en">Computer Science</a></div> <div class="field__item"><a href="/news/tags/dalla-lana-school-public-health" hreflang="en">Dalla Lana School of Public Health</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/faculty-information" hreflang="en">Faculty of Information</a></div> <div class="field__item"><a href="/news/tags/faculty-law" hreflang="en">Faculty of Law</a></div> <div 
class="field__item"><a href="/news/tags/geoffrey-hinton" hreflang="en">Geoffrey Hinton</a></div> <div class="field__item"><a href="/news/tags/leslie-dan-faculty-pharmacy" hreflang="en">Leslie Dan Faculty of Pharmacy</a></div> <div class="field__item"><a href="/news/tags/startups" hreflang="en">Startups</a></div> <div class="field__item"><a href="/news/tags/u-t-mississauga" hreflang="en">ɫֱ Mississauga</a></div> <div class="field__item"><a href="/news/tags/vector-institute" hreflang="en">Vector Institute</a></div> </div> <div class="field field--name-field-subheadline field--type-string-long field--label-above"> <div class="field__label">Subheadline</div> <div class="field__item">In What Now? AI, hosts&nbsp;Beth Coleman&nbsp;and&nbsp;Rahul Krishnan&nbsp;explore – and demystify – artificial intelligence and its impact on society with the help of leading experts </div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>Rapid advances in artificial intelligence are posing profound questions about the future – and about us.&nbsp;</p> <p>Can we ensure safety and alignment within AI systems? How might AI forever transform fields like health care? What ripple effects could AI have on jobs and livelihoods, including in creative industries?&nbsp;</p> <p>ɫֱ researchers&nbsp;<strong>Beth Coleman</strong>&nbsp;and&nbsp;<strong>Rahul Krishnan</strong>&nbsp; explore – and demystify – these and other topics by tapping into the knowledge of leading AI experts in&nbsp;<a href="/podcasts"><em>What Now? 
AI</em>, a new ɫֱ podcast</a> that launches this week.</p> <p>It can be found on <a href="https://podcasts.apple.com/us/podcast/what-now-ai/id1635579922">Apple</a>, <a href="https://open.spotify.com/show/6E0YlC5Sw59q7Al5UAWOP8?si=27816b6818604d42" target="_blank">Spotify</a>, <a href="https://soundcloud.com/universityoftoronto" target="_blank">Soundcloud,</a> <span style="font-size:inherit"><a href="https://www.iheart.com/podcast/263-what-now-99641114/" target="_blank">iHeartRadio</a> </span>and <a href="https://music.amazon.ca/podcasts/60a0653e-3cd0-410e-b270-2582480b991a/what-now-ai" target="_blank">Amazon</a>.</p> <p>An associate professor at ɫֱ Mississauga’s&nbsp;Institute of Communication, Culture, Information and Technology and the Faculty of Information,&nbsp;Coleman says she hopes the episodes help audiences make sense of new AI tools and systems by cutting through&nbsp;“all the noisiness and controversy that has taken over the headlines.”</p> <p>“It can be complex and technical, but it’s also social,” says Coleman, a&nbsp;research lead on AI policy and praxis&nbsp;at the Schwartz Reisman Institute for Technology &amp; Society.&nbsp;</p> <p>“What we do with AI makes a difference and more people need to be able to share that knowledge.”&nbsp;</p> <p>Coleman’s own research centres around technology and society with a focus on data and cities, AI and policy, and generative arts. Inspired by&nbsp;Octavia Butler’s 1980&nbsp;<em>Xenogenesis</em>&nbsp;trilogy, Coleman authored&nbsp;<em><a href="https://k-verlag.org/books/beth-coleman-reality-was-whatever-happened/" target="_blank">Reality Was Whatever Happened: Octavia Butler AI </a>and Other Possible Worlds</em>&nbsp;using art and generative AI.&nbsp;</p> <p>Krishnan, meanwhile, is an&nbsp;assistant professor in ɫֱ’s department&nbsp;of computer science in the Faculty of Arts &amp; Science&nbsp;and&nbsp;department of laboratory medicine and pathobiology in the Temerty Faculty of Medicine. 
A&nbsp;Canada CIFAR AI Chair at the Vector Institute and Canada Research Chair in computational medicine, Krishnan and his team focus on teaching neural networks about causality, building deep learning models that analyze cause and effect from data.&nbsp;</p> <p>“I’m excited to co-host this podcast to explore and demystify for a broader audience AI through the lens of an accomplished and diverse set of experts,” says Krishnan, who is also a faculty affiliate at the Schwartz Reisman Institute for Technology and Society and a faculty member at the Temerty Centre for AI Research and Education in Medicine (T-CAIREM).&nbsp;</p> <div> <div class="field field--name-field-media-oembed-video field--type-string field--label-hidden field__item"><iframe src="/media/oembed?url=https%3A//youtube.com/shorts/P_DSFl8ejoE%3Ffeature%3Dshared&amp;max_width=0&amp;max_height=0&amp;hash=Q4OkxXUZFA7yQOzyVgHN6eL4rAl9p4pLJaln5auf1c4" width="113" height="200" class="media-oembed-content" loading="eager" title="What Now? AI podcast http://uoft.me/wnai1"></iframe> </div> </div> <p>&nbsp;</p> <p><em>What Now? 
AI&nbsp;</em>picks up the conversation started last year by&nbsp;Geoffrey Hinton, the cognitive psychologist and&nbsp;<a href="https://www.provost.utoronto.ca/awards-funding/university-professors/">University Professor</a>&nbsp;emeritus of computer science who is known as the “Godfather of AI.” After a lifetime spent developing a type of AI known as deep learning, Hinton stepped back from his role at Google&nbsp;<a href="https://www.youtube.com/watch?v=-9cW4Gcn5WY">to warn about the existential threats of unchecked AI development</a>.</p> <p>Since then, there have been ongoing advancements in AI research, technological applications and policy development.</p> <p>Coleman and Krishnan will tackle these and other topics with guests:&nbsp;</p> <ul> <li><strong>Gillian Hadfield</strong>,&nbsp;professor of law and strategic management at the Faculty of Law and the Schwartz Reisman Chair in Technology and Society.&nbsp;&nbsp;</li> <li><strong>Roger Grosse</strong>, associate professor of computer science in the Faculty of Arts &amp; Science and founding member of the Vector Institute.&nbsp;</li> <li><strong>Christine Allen</strong>, professor at the Leslie Dan Faculty of Pharmacy and co-founder and CEO of Intrepid Labs Inc.</li> <li><strong>Andrew Pinto</strong>, a family physician at St. Michael’s Hospital, Unity Health Toronto, and associate professor in the Temerty Faculty of Medicine and the Dalla Lana School of Public Health.&nbsp;</li> <li><strong>Nick Frosst</strong>, co-founder of Cohere, singer in the band Good Kid and a ɫֱ computer science and cognitive science alumnus.&nbsp;</li> </ul> <p>“The&nbsp;<em>What Now? AI</em>&nbsp;podcast highlights the incredible researchers at the ɫֱ who are exploring the profound implications of this transformative technology,” says&nbsp;<strong>Leah Cowen</strong>, ɫֱ’s vice-president, research and innovation, and strategic initiatives. 
“These discussions tackle critical questions surrounding AI safety and alignment and its myriad implications across various domains.&nbsp;</p> <p>“The university is committed to fostering informed discussions that will shape our collective understanding of AI’s role in our society and in our future.”&nbsp;</p> <p>Coleman says she hopes listeners come away from the podcast feeling more grounded.</p> <p>Krishnan, for his part, wants the audience to understand that “there is no one group that has ownership over the technology” and that “the free exchange of ideas and open-source tools encourage people from all disciplines to come see how accessible AI can be, what AI can do for them and how they can advance the discourse in the field.”&nbsp;</p> <p>&nbsp;</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Thu, 04 Apr 2024 15:17:45 +0000 Christopher.Sorensen 307232 at Four AI trends to watch in 2024 /news/four-ai-trends-watch-2024 <span class="field field--name-title field--type-string field--label-hidden">Four AI trends to watch in 2024</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/2024-01/GettyImages-1933427591-crop.jpg?h=81d682ee&amp;itok=bWQQfFcH 370w, /sites/default/files/styles/news_banner_740/public/2024-01/GettyImages-1933427591-crop.jpg?h=81d682ee&amp;itok=xSzVRTv8 740w, /sites/default/files/styles/news_banner_1110/public/2024-01/GettyImages-1933427591-crop.jpg?h=81d682ee&amp;itok=5GUAZclT 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/2024-01/GettyImages-1933427591-crop.jpg?h=81d682ee&amp;itok=bWQQfFcH" alt="A
person dressed like a monk stands in front of a sign that reads The Future is AI on a crowded street in Davos"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2024-01-19T12:02:40-05:00" title="Friday, January 19, 2024 - 12:02" class="datetime">Fri, 01/19/2024 - 12:02</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item"><p><em>AI was a hot topic at this week’s annual meeting of the World Economic Forum in Davos, Switzerland (photo by Andy Barton/SOPA Images/LightRocket via Getty Images)</em></p> </div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/jovana-jankovic" hreflang="en">Jovana Jankovic</a></div> </div> <div class="field field--name-field-secondary-author-reporter field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/daniel-browne" hreflang="en">Daniel Browne</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/munk-school-global-affairs-public-policy-0" hreflang="en">Munk School of Global Affairs &amp; Public 
Policy</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/faculty-applied-science-engineering" hreflang="en">Faculty of Applied Science &amp; Engineering</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/faculty-information" hreflang="en">Faculty of Information</a></div> <div class="field__item"><a href="/news/tags/faculty-law" hreflang="en">Faculty of Law</a></div> <div class="field__item"><a href="/news/tags/global" hreflang="en">Global</a></div> <div class="field__item"><a href="/news/tags/graduate-students" hreflang="en">Graduate Students</a></div> <div class="field__item"><a href="/news/tags/research-innovation" hreflang="en">Research &amp; Innovation</a></div> <div class="field__item"><a href="/news/tags/rotman-school-management" hreflang="en">Rotman School of Management</a></div> <div class="field__item"><a href="/news/tags/u-t-mississauga" hreflang="en">ɫֱ Mississauga</a></div> </div> <div class="field field--name-field-subheadline field--type-string-long field--label-above"> <div class="field__label">Subheadline</div> <div class="field__item">“The advancement of AI is moving quickly, and the year ahead holds a lot of promise but also a lot of unanswered questions”</div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>As artificial intelligence continues to develop rapidly, the world is watching with excitement and apprehension – as evidenced by the <a href="https://www.washingtonpost.com/technology/2024/01/18/davos-ai-world-economic-forum/">AI buzz in Davos this week at the World Economic Forum’s annual meeting</a>.</p> <p>ɫֱ researchers are using AI to <a 
href="/news/u-t-receives-200-million-grant-support-acceleration-consortium-s-self-driving-labs-research">advance scientific discovery</a> and <a href="https://tcairem.utoronto.ca/">improve health-care delivery</a>, <a href="/news/who-owns-your-face-scholars-u-t-s-schwartz-reisman-institute-explore-tech-s-thorniest-questions">exploring how to mitigate potential harms</a> and finding new ways to ensure the technology <a href="/news/achieving-alignment-how-u-t-researchers-are-working-keep-ai-track">aligns with human values</a>.&nbsp;</p> <p>“The advancement of AI is moving quickly, and the year ahead holds a lot of promise but also a lot of unanswered questions,” says <strong>Monique Crichlow</strong>, executive director of the Schwartz Reisman Institute for Technology and Society (SRI). “Researchers at SRI and across the university are tackling how to build and regulate AI systems for safer outcomes, as well as the social impacts of these powerful technologies.”</p> <p>“From health-care delivery to accessible financial and legal services, AI has the potential to benefit society in many ways and tackle inequality around the world. But we have real work to do in 2024 to ensure that happens safely.”</p> <p>As AI continues to reshape industries and challenge many aspects of society, here are four emerging themes ɫֱ researchers are keeping their eyes on in 2024:</p> <hr> <h3>1. AI regulation is on its way</h3> <figure role="group" class="caption caption-drupal-media align-center"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/styles/scale_image_750_width_/public/2024-01/GettyImages-1754158756-crop.jpg?itok=IvlN2HdV" width="750" height="500" alt="&quot;&quot;" class="image-style-scale-image-750-width-"> </div> </div> <figcaption><em>U.S. Vice President Kamala Harris applauds as U.S. 
President Joe Biden signs an executive order on the safe, secure, and trustworthy development and use of artificial intelligence on Oct. 30, 2023 (photo by Brendan Smialowski/AFP/Getty Images)&nbsp;</em></figcaption> </figure> <p>As a technology with a wide range of applications, AI has the potential to impact all aspects of society – and regulators around the world are scrambling to catch up.</p> <p>Set to pass later this year, the <a href="https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act"><em>Artificial Intelligence and Data Act </em></a>(AIDA) is the Canadian government’s first attempt to comprehensively regulate AI. Similar attempts by <a href="https://srinstitute.utoronto.ca/news/global-ai-safety-and-governance">other governments</a> include the European Union’s <a href="https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence"><em>AI Act</em> </a>and the <a href="https://www.congress.gov/bill/117th-congress/house-bill/6580/text"><em>Algorithmic Accountability Act</em></a> in the United States.</p> <p>But <a href="https://srinstitute.utoronto.ca/news/ai-regulation-in-canada-is-moving-forward-heres-what-needs-to-come-next">there is still much to be done</a>.</p> <p>In the coming year, legislators and policymakers in Canada will tackle many questions, including what counts as fair use when it comes to training data and what privacy means in the 21st century. Is it illegal for companies to train AI systems on copyrighted data, as <a href="https://www.cbc.ca/news/business/new-york-times-openai-lawsuit-copyright-1.7069701">a recent lawsuit</a> from the <em>New York Times</em> alleges? Who owns the rights to AI-generated artworks? 
Will Canada’s new privacy bill sufficiently <a href="https://srinstitute.utoronto.ca/news/to-guarantee-our-rights-canadas-privacy-legislation-must-protect-our-biometric-data">protect citizens’ biometric data</a>?</p> <p>On top of this, AI’s entry into other sectors and industries will increasingly affect and transform how we regulate other products and services. As&nbsp;<strong>Gillian Hadfield</strong>, a professor in the Faculty of Law and the Schwartz Reisman Chair in Technology and Society, policy researcher <strong>Jamie Sandhu</strong>&nbsp;and Faculty of Law doctoral candidate <strong>Noam Kolt</strong> explore in <a href="https://srinstitute.utoronto.ca/news/cifar-ai-insights-policy-regulatory-transformation">a recent policy brief for CIFAR</a>&nbsp;(formerly the Canadian Institute for Advanced Research),&nbsp;a focus on regulating AI through its harms and risks alone “obscures the bigger picture” of how these systems will transform other industries and society as a whole. For example: are current car safety regulations adequate to account for self-driving vehicles powered by AI?</p> <h3>2. 
The use of generative AI will continue to stir up controversy</h3> <figure role="group" class="caption caption-drupal-media align-center"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/styles/scale_image_750_width_/public/2024-01/GettyImages-1889111776-crop.jpg?itok=_v5Nv_QX" width="750" height="500" alt="&quot;&quot;" class="image-style-scale-image-750-width-"> </div> </div> <figcaption><em>Microsoft Bing Image Creator is displayed on a smartphone (photo by Jonathan Raa/NurPhoto/Getty Images)</em></figcaption> </figure> <p>From AI-generated text and pictures to videos and music, use of generative AI has exploded over the past year – and so have questions surrounding issues such as academic integrity, misinformation and the displacement of creative workers.</p> <p>In the classroom, teachers are seeking to understand how <a href="https://magazine.utoronto.ca/campus/education-is-evolving-in-the-age-of-ai/">education is evolving in the age of machine learning</a>. Instructors will need to find new ways to embrace these tools – or perhaps opt to reject them altogether – and students will continue to discover new ways to learn alongside these systems.</p> <p>At the same time, AI systems <a href="https://journal.everypixel.com/ai-image-statistics">created more than 15 billion images last year</a>&nbsp;by some counts – more than the entire 150-year history of photography. Online content will increasingly lack human authorship, and some researchers have proposed that by 2026 <a href="https://thelivinglib.org/experts-90-of-online-content-will-be-ai-generated-by-2026/">as much as 90 per cent of internet text could be generated by AI</a>. Risks around disinformation will increase, and new methods to label content as trustworthy will be essential.</p> <p>Many workers – including writers, translators, illustrators and designers – are worried about job losses. 
But a tidal wave of machine-generated text could also have negative impacts on AI development. In a recent study, <strong>Nicolas Papernot</strong>, an assistant professor in the Edward S. Rogers Sr. department of electrical and computer engineering in the Faculty of Applied Science &amp; Engineering and an SRI faculty affiliate,&nbsp;and his co-authors found that <a href="/news/training-ai-machine-generated-text-could-lead-model-collapse-researchers-warn">training AI on machine-generated text led to the system becoming less reliable</a> and subject to “model collapse.”</p> <h3>3. Public perception and trust of AI are shifting</h3> <figure role="group" class="caption caption-drupal-media align-center"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/styles/scale_image_750_width_/public/2024-01/GettyImages-1933427856-crop.jpg?itok=WipX3hEz" width="750" height="500" alt="&quot;&quot;" class="image-style-scale-image-750-width-"> </div> </div> <figcaption><em>A person walks past a temporary AI stall in Davos, Switzerland (photo by Andy Barton/SOPA Images/LightRocket/Getty Images)</em></figcaption> </figure> <p>Can we trust AI? 
Is our data secure?</p> <p>Emerging research on public trust of AI is shedding light on changing preferences, desires and viewpoints.&nbsp;<strong>Peter Loewen&nbsp;</strong>–&nbsp;the director of the <a href="https://munkschool.utoronto.ca/">Munk School of Global Affairs &amp; Public Policy</a>, SRI’s associate director and the director of the Munk School’s&nbsp;<a href="https://munkschool.utoronto.ca/pearl">Policy, Elections &amp; Representation Lab</a> (PEARL) – is developing an index measuring public perceptions of and attitudes towards AI technologies.</p> <p>Loewen’s team conducted a representative survey of more than 23,000 people across 21 countries, examining attitudes towards regulation, AI development, perceived personal and societal economic impacts, specific emerging technologies such as ChatGPT and the use of AI by government. They plan to release their results soon.</p> <p>Meanwhile, 2024 is being called <a href="https://www.forbes.com/sites/siladityaray/2024/01/03/2024-is-the-biggest-election-year-in-history-here-are-the-countries-going-to-the-polls-this-year/?sh=6c930f8265f9">“the biggest election year in history,”</a> with more than 50 countries headed to the polls, and <a href="https://foreignpolicy.com/2024/01/03/2024-elections-ai-tech-social-media-disinformation/">experts expect interference and misinformation to hit an all-time high</a> thanks to AI. 
How will citizens know which information, candidates, and policies to trust?&nbsp;</p> <p>In response, some researchers are investigating the foundations of trust itself.&nbsp;<strong>Beth Coleman</strong>, an associate professor at ɫֱ Mississauga’s Institute of Communication, Culture, Information and Technology and the Faculty of Information who is an SRI research lead, is leading <a href="https://srinstitute.utoronto.ca/news/call-for-applicants-trust-working-group">an interdisciplinary working group</a> on the role of trust in interactions between humans and AI systems, examining how trust is conceptualized, earned and maintained in our interactions with the pivotal technology of our time.</p> <h3>4. AI will increasingly transform labour, markets and industries&nbsp;</h3> <figure role="group" class="caption caption-drupal-media align-center"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/styles/scale_image_750_width_/public/2024-01/GettyImages-1546723736-crop.jpg?itok=oLMOosKv" width="750" height="500" alt="&quot;&quot;" class="image-style-scale-image-750-width-"> </div> </div> <figcaption><em>A protester in London holds a placard during a rally in Leicester Square (photo by Vuk Valcic/SOPA Images/LightRocket via Getty Images)</em></figcaption> </figure> <p><strong>Kristina McElheran</strong>, an assistant professor in the Rotman School of Management and an SRI researcher,<strong>&nbsp;</strong>and her collaborators may have recently found <a href="https://www.nbcnews.com/data-graphics/wide-gap-ais-hype-use-business-rcna127210">a gap between AI buzz in the workplace and businesses who are actually using it</a>&nbsp;– but&nbsp;there remains a real possibility that labour, markets and industries will undergo massive transformation.<br> <br> ɫֱ researchers who have published books on how AI will transform industry include: Rotman faculty members <strong>Ajay 
Agrawal</strong>, <strong>Joshua Gans</strong>&nbsp;and <strong>Avi Goldfarb</strong>, whose <a href="https://www.predictionmachines.ai/power-prediction"><em>Power and Prediction: The Disruptive Economics of Artificial Intelligence</em></a> argues that “old ways of doing things will be upended” as AI predictions improve; and the Faculty of Law’s <strong>Benjamin Alarie</strong> and <strong>Abdi Aidid</strong>, who propose in <a href="https://utorontopress.com/9781487529420/the-legal-singularity/"><em>The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better</em></a> that AI will improve legal services by increasing ease of access and fairness for individuals.</p> <p>In 2024, institutions –&nbsp;public and private – will be creating more guidelines and rules around how AI systems can or cannot be used in their operations. Disruptors will be challenging the hierarchy of the current marketplace.&nbsp;</p> <p>The coming year promises to be transformative for AI as it continues to find new applications across society. 
Experts and citizens must stay alert to the changes AI will bring and continue to advocate that ethical and responsible practices guide the development of this powerful technology.</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Fri, 19 Jan 2024 17:02:40 +0000 Christopher.Sorensen 305503 at Achieving alignment: How ɫֱ researchers are working to keep AI on track /news/achieving-alignment-how-u-t-researchers-are-working-keep-ai-track <span class="field field--name-title field--type-string field--label-hidden">Achieving alignment: How ɫֱ researchers are working to keep AI on track</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/2023-12/michael_profile-crop.jpg?h=afdc3185&amp;itok=SzPKXBsl 370w, /sites/default/files/styles/news_banner_740/public/2023-12/michael_profile-crop.jpg?h=afdc3185&amp;itok=oKFXv9hP 740w, /sites/default/files/styles/news_banner_1110/public/2023-12/michael_profile-crop.jpg?h=afdc3185&amp;itok=cMGjqwvw 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/2023-12/michael_profile-crop.jpg?h=afdc3185&amp;itok=SzPKXBsl" alt="&quot;&quot;"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>bresgead</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2023-12-08T16:18:24-05:00" title="Friday, December 8, 2023 - 16:18" class="datetime">Fri, 12/08/2023 - 16:18</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div 
class="field__label">Cutline</div> <div class="field__item"><p><em>Michael Zhang, a PhD student in computer science, says there are myriad reasons why AI models may not respect the intentions of their human creators (supplied image)</em></p> </div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/adina-bresge" hreflang="en">Adina Bresge</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/institutional-strategic-initiatives" hreflang="en">Institutional Strategic Initiatives</a></div> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/computer-science" hreflang="en">Computer Science</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/graduate-students" hreflang="en">Graduate Students</a></div> </div> <div class="field field--name-field-subheadline field--type-string-long field--label-above"> <div class="field__label">Subheadline</div> <div class="field__item">Michael Zhang focuses on AI safety as a graduate researcher at ɫֱ's Schwartz Reisman Institute for Technology and Society</div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary 
field--label-hidden field__item"><div>In the year since OpenAI released ChatGPT, what once seemed like an esoteric question among researchers has pushed its way to the forefront of public discourse: As artificial intelligence becomes more capable, how do we ensure AI systems act in the best interests of humans and – crucially – not turn against us?&nbsp;</div> <div>&nbsp;</div> <div>This dilemma could determine the fate of humanity in the eyes of some researchers, including ɫֱ <a href="https://www.provost.utoronto.ca/awards-funding/university-professors/">University Professor</a> emeritus <strong>Geoffrey Hinton</strong>, known as the “godfather of AI,” who is <a href="https://www.youtube.com/watch?v=-9cW4Gcn5WY&amp;ab_channel=UniversityofToronto">warning that the technology he helped create could evolve into an existential threat</a>. Others have raised alarm about nearer-term risks such as job losses, disinformation and AI-powered warfare.&nbsp;</div> <div>&nbsp;</div> <div><strong>Michael Zhang</strong>, a PhD student in computer science in ɫֱ’s Faculty of Arts &amp; Science, is focused on AI safety and interdisciplinary thinking about the technology as a graduate fellow at the Schwartz Reisman Institute for Technology and Society – and <a href="http://trajectories.substack.com/p/1e80b0f2-8521-448d-9574-7d039ae1cce1">co-authored an article on the subject</a> earlier this year.&nbsp;</div> <div>&nbsp;</div> <div>He recently spoke with&nbsp;<em>ɫֱ News&nbsp;</em>about the alignment problem and what is being done to try and solve it.</div> <div> <hr> <p><strong>What, exactly, is meant by AI alignment?&nbsp;</strong></p> </div> <div>&nbsp;</div> <div>In the research sense, it means trying to make sure that AI does what we intended it to do – so it follows the objectives that we try to give it. But there are lots of problems that can arise, some of which we’re already seeing in today’s models.&nbsp;</div> <div>&nbsp;</div> <div>One is called reward misspecification. 
It’s tricky to specify what reward function, or objective, you want in the form of a number that an AI model can understand. For example, if you’re a company, you might try to maximize profits – that’s a relatively simple objective. But in pursuing it, there can be unintended consequences in the real world. The model might make or recommend decisions that are harmful to employees or the environment. This example of rewards being underspecified can occur in even simpler settings. If we ask a robot to bring us coffee, we are also implicitly asking it to do so without breaking anything in the kitchen.&nbsp;</div> <div>&nbsp;</div> <div>Another problem is bias. The AI model doesn’t have a mind of its own – it’s given a very strict mathematical objective. But we’re biased, and we generate data that’s biased, and that’s what we give our models. If there exists some underlying bias in the training dataset, the model will “learn” the bias because it best accomplishes that mathematical objective. We’ve already seen how this can lead to issues when we ask AI systems to make decisions such as <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">whether someone should receive bail</a>, or to do <a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G/">the first round of resume screening</a>.&nbsp;&nbsp;</div> <div>&nbsp;</div> <div><strong>If we built the AI models, how is it they learn to do things we didn’t foresee?&nbsp;</strong></div> <div>&nbsp;</div> <div>When we talk about emergent behaviours – abilities that are present in larger models but not in smaller ones – it’s useful to think about large language models (LLMs) such as ChatGPT. If given an incomplete sentence, ChatGPT’s objective is to predict what the next word is going to be. 
But if you’re giving it a bunch of different training data – from the works of Shakespeare to mathematical textbooks – the model is going to gain some level of understanding in order to get better at predicting what word comes next.&nbsp;</div> <div>&nbsp;</div> <div>We don’t specify hard-coded rules for what these models are supposed to learn, so we don’t have that much control over what the model generates. One example of this is hallucinations, where models such as ChatGPT create plausible but false claims.&nbsp;</div> <div>&nbsp;</div> <div><strong>What is artificial general intelligence (AGI) and what are some of the existential concerns about it?&nbsp;</strong></div> <div>&nbsp;</div> <div>There are many definitions, but in a general sense, AGI refers to the potential that we develop an AI system that performs most tasks that require intelligence better than or at the same level as humans.&nbsp;</div> <div>&nbsp;</div> <div>People who believe this might happen are concerned about whether these models are going to be aligned with human values. In other words, if they’re more intelligent than the average human, it’s not clear that they’ll actually help us.&nbsp;</div> <div>&nbsp;</div> <div>Some sci-fi ideas about AIs taking over the world or hurting a lot of humans are getting a lot of media attention. One reason people think this might happen is an AI can often act better on its objectives if it has more resources. Hypothetically, an AI system might decide that manipulating humans, or hurting them in some way, might make it easier to acquire resources. 
This scenario is not going to happen today, but the potential risk is why luminaries such as <strong>Geoffrey Hinton</strong> emphasize the importance of studying and better understanding the models we are training.&nbsp;</div> <div>&nbsp;</div> <div><strong>How are ɫֱ researchers working to tackle the short- and long-term risks of AI?&nbsp;</strong></div> <div>&nbsp;</div> <div>There are five key areas of AI alignment research: specification, interpretability, monitoring, robustness and governance. The Schwartz Reisman Institute is at the forefront of bringing together people from different disciplines to try to steer this technology in a positive direction.&nbsp;&nbsp;</div> <div>&nbsp;</div> <div>In the case of specification, a common approach to fix the problem of reward misspecification is a technique that allows models to learn from human feedback. This is already being put into practice in training LLMs like ChatGPT. Going forward, some researchers are looking for ways to encode a set of human principles for future advanced models to follow. An important question that we can all think about is alignment to whom? What sort of guidelines do we want these models to follow?&nbsp;&nbsp;</div> <div>&nbsp;</div> <div>Then there’s interpretability. A lot of these giant models, like ChatGPT, might have millions or even billions of parameters. These parameters take in an input and then compute a complicated mathematical function to give us the output – but we’re not always sure what happens in this “black box” in the middle. The goal of interpretability is to try to better understand how a model arrives at a given decision. 
For example, <strong>Roger Grosse</strong>, an associate professor in the department of computer science in the Faculty of Arts &amp; Science and faculty affiliate at SRI, and his students are researching <a href="https://arxiv.org/abs/2308.03296">influence functions</a>, which aim to understand which training examples are most responsible for producing a certain output.&nbsp;</div> <div>&nbsp;</div> <div>Another area is monitoring. Due to the presence of emergent behaviours, sometimes we don’t actually know what a new model is capable of until a bunch of different researchers and practitioners poke around and figure it out. This area of research aims to create systematic ways to understand how capable a model actually is. For example, PhD students <strong>Yangjun Ruan</strong> and <strong>Honghua Dong</strong> are among the ɫֱ researchers who co-authored <a href="https://arxiv.org/pdf/2309.15817v1.pdf">a paper</a>&nbsp;that used simulation testing to evaluate the safety risks that could arise from giving current LLMs access to tools such as email and bank accounts.&nbsp;</div> <div>&nbsp;</div> <div>Robustness is a term that broadly refers to making sure that AI models are resistant to unusual events or manipulations by bad actors. That means models shouldn’t be sensitive to small changes and should behave consistently in various circumstances. SRI Faculty Affiliate <strong>Nicolas Papernot</strong>, an assistant professor in the Edward S. Rogers Sr. department of electrical and computer engineering in the Faculty of Applied Science &amp; Engineering and the department of computer science, has been working on <a href="https://www.youtube.com/watch?v=UpGgIqLhaqo&amp;t=1s&amp;ab_channel=NicolasPapernot">trustworthy machine learning</a>, which seeks to address some of these challenges.&nbsp;</div> <div>&nbsp;</div> <div>Finally, there’s governance. Many different countries are trying to develop rules on how we should regulate AI. 
For example, SRI Chair <strong>Gillian Hadfield</strong> has been influential <a href="https://srinstitute.utoronto.ca/news/frontier-ai-regulation-challenges">in pushing for policies to curb the dangers of highly capable frontier AI models</a>. There’s also research on the technical side about tools to hold AI developers accountable. PhD student <strong>Dami Choi</strong> and Associate Professor <strong>David Duvenaud </strong>recently co-authored a <a href="https://arxiv.org/abs/2307.00682">paper</a> developing a method to check that a model is actually trained on the data that the organization claims.</div> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Fri, 08 Dec 2023 21:18:24 +0000 bresgead 304832 at Geoffrey Hinton fields questions from scholars, students during academic talk on responsible AI /news/geoffrey-hinton-fields-questions-scholars-students-during-academic-talk-responsible-ai <span class="field field--name-title field--type-string field--label-hidden">Geoffrey Hinton fields questions from scholars, students during academic talk on responsible AI</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/2023-11/2023-10-30-Geoff-Hinton-Con-Hall-%2816%29-crop.jpg?h=7e2a7578&amp;itok=qKnAz1bf 370w, /sites/default/files/styles/news_banner_740/public/2023-11/2023-10-30-Geoff-Hinton-Con-Hall-%2816%29-crop.jpg?h=7e2a7578&amp;itok=61hKy5FV 740w, /sites/default/files/styles/news_banner_1110/public/2023-11/2023-10-30-Geoff-Hinton-Con-Hall-%2816%29-crop.jpg?h=7e2a7578&amp;itok=6ES1MOWW 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" 
src="/sites/default/files/styles/news_banner_370/public/2023-11/2023-10-30-Geoff-Hinton-Con-Hall-%2816%29-crop.jpg?h=7e2a7578&amp;itok=qKnAz1bf" alt="&quot;&quot;"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2023-11-02T10:48:22-04:00" title="Thursday, November 2, 2023 - 10:48" class="datetime">Thu, 11/02/2023 - 10:48</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item"><p><em>Geoffrey Hinton, a University Professor Emeritus of computer science who has been dubbed the “Godfather of AI,” delivers an academic talk about artificial intelligence in ɫֱ’s Convocation Hall (photo by Johnny Guatto)</em></p> </div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/chris-sorensen" hreflang="en">Chris Sorensen</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/institute-biomedical-engineering" hreflang="en">Institute of Biomedical Engineering</a></div> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a 
href="/news/tags/computer-science" hreflang="en">Computer Science</a></div> <div class="field__item"><a href="/news/tags/deep-learning" hreflang="en">Deep Learning</a></div> <div class="field__item"><a href="/news/tags/faculty-applied-science-engineering" hreflang="en">Faculty of Applied Science &amp; Engineering</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/geoffrey-hinton" hreflang="en">Geoffrey Hinton</a></div> <div class="field__item"><a href="/news/tags/graduate-students" hreflang="en">Graduate Students</a></div> <div class="field__item"><a href="/news/tags/u-t-mississauga" hreflang="en">U of T Mississauga</a></div> <div class="field__item"><a href="/news/tags/vector-institute" hreflang="en">Vector Institute</a></div> </div> <div class="field field--name-field-subheadline field--type-string-long field--label-above"> <div class="field__label">Subheadline</div> <div class="field__item">'Godfather of AI' asks: Will Digital Intelligence Replace Biological Intelligence?</div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>Does artificial intelligence actually understand? Would knowing more about its inner workings help to keep it in check?
Could AI come up with the law of gravity if it hadn’t yet been devised?</p> <p>These were among the questions that professors and students put to <strong>Geoffrey Hinton</strong> during a recent event at the University of Toronto’s 1,730-seat Convocation Hall.</p> <p>The U of T <a href="https://www.provost.utoronto.ca/awards-funding/university-professors/">University Professor</a> emeritus of computer science and “godfather of AI” was there to deliver an academic talk about – and take queries on – the key differences between biological and digital intelligences, whether large language models such as ChatGPT understand what they are doing and <a href="https://www.youtube.com/watch?v=-9cW4Gcn5WY">the existential risks posed by unfettered development of the technology he helped create</a>.</p> <div> <div class="field field--name-field-media-oembed-video field--type-string field--label-hidden field__item"><iframe src="/media/oembed?url=https%3A//youtu.be/iHCeAotHZa4%3Fsi%3DPziDuAejHltjwHRY&amp;max_width=0&amp;max_height=0&amp;hash=hpLrHubP1t0o-8rS39Xyb6KuJB_He4LKsYzB5uvSBIM" width="200" height="113" class="media-oembed-content" loading="eager" title="Geoffrey Hinton | Will digital intelligence replace biological intelligence?"></iframe> </div> </div> <p>&nbsp;</p> <p>“My guess is that they will take over – they'll be much, much more intelligent than people ever were,” said Hinton, who added that humanity was likely “just a passing stage” in intelligence’s evolution.</p> <p>“That's my best guess and I hope I'm wrong.”</p> <p>The Oct. 27
event was co-hosted by U of T’s <a href="https://srinstitute.utoronto.ca/">Schwartz Reisman Institute for Technology and Society</a> and the <a href="https://web.cs.toronto.edu/">department of computer science</a> in the Faculty of Arts &amp; Science in collaboration with the <a href="https://vectorinstitute.ai/">Vector Institute for Artificial Intelligence</a> and the <a href="https://defygravitycampaign.utoronto.ca/initiatives/explore-humanitys-future-in-the-cosmos/">Cosmic Future Initiative</a>. &nbsp;</p> <p>Hinton’s talk came amid a flurry of AI-related developments. Three days earlier, Hinton, fellow <a href="/news/am-turing-award-nobel-prize-computing-given-hinton-and-two-other-ai-pioneers">Turing Award-winner</a> Yoshua Bengio and 22 other AI experts, including U of T professors <strong>Gillian Hadfield</strong>, <strong>Tegan Maharaj</strong> and <strong>Sheila McIlraith</strong>, <a href="https://managing-ai-risks.com/">released a paper</a> calling for governments and Big Tech firms to take action on the issue, including by devoting one-third of their AI research and development budgets to AI safety. And on Oct. 30, U.S.
President Joe Biden signed an <a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/">Executive Order on Safe, Secure and Trustworthy Artificial Intelligence</a>.</p> <figure role="group" class="caption caption-drupal-media align-center"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/styles/scale_image_750_width_/public/2023-11/2023-10-30-Geoff-Hinton-Con-Hall-%2814%29-crop.jpg?itok=s7digBI7" width="750" height="500" alt="&quot;&quot;" class="image-style-scale-image-750-width-"> </div> </div> <figcaption><em>Hinton took questions from audience members, many of them professors and students (photo by Johnny Guatto)</em></figcaption> </figure> <p>“AI is poised to transform how we live and work,” said Professor <strong>Melanie Woodin</strong>, dean of the Faculty of Arts &amp; Science, after she summarized the seminal work Hinton did on deep learning neural networks with the help of his graduate students.</p> <p>“At this pivotal moment when we consider the opportunities and risks of AI, who better to guide us in this conversation than Dr. Hinton himself?”</p> <p>Hinton, who is also a cognitive scientist, explained why he <a href="https://www.youtube.com/watch?v=-9cW4Gcn5WY" target="_blank">began sounding the alarm about AI earlier this year</a> after spending decades developing the technology to better understand how the human mind works. 
In short: it is the rapid advances in large language models such as OpenAI’s ChatGPT and Google’s PaLM, coupled with the scaling advantages that digital intelligences enjoy due to their ability to be copied and to share information.</p> <p>And he warned that neural networks’ learning capacity is likely to grow even further as more sources of information, including video, are incorporated into their training. “They could also learn much faster if they manipulated the physical world,” he said.</p> <p>He finished his presentation by suggesting AI chatbots may even be capable of subjective experience – a concept that is tied up with ideas about consciousness and sentience. “The reason I believe that is because I think people are wrong in their analysis of what subjective experience is,” he said.</p> <figure role="group" class="caption caption-drupal-media align-center"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/styles/scale_image_750_width_/public/2023-11/2023-10-30-Geoff-Hinton-Con-Hall-%282%29-crop.jpg?itok=lsNQ4D8y" width="750" height="500" alt="&quot;&quot;" class="image-style-scale-image-750-width-"> </div> </div> <figcaption><em>Left to right: Sheila McIlraith, Geoffrey Hinton, Gillian Hadfield and Melanie Woodin (photo by Johnny Guatto)</em></figcaption> </figure> <p>The talk was followed by a lengthy Q-and-A session co-ordinated by McIlraith, a professor in the department of computer science and a faculty member at the Vector Institute, where Hinton is chief scientific adviser.
McIlraith said she hoped the event would inspire attendees to “help chart a course toward a future where digital and biological intelligence both enrich the human experience.”</p> <p>Scholars – both professors and students – in fields ranging from philosophy to cognition probed Hinton’s thinking and, in some cases, his conclusions.</p> <p><strong>Shalev Lifshitz</strong>, a fourth-year undergraduate student in computer science who is doing AI research in McIlraith’s group at U of T and the Vector Institute, got into a back-and-forth discussion with Hinton about whether tools like ChatGPT ever truly understand what they are doing (Hinton says yes).</p> <p>“I’m on the fence – I was on the fence before – but I thought he made very interesting points,” Lifshitz said immediately following the event. “I think it depends on what the definition of ‘understanding’ is. I’m not clear on that yet.”</p> <p>Others, like <strong>Jennifer Nagel</strong>, a professor in the department of philosophy at U of T Mississauga, wondered if future AI might find us interesting or special “in a way that would make the best and brightest artificial intelligences take our side.”</p> <figure role="group" class="caption caption-drupal-media align-center"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/styles/scale_image_750_width_/public/2023-11/2023-10-30-Geoff-Hinton-Con-Hall-%2820%29-crop.jpg?itok=OstQGqRd" width="750" height="500" alt="&quot;&quot;" class="image-style-scale-image-750-width-"> </div> </div> <figcaption><em>Scholars in fields ranging from philosophy to cognition probed Hinton’s thinking during the Q-and-A (photo by Johnny Guatto)</em></figcaption> </figure> <p>“I mean, for me to be an interesting conversational partner with you right now, I don't even have to be smarter than you … I just have to have some knowledge that you don't have – or even just some way of looking at a problem
that you find interesting,” she said.</p> <p>Hinton was also asked to give advice to students studying in the field.</p> <p>“Work on AI safety,” he said, noting that top researchers such as OpenAI co-founder <strong>Ilya Sutskever</strong>, a U of T alumnus, and <strong>Roger Grosse</strong> and <strong>David Duvenaud</strong> – both associate professors of computer science at the university and Vector Institute faculty members – are all working on the subject.</p> <p>For many, the event was simply a rare chance to hear directly from a world-renowned researcher whose work has already forever changed our lives.</p> <p><strong>Guijin Li</strong>, a PhD student in biomedical engineering, said she is really interested in Hinton’s thoughts on AI development and jumped at the chance to hear him in person.</p> <p>“It’s a once-in-a-lifetime opportunity.”</p> <p><em>—with files from Mariam Matti</em></p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">On</div> </div> Thu, 02 Nov 2023 14:48:22 +0000 Christopher.Sorensen 304201 at