Last week [Apr 17-21] the annual #ted2023 conference was held in Vancouver [Canada] under the theme of ‘Possibility’. Artificial Intelligence featured prominently on Day 2 and was a common thread throughout the five-day conference.
#openai President, Chairman & Co-Founder Greg Brockman gave a riveting presentation on the latest, as-yet-unreleased capabilities of GPT-4 and what they mean for the future of the human-machine interface. He showed how someone used AI to help save the life of their best friend (Sassy the dog), along with several real examples of the kinds of tasks and problems future AI systems will be able to solve for us. He also emphasized that “getting AI right is going to require participation from everyone”.
After the talk, Chris Anderson, the head of #ted, discussed the origins of #openai and their discovery process to date, and pressed Greg on the challenges of getting it right and on whether releasing #chatgpt and the #gpt models has been responsible rather than reckless. Greg’s and OpenAI’s position is that we need to work collectively on building safe AI (with human feedback), and that it is better to release these systems before they become too powerful, so that we can develop the guardrails sooner rather than later.
In contrast, Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute (MIRI), called for a complete shutdown of all Artificial Intelligence activity because of what he believes is a very likely species-level risk. His primary argument is that the “most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI,” but he firmly believes that “we’re not ready and do not currently know how.” In his #timemagazine article [Mar 29, 2023], where he makes this argument, he compares the challenge to “a 10-year-old trying to play chess against Stockfish 15”, “the 11th century trying to fight the 21st century,” or “Australopithecus trying to fight Homo sapiens”.
A more moderate stance was taken by Gary Marcus, a Psychology & Neuroscience professor at #nyu, who appears cautiously optimistic. He argues for an international regulatory body for Artificial Intelligence, and for integrating the power of AI systems like ChatGPT with more trustworthy, logic-based systems.