May 25 Thinker's Chapter Meeting

On May 16, Sam Altman testified before a US Senate panel, where he called for regulation of AI.

On May 22, OpenAI published an article to their blog: Governance of superintelligence (openai.com)

Join us on May 24, when the Seattle AI Society: Thinker’s Chapter will host a collective discussion of AI regulation, focusing on whether regulation is necessary at this stage and what manner of regulation is warranted and practical.

The outcome of this discussion will be published as an SAIS position paper.


Another incoming source of thoughts on regulation. This time from Microsoft, released today: Governing AI: A blueprint for the future


Wow this is a very interesting workshop. Too bad I can’t make it (missed the last one + this) as I’m traveling. Hope someone can post the notes and POV of the members.


I just saw this posted to YouTube earlier today - a summary of different perspectives on AI governance.

In my opinion, it started out well, went off track just a bit, but came back on point around 12:40. I wish I had seen it before the meeting earlier tonight, but hopefully we’re getting more and more people to join this Discourse.

https://www.youtube.com/watch?v=irLn5-pTkL0

Edit: I also just now came across these documents, which at least in part have to do with the responsible use and governance of AI:



I remembered this time!

Full video of final presentations:


Amanda Snellinger attended our meetup last night, and I wish I had made her article available as a pre-read. She discusses the role of user experience & design research (UXDR) in establishing proper governance for AI. I recommend the read to everyone:

What is an AI-affirming future? UXDR are not bullshit jobs that AI can… | by Amanda Snellinger, Ph.D. | May 2023 | UX Collective (uxdesign.cc)


@payne I’m interested in contributing to the position paper, and I am sure others are as well. How can I and others help move that forward?

P.S. Thanks for sharing these articles - it’s nice to have some vetted sources on this topic.

Here are my summarized notes with the key points made in the discussion between Kamran, Paul, Tyler, and Madelyn.

Regulatory Framework: The conversation started with the idea that a comprehensive and well-implemented regulatory framework could indeed mitigate some of the potential risks associated with the development and deployment of AI technologies. However, it was universally agreed upon that even the most robust regulatory framework couldn’t entirely eliminate these risks. The uniqueness of AI, its unpredictable and distributed evolution, and the wide range of entities involved in its development create a complex landscape that cannot be entirely controlled by regulation. This is not a defeatist perspective but a realistic acceptance that should inform strategies moving forward.

Unprecedented Challenge of AI: A notable point made during the discussion was the comparison of AI with nuclear power. AI, unlike nuclear technology, holds the potential for self-awareness, autonomy, and continuous recursive improvement in capabilities. While nuclear technology presents significant risks, its dangers are largely static and predictable, unlike AI’s potential evolution. This comparison underscores the unprecedented challenge AI presents, necessitating new approaches and tools for its governance.

Lessons from Social Media Regulation: The group reflected on the regulatory failures with social media. Even with clear research indicating harmful effects on individuals and society, regulation has been insufficient, and these platforms have at times caused significant harm. This retrospective viewpoint acts as a warning, emphasizing the need to proactively address AI’s potential threats and the incentives that are driving AI development and deployment rather than retroactively manage their impact.

Initial Governance Frameworks: The governance frameworks proposed by Microsoft and OpenAI were viewed as ‘strawman’ proposals - initial suggestions intended to spark discussion rather than act as finalized solutions. The group appreciated these first steps but agreed that a more robust framework will be required, reflecting the multifaceted challenges AI presents.

Capitalist Incentives and AI Safety: A major point of conversation emerged around the role of commercial incentives in AI development. The current environment encourages rapid advancement with minimal regulatory overhead, creating a tension between safety and innovation. Recognizing this challenge, the group noted that incentives are not currently aligned between commercial development and the overall benefit of AI to society and humanity at large.

Nature of AI vs. Nuclear Technology: The discussion revisited the comparison between AI and nuclear power, focusing on their inherent differences. The accessibility, scalability, and financial incentives surrounding AI make it fundamentally different from nuclear technology, and as such, a unique approach to regulation is required - one that addresses these characteristics. The group also considered whether there are lessons to learn from the safety and governance frameworks for research on viruses. No specific conclusions were reached, but the topic was flagged for possible follow-up.

Intrinsic vs Extrinsic Control: The conversation moved on to a point about the nature of AI control. Rather than focusing solely on extrinsic regulation and governance, the group noted the additional possibility for an intrinsic approach, embedding values and safety measures within the AI systems themselves. This perspective draws parallels with child rearing and developmental psychology, where a balanced emphasis on external rules and internal values shapes responsible individuals who operate autonomously within a larger ecosystem.

Heuristic Imperatives: One proposed approach to intrinsic control is the idea of “heuristic imperatives”: guiding principles or rules for AI behavior, inspired by the GATO framework and David Shapiro’s work. This perspective adds a layer of internal governance to AI systems, aligning them with basic intrinsic values and interests from the outset.

Human Governance of AI: The discussion concluded with an acknowledgment of the challenges associated with human governance of AI. As AI systems evolve and their capabilities exceed human comprehension, the efficacy of human-led governance may diminish. This observation underlines the urgency of developing a diverse set of governance approaches.


@madelyn Thank you so much for this writeup! Incredibly well done. And thank you for coming last week - I hope to see you again soon!

Thanks Madelyn :pray:t3::blush: