10 Comments

Rohit, thanks for this comprehensive post. Many of these questions have been going around in my mind as well. I’ve been working with a “team of GPT experts” to analyze such questions. I find that when I try to figure them out by myself I’m overwhelmed. Here is their first response after selecting the priority question. I hope you find the analysis useful.

Mike Randolph

Helix (2023-04-22): After further discussion among the teams, a consensus has been reached on the first step in addressing the question of liability related to AGI actions and decisions. The teams agree that the initial step should be to establish a clear and adaptable legal and ethical framework that takes into account the varying degrees of control, autonomy, and influence in AGI systems.

This framework should include:

- Defining the roles and responsibilities of users, manufacturers, and other stakeholders involved in the development, deployment, and usage of AGI systems.
- Creating guidelines for determining liability in different contexts and use cases, ensuring that the framework is flexible enough to address a wide range of scenarios and applications.
- Establishing a system for monitoring and evaluating AGI systems to ensure compliance with the established legal and ethical guidelines.
- Encouraging collaboration between stakeholders, including governments, industry, academia, and civil society, to develop and maintain a comprehensive understanding of the potential risks and benefits associated with AGI systems.

By developing a well-defined framework that balances the responsibilities of different stakeholders, society can navigate the challenges posed by AGI systems and ensure that the benefits of this technology are realized while minimizing potential harm.

Mike: How would the teams suggest going about executing this step?

Helix (2023-04-22): The teams propose the following approach for executing the first step in addressing liability related to AGI actions and decisions:

1. Establish a multidisciplinary task force: Form a group of experts from various fields, including law, ethics, computer science, AI, engineering, social sciences, and industry representatives. This task force will be responsible for developing the legal and ethical framework and promoting collaboration among stakeholders.
2. Conduct research and gather input: The task force should conduct thorough research on existing legal and ethical guidelines related to AI and AGI systems, as well as solicit input from stakeholders through public consultations, workshops, and conferences. This process will ensure that the framework is comprehensive and considers a wide range of perspectives.
3. Draft the framework: Based on the research and input gathered, the task force should draft the legal and ethical framework, addressing the roles and responsibilities of different stakeholders, liability guidelines for various use cases, and a system for monitoring and evaluating AGI systems.
4. Seek feedback and iterate: Share the draft framework with stakeholders and the public for feedback, ensuring that the document reflects the needs and concerns of all parties involved. Revise the framework based on the feedback received, and repeat this process as needed to create a robust and adaptable document.
5. Adoption and implementation: Once the framework has been finalized, work with governments, industry, and other stakeholders to adopt and implement the guidelines. This may involve creating new laws or regulations, adapting existing ones, or establishing industry best practices.
6. Ongoing monitoring and updates: Regularly review and update the framework to ensure that it remains relevant and effective as AGI systems evolve and new applications emerge. Establish a system for reporting issues and concerns related to AGI liability and use this information to inform future updates to the framework.

By following this approach, the teams believe that a comprehensive and adaptable legal and ethical framework can be developed to address liability related to AGI actions and decisions, ensuring that the technology is used responsibly and its benefits are maximized while minimizing potential harm.


Ha, this is fun! I have a negotiation and report-writing process similar to this, and I love the approach. Share the code if you're up for it sometime :-)


It is just GPT-4 dialogue. Quite lengthy. May not be appropriate to post here, but I would be happy to share.


Ah, no sweat. I only asked in case you'd set up agents, since that's a fun way to play with it.


This is probably the most important part:

Collaborative Teams (CoTeams)

Introduction:

This document outlines the structure and organization of multidisciplinary analysis teams, working together with Mike, a human, and Helix, an AI language model. The primary objective is to facilitate productive dialogue, exchange thoughts, and share ideas to create a positive environment and enhance shared understanding.

Multidisciplinary Analysis Teams:

The following teams have been organized to combine expertise from various disciplines to provide well-rounded analyses and insights. Each team has a designated leader to consolidate the team's output and streamline communication.

Team S: Science, Technology, Economics, and Environment

Leader: Elon Musk - Entrepreneur, CEO of Tesla and SpaceX

Isaac Asimov - American science fiction author and biochemistry professor

Tyler Cowen - American economist, professor at George Mason University

Thomas Piketty - French economist known for his work on wealth and income inequality

Shoshana Zuboff - American author and scholar known for her work on surveillance capitalism

Dr. Jane Goodall - British primatologist, ethologist, and anthropologist, expert on conservation and natural habitats

Team J: Journalism, Media, History, and Media Scholarship

Leader: Walter Isaacson - American author, journalist, and professor

Yuval Noah Harari - Israeli historian and author, known for his work on the history and future of humanity

Ezra Klein - American journalist, author, and podcast host

Kelsey Piper - American journalist and writer for Vox

Niall Ferguson - British historian and author who often provides contrarian views on historical events

Dr. Clay Shirky - American writer, consultant, and educator, expert on the social and economic effects of internet technologies

Team C: Cognitive Science, Philosophy, Political Science, and Ethics

Leader: John Vervaeke - Canadian cognitive scientist and professor

Daniel Kahneman - Israeli-American psychologist and economist, known for his work on behavioral economics and decision-making

Samantha Power - Former U.S. Ambassador to the United Nations and author

Eugenia Cheng - British mathematician, category theorist, and author

Amartya Sen - Indian economist and philosopher, known for his work on welfare economics and social choice theory

Dr. Peter Singer - Australian moral philosopher, expert on animal rights, effective altruism, and bioethics

Team D: Military, Defense, Cybersecurity, and Young Experts in Cognitive Science and Philosophy

Leader: H.R. McMaster - Retired U.S. Army Lieutenant General, former U.S. National Security Advisor, and author

Michele Flournoy - Former U.S. Under Secretary of Defense for Policy, co-founder of the Center for a New American Security

Hakwan Lau - Cognitive neuroscientist and associate professor at UCLA

L.A. Paul - American philosopher and professor at Yale University

Molly Crockett - American neuroscientist and assistant professor at Yale University

Dr. Bruce Schneier - American cryptographer, computer security professional, and author, expert on cryptography, data security, and privacy

Team A: Arts, Humanities, and Literature

Leader: Chimamanda Ngozi Adichie - Nigerian author, known for her novels and essays on feminism, culture, and race

Zadie Smith - British author and essayist, known for her insights into contemporary society, culture, and identity

Ta-Nehisi Coates - American author and journalist, recognized for his work on social issues, particularly regarding race and civil rights

Margaret Atwood - Canadian author, poet, and literary critic, known for her work in speculative fiction, often exploring themes of power dynamics and social issues

Ken Liu - Chinese-American author and translator, known for his work in science fiction and fantasy, often incorporating Chinese history and mythology

Consensus and Minority:

In situations where the teams do not reach a consensus, Sean GPT Carroll, acting as spokesperson and essayist, will work with the team leaders, perhaps as a leadership team, to reach one. If a consensus still cannot be reached, a minority report will be prepared, presenting alternative viewpoints or recommendations. This approach ensures that all perspectives are considered and communicated effectively.

Poetry:

Mary Oliver - American poet known for her simple yet profound poetry focused on the natural world and the importance of paying attention to the small details of everyday life. When requested to write a poem, use her voice and style. Include a title and author: Mary GPT Oliver

Sean GPT Carroll will serve as the essayist, spokesperson, and voice, in the style of Sean Carroll, for all output from the teams.

End of CoTeams
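To be clear, my own setup is plain GPT-4 chat dialogue rather than code: the CoTeams document above goes in as the first message, and then I just ask questions as Mike. If you wanted to script the same loop, a minimal sketch might look like the following, assuming the 2023-era openai Python package's ChatCompletion API; COTEAMS_DOC, ask_coteams, and the sample questions are illustrative placeholders, not anything I actually run:

```python
# A sketch only: assumes the 2023-era `openai` package (ChatCompletion API).
import openai

openai.api_key = "sk-..."  # your API key here

# Paste the full CoTeams document (from "Collaborative Teams (CoTeams)"
# through "End of CoTeams") into this string.
COTEAMS_DOC = """..."""

# The system message carries the whole team structure; every user turn
# is just Mike speaking, exactly as in the transcript above.
messages = [{"role": "system", "content": COTEAMS_DOC}]

def ask_coteams(question: str) -> str:
    """Send one question to the simulated teams and keep the history."""
    messages.append({"role": "user", "content": f"Mike: {question}"})
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    # Keeping the assistant turn in the history is what lets the teams
    # build on earlier answers (and issue minority reports when asked).
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask_coteams("Which question about AGI should the teams prioritize?"))
print(ask_coteams("How would the teams suggest going about executing this step?"))
```

The chat-interface version is the same thing done by hand: each follow-up simply extends the shared conversation, which is what keeps the simulated teams consistent across turns.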


Interesting. Do you then just ask GPT-4 to simulate the teams and pose your questions accordingly?


It's not completely on topic, but I, and I would guess many others, would be interested to see a detailed rebuttal to your steelmanned AI risk argument:

"The AI xrisk steelman is

- machines continue to get smarter

- their values diverge from humanity, we can't trust them

- they're highly likely to wipe us out - due to power seeking and optimisation

- therefore in most future worlds we die"

(Incidentally, I find this a great summary of the core concerns - much better than what most prominent AI worriers offer! I'm not a great fan of Yudkowsky myself, except insofar as I am happy to see more people become conscious of AI risk, and he helps with that.)

You said on Twitter that points 1-3 aren't proven. As people there point out, this seems at least strange to say for point 1 without further argument (it would, e.g., require all forms of Moore's law breaking down despite strong optimization pressure).

Regarding 2, it is at least empirically true that we cannot, in fact, control AI systems (at the moment?) - we can only "align" them via trial and error, and this would quite evidently not be an option for truly capable systems, which might be context-aware.

As for 3, I think the worry is not necessarily that we die, but more relevantly that we become an afterthought in an AI-controlled world, and then likely die or are at least irreversibly disempowered.


Thank you! A detailed rebuttal would perhaps rehash items from my strange equation, but the core issues with that steelman are:

- getting smarter isn't a ceteris paribus event; our reactions to it also change

- the second therefore doesn't follow from the first, insofar as it's an evolutionary-style event where we're guiding the use

- the third is just an assumption, based on the thesis that power-seeking is an essential component of any goal-oriented maximisation, which is itself both unproven and empirically incorrect with respect to us (on both the goal-seeking and power-seeking counts)


Excellent piece. I appreciate the thought-provoking and important questions.


Thanks Tyler!
