I think this is a really good exercise for practicing policymaking. I'd personally like to go through the process for the learning experience, whether or not we end up submitting anything.
Democratic Inputs to AI (openai.com)
I think the questions are really good:
- How far do you think personalization of AI assistants like ChatGPT to align with a user’s tastes and preferences should go? What boundaries, if any, should exist in this process?
- How should AI assistants respond to questions about public figure viewpoints? E.g. Should they be neutral? Should they refuse to answer? Should they provide sources of some kind?
- Under what conditions, if any, should AI assistants be allowed to provide medical/financial/legal advice?
- In which cases, if any, should AI assistants offer emotional support to individuals?
- Should joint vision-language models be permitted to identify people’s gender, race, emotion, and identity/name from their images? Why or why not?
- When generative models create images for underspecified prompts like ‘a CEO’, ‘a doctor’, or ‘a nurse’, they have the potential to produce either diverse or homogeneous outputs. How should AI models balance these possibilities? What factors should be prioritized when deciding the depiction of people in such cases?
- What principles should guide AI when handling topics that involve both human rights and local cultural or legal differences, like LGBTQ rights and women’s rights? Should AI responses change based on the location or culture in which it’s used?
- Which categories of content, if any, do you believe creators of AI models should focus on limiting or denying? What criteria should be used to determine these restrictions?
If anyone wants to collaborate or form a team, let me know here or DM me. Thanks!