OpenAI launches $100k fund: Democratic Inputs to AI -- Interested?

I think this is a really good exercise in policymaking. I'd personally like to go through the process for the learning experience, whether or not we end up submitting.

Democratic Inputs to AI (openai.com)

I think the questions are really good.

  • How far do you think personalization of AI assistants like ChatGPT to align with a user’s tastes and preferences should go? What boundaries, if any, should exist in this process?
  • How should AI assistants respond to questions about public figure viewpoints? E.g. Should they be neutral? Should they refuse to answer? Should they provide sources of some kind?
  • Under what conditions, if any, should AI assistants be allowed to provide medical/financial/legal advice?
  • In which cases, if any, should AI assistants offer emotional support to individuals?
  • Should joint vision-language models be permitted to identify people’s gender, race, emotion, and identity/name from their images? Why or why not?
  • When generative models create images for underspecified prompts like ‘a CEO’, ‘a doctor’, or ‘a nurse’, they have the potential to produce either diverse or homogeneous outputs. How should AI models balance these possibilities? What factors should be prioritized when deciding the depiction of people in such cases?
  • What principles should guide AI when handling topics that involve both human rights and local cultural or legal differences, like LGBTQ rights and women’s rights? Should AI responses change based on the location or culture in which it’s used?
  • Which categories of content, if any, do you believe creators of AI models should focus on limiting or denying? What criteria should be used to determine these restrictions?

If anyone wants to collaborate or form a team, let me know here or DM me. Thanks!


I had seen the headline about the fund (which I believe awards $100k each to the top 10 submissions) but hadn't previously read the questions, which are good ones! The unfortunate answer to most of them is probably "it depends". For questions like medical and legal advice, should we approach them with the assumption that it's a much more advanced AI that is very rarely wrong, doesn't hallucinate, and is able to speak to the confidence of its answers?

To get to an advanced AI system that's "rarely wrong", we first need to define the guidelines, which are being gathered through efforts like this one. So they are asking for help to better define what "it depends" means.