Safety challenges of GPT


Nice summary of the safety challenges of generative AI and some of the methods OpenAI is employing to address them.


I’m sure Paul is aware of this, but I’d also point out that this list of safety challenges is more “what are the potential negative applications of GPT” than a list of risks GPT uniquely enables — with the exception of economic impacts (and even then, only in the future) and acceleration. Virtually everything on the list is already a safety risk from humans, and GPT-4 doesn’t make any of it more risky (for now).

However, “GPT enables this risk in groundbreaking ways” is likely to become true rapidly over the coming years. Even when that happens, though, the risk will still stem from the negative intentions of other humans.

Long term, the risk is AI that acts on its own intentions, and our ability to monitor, control, or influence those actions. That risk is tremendously greater, but at the same time it doesn’t seem like there’s much we can do now to meaningfully reduce it. OpenAI’s approach is that we’ll understand how this risk can be reduced as we further develop the technology, and that the technology itself can help us solve the problem — an approach that carries risks of its own.

And as a follow-up on economic risks: the biggest risk here is less about job displacement and more about the efficacy of democracy. In a world of abundance with little to no scarcity of essential resources, there’s no reason anyone should go hungry, unhoused, or untreated. “Should” being the operative word — if the many people displaced from the economy have no recourse and no way to be represented, we’re going to have some real issues. Humanity unfortunately doesn’t have a great track record when it comes to empathy or social support networks, and we arguably already have resource abundance yet are currently failing to provide for many people who can’t provide for themselves.