Last week at happy hour we went around and asked one another: “How optimistic or pessimistic are you about the future of AI, on a scale of 1 to 10?”
The table was evenly split, with some stating it was too early to tell.
I lean towards the optimistic side cos it’s not a matter of if but when, so I might as well see it in my lifetime.
I like how Wolfram explained it in this vid. His insightful take:
Preventing the existence of powerful AI models is not a viable way to roll back progress; history has shown that attempts to suppress technology don’t succeed. Progress is essential to human advancement and should not be frozen. Instead, we should find ways to regulate the use of AI through measures and countermeasures.
Stephen Wolfram is going to save the world. His “physics for kids” podcasts should really be titled “physics for anyone who isn’t an astrophysicist,” but he’s one of the most interesting dudes alive on this planet today!
I got to meet him once at a bar after a conference in St. Louis MANY years ago. I’ll always regret that I was too shy to strike up a conversation with him, even though he clearly looked open to it.
Lol, maybe there’ll be a next time…? The highlight of my 2020 was when he read off one of my physics questions on one of his YouTube podcasts about his new Physics Project.
The LessWrong community is basically all about this. It’s an odd community and they use a lot of insider language, but just about any possible thought around AI risk has likely been covered there.
I mean, we did for a time, but these days there seem to be worthwhile incentives to having nuclear weapons (even tho having nuclear weapons increases the chances of us using them).
I’m specifically thinking of Ukraine and how they decommissioned their nuclear program in hopes of peace, which conversely opened the door for one of their neighboring countries to invade them, with no nuclear deterrent to prevent it.
On OpenAI open sourcing: the original source is paywalled, so it’s not clear whether they’d be including training data with that open-source model. That changes the usefulness a lot.