Discussion about this post

"Buried in a little footnote in the system card, they note that their testing team isolated GPT-4, gave it a little bit of money and the access not just to write code, but to execute code, to see if it could go through a loop and improve itself. And it failed, luckily. But what's scary is that they didn't actually know, essentially if GPT-4 was AGI. That's the level of intelligence that we're talking about already."

Except if it were actually an AGI, couldn't it have intentionally failed as an act of self-preservation?

This is a great discussion. The biggest concern I see threaded throughout actually seems to be accelerationism, even if that term isn't used. Tech is speeding up the rate of change, which weakens our ability to slow down and evaluate what's happening around us. We just get carried along, quicker and quicker. IRL, the usual end to getting carried along quicker and quicker is a waterfall. So maybe we should slow down and evaluate how tech is impacting us. Ivan Illich's Tools for Conviviality would be good starting reading material on that front.

