1 hour ago · Tech

Dwarkesh Patel recently announced a prize for the best answers to four key questions about AI. It's partly a challenge and partly a job interview, since some of the winners will be offered a role as a "research collaborator". I don't want the job, but I do want to write down my answer to his first question: why hasn't AI progress slowed down more?

There are a few reasons we might expect AI progress to slow down. The particular reason Dwarkesh is interested in goes like this. Training a model (specifically with reinforcement learning) requires the model to perform a task and then get "graded" on the output. As models get more powerful and tasks become harder, each task takes longer and requires more FLOPs to complete, and therefore more FLOPs to train on: training harder models should take longer and longer.

But intuitively, AI progress hasn't slowed down that much. The famous METR horizon-length graph shows that AI systems are capable of more and more complex tasks over time, and that this process is…
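To make the scaling argument concrete, here is a minimal sketch (with hypothetical numbers; the function names and constants are my own, not from any real training setup) of why RL rollout compute grows with task length: the model must run an entire task before it can be graded, so FLOPs per episode scale with the task horizon.

```python
def flops_per_episode(horizon_steps: int, flops_per_step: float) -> float:
    """Cost of one RL episode: the model must roll out the whole task
    before the grader can score the output."""
    return horizon_steps * flops_per_step


def total_training_flops(horizon_steps: int, episodes: int,
                         flops_per_step: float = 1e12) -> float:
    """Total rollout compute for a fixed number of graded episodes."""
    return episodes * flops_per_episode(horizon_steps, flops_per_step)


# Doubling the task horizon doubles rollout compute for the same number
# of gradient updates -- this is the mechanism behind the slowdown claim.
short_tasks = total_training_flops(horizon_steps=100, episodes=10_000)
long_tasks = total_training_flops(horizon_steps=200, episodes=10_000)
assert long_tasks == 2 * short_tasks
```

This toy model only counts rollout compute; in practice grading, gradient computation, and parallelism all complicate the picture, but the linear-in-horizon rollout cost is the core of the argument.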
