Solve new problems and take the next step in competitive programming.
Creating solutions to unforeseen problems is second nature to human intelligence – the result of critical thinking informed by experience. The machine learning community has made tremendous progress in generating and understanding text data, but advances in problem solving remain limited to relatively simple maths and programming problems, or to retrieving and copying existing solutions. As part of DeepMind’s mission to solve intelligence, we created a system called AlphaCode that writes computer programs at a competitive level. AlphaCode achieved an estimated ranking within the top 54% of participants in programming competitions by solving novel problems that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding.
In our preprint, we detail AlphaCode, which uses transformer-based language models to generate code at an unprecedented scale, and then smartly filters to a small set of promising programs.
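The filtering step can be illustrated with a minimal sketch: execute each generated candidate on the problem's example tests and keep only those whose output matches, capping the survivors at the contest's submission limit. The function names and the five-second timeout here are illustrative assumptions, not AlphaCode's actual implementation.

```python
import subprocess
import sys

def passes_example_tests(source, tests):
    """Run a candidate Python program on each (stdin, expected_stdout) pair.

    Returns True only if every example test produces the expected output.
    """
    for stdin_text, expected in tests:
        try:
            result = subprocess.run(
                [sys.executable, "-c", source],
                input=stdin_text,
                capture_output=True,
                text=True,
                timeout=5,  # assumed per-test time limit
            )
        except subprocess.TimeoutExpired:
            return False
        if result.returncode != 0:
            return False
        if result.stdout.strip() != expected.strip():
            return False
    return True

def filter_candidates(candidates, tests, limit=10):
    """Keep candidates that pass the example tests, capped at the
    number of submissions a contest typically allows."""
    survivors = [c for c in candidates if passes_example_tests(c, tests)]
    return survivors[:limit]
```

For instance, given one example test mapping input `3` to output `6`, a candidate `print(int(input()) * 2)` survives the filter while `print(int(input()) + 1)` is discarded.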
We validated our performance using contests hosted on Codeforces, a popular platform that regularly runs competitions attracting tens of thousands of participants from around the world to test their coding skills. We selected 10 recent contests for evaluation, each newer than our training data. AlphaCode placed at roughly the level of the median competitor, marking the first time an AI code generation system has reached a competitive level of performance in programming competitions.
To help others build on our results, we are releasing our dataset of competitive programming problems and solutions on GitHub, including extensive tests to ensure that programs passing them are correct – an essential feature current datasets lack. We hope this benchmark will lead to further innovations in problem solving and code generation.