Physics dazzled Miles Cranmer from an early age. His grandfather, a physics professor at the University of Toronto, gave him books on the subject, and his parents took him to open days at universities near their home in southern Ontario, Canada. The Perimeter Institute for Theoretical Physics was a favorite. “I remember someone talking about infinity when I was super young, and it was just so cool to me,” Cranmer said. In high school, he interned at the University of Waterloo’s Institute for Quantum Computing – “the best summer of my life at that point.” Soon after, he began studying physics as an undergraduate at McGill University.
Then one night during his second year, a 19-year-old Cranmer read an interview with Lee Smolin in Scientific American in which the eminent theoretical physicist said it “would take generations” to reconcile quantum theory and relativity. “Something snapped in my brain,” Cranmer said. “I can’t have that – it has to go faster.” And to him, the only way to speed up the timeline of scientific progress was artificial intelligence. “That night was the moment I decided: we have to do AI for science.” He began studying machine learning, eventually merging it with his doctoral research in astrophysics at Princeton University.
Nearly a decade later, Cranmer (now at the University of Cambridge) has seen AI begin to transform science, though not as much as he envisions. Single-purpose systems like AlphaFold can generate scientific predictions with revolutionary accuracy, but researchers still lack “foundation models” designed for general scientific discovery. Such models would work more like a scientifically accurate version of ChatGPT, flexibly generating simulations and predictions across several areas of research. In 2023, Cranmer and more than two dozen other scientists launched the Polymathic AI initiative to begin developing these foundation models.
The first step, Cranmer said, is to equip the models with scientific skills that still elude even the most advanced AI systems. “Some people wanted to make a language model for astrophysics, but I was really skeptical about it,” he recalls. “If you’re simulating massive fluid systems, being bad at general numerical processing” – as language models arguably are – “doesn’t cut it.” Neural networks also struggle to distill their predictions into neat equations (like E = mc²), and the scientific data needed to train them isn’t as plentiful on the internet as the text and raw video that ChatGPT and other generative AI models learn from.