Lila Sciences is developing a scientific superintelligence platform, paired with autonomous laboratories, that can execute the entire scientific method.
In a conversation with MobiHealthNews, Molly Gibson, president of Future Science at Lila Sciences, explained that the technology extends beyond traditional AI applications, such as protein modeling, to generating hypotheses, designing experiments and learning from results.
She also highlighted potential risks, such as the creation of pathogenic biological agents, and described how Lila is actively working to mitigate them.
MobiHealthNews: Can you tell me about the technology behind Lila Sciences?
Molly Gibson: Lila Sciences is building scientific superintelligence with autonomous laboratories. We are creating the ability to expand knowledge by running the scientific method. So you take different aspects of science – biology, microbiology and more – and you use a computer to see how they can work together.
Historically, over the past five to 10 years, as we have started using generative AI in science, we have really applied it to the parts of science that the human brain is not wired for. Things like protein modeling, or the molecular structure of a protein therapeutic, are things our human brains are not wired to do. We applied AI in those places, and in those narrow fields we were able to show very quickly that AI can do better than humans.
What we have not previously shown is that AI can really start to do the parts of the reasoning of the scientific method that humans are traditionally best suited to: the ability to generate a new hypothesis about the world, to design an experiment to test that hypothesis, to go into the laboratory and run that experiment, and to learn from the results. This is what human scientists traditionally do.
We now believe AI will be able to do all of these components and execute the entire wheel of science, and that is really what we believe will expand knowledge and enable us to build scientific superintelligence.
MHN: Is it similar to a quantum computer?
Gibson: We use traditional computers – GPU computing. So you can think of it as a similar kind of advance, but it is not really quantum, not from the point of view of the types of computations we do. It is more about how we integrate AI into the scientific method.
MHN: How will AI and superintelligence change scientific research?
Gibson: It will change the process by which we do scientific research in general. I think it will ultimately have an impact on the role of the scientist. Scientists will always have a very key and important role in scientific discovery, but some of the things scientists do today will be done by AI.
But what I really believe is that it will make the role of a scientist much more fun, exciting and collaborative. The pace of discovery will increase.
You can imagine the role of a scientist becoming much more about guiding the AI to be more creative and to extend the space of research we can explore, with (their role) aided by AI. So I think it will change the nature of what being a scientist means.
MHN: So it is a tool for scientists; it will not replace scientists?
Gibson: Yes, it is a tool for scientists. It will replace some of the things scientists do today, but that does not mean it will replace scientists.
Today, we have brilliant scientists designing plate maps for how experiments are run, and those are things they should be freed from. When people train as scientists, they really want to remain scientists; they want to stay in the profession, yet today I see so many scientists drifting away from the bench. How do we let AI handle those steps so they can do the fun parts?
MHN: How accurate is the superintelligence system?
Gibson: It really depends on what you are looking at. Today, there are many places where it is incredibly accurate. Our ability to design proteins today, for example, is one of those places where what we can do is really remarkable.
There are other places that are unexplored spaces, and as we enter increasingly uncertain spaces, it will be less and less accurate. So, like any other computational system – or any intelligence, honestly – it becomes less accurate as it becomes less certain, and in more certain, better-explored places, it is more accurate. That is simply the nature of exploring new spaces. If we are really going to go to new places, it will not know much before it begins to explore them.
MHN: Is it similar to President Trump's Stargate project and what they are trying to accomplish – curing diseases by improving AI systems?
Gibson: There is some similarity across AI efforts. What I think is really special about Lila is the focus on science and our ability to really understand it. It is built by scientists, it is run by scientists, and it is also run by AI scientists. We deeply understand the problems of science and how to actually do science.
There are real-world components you have to confront when you make scientific discoveries, and that is what we are really building. We are building AI science factories that allow you to go into the laboratory, carry out experiments and expand knowledge. So we do not stop at building the core AI system; we are really building the complete integrated stack, from start to finish, for scientific discovery.
MHN: Do you think the technology will eventually cure diseases?
Gibson: I think we will see cures. I think there is a lot of range in what that looks like and what a cure really means. What I deeply believe is that AI will improve the human condition and human health. Whether it is curing a disease, allowing us to live in a world without obesity, or enabling us to confront the mental health crisis – all of these things will be improved with these kinds of systems. The exact definition of curing a disease is often debated, but the advantage today, I think, is simply that we know life will be better once we have expanded scientific knowledge.
MHN: What makes you nervous in terms of risk? Is there anything you are watching for as you advance this technology?
Gibson: From my point of view, many of the risks we see are things we simply cannot predict today. So what we are working on is trying to identify how we track them. How do we recognize them before they occur? How do we prepare for those moments when intelligence has reached new levels?
What we are working on building is a safety framework that allows us to say: "Okay, this model or these models could improve a non-scientist's ability to carry out advanced scientific methods. What are the risks associated with that? How can we track these pathogenic biological agents?"
Some of these things we have had to contend with for decades. With the advent of being able to synthesize DNA, we had to confront the idea of synthesizing pathogens, and we have learned from all of that.
Now we are just trying to work out what is new with AI in this case, and it is really a matter of keeping the same safety procedures in place for all of the biological systems we have today, while also guarding against any kind of malicious or bad intent – as well as errors by the AI system.
MHN: Right. AI has a lot of potential, but you have to be careful, because what if AI wants to create something that destroys us?
Gibson: I think that is the debate, right? And I think, at the end of the day, we must be very careful, but we should also avoid failing to build what will improve the world... I think you have to do it carefully. It is like any other industry – when you create autonomous cars, there are so many advantages, but we have to do it carefully.