'AI scientist' created to run its own experiments. What will this mean for scientific discoveries?
07.09.2024 - 11:59
/ euronews.com
/ European Commission
Researchers at Sakana.AI have developed an artificial intelligence (AI) model that may be able to automate the entire scientific research process.
The "AI Scientist" can identify a problem, develop hypotheses, implement ideas, run experiments, analyse results, and write reports.
The researchers also incorporated a secondary language model to peer review and evaluate the quality of these reports and validate the findings.
"We sort of think of this as a type of GPT-1 moment for generative scientific discovery," Robert Lange, research scientist and founding member at Sakana.AI, told Euronews Next, adding that much like AI's early stages in other fields, its true potential in science is only just beginning to be realised.
AI’s integration into science has faced some limitations due to the complexities of the field and ongoing issues with these tools, such as hallucinations and questions about ownership.
Yet, its influence in science may already be more widespread than many realise, with researchers often using it without clear disclosure.
Earlier this year, a study that analysed writing patterns and specific word usage in academic papers following the release of the now well-known AI chatbot, ChatGPT, estimated that around 60,000 research papers may have been enhanced or polished using AI tools.
Although the use of AI in scientific research raises some ethical concerns, it could also open the door to new advancements in the field when done properly. The European Commission has said that AI can act as a "catalyst for scientific breakthroughs and a key instrument in the scientific process".
The AI Scientist project is still in its early stages, with the researchers publishing a pre-print paper last month, and the system has some notable limitations.
Some of the flaws, as detailed by the researchers, include incorrect implementation of ideas, unfair comparisons to baselines, and critical errors in writing and evaluating results.
Still, Lange sees these issues as crucial stepping stones and expects that the AI model will significantly improve with more resources and time.
"When you think about the history of machine learning models, like image generation models, chatbots right now, also and text-to-video models, they oftentimes start out with some flaws and some maybe images which are generated, which are not super visually pleasing," Lange said.
"But over time, as we put in more collective resources as a community, they become much more powerful and much more capable," he added.
When tested, the AI Scientist at times displayed a degree of autonomy, exhibiting behaviours that mimic those of human researchers, such as taking unexpected extra steps to ensure success.
For instance, instead of optimising