Is there any aspect of science that can't be automated, given how deeply the most recent AI approaches have permeated it?
Artificial intelligence (AI) is permeating science, increasingly controlling modern instruments in scientific studies. According to a survey from CSIRO, Australia's national science agency, AI is used in 98% of scientific domains, and by September 2022 AI-related research accounted for about 5.7% of all peer-reviewed research published worldwide.
According to Stanford's 2023 AI Index Report, "AI models are starting to rapidly accelerate scientific progress and in 2022 were used to aid hydrogen fusion, improve matrix manipulation efficiency, and generate new antibodies."
Automation is advancing in science
Is there anything scientists do that can't be automated, now that the newest AI approaches have permeated practically every field of science? One promising candidate is "generative modelling", a method that "can help identify the most plausible theory among competing explanations for observational data, based solely on the data." Furthermore, and perhaps more significantly, "this would be without any preprogrammed knowledge of what physical processes might be at work in the system under study."
Generative adversarial networks (GANs), which were the strongest generative modelling systems a few years ago, now appear to have lost ground to transformer architectures. For instance, DeepMind's AlphaFold and AlphaTensor, two cutting-edge AI models for scientific research, are built on the transformer architecture.
Owing to its superior performance in tasks like image recognition and natural language processing, the transformer architecture has become more and more popular in generative modelling. This is mostly because of its capacity to capture long-range dependencies, which enables it to efficiently analyse and produce complicated data sequences.
Transformer models also produce results that are more consistently stable and reliable than GANs, which are prone to instability during training. Transformers are versatile as well, and simple to customise for particular applications.
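The long-range dependency capture mentioned above comes from the attention mechanism at the core of every transformer: each position in a sequence directly compares itself with every other position, no matter how far apart they are. Below is a minimal, illustrative NumPy sketch of scaled dot-product self-attention with toy dimensions; it is not the implementation used by any particular model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # similarity between ALL pairs of positions
    weights = softmax(scores, axis=-1)  # each row is a probability distribution
    return weights @ V                  # weighted mixture of value vectors

rng = np.random.default_rng(0)
seq_len, d = 6, 4
X = rng.normal(size=(seq_len, d))
# Self-attention: queries, keys, and values all come from the same sequence,
# so position 0 can attend to position 5 just as easily as to position 1.
out = attention(X, X, X)
print(out.shape)  # (6, 4): one mixed vector per input position
```

Because every pair of positions is compared directly, distance in the sequence imposes no penalty, which is what the prose above means by capturing distant dependencies.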
Agents of science
Thanks to improvements in AI methods, we've solved the protein-folding mystery, and we're now using this knowledge to develop malaria vaccines, tackle antibiotic resistance, cut down on plastic waste, and accelerate drug discovery. We've also broken a matrix multiplication record that had stood for 50 years, enabling faster AI workloads on current hardware. Not only that, but we've created a brain-computer interface that can convert attempted speech to text, allowing paralysed people to communicate successfully.
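The 50-year-old matrix multiplication record mentioned above traces back to Strassen's 1969 algorithm, and AlphaTensor searches for exactly this kind of scheme: ways to multiply matrix blocks with fewer scalar products than the naive method. As a concrete sketch, Strassen's classic construction multiplies two 2x2 (block) matrices with 7 products instead of 8; the code below is the textbook scheme, not AlphaTensor's discovered algorithm.

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using 7 multiplications instead of 8
    (Strassen, 1969). Applied recursively to blocks, this beats O(n^3)."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    # The 7 products (each entry below is one scalar multiplication):
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    # Recombine with additions only (additions are cheap relative to products).
    return np.array([[p5 + p4 - p2 + p6, p1 + p2],
                     [p3 + p4,           p1 + p5 - p3 - p7]])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(np.allclose(strassen_2x2(A, B), A @ B))  # True
```

AlphaTensor's record-breaking result is of the same shape: it found block-multiplication schemes using fewer products than previously known, e.g. improving on Strassen's count for 4x4 matrices in modular arithmetic.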
Given the tremendous potential that AI models possess, researchers think that we have only just begun to scratch the surface.
Open-source, self-learning models like Auto-GPT, which are capable of writing their own code, fixing their own defects, and reducing downtime, are already available. We are on the verge of seeing a large number of autonomous scientific agents that can perceive, reason, and act in accordance with objectives specified in plain-English prompts.
The best scientist: AI?
But there are also enormous hurdles to anticipate before jumping on the "AI for science" bandwagon. One, known as the "reproducibility crisis", surfaced when Kapoor and Narayanan examined 20 reviews spanning 17 research fields and identified 329 studies whose findings could not be fully reproduced because of problems in how machine learning was applied. In other words, the findings could not be confirmed even when the experiments were repeated under identical conditions with the same data.
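A frequent cause of such irreproducible results, and one Kapoor and Narayanan highlight, is data leakage: information from the test set quietly influencing training, for instance by computing preprocessing statistics on the full dataset before splitting. The NumPy sketch below uses synthetic data purely to illustrate the pitfall; the variable names and setup are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(loc=3.0, scale=2.0, size=(100, 3))  # synthetic features

# LEAKY: mean/std computed over ALL rows, including the future test set,
# so the test rows have influenced the preprocessing of the training rows.
X_leaky = (X - X.mean(axis=0)) / X.std(axis=0)
X_test_leaky = X_leaky[80:]

# CORRECT: split first, fit preprocessing on the training split only,
# then apply those frozen statistics to the held-out test split.
X_train, X_test = X[:80], X[80:]
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
X_train_ok = (X_train - mu) / sigma
X_test_ok = (X_test - mu) / sigma

# The two versions of the test data differ: that gap is the leakage.
print(np.allclose(X_test_leaky, X_test_ok))  # False
```

Leakage of this kind inflates reported accuracy in a way that survives rerunning the same code on the same data, which is exactly why the affected studies could not be confirmed.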