One thing I have picked up from speaking with many scientists lately is that most do not fundamentally grasp the kind of revolution that is upon us in the context of scientific progress. The reasoning that follows applies to almost all knowledge work. Many folks are still stuck thinking of AI as a data-analysis engine, a one-time inference engine for making predictions, or a chatbot that merely ‘coughs up’ the next token. Any frequent reader of my newsletter will realize that this simply isn’t so.
The proper, and more general, way to think about this revolution is that intelligence is getting cheaper. Put more fully: the history of humankind is the history of the artificialization of our environment, from the invention of agriculture to the Tesla Robotaxi, and everything in between. But now, for the first time in history, we are beginning to artificialize the very thing that allows us to artificialize our environment in the first place, namely, intelligence. It is only in this context that anyone can appreciate what is upon us.
We now have AI models that can ‘see’, ‘talk’, ‘write’, and control computers, robots, and more. The models are out there; what is left is the plumbing (engineering) work to apply this intelligence to various domains.
It is in this light that we talk about the agentic AI framework, in which we give AI access to tools and APIs so it can enact changes in the world the way a human would. We are now in the first innings of this evolution.
Andrew Ng is one of the AI experts who has articulated a useful way to think about agentic AI systems. He contrasts agentic workflows with zero-shot prompting (writing prompts in chat boxes): zero-shot prompting asks for an entire task to be completed in one attempt, while agentic workflows break it into iterative steps, allowing for continuous reflection and refinement.
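To make the contrast concrete, here is a minimal sketch, assuming the OpenAI Python SDK; the model name, prompts, and loop structure are my own placeholders, not Ng's. The llm() helper defined here is reused in the sketches below.

```python
# Zero-shot prompting vs. a simple draft-critique-revise (agentic) loop.
from openai import OpenAI

client = OpenAI()

def llm(prompt: str) -> str:
    """One call to a chat model; a stand-in for any chat-completion API."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

task = "Write an abstract for a paper on C. elegans lifespan assays."

# Zero-shot: ask for the finished product in a single attempt.
zero_shot_answer = llm(task)

# Agentic: draft, reflect, and revise over a few iterations.
draft = llm(task)
for _ in range(3):
    critique = llm(f"Critique this abstract for clarity and missing details:\n\n{draft}")
    draft = llm(f"Revise the abstract using the critique.\n\nAbstract:\n{draft}\n\nCritique:\n{critique}")

print(draft)
```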
There are several recurring features of agentic workflows. Reflection enables agents to review their own output and improve it incrementally. Tool use lets them draw on external resources for better performance and memory (e.g., access to defined databases and to the web). Planning is also crucial: agents develop and execute detailed strategies to meet complex objectives.
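Tool use is easy to illustrate. Below is a toy sketch (reusing the llm() helper from above) in which the agent may ask for a literature lookup before answering; the SEARCH/ANSWER protocol and the fetch_abstracts() function are made up for this example, and a real system would use a proper function-calling API and a real database.

```python
def fetch_abstracts(query: str) -> str:
    """Hypothetical stand-in for a PubMed or lab-database lookup."""
    return f"(abstracts matching '{query}' would appear here)"

def agent_answer(question: str, max_steps: int = 3) -> str:
    context = ""
    reply = ""
    for _ in range(max_steps):
        reply = llm(
            "Reply with either 'SEARCH: <query>' to look up abstracts, or "
            "'ANSWER: <text>' once you can answer.\n"
            f"Context so far:\n{context}\nQuestion: {question}"
        )
        if reply.startswith("SEARCH:"):
            # Execute the tool call and add the result to the agent's working memory.
            context += fetch_abstracts(reply.removeprefix("SEARCH:").strip()) + "\n"
        else:
            return reply.removeprefix("ANSWER:").strip()
    return reply  # fall back to the last reply if the step budget runs out

print(agent_answer("Which genes are most often linked to C. elegans lifespan?"))
```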
We also have the framework of multi-agent collaboration, where different agents work together, each bringing a specific skill set to solve more challenging problems. This technology is evolving quickly, with new philosophies and techniques being actively introduced. And indeed, there is no shortage of movement in the market on this front, from Salesforce's autonomous digital agents and Microsoft's autonomous AI agents to the wildly popular Replit Agent, a text-to-app agentic AI tool.
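A multi-agent setup can be sketched in the same spirit (again reusing llm() from above); the roles, prompts, and two-round exchange are illustrative only and not any particular vendor's framework.

```python
def run_agent(role: str, task: str) -> str:
    """An 'agent' here is just a role-conditioned call to the model."""
    return llm(f"You are {role}.\n{task}")

designer = "a molecular biologist designing experiments"
reviewer = "a statistician reviewing experimental designs"

proposal = run_agent(designer, "Propose an experiment to test whether gene X affects lifespan.")

# The reviewer critiques, the designer revises, for a couple of rounds.
for _ in range(2):
    review = run_agent(reviewer, f"Point out flaws in this design:\n\n{proposal}")
    proposal = run_agent(
        designer,
        f"Revise the design to address this review.\n\nDesign:\n{proposal}\n\nReview:\n{review}",
    )

print(proposal)
```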
I am currently on the final lap of my postdoctoral training, and have begun exploring positions in computing within the life sciences, with a focus on leveraging AI. I also spend a lot of my downtime building apps that could potentially help scientists and other knowledge workers. So naturally, I have been thinking hard about how AI, being a general-purpose technology, can help push forward the sciences.
One thing any scientist thinking about this revolution will quickly realize is that there are constraints: for biologists, say, your tissues or C. elegans still need the same amount of time to grow. However, properly oriented, AI can still do a great deal despite these constraints. As stated earlier, (an important kind of) intelligence is now getting cheaper, and we can begin to deploy it in ways that speed up the entire process even given those constraints. (As a case in point, in one of my ongoing projects I used LLMs to read and intelligently process thousands of paper abstracts within a few minutes, and for less than the price of the cheapest Starbucks coffee out there.)
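Here is a sketch of that kind of literature-triage pipeline (not my actual project code), assuming the OpenAI Python SDK and pandas; the model name, file name, column name, and labels are all placeholders for whatever corpus and screening question you have.

```python
import pandas as pd
from openai import OpenAI

client = OpenAI()

def screen(abstract: str) -> str:
    """Ask a small, cheap model whether one abstract is relevant."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any inexpensive chat model
        messages=[{
            "role": "user",
            "content": "Label this abstract RELEVANT or NOT_RELEVANT to aging in "
                       f"C. elegans, and give a one-line reason:\n\n{abstract}",
        }],
    )
    return response.choices[0].message.content

abstracts = pd.read_csv("abstracts.csv")["abstract"]  # e.g. an exported PubMed search
results = [screen(a) for a in abstracts]
print(results[:5])
```

With a small model, the token cost of screening a few thousand abstracts this way is typically well under a few dollars, which is the point of the coffee comparison above.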
How do we come up with experiments to run? How do we run these experiments? What about hypothesis generation and result interpretation? Over the next few years, many of these processes will be (semi-)automated, which will drastically increase the pace at which we do science, and hence the rate at which we make important discoveries. (See this well-written review on biomedical AI agents by Marinka Zitnik and co-workers.) For a less technical and broader treatment of the topic, see Dario Amodei's now-popular essay, particularly the section on biology and health. For evidence of what is possible in this brave new world, see Future House's impressive body of work.
Friends, the question is no longer whether AI will transform scientific discovery, or any knowledge work for that matter, but how quickly we can adapt our institutions and practices to harness this new paradigm of artificially augmented intelligence.
To finish up: whenever I write only positive things about the new world of AI that is upon us, I feel the need to balance the narrative with the fact that there is more to the story. Overreliance of knowledge workers on AI agents, especially when those agents are not foolproof, comes to mind. Arguably, though, there are much, much bigger problems than that. And the person who, to my knowledge, has done the most impressive and much-needed work articulating this possible dystopian world is Tristan Harris.