Conditioned on AGI,

NB: I’m interning at OpenAI this summer. This post was written before the start of my summer internship, is based on zero privileged information and is only a product of my own thoughts and reasoning.

“I know exactly what I have to do,” says Will, and pulls up a photo (see: My dad and David Bowie in Greece 1988). “I can’t move forward until I’ve done this.” Like him, I can’t move forward with my life and ideas until I’ve written this out.

I’m going to make an assumption here. If you disagree, I would love to argue with you about it later (reach me at tarark@mit.edu or on my Twitter), but for the sake of this post, let us condition on the following statement: we are capable of building AGI (by my definition, a system that can do 95% of humanity’s intellectual labor better than or equal to at least 95% of the human population) in roughly 1-2 years. Conditioned on this, what do we do now?

There are two ways I see this world progressing. Currently, I hedge my bets on ii.

In this write-up, I’m going to explain:

Automating meaningful AI research will take longer than 1 year.

There are many ways we can enhance research. Using LLMs to search over long contexts (like a pile of research papers), learn about topics at the level of a graduate student, implement experiments, do software engineering, and greedy-search over minor tweaks (worth noting that I think the advancement resulting from minor tweaks has been nearly negligible over the last few years) all seem very plausible to me. In fact, LLMs would be a great tool for that.

However, most meaningful jumps (~0.5 OOM) have come from a small group of people with years of experience and intuition taking a step back, understanding where models are failing, and connecting the dots between two concepts. I’m not claiming that LLMs will be incapable of doing this. I’m claiming that this capability lies at least 3-4 years ahead of current models.
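For a sense of scale (this is just my arithmetic on the ~0.5 OOM figure, not a claim from any lab): an order of magnitude is a factor of 10, so half an order of magnitude is a factor of 10^0.5, roughly 3.16x.

```python
# An order of magnitude (OOM) is a factor of 10, so a jump of
# k OOMs corresponds to a multiplicative factor of 10**k.
def oom_to_factor(k: float) -> float:
    return 10 ** k

print(f"0.5 OOM = {oom_to_factor(0.5):.2f}x")  # about 3.16x
```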

ASI is very unlikely to be publicly accessible.

This is not a hot take. I put extremely low probability on a world where an AI lab has developed extremely powerful superintelligence, capable of fully automating research, and is able to productize it without the government stepping in. ASI would be one of the most powerful weapons a nation could hold and would be kept under strict security, with only privileged access. So no, I do not think that ASI will directly impact the life of the average person.

Conditioned on the last two points, society and the public will be, and will remain, exposed to AGI for a notable amount of time, if not forever. There is a large subset of human tasks and workflows that do not require extremely specialized domain knowledge or a huge amount of compute to accumulate knowledge, but are doable by good general intelligence. It is economically favorable for labs and society to capitalize on this. The release into the world of models capable of reasoning over extended domains, chains of steps, resources, and time will be a phase change. Marginal public releases of model improvements leading to ASI will decrease, and those improvements will instead happen privately within labs (probably with government supervision). Product teams will be hard at work creating better scaffolding and lowering the cost of test-time compute.

The structure, guardrails, deployment setup and design surrounding AGI matters.

No technology has ever been let loose on the world with zero structure. How do we safely incorporate LLMs into our processes? How do we safely incorporate agents into our processes and our world, and what do guardrails for these new agents look like? How do we structure human oversight, and what do we consider trustworthy oversight? What are the human wishes and objectives that we need to bake into the architecture of this new world? What are the core human criteria and characteristics that we need to account for? And which of these questions are important and grounded in reality, versus which will be taken care of by the vast product market and by innovation and iteration?

This is a good covering subset of the questions I find important in a world with AGI, and the ones I spend my time thinking about. I think people should work on them, think about them, or at the very least have their answers ready. To me, it seems like not enough people worry about making AGI go well. Being opinionated about the deployment setup of AGI and making good design choices is crucial to a good future for humanity and society. If you disagree with me on anything I wrote, or would like to bring up a point, I would love to hear from you.