
AI and our System Reshuffle (Ep. 282)

My conversation with Sangeet Paul Choudary

Sangeet Paul Choudary, bestselling author of Platform Revolution and Reshuffle, and senior fellow at UC Berkeley, joins the show to challenge the conventional wisdom about AI's impact on our economy.

We explore why knowledge workers risk falling "below the algorithm," how curiosity and judgment become luxury goods in a world of cheap answers, and why our educational and career structures need complete reinvention rather than incremental reform.

I hope you enjoy this conversation as much as I did. We’ve shared some highlights below, together with links & a full transcript. As always, if you like what you hear/read, please leave a comment or drop us a review on your provider of choice.

— Jim

Subscribe to The OSVerse to receive your FREE copy of The Infinite Loops Canon: 100 Timeless Books (That You Probably Haven’t Read) 👇👇


Links

Apple Podcasts

Our Substack is growing! Subscribe below for more brain-tickling content designed to make you go, "Hmm, that’s interesting!"


Highlights

The Container’s Revolution

Sangeet Choudary: “…the container is the perfect example of a technology that is not intelligent, and yet has all the properties of how technology transforms the entire economy. So when the container first came in … the logic of shipping was structured around break bulk cargo, which essentially meant that cargo was not standardized and it had to be manually loaded and unloaded from vessels by dock workers. Because of that, the waiting time at ports was quite high.

And that in turn meant that trade was unreliable, shipping was unreliable... The first order effect that it had was that it standardized cargo and that enabled automation of ports so that crates could be lifted on and off ships through cranes. And so if we were to look at the first order effects of containerization, we would think that it was port automation. But if we stopped at port automation, we would miss out on all the other things the container unleashed. Because the impact on ports was really the most localized effect of the container.

The real value of the container actually got unlocked when trucks, trains and ships agreed on a common format, a common size and standardized structure of the container so that the container could be moved across different forms of transport and move seamlessly from source to destination. Now, that one single thing—the standardization of the container across modes of transport, combined with a unified contract to move things from source to destination—suddenly made logistics and shipping end-to-end reliable.”

Asking Good Questions

Sangeet Choudary: “But the other way that you can really differentiate yourself, especially as a knowledge worker, is to shift your focus away from the knowledge work that is getting commoditized and try to think of the complementary skills that are still not commoditized and in fact become increasingly valuable as a particular component gets commoditized. Value often migrates to its complements. So look for those complementary skills. And a simple rule of thumb to think about it is that when answers become cheap, as they do with an LLM and we can generate answers on the fly, asking the right question is the new scarcity. So having that curiosity matters—and curiosity is very directed exploration. Asking the right questions is the new scarcity. And the reason for that is that when answers are cheap, everybody can generate answers. Which then means that if you're not asking the right questions, you're going down the wrong rabbit hole. And that increases the opportunity cost of exploration, versus if you're asking the right question, you're actually benefiting from compounding, because with cheap answers quickly developed, your ability to progressively ask better questions constantly improves.”

Career Progression in the Age of AI

Sangeet Choudary: “…when we think about the experience for young workers, we are thinking about that within the structures we have today, which is a linear career path up the pyramid inside a specific kind of firm, which then itself is predicated on the assumption of a single bundled degree that is given after four years of education and assumes all of that exchange value of the degree is captured in a full bundled job. And what I believe we will increasingly see—and I believe that's the only way to counter the uncertainty that lies ahead of us—is that this idea of the bundled degree, bundled education, bundled job, to harvest it all, all of that will have to be completely unbundled and we'll have to figure out a new way to think about what career progression looks like. The boundaries between what is a learning opportunity and what is a job opportunity will also increasingly dissolve. Today we go to a specific place to learn and we call that a college, a university, or maybe evening classes. And we go to a specific place to earn. And those distinctions, I believe, will increasingly start breaking down also because the clock speed of change for many of our learning institutions is very slow.”

What is Valuable Now?

Sangeet Choudary: “…those of us who can potentially shift and learn and adapt and figure out how to be curious, curative, have better judgment and so on — even we can get caught in what I call the "vibe coding paradox," which is just because execution is cheap and easy and just because you can see output, you just keep executing, you just keep doing more. And that can be a challenge because if you're stuck in running cycles of execution, you remain blinded to what now becomes valuable. Because the fact that everybody can execute means that something else is going to have value. So you need to understand what that value is.

Just a simple example: if in the past you wanted to write for publications, the time you had to write was the bottleneck. Today that is not necessarily the bottleneck, because AI can help you write really fast. The bottleneck is then having access to those publications, which were bottlenecks in the past as well, but they become even bigger bottlenecks today, when everybody is competing for the attention of the same publications. So I think the first point is that we should be careful about getting stuck in the vibe coding cycle, or the vibe execution cycle, if you will.”


Reading List

  • Platform Revolution; by Geoffrey G. Parker, Marshall W. Van Alstyne, Sangeet Paul Choudary

  • Reshuffle; by Sangeet Paul Choudary

  • The Hound of the Baskervilles; by Arthur Conan Doyle

  • Why Greatness Cannot Be Planned; by Kenneth Stanley and Joel Lehman


🤖 Machine-Generated Transcript

Jim O'Shaughnessy: Well, hello, everyone. It's Jim O'Shaughnessy with yet another Infinite Loops. I have been looking forward to talking with Sangeet Paul Choudary, the bestselling author of Platform Revolution and Reshuffle, and a senior fellow at UC Berkeley. He's written in Reshuffle that the question of who's going to win the AI restacking of our knowledge economy is one that too many of us are not asking the right way. We often hear the cliche: AI isn't going to take your job; a human using AI will. That's a cliche, and it's missing the bigger picture. Sangeet, welcome.

Sangeet Choudary: Thank you, Jim. Looking forward to this conversation.

Jim O'Shaughnessy: So, if you don't mind, for our listeners, take us through your thesis. It's very different than what you hear from a lot of people today. I personally find it very compelling. But if you don't mind, let's go through what you think is going to really unfold and where AI is going to really make a big difference.

Sangeet Choudary: So my fundamental thesis is that we often think of the impact of AI as speeding up tasks, helping us do things faster, better, cheaper. But my key point is that there's another way of looking at how AI's impact will actually unfold. AI's impact will actually unfold through changing our systems at every level, changing how our industries function and on what basis firms compete and differentiate themselves. That in turn will change how firms organize towards these new forms of competition. And in response to new forms of organization, the work that gets done within firms will change. And that is actually what will change our jobs. Not so much AI's impact on the tasks themselves. And the underlying point that I make is that we are too obsessed with AI as an alternate form of intelligence.

And we try to keep looking for signs of cognition, the metaphors we use—a PhD in your pocket, et cetera. And even this quest towards artificial general intelligence. We often miss the fact or miss the point that AI at the end of the day is another technology. And the way to think about technology is something that is optimized towards achieving a particular kind of outcome. The examples that I take in the book very deliberately are examples of not-so-intelligent technologies fundamentally changing the entire economy. And that's the key idea—that the impact is not so much in terms of intelligence being infused into tasks, making them better, but AI as a technology rewiring our entire economy.

Jim O'Shaughnessy: Yeah, and I was struck by the idea that when we look at AI as kind of a brand new technology, we tend to bring our former priors with us. And you make the point that what AI is probably going to be really good at is coordinating across very messy and fragmented workflows. That's going to change who gets power, that's going to change the way the entire organization is organized. Who captures value, who doesn't capture value anymore. You use the example of the containers—everyone listening or watching knows about container ships. And what I didn't know, really before I read your book, was that they essentially created a new operating system for trade. Let's talk about that a little bit, and then how we can apply that analogy to other industries using the items that you chat about in your book.

Sangeet Choudary: Absolutely. The container metaphor is really what inspired this book. I was looking for examples of dumb technologies and how they rewire the economy. And very often in our current way of thinking, we look for something digital. But the container is the perfect example of a technology that is not intelligent, and yet has all the properties of how technology transforms the entire economy. So when the container first came in, prior to the container, the logic of shipping was structured around break bulk cargo, which essentially meant that cargo was not standardized and it had to be manually loaded and unloaded from vessels by dock workers. Because of that, the waiting time at ports was quite high.

And that in turn meant that trade was unreliable, shipping was unreliable, because you could never really trust what the exact time would be. When the container first came in, the first order effect that it had was that it standardized cargo, and that enabled automation of ports so that crates could be lifted on and off ships through cranes. And so if we were to look at the first order effects of containerization, we would think that it was port automation. But if we stopped at port automation, we would miss out on all the other things the container unleashed. Because the impact on ports was really the most localized effect of the container.

The real value of the container actually got unlocked when trucks, trains and ships agreed on a common format, a common size and standardized structure of the container so that the container could be moved across different forms of transport and move seamlessly from source to destination. Now, that one single thing—the standardization of the container across modes of transport, combined with a unified contract to move things from source to destination—suddenly made logistics and shipping end-to-end reliable. And what that ended up doing was that the entire logic of the industrial economy that was structured around the unreliability of shipping now got undone. So there were two or three very specific ways in which this had an impact. The first thing was that the entire logic of manufacturing was structured around vertically integrated, locally co-located facilities, because shipping was not reliable.

And so you could not get stuff made in China and then assemble it somewhere else. What the container did is that it unbundled manufacturing so that the product could now be broken up into components, and different components could be made in different parts of the world and assembled together for the final product. What this then did was it enabled component-level competition. Because before this, the competition was largely at the level of the entire product. But now that companies could build components and plug into global trade, companies had to compete at the level of components. And the externalities and additional effects that this unlocked were that as component competition improved and the performance of components improved, product innovation improved, because you now had better components and the ability to assemble them in new ways to eke out new performance gains.

And so the entire manufacturing sector transformed because of this. Distribution transformed because prior to the container, you needed supply buffers; you needed stock to be stored with middlemen. Post the container, you could now have just-in-time manufacturing, you could have faster, responsive supply chains. And so those are really the second and third order effects of the container that got unleashed. And eventually globalization came about because of the container. Where countries were located on supply chains determined how competitive they were, how much negotiating power they had, and all of that got unleashed ultimately because of that shipping container. So the key point that I'm trying to make with this example is that a technology as dumb as the container reshaped the entire economy at every level. It changed jobs not in ways that were anticipated.

Dock workers losing jobs might have been anticipated, but middlemen losing jobs in retail may not have been anticipated, because that was the second order effect of what happened. So it changed jobs, it relocated jobs, it changed organizational structure because vertically integrated structures unbundled, and it changed how companies and countries competed. And that's my key point—this is true for almost any technology when you look at how these effects unfold, but with AI coming in, these effects play out in even more interesting ways. Because the fundamental factor that enabled these effects with the container was that the container was a technology of coordination. Only when multiple forms of transport agreed to the same container structure and a standardized size, and then also agreed to unified contracting, did these effects get unleashed.

And what AI does today, as you mentioned, while framing the question—a lot of our work, which we think of as knowledge work, relies on tacit knowledge, which is not explicitly codified, explicitly structured. And so coordinating that kind of work, that kind of knowledge work is still very messy, very human, very slow. But what AI does is it takes in fragmented information, fragmented sources of unstructured information, and creates a clear view of any domain, which then allows multiple stakeholders to coordinate their activities around that clear view. I take many examples of how that plays out, but I'll pause here if you'd like to take this in any specific direction.

Jim O'Shaughnessy: Yeah. The first thing that sprang to mind is essentially you're looking at it from a systems theory point of view, which is one that I vastly prefer because it allows you a completely—I call it "god view." You're able to see and anticipate hopefully some of those second and third order effects from it happening. But in your thesis you basically use the term "coordination without consensus," which I love. But then I think about our dear friends over in the EU and—law beats logic. I wish it didn't, but it does. What do you think will happen to your thesis? Like without—absent our good friends in the EU and their over-legislation of everything. Just my opinion. I could see this system working beautifully. What do you say when formal guardrails are created that muck up the AI? What—do we just adapt to that? Or is that going to be a significant bottleneck as we try to unleash these systems for better design?

Sangeet Choudary: Yeah, I think there are two different ways to answer the question. One is just from a systems and technology perspective and one is more from a geopolitical posturing perspective and where we are today versus where we were 10 years back. If I take the former—just for the benefit of listeners, I'll define what I mean by "coordination without consensus." My key point is that traditionally if you wanted coordination, you needed multiple parties to agree. So that's coordination with consensus. And the shipping container is a classic example. Trucks, trains and ships had to agree and there were historical events that drove that consensus and gave us what we have today.

The other way that coordination has happened is through market power, which is a Facebook or a Google or an Apple leveraging a significant technological shift to capture market power at scale and then enforcing coordination on everybody else just because they have bottleneck access to the market. When we think about coordination without consensus, my point is that with AI we now have an additional way to coordinate among different parties which does not necessarily require market power. Because the more you move away from consumer markets, and this is not just a consumer versus B2B distinction, but really the more you move away from standardized end-to-end use cases across hundreds of millions of users towards more fragmented, complex markets where coordination is more complex—take the construction industry, logistics industry and so on. In these markets, market power does not work.

And secondly, coordination with consensus does happen, but it really does not go beyond very basic agreement on some data standards. While there might be coordination around how we share the information, there won't necessarily be coordination at other levels that would really unlock value because eventually everybody wants to protect profit pools within these sectors. And so what we typically see is market power is not sufficient coordination because multiple players have market power and the market is fragmented and coordination with consensus is not possible. What AI does in this case is that a lot of these workflows in such industries rely on different forms of inputs and outputs. Take the construction industry, for example. The construction life cycle is very fragmented. Designers, engineers, contractors work at different parts of the life cycle and work on very different tools.

But when you use AI as a way to manage this end-to-end life cycle, you can actually take inputs and outputs from different parts of the life cycle. You can take model building designs, you can take PDF markups on which contractors are making their changes, architects are making their changes, and you can take real-time project plans and feed all of that into a single system which can make sense of all of it. So really the ability to work with unstructured information—because in the past you needed structured information and that was achievable through consensus. But AI can be trained on unstructured information and that's where coordination without consensus comes in. So to get back to your question, I know that was a long prelude, but—

Jim O'Shaughnessy: No, that is very necessary.

Sangeet Choudary: Yeah. So the idea of coordination without consensus then essentially leaves us at this place that you have three different ways to unlock value in these ecosystems. You can do that through consensus. And there are challenges with that as well, because even in a coordination with consensus model, you could have collusion where you could have a few large players work together to capture the entire market and lock it away from small players entering in. It does not have to be explicit holdups. It could even be scenarios where the only way to enter the market is through expensive litigation and small players just don't have the appetite to do that.

We've seen that, for example, in the handset industry with Qualcomm and ARM and Apple and Nokia on the other side, and how that's essentially prevented small players from entering that kind of a market. So my point is that coordination always involves some kind of power. Coordination without consensus, to a large extent, I would position it as a way to enable smaller players to operate on a nearly even footing with some of the larger players in entering the market. Because you can enter the market without requiring necessarily consensus, and hence you don't necessarily need the larger players to work with you. You can target their customers and their users without necessarily having to work with the larger players directly, if you can ingest outputs from their tools from their workflows into your system.

And that's what we are seeing in the construction industry where you have startups coming in which are just ingesting outputs from AutoCAD, Procore, large players and then trying to make sense of the end-to-end project and helping designers and contractors make decisions on the basis of that. So one point to the regulators would be that coordination without consensus, yes, it needs to be regulated in some way, but it offers unique counter-positioning benefits in terms of how smaller players can actually play games that larger players cannot easily copy without hurting their existing profit pools.

And Autodesk is currently in that kind of a bind because if it were to—its whole profit pool today is structured around getting more tools connected to its own ecosystem, whereas a small player coming in would want a coordination without consensus across totally unconnected tools. Bring your own tool and we'll help you manage the end-to-end construction cycle. Anyway, so that's one way I would position it. The second way, and a much shorter answer here, is that I think compared to where we were in 2015 when this regulation against platforms really kicked off in the EU, the industry in the EU, I believe, to a large extent is a bit tired of lagging behind the US and China for that matter.

So while EU regulation is still out on the prowl to that extent, there's a counterbalancing effect that I increasingly see from the industry: they want things to be less regulated and more pro-industry in the EU.

Jim O'Shaughnessy: That is—I do too. But I sometimes remind myself that I don't want hope to triumph over experience. And it's one of the things that I've seen in my career as an investor and now as somebody pursuing multiple vertical opportunities, from books to films to podcasts, et cetera. The idea does strike me that you have another wonderful phrase: "treasure maps, not shovels." But can the map lead us down to a cartographic dark pattern? In other words, could we see algorithmic tacit collusion without meaning to? That's kind of my first thought as an investor. I jotted notes down, and after going through your work and reading your Substack and your book, I said to my team, we should be looking for fragmented industries where this sort of system could really be a great offering.

And then the pushback on that was, well, we're going to see—to take an example, look at what's happening in the world of IP right now. Let's take art, let's take Getty suing Stability AI for using their images. It seems to me that when you look historically at really big, massive innovations, one thing that always comes along with that is the old winners, the old dominant players do everything in their power to scuttle it. For example, I mentioned the suit, but let's go back further. The movie industry, when VHS came out, do you know that they did everything in their power to squash VHS? Everything. They lobbied, they got laws passed, they did all that. Of course, we know what happened. It ended up actually being one of the best things that ever happened for their industry.

But the point remains, the first reaction of the established players often is panic action through legal or other means. What are your thoughts on that and how would you address that as someone advocating for coordination without consensus?

Sangeet Choudary: I think there's no denying the fact that one of the most common responses is actually pushback and legal action and so on. Yeah, you're right that whenever there's disruption, a significantly new way of doing things, you have the incumbents respond with litigations, with pushback. I think one of the opportunities that we see today, and it's important to make a distinction here. One of the opportunities that we see today is that when there's a significant technological shift, sometimes it becomes very difficult for incumbents to replicate the new dominant design that emerges out of that shift. And a classic example of this is what happened with the iPhone, for instance. And it was really—none of the incumbents actually won because it was a fundamentally new design. You see this repeatedly, for example, I recently talked about this, but you see this repeatedly with e-commerce as well.

Whenever a new e-commerce player figures out a new way to capture customer demand data, it re-architects its entire fulfillment system around the logic of that data. So I take the example of Netflix versus Blockbuster, or Amazon versus retailers. Netflix was able to create this fundamentally new model where it could serve local demand with national supply, because it had data from across the country and knew where it needed to move DVDs, and Blockbuster did not have that. So it could only serve local demand with local supply, which was a fundamentally less efficient model with a higher cost of idle DVDs.

So one point that I'm trying to make is that when we are at a point where the new player can create, can benefit from an architectural advantage that the old player or the incumbent simply cannot copy, the rate of growth at which the new player takes off and the ability to then attract funding and then lobby to freeze or move regulation in its favor could actually give it a form of escape velocity that a traditional player or the incumbent fighting more of a zero-sum game cannot easily win. So I would look for opportunities where there's a fundamentally new architecture, a new dominant design that's emerging and which is gaining that kind of escape velocity to take off. And I think that's one lens that I would apply to it.

The second lens that I would apply to this is that there's a fundamental tension between the old producers and the new coordinators. So value increasingly is shifting away from asset-linked production to coordination of value unlinked from assets, or unbundled from assets. A simple example is news moving from news companies, or media companies, to Facebook and Google, or in this case from somebody like Getty Images to an AI company. Again, I think that the history of value capture shows that in these periods of change, there's always a point where you have a "barbed wire moment" where you can fence off what was traditionally either commons or protected by an older paradigm. And because laws have not evolved to a certain extent, you can move things in your favor.

And so with The Getty Images vs. Stability AI kind of a model, I think it's a repeat of Qualcomm versus iPhone, where one court had ruled in Qualcomm's favor and the other court overturned it and ruled in Apple's favor. And so essentially both players are well moneyed enough to keep fighting and keep posturing and then settle outside court rather than end up with binary settlements. So I think depending on how well capitalized the two players are, we'll see those kinds of battles.

But because these moments of structural uncertainty are difficult to resolve doctrinally, I would expect that the steady state would be that these kinds of lawsuits would persist, moving from one court to the other, while the players themselves figured out how to share the spoils outside in a way that works out for both of them to some extent. That's what we've seen with Qualcomm and ARM versus all the other handset manufacturers. So that's the basis on which I'm basing this observation.

Jim O'Shaughnessy: So the—another adage in investing is that pioneers get arrows in their back and it's the people who are second off the boat who see that and think, "Huh, maybe we better try a different strategy here." The reason I'm so interested in this is because I think your thesis is correct. I think that the coordinating powers of AI are going to truly revolutionize every industry, particularly creative industries that formerly have relied very heavily on monetizing IP, et cetera. If you look at our verticals at O'Shaughnessy Ventures, you'll see that we have a vertical in each one of those stacks because I saw an opportunity to literally get into these industries using a completely different model. I always loved Bucky Fuller saying that models are really hard to change from inside. Why don't you just invent a new model?

And I kind of look at it this way—focusing on the map, not the shovel. Again, we are completely simpatico on that. But I always try to steel man the opposite argument to anything that I believe because we are confirmation bias machines and we tend to—not intentionally, but we tend to overlook arguments that really are rather destructive to our thesis. And one of the things that we're going to do here, once our entire AI lab is up and running, is to just auto-generate null hypotheses because it's kind of like—and we'll publish them to a database that anyone can have access to because it's just part of human nature to not seek out null hypotheses, right?

If you look at the grant-making process, everyone has a positive thing to prove and that's fine, but it limits our ability to learn via negativa. And if you're a Sherlock Holmes fan and you've read The Hound of the Baskervilles, you know, the only reason that Holmes knew that the intruder was known to the family was because the dog didn't bark, right? So if you don't learn to think that way, you're not going to be as good as old Sherlock. But it also brings us into the world of pricing risk, right? A lot of these new systems are going to be—just because of the very nature of what you're attempting to do, probably there are going to be a lot of risks involved in them that are unforeseen.

What did Rumsfeld say—unknown unknowns? So we're probably in an era right now of a lot of unknown unknowns. You have a thesis on who might emerge as power players here that involves insurance companies. Talk to us a little bit about that.

Sangeet Choudary: Yeah. So there's a specific thesis there that when systems change, there are typically three forms of constraints that emerge or three forms of bottlenecks that emerge when systems are in flux. So one is that what was previously scarce becomes abundant. And that we see with technology effect quite broadly. We see that with AI on knowledge work. The second is that the mechanism of coordination changes. What was manually coordinated becomes algorithmically coordinated, and that then leads to creation of fundamentally new coordination gaps and so on. The third piece, which is really where the insurance piece comes in, is that when systems change, new forms of risks emerge, and certain forms of risks that were unaddressed in the past could be effectively addressed, but new forms of risk start emerging.

So the example that I take specifically is that of consulting firms. Everybody today has an opinion on the end of consulting firms because of AI. And very often we take one of two extreme positions: consulting firms don't just do what ChatGPT does, so they will always exist, or consulting firms do only that, and so they'll collapse. But a common point here is that consulting firms fundamentally do two things. They provide solutions and they bear the risk associated with that solution in some way.

And it's the combination of the two that gives them pricing power with the client, because the extent to which they can bear that risk—either through brand power or through some other mechanism where they are on the hook—gives them the ability to retain the relationship with the client, which then gives them longer-term value from the client. So the point I'm trying to make is that when you take a consulting firm as an example, these are the two real sources of premium that they capture. And with AI coming in, one of these sources of premium—the delivery of solutions—could be abstracted away from them and created and supplied at industry scale by AI tool providers.

And the more those tool providers learn across the industry—much as Google Maps has learned from everybody using mapping services on top of it and improved its underlying navigation capabilities—the more these tools will improve their ability to deliver solutions across a wider range of use cases, capture more of that capability into their workings, and take away the delta that the consulting firms add on top.

And then the logic goes that, well, consulting firms can still price for the risk. Which is where my other point comes in: if pricing for risk is really what's being talked about here, then the fact that the solution is getting standardized by the underlying tool provider allows risk to be priced more effectively, and allows risk pricing to move to infrastructure rather than being relationally priced—which is how brands and relationships manage risk today. And if risk pricing moves to infrastructure, that's a classic case for an insurance play. I take the example of how this has happened in agriculture already, where a lot of the variance associated with different types of soils and weather patterns is already captured at infrastructural scale.

Something that was previously known only to local experts and to the farmer is now more widely and predictably known to infrastructures like Climate Corp. and so on. And so insurance companies can now work directly with them and capture the value associated with pricing the risk. My proposal is that something similar could play out in knowledge work, as more of the performance of knowledge work gets standardized and captured into these tool providers. So that's the key idea about how value could shift at industry scale towards new forms of insurers—not today's insurers necessarily, but new forms of insurers working closely with tool providers, potentially with insurance solutions being put forward by the tool providers themselves.

Jim O'Shaughnessy: Which led to another fascinating part of your thesis: that people, especially knowledge workers, want to be performing above, not below, the algorithm. So I'd like you to chat about that for a minute. But you also published a piece that I loved about what happens if humans become luxury goods, and what you judge them against. You list curiosity, curation and judgment. So let's take those in turn. Let's give our listeners and viewers some advice for how a knowledge worker—and by the way, most of the people listening to this are knowledge workers—can stay above the algorithm. And then let's dive into this idea of humans essentially becoming luxury goods and chat about that for a bit.

Sangeet Choudary: Yeah, I love talking about these two topics, because I framed this idea of above the algorithm versus below the algorithm around eight years back while looking at the gig economy, to determine whether Uber was enabling a new generation of entrepreneurs or figuring out a new way to exploit labor at scale. One of the key distinctions there is this idea of algorithmic coordination. In any algorithmic system, there are players that work above the algorithm and build those systems, and there are players that work below the algorithm and are managed and coordinated by those systems. Uber is a classic example. At Uber, a data scientist is essentially a form of capital allocator.

He's making choices and decisions about how the entire market should perform, and those decisions will directly determine what kind of returns the company gets on the operations it runs using that infrastructure. And then there are drivers and delivery riders who are constantly being managed by these algorithms. The reason they are managed by the algorithms is primarily that their overall job has been modularized to a very specific, standardized task: take somebody from point A to point B. All the knowledge work associated with the job—primarily, how do I navigate the roads?—has been moved away from them, because with Google Maps there's a flattening of skill. Everybody can now navigate with the same effectiveness, even if they are new to the city.

And so the traditional advantage that a cabbie with 18 years of experience had goes away. People often talk about automation versus augmentation, with augmentation cast as the good thing, but what we see in this case is that augmentation actually levels the playing field and flattens the skill premium. And if, after augmenting, the remaining human-performed task—in this case, moving from point A to point B—is standardized and commoditized so that the performers, the workers, are fully interchangeable, which is actually the case with Uber or Deliveroo or DoorDash or any of these companies, then at that point they are working below the algorithm: fully interchangeable, fully commoditized.

I want to call out that this is not an issue of marketplaces versus traditional businesses, and my argument is not about full-time employees versus contractors. The argument is centrally about the fact that your task, your job, is reduced to a commoditized, modular, fully interchangeable task, at which point you as the worker become fully interchangeable. That is when you are fully working below the algorithm, with very little pricing power or agency. And while we've seen Google Maps do this to anything that involves navigation, with AI coming in and gradually improving over time, we might see similar effects on knowledge work as well.

We already see this: many studies have shown that less skilled, less trained workers get a bigger delta in work performance when augmented with AI than better trained, higher skilled workers do. Which essentially means that over time, if more of the performance of knowledge work gets absorbed into the machine, and the augmenting, quote-unquote, human in the loop is only performing modular, interchangeable tasks, those humans could effectively be coordinated not through traditional managerial coordination but through this kind of algorithmic coordination, where they're constantly interchanged. At that point you become a below-the-algorithm worker. So as a knowledge worker, you need to be really watchful for this particular thing. The automation-versus-augmentation binary does not apply. It's not just automation taking your job and augmentation helping with your job.

Augmentation could actually take away your pricing premium and make you a fully interchangeable, below-the-algorithm worker. So what's important, as the knowledge component of knowledge work becomes increasingly commoditized, is that instead of trying to run a race against the machine, what I propose in the book is that you should be looking at doing two things. The first is to really look at how the system around you is changing and see what's breaking in it. Assuming risk in the system could be one of those things; providing new coordination solutions could be a second.

But the other way you can really differentiate yourself, especially as a knowledge worker, is to shift your focus away from the knowledge work that is getting commoditized and look for the complementary skills that are still not commoditized—skills that in fact become increasingly valuable as a particular component gets commoditized. Value often migrates to complements. So look for those complementary skills. A simple rule of thumb: when answers become cheap, as they do with an LLM that can generate answers on the fly, asking the right question is the new scarcity. So having that curiosity—and curiosity here is very directed exploration, asking the right questions—is the new scarcity. And the reason for that is that when answers are cheap, everybody can generate answers.

Which then means that if you're not asking the right questions, you're going down the wrong rabbit hole, and that increases the opportunity cost of exploration. Whereas if you're asking the right question, you're actually benefiting from compounding, because with cheap answers quickly developed, your ability to progressively ask better questions constantly improves. And so I believe that this kind of targeted exploration, which I'm broadly calling curiosity, is going to become more of a luxury good. We have been trained to generate answers; we have not necessarily been trained to ask very good questions. Even today, when answers are expensive to produce, those who ask good questions typically hold good positions in the economy. And that distribution is going to skew even further towards those who ask good questions.

Now, the other complementary piece to cheap answers is knowing which answers to choose and which answers to discard. And that's curation—knowing what you should act on and what you should reject. And in order to do that, you need to have what I call taste. You need to have the ability to choose and curate, and you need to be theoretically sound. You need to understand a particular domain well, not in terms of how we traditionally understood and tested for it, but you need to know enough to elevate and exclude and make choices accordingly. And finally, the idea of judgment is essentially very closely associated with risk. All of these answers are easy to generate. You can even choose what's right.

But eventually you have to execute and assume the risk associated with it. And that judgment gets developed because you've run that loop repeatedly. Over time, you've made those choices, you've seen how those choices play out, and when the stakes are high, you've shifted choices in real time so that you don't have to take the downside of the risk. That's what trains your judgment. So those three things are really critical as we move into a world where answers are progressively cheaper than ever before.

Jim O'Shaughnessy: Yeah, and I am both delighted by the prospects of that and a little bit scared. I'm delighted because nature has made me kind of good at those three things. But I'm scared because I worry about a cognitive chasm, a cognitive elite arising that is very different from and potentially destabilizing to society. Here's an example—it doesn't really fit in, but the visceral part of it does. Last time I was in London, I got into a traditional black taxi. And if you know London, you know that prior to Uber, prior to the coordination of all of the mapping—which took away all of the pricing power from the cabbies—cabbies had to learn a thing called "the Knowledge." And learning the Knowledge took years, literally.

I got in and he's like, "Why are you in London?" And I told him, and he got a little aggressive: "I hate what you're doing, because you're wiping out all of my advantage. Now a kid can come, never study any part of London, and take the fare to exactly the same location I spent years and years studying the Knowledge for." And it hit me that that's something we're going to have to face on a much larger scale. Because I agree with you: depending on our age, we were all trained that having the correct answer was highly valued, so you could extract a great deal of value from that.

Now, asking the right question, stack-ranking the answers you receive, and then having the guts to put it out there and take the risk that you're right—those are very different skill sets from the guy with the photographic memory who knows everything. I looked at an old journal—I've been fascinated by this stuff forever—and this was from like 20 years ago, and I wrote, "I wonder if we're approaching an age where my excellent memory is no longer going to be highly valued." And I think we have. I think memory alone, and the ability to think quickly and be fast on your feet and all of that, will become more of a commodity, because everyone's going to have one of these—I'm holding up a cell phone here—in their pocket, with access to quick, cheap, commoditized answers that aren't bad answers. So what are your thoughts on that? Because as we approach it, I believe we're going to have to put some sort of plans in place for the people who, through no fault of their own, don't make it in this new economy that we're building. What do you think?

Sangeet Choudary: Yeah, I fully agree. I think you're absolutely right on that point. I'll probably make two points here. The first is that even those of us who can potentially shift and learn and adapt—who figure out how to be curious, to curate, to have better judgment and so on—even we can get caught in what I call the "vibe coding paradox": just because execution is cheap and easy, and just because you can see output, you keep executing, you keep doing more. And that can be a challenge, because if you're stuck running cycles of execution, you remain blind to what now becomes valuable. The fact that everybody can execute means that something else is going to have value, and you need to understand what that is. Just a simple example.

If, in the past, you wanted to write for publications, the time you had to write was the bottleneck. Today that is not necessarily the bottleneck, because AI can help you write really fast. The bottleneck now is access to those publications—which were bottlenecks in the past as well, but become even bigger bottlenecks today, when everybody's competing for the attention of the same publications. So I think the first point is that we should be careful about getting stuck in the vibe coding cycle, or the vibe execution cycle, if you will. The second piece is that, as you rightly mentioned, there'll be really large sections of the population affected.

I happen to know many of them personally—content writers who have lost their jobs and have no way back into any job that pays what they used to earn. There'll be vast sections of the economy which will not have a way back. We often use reskilling as a cop-out answer: well, somebody got deskilled, let's reskill them. It's not necessarily that straightforward, for three different reasons, I believe. The first is that machines can learn faster than you. So if reskilling is the only way out, you're competing against something that can reskill much faster than you. The second is that you don't necessarily know what to reskill to, because you need to know what will be valued in the new system, and that's not easy to spot.

So reskilling in itself is not the right answer. Which leads us to the third piece: we need a mechanism to redistribute value in some way—and when I say redistribute value, I'm trying to think of a systemic answer for redistributing value. Something like a universal basic income is not a systemic answer. You're saying that the system is broken and we'll just tax a little bit out and give it to everybody else, but the system remains broken: people have no way of coming back into it. We need a mechanism by which we redistribute value so that they have ways to come back into the system.

Which at some point might mean—and this is probably not going to be popular with a large part of the audience—that from a policy perspective we may have to introduce frictions into some part of the system, just so that we can ensure the transition is simpler. Which means that for certain tasks, even though they can be automated or executed with 70% accuracy, we keep the bar for automation at, say, 90%, and let humans continue doing them while they move into other tasks in the system.

So I guess the point I'm trying to make is that completely disenfranchising labor and then trying to come back with a universal-basic-income kind of answer is going to leave us with a broken system. We have to proactively insert interventions into the system before something like that happens, and there's no simple solution to that. But unless we solve it at the system level, we're left with the tax-later approach. Instead of paying a tax later to fund a universal basic income, this essentially means we pay a price upfront and continue to take a less operationally efficient solution, just so that we can protect the system in the long run. These are difficult solutions to architect, but I don't see a way to do this unless we proactively solve it before the problem happens.

Jim O'Shaughnessy: Yeah, and that has preoccupied my thinking for many years now, even back to when they called it machine learning. If I was even more than 50% directionally right on where I saw it going—and this is pretty much where I saw it going—we're going to be facing some pretty rough transitional times. You mentioned universal basic income, for example. The empirical evidence against universal basic income is fairly overwhelming. And yet I'm still willing to put it on the table—and this is painful for a quant like me to say—even when the empirical evidence suggests that solution is not a great one.

I think we need as much firepower as we can possibly muster as a society, because these changes are going to literally rewrite our base code as humans, right? Like, why do so many still cling to the idea that hours of effort put in should determine your compensation? It's like the Marxist theory of value, right? But the fact is, if we're digging holes in the Sahara desert, dying from the heat, and it's the hardest work we've ever done, and we do it for 10 hours, we're not going to get paid anything, because nobody gives a fuck, right? That's what money basically is: how many fucks do you give about a particular thing? That's where people will allow compensation and whatnot to flow.

And so we have to decouple so much that has been generational—I see it even in young people today—this idea of just work harder, work longer. I don't think that answer works anymore, because as you correctly say, the machines can learn faster than you can, and they can generate pretty good answers that are certainly better than the median human answer. So we're going to have to fight against a lot of our innate base code, like "effort equals outcome." No, it actually doesn't. Asking better questions, curating them better, having the ability to put them at risk—in other words, where you are at risk—those are going to be, in my opinion, the new watchwords.

And so this is not a trivial problem. So what advice would you give? Let's assume your thesis is correct and that companies are going to be hiring people. If I hired you at OSV, what design would you give me to interrogate potential employees? What does the assessment for hiring, promotion, et cetera look like in 2026? Could you design a short, falsifiable exercise that would reveal each of these scarcities—curiosity, curation, judgment—in a way that's very difficult to game? In a way that de-emphasizes someone's theatricality and really judges them on each of these scarce resources. I'm putting you on the spot here, but what would you design?

Sangeet Choudary: Well, this is a quick reaction, so it probably suffers from first-order thinking, but I'll give it a shot anyway and let's see where that takes us. The first and most obvious thing I would do is test whoever is looking to join under the assumption that they have access to all the tools available today. Where companies have it backwards is that while interviewing, they're asking people not to use AI tools, or asking "are you using it in the background?"—and the other side is always figuring out a way to outwit them. What I would do instead is give really open-ended questions.

Some of those questions may not even make sense. And then really look at the path of inquiry. What kind of questions are they asking these AI tools? And then based on the outputs that are being generated, how are they on the fly making choices about what to ask next, which kind of outputs to choose, what's their overall path of inquiry? Because again, if I want somebody's contribution to be compounding, I want them to keep asking the right questions and not go down the wrong rabbit hole. I want them to keep having very clear heuristics on why they're choosing specific answers and bringing them up and elevating them and on the basis of that, shaping their path of inquiry as well.

And then I would want to set up very clear stakes. We would have to design this interview structure so that we clearly identify two or three points in the process where candidates have a make-or-break choice that they need to take with 100% conviction. That's the judgment part, right? The outcome can be simulated to some extent, or at least you can see why they are putting that stake in the ground—even when they have multiple choices, why are they putting 100% of their conviction onto that one specific choice?

So I guess what I'm trying to say is that I would assume that the tools are fully available and then I would look for the curiosity, curation, judgment. I would design the entire interview process so that I can see that in real time and see how they're making those choices and on what basis they are taking that risk at the end.

Jim O'Shaughnessy: Yeah, when I was at Bear Stearns back in the early 2000s, they asked me one time—and you'll see why it was only once—to interview internship candidates. They were coming from blue-chip schools, they were super smart, et cetera. But even at that time, because of my research into empirical, evidence-based investing, I turned that lens on the interview process. And the traditional interview process, in my opinion—one I think is supported by a lot of good empirical data—doesn't add value; it actually subtracts value. We're not going to go into why that is right now, but one of the things I tried was to improve on it. So I didn't ask "Where do you see yourself in five years?" I didn't ask "What's your greatest weakness?"—"My greatest weakness is I care too much." Instead, I asked open-ended questions. One that I would ask—this was for a financial role, an investment role—was: "Hey, the Dow Jones Industrial Average and the S&P 500 are different. The Dow only has 30 stocks, it's price-weighted, and it doesn't reinvest dividends. The S&P 500 is cap-weighted—in other words, the bigger the market cap, the bigger the position in the S&P—and it does reinvest dividends. What would the Dow be at today if it were cap-weighted and reinvested dividends?" And obviously I was not looking for the right answer. What I was looking for was how they were going to approach that problem.

And so I got a lot of hostility from senior management when they learned that I only recommended one person out of the 20 to join the firm as an intern. As fate would have it, that one person got offers from every major investment bank: Goldman, Morgan Stanley, JP Morgan, Bear Stearns, Lehman Brothers, et cetera. So I guess my question is: are people who really embrace what you're saying going to get a lot of hostility from the old guard—"What the hell are you asking them these kinds of open-ended crazy questions for?" One of the things we did at O'Shaughnessy Ventures is we no longer hire on the spot. I've always looked at an interview as a snapshot, not a movie, and I'd prefer to watch the movie.

So if we're very interested in somebody joining our team, they get a shorter-term consulting agreement, anywhere between three and six months, so that we can actually watch in real time whether they're going to fit into the way we are running this company. But I think that's a luxury that a smaller, privately held company can engage in and that large multinational corporations probably can't. So let's try to get to second-order effects here. Obviously people are going to try to game these new interview systems, because they're smart. What do you think some of those games might be? And again, I'm really putting you on the spot, so I apologize. How would you mitigate against them?

Sangeet Choudary: Well, even though I proposed that as the way to go about it, one structural flaw with the solution is that it assumes the person evaluating the candidates knows how to evaluate these things the right way. But they themselves have most likely been trained in a system where they have not necessarily built the right heuristics to evaluate them. The other flaw is that because we've been brought up in a system that glorifies standardized, auditable, verifiable testing, this is none of those things, right? And so it could lead to additional issues—favoritism, arbitrariness, et cetera. I think those are obvious challenges that would emerge.

And I think even before gaming of the system happens, these are the issues that will really come up. The burden of proof will also shift to the examiner in these scenarios, in terms of why they are making certain choices, and those are not going to be very straightforward to articulate. So, going back to your point, a smaller private firm with a very specific mandate and a specific culture might be able to pull it off. It will very likely be difficult to pull this off at scale.

One way to potentially balance this—and again, I'm just thinking aloud here—would be to do this kind of testing only in larger groups, with a bit of collective failure or collective success thrown in as well, so that the arbitrariness gets removed. In a way, choices made by some players cancel out choices that others have made. My point being that the arbitrariness gets taken care of by structuring the game really well between the interviewees. But those are initial things that I can think of. I'm sure there are many ways to game this kind of system as well.

The whole consulting and Google style of interview was supposed to be a way to test fresh thinking. But over time, all those questions also became fairly standardized, and people started figuring out what the interviewer was actually looking for. And once they knew the interviewer was not looking for answers, they knew what kind of language to use to hit all the right spots. So that's going to be one way to game this: if the interviewer is not sophisticated enough, and organizations train their interviewers to interview, it again falls into the same loopholes as standardized testing. So I think there are very easy ways to game this simply because the interviewer is not going to be sophisticated enough. That would be one of the challenges that would emerge.

Jim O'Shaughnessy: Yeah, which I think actually leads back to my worry about the cognitive chasm, right? For better or worse, if you look at all of the data, g—g standing for general intelligence—has the highest correlation with the greatest number of outcomes, both positive and negative, of any single factor. And it's different from teaching to the test, right? There are a lot of very brilliant people who don't do terribly well on standardized tests. And compounding this problem is that, at least in the United States, we have an antiquated school system that, in my opinion, beats the creativity out of students rather than encouraging it and helping it blossom. It does the exact opposite, because schools are teaching to the test and installing the correct-answer machine in their students' brains.

And that is doing them a massive disservice for the world we're going into. It's one of the reasons I'm so interested in alternate educational architectures today. We're at a point in history where the bills are coming due. Taylorism is what commoditized labor more than 100 years ago, right? "No, no, no. Your job is not to think of a better way we could do this. Your job is to turn this screw at this time." And Taylorism took over much of management theology—I think it's more of a religion than a science—and that was very destructive to where we're going. But I worry that the rate of cultural adaptation is slower than we think it is.

Again, let's touch on another thing that you describe, I think quite brilliantly: the human touch fallacy. I've been obsessed with elder care, and as we know, the numbers are there—this is not something we're speculating about. Between the aging baby boomers and the increasingly aging Gen Xers, the need is growing; the millennials are a larger cohort, but the boomers are a pig going through a python. Just demographics can show you that the amount of need we're going to have for elder care is going to skyrocket. And yet the attitude—I call it the "ought versus is" problem. So many people look at the world as it ought to be, right? Not the way it is. So of course people ought to be willing to take their loved one into their home.

People ought to be willing to go and visit grandma or grandpa if they're in an elder care facility—they ought to, but they won't. So one of the proposals I have is that there is an AI solution here: as it advances, you could have an AI companion for those elders. And when I even float this as an idea, the number of people who are horrified is striking. They're like, "You're a monster. How could you ever even think about that?" Well, go to a rehab facility. My mother-in-law, who unfortunately died recently—she was 99, so she had a great life—was briefly in a rehab facility, and it was like Dante's Inferno. I mean, there were people in their rooms alone, crying.

There was a woman who just continually called out, "Will somebody help me? Will somebody help me?" That's the way it's working now. So how do we overcome a lot of these deeply embedded emotional objections and still provide for elder care? Caregiving has a super high intrinsic value, as you know, but how do we determine its economic value?

Sangeet Choudary: Yeah, and caregiving is precisely the example that illustrates this. As you said, people constantly say, "You can't replace the human touch." The human touch has high intrinsic value, but if it is not accorded high economic value—if it is not tracked, measured and rewarded in the right way—then you don't have a way to incentivize it or attract the right providers for it. The logic of the platform economy as a whole—and AI extends that into new forms of work—is to get to a point where any market activity can be modularized into a standardizable, measurable task, which can then be apportioned by algorithms, and then either matched or, in more extreme scenarios, fully managed end-to-end by algorithms: its pricing, how it should be performed, where it should be performed. I'll take a few different examples. If we look at a platform like Airbnb, only a certain part of it is standardized and algorithmically coordinated: the booking side.

And yes, Airbnb has taken what used to be relational, the trust aspect, and made it infrastructural through its rating system so that it becomes market infrastructure and hence you don't need relational trust anymore. But apart from those things, everything else still works independent of the algorithmic allocation logic. Whereas if you look at the other example that we talked about, Uber, delivery, et cetera, these are the extreme cases of modularizing the task and then stripping it of all forms of differentiation not because it cannot be differentiated, but because the factors that differentiate it are not rewarded anymore. And that's the contextual value. The contextual value which is that whether that particular task has value in a certain context. So is an Uber driver's politeness rewarded with a premium? It's not. It's only rewarded with a five star.

And that five star is not directly impacting their ability to charge for the service. The same challenges flip over into healthcare. So the problem that happens with this kind of modularization and algorithmic coordination is that eventually you want to apply the logic—this is basically done to get to a point where you leave the most standardizable, interchangeable, manageable task with the human and you measure that and you reward only that, and that then creates a full logic for letting the machine do everything else. So I guess where I'm trying to go with this is that it's not "Should humans be companions or should we have an AI companion?" It's not that binary distinction.

It's not a binary distinction of should only certain tasks which are fully measurable and fully trackable, should only those tasks be rewarded, as they are in many healthcare and elder care platforms today, or should they be based on the larger experience around performing and delivering that care? I think if you want sustainable, and by sustainable I mean something that benefits all the stakeholders, but most of all the two specific players involved, the person providing the care and the person receiving the care. If you want solutions that benefit both of them, you want to create certain boundaries by which you don't modularize something as relational and as multifaceted as caregiving into very narrow, bounded, measurable tasks only. And you shift incentives so that incentives are not tied to tasks, but are tied to relational value.

What that means is you essentially shift incentives, for example, to—if you, for instance, want to reward these other aspects of that relationship, the more the relationship sustains, the more the rewards of caregiving accrue to the caregiver on that basis. Versus today, the way the structure is: get a caregiver on demand. Whether you've worked with them before doesn't matter. They're just going to come on demand. They're going to come like a delivery worker, get the job done and leave. And so if we want to do this the right way, we have to shift the pricing of these kinds of work away from the modular task logic to more of the relational logic.

And in a way, something like an Airbnb sort of has captured that even though it's turned the relational factor, even though it's turned trust from relational to infrastructure, it's sort of captured that in the way its rating system is structured, which means that the higher the rating, the more your pricing power improves. And that allows you to then reinvest into the services that you provide, et cetera, give better experiences to the guests, which then allows you to get even more ratings and improve your pricing power even further. Versus, if you look at platforms where tasks are highly modular and highly restricted and measured in very specific ways, none of those flywheel effects work in favor of the workers. So I think that's—those are some of the key things that I would look for. If we want to build systemic solutions, we have to move away from the modular task approach for some of these types of work.

Jim O'Shaughnessy: Yeah, but as you're talking about Airbnb, I was recalling a conversation I had with a friend recently who, during their summer travel, learned from the person that they were renting the home from that he had gotten to know what he called his regulars so well that he was going off Airbnb and was going to capture all of the value from just the regulars that he knew, trusted, that were going to treat his home well, et cetera. So I think that works particularly well in an Airbnb type situation. I don't know, does that work in an Uber type situation where a person can get his regulars, so to speak, and do we get back to a black car service or driver preference or any of that?

And I'm fascinated by this because I completely agree with you that AI in particular, because of this ability at orchestration, is going to radically transform most of our industries. And I'm fascinated by how that's going to happen. And obviously I'm a capitalist, so I want to see if I can take advantage of that as well. But you know, back to Uber, I once asked an Uber driver, "Do you get paid more because you are an Uber VIP driver?" And he laughed. He was like, "No, I don't. I don't get paid more at all." And that seemed to me to be a structural flaw in the management of Uber.

It seemed to me that if they paid the drivers who got the most five stars and had the best ratings and comments more than the drivers who had lower ratings, that would be a good retention tool. And yet I don't know if that's still the case—that was a few years ago. But it does bring up the question: okay, so we're in this new world. We are now looking at our potential employees through a very different lens of what is valuable. Curiosity, curation, risk-taking—not implementation. How would you incent, let's say again, let's stipulate that we found that person, they're really great, they score high on all of those particular things. How do we incent them more? How do we help them grow in terms of their career, et cetera?

Because the other thing that I see happening, at least right now, is that the junior level roles that used to be plentiful are disappearing. The reason that they paid investment bank trainees so much is because they treated them horribly and made them work ridiculous hours. And now AI doesn't complain. And it doesn't matter how many times you send it back saying, "No, I don't like this color and I don't want that dot there, I want it here." The AI just shrugs and goes, "Okay." So those opportunities for apprenticeship, for mentorship have also been dying. So I guess I'm asking two questions of you. First off, how do we build a new platform where young people entering the workforce can get those opportunities for mentorship, apprenticeship, et cetera? And then the other one, we found somebody—I've hired you, you're fabulous, and I think you're the best thing in the world. How do I continue to incentivize you as we move forward into this new way of doing business in this new economy?

Sangeet Choudary: Yeah. The way I think of this is that we often try to answer these questions within the structures that exist today. And so when we are looking to answer, even when we talked about how do we interview these people, some of the answers that we looked at were again, assuming the structures that we have today. So even when we think about the experience for young workers, we are thinking about that within the structures we have today, which is a linear career path up the pyramid inside a specific kind of firm, which then itself is predicated on the assumption of a single bundled degree that is given after four years of education and assumes all of that exchange value of the degree is captured in a full bundled job.

And what I believe we will increasingly see, and I believe that's the only way to counter the uncertainty that lies ahead of us, is that this idea of the bundled degree, bundled education, bundled job, to harvest it all, all of that will have to be completely unbundled and we'll have to figure out a new way to think about what career progression looks like. The boundaries between what is a learning opportunity and what is a job opportunity will also increasingly dissolve. We'll need to have—today we go to a specific place to learn and we call that a college, a university, or maybe evening classes. And we go to a specific place to earn. And those distinctions, I believe, will increasingly start breaking down also because the clock speed of change for many of our learning institutions is very slow.

Whereas the clock speed for change for some of the more profit-driven and less donation-driven enterprises out there is much faster. And so they will provide learning opportunities. They'll provide earning and learning opportunities, which can then be signaled externally. So we need a rehaul of this entire piece. And I don't think—I think it's going to naturally happen because the answer to this is not going to be "the pyramid is going away and there is no way up. So we now need a new skilling center for the new graduates that are coming out." That's sort of like thinking within the old frame itself and assuming that none of the frame changes. We will fundamentally have entirely new career paths, entirely new non-linear, highly circuitous ways of navigating your career. We'll also have increasing forms of opportunity.

Because I think one of the things that we've already seen with AI is that it allows you to access labor as capital. It allows you to access skilled labor as a form of capital where you can just rent skilled labor at $20 a month and figure out new ways to build value with it. And what forms of skilled labor are available will only increase as we move forward. So if we really think of the convergence of all of these things, the fact that the entire library of skilled labor that's possibly available to you as an individual to entrepreneurially create new value is going to dramatically increase, the traditional bundled learning experience and the bundled earning experience is going to get unbundled and the boundaries are going to become much more porous.

I think all of that will create a fundamentally new career landscape, if you will. So that's the reason why I'm avoiding answering that question within the frames that exist today, because the way I see it, many of those questions cannot be answered in the frames that exist today. You cannot simply band-aid a solution for all these entry-level graduates just because that ladder has gone away. You have to completely unbundle the entire system and think of new ways to organize it.

Jim O'Shaughnessy: Bingo. That is exactly what I was looking for. Because I think you are absolutely right. The problem here is trying to fit the new technologies, platforms and business methodologies into antiquated systems that no longer are going to work with these new tools. And that frightens a lot of people. For those who are listening and not watching, I'm holding up a gold watch that my grandfather got after 25 years of faithful service. This era is inexorably over. And that scares a lot of people, right, because most people are deterministic thinkers, not probabilistic thinkers. They're linear thinkers, not non-linear thinkers. And that doesn't mean that we can't address these problems. I think we can. I agree with you. But to do it we have to make kind of Schumpeter and his creative destruction look like he was just playing around a little bit.

I think what I have concluded is that every one of these foundational old structures, the educational structure that led to the job structure that led to the advancement of a career structure, all of that needs to be completely reinvented. And trying to graft it onto the old system is not going to work. It's just not going to work. And so back to Bucky Fuller, right? Trying to augment these changes and build new designs that are going to really leapfrog the way things used to be is not possible if you draw as your container the old system, right? And yet we also have to deal with the very human emotional aspect here. A lot of this is scary to a lot of people.

I tend to be incredibly excited about it because to me that I'm alive at this point in history and I can like take part in this is like the greatest gift in the world. But I do not in any time period or conversation try to minimize the fear, the uncertainty. All of those things that are going to, I think, increasingly affect people who thought that they were above the algorithm. I think a lot of people are going to find if they don't kind of embrace like they don't read your book and figure out, "Hey, these are things I need to maybe focus on," they're going to find themselves below the algorithm. And if you think that displacing blue-collar jobs caused a great deal of consternation, wait until you start replacing so-called elite jobs.

And that's why I think the best way through this is open and frank conversations like the one we are having right now. I don't have any definitive answers as to which one of these things is going to end up as the winning system. I'm a big fan of Ken Stanley's "Why Greatness Cannot Be Planned," right? A lot of the greatest advancements in history were accidents, right? So I'm a big believer in making a very big sandbox and trying a lot of different ways to make this so that we can make the transfer as painless as possible.

You can't make it painless, but what you can do is you can experiment with all sorts of different systems that allow for it to keep the dignity of the person intact, keep them wanting to continue to enhance their creativity, their ability to curate all of those particular things. But I think this is kind of the meta-conversation because we can look at all of these things in isolation. But what I think we're talking about, at least what we're discussing right now, I think that's going to be the one that is going to cause a great deal of consternation and a great deal of volatility within the system. And volatility can be a good thing. If things are going up, then you want the highest standard deviation in the world.

Downside volatility is something that you really don't want a lot of. And so I think that the more we talk about this, the more you get examples like this. I'm thinking also kind of your example about TikTok where you say they didn't look at social graphs, they built a behavior engine. And the more we can get people, especially people who start companies and are founders and whatnot, to think, "Yeah, I don't need to fit it into that old model at all. I've got to build a new and superior model," at least that will give—a thousand experiments are better than one.

And so I guess one of my final questions for you would be if I said, "Okay, I love you, I'm gonna seed you with $10 million in capital and your job is to come up with kind of the ultimate company that all these other big companies are going to hire to try to adapt themselves to or literally revolutionarily change their behavior." Like, where would you start? What would be day one for you?

Sangeet Choudary: Yeah, I mean, I sort of veer towards the idea of collective sense-making when uncertainty is extremely high. Because you're again relying on the fact that you don't know what you don't know, and no single source has all the answers, right? In many ways, you're making up answers and testing them as you go. If I had to create a system that tries to aim for these answers, I would—the core of that system would be collective sense-making, which would mean looking at diversity of participants, creating a space where they can come together and can have the right mechanisms to make sense of it. I use those terms carefully because our sense-making mechanisms are also very linear today. Workshops, seminars, these are very linear sense-making mechanisms.

So I would really look at how to create a place where that sense-making could be done at scale because you are seeding a significantly diverse set of—I'm using the term prompts more generically, but prompts for the participants who are coming in and then these participants have skin in the game in terms of the outcome of the sense-making that they're doing. Because talk is cheap. It's really easy to come together and talk about something and then go and implement something completely different in your organizations. But really a collective sense-making mechanism that can be embedded into these organizations and from where you have a feedback loop back in. I strongly believe in this.

I strongly believe in this because there's increasing value to making sense of what's happening with the current mechanisms we have, which is theory, books, talks, et cetera, but we need this larger nebula of collective sense-making around it. These things can only be prompts. So yeah, I would use your $10 million towards that.

Jim O'Shaughnessy: Yeah, I completely agree on the need for cognitive diversity in your collective because I'm going to have a certain way and be able to add a lot of things, but there's a ton of things I'm not going to be able to add at all. It's going to require a very different cognitive profile than my own. And so the idea behind cognitive diversity, collective sense-making, makes a ton of sense to me as well. And again, what kind of delights me is this is a brand new world like the paradigm—all the old paradigms are like, bang, bang, they're going away. And that means that right now there is a tremendous opportunity space in front of us. And trying to cling to old models and old structures that no longer work is in my opinion a death sentence.

And so it's both exhilarating, but also somewhat exhausting and terrifying in equal measure, which is one of the reasons why I find it to be so fascinating to think about kind of the overall uber-change that's going to be coming along. I didn't even get—my producers are giving me the hook. They know that I'm such a windbag that I go on when I find somebody I really like talking to. So I'll have you back on again because I didn't even get to half of what I wanted to. But in the interim, first tell our listeners and viewers where they can find your work, your Substack, your books, et cetera. And then we'll get to our final question.

Sangeet Choudary: Yeah, absolutely. So my new book, Reshuffle, discusses all the ideas that we talked about here. It's available on Amazon. I talk about these ideas every week on my Substack. It's called platforms.substack.com—so that's the plural, platforms.substack.com. And you can also look up my website, platformthinking.com as well.

Jim O'Shaughnessy: All of which I think are great. And you might or might not know—a while back I started a series that we kind of aborted because we got too busy, but it was called "The Great Reshuffle" and it was addressing all of the things that might be happening in society because of this revolution. So we are very aligned on that. So our final question, if you've seen the podcast or listened to the podcast before, is a little different than others. We're going to wave a wand and we're going to make you the emperor of the world for one day.

You can't kill anyone and you can't put anybody in a re-education camp, but what you can do is we're going to hand you a magic microphone and you can say two things into this microphone that are going to incept the entire 8 billion-plus population of humans on the planet today. We'll stick to humans for now. Maybe in the future I'm going to start allowing people to incept large language models, but for now, we're going to only incept humans. What two things are you going to incept so that when they wake up whenever their next morning is, they're going to say, "You know what? I've just had two of the best ideas. And unlike all the other times when I just let them go, I'm going to actually start acting on these two things today." What are you going to incept in the world's population?

Sangeet Choudary: Okay, that's a tough one. But again, because I don't want to give them solutions, but I want them to collectively change the system that they are a part of. The first thing I would say is find someone interesting and do something interesting with them that can solve a problem today or that can help you move towards solving a problem, because that's the fastest way we can create systems of learning, systems of experimentation, and do it with somebody who's interesting because that's—that's the system we want to cultivate. We want to create a system where everybody's having fun doing these things. And the second thing I would say to them is, as soon as you're successful, ensure you let 10 others do the same thing as well.

Equip them so that they do the same thing as well, because that's how you then make it exponential. That's—those are the two things that I think the world needs right now.

Jim O'Shaughnessy: I love both of those. On the second one, one of the things we're attempting to do at OSV is doing just that with our fellowship and grantee program. There are so many incredibly bright people in this world, and the ability to find them and fund them is no longer something that we can say "But I can't do that," because we can. So I love both of those. Sangeet, thank you so much. Until round two. Thank you for round one. Your book is great. I highly recommend it to everyone listening and watching today.

Sangeet Choudary: Thank you so much, Jim. It's been a real pleasure. Thank you.

Jim O'Shaughnessy: And for me as well. Cheers.
