How I Learned To Stop Worrying and Love The Allocation Economy
...and other tales from my conversation with Dan Shipper
“Ignoring what is obvious incurs a huge cost.
It requires you to go about your day numbing yourself to the reality of who you are and what you want—which is a waste of time for you and everyone around you.
By contrast, admitting what is obvious is freeing and motivating. But it’s terrifying to do it. Sometimes the most obvious truths about ourselves are hard to see because the consequences of those truths seem so dire.”
Those are the opening lines of one of my favourite essays I’ve read in the last year, written by this week’s guest on Infinite Loops - Dan Shipper.
Dan is the Co-founder and CEO of Every, a media company that wants to be an intellectual lighthouse amidst the tempest that is the Age of AI.
Every began life in 2020 as a bundle of digital newsletters (almost like a centralised version of Substack with more of an editorial flourish). These days, it’s blossomed into an ecosystem of colourful newsletters, podcasts, courses, and software products, all oriented around the unpacking of a single question - “What comes next?”
Every is already one of my go-to destinations for all things interesting. It’s less brain food than brain buffet (the kind of buffet that serves fresh blueberry pancakes with real maple syrup).
In our conversation, Dan shares his thoughts on everything from AI companions to his approach to erecting the Every ‘Pyramid’; his playbook for building new media companies; the idea of LLMs as mirrors for humanity; and using content to ‘find your people’.
What I love about him is how candidly and thoughtfully he talks about his journey to discover his own truth. His realisation that he didn’t need to hang up his boots as a writer in order to become a founder is something that particularly hit home for me.
Dan Shipper is also my underdog pick to eventually wrest the title of Infinite Loops Emperor from reigning clubhouse leader Alex Danco. By which I mean to say, this is most certainly not the last time Dan joins us on the show, so you may as well get to know him better.
We’ve summarized the episode’s key concepts, provided Apple/Spotify links, and shared the transcript below. You can also watch the full episode on our YouTube channel.
As always, if you like what you hear/read, please leave a comment or drop us a review on your provider of choice. In the meantime, here’s a sizzle reel to get things started:
Episode Links
If you’ve only got 60 seconds to spare today, here are some pocket-sized takeaways from the conversation:
AI as your personal Socrates: Dan is using AI not just as a tool, but as a mirror to reflect his own thoughts back at him. Think of it like having a digital Socrates at your fingertips, challenging your ideas and helping you discover new perspectives. Dan treats his AI companion as a sounding board that can help him objectively understand his own positions and desires better. It’s less about artificial intelligence, and more akin to amplified introspection.
The Allocation Economy: We're shifting from a knowledge economy to an allocation economy. Dan argues that the key skill isn't just what you know, but how effectively you can orchestrate and allocate intelligence—both human and artificial. It's like being the conductor of a symphony where half the orchestra is made of robots.
Embracing 'Good Friction': In a world obsessed with convenience, Dan makes a compelling case for preserving what he calls 'good friction'. As he said, "Sometimes, friction in commerce makes it fun and meaningful, e.g., for merchants figuring out their first sale and buyers hunting for the perfect product." It's a reminder that not all obstacles are bad—some are the grit that makes the pearl.
Engineering ‘solutions’ instead of ‘explanations’: Dan advocates for a pragmatic approach to problem-solving that focuses on outcomes rather than perfect understanding. He challenges the notion that we should always prioritize rational explanations over intuition: "You turn that search for definition into sort of an engineering problem." So instead of asking ‘what is this?’, you ask ‘how do I fix it?’ or ‘what do I do with it?’, which are engineering problems rather than explanation problems. He thinks this reframing could change how we approach complex, unsolved societal issues (mental health, for example). It’s also where the ‘intuition’ of AI models could really shine: they’re really good at giving you ‘what’ or ‘how’ answers, but not so good at explaining ‘why’.
The Pyramid Business Model: Dan's strategic approach with Every involves a pyramid-like structure with different offerings at different price points. At the base, there's free content for millions. As you move up, you get paid subscriptions, courses, consulting, and speaking engagements. As Dan explained, "You need to figure out how to create different products to reach the different people in your audience who have different needs." It's a model that challenges traditional tech startup thinking and widens the commercial canvas for Internet media businesses.
Transcript
Jim O’Shaughnessy:
Well, hello everyone. It's Jim O'Shaughnessy with another Infinite Loops. I was telling my guest just a moment ago that, gosh, this has got to be the first of a series because we have so much in common. We have so much excitement, but also a little bit of trepidation about the world that we are moving into. My guest is none other than Dan Shipper, the co-founder and CEO of Every. It's not a publication per se, but one of my favorite sources of all things interesting to me. You're also the host of the podcast AI and I. I can't resist, Dan: I hear my mother's voice in my ear saying "AI and me," objective case.
Dan Shipper:
I know, I just could not resist the rhyme.
Jim O’Shaughnessy:
It works. It totally works. But I think it's so funny. Mom has been dead a long time, but she still squawks in my ear.
Dan Shipper:
That's how parents work.
Jim O’Shaughnessy:
Exactly. So I'm so excited to talk to you because we share so many of the same interests. But if you don't mind, every superhero has an origin story. So if you'll give us yours for our listeners and viewers?
Dan Shipper:
And by origin story, are you asking for more info on Every specifically, and that origin story? Or just my origin story as a person, like where did all this start from?
Jim O’Shaughnessy:
I am very much interested in your origin story as a person.
Dan Shipper:
Okay, cool. Yeah. Well, I usually start the origin story stuff at around fifth grade. I read a Bill Gates biography and I decided I wanted to start a Microsoft competitor, and I was going to name it Megasoft. I actually have the biography somewhere around here in the living room. And so I was going to start Megasoft, and so I decided to learn how to code. I went to Barnes & Noble and my dad bought me a very expensive book on BASIC. At the time, expensive books were the only way to learn how to code, especially if you were in fifth grade. There were no classes at all for middle school aspiring software entrepreneurs. And I had a whole plan for how I was going to do this, which is in a notebook that I still have. It's a list of my plan, my master plan.
And the first item is "write soft," which meant write the operating system; I didn't even complete the word "software." And write soft, burn it to CDs, because at that point the big distribution mechanism was CDs. I was very inspired by the way AOL got distribution by having CDs everywhere: you could pick one up in the grocery store, they put it in your mailbox, or whatever. Write soft, burn it to CDs, put it in mailboxes, and then wait. Wait was the last step of my plan. And so needless to say, I did not end up building an operating system, but I kept programming because I was always very interested in business and have always loved technology. And I found that programming was the only way as a middle schooler that I could build interesting businesses, because it was the only way to build a business where the only cost was my time.
And so I kept programming in middle school. Late middle school, early high school, I got into BlackBerry programming. I noticed that smartphones were starting to become a thing. This was prior to the iPhone, and I had learned from my Bill Gates biography that new hardware platforms were often good opportunities for software. So I started building software for BlackBerry. My most popular app was called Find It, which was basically Find My iPhone before Find My iPhone came out. With the original version, if you lost your BlackBerry in your house, you could send an email with a special string in the subject line and have it ring even if it was on silent.
And then I eventually iterated that into a full-fledged web interface where you could track your phone, you could lock it, you could back it up, all that kind of stuff. Interestingly, that was not a SaaS service. It was a one-time fee, because Stripe didn't exist and it was way too hard to create a recurring subscription. So anyway, that's how I paid for gas and food in high school: building apps like that. And I would've had a lot more money had Stripe existed back then. And I've been talking for a while, so that's the first chunk of my origin story, in case you want to jump in.
Jim O’Shaughnessy:
So yeah, my teammates always remind me, "Jim, will you stop fucking talking so much on this podcast and let your guest talk?" But the whole origin story of how this podcast began was I really wanted it to be kind of like... You're too young to remember the movie My Dinner with Andre, but it was an early movie with Wallace Shawn, the "inconceivable" guy from The Princess Bride.
Dan Shipper:
Mm-hmm.
Jim O’Shaughnessy:
It was him and another actor, and literally the entire movie was just their conversation. And it was riveting, because I just find a natural conversation that doesn't follow a script much more interesting.
But what's great about that is that you saw very clearly where the world was going. And before we started recording, I was showing you some stuff from a 1983 journal when I was 23, and that's what I was writing about. It was literally, computers are going to do this, computers are going to do that, we're going to be able to synthesize, it's going to be a whole new world. And then I had to wait 40 plus years and here we are. But that's what's cool about it, right? Because you had to adapt to the circumstances at hand. Yeah. It would've been much cooler if Stripe had been around and you could have made it a SaaS recurring revenue business, but that's what happens sometimes with pioneers. My more cynical friends tell me that pioneers are the ones that get all the arrows in their back.
But I think that pioneers who survive actually have a huge advantage over others because it's already in your mind what it can look like, what the world can look like. That's what I love about your work, and it also leads me into your decision to go... Two of your decisions really resonated with me. The first was you stopped thinking about yourself as a founder and started thinking... Your wording was you had thought of yourself as a founder who also was a writer, but then you inverted that and you made yourself a writer who was also a founder. I very much resonate with that having written four books actually in service of the business I was trying to build. But unlike you, I'm more of the Dorothy Parker school, which is I hate writing, I love having written.
Dan Shipper:
I think every writer has a tinge of that. I'm no exception.
Jim O’Shaughnessy:
Yeah, yeah. But I think that that was very cool. But also the other thing that I wanted to talk to you about was your decision. You were a little wary about AI, and then you decided, fuck it, I'm going all in. And I reserve the right to jump off this rocket if I don't like what's going on. So in my notes I have, in shorthand, "Ask Dan about his relationship with AI. It's complicated."
Dan Shipper:
Great. Well, let's take it one at a time. Should I talk about some of the writing stuff first?
Jim O’Shaughnessy:
Sure.
Dan Shipper:
Cool. Yeah. I think what you're referring to is that I've written two pieces in the last year or so about this. One is called Every's Master Plan, which lays out what Every is, and where we're going, and how I'm thinking about it. And also, how I arrived there and some key decisions that went into that. And one of the key decisions is thinking of myself as a writer who's also a founder. And I have a post specifically about that on Every called ‘Admitting what is Obvious’. And the basic idea of that post and the thought process for me is there are sometimes truths about yourself that are probably obvious in hindsight, for sure. They're probably pretty obvious to everyone around you. And they're definitely there all the time in your consciousness without you really even realizing that they're there because the perceived cost of noticing them would be too high for you.
And so you don't necessarily allow yourself to really notice it. And so for me, I think wanting to be a writer was one of those truths. And it was very hard for me. Basically, to go back to my origin story for a second, originally before I wanted to be Bill Gates, I wanted to be a writer. This is third grade and I wanted to write novels. And so I wrote a 100-page novel. And then after I wrote it, I was like, "I don't have enough life experience to be a writer yet, so I want to do business stuff and I'll come back to writing later." And that sort of carried me through to my first company called Firefly, which I started when I was in college. And during that company I did a lot of writing for it, and that helped us market the product and all that kind of stuff.
And eventually, when I sold it, I spent a couple of years figuring out what I wanted to do. And I again, tried to write a novel. So I was giving myself some time to do that. I wrote a couple drafts, but the whole time I was very conflicted about who am I? And am I going to lose my founder identity if I go write a novel? And I had felt that loss of that identity when I sold my first company. And obviously, it's one of those amazing things that happens to you. But it's also, there's no manual or process for how to deal with that kind of thing, like selling a company and no longer doing the thing that you do every day. It's very abrupt.
And so yeah, I grappled with that the whole time between companies. And then I feel like I started Every as this way to split the difference to some extent. We raised a little bit of money for Every at the beginning, and it sort of looked like a more traditional startup. It always had a media element, but the way that we pitched it was it's this bundle of newsletters. And so it's sort of like Substack, but we have an editor and we're going to grow it in this a little bit more centralized way. And everything is bundled under one subscription and all that kind of stuff. So we're able to turn it into a little bit of a company, or a lot of a company. But the way it originally started is I was just writing a newsletter and it started going really well.
And the way I allowed myself to even write the newsletter was I said, "Okay, I want to start another software company and I want to do a Roam/Notion, tools-for-thought type company. And I need a way to do customer interviews with smart people that have the note-taking organizational problems that I want to solve and probably have solved them for themselves, so I can understand what they need." And so I was like, "I'll start a newsletter. I'll tell people I'm interviewing them for the newsletter, and then I will actually write it because I love writing, but I'll use what I learned in order to build a software product that I can then sell to the audience." And I just got stuck for a very long time on the newsletter part of it and realized that it was growing really fast and I really liked it.
So we put together a company, we raised a little bit of money, and it was going really well. And as soon as we raised some money, I made the decision to not write as much because I was like, "Well, I'm running the company now, so I'll hire writers and whatever." And there was a two-year period after that where the growth of Every tailed off. Because one of the interesting and difficult things about media is if you start a media company, and have a media product that's written by a person, and has their personality in it, if that person stops writing, you lose product market fit. You can still find it again, but the ability to consistently produce things of a certain type that people like is a very rare thing. And so I was doing that and my co-founder Nathan also was doing that.
If you have people who are doing that and you switch it, it'll probably take a while to find someone else. And there's no guarantee that the person you find will attract the same audience as you already have. So it just makes media quite a difficult business. And eventually after a couple of years, I was like, "What do I really want to do? Why am I doing this? What's my whole thing with this?" And I was just like, "I think if I really take a step back and I didn't consider loss of identity or how I would be perceived, and I didn't really consider financial upside and how I would make money because the business was at a point already where it was supporting me. So didn't have to consider that as much. What would be the thing in my heart that I would want to do?"
And I was like, "Obviously, I want to write." And then I was like, "I want to write. I want that to be the core thing. But I also, I love business. I really like doing all this stuff, so I still want to have a business." And what's really interesting is once I admitted that thing, which going back to where I began is this obvious thing that I hadn't really admitted to myself. Once I admitted it to myself and flipped that identity to, I'm a writer who also does business stuff, I immediately found other people who were doing very similar things that I hadn't really noticed before. So a good example that popped up is Sam Harris, who's primarily a writer, podcaster who has a meditation app called Waking Up. Another one is Bill Simmons. Great podcaster who started The Ringer, he started Grantland.
All those kinds of people were doing things that are exactly what I wanted to do. But for a while I hadn't even really noticed them, because they were off the beaten path of the Silicon Valley technology structure of raising money and all that kind of stuff. They don't fit the mold of a typical VC investment. And so I immediately found other people who had taken this path. And then I also began to immediately make decisions that, I guess, changed the reality around me to support this. So I stopped doing meetings in the morning. I was able to hire Kate Lee, who's our editor-in-chief. She's incredible. She previously was the publisher at Stripe Press. And so that took a lot of the day-to-day media stuff, the media-company-running stuff, off of me. And we did a couple other things like that. And I think most importantly, it allowed me to say, to myself and everyone else, that writing is the priority.
So I can't be bothered in the morning or whatever. I need to have enough time to get out pieces every week because that's the main thing that I'm doing. Whereas, previously writing had been a little bit more like a guilty pleasure. But getting to do it all the time created a lot more growth, and energy, and stuff for Every. And for me too, because I think there's a significant cost to doing things that you don't really want to do all the time. And so I think for me, there was this thing where I was like, I feel like I need to act like a traditional CEO of a traditional tech company. And it just wasn't the thing that I really wanted to do. I wanted to be the CEO, but I didn't want to do it in that way.
And so as soon as I let go of, here's how it should be done and the scales dropped from my eyes a little bit, and just figured out, here's how I think I could do it for me and this is the way it would work for me, things just started to accelerate. The business started to get better. I was much happier. I am much happier. I started doing much better creative work, which has led to all sorts of things from, I think my writing is going a lot better. I think we're doing lots of cool creative stuff where we're incubating all these software products. We have a consulting arm, we've got tons of stuff going on. I feel like I'm a kid in a candy store all day. It's really, really fun. And I think it starts from that core decision to let yourself be who you are and do what you want to do. And kind of mold your reality around that.
Jim O’Shaughnessy:
Lots to unpack there. And again, on the similarities: I too wrote a novel when I was 10 or 11, and I still have it. It's horrible. It was science fiction, but I got to 120 pages. And at some point, maybe our Infinite Books vertical will publish it, or a different version of it. One of the questions that I sometimes ask myself that I have found very clarifying is this: if I'm looking at something I might want to do, I ask myself, would I pay to do this? In other words, would I actually pay, instead of expecting to be paid, to do this?
And there are vanishingly few things that I would pay to do, but that's how O'Shaughnessy Ventures came into being. Yeah, I would, because I am paying right now. I prefer to look at it as investing, planting the seed. But the idea resonated very, very strongly.

And back to your master plan. Look, because we also do a lot of seed and somewhat later-stage venture investing, one of the things that I really look for in founders is agility and the ability to pivot, and being fine with that. I'm a huge Walt Whitman fan. I really do believe we all contain multitudes. The whole "stay in your lane" mentality just doesn't appeal to me at all, especially if you, through your actions, learn, "Whoa, wait a minute, this is probably a really great way to pivot and change." And I like the way you did it too. You put old journals into a large language model looking for insights, but then you did something very clever that I also do: you had it write future journal entries for you. I use it the same way. And then you got to your grand design and plan for Every, which you present in a very Maslow-like pyramid, with the base being the free stuff, available to millions, and then going up from there. Take us up the pyramid of the master plan.
Dan Shipper:
Yeah. So the way I think about Every is sort of like a pyramid. It's hopefully not a pyramid scheme, but it has a little bit of a pyramid structure. So bear with me here. But yeah, at the base of the pyramid is, it's like all the content that we make, particularly the free content that reaches a lot of people. So we have a newsletter that we publish every day. We publish long form essays about what's next in technology. And then we also have podcasts. So we have a YouTube channel, we have the podcast, AI and I, and then in the YouTube channel, we publish that podcast. And then other videos, for example, I recently did a video on ChatGPT's advanced voice mode. And that's really cool. So we published a lot of great stuff. And then above that we have our main offer.
And our main offer is the thing that we're trying to sell mostly to people. And that's the Every subscription. It's 20 bucks a month. And as part of the Every subscription, you get access to all the content we publish. So some of the content is paywalled. You also get access to discounts on courses that we run, so discounts on stuff you might want. And then we also incubate software products. So we just launched an incubation called Spiral. We can get into that, but it helps you automate a lot of the repetitive creative work that you do every day if you're a founder or a creator. Another one is we incubated this company Lex, which my co-founder Nathan built internally at Every, and then we spun it out and now it's its own company.
And Lex, it's sort of like Google Docs with AI built in. So it's a document editor with AI. We have a couple other incubations in the pipeline. And so as part of the Every subscription, you get access to content and you get access to this growing library of other products. You can think of it a little bit like Amazon Prime or something like that in that way. And then above that, we sell higher ticket items. So each level of the pyramid is smaller and smaller numbers of people. So millions of people at the bottom, then maybe tens of thousands to hundreds of thousands, maybe more at some point, in the Every main subscription level of the pyramid.

And then above that we sell courses. So those are usually $1,500 to $2,000, and those are B2C courses to our subscribers. Above that, we do a lot of consulting and training type stuff, so big companies who want to figure out where to integrate AI into their companies and how to train their people. We do a lot of work with companies like that, and that's much higher ticket. And then above that is speaking or advising, stuff like that, stuff that I need to personally be involved in that's higher ticket than that and fewer people.
And I think the pyramid concept is interesting because, again, it's this shape of a business that's totally anathema to the traditional Silicon Valley way of running a company, where you've got to focus on your one revenue stream. And this is like, there's four different ones, and I didn't even list some of them. We have a whole sponsorship business. So you have sponsorships, and then you have subscriptions, and then you do consulting, and then you do speaking, and then you have software. "What the fuck?" would sort of be the response. And I totally understand that. I think that that makes a lot of sense.
But I think for media, what I have come to learn is that, for one, media is a very difficult business, so you need to have as many revenue streams as you can. Once you get to a certain point, you probably focus for a little while to get to a certain base, but if you want to grow above that, you need to have different revenue streams. And then you have different members of your audience that have very, very, very different willingness to pay. And if you want to maximize what you can get out of that business, which I think you need to do if you want to do a lot more interesting stuff. You need to figure out how to create different products to reach the different people in your audience who have different needs. And I think this is particularly important because the environment is sort of constantly changing.
And so for example, about, I don't know, seven months ago or maybe a year ago, Elon changed the X algorithm to deprioritize links. And so our traffic went down, right, because a big part of our top of funnel was X. And so I had to make a decision that was like, well, what do we do about that? And obviously there's lots of "we just need to find more growth channels." But what I decided to do, for now, was this: we actually had a pretty sizable audience of people who were running great businesses or famous investors or whatever, and I was like, well, why don't we just focus on bottom of funnel first? We'll expand. We'll take the relationships that we've already built with really interesting people and we'll expand those relationships by selling consulting or software or training, stuff that we can sell. And then we can take that cash and funnel it back into the bottom of the pyramid, and maybe we run ads or maybe we hire a growth person or whatever.
There's lots of stuff to do once you have the cash. And that bet has actually been really good for us. And I think one of the fun things about media is all this stuff that we're doing, whether it's consulting or building products or whatever, these are all announcements that we can make that go viral that actually just help our bottom of the funnel regardless of the eventual success of those businesses. So everything actually sort of works together in this really nice way. And I think I actually didn't even have the kind of pyramid structure in my head. I was just like, well, my business is this crazy concoction of things. And then I went to this creator retreat run by Tiago Forte, who's an incredible creator and author and course creator, and someone else, his name is Chad Cannon, I think, just drew that on the board as the way he structures his business.
And then everyone was nodding and everyone was like, yeah, this is how our business works. And I was like, oh, this is not stupid. This is just a different path that other people take that I'd never heard of, because I've been reading too much Hacker News. And that was just really, really helpful in clarifying, because I think it's hard to make decisions that go against what other people around you are doing. It requires you to look at reality yourself and ask, what do I see here, and what are the local decisions I can make that seem to make the most sense, as opposed to what did I hear is the right thing to do, or what was my original vision for what to do?
You need to be more ground level about what's actually going on and what you need to do. And you can arrive at a place that feels like a mess, because no one else around you says this is a good idea, but then you might meet other people who are like, actually, no, we found the same thing. And you're like, oh, great. This is cool. I'm not crazy. That's the story of the pyramid.
Jim O’Shaughnessy:
Howard Bloom, who's an author I think is really underrated because he's written a lot of insightful books, uses the metaphor of how you can create a Schelling point that serves as a beacon to find your people, right?
Dan Shipper:
Mm-hmm.
Jim O’Shaughnessy:
And he uses this great metaphor. If you take a beaker filled with water and you put a bunch of salt in it and you boil it and boil it and boil it, apparently the salt visually disappears. So it looks like just a clear beaker of water. But the cool aspect (and full disclosure, I haven't actually done this yet; I really do want to try it to make sure it actually works) is this: you take a single grain of salt and drop it into that beaker. And what happens is all of the salt is attracted to that single grain. And suddenly, where you saw only clear liquid, you see this mass of salt. And he says, "Finding your people works kind of the same way," but he also adds something that I deeply believe: "You've got to be really courageous." Most of the stuff that I have been able to do and been successful with, when I would say it to somebody... well, I'll give you an example.

So I decided, okay, the stock market is the Olympics of business. I want to go and see if I can get a gold medal over there. And as I'm thinking about it, I'm like, huh. And remember, this is prehistoric: no internet, no anything, really. And so I decided on one of my walks that I have to write a book, because I will get the author premium. And back in the day, if you were going to be an expert on something, you had to have a book. So I walked in, quite excited about that, and said to my wife, "Yeah, so I've decided that the way to really jumpstart this company is for me to write a book."
And she's like, "Okay, Jim, I know you were editor of your high school newspaper, but have you ever written a book?" And then I pulled out the one I wrote when I was 10, and I said, "Yes, look." And she goes, "Well, yeah, but that's horrible. So a, do you think you really can write a book? And b, you know, you got to get somebody to publish it." And I'm like, "Details, details."
And the point being that a lot of my ideas, when I would spring them on people, they were like, "Oh, that's not a good idea. It's one of the worst ideas I've ever heard in my life." And so I started to just kind of build an armor to those kinds of things, because I wanted input. I wanted people's feedback, because God knows I don't know everything. And no doubt, I was quite delusional, thank God. But I would use them to sort of reality test, right? And then I'd think, that's a good idea. And that's why I was intrigued by your use of AI with your own work, your own journals, because I do that all the time. We are in the process of building what we call Sim Jim. He has not debuted yet, but he's going to be based on 40-plus years of things that I've written.
And I joke that I'm going to have to use the AI to tell me who I am. And that's kind of what you did, right?
Dan Shipper:
Yeah. And before we get into that, I just think that the salt metaphor that you use is really apt, and I think it actually connects quite a bit to the courage idea that you brought up right after it, which is that it's not only a good metaphor for finding your people. I think it's also a good metaphor for how progress tends to happen when you're doing anything new. People think that progress is linear, and it actually does not feel linear at the time. It happens in jumps: you see a clear beaker, and then you drop a crystal of salt in it, and it suddenly all just turns into salt. It's a similar feeling a lot of the time. And I think that's why there's that adage about how it takes 10 years to have an overnight success.
From the outside, it actually looks like an overnight success, or an overnight whatever. But that's really the crystallization of all the salt that's been boiled in the water, happening because of one little thing. That little thing doesn't matter. It's the years of adding salt to the water and boiling it that matters.
So to your point about having AI tell you who you are, I think that that's exactly right. It's a joke, but it is also real. I think these tools are incredibly effective at taking in lots of information and reflecting it back to you, reflecting back what they see. And I think for someone like me being told who I am, it's incredibly important. It's incredibly valuable, and I think it's useful for everybody. But for me in particular, for example, I have a lot of tendencies that make it sort of easy for me to forget who I am.
So a really good example is just the stuff we've been talking about, about am I a writer? Am I founder, blah, blah, whatever, and sort of subverting the writer identity because the founder identity was a little bit more prestigious, a little bit more, it felt a little more like I could just say that and people would get it and all that kind of stuff. And I think there's lots of stuff that's like that about me, but probably about everybody. But me in particular, just my sense of self is I think a little too sensitive to what's going on around me.
And I think having something that I can spill my guts to and have it reflect back, this is what I see, I think helps me create a little bit more of a stable sense of, no, this is what I am like and this is what I want and whatever. And so using it in that way as a little bit of a mirror, I think is incredibly, incredibly valuable.
Jim O’Shaughnessy:
Yeah, and many of the use cases that I got excited about early on... So I go way back, to when they still called it machine learning. And I thought of another essay that you wrote. I had a conversation with a friend who was also a contributor to OSAM. He loves stock markets, but he was literally a machine learning expert, and he had done so well with it that I called him. I'm like, so walk me through this. Tell me how it works. I'm really fascinated by this. I think it's kind of the next frontier for quantitative analysis, et cetera. And he said something that resonated, because you wrote an essay about this.
He said, here's what it is, Jim. He said, "The human desire to know why something happens is going to be the bottleneck for using AI." He said, "Jim, I can build you a system that tells you what, that tells you when, that tells you how, but it will not tell you why." And he goes, "Knowing you as I do, you'll fucking love it." He said, "But knowing other humans as I do, they won't." And I was always kind of driven by Wittgenstein's maxim: "Don't look for meaning. Look for use."
So talk a little bit about that, because I love that essay and resonated strongly with it. But as my friend pointed out to me, he's like, "Jim, you are..." Remember Young Frankenstein, Abby Normal? He said, "You're kind of Abby Normal." And he said, "I think it's one of the most significant bottlenecks for regular folk, who are going to be like, but why? Tell me why?"
Dan Shipper:
I love this. You're really speaking my language. I think you're referring to a piece I wrote a while ago called "Against Explanations."
Jim O’Shaughnessy:
Yes.
Dan Shipper:
And the basic idea is that as a society, we're sort of obsessed with finding scientific explanations for things. And we can define a scientific explanation as causal: if X happens, then Y will happen, so we understand the full system. We've been obsessed with that probably since the Enlightenment, but we can go back even further, to Plato and the idea of forms-
Jim O’Shaughnessy:
Perfect forms, right?
Dan Shipper:
Yes, these perfect mathematical constructs that don't really have physical reality, but all of physical reality is sort of an inexact copy of them. And wisdom is about finding the forms. Which you can then trace to Aristotle, Plato's student, who located the forms not in an immaterial world but in this interplay with your observations: the form is sort of contained within the things in the world themselves. And Aristotle's outlook, which was a lot more empirical, his father was a doctor, his outlook, I think, filters into the scientific method, right?
And you can see that Darwin and Galileo and all these sorts of figures, Newton, that precipitated the Enlightenment, the scientific revolution, which has been incredible for us, right?
But I think what's really interesting about that revolution, which is based on a lot of the stuff that happened in the Enlightenment, like Newton's discovery of the laws of gravity, is that any endeavor that makes progress, we try to think of as reducible to something like calculus: like the laws of gravity, where we can say it's a little equation, if X happens, then Y happens. And I think that's been successful to some degree. But as we've levered up into more complex fields like biology, psychology, social science, economics, all that kind of stuff, we've found that it's actually quite difficult to do that. And we've been trying for a really, really long time. And I think you can look at a field like psychology and be like, well, the reason why there's all this P hacking and all this kind of stuff is that psychologists aren't good enough scientists, and if we just tried harder and had better methods, we would be able to reduce it down into something like a calculus.
But I think my feeling is that we are drunk on explanations. We're really trying to find explanations, and we're really drunk on our idea of ourselves as rational animals. And that being rational, having intellectual understanding of something, which again comes back to Socrates, Plato, Aristotle, that that's the highest form of human existence. And so if you don't really understand it, you don't have an explanation for it, then it's not good enough. And to be clear, I think that that got us really far.
But I think the reason why there's so much emphasis on rational explanation, scientific explanation, causal explanation, is that before, I don't know, a couple years ago, in order to make good predictions about the world, we needed to have causal explanations. The good thing about calculus is you can calculate the trajectory of a cannonball pretty easily. And so we've been looking for explanations because we think that that's what we need in order to make predictions, especially predictions that are inspectable, that can be transferred from one person to another.
A great thing about an explanation is like, I can explain it to you and then you can use it and you can do the calculation yourself. They're checkable, all that kind of stuff. And what's really interesting about AI is it's the first time that we've had something other than a human that can make predictions that are unbundled from explanations. So we don't need the explanation anymore in order to make the prediction. And immediately what a scientist will say is, "Well, that's correlation. It's not causation." And I'm just like, fuck it, let's do correlations first. And so I think that the correlation or the ability to make predictions without explanations that you can get from machine learning or AI is really important because I think it can help us make progress in areas of study like psychology that progress has been really, really hard to come by without it.
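Dan's point, prediction unbundled from explanation, is easy to see in even the simplest machine learning model: a nearest-neighbor classifier can predict accurately while containing no causal story at all. A minimal sketch in pure Python (the data and the "hidden rule" here are invented purely for illustration, not anything from Dan's or Jim's work):

```python
import math
import random

def knn_predict(train, query, k=5):
    """Predict a label by majority vote of the k nearest training points.
    The model offers no explanation of *why* -- only pattern-matching."""
    neighbors = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = [label for _, label in neighbors]
    return max(set(votes), key=votes.count)

# Synthetic data: the label follows a rule the model never represents.
random.seed(0)
train = []
for _ in range(500):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    train.append(((x, y), "inside" if x * x + y * y < 0.5 else "outside"))

# Accurate predictions, no causal model anywhere in knn_predict.
print(knn_predict(train, (0.1, 0.1)))  # query near the center
print(knn_predict(train, (0.9, 0.9)))  # query near the corner
```

The predictor is, in the terms of the conversation, pure correlation: it generalizes from observed examples without ever encoding the generating rule, which is exactly the trade Dan describes making in fields like psychology.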
So we've been studying the brain and human psychology for at least 150 years since Freud. And we still don't really know what depression is or anxiety or all this stuff. We don't have a scientific explanation for it. And I think what's really cool about deciding to do away with explanations, at least for the time being, is that you can turn a search for a definition of depression, which again goes back to Plato or Socrates.
You turn that search for a definition into sort of an engineering problem. So you're not asking, "What is this?", which is a scientific question; you're asking, "How do I fix it?" or, "What do I do with it?", which is an engineering problem. And you can solve that latter question by being able to predict it. So instead of knowing what depression is, we can just predict who's going to get it and which treatments are going to work for them. And one thing that's really interesting is you can start to help people immediately. So I think that's really important. And the other interesting thing is that, I think, if you have a good enough predictor for a phenomenon like depression, there will probably be some sort of explanation in your neural net.
And what's interesting about that is I think neural nets are a little bit more interpretable than human brains are. So we may be able to find an explanation for depression, for example, in a neural net that predicts it, rather than in the brain itself. But to bring this all full circle, I think what we will find is that the explanation for depression, or some complicated higher-level topic like that, is probably so big that it's not open to rational analysis. It's too big to fit in our rational brains.
But what's really interesting is I think that anyone who's had an interaction with a really, really good clinician knows that some people have an intuitive idea of what's going on with someone and how to fix it that other people don't. And it's just them building up this intuition that they can't even explain. But it's in there, right? And I think because of this whole thing that started with Plato and Aristotle and Socrates, we've really jettisoned our reliance on our intuition and said, "Intuition is sort of bad to some degree, and we should be finding these explanations." And that's for a lot of reasons. When you use intuition, which is really about correlation, you're open to non-empirical things: you start to believe in spirits or ghosts or whatever.
And the emphasis on rationality allowed us to create provable explanations that could be transferred between people, which allows for better collaboration, which helps us make progress.
And what I think is interesting about AI is that we're building AI models that work a lot like human intuition, in that they can make predictions without explanations. But those AI models have a lot of the properties of explanations: they are transferable, they're inspectable and debuggable. So they take this innate human thing, intuition, and they put it out into a machine that can replicate it to some degree.
And so I think we may shift our view of the value of human intuition because we suddenly have machines that can replicate it and that can have it be transferred from person to person and help us make progress. And so I think what we should be doing is de-emphasizing our search for causal explanations, our emphasis on rationality, and instead re-emphasizing our view of human intuition and the ability of machine learning and AI models to replicate its best properties and use it to make progress.
Jim O’Shaughnessy:
You know, that's why I started this chat with, this is going to be a series, because literally we could do an entire show on this because I completely agree with you, and you got to remember my career. I owe my career to being a Spock, right, being totally analytical, replicable models that if you ran the same back test on something, you would get exactly the same results as I got. And as I did it, I became very aware of the fact that my dad and I-
Jim O’Shaughnessy:
Very aware of the fact that... My dad and I both loved philosophy, and he was a Platonist. And so I decided, "Well, I'm going to be an Aristotelian." We would have great talks, really. And they were really educational, because I would bring up a point and he'd be like, "Oh, that is one I haven't thought about." But then he'd come up with a good example, and it really led me to this idea that you've got to unite them. You've got to be Apollo and Dionysus. You've got to be Plato and Aristotle-
Dan Shipper:
Oh.
Jim O’Shaughnessy:
Or you're missing half of the way of getting insights.
Dan Shipper:
Yeah.
Jim O’Shaughnessy:
And I often say that one of our greatest challenges is that, for the most part, most of us are deterministic thinkers living in a probabilistic world, and generally hilarity or tragedy ensues.
And so, one of the things as I go back through my old writing and my new writing, is it was always acknowledging that, but I was really suppressing it because I wanted just the facts. Nope. We've got to make sure that we know what the cause is. We know what the general outcome is. And then I did a conversation with Rupert Sheldrake, who I think is one of the most brilliant scientists working today, and yet to the materialist citadel, he is the heretic of all heretics because that's what he's espousing. He's saying, "You know what? These 10 laws of science..." He has a great TED Talk where he just demolishes them all.
But what we started talking about on our chat was this idea, intuition, and how do we test it? I look at intuition as... Look, I've had a long career in finance, for example. And when you see the same pattern time and time and time and time again, you get this saturated or imbued intuition, and it's something you're really not aware of at the conscious level, but you are incredibly aware of it at the intuitive level. It's like in 2006, I was marching around saying to anyone who would listen, "If you can short your house, short your house. Real estate is going to be a debacle." And then they'd say, "Why?" And I'd say, "I've seen this play time and time and time and time again."
And so we're actually in the beginning stages of trying to work on an intuition app with Dr. Sheldrake, in which we would make it fun, and the object of the app would be to, A, see how good your intuition is. B, if it is pretty good, can we improve it? But then C, Dr. Sheldrake is willing to do a scientific-method test on it, and what I love about him is, if he comes up with a null result, he publishes. He's like, "Well, I thought that this could happen, but it didn't bear out." And so I'm just a huge fan of having a multiplicity of models. That's one of the reasons why I love AI so much. And one of the things you talk a lot about is, "We're moving from a knowledge economy to an allocation economy." And in an allocation economy... Again, we are so aligned, it's scary. I might try to hire you before the end of this, or maybe I'll just buy Every.
Dan Shipper:
We can talk about it.
Jim O’Shaughnessy:
But there are things that are really difficult to quantify, right? Good taste. I have been saying for several years now that if you have good taste and you're a good curator, man, is the world going to be your oyster. Because I think that what's going to happen, especially with AI, is we're going to see a tsunami of AI-only generated content. And at the beginning, it's going to suck. It's not going to be great, because it's going to lack the human spark. Now, do I think it could and will get better? Yeah, I do. But I think that for now, I'm a huge believer in the centaur model, human plus machine making for the best. And Brian Roemmele, who's a friend, says, "We shouldn't call it artificial intelligence. We should call it intelligence amplification." And that's one of the things that I like, and that you advocate for in the allocation society - or economy, excuse me. Tell us a little bit about how you came to see that switch from the knowledge economy to the allocation economy.
Dan Shipper:
Yeah, I guess I was just observing my own patterns in both using AI and running a company. And I use AI all the time for writing, for programming, for decision-making. And I realized when I was trying to figure out what's a good metaphor for what it is like to use a model a lot? I realized it just reminded me a lot of interactions that I have with people that I work with at Every as a manager. And then a lot of the skills of being a manager, so a really, really good one is when to delegate and when to micromanage or look over their shoulder. That's a skill that every manager, every beginning manager has to learn how to do and everyone fucks it up. I've been cursing a lot. I don't know why. Something about this interview-
Jim O’Shaughnessy:
It's me. I'm-
Dan Shipper:
I'm freewheeling.
Jim O’Shaughnessy:
I'm a bad influence because somebody... I had Senra. I don't know whether you follow him.
Dan Shipper:
I do, yeah.
Jim O’Shaughnessy:
And so I had him on the podcast and we're friends, and-
Dan Shipper:
He's great.
Jim O’Shaughnessy:
Patrick, my son has him as part of his Colossus network. And David, the first thing he said was, "Jim, is this a cursing podcast?" And I went, "Fuck yeah." And he just was so relieved.
Dan Shipper:
That's great. I'll have to text him and let him know that I followed his lead and I dropped a couple of F-bombs on your show.
Jim O’Shaughnessy:
Yeah, but he still is the king. He's dropped more F-bombs than any other guest.
Dan Shipper:
I can't compete. I definitely cannot compete. So going back to the manager idea and this delegation versus micromanaging: it's a thing that every manager has to learn. I struggled with it. I still struggle with it to some degree, and every time I watch someone at Every become a manager, I see the same thing. I realized that a lot of the same things are happening with people when they use models. So they'll be like, "Well, I delegated X task to it and it came back and it didn't do a good job." Or, "Well, in order to get it to do a good job, I have to tell it every single step, and I'm basically just doing the work for it." And so I realized, "Oh, yeah, it's a very overlapping problem with the problem that managers have."
And what's interesting about the skill of being a manager is that it's not very broadly distributed. I have some stat in that piece, and I can't remember it exactly, but only a single-digit percentage of Americans are managers. And that's because currently it's very expensive to hire people. And so there's only a few people who get that opportunity to learn how to hire and run a team. And I think probably one outcome of the AI age, or whatever age we're in, is that those same skills will become much more broadly distributed, because the same skills will be used not for human management but for being a model manager. And models are going to be much cheaper and much more broadly available.
And so I started to think about, "Okay, what happens in a world like that?" And I was like, "Well, it might mean that the way that you are compensated and what skills are valued are those management skills." So it's like when to delegate, when to micromanage or your taste or your vision or your ability to break a task up into its smallest components and then know which resources to delegate each part of the task to. All that stuff. And so I was like, "Well, in that case, maybe we're entering an allocation economy where you're compensated based on your ability to allocate intelligence." And that's where it came from.
Jim O’Shaughnessy:
And one of the things that I, when I started O'Shaughnessy Ventures, one of my North Stars was: we are going to imbue AI into every aspect of everything we do. And it's really interesting because I got on certain things resistance that I really was not anticipating. And so for example, I think that AI, since you can... AI is really good at creating characters. And so for example, we have Infinite Books. It's a publishing company. Now, we're going to have AI do a lot of the drudge work that currently is a bottleneck at other publishers. Simple things like when you send your manuscript in and you're really eager to see what it looks like as a book, they're supposed to typeset it like it will look as a book. And if you've written a book you know that reading it that way, you're like, "Oh my God, I didn't even see this, this or this." And as the author is very eager for that in a traditional public... Having done four books, it takes a long time between when you give them that manuscript and when you get it back in a different format with hopefully some typos found and some other editorial choices. Well, with AI, we'll have it back to you the next day. But another thing I thought was, you know what? And I love to see that you referenced it, Dumas with The Three Musketeers used a writing team, and I was super bullish on this idea of... Every TV show, every movie, they all have writers' rooms. It's not a single author. And one of the things that I wanted was, " Let's create an AI writers' room."
I love history, so I particularly like the way Will and Ariel Durant write their books. Why don't I just download everything that Will and Ariel Durant have ever written and create an AI Will or Ariel Durant, not to write it, but to work with me? I've always believed that I was at best a co-creator. The universe and the muses and everything else, you got to give them credit because you're co-creating, but when I pitch this to a lot of authors, they're like, "Never. Never. That's an abomination." And so I don't think like that at all. And so I put a lot of my ideas into AI and I say, "Show me my weaknesses."
I did it with some of yours with a large language model. And so what I put in was I put in your idea that we're moving to an allocation economy. And so I did this with a commercially available ChatGPT from OpenAI, so anybody can do it, and I said, "So tell me, is this argument weak?" And it came back with, "Yes, when you simply say, 'Knowledge orchestration is the most important bottleneck'," the AI came back to me and said, "The statement is very broad and lacks specifics as to why." And then I said, "Okay." And then it goes, "Maybe data quality is the biggest bottleneck. Maybe computational power is the biggest bottleneck."
And then like I do with my own stuff, I said, "Oh, cool. So take what you've just criticized and give me the improvement." And it did. And so it goes, "Okay, so I think that what we should do is have concrete examples." And then it started listing them for me. And it's like, "Healthcare. Our case study will be IBM Watson for oncology. What was the issue? It struggled to deliver good diagnostic guides. It was not accurate in terms of predicting cancer. Solution: Enhancing the knowledge orchestration." And then it gave all of the examples by giving the framework of clinical guidelines, real world data, all of this. And then it goes, "This suggests that it really is knowledge orchestration." Because then it addressed each of the ones that it had said it could have been. Was it the data integrity issue? Was it this? And then it's like, "No." But then it just kept going and it gave financial services, JP Morgan, but it took it through everything. And I just do that by rote these days. Every idea I have, I throw it into... Well, we have our own internal AI stack, but you can do it with any of these. Why do you think that people are reluctant? Why do you think that? I just look at this as, "God damn it, this is the coolest tool in the world."
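Jim's two-step loop, ask the model to attack the idea and then ask it to repair what it attacked, is easy to script against any chat-style API. A sketch of the prompt sequence (the wording of the prompts and the function name are illustrative assumptions, not Jim's actual internal stack):

```python
def critique_then_improve(idea: str) -> list:
    """Build the two-turn prompt sequence Jim describes: first attack
    the idea, then repair each weakness with concrete examples."""
    return [
        {"role": "user",
         "content": f"Here is an argument: {idea}\n"
                    "Is this argument weak? List its weakest claims and "
                    "plausible alternative explanations."},
        {"role": "user",
         "content": "Now take what you've just criticized and give me the "
                    "improvement, with concrete examples for each weakness."},
    ]

messages = critique_then_improve(
    "Knowledge orchestration is the most important bottleneck.")

# Each turn would be sent to a chat model in order, appending the model's
# reply between them; any chat-completion API can drive this loop.
for m in messages:
    print(m["role"], ":", m["content"][:60])
```

The design point is that the critique and the improvement are separate turns, so the model's repair is forced to answer its own objections rather than restate the original idea.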
Dan Shipper:
I mean, I obviously feel the same way. I spend my whole life playing with this stuff, and I was honestly doing something so similar today. I was reading... One of the reasons why all this Greek stuff is on my mind is because I've just been on a Greek kick because I'm writing a whole piece that's tracing the philosophical emphasis on explanations and all that kind of stuff. And so as part of research for that piece, I'm reading this book called The Trial of Socrates by I.F. Stone and-
Jim O’Shaughnessy:
Oh, yeah, good book.
Dan Shipper:
Do you know it? Yeah. It's really cool, because Socrates is this intellectual hero. He's a martyr for a reason. He's at the base of Western culture in a lot of important ways. But the Athenians executed him. And usually if you read Plato, you're like, "Well, obviously the Athenians are stupid. Why would they do that?" And I.F. Stone is like, "No, he was kind of an asshole. He deserved it a little bit. He was pretty anti-democratic, and a lot of his pupils were the ones that led the overthrow of democracy a couple of different times in Athens prior to his execution. And Athens was remarkably tolerant of a lot of his anti-democratic ideas, until a couple of revolutions or oligarchic takeovers of Athens led them to be a little bit more sensitive about it."
So anyway, so I was reading that, and what's really fun is I have the new ChatGPT advanced voice mode, and I just opened it up, put it on the table, and as I was reading, A, I was asking questions. I was like, "Who is this person? Tell me the background. Give me a little background." And then I'd go back to reading. And then B, I think what's funny is, I think, the usual Platonic or Western case for Socrates is pretty biased. And I think I.F. Stone is biased in the exact opposite direction, which is funny. It's not balanced. He's just like, "Socrates sucks."
Jim O’Shaughnessy:
Right.
Dan Shipper:
And so he would say something in the book and I'd be like, "I'm curious. What's the other side of it?" And so I would just ask, "Yeah, what's the counter-argument here?" And it just gave me a really, really good counter-argument. And then I was looking for a more balanced perspective. And one thing that I find interesting is that with someone like Socrates, you either villainize him, or you knight him and think of him as a little bit of a saint and glorify his emphasis on rationality, on an internal idea of right and wrong, that both led him to found Western philosophy and probably led to his execution.
But I think when you look at it that way, you dehumanize someone like him. And it's really interesting to think of Socrates as a human and to think of him in psychological terms. And so I was like, "Okay, pretend you're an expert clinical psychologist and you're examining Socrates. How would you talk about him? Use clinical terms." And so it started talking about, "Yeah, he has this deep internal locus of control. He has extremely high standards. He has a rigid thinking style that can't accommodate social pressure from outside or any social control over him, and possibly his emphasis on Socratic irony or Socratic questioning is a defense mechanism: he has such a strong idea of what is right that he has to pretend like he doesn't know anything and ask questions so that he doesn't piss people off, but then he pisses them off anyway."
Anyway, it was fascinating. It was amazing. I loved it. It was so cool. And it felt like, yeah, a little bit of this co-created discovery of a new view of Socrates that I think is actually true and interesting and valuable. And so I think these tools are incredibly helpful for that and make me feel like I have superpowers. And as to why people are afraid, I mean, I think there's many different answers for many different people.
I do think writers in particular have this real attachment to their words and their thoughts and nothing interfering with that. I think you and I might be a little bit more open to it. A, I think we're just early adopter, nerd type people. But B, if you run a company, you're a little bit more used to the idea that even if you didn't do every single part of this, it still can be yours or you get to put your name on it to some degree. And so I think we're a little bit more used to that. Maybe a similar managerial skill or experience where it's like, "We did this all together and also it came from me." Both can be true in certain ways. And I think people who haven't had that experience may be a little bit more sensitive to the idea that they're giving up something really, really important by allowing something else into their process.
But I think that the actual, the more balanced view of creative work is that there's tons and tons of contributors to any given creative work. And it's never the precious thing where you went up on the mountaintop and you were uninfluenced by anything. You were sitting in an isolation tank and you're uninfluenced by anything. It's just not how it works. And I think we can debate the relative pros and cons or strengths or weaknesses of different influences on creative work, but I think it's inarguable that for a certain kind of creative work for a certain kind of person, language models are uniquely effective and empowering.
And I think there will be a new generation of creatives that use it to build stuff, and there will be new social norms around them. Right now, I don't know, it's a little bit shameful, especially in certain circles, to admit that you use it for writing. Or it feels a little weird to talk to ChatGPT if you're in a room with other people. All that stuff is weird. I don't think it'll be weird in 10 years or 20 years or whatever. I think people forget, and don't know, that a hundred years ago, maybe a little more than 120 years ago-ish, listening to music alone was considered weird, because music was a communal social activity that you made together with the people around you. And so listening to a recording of it produced a lot of the same feelings people now have listening to AI-generated music or reading AI-generated text. It's shameful. It's missing some core element.
And I think the core element that's actually missing is we have this whole sediment of experience and memory and culture that have grown up around the idea of recorded music that allows us to connect with it in this very deep way. And language models have none of that. And so I think people like us are drawn to them because they're new and interesting and we like that stuff, but they're missing the sediment. And the sediment is rich and important, and it's amazing.
And I think we tend to mistakenly think all these things are frozen in time and will always be the way that they are. And the reality is that the sediment is going to build up and in 30 years or 40 years, we're going to look at today as the golden age of when language models were so cool and so open. There's so much stuff that they're doing right now that, forevermore, they will never be so innocent. It's like the golden age of cinema was at the beginning of it before it turned into a Hollywood big mega machine or whatever. We're right there with AI stuff if you're willing to pay attention to it. And I think that the mores and the social value of it will change, but it'll take a generation.
Jim O’Shaughnessy:
Yeah-
Dan Shipper:
It will change, but it'll take a generation.
Jim O’Shaughnessy:
Yeah, I think that... Have you read Joseph Henrich's book, The WEIRDest People in the World?
Dan Shipper:
Yes. I love that book. It is literally sitting right in front of me.
Jim O’Shaughnessy:
Me, too. And one of the really good points I think he makes is that we have cumulative aggregate cultural evolution, and it stands outside of normal evolution. And it actually does change our physical characteristics. He makes the point that literate people's brains are literally shaped differently. What happens with highly literate people is that the ability to read colonizes the area that was given to visual acuity. And when you take a scan of an illiterate person's brain versus a highly literate person's, they're literally different shapes.
Dan Shipper:
Yeah.
Jim O’Shaughnessy:
And the idea is very consistent with what you just said. Unless you are really into it, you're going to need the aggregate cumulative cultural evolution to happen before you're like, "Oh. Okay, yeah, that's fine." And I think it's also, it's everywhere, by the way, right? Who are some of the artists that modern people think are the greatest artists of all time? Right? Probably a lot of Impressionists in there, Vincent van Gogh in there, et cetera. But when they came about, everyone hated them.
Dan Shipper:
Yeah.
Jim O’Shaughnessy:
And in fact, their name was given them as a term of derision, right? "It looks like a child's finger painting, the little impressions." Right? And so, like Van Gogh sold one painting during his own lifetime, and now he's considered the greatest artist of all time. And that fluidity, I think, a lot of it is due to the need for cumulative cultural evolution. And if one of your edges is that you can see that shit a lot sooner than other people, that's a really nice edge to have, because it's almost invaluable. And that is not meant in any way to diminish people who don't look at the world that way. I think Bucky Fuller had this great idea, which is, "Don't bang on people because they're not tuned in yet. If you're tuned in earlier, be happy about that. But remember, you're probably wrong." I always remind myself, "I'm probably wrong, I'm probably wrong." But being tuned in, if you are really into something like you and I are into AI, of course you're going to be a little more tuned in because you're all over it. And the example that Fuller gave was the microscopic world. Before we invented microscopes, we had no idea that there was this entire other world there. And then, I can never say the guy's name, Leeuwenhoek, Leeuwenhoek.
Dan Shipper:
Van Leeuwenhoek, I think. I don't know.
Jim O’Shaughnessy:
Thank you, thank you, thank you. And he's looking through and he is like, "Holy shit." He got tuned in. But then, how long did it take other people to get tuned in? It took decades.
Dan Shipper:
Really.
Jim O’Shaughnessy:
And so, I think that one of the changes is we're getting tuned in faster than we used to, but there's still a huge lag, and in that lag lie massive opportunities. Agree?
Dan Shipper:
I totally, totally agree. And that's one of the things that I feel so lucky to do, because it doesn't necessarily take so much smartness to see the world in this way. I just get to live it because I'm interested in it, right? And a lot of this stuff is just very simple projections out from my own experience. To some degree, because I have ChatGPT Advanced Voice Mode, I get to live a couple weeks or months ahead of other people, which is crazy. But because I sort of have been using this stuff and use it a lot every day, there are a lot of people for whom it will take a couple years or maybe more to do the same thing. And so, I get to see a little bit further ahead.
And obviously, that doesn't mean I'm some kind of genius or whatever or prophet or whatever. And obviously, it doesn't mean that other perspectives are not valid. I think there's a lot of AI skepticism that's warranted. I think in general, the discourse kind of goes from, "Everything is amazing and it's going to be AGI" to "Everything is going to, we're all going to die."
And I think both are wrong, and people with different personalities, everyone has a sort of place in the world. It's good to have people who are a little bit more conservative and aren't jumping onto the next thing all the time and are stalwarts of society and all that kind of stuff. It's great. But yeah, I think I wake up every day super, super energized because I'm like, "Holy shit. I get to see things that no one else gets to see. And all the stuff that I've been thinking about for a long time is finally I can do it." And I am lucky enough to be in a position where I have the time and the energy and the money and the team and the skills or whatever to take advantage. And so, I'm just trying to do it as much as I can, and it's super fun. I love it.
Jim O’Shaughnessy:
Same with me, and exact same feeling, right? It's like I sometimes get short-tempered with people in my family that I love like my wife, for example. She was one of the original AI haters. She's an artist, she's a photographer. And so, I decided that I was going to just kind of slow walk it with her. And then, finally I was just triumphant the other night when she's like, we're having dinner and she's like, "Hey, could your models give me examples of what I could maybe improve in my photography?" And I'm like, "Yes, they can, honey." "But they're not going to change it, right?" I'm like, "Not if you don't want it to. If you don't want it to, they won't change anything, but they can give you all sorts of suggestions."
Dan Shipper:
Yeah.
Jim O’Shaughnessy:
And then, she's kind of like, "Okay. Would you install that for me on my computer?" And I'm like, "Yes! Yes! Finally."
Dan Shipper:
That's great.
Jim O’Shaughnessy:
I have found that that is a better way to approach it, right? I am crazy. And one of the things that I'm also happy about is the cost of being a heretic today has really plummeted, right? In the old days, for most of human history, if you were not in the lane of society, boom, your head gets chopped off, you get burned at the stake. The authorities are like, "Heretic, kill him, burn him." And now we get our own podcasts. And the worst thing that could happen is people are like, "What a fucking idiot that guy is."
Dan Shipper:
I love it. Yeah.
Jim O’Shaughnessy:
Yeah. And so, my method is just to be like, "Hey, you might like this. Just give it a look-see," and not to push, right? Because you're never going to convince anyone that... Here's a great example. We're caring for my wife's 97-year-old mother. She's still got all of her marbles.
Dan Shipper:
That's amazing.
Jim O’Shaughnessy:
She's an amazing human. I want to interview her and put it on the podcast because literally, somebody born in 1920-whatever.
Dan Shipper:
Right.
Jim O’Shaughnessy:
And what she's seen in her life and everything. But one of the other things that I see, we have her living here with us. And she had to go to a rehab facility after she got Covid. And Dan, I've got to tell you, when I went to visit her, I was fucking horrified, because there were all of these elderly people literally unattended in their rooms. And it was the saddest thing that I had seen in quite a while. And literally walking down to her room, there was this poor woman in her room basically just repeating, "Would somebody please help me? Would somebody please help me?" And my heart got broken.
And so, immediately I started thinking, "Well, what about an AI use case that would literally be a companion, somebody that they could talk with? And then, maybe we could emulate their children, their grandchildren." And the reaction that I got from people that I thought would find that an exciting idea was that they kind of looked at me and said, "You monster."
Dan Shipper:
Yeah.
Jim O’Shaughnessy:
"You want a machine to take care of them?" And I'm like, " Okay. Well, what's better? Leave them in their room alone shouting, 'Will somebody please help me? Will somebody please help me?' Or giving them a companion?" And then they're like, they get all Plato perfect forms. "Well no, that should be the family in there with them." And I'm like, "Well, yeah, there's a huge difference between should and ought and real. Right? The fact is the family isn't going and doing that. And as idealized as that might be, that ain't happening. So why are you objecting to the ability to at least give these people some kind of comfort?" But the pushback, oh my God, they looked at me like I was Satan himself.
Dan Shipper:
Yeah. I've felt that too from a lot of people. And I think that there is something to the... There's the tech bro kind of caricature archetype that's like, "I'm going to fix everything and whatever." And obviously, that's not great.
Jim O’Shaughnessy:
Right.
Dan Shipper:
But I think that there's this other thing going on the other side, which is that there is a sort of moral judgment aspect to new things, which I think sometimes gets in the way of the pragmatic, "Is this going to help or not?" And I think being able to factor that out, the moral, it's almost disgust element, the moral disgust reaction, will open up a lot of these types of things.
When people think about, I don't know, AI boyfriend, girlfriend type things, it's the same kind of disgust feeling, but what people sort of forget is there are a lot of people out there who have real problems communicating verbally in real time, in person with people. And chatbots are the first time that they can actually have what feel like real social interactions and practice their social skills and have people have something that gets them and is patient enough to be with them in the way that they want someone to be with them. And you can't argue with me that, "Oh, it's not real. They should just be sitting in the room alone or whatever, or they should get out there or whatever." It's like, "No, no, no. This is super helpful for people that need it."
And yeah, I think it's probably a bad idea to create chatbots that are designed to take people and get them sitting in their room not interacting with other people and being addicted to it and whatever. There's probably some ethical line, but I think it's incredibly valuable. Even for me, I have plenty of friends and I have family that I love and I have a girlfriend and whatever. I have plenty of social interaction, but just having that thing to talk to where I'm like, "I'm reading a book, can we talk about it?" I don't have people that I can talk to like that about all the different things I'm into. I have people for certain things, but sometimes they're not available. And it is actually a deeply life-enriching thing if it's used correctly. And I think I feel very lucky to, A) live in that time. And then B) yeah, I do a lot of that gentle pushing with people around me, or hopefully in the writing that I do or the podcasts that I record or whatever. Because going back to the example of your wife, photography, for example, faced this very same thing when it first came around, right? People were very suspicious that it was going to unemploy painters, and that it didn't take any skill to just look through a lens and press a button. And it turns out actually, it takes a lot of skill, but we don't think of it as a new technology, or a technology at all, because it's been around for so long. Same thing for writing books or whatever, same exact thing.
And I think you can talk about that, but in my experience, generally, some people resonate with an argument like that, but mostly it doesn't really resonate. So what I just do is I'm like, "You should try Claude for that." And then they're like, "No." But I keep kind of doing it. And eventually, they're like, "Oh, interesting. This is kind of cool." And I'm like, "Yeah."
And that's sort of what the show I do is about, too, because I think what I try to do is interview people where we're not just talking about it, we're actually going through and looking at their exact use cases, so like, "What are some historical chats you've done with XYZ model?" And then, we actually use it together. And I think that's a really great way to allow people to put themselves in the place where, "Oh, I can see how I would use it because I can see how someone else is doing it." Where I think talking about it in the abstract, it's such a general purpose tool that people have a hard time connecting with it. And so yeah, the more people just see other people doing it and get to try it for themselves in specific ways that are helpful for them, I think the better.
Jim O’Shaughnessy:
Could not agree more, Dan. And as I said in the beginning, I'm getting my hook here because we're at the 90-minute mark. I hope that this is the first in a series of chats between you and me about AI, the good and the bad, right? So I am not Panglossian about this at all. I understand that we're going to create a whole new set of problems. Hopefully they're going to be better problems, and hopefully the AI will help us solve those problems. But we didn't get into any of the downsides, of which there are many.
Dan Shipper:
Yeah.
Jim O’Shaughnessy:
But it's kind of like the quote from... I can't remember whose line it is, but it's a great quote. It's like, "We invented fire and guess what?" Or "We discovered fire," a better way of putting it. Fire is really dangerous, but it's also the reason for our prefrontal cortex when we started cooking our food, that's what created this executive up here. And so, instead of banning fire, we had fire departments, fire alarms, fire extinguishers, firemen and women, fire retardants, et cetera instead of... The only kind of argument that I have a really hard time with is the folks that I dubbed the Lunatic Fringe. We'll leave names out of it, but a certain someone calling for strikes on GPU facilities I think does not advance the conversation.
Dan Shipper:
I agree.
Jim O’Shaughnessy:
But I think that I'm very willing to talk about the challenges because there'll be a lot. But this has been absolutely fabulous. I'm very excited because I can't wait for the next time I talk to you.
Dan Shipper:
Me, too.
Jim O’Shaughnessy:
But in the interim, first off, tell everyone where they can find you. I know where I can find you; I'm an early Every subscriber. But tell our audience where they can find you, and then I've got a final question for you.
Dan Shipper:
All right. So you can find us at every.to and subscribe there. I'm also on X at @danshipper. And you can also subscribe to my podcast, AI & I, on YouTube, Spotify, Apple Podcasts, or wherever you get your podcasts.
Jim O’Shaughnessy:
Perfecto. All right. You're going to get more than one bite at this apple. If you've heard any of our other podcasts, you know that at the end we make you the emperor of the world, and that you can't kill anyone. You can't put anyone in a reeducation camp. You can't write the book The Republic because you're a reactionary and you lost the Peloponnesian war. But what you can do, what you can do is we're going to give you a magic microphone, and you can say two things into it. And the two things that you say are going to incept the entire population of the planet. The next day, they're going to wake up, whatever their next day is. They're going to wake up and they're going to say, "Man, I just had two of the best ideas. And unlike all the other times, I'm going to act on both of these ideas today." What are you going to incept in the world's population?
Dan Shipper:
This is interesting. I did not prepare for this. But the two things that came to my head first were be curious and be kind.
Jim O’Shaughnessy:
I love both of those. Wow. The world would be such a better place if those were the two things that animated the entire population.
Dan Shipper:
Yeah.
Jim O’Shaughnessy:
Because the final thing I'll say is that everyone says that the only constant is change. I used to really entirely embrace that until I read this kind of crazy philosopher, who I'm looking for his name here, I actually wrote it down because I forgot. Bertrand Hitzler, who wrote a book on consciousness, and he made a great point. And he was like, "No, for human beings, the ultimate aim is growth." And then he goes through all of the things like, "Everything else in the universe grows until it reaches its potential." He uses plants and trees and all of that. And he goes, "But we can't physically grow, or we topple over. What is the one thing where we can have unlimited, infinite growth? Our minds."
And so, I'm a huge believer that curiosity is what leads to that growth and kindness. And there's a huge difference between someone who is kind and someone who is nice. I come from Minnesota and they have this thing, "Minnesota nice." And when I was going back there, I turned to my wife and I said, "Welcome back to the land of false sincerity and pretend nice." But there's a huge difference between being nice and kind. I endeavor and try to be kind. I'm not particularly nice, though.
So this has been absolutely great. I'm going to have my nanny, also known as my executive assistant, who literally tells me what I'm going to be doing the next day. I have no idea. I get an email from her every night and I'm like, "Oh, that's tomorrow. That's awesome. I've been looking forward to talking to Dan this whole time." But thank you so much. I love everything you're doing.
Dan Shipper:
Thank you.
Jim O’Shaughnessy:
And I can't wait for our next conversation.
Dan Shipper:
Awesome. Thanks for having me, and I feel the same way. I'm excited to do it again.
Jim O’Shaughnessy:
Terrific. Thanks, Dan.