What if our objective-driven culture is stifling our creativity?
Ken Stanley, AI researcher and author of “Why Greatness Cannot Be Planned,” joins me to explore why ambitious objectives can blind us to the stepping stones that make breakthroughs possible. Ken is the inventor of the novelty search algorithm and co-creator of Picbreeder, a crowdsourced evolutionary art experiment that has led to important insights about our objective-obsessed culture.
This conversation covers why vacuum tubes had to come before computers, how the path you take to success matters more than the success itself, the “fractured entangled representation” hypothesis, why grant applications kill innovation, how education beats the playground mentality out of children, and why our hunch about what’s ‘interesting’ is often decades of experience distilled into intuitive judgment.
I hope you enjoy this conversation as much as I did. We’ve shared some highlights below, together with links & a full transcript. As always, if you like what you hear/read, please leave a comment or drop us a review on your provider of choice.
— Jim
Links
Our Substack is growing! Subscribe below for more brain-tickling content designed to make you go, "Hmm, that’s interesting!"
Highlights
Pursuing Interesting Things is Not Being ‘Random’
Ken Stanley: “If you think that I’m saying because you pursue novelty and interestingness that you’re acting randomly, then you don’t understand what randomness is. The point though is that when I make that identification of something being interesting, I’m using my entire life experience, all of my intellect, to make that designation of “interesting”. In fact, if you think about it, there’s more information informing those kinds of decisions than [those that] inform objective decisions. Because when you’re talking about a far-off objective, you know very little about that situation that’s in the future, in a fog. I don’t know the context, how we got there, what are the stepping stones that lead there. But I do know everything about the past. I know everything that I’ve ever experienced very well and how it helps to tune my interestingness compass towards pointing in a certain direction. And so it is the complete opposite of random. It’s taking advantage of the intuitive understanding of the world that people have built up over decades, which is something that we deny ourselves in our society.”
Serendipity = Risk Taking
Ken Stanley: “Obviously in order to maximize the opportunity for serendipity, you need to take some risks. And you’re not obligated to take risks. You can play it safe. But the one thing that I can guarantee is that if you don’t pursue things that interest you, you won’t do anything interesting. That’s just how it works. Now, it’s just that you also might not succeed. So, you know, there are different risk tolerances. And I think people [might]—depending on their tolerance—build up a cushion behind themselves before taking their risks, but other people are willing to take risks right away. And so it just depends on the person. But ultimately interest, serendipity is something that you can increase the probability of happening, but you just have to recognize you don’t know where you’re going.”
Deception is Everywhere
Ken Stanley: “…deception is just completely rife throughout complex spaces. That’s the way that reality is structured. So to give a concrete example that shows we don’t know where stepping stones lead: if you think about vacuum tubes leading to computers, the first computers were built on vacuum tubes. But the question is, were the people who worked on vacuum tubes before that thinking about building a computer? And the answer is no.
And actually it’s an interesting thought experiment. If you went back to the year 1850 or something and got a bunch of brilliant scientists working on vacuum tubes in a room and told them, ‘ditch this boring stuff, just build a computer, that would be much more interesting than this tube. Who even needs this?’ Well, we would have no computers or vacuum tubes ... And the reason is because the world is deceptive. It’s not clear that a vacuum tube is important for computation. So the only real way to get the vacuum tube registered into the world is to not care about computers, which is totally counterintuitive.”
How the Desire For the Objective is Drilled In Us
Ken Stanley: "…you know, sometimes I think about children because I think you don’t see it in their firmware. You bring a little toddler to the playground, they’re not going to say, ‘what are my goals for today? I’m going to learn how to climb that monkey bar. That’s what I like. And then I’ll be better at monkey bars next time’. They don’t think objectively at all. It’s just like, they see a playground. It’s just endless opportunities, and that’s really fun.
And so maybe it seems like to me that it’s been beaten out of us culturally because as soon as you start the first grade, then you’re basically saturated with goals. This is what you should do. This is what you need to accomplish. And you’re taught through reinforcement that if you do the thing that you were told to do because it was your goal, you will be rewarded. And reward after reward over decades.
People should not lose that playground mentality completely over the course of life. And the fact we beat it out of them is... this is the opposite of what education should be doing.”
🤖 Machine-Generated Transcript
Jim O’Shaughnessy: Well, hello, everybody. It’s Jim O’Shaughnessy with yet another Infinite Loops. I have been looking forward to today’s guest since, well, maybe before I even had this podcast, because his book, Why Greatness Cannot Be Planned, just hit me dead center. Because I’m like, that’s what I think. My guest is Ken Stanley, whose decades of work on complex creative domains, from biology to technology, show that ambitious objectives can actually blind us to the stepping stones that make breakthroughs possible. Ken, welcome. And my first question is, are we deterministic thinkers living in a probabilistic world?
Kenneth Stanley: Well, thank you. That’s a good start. Thank you for having me. So are we deterministic thinkers in a probabilistic world? That is a deep philosophical question. I think for practical purposes I would not think of us as deterministic, but maybe you’re getting at a deeper point, so maybe I should let you elaborate the question.
Jim O’Shaughnessy: Yeah, yeah, I think often we are deterministic thinkers living in a probabilistic world and often hilarity or tragedy ensues. And I guess by that I mean I’ve always kind of naturally been a nonlinear kind of experimental guy and I caught a lot of hell for it. When I was a teenager I used to go to Tower Records and buy dozens of CDs at random just to see what I was missing. And I would go home and my parents would say, what are you doing? And they were nonplussed to say the least. I very much have guided most of my life by the kind of stepping stone path that you embrace and do such a great job of explaining to people. And I often think that we let objectives or, you know, we must achieve A, B and C. We let those blind us to all of the other letters in the alphabet.
Kenneth Stanley: Yeah, I see where you’re going. Yeah, I missed the opportunity there to connect. Yeah, I see what you mean. Now, as you say, we’re deterministic thinkers. It’s true that at least culturally, I think we do tend to try to make the world deterministic in our thinking and we set objectives as just a matter of routine and we think that kind of a tidy laying out of objectives over time is going to get us to where we want to go. But the world doesn’t actually work that way. And so it can lead to trouble. And that is kind of a theme of a lot of the work that I’ve done. And so I think we’re aligned on that point. Yeah, that is kind of how the world is.
Jim O’Shaughnessy: Another thing that has always kind of worried me, and maybe that’s not the right term, but the desire for certainty seems to me to be one of the biggest bugs in the human OS, and it leads us to vote for the candidate who proclaims things are A, B and C and has a very clear, easy-to-understand platform, whatever. I mean, CEO, politician, all of the above. And your work, you bring receipts; you’ve turned a lot of this into code: NEAT, HyperNEAT, and all of your other work. If you wouldn’t mind, for listeners and viewers who aren’t very familiar with your work, let’s start there. Let’s start with your early work, and then we’ll get to the newest work that I’m really intrigued by.
Kenneth Stanley: Sure, yeah. And by the way, I totally agree with what you’re saying about certainty or it’s just actually, I think it comes out in certitude. Online you see so much certitude and that is to me frustrating that people can’t express uncertainty. Everything has to be definitely the way things are and this is definitely the correct opinion. It’s probably not great for public discourse. So, okay, just to transition to my own work then. Where do I come from and where does it all begin? I was originally interested in AI, but in a particular aspect of AI. I mean, this goes back to basically age 8, something like 8 years old. I was always interested in AI, but the particular aspect of AI I was interested in was I was hoping that the computer would do something surprising. That was always what interested me.
Early on, around age 8, I would write code in BASIC and I would get the computer to have a conversation with me. It definitely wasn’t ChatGPT, but it would say things like, hey, what’s your name? And I’d say Ken and it’d say, hi, Ken. And I would impress myself for a short time that I could do that. But what really started to annoy me was just that I knew what it was going to say and I wished it would say something interesting. And that kind of grew into an interest in what kinds of processes lead to unpredictable, interesting types of innovation over time.
And it’s both from the perspective of how do humans over time innovate and invent things that are not necessarily predictable, but also how do things like humans get invented over the evolutionary process that produced us, or biological evolution. And so by the time I got to grad school, the thing that I was zeroing in on was how does a process like evolution succeed in creating what appears to be increasing complexity over eons? And of course not every evolutionary lineage necessarily is increasing in complexity, but so many lineages, including our own, have astronomical increases like up to the 100 trillion connections in the human brain. And I was trying to understand algorithmically, because I’m in computer science, I’m not a biologist, I wanted to understand algorithmically how can a process like that actually happen inside of a computer?
Because at that time I believed that probably the best way to get to human-like AI is to understand the process that generates things like us, rather than trying to reverse engineer how we work. That was sort of how I saw things. I thought reverse engineering a 100 trillion component machine is probably really hard, but just trying to recapitulate something like evolution, which looks relatively simpler to explain, might be easier to get off the ground faster. And so that’s what led to inventing this thing called NeuroEvolution of Augmenting Topologies, or NEAT, which is basically just an evolutionary algorithm that evolves neural networks, the same things we have today in large language models, though much smaller back then, and evolves them toward increasing complexity.
So over generations of this evolutionary process in the computer, the neural networks would get more and more complex. And the NEAT algorithm was an attempt to abstract that kind of a process into the computer and it could indeed evolve increasingly complex artificial neural networks. That’s sort of where I got started. That was my PhD dissertation, on these increasingly complex networks. There’s a long story from there to get to today or through how we end up having a book which is sort of an anti-objective manifesto, but maybe I should pause there since we got to the beginning at least.
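[Editor’s note: for readers who want a concrete feel for the “augmenting topologies” idea, here is a minimal, illustrative Python sketch, our own toy under stated assumptions rather than Ken’s actual implementation. Real NEAT also uses crossover, speciation, and historical innovation numbers; this sketch only shows the complexification loop, where networks start minimal and structural mutations occasionally add nodes and connections.]

```python
import random

# Toy sketch of NEAT-style "complexification" (illustrative only).
# A genome starts as a minimal network; mutations add structure,
# so complexity can only ratchet upward over generations.

class Genome:
    def __init__(self, n_inputs, n_outputs):
        self.nodes = list(range(n_inputs + n_outputs))
        # Start fully connected, inputs -> outputs, with random weights.
        self.connections = {
            (i, n_inputs + o): random.uniform(-1, 1)
            for i in range(n_inputs) for o in range(n_outputs)
        }

    def mutate_add_connection(self):
        # Link two previously unconnected nodes.
        a, b = random.sample(self.nodes, 2)
        self.connections.setdefault((a, b), random.uniform(-1, 1))

    def mutate_add_node(self):
        # Split an existing connection with a new node in the middle,
        # initially preserving behavior (weight 1.0 into the new node).
        (a, b), w = random.choice(list(self.connections.items()))
        new = max(self.nodes) + 1
        self.nodes.append(new)
        del self.connections[(a, b)]
        self.connections[(a, new)] = 1.0
        self.connections[(new, b)] = w

g = Genome(n_inputs=3, n_outputs=1)
for _ in range(200):                      # stand-in for generations
    if random.random() < 0.05:
        g.mutate_add_node()
    if random.random() < 0.10:
        g.mutate_add_connection()
print(len(g.nodes), "nodes,", len(g.connections), "connections")
```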
Jim O’Shaughnessy: Yeah, I think what’s super cool to me about NEAT, to more fully explain it for everyone listening and watching, is you trained it for people to focus on what caught their eye, right? And stop me if I’m butchering the explanation here, but out of just that focus of what caught their eye, really incredible images like skulls, like a variety of things appeared. If I’m remembering correctly, I played around with it quite extensively and I was just so fascinated. And I personally saw, I’m sure you’re familiar with the Game of Life. That was, I think that was in the ‘70s, right. And I saw a clear connection there. I also, I don’t know if you’re familiar with the work of Howard Bloom, who wrote the book The God Problem. If not, you might like to read it because he basically spends the entire book trying to answer your question. He of course assumes a godless cosmos, right? That’s his stipulation #1. And he’s like, okay, if you grant me, just for sake of argument, that there is no watchmaker, how on earth did the cosmos do what it did? And the parallels to your work I find extremely fascinating because he’s of a very similar mindset to you. He also does... he’s kind of wacky, which is one of the reasons I love him. But he studies nature, bees, ants, termites, and, you know, after reading his The Lucifer Principle and The God Problem and a couple others, Global Brain, his thesis is pretty simple. It’s without exploration, without the... what he calls Bohemian bees, none of it would work.
And the idea that hit me was, I wonder if that’s one of the reasons we went through what we now call the Dark Ages. I wonder if the Catholic Church became the most powerful institution on the planet and really didn’t take kindly to heretics and Bohemians and burned them at the stake. Well, we got a thousand years of darkness. How would you react?
Kenneth Stanley: Yeah, it’s true that there’s a connection between this fundamental question of how can things like us exist if nothing is really in charge deciding how the process should work and the kinds of algorithmic principles that I’m interested in. They’re kind of trying to answer the same question. You know, the lack of supervised guidance in evolution is really interesting, because it’s like evolution looks like a creative genius in some ways, but it’s not a person. It doesn’t have a mind or initiative of its own. But it’s creating things that even today defy engineering and our ability to produce. And so explaining that is very important, I think, from an algorithmic perspective. Like, what kind of algorithm is this? And you’re observing... are you mentioning the pictures that you saw with NEAT?
And I just wanted to make that connection, which is also relevant to this, but that happened a little bit after NEAT, a couple of years after. NEAT originally was not about pictures. It was just using these evolved neural networks to do, usually, reinforcement learning tasks, like getting some kind of robotic system to have some behavior that you want. And it was a couple of years later that people started to have this idea that you could take these evolving neural networks, which are increasing in complexity, and have them generate an image, and you could be the one to decide which ones should breed and create the next generation. So it introduces humans into the loop. This actually wasn’t my idea.
And it goes back to evolutionary art, which is a whole area that precedes me by many years. But people’s idea basically was let’s combine evolutionary art with NEAT. And they started doing that in standalone apps, which are not... which are little known today. Hardly anyone knows that these existed. But they were called NEAT-based genetic art apps.
Jim O’Shaughnessy: Yeah, that’s what I was... You’re right, my bad for... Yeah, I should have made that more specific.
Kenneth Stanley: Yeah, so I started playing with those. I mean, I was amazed by them in some way because NEAT was my idea, but the idea of generating art with it was not my idea. And it was so cool. It was something I hadn’t thought of doing with it. One of the things about it is that, because these neural networks are increasing in complexity, you can actually see the increases in complexity visually through the evolutionary process with the art. So it gives you kind of an intuitive lens into what’s going on inside, like how complexity is increasing over time, by seeing the images on the surface. And so for me, it was very educational. And many people maybe saw it more like a toy, like a way of getting some pretty image or something, or just exploring spaces of possibilities.
But I felt like I was learning something really deep when I was playing with it. And it led us to have an idea, initiated by Jimmy Secretan, who was a student at the time in my research group at the University of Central Florida. And he said, you know, maybe we should actually create a crowdsourced version of this. Not just a standalone app, but put it on the web for everybody. And then if you evolved something cool, I could see it online and I could continue breeding it from there. And I thought that was just an amazing idea. It’s so interesting because it means the evolutionary process goes basically forever.
And maybe we could see something analogous, very different and much less grandiose than, say, evolution on Earth, but in some ways analogous to it: this continual divergent process of exploration that never ends and increases in complexity. And it would just be image space. But it sounded really amazing. I mean, it had never been done before. So that’s what led us to create this Picbreeder system, which is the system that most people are familiar with, this kind of NEAT-based genetic art through Picbreeder.
So we put together Picbreeder and put it out on the web, and indeed many people came, a couple thousand people probably, which isn’t huge by any modern standard, but it was enough to get this large evolutionary kind of tree of life to grow inside the system on the Internet. And we saw an unbelievable flowering of discoveries. And you might think, why is it unbelievable? What’s so surprising about this? And the reason it’s surprising is because of the space that you’re in, which is built on a special kind of neural network that we call a CPPN, a compositional pattern-producing network. The vast majority of things that this space can draw as images are just total garbage, like 99.9999% of the space.
And this relates back to the question, you know, that you framed in what you asked last: how can there be all this complexity in the universe without any kind of guidance, when most things, just random constellations of particles, are just garbage, just space waste? So where does all this interesting stuff come from? And Picbreeder sort of begs this question too, because you’re seeing, over and over again, astonishingly meaningful discoveries, like a butterfly or a skull or something that looks like the planet Jupiter or a car, in a space where those are an infinitesimally tiny part of what’s possible. And you have to explain that, and it makes us want to explain it, even though there are humans in the loop. And that’s a big difference from evolution.
But even with that, how could they possibly have consistently found these unbelievably unlikely needles in a haystack over and over again? There’s got to be some reason for that. And so that led to, I think, a lot of insights about how systems like this really work.
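[Editor’s note: a CPPN, short for compositional pattern-producing network, paints a picture by answering, for every pixel, “what intensity belongs at coordinate (x, y)?” The Python sketch below is our own toy approximation, not Picbreeder’s code; Picbreeder’s CPPNs are evolved NEAT networks, while this fixed one-layer version only illustrates the coordinates-to-intensity idea and why most random genomes in such a space render as noise.]

```python
import math
import random

# Toy CPPN-style renderer: brightness is a function of pixel coordinates,
# built from a few pattern-friendly activation functions.

ACTIVATIONS = [math.sin, math.tanh, lambda v: math.exp(-v * v)]  # Gaussian

def random_cppn(hidden=4):
    # Each hidden unit gets an activation and weights for (x, y, d, bias),
    # where d is the distance from the image center.
    return [(random.choice(ACTIVATIONS),
             [random.uniform(-2, 2) for _ in range(4)])
            for _ in range(hidden)]

def query(cppn, x, y):
    d = math.sqrt(x * x + y * y)          # feeds in radial symmetry
    total = sum(f(w[0] * x + w[1] * y + w[2] * d + w[3]) for f, w in cppn)
    return math.tanh(total)               # intensity in (-1, 1)

def render(cppn, size=24):
    for row in range(size):
        line = ""
        for col in range(size):
            x = 2 * col / (size - 1) - 1  # map pixel grid to [-1, 1]
            y = 2 * row / (size - 1) - 1
            line += "#" if query(cppn, x, y) > 0 else " "
        print(line)

render(random_cppn())  # most random genomes come out as blobs or noise
```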
Jim O’Shaughnessy: And then specifically, let’s keep going on that. So let’s go on to the insights, and then to the additional hypotheses and theses you came up with.
Kenneth Stanley: Yeah, so there were really cool insights. Actually, recently there’s been another insight; there are really two deep insights. But the more famous one, that’s been around for a while, is this insight about objectives. And it happened after Picbreeder had been around for a couple of years. You know, it’s worth noting that we didn’t really have a hypothesis. Like, what’s the scientific hypothesis? A lot of people think the scientific method is you have a hypothesis, then you do an experiment. This didn’t work that way. This was more like, let’s just do the experiment, then we’ll figure out what the hypothesis should be. Because I was sure that if you just crowdsource the whole world in this complexifying space, something interesting will happen.
I wasn’t exactly sure what, but I was sure we would learn from it. Part of the reason is because a real challenge with evolution on Earth is that you have to go into the fossil record to actually know anything about what actually happened. It’s like digging in the dirt. Enormous amounts of effort are involved, and not everything in the chain is even findable. And so there’s a poverty of information to really reconstruct the entire chain of events. But what’s really cool about Picbreeder is that we do know everything. There’s no digging in the dirt. Every single step of every single choice has been recorded. And so that made me think we’re going to be able to extract a really deep lesson here. I don’t know what it is, but we will.
And indeed, that’s what happened. And it happened in a way that I didn’t expect. But basically what happened was I was playing with Picbreeder myself and I saw an alien face. It looked like E.T. from that movie E.T., that ‘80s movie, if anybody knows that. It looked a lot to me like E.T. And I thought... and someone else had bred this face, so it wasn’t me, but it was on the site. So I thought, oh, this would be fun, because I can get more aliens, because that’s kind of how it works. Like, once somebody finds something, you can get variations of it. So I was like, let’s get variations of E.T. And I was playing this game. But then this really weird thing happened, which is that his eyes, they started to descend down the face.
And I suddenly noticed it looked like wheels. It’s like he has a really kind of flat, thin face. And so when the eyes went to the bottom, it looked kind of like a car with wheels underneath. And it struck me that I was in car space now, no longer in alien face space.
And so after that I could steer it to something that looked pretty much like a car. And you may think, well, great, good for you, you found a car. Like, what’s so exciting about this? But actually, to me, it was a huge epiphany. Life altering, actually. Maybe I didn’t know at the time that it would be life altering, but it was huge, because I started to become obsessed with the observation that I wasn’t trying to get a car. You know, I hadn’t been thinking about cars. And it occurred to me that if I had been, if I had wanted to get a car, I would not have chosen that alien face. You know, I would have thought, there’s no way that’s going to become a car, because it’s a face. But it turned out I did need it.
And the only reason I got it is because I wasn’t trying to get a car. So I was thinking, how could this be? Because it seemed to defy a lot of lessons I’d learned in engineering and computer science, that the way you actually accomplish things in this world is that you set a goal and then you move towards it deliberately. And this is a really deep cultural thing. It’s not just in computer science, but it’s across our culture. In fact, I would say it’s the world culture. It’s not just Western culture, it’s the whole world is like this now. And I was thinking, here’s a case where the only way to achieve this thing would be to not be trying to do it. And so I thought, okay, let’s see.
Now I’m curious, how often does this occur in Picbreeder, in the history of all the images that have been discovered? You know, we’ve got butterflies and skulls and planets and other animals and faces and there’s all kinds of stuff there. So we said, okay, let’s go back and look at the history. And it turns out that almost every single time this is the story. I would have thought that this is some kind of lottery win, like where it’s extremely lucky. Like this would never happen again, like a huge coincidence.
But actually that’s always what happens: somebody finds something that does not resemble the ultimate discovery, and then the person who makes the ultimate discovery branches from that serendipitous discovery, realizes that it leads to something else, and then is opportunistic and gets that other thing. And so it’s not an exception, it’s the rule. The only way to find things in Picbreeder is to not be looking for them. And so that generalized the observation into an actual general principle. So then I became obsessed with this after we saw that, because I was like, this must mean something deeper than just Picbreeder. Like, what does this mean? Because it violates everything I’ve ever been taught, and it seems to be a principle. And I started imagining weird things.
I was thinking, imagine trying to learn how to race a car, because a lot of neuroevolution, evolving neural networks with NEAT, was involved in stuff like trying to teach cars to go around simulated racetracks and things like that. So I was thinking, what if instead of trying to get it to go around the whole track, with that as the objective, we just say, keep trying to do new things? So it’s like, you crash into one wall, but you say, don’t do that again. So it crashes into another wall. But I was thinking, if you just kept playing that game, it would eventually drive around the track, even though you never told it to drive around the track, you never rewarded it for going farther forward.
It’s just that in the process of trying to find interesting new stuff, it’s inevitable that eventually you have to learn something about the world. Because eventually all the silly things to do that are just trivial to find would be exhausted. And then you’re going to be forced eventually to find something interesting. And I think that’s sort of a description in a way of what’s happening in Picbreeder. People were finding things like circles and lines and curves and things that are not very interesting. But as they increase in complexity over time, they’re just trying to look for anything more interesting. And then when they find something, it becomes a stepping stone. And then more interesting things can be discovered from there. Just like the alien face leads to the car.
And so after probably several months of obsessing over this, I started to think there’s an algorithmic basis. Like, we could actually describe this process. And we ultimately called it novelty search. I did that work with Joel Lehman. It was actually, in fact, his first day of grad school. I remember it was a few months after this revelation, and I just poured this whole point onto him. I just overwhelmed the poor guy on his first day. I was like, you don’t need objectives. We could do amazing things just looking for novelty, and this is so exciting. And he was very nice and kind of grabbed onto this point to see if he could actually implement it as an algorithm.
And it turned into the novelty search algorithm, which was impactful within its field. In some ways, though, I view it as a validation of a deeper philosophical point rather than just an algorithm that has utility. It’s more important, I think, for the deeper point it makes, which is that novelty search often discovers a better solution to something than the alternative version, which would be actually trying to solve the problem. Which is so paradoxical. And some people think, oh, well, what you’re saying basically is that the best way to solve your problem is to put novelty search on it. But that’s not my point. Because novelty search, clearly, without any clear, distinct objective, cannot be guaranteed to solve any particular problem. It has no particular goal.
Rather, I think the lesson is that it’s extremely embarrassing and concerning that often it does solve the problem better than something that actually knows what it’s trying to solve. And it should make us all very concerned about the actual downside of being too obsessed with pursuing a singular objective.
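[Editor’s note: the core of novelty search is simple enough to sketch. Instead of scoring an individual by progress toward a goal, it is scored by how far its behavior sits from its nearest neighbors in the current population plus an archive of past behaviors. The Python below is our own minimal rendition with made-up parameters, not Ken and Joel Lehman’s original code.]

```python
import random

# Minimal novelty search loop (illustrative). A "behavior" here is just a
# final (x, y) position, e.g. where a simulated car ends up. Novelty is the
# mean distance to the k nearest behaviors seen so far; nothing rewards
# reaching any particular place.

K, THRESHOLD = 5, 0.3          # assumed parameters, chosen arbitrarily

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def novelty(behavior, others):
    nearest = sorted(dist(behavior, o) for o in others)[:K]
    return sum(nearest) / len(nearest)

def evaluate(genome):
    # Stand-in for a real simulator (maze robot, race car, ...).
    return (genome[0] + random.gauss(0, 0.05),
            genome[1] + random.gauss(0, 0.05))

archive = []
population = [(random.random(), random.random()) for _ in range(50)]
for _ in range(100):
    behaviors = [evaluate(g) for g in population]
    pool = behaviors + archive
    scores = [novelty(b, [o for o in pool if o is not b]) for b in behaviors]
    # Sufficiently novel behaviors are banked as stepping stones.
    archive += [b for b, s in zip(behaviors, scores) if s > THRESHOLD]
    # Select for novelty alone: "do something we haven't seen before."
    ranked = sorted(zip(scores, population), reverse=True)
    parents = [g for _, g in ranked[:10]]
    population = [(p[0] + random.gauss(0, 0.1), p[1] + random.gauss(0, 0.1))
                  for p in random.choices(parents, k=50)]
print("archive holds", len(archive), "distinct behaviors")
```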
Jim O’Shaughnessy: Again, philosophically, I’m a huge fan of Taoism and the Dao De Jing. And if you’ve read anything about that philosophy, there’s a principle called wu wei, which is actionless activity. And it parallels quite nicely with what you’re talking about here. The challenge, so I’m in business, right? And business loves its org charts and people fight like hell to be somewhere special on that list. So I ran into a lot of trouble trying to encourage this type of thinking. A lot of my earlier employees would look at me like I had three heads, like, what are you talking about? You’re crazy. Why would we just randomly put things together? And so I came up through asset management and quantitative analysis, and literally I was fascinated in a similar way with just throwing algorithmic factors together and seeing what emerged.
And just like you said, 99% of it was bullshit. It was just nothing. But the 1% that wasn’t was astounding. And it’s like I would have never ever thought of that. And you know, right now, for example, I’m fascinated by smart swarms. I think that with the interconnectivity that we now have globally, with the diversity of minds that are on that network, a lot of this stuff is now happening just naturally. But if you took a lot of people who were playing with smart swarms and everything else and you told them what you just said, they would also kind of look at you like, oh no, I have objectives. And it seems like a real cultural thing that is going to require quite a fight to overcome.
Kenneth Stanley: Yeah, yeah. And that is something that I started to see the cultural connection with over time, after the novelty search algorithm. If you can imagine, at first, the way I saw my job was just to do AI. I wasn’t thinking about a cultural angle on this. I was just thinking this is a very interesting algorithmic insight for the field of AI, that you don’t always have to have objectives, and actually sometimes it’s better not to. And I was giving a lot of talks about that because it is contrarian. So I think people started inviting me a lot to give talks because contrarian views are kind of entertaining. And so I was giving a lot of these talks, and I did gradually start to appreciate that, interestingly, this is not just about AI.
You know, people would ask questions like, well, I have objectives at my company, should I think about that differently? Which is a really strange question at an AI conference, which is usually about the algorithms. And eventually I was actually invited to speak at the Rhode Island School of Design to a bunch of artists, because someone who was at that school attended a computer science conference and said, hey, I think these artists would find this interesting. And it was there that I really came to understand that the cultural connection is actually pretty important and we should do something about that, because we understand the message. I mean, we’re basically the messengers, myself and Joel.
And so that’s what led to thinking we should write a book, because I didn’t know any other way to get the word out that this is worth discussing, that the very rigid, objective culture that we live inside has enormous downsides and flaws. It’s not to say we should never have an objective. That would be a perversion of the point. It’s just that we need to understand that sometimes it’s not the best way to do things, and some resources should go to non-objective pursuits. And so I could appreciate that it’s very hard to make this argument. So when you say it’s hard to convince people that this is, at least in part, philosophically how you want to approach the business, and everybody thinks you’re crazy: that story comes up over and over again.
People constantly feel like they’re living inside of a culture where even if they think, well, I think that way, but no one else can accept this. And so nothing ever happens to change things. And so I think the book was an attempt to try to empower people to at least have a conversation about this. And it was a very difficult thing to attempt, I think for myself and Joel, having only ever written to academic audiences in a narrow niche. But we said, okay, let’s at least just try this. And it’s been somewhat successful, since here I am on your show, so I’m getting a chance to talk about this.
And I think that the cultural part of it is, I mean it’s a very unusual story in the field of AI for there to be an algorithmic insight which leads to social critique. I don’t actually know of other stories like this. It could be the only... I don’t want to claim it’s the only, maybe I’m wrong, but it’s very unusual. But it’s been a great pleasure to be able to have some kind of cultural impact also because it’s not something I was expecting, but that sort of goes with the meta story. It wasn’t my goal and here I am now talking in kind of a social critic form.
And so now I’ve had many years to discuss this with people and I think I just wanted to key in on one other point you made when you talked about the difficulty of communicating this, which is the word random that constantly comes up when people worry about this point. That sort of initial reaction is like you’re telling me that I should just act randomly and this is actually going to do something better than planning it out and doing things because I have a goal. And I just want to make the point that it is not random to pursue things because they’re interesting.
If you think that I’m saying because you pursue novelty and interestingness that you’re acting randomly, then you don’t understand what randomness is. Because what is interesting is ultimately what I’m arguing to pursue: just do something because it’s interesting. Like, we did Picbreeder because we thought it was interesting. We didn’t have an exact hypothesis or objective, but we thought something really interesting would come out of it. It’s like it opens up a new playground of possibilities and low-hanging fruit. And so the point, though, is that when I make that identification of something being interesting, I’m using my entire life experience, all of my intellect, to make that designation of “interesting”. It’s not like, oh, I just spun a wheel and it just pointed to one thing and that’s what I decided to do that day.
It’s like everything I know goes into it. In fact, if you think about it, there’s more information informing those kinds of decisions than [those that] inform objective decisions. Because when you’re talking about a far-off objective, you know very little about that situation that’s in the future, in a fog. I don’t know the context, how we got there, what are the stepping stones that lead there. But I do know everything about the past. I know everything that I’ve ever experienced very well and how it helps to tune my interestingness compass towards pointing in a certain direction. And so it is the complete opposite of random. It’s taking advantage of the intuitive understanding of the world that people have built up over decades, which is something that we deny ourselves in our society.
Every time somebody goes to their boss and says, well, this would be really interesting, let’s try this, they’re going to get shut down. Who’s going to fund it or, you know, put resources behind it? It’s like, what’s the goal? How do we measure progress? It’s too risky. That sounds random to me. So we are denying ourselves the opportunity, I think, to use the most valuable resource that we’ve developed, when we invested 20, 30 years of education into a person and created the unique intuitions that person has about what’s interesting in the world. And it’s anything but random.
Jim O’Shaughnessy: That is such an important point, and one of the things in my own career that I noticed. And again, so quant, which means rule-based, which means algorithmic. But one of the things that I developed over my career was what I call imbued or saturated intuition. It is exactly the point you are making. What happened was this lifetime of experience, of learning, of seeing all the things that worked and didn’t work. That does feed your intuition. And so I found myself in the somewhat uncomfortable position of saying to clients, well, I have an intuition about this. And they would look at me like, you know, you’re a quant, you can’t have intuitions.
Kenneth Stanley: Right? Right.
Jim O’Shaughnessy: But that is where all of the gold ended up coming from, right? Like, huh, wonder if we do... Like, for example, in 1999 I started a company called Netfolio, which was meant to be the first online investment advisor. And what you could do is take a little quiz, tell it your likes and dislikes, and then it would give you a quantitative portfolio, but one that was malleable. Say you don’t like smoking—my mom died of emphysema, so I’m very anti-smoking—and Philip Morris was at the top of the list: you just kick it off and another stock fills its place. But at the same time, the pushback that I got was, well, wait a minute, doesn’t that negate all of your back tests on your algorithmic models?
And I was like, well, not really, because directionally it’s still going to be the same underlying criteria. And don’t get too caught up on the fact that, you know, Philip Morris is being swapped for something else; they have the same underlying characteristics. And that was the other problem that I found myself facing a lot. People get so stuck on labels, they get so stuck on names, that it got to the point where even my own staff was like, oh, we can’t buy that. And I always noticed it was because they were focused on the name or the label. And I seriously considered taking all the names and ticker symbols out of our system and just giving them numbers, because it would allow people to not have that built-in bias.
And we all have built-in biases, and we often find what we’re looking for because we’re biased toward finding it. But the thing is, I just find your work absolutely fascinating, and a lot of what we’re doing at my current firm, O’Shaughnessy Ventures, actually fits in really nicely with it. I mentioned to you before we started recording that, because we have the on-prem AI and it knows everything about me, I asked it right before we got on, hey, what do Ken and I have in common? And you know, the number of pages it [gave me] was pretty extraordinary. One of them was, and this is the large language model saying this to me…
So I’m not making this claim about myself, because it sounds a little grandiose, but it said “you institutionalize serendipity and you do that through your fellowships where you’re looking for those Bohemian bees” and you’re like, well, this is kind of interesting. Again, your term. We give those to what our team finds interesting. And when other people who don’t think like that look at our grants, they’re like, I’m not seeing a lot of rhyme or reason here. And my answer always is, that’s kind of the point. And the other thing that drives me... I’m sure you’re a fan of Claude Shannon and information theory and how information is tied to novelty. I mean, right, his whole argument was that information equals novelty.
And I remember when I was reading a book about him by my now colleague Jimmy Soni, who is the editor-in-chief of our Infinite Books division, there’s this great quote from Shannon saying basically that a poem can be incredibly information-rich, while a political speech has zero information.
Kenneth Stanley: Yeah, you know, that word serendipity is completely aligned with the themes in the book. I mean, the book is really about serendipity, but it’s also about, I would say, the idea that you actually can mechanize serendipity, which is super counterintuitive to a lot of people. It seems almost wrong. Like, that can’t be true. Because everybody, again, associates the word random with serendipity; it’s like random good things happening. But one thing that might lead to some caution about that: if you look at the Wikipedia page on serendipity, at the long list of serendipitous discoveries they keep there, those discoveries are always made by people who have a really good track record, which is very strange.
If serendipity is random, what help would it be to have a good track record? If it were random, the best serendipitous discoverer would be somebody like some lunatic on the street just jumping and hitting walls or something like that, and something cool might happen. But that’s not the kind of people you see in this list. And it shows that there’s a lot more to serendipity than just randomness. It’s a form of opportunism, but it’s also a form of collecting opportunities. I usually call them stepping stones. And so the more stepping stones you collect, the more possible jumping-off points you have when the opportunity presents itself, even though you don’t yet know what opportunity you’re looking for. But it allows you to be opportunistic. And so novelty search, in a way, is “algorithmitized serendipity”.
And of course it doesn’t guarantee a particular result. So it’s more like... The point is that there’s a way to organize things such that something interesting will happen. I just can’t say what it is, but it’s going to be much more interesting than if you insist that everything you do has to be driven by objectives. In that case, you’re probably going to find much less interesting things. If you only want objectives that are guaranteed to be safe, well, nothing’s guaranteed, but as close as you can get to safe, you won’t do anything interesting. I mean, that’s the trade-off. You might be safe, though. And by the way, I’m not trying to make any kind of moral judgment. I’m not saying that being safe is morally wrong. Everybody has a right to take the risks they want to take.
And in some sense I think we’re talking about risk here. Obviously, in order to maximize the opportunity for serendipity, you need to take some risks. And you’re not obligated to take risks. You can play it safe. But the one thing that I can guarantee is that if you don’t pursue things that interest you, you won’t do anything interesting. That’s just how it works. Now, it’s just that you also might not succeed. So, you know, there are different risk tolerances. And I think some people, depending on their tolerance, might build up a cushion behind themselves before taking their risks, but other people are willing to take risks right away. And so it just depends on the person.
But ultimately interest, serendipity is something that you can increase the probability of happening, but you just have to recognize you don’t know where you’re going.
Jim O’Shaughnessy: Yeah. And I think your idea about it being risk-based is spot on. It’s always really fascinated me that, you know, we have the distributions that we have. It seems to me that a fairly large majority of us humans are risk averse, right? Rule followers, doing everything to maximize safety. And it’s a bit like AI, right? If you maximize the objective function for safety, well, you’re probably not going to find anything interesting. You might be safe. I don’t know about that, by the way. You can argue that maximizing the objective function for safety does not actually make you safe, but... I have a young friend, George Mack is his name, and he’s got a thesis which he calls increasing your surface area for luck.
And it fits beautifully with your way of looking at the world. And he is able to articulate it in a fashion where people who haven’t thought really deeply about this go, oh, yeah. And so his way of doing it is to set up a dichotomy. And he says, if you have two opportunities for this evening, right, go with the one that maximizes your surface area of luck. And so then he gives an example. You could lie on the couch and watch Netflix or, hey, you got invited to this art event in Manhattan, and which one are you going to choose? Well, if you’re trying to increase the surface area of luck, you’re always going to choose going to the art event over watching Netflix.
I teased him and said, yeah, but, well, what if you’re watching something on Netflix and you just suddenly think, I never thought of that? And he laughed, of course. But it’s kind of something that we’ve been trying to create an operating system for. Basically: never turn search off, maximize for novelty, pay for novelty, gate it with minimal criteria. Remember the good forks. Those are your stepping stones. You’ve always got to bank the stepping stones so that you have a plethora of jumping-off points. And your point is very well taken. We’re not giving people fellowships to go throw themselves against walls in the city, but we do have a relatively minimal set of criteria, and beyond that we’re willing to pay for novelty. And when we find it, we keep it.
And then, from a business point of view, we sort of commit after a quorum, like, okay, this seems to really be working. And then we lean into that. And the challenge goes back to what we started our conversation with. It seems to me that part of our human operating system, this desire for what is ultimately, in my opinion, an illusion of certainty, really mucks up the works when you’re trying to get people to operate this way. And so another thing that we did was say, okay, look, and I’m selling this to my teammates here, what I’d like you to do is just budget 30% of your time for following what really interests you. And I’ve had some good success with that.
But overcoming this sort of inertia... I had one fellow, we didn’t end up hiring him, very good guy, but he had come out of the military, and he looked at me like, I am coming out of a command-and-control structure, and that is going to make this very difficult for me. And then I made the argument to him, well, what do you think destroyed all the old communist societies? It was the command economies, the four-year plan, the five-year plan, and guess what? It didn’t work. And it’s one of the reasons why I believe free markets are much more modeled on this idea, right? Like, a command economy is never going to build an iPhone.
In my opinion, a command economy is never going to come up with a Rubik’s Cube or a pet rock or, you know, fill in the blank. And tell me a little bit more about what you’re working on right now because I find that also very fascinating with the idea that you can really scale the AI to create some pretty interesting innovative things.
Kenneth Stanley: Sure, yeah. I mean, man, you just hit on so many interesting topics there. I’m just thinking there’s a lot that can be said. I just want to touch on a couple of those because they brought some things to mind. You know, this issue... in our book we called it a security blanket. Wanting to feel like everything is certain when it’s actually not, that’s the problem. You can create objectives, and they make you feel like you know where you’re going, and also make you feel like you’re going to get there, because you have a measure that tells you you’re moving in the right direction. But the world usually doesn’t work that way unless it’s a very simple problem. And the particular issue that makes the world not work that way is deception.
And deception means that it looks like you’re going in the right direction, but you’re actually not. Or it looks like you’re going in the wrong direction, but you’re actually going in the right direction. And deception is just completely rife throughout complex spaces. That’s the way that reality is structured. So to give a concrete example that shows we don’t know where stepping stones lead: if you think about vacuum tubes leading to computers, the first computers were built on vacuum tubes. But the question is, were the people who worked on vacuum tubes before that thinking about building a computer? And the answer is no.
And actually it’s an interesting thought experiment. If you went back to the year 1850 or something, got a bunch of brilliant scientists working on vacuum tubes in a room, and told them, ditch this boring stuff, just build a computer, that would be much more interesting than this tube. Who even needs this? Well, we would have no computers or vacuum tubes, which is what’s interesting about that. And the reason is because the world is deceptive. It’s not clear that a vacuum tube is important for computation. So the only real way to get the vacuum tube registered into the world is to not care about computers, which is totally counterintuitive.
Which is why it does make sense, inside of a company or in the way that you described your own organization, to allow people to follow interests, because they need to be able to find those deceptive stepping stones. And sometimes they’re really deceptive, as in the case where things that lead to safety may appear unsafe, and vice versa: things that appear to be safe actually might make the world less safe. This is the paradox of deception, or what I sometimes call the objective paradox. It is the reason that these kinds of endeavors, like making the world safe in some totally airtight way that goes on in perpetuity, are almost impossible. Like, you can’t actually do that.
And so it is actually dangerous to pursue that as an objective, which is just counterintuitive and unsatisfying and strips off the security blanket. Because the whole thing is, we want the security blanket to feel like what we’re doing is going to get us what we want, which is, for example, eternal safety from AI. But if the world is deceptive, we could actually be hurting ourselves. And grappling with that complexity is, unfortunately, just the way that we have to operate. Which means there does need to be some room to do things that aren’t, on the face of it, clearly safe. And, you know, part of the reason to understand that is that things evolve over time. Like, society evolves.
And so checks and balances that once kept things in check could eventually turn into something that actually hurts you. And so you can’t have any static view of what it means to be in a safe world. Everything has to evolve. That’s why institutions evolve and the government evolves and things like that. As society evolves, in some ways the safest institution is probably one that’s able to evolve, because we need to change as everything in the world changes. But it all goes back to deception. And so I totally agree with this kind of philosophical view that we can’t just constantly try to feel safe. It won’t protect us. We just have to deal with the reality that the world is deceptive. And this is why non-objective thinking is really important.
And so that doesn’t necessarily segue naturally to the question ‘what am I doing now?’ But I’ll try to go to that because that’s really the question that you’re asking. I just wanted to acknowledge that part of it.
Jim O’Shaughnessy: Yeah, but before you go, there’s a great example from finance, right? And that is: in the short term, United States treasury bills are the safest investment you can make; in the long term, they are the riskiest. If you look at the returns to treasury bills versus bonds versus stocks over very long periods of time, treasuries in the very short term are very safe. That is correct. Over the long term, a dollar invested back in the late 1920s, inflation adjusted, grows to about a buck 80 in treasury bills, to 300-odd dollars in bonds, and to over $10,000 in stocks. So the winner is the higher variance investment, which is riskier. Of course it is, but... I always use that as an example because what you just said is absolutely true.
And I don’t know if you’re a fan of David Deutsch, but in his book The Beginning of Infinity he makes this case beautifully. He builds a beautiful scaffolding for the idea that if you operate, you know, through the precautionary principle, just lock everything down, that is the least safe thing you can do. Life is a verb, it’s not a noun.
Kenneth Stanley: Yeah, yeah. And people always bring up David Deutsch. And I’m really embarrassed to say I still haven’t read the book. I have to because everybody says you must have read David Deutsch. So I should do that.
Jim O’Shaughnessy: You should. I think you really... And I would recommend starting with The Beginning of Infinity, because what’s beautiful about it is he really builds a very robust intellectual framework for what we’re talking about right now. He gives it reasons. He has a great quote in there, I’m going to butcher it, but it’s like, hey, what were people talking about... What were the smartest people talking about in 1900? What were their opinions on quantum physics and the Internet? And his answer was, they didn’t have opinions on quantum physics and the Internet, because those things hadn’t been invented yet.
Kenneth Stanley: Right, right, yeah, yeah. I mean, this security blanket issue is a very serious problem in society, I think. Everything is saturated with this need to make everything mechanized so that we can guarantee that we know where we’re going, which is all just security blankets thrown on top of everything. And it’s just so naive and unrealistic, and it’s thwarting innovation and actually making the world more dangerous. And I saw it up close because I was a professor for a long time. The pain point where it often bothered me was in applying for scientific funding. Because the National Science Foundation, and other agencies that can fund research, always want to know what your deliverables are going to be. They ask for the impact and what’s going to happen.
It’s like, I just want to say, I don’t know what the impacts are yet. By forcing me to tell you and know what’s going to happen, you make it so I can’t tell you the most interesting ideas I have. And I know they’re not going to get funded, because I don’t know where they’re going to lead. And yet the entire funding industry is built on this premise that you can just tell people where you’re going. And it’s really irritating and frustrating to me. I’m okay with some amount of grants doing that. Okay, let’s have some safe areas where people just do things that are modest, that we know we’re going to be able to get something out of. But I mean, come on, this is supposed to be about innovation.
Ultimately some amount of resources should be devoted to things where we don’t know where we’re going. That has to be the case. But it’s just this security blanket culture that just completely saturates everything. And so, yeah, I’m hoping the book does a little bit to nudge the pendulum in the other direction.
Jim O’Shaughnessy: Well, it certainly did in our case. I mean, one of the reasons behind the fellowship and grant program was that we wanted to do exactly that. I could not agree with you more vehemently about the funding issue, particularly in the sciences, to the point where one of our objectives as we build out our own AI system here at OSV is that I want a portion of it devoted to just developing null hypotheses and automatically publishing them to a database everyone can access, right? Because it’s like Sherlock Holmes, right? The Hound of the Baskervilles. The only way he figured out that the dog knew the intruder was that the dog didn’t bark. And via negativa just doesn’t seem to be in our cognitive firmware. There’s so much richness there, and we just avoid it.
And your point about safe funding? I think it has significantly retarded advancement, not only in science, but across the board. It’s gotten institutionalized. It’s gotten, you know, groupthink; there’s an omni-culture that covers it. And you look around and you think, oh, my God, look at all the advancements and innovation that we made 50 years ago. And specifically in physics and the deeper sciences, the last 50 years? Bupkis.
Kenneth Stanley: Yeah, yeah, it’s true. But, you know, I think there’s an interesting question in what you were saying, because you mentioned the firmware. Is it deep in our firmware that we have a desire and need to live this way, or is it just a cultural artifact? Sometimes I think about children, because I don’t think you see it in their firmware. You bring a little toddler to the playground, they’re not going to say: what are my goals for today? I’m going to learn how to climb that monkey bar. That’s what I like, and then I’ll be better at monkey bars next time. They don’t think objectively at all. They see a playground, it’s just endless opportunities, and that’s really fun.
And so it seems to me that it’s been beaten out of us culturally, because as soon as you start first grade, you’re basically saturated with goals. This is what you should do. This is what you need to accomplish. And you’re taught through reinforcement that if you do the thing you were told to do, because it was your goal, you will be rewarded. Reward after reward, over decades. You just learn that’s how the world works. I remember in high school thinking, at least to myself: I have a lot of ideas.
I would write down ideas when I had them, but I actually thought that it’s not that useful to have ideas. I explicitly thought that. I was like, you know, it doesn’t really help you to have ideas. How would that help me get A’s in school? Nobody really cares. It’s more for fun. It was an absurd thought. That you could grow up thinking that is absolutely crazy. It occurred to me later, somewhere in grad school, that it might actually be useful to have an idea now, because, you know, I’m supposed to have a PhD thesis. And that really made me feel good. I was like, wow, finally it might actually be good for me to have an idea.
But the fact that it could take 24 years before somebody thinks, oh, actually ideas might be good, just shows there is some kind of social pathology that’s really hurting things. People should not lose that playground mentality completely over the course of life. And the fact that we beat it out of them is the opposite of what education should be doing.
Jim O’Shaughnessy: Totally agree. Robert Anton Wilson, who was way ahead of his time, called it the installation of the correct answer machine in young minds. I have six grandchildren, and they range from a little over one to 11. Watching the younger ones, it’s just so obvious to me that your point is well taken. It is not an innate part of our firmware; it’s cultural. I don’t want to get on this soapbox, because I feel very passionately about this, but we literally beat the creativity, the discovery, all of that which children naturally have, out of them. That’s the original firmware, and we spend all of our time beating it out. And it’s one of the reasons why I think that the current...
I have a thesis called the Great Reshuffle, which I started around 2015, with the advent of machine learning, the advent of connectivity, etc.: that everything was going to get reshuffled. We’re in the middle of it now. And one of the reasons I think you see the established elites of today squawking as loudly as they are is that these new ideas are coming from a much smaller part of society. But guess what? They’re going to topple... It’s like the whole accreditation thing. I have deep respect for people who have PhDs. Absolutely. And so I hate this binary of, oh, you don’t have to go to school and all that. Well, yeah, you do for a lot of very serious things.
But what happens is we tip over into being slaves to credentialism, and I think that’s a mistake. Like I mentioned to you, our CTO is very young. He doesn’t have a PhD, but his proof of work is absolutely extraordinary. If you’re a little more open-minded, you can find these people who didn’t go up the traditional ladder. I think we’re breaking with that past. But culture takes a while to catch up. It’s much slower. Cumulative cultural evolution is a lot slower than I think both you and I would like it to be.
Kenneth Stanley: Yeah. I mean, it’s almost like risk mitigation won out over innovation. Risk mitigation seems to be so popular, even though it’s not talked about as if it’s popular, and it just overwhelms everything. People are taught implicitly through school that you do things to mitigate risks, to make sure that you turn out fine and can protect your family. Obviously those aren’t bad things. But there’s this other aspect of the world that’s just not being fostered, which is less objective. It goes back to the original insights in Picbreeder. Ultimately this has to do with sometimes doing things without having an objective.
And by the way, objective is connected to the word metric, because once you have an objective, you can measure something. So now you have a metric, and objectives are often expressed through metrics. These are the grades and the credentials and all those things. It’s an attempt to make everything objective so that we just know what’s going to happen, where we’re going, who’s the best person. Everything’s being measured objectively. But the problem is objectives don’t get you everywhere, especially in innovative systems. So it’s true: to select a really great CTO with an extremely innovative mindset, you can’t just look at the credential or some objective measure.
What would actually make a real innovator is very subtle and implicit. And so I think these are really important social challenges to overcome, given the way culture has evolved. Actually, I just want to give one anecdotal story, because it reminds me of this. It really came up a long time ago when I was trying to start a company. I was a professor at the time, and the person from the university side whose job it was to shepherd the deal, because they want to make sure about compliance and things like that, was a compliance officer. And that made things unbelievably hard, because the only incentive there is that everything complies. There’s no enthusiasm for just how amazing this could be. It’s all just compliance, compliance.
And it made it almost impossible to get this to actually happen, even though the whole thing had so much opportunity for everybody involved. And I just think it’s as if the whole world is one compliance officer standing in front of everything, before you can get to doing anything interesting, just to mitigate risk. It’s like risk is the only thing we’re worried about.
Jim O’Shaughnessy: And of course we have suffocated a lot of innovation just through the explosion of regulations and rules and laws and everything else. And you’re saying compliance to a guy who made his bones in financial markets, one of the most regulated of all industries. I was lucky because I had my own company. Yes, of course we had a compliance officer. His name was Ray, and it was: Ray, you’re not running things. You’re here to make sure that no one does anything illegal. Absolutely. But the culture of compliance... oh man. I was briefly at Bear Stearns, and they were pretty good about it, but when you saw the compliance people coming, or when I went to their offices...
I would always joke as I was entering: I’m entering the place where ideas go to die.
Kenneth Stanley: Yeah, yeah. And I’m not against complying with rules. I’m a rule-following type of person. But trying to squeeze out every single little piece of risk that could exist in the world is just going crazy, and it leads to...
Jim O’Shaughnessy: Stasis, which is death.
Kenneth Stanley: Yeah, yeah. Something bad might happen, and sometimes that’s okay. It’s not always okay, but every possible failure, every misuse of a dollar... sometimes it’s just okay. That’s the cost of taking risks. We shouldn’t be so completely averse to doing things that are interesting. So anyway, I guess we share the discouragement with that stuff.
Jim O’Shaughnessy: Yeah, I would love to get on to the fractured entangled representation hypothesis.
Kenneth Stanley: Okay. Yeah, that does follow well from all of this, because it’s kind of the second lesson, which is much less known, except through the paper we very recently put out about this fractured entangled representation hypothesis. It’s a lesser-known thing, but I think it’s equally deep and profound. We observed early on in Picbreeder this other really fascinating thing: the underlying representations of the images were quite amazing. Remember, these were evolved images produced by neural networks. And if you’re not familiar with neural networks, it might not be clear what it means to have an amazing representation. We’re talking about the way the image is actually built inside the network, node by node, neuron by neuron, all the way from the bottom to the top.
There’s a way it’s done, and because of the nature of these images, we actually have ways of visualizing how: what are the intermediate images, or you could call them masks, that are used to build up the final image? It’s very easy for us to see those. And by visualizing them, we saw that it was incredible. What I mean by incredible is things like this: the network had separated out the mouth of the skull in such a way that there was a single weight inside it, meaning one parameter, basically, such that if you change the number on that parameter, you open and close the mouth. And so it’s not just a picture of a skull.
It looks like a skull on the surface, but it’s actually capturing the underlying mechanics of what it means to be a skull. It’s deeper than just a picture. It understands what a skull is at some level, or at least some of the components are represented that way. And in some ways it’s astonishing, because it seems like you’d have to be an engineer to think, how would I organize the representation of a skull so that there’s a single knob I can pull that moves the mouth open and closed? It seems like a human must be involved in this, but it isn’t like that. Of course, humans are involved in picking the intermediate stepping stones, but they didn’t design the underlying representation. Nobody designed it. That’s what’s so strange about it.
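To make that “single knob” idea concrete, here is a minimal toy sketch in Python. It is not the actual Picbreeder genome or a real NEAT network; the rendering function and the mouth_weight parameter are hypothetical stand-ins, purely to illustrate how one parameter of an image-producing function can act as a meaningful knob:

```python
# Toy illustration of a single "mouth knob" in an image-producing
# function (hypothetical; not the actual Picbreeder network).
import numpy as np

def render(mouth_weight, size=64):
    """Render a crude face; mouth_weight alone controls how far the mouth opens."""
    xs = np.linspace(-1, 1, size)
    x, y = np.meshgrid(xs, xs)                      # coordinate grid, like a CPPN's inputs
    head = (x**2 + y**2 < 0.9).astype(float)        # round head mask
    eyes = ((np.abs(np.abs(x) - 0.35) < 0.1) &
            (np.abs(y + 0.3) < 0.1)).astype(float)  # two symmetric eyes
    # One parameter gates the mouth opening; nothing else depends on it.
    mouth = ((np.abs(x) < 0.4) &
             (np.abs(y - 0.4) < 0.05 + 0.2 * mouth_weight)).astype(float)
    return head - 0.8 * eyes - 0.8 * mouth

# Sweep the single parameter: only the mouth region changes frame to frame.
frames = [render(w) for w in np.linspace(0.0, 1.0, 5)]
```

That is the flavor of what a unified, factored representation makes possible: one parameter, one meaningful part.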
It’s just emergent, and yet it has this really organized feel, as if it was designed. So that was just an interesting observation. And here’s a related thing we had noticed: if you run NEAT, which is also what’s inside Picbreeder (we talked about NEAT at the beginning), objectively to solve a problem, say to get a robot through a maze, it can solve it. And if you also run novelty search, which, remember, is modeled on the non-objective way Picbreeder works, it solves it too. So we have a case where both algorithms solve the same problem: the novelty version of NEAT and the objective-driven version of NEAT.
We kept finding that the version that solves it with novelty would be about three times more compact, which is a very interesting observation. We’re not sure exactly why this would be. I guess I shouldn’t say you’re trying to solve the problem when you’re not trying to solve it; rather, as a side effect of just exploring, you accidentally solve it. And when that happens, your representation is far more compact than when you were actually trying to solve the problem. If you couple that with the observation in Picbreeder of these beautiful representations, it’s intriguing.
So then we should ask ourselves: much of machine learning as a field is about setting an objective and then using gradient descent, or backpropagation, on a neural network, which means moving towards the objective deliberately. Much of the field does it that way, with stochastic gradient descent. Maybe the resulting underlying representations are actually pretty bad. So we thought, let’s see what happens if we get the conventional way of learning, SGD or stochastic gradient descent, to output Picbreeder images, and just see what it looks like inside. And indeed it was terrible. Really terrible. It’s just a giant mishmash of meaningless blobs which somehow all come together to make the skull, for example. But it doesn’t look like how you would decompose a skull.
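For readers who want a feel for how such a comparison might be set up, here is a minimal sketch under stated assumptions: a small coordinate network trained with plain SGD to reproduce a target image, after which you inspect what each hidden unit contributes. The architecture, the stand-in target, and all names are illustrative assumptions, not the paper’s actual experimental setup:

```python
# Minimal sketch: objective-driven SGD fitting of a coordinate network,
# then visualizing per-unit "masks" (illustrative assumptions throughout).
import torch
import torch.nn as nn

size = 64
xs = torch.linspace(-1, 1, size)
ycoord, xcoord = torch.meshgrid(xs, xs, indexing="ij")
coords = torch.stack([xcoord.flatten(), ycoord.flatten()], dim=1)  # (N, 2) pixel coords

# Stand-in target image (a disc); the real experiment used Picbreeder images.
target = (xcoord**2 + ycoord**2 < 0.7).float().flatten().unsqueeze(1)

# Small coordinate network: (x, y) -> brightness.
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1), nn.Sigmoid())
opt = torch.optim.SGD(net.parameters(), lr=0.5)

for step in range(2000):                 # plain objective-driven SGD
    opt.zero_grad()
    loss = ((net(coords) - target) ** 2).mean()
    loss.backward()
    opt.step()

# Reshape each first-layer unit's activation into an image. The FER
# hypothesis predicts these intermediate masks look like meaningless
# blobs, not clean parts, when the image is reached objectively.
with torch.no_grad():
    hidden = torch.tanh(net[0](coords))       # (N, 32) unit activations
    masks = hidden.T.reshape(32, size, size)  # one mask image per unit
```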
And that’s what we called fractured entangled representation. The words have specific meanings. Fractured, in the sense that things that should be unified, like the left and right sides of a face, are actually broken apart inside, so that they’re not connected. You could mess up the left side and the right side would do nothing, even though that’s exactly what symmetry is: a connection between the two. In the FER, the fractured entangled representation, we could see that kind of fracture explicitly. And it’s also entangled, which means that things that shouldn’t be involved with each other are. If you change the size of the eye, the background changes, something like that.
These things shouldn’t be entangled at all, but they are. So there’s fracture and entanglement all throughout the representation. And in NEAT, in the Picbreeder version, which wasn’t objectively discovered, it’s the opposite: beautiful decomposition. You could understand how it built the thing just by eyeballing it. You don’t need to be an expert in the field. It’s just obvious that it was decomposed into meaningful parts. We call that unified factored representation, the opposite of fractured and entangled. And the paper shows this contrast visually and asks: what does it mean that the way you got to the solution changes the underlying nature of the solution? This is about deceptive appearances.
Both look just as good. Both produce a beautiful skull, the same skull, but under the hood one is kind of an imposter, a complete piece of junk, and the other one is beautiful. So what does that mean? There are many possible implications. One is that what you could do with that skull going forward is very different. To imagine new skulls or new faces, it’s much easier if the representation is unified and factored than if it’s fractured and entangled. And we showed in the paper that you can do what we call weight sweeps and see that it’s easy to get new kinds of images that mean something if you have the UFR version, the good version. It’s the opposite with the other one.
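Here is one crude way to picture what a weight sweep can reveal, sketched under loose assumptions. The render_from_weights function is hypothetical, standing in for whatever maps a network’s weights to an image; the idea is just to nudge a single weight and measure which parts of the image move:

```python
# Crude probe of fracture/entanglement: nudge one weight, then measure
# how much each image quadrant changes (render_from_weights is a
# hypothetical stand-in for weights -> image).
import numpy as np

def region_deltas(render_from_weights, weights, idx, eps=0.1):
    """Perturb weight `idx` by eps and report mean pixel change per quadrant."""
    base = render_from_weights(weights)
    bumped = weights.copy()
    bumped[idx] += eps
    diff = np.abs(render_from_weights(bumped) - base)
    h, w = diff.shape
    quads = {"top-left": diff[:h//2, :w//2],
             "top-right": diff[:h//2, w//2:],
             "bottom-left": diff[h//2:, :w//2],
             "bottom-right": diff[h//2:, w//2:]}
    return {name: float(q.mean()) for name, q in quads.items()}
```

If most single-weight nudges move one coherent region, the representation leans unified and factored; if every nudge smears change across the whole image, it leans fractured and entangled.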
So that’s one really deep point: what you can continue to do with these things is shaped by how you got to them. Which is really important in the world of LLMs, of large models, now, because what does it say about their potential creativity if they’re characterized by FER, by fracture and entanglement? We don’t know. It’s not as easy to look inside these big neural networks, but this is a hint it might be like that. So it raises that question. And the second big question is: why is the representation so much better when you find something open-endedly, without an objective, than when you find it with an objective? What’s the reason for that?
Because if we understood the reason, we could exploit it algorithmically to make sure the representation comes out good. I think everybody who’s reading that paper is thinking about this now: what is this? How do we abstract this lesson in a way that we can actually formalize? But I wanted to mention a third point, because it’s a cultural point, and it’s funny that we can make a cultural point again. This is something I’ve been thinking about very recently: the idea that the way you got to where you are can matter more than where you are. It’s not just about neural networks.
That’s also about life, you know, because the question is: who is the person you became as a result of the path you took, even though you’re successful? That might matter a lot for what happens in the future. And it’s not even just about people. It’s companies too. The path you took to become a successful company, to have high profit, to be a unicorn, whatever it is you’re looking for, might mean that the way you are inside is still FER, still fractured and entangled. What that basically means is that you aren’t flexible and adaptable to change in the world, because you came to the solution in a very objective way. And even though you succeeded, you’re paying a cost that’s invisible right now.
And I think it’s really intriguing to think about this in the broader social, human context, beyond algorithmic optimization: the path you take matters a lot to who you became and who you actually want to be. Who are you on the inside? I could imagine that being another book, though I’m not exactly planning to write a book right now. But it has the feel of the original book. There’s a lesson there that’s really interesting.
Jim O’Shaughnessy: I am absolutely fascinated by that. I’m a journal keeper; I’ve kept journals since I was 18, and I’m 65, so I have the ability to look back. I was looking for one journal in particular, but it’s down there buried under some stuff. What I was going to show you was a graphic representation I made when I was 23 of basically what you just said, as it applied to me at that point. I drew all of these tree paths out, and one of the notes I made under it was: these matter. These will fold back in on themselves. And the path that you end up taking is going to matter. It’s going to matter.
Whether you took it the traditional way or not. I was oriented much more towards the non-traditional way of doing things. When I founded my first company at 28, it was an asset management company, and I was telling a friend about it. He goes, oh, cool, who’s your backer? And I went, I don’t have a backer. And he just looked at me and goes, Jimmy, you’re gonna fail. And I was like, well, I think I’ve discovered some pretty cool things. I’m gonna write a book about it, and I think that book is going to be my calling card. He was about 20 years older than me, and he just shook his head, like, oh, you poor naive young man. There’s no way that is going to work.
But even the first book that I wrote, Invest Like the Best, goes back to the idea of people getting obsessed with labels and names. When I first got exposed to the stock market, everyone was talking about the CEO or, you know, the company. They were just talking about that particular thing. And I was a teenager thinking, I think you should look at the underlying structure of these companies and see what unites them. That was what my first book was about: how you could clone any of your favorite money managers by putting the stocks they held in their portfolios over time into a big database and sucking out what the variables were, right?
So this manager loves companies that have high forecasted growth and low PEs, whatever the combination, and you abstract that knowledge and make it an algorithm. But then that gave me the idea: oh, hell’s bells, there’s no Baedeker guide to any of this. Everyone says, I like stocks with low PEs. All right, show me how they perform. I went looking, and there was some academic work by Fama and French, but only on price to book. And that gave me the book that actually did make my career on Wall Street. So I love that notion. I really think you should turn it into a book, because I think it has a much broader cultural impact, as you yourself were implying.
Kenneth Stanley: Yeah, that’s a great story. And it’s true. I hadn’t thought about it as much over my life, but it is intriguing, and it’s very culturally relevant, how much the path can matter even if you’re successful, which just means you met the objective criteria you were hoping to meet. So it’s just another downside of objective thinking. I was talking to another podcast interviewer about this, and he often advises founders; that’s kind of his normal work. He was saying a lot of founders get to their milestone, they do what they were hoping to do, but they’re getting divorced and they’re depressed and all their employees hate them.
There’s all this other stuff that comes with being purely objective. If you’re only objective and you pursue things in the most deliberate, determined way, where you have to get to this one point and that’s the only point you care about, there are all these casualties in the underlying... well, in the neural net we call it the representation, but here it’s the underlying fabric of your life, how it’s all interconnected. I think it’s very thought-provoking that there’s a way to get to a point that’s more virtuous than another. And it’s not just about moral things. It relates to things like creativity. Because think about the difference between somebody...
Who got a great score on some college test by always following the textbook, always acing all the tests along the road, versus somebody who also did well on that test but got there through a completely different path: inventing things for themselves, exploring for themselves, maybe forming contrarian opinions they looked into on their own, for their own purposes. That person is coming in with a completely different representation of reality. So the question is, given these two people who both aced the test, which one is going to have a more interesting future career? It’s probably path dependent. Obviously I think it’s the more creative path, but even if it weren’t, it would still show path dependency.
How you got there is going to affect what you do with that success in the future, what kind of person you’re going to be. And there are just endless things to think about there, because I think it can defuse a little bit of the objective obsession, the feeling that the only thing that matters is the final metric of where you’ve gotten. It helps to show why you don’t have to feel like that, because I think that’s a very stressful way of looking at the world: this is the only measure of my success. Understand that you’re not successful just because you met the metric. There’s a lot more to it than that, and some of that other stuff is more important.
And even if you don’t meet the metric, the other stuff could still be good. And so life is much more complicated than that.
Jim O’Shaughnessy: Yeah. We have an informal heuristic that we are far more interested in the second of your two candidates, the one who got there by a circuitous, very non-traditional path. Now, it doesn’t always work. Nothing always works. But we have found that it’s worked pretty well for us when we’re working with people. The ladder climber, as I call them? Great. Good for you. God bless. We’re much more interested in the one who thinks, huh, I wonder if there’s another way for me to get there. That ladder...
Kenneth Stanley: Yeah.
Jim O’Shaughnessy: But I bet I could do a very clever thing. It’s like the old Prussian general who was asked, how do you determine where you’re going to send a soldier? He had a matrix, and lazy and clever was his best combination: promote them to the highest command. The foolish and lazy were cannon fodder. But the clever and lazy will almost by definition figure out a way to do it more efficiently, more elegantly. Whatever you have... I cannot believe that you and I have been talking for nearly 90 minutes. My producer is just texting me. This has been absolutely fascinating. I hope you enjoyed it as well. I love your work. Everyone listening: buy his book, read his paper.
This is really fascinating stuff, and you need to understand it, because in the world we’re going into, you’re going to be a lot more successful, whatever you want to be doing, if you understand the way Ken looks at the world. If you’ve ever listened to the podcast, you know that our last question here is always kind of a fun one, because it’s very different from your perspective. We’re going to make you emperor of the world. You are the emperor of the world. You cannot put anyone in a re-education camp. You cannot kill anyone. But what you can do is, we’re going to give you a magic microphone. You can say two things into it that will incept the entire population of the world.
The next morning, whenever their next morning is, they’re going to wake up and think, you know, that’s really interesting. These two ideas I’ve just had, I’m going to pursue both of them. What are you going to incept into the world, Ken?
Kenneth Stanley: And you said two words or...
Jim O’Shaughnessy: No, no, two ideas. They can...
Kenneth Stanley: They can...
Jim O’Shaughnessy: Yeah, two ideas.
Kenneth Stanley: Okay, okay. Yeah. First I just want to say that this also was a really fast 90 minutes. I really enjoyed it. Thank you for this opportunity. So what do I want to incept into the world? Well, first, just this: try to imagine what you would do if you had a day where you had no objectives. What would you do? And secondly, going back to the second main insight we talked about, I think it would be interesting... Well, actually, I’m supposed to be commanding the world in some way. Is that right? So I need to...
Jim O’Shaughnessy: Think of yourself as more of the... You are the world whisperer.
Kenneth Stanley: Okay, here’s what I’ll do. All the universities and colleges: you can no longer consider grades in admissions. And there are no more grant applications; you’re not allowed to make people apply for funding. Figure out how to operate, and how to get the right people into the university.
Jim O’Shaughnessy: I love both of those, man. If that one actually happened, just think of the Cambrian-like explosion of innovation and interesting ideas. Ken, this has been so much fun. Thank you for your time. Thank you for the book. I recommend it highly and really delighted to finally be able to have a chat with you.
Kenneth Stanley: Likewise. Yeah, thank you. This was great. I wish we had longer. It was a lot of fun.