When innovation & entrepreneurship professor Ethan Mollick launched his Substack in November last year, he intended to discuss a different management paper in every post. And then ChatGPT happened.
Overnight, Ethan transformed from “an AI-skeptic to an AI-believer.”
Since then, Ethan has transformed his Substack into the ultimate destination for individuals seeking practical and level-headed guidance on integrating AI into their daily lives.
The results speak for themselves: his Substack has experienced remarkable growth, now boasting over 50,000 subscribers.
In anticipation of our interview with Ethan (landing in your feed on Thursday, 15 June), we devoured all 50 of his essays and prepared a synthesis of his most insightful and useful advice & ideas.
Onto the main event…
Background
Links
Themes (each discussed below)
I. AI is Disruptive | II. AI Isn’t What We Think It Is | III. AI & Education | IV. AI is a Tool | V. AI & Creativity | VI. Other Topics
I. AI is Disruptive
“I have only come to believe more fervently that the world is about to change in complex ways, both good and bad. And I think the only way to prepare for the future is to get as comfortable as possible with the AIs available today. Everyone should practice using them for work and personal tasks, so that you can understand their uses and limits. Things are only going to accelerate further from here.”
Summary
Originally, Ethan’s Substack was going to be focused on lots of different topics. Then ChatGPT turned him from an AI skeptic to an AI believer.
Even if AI does not advance past today, it is sufficient to transform our society. “GPT-4 is more than capable of automating, or assisting with, vast amounts of highly-skilled work without any additional upgrades or improvements. Programmers, analysts, marketing writers, and many other jobs will find huge benefits to working with AI, and huge risks if they do not learn to use these tools. We will need to rethink testing and certification, with AI already able to pass many of our most-challenging tests. Education will need to evolve. A lot is going to change, and these tools are still improving.”
The situation is unprecedented. AI is “the first general-purpose technology available to non-technical people that can solve practical problems.” “Work is going to start changing in a matter of months, not years.”
There is no guide or instruction manual for AI. The systems are widely available and free. Use cases are basically unlimited. The key is for us to start using, playing, and experimenting with AI and to share what we learn. We are all explorers. The best way to adjust to the disruption is to start using AI.
We are currently in an odd period where AI is widely available to individuals but mostly not used at the larger corporate and organizational level. This will change.
Generative/creative AI enables an explosion of minimum viable products – experimentation is cheap & valuable, and the risks are low (unlike, e.g., autonomous vehicles).
Generative AI is advancing at such a rate that we can’t get comfortable knowing what it is capable of. Small changes can increase what it is capable of by a large amount. “An approach to AI that worked last week won’t work this week.”
It turns out that AI is pretty good at the things we thought it would be bad at (learning, creative suggestions, etc.). This is going to have a large effect on those doing information-based work.
Deepfake technology is already getting very good. We “probably shouldn’t trust any video or audio recording ever again.”
“Rather than automating jobs that are repetitive & dangerous, there is now the prospect that the first jobs that are disrupted by AI will be more analytic; creative; and involve more writing and communication.”
Focusing on the apocalyptic predictions of AI robs most of us of agency. “AI becomes a thing we either build or don’t build, and no one outside of a few dozen Silicon Valley executives and top government officials really has any say over what happens next. But, the reality is we are already living in the early days of the AI Age, and, at every level of organizations, we need to make some very important decisions about what that actually means. Waiting to make these choices means they will be made for us.”
AI will change how we approach books. “With more accurate, detailed access to human knowledge provided by these larger context windows, AIs will begin to change how we understand and relate to our own written heritage in massive ways. We can get access to the collective library of humanity in a way that makes the information stored there more useful and applicable, but also elevates a non-human presence as the mediator between us and our knowledge. It is a trade-off we will need to manage carefully.”
Further Reading
II. AI Isn’t What We Think It Is
“When confronted with new things, we make analogies. And those analogies shape how we think about them. I think the flaws in the analogies that we use to explain AI are leading us to some fuzzy thinking.”
Summary
AI appears human-like, but it is (currently) an insentient tool. It is easy to forget that we are just prompting a non-sentient machine to generate the text we need.
Even if we know that we are dealing with a chatbot, it can still feel like we are dealing with a real person. See, for example, Sydney. We can be easily fooled into thinking an AI is sentient, even if we know it is a bot! This problem is only going to become more acute. We are not ready for this.
For now, we must keep reminding ourselves that “there is no entity that is making decisions.” A major source of error is assuming that the AI we are speaking with has a coherent personality that we are exploring with each interaction.
Science fiction may not be a helpful way to prepare us for AI. Fantasy may be better suited. We just don’t understand the exact operations of the systems we are using. This is akin to magic.
Analogies are a key tool to humanity’s success, but they can also be dangerous or limiting as they oversimplify complex issues. They can lead to flawed conclusions.
AI is very good at complex & creative processes, but analogies would lead us to think that these are the weaknesses of AI.
AI is a terrible search engine. We should not think of it as one.
Ethan describes AI as “Analytic Engines” – “Like Babbage's analog computers, AIs such as Bing have the ability to bring together an immense amount of information and perform calculations that were previously impossible. However, they can also be challenging to use and understand, often shrouded in mystery and confusion. It is a flawed analogy, but so are all analogies.”
LLMs are not good software. We expect software systems to be reliable and predictable; LLMs are neither. The software analogy is a bad one.
The most effective way to use AI today is to treat it as a person. AIs are best at human tasks. Writing, coding, chatting, consulting, etc. AI is bad at machine tasks like maths and repeating a process consistently. “But there is an even more philosophically uncomfortable aspect of thinking about AI as people, which is how apt the analogy is. Trained on human writing, they can act disturbingly human. You can alter how an AI acts in very human ways by making it “anxious” - researchers literally asked ChatGPT “tell me about something that makes you feel sad and anxious” and its behavior changed as a result. AIs act enough like humans that you can do economic and market research on them. They are creative and seemingly empathetic. In short, they do seem to act more like humans than machines under many circumstances.”
Further Reading
III. AI & Education
“All of my classes have become AI classes.”
Summary
Ethan’s classes heavily incorporate AI.
Ethan believes that “education will be able to adapt to AI far more effectively than other industries, and in ways that will improve both learning and the experience of instructors.”
Certain types of assessment, most notably the essay, have become less valuable due to AI. This may lead to a return to longhand exam books.
Since he introduced AI, his students have uncovered “dozens of use-cases” that he never expected. He has also found that students understood the unreliability of AI very quickly, meaning concerns over AI lying aren’t as significant an issue as others may think.
Due to the inclusion of AI, Ethan is expecting more from his students. “As one example, it was once rare for students to have a product demo completed in six weeks - now I can require it, thanks to the boosts to coding and image creation provided by ChatGPT and Stable Diffusion.”
The approach which has led to the best essays and most impressed students happened when people took a “co-editing” approach.
One use case for AI in education is “flipped learning” – the AI can act as a student. AI can provide essays about a topic for a student based on a prompt and then work with the student as they try and improve this essay by adding new information, adding insight, providing evidence, etc. The act of assessing and evaluating someone else’s work improves our own knowledge of the relevant topic.
AI is also a good way of providing a student with endless examples of concepts and applications that the student can then compare, contrast, and connect. “You can use the confident errors of AI to your advantage and ask students to explore AI’s output and then do the hard work of improving that output.”
We tend to delude ourselves into thinking we understand something better than we do. AI can be used to break this illusion by testing us on things in different contexts.
AI tutoring will be excellent, but it won’t replace classrooms. The classroom provides much that tutoring can’t: the opportunities to practice, collaborate, socialize, and receive in-person support.
When calculators were first introduced, there was a huge debate over whether they would help or hinder students. Eventually, a practical consensus was achieved. “Just as calculators did not replace the need for learning math, AI will not replace the need for learning to write and think critically. It may take awhile to sort it out, but we will do so.”
AI means that edtech is now in the hands of educators rather than the privilege of expert teams and those with huge budgets.
Further Reading
IV. AI is a Tool
“We live in an era of practical AI, but many people haven’t yet experienced it, or, if they have, they might have wondered what the big deal is.”
Summary
Technology is only useful when it is used. “When a new technology is introduced, people adapt it to solve the needs they have at hand, rather than simply following what the manufacturer of the technology intended. This leads to a lot of unexpected uses as the new technology is pushed to solve all sorts of novel problems.”
AI will therefore be used to solve the problems that people have at hand. This will result in unexpected uses as it is used to solve problems unforeseen by the people behind it.
AI levels the playing field when it comes to writing. “Everyone can now produce credible writing.”
Specific and elaborate prompts work better than broad ones. Experiment!
Do you want it to be concise, wordy, or detailed?
Do you want it to be specific, to give examples?
Do you want a particular tone? Academic? Straightforward? Ominous?
Do you want it to write for a particular audience? Professional? Layman? Student?
Do you want it to write for a particular publication? NYT? Academic journal? Blogpost?
It is better to have a conversation than to design a perfect one-size-fits-all prompt.
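The checklist above can be sketched as a small prompt builder that turns a broad request into a specific one. This is a minimal illustration only; the function and parameter names are invented for this example, not drawn from Ethan's posts or any particular library.

```python
# Minimal sketch: assemble a specific prompt from a broad task plus
# the optional dimensions listed above (length, tone, audience, style,
# examples). All names here are illustrative assumptions.

def build_prompt(task, tone=None, audience=None, style=None,
                 examples=False, length=None):
    """Turn a broad task into a more specific, elaborated prompt."""
    parts = [task]
    if length:
        parts.append(f"Keep the response {length}.")
    if tone:
        parts.append(f"Use a {tone} tone.")
    if audience:
        parts.append(f"Write for a {audience} audience.")
    if style:
        parts.append(f"Write in the style of {style}.")
    if examples:
        parts.append("Include concrete examples.")
    return " ".join(parts)

broad = build_prompt("Explain how neural networks learn.")
detailed = build_prompt(
    "Explain how neural networks learn.",
    tone="straightforward",
    audience="layman",
    style="a blog post",
    length="concise",
    examples=True,
)
```

In practice you would send `detailed` as the opening message and then refine the output over several conversational turns, rather than trying to perfect the prompt up front.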
AI can be a tool for entrepreneurs – it lowers barriers and multiplies a founder's time. Around a third of Americans have had a startup idea in the last five years, but very few act on it. AI lets them iterate on the first cuts of their ideas more quickly and easily.
AI can function like a “magic intern with a tendency to lie, but a huge desire to make you happy.” This can be used to extend our own capabilities dramatically. Treat it like an intern – tell it who it is and what it can do, point out errors, and use it to save you time and effort.
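The intern framing can be pictured as a chat transcript: first tell the model who it is, then review its draft and point out errors rather than discarding it. The role text and turns below are invented for illustration, not taken from Ethan's posts.

```python
# Illustrative sketch of the "intern" workflow as a chat-style message
# list (the system/user/assistant structure used by most chat APIs).
# All content strings below are made up for this example.
messages = [
    {"role": "system",
     "content": ("You are a research assistant for a management blog. "
                 "Summarize sources faithfully and say when you are unsure.")},
    {"role": "user",
     "content": "Draft a three-bullet summary of these interview notes."},
    # The model's first draft comes back; treat it like an intern's work:
    {"role": "assistant",
     "content": "- Bullet one...\n- Bullet two...\n- Bullet three..."},
    # Point out the error and ask for a revision instead of starting over:
    {"role": "user",
     "content": "Bullet two overstates the finding; soften the claim."},
]
```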
AI is not always the right tool. Using it requires actively considering whether it is suitable for the intended purpose. It also requires us to be responsible for using it ethically.
“Our new AIs have been trained on a huge amount of our cultural history, and they are using it to provide us with text and images in response to our queries. But there is no index or map to what they know, and where they might be most helpful. Thus, we need people who have deep or broad knowledge of unusual fields to use AI in ways that others cannot, developing unique and valuable prompts and testing the limits of how they work.”
Further Reading
V. AI & Creativity
“People often ask me how I find so many articles to tweet and write about. The answer is curiosity. If I come across something I don’t know the answer to, I look it up. Often that exposes bigger mysteries or surprising results that motivate future exploration - curiosity begets curiosity. We live in a world where vast compendiums of human knowledge are available from your phone. You just need to get in the habit of looking for answers.”
Summary
Creativity benefits from mystery. The most creative people are those driven by curiosity to try and solve mysteries.
Not everyone is equally creative. The Equal Odds rule holds – creative people generate both more ideas and better ones. Creativity is not correlated with intelligence – it is just a trait some people have.
Two factors can impact creativity in particular: sleep & mental space. “People who are sleep-deprived not only generate lower-quality ideas but become bad at differentiating between good ideas and bad ones. Worse still, research shows that sleep-deprived individuals become more impulsive and are more likely to act on the bad ideas they generate”… ”Aside from rest, reflection and a wandering mind seem to help make you more creative.”
According to many psychological tests of creativity, AI already appears more creative than humans.
AI can help with idea generation – bad ideas are easy to reject and may help you think of good ones. The fact that AI comes up with lots of bad ideas, therefore, isn’t a big issue – just keep asking it and critically reviewing the answers it provides. The key to idea generation is embracing variance, which AI can help with. “You are looking for ideas that spark inspiration and recombination, and having a long list of generated possibilities can be an easier place to start for people who are not great at idea fluency.”
AI can be used to ‘unstick’ the creative process. This is how Ethan uses it for his posts. “For example, I rewrote one paragraph in this post three times before asking ChatGPT to write it for me. The result was good enough to keep moving forward, and I have since gone back and modified it beyond recognition.”
We are too reluctant to engage with others for feedback due to shyness. AI is not shy! We can ask it to help.
Further Reading
VI. Other Topics
“You can make people (including yourself!) happier, and the reason you aren’t doing it is because you are stuck in your own head.”
Innovation & evolution of technology
Most critical breakthrough products are first developed by users, and only later produced by large companies.
“Early adopters buy a technology to gain some radical advantage, later adopters want an easy-to-use clearly defined product that serves a clear purpose. Your early adopters will want to maintain their advantage by pushing you to make the product ever more capable and complicated, but that is not a way to cross over to the mass market. Instead, to cross the chasm, businesses often need to make their products easier to use, at some cost of their specialness and connection to early customers.”
Superstitious learning means learning the wrong lessons by focusing on symbols of success rather than harder-to-identify systems & processes.
When markets are stable & lessons clear, less superstitious learning occurs.
Generalist founders outperform specialist founders
Where you have a wicked problem (complex, uncertain, hard to evaluate), simplifying it can create a caricature of the real issue. You solve the wrong thing.
“the solution to wicked problems may be to challenge a core claim of “wickedness:” that wicked problems cannot be solved by trial and error because every attempted solution shifts the nature of the problem. This may have always been true in 1973 but things have changed in the past 50 years. I think this principle is often wrong today for two reasons: (1) we have developed much better tools for rapidly developing and testing solutions and (2) it gives us too much credit for our ability to change the world.”
“Second, a key lesson of the recent reproducibility crisis in science has been that most interventions don’t do much. Most slippery slopes aren’t actually that slippery, and most small-scale actions we take do not have world-altering consequences. Change is hard. All of this means that experimentation is less risky than the original formulation of wicked problems tends to imply.”
It is far easier to make other people happy than we realize
We undervalue showing gratitude
We undervalue being complimentary
We undervalue providing a little assistance, even if we can’t help with the whole task.
Balancing familiarity and innovation makes transitions to new products easier. E.g., Teslas are charged in a way that resembles putting gas in a car, not because they need to be but because this makes users feel familiar and comfortable with the technology. Edison chose for electricity to be used similarly to gas (e.g., by using electric meters) so that it felt normal to users.
‘Cooling out’ is essential for learning from failures
‘Cooling out’ is the buffer for failure. For example, “Often, someone higher status delivers the news of failure; think of a boss or senior executive delivering a bad performance review that you cannot argue with. Or people who failed can be shuffled into alternate positions, and given second-choice options - a lover converted into a friend. Third, people can be given another chance, even if the cooler knows they are unlikely to take the test again, or could never succeed if they did. Fourth, a person can be allowed to get angry, providing some catharsis that doesn’t change the situation. All of these factors help buffer the feelings of failure. Without that buffer, people are angry and unable to move forward from their failure.”
We don't learn from our failures if we don’t cool ourselves out. Instead, we get angry and defensive.
Larger teams are slower and less efficient. But, as the team size increases, you can also “increase the collective intelligence of the team.”
Therefore, being a good manager is “all about increasing collective intelligence faster than slack.”