In this episode of “The History Factory Podcast,” communications and technology authority Dan Nestle joins host Jason Dressel to discuss the fast-changing state of artificial intelligence. Listen as they explore the evolution of communications, how AI can help people and organizations surface insights and knowledge, and how to use that expertise to be an authority in the marketplace.

Dan Nestle is an award-winning communications executive and communications technology leader, podcaster and author. In 2023 and 2024, PRWeek named him to its Dashboard 25 list as one of the top 25 “movers and shakers” in communications technology. In his work and podcast conversations, Dan has proven expertise in corporate communications, communications technology, generative AI, integrated marketing communications, content marketing, social media strategy and brand storytelling. He is also a coauthor (with Mark Schaefer and 33 friends) and coeditor of “The Most Amazing Marketing Book Ever.”

Transcript:

Jason Dressel  00:12

Today on the History Factory podcast: Dan Nestle and building authority through AI.

Jason Dressel  00:16

Hi, I’m Jason Dressel, and welcome to the History Factory podcast, the podcast at the intersection of business and history. Today, our conversation is going to focus on the history that is happening in real time, and that is the fast-changing landscape of AI. At History Factory, we just launched our new AI platform called Chroniqle, and one of the things that has been different about this rollout versus others in our history is that the product is changing so fast. By the time our team is trained on the latest features, we’ve already created new features or made new discoveries, and we’re having to loop back and bring everyone back up to speed. My guest today is someone who is very familiar with those kinds of dynamics and the constant change of AI. My friend Dan Nestle is a communications executive, host of the popular podcast, “The Trending Communicator,” and an entrepreneur who is one of the earliest adopters of AI that I know and is currently focused, as you will hear, on helping experienced leaders use AI to turn their expertise into credibility, influence and new opportunities. In a very real way, Dan’s work as a consultant with leaders has a lot of commonalities with our work at History Factory for enterprises and brands. In this episode, we’ll talk about how, from different vantage points and capabilities, we’re both engaged in helping people and organizations use AI to surface their knowledge, share it with the world and earn credibility and authority for that expertise. So, buckle up for a fun, sprawling conversation with Dan Nestle.

Jason Dressel  02:03

Dan Nestle, welcome to the History Factory podcast.

Dan Nestle  02:07

It’s a pleasure to be here, Jason. Been looking forward to this for a long time.

Jason Dressel  02:10

Yeah, man, great to have you on, finally. I can’t think of a better person that I’d like to talk to about kind of where we are in the journey of AI, particularly as it pertains to the marketing and communications space. You have been an early adopter of this technology, so maybe let’s just kind of start there. I think one of the things that’s really become obvious over the last year or two is that, for sort of better and for worse, AI has really become almost, like, synonymous with LLMs, but the reality is, AI has been around for decades, and we’ve been kind of building up to this moment for several years. So, maybe just to get kind of started, you know, as someone who’s kind of been at this intersection of emerging technologies and business, you know, how do you think about, kind of, you know, where are we now with AI?

Dan Nestle  03:09

I don’t think anybody really knows where we are with AI, to be frank with you. I mean, you mentioned this long history with AI, and it’s, you know, I would expect that coming from the History Factory, but there’s a lot of kind of baggage that was kind of part of the deal when AI started, I think, and a lot of that, it might be coming to the fore, or is certainly, there’s certainly underplaying, or undertones, of that with a lot of the discriminatory—not discriminatory in terms of regulation or anything like this, but discriminative AI, like the things that Danny Gayner is doing, or has done, and, you know, the more kind of powerful, like, decision-making machine learning engines that are out there that I don’t even know, like, barely the first thing about. And we’re all out there focused on LLMs, as you said, and I think there’s a lot more under the hood that has yet to really—I’m not going to say, ‘make itself known,’ but has yet to really be leveraged in the way that we probably can, because we’re over-indexing on, you know, hey, tell me what the latest, you know, tell me what the best way is to go get travel tickets or whatever. And, you know, not really paying attention to the real power that’s behind everything. But, you know, I know very little about anything other than the LLMs and other than the kind of generative AI work that we’ve been doing for the last couple of years, and even that is—all-consuming would be a good word, I think, for it. And I’m really in the thick of it, and I don’t even know where it’s necessarily going. But there are certainly, I think, negative trends that will have to be reversed, and there’s positive trends that will have to be highlighted and uncovered and disseminated a little bit better for us to really get a handle on where we are.

Jason Dressel  05:18

Yeah, I was going to ask that later, so maybe best to just ask it now, like, when you think about those positive and negative trends and implications, what are the things that are most top of mind for you?

Dan Nestle  05:30

Well, for me, I just—and this has been the topic of a lot of my recent episodes of my podcast where, you know, I’m bringing in guests that are leaders in the field, and people you and I both know, who, you know, we’re all of a similar mind in that everybody’s getting it wrong with AI, this thing called AI transformation. I can’t stand the fact that people call it AI transformation in the first place, because AI is a disruption more than it is a transformation. And when you kind of label something transformation, it sounds like it’s this kind of, you know, multi-generational IT project that’s never going to end, because that’s what we’ve seen with digital transformation, which is still going on. I mean, I haven’t heard anybody out there—have you?—that’s said, ‘and we completed our digital transformation on such and such date.’ Never happened. So, are they still doing that? I don’t know. I don’t know. Is it unending?

Jason Dressel  06:31

It’s like the 21st century, like, buzzword of ‘innovation,’ you know? I feel like, you know, like, when I entered the workforce, it was innovation, and innovation. It’s still innovation. No one’s like, ‘yo, we finally got to the end of that innovation thing.’

Dan Nestle  06:47

Never. Never. And in fact, that’s even—we’re doubling down on innovation. I just think there’s a lot of imprecision on what innovation is to different people. You ask five people what innovation is, and you’re going to get 17 different answers, because they’re going to innovate their answers as they go along. They’re going to be like, ‘oh, you know, innovation…’ But, you know, I don’t even—people have called me an innovator, labeled me an innovator. I got an award that says I’m an innovator. And it’s hard to really say what exactly that means, but to me, it’s all about, you know, finding new things to do with the stuff that we already have and making it better. And invention is a part of it. If you invent something, that’s different than innovation, but you have to be innovative to get to the invention, if that makes sense. But I think people confuse invention with innovation, and I think they confuse, like, efficiency gains with innovation, or they confuse productivity gains with innovation. You know, innovation, like, making something new is what that’s about, taking something that’s already out there and making it better and making it new, or adding something new to it to make it yours. And I’m probably even, you know, going to be questioned about that particular definition by a lot of people out there, because, you know, in some ways, it’s kind of like art, you know. I don’t know what good art is, but I know it when I see it.

Jason Dressel  08:16

Right.

Dan Nestle  08:17

It’s something like that. But I think people need to understand that there’s differences in what innovation is, and we shouldn’t be hitching our wagons or telling everybody that, ‘oh, you need to innovate, you need to innovate,’ because we need to know what that is first, you know. But that said, the biggest, to me, unlocked advantage that’s currently out there with every single person who has access to any particular LLM is this instant capability to become an innovator, if you so desire. And that’s part of the missing ingredient, or the blind spot, in these kind of so-called AI transformation stories, because corporations, by and large, corporations and large organizations, especially large ones, are not set up to welcome the innovator. They want the innovators. They say they want the innovators. Maybe they want to hire a couple leaders who are innovators. They might have a strategy department that they want to be, like, really innovative. But for your rank and file, and for most middle managers and, you know, people down the road, innovation is not something that fits in the job description. And AI does exactly that. It gives you this capability to color outside the lines, find new ways to do things that you know need to get done, and frees you up to do the experimental new stuff and to invent upon your innovation, like, to stack on these things, and really change the way either your job is, or your company is, or your function is, or, you know, fill in the blank. And that’s not something that, you know, you can just make a claim, ‘AI transformation.’ There’s a huge—it’s probably change management, first and foremost, and, you know, and it’s agreeing on the vision that your leadership and that your board wants, and then figuring out, you know, what needs to get done to get there, and how can AI help us? And, you know, all the questions that you would normally ask if you’re building a strategy or building a plan. And those are often not asked. 
It’s just, ‘oh, we just got 10,000 Copilot licenses, train everybody up and we’re going to have a transformation.’ Doesn’t work.

Jason Dressel  10:49

Yeah, that point you make is really, really interesting. Because, you know, one of the things that—and you and I have talked about this a lot, too—I see so many parallels of what’s happening with AI with what happened with the internet in the mid-90s. And it was truly, to your point, it was disruptive. It wasn’t transformative, it wasn’t sort of evolutionary, it was truly disruptive. And your point about how AI enables people to be innovators, very much like the internet, it sort of creates, like, structural changes in terms of what people can do. And you’re absolutely right, like, large enterprises are sort of not inherently set up to foster that. You know, you don’t want your, you know, you don’t want your speech writer to suddenly become necessarily an awesome graphic designer, right?

Dan Nestle  11:49

No. That’s right.

Jason Dressel  11:51

It’s just like, ‘hey, wait, that’s not—stay in your lane over there.’ So, I’m curious on your take on that, you know, because one of the things I wanted to kind of get your perspective on is, like, how is this disruptive technology different than maybe what we saw—and maybe, also, how is it the same—of what we saw with these other kind of periods where, you know, I think about the internet, and then, obviously, you know, social media, and then kind of just the entire sort of proliferation of, you know, the internet of things? You know, how do you think about AI in that sort of context? How are we going to be sort of talking about this technology, like, in 25 years, maybe, you know?

Dan Nestle  12:29

Well, in 25 years, we’re going to have our—we’re going to be safely ensconced in our pods, and our doppelgangers, digital doppelgangers, just plug straight into our heads as the matrix feeds off of us for batteries. That’s 25 years from now, if you listen to some people. But no, honestly, I think that, you know, it’s different. I mean, it’s—I like to think, of course, this is different than anything we’ve ever done before and anything we’ve ever seen before, because I’m right in the thick of it and, you know, it positions me well, right? To say that this is totally different, you need people like me to really understand it. That’s a little self-serving. Honestly, it is—there are some similarities, and I do think there’s a lot of differences. The similarities first, I mean, you brought up the internet, social media. They’re communication revolutions. They fundamentally changed the way that information moves from point A to point B, and is absorbed and consumed. And that’s no different now, except now we can add how it’s created to the list, I think. Like, the internet, let’s say, when that first started and, you know, you had web 1.0, web 2.0, and we’re stuck there, I think, because Web3 is, what, crypto and blockchain and all that stuff? What’s happening with that, right? I mean, I don’t know, maybe you do, but it’s there. But web 1.0 to 2.0 was massive change, because it changed from, like, the broadcast, company-to-individual, advertising type of, like, direct communication, one-way communication model, to, ‘holy cow, people can create their own stuff. I can build a website and become a publisher.’ And then, you know, that evolved into the way that people consume content, because, you know, suddenly there’s a lot of competition for eyeballs, and a digital brochure is not the same thing as a blog or an interactive website or, you know, you name it.
The experience had to change, the quality of the content had to change, the degree of trust that’s required to win audiences gradually changed and continues to change. All of that was the result of, you know, that internet. So, like, we’re still feeling the knock on effects of that, I think. And then comes social media, which just kind of exacerbates this whole ‘individual as communicator,’ the power of the individual over the corporation or the company or the organization, you know, especially when it comes to buying. Now, we’ve shifted power into the hands of the buyers, of the consumers. And that shift, you know, is still real, and it’s still, you know, still should be governing the way that we deal with our audiences, you know? I don’t know if you’ve ever heard the word, ‘an audience-centric’—or, the phrase, ‘an audience-centric approach’ before, let’s say, 2009 or 8, you know? I mean, I don’t think it was popular, if you did. But, yeah, it’s social change, all that. And now, you know, now with AI, some of that is the same, I think. But now you have a whole new—it’s a technology, it’s not a technology. I mean, it’s technical, it’s based in a technology, but it’s permeated so much that it’s a force. And I, you know, I wouldn’t, I don’t know what category, necessarily, put into it—to put it into—but the way that we interact with it, the way that it interacts with us, and the black box kind of nature of the whole thing, where we don’t know what’s coming next but, whoa, one day, suddenly, you know, it’s serving up information, deep information, about something that you care about, and it’s citing sources, and it’s pulling stuff in from all these different places. And then we start to wonder, where is it pulling this stuff in from? And then you realize, oh, well, it’s actually pulling in from trusted sources. And, like, bada bing, bada bang, yada yada yada, we have GEO, right? 
It’s like this sort of—this thing just keeps happening, and so it has become its own audience. So that’s the—I think this is the core difference. The internet was never an audience. Social media is not an audience. It’s a channel. AI is a stakeholder now. It’s an audience. We have to both use it as a communication method, we have to also use it now for, you know, as a marketing tool and a PR and comms tool, but we also have to treat it like another stakeholder with its own set of requirements. And that’s a whole different thing, from a marketing comms standpoint.

Jason Dressel  17:58

And maybe to dig in on that, I mean, are there any organizations that you see right now using it in that way effectively? I mean, let’s sort of dig in, you know, over the last year or so, you know, who have you seen out there kind of, you know, using this technology in truly, maybe, novel ways?

Dan Nestle  18:18

That is a very difficult question to answer, simply because, you know, I’m so in my own world, and not necessarily paying attention to what all these large organizations are doing. But every time I hear a story about, you know, like, a company like Klarna, for example, you know, adopted AI, or said they were rolling out AI, and the result was they fired all these people to replace them with AI, and then they were just, like, ‘oops, got to bring them all back!’ Those are the kind of stories that tend to pop up in my feed. You know, I’m not certain who’s doing it right, and I’m open to hearing about it. I would venture to guess that people who run PR agencies and run ad agencies and so on would have some insights into that. But I’m working a lot more with smaller orgs and individuals, and then, you know, with some upcoming projects, and what I’m seeing is that AI tends to be really an individual kind of—it’s a very individualized experience. And it’s a very individualized, you know, the outcomes are also very individualized. Maybe that’s also another big change. So, in the large companies, the ones that are allowing more freedom and exploration and, for lack of a better word, play with the AI, within the boundaries that they set, within their regulatory environment or their compliance environment, or both those things, those are the ones that are doing better, or that they’re seeing more happen. But it’s a very short jump from there to a board of directors saying, ‘oh, everybody’s using AI. Surely you’re finding some efficiencies, and we can cut staff now, right?’ I mean, that seems to be the direction of conversation, and CEOs are not always doing the best job of defending their employee base. And I realize I’m not naming specific companies, and part of that is purposeful, but also because, you know, frankly, it is just so broad and widespread, this trend.
I mean, we’re seeing a major media merger that just happened, or an acquisition that just happened, and the result is going to be at least 4,000 employees cut. And probably more. Some people are saying upwards of 23,000 over time. That’s a—4,000 is a real number. 23,000 is a ridiculous number. Some of that is because of duplication, overlap, in business units, right? But a lot of that is because they’re confident that now that they have generative AI, that a lot of the duplicative roles, or a lot of the things that they can take care of with AI, they don’t have to keep all those people on anymore. So, yeah, it’s happening all over. So, I don’t know who’s doing it right. I mean, if you have any suggestions about that, or if you’ve seen anybody…oh, you know who’s doing it right? The History Factory. History Factory is doing it right.

Jason Dressel  21:47

Well, I would say that, actually, and not to sound like we’re, like, bragging on our own podcast. But I think one of the big breakthroughs of who I think is doing it right is companies that are understanding how to forge it in their area or their domain of expertise. And this is something that—we shared our new AI tool with one of our clients, with a CTO, and they—she did such an effective job of articulating exactly what her organization is doing in terms of how they are developing and wielding and applying AI, and how we at History Factory are doing it. And I think what I would add to that, Dan, is that the best companies are, essentially, not necessarily the experts in AI technology, but they’re figuring out how to, essentially, use and harness these tools in the areas where they are experts. And I do think that one of the distinctive differences with this technology, as we’re touching on, is that you don’t have to have this, you know, really complex set of technical skills to be able to get a lot out of it, right?

Dan Nestle  23:11

In fact, you need none, right? You need zero technical skills to be able to get a tremendous amount out of it, and with a modicum of technical skills, you can just really shoot out into space. And it’s crazy what you can do with just understanding a little, little bit about what scripts are, what code is, and what logic is. I mean, stuff that communicators haven’t really been looking at. And who can blame us? You know, something you said that just really made me remember, or kind of brought back to mind, this whole idea of AI as a technology. And, you know, I think you’re totally on the mark with using AI, or working with AI, or partnering with AI to really bring out and enhance and augment your domain expertise, you know, whether you’re an individual or a company. So, like, this whole idea of AI being individualized, if you think of the corporation as a single corpus, then the AI should be individualized for that corporation, like, for its needs. And that is, you know, its domain expertise, its customer base, all these things. The AI should be molded, trained, used for those use cases and for those purposes. The problem, or one of the biggest problems, that we’ve had is that—and this is, I’m not trying to throw shade on any function or department, but I guess I’m gonna—the IT departments of the world, and the CIOs of the world, and the CTOs of the world are the ones who have owned this, and they treat it like a technology that has to be implemented, rolled out, across an organization, and they follow the same playbook. ‘You’re implementing AI, you need to do this four hours of training, and then we’re going to teach you, like, three basic things, how to push this button, this button, that button. Boom, you’re a user now.’ So, they’re training the people, their employees.
Employees are clicking on the lessons and the units, you know, their corporate training system, whatever they’re using, is recording the fact that, ‘okay, 97% of our employees have completed the modules.’ So, then they go out to their boards and they say, ‘AI adoption is 97%. Done.’ But, you know, if it’s a strategic partner, if AI is to be your domain expertise kind of enhancer or augmenter, it has to sit cross-functionally, and it also has to sit heavily with the strategy people and the communications people and the marketing people and the sales people. I mean, it’s like, it’s very different in that way. It’s not a typical technology.

Jason Dressel  26:02

It’s totally organic. And, you know, back to the point of this CTO, to her credit, you know, that’s not sort of the mindset of her organization. It’s like, she recognizes that, with AI, her organization’s mandate is to build product for the marketplace that is aligned with the value proposition and expertise of what her organization does for society. And essentially, you know—so, from her vantage point, it’s like, you know, when she looked at our technology as an example, she’s like, ‘this is awesome.’ She’s like, ‘I don’t need it, but I bet a lot of people in our organization need it, and they should probably buy this.’ You know what I mean? And so it’s a very different mindset, because—and I think you’re absolutely right. I think that’s one of the things that we’re all sort of collectively realizing with AI is, in a weird way, it’s not technology, right? I mean, it is, but it’s like, it’s a medium. It’s just—yeah. It’s just a lot different.

Dan Nestle  27:01

In the hands of you, it’s going to do something, in my hands, I’m going to do something different. Because it is just the most powerful thing we’ve ever had that is able to transform something that’s in your head into a workable piece of writing, or a workable product, even, you know? It’s crazy how you can just start with, ‘you know what? I want to build an app about this. Help me.’ And that’s all you have to say. And then, you know, three hours later, you’ve got an app. You never have to—it may not be a good one, but when have we ever had that ability before? Never. Never. So, it is—

Jason Dressel  27:41

And I think what’s interesting about it, too, to your point about CEOs and jobs and all of that, I mean, I actually think that one of the challenges is that I don’t think that the business community has been honest enough with the fact that this is going to take away jobs. I mean, the reality is, like, this does have the kind of life-changing implications that may be—I don’t know, we may look back and say that this had some of the same profound effects as the transition from an agrarian to an industrial society in terms of, you know, how we’re spending our time. And the reality is, you know, 150 years ago, a very large percentage of people spent their time, you know, working in an agrarian environment. And over 30, 50 years, that changed very rapidly. It doesn’t mean that in 50 years, or in 20 years, people aren’t going to have jobs. I’m not so sure. I don’t accept that premise, because I think there’s also just an element of—human civilization is, you know, we’re built to want to work, right? We’re built to want to do things. So, I’m not so sure I—but I do think that right now, for the next however long, there’s going to be a massive sort of reallocation of talent and expertise and how the jobs are getting reshuffled.

Dan Nestle  29:01

Oh, I totally agree. And I think it’s common with large societal transformations. And you mentioned the Industrial Revolution as one. The invention of the automobile was another, you know, and the railroad. Hell, the cotton gin, you know? I mean, just even within the Industrial Revolution, you had these moments in time that completely transformed everything, you know? So, the advantage—one of the big differences, though, is that moving from an agrarian to an industrial kind of culture, or industrial economy, you’re starting almost from zero with these new factories and new things and, you know, there’s plenty of people out there who weren’t working, or who didn’t have a way to make a living, and all of a sudden, ‘hey, look! See this big brick building? Come over here. Sit down here for 14 hours a day, do this one task, you know, and you will get money.’ Now, the downside of all that was not made clear, I’m sure, to everybody but, you know, that led to so many different societal changes, and it led to urbanization more than anything else, right? And then new things were just invented. New things just happened on the backs of all that. AI, you know, I don’t know if it’s gonna cause a shift like that, so I don’t know if it’s as big as the Industrial Revolution, I don’t necessarily agree with that, but we don’t know what new jobs, new roles, new directions of society, anything—we don’t know what’s going to be created based on all of this. In healthcare alone, you know, where—this is one of the bright spots of AI, you know—in healthcare and medicine, you know, AI is identifying diseases. It’s, you know, able to support pharmaceutical research at rates and at accuracy levels that we have never, ever seen. So, the outcomes for health look to be, I mean, like, ridiculously good for us. But the system has not caught up to that, and hasn’t caught up. And it’ll be a while before it catches up. But what happens when it does? What happens when it does?
Then, do insurance costs go down? You know, does the great bureaucracy that causes pharmaceutical costs to be, like, through the roof, does that change? It should, you know? So, these are the knock on effects. It’s going to take a while. But we, in our current position in time, cannot envision a future that is anything other than linear for most people. Or flat. Because you just know what you know. You know? I mean, that’s why futurists are so valuable to us, you know? They make it their business. And, you know, when they’re right, they’re right. But when they have a methodical and evidence-based kind of theory to say, ‘okay, we’re going to go in this direction, or we could go in this direction.’ We just don’t know what jobs are going to come, you know? And I feel like you, I kind of trust humanity in some ways to figure it out. I mean, we are an inventive and an innovative species. And, you know, necessity is the mother of invention, and all that. We will make new things happen. We’re doing it on an individual level every single day. The stuff I’m doing with my business and the things I’m creating did not exist, you know, two years, three years ago. Two years ago. So, you know, it’s—I’m positive about at least that much.

Jason Dressel  33:02

Yeah. So, let’s talk a little bit about that, and some of the things that you are doing in the space, which I think is really cool and really interesting, and I certainly have benefited from some of the tools you’ve developed. I mean, you talk about AI really being sort of a force multiplier for individuals, and I wanted to kind of hear sort of your way of articulating that, and then some of the kind of concepts and frameworks—I know you’re a big framework guy—so maybe you can talk a little bit about, you know, some of the frameworks that you’re using in your work.

Dan Nestle  33:38

Sure. Well, you know, AI is a force multiplier for individuals. I love that, the way you’ve eloquently stated it. You know, for me, it was always about two things. Like, all right, I have this great thing available, I seem to have taken to it like a fish to water. What shortcuts can I find to make my life easier? That’s it. Like, that’s where I start. Because I always say that I’m fundamentally a lazy person, which means, you know, I just want to find the easiest way to do things. And as a lazy person, I am dedicated to working 16 to 18 hours a day to make my life easier. It’s two fights. It’s ridiculous. But that’s what I found myself doing, right? Like, okay, I’ve been in comms for a long time, you know, a couple of decades. You know, there was a time where I really enjoyed spending time and writing. That time has long passed. I like writing things that I want to write, but I don’t want to sit there and be told, you know, ‘you have to do 250 words about this, 500 words about this.’ So, that’s where it starts. Oh, you know, one of the first use cases of AI is it writes. Okay, great. But I also am a perfectionist and have serious problems with bad writing. Ask anybody who’s ever worked for me, or worked with me, and—you know, I like the red pen. And that did not jibe well with the early—even now, with a lot of AI stuff. So, I took it—it was my mission in life to make the thing write better. Like, that was where I started. It’s got to write better. It has to be able to. And I just started to break it down into: what makes good writing? You know, what do you need as a, let’s say, as an executive, or as a leader, or as a—if you want to be a quote, unquote ‘thought leader,’ what are the elements that make that possible? In this world where, you know, media relations is no longer the key to PR success, what is? Well, it’s attention. And how do you earn attention? Well, you earn attention by standing out. You stand out by having a point of view.
Your point of view has to be solid and legit, and it has to be backed by experience and backed by, hopefully, evidence, and has to be coming from the mouth of someone who has authority and trust. All those things. So, if you go to AI and say, ‘write me a thought leadership piece,’ none of those things are taken into account. So, I tried to build all that. And I believe I’ve come to a point where I’ve taken a lot of those qualities and built them into what I call my content engine, or my executive influence engine, or, you know, whatever it’s going to be called at any given time. It’s really a content creator, an AI-based content creator, that is 100% focused on your own knowledge and your own body of work—you know, the thing, the expertise, the domain expertise you talked about before—and taking your own knowledge, your own expertise, your writing style, your voice, and layering into that what’s needed to communicate with your audiences today. And the outputs are pretty good. Then, when you throw an ethical layer on top of that—which, I think communicators certainly have ethics, or they should have ethics, in the front of their mind at all times—you know, it just seemed natural to me to put an ethical layer on it. So, you know, you build this layered engine. That was something that I would never have done several years ago because, you know, it requires some sort of knowledge of how a technology works. But with AI, you don’t need that much knowledge. You just need to sort of keep pushing and questioning and interrogating and being curious. And those are the things that I could do pretty easily. That was second nature to me. So, you know, that’s where I really started out, doing that, to bring me forward to this kind of content creation system. But then, you know, you have a content creation system. Wonderful, great.
That’s fantastic if you’re Jason Dressel, and you know, you know, where you stand in the world, or at least you have a solid clue of what your POV is and the kinds of things you need to talk about, because you run a company and you get it. The vast majority of people do not know this, right? Especially leaders, experts, people who run their own— even people who run their own companies. So, then you need to assess that. Like, somebody might need the help, but they don’t know they need the help. Or even if they do, they’re going to go the wrong direction. So, I figured, oh, you know what? Maybe I could just make it easier to figure out where somebody stands in the world. And then that led to a whole assessment system. So, I built an assessment system, you know? And everything kind of leads to another thing. And then I’m in the shower and thinking, oh, yeah, boy, this thing is pumping out 40-page reports. It looks fantastic, you know. Oh, wait a second, what if—shouldn’t it have something about LinkedIn in there? Oh, it should. And then I’m, like, hurrying out of the shower, going right back to my AI, and saying, all right, let’s build the LinkedIn module. You know what I mean? This is the way it happens. For me, anyway. And you know there’s—
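The “layered engine” Dan describes (knowledge base, voice, audience, then an ethical layer on top) can be pictured as stacked instructions assembled into a single system prompt before any drafting call is made. The sketch below is purely illustrative: the layer names, their wording, and the fixed ordering are assumptions for the example, not Dan’s actual implementation.

```python
# Hypothetical sketch of a "layered engine": each layer contributes one set of
# constraints, and the ethics layer is appended last so it gets the final word.
LAYERS = {
    "knowledge": "Draw only on the provided body of work; never invent facts.",
    "voice": "Match the author's sentence rhythm, vocabulary, and recurring phrases.",
    "audience": "Write for communications executives; assume domain fluency.",
    "ethics": "Disclose AI assistance where required; no fabricated quotes or data.",
}

def build_system_prompt(layers: dict[str, str]) -> str:
    """Assemble the layers, in a fixed order, into one system prompt."""
    ordered = ["knowledge", "voice", "audience", "ethics"]
    return "\n".join(f"[{name.upper()}] {layers[name]}" for name in ordered)

prompt = build_system_prompt(LAYERS)
```

The ordering mirrors the point Dan makes in conversation: the ethical layer sits on top of everything else, so whatever the earlier layers ask for, it constrains them all.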

Jason Dressel  39:17

Well, and it’s interesting, too, because—sorry to cut you off, but what you’re talking about, this is where sort of the intersection of the work you’re doing and a lot of the work we’re doing at History Factory for companies at scale are overlapping. And you’ve got this concept I’d love you to talk about of what you call—I believe it was intellectual archeology? And how that kind of feeds into this concept.

Dan Nestle  39:40

Sure. You know, one of the worst things about AI and, you know, AI slop, as it’s now, you know—and actually, the word ‘AI slop,’ I think I’m going to have to ban it from my vocabulary, because now it’s officially—AI slop has become AI slop. I mean, it’s like, it’s too big of a thing. It’s, like, the Webster word of the year or something. But the reason for it is mostly because AI fills in the blanks. It’s a big Mad Libs machine, and if you tell it something to do, it’s going to do it. If you don’t give it enough information, it’s going to make up the information. Sometimes, it will hallucinate the information. And it’s all about how much context you give it. So, when you are writing, or when you are using AI to help you co-create something—and I would never advocate you just delegate 100% to AI, it’s a co-creation process, right? But when you do that, the quality of what AI gives you is based 100% on what you give it. So, intellectual archeology, you know, we have years and years and years of stuff stuffed away not only in our brains, but on our hard drives and across the internet and, you know, in your previous company files, all these different things. And the longer you’ve been an expert, or the longer you’ve been a professional, the more of this stuff that there is. And who knows what’s in those archives? You know, when you pull back the—I don’t know how many metaphors I’m going to mix here—but when you peel back that first layer of the onion, and you start digging into the mind—I’m sorry, it’s like, it’s crazy—but when you start going deep into, ‘all right, what do I know?’ You know, it becomes like an archeology exploration, you know? Like, ‘oh, there’s a fragment of something there and there’s a—oh, remember that podcast I did three years ago? There’s something there. 
And I remember I was on these videos, and, you know, hey, I’ve done, like, loads and loads of town halls, there’s got to be something there.’ So, you’re looking around and you’re identifying the places that you’ve got to start digging. And sometimes you have to sit there and sift for a while, but sometimes you’re making these big finds. And it’s all what’s already—what you’ve done. Like, it’s your knowledge. So, you unearth these precious items, and then you put them in your warehouse, and before you know it, you’ve got something like the end of Raiders of the Lost Ark, where you’ve just got this huge, huge warehouse, but instead of things being boxed away, they’re all open and accessible. Because now, that’s where AI comes in, you know? You can throw everything in there, and all of those shards and dust motes and large vases and urns and whatnot, you know, they then become the ingredients for, like, a big stew that is all you. A ‘you stew,’ you know? I’ve never said that before, but you can take your ‘you stew’ now, and—

Jason Dressel  42:49

I love a ‘you stew.’ That’s great. You got your—

Dan Nestle  42:51

—and that’s where it goes. Yeah.

Jason Dressel  42:53

Well, it’s interesting, too, because when you think about that on an individual basis, and if you’re working, you know, with, you know, executives, you know, entrepreneurs, you know, thought leaders, and you’re thinking about how you’re building up that ‘you stew’ for them, one of the gaps that they may have—and this is what we’re dealing with, kind of, at scale, with our organizational clients, right?—is they’re thinking, basically, the data that they’re using, and the content, the archeological materials that you’re talking about, are all just inherently digital. And they’re not. I mean, if you’re an executive that’s been around for 35 years, it’s like, you’ve got to go back and also be going through your files and be pulling out, you know, your typewritten reports from, you know, the early 90s as part of that. Or, you know, the pieces that have been published. And so, that’s also one of the things we’re seeing is the gaps of AI, because it is inherently biased towards more recent digital content.

Dan Nestle  43:57

Yeah, in an ideal world, you know, I would build an engine—and I’d do this for myself—that is the totality of everything I’ve ever done. Because AI can allow you to do that. There’s so much space that I can probably find everything I’ve ever written, everything I’ve ever said, whatever, I could probably lump it all in and use that as my LLM, or as my kind of RAG (retrieval-augmented generation) system, the resources that I will use for the AI, right? But we don’t live in that kind of world where, if you’ve been around for too long—you know? The good news is that, of course, POVs change. Your thoughts change over time. And to capture, you know, the real you, and what you’re actually thinking, it doesn’t always require every little bit. It’s very different than a corporate archive, than the archive work you’re doing, which should have every scrap, right? You know, you’re building the Library of Alexandria and, you know, what I’m building is, you know, like, maybe the Bloomberg terminal. I don’t know. It’s like, just something a little bit different, where the need is deep, but not too deep. And when an executive or a leader—anybody, really—doesn’t have a lot of digital information, you know, the beauty of the way that the AI works is it’s a pattern-matching machine. I said it’s a big Mad Libs generator, right? So, as long as you have enough patterns to refer to, you can start. And then you can optimize as you go. But how do you get those patterns if you don’t have the old material? You can just sit in front of a microphone and talk. Talk about it. Ask a lot of questions. Be very curious. Without being too much of an interrogator, be an interrogator, you know, dig down deep. And then those patterns emerge, and the voice emerges, and the pattern, you know, everything comes through. Then the writing happens, you know, as you go, you have to figure it out, like, what the best patterns are, if you don’t have a lot of evidence to start with.
The most important thing is that you don’t want to misrepresent yourself, and you don’t want to misrepresent your ideas. So, it’s better to be a little prescriptive from the get-go and expand as you go.
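Dan’s idea of lumping everything you’ve ever written into a RAG-style setup can be sketched in miniature. The snippet below is a hypothetical illustration, not his actual system: it ranks fragments of a personal archive against a query using a crude bag-of-words cosine similarity and returns the top matches as grounding context for a draft. The sample archive and all function names are invented for the example; a real system would use embeddings rather than raw word counts.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Crude bag-of-words vector: lowercase tokens and their counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, archive: list[str], k: int = 2) -> list[str]:
    # Rank every archived fragment against the query and keep the top k.
    q = vectorize(query)
    return sorted(archive, key=lambda doc: cosine(q, vectorize(doc)), reverse=True)[:k]

archive = [
    "Town hall remarks on leading communications teams through change.",
    "Podcast notes: why media relations is no longer the key to PR success.",
    "Old memo about quarterly budget reconciliation procedures.",
]

context = retrieve("earning attention without media relations", archive)
prompt = "Using only this background, draft a post in my voice:\n" + "\n".join(context)
```

The design choice matches the point Dan makes: the model drafts from your own unearthed material rather than from its training data, so it has far fewer blanks to invent.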

Jason Dressel  46:31

Yeah. I mean, the magic of AI is, like, what we’ve said from the start, right? Or, at least, what I’ve always said is, like, it sucks at context and it can suck at authenticity, so everything that you’re talking about—and, again, a lot of the work we’re doing is kind of solving for the same problems from a different angle—is essentially solving for the gaps in sort of context and authenticity because of the kinds of data sets and the kinds of material that it’s pulling from.

Dan Nestle  47:04

Yeah. And it’s getting better and better at it. I mean—but I would never, like, I don’t think that there will be a moment when, you know, the AI will completely replace the executive for their communications. I mean, look, let’s face it, a typical corporate communication where you’re just making a product announcement, or you’re just, you know, ‘hey, it’s Thanksgiving, I want to—’ Whatever. These things can be done by anyone. But for a thought piece, a POV, you know, preparing someone for an interview, you know, getting to the depth of, like, vision and strategy, that is not something that, you know, you can just throw out there based on a prompt. You know, you need to have the human heavily involved in that process. And, you know, by the time the engine that I create puts something out, it’ll give you something that’s anywhere from 70 to 85, maybe 90%, ready to roll, depending on the topic and the nature of the post. You know, a short post, you might be able to just run with as is. It’s not high stakes. But anything high stakes, you know, treat it like it’s, like, a really good draft from a ghostwriter, and you’ll be in the right zone.

Jason Dressel  48:26

Yeah. So, last thing, kind of prediction time. Any thoughts on, you know, if we’re talking a year or two from now, what might we be talking about that we’re not talking about yet?

Dan Nestle  48:41

I—Okay. I’ve been trying to earn the badge, ‘futurist,’ and failing miserably over my time, so—

Jason Dressel  48:50

You’re in a safe space.

Dan Nestle  48:51

—so, I do not have the best—

Jason Dressel  48:53

We’re a History Factory podcast. What better place to do some workshopping on the future?

Dan Nestle  49:00

I’ll tell you, I don’t have the best record in all this. However, take it as you will, you know, this individualized use of AI is such a huge change because, like, when you think about Excel or Word or these other, like, kind of work-changing software, you know, the experience is the same for every person. Some people are just better at it than others. With AI, it’s not about being better at it or worse at it. It’s just about, like, how you find that way to communicate with it that works and that yields the results that you really want. And then you layer in, like, okay, I can do this better with frameworks, and I could do this better with logic, and then you start to kind of figure that stuff out, but ultimately, it’s your thought pattern and your way of connecting dots. Because of that, you’ll see a proliferation of individuals like me, I suppose, who are going to go out into the world on their own and not need to be employed by anybody because, you know, they understand how to use AI, and they can build their own team for 20 bucks a month—right now, anyway, initially. For the smallest investment, you get the most corporate power that you could ever have had as an individual. So, there’s no such thing as a solopreneur anymore with AI. Now, you have a small company. And the incentives will be greater, I think, to kind of keep doing that, especially as people are getting laid off more and more, and especially as knowledge workers get laid off. The flip side of that is that a lot of those folks haven’t gone anywhere with AI. They’re not ready. So, they’re going to jump too soon into that milieu, and you’ll see a proliferation of, like, individuals who are holding themselves out as AI consultants and creators or whatever, but the quality will be demonstrably lower as an aggregate. So, you know, it’s going to make it harder for people out there to do well on their own.
However, there will be a large kind of separation of the wheat from the chaff, so to speak, and you’ll see kind of super consultants develop, and everyone else. But then, shortly thereafter—in June, as a matter of fact! No, I’m not going to say that. I’m not going to give a date. But shortly thereafter, there’s going to be—I’m calling this ‘the big oopsie.’ All these companies that are laying people off, with this AI transformation that isn’t working, at some point they’re going to realize they’ve let go of the kinds of things that you need for the History Factory. They’ve let go of wisdom. They’ve let go of institutional knowledge. They’ve let go of deep domain expertise that has been the fuel for their innovation for years. And they’re going to go, ‘oh, hey, sorry. Want to come back? We’re going to start hiring people back.’ Not all of them, maybe, but I think they’re going to have to hire people back. And it’ll be interesting to see whether people go back, you know? Because there’s a lot of other things happening out there that are going to make it harder to go back, you know? RTO mandates and such, you know, these are all things that are happening that are going to make it harder for employees and for workers. So, ‘the big oopsie’ is a big one, and the proliferation of individual business owners. And, you know, big picture, agents, you know, agentic AI. I’m not as bullish on that as a lot of folks are, but I think it will be—once we get past this immediate kind of techno-bubble of ‘agentic this and agentic that,’ it’s just going to be embedded in things. So, more and more, like, you know, Chroniqle, for example, your platform, people will just be—instead of their first thought going to ChatGPT to find out about the history of their company, they’ll just have an agent, you know, that’s based in, let’s say, Chroniqle. ‘Go find me this and go find me that.’ And those kinds of solutions, I think, are going to get more and more important.
So, yeah, maybe it’s not much, but I wouldn’t bet on it, guys. Keep your salary safe. But that’s what I think is going to happen.

Jason Dressel  53:44

Yeah, I think so. I think the next thing, in the next couple of years—I think there is going to be this integration of what’s happening with AI and robotics, and I don’t think people are prepared for the way that’s going to happen. And even the metaverse. I mean—and you and I have probably talked a little bit about this—I feel like right before ChatGPT just, you know, broke our brains, it felt like it just kind of crashed onto the scene out of nowhere, and before that, it was, like, ‘metaverse, metaverse.’ But I think there’s going to be an element of this kind of convergence of AI—

Dan Nestle  54:16

There has to be.

Jason Dressel  54:16

—this notion of Web3 and robotics, when all that technology comes together, I think it’s going to make things really interesting.

Dan Nestle  54:26

Well, you know, I got one more that you just reminded me of. Because I think, you know, it’s very short-sighted to dismiss Web3 right now, you know, with deepfakes and with the problems we’re having, rage farms and all this fake content out there, and even, you know, AI slop, and people trying to game the GEO (generative engine optimization) engines by creating multiple websites—it’s almost like the black-hat SEO days at times. Blockchain could be the solution to that, by providing the validation of what’s real and what’s not, you know, at least at the beginning. And I think—I wouldn’t be surprised if there’s loads of people already doing that, working on that. But I would love to see that happen.

Jason Dressel  55:14

Yeah. Cool. Well, we have a saying at History Factory: Nothing looks older faster than the future. So, we’ll leave it at that. But thanks for your insights. Really interesting, as always, and we’ll catch up soon, buddy.

Dan Nestle  55:33

Thanks, Jason. It’s good to be here.

Jason Dressel  55:39

That’s it for this episode of the History Factory podcast. Thanks again to Dan Nestle. You can learn more about Dan and his work at be-inquisitive.com. That’s B, E, hyphen, inquisitive.com. And I also encourage you to check out his excellent podcast, “The Trending Communicator.” Thanks to all of you for listening to the History Factory podcast. Stay safe and we’ll be back soon with a new episode. Be well.
