Justin Trumbold Shares Top 5 AI Adoption MISTAKES for Enterprise
Enterprise AI adoption is frequently failing because companies treat generative AI as a magic bullet to reinvent their business, rather than a specialized tool to solve well-documented, specific problems. Justin Trumbold explains that true generative AI readiness requires a deep understanding of current operations, clear enterprise objectives, and proper alignment before layering any new technology on top. Successful implementation requires actively evaluating an organization's adaptability, cross-departmental collaboration, and end-user proficiency. Leaders must empower their teams to identify the mundane tasks that constrain them and pilot safe, outcome-driven solutions that achieve incremental efficiencies, rather than attempting sweeping, unstructured transformations that don't match the company's operating reality. As AI continues to evolve and lower the barrier to entry for content generation, the future of business and consulting belongs to strategists who can ask the right questions. Those who can orchestrate the intersection of business strategy, change management, and AI execution will thrive, while those who rely on unvetted, out-of-the-box LLM outputs will quickly drown in low-confidence information.
Discussed in this episode
- Why assuming generative AI will automatically solve undefined organizational challenges is the primary mistake leaders make.
- The necessity of documenting existing processes and mapping customer value chains before applying AI solutions.
- How generative AI readiness is measured through enterprise alignment, end-user proficiency, and structural adaptability.
- The danger of deploying highly dynamic AI solutions into rigid, risk-averse organizational cultures.
- Why defining the precise handoff between human execution and machine automation is critical to agentic AI success.
- Building custom, private LLM instances for specific analyses instead of buying generic out-of-the-box tools.
- The shifting landscape for consultants, where fast, AI-augmented execution will bifurcate the professional services market.
- Using tools like Grok to collate and organize tabular data rather than relying solely on ChatGPT for low-confidence text outputs.
Episode highlights
- — Welcome and introduction of Justin Trumbold
- — The biggest AI mistake enterprises make
- — Documenting processes before applying AI tech
- — Defining generative AI enterprise readiness
- — The danger of rigid companies using dynamic AI
- — Defining human vs. machine skills and agents
- — Assessing current AI proficiency and collaboration
- — The future of the consulting and agency industry
- — Why asking the right questions matters most
- — Rapid-fire questions and daily AI productivity tools
Key takeaways
- Map and document processes before applying AI.
- Align AI strategy to enterprise operations.
- Match AI tools to organizational adaptability.
- Focus on outcomes, not rapid deployment.
- Good prompts require asking the right questions.
Transcript
A mistake that people make is that generative AI is a tool to reimagine and reinvent your business. And it certainly can be that at some point. Talk about how to make AI strategy actionable, measurable, and aligned with the business. If you think about it, you had eight different people in the group ask a question about, like, what's a go-to-market strategy for X, Y, and Z.
Same model, maybe even worded the same way. They'll get different things. Welcome back to another episode of The Bridge the Gap podcast powered by none other than Revenue Reimagined. Today's guest is Justin Trumbold, founder and president of Antison Advisors, and one of the clearest voices on how enterprises can move beyond generative AI hype to actually build real value creating strategies.
Before launching Antison, Justin spent nearly a decade leading strategy work at Deloitte and Grant Thornton. He now works with leadership teams to connect the dots between enterprise objectives, market realities, operating models, and yes, AI innovation. We're going to cut through some noise here, talk about how to make AI strategy actionable, measurable, and aligned with the business you already, already run. Justin, welcome to the show.
Thank you. It's great, it's great to be here. And now, now I have to live up to that introduction. So, thank, thank you for.
See the only problem with my intro is we, we do, we do try to like lift people up. Yeah. You earned it, man. You earned it.
You've done some good stuff. Dale, welcome to the show. Hey, I'm back. I'm back.
And, and having technical problems, but now I'm back. Although, I did have my eye dilated today, so I can't see a thing. So, this should be interesting. But, Justin, welcome to the show.
Appreciate it, man. Um, you talk a lot about AI hype, enterprise reality. Um, what, when you were building your organization, what problems are you trying to solve for the biggest firms that they don't even know that they have yet? Yeah, it it's, it's a great question to, to tee it up.
And I would, even before I answer that, say that one of the things that generative AI has done is, a lot of the problems that large organizations have, small organizations have the same ones. They just might manifest in slightly different ways. So there's been kind of a condensing of what that looks like. But to answer your question, the main challenge that I'm seeing, that's showing up, and what leaders or even individuals anywhere in the business aren't seeing, is a presumption that a tech solution in terms of generative AI is going to solve what your challenges are.
But what really, the way, the mistake they're making and what they're not seeing is that you have to have clarity around what it is you're trying to solve, and then align the technology solution to that. And oftentimes that isn't going to be an out-of-the-box solution. And so you, as an organization, you have to have a culture in place that is able to identify those opportunities. Oh, you mean you have to like actually document the challenge that you're having.
You have to identify where you can potentially solve it. I feel like we just had this conversation ten minutes ago. Well, or even document processes. So there's a common thread of, you know, what do we mean if an organization's customer-first?
It's like, well, what it doesn't mean is you have it on, on the side of you know, a thing on the wall at the company. Maybe you have that, but that's, of course, not sufficient. It means that you are able to dig in and unlock the customer's value chain and understand what it is they're trying to do. And you organize a company around delivering that.
Right? So that, that's, that isn't just, hey, this is a feature on the product. It's, it's how you work internally, how you reward your people. So they're, they're doing activities that are customer first.
It's how you then explore opportunities and examine the customer value chain and think about the impact on the customer value chain if you make a decision. Similarly with generative AI, so that's a core principle that's been around for a long time. It's we have these things that we do. And back to your question, Dale, a mistake that people make is that generative AI is a tool to reimagine and reinvent your business.
And it certainly can be that at some point. Yep. But you probably don't want to start there. You, you probably want to start with, well, what are we doing now?
Where can we improve? And then how do you make it concrete? As, you know, Adam, you have with your great introduction, how do you put in a clear path to then identify those opportunities, test them, decide if you want to scale them, and then make appropriate investments? Yeah. Then you can start getting into this idea of, well, what now?
And, and I'll, I'll just close on this here. This, the what now could be, well, now we're 20% more efficient. Can our, you know, can our sales people, can anybody, anybody that has some value-adding activity that's being, you know, let's say, squeezed by mundane day-to-day things, what of those things can you do more of? Right?
So that can lead to growth, it can lead to other, you know, revenue advances, it can lead to innovation. But I think something that's interesting, and time will tell how this plays out, is, if you just give your people more time to think about things and not do things, what types of transformative business models are going to emerge? You know, what ideas will people have that are working in the business, that were spending so much time there, and they had ideas that occurred to them, but they just dismissed them because they had to get back to the spreadsheet or whatever they were doing. It'll just be interesting to see how those dynamics emerge in the coming years.
Yeah, so let, let's, let's piggyback off that. So in order to solve these problems and in order to make sure that we are fixing the right things, whether it be generative AI or not, for the record. Those strategies have to be deeply enshrined and connected to how companies operate. And you have said often that most AI strategies fail, quote, because they're disconnected from how companies actually operate.
And Dale talks about this quite a bit as well. Give me, like, a concrete example of what you mean by that, and how we actually solve that problem, because I agree. People now are like, oh, I'm just going to bring in some AI to fix this problem. Well, yeah.
What's the problem? What are the results? Like what are you doing? Yeah, well, well, let me, let me frame it up and then I'll give an example on that.
So what, where that statement originated from is when we, we were, we were conducting some research on what makes organizations generative AI ready. And two, two main themes emerged. There is, there's a concept of a, a high level of generative AI readiness. So these are across categories like, you know, is your generative AI strategy aligned to your enterprise strategy?
Sounds simple enough, but probably quite hard to, quite hard to do. You know, is, is end-user proficiency where it should be? Is there a level of collaboration that's in place? Do you have characteristics of scalability and adaptability?
Mm. I, I, I'm, I'm going to interrupt you for one second. What you just said is key. Do you have characteristics of scalability and adaptability?
You can't just bring in AI and expect it to fix something if you are the very red tape, very structured, it has to be exactly this way and we never adapt from anything organization. And we've seen this time and time again. We had a client who is very rigid, but let's bring an AI to fix everything. Well, no, because now you're going to complain, like you're, you're too rigid.
Can you adapt? Can you be scalable? Can you be flexible? What's your appetite for things going wrong?
The mindset that people have, and I'm curious your thought on this, we see it all the time of, I'm going to bring AI in and it's going to fix everything. Listen, dude, I don't know what to tell you, but unless you know what you're trying to fix, how you're trying to fix it, how to prompt, what the outcome is, what the input should be, you're just going to replicate bad and you're going to make it 100 times worse. Well, and this is where, I'm going to answer your question, but I'll say that within that level of readiness, there's high variance in how organizations show up. We call them personas; there were four or five or six, it depends, you know, you had some different ideas, but let's just say five personas within each of those categories.
And to that example of the risk-averse group that may be very process-oriented, very rigid: there's a line of sight for these different types of organizations as to where you would be more likely to generate value from generative AI. But you just have to be careful about where you make those investments, right? So, in that example you provided, you wouldn't want to bring in, or experiment with, generative AI solutions that are rapidly transforming the way you're doing business.
What you may want to do is leverage that rigidity and that governance to come up with some, you know, very well-defined processes and think of some safe plays that you can put into place that are more augmenting the people. So, so you can be judicious there. And so that's, to the quote that you had shared, when, when you're talking about the, the, the question that, that you just asked in terms of, well, what does it mean if you just plug something on top of what you're doing? Those, those personas give you a starting point in terms of saying, like, okay, let's, let's maybe not play here and let's play here.
It, it gives you a directional roadmap for now. But for every company, it's taking that step back and either going into the business and having your people define processes, or if you, if organizations do have more centralized, um, either centers of excellence or generative AI functions and they want that to come from the top, they have to be very clear of what it is they're solving for first. Now, the challenge that those groups will have is that, as you said, people in the group won't know what they are, they won't know how to use them. They have to not just engage their people in the ideation, they have to engage them in the actual development of, well, or identification of where those pain points are and development and testing of the solutions.
Right? Because then, not only have you figured out if there's a, there's something worth solving, but then when you make that investment, your people know what it is. They know how to use it, they know how to think about the data outputs. They know, I should expect efficiencies in these ways, and now I can, I can improve the way I work in, in other ways.
And so, regardless of how they show up, this taking a step back, identifying problems at the, at the, the lowest point in the organization executionally or strategically, and then layering the solution on top of it, engaging the people that are going to use the solution or digest and act on the information is critical. Yeah, and, and if you get down to like the basics of defining the process, understanding where your challenges are, and then identifying what a human should do and what a machine should do, and then like it's almost a skill. Like there's a set of skills, there's this concept Anthropic just put out on not doing it from an AI agentic perspective, but doing it at a skill level, and they're starting to deploy these skills that agents can then pick up. Okay, I need to go do a prospecting skill.
That has a series of things that we do. Now, we're getting into, like, the micro-execution of a human, what a human would do, so that we can give the right instructions to the right part of the agent to go do the right thing. And that is just, it's very monotonous, to be honest with you. It gets down to, like, a minutiae level that I think people have always struggled with.
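The "skills" idea Dale references can be illustrated with a minimal sketch: each repeatable workflow (like prospecting) is packaged as a named skill with explicit, human-documented steps that an agent can pick up. Everything below, the `Skill` class, the registry, the keyword routing, is a hypothetical illustration, not Anthropic's actual Skills format or API; a real agent would let the model select and load skills.

```python
# Hypothetical sketch of packaging a documented workflow as a reusable "skill".
# Names and structure are invented for illustration only.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Skill:
    name: str
    description: str
    steps: list[str] = field(default_factory=list)  # the documented micro-process


REGISTRY: dict[str, Skill] = {}


def register(skill: Skill) -> None:
    REGISTRY[skill.name] = skill


def pick_skill(task: str) -> Skill | None:
    # Naive routing: a real agent would let the model choose a skill;
    # keyword matching keeps the sketch runnable and deterministic.
    for skill in REGISTRY.values():
        if skill.name in task.lower():
            return skill
    return None


register(Skill(
    name="prospecting",
    description="Build and qualify a list of target accounts",
    steps=[
        "Pull accounts matching the documented ideal customer profile",
        "Score each account against the agreed fit criteria",
        "Draft outreach for the top-scoring accounts",
    ],
))

chosen = pick_skill("run the prospecting workflow for Q3")
print(chosen.name, len(chosen.steps))  # prospecting 3
```

The point of the sketch is the division of labor Justin and Dale describe: humans document the minutiae (the `steps`), and the agent only selects and executes them.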
We've, we do go-to-market strategy and execution for many companies, forget about prior to AI. And it was hard enough for them to say, okay, what do you want to do first? How do we derive an ICP, an ideal customer profile? How do we derive a value proposition?
And all of those integral little pieces that you have to run and execute on. Like, it was hard enough to document it at that point. They're like, well, AI can just go do it now. So you're putting bad process into generative AI.
And what you're seeing is just a really shit process coming out much faster. So we're just getting worse process faster. Well, and I'll tell a story. I love what you said, Dale.
And I was working with a small organization, and a big part of what they wanted to improve was their ability to analyze geographic markets and understand if it was an attractive place for real estate investments. And in what way? You know, commercial, multi-family, office, mixed-use, whatever the asset class was. And the question came up of, well, could we just find a company that creates a market-analysis generative AI tool?
And the answer is yes, you could. And, and would it be wrong? Probably not for what the tool is trying to do, but it might be very wrong for what you're trying to do in terms of your organization. I think we all can appreciate that if you have decisions to be made about, even if let's say it is a real estate tool that you bought, but what makes an attractive investment could be very different for one organization to another.
You know, and if you're talk, like are you someone that does acquisitions or partners or, you know, you're looking to develop the asset. You know, there there are all these variables, plus the criteria of what makes something attractive could be different. And so if you went in and took the time and said, let's get our hands dirty in a nice private instance of an LLM, and let's start exploring how we can unpack market attractiveness. Take the time to sit down and document it.
Like, what does that mean? What are the drivers of market attractiveness for us? And you build a tool. Then you go and have that discussion with the company that can build an agent for you.
Now you know we've got a good fit. We still have the caveat. Let's, let's assume that this output is something that a really good analyst would provide. Let's still, let's still think about it.
Let's still add that, that element of decision-making, but you get something that's not only much closer to the right answer, uh, but your people know what it's deciding upon, and it just helps drive decisioning. And there, I I'd like to say there's a close. You're on mute, Dale. You shook your head and you talked, but we didn't hear you.
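The market-attractiveness exercise Justin just described, document your own drivers before buying a generic tool, can be sketched as a toy scoring model. The criteria, weights, and sample numbers below are all invented for illustration; they are not from the episode, and a different investor profile would document different ones.

```python
# Toy sketch of "document what market attractiveness means for US":
# one firm's hypothetical drivers, weighted, applied to two markets.

def market_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of normalized (0-1) market metrics."""
    total = sum(weights.values())
    return sum(metrics[k] * w for k, w in weights.items()) / total


# Hypothetical documented drivers for one investor profile:
weights = {"population_growth": 0.40, "rent_growth": 0.35, "low_vacancy": 0.25}

market_a = {"population_growth": 0.8, "rent_growth": 0.6, "low_vacancy": 0.7}
market_b = {"population_growth": 0.3, "rent_growth": 0.9, "low_vacancy": 0.5}

scores = {"A": market_score(market_a, weights), "B": market_score(market_b, weights)}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # ['A', 'B'] under these weights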
It was. It's still figuring it out. And my dog was barking. Yeah.
I was just going to say, for your listeners, there was maybe hope there'd be some close, and they did it, and they lost; they're still figuring it out. You know, but I think that first step of, like, hold on, you've got to think about it this way. And perhaps there's a time where that becomes less true, where the agent can also be the agent, and it can figure out all those things automatically. We're not at that time yet.
And once we get there, then we'll have to have another conversation because that's a whole, a whole another, a whole another dynamic. But we are still at the phase where that input and that, that understanding of what the process is, what the analysis looks like, is a critical predicate to then putting agentic solutions or any, any solution uh, together. So let's, let's shift a little. Let's talk about how you design, excuse me, the, the generative AI ready enterprise.
So you co-authored a white paper that was called Generative AI Readiness. Um, what does AI readiness actually look like? And we've talked about this a little bit, but if you were just going in, you're giving a high-level talk with a, you know, enterprise-level organization. Here are the four or five things you need to do to be ready for generative AI.
What are those? People buy from people. That's why companies who invest in meaningful connections win. The best part?
Gifting doesn't have to be expensive to drive results. Just thoughtful. Sendoso's intelligent gifting platform is designed to boost personalized engagement throughout the entire sales process. Trust me, I led sales for a Sendoso competitor, and I can tell you, no one does gifting better than Sendoso.
If you're looking for a proven way to win and retain more customers, visit sendoso.com. Yeah, let's start. I'll just do a process answer, then we can talk about characteristics if we want to go that way. So you need to understand where you are today.
And where you are today means, and it doesn't have to be with our diagnostic, of course, but taking a step back and saying: how well do we actually have a cohesive generative AI strategy? How good are our people, actually? Are they well-equipped, beyond, and it doesn't mean that they can query it, like, do they know how to apply an LLM in a business context? How equipped are your leaders to digest the information that's produced from an LLM and act upon it?
So that's the proficiency. There's the collaboration. How siloed are you? You know, do you have a track record of everybody keeping their own data and, you know, marketing not talking to IT, not talking to whomever.
So, boring as it sounds, that's a bad thing, just for your listeners. I'd say that'd be a low-readiness situation. The scalability and adaptability. You know, do you have some characteristics that, let's say as an example, when you've made tech investments in the past, or major investments in the past, does the enterprise follow through with them?
You know, is there value creation that typically comes from a change in the way of working? How good is your organization at change management? How clear are your governance processes? And it doesn't even have to be for AI, but just in general, are you good at setting up governance and then following it?
So starting with understanding that. Once you understand that, then you get that sense of what you can do today, what you can't do today. Then on what you can do today, that's where you follow that, that pathway of engaging your people all the way up to, to making an investment. What you can't do today, that's where you set the long-term roadmap of, hey, we, we score really, we're, we're a really poor collaborative organization.
We have a very good, well-defined generative AI strategy, but what do we need to do to break down those silos? And so you get very targeted and you say, let's change these things. That's, and so that's where the change management and transformation comes in, which is very, very hard uh, for an organization of one. That's hard.
You know, and it gets harder every, N plus one all the way up to whatever it is. Put those changes in place and then as you become more proficient, revisiting what you can do and how you can execute it. So that's from a process standpoint what, what that looks like. And, and so it's interesting because a lot of what you're talking about sounded very academic, right?
And so, you started your career as an academic researcher, so that makes a lot of sense. How does that academic research background shape the approach to a business strategy? Yeah, you know, and that was, I love the question, because what was a big learning curve for me was taking that academic principle and putting it together in something that's attainable. So the first part of that discussion, where I was describing what readiness looks like, you've got the win-now, win-later path.
You know, and then, and then it's about, it's about diagnosing and improving. That's a very academic answer and it's also a very hard thing to do. And so what, what I came to realize and what we realized in our group is that not only does it make more sense for our business model to have a more achievable outcome that doesn't require transformation, that isn't this like academic exercise. That's where we came up with, well, how does that manifest, how, how do those, how can we put a process and help organizations put a process in place that's more real?
Which is that idea of defining processes, empowering your team, having a clear way that you show if, if something works or not. How do you scale? How do you invest? That's where that, because that, if you unpack those processes, it, it brings in characteristics from that diagnostic into a very real-world pathway.
Right? So what we found is that larger organizations are, are perhaps better suited to going the more academic diagnostic thing first and really figuring out where the problems are and and fixing them. Smaller organizations or business units are better suited to starting with that, okay, well, what do we do tomorrow? Right.
And what do we do the day after that? So it, it's, it's a, it's a marrying between the two, and I I would say that consulting in general is afflicted by the same type of thing of, you know, leaving the deck behind, right? And you have a nice framework to think about something, but you don't actually have anything that you could, you could execute. So, it's, it's that it's overcoming that idea of, yes, it's great to talk about principles and, and, you know, academic exercises to illustrate points, but that can't be what you're offering a client.
Right. There has to be some meat on the bone. Yep. Totally agree.
And, and they, and they have to understand how that translates into metrics and profit. And that's why a lot of times where we see, we like to build like an AI charter kind of like what you're saying, like, what does it look like across everything? And then you take like a proof of concept, proof point, and say, okay, let's execute on this and see where we generate return on it. We end up going a lot in the go-to-market because it seems like that's a good place to get a return because you have high, high spend resources doing low-end tasks like research or prospecting, that kind of stuff.
Well, and I think that's the we I talk with clients a lot about fear. And, and what I see with organizations that are, we'll just say they're heavily reliant on front-office activities, or that's, you know, they're, their business model is very focused on being able to sell or, or whatever that looks like. Let's not, let's not worry about a direct solution that generates revenue, talking about generative AI. Let's just identify those things that are constraining your people from doing those high-value activities.
Let's make those more efficient. And then you have more time. And that's, it doesn't mean that it's easy, but it's something you can they can wrap their head around and they have a starting point. Yep.
To say, let's, let's have our people doing more of what they need to be doing. Yep. Totally agree. Go ahead, Dale.
What, you're not going to participate in this episode? I, it's funny to me because I feel like we talk about these things all the time. And people look at us sometimes like we have three heads. Um, and a lot of what you're saying, Justin, is, not only do I feel it's right.
Um, but it's validating. Um, and Dale, I know you probably feel that way more, more than I do because you, you typically are talking about this much more than I am. Um, where people just feel like, oh, come on in and start AI. And everything you're saying, I feel like we've been trying to, you know, pound into people for the better part of six, 12 months.
Um, and people just don't, um, don't get it. When you look at the future of AI, um, the future strategy, so things are changing, right? Where do you see, and I I might be putting ourselves out of business with this question. Um, where, where do you see the role of consultants, strategists, like how do we all evolve in a world where AI could generate insights instantly?
I'm curious your thoughts, then I'll give you mine, because I actually had a prospect say to me the other day, oh, well, I could just put all this in ChatGPT and get the same thing. Um, yeah. Good luck, but I'm curious your opinion on that. Yeah, I think, you know, outside of the idea of saying, let's try to convince that person that they might go wrong by doing that.
I think if consultants or professional services are honest with themselves, and I have some good data to back this up from speaking to clients of professional services firms, there is a very real threat. You know, it's at least in concept, right? You have this dynamic where you have very expensive engagements.
You have teams and pricing models that don't necessarily match the effort and deployment models that are possible. You have these new entrants coming in, and they aren't just boutique nobodies. They're constellations of people with really great consulting pedigrees. I think the popular thing now is the consulting monolith model, or something like that, where there's essentially an AI analyst on the bottom, or like an individual. And there's like one of those.
And then there's more of like an engagement manager than the senior person. And they're bringing this very dynamic, very quick, less expensive go-to-market product. Those aren't really hitting the big players yet, but clients are starting to expect that. And so what I expect will happen is that there'll be, there'll be a bifurcation where the organizations that can really differentiate on strategy.
And that is they have the expertise to come in and provide that, that, that experience and those insights that can be augmented by generative AI or brought to life by generative AI. They're going to continue to win, although perhaps at a lower, lower price point. And then you have organizations that are, let's say your scale players, you know, your larger consulting firms that do a bit of everything. I think the organizations that become really, really effective at orchestrating the, the business strategy, the transformation and the tech ecosystem in this collaborative market are going to be the winners in that space.
And they're going to take a lot of that business. And then you have perhaps people like us that are coming in. I think we'll have the same, we have a bit of an advantage in that, you know, we're small and can go in. And I think, I don't know if that means we can crack the Fortune 500, but perhaps we can start cracking business units at large companies.
Do we want to crack the Fortune 500? Do you want to? No, no, that's a whole separate conversation as well. But it's going to fragment the market. There's going to be some really big winners and some names that persist.
And then there's going to be a lot of new challengers, either three-, four-, five-person shops or, you know, 100-, 150-, 200-person shops that have a lot of expertise. And it's probably going to get more and more specialized in terms of what clients are expecting. Yeah, I fundamentally agree. I think one of the things that we have said from the beginning, before AI was really big, right?
So we're, we're coming up on our three-year birthday, so to speak. Um, we often get asked, well, how many hours do I get for X, Y, Z, 000 per month? And my answer is always the same, like we don't trade time for money, we trade outcomes for money. Um, and we have become very AI forward, AI focused.
And, listen, I'll give you an example. We have a client that's never hired a BDR before, never managed a BDR before. Um, and I put together, Dale, keep me honest, a fairly complex BDR onboarding program, complete with, like, the workbook, the calendar, the 18 resources you're going to need, the certifications, etc. This historically would have been a massive deliverable, um, in a sprint that would have taken me ten hours a week for four weeks to build.
Um, it literally took me, if I'm being generous, three hours. Um, a lot of back and forth, but very, bless you. A very AI-forward approach. And certainly, you know, people will say, well, I'm going to pay you the same amount of money to do that in four hours as four weeks.
You should be thrilled to pay me to do it in four hours, because it means I could go do something else for you. Um, so I think when they can do something, they can start advancing whatever they're trying to do. Right? There's a value, and that's that speed.
100%. So I think that, you know, there are consultants and agencies and clients who are not leveraging AI the right way, which, to be clear, is not going to ChatGPT or Claude and saying write me a cold email sequence, um, or using some bullshit prompt. Um, that is where I see the future going. And do I think people are going to be able to use AI to replace all of us? No, because you still need the execution.
Um, you still need to be able to operationalize it. Like, great, you get this output from Claude. Well, what the hell are you going to do with it? Um, and that's where the operators come in.
Historically, we've seen people who have worked with other consultants, who have been handed a pretty slide deck that's been garbage, and who then come to us and ask us to fix it. Um, and I'm happy to take that business all day long, by the way. But what I think we're going to see more and more now is people who say, I tried to do this with ChatGPT and I couldn't figure it out, and now I need your help. Um, and I think part of the business is going to shift to showing them how to use AI the right way.
But it's all strategic thinking. Like the whole thing, the iteration back and forth. What I'm realizing is we need a mindset shift as individuals: I interact with the machine as I would interact with a human being, to get the answers I would want out of a human being. It's not, create me this thing, and then go take it.
It's, create me this thing, and then look at it and say, okay, this is good, this is not good, this needs modifying, and iterate until, from your perspective and with the knowledge you have, you feel comfortable with it. Well, I think the great analogy is that everybody has questions, or everyone has problems. Not everyone has good questions. And so I think it's the consultancies, the advisors, whomever, even the individual in a group within a business, who knows how to ask the right questions, and, at least at this stage, as we were talking about before, knows how to engage tools to help answer those questions.
Yep. And that's not such a simple process. And I think, Adam, with the idea of using ChatGPT, you know, you get that deluge of information, what I would call low-confidence information. That's not going to lead to more action; that's going to lead to more confusion.
Yeah, 100%. Because think about it: you have eight different people in the group ask a question like, what's a go-to-market strategy for X, Y, and Z? Same model, maybe even worded the same way. They'll get different things, they'll have different opinions.
And is it better to have eight different perspectives on a go-to-market strategy? So there's value in precision, in asking and answering the right questions, and in using generative AI as a tool to help facilitate more rapid, more accurate answers to those questions. Yeah, 100%. And so the firms that can do that, I think, are going to win, or at least win insofar as we can predict how any of this is going to play out in the coming years.
It's not going away. It's just going to be different. Yeah. It's going to be different.
It's going to be an interesting ride for sure. Um, with that, we are almost at time. Let's dive into some rapid fire. Uh, here are the rules: ten words or less.
Um, otherwise, for every word that you go over, Dale owes me $10. Um, so feel free to talk as long as you want. Adam just loves the rules. He never follows them. I am a rule-driven person.
It's black and white. You're a rule follower, but I am not a rule follower by any stretch of the imagination. Um, if I go under, you know, does Adam have to pay Dale $10? There you go.
I mean, typically. Um, what's the one KPI that you see executives over-index on? Revenue per FTE. Hmm.
Interesting. I like that one.
That's a new one. I mean, it's not a new one; it's new that you're the one saying it's overhyped. But, um, it'll be interesting to see how that plays out. I wouldn't say it's overhyped; I would say it's very useful but applied indiscriminately. Yeah.
Fair point. So it has to be thought about in the right context. What is your go-to AI productivity tool today? It's nothing sexy.
I like Grok. Oh, okay. I know it's not what you might have been hoping for, but it's an application that I've been particularly satisfied with. For your listeners, I use it a lot because it doesn't do as good a job generating, you know, beautiful text, but it's very good at collating information.
Hmm. So the Grok markdown table is very good if you have content that you want to organize and put into Excel or something like that. It does a nice job with that. Good to know.
I'll have to try it. What's the most overused word in AI right now? You took our jobs. I was going to say agent.
Or, you did. But yeah, another way I could think about that is artificial general intelligence. And I think that's damping down a little of late, if we're being real-time here. But to me, and we alluded to it a bit earlier, predicting what to do in an environment like that is incomprehensible, I think. So you almost have to pretend that something like that won't happen anytime soon. At least that's my approach to it.
Um, one piece of advice for leaders chasing AI transformation too fast. Slow down. Think about outcomes. I like it.
Love it. Very important. I'm actually going to steal that. But I'll say this: start.
So you said too fast, but if people haven't started, perhaps they're chasing it too slowly. Yep. Start, but keep it deliberate. And focus on outcomes.
Love it. I'm going to steal Dale's role to wrap up the show. I'm going to play Dale for a minute. Last one, Justin.
Favorite or dream vacation destination? I couldn't even get it right. You can't even do it right. Yeah.
Yeah, I think that was easy. So, uh, I think Croatia and the Croatian islands. We were going to go there for my honeymoon, but COVID got in the way of that. Uh, and now we have kids.
So I don't know what the timeline is for getting back there, but it's something my wife and I have on our radar. Hopefully, you know, at some point we can get back there. It's supposed to be beautiful.
We have friends who just went. We were supposed to go, but we didn't either, for various reasons. Um, Justin Trumbold, thank you so much for joining the show. Enjoy Croatia when you get there.
Thanks for all the great takes. Thank you so much, guys. I appreciate it. Hey, Justin.
Take care.