Why Tool Fragmentation Is Killing Your Go-To-Market Execution | Sayanta Ghosh
Sayanta Ghosh
The RevOps landscape has become a chaotic mix of tool fragmentation and Gen AI FOMO. Sayanta Ghosh points out that teams are drowning in tools but starving for execution outcomes. Instead of trying to fix everything at once or throwing AI at broken processes, teams need to focus on specific motions and gain data clarity before deploying any automation. The distinction between AI workflows and AI agents is critical for modern GTM execution. Workflows are deterministic, making them ideal for repetitive tasks like account planning that interact with sacred systems of record. Agents, meanwhile, excel at retrieving data and making contextual decisions, but they lack the predictability required to be left fully unchecked in a CRM. Ultimately, a fully autonomous revenue stack isn't the immediate goal. Strategic thinking, design, and business direction remain firmly in human hands, while AI excels at gathering breadcrumbs of information and executing small steps. The future belongs to smaller, highly connected teams that collaborate seamlessly with AI to execute at scale.
Discussed in this episode
- How Gen AI exacerbated an already crowded RevOps tool stack, creating intense FOMO.
- Why fixing broken data structures is a prerequisite before applying AI solutions.
- The critical difference between probabilistic AI agents and deterministic AI workflows.
- Why deterministic workflows are much safer for interacting with sacred systems of record.
- Using AI for signal-based selling to identify patterns and track competitor engagement.
- The importance of human strategic oversight in GTM versus handing the keys to AI.
- How AI excels at collecting and analyzing massive amounts of fragmented information breadcrumbs.
- Strategies for RevOps leaders to prepare for the future by learning from diverse AI implementations.
Episode highlights
- Introduction and the current state of RevOps
- The FOMO and chaos caused by Gen AI
- Data points are a mess above $10M scale
- Managing tool fragmentation as a RevOps leader
- Workflows vs. Agents: Understanding determinism
- Balancing human strategy and AI execution
- Bridging product capabilities and customer expectations
- The future of the autonomous revenue stack
Key takeaways
- Gen AI increases tool noise, complicating RevOps orchestration.
- AI cannot magically fix messy or unstructured CRM data.
- Use deterministic workflows for repetitive actions in CRMs.
- Deploy AI agents for data retrieval and broad analysis.
- Human strategy drives execution while AI collects breadcrumbs.
Transcript
Pre Gen AI, let's consider it like pre-2022. Uh, even by that time the tool stacks were getting crowded, right? Uh, people had 20, 25 tools kind of working together. I mean, not working together, actually.
We're going to talk about the future of RevOps, why dashboards aren't enough, and what it really takes to truly build an autonomous revenue stack. The data points have always been a mess, right? Any organization which has kind of scaled to anywhere more than $10 million.
I mean, the data is all over the place. Welcome back to another episode of the Bridge the Gap podcast, powered by Revenue Reimagined. Happy New Year, first episode of 2026. We have an amazing guest with us today.
We have Sayanta Ghosh, who is the co-founder and CEO of Nrev, where he's building AI agents to automate RevOps end-to-end. He's lived the RevOps problem from every angle: founder, product leader, operator. And he's now focused on eliminating all of the fragmented tools, you know, the 27 tools you have to log into to run your RevOps, and the manual workflows that slow GTM teams down. We're going to talk about the future of RevOps, why dashboards aren't enough.
I know you all love your pretty dashboards, but dashboards are not RevOps. And how AI agents change execution, and what it really takes to truly build, I'm going to use Dale's favorite word, an autonomous revenue stack. Sayanta, welcome to the show, man. Thank you so much.
Thanks, Dale, for having me here. And happy New Year, of course. Yeah, happy New Year. Happy New Year.
So, um, the first segment we kind of want to jump into is why RevOps is still broken. And you said RevOps teams are drowning in tools, but starving for outcomes. Where does the system actually break down? Yeah, this is a very broad question, Dale.
I think I'll take a little bit of a fundamental approach, um, and then kind of look into a few anecdotes that I've come across. Um, I think pre Gen AI, let's consider it like pre-2022. Uh, even by that time, the tool stacks were getting crowded, right?
Uh, people had 20, 25 tools kind of working together, I mean, not working together, actually, uh, working on different problems and motions. But as Gen AI came in, the noise became louder, uh, and more chaotic, right? There's so much shouting about, you know, you should do this this way, there's so much about, you know, this motion, uh, that channel, you know, so many playbooks, etcetera, coming in, that it's getting really convoluted, uh, for a RevOps person to really put their minds around. Um, from my conversations, and it's probably been a few hundred at least with RevOps professionals, I think that's the chaos and FOMO that they are living in.
And I'm, I'm not saying that it's only Rev Ops. I mean, I work with engineers as well, right? I mean, they're probably the people under maximum FOMO. Every day, something or the other is changing in their lives.
Uh, but I think curiosity and awareness are two very important things in today's life. Uh, so it's the same case for Rev Ops. Uh, there is a lot of confusion. There is a lot of hyper information about everything that's there.
Uh, and yeah, ultimately making things work together, the term that we often use a lot is orchestration, becomes even more difficult with so many different systems in place. And on that part, where are you seeing the most common failure points between sales, marketing, CS? Is it the CRM? Is it the integrations?
Is it the data? Like, where are you seeing the common failure points happening? Yeah, look, the data points have always been a mess, right? Any organization which has kind of scaled to anywhere more than $10 million, I mean, the data is all over the place.
And unless you have a very sturdy plan in place, AI is not going to solve that. I mean, AI is not a magic wand, necessarily, right? So, uh, "AI just fixes everything," really? You can't clean your data, you have to fix it. I wish it could, Adam. Uh, honestly, everyone's data is shit, my friends. No, I mean, we talk about AI as if AGI is here.
Uh, but it's unfortunately not, right? But, um, yeah, the point I was making was, uh, I think there's an importance of clarity and focus there. Um, there've been some brilliant operators we've been, uh, you know, working with, uh, where they know what to focus on, and that really solves a lot of gaps. Uh, to have that clarity: okay, at least for this quarter, our goal is to focus on, let's say, five different motions and figure out what's failing and what's working for us. I think that clarity makes a lot of, uh, difference.
Yeah, that's a good perspective on it. I think the clarity, and the niching down into trying to solve one thing or two things really well, versus "I'm going to try to solve this entire process" and breaking the whole thing. Yeah, yeah. And 99% of the time, that's what we see.
Correct. Correct. And I think one thing that's changed in the AI world is, let's say, I'll just give a random example. Let's say, signal-based selling, right?
Uh, there are so many different nuances to that, and which of the many different implementations could work for your company is very different, and important to understand at any point in time. You know, uh, we've probably seen like 20 different types of implementations of just signal-based selling, right? Um, and it's very important to iterate, because there's so much flexibility available with AI that you can iterate and figure out what process works and what gives you the most outcome.
So, let let's piggyback on that a second, right? There's a lot of variations of signal-based selling, there's a lot of tools out there, there's a lot of agents out there, and we'll talk about agents in a second. But, you know, you said you've spoken to hundreds of RevOps professionals. I'm curious, what's the one RevOps task today that still feels so absurdly manual given all the advanced tools that we have that can automate some things?
I'm probably going to name, uh, a very fundamental task, which is probably managing all your tools. I mean, when the CRO comes and asks how many tools do we have working on this problem and how do they work together? I think every Rev Ops person just goes into stuttering. But honestly, this is a very nuanced question.
So if we look at a very large company, and I'm not as habituated to enterprises as I am to, let's say, PLG companies, as well as early and mid-market stage companies. Uh, so I think the processes kind of change a lot within these. So I think there's a more nuanced take to it. So in PLG, it would probably be defining the, you know, funnel, defining the steps, that raises the most critical questions for the, uh, RevOps people.
Whereas when we look at mid-market, I mean, it's that difficult transition between thinking about where the forecasting is important versus, I mean, we should still focus on that lead bucket that we have, and how do we not make it leaky. Uh, so, yeah, it's a very funda, I mean, very broad question, but I think the most common thing is when you're asked: how much are we spending on tools, how many tools do we have, and what is working together, and how? I, uh, in a previous life, I was brought on to a company as CRO, and that was one of my first questions, right? How many tools do we have doing various different things?
No exaggeration, the sales reps had 28 tools that they logged into to do various different things. It was mind-blowing to me. Um, there is no reason for that. And tell me, Adam, when the CFO steps in and then asks you to cut down on tools, what's your strategy for that?
You got to really look at what I mean, then it was all manual. Like what tool does what? How many who's logging into what tool? What's the frequency?
What's, you know, the weekly average usage? How are we using the tools? What value? What's the overlap?
Yeah, where's the overlap? Like, it was not a fun process, um, by any stretch of the imagination. Because the problem is, let's just say, I'll use data providers for an example, right? Let's say for some reason, someone bought both ZoomInfo and Cognism.
Well, great. Half your team likes one, half your team likes the other, half the team hates one. Who do you piss off? Um, that's why these decisions can't be made in a bubble.
Like, I find all too often tooling decisions are typically made by RevOps with no input from anyone else. Um, my wife leads CS at an organization, and they have a RevOps leader who decided to buy a tool, um, against what she and the sales leader wanted, because this is what the RevOps leader said was going to be the better tool. And lo and behold, nine months later, what tool do you think is getting ripped out? There's the cost of the loss; there's so much money and time that was wasted.
And listen, everything shouldn't be decided by committee. Um, but you have to look at this from a different point of view. Correct. All right, let's shift gears.
Let's talk a little bit about, um, I know this is one of Dale's favorite topics, um, automation versus AI agents. A lot of folks that we talk to are like, oh, you know, we're agentic. We have agentic AI, and, you know, we've built this great AI agent.
And what they've really built is a couple of workflows, um, you know, that maybe are linked together, maybe aren't. But you're building AI agents versus AI workflows. What's the difference and why does it matter? Sorry, and first, let me clarify, Adam, uh, we're somewhere in the middle, we try to take the best out of both worlds.
Uh, but I'll tell you where the pros and cons, uh, kind of stand out: what agents are good at versus what automations, or workflows as we can interchangeably call them, are good at, right? Um, and I think one thing, uh, that's very important to mention here is that, um, before 2025, workflows and automations were very difficult to set up, but what happened in 2025 was that AI became very deterministic. That means if you asked "what is 2+2" to an LLM before 2025, it could have said "four," it could have said "4.0," it could have said "4."
Right? What happened in 2025 is that that predictability, a million out of a million times, uh, became accessible for all of us. And what that enabled us to do was to pass on the output of one of the LLMs to the next, and kind of string them together to perform a few steps of action that could complete a meaningful task. Right?
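The chaining Sayanta describes, one LLM's structured output becoming the next step's input, can be sketched minimally like this. The `llm_extract` and `llm_summarize` functions are invented stubs standing in for real model calls, not any actual API:

```python
# Sketch of the workflow pattern: each step is a plain function whose
# structured output feeds the next, so the pipeline behaves the same way on
# every run. llm_extract/llm_summarize are invented stubs, not a real API.

def llm_extract(raw_note: str) -> dict:
    """Stand-in for an LLM call that returns structured (e.g. JSON-mode) output."""
    account, signal = raw_note.split(":", 1)
    # In a real workflow, the model's JSON would be validated against a schema
    # here, so a malformed response fails loudly instead of propagating.
    return {"account": account.strip(), "signal": signal.strip()}

def llm_summarize(record: dict) -> str:
    """Stand-in for a second LLM call that consumes the first step's output."""
    return f"{record['account']} -> {record['signal']}"

def run_workflow(raw_note: str) -> str:
    # Deterministic hand-off: step N's validated output is step N+1's input.
    return llm_summarize(llm_extract(raw_note))

print(run_workflow("Acme Corp: competitor engaged on LinkedIn"))
```

The key design point is that the hand-offs are explicit function boundaries, which is what makes the pipeline auditable and repeatable in the way the conversation contrasts with agents.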
Uh, so that gave rise to, I think, most of, uh, you know, how workflows became really powerful. Um, agents, on the other hand, are basically things that take decisions as they go, and of course, they have certain, you know, tools and, um, what can I say, like certain capabilities in their power or skill sets in their power to, and they have the discretion of using it whenever they want. Right? So, keeping that in mind, I think when we talk about deterministic, right?
Um, so the most obvious thing is that you wouldn't want your sacred systems of record to be touched by anything that is not deterministic. Right? It could go and just delete records, for that matter, at any point in time, given whatever task. And I'm just saying that this is not very easily perceivable, because you don't typically talk to an LLM a million times, but when automations run, it typically happens hundreds of thousands or millions of times.
And then it can make errors at volumes. Right? So that is the biggest con, I think, of agentic, uh, systems. Whereas, on the other hand, like automations or workflows are a little more complex to set up.
Um, and, what I would say, they require a little bit of technical acumen to kind of understand how these handoffs are happening. On the other hand, agents kind of manage these handoffs themselves. Right? So those are, I think, typically the pros and cons, and where we need determinism is where you want to do things repeatedly.
Right? So, for example, if you want to have an account plan for every account that's ready to be renewed in quarter three, let's say, you would want it to be in a structure that is followed for every account. That's how you'll be able to compare and drive a more meaningful, actionable plan accordingly. But when you want to quickly retrieve data from a system and try to analyze a few data points, I'm not saying thousands of data points, I mean, agents work much better.
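That fixed account-plan structure could be enforced with a simple schema check, so an LLM-drafted plan is rejected rather than written incomplete into the system of record. This is only an illustrative sketch; the field names are invented, not Nrev's actual schema:

```python
# Sketch: force every renewal account plan into one structure so plans stay
# comparable across accounts. Field names are invented for illustration.
from dataclasses import dataclass, fields

@dataclass
class AccountPlan:
    account: str
    renewal_quarter: str
    health_score: float
    risks: list
    next_steps: list

def validate_plan(data: dict) -> AccountPlan:
    required = {f.name for f in fields(AccountPlan)}
    missing = required - data.keys()
    if missing:
        # Reject any output that skips a section, instead of silently writing
        # an incomplete plan into the system of record.
        raise ValueError(f"plan missing fields: {sorted(missing)}")
    return AccountPlan(**{k: data[k] for k in required})

plan = validate_plan({
    "account": "Acme Corp",
    "renewal_quarter": "Q3",
    "health_score": 0.72,
    "risks": ["champion left"],
    "next_steps": ["schedule exec review"],
})
print(plan.account, plan.renewal_quarter)
```

A check like this is what lets you compare plans account-to-account, which is the point being made about determinism for repeated tasks.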
Um, so, yeah, I mean, it's a discretionary matter, um, and, yeah, I think you have to use agents versus workflows for the different things they're strong at. So, Dale, did you have something? No, good, good. So, I think that makes total sense, right?
There has to be a differentiation. Um, I do agree with that technical expertise point. Like, it's funny. I consider myself a very tech-savvy person, like, when it comes to normal software.
I will say, in full transparency, when it comes to our workflows and our agents, that is all Dale's expertise. Fun fact, he used to be a coder. Um, but I don't have that expertise. Could I learn it?
Yeah, if I took the time. I think a lot of people don't take the time to learn it, and they wind up building something that breaks. I'm a huge fan of vibe coding. I think tools like Bolt are awesome.
The problem you have is everyone thinks, oh, I'll build this app in Bolt, which is great. The app works great. Then you ask Bolt to do all the integrations. Then one thing changes on the platform you're integrating to and your integration's screwed.
Now nothing's working. And because you don't have technical expertise, you don't know how to go and fix it. Um, and that's the problem with people who are building these, and I use the term very loosely, SaaS products that they're selling commercially, and then it stops working for the customer. Um, so very good call out there.
Tying that, um, to what it looks like, um, we'll call it agents or autonomy. Let's use the term autonomy. Where do you think it's important to draw the line between what I'll call human judgment versus agents or workflow autonomy specific to RevOps? Yeah, I think, uh, and I'll try to break this down a little bit because, uh, yeah, so let's say strategies for that matter, right?
What could work for us, how could it work for us? Um, those two questions need to be answered by, uh, humans. Um, I at least don't see that happening in the next two or three years that AI drives the the decisions around what will work for a business and how it should be implemented because then you're basically just handing over the business to AI, which could probably do it better than you if AI becomes that capable. So I think strategically, uh, humans do need to take those measured decisions.
AI could definitely, uh, you know, look at patterns and figure out what will probably suit you, and recommend things much better, because as humans, we don't have as much memory as, let's say, an LLM would have. Right? Uh, so True. Yeah.
Kind of analyzing that data and telling you where your faults are, probably, uh, you know, give that to AI, but only take it as, I mean, just recommendations. And then, of course, use your expertise to decide where to go in. So I totally believe that even when we're talking about autonomous GTM, we're still saying that the power of strategic thinking, I think, at least, is in the hands of humans. But then when we talk about creating those automations, um, I think that could be taken over by AI to a large extent. Maybe we're still not there at the beginning of 2026, but it'll definitely come in.
I have a lovely anecdote on this, actually. Uh, I remember this one, uh, customer, they lost a deal, uh, to one of their competitors, and then they noticed that, uh, this, like basically, this competitor was engaging with this customer for a long while. It had been like four or five months that they literally saw this person commenting on their posts, etcetera, etcetera. Right?
And then they thought: why can we not start tracking this automatically, which will not just give us an idea of which competitors are going to steal our deals, but also give us the opportunity to look into their pipeline. Yeah. Right? I mean, it kind of is such a phenomenal thought, but it comes from humans actually first figuring out, from that first-hand experience, what can help where.
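The competitor-tracking idea in this anecdote could be sketched roughly as follows. The event shape and the three-month threshold are assumptions for illustration, not the customer's actual implementation:

```python
# Rough sketch: scan a feed of engagement events from social monitoring and
# flag accounts where one competitor has been engaging for months. The event
# tuple shape and the default threshold are invented for illustration.
from collections import defaultdict

def flag_at_risk(events, min_months=3):
    """events: (account, competitor, month) tuples from social monitoring."""
    seen = defaultdict(set)
    for account, competitor, month in events:
        seen[(account, competitor)].add(month)
    # An account is "at risk" once one competitor shows sustained engagement.
    return sorted({acct for (acct, _), months in seen.items()
                   if len(months) >= min_months})

events = [
    ("Acme", "RivalCo", "2025-09"),
    ("Acme", "RivalCo", "2025-10"),
    ("Acme", "RivalCo", "2025-11"),
    ("Globex", "RivalCo", "2025-11"),
]
print(flag_at_risk(events))  # only Acme shows sustained engagement
```

The strategy, which signal to watch and why, is the human part; a small deterministic pass like this is the part that can run in the background.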
And then once you have that idea or that strategy in mind, then probably the systems will become powerful enough to kind of handle those and create automations around it. People buy from people. That's why companies who invest in meaningful connections win. The best part, gifting doesn't have to be expensive to drive results.
Just thoughtful. Sendoso's intelligent gifting platform is designed to boost personalized engagement throughout the entire sales process. Trust me, I led sales for a Sendoso competitor, and I can tell you, no one does gifting better than Sendoso. If you're looking for a proven way to win and retain more customers, visit sendoso.com.
That's really cool. And I do think people that have, um, business knowledge plus the ability to have design thinking, because you're really talking about design thinking when you're talking about vibe coding or iterations through this. Like, I just find that the iterations are happening much faster, because if you have the business knowledge and you can think in structure, design structure, you can actually iterate much more quickly through it.
Um, and then you find new applications that you can automate. You're like, I wish I could automate that all the time, because I think about this whenever I'm starting to do work. I'm like, it'd be great to take whatever I'm trying to accomplish, and because we've been doing this now for three years, there's just things that we do over and over and over again. One of them is something like go-to-market foundations, where you're developing ideal customer profiles and buying personas and value propositions. Like, we're doing that all the time; that could take somewhere between four to 12 weeks depending on the amount of information you need to gather and the customers you're bringing in, so why not enable the AI to do all that startup work?
So then you're just iterating through it, and you're not starting from scratch, because I find the place people have the most challenge is actually just getting started. Like, on an idea, on a concept. So if AI can help you conceptualize it, then I think you're moving in a good direction. Um, Correct.
I think, Dale, that's a very good point. I mean, when we present and discuss these strategies that are working, from one customer to another, and have that, you know, closed-loop forum of those customers discussing, you know, what is working for them. One thing that, you know, comes up very often as an aha moment is that everyone says AI is really good at collecting breadcrumbs of information, right? It's almost impossible to go in yourself and collect a thousand different small pieces of information.
But AI is really good at doing these small tasks of collecting these sources of information, tediously going through every single site. Let's say, just one example that comes to my mind: there was this person who was trying to look at, you know, every single Y Combinator startup funded in the last five years, let's say, and whether there were any commonalities in the keywords each of them mentioned. Right? So, doing those small tasks across this large volume, uh, is something that AI has done really well.
And then you can come together and get, you know, analyze patterns from it. So it's basically breaking down those tasks into small parts and then kind of, yeah. Yeah, yeah. I love that.
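The breadcrumbs pattern, many tiny collection tasks feeding one aggregate analysis, can be sketched like this. The sample descriptions are made up, and a real run would fetch each startup page individually (the tedious per-site step AI handles well):

```python
# Sketch of the "breadcrumbs" pattern: one small task per document, then a
# single aggregate pass to surface common keywords. Descriptions are made up.
from collections import Counter

descriptions = [
    "AI agents for revenue operations teams",
    "AI workflow automation for sales teams",
    "Developer tools for AI agents",
]

def keyword_counts(docs):
    counts = Counter()
    for doc in docs:  # one small collection task per document
        counts.update(w.lower() for w in doc.split() if len(w) >= 2)
    return counts

# Keywords appearing in at least two descriptions, most frequent first.
common = [w for w, n in keyword_counts(descriptions).most_common() if n >= 2]
print(common)
```

A real pipeline would also filter stopwords like "for"; the point here is only the shape: break the work into per-document pieces, then analyze the merged result.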
And I think that's really smart. Um, let's switch up a little bit and get some product lessons from from what we like to call the trenches. Um, so you've built products at startups at scale. Um, how has that shaped and how do you think about building those types of technologies for GTM teams?
Um, good question. This is really close to my heart. Like, uh, um, we started Nrev exactly three years back, and that's exactly the time GPT launched. Uh, I mean, December 2022, I think, is when GPT launched.
Um, and, uh, or December 2021, I'm just confusing myself. But anyhow, um, that was the exact time when we started out, right? And we were probably one of the most, I don't know whether fortunate or unfortunate, but we were kind of transitioning to a different kind of a world altogether, right? Uh, so the kind of problems that people were looking to address might have still remained the same, but the way of approaching it kind of dramatically shifted.
It was like a tectonic shift there, right? So I actually went through, um, you know, building something enterprise-grade, uh, which hundreds of thousands of users from just one company were using, in a more, um, what should I say, programmatic manner, if I could put it that way, like the way software works. Suddenly people were expecting much more from software: suddenly taking decisions, suddenly making sense out of unstructured data, as we call it. Um, you know, these kinds of things started coming in. And even before AI had the capabilities of doing things, people's expectations, of course, were hitting the roof.
Right? And there was so much hype about it. Uh, so it's actually very difficult, wearing the product hat is also difficult, in trying to bridge the gap, and yes, I'm saying "bridge the gap," between, uh, I was like, I like it. Perfect.
Yeah, customers and technology, right? And it's so difficult when the customer sentiments have crossed a certain huge threshold, whereas technology hasn't really caught up, uh, as much. Uh, so yeah, it's been a challenging journey, but a few learnings, I think. Um, again, I think it's just first-principles thinking, but one learning has always been that the initial days were about educating customers that AI will never be 100% correct. You have to know that it's going to do 80, 85% of your tasks, maybe.
Maybe you can break it down smaller, pay more, and get more efficient AI, uh, to do tasks. Uh, but it's not going to do more than that; think of it as employing 100 interns, and even if you employ 100 interns, I'm sure they're going to make mistakes. But that was a big challenge, kind of educating customers as to how products were evolving. And, um, of course, the second was, you know, us understanding the nuances of LLMs.
I remember kind of going back to my, you know, university textbooks to understand a lot of fundamentals of LLMs, and how things would hand over to each other, I mean, how data got handed over from one to another, uh, things of that nature. So, yeah, I think it's been a very interesting last three years for, uh, all of us, uh, right? No matter which role you are in.
But yeah, product has been especially difficult, for sure. Well, and I think, if we double-click on that a little bit from a product-human perspective, one of the things I'm realizing as I go through this process is: what's the ability for a human to actually interact with a bot, an agent, the computer, the LLM, you know, in a way that they would communicate with a human being? So I think there's a mindset shift that needs to happen, because we have this block. Like, if I talk to the LLM, I have to talk to it in a certain way.
And it's like, no, you just ask it questions like you would ask a human being, and it'll come back and give you responses, and then you go back and forth. I think we've gotten programmed into this place, probably from the start of ChatGPT: you ask it a random question, you just get a response, and you just move on. That's not the way it works if you want to be productive and successful in this motion. You actually have to go back and forth with it, have a conversation.
Like, you're still strategically thinking, it's just that you're not strategically thinking with a human being, you're strategically thinking with the LLM, and then it becomes a much better motion for you. So, yeah, that's kind of where I've seen some of the challenges for people. That's absolutely right.
I think, um, I honestly feel that there's a lot more to LLMs than just the conversations we can have. Um, you know, once we get an idea of how LLMs can help us in smaller tasks, then we kind of have to go a little deeper to understand, you know, what are the infrastructure-level things we can set up on an LLM, where it's not just one LLM, but multiple, uh, LLMs passing on information as they collect it, from one to another, and then kind of, uh, becoming more productive on that aspect, right? Um, so just a single LLM might not be the best thing to converse with, but I do agree, at least our perception has changed, in the way we used to feel LLMs would respond versus where we are now. Yeah, yeah.
Uh, totally agreed. So, taking that to what the future looks like. So, maybe not the next six months, maybe not even the next year, but then in the next three to five years, do we get to a fully autonomous revenue stack? And if so, what does that look like?
I don't think so, Adam. Uh, I mean, as I mentioned, uh, if there comes a time when there is a completely autonomous, um, you know, AI stack, then you're basically handing your business over to AI. I mean, where are you as a human fitting into that business, right? Um, I honestly feel that, you know, while again, AI is good at these small tasks, at a strategic level it definitely needs a human to kind of come in and put in their expertise.
Um, where I kind of see GTM teams at the end of 2027, 2028, and I'm drawing a closer-term horizon because it's very difficult nowadays to predict further down the line. Uh, but where I do feel it's going to converge towards is a way better web of communication between humans and AI. Right? I mean, humans will understand exactly from what perspective and why they're getting some information from, uh, certain workflows that are going on in the background, whether it's agentic or whether it's workflows.
I'm not going to get into that. Uh, but it'll definitely be a much better collaboration. I think that's where the world's headed towards. Um, we're going to definitely need smaller teams.
I mean, smaller teams, like, let me put it this way: smaller teams will probably function, uh, in a more connected manner, uh, rather than, you know, very large teams where there has to be a lot of communication between humans, and then you have to put in those AIs, which means an even greater amount of communication. Um, so ultimately, I'm not saying enterprises will fail, but what I see is that tasks are going to converge towards smaller teams, and then the teams can work towards a bigger goal. Right? So, uh, but yeah, I definitely feel that the collaboration between the two is going to become a lot better.
Yeah, I agree with you. I think that collaboration is key. The question is, what should founders or CROs or RevOps leaders start doing now to prepare for that? Because it is coming.
Like, the world is changing. How do you make sure that you don't get hit upside the face like 85% of people did with AI in general, um, and that they're actually ready for this? Yeah. I'll kind of bring in, uh, an opinion which is very close to my perspective.
Um, we work with a lot of companies which kind of have different kinds of motions. I mean, they're building different automations and different motions into place on Nrev, right? I think I have a very, very, um, lucky position, if I don't have a better word for it, uh, to kind of actually see this at 30,000 feet. Seeing the same strategy, uh, being deployed in different types of business scenarios, yielding different types of results, actually gives a very good view of what can work for yourself.
Right? What I mean to say is that today, you have to be really curious and have to learn from each other in terms of, uh, you know, what can work for you. Right? It's that inspiration and innovativeness that has to come in, more than building more technical knowledge about AI; that's where my perspective is, because as we have these, you know, closed-room conversations between our different customers, uh, you know, the aha moment comes when, uh, someone says, oh, you're doing this as well, but, you know, this thing hasn't worked exactly the way we've implemented it.
And then they say, okay, I mean, we did it this way on this half of the, uh, the system that we've set up. And so this has worked, but the next thing hasn't worked. And then they kind of collaborate and exchange ideas. And that's where really good ideas actually come about from.
And, of course, you iterate on that to find out the best for you. Uh, but I think that being curious and learning from each other is definitely something, uh, that could place us much ahead of the others. Totally agree. Totally agree.
Should we start getting into the rapid-fire roundup? Let's do it. You kick it off, sir. Okay: besides Nrev, what's one AI application you can't live without right now?
Oh, very strangely, it's Replit. I love Replit among all the others. Replit? Yeah.
Yeah. It gives me a good dashboard for the things I'm building out of Nrev. I love it. Interesting.
You're a Replit fan, right? I used to do a lot of Replit work. I'm now doing a lot of work in Google AI Studio, and I find the integration with the ecosystem a lot tighter, because you can take a project from AI Studio to Antigravity and then deploy it on something like Render, and you have a clear path.
Wow, Dale, you're way more technical than I thought you were. Right. Yeah, I was getting a little annoyed with Replit, to be honest, when they came out with their super agent. It just sucked up all my credits, and I found it to be a credit grab more than anything else. Hmm.
That's very interesting. I think they've solved it to a certain extent, but these tools keep competing with each other. Tomorrow a new one could be emerging; the day after it could be Bored Ape or anything for that matter. Yeah.
Yeah. What's the most overused metric in go-to-market today? It's coming up for sure: revenue per employee is something there's a lot of buzz about. I don't know whether it should be there or not.
It's definitely a good metric from a core business perspective. But yeah, that's something that's in the air. You're the second person who's said that, so there's got to be something to it.
Who's a founder or product leader you admire, and why? Hmm. Very interesting. This one would have to go to Lenny.
Again, this comes from the perspective of being curious and learning from others. He has a fabulous pedestal from which he gets to observe so many companies developing their products in certain ways, with the learnings shared across the board. So, yeah, definitely. What's one workflow you'd happily delete forever?
Wow. I keep deleting workflows all the time, and that's the magic of workflows, right? There's no big upfront payment; you can use one and remove it.
It's a tough question, but I had this workflow that used to remind me to follow up with people I'm in conversation with. Very simple workflow. But I realized it classified my emails in the most weird way possible.
So I removed it just a few days back. Yeah. I love it. I know the one Dale's going to ask to wrap it up.
Last one as we wrap up: what's your dream vacation destination? That was out of the blue. I've been thinking about New Zealand for a while.
Would love to be there. Awesome. Great choice. Is something on your mind, Dale and Adam?
Where do you want to go? I mean, I travel way too much as it is, and Dale has a boat, so he doesn't travel. Although this month, starting Friday, I think we'll have both spent a cumulative, I don't know, five days at home between then and the end of the month.
I have Chicago, then we have Utah, New York, LA. We're traversing the country this month. Dale used to travel. I still travel.
I still travel. He just doesn't... I just don't tell him when I travel. You don't travel out of the US. You don't think so?
I mean, unless you bring your background with you. Another story for another day. Shantoo, thank you so much for joining the show. Thank you.
Other than LinkedIn, where can people learn more about you and Nrev? Of course, our website, which is Nrev.ai.
And our application is open to everyone. There are some lovely pre-built workflows and pre-built motions in place that our customers have built out. We love building a museum around them; not a museum you look at archaeologically, but a more modern use of the word, where you can really learn from what others are implementing and how they're going about it.
So, yeah, I strongly urge people to go into our place, as we call it, and check them out. Nice. Shantoo, thank you so much for joining us. It's been a pleasure.
Happy New Year, and it was great talking to you. Thanks, Dale. Thanks, Adam, for having me here.
Happy New Year as well.