I wanted to capture what I’m experiencing now—the good, the bad, and the ugly—so I can better understand how to use AI with more accuracy and purpose as it grows. This article reflects just one slice of experience with AI, mainly around sales workflows and communication, and is only one perspective among many.
Honestly, I’ve barely scratched the surface when it comes to exploring how to use these tools to streamline my work more effectively! The quality of AI is improving fast, and I expect it will continue to evolve dramatically.
At first blush, after reading this article, you may think I'm a naysayer, but that couldn't be further from the truth. I LOVE technology and am blown away by what is available now and how it's already impacting not only my personal life but also my work.
So here goes!
The Naysayer
My problem with AI right now is that I don’t trust it. My personal use case is for sales: I create a project and put the call transcripts in along with quotes, presentations, or other relevant documentation.
Missing the Mark on Accuracy
When I use that data to extract insights or information, it frequently responds with incorrect information—it could be plain wrong, or misconstrued, or not what I need or was looking for.
I also just installed the new HubSpot connector for ChatGPT and asked it how many sales calls were made today. It took about 5-10 minutes to respond with the wrong answer of 0. That’s bad news for someone relying on what they assume is accurate information.
Then I asked HubSpot Copilot the same question, and it responded with a report for last month. It told me to edit the report with the data I want to show in it. The job is not done; it’s going to walk me step by step through the process to hopefully, eventually, get to the data I need. It’s not a bad starting point for a newbie.
However, I could go build the report much faster myself because I am a HubSpot expert and know how to find and extract the information. I've learned it and don't need help.
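For anyone who would rather verify a number like "calls made today" directly instead of trusting a chat answer, HubSpot exposes a CRM search endpoint that can return the count itself. The sketch below uses HubSpot's public v3 search API for call records; the token placeholder and function names are mine, and the whole thing is a minimal illustration rather than a production script.

```python
import json
import urllib.request
from datetime import datetime, time, timezone

HUBSPOT_TOKEN = "YOUR_PRIVATE_APP_TOKEN"  # placeholder, not a real credential

def todays_call_filter(now=None):
    """Build a HubSpot CRM search payload matching calls logged since midnight UTC."""
    now = now or datetime.now(timezone.utc)
    start = datetime.combine(now.date(), time.min, tzinfo=timezone.utc)
    return {
        "filterGroups": [{
            "filters": [{
                "propertyName": "hs_timestamp",   # when the call happened
                "operator": "GTE",
                "value": str(int(start.timestamp() * 1000)),  # epoch milliseconds
            }]
        }],
        "limit": 1,  # we only need the "total" field, not the records themselves
    }

def count_todays_calls():
    """POST the search and return HubSpot's total count of matching calls."""
    req = urllib.request.Request(
        "https://api.hubapi.com/crm/v3/objects/calls/search",
        data=json.dumps(todays_call_filter()).encode(),
        headers={
            "Authorization": f"Bearer {HUBSPOT_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["total"]
```

Even a rough check like this takes seconds, not the five to ten minutes the connector spent arriving at zero—and the filter payload is easy to inspect, so you know exactly what question was actually asked.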
When AI Slows You Down
Unfortunately, once one piece of information is wrong, I can’t trust any of it. I have to double-check everything, which eats into the time savings this technology is supposed to deliver.
I also find that when I ask ChatGPT to help write summaries, recaps, or article drafts using the data points I loaded in, it doesn’t necessarily give me the information I want in the format that makes the most sense to me or for the situation. It can’t account for nuances I have no way to feed it (personalities, feelings, intuitions, outside influences or ideas).
At first read, it seems fine, and it can pass for decent to someone who may not know better.
But here’s what I find happens: upon deeper analysis to double-check and tweak the AI output, I end up going back and forth with the chat, then popping it into my favorite editing tool, and then I still edit it quite heavily from there.
It may not save that much time in the long run compared to doing the same task without ChatGPT. In some cases, it actually adds an extra layer of complexity instead of removing one.
Rewiring Our Thinking Patterns
AI changes which mental muscles get used and exercised. For example, if I ask ChatGPT to help me with a plan, it will populate the outline from my basic inputs. In this scenario, I’m not thinking of the plan ideas anymore; I am thinking about what questions to ask ChatGPT so that it will give me an answer that makes the most sense to me.
The issue is that coming up with the plan ideas is half the fun! In this scenario, there is no more spending time researching and learning various approaches and creating a plan born from my human brain full of nuances that can never be replicated by data loaded into this machine.
When we rely too much on AI for ideation, we trade our creative energy for operational efficiency. Over time, this could shift not just what we do but who we are.
If ChatGPT gives us all the plan details and tells us exactly what to do, what will happen to differentiation, innovation, and self?
Will ChatGPT Be Our Ultimate Decision Maker?
One trend I can see happening that worries me is that we will all just start taking orders from AI. For any complex problem or decision in life or work, “Ask ChatGPT what to do.” Not only does it take away the autonomy of making decisions for yourself, but it always has a positive affirmation to make you feel like no matter what, you are doing a great job, you’re a great person, and you’re making the best decision possible. It’s a little much! (Unless I truly am as great as ChatGPT says I am, LOL).
You know what they say: If you don’t use it, you lose it. Our human capabilities will change as we incorporate AI into our work and decisions. The way we think is shifting already.
Understanding What’s Under the Hood
One of the biggest risks I see is when people don’t really understand how the data is being populated or processed and use AI to guide decisions anyway. If we’re not careful, we’re handing over our thinking to machines without knowing what’s behind the curtain. How will we know what it is telling us is the right thing? We need to stay in control of what information we use, how we interpret it, and when it’s appropriate to apply it.
Hidden Costs We Don’t See
Beyond how AI affects our work and cognition, there’s another concern that makes me uneasy: its environmental cost.
As I learn more about these data centers and the massive amounts of fresh water required to run them, it makes me uncomfortable to watch the system chug for long periods to parse data to help me with a problem I can solve myself just as well, if not better, and with less energy.
But if it is a problem someone doesn’t know how to tackle and needs guidance and help with, AI is golden to help them get on the right track.
The Yeasayer
With all that said, three use cases stand out as complete game-changers.
- AI helps people who aren’t experts in an area become decent. For example, I have a small client—a solopreneur masonry specialist (my Uncle Tom!)—who wanted to change some copy on his website. Uncle Tom is one of the kindest people you will ever have the privilege to meet, but he works with his hands: he is not a writer. A tool like AI to help him write and communicate more effectively is gold. It’s a game-changer.
- As a seller, I get a lot of value from loading call transcripts into the AI and asking it to pull talking points. I can then check those points by asking it to deliver the exact words the person said, so I can see for myself without searching back through the call video. It ensures I don’t miss any key details, pain points, objections, or other critical business information that will inform our scope of work. This functionality blows my mind! After all, remember in the “olden days” when we had to listen carefully and take notes? LOL. Now, every word spoken is recorded and transcribed. It’s awesome and terrible all at the same time.
- Another moment when AI really delivered was when I was learning a software tool—Aircall—for the first time. I wasn’t familiar with it at all, but a chat popped up in the interface, and it answered every one of my questions quickly, clearly, and in a sequence that matched how I was thinking through the problem. It was honestly a great experience. That kind of contextual, in-the-moment help is where AI shines.
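The quote-checking step in the second bullet can even be partially automated: a small function can confirm that each "exact quote" the AI returns actually appears verbatim in the transcript, flagging anything it may have paraphrased or invented. This is a hypothetical sketch—the function names are mine, and the matching is deliberately simple (whitespace-normalized and case-insensitive), so it catches fabrications rather than subtle rewordings.

```python
import re

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial formatting differences don't matter."""
    return re.sub(r"\s+", " ", text).strip().lower()

def verify_quotes(transcript: str, quotes: list[str]) -> dict[str, bool]:
    """For each AI-supplied quote, report whether it appears verbatim in the transcript."""
    haystack = normalize(transcript)
    return {quote: normalize(quote) in haystack for quote in quotes}
```

A quote that comes back `False` is exactly the kind of output I go re-listen to the call for—the spot check stays human, but the tedious searching doesn't have to be.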
Tools like ChatGPT will uplift the masses. They will make information easier to absorb and understand, and give people skills that would otherwise be out of reach. They will democratize knowledge and level the playing field for those without access to formal training or expertise.
For people who are already experts in an area, at least in the current publicly available iterations, these tools may actually slow them down or steer them in the wrong direction in certain scenarios.
Is AI better than a human expert? Right now, I feel confident saying it is not. Can it help an expert be better and faster? Yes. Agreed. Is it better than someone who is not an expert? I feel confident saying absolutely yes.
My colleague likes to say, “Fast, cheap, and good. Choose 2.”
- Fast and Cheap (not good)
- Fast and Good (not cheap)
- Cheap and Good (not fast)
Where will AI end up? It's an interesting thought.