Staying Human with Tech

August 4, 2025

What’s Your Future With AI?

As someone who focuses on the changing world of work, I hear this question frequently these days. Hype is not the goal of this article, but we cannot ignore this conversation; it will be with us for the next few years. And as we have that conversation, we may need to keep some critical questions in mind.

Do I Really Need To Be Concerned with AI?

You may have friends, family, or colleagues who still ask this question. Maybe you ask it yourself. Let me answer it with a story.

In the late 1960s, a television show predicted a future where vast stores of information could be called up in an instant and communication could connect people millions of miles away. That show, called Star Trek, inspired thousands of engineers and scientists to create new technologies. Today, with high-speed internet, I meet with clients and colleagues in Portugal, Germany, South Africa, India, and across the Americas. I have smartwatches, smartphones, and smart tablets to help me stay in touch with them and pull up any information I need to help them build better. Star Trek predicted this would be possible in 200 years. It became reality in less than 50 years.

So what does the new technology of AI promise? Let’s look at the near term for now.

What Can AI Do For Us Now and Soon?

Current AI technologies such as generative AI and, more specifically, large language models get all the attention these days. Why? Because they are magical mimics. They can mimic our writing patterns, our speech patterns, and even visual patterns. (I’ll talk specifically about how this is disrupting the job market in a future article.)

It’s magical because most people do not understand how the mimicry is created. But due to ever-increasing computational power, access to any kind of information (and misinformation) via the internet, and the economic drivers encouraging rapid development, every month seems to bring a new capability. AI still mimics us.

However, because of the ever-increasing investments in these technologies, we will see new forms of mimicry in 2026. This is a fairly easy prediction, as I recently got a behind-the-scenes peek at training for the next round of AI agents. (If you are not familiar with the term “AI agents”, think of AI that can do tasks for you, with or without your entering a command.)

These agents were being trained for almost ANY TYPE OF PROCEDURAL TASK:

- Legal procedures that might be done by a law clerk or a lawyer
- Health care procedures that are normally tracked by a health care professional or administrator
- Project management procedures, such as checking the status of tasks, monitoring financial expenditures, and reporting to appropriate stakeholders (this one had an existential pinch for some of my early career)
- IT maintenance procedures
- Financial procedures, from accountant to director level
- Financial services procedures (making me wonder about my own financial advisor)
- Auditing procedures
- Insurance procedures
- Data science procedures, enlarging the vast amount of information being tracked
- Human resource procedures: hiring, onboarding, training, payroll
- Marketing and sales procedures: analyzing markets, launching and monitoring campaigns, monitoring sales territories, coaching for better sales calls
- Customer service procedures
- General and operations management procedures
- Administrative assistance procedures
- Training and development procedures

In summary, if you can teach a procedure to anyone, you can teach it to an AI agent. That agent will then do it rapidly, repeatedly, and tirelessly, anticipate a wide range of alternatives, and be ready to execute the next set of procedures under the right conditions.

It will also be incredibly messy for the next few years.

Will Experts Still Be Needed?

The mess will show up in the quality of the procedures. In IT, the term “Garbage In, Garbage Out” describes what happens when error-prone data or misinformation enters a digital system. I’m seeing the same thing with the training of AI agents. These agents are being trained strictly on procedures across a wide variety of contexts, with little attention to the quality of the procedures or their impact. How efficient is the procedure? What can be done for continuous improvement? What are the criteria for continuous improvement? I’m not seeing these questions addressed in the procedural training data.

We saw something similar with programming in 2024. Keep in mind that programming can be a bit more defined, with algorithms and data types, but it still represents just another form of procedure. The early code generated by these latest AI systems was considered garbage code because the models were trained on a wide variety of publicly available sample code. And some of that sample code was garbage.

Then experts became important to rank and tune the results of the AI-generated code. Just as large language models hallucinated with words and images, they also hallucinated with code. The experts corrected the AI models, and the models improved.

But you still have to understand the goals of the procedures and their impact. I saw one comment in a 2024 Reddit post that said, “AI makes good engineers better and bad engineers worse.” Checking with software engineers I know, that’s still true. The AI models still don’t understand the larger business goals, the user experience, long-term maintainability, or security concerns.

So anyone who works in repetitive, procedural, or online work will see their work change with these AI agents. But this is not the first time humans have been introduced to power tools. For instance, a power saw doesn’t make for a good carpenter. But a good carpenter can be a productive carpenter if they learn what a power saw can do for them. Their creativity and ability to bring a customer’s vision to reality still require a human.

So how are you learning to use your new AI power tools?  (I’ll provide some ideas soon.)

Hope this helps,

Mark Kilby

P.S. I’ve started releasing videos later in the week to enhance the topics in the newsletters. If you liked the “What’s in it for Me/Us/Them?” framework in the last email, you might appreciate the story I share in this video on how to use it to grow an international professional group. And if you are curious about what happened to the first group mentioned in the story, join me for this event on August 28 - https://www.meetup.com/agile-orlando/events/310146416/

Join the discussion:
Howard B Esbin
Aug. 4, 2025, afternoon

"A power saw doesn’t make for a good carpenter. But a good carpenter can be a productive carpenter if they learn what a power saw can do for them." Indeed!
