Will AI Be the End of Employees and Workers?
What leadership and talent management should know about how AI will affect our future
"This is the end," one of my friends mumbled to me. He showed me a video of a janitorial robot cleaning a well-organized restroom. In the video, the robot entered a restroom that was already clean. The robot cleaned the sink and toilet and restocked the paper towels, soap and toilet paper. Once the robot finished, it exited the bathroom and moved down the hall to the next one.
Anyone who has actually done janitorial work would know that the creators staged an artificial situation, as restrooms are never that clean or organized - and that's coming from someone who hasn't done janitorial work since junior high school. "It's only a matter of time before none of us have jobs and we're replaced by robots," my friend finally said as the video finished.
He echoes what many believe at present: AI will do all or most of our work in the future. While AI will have an impact on work, that impact will be more limited than the hysteria expects. AI will also increase the value of some overlooked skills. Just as all of us must have good information to act appropriately, so must AI. I start this post with this point because it's missed everywhere. We tend to forget how unpopular truth (accurate information) is, but AI must have accurate information in order to succeed.
What Is AI?
Before we delve into how AI will affect the future, let’s quickly address some growing misinformation about AI. For instance, a large language model (LLM) differs from a robotic application of AI. When people say AI, they sometimes lump these applications under one umbrella. However, we should be aware that these are not the same.
Artificial intelligence attempts to replicate human action through the use of machines. These actions can include - but are not limited to:
Kinesthetic: AI involved in physical movement is generally referred to as robotics
Intellectual: AI involved in reasoning can fall into a number of categories such as LLM, anomaly detection, application-specific AI, etc.
Sensory: AI involved in images, videos and sound along with other senses outside of robotics
For instance, Stuxnet was an example of an application-specific AI in that it learned its environment and attacked based on what it learned. But Stuxnet had limited application. An LLM with a large data source may have more application in interacting with humans. All of these are applications of AI.
Keep this in mind when people say AI, as you will observe some people misapplying types of AI to problems. For instance, an LLM may tell you how to do something, but that’s much different than actually doing the action.
Let's first look at a past prediction I made related to healthcare and how we can expect this pattern to repeat with AI.
Past Healthcare Prediction
While in college over a decade ago, I researched and made predictions about the Millennial generation, as well as some of what we'd see with the next generation - iGenZ. These predictions have all come to pass. One prediction I made cautioned of significant consequences related to the healthcare field:
The United States almost graduates more lawyers than doctors. It also discourages young men, who tend to study the sciences, from entering college. Add a retiring generation that will need more healthcare to this mixture and you can see what we will be facing in the future: a major shortage of healthcare workers alongside a massive need for healthcare and increased bureaucracy around it.
When people correctly observe that Americans' life expectancy has been plummeting, I highlight these as the underlying reasons.
How this relates to AI involves information. Even if we have accurate information, we have to act on it as quickly as possible. But as I discovered when I warned about what I was seeing, accurate information rarely gets attention or focus.
If talent management and leadership want a takeaway here, realize how easily one piece of accurate information can be drowned out by many pieces of irrelevant or inaccurate information.
This is basic human behavior. In The New Paradigm for Financial Markets, George Soros observes that "Americans go to great lengths to deny or forget about reality. Yet if you forget about reality, it's liable to catch up with you." His observation isn't only true for Americans - it applies to people in general. If people can avoid a problem, then they will avoid the problem until they can’t.
Research Becomes Gold
AI requires information. As we’ve seen, one detail of accurate information can be drowned out by incorrect and irrelevant information. AI will not be able to stop this. In fact, AI will amplify the incorrect information.
Exercise 1
Either type on a blank note application or write out on a paper one of your concerns from a year ago based on events at the time. If you’re unable to think of one of your concerns, either read communications you had at the time, a journal entry, or files you have from that time.
How much of that event consumes your mental energy now? Keep in mind that in a few cases your answer might be "more."
What were your inputs at the time that made you aware of the event or events? How did those inputs amplify the event or events?
How did the event change how you acted and did this align with your vision?
I recommend that you repeat this exercise with multiple concerns from a year ago. As you repeat this exercise multiple times, you will notice a pattern. The information inputs that you had did not remind you of your vision, like I am doing now. They distracted you from your vision. These distractions added costs. One of those costs is who you are now - are you closer to who you want to be and what you want to accomplish? Look at your notes and see how your concerns from inputs affected this.
AI will be no different. AI has input too, and that input affects its results. If the input is false, corrupt or distracting, the output will be the same. We cannot forget here that AI is already being used as a replacement for critical thought. People ask AI questions and assume that AI knows the answers or the steps to find the answer. This is not new in the sense that people were using search engines before AI, though search engines at least pointed to a list of human answers.
You just completed an exercise (Exercise 1) that required critical thought. You had to stop and think about your answers. You had to evaluate the impact of what you considered. That required focus and reflection. AI cannot do this for you. In fact, when we stop and consider focus and reflection, both require reduced input.
Related To Research: Critical Thought
In a recent survey of small business owners, the majority of leadership and talent management at small businesses reported that iGenZ lacks critical thinking skills and common sense. What's completely missing in the media reporting is that this is precisely the effect of search engines. Search engines castrate critical thought. They also castrate research, because you assume the information you get is correct. How do you know if it's not?
AI will only amplify this problem.
As a quick independent study with my group of friends (mostly Millennials and iGenZ), I posed a challenge that involved describing a flower. All of them initially tried to use their phones to show me a picture, but I stopped them and said, "Describe it in words, no pictures." Most were not able to do this because they lacked the practice of imagining a flower and describing what they saw in their mind. Yet this practice - imagining something and describing it in detail - is one of the most important skills we'll ever develop as humans. It inherently involves our creative power: we imagine something that doesn't exist and then create it.
The Physical World Returns
Exercise 2
Imagine a world in which AI does most work, reducing the cost of work by 90%. Very little work is done by humans. As a note, this is currently the prediction of most people (and it will be wrong).
Given that few people work, consider how they will be able to obtain resources. If you think AI will give them the resources, ask how AI will be able to distribute limited resources. Also ask what will happen if AI makes a mistake with a resource that results in a shortage. If people are dependent on AI and it makes a catastrophic mistake, what happens to people?
One consequence of the rise of the digital world has been a misunderstanding of the physical world. When I write that our physical world is managed like a fractional reserve banking system, most readers will not understand what I mean. In simple terms, actual demand is hidden through an electronic claim system in which each unit is owned by more than one person.
A well-known example of this is gold: there are over 24.40 million metric tonnes of electronic gold [see point 5 in the Epilogue], but only 244,000 metric tonnes of physical gold [1]. This occurs because gold is managed through a fractional reserve contract system in which hundreds of people can own and hold contracts on the same ounce of gold.
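The scale of this gap can be sketched with simple arithmetic from the figures above (a rough sketch; as the Epilogue notes, the exact ratio varies depending on when you measure):

```python
# Figures from the paragraph above (metric tonnes).
electronic_tonnes = 24_400_000  # "electronic" (contract) gold
physical_tonnes = 244_000       # physical gold

# Average number of electronic claims per unit of physical gold.
claims_per_unit = electronic_tonnes / physical_tonnes
print(claims_per_unit)  # → 100.0, i.e. roughly 100 owners per physical ounce
```

In other words, if every claim holder demanded physical delivery at once, only about one in a hundred could be satisfied.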
In other words, AI is happening at the same time that we see maximum financialization. We should be aware that the pendulum is nearing (if it has not already passed) the maximum point of de-materialization. We're about to see the pendulum swing hard in the other direction for generations (plural). The cocoa industry is getting an early preview of this at the time of this writing (January 2024). This is only a preview of what is to come in materials. Returning to the opening story of the janitor robot: since the robot’s release, the price of its lease has tripled. Ironically, it’s no longer cost-efficient to use. A big part of this is the resources involved in robotics - they’re not infinite.
AI will make it easy to automate tasks that require little creative power or physical effort. But as it does, it will begin to consume numerous physical resources. This is where we start to experience resource limitations. Like the janitorial robot, it’s cheaper until everyone wants it - then the costs skyrocket along with the underlying resources. Millennials and iGenZ don’t realize this reality because neither generation understands how much of their reality requires the physical world. As a fun example of this, ask members of both generations to name five elements from the periodic table that make up their phones.
The Only Cure For Low Prices Is Low Prices
We often forget that the cure for low prices is low prices. The same applies to high prices: the only cure for high prices is high prices. High prices attract competition. Low prices drive abandonment of an industry.
This may seem irrelevant to AI. It’s not. If you listen closely to some of the predictions about AI, they completely misunderstand this reality about high and low prices. This reality will not change, regardless of what AI can do in the future.
AI Changes Contribution Incentives
One consequence of AI will involve input. Until now, people have added information because they had an incentive to add information. Every “free” site you see (including this one) exists because the person is providing evidence of expertise while building a relationship with readers. Shortcutting this with AI will mean that people contribute less information (or none) over time. This pattern has already been felt in one industry where I’ve consulted, and it will impact other industries.
As the use of AI grows and AI absorbs the demand for information, people have less incentive to provide information. I won’t dig into the legal challenges here, but I know of many lawsuits in process against AI developers. Some AI developers took information they had no legal authorization to take and fed it to their artificial intelligences. Even if courts don’t protect private property (information), the consequence of these actions will be people contributing less over time.
AI may be worthless in less than a decade without input. While truth doesn’t change, knowledge does. Consider the differences between truth and knowledge and how this will affect AI.
Truth. Books are full of truth that few people read. For instance, one of my favorite books, Character: How To Build It, can no longer be purchased anywhere. The book is full of truths that people re-discover. But people could have saved years of their lives if they had read the book. This is how truth works.
Knowledge. Knowledge differs. Some knowledge changes over time and requires new input. For instance, you may know how to calculate a standard deviation. But with a billion data points, you probably use a tool. How you get this result differs by tool and can change over time. You’ll need new knowledge in the future as tools change.
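As a minimal sketch of that point (assuming Python with the NumPy library as the tool of choice), the textbook formula and the tool give the same answer on a small sample, but only the tool scales comfortably to a billion points:

```python
import numpy as np

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# Knowledge of the formula: population standard deviation by hand.
mean = sum(data) / len(data)
by_hand = (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5

# Knowledge of the tool: the same calculation via NumPy,
# which also handles arrays far too large to sum by hand.
with_tool = float(np.std(np.array(data)))

print(by_hand, with_tool)  # both are 2.0 for this sample
```

The truth (the formula) doesn't change; the knowledge of which tool to reach for, and how, does.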
The “Reddit Problem”
Discovering the Problem
In 2017, I developed an idea for a charitable project. I dislike most charity because I agree with John D. Rockefeller’s view that charity should help a person become independent of it. I’ve seen numerous charities that actually have an incentive not to solve the problem (think of any cancer foundation; more problems equal more money!).
I created my idea, paid for it to be translated into the most popular languages, and paid to have it shared everywhere across social media and on targeted media sites. The material was completely free. In addition, I excluded any links in the material, the name of my private research firm, and any contact information outside the account associated with the advertisement. If you read the material, you would not be able to get ahold of me - that was the intent. I wanted people to have the solution for free without knowing who wrote or advertised it.
I learned quite a bit about social media advertising in terms of which platforms got disproportionate results. All social media advertising led to traffic in general, but the one that stunned me was Reddit. Reddit banned the advertisement because one or more users were offended by the free content. Keep in mind that I paid for this advertisement - Reddit banned something I had paid to advertise!
Is it legal to ban something that someone paid for without reimbursement, especially if the content doesn’t violate any guidelines and is not illegal? Good question. Had this not been a charity project, I would have sought legal counsel. Consider this point if you experience something similar.
Reddit Provided Support But…
When I reached out to Reddit’s support, they apologized and reversed the ban. A week later, my advertisement got banned again. Reddit’s support had to reverse it again. I will highlight to readers that Reddit support did reverse the ban. But Reddit support couldn’t stop what the ban said about the Reddit audience in the first place. I also felt like I was micromanaging an advertisement, so I stopped paying to advertise on Reddit.
To this day, I would never spend a dollar on Reddit.
Consider the contrast with other social media sites:
Facebook brought more traffic and I had no issues.
LinkedIn brought a moderate amount of traffic, though probably more targeted.
(At the time) Twitter brought in moderate traffic and had no issue with it being reported.
The select mainstream media platforms I used also had no issues with the advertisement.
The Underlying Issue and What This Means For AI
Whether Reddit knows it or not, it is structured in a way that can encourage anti-social behavior. This is not due to it being anonymous; I know many sites that maintain anonymity well. The problem arises when you combine that anonymity with a karma system of upvotes and downvotes.
Some of the highest-rated comments are comments that a person would never say in person. The incentive is backward. But here’s a key point related to AI: not only are many AIs using Reddit, they have the same problem. They lack humanity. At this time, Perplexity calls itself a know-it-all, but people don’t actually like know-it-alls. Nor do they trust know-it-alls over time, because all life is defined by boundary.
What will happen over time is that AI will re-emphasize and amplify the value of cordial human behavior, in extreme contrast to what AI itself offers. Social skills will become paramount again - even now, some people don’t realize how anti-social their skills and behavior are. But these people will fall behind, because a robot does better wherever basic social skills aren’t needed, and they’ll have to experience the pain of learning actual human skills.
Never forget: you are speaking with humans like you. Would you say that to the person sitting right across from you? How about if they were going to be in your life for the next five decades? AI doesn’t get this and can’t.
Exercise 3
Attend in-person events and practice speaking to people you don’t know while looking them in the eyes. Evaluate the feedback you get. Compare it with how you’d speak online. If there’s truly no difference and you’re getting positive feedback in person, continue - you have good people skills.
Ending Story
About a decade ago, I visited my buddy Doyle for an event in LA. Our event started early on a Saturday morning, but I was running behind. Downtown LA was over 45 minutes away from where we were staying, according to both popular map applications at the time. I had to be at the event in 15 minutes. I had accepted that I would be late when the taxi pulled up to get me.
I was completely wrong.
In 12 minutes, the taxi driver drove me to the exact location where I needed to be in downtown LA. The driver didn’t use Google Maps or Waze. In fact, he used no GPS at all. He took main roads, alleys, even sidewalks. He would make remarks as he would take the route - “most people don’t realize that this alley cuts through these streets here” or “this sidewalk is never used by anyone and it’s wide enough to be a street.”
I asked him if he feared the cops pulling him over for his driving. “Not at all,” he laughed. “In LA they’re completely overwhelmed with work. They couldn’t care less if someone is driving 47 in a 25.” What became clear is that he not only understood the details of the routes, he understood the context of every place he drove.
Even while he drove through a maze of roads, alleys and even sidewalks, he could keep a fun conversation going. To this day, I still remember this 12-minute drive. I had fun. I made it with 3 minutes to spare when I expected to be 33 minutes late. I also learned what exceptional means. That driver was beyond amazing (he got a big tip).
Conclusion
With all the hysteria about AI at present, there's no need to add to the chorus claiming it will solve everything when we'll see problems that few people are anticipating. What I find odd is how few people are concerned about the lack of accurate research from AI. What we find repeatedly is that people value information. But as I am known to repeat, no data will always be better than inaccurate data. The same applies to information.
References
1. According to the U.S. Geological Survey.
Epilogue
1. What we want to know
I share this story frequently and I advise talent management and leadership to reflect over this story as it has a powerful message.
Over a decade ago, while speaking about Millennials and personal finances, I asked a room full of bankers how many of them would be proud if their sons became plumbers. None raised their hand. I asked about other blue-collar work with the same response. I then pointed out that all of them had already used the results of plumbing several times that day. Even with this challenge, the executives' view did not change. Over time, some of these bankers have reached out and expressed regret for sending their children to college.
I cannot stress this point enough: people want to know what they want to know.
2. An accurate prediction of AI in fiction
Unintentionally, this fictional scene predicts AI quite well. More than likely, George Lucas did not intend that to be the case. However, this scene captures part of what I expect to occur with AI.
During private presentations I make other predictions about AI where I see specific positives and cautions. Given that these other predictions are more industry specific, these do not fit the overview predictions in this post.
3. Most providers are providing good cautions
I’ve seen this and similar statements in most AI tools:
Don't enter info you wouldn't want reviewed or used.
Many people are skipping this without thinking about what this means. Whatever you provide to the AI (including your questions) will now be a part of its data. In other words, if you’re inputting something that you consider intellectual property to get help, AI can take and use that. It’s not theft because you agreed to those terms.
4. Follow-up a year later - December 2024
As I reflect over the predictions in this article and what I’ve seen this year, I continue to see the same patterns. For a simple example, gold miners have faced even higher costs this year. Yet these same gold miners are using AI.
Americans’ life expectancy continues to plummet. Yet many healthcare providers admit that they’re already using AI. I’m noticing some subtle changes in internet contributions, though I expect this to take longer than people realize. People are slow to change.
I’m seeing more of what people “want to know” than what is actually true. Never forget that when people invest time and energy into research, they want a return on that time. If research is as easy as asking an AI, then there’s no emotional investment in wanting to know whether the answer is true.
5. Electronic vs Physical Gold
Related to electronic versus physical gold: I highly suggest you watch this interview from 1:34:00 to 1:40:00. Luke Gromen mentions several key points:
(1:39:59) Depending on when you measure, there is at least 100 times more electronic gold than physical gold. In several measurements of my own, this ratio has been as high as 120 to 150! This partially explains why the country Luke mentions in that segment wanted its physical gold back. Luke also explains why the price dropped when this happened - less physical collateral for electronic gold. But remember that the price drop is temporary. As a note, the reasoning behind electronic gold is to put a “top” on the gold price. This works until everyone, starting with central banks, realizes that the electronic gold price is a fraud.
(1:36:15) Luke alleges that a company reached out to one of his sources to buy gold while producing content for clients and media saying that the price of gold would decline. I happen to know two other examples of this from people in my own life (different companies). These companies produce content that says one thing, but behind the scenes they’re doing the opposite. Think about this in the context of AI: the AI will be “fed” the produced content, not what the company is actually doing, because the company keeps that a secret. What you have here is misinformation. The effects? Very costly.
Think about the impacts here. If a reported price is a fraud and this fraud can be maintained for decades, what is the impact on institutions and people when the fraud is finally exposed? AI is amplifying this fraud right now. Every LLM is only repeating the fraud Luke describes. Worse - as Luke alleges - institutions may want to amplify this misinformation so they can take advantage of the misinformation they’re spreading.
Returning to my repeated point: it’s odd how very few people are concerned about how much bad information they’re receiving. No data are much, much, much better than bad data. Same with information.
6. Humorous recent Reddit story (November 2024 update).
After an event in November 2024 shocked the Reddit community, one of my friends started a poll asking Redditors whether they thought they were in an echo chamber. Her reasoning: if the event shocked Reddit so thoroughly, perhaps Reddit was an echo chamber. She was downvoted into oblivion and ended up deleting her post.
Downvoting is a humorous example of non-human behavior. What would the in-person equivalent of downvoting a question look like - and a good question at that? Every reader who regularly attends in-person events knows the answer: no one behaves this way in person.
Remember that many Redditors are actually cordial people. They understand that people on Reddit are people. If you approach Reddit this way, you may ironically get more out of it, because you’ll eventually connect with real people in reality.
Contextual note for posts that involve future predictions: this post was written January 2024. Consider this timing context when reflecting over the content of this post. All images are either sourced from Pixabay or created unless explicitly stated. All written content and created images are copyright, all rights reserved. None of the content may be shared with any artificial intelligence.