Nexthink’s next Solutions Spotlight promises an exciting, informative deep dive into how best to harness AI to improve DEX (digital employee experience) and streamline your processes. Sounds great, right? More efficiency, better decision-making, and happier employees. But before we tether our future to AI uncritically, let’s pump the brakes. As someone who's spent years wrestling with the promises and perils of emerging tech, especially blockchain's early days, I see some serious red flags waving.
AI: The Unseen Algorithmic Bias
We're told AI is objective. Data in, optimized solution out. But AI is only as good as the data it’s trained on, and that data frequently replicates society’s biases. If your training data underrepresents or disadvantages one demographic, the AI will quietly reproduce that bias in its analysis and recommendations. The result: skewed performance ratings, unfair distribution of resources, and, ultimately, a less diverse and inclusive workplace.
Take a concrete example from IT support: imagine a powerful AI that scans all employee help desk tickets. If the data shows that employees from one department consistently struggle with a particular piece of software, the AI might prioritize software training for that department. That sounds reasonable. But what if the real problem is outdated hardware, or that the team was never properly onboarded? Those root causes would likely escape the AI, because it relies only on the data it’s given. And that sets up a dangerous self-fulfilling prophecy: once the system labels a group as "struggling," its members may be perceived as less competent no matter how capable they are as individuals.
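To make the failure mode concrete, here is a minimal sketch. The departments, ticket counts, and the prioritize_training function are all invented for illustration; this is not anything Nexthink has shown.

```python
from collections import Counter

# Hypothetical help desk tickets: (department, reported_issue).
# The recorded data captures symptoms only; root causes like
# aging laptops or missing onboarding never appear in a ticket.
tickets = [
    ("Finance", "app_crash"), ("Finance", "app_crash"),
    ("Finance", "slow_login"), ("Sales", "app_crash"),
    ("Finance", "app_crash"),
]

def prioritize_training(tickets):
    """Naive 'AI': rank departments purely by raw ticket volume."""
    counts = Counter(dept for dept, _ in tickets)
    return counts.most_common()

# Finance tops the list and gets flagged as 'struggling', so it is
# assigned remedial software training -- even if the real cause is
# five-year-old hardware. The label then feeds future decisions:
# the self-fulfilling prophecy in action.
print(prioritize_training(tickets))  # [('Finance', 4), ('Sales', 1)]
```

The point isn't that any vendor's model is this crude; it's that any system optimizing over recorded symptoms will inherit whatever the records leave out.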
This question is especially urgent in the EU. While Nexthink showcases its tech, we, as Europeans, need to ask ourselves: are we becoming overly reliant on US-based AI solutions? Are we ceding control of our data and algorithms, undermining the EU’s digital sovereignty objectives in the process? We need to foster our own AI innovation to ensure these technologies align with our values and regulations, not just Silicon Valley's.
Job Displacement: The Quiet Revolution
Nexthink promises to "save time" with AI. Let's be honest: that often translates to fewer jobs. Proponents assure us that AI will free workers to focus on higher-value work like creative decision-making. The reality is more complicated. Many current employees lack the skills, or the access to retraining, needed to move into those new roles, and the result is widespread anxiety and displacement.
Consider Nexthink Assist, the AI-powered virtual assistant. It’s meant to take over repetitive, mundane IT tasks so that human IT workers can focus on big-picture, high-level work. But what happens to the IT staff who used to handle all that basic work? Are they retrained and redeployed, or are they simply let go? It’s a question that deserves to be asked and answered decisively, not deflected with vague assurances of improved efficiency.
The Erosion of Human Connection
DEX is about experience. Human experience. But what does it mean to have that experience mediated by algorithms? Constant monitoring and "optimization" by AI can feel like being watched and micromanaged, which breeds distrust and erodes autonomy.
Think about it: if an AI is constantly tracking your productivity, identifying areas for improvement, and even suggesting specific actions, you might feel like you're working for a robot, not a human manager. That erodes morale and feeds burnout and alienation. Ultimately, we need to remember that technology exists to further human connection, not replace it. Empathy can't be algorithmized.
Security Risks: Data Is The New Oil
Running sensitive employee data through an AI system is a major security concern. Plain and simple. What measures are being taken against a potential data breach or intrusion? How is Nexthink making sure that employee data is stored and handled securely to comply with GDPR and other privacy regulations?
As we’ve witnessed time and again over the last few years, the effects of a data breach can be dire, and the more information we feed into AI systems, the more damage a single breach can do. So let’s hold AI providers to a higher standard of transparency and accountability. While pushing the envelope on innovation, we need to double down on the safeguards that protect our data.
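What might "doubling down on safeguards" look like in practice? Here is one illustrative measure, a minimal sketch that assumes nothing about Nexthink's actual architecture: pseudonymize employee identifiers with a keyed hash before any telemetry leaves your perimeter, so an external AI service never sees real identities.

```python
import hashlib
import hmac

# Hypothetical safeguard (not a description of any vendor's pipeline):
# replace employee identifiers with keyed pseudonyms before telemetry
# is handed to an external AI service. The secret key never leaves
# your perimeter, so the vendor cannot recover real identities.
SECRET_KEY = b"rotate-me-and-keep-me-in-a-vault"

def pseudonymize(employee_id: str) -> str:
    """Deterministic keyed hash: stable per employee, unrecoverable
    without the key (HMAC-SHA256, truncated for readability)."""
    digest = hmac.new(SECRET_KEY, employee_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {
    "employee": pseudonymize("jane.doe@example.eu"),
    "department": "Finance",
    "ticket": "slow_login",
}
print(record)  # identity is masked before the data leaves the building
```

Techniques like this don't eliminate risk, but they shrink the blast radius of a breach, which is exactly the standard we should be demanding.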
The Illusion of Perfect Optimization
AI can optimize, but it can't perfect. Put too much faith in it and you end up in a rigid, rules-bound environment that suppresses creativity and independent thinking. Human intuition, judgment, and empathy remain crucial for effective management, now more than ever.
Now, picture an AI that takes all that data and optimizes your meeting schedule for you. It could determine the best time of day to hold a meeting based on overall attendance rates and historical performance. But what if that time is difficult for some employees, or conflicts with other essential obligations? A human manager has the flexibility to weigh those factors and recalibrate in real time. An AI might not. AI can be a powerful tool, but it cannot supplant human judgment.
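A toy sketch makes the point; every name, time, and rate below is invented. A scheduler that maximizes average historical attendance will happily pick a slot that is flatly impossible for individual people, because averages never encode their hard constraints.

```python
# Toy scheduler: pick the slot with the best historical attendance.
attendance_rate = {"09:00": 0.78, "11:00": 0.91, "16:00": 0.84}

# Hard constraints that averaged history never captures: school
# pickups, medical appointments, colleagues in other time zones.
hard_conflicts = {"11:00": ["Amira (school run)", "Ben (physio)"]}

best_slot = max(attendance_rate, key=attendance_rate.get)
print(best_slot)                          # 11:00 -- 'optimal' on average
print(hard_conflicts.get(best_slot, []))  # ...yet impossible for two people
```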
This moment is an ideal opportunity to confront the less visible harms of these technologies and to insist they be deployed responsibly and ethically. Let's not be blinded by the hype: ask the hard questions and hold the vendors accountable. The future of work depends on it, and the clock is ticking. The Solutions Spotlight airs Thursday, August 14th, 2025, at 12:00 PM Eastern Time. Let's show up prepared.