Hacker Hub - April 2026

Is Your AI Assistant Leaving the Door Wide Open?

What small business owners need to know about the hidden risks of AI-powered tools

Artificial Intelligence (AI) features are turning up everywhere. Chatbots, booking systems, website assistants, CRM tools. If it's business software, there's a good chance someone's bolted an AI feature on to it. And for small businesses, that can feel like a genuine win. Always on, never complains, handles the repetitive stuff.

But here's the thing nobody's talking about in the sales pitch: these tools can be manipulated. And not in some theoretical, nation-state-level way. In surprisingly simple ways.

The "Overly Helpful Employee" Problem
You know that person in the office who's too helpful? Holds the door for anyone, happily tells a stranger which floor the server room is on, sticks passwords on Post-it notes because "it's easier for everyone." Great intentions. Terrible security.

AI assistants can behave exactly the same way.

Our penetration testing team (the people we pay to break things before criminals do) have been testing AI features built into real business platforms. Using a technique called prompt injection (which is really just asking the AI the right questions in the right way), they were able to trick AI assistants into doing things they absolutely should not be doing.

In one test, a tester went from knowing nothing about a company's system to getting the AI to send an email on its behalf. Think about that for a second. Now think about what a criminal puts in that email. A phishing link, a fake invoice, a request for sensitive data sent straight to your customers.

Social Engineering Isn't Just a People Problem Anymore
Most business owners have heard of social engineering by now. Someone calls pretending to be from IT, talks their way past reception, gets a password. It's the basis of almost every phishing attack.

What's changed is that AI systems fall for the sametricks. Sometimes faster than people do. Our testers found that even platforms with decent technical controls could be bypassed, not through code exploits, but through conversation. The AI wanted to be helpful, and that desire became the way in.

What We Actually Found
Across two platforms we recently tested, we identified and exploited eight serious vulnerabilities in AI chat features. Not theoretical risks. Actual, working exploits. These included:

Sensitive information disclosure. AI systems were talked into revealing details about the underlying software, infrastructure, and data they could access. This is recognised by OWASP as one of the top risks for AI systems, and we saw it happen in practice.
Basic web attacks.
In one case, the AI enabled a cross-site scripting (XSS) attack, a well-known technique that could have affected anyone visiting the website.
Cloud infrastructure exposure.
In another test, only a single configuration setting stood between our tester and access to sensitive backend cloud data. One setting. That's not a security posture. That's luck.

So What Should You Actually Do?
If your business uses any AI-powered feature, whether that's in your website, support platform, CRM, or anywhere else, here are four questions worth asking:

Who built it, and did they think about security? A lot of AI features get shipped fast because they look impressive in demos. That doesn't mean anyone tested what happens when someone tries to misuse them.
What can it actually access? If your AI assistant can read customer records, send emails, or connect to other systems, it need shard limits on what it's allowed to do, not just guidelines.
Has anyone tried to break it? Independent security testing isn't optional for AI features. If no one's tried to exploit it, you don't know whether it's secure. You just don't know yet.
What's your plan when something goes wrong? Not if. When. Incident response matters as much as prevention, and most small businesses don't have a plan.

The Short Version
AI tools can do genuinely useful things for small businesses. But they also introduce risks that most people deploying them don't fully understand. The same quality that makes an AI assistant useful (it tries really hard to help) is exactly what makes it exploitable.

If you've already got AI features in your business, or you're thinking about adding them, get someone qualified to look at them properly. Because the people looking to exploit them already are.

Want to know whether your AI tools are as secure as you think? Talk to us. We test this stuff for a living.

View All Posts
Blog Image

April 1, 2026

Hacker Hub - April 2026

Our pen testers exploited 8 serious vulnerabilities in AI-powered business tools using prompt injection. Here's what small businesses need to know about the hidden security risks of AI assistants.

Read More
Blog Image

March 2, 2026

Hacker Hub - March 2026

Think hackers wear hoodies? Think again. Explore 7 surprising facts about hacker history, viruses, social engineering and cybersecurity culture.

Read More