If you've worked with a virtual assistant in the past couple of years, there's a reasonable chance they've used AI tools (ChatGPT, Claude, Gemini, Jasper, or similar) on tasks they've completed for you. The question is whether they told you. In many cases, they didn't. Not necessarily out of malice, but because many VAs treat AI tools as productivity aids, like templates or calculators: just a faster way to do the work. But whether a virtual assistant using AI tools secretly is a problem depends entirely on context: what tasks they're using it for, what data they're feeding into it, whether the output quality is acceptable, and what your agreement specifies. It's a nuanced question, and one that increasingly needs an explicit policy in every VA engagement. This guide helps you think through the key questions, establish a reasonable AI use policy, and decide what level of AI tool usage is acceptable for your specific use cases.
Why VAs Use AI Tools — And Why They Don't Disclose It
Virtual assistants use AI tools primarily to work faster. Writing tasks, email drafting, research summaries, social media captions, data formatting, and content templates — all of these are faster with AI assistance. For a VA juggling multiple clients, AI tools are a significant productivity multiplier.
The reason many VAs don't disclose AI usage comes down to uncertainty about how clients will react:
| VA Concern | Reality |
|---|---|
| "Client will think I'm not earning my rate" | Most clients care about output quality, not method |
| "Client might reduce what they pay me" | Possible if client perceives reduced effort |
| "Client hired me for my expertise, not AI" | Legitimate concern for skill-based roles |
| "I don't know if I'm allowed to use it" | Often there's no policy either way |
| "It might cause confidentiality issues" | This concern is well-founded for some AI tools |
The absence of a clear policy creates the ambiguity that leads to undisclosed usage. When you establish a clear AI use policy, most VAs will follow it — because they actually want clarity too.
Where AI Tool Usage Becomes a Real Problem
Most of the time, a VA using AI to draft a blog post outline or summarize a document is a non-issue. The work gets done, the quality is acceptable, and everyone moves on. But there are specific situations where undisclosed AI tool usage crosses from "minor efficiency trick" to "genuine problem":
Data security and confidentiality: When a VA pastes your client names, financial figures, proprietary processes, or personal data into a public AI tool like ChatGPT, that data may be used to train AI models or be accessed by the tool provider. This can violate your client confidentiality obligations and potentially run afoul of regulatory frameworks like GDPR or HIPAA. A minimal redaction sketch follows these examples.
Accuracy-critical content: AI tools hallucinate, producing confident-sounding statements that are simply wrong. For content where factual accuracy is critical (legal documents, medical information, financial analysis, research reports), using AI output without rigorous fact-checking is a genuine risk.
Brand voice and quality: If you hired the VA specifically for their writing voice or creative skill, and they're delivering AI-generated content that doesn't sound like your brand, the deliverable isn't what you paid for.
Billing transparency: If a VA bills 3 hours for a task that AI completed in 10 minutes with minimal editing, that's a billing integrity issue, closely related to the virtual assistant overcharging hours concern that many clients face.
"I wasn't angry that she used AI — I was angry that she billed four hours for a report that was clearly AI-generated and took maybe 30 minutes of her time. The issue wasn't the tool; it was the billing honesty." — Marketing Director, Financial Services Firm
Building an AI Use Policy for Your VA Engagement
Rather than hoping the issue doesn't come up, address it proactively with a clear, fair AI use policy; a sample clause follows the list. A reasonable policy covers:
- Which tasks AI tools are permitted for: Drafting first-pass content is usually fine; generating factual research reports or legal summaries without disclosure is not
- Data handling rules: No client names, financial data, personal information, or proprietary processes are to be entered into public AI tools
- Disclosure requirement: If AI was used to complete a significant portion of a deliverable, it should be noted in the submission
- Quality responsibility: AI-generated content must be reviewed, edited, and fact-checked by the VA before delivery
- Approved tools: Specify which AI tools are approved (e.g., tools with enterprise privacy agreements) versus prohibited
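Pulled together, those points might read something like the clause below. The wording is illustrative, a starting point to adapt to your engagement rather than ready-made contract language:

```text
AI Tool Use. Contractor may use approved AI tools to draft and format
deliverables, provided that: (a) no Client names, financial data, personal
information, or proprietary processes are entered into any AI tool lacking
an enterprise privacy agreement approved by Client; (b) any deliverable
produced substantially with AI assistance is noted as such on submission;
(c) Contractor reviews, edits, and fact-checks all AI-assisted output
before delivery; and (d) billed time reflects time actually worked.
```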
Include this policy in your VA agreement or onboarding documentation. For more on structuring your VA documentation framework, see our guide on virtual assistant SOP creation.
Does It Actually Matter? A Practical View
For many business owners, the honest answer is: it depends on the task and the output quality. If your VA is using AI to speed up email drafts, format data, or create content outlines — and the output is good and meets your standards — the method matters less than the result. The concern escalates when:
- Confidential data is being exposed to AI systems
- The billing doesn't reflect the actual effort involved
- The quality is AI-generic rather than the specific expertise you hired for
- Accuracy-critical deliverables contain AI hallucinations
Addressing this through a clear policy is far more effective than trying to detect or police undisclosed AI usage after the fact. See our article on hiring a virtual assistant for quality assurance and proofreading for how to build quality review into your workflow regardless of how work is produced.
Ready to Hire?
The question of a virtual assistant using AI tools secretly is best resolved with a clear, upfront policy rather than suspicion. Transparent VAs and transparent expectations make for much better working relationships.
Ready to hire a virtual assistant? Virtual Assistant VA connects you with trained VAs who operate with transparency, follow client-defined AI policies, and maintain strict data confidentiality standards.