If you hired a VA in the last two years, there is a strong probability they are using AI tools — ChatGPT, Claude, Gemini, Grammarly, or others — as part of their workflow. Many do not disclose this proactively. Should you be concerned?
The honest answer: it depends entirely on what they are using AI for and how.
When AI Use by a VA Is a Feature, Not a Bug
A VA who uses AI to work faster and more accurately is more productive, not less qualified:
- Using ChatGPT to draft an email before personalizing and sending — efficient
- Using Grammarly to catch errors before submitting a document — high quality
- Using AI to research a topic before compiling a summary — faster output
- Using transcription AI (Otter.ai, Descript) to produce a transcript draft before editing — professional
These uses improve output quality and delivery speed. If the VA owns the final output and applies judgment and personalization, AI assistance is a net positive.
When AI Use Becomes a Legitimate Concern
1. The VA Is Passing Off AI Output As Their Own Thinking
If you pay for research and get a raw AI-generated answer with no verification or added context — and the VA bills hours they did not spend thinking — that is deceptive.
What to watch for: Research summaries with no source citations, generic-sounding content without specificity to your actual business, implausibly fast turnarounds on complex research tasks.
2. Confidential Data Is Being Fed Into External AI Tools
This is the most serious concern. If your VA inputs customer data, financial information, proprietary business strategies, or personal information into public AI tools, that data may be retained and used for model training.
What to do: Establish a clear policy on which data categories may not be input into AI tools. Require use of AI tools with enterprise privacy settings (ChatGPT Team, Claude for Work) for sensitive work.
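A policy like this can be backed by a lightweight technical check. The sketch below is a minimal, illustrative Python screen a VA (or their client) could run on a draft before pasting it into a public AI tool. The pattern set and function names are assumptions for illustration; a real data-loss-prevention tool covers far more categories and should be preferred for anything sensitive.

```python
import re

# Illustrative patterns only -- a real DLP tool covers many more
# categories (names, addresses, API keys, account numbers, etc.).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_for_pii(text: str) -> list[str]:
    """Return the PII categories detected in `text`.

    An empty list means no known pattern matched; it does NOT
    guarantee the text is safe to paste into a public AI tool.
    """
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    draft = "Follow up with jane@example.com about the Q3 invoice."
    hits = screen_for_pii(draft)
    if hits:
        print(f"Do not paste into a public AI tool -- found: {hits}")
```

A check like this catches only the obvious leaks; the policy itself (which data categories are off-limits, which enterprise-tier tools are required) remains the real control.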
3. AI-Generated Content Is Submitted Without Quality Review
AI tools hallucinate. They generate confident-sounding incorrect information. A VA who submits AI-generated content without verifying accuracy creates liability.
What to watch for: Facts, statistics, or quotes that cannot be sourced; inconsistencies within a document; generic statements where specific detail was expected.
How to Set an AI Policy for Your VA
A simple AI use policy for VAs includes:
- Permitted uses: List what AI tools may be used for (drafting, editing, transcription, research assistance)
- Prohibited uses: Data that may not be input into AI tools (client PII, financial data, confidential strategies)
- Disclosure requirement: VAs should note when AI was used substantially in a deliverable
- Quality standard: All AI-assisted output must be reviewed and verified by the VA before submission
Share this policy during onboarding and update it as your business and the AI landscape evolve.
The Bigger Picture
The best VAs in 2026 are AI-augmented. They use AI to multiply their output while applying human judgment to ensure accuracy, appropriateness, and alignment with your brand. A VA who refuses to use AI tools is working at a competitive disadvantage.
The question is not "is my VA using AI?" — it is "is my VA using AI responsibly and transparently in ways that serve my business?"
Virtual Assistant VA trains VAs on responsible AI tool use and data privacy practices. Find a candidate prepared to work with modern AI-augmented workflows.