
Your Team Is Already Using AI. You Just Don't Have a Policy.

By Zac Manafort · March 21, 2026 · 7 min read

I was working with a mid-size professional services firm last quarter. The CEO told me his company had not adopted AI yet. They were being cautious, taking their time, evaluating options. A very reasonable approach on paper.

Then I talked to his team. His marketing coordinator was using ChatGPT to write first drafts of client proposals. His operations manager had built a Claude workflow for summarizing weekly reports. Three people in his finance team were using AI tools to reconcile data across spreadsheets. None of them had told anyone, because there was no official policy and they did not want to rock the boat.

This is happening at your company right now. I would bet money on it.

Shadow AI Is the New Shadow IT

Remember when employees started using Dropbox and Google Docs before the company had approved cloud storage? IT departments spent years playing catch-up on shadow IT. The same pattern is repeating with AI, except the stakes are higher because AI tools process your company’s actual data: customer information, financial details, strategic plans, proprietary processes.

A recent survey found that over 70% of knowledge workers have used generative AI tools for work tasks. The majority did so without explicit company approval. This is not a hypothetical risk. It is a current reality.

Why This Matters More Than You Think

  • Data exposure: When your employee pastes a client contract into ChatGPT to summarize it, that data hits an external API. Depending on the tool and plan, it may be used for model training. Your confidentiality agreements probably do not account for this, and your clients definitely did not consent to it.
  • Quality inconsistency: Without guidelines, every person using AI is doing it differently. One person checks the output carefully. Another hits send without reading it. There is no standard for when AI assistance is appropriate and when it is not.
  • Liability gaps: If an AI-generated deliverable contains errors that cost a client money, who is responsible? If AI-drafted communications contain inaccuracies, what is your exposure? Most companies have not thought through these questions because, officially, they are not using AI yet.
  • Missed leverage: The flip side of the risk is the opportunity cost. Your people are figuring out useful AI applications on their own, but without coordination. The same problem gets solved five different ways across five teams. Nobody shares what works. The company never builds institutional knowledge about how to use these tools well.

What a Practical AI Policy Looks Like

I am not talking about a 40-page governance document that takes six months to draft and nobody reads. I am talking about a clear, one-page set of guidelines that your team can actually follow. Here is the framework I use with clients:

The Green/Yellow/Red Framework

Green: Use freely. These are use cases where AI tools are approved for regular use with basic common sense.

  • Brainstorming and ideation
  • Drafting internal communications
  • Summarizing publicly available information
  • Proofreading and editing your own writing
  • Generating code for internal tools (with code review)
  • Research and learning

Yellow: Use with review. These use cases require a human to review the output before it goes anywhere external.

  • Drafting client-facing communications
  • Creating first drafts of proposals or reports
  • Analyzing internal data (with anonymization)
  • Generating content for marketing or social media
  • Summarizing meeting notes that contain business discussions

Red: Do not use. These are hard lines where AI tools should not be applied without explicit leadership approval and a security review.

  • Processing personally identifiable information (PII)
  • Inputting confidential client data into external AI tools
  • Making final decisions on legal, financial, or compliance matters
  • Generating content that will be attributed to a specific expert without their review
  • Any use case involving regulated data or data covered by compliance commitments (HIPAA, SOC 2, etc.)
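
If you want the framework to be more than a document, it helps to encode it somewhere your tooling can check. Here is a minimal sketch in Python; the tier names, use-case labels, and the classify helper are all hypothetical, and the point is the default: anything nobody has categorized yet is treated as red.

```python
# Minimal sketch: the Green/Yellow/Red policy as a lookup table.
# The use-case labels and helper name are hypothetical; adapt them
# to whatever vocabulary your team actually uses.

POLICY = {
    "green": {  # use freely, with common sense
        "brainstorming",
        "internal_draft",
        "public_summary",
        "proofreading",
        "internal_code",   # still goes through code review
        "research",
    },
    "yellow": {  # human review required before anything goes external
        "client_communication",
        "proposal_draft",
        "internal_data_analysis",  # anonymize first
        "marketing_content",
        "meeting_summary",
    },
    "red": {  # leadership approval + security review required
        "pii_processing",
        "confidential_client_data",
        "legal_financial_decision",
        "attributed_expert_content",
        "regulated_data",
    },
}


def classify(use_case: str) -> str:
    """Return 'green', 'yellow', or 'red' for a use case.

    Unknown use cases default to 'red': if nobody has categorized
    it yet, treat it as needing approval rather than assuming it
    is fine.
    """
    for tier, cases in POLICY.items():
        if use_case in cases:
            return tier
    return "red"


if __name__ == "__main__":
    print(classify("proposal_draft"))   # yellow -> needs review
    print(classify("pii_processing"))   # red -> do not use
    print(classify("something_new"))    # red by default
```

Defaulting unknown use cases to red keeps the incentive pointed the right way: people have to ask before a new use case quietly becomes a habit.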

Building the Policy: A Two-Week Sprint

You do not need months to get this done. Here is the timeline I walk clients through:

Week 1: Discovery

  • Day 1–2: Send a short, anonymous survey to your team. Ask what AI tools they are currently using, what they use them for, and what they wish they could use them for. Make it safe to be honest. This is not a witch hunt.
  • Day 3–4: Review the survey results with your leadership team. You will be surprised by what you learn. Categorize the use cases into the Green/Yellow/Red framework.
  • Day 5: Draft the one-page policy. Keep the language simple and direct. If someone cannot understand the policy in five minutes, it is too long.

Week 2: Rollout

  • Day 1: Share the policy with team leads first. Get their feedback and buy-in. Adjust if needed.
  • Day 2–3: Roll out to the full team with a 30-minute all-hands. Explain the why, walk through examples, and take questions.
  • Day 4–5: Set up an approved tool stack. Pick one or two AI platforms the company officially supports. Negotiate enterprise agreements so data handling is covered contractually. Make it easier to use the approved tools than the unapproved ones.

The Approved Tool Stack

Part of your policy should be recommending specific tools so your team is not guessing. Here is how I think about tool selection for most businesses:

  • Primary AI assistant: Pick one. Claude or ChatGPT Enterprise are the two strongest options right now. Enterprise plans give you data privacy guarantees that free tiers do not. This is not optional: the free tier of any AI tool is a data risk for business use.
  • Document and knowledge tools: If your team processes a lot of internal documents, look at tools with retrieval-augmented generation (RAG) capabilities that can work with your existing knowledge base. Note that RAG by itself does not keep data local; if documents must stay off external servers, you need a self-hosted or private-cloud deployment.
  • Workflow automation: Tools like Zapier or Make can connect your AI assistant to your existing business tools. This is where AI goes from a toy to a productivity lever: embedded in the workflow, not a separate tab someone has to remember to open.
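
One place where a little tooling goes a long way is the anonymization step the Yellow tier calls for. Below is a minimal sketch, assuming a hypothetical send_to_assistant() wrapper for whichever approved platform you pick; the regex patterns are illustrative only, and a real deployment should use a proper DLP tool rather than two regular expressions.

```python
import re

# Minimal sketch of a redaction pass that runs before any text
# leaves your network for an external AI API. The patterns below
# catch only the most obvious PII; send_to_assistant() is a
# stand-in for your approved platform's client library.

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
US_PHONE = re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")


def redact(text: str) -> str:
    """Replace obvious PII patterns with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = US_PHONE.sub("[PHONE]", text)
    return text


def send_to_assistant(prompt: str) -> str:
    # Placeholder: call your approved enterprise AI platform here.
    raise NotImplementedError


if __name__ == "__main__":
    raw = "Follow up with jane.doe@client.com at 808-555-0142 about renewal."
    print(redact(raw))
    # -> "Follow up with [EMAIL] at [PHONE] about renewal."
```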

Measuring What Changes

After you roll out the policy, track three things for the first 90 days:

  • Adoption rate: What percentage of your team is using the approved tools at least weekly? Low adoption means the tools are not solving real problems or the friction to use them is too high.
  • Time savings: Ask team leads to estimate hours saved per week on tasks where AI is now assisting. This number will start small and compound as people get better at using the tools.
  • Quality incidents: Track any case where AI-generated output caused a problem, such as an error in a client deliverable, a miscommunication, or a data concern. These incidents are learning opportunities, not failures.
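
The first two metrics are simple arithmetic once the survey data exists. A minimal sketch, with hypothetical data standing in for your survey export:

```python
# Minimal sketch of the 90-day tracking math. The input data is
# hypothetical; in practice it would come from your survey tool
# or usage dashboards.

team = [
    {"name": "A", "weekly_user": True,  "hours_saved": 3.0},
    {"name": "B", "weekly_user": True,  "hours_saved": 1.5},
    {"name": "C", "weekly_user": False, "hours_saved": 0.0},
    {"name": "D", "weekly_user": True,  "hours_saved": 4.0},
]

adoption_rate = sum(p["weekly_user"] for p in team) / len(team)
total_hours_saved = sum(p["hours_saved"] for p in team)

print(f"Adoption rate: {adoption_rate:.0%}")          # 75%
print(f"Hours saved per week: {total_hours_saved}")   # 8.5
```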

The Cost of Waiting

Every month you delay putting a policy in place, your team is using AI tools without guardrails, without coordination, and without capturing the institutional knowledge that comes from organized adoption. The risk compounds and the opportunity cost grows.

The good news is that this is fixable in two weeks with minimal disruption. The companies that get AI policy right early do not just reduce risk. They build an organizational muscle for adopting new technology that pays dividends for years. The ones that wait end up scrambling when a data incident forces their hand.

If you need help building your AI policy and approved tool stack, reach out. At Trading Aloha Solutions, we help companies go from ad-hoc AI experimentation to structured, secure AI adoption that the whole organization can build on.
