Not All AI in Your Credit Union Is the Same

Generative AI on a laptop vs. Embedded AI on servers

Artificial intelligence is now embedded across the financial services landscape.

But one of the most common governance mistakes I see is this:

Leaders talk about “AI” as if it were a single category.

It is not.

There is a meaningful difference between:

  • Staff using generative AI tools like chat platforms
  • Vendors embedding AI inside operational systems

Those two forms of AI require different oversight models, different policies, and different risk controls.

If your credit union is going to adopt AI responsibly, the first step is classification.

Category One: User-Directed Generative AI

This includes platforms where employees intentionally interact with an AI system.

Examples include:

  • ChatGPT Enterprise
  • Claude Teams
  • Microsoft Copilot for Microsoft 365
  • Gemini for Google Workspace Enterprise

In this category, risk is largely driven by human behavior.

An employee decides what to upload.
An employee decides what to prompt.
An employee decides how to use the output.

The governance questions here are typically:

  • What platforms are approved?
  • What data may or may not be uploaded?
  • Is member information prohibited?
  • Is usage logged?
  • Have staff been trained?

This is primarily an internal governance issue.

You manage it through:

  • Acceptable use policies
  • Platform selection
  • Training
  • Administrative controls

If managed intentionally, generative AI can improve productivity and decision support without materially increasing institutional risk.

But unmanaged use—especially through free consumer tools—creates preventable exposure.

Category Two: Embedded AI Systems

The second category looks very different.

This includes AI that operates inside vendor platforms and institutional systems, such as:

  • AI-driven loan decisioning
  • Fraud detection models
  • AI-powered member service chatbots
  • Behavioral analytics
  • Marketing automation algorithms
  • Cybersecurity monitoring tools

In these cases, staff are not deliberately prompting the system or uploading files.

The AI is integrated into the system.

It processes data automatically.

It may influence—or even partially automate—decisions.

The governance model shifts significantly.

The primary risk vector is no longer employee usage behavior.

It is:

  • Vendor risk
  • Model risk
  • Fair lending exposure
  • Explainability
  • Regulatory defensibility

This falls squarely into vendor management and model governance territory.

Why This Distinction Matters

When a leadership team says,
“We already use AI,”
that statement is incomplete.

Are you referring to:

  • Executives drafting policy summaries in an AI chat tool?
    Or
  • A loan origination system using predictive modeling to influence underwriting decisions?

Those are not the same risk category.

They should not be governed the same way.

Treating them as equivalent creates confusion.

Generative AI: Governance Focus

Primary risk areas:

  • Data handling
  • Staff behavior
  • Policy clarity
  • Platform approval
  • Documentation

Oversight tools:

  • AI acceptable use policy
  • Approved platform list
  • Staff training
  • Internal monitoring

Embedded AI: Governance Focus

Primary risk areas:

  • Fair lending implications
  • Model bias
  • Explainability
  • Third-party oversight
  • Regulatory compliance
  • Change management

Oversight tools:

  • Vendor due diligence
  • Model validation
  • Compliance review
  • Ongoing monitoring
  • Board-level visibility

These two AI types belong in different risk conversations.

A Practical Governance Framework

Credit unions should classify AI into at least two categories:

1. User-Directed AI (Generative Tools)
Governed internally through policy and platform approval.

2. Embedded AI (Vendor or System-Integrated Models)
Governed through vendor management and model risk frameworks.

Some institutions may also add a third category:

3. Automated Decisioning AI
Where decisions are materially influenced or partially automated by predictive models.
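
The classification above can be expressed as a simple AI inventory record. This is a hypothetical sketch, not a prescribed tool: the category names follow the framework in this article, but the example systems, owners, and field names are assumptions for illustration only.

```python
from dataclasses import dataclass
from enum import Enum

class AICategory(Enum):
    USER_DIRECTED = "User-Directed AI (Generative Tools)"
    EMBEDDED = "Embedded AI (Vendor or System-Integrated Models)"
    AUTOMATED_DECISIONING = "Automated Decisioning AI"

@dataclass
class AISystem:
    name: str
    category: AICategory
    governed_by: str            # e.g. acceptable use policy vs. vendor management
    influences_decisions: bool  # does it materially shape member outcomes?

# Hypothetical inventory entries for illustration only
inventory = [
    AISystem("ChatGPT Enterprise", AICategory.USER_DIRECTED,
             "AI acceptable use policy", False),
    AISystem("Fraud detection model", AICategory.EMBEDDED,
             "Vendor management", True),
    AISystem("Loan decisioning engine", AICategory.AUTOMATED_DECISIONING,
             "Model risk framework", True),
]

def by_category(systems):
    """Group systems so each category lands in its own risk conversation."""
    grouped = {}
    for s in systems:
        grouped.setdefault(s.category, []).append(s.name)
    return grouped

for cat, names in by_category(inventory).items():
    print(f"{cat.value}: {', '.join(names)}")
```

Even a minimal register like this forces the question the article raises: which oversight model, and which policy, actually owns each system.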

The point is not to complicate AI adoption.

The point is to avoid oversimplifying it.

Governance Before Innovation

AI adoption does not require fear.

It requires clarity.

If your credit union has not formally distinguished between generative AI tools and embedded AI systems, governance conversations will remain vague.

And vague governance creates unnecessary risk.

Innovation without structure is exposure.
Structure without innovation is stagnation.

Credit unions need both.

Ricky Spears

Ricky Spears is Founder and Principal Consultant at CU Logics, advising credit unions on AI strategy, Microsoft 365 architecture, and operational automation. His focus is practical implementation, governance, and systems that staff can actually use.