
Establishing AI Governance in a Faith-Based Remote Startup
Internal AI Policy Design for Human Accountability, Ethical Boundaries, and Responsible Adoption
Liana H. Meyer
Independent Researcher, Future Tense
January 2026

[Cover image: AI-generated image created by Liana H. Meyer]
Abstract
In 2025, Marquette L. Walker Ministries (MLWM) and its newly launched nonprofit foundation, Marquette’s Destiny Foundation (MDF), developed and adopted an internal Artificial Intelligence (AI) Use Policy and companion Staff Guide to govern emerging AI use across a fully remote, globally distributed team. As staff and volunteers began experimenting with widely available generative AI tools to support communications, fundraising, and administrative work, leadership recognized the need to establish ethical guardrails before informal practices became embedded. This case study examines how MLWM and MDF designed an internal AI governance framework grounded in human accountability, stewardship, and practical risk containment—without dedicated technical staff or external consultants. Rather than framing AI governance as a technical or compliance exercise, the policy treated AI use as a matter of discernment and responsibility, clarifying approved and prohibited uses while preserving human judgment in all decisions. The resulting materials provided leadership with a durable governance foundation and staff with accessible, plain-language guidance. This case demonstrates how small, mission-driven organizations can adopt AI responsibly while protecting trust, dignity, and organizational integrity.
Keywords: AI governance; responsible AI; nonprofit governance; faith-based organizations; human accountability; AI policy design; mission-driven organizations; ethical AI use; internal governance; remote organizations
Context
Founded by Dr. Marquette L. Walker in North Carolina, Marquette L. Walker Ministries (MLWM) provides faith-based crisis support and spiritual growth resources, with a particular focus on women in crisis. In 2025, Dr. Walker expanded this mission through the launch of Marquette’s Destiny Foundation (MDF), a 501(c)(3) nonprofit serving women, men, and at-risk youth facing challenges such as homelessness and addiction.
Although the organization’s services are grounded in local community engagement, MLWM and MDF operate as a fully remote organization, with leadership, staff, contractors, and volunteers working asynchronously across regions and time zones. Like many early-stage nonprofits, the organization was simultaneously building internal systems, expanding programs, and formalizing governance while relying on a lean team and volunteer capacity.
During this period of growth, interest in AI tools emerged organically among staff. Generative AI was used experimentally to draft written materials, summarize information, brainstorm ideas, and support routine administrative tasks. These tools offered immediate practical benefits, particularly in a low-resource environment without dedicated communications or operations staff.
At the same time, leadership recognized that AI adoption was outpacing shared understanding. There was no formal guidance defining what types of AI use were appropriate, what activities should be restricted, or how responsibility for AI-assisted outputs should be handled. Leadership attention had been focused on programmatic expansion and organizational formation rather than emerging technology governance, a common and understandable prioritization in early nonprofit growth.
This convergence—rapid organizational expansion, distributed work, and informal AI experimentation—created a moment of both opportunity and risk. Leadership identified the need to introduce clear AI governance early, before informal practices became normalized and harder to unwind.
Problem Definition
The central challenge was not whether AI should be used, but how it should be used responsibly within a faith-based, mission-driven organization.
Staff and volunteers were already experimenting with AI tools in good faith, seeking efficiency and clarity in their work. However, without shared boundaries, several risks emerged. Responsibility for AI-assisted outputs was unclear. Practices varied across roles and teams. Sensitive information could be inadvertently shared with public AI systems. There was also the less visible risk of gradual over-reliance on AI outputs, particularly in contexts requiring discernment, care, or ethical judgment.
For a ministry serving vulnerable populations, these risks carried particular weight. Communications needed to be accurate and respectful. Fundraising narratives needed to be truthful and dignified. Programs and pastoral work required relational sensitivity that could not be delegated to automated systems. Leadership also wanted to avoid creating a culture in which AI use felt either mandatory or discouraged through fear. The problem, therefore, was to design an internal AI governance framework that would:
- Clarify accountability without stifling initiative.
- Establish ethical boundaries without technical complexity.
- Protect confidentiality, dignity, and trust.
- Remain accessible to non-technical staff.
- Align with the organization’s faith-based mission and values.
Method & Judgment Applied
To address this challenge, the organization adopted a two-part governance approach: a formal AI Use Policy paired with a concise Staff Guide. These documents were designed to function together, serving different but complementary purposes.
The AI Use Policy was written as a governance instrument. It defined the purpose of AI use within the organization, articulated guiding principles, outlined approved and prohibited uses, and clarified oversight responsibilities. Rather than adopting language from corporate or technical AI policies, the document emphasized stewardship, discernment, and human accountability. AI was framed explicitly as a tool—useful, but never authoritative.
The Staff Guide was developed as a practical translation of the policy. Written in plain language, it focused on everyday decision-making: what AI could be used for, what it must not be used for, and which areas required special care. The guide emphasized a “pause and ask” approach, encouraging staff to seek guidance when uncertain rather than guessing or proceeding silently.
Throughout the drafting process, judgment was exercised in several key ways.
First, scope was intentionally limited to real, current use cases. The policy avoided speculative future scenarios and instead addressed the AI tools staff were already encountering. This made the guidance immediately relevant and reduced abstraction.
Second, accountability was made explicit. The policy stated clearly that responsibility for decisions, communications, and outcomes always rests with people, not systems. AI outputs were designated as advisory only, and human review was required before use.
Third, high-risk domains were identified and treated with additional care. Activities involving pastoral counseling, automated decisions about individuals, profiling or ranking of beneficiaries, surveillance, fabrication of narratives, and handling of sensitive personal data were explicitly prohibited. These boundaries reflected both ethical commitments and practical risk considerations.
Finally, adoption was supported without coercion. The organization did not mandate AI use, nor was AI tied to performance expectations. Staff retained discretion to use or not use AI tools within established boundaries, preserving autonomy and trust.
Ethics & Safeguards
Because AI use touches sensitive domains—including personal information, representation of vulnerable populations, and internal decision-making—the policy prioritized strong ethical safeguards. Confidentiality was treated as a non-negotiable boundary. Donor records, beneficiary information, personnel files, and pastoral communications were explicitly excluded from public AI tools. This safeguard reflected both legal prudence and ethical responsibility.
Human review and verification were required for all AI-assisted outputs. AI suggestions were treated as drafts rather than authoritative answers. Accuracy, tone, and alignment with mission remained human responsibilities.
The policy also addressed dignity and relational integrity. AI was prohibited in pastoral counseling, spiritual direction, and theological discernment, recognizing that these domains require human presence, confidentiality, and moral judgment. Similarly, automated decisions affecting individuals—such as eligibility determinations or prioritization—were restricted to prevent dehumanization or bias.
Transparency and fairness were emphasized throughout. Staff were encouraged to consider how AI-assisted work might affect real people and to watch for bias, exclusion, or distortion. The policy acknowledged that AI systems can reflect underlying biases and that human judgment is essential to mitigate harm.
Governance / Risk Implications
The adoption of the AI Use Policy and Staff Guide significantly strengthened internal governance at a critical stage of organizational growth.
Leadership gained a shared reference point for evaluating AI use across functions. Decisions no longer depended on individual discretion alone. Instead, the policy provided clarity about what was acceptable, what required review, and what was prohibited.
From a risk perspective, the framework reduced exposure to common nonprofit vulnerabilities: accidental disclosure of sensitive information, reputational harm from misleading content, and unexamined reliance on automated outputs. By establishing guardrails early, the organization reduced the likelihood that informal practices would harden into institutional risk.
The two-document structure also proved important. The policy established authority and accountability, while the Staff Guide made governance usable in daily work. Together, the documents supported consistency without requiring technical enforcement mechanisms.
Perhaps most importantly, the governance approach reinforced organizational trust. By framing AI governance as stewardship rather than control, leadership signaled respect for staff judgment while setting clear expectations. This balance supported responsible innovation without fear or coercion.
Outcomes & Findings
The initiative resulted in several tangible outcomes.
- Leadership formally approved and adopted the AI Use Policy as an organization-wide governance framework. The Staff Guide was distributed as a practical reference for staff and volunteers.
- Staff reported increased clarity and confidence around AI use. Knowing both what was permitted and where boundaries existed reduced uncertainty and hesitation. The “pause and ask” norm created space for dialogue rather than silent risk-taking.
- The organization also established a shared language for AI-related questions. Rather than debating tools ad hoc, teams could reference agreed principles and safeguards.
- A key finding was the value of timing. Introducing AI governance early—before widespread dependency formed—made adoption smoother and less disruptive. Staff did not experience the policy as a rollback, but as supportive guidance.
Implications for Practice
This case offers several lessons for faith-based nonprofits, NGOs, and small mission-driven organizations navigating AI adoption under resource constraints. It demonstrates that AI governance does not require technical infrastructure or specialized expertise. What it requires is clarity of responsibility, ethical framing, and accessible guidance.
The case also shows that values-aligned language can function as an operational tool. Framing AI governance in terms of stewardship and accountability increased comprehension and buy-in across roles.
Explicit prohibitions in high-risk areas provided immediate risk reduction without complexity. For small organizations, clear boundaries are often more effective than nuanced controls that are difficult to enforce.
Finally, the case underscores the importance of treating AI governance as an evolving practice. As tools and use cases change, policies must be revisited. Governance maturity lies not in permanence, but in the ability to adapt responsibly.
From Case Insight to Organizational Practice
This case shows how AI governance becomes operational when policy principles are translated into simple, repeatable behaviors. By embedding guidance into daily workflows and communication norms, MLWM/MDF turned abstract guardrails into lived practice.
- Pair policy with a plain-language explainer — A simple 1–2 page companion guide helps staff quickly understand expectations, boundaries, and real-world examples.
- Embed “Human Review First” as a norm — Require human verification before any AI-assisted content is shared or used in decisions.
- Use onboarding to socialize expectations — Introduce AI responsibilities and limits when new staff or volunteers join.
- Create a “Pause and Ask” culture — Normalize seeking guidance when AI use feels uncertain or sensitive.
- Revisit guidance periodically — Schedule lightweight reviews as tools and use cases evolve, reinforcing shared accountability.
Limitations
This case reflects a single organization in an early growth phase. Long-term behavioral impacts were not measured at the time of writing. The framework relies on trust and voluntary compliance rather than technical enforcement, which may not suit all organizational contexts.
As AI tools evolve rapidly, ongoing review and adaptation will be necessary to maintain relevance and effectiveness.
Conclusion
This case demonstrates how a faith-based, mission-driven organization can establish internal AI governance early by treating AI use as a matter of stewardship rather than optimization. Through a clear policy, accessible staff guidance, and explicit ethical boundaries, the organization created a governance framework that protects trust, dignity, and accountability while supporting responsible experimentation.
For small nonprofits navigating AI adoption amid growth and limited resources, this approach offers a practical and replicable model—one grounded not in technical control, but in human judgment, ethical clarity, and organizational integrity.
Citation & Identifiers
Author: Liana H. Meyer
ORCID iD: 0009-0002-4587-8039
DOI: Pending
Version: 1.0 (preprint)
Reviewed for clarity by Dr. Marquette L. Walker. Review does not imply endorsement.

