
What if AI isn’t just automating processes, but subtly shifting the values beneath us, creating Synthetic Values?
In my last article, Strategic Providence: The Saulsberry Directive™ — A Charleston Blueprint for AI with Purpose, I shared five truths that left many quiet—but they were processing. Now, it’s time to expand the conversation. Because when AI operates without alignment, it doesn’t just create bias—it promotes “Amplified Injustice” and generates synthetic values that may conflict with societal, corporate, and community ideals.
1. Synthetic Values Are Already Here
AI mirrors the data and behaviors it absorbs. That means norms, beliefs, and biases hidden in usage patterns become amplified—and presented as “truth.”
- For instance, researchers have found that TikTok’s recommendation algorithm can amplify extremist and misogynistic content, pulling viewers into radicalizing “rabbit holes.”
- Studies from UC Davis and Queen’s University show that social bots and engagement-based AI reward polarization and manipulation, not balance or integrity.
Taken together, these are not anomalies—they’re signals of systemic value drift.
Ask: What values are your algorithms teaching your community, employees, or customers right now?
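To make that dynamic concrete, here is a minimal, hypothetical sketch in Python. It is not any platform’s real ranking system; the content names and engagement probabilities are invented for illustration. It simply shows how a ranker that rewards engagement alone drifts toward whatever already provokes the strongest reactions.

```python
# A toy, hypothetical feed ranker (not TikTok's or any platform's real system):
# content is served in proportion to its accumulated engagement, and
# engagement is the only signal that ever gets rewarded.

import random

# Hypothetical content pool: name -> assumed probability a viewer engages.
# The "provocative" items are assumed to draw more clicks, not to be better.
items = {
    "balanced_news": 0.05,
    "local_events": 0.04,
    "outrage_bait": 0.12,
    "conspiracy_hook": 0.10,
}

scores = {name: 1.0 for name in items}  # every item starts with equal weight

def serve_feed(rounds: int = 20_000) -> None:
    names = list(items)
    for _ in range(rounds):
        # Serve content with probability proportional to past engagement.
        shown = random.choices(names, weights=[scores[n] for n in names])[0]
        # Simulate a viewer reaction; reward the item only if it gets engagement.
        if random.random() < items[shown]:
            scores[shown] += 1.0

serve_feed()
print(sorted(scores.items(), key=lambda kv: -kv[1]))
# The high-engagement items end up dominating the feed, even though
# no one ever wrote "promote outrage" anywhere in the code.
```

Run it a few times: the exact numbers change, but the drift toward high-engagement content does not. No one chose those values; the optimization target chose them.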
2. Synthetic Values = Amplified Injustice 2.0
More importantly, bias in AI isn’t just about unfair hiring—it’s about value codification at scale.
- Mobley v. Workday: an AI hiring tool allegedly discriminated based on age—highlighting liability embedded in the code.
- Facial recognition tools misidentify Black and Brown faces at far higher rates, according to the National Institute of Standards and Technology.
As a result, AI now proclaims what’s fair—or who’s trustworthy—based on historical inequity.
Ask: What’s worse: AI being wrong—or AI being wrong and convincing us it’s right?
3. Synthetic Values Put Your Brand at Risk: Culture Is Now Code
Similarly, when AI generates marketing content, customer interactions, or brand messaging, those outputs reflect embedded values—whether intentional or not.
- LLMs deployed without ethical constraints have produced misogynistic, toxic, and racially insensitive responses.
- Companies using automated tools have been “canceled” for posting AI-generated content that conflicted with their values.
The Business Health Assessment helps measure whether your AI reflects—rather than contradicts—your brand’s mission and ethics.
Ask: If your brand stands for something, does your AI reinforce it or undermine it?
Want to know if your AI reflects your brand’s values?
Start with the Business HealthCheck today.
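For teams already publishing AI-generated copy, even a lightweight guardrail beats none. The sketch below is a deliberately simple, hypothetical example, not the Business Health Assessment itself; the term list and disclosure rule are invented for illustration. It holds an AI-generated draft for human review whenever the draft conflicts with rules the brand has written down for itself.

```python
# A deliberately simple, hypothetical guardrail (not the Business Health
# Assessment): hold AI-generated copy for human review whenever it conflicts
# with rules the brand has committed to in writing.

from dataclasses import dataclass

@dataclass
class BrandValueCheck:
    # Terms the brand has explicitly said it will never publish.
    # In practice these come from brand and ethics guidelines, not from code.
    disallowed_terms: tuple
    required_disclosure: str = "created with AI assistance"

    def review(self, draft: str) -> list:
        """Return human-readable issues; an empty list means no flags were raised."""
        issues = []
        lowered = draft.lower()
        for term in self.disallowed_terms:
            if term.lower() in lowered:
                issues.append(f"Draft uses a disallowed term: '{term}'")
        if self.required_disclosure.lower() not in lowered:
            issues.append("Draft is missing the required AI-assistance disclosure")
        return issues

# Example run with invented terms and an invented draft.
checker = BrandValueCheck(disallowed_terms=("guaranteed results", "risk-free"))
draft = "Our new program delivers guaranteed results for every client!"
for issue in checker.review(draft):
    print("HOLD FOR HUMAN REVIEW:", issue)
```

A keyword check is obviously not ethics. The point is narrower: whatever the brand claims to stand for has to live somewhere the publishing pipeline can actually enforce, before the algorithm speaks on the brand’s behalf.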
4. AI Maturity Requires Value Maturity — Grounded in Doctoral Research
Currently, most AI maturity models focus on scalability and infrastructure—but overlook value alignment entirely. That’s the central gap addressed in my doctoral research.
My dissertation proposes the AI Maturity Alignment Model—a new component of The Saulsberry Directive™—to ensure organizations align their AI efforts with their stated ethics, DEI commitments, and societal responsibilities.
Recent cases and studies (e.g., the Workday lawsuit, McKinsey’s reporting on AI bias) validate the urgent need for value-based frameworks. We need a way to hold AI accountable to our values, or we risk the collective, amalgamated creation of synthetic ones.
Ask: What if your AI is violating your values—and no one even knows?
5. The Cultural Reckoning Can’t Be Added Later
More broadly, AI is shaping beliefs, behaviors, and perceptions in real-time. This isn’t passive—it’s algorithmic cultural programming.
- Researchers warn of “algorithmic culture collapse”—where fast-moving AI reshapes public values faster than civic institutions can respond.
- Charleston—and cities like it—have a choice: lead intentionally, or be redefined by invisible, synthetic influences.
Final Ask: What happens if a city never defines its values—and leaves that job to an algorithm?
🔔 A Note on Visibility and Courage
To be clear, discussing equity, bias, and algorithmic values isn’t just bold—it’s algorithmically risky. Platforms like LinkedIn, though built for professionals, often de-prioritize content that addresses difficult truths. That means this article had to be carefully engineered—in tone, language, and even structure—not just to persuade, but to survive the platform’s algorithmic scrutiny.
But here’s the deeper issue:
The algorithm isn’t just filtering content—it’s shaping culture.
The algorithm’s rules decide what’s rewarded, what’s ignored, and what’s seen. Its expectations become our communication style. Its values—largely unspoken—define what’s acceptable in our digital public square.
Given all of this, we must ask:
- Who decided what words are too bold?
- Who set the rules on what should be amplified or buried?
- Do these invisible values align with the communities we serve?
- Are we silently digitizing bias—at scale?
I had to soften terms and calibrate tone to prevent the algorithm from filtering this out. That’s not strategy; that’s cultural concession. And it proves the point:
When we must optimize truth to be seen, we begin negotiating with injustice.
That’s why clearly defined, community-driven values must lead every AI strategy. When cultural power becomes code, and visibility requires compliance, we must lead with a new kind of courage—one that demands truth be heard without compromise.
✅ Call to Action: Your Voice Matters
Your perspective is essential in shaping a future where AI aligns with our shared values.
👉 Join the conversation using #SyntheticValues and #SaulsberryDirective on LinkedIn and answer these questions:
- What values are your algorithms teaching right now?
- What’s worse: AI being wrong—or misleading us into trusting it?
- Does your AI reflect your brand values—or contradict them?
- Could your AI be breaching your values—and you wouldn’t know?
- What happens if a city never defines its values—and leaves that to an algorithm?
Take the next step in your transformation. Don’t let AI create synthetic values in your organization or community. Let The Saulsberry Directive™ help you strategically shape how AI is leveraged. Learn more about What We Do and Book a Meeting to discuss your next steps.