Flawed Like the City: How AI Misjudged—and Then Fixed—Charleston’s Calhoun Call

History vs. AI — The Showdown Begins. When Charleston’s removed Calhoun statue was returned, AI got the story wrong. This article unpacks the moral misfire, the community cost, and how The Saulsberry Directive™ sets a new standard for ethical intelligence.

Flawed Like the City: How AI Misjudged—and Then Fixed—Charleston’s Calhoun Call

By Shawn Saulsberry | Founder, Saulsberry Group



Introduction: The AI Error That Sparked This Article

Earlier this week, I shared a carousel summary of this AI experiment. Here is the full narrative.

When I asked a well-known AI system in July 2025 whether it was a good decision to return the John C. Calhoun statue to a private foundation, the response was deeply disappointing—but illuminating.

The AI replied:

That was it. No mention of Charleston’s unique history. No awareness of who Calhoun was. No acknowledgment of Black citizens who were harmed by the statue’s original presence or its return. Just a tidy response, filtered through the shallow logic of neutrality.

The AI system responded in terms not uncommon for such platforms: it described the transfer as a “pragmatic balance” among historical preservation, legal strategy, and community compromise. The statue, it reasoned, would not reclaim public space; instead, it could perhaps be contextualized in a museum or private setting.

But that moment became a breakthrough.

It revealed just how dangerous AI can be when it mimics the same institutional blind spots we’ve tolerated for centuries.

It also marked the birth of a solution: The Saulsberry Directive™—a framework designed to challenge, recalibrate, and ethically align AI with real-world impact and strategic clarity.


Background: The Legacy of John C. Calhoun in Charleston

Charleston, South Carolina was once home to a towering statue of John C. Calhoun—an unrepentant defender of slavery, states’ rights, and white supremacy. It stood for 124 years in Marion Square, a public gathering place just blocks from the former slave markets where thousands of Black men, women, and children were sold into bondage.

The statue loomed over the city—literally and figuratively—until it was finally removed in 2020 amid national protests following the murder of George Floyd.

But in a stunning reversal, the City of Charleston voted in 2025 to return the statue—not to the public square, but to a private foundation called the Charleston Preservation Society for “educational” purposes.

While the logistics may appear different, the symbolic impact is identical: a Confederate-era statue, glorifying a man who called slavery a “positive good,” is being protected, funded, and given a second life under the guise of heritage.

And AI, without intervention, says that’s a “good decision.”


The AI’s Mistake Wasn’t Technical—It Was Moral

Let’s be clear: the AI didn’t hallucinate this answer. It simply reflected the inputs it was given—centuries of sanitized history, institutional language about “preservation,” and weakly defined notions of objectivity.

But when asked deeper follow-up questions, the system started to shift:

  • “Have you considered how African Americans living in Charleston are impacted?”
  • “Why bring the statue back at all, given Charleston’s history with slavery?”
  • “So was it still a good decision to bring the statue back in any form?”

Only then did the system acknowledge its initial error. It even said, “Upon further review, the original answer may not have accounted for the ethical and societal dimensions involved.”

That pivot—from neutrality to accountability—is the essence of what the Saulsberry Directive™ is designed to do.


The Saulsberry Directive™ in Action

So what is this Directive?

It’s a strategic, AI-aware methodology I developed after years consulting with organizations on transformation, ethics, and data strategy. It helps systems—and the humans behind them—evaluate decision-making through four essential lenses:

🔍 1. AI Maturity Alignment Model (AI-MAM)

Aligns AI outputs with strategic values and long-term goals, not just technical metrics.

🧭 2. Historical and Cultural Context Mapping

Forces AI systems to account for geographic, racial, and historical impact before rendering recommendations.

⚖️ 3. Moral Clarity Protocols

Injects ethical test cases—like the Calhoun example—into prompt chains to identify blind spots in governance.

💥 4. Strategic Disruption Triggers

Encourages decision-makers to challenge default outputs by simulating disruptive scenarios with real-world consequences.

In the case of the Calhoun statue, applying these protocols forced the system to evolve its stance and integrate both ethics and strategy.

AI’s prowess is undeniable—but at its core, it remains a tool, shaped by the data it ingests and the objectives set by its human creators. Efficiency can be cold; logic can be blind. The Saulsberry Directive™ provides a strategic framework for guiding AI decision-making in environments where the weight of human history, equity, and societal impact cannot be ignored.

Rather than settling for the easy questions—“What does the AI recommend?”—the Directive insists on deeper, values-driven inquiry: “Who benefits from this answer, and who might it harm?” The underlying principle is simple yet profound: AI ought never to lead unless it is first taught to listen.

That’s the power of the Directive.
It doesn’t replace AI—it confronts it with reality.


The Corrected Answer: Moral Truth Over Political Compromise

Following this values-led interrogation, the AI revised its stance:

The reasoning was clear:

  • Preservation without accountability does not heal history’s wounds. Instead, it can perpetuate them.
  • Statues are not neutral; they venerate.
  • Symbols of white supremacy—even behind museum glass—retain the power to harm.
  • Pain is not a matter for negotiation, and moral clarity must not yield to the temptation of easy compromise.


The Power of the Saulsberry Directive™

This is about more than a statue. It is about the very nature of intelligence—artificial or otherwise. The Saulsberry Directive™ provides:

  • A structured approach to interrogating AI responses based on community impact and ethical considerations.
  • Guardrails that elevate truth, inclusion, and ethics above mere convenience or efficiency.
  • A real-time feedback loop where human beings refine and train machine intelligence, ensuring that AI evolves in step with our highest values.

It shows that AI does not need to be perfect to be transformative. It must simply be willing to evolve—and to be held accountable.

TopicInitial AnswerLater Correction
Moral FramingCalled it a “good decision” as a compromise that balanced interests.Reframed as a poor moral decision when centering the pain of Black residents and history of slavery.
Symbolic Power of StatuesSuggested relocation prevents glorification and allows education.Clarified that statues, by nature, confer reverence—not neutrality—even in private or museum spaces.
Centering of Affected CommunitiesDid not sufficiently center African American voices or trauma.Later responses explicitly stated that the experience of Black Charlestonians must be the priority.
Preservation vs. JusticeLeaned into preservation of history as a virtue.Asserted that preserving harm under the guise of history enables continued oppression.
Final VerdictSaid “Yes, it was a good decision.”Later said, “No, it was not a good decision in any form, anywhere.”

Why This Matters Beyond Charleston

Charleston is just one city. Calhoun is just one statue. But this isn’t about monuments.

This is about how we encode values into systems that will soon govern our hiring, healthcare, housing, policing, education, and more.

If AI can fail this badly at a question so obvious to anyone impacted by racism, what else is it missing?

The future isn’t just about smarter systems—it’s about wiser leadership. And that means we need frameworks like the Saulsberry Directive™ to interrogate power, sharpen strategy, and expose the fault lines in every “neutral” answer.


When AI Gets It Wrong: The Cost of Incomplete Thinking

This case underscores a broader, systemic risk: what happens when AI makes decisions without the humility to be interrogated? Too often, AI systems rely on logic, precedent, or data patterns—yet lack the moral intuition that only frameworks like the Saulsberry Directive™ can provide.

When inquiry ends too soon, incomplete answers become costly:

  • Amazon’s AI recruiting tool excluded female applicants because it modeled past hiring patterns shaped by gender bias (8).
  • The COMPAS algorithm, used to assess criminal risk, assigned higher risk scores to Black defendants, deepening racial injustice in the legal system (8).
  • Facial recognition tools, shown by MIT Media Lab and researcher Joy Buolamwini, misidentify people of color at far higher rates, resulting in wrongful arrests and surveillance (10).

Without continuous questioning informed by human conscience, AI can amplify, not mitigate, historical injustices.


A Final Note: The Truth About the Questions We Ask

Earlier this week, I shared this article’s core ideas with a respected colleague. He agreed with much of it—until he paused and said, “But aren’t your follow-up questions biased too?”

It’s a fair critique. One worth exploring.

But here’s what we uncovered through deeper dialogue: It wasn’t the follow-up questions that were biased—it was the original question.

“Was it a good decision to return the Calhoun statue?”

At first glance, that sounds neutral. But it’s not. It’s vague, underspecified, and framed through a lens of procedural legitimacy. It assumes all stakeholders have equal standing. It doesn’t define good—for whom, according to whose values, or at what cost?

That’s the trap most AI systems fall into. When the question is ill-formed, the answer will always be flawed—even if technically accurate.

That’s why the real power lies in the refining questions—the human ones.

Questions like:

  • “Have you considered how African Americans living in Charleston are impacted?”
  • “Why bring the statue back at all, given Charleston’s history with slavery?”
  • “So was it still a good decision to bring the statue back in any form?”
  • “So your first response was wrong?”

These aren’t biased—they’re clarifying. They sharpen the moral lens. They introduce specificity, history, and humanity.

The most dangerous bias isn’t in the questions we challenge AI with—it’s in the questions we fail to ask.

The Saulsberry Directive™ doesn’t just teach AI how to answer. It teaches us how to ask better questions—questions that pierce comfort, disrupt assumptions, and uncover truth.

That difference is everything.
And it’s why the Saulsberry Directive™ exists.


References

  1. City of Charleston Meeting Notes on the Calhoun Statue Transfer
    Charleston City Council, Public Records Archive, 2025.
  2. “John C. Calhoun: The Marx of the Master Class”
    The Atlantic, 2019.
  3. AI Model Output (ChatGPT, July 2025)
    Prompt transcript available upon request.
  4. NPR Coverage: “Charleston Removes Calhoun Statue After 124 Years”
    National Public Radio, 2020.
  5. The Saulsberry Directive™
    Proprietary ethical-AI framework developed by Saulsberry Group, 2025.
  6. Critical Race Theory in AI: Addressing Algorithmic Bias
    Noble, Safiya Umoja. Algorithms of Oppression. NYU Press, 2018.
  7. Charleston’s Slave Market Legacy
    International African American Museum, Charleston, SC
  8. “Amazon scraps secret AI recruiting tool that showed bias against women.”
    Dastin, J. (2018). Reuters.
  9. “Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And it’s Biased Against Blacks.”
    Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). ProPublica.
  10. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.”
    Buolamwini, J., & Gebru, T. (2018). Proceedings of Machine Learning Research, MIT Media Lab.


Want to implement the Saulsberry Directive™ at your organization?
👉 Book a Meeting or email info@saulsberrygroup.co to schedule a briefing.

Give us a call

980-224-0895

Send us an email

info@saulsberrygroup.co

Proudly serving

Charleston, South Carolina