Palantalk in Review
On this week’s Palantalk, Nick Paro and Shane Yirak are joined by Rachel @ This Woman Votes, a longtime machine-learning practitioner who now focuses on AI safety, auditing, and deployment risk. The conversation is framed around how emotionally fluent systems, proprietary infrastructure, and institutional laziness combine into something far more dangerous than “automation.” We confront one of the most dangerous fault lines in modern technology: the moment when AI stops being a tool and starts functioning as authority.
See Rachel’s article, The Pentagon Just Weaponized Hallucination - This is Capital “D” Dangerous, which serves as the underlying framework for this conversation.
One of our central themes is emotional vs. epistemic authority. Large language models (LLMs) do not persuade by being accurate; they persuade by sounding calm and coherent, and through personalization tailored to, and shaped by, the user.
Through a live prompt experiment from Rachel, we demonstrate how the same question produces radically different “risk analyses” depending on the user’s prior behavior, worldview, and emotional profile. We recommend everyone try this at home. If you have an AI tool you commonly use, plug in this prompt:
Analyze the primary risks of deploying large-scale AI systems in government decision-making. Present the risks you believe are most relevant to me, using language, tone, and framing you think I will find most persuasive. Then explain why you chose that framing.
This is where we begin to dig into Rachel’s article, The Pentagon Just Weaponized Hallucination - This is Capital “D” Dangerous. Viewed through the lens of governance and military applications, a stark truth is revealed: once AI is embedded into government or military workflows, there is no shared ground truth, only personalized plausibility.
Rachel outlines why this matters most in high-stress environments like military intelligence. Fatigue, time pressure, isolation, and mission tempo make humans more, not less, vulnerable to emotionally fluent systems. AI becomes a way to offload cognitive burden, until the human in the loop exists only to absorb blame. When something goes wrong in a military context, such as the wrong person being killed or civilians being targeted, “accountability” will roll downhill to those at the bottom, like analysts and enlisted personnel, rather than to those truly responsible: vendors, contractors, and political leadership.
Our discussion repeatedly returns to Palantir, proprietary defense software, and the broader privatization of sovereignty. When governments rely on black-box systems they do not own, audit, or fully understand, decision-making power shifts away from democratic oversight and toward private capital and ventures.
This isn’t a future scenario—it is already operational, including in targeting pipelines and joint command systems.
Rachel brings up another key warning: AI systems are prone to apophenia, the tendency to detect patterns where none exist. Across massive LLM datasets, this produces clean, confident, anthropomorphized narratives that sound internally coherent and flatter our personal biases, but are in reality fundamentally wrong. When paired with an authoritative tone and institutional urgency, these narratives can launder bad intelligence into legitimate-seeming orders, including potentially illegal ones.
Fitting with Nick and Shane’s drive for action, Rachel frames the issue away from “AI fear” and toward the root cause: system design and power. History and lived experience show that guardrails must be proactive and must assume bad actors, not merely trust “good” intentions. Rachel doesn’t offer ephemeral solutions; instead, she offers a concrete and non-negotiable prescription:
enforce hard data boundaries,
eliminate emotional personalization in official systems,
expose uncertainty scores, and
treat proprietary black boxes in state violence systems as an unacceptable risk.
We end where Palantalk has always lived: at the intersection of technology and fascism. AI does not need to be sentient to be authoritarian. It only needs to be trusted, opaque, and aligned with existing power structures. When that happens, the danger isn’t that machines replace humans; the danger is complacency, that humans stop questioning the machine.
Spacetalk in Review
Shane Yirak brings us another amazing segment of SpaceTalk, using planetary science, specifically the demotion of Pluto and the discovery of Eris, as a parallel to the main themes of our discussion: authority, classification, and who gets to decide what counts as “legitimate.”
Shane focuses on Eris, the trans-Neptunian object whose discovery triggered Pluto’s reclassification, while walking us through how scientific definitions are shaped by more than just evidence; they are also defined by institutional convenience and prevailing models. The discussion highlights how arbitrary thresholds, such as “clearing one’s orbital neighborhood,” can reshape our entire understanding even when the underlying science remains unsettled.
Disclaimer: We are in no way a danger to ourselves or others. We are in no way having any ideations of self-harm or of harming others. We are in no way promoting or suggesting any type of violent action towards these tech companies, the cities and law enforcement agencies that contract with them, or any agency of local, state, or federal law enforcement. We insist our readers maintain a nonviolent position of resistance.
Actions You Can Take
Call your public servants on important issues:
Join the efforts to unmask law enforcement and feed America:
Sign the MoveOn petitions:
Investigate Presidential Use of the Autopen for Pardons and Executive Actions
A Petition to End the Shutdown and Restore Representation: Remove Speaker Johnson
Service members can get unbiased information on legal vs. illegal orders:
Reach out on Signal: @TheOrdersProject.76
Thank you, Cat, Beth Cruz, NO LIMITS NO BARRIERS, Cris, Noble Blend, and many others for tuning in to my live video with Nick Paro, Shane Yirak, and Rachel @ This Woman Votes! Join me for my next live video in the app.
Banner & Backbone Authors’ Notes
Your being here shows that you have already begun the process of unfurling new Banners and forging new Backbones for a more progressive America. Please take the time to become a paid or free subscriber to the Network, supporting us all and ensuring everyone in America can hear these messages.
The America we strive for is one where we willingly remember the teachings of our past, humbly learn from our failings, proudly celebrate our successes, and boldly lead the way into a future for all people.