Yeah, AI is a tool, just not the tool you think it is—and what’s being peddled to you isn’t really for your benefit. This is something I’ve been learning and that I hope you will learn as well.
So-called artificial intelligence (AI) can be a confusing landscape; AI is really a broad umbrella term that covers machine learning (ML), large language models (LLMs), generative AI and more. AI had been used successfully in research and other scientific endeavors long before ChatGPT and all the rest came barreling into the market. So, as I like to point out when I talk about this stuff, my beef isn’t with AI generally and I’m not a Luddite; my issue is with the way AI has been aggressively mass-marketed through GPT, Claude, Grok, Llama, DALL-E and others to become part of many people’s daily lives and even “replace” some of them by taking away their livelihoods.
As the executive director of a small and struggling non-profit organization, I am bombarded with emails and messages about how using AI in our work could increase our efficiency. For weeks now, I have struggled to juggle an increasing workload because we no longer have an administrative assistant or communications person. Those duties and tasks are now on my plate so that I can keep our program staff paid and our core work flowing. I have to admit that I recently had to push my personal bias against AI aside and look at exactly what the technology might be able to do to assist me. Sure, it could help with internal communications like email, some aspects of fundraising, and external communications like outreach, public relations, and social media, all areas where we desperately need help since we are so short-staffed.
But after sitting with everything that AI could do for our organization, I had to zoom out and think larger than efficiency.
Our organization, Community Change Inc., was founded in 1968, and a core value since our inception has been relational and community building: a real focus on human connection and relationships, alongside our overriding mission of interrupting and dismantling white supremacy.
The ongoing quest to do more and to maximize efficiency is a central tenet of white supremacy culture. It’s a culture that lives on and thrives in part because it doesn’t affirm human worth or dignity in a general sense; it almost solely elevates those who choose to pledge allegiance to the culture of whiteness. Though, as we are starting to see with the MAGA phenomenon and the current iteration of the Trump administration, even selling one’s soul to whiteness is no guarantee that it will affirm you.
Once I zoomed out my lens on AI beyond the most common rebuttals for why AI is bad (massive strains on electrical grids and critical water supplies, plus the theft of other people’s work, being the biggies) and looked at the use of AI as a whole in the marketplace, it became clear to me that as long as I serve as executive director, we will never knowingly use AI in any part of our work, because it ultimately goes against our organizational values. I went so far as to announce this at a recent staff meeting, and the staff agreed with my thoughts, but went further to say they felt it’s at odds with what we are trying to build in our communities and that the use of AI in our work as organizers in this political climate isn’t even safe.
I left that meeting renewed in my spirit but with the gnawing sense that for all the think-pieces and conversations on AI over the last two years, very few people are willing to consider or even name the possibility that AI in the current iteration that is being marketed and forced upon us is actually a tool of white supremacy. A rather successful tool, I might add, as millions become addicted to the possibilities that AI holds.
In today’s world, you rarely need to put on a white sheet and publicly declare who you hate. You can create a technology that makes believers feel they are in control but that is really destroying humanity and taking us further away from any real liberation. Liberation will not come from a data center or a chatbot.
At present, the data centers required to run these technologies are most commonly found in Black, Brown and rural communities. In other words, the data centers are being placed in the communities of people that the folks in charge consider the most disposable, communities where the most impacted are at risk for the greatest harm. The owners of these companies aren’t placing the data centers in their own neighborhoods; instead they choose marginalized communities for these resource hogs, which means greater risk of environmental harms (practically speaking, higher risks of cancer and respiratory illness, on top of water supply issues). Elon Musk put the AI supercomputer facility that powers his AI chatbot Grok in Memphis, Tennessee, in a Black community that was already disproportionately affected by high rates of pollution-related illness. How often do Grok users ever stop to think about that? In fact, how often does anyone who regularly uses any form of AI knowingly stop to think about the impact on others?
Personally, one of the most unnerving things about the rapid rise and acceptance of mass-marketed AI is knowing many people in justice work spaces who are passionate about social change and justice but who think nothing of using AI as a thought-partner or to assist in their work. What are we doing here? How are we working to better the lives of one group of people, often marginalized, by creating more harm to other marginalized people?
Speaking of harm to Black and brown people, generative AI is trained on the work of folks who many years ago, much as I did, created blogs and wrote articles for online platforms. As well-known author and blogger Luvvie recently posted on Threads, “My books were actually in the database used to train how AI writes. So if you ever think my writing sounds like AI, it’s because they used my actual writing to train it. I got here first.”
Luvvie isn’t the first Black author to talk about this. I recently discovered my own work on an AI podcast about Black women. Let me repeat that. There is an AI podcast (meaning no actual humans are involved in the discussion) on Black women that scraped my work for its content. In a world where increasingly Black women and our labor are being devalued, our work being scraped (stolen and plagiarized, really) to create a Black voice and experience without the actual people is repugnant. In 2025, Black women were the largest group in the United States to lose employment under the regime in D.C.
Which brings me to my next point: Have you not noticed how so many of the initial AI images that we see are of Black people? From the cutesy grandparent videos to the sassy Gen Z and every other flavor of Black person. The world of “internet influencers” catapulted a lot of Black people into financial comfort and visibility; now we are starting to see new influencers and content creators emerge who have little to no digital footprint prior to their breakout moments. For those of us in the political and justice spaces, we are realizing that no one knows these “people.” Which begs the question: Are they people at all?
After all, we have now entered the era where AI has become so good so quickly that more and more often we have to ask, “Is that real?” The world has loved the flavor and spirit of Black people, but less so actual Black people. Well, in the era of AI, you can have all the Blackness you want, without us pesky-ass Black people actually involved.
Aside from the disturbing racial (and racist) aspects of AI, there’s the fact that two of the major companies behind the current iteration of generative AI being thrust upon us in this brave new world, Anthropic and OpenAI, are friendly with the Trump administration.
While Anthropic’s CEO, Dario Amodei, did tell the Pentagon that he didn’t want his technology used for the type of dangerous hijinks Pete Hegseth wants, that doesn’t mean the company is a paragon of morality. It’s just that Dario probably knows there are some doors that, once opened, can’t be closed; maybe he just likes living a bit more than some of these deranged tech bros.
But I digress. Here’s the thing: These technologies are operating hand in glove with the administration and boosting its desire for mass surveillance. As the United States openly pushes for “safety,” AI gives an assist with the mass surveillance plans. While you may think nothing of creating plans or whatever with Claude or ChatGPT, you are helping to support the very companies whose products give the illusion of making your life easier while also working with the government to help keep us all unsafe. And you are literally telling them what you are doing. Do you think AI agents are only scraping up copyrighted writings? No, they scrape up anything produced with them.
Right now, people are boycotting Target, Amazon and whoever but a lot of them are using technologies that support companies who actually don’t give a damn about us and whose overarching agenda seems more aligned with bringing widespread techno-fascism. If you work in justice spaces, for example, and you aren’t using Proton Mail instead of Gmail and Signal instead of the SMS system on your phone, that seems counterproductive to your goals and literally unsafe.
I understand that in employment settings, if the boss says you gotta use AI for tasks, you don’t get much wiggle room aside from “fuck you, I quit.” However, you do have a personal choice once you go home. And if you are involved in any type of organizing and activism spaces in the current political climate, you absolutely need to think about not just your safety but the safety of others. Up until a few years ago, generative AI wasn’t a part of our lives and we were just fine; we can choose how to exist in this world. While generative AI can be a valuable tool, for example, for various people with disabilities, those of us who don’t literally need it need to consider the real cost of using these technologies.
Which brings me to another point: as we fall further down the rabbit hole of technology and media, we are increasingly becoming detached from our own humanity. Too many people now believe that community can only exist online, and while there is no doubt that online communities and connections serve a real purpose, people turning to AI for love and companionship is not good. We can’t build a community that nourishes and sustains us with data packets. I am sorry. AI therapists, friends, and partners provide a superficial level of human connection, but they will never replace the human element because we are the original blueprint. AI is trying to be like us, but it can’t be; not really.
If you zoom out and look more expansively and stop focusing on just your immediate wants and desires, you see that the rapid promotion and acceptance of AI into our daily lives helps to ease us into this weird dysmorphic fascism we have moved into. Given that the tech bros and white Christian Nationalists partnered up to give us Trump 2.0, you see how widespread acceptance and use of AI on our parts helps to move them closer to their goals.
What are those goals? Get rid of the pesky people who question things, keep tabs on all of us to keep us “safe,” help distort reality so we are too tired to question everything, keep us disconnected from other humans, keep us “efficient” (because white supremacy likes efficiency more than quality), get you to question everything and be too scared to trust your instincts—oh, and help destroy the planet because that takes away more of us, too.
I have no doubt that a little generative AI could move my workload along, but I am more concerned about what gets lost in my quest to be efficient. Also, will it allow me to keep our staff, donors and colleagues safe, or will it just give away information to the people who want to undermine our efforts? Nah, I will pass.
Critical thinking and looking at every angle is becoming a lost art, and one that our long-term survival may depend on. Don’t outsource your thinking to a data packet that serves a master who doesn’t care about you and who just wants to strip your essence and use it to fuel their quest for unlimited riches.
If this piece resonated with you, I would love it if you would consider buying me a coffee. If you want to access all of my work as soon as it is released, consider becoming a monthly patron, if you aren’t already. I offer my work freely to ensure that it is accessible to all, but if you have the means to support it, please do so. Remember, I do work with groups and organizations; if you want to work with me, please reach out for details.




