Archived: How Trump's AI Policy Promotes Ethnonationalism

This is a simplified archive of the page at https://www.techpolicy.press/how-trumps-ai-policy-promotes-ethnonationalism/

A conversation with George Washington University Law School scholar Spencer Overton about his forthcoming paper, "Ethnonationalism by Algorithm."

Audio of this conversation is available via your favorite podcast service.

In a forthcoming paper, George Washington University Law School scholar Spencer Overton argues that the Trump administration's AI policy is consistent with its broader efforts to advance ethnonationalism. By eliminating policies intended to ensure safeguards against algorithmic bias—and recasting work on such problems as ideological threats to innovation—Trump's policies embed exclusion into the technological infrastructure of the future. As a growing body of research suggests, when AI systems operate without regulation, they default to dominant patterns that reproduce racial inequality and suppress cultural pluralism.

President Donald Trump delivers remarks to the Detroit Economic Club, Tuesday, January 13, 2026, at the Motor City Casino Hotel in Detroit, Michigan. (Official White House photo by Daniel Torok)

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

Good morning. I'm Justin Hendrix, editor of Tech Policy Press. We publish news, analysis, and perspectives on issues at the intersection of tech and democracy.

The chaos and violence unleashed by U.S. immigration authorities in Minnesota in the last six weeks is stunning. The Trump administration has sent thousands of agents to the state, surging into Minneapolis and Saint Paul, where Immigration and Customs Enforcement (ICE) agents are targeting people of color. The state's governor, Tim Walz (D), noted this in a direct appeal to the Trump administration to de-escalate the situation earlier this week.

Minnesota Governor Tim Walz:

My fellow Minnesotans, what's happening in Minnesota right now defies belief. News reports simply don't do justice to the level of chaos and disruption and trauma the federal government is raining down upon our communities. 2,000 to 3,000 armed agents of the federal government have been deployed to Minnesota. Armed, masked, under-trained ICE agents are going door to door, ordering people to point out where their neighbors of color live.

Justin Hendrix:

ICE agents have been recruited with media featuring white nationalist imagery and propaganda echoing Nazi slogans. And this type of material is openly posted by the President and the Department of Homeland Security. But the campaign by ICE is just the most egregious and immediately dangerous example of a broader set of Trump policies and actions that promote white supremacy. An incomplete list would include pardoning the January 6th insurrectionists, threatening predominantly non-white communities with collective punishment and retribution, dismantling diversity, equity, and inclusion programs across the federal government, appointing white nationalists and individuals with documented white supremacist connections to key government positions, freezing federal funding that disproportionately harms minority communities, and generally undermining the rule of law.

Today's guest is legal scholar Spencer Overton, who says the Trump administration's AI policy is another critical arena for its efforts to advance ethnonationalism. By eliminating policies intended to ensure safeguards against algorithmic bias and recasting work on such problems as ideological threats to innovation, these policies encode exclusion into the technological infrastructure of the future while maintaining a veneer of neutrality. As a growing body of research suggests, when AI systems operate without regulation, they default to dominant patterns that reproduce racial inequality and suppress cultural pluralism. Here's Spencer.

Spencer Overton:

My name is Spencer Overton. I am the Patricia Roberts Harris Research Professor of Law at GW Law School. I'm also the faculty director and founder of the Multiracial Democracy Project at GW Law.

Justin Hendrix:

Spencer, I've had the pleasure of having you on this podcast before, and I'm grateful that you've joined us today. We're going to talk about this new paper that you have out, "Ethnonationalism by Algorithm." And I think first we should start with the core concept for any listener who doesn't immediately have a definition of ethnonationalism in mind. What should they know about that term, that concept, and why should we look at the second Trump administration through that lens?

Spencer Overton:

Justin, ethnonationalism is the idea that full belonging in a nation should depend on shared ancestry, culture, and language. In practice, it treats some people and some groups as kind of the real nation and others as outsiders who should assimilate or have fewer rights.

In the United States, we've seen this in our history. We originally had a law restricting naturalized citizenship to white persons of good character. We've restricted voting by race. We've restricted other elements of civic participation by race.

So, that's been our history. But we see this ethnonationalism really resurging right now in particular areas. And it's not just the United States. We think about movements like the AfD in Germany or Le Pen's in France. This has been fueled by demographic change, cultural and economic anxiety, nativism, racial resentment. And in the United States, as you noted with President Trump, the movement has taken the form of an increasingly explicit effort to banish racial diversity from the center of national political life. And we see this in policy on immigration, education, history, and civil rights.

So, that's ethnonationalism in a sense. And we can go into more detail later, but that's the nutshell.

Justin Hendrix:

You write that "In the United States, artificial intelligence policy has become a critical arena for ethnonationalism." Walk us through how AI policy specifically advances an ethnonationalist agenda or can advance an ethnonationalist agenda.

Spencer Overton:

You're right. My core claim is that the second Trump administration is using federal AI governance to advance its ethnonationalist agenda. In a nutshell: repealing many of the protections against AI bias on the first day. Declaring that global AI dominance, not safeguards, is the priority was another step. Certainly, prohibiting the federal government from purchasing AI fine-tuned to reduce bias is another step. And then also deterring states from preventing AI bias, which it did just recently in December by calling out Colorado's algorithmic bias law as a hindrance to innovation. These are some of the core steps taken in the first year to advance ethnonationalism.

I do think, Justin, it's important to take a step back in this moment, around Dr. King's holiday, and just recognize everything he was involved with, not just the great speeches but the core changes that occurred as a result of his important work. The Civil Rights Act of 1964 ended segregation and prohibited employment discrimination. The Voting Rights Act ended discrimination in election rules. The Immigration and Nationality Act of 1965 barred racial discrimination in immigration. And when Dr. King was assassinated, the Fair Housing Act of 1968 was immediately passed and prohibited discrimination in housing opportunities. So, these laws essentially changed our trajectory away from a past that had been very much dominated by exclusion.

Donald Trump, through a variety of means, has dismantled a lot of those protections, from disparate impact to some other things that we'll talk about today. And we often think about immigration, or repealing disparate impact, or attacking diversity, equity, and inclusion, and these other steps. But my claim is that the AI policy is very consistent with those steps and is one of the ways the Trump administration has advanced its anti-diversity agenda.

Justin Hendrix:

Let's talk about that. Let's talk about the four-harms framework, this idea of the specific ways that unregulated AI advances ethnonationalism. You point to bias, homogenization, deception, manipulation. What are you thinking about here with regard to these four harms?

Spencer Overton:

I think most of us are familiar with bias in terms of automated systems producing worse outcomes for certain groups, sometimes because training data reflects unequal history or because proxies for race creep in. Think about facial recognition misidentifying Black women at higher rates, or hiring tools penalizing ethnic-sounding names, or health algorithms underestimating the needs of patients of color. So, we think about those things in terms of bias.

I actually distinguish that from what I call homogenization. I think AI has a pluralism problem. There's a kind of averaging effect, an inability to appreciate pluralism. And that's a problem in our country because it essentially homogenizes outputs and extends what we've seen in our past in terms of conquest in the United States. So, I think that's something distinct that we need to be conscious of. Some computer scientists have started to grapple with it. I don't think political theorists have fully started to appreciate it and its implications for democracy.

Another important harm is deception, with which we're familiar in the context of race. We think about deceptive practices in voting, people not being able to vote because they're deceived. And now deceptive practices extend to deepfakes and that type of thing.

And the fourth harm that I put here is manipulation: the ability to collect data and then provide content that manipulates people and moves them in a particular direction without their awareness, and the impact that can have on culture and communities and autonomy.

So, those are four big harms. I do think, whether it's the Biden administration or... many laws have focused on that first one, bias. Certainly, the EU AI Act prohibits manipulation. But many of these harms aren't fully developed and appreciated.

Justin Hendrix:

So, you lay out specific actions that the Trump administration has taken, which you regard as opening us up, essentially, to these harms to a worse degree. You talk about the rescission of the Biden-era executive orders, and then a number of more affirmative executive orders that the president's made, in particular the July 2025 Preventing Woke AI executive order. Can we talk a little bit about that, how you look at the different actions that the Trump administration's taken over the last year? They're beginning to pile up at this point. It's beginning to get hard to keep them all in your head at once.

Spencer Overton:

That's right. And I think it's been a steady march. Often, Justin, administrations take a different approach, and often administrations repeal past executive orders. I think what's different here is that the Trump administration repealed all of the anti-bias provisions that the Biden administration erected without replacing them. We're talking about testing, impact assessments, requiring agencies to produce guidance aimed at preventing algorithmic discrimination and disparate impacts, pushing agencies to use civil rights tools and consult affected communities, and really thinking about who could be harmed when AI is used in high-stakes settings. I'm not saying that the Biden AI safeguards were perfect, but they acknowledged the potential for harms and they addressed them. And Trump, day one, repealed that without replacing it. So, that was a first step on day one that was problematic.

Just a few days later, this Removing Barriers to American Leadership in Artificial Intelligence EO really focused on global AI dominance and portrayed anti-bias safeguards as ideological restraints on innovation. In other words: "Preventing bias, preventing discrimination, that is ideological. We've got to remove it from the mix. It prevents innovation. It prevents development." That was the approach. And so, it's not just deregulation. It's reframing civil rights as an obstacle.

And a problem with that is that civil rights has really been a bipartisan issue in the past. A lot of senators and House members voted to renew and update the Voting Rights Act of 1965 as recently as 2006. So this move toward the notion that preventing bias and discrimination is somehow ideological or partisan is a problem. It's really no different than saying a law preventing burglary or some other harm is somehow partisan. It's not. But that was the move that was made. And it's a problem domestically, Justin, but it's also a problem overseas. When we talk about exporting our products to other places, and those products are framed in this particular way, in terms of global AI dominance for the United States, other places that want some degree of digital sovereignty are, I think, uncomfortable with that language and with the products that come from it.

And so, those were two actions that were just out of the gate the first week that were problematic. And they really set the stage for what would come later and was more explicit and more forceful in repealing anti-bias protections.

Justin Hendrix:

And we're seeing some of this thinking continue to advance. I saw this speech that Pete Hegseth gave at SpaceX just this week talking about woke AI, this idea that... He says, "The Department of War AI will not be woke. It will work for us." He goes on to say, "We will not employ AI models that won't allow you to fight wars," railing against equitable AI, against DEI, against social justice infusions "that constrain and confuse our employment of this technology." This stuff has really gotten quite deep. I'm not exactly sure what he means by "AI models that won't allow you to fight wars," but in my mind it just goes to the point of how deeply this thinking seems to be ingrained across the administration.

Spencer Overton:

Yeah, I really think it is ingrained. And fast-forward to July, when the president signed the Preventing Woke AI in the Federal Government executive order, which basically framed diversity and bias mitigation as threats to trustworthy AI and imposed principles that are just about truth-seeking and ideological neutrality.

Now, let's go back to one of the key things that pushed that. I think you'll remember an earlier version of Google's Gemini created some images of a Black George Washington, basically a George Washington out of the cast of Hamilton, right? They immediately disabled it and, frankly, fixed their AI. And the reason that guardrail was there is that Google's earlier tools had situations where they were labeling Black people as gorillas. Or you'd do a search for white girls and some young children would come up; you'd do a search for Black girls and it was all XXX. So, their previous tools had some real racial issues. And in an attempt to fix that, they had this little issue of producing a Black George Washington.

And some politicians, frankly, just went ballistic and basically said, "Any attempt to reduce bias in these tools somehow inevitably results in untruthful AI. And therefore, you can't fine-tune, or you shouldn't fine-tune, AI to reduce bias." And the administration basically said, "Hey, the federal government is not going to procure AI that has been fine-tuned to reduce bias and address these issues." And it's a real problem because it chills companies from doing the right thing. That was basically in July. And then we saw a development of that, with OMB and others adopting rules and guidance, in the form of memoranda, to implement it in the federal government.

Justin Hendrix:

One of the things that really strikes me about this bias question particularly, I mean, we have seen evidence that large language models are also, in some cases, biased against conservative political points of view. We had Suresh Venkatasubramanian and Nathan Matias and Emma Pierson on the podcast a few months ago to talk about a letter they wrote trying to essentially defend the scientific consensus on bias and discrimination in AI, the need to study that problem. And one of the points they made is this cuts across political lines. You're looking at this phenomenon, not necessarily to advance a particular political goal, but in order to really understand it, understand it deeply.

Spencer Overton:

And really, Justin, I think that's why some of the... there are some very conservative states that are very much against the executive order that came out in December trying to deter states from regulating AI. Places like Texas and Florida have been very much into regulating platforms. Now, I may disagree with that, or there may be some First Amendment issues at play, but in the absence of federal engagement on issues like bias, et cetera, states of all political stripes have played a role.

And so, this notion that we can have no regulation, and that the federal government is going to penalize states that regulate AI, by withholding broadband funding, or by using its discretion over any discretionary funding from federal agencies and basically saying, "We're going to withhold that money as well," is a real problem. And it's not just a problem for folks of color. It's not just a problem for progressives. It's a problem for conservative states as well.

Justin Hendrix:

So, I do want to raise this disparate impact problem that you mentioned earlier. You emphasize that the administration has worked to eliminate disparate impact analysis. For any listeners who might be unfamiliar with the legal specifics, the legal doctrine here, why is this elimination so significant, particularly when it comes to AI?

Spencer Overton:

With AI, there is generally not an intent. From a legal standpoint, we think about a couple of different types of discrimination. We think about discrimination where there is an intent to be racist or whatever the case is, and that's intentional discrimination. We also, though, think about the scenario of not necessarily having to prove intent, but looking at discriminatory effects or outcomes or impact. The legal test that's been used to detect discrimination in housing and in lending and in a variety of areas has been disparate impact: you look at a practice that has a discriminatory effect, and then you ask, "Is there a legitimate purpose behind that? Yes, it falls differently on different communities, but do we need this? Is there a legitimate reason for it?" And if there is, it's fine; you continue to use it. But if there's no legitimate purpose for the practice, if there's a better way to get good outcomes and be less discriminatory, the notion is you should select that option. That's basically been the law in a variety of areas to prevent discrimination.

One plus of it is that you can look at real effects and real facts. You don't have to read anyone's mind, which is very difficult to do from an evidentiary standpoint. It also doesn't require that you call somebody a racist, right? You don't have to allege that someone meant to be racist. You can just say, "Hey, you may not have realized this, but there's this discriminatory impact. And if there really doesn't need to be, maybe you can adjust your lending practice or housing practice."

The reason this is so important in AI is that, with AI, there generally is not a discriminatory intent. To the extent that there is bias, it may emerge from pattern recognition or other features of these tools. So you really need discriminatory impact as a tool. Chiraag Bains has written extensively about this for both Brookings and the Leadership Conference on Civil and Human Rights and has really dug into it. Chiraag was the deputy director of the Domestic Policy Council in the Biden administration and worked extensively on AI and civil rights issues. But disparate impact is very important. The Trump administration has pulled it back generally and broadly at places like the Department of Justice and the EEOC, the Equal Employment Opportunity Commission. And it also has a real impact in terms of AI and AI regulation, and in not seeing discrimination that occurs as a result of AI.
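To make the test concrete: disparate impact analysis often begins with a simple selection-rate comparison such as the EEOC's informal "four-fifths rule." The sketch below is not from Overton's paper; it only illustrates that arithmetic, and the group labels, sample outcomes, and 0.8 threshold are illustrative assumptions.

```python
from collections import Counter

def selection_rates(outcomes):
    """Per-group selection rates from (group, selected) pairs."""
    totals, hits = Counter(), Counter()
    for group, selected in outcomes:
        totals[group] += 1
        if selected:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

def disparate_impact_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the informal four-fifths rule)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {
        g: {"rate": round(r, 2), "ratio": round(r / top, 2),
            "flagged": r / top < threshold}
        for g, r in rates.items()
    }

# Illustrative hiring outcomes: (group, hired?)
sample = ([("A", True)] * 60 + [("A", False)] * 40 +
          [("B", True)] * 35 + [("B", False)] * 65)
print(disparate_impact_check(sample))
# Group B's rate (0.35) is about 58% of Group A's (0.60), below 0.8, so B
# is flagged; the legal test then asks whether a legitimate purpose
# justifies the practice and whether a less discriminatory option exists.
```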

Justin Hendrix:

Another thing I was reading this week is a piece in Science by Alondra Nelson called "The Mirage of AI Deregulation." It's a really interesting piece. It more or less says that something even more complicated is going on with the Trump administration. If you zoom out and look at things in a fuzzy fashion, you might think the only goal the Trump administration has is just to get government out of the way, deregulate, make sure that there aren't any rules. But what she's suggesting is that something actually more complicated, and, this is my word perhaps, more insidious, is going on, what she calls intensive state intervention through industrial policy, trade restrictions, immigration controls, et cetera. Effectively, the Trump administration is steering the technology in the direction that it wants things to go, setting the terms, including for research, including for the industry.

I hear a little bit of that in what you're saying as well: at a high level, it looks like the removal of constraints and rules, and an effort to limit states' ability to regulate, but what we're actually seeing is not laissez-faire but rather a kind of muscular intervention to make sure AI moves in this ethnonationalist direction.

Spencer Overton:

I'm ashamed I haven't seen the piece, in part because I really follow Alondra's work, and she certainly inspires me. She's got a lot of great insights. And I think there are several examples of this notion that it's not just deregulation: the notion of, "We're going to prevent you, Google, from fine-tuning your product to prevent discrimination," or, "We're going to deter you from doing it by saying we won't buy that product." There are a variety of decisions made by this administration that are not just deregulatory. It is a scenario where we're facilitating bias, where we're favoring particular populations and a particular brand of Americanism.

And a related note: this is a pervasive problem. Customs and Border Protection uses an automated targeting system for risk assessments. TSA uses data-driven pre-screening through Secure Flight. The IRS uses analytics to identify returns for audit. HHS uses tools to detect fraud. DOJ components use facial recognition to generate investigative leads. And the list goes on and on. Thousands of applications have been identified as a result of Biden-era executive orders that required their disclosure. So, promulgating rules that prevent these applications from being developed or fine-tuned to reduce bias is a statement in and of itself. It's a decision about the kind of society we want to be.

Justin Hendrix:

We've seen some reporting this week from 404 Media about the ways that ICE and CBP are using applications that federate data from across the federal government, including one developed in part with Palantir, bringing in location and spatial dimensions and other forms of specific information about people. I don't even know if there's, on some level, time to sit down and go through each of the features of the types of applications that are being used, for instance, in Minnesota, where we're seeing what literally looks like a kind of racist door-to-door campaign to find people suspected of being immigrants. It seems like it's unfolding in real time.

Spencer Overton:

It does. And a note on this that I think we underappreciate is the significance of the Immigration and Nationality Act of 1965. Basically, Justin, in 1960, the country was 15% people of color. Now we're about 40%. A lot changed as a result of that act, because before it, there was extensive discrimination in immigration. There were heavy quotas in favor of Northern and Western Europe, and there were other places you simply could not immigrate from. So, to a certain extent, our policy gerrymandered the population.

And when we talk about race and dialect being used to target and stop people on the street for immigration violations, and when we talk about this kind of nativism and this immigration piece, there is a history there. It's not happening in a vacuum. And when we talk about technology facilitating it, this is not some libertarian notion where we're just going to deregulate because we want growth. We're affirmatively using tools, adopting rules, and making policy decisions in a way that collects data, controls lives, and shapes our population.

Justin Hendrix:

We could spend an entire podcast talking about these issues and, of course, what's happening right now in this country. But I do want to point to the third part of the paper, because despite the moment we're in, you are looking forward. You're trying to lay out a possibility for an act, what you call the Equitable AI Act, a framework built around core democratic principles: fairness, pluralism, authenticity, and autonomy. Let's talk about this. What's the basic idea behind this proposal that you put forward, the Equitable AI Act?

Spencer Overton:

Now, let me just acknowledge that there are some bills that are out there, like the AI Civil Rights Act, that I admire. This bill is certainly inspired by a variety of acts that are out there. I think one different thing about what I propose is I'm really focused on federal government use and procurement of AI in part because that's what is at play now. That's the debate at play now. Now, that doesn't mean that privately used AI shouldn't be regulated. I don't believe that at all. But certainly, the federal government has a responsibility to use AI in a way that serves all Americans in a fair way.

And so, that's the starting point for the Equitable AI Act: some baseline AI obligations to advance democratic values, disparate impact tests, et cetera, but then also some enhanced oversight for high-risk AI use cases. For applications that affect our rights and our opportunities, really ensuring that those are evaluated, that there is some assessment before they are deployed, and ongoing assessment afterward. And then having a real enforcement infrastructure: certainly setting up some federal infrastructure to enforce the law, but also allowing for a private right of action and allowing state attorneys general to enforce it, so that if you have a Department of Justice that's not enforcing it, you have some other enforcement mechanisms.

I think this is important because we don't want to play ping pong. We don't want to just go back and forth and have AI policy and civil rights policy in the future dominated by executive orders. Instead, we want some stability, where there's consistency between administrations in terms of strong civil rights protections.

Justin Hendrix:

Let's talk about a couple of these in particular. I mean, fairness, pluralism, authenticity, autonomy all sound like things that nearly everyone would want on some level. And yet pluralism seems to be the type of thing that some, those, I think, who are behind what you characterize as an ethnonationalist project, would not hold as a value. There's a desire to constrain the plural and limit what we think fits in that concept, but also a desire to effectively leave the door open to using systems that do target people in disparate ways. How do you imagine pluralism being something that the law can support?

Spencer Overton:

Yeah. Here's what's complex about pluralism, and then I'll talk about what I do think we can do. What's complex about regulating pluralism is that my notion of pluralism may be a little different than your notion of pluralism. And basically saying, "We're going to give 10% of whatever to this perspective and 10% to another perspective," I think raises some First Amendment issues, and it may just be unworkable. But here's what we can do: we can utilize these tools. I understand that there are some challenges in terms of language translation, but these tools have a lot of potential with regard to translation and giving people who are limited in their English proficiency an opportunity to participate, and we can actually use them for those purposes. There is a way to test and assess AI to understand whether it is usable and whether it treats people in different communities in a similar fashion.

And so, I think we've got to embrace pluralism. It's really complex, because if you want to include everybody, but you have some people at the table who don't want to include others, it's tough: you want to include those people, but they don't believe in the system as a whole. They want to revert to a notion of conquest that we've seen in past centuries, where there's one way to have a civilization, or one religion, or one belief system, or one way to be an American, et cetera. I don't think we can do as much through regulation with regard to homogenization or pluralism, which are kind of different sides of the same coin, as we can with the conventional bias that we think of, which I talk about in terms of advancing fairness. But I do think we can acknowledge it as an issue and take some baseline steps to advance it and ensure that these tools work for different people from different backgrounds.

I definitely appreciate the complexity, but I would say, Justin, this is the challenge that's before us. It's the challenge that's before many liberal democracies in Europe and other places, which is, how do you have a large population of people with different belief systems and different values and different cultures, but also really respect people so that they can come together and make decisions together and govern and really participate in a nation together? How can you do this without mandating one particular culture or nativist or nationalistic identity set? And I'll acknowledge it's difficult. Danielle Allen has written about having bridging institutions that allow different people from different backgrounds to connect with one another and make decisions.

But at the end of the day, Justin, I think the solution to polarization is not mandating sameness. It's not saying everybody's got to be the same or that everyone has to speak English only, that we're not going to make an attempt to reach out to people who speak different languages so that they can access government resources. I think that our future has to involve figuring out how different people can make decisions together. And technology has got to be an affirmative part of that, as opposed to just an extension of conquest, another tool that we use to advance one belief system or one way of life.

Justin Hendrix:

Well, the other paper I read this week from legal scholars was from Woodrow Hartzog and Jessica Silbey about the effect of AI on institutions. And unfortunately, I'll say that they are also convinced that in many ways, artificial intelligence will weaken institutions, including universities, the free press, many other institutions that kind of serve as, I think, the host of that type of pluralism that you're referring to.

I want to get a little bit into something that I think you've done here, which I appreciate, which is you've included counter-arguments, and then you've essentially gone to bat against those counter-arguments. The first one is one we hear all the time, the idea that AI regulation hampers innovation and US competitiveness against China. You hear that coming through in those comments that Pete Hegseth made, this idea that when we put constraints on artificial intelligence, somehow we're defanging it, we're limiting its awesome power, and that will put us on the back foot. You call this a false choice.

Spencer Overton:

Yeah, I think it's definitely a false choice for a variety of reasons. One is that other nations, and people in other nations, don't necessarily want an AI that's designed to advance American global dominance. They want some degree of digital sovereignty. Another reason is homogenization: making everyone the same and preventing intellectual diversity does not facilitate innovation, and it doesn't facilitate new ideas and growth.

So, I just think this is a narrative that has been put forth, and, I should say, it's a political statement; it doesn't have any real empirical support, and it's not going to allow us to win a trade war with China. Certainly, when we talk about an AI war with China, having a strong university system where people with different perspectives are developing ideas and engaging in research, that's the kind of thing that facilitates growth and innovation. Unfortunately, that's being undermined right now.

Justin Hendrix:

You also point out that your act only covers government AI and federal contractors, and most folks would argue that the harm is really happening in the private sector with commercial systems. How do you counter that?

Spencer Overton:

Yeah. I think we have to start somewhere. And I agree that AI as a whole needs to be and should be regulated. But certainly, ensuring that AI that is used by government, that is procured by government, works for everyone in the country is an important first step. I definitely don't believe it's the final step, but we've got to start somewhere. And this is an important place to start.

Justin Hendrix:

You also anticipate, as you already said, First Amendment challenges: the argument that fairness audits or bias mitigation compel speech. We hear that. How do you respond?

Spencer Overton:

If you look at this constitutionally, it's not as much of a problem for the Constitution as restriction is. Restrictions on speech, for example, are problematic. But when we look at the campaign finance context, for example, disclosure is a classic tool that doesn't raise the same constitutional challenges that restriction does. Now, I'm not saying that disclosure alone is enough, but certainly disclosure, auditing, and authenticity requirements are not going to pose many of the constitutional problems that some other tools will.

I would also say that some people say, "Hey, you're considering race. That's not colorblind. Isn't that an equal protection problem?" And certainly, if you look at the origins of the 14th Amendment, or its evolution, the thought was not that you've got to ignore discrimination or racism in order to move forward and avoid violating the Constitution. Even the Supreme Court, in the recent affirmative action case, didn't say that you can't consider racial disparities and then come up with a race-neutral plan. That would basically be like saying, "Okay, we can't have a top 10% plan that allows the top 10% of every high school's graduates to get into a particular university, because we're doing it to advance racial inclusion." Unfortunately, that's the way this administration thinks, that you just can't think about the effects of bias, but that's definitely not what the Court has said.

Justin Hendrix:

One of the other things you raise, of course, is political feasibility: whether we could envision this passing in Congress in the current environment or anytime soon. In asking you about this, I want to ask about how the ground is shifting. I saw a poll this week, for instance, suggesting that support for ICE is in the basement, has plummeted, that half the country now supports the position that ICE should be abolished, which I think is fascinating: how quickly our viewpoints on this sort of thing can shift.

But I wanted to ask, effectively, thinking about the way that politics might change, and thinking about these real-time examples we're seeing now, the ICE raids using AI systems, discrimination in hiring algorithms, facial recognition misidentifying people of color, I feel like we see a different headline about that every few days, every few weeks: what does the United States look like in five years if we continue on this current trajectory toward ethnonationalism without the types of reforms that you imagine?

Spencer Overton:

We've got to have a vision in terms of the kind of country that we want. A nation where we only let people in from Europe, a nation where we suggest that some people are second-class citizens, that's not the nation Dr. King envisioned. That's not the nation that I'm in favor of.

I also, though, believe that there are these moments. There are these moments like the Edmund Pettus Bridge that led to the Voting Rights Act, these moments like Watergate that led to campaign finance reform and ethics reform, these moments where big policy change occurs because people are shocked. And I'm not a political person. I'm more of a policy person and an academic and a researcher. But I'm committed to this notion of, when those political moments happen, you have to be ready and you have to know what your vision is.

And so, I don't know when this is going to break in terms of ethnonationalism. I don't know when it's going to crack and people are going to feel like, "Hey, we looked at these people as scapegoats and that was a distraction from the real challenges that we have, and it prevented us from doing what we need to do." I just think that when it does happen, we've got to have some vision of, what does AI regulation look like? What's fair? What's inclusive for all Americans? That's got to be at the forefront of our mind. And what is this America that we want? That's a large country that's diverse, that can basically engage with any community in any country around the world, because we've got people from all over the world and we're empowered through that. So, how do we create that world? And what does tech regulation look like in that context? Where do we start?

Justin Hendrix:

My listeners are hearing this on what is Martin Luther King Jr. Day in the United States, a holiday here. You've already reflected on how you think about these issues in the broader trajectory of the civil rights movement in the United States. A lot of folks do feel like a lot of ground has been lost, that things are moving very much in the wrong direction at the moment. They're looking for reasons for hope. It's something to hear you already thinking beyond the current moment and trying to prepare for that moment, the change of politics. I want to ask you, maybe just given the moment, for a reflection on why you wanted to have this conversation so close to this day.

Spencer Overton:

So, certainly, there's his notion that progress does not roll in on the wheels of inevitability, the notion that change is not linear, that the passage of time in and of itself does not mean that we're in a better place, that this is an ongoing struggle.

Most of us are familiar with the fact that after the end of the Civil War and the Reconstruction amendments and Reconstruction legislation, Jim Crow was a response to that. And there were decades, basically, of repression against Black folks in the South and other parts of the country. We know that California voted against the 14th Amendment because it did not want Chinese Americans who were born in the United States to become U.S. citizens. So there are these challenges, where we've got some upswing, but we also have some backsliding.

Justin, as opposed to feeling, "Okay, it's so outdated for folks to be racist," et cetera, I really look at it this way: just as there will always be people who want power, just as there will always be people who commit crimes, there will always be some people who want to acquire political power or influence by pitting groups against one another, by marginalizing particular populations, by saying, "Let's blame all of our problems on this particular group." That will exist. And I think our real question is, what are the institutions that we have, and that we can create, to prevent that going forward?

We're not entitled to a space where everything is equal and everything is fair. Many generations have moved through spaces where there was massive injustice and inequality. And we are in a moment where there are some challenges, and we've got to figure out how to step up to the plate in this moment while also having hope about a moment when we can create this world, this country, this nation that we want to create.

So, I think that on one hand, we don't have to do it all in our generation. We don't have to fix everything. A lot of past generations did a lot of great stuff, but they didn't fix everything. And that's not our task. Our task is really to run our leg of the race with the baton, to do our part in this moment, in this transition to the generations that are coming in the future.

So, I do think this is an important moment in terms of both thinking about the past and reflecting on the past and the progress that was made and thinking about some of the challenges of the moment, but also envisioning this future and the role that technology will play in creating this brighter future.

I am not a pessimist about this. I agree with Alondra. Ruha Benjamin has also said this: our decisions about technology are really just human decisions about who we are going to be, both the laws that regulate the technology and the technology itself. It's not inevitable that the world is going to be in a particular place and that the technology is definitely going to be bad, et cetera. This is about decisions that we make in terms of our law. It's about decisions we make in terms of the technologies that we adopt. And so, how can we be very deliberate about both envisioning the world we want and then adopting both the technologies and the laws regulating those technologies that are going to take us toward that world?

Justin Hendrix:

This paper is called "Ethnonationalism by Algorithm" by Spencer Overton, George Washington University Law School. There'll be a link to it in the show notes. Spencer, thank you so much. And I look forward to speaking to you again.

Spencer Overton:

Justin, thanks for the opportunity. And just really, thanks for your service to the technology community.