Jackson Moore and I recorded a GovCon Intelligence episode on location in Wilmington, North Carolina. We were on a panel there last week, moderated by Sue Kranes at the North Carolina Military Business Center Construction Summit. Jackson is a partner at Smith Anderson in Raleigh and has recently been writing about AI-related developments in GovCon. We talked about AI hallucinations and, more broadly, about how using AI can come back to haunt parties in litigation. Then we turned back to the topic of our panel: the recent executive order on racially discriminatory DEI and IBM’s $17 million settlement.
Links
Jackson Moore at SmithLaw.com
Turning Chats Into Trial Exhibits: Litigation Risks of Generative AI Use (smithlaw.com)
You Can’t Spell Sanction without “A” and “I”: When Unchecked AI Hallucinations Result in Court Sanctions (smithlaw.com)
Whitting v. City of Athens (6th Cir. 2026)
AI Hallucinations database (damiencharlotin.com)
Appeals of Huffman Construction LLC (ASBCA Oct. 23, 2025)
U.S. v. Heppner (S.D.N.Y. Feb. 17, 2026)
Steve Koprince: What aspects of federal contracting AI is most likely to get wrong
Proposed GSA clause 552.239-7001, Basic Safeguarding of Artificial Intelligence Systems
Executive Order on Addressing DEI Discrimination by Federal Contractors
IBM Pays $17 Million to Resolve Allegations of Discrimination Through Illegal DEI Practices (justice.gov)
A transcript follows.
Introduction and Guest Background
Sam: All right. Well, for GovCon Intelligence, we’re reporting live from the North Carolina Military Business Center Summit here in Wilmington, North Carolina. And my guest today is Jackson Moore. Jackson, welcome to GovCon Intelligence.
Jackson: Welcome. It’s a beautiful day here in Wilmington, North Carolina. It’s been a great conference. Glad to be here.
Sam: Thanks very much for being on the show. Wahoowa, by the way. Jackson Moore is a graduate of the University of Virginia undergrad, but graduated from Duke Law School, a rival ACC school, in ‘95. He’s been an attorney at Smith Anderson in Raleigh, North Carolina since 2001, focusing on government contracting and business dispute resolution. Smith Anderson has about 170 attorneys in Raleigh. Just moved into a new building, I hear.
Jackson: Or renovated some existing space, so it’s great.
Sam: That’s terrific. So, a growing firm. Your GovCon practice includes compliance and bid protests before the GAO and the SBA Office of Hearings and Appeals. You work on REAs; here, working with construction contractors, a lot of REAs come up. You also likely have claims in front of the Armed Services Board of Contract Appeals and the CBCA. Drafting agreements, subcontracts, joint ventures, and teaming arrangements to comply with the FAR and North Carolina law. Are you primarily working on federal matters, or on North Carolina and other state work as well?
Jackson: We also work on public-private projects and infrastructure in the state of North Carolina and the Southeast.
Sam: How’s it look here?
Jackson: There’s a lot of stuff being built. There’s a lot of need. There’s a hospital that is going to be built in the Triangle area to deal with mental health. That’s a first for the state, and that’s very exciting. We hope to be able to work on those opportunities, for sure.
Sam: That’s great. That’s kind of fun to walk around and see a building that’s built or a hospital and say, “Oh, I had a little part in that.”
Jackson: Yeah, absolutely.
The Use of AI in Government Contracting Law
Sam: That’s very interesting. Well, I wanted to have you on because you published an article recently about the use of AI, artificial intelligence, in government contracting, and specifically with the practice of law in government contracting. I see these things on LinkedIn every once in a while; AI is going to get rid of these jobs. Coding is one of them. Another one is lawyers. And I see this sometimes with my clients. They look things up on AI ahead of time and they come to us and they say, “Oh, well, this is what Claude told me, or this is what ChatGPT told me,” and I think, “What is my role here now? Am I the lawyer or is ChatGPT the lawyer?” So tell me a bit about how AI is being used in government contracting and specifically in the practice of government contracting legal services.
Jackson: It’s interesting, and I’m a big fan of these AI tools, if you know their limitations and what they can do. I think the primary challenge, Sam, with respect to these AI tools is they’re not searching databases necessarily for information. They’re language completion models, and they’re not thinking the way that you and I are thinking. So if you go ahead and ask a legal question to your AI tool, it will give you an answer and it may look like a very good answer. But it may be spectacularly wrong. One of the things that has come up in a lot of published decisions is where you have parties, whether they’re representing themselves or whether they’re represented by lawyers, presenting cases and arguments to the courts that are fictional. They’re hallucinations, and parties are getting sanctioned; lawyers are getting sanctioned.
There’s a decision recently from the Sixth Circuit Court of Appeals that sanctioned a party tens of thousands of dollars because multiple fake cases were presented as real cases to the court. So there are risks there, and parties who are especially thinking about presenting information or arguments in bid protests or claims to the federal government really need to scrub their information and make sure that they are citing accurate stuff to the court. Otherwise, you may wind up in the same position as those parties in the Sixth Circuit of being sanctioned by the court or the tribunal.
Sam: So is this primarily companies that are representing themselves in front of the court pro se, or are they represented by attorneys as well?
Jackson: It’s a mix. Part of the reason why lawyers are getting sanctioned representing parties is they don’t understand what the tool is doing. It will give you an answer, and you just need to go back and check and make sure that the answer is accurate, that the case exists, and that the argument presented by the AI tool exists. So it’s a mix of people who are pro se with no lawyers and people who are represented by barred attorneys. You just need to be careful about that. There’s a professor in France, of all places, who’s been gathering and keeping track of all these instances where a court has pointed out or sanctioned somebody for citing to a hallucinated case. I think he’s up to 1,200 published decisions.
Sam: Oh, wow.
Jackson: Mostly in the US.
Sam: I think I saw the Armed Services Board of Contract Appeals was very harsh against one of these representations and briefs in front of the ASBCA.
Jackson: Yeah, and it’s the kind of blocking and tackling that you would expect a party, and especially an attorney, to be checking. Because if you’re going to go ahead and present something to the board, or the court, or the agency, or the GAO, you’ve got to make sure that it is accurate. If not, you’re making a misrepresentation to the court. If you’re not checking it, then you’re just not doing the baseline work that you need to be doing.
Sam: That’s interesting, because contractors use case law in front of the BCAs. They use it in front of SBA OHA and GAO, but they also sometimes use case law when they’re just corresponding with agencies and contracting officers. Take limitations on subcontracting as an example. They say, “You may be interpreting limitations on subcontracting this way, but there’s also this case at the GAO or OHA that says this.” And if the contractor is using AI for that, they may be getting those cases completely wrong, and that really kills their credibility as well. So you need this verification at multiple stages in the process.
Jackson: I think anytime you’re going to tell a governmental agency, “I should win because of X, or this case says X,” you need to go back to the case and just make sure that the case exists. A lot of these cases are instances where the AI tool is designed to give you an answer. It’s going to come up with an answer. It may give you a beautifully looking answer as far as cases and citations, but it doesn’t exist. You just need to go back and make sure it exists, make sure it’s correct, and then you can go ahead and proceed and present it to whoever you’re speaking to. Because you lose credibility and you could be sanctioned.
Sam: Yeah, I saw that all the time when I was counsel at the SBA. People presented arguments to me trying to intervene in a case, and they cited cases, but it’s not before a forum, so it’s not like I can sanction them. It does hurt your credibility, though, and I think, “Oh, you’re not serious about this, or you haven’t really looked into this.”
Confidentiality and Data Privacy with AI Tools
Sam: On the other side of it, there may be contractors that are using AI for preparing their proposals or preparing for some side of negotiation or litigation, and they’re putting information into AI tools. It could be specialized AI tools, or it could be Claude or ChatGPT consumer-based tools that everybody has access to. What are you advising clients about putting their information into AI tools?
Jackson: Well, let’s start with their own information because they can do whatever they want with it, theoretically. If they’re taking somebody else’s information, then there may be additional problems. But let’s talk first about clients taking their own data and sharing it with an AI tool. I think the first thing that clients need to be looking at before they share anything with the AI tool is, “If this gets publicly released or is used by the AI tool, do I care?” If you’re taking confidential pricing data and sharing it with a third-party tool, what does my agreement with the tool say? Is Claude or ChatGPT going to be able to use my data to train its models? Are there disclosure rights for the party receiving your data? If so, then you need to understand that you’re potentially waiving confidentiality that would otherwise apply to this information.
There’s a case that is not exactly on point but serves as a warning: a criminal matter pending in the Southern District of New York, an important federal court, where the judge issued an order requiring the criminal defendant’s chats with Claude to be turned over to the Department of Justice. Now, the questions he was asking Claude related directly to his lawsuit: “What defense can I raise in connection with this securities fraud matter?” So you potentially lose confidentiality regarding that information. One thing we’re cautioning clients about is that if they’re taking trade secrets or attorney-client information, they should be very careful about sharing it with these tools, because they may no longer have the attorney-client privilege or work product protection that might otherwise apply to that data.
As far as third-party information, what if an agency produces an agency record in connection with a bid protest? Same problems, probably even more so. You’ve got protective orders that vary by court, judge, jurisdiction, and the sophistication of the agency or tribunal. If you go ahead and put that kind of information into a publicly facing generative AI model, you may have breached the protective order that applies to that material because you have now given that information to a third party.
Sam: I wonder if courts are going to start mentioning that in their protective orders, that this does not just apply to your organization, but also to the use of AI tools.
Jackson: There are some cases coming out after this. The decision from the Southern District of New York is called Heppner. One of the other cases we discussed in the alert we issued last week actually disagreed with parts of Heppner, but it also issued a protective order that essentially said: if you have a generative AI model with certain protections, one that does not use the data you upload to train the model or otherwise disclose the information to the AI provider, then you can use generative AI for that purpose.
Sam: Oh, you can? Okay.
Jackson: So they authorize that.
Sam: If you look at the terms and conditions to figure out whether they’re using it for training.
Jackson: That’s right. But this is an evolving area, Sam. Courts are really dealing with this case by case, and you can’t rely on that one instance to say, “Well, I can therefore use my enterprise-grade ChatGPT for purposes that may otherwise violate the rules.” There’s not really any clarity about how courts are going to rule in this space. So the ideal situation from the client perspective is: don’t share with these tools information that you’re not willing to have disclosed to everybody. The protective order issue is fairly new and evolving, and parties need to be very careful to make sure they understand the rule that applies to their case, their protest, their appeal, and so on.
Sam: I know in litigation there was a time when you and I had been practicing law long enough that we realized, “Oh, we should start asking in discovery for text messages.” And asking for social media posts, you know, “What have you put on LinkedIn?” Is it now at the point that people are asking, “What have you put into ChatGPT?”
Jackson: I think the short answer is yes. And if you’re in a lawsuit where that kind of discovery is going to be exchanged, courts and parties are going to have to figure out whether there is any line drawing. The court in Heppner, that criminal case I mentioned earlier, noted that the criminal defendant had to turn over all his Claude exchanges. His lawyer did not advise him to do this; he did it on his own. Whether that means that, if your attorney is advising you regarding these tools, that’s going to protect things and make them work product under those circumstances is, again, not very clear. I don’t think I would rely on that line of reasoning if you’re using a publicly facing consumer generative AI product.
Sam: That would be an odd move for a lawyer to make, saying, “Don’t call me with your question, instead put it into ChatGPT in my direction.”
Jackson: Putting aside the hallucinations, I understand why people want to use these tools. They summarize a lot of data very quickly. It’s like me Googling my symptoms before I go to the doctor to see what ailments I may have, and then talking with the expert about those things and trying to get feedback. The challenge with “my lawyer said I could” is your privacy policy or terms and conditions with that generative AI model, which is also a problem the Heppner court raised. Claude can take the inputs you send it and the outputs it gives you, and it can disclose them to third parties if it wishes. It could disclose them to a government agency, and it could use them to train its models.
The Heppner court said you didn’t have any expectation of confidentiality over this information because your terms and conditions said Claude could pretty much do whatever it wished with the information. That’s a slight overstatement, but you didn’t have any confidentiality expectation over the info, and a lot of people will get going with these tools and they’re not reading the terms and conditions. They’re not considering second and third-order risks that flow out from use. One reason why we wanted to alert clients is that just like I might ask Google or ChatGPT to tell me what ails me, clients are going to want to ask these tools questions to try to get legal answers. And you can expect that in discovery, if there is a later dispute, those searches are at risk of being disclosed.
Sam: It is a bit higher stakes than asking, “Oh, what’s a strange thing on my hand?” Which sometimes you get wrong as well. It’s going to tell you that you might have a 0.1% chance of having cancer, and then you go to the doctor who says, “No, it’s nothing. It’s just a blemish. Don’t worry about it.”
Jackson: That’s true. And one distinguishing feature, if you’re going to use these models, is that you should take what they give you to your attorney and get advice from somebody who is an expert in the jurisdiction and the area of law at issue. The challenge with a lot of these models is that they’re drawing on an extremely broad body of data to assemble an answer. They’re not looking at a legal library. If you’re in North Carolina, where we’re sitting, they’re not looking at North Carolina law. So you’re not really sure what they’re using to assemble the answer. Again, it’s not really researching, because that’s not what these tools do.
Sam: Yeah. It’s like, as you mentioned, Googling something; it doesn’t have the specialized knowledge. Steve Koprince put something out on Substack and LinkedIn this week regarding the areas that ChatGPT or Claude would be most likely to get wrong in government contracting. It’s those sorts of things. It’s in the detail. The other part of it is, because it’s searching on a broad base of information, it doesn’t necessarily know what the most recent information is.
Jackson: That’s right.
Sam: So thresholds change all the time. We were just talking in our session about the SBA’s new recertification rule or the FAR overhaul. It’s probably not going to give you great information on the FAR overhaul because the information has just come out in the last six months. Whereas it might have lots of information about the legacy FAR, it’s not going to have the most up-to-date information, so that may be a reason as well to be suspicious of what you’re getting from AI.
Jackson: Because it’s a language prediction machine. Again, that’s an oversimplification, but it’s trying to build an answer based on what the most likely next word would be. And if you have, say, 20 years of decisions or FAR clauses to rely upon, there is a risk that the model is going to lean on the material it has more volume of, as opposed to what is most accurate because it’s more recent. Like you say, if the simplified acquisition threshold (SAT) changes, the model may be looking at an old SAT because there’s just more data for it to look at. It’s going to give you the lower number, or an inaccurate number, instead of the most recent one, because there’s simply less material for it to draw on to build its prediction.
Sam: Maybe they solved this by now, but around January, everybody was asking, “Who won the last Super Bowl?” And then it would go back and say, “The New England Patriots,” which I guess they were the Super Bowl champions then, but they did not win the last Super Bowl if you were following that.
The Proposed GSA AI Acquisition Clause
Sam: I wanted to ask you about the GSA’s proposal of an AI clause, GSAR clause 552.239-7001, about the acquisition of AI. This is for companies that may be using AI in the systems they’re proposing to the government or in their deliverables. The GSA initially had a very quick turnaround for comments on this clause. They put it online for something like 10 days; they later extended the comment period, but it is closed now. The clause would give broad IP rights to the government over custom developments. It prohibits the use of government data to train models (you mentioned training) and mandates the exclusive use of American AI systems.
It also has a term on unbiased AI principles, which has provisions allowing the government to conduct assessments and suspend or terminate systems that include ideological or partisan judgments. So a question for you first on the IP terms. This would prohibit contractors from using government data to train their commercial models and grant the government full ownership over custom developments. Was that surprising to you as far as IP terms go, and what do you foresee? Do you foresee companies trying to push back on that? Is that going to push companies out of the market because they’re uncomfortable giving up so much IP to the government for that GSA clause?
Jackson: You know, on the one hand, what if the government data being utilized is already what we’ll call public information? There’s plenty of data published or issued by the Census, for example. Are there any boundaries on what counts as use of government data for training purposes under this clause? I don’t know that we have good boundaries for that. As for the government’s ownership of custom developments, think about the data rights the government already gets in work that is paid for with government money. You have certain data rights, and you need to start making declarations and markings if you want to contract with the government and don’t want the government to have unlimited rights in the data, at least in the DoD space. I can certainly see these AI companies being very cautious about providing the government anything other than, “This is our commercial off-the-shelf product. We’re not going to give you any customization,” because they’re worried about the government taking more than the company may wish to give.
Sam: Any other comments on that clause in terms of what might be surprising to companies that are usually working in the commercial market and are now trying to transition to government? What should they be looking for if that clause does eventually end up in contracts?
Jackson: I think there’s the mention of American AI systems, defined in the draft clause as systems developed and produced in the United States, but it doesn’t say what “produced in the United States” means. If you think about how technology companies operate, they may be using global data, foreign employees, or open-source models, where you’re not easily able to discern what counts as domestic versus international. For companies trying to offer AI to the government, that’s going to create uncertainty. Some may say, “We’re just not going to operate in this space because of the uncertainty that surrounds it.”
The other thing that you mentioned, and we had our program earlier with a lot of small businesses here at this conference talking about the DEI executive order—sitting here in April 2026—that was issued last month. The GSA clause talks about AI systems being neutral and nonpartisan and prohibiting the encoding of ideological dogmas such as Diversity, Equity, and Inclusion. I think there’s some uncertainty about what all that means and how you address that from a compliance standpoint. Are you going to present principles to the government in advance? Because the government has the ability to run an automated audit under this clause as well to see whether your output is actually following these requirements. How’s that going to work?
Sam: I think a lot of that comes out of the experience when Google first released Gemini and it would output images that always included people of color; Google, for whatever reason, had built that into the system initially. So there was a big uproar, saying, “Oh, this is woke AI.” But in some ways AI development has gone the other direction, too; there have been notable instances of other AI systems doing things well outside the norm of what you would expect in responses from AI companies.
Jackson: I think that’s a challenge with the AI products generally. We kind of talked a little bit earlier about how the AI model may return an inaccurate response because it has more of a certain type of data. As I understand it, the AI challenges as it relates to imaging is that if it has more of a certain race, gender, or creed of a person, it’s going to be more likely to give an output that follows that issue. So how do you deal with that? I’m not sure the technology experts can really address that fully, and I’m not sure how the government will properly regulate it. It’s unclear to me how that’s going to work.
Sam: That’s interesting because there’s some thought that an AI can be completely neutral, but because it’s training on prior data or prior images in this case, that may not be completely neutral. So that neutrality in whatever concept this is, the non-woke AI, is probably impossible because it’s all based on the prior training data.
Jackson: You mentioned impossible. People have different perspectives on these issues, of course. I expect there’s going to be litigation surrounding all this because if there’s an unresolved dispute, or if there is a statement that it’s going to be X, someone’s going to say, “I don’t think X is legal.” And now the courts are going to have to understand these models a lot better than they do overall to try to provide decisions on whether this clause is going to be enforceable as is. What does it mean to have a neutral AI model? Is that somehow implicating the First Amendment? Because now you’re essentially telling people their content has to be this. The government is paying for it, so that’s a little bit different as well. There are a lot of interesting and unsolved questions, and I don’t know what’s going to happen. It’s going to be interesting.
Implementation of the New DEI Executive Order
Sam: Another part that will be interesting, and we talked a lot about it in this morning’s session, is the implementation of the racially discriminatory DEI executive order. We’re at April 15th or 16th, which means we’re a week or two away from that 30-day mark.
Jackson: It was issued March 26th. Right.
Sam: Yeah. So we’re about 10 days away from the 30-day point of that executive order. What’s going to happen in 30 days? What does that executive order say? Is anything going to happen in 30 days, do you think?
Jackson: I think the executive order speaks to there being a FAR clause that agencies are going to have to start including in existing and new contracts regarding certifying effectively that there is no illegal discriminatory DEI within the company. And then that requirement is going to need to be flowed down by a prime contractor to the subs, all the way down to the last tier.
Sam: Right. There are so many aspects of this executive order. First of all, as you mentioned, “racially discriminatory” puts it outside of sex, gender, and veteran status. And then there’s this flow-down concept: not only do you have to flow down the clause, you also have to report up. The prime contractor has a responsibility to report a subcontractor that violates, or may violate, this prohibition against racially discriminatory DEI. How do you think that’s going to work?
Jackson: Wouldn’t it also obligate the subcontractor to notify if they believe their higher-tier prime is in violation, too? There are a couple of challenges with it. How are you going to ensure you’re properly monitoring if you’re a higher tier, say a large prime with a lot of subcontractors? The executive order speaks of knowing or knowable violations of this clause. Knowing is a standard of intent under the False Claims Act.
Sam: What does reasonably knowable mean?
Jackson: Right. On the one hand, I think the courts have said that you can’t hide your head in the sand and say, “Well, I didn’t know.” But the “reasonably knowable” standard seems a little bit more opaque. I don’t think you have quite as much case authority around it compared to the False Claims Act. I think part of the issues that have come up with respect to the executive order is that it’s using the False Claims Act as an enforcement mechanism. Admittedly, you could do that with a wide variety of provisions that are in the FAR, but the direct statement that the Department of Justice is going to have this almost as a point of emphasis—we’ll have to see what that looks like.
Sam: That opens up a number of different avenues. One is treble damages. When you’re talking about the values of multiple contracts, that could be a lot of money. You’re talking about millions or billions of dollars. And then the other is qui tam cases, too. You can file as a whistleblower or a qui tam case. As a federal contractor, you may be looking at all your former disgruntled employees that may be raising their hands and saying, “Hey, you had a DEI program while I was there, and you should have to pay for that.” It could be a big industry soon.
Jackson: And then attorney’s fees, possibly. And then you have an administrative penalty that’s per violation. Is that going to apply to every invoice submitted while the clause was allegedly being violated? Who knows? We’ll have to see what that looks like. One way we can get some indication of where the Department of Justice might be heading is the recent settlement between the Department of Justice and IBM. IBM agreed to pay $17 million. Of course, it admitted no liability, which is very common in these agreements, and denied any wrongdoing. But the government stated in the settlement agreement what it believed was wrong, and there may be some guidance in there for understanding where the Department of Justice might be heading when it interprets this DEI clause once it ultimately gets issued.
Sam: When the EO first came out, there was talk in the legal community that materiality would be difficult to prove in court, even though the executive order asserts it. Because really, if you performed on the contract, how material was the so-called racially discriminatory DEI to performing on it? But then, of course, you have IBM settling for $17 million. That’s not a small sum.
The quote from the Acting Attorney General Todd Blanche was, “Racial discrimination is illegal, and government contractors cannot evade the law by repackaging it as DEI. The department launched the Civil Rights Fraud Initiative to root out this misconduct, hold offenders accountable, and end this practice for good.” There’s another quote in there that says, “When a company accepts federal funding while engaging in practices that prefer or disadvantage employees on the basis of race or sex, the company is stepping outside the conditions under which the government agreed to contract with them, and we will hold them accountable.” So you see the DOJ making this explicit link between payment on the contract and these DEI programs, and that puts prime contractors, even small businesses, at a lot of risk if they had some of the programs at issue in the IBM case.
Jackson: I think that’s right. If I remember the settlement agreement correctly, and I don’t have it in front of me, I think the government was asserting wrongdoing going back to the beginning of 2019, which of course long predates the executive order anyway. How that’s going to interact is going to depend upon the provision of a FAR clause that we haven’t seen yet and how it’s going to be interpreted and enforced in cases that haven’t been filed yet.
Sam: I want to mention that FAR clause too, because we just went through this whole experience with the FAR overhaul, where everything was issued through class deviations. The FAR Council would put it out on a website, and the GSA would adopt it immediately. Then other agencies would come in, sometimes 30 days later, sometimes even longer, with deviations of their own for their agencies. That was the first time I can recall that being done: a whole new FAR, in this case, rolled out through class deviation before notice and comment. And now you have it again in rolling out this executive order, which specifically tells the FAR Council to implement it within 60 days through a class deviation. You don’t usually see class deviations coming up in an executive order, but this seems like it’s going to be the new normal. If you can get a FAR clause done in 60 days, why wouldn’t you try to do that? But that does put pressure on your clients, on industry, to shift very quickly.
Jackson: Yeah, no, I think that’s right. And we’ll have to see the additional background that was submitted in connection with the clause itself when it winds up getting issued. I mean, do you think that kind of flows out from the UNC/Harvard admissions case? They’re essentially saying, “Well, this is now Supreme Court authority, and we’re simply enforcing it.” Is that sort of the support, do you think?
Sam: I’m sure that’s part of it. There are also elements of that case that are not recognized in this Civil Rights Fraud Initiative. At the end of that case, it says there are instances where affirmative action could be accepted, and it gives the example, in a footnote, of the military academies. I haven’t seen that come up in new cases. It might come up in one of these cases as a defense: “Well, the Supreme Court said it’s not always illegal. There are some cases where it’s acceptable.” So I could see that footnote taking on big importance in the next few years if the Civil Rights Fraud Initiative puts more companies under investigation, potentially even taking them to court under the False Claims Act.
Jackson: Yeah, and you would imagine, as you mentioned, there probably will be whistleblowers that are going to step forward and blow the whistle. There’s a significant financial reward for those who successfully do that. So we’ll just have to see. The challenge always with executive orders is, as you know, every time you have a new administration, you have a certain set of policies. Executive orders that were in place get reversed, other executive orders are put in their place, and the pendulum just kind of swings from one direction to the other.
Sam: And you mentioned earlier today the potential for litigation. Were you referring to litigation specifically about the executive order, or this litigation that might come in from the Justice Department through a False Claims Act case?
Jackson: I guess I was thinking primarily about the language of the FAR clause and the enforceability of the executive order. And the downstream effects when either the DOJ pursues matters or agencies are supposed to be terminating contracts if there are violations. There are going to be claims perhaps saying, “Well, we either complied, or the FAR clause was not enforceable or illegal, or I wasn’t given an opportunity to address matters.” If you can tighten up in the clause some of the language about, for example, program participation, maybe some of the uncertainties can be lessened so that the parties can at least know, “Well, we’re really only fighting about this narrow band of conduct instead of something broader.” But I would expect by the time the FAR clause gets issued, that there will be some parties who are not going to be happy about it and that they’re going to try to take court action of some kind.
Sam: Okay. Well, that’ll kick off in a matter of 70 or so days now.
Jackson: Indeed.
Sam: Alright, 60. So yeah, we’re talking about 40 days actually. That’ll be coming up with this new FAR clause. Jackson, how do people find you?
Jackson: Oh, you find me at smithlaw.com. That’s the website for our firm, Smith Anderson. We’re in Raleigh, with 170 or so lawyers representing clients around the country and around the world.
Sam: Well, thanks so much for joining us. Thanks for presenting at this conference with me again. It’s great to see you.
Jackson: Thank you.
Sam: Thanks everybody.
Jackson: Bye now.
With 20 years of Federal legal experience, Sam Le counsels small businesses through government contracting matters, including bid protests, contract compliance, small business certifications, and procurement disputes. Sam obtained his law degree from the University of Virginia and formerly served as SBA’s director of procurement policy. His website is www.samlelaw.com.
This video is for informational purposes only and does not constitute legal advice.