Sept. 6, 2025

Ep. 7 Who Wrote That?

What happens when governments let artificial intelligence draft their words? Who’s really behind the keyboard when AI systems churn out emails, press releases, or chatbot responses on behalf of public agencies?

In this episode, Jamie Nixon sits down with journalist Nate Sanford of Cascade PBS to talk about his reporting on how AI is creeping into government communications, and what that means for authorship, accountability, and transparency. From “transitory” excuses to the risk of bot-written bureaucracy, we dig into how AI is reshaping the fight for public records and public trust.

If you’ve ever wondered whether the next government message you read was written by a human or a machine, this episode is for you.

Links to Nate's two-part piece... Part 1 & Part 2

Support the show

Transcript + Source Docs:
Get the full hyperlinked transcript and all documents referenced in this episode:
thepublicrecordsofficer.com

Sign up for updates:
Join our mailing list for future episodes and investigations
thepublicrecordsofficer.com

Support the show:
We’re powered by public records and public support. Buy us a coffee https://coff.ee/thepublicrecordsofficer

About WashCOG:
The Washington Coalition for Open Government (WashCOG) fights for transparency and accountability in Washington State. Learn more:
washcog.org

Follow & Share
X (Twitter): @opengovpod
Instagram: @opengovpod
BlueSky: @thepropodcast.bsky.social

 

Ep. 7 Who Wrote That?

[AI VO] (0:00 - 0:45)

Before we start, a quick heads up. Some of the voices you'll hear reading documents in this podcast are AI-generated, but the words are real. They come straight from public records, produced by real people inside government.

Further, if you're a public employee who's been asked to bend the rules, or if you've seen something that just doesn't sit right, we want to hear from you, confidentially, off the record. Your identity stays with us. You can reach out to us at contact at thepublicrecordsofficer.com.

[AI VO]

You're listening to the Public Records Officer Podcast, where we fight for your right to know. Now, here's your host, Jamie Nixon.

[Nixon] (0:52 - 22:53)

Hello and welcome. This is the Public Records Officer Podcast. I am your host, Jamie Nixon.

Today, we're diving into a story that sits right at the crossroads of government transparency, technology, and trust. Artificial intelligence isn't just reshaping private industry. It's already reshaping how governments here in Washington communicate with the public.

ChatGPT and Microsoft Copilot are showing up in city emails, social media posts, and even policy documents. But what does it mean when words coming from your city hall or your state legislature might've been drafted by an algorithm? I was recently lucky enough to get Nate Sanford on the show to talk about this with me.

Nate is a Murrow News Fellow who reports for Cascade PBS and KNKX. His latest reporting has peeled back the curtain on how Washington cities are experimenting with AI in their official work. It's a conversation about transparency, accountability, and the future of public records to some degree.

I hope you'll find it as interesting and informative as I found Nate's work to be.

Nate Sanford, welcome to the Public Records Officer Podcast. Thank you so much for making some time to talk with me today about this latest piece of yours.

 

[Sanford]

Yeah, thank you for having me.

[Nixon]

I wanted to start first by saying well done. I know that a big piece like that takes a lot of time and patience and work, especially when you're working with public records and you're waiting on them to come back and you're wondering if you've got it yet, you know, that kind of a thing.

So I know these things can take some time. A little background aside: I think what piqued my interest was that, a couple months ago, I started seeing the documents from WaTech regarding the implementation of Copilot into the system for a lot of the state employees. And so I was kind of looking for the right story to do a request for inputs and outputs on an AI system.

And one came a couple months ago and I went ahead and did that request. I'm just now starting to get some documents back. So when I saw yours, I was like, oh, this is great.

This is kind of right up the alley of what I was looking into, from a little bit different angle, but still very similar. What was your motivation? What was it that caused you to say, hey, this is something I think I should look into and that people should know more about?

[Sanford] 

Yeah, I mean, part of it, first of all, I was just mainly curious if it was even possible to get chatbot records through public records requests, because it's a pretty new area of technology. I assumed public records law would apply, but I really wasn't sure. About a year ago, I was looking through MuckRock, the collaborative public records website, and I saw an example where someone had tried requesting ChatGPT logs from a police department somewhere. And so I was just curious to see what would happen if I tried copying that with a bunch of Washington cities.

And I really didn't know how much I would end up getting back, because I feel like there hasn't been a lot of public communication from local governments about the extent to which they are or aren't using AI tools like that. But as the first installments started coming back, it was quite a lot of documents. So I expanded that to a few more cities, really just poking around to see: are governments using this technology?

What are they using it for? And I think the examples have been really interesting so far. I limited it to ChatGPT, just for simplicity.

And it seems like that's the most popular chatbot out there. Like you said, it seems like the state, as well as a lot of local governments, are starting to pivot to Copilot. So I definitely think records requests for Copilot chat logs would be fruitful.

[Nixon]

Yeah. I think, because I've recently done, just last week or so, a request with WaTech. And so I wanted to get a sense of what the guidelines were, what the rules were, so to speak, how they were handling that aspect of it.

So again, when yours came out, it was kind of fascinating for me that people were thinking on the right track at the same time. The reporting shows you've got city officials using ChatGPT for lots of different things: letters from the leadership, social posts, racial equity narratives. What surprised you most in the logs that you ended up getting?

[Sanford]

I was surprised by just the overall volume. I assumed that people had experimented with this and used it a bit, and some staff had only briefly experimented with it and didn't have very long chat records. But it seemed like some people were really using it as part of their almost daily workflows.

And a lot of the records were pretty mundane things, like debugging code or summarizing meeting notes. But it was also being used for pretty substantial policy-related tasks and public communication things. And I was just, I think, shocked by the overall volume and the fact that there hadn't been a ton of public discussion about that.

[Nixon]

Interesting. Yeah. I've noticed that as well.

It seems like, you know, it's an emerging technology. I would expect government to, to at least be studying how to make effective use of it. It seems like a type of technology that could allow for savings in certain areas, as well as, you know, perhaps just greater efficiencies.

You had the story in there about a snowplow complaint. A resident felt kind of dismissed because the reply appeared to come straight from ChatGPT with little to no editing. She complained about snow not being cleared from the streets that she needed, or wanted, or thought should be a priority.

She sends a letter, she gets this reply back. You end up seeing it's probably mostly a ChatGPT-generated response. Do you think that story represents some type of a, I don't know, a broader risk, like losing a human connection to government of some kind?

[Sanford] 

Yeah, I definitely think there's kind of a public trust thing there. And I have, you know, it's such a kind of emerging thing that everyone's grappling with. I think everyone has different opinions about the extent to which that's okay.

Like, I'm sure some people don't have any problem with that. Right. Because the content of the city's response, it was factually accurate.

And I don't necessarily think it was that, that different from probably what a human would have written anyway. It was kind of just a very brief, thanks for, you know, we hear your concern, we're sorry that you had a bad experience driving. Well, you know, we're trying our best to address it.

Like it was, it was pretty generic.

[Nixon]

Having worked in communication shops as a public information officer for agencies, it sounds like boilerplate stuff that a lot of agencies do, I mean, through human means. Yeah.

[Sanford]

Yeah, exactly. So in that sense, it wasn't that different from what a human would have written. But still, the person I talked to, when I told them it was AI-generated, they were kind of upset to hear that, just because it felt a little dismissive.

It felt, I think a lot of people find it just kind of inauthentic. And there's sort of the question of like, did they even actually read my email? Right.

Or like, is anyone actually seriously considering my concerns if they're just pasting it into a chat bot and asking it for a response? So I think different people have different opinions about that. But I think there's certainly a risk of public trust in that, in that aspect.

[Nixon]

I hadn't really thought of the input on that side until you just mentioned it. So authorship is a big concern of mine here. Who generated the words?

Seems to be an important question for some people here. How you're taking the input also matters in this, I guess. Yeah.

So if I'm getting a letter from a human being, and I'm not even reading it, and I'm just scanning it and giving it to ChatGPT and telling it to give me a draft response. Yeah, I mean, there's no connection there. I would like to think that the person on the other side is at least hearing what's going on with me.

And even if they're using ChatGPT to generate a response, at least prompting it from some human connective level, so that the person who you're expecting to help solve your problem with your city or your county, whatever level of government it is, did hear you, did see you. And then they went to work with this tool over here to help them generate a response that probably would have been a response that had come out of a communication shop anyway, or a very close facsimile of it.

But yeah, the input part of that, I hadn't really thought until you just mentioned that. That's a really interesting part of it.

[Sanford] 

Yeah, and there's a variety, too.

In this specific example, I had to file another records request to get a copy of the actual email the person ended up sending in response. And we could do a side-by-side comparison to see: did they copy it directly from ChatGPT or did they make changes? And they made very, very small changes in that case.

They added about four words, but otherwise it was basically verbatim what ChatGPT had suggested. And in other examples I looked at, there was kind of a variety. There were many examples where it seemed like they had just copy-pasted entirely what ChatGPT had suggested and used that in the final document.

But there were also cases where it was edited significantly, sometimes maybe 50% to 70% ChatGPT sentences with some added human sentences. So there's kind of a blurry line there also. It's really, really interesting.

[Nixon]

From a records perspective, in the first piece (it's a two-part piece, for people who haven't seen or read it), there's a trigger word in there for me, and that word is transitory. Being a bit of a public records nerd, I've noticed this has become a fairly recent but dominant term.

You'll see, if you're familiar with the mass deletions of Teams chats, the theory there is that, well, they decided that anything put in there is going to be deleted in seven days, and we can do that as long as we keep it all transitory. Transitory meaning that it doesn't have much value. If somebody asked for records, it's like, if you and I worked together and I sent you an email saying, happy birthday, Nate, that might be considered a transitory communication.

It's not really work-related. It's de minimis use of the government programs and systems. Probably don't need to turn that over in a records request.

That's the theory. There was a particular leader who apparently you had asked for records from on ChatGPT, and they said that the only work they had done on it was transitory work, and that now that they're logged in, they keep it. Did you have any other issues that came up, and could you get into that a little bit?

[Sanford] 

Yeah, I mean, it kind of varied. I ended up filing records requests with almost a dozen Washington cities. I think Bellingham and Everett both deserve a lot of credit for being really thorough in their responses and making a really good-faith effort to track down ChatGPT histories from all their employees and release them.

I'm aware it's a pretty time-intensive records request. They had to scan thousands of pages for redactions and things like that. They were both very cooperative with the request.

With some cities, we've had pushback, arguing that it's not logistically possible for them to gather those records, or that it doesn't fall under their purview, or that it's not a public record in the first place. We've been trying to push back on some of that and are still in the process of figuring that out. I do think it's a new area of public records law that not everyone's familiar with.

It's always testing the system in some ways. Some cities have been releasing them very slowly in installments. Bellingham and Everett were quite cooperative, which is why they ended up being a big focus of the story.

In that specific case, the mayor of Bellingham mentioned during her interview that she's personally used ChatGPT for her job, which was interesting to me because we didn't get any of those records in response to the request. When I asked why, she said, I wasn't logged in, so it wasn't saved. She basically said, the work I was doing was transitory, but if you file another request, you'll get some now, because I'm making sure I'm logged in every time.

It's an interesting thing. The transitory stuff, she defined it as ideation or simple wordsmithing stuff. We can't really get a detailed picture of that without actually seeing the records.

[Nixon]

It's a difficult thing. I can see an argument for deliberative process exception, but that's time limited. You still have to hold the record.

Once you're done with your deliberative process and you decide, okay, we're going to publish this external communication now. We're going to make the decision that this is what we're going with. We're going to make it.

Once that happens, those records that got you to that process are supposed to become available. If you're not holding them, I am curious about their thought process and how they're coming up with that. It's interesting you mentioned both Everett and Bellingham because I thought it was interesting the contrast between the way both cities seem to be treating the situation.

It seemed as though one was being a little bit more strict towards its use and one was being a little bit more relaxed towards its use. I was wondering if you can get into that a little bit and what your takeaway was.

[Sanford]

Both cities are in the process right now of writing AI policies.

They've sent some guidance to employees in the past, part of it based on the state's WaTech guidance that was issued a few years ago. Both are now writing official policies for how and when it can be used, and what the guardrails are. In Bellingham, the word they used is a very permissive approach, in that they're encouraging employees to experiment with these tools, giving them free rein to use different models that they think might be effective.

It's really in the experimentation phase. In Everett, they described a much more cautious approach where they're being a bit stricter about which AI tools employees are allowed to use. One big question that a lot of cities are grappling with, it seems like, is the idea of disclosure and transparency.

If you release a document or an email or whatever that was generated with AI, should you have a line somewhere saying AI was used in this process? A lot of the policies I looked at do have requirements that people include that if they used AI for something. I haven't seen many examples of official documents that have that disclosure, which makes me wonder the extent to which it's actually being followed.

Some of this guidance says you should have a line saying this document was partially generated by ChatGPT 4.0 using this prompt, and it was reviewed by this human person. It's saying that there should be a significant level of detail there for accountability and transparency, so people know what the prompt was, what the model was, and exactly how it was used to generate that document. I really haven't seen many examples of official documents that have that type of language on them.

That's a discussion that both Bellingham and Everett are having now, is should we require this type of disclosure? In Everett, it sounds like the plan is that they are going to require some sort of disclosure for most types of AI-generated content if it was used for more than simple editing type tasks. In Bellingham, they're not really sure if the new policy is going to require that.

The argument from the city's IT director was that if a human is reviewing the AI output and double-checking it, maybe making edits to bring it more in line with what they're trying to communicate, they're the one taking responsibility for that. They're signing off on it, and there's no need to disclose that it was made with AI, because they're still just as responsible for the contents.

[Nixon]

The authorship question, again, to me, is the one that really got me.

I have an ADHD brain. I will perseverate on things at night. The last couple of nights, I was thinking about this a lot.

I likened it to when I worked at the Department of Health during COVID. I was a public information officer there when COVID was going on. We did a lot of communications for the Secretary.

He rarely wrote any of it. There were usually four or five sets of eyes on it before he got it, and then in the end, it goes out with his name on it. He doesn't say it was written by Jamie and Kelly and Julia and checked by so-and-so.

It seemed to me like that kind of authorship is how it already happens now. I guess I'm not so concerned with who authored the communication from my government agencies as long as somebody is taking accountability and ownership for whatever is coming out. Whoever's name is on it is going to have to own it.

I'm guessing there was some research into the theory behind this. You talked with a professor from UW who's a little bit more skeptical of what seems to be the coming ubiquitous use of this tech. Can you talk a little bit about where they were coming from and their concerns about it?

[Sanford] 

Yeah. I talked with Emily Bender, who's a linguistics professor at UW and a pretty prominent AI critic and skeptic. In terms of the experts I talked to, they're definitely on the side that this is not a technology that is ultimately doing good for society.

They have a variety of concerns. There's the environmental stuff, the plagiarism aspect, the fact that it was trained on humans' work without any consent or compensation. Those are big, broad things that people have been talking about for a while.

There's also the idea, they said, that it's not a text that has accountability. It's something that is divorced from its original context. It doesn't have communicative intent, which is an interesting argument.

I think some people see it differently. Even if it's factually accurate and double-checked by a human, I think they still have the concern that it's not in its original context. It's just a pattern recognition machine that is creating the mathematical average of everything that's come before it.

There is also the concern that ChatGPT is trained on data from the past, flawed pieces of human writing from the past. It's not really generating new ideas or content. It's generating the average of what came before it.

There's that concern about just a greater averaging effect that can happen if it's used for policy and communications, which is interesting. They were definitely more skeptical of its use for governments or really any context.

[Nixon]

I like using it as a copy editor.

I tend to be a pretty shitty copy editor at times. It takes a certain kind of focus and ability to look word for word, sentence for sentence. It's tough for an ADHD brain to sometimes do.

I use it sometimes for that. I'm curious where these tolerances and these lines end up being. Your piece has been great at making me start to really think about this stuff.

It'll be interesting to see how this plays out over time. I'm guessing there will probably be some follow-ups. Where do you see the story going? Where do you want to go with it going forward? What questions do you still have that you'd like to have answered?

[Sanford]

I think there's quite a few.

I tried to do as much fact-checking as possible of some of these AI-created documents that were out there. It does seem like people are being diligent about it. The records show many examples of ChatGPT making mistakes and inventing things and just really badly messing up facts, which it's famous for.

But the humans using it seemed to be catching those mistakes for the most part. I couldn't find any examples of a published government document where one of those AI hallucinations had slipped through and snuck in. But it's possible it's out there.

I think a lot of the reporting so far has been a little hypothetical. Here are the things that could go wrong. Here are some of the concerns experts are raising.

Connecting the dots to the tangible ways this has had an impact on constituents' lives is going to be really interesting. Like you said, it's growing more ubiquitous. I don't necessarily see governments using it less.

I think maybe they'll be a bit more careful, more cautious about the fact that it is a public record. I have heard from lots of reporters who are planning to copy this now in their respective jurisdictions. I think it's a very fruitful and interesting records request.

Not even necessarily just for the way AI is being used; the sheer volume of information that people are putting into these systems is great for any local government reporter. When people use chatbots, they're not really using them for the easy questions or problems they have. If they need to write a simple email to a co-worker, they'll probably write it themselves.

But if it's a really sensitive and difficult topic, maybe they're having trouble with it, it seems like that's the type of thing people are turning to chatbots for. You can tell when it's a sensitive topic someone's trying to grapple with, like a labor dispute or a workplace issue. There's just a lot of really interesting information that ends up in these, which I think has all sorts of potential for follow-ups.

And then I was just interested in following up on the broader debate. I think it's been a really interesting and good reaction to the stories so far.

[Nixon]

Once again, that was Nate Sanford, reporter for Cascade PBS and KNKX.

Nate, thank you again for your time and for the important reporting you're doing. You've helped bring some clarity to an issue that I'm sure will only grow more important as the years move ahead. We will link to Nate's two-part piece on this at our website, thepublicrecordsofficer.com.

I highly encourage you to read the piece as it is a topic that will be part of our lives going forward, I'm quite sure. Just a reminder: next week, we will be sitting down with journalist and author Miranda Spivack, someone whose work on government secrecy and transparency has helped shape the national conversation on this. I encourage all of you to pick up a copy of her book, Backroom Deals in Our Backyards: How Government Secrecy Harms Our Communities and the Local Heroes Fighting Back.

For me, her book was almost spooky to go through. Just reading the foreword made me think she had been in my head and seen all the transparency issues that advocates face when trying to get to the bottom of government obfuscation. You won't want to miss the conversation, it's going to be really great.

Also, I've been informed by all three of my little fans, hi mom, that I'm supposed to plug support for the show. Honestly, it's not something I'm super comfortable with because, let's be real, I'm not doing this to get rich. The idea that anyone could become wealthy producing a podcast on government transparency is almost as laughable as records nerds having groupies.

That said, producing this thing does have some regular costs. So if you're enjoying the content, there are a few easy ways to help out. Tell your friends about the show, share our social media posts, and if you're feeling especially generous, hit the support the show button on the website.

It's right at the top on the right-hand side; it couldn't be easier to do. No pressure at all, but every little bit helps keep the lights on, the records flowing, and the heat on those like WaTech, Governor Ferguson, the legislature, AG Nick Brown, and others in power who'd probably rather I and this podcast just shut the hell up. Let's not give them the satisfaction.

Until next time.

[AI VO] (22:55 - 24:40)

That's it for this episode of the Public Records Officer Podcast. A quick note before you go. Some of the voices you heard on the show weren't from real people.

Some were totally synthetic, AI generated to read from public records and legal depositions that are, yep, public. You'll also hear real human voices like live audio from state meetings, the interviews with Joan Mell and Shauna Sowersby, and the occasional passionate rant from the show's gorgeous host. Every episode has a full transcript at thepublicrecordsofficer.com.

It breaks down which clips came from humans and which came from our robot friends. Think of it like liner notes for digital democracy. You'll also find links to the original documents and recordings we talked about, hosted on Google Drive, free and public.

So if you want to fact check us, go nuts. That's kind of the point. If this show got you fired up, or even just mildly interested, check out the Washington Coalition for Open Government.

They're a nonprofit that fights for transparency and they've got resources if you want to help or just learn more. And hey, if you work for the state and you've seen one too many messages accidentally disappear, we'd love to hear from you. Confidentially.

Unless you want to be famous. The Public Records Officer Podcast is a creation of Nixon and Daughter Productions, powered by good coffee, better whiskey, a microphone, a legal tab, and the apparent misguided belief that government should actually be accountable to people, which is adorable, really. Thanks for listening.

See you next time. And remember, you're not paranoid. They really did delete it.