941: Is Responsible AI Possible? with Dr. Sarah Bird of Microsoft

September 29th, 2025


Transcript

Wes Bos

Hey. What's up? This is Scott with Syntax, and just a quick message before this episode gets started.

Wes Bos

This is going to be an interview that I did with Dr. Sarah Bird on Microsoft's campus. I was invited to a VS Code Insiders event and had the extreme pleasure of being able to sit down and chat with Sarah. Sarah is the chief product officer of Responsible AI at Microsoft, but she's not just that title. Her credentials really speak for themselves.

Wes Bos

She's been doing an extreme amount of work in the open source community for a long time, and she is responsible for many projects which you may have heard of. In fact, I'm gonna list off some of them right here. She cofounded ONNX, Fairlearn, and OpenDP's SmartNoise, and was a leader in the PyTorch 1.0 and InterpretML projects. She was an early member of the machine learning systems research community. She cofounded the MLSys research conference and the Learning Systems workshops. She has a PhD from UC Berkeley.

Wes Bos

She's been epically involved in the machine learning and AI world for a long, long time, longer than most of you have even heard about this stuff. So I got to sit down with Dr. Bird and pick her brain about all kinds of stuff, from, like, what our role is as individual developers with responsible AI to just her general thoughts, and we even get a sick pick out of her. So before we kick this interview off, just a quick message. This podcast is brought to you by Sentry. And in regard to AI and responsible AI, man, Sentry is doing some great stuff where it's letting you know of potential issues in your application.

Wes Bos

And a new tool called Seer allows you to fix those right away. I had a situation just the other day where I had this really mysterious bug. You know, the Cloudflare Worker was giving me some obtuse error, and this thing was not working correctly. I clicked the Seer button, and Seer was like, hey, I think you might wanna check this env variable. And that env variable, I could not find in the error. I have no idea how Seer found the cause of this error. But sure enough, I went to Cloudflare's dashboard, and my env variable that I was so confident was there had a very slight typo in it. I changed that, fixed it, and was able to mark that as complete. It closed the GitHub issue. It fixed my problem, and I was on my way. Shout out to Seer from Sentry. It's an awesome tool. Alright. Without further ado, here is the interview with Dr. Sarah Bird.

Wes Bos

Welcome to Syntax. Today, we have a really special interview. I'm joined by Sarah Bird. She's the chief product officer of Responsible AI at Microsoft.

Wes Bos

It's a tremendous pleasure to be able to sit down and chat and really pick your brain about some of the things about responsible AI, especially, from the perspective of our audience, which is primarily web developers. Right? So,

Guest 1

Sarah, welcome to Syntax. Yeah. Thank you for having me. I'm so excited to be here. Yeah. So,

Wes Bos

with your background, you've been working in machine learning for quite a while.

Wes Bos

How did how did that end up happening for you? Like, how did you go down that path into machine learning?

Guest 1

Yeah. I actually started my career in systems. I originally worked in chip design at IBM in college, and I worked on the design of the original Xbox 360 processor. Yeah. And so I started working in my PhD on operating systems.

Guest 1

It always kinda felt wrong to me that systems were just kind of dumb and always did the same thing. And so my thesis was in using kind of dynamic optimization techniques, convex optimization, which is, you know, a key thing in machine learning, to automatically adapt resource allocation in the operating system. And so I was using kind of machine learning to make systems smarter.

Guest 1

And it turned out to be the same time that we started to see systems as really important for machine learning. We wouldn't have had all of the breakthroughs that we've had in AI today if we hadn't been able to scale up the training algorithms, if we hadn't been able to scale up the data delivery and everything. And so I really just got started at the right time at the intersection of systems and machine learning. And I think that it is so amazing to put these different types of technologies together,

Wes Bos

and achieve something that we just, you know, really couldn't do before. Yeah. So how does that take you down the path of responsible AI? Because it seems like, I mean, the machine learning space has evolved considerably in the past.

Wes Bos

I mean, just consistently. Right? So how how do you get to responsible AI from there?

Guest 1

So I came to Microsoft for my postdoc in Microsoft Research in New York City, where I still am. And I was working on reinforcement learning, which was a really exciting new technology right when the deep learning kind of boom started happening. And we were trying to figure out what were the most important problems we needed to solve to make this technology usable by more people. And so I was talking to various, you know, customers, potential early adopters, saying, what do you wanna do with the technology? And we were calling it a decision technology because the way that reinforcement learning works is, you know, it tries different things.

Guest 1

For example, showing you a sports article and then showing you a cooking article to see which one you care about more, and it learns as it goes.

Guest 1

And at that time, customers said, oh, can we use it to automatically interview people? And, you know, that was the moment where I'm like, well, maybe you could, but should you? Yeah. And I was lucky to be sitting with amazing, what are now, you know, leaders in the responsible AI space like Hanna Wallach and Kate Crawford. And we came together and formed what was, at that time, the first kind of responsible AI research group at Microsoft, which is called FATE.

Guest 1

And, and that was really how we got started in it. And, you know, I've stayed in it because I think it's one of the most challenging and interesting problems to make the technology work really well in practice in every single use case in a way that people can, you know, be confident in it and actually trust it. Yeah. I guess that leads me to my my next question, which is, like, is there such a thing as

Wes Bos

responsible AI?

Guest 1

Yeah. The term is maybe a little bit problematic in that sense, where what it's about is humans being responsible with the technology and the way that we develop and people use the technology. And so it's really not about the AI being responsible.

Guest 1

It's about responsible development of AI.

Guest 1

And people sometimes, you know, get that confused. I have a lot of people ask me about how do we make AI ethical, and it's like, it's just another technology. We shouldn't be making it ethical. We should be developing it in a way that aligns with, you know, the principles we are trying to achieve. Yeah. Yeah. That makes sense. I think for a lot of people, especially in the web developer space, it feels like, okay. These are tools.

Wes Bos

They've been, you know, given to me by my company or we're subscribing to them, whatever.

Wes Bos

I have no part in this equation.

Wes Bos

Do you think that the individual, developer in the web space or otherwise, like, has a part to play here?

Guest 1

I completely understand, like, the feeling and the concern, but, absolutely, they have a part to play.

Guest 1

A lot of responsible AI is making sure that what you're developing really works well in practice. And so developers today absolutely have a responsibility to make sure that they're making code that is secure, for example, or that, you know, handles data appropriately and follows privacy policies. So in the same way, there are going to be things that you need to do in your specific use of AI that only you as the developer are going to understand. If you're designing an agent that, for example, is going to take very, you know, consequential actions, then you might need to design into your system having it go back and get human approval if there's a risk that it might be wrong.

Guest 1

And so you need to do that tailoring to your specific application.

Guest 1

And so, yeah, there's a lot that the developer needs to do. And the organization, anything that they do is part of the story too, but it's gonna be one-size-fits-all. It's not necessarily gonna be what makes sense for the particular unique thing you're developing.
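For readers following along: one way to picture the "get human approval for consequential actions" pattern Sarah describes is a gate in an agent's action loop. This is only a minimal sketch; the `AgentAction` shape, the confidence threshold, and the `requestHumanApproval` helper are hypothetical stand-ins, not any particular framework's API.

```ts
// Minimal sketch of a human-in-the-loop gate for an AI agent.
// All types and helpers here are hypothetical illustrations.

interface AgentAction {
  name: string;           // e.g. "send_email", "delete_record"
  args: Record<string, unknown>;
  consequential: boolean; // true when the action is hard to undo
  confidence: number;     // the model's self-reported confidence, 0..1
}

// Stand-in for however your app asks a person to confirm an action:
// a review queue, a Slack ping, a modal in an admin UI, etc.
async function requestHumanApproval(action: AgentAction): Promise<boolean> {
  console.log(`Approval needed for ${action.name}`, action.args);
  return false; // default-deny until a human explicitly says yes
}

async function executeAction(action: AgentAction): Promise<void> {
  console.log(`Executing ${action.name}`);
}

// The gate: low-risk actions run automatically; consequential or
// low-confidence ones are routed to a human first.
export async function runWithOversight(action: AgentAction): Promise<void> {
  const needsReview = action.consequential || action.confidence < 0.8;
  if (needsReview && !(await requestHumanApproval(action))) {
    console.log(`Skipped ${action.name}: not approved`);
    return;
  }
  await executeAction(action);
}
```

The threshold and the escalation channel are exactly the application-specific tailoring she's talking about; there's no universal right answer.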

Wes Bos

Yeah.

Wes Bos

Yeah. There is a sense that, like, people maybe just get overwhelmed with this technology and now these newfound powers, and it's like, okay, I guess maybe how I'm using these things isn't necessarily that important. But then you have security, privacy impacts, all those things. We're seeing apps that are being vibe coded and pushed out now that have massive security holes, where buckets are left open and this or that happens. How do we, like, bake security into these processes and safeguard ourselves from creating these things if the people who are pushing the software out don't know any better?

Guest 1

Yeah. I love this question because what I like to do as much as possible, and where I see my role, is to figure out, like, what are the new risks we see emerging with AI? How do we actually address those risks? And then how do we make it easy for everyone to do that? And so, of course, as much as possible, we want to just bake it into the AI system. So if the AI model produces a vulnerability in code, then we should try to train the model to do less of that. Mhmm.

Guest 1

Or if you need to run your different sorts of security checks when you check in a PR, the more that we can just integrate that directly into GitHub so it just runs, so you don't even have to think about it, the better. And so as much as possible, we try to just bake it into the platform or system. But, you know, that doesn't work for everything.

Guest 1

And part of what makes this actually work is people do need to understand their responsibility and their role in it. We do a lot today to educate developers on what they need to do to build production code and what the bar is to, you know, check something in. And we have, you know, checks and balances on that. Right? Many people work in an organization where you have to have someone else review your PR before it's allowed to be approved. And so we're still gonna use all those same safeguards. It's not just, oh, okay, we'll go solve the problem for you.

Guest 1

And so education is part of that too. And, you know, that's part of why I'm here, because it's so important that people like you are helping expose everyone to these concepts and, you know, what their responsibilities are, because awareness is definitely part of the story. Yeah. I never thought about that too much with

Wes Bos

the role GitHub could play. But in the past, yeah, GitHub introduced things like, you know, warning you if you've committed secrets or something like that. Like, to me, I guess, this falls right in line. Right? Because at the end of the day, the end result's going to GitHub. It can use these systems to analyze your code.

Wes Bos

What about, like, privacy, I guess? And what role does that play? Because I think a lot of people do get a little weirded out with sending their code to a service somewhere. In that same regard, we have local models, things running locally. Like, what is the responsible choice there with your code as a developer in keeping things private, I guess? Yeah.

Guest 1

Privacy is, you know, something that's really important, both as, like, a fundamental human right and then, also, very important in practice for, you know, businesses with their confidential data, their confidential IP.

Guest 1

And one thing that's really great actually about this generation of AI technology that some people don't realize in terms of how it works is that the foundation models themselves are trained usually in a big pretraining run, and then they're set, and you can use them. And what they're really great at is actually reasoning over data that you are giving to them. So they don't need to be trained on the data to use the data effectively. And so we have a really nice potential privacy story there where you can send your data to the model. It's not learning from it at all, but you're getting a very personalized response back, for example, a completion for your specific code.
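In code terms, what Sarah is describing is in-context use of your data: the code you send shapes a single completion but is not a training step (whether prompts are retained or used for training is a separate question that depends entirely on the provider's policy and plan). A rough sketch using the OpenAI Node SDK, with a placeholder model name:

```ts
import OpenAI from "openai";

// The snippet you want help with is passed as context in one request;
// it is input to a single completion, not a training example.
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function suggestFix(snippet: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name
    messages: [
      { role: "system", content: "You review TypeScript code for bugs." },
      { role: "user", content: `Suggest a fix for this code:\n\n${snippet}` },
    ],
  });
  return response.choices[0].message.content ?? "";
}
```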

Guest 1

And so we've designed a lot of the systems that we've been developing, at Microsoft and in GitHub to take advantage of that.

Guest 1

But every kind of company and service is different, and so different ones have different privacy policies. And so it's important for your organization to actually look at the specific tools you're using, understand those privacy implications, and determine whether or not that's appropriate for your application.

Guest 1

Right? We see that, you know, health care data is gonna have potentially a different standard than, you know, a sort of consumer entertainment information. And so you have to think about kind of all of those dimensions. Yeah. Do you see the usage of local tools and local models, local services becoming

Wes Bos

more viable?

Guest 1

I definitely see us going in that direction. So we released, for example, Foundry Local.

Guest 1

Foundry is our AI platform, and that enables you to take a lot of the models and run them locally. And there's just increasing demand for that because it's also good from, like, a latency point of view. It's good from a connectivity point of view. You can run offline. And so we wanna enable AI to work in as many places as possible. But there are also cases where it's a really big model, and you want the maximum power. And so running in the cloud is gonna make sense. So I think we'll just see a diversity of these different options growing.
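From the application side, running locally can look almost identical to calling the cloud, because local runtimes such as Foundry Local (or Ollama and similar tools) typically expose an OpenAI-compatible endpoint. The sketch below assumes such an endpoint; the URL and model name are placeholders, not verified defaults for any particular runtime.

```ts
import OpenAI from "openai";

// Point the same OpenAI-compatible client at a local server instead of
// the cloud. Placeholder URL and model name; check what your local
// runtime (Foundry Local, Ollama, etc.) actually exposes.
const local = new OpenAI({
  baseURL: "http://localhost:11434/v1", // hypothetical local endpoint
  apiKey: "not-needed-for-local",
});

async function askLocally(prompt: string): Promise<string> {
  const reply = await local.chat.completions.create({
    model: "phi-3-mini", // placeholder local model name
    messages: [{ role: "user", content: prompt }],
  });
  return reply.choices[0].message.content ?? "";
}

askLocally("Summarize what responsible AI means for web developers.")
  .then(console.log)
  .catch(console.error);
```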

Wes Bos

Yeah. Yeah. I guess running locally too. I was on the plane here, and there was no Wi-Fi on the plane, and none of the AI tools worked. I'm like, oh, I should have set something up locally before I left to at least have something. Yeah. Exactly. Airplane travel, for all of us that travel a lot, is, like, the number one use case. We're like, I know why we still need to work disconnected, even though, you know, plane Wi-Fi is getting better, but, you know, maybe not good enough yet. Yeah. Totally. We get a lot of comments on our channel, specifically from, like, AI skeptics, who see this tech as being scary. Scary to their livelihood, scary to their code base, possibly.

Wes Bos

Do you hear from a lot of people about being frightened about utilizing this stuff in software?

Guest 1

Yeah. I think that, we hear actually several kinda different types of concerns.

Guest 1

And there's a taxonomy that came out in the International AI Safety Report that I quite like for thinking about this. And it says there are kind of three categories of concern or risk with AI.

Guest 1

One is malfunctions. And so saying that, look, it's not good enough. It, you know, produces insecure code, or it hallucinates.

Guest 1

Those are examples that we're trying to work on to address, like, the malfunction in the system. Then you hear people say, okay, you know, I'm concerned that people are gonna misuse it. They're gonna use it to, like, automate hacking and, you know, cause more risk. And those are, you know, issues we have to address as well. And then the last one, as you said, around people's livelihoods, is systemic risk and concerns around what the impact of this technology is gonna be on kind of the broader system. Is it gonna change how we all have to work? And I think the answer is yes. It is going to change. And, yes, we do need to be concerned about all of these types of risks. But part of the goal of responsible AI is to break down each of these and figure out what are the right ways to make progress on them. And then at the end of the day, like, this is a tool, and you need to find the right ways to put it to use. And it's not, you know, useful in every setting. And so it still really puts the developer in charge to be thoughtful about when it makes sense for them and when

Wes Bos

it maybe doesn't. Yeah. Do you think it's inevitable that the role of a software developer is inherently going to be completely different

Guest 1

in a short amount of time here? I think it's inevitable that the role of many, many jobs is gonna be completely different in a short time here. And that's probably one of the things that keeps me up most at night, not because I don't think we're gonna get to a great place on the other side in terms of how much better jobs can be with tools like this, but the sort of upskilling and the redesigning of these jobs, and doing kind of all of them at once, that's a significant challenge for us as a society and for each corporate leader. And so we've got a lot of work to do with this, and we need more people to contribute to that. Yeah. And then we were having a conversation, you and I, we were having a conversation amongst the group yesterday about

Wes Bos

human in the loop and, in general, like, what does that role look like? And is there a future of us not looking at the code as an interface at all? Right? The code exists, but we're not actually manipulating the code or having a firm eye on it. And that's a scary thought, that the human in the loop might eventually be gone there. Is that something that you think needs to stay as a requirement?

Guest 1

So I think that we will always have human accountability. Like, it's not gonna work if we don't, and so that necessitates some concept of human in the loop. But what we will see is the way we put the human in the loop is going to be different.

Guest 1

So if we go farther down kind of the future that you were talking about, where people aren't looking at code anymore, it might be that we really need humans to put the investment in designing the acceptance tests and signing off on the acceptance tests. And so if you're sure about how you want the system to behave, and you are sure that the tests really get that component to behave the way it's supposed to, why do you need to inspect inside of it? And so it's about designing the right mechanisms for human oversight and human in the loop, but those will absolutely change as the technology changes. You know? And so it's gonna be different over time.
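Sarah's acceptance-test framing maps onto tests web developers already write: the human effort shifts from reading every line of a diff to specifying, in executable form, how the system must behave, and signing off on that. A small illustration using Vitest; the `applyDiscount` function and its rules are made up for the example.

```ts
import { describe, expect, it } from "vitest";

// Hypothetical implementation under test; in practice this might be code
// an AI tool generated that no human read line by line.
function applyDiscount(price: number, discount: number): number {
  if (discount < 0 || discount > 1) throw new Error("discount out of range");
  return Math.max(0, price * (1 - discount));
}

// The acceptance criteria below are what a human designs and signs off on.
describe("applyDiscount", () => {
  it("never produces a negative price", () => {
    expect(applyDiscount(10, 0.95)).toBeGreaterThanOrEqual(0);
  });

  it("leaves the price unchanged when the discount is zero", () => {
    expect(applyDiscount(42, 0)).toBe(42);
  });

  it("rejects discounts above 100%", () => {
    expect(() => applyDiscount(42, 1.01)).toThrow();
  });
});
```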

Wes Bos

Yeah. It's definitely one of those things you can get lost in, just thinking about it for a long time. Yeah. I don't know, certainly, as somebody who likes to create things, there's this newfound superpower of just how many, like, personal software things you can create with low effort, and you get into it, and these are, like, low stakes. And once you get into higher-stakes stuff, it definitely always feels a bit heavier.

Wes Bos

For some people who are, like, kind of skeptics of AI, I do wonder, like, if learning more about responsible usage of AI would benefit them. Is there, like, a career path for getting deeper into, like, more responsible AI technologies and topics?

Guest 1

Yeah. So, first of all, I love the statement here because that's, like, exactly right in terms of the whole point of responsible AI. We also, you know, call it trustworthy AI. It's about making the technology worthy of people's trust. And so I talk to, you know, many customers and developers and people who are skeptical, where we then walk them through: these are the safeguards we've put in place, and this is the testing we have, and this is the, like, defense-in-depth approach we use. And they're like, oh, I feel different about this technology now. That doesn't mean it's suitable for every use case. Right? But understanding how we put these pieces together to address the risk, and, you know, how we're gonna do that going forward, is part of helping people understand why the technology can be worthy of trust in some use cases.

Guest 1

And is it a career path? Absolutely.

Guest 1

You know, before ten years ago, responsible AI wasn't a job. And now, I talked to people actually even yesterday, and they were saying, oh, I did my master's degree in responsible AI. And so it's gone from nothing to a thing where there are degree programs and where it is a job, and it's a very rapidly growing field, which has just been, like, a joy for me to see. Yeah. And I wonder if that's a better career path than just sticking your head in the sand and pretending this will all go away at some point. You're right.

Wes Bos

Yeah. So on this show, the last thing we do is ask for a sick pick, which is, like, anything in your life that is bringing you joy right now. Like, anything that you're liking. We've done all kinds of things, from podcasts to socks to anything.

Wes Bos

Do you have a sick pick or anything in your life that you're like, I love this right now?

Guest 1

I absolutely love Japanese pottery right now. Cool. Yeah. Maybe it's because of AI, and we're talking all about machines. But, you know, it's these beautiful handmade pieces that are designed to show the imperfection, but that's part of, in essence, what makes it perfect. And so, I'm really liking the very visceral sort of

Wes Bos

physical element of that and the beauty that comes from that. Man, I love that. There's a really nice Japanese pottery pop-up that is down the street from me. And every time I walk by, I'm just like, I would love to go in there and buy everything.

Wes Bos

Yes.

Wes Bos

Wes would also feel the same. Yeah. Well, thank you so much, Sarah. This has been incredible.

Wes Bos

I think a lot of developers are gonna get a lot out of this. So thank you so much for joining me. It was just truly a great chance to talk to you here. Yeah. Thank you so much for having me. It's such an important topic and near and dear to my heart, so I appreciate everyone taking the time to learn more. Well, thank you.
