Roadfire Software – https://roadfiresoftware.com

How to Create a 30-Day Personal Action Plan for AI Safety
https://roadfiresoftware.com/2025/12/how-to-create-a-30-day-personal-action-plan-for-ai-safety/
Tue, 16 Dec 2025 14:46:50 +0000

(Photo by Glenn Carstens-Peters on Unsplash)

In an effort to advance AI safety, I just spent 30 days contributing to Inspect AI, an open-source tool for AI model evaluations. I hope my contributions make it easier for people to work with and work on Inspect so we can better evaluate risks in AI models.

When I started, I thought that surely, in 30 days, I’d fix a bug or two, or maybe even add a new feature. Instead, I wrote documentation – which is exactly what the project needed and what I was uniquely positioned to offer. Over those 30 days, I opened three issues, had five pull requests merged, and directly contributed to closing three additional issues.

Here’s what I learned over those 30 days and in my reflections afterward – and what I’d recommend, based on what worked and what didn’t for me, if you’re going to commit to a 30-day plan like this:

Choose one: make a meaningful impact or learn a valuable skill

Choose a project where you can either (a) make a meaningful impact or (b) learn a valuable new skill. I chose to contribute to a tool that’s used by some of the research orgs in the AI safety space because I wanted to help them by improving their tooling (a meaningful impact) and to continue developing my Python skills (something new and valuable).

I discovered that trying to do both was a mistake.

I was able to make a meaningful impact, but I didn’t spend much time with Python since my time was limited. Given enough time, I could learn more Python while contributing meaningfully to Inspect, but that wasn’t realistic in an hour a day over 30 days. Achieving both goals would also have required a “good first issue” for Inspect, and unfortunately there were none labeled as such, nor was I able to find any on my own.

So my advice: pick one. Focus on either making a meaningful impact or learning a valuable new skill. You can do the other in your next 30 days.

Give yourself an achievable challenge

I had the unrealistic goal of fixing a bug or adding a new feature to Inspect. It just wasn’t going to happen with the time I had. Here’s what I said in my plan:

I’m going to work on Inspect AI through open-source contributions, fixing bugs and adding new features as needed. I’ll fix at least one issue over the next 30 days, contributing my fix via a pull request.

In retrospect, “fixing bugs and adding new features” was wildly optimistic.

Fixing one documentation issue, though? That was easy.

When you’re new to a project – like I was to Inspect – you’re uniquely positioned to point out where the gaps are in existing docs, especially around tutorials, processes, and how to use the tools. You’ll get stuck on things, and that’s an incredibly valuable signal – you’ve found a gap in the documentation. Report an issue to let the maintainers know, and boom, you’ve made your first contribution.

After I accomplished my goal of fixing one issue in the first week, I was left wondering what to do next. I decided to just keep going, fixing more documentation issues. It would have been nice if, at the outset, I’d set some additional goals for myself. My plan could have been to fix at least three documentation issues, with a stretch goal of fixing five.

Giving yourself an achievable challenge is tough – especially on a project that’s new to you – because what you can do depends on the state of the project, and you don’t really know the state. Maybe there are some glaring weaknesses in the docs to be addressed, or some obvious bugs to fix, in which case you might want to try fixing a few. Or maybe there are only big issues and fixing just one will take you the entire 30 days.

It’s hard to know what will be achievable and challenging on a new project. Start with your best guess, then make adjustments after you’re a week or two into it.

Break it down

I broke down my 30 days into the following actions:

  1. Comment on the issue titled “Improve display of grouped metrics” and ask if it’s been completed. Deadline: Monday, Nov 10.
  2. Get more familiar with Inspect by writing and running evals. Write and run at least one by Wednesday, Nov 12.
  3. Choose an issue that I can fix, work on it, and contribute my fix (via an open pull request). Deadline: Wednesday, Dec 10.

The problem with this plan is that there’s nothing between Nov 12 and Dec 10 – too long a stretch to spend on a first issue on a new project. I could have broken it down further, perhaps with one thing to do each week. If you’re making a 30-day plan, I’d suggest something like this:

  • Week 1: Follow the tutorial or README that explains how to use the tool. Read the Contributing guide. When you find problems in any of the docs, open an issue.
  • Week 2: Set up your dev environment. Build the project and run the tests. Fix your own documentation issue and submit a pull request.
  • Week 3–4: Look for small code issues you can fix (“good first issue”, perhaps) or continue to improve existing docs.
  • Stretch: Fix a bigger issue or feature.

Get a friend or two to join you

My 30-Day Personal Action Plan was the capstone of BlueDot’s Technical AI Safety course, which I was lucky enough to take with a cohort. In our final discussion where we presented our plans, we also decided to reconnect at the end of the 30 days. For me, this was motivating – I wanted to be able to tell everyone that I’d accomplished my goals. And reconnecting with a small group at the end of the 30 days to discuss and reflect on how it went was both fun and insightful. I’m proud of us for moving AI safety forward, even if we didn’t completely follow our plans or accomplish all our goals.

Good luck!

All the best in accomplishing your goals for your 30-Day Personal Action Plan!

You can see my Inspect contributions on GitHub. I’m continuing to contribute to AI safety tooling and writing about the process. If this was valuable, you can support this work on GitHub Sponsors.

Your open-source project is losing contributors before they even start. Here’s why – and how to fix it.
https://roadfiresoftware.com/2025/12/your-open-source-project-is-losing-contributors-before-they-even-start-heres-why-and-how-to-fix-it/
Tue, 09 Dec 2025 20:37:21 +0000

(Photo by Zulian Firmansyah on Unsplash)

I recently became concerned about risks from frontier AI models, so I committed to a 30-day stretch of contributing to Inspect AI to help make AI safer. I’ve worked on other open-source projects before, but I needed to know how things worked on Inspect so I could make meaningful contributions without duplicating effort. After searching for process documentation and finding none, I combed through issues and pull requests (PRs) to learn what others were doing, what types of PRs were getting accepted, and what I’d need to do to help. Of course, with this approach, there was no guarantee I’d learned the right takeaways – I may have picked up practices the maintainers were OK with but that weren’t ideal.

I noticed that other people sometimes just created a new issue with their question – like “how do I build the docs?” – and realized that everyone has these questions. Those who aren’t asking are digging in to figure it out on their own, like I did, wasting time they could spend actually contributing.

Contributor time is precious – people are volunteering their time and expertise, so it should be as easy as possible for them to make meaningful contributions to the project right away. They shouldn’t need to puzzle out how to build and test the project, or how to claim an issue to work on.

The solution to this problem is simple: Create a CONTRIBUTING.md file in the root of your repository, and make sure it answers the questions everyone will have:

  • How do I report a bug?
  • What’s a good first issue to start working on?
  • After I choose an issue I’d like to fix, what happens next? I don’t want to work on something for a few days, only to see someone else create a pull request (PR) before I do, rendering my work meaningless.
  • How do I build and test locally?
  • Before I create a PR, what should I check or do? Do I need to install the pre-commit hooks? Lint, format, run tests?
  • Are there code formatting guidelines? Does the formatter handle all of them, or are there some I’ll need to look out for and do manually?
  • Do I need to do anything besides formatting and getting unit tests to pass? Does new code always need to be covered by unit tests?

When you create a contributing guide with this exact filename in your repository root, GitHub will show it in two additional places so it’s easier for people to find – in a Contributing tab on the repository overview (next to the README tab) and as a link in the About section in the repository sidebar.

I did this for Inspect, so now its contributing guide answers the questions I had – and the ones I saw other people asking in issues. Those questions are documented up front so nobody needs to ask.

This solves a few problems for contributors as well as maintainers:

  • Contributors don’t need to waste time asking basic questions.
  • Maintainers don’t need to waste time answering them.
  • Expectations for contributors are clear, so PRs arrive in better shape (less repeating “please add unit tests” and “please run the formatter”).

And most importantly, people don’t just leave because they don’t know how to help, don’t know how to build and test, or don’t know how to report an issue. They have everything they need right up front so it’s easy for them to contribute their first bug report, documentation, bug fix, or feature.

If your project doesn’t have a solid CONTRIBUTING.md yet, you’re losing contributors right now. Spend an hour writing one, and your future contributors and future self will thank you.

How a Closed GitHub Issue Led to My First Contribution to Inspect AI
https://roadfiresoftware.com/2025/11/how-a-closed-github-issue-led-to-my-first-contribution-to-inspect-ai/
Fri, 21 Nov 2025 18:49:37 +0000

(Photo by Kiyota Sage on Unsplash)

I’ve committed to spending 30 days working to make AI safer by contributing to Inspect AI, an open-source tool to evaluate large language models (LLMs). My goal? Make improvements to Inspect so it’s easier for AI companies and research organizations to evaluate models and, ultimately, make them safer.

So I started with a critical piece that I’m 100% confident I can improve – the documentation.

Good docs are often underrated – probably in part because engineers despise writing them, and in part because keeping them up to date can be tedious. But they can explain how things work to people who can’t (or prefer not to) read the code. They can teach people how to get started with a new tool. And they can unlock hidden features people might not otherwise notice.

As I was looking through the Inspect issues on GitHub, I saw one where someone asked “how do I build the docs?” The response helpfully spelled out exactly how to do it, which is great! That person can build the docs now, and they’ll be able to contribute.

But we can do better. The person who asked the question also made an excellent point in a follow-up comment:

Someone should put it in the README.

And now, it’s there. My PR was merged, making me an official contributor to Inspect AI! And future docs contributors will have an easier time getting started and making improvements.

The lesson? Dig through the open issues – and the closed ones – to find a small problem you can solve. After that, just do what you always do as a software engineer when presented with a problem – solve it.

A path into AI safety for software engineers
https://roadfiresoftware.com/2025/11/a-path-into-ai-safety-for-a-software-engineer/
Fri, 14 Nov 2025 21:03:20 +0000

(Photo by Lili Popper on Unsplash)

If you’re a software engineer wondering how you might move into AI safety, this is what the first concrete steps look like. Or, rather, what my first concrete steps look like. Yours may be different, but I hope this inspires you to start your own journey. After 18 years focused almost entirely on iOS apps, I realized that I want to have a meaningful impact on the world, and AI safety is a place where I can do that.

On Saturday, I completed BlueDot Impact’s Technical AI Safety course – six intense days that took up almost all my time, but it was totally worth it. We had excellent discussions about mechanistic interpretability, constitutional AI, evaluations, and practical ways to contribute to AI safety work. I enjoyed connecting with and getting to know the people in my cohort.

And the capstone? Building a concrete 30-day action plan to answer the question “how can I make AI safer?”

I’ve started making open-source contributions to Inspect AI, an evaluation framework used by AI companies and research organizations like Anthropic, Google DeepMind, xAI, the UK AI Security Institute, SecureBio, and Apollo Research. I’m optimistic about making additional meaningful contributions over the next 30 days.

When I started looking at Inspect, I wasn’t sure what to do. I had seen a request for help in the BlueDot Impact Slack group with links to a few issues, but those had all been closed by the time I got there (which is great!). So I hunted around and thought a bit about what I’d need to get started.

It can be disorienting for me when I start working on a new project. I have all sorts of questions like these (if you’re nodding along, you’re not alone):

  • What’s a good first issue to start working on? Inspect actually has a “good first issue” label on GitHub, but there aren’t any open issues with that label.
  • After I choose an issue I’d like to fix, what happens next? I don’t want to work on something for a few days, only to see someone else create a pull request (PR) before I do, rendering my work meaningless.
  • Before I create a PR, what should I check or do? Do I need to install the pre-commit hooks? Lint, format, run tests?
  • Are there code formatting guidelines? Does the formatter handle all of them, or are there some I’ll need to look out for and do manually?
  • Do I need to do anything besides formatting and getting unit tests to pass? Does new code always need to be covered by unit tests?

So I did what any engineer would do – I documented the answers. (You document answers, too, right?)

For Inspect, I created a GitHub issue to explain all of this, suggesting a contributing guide to address it. The following day, I created the guide, doing the best I could to answer my own questions from what I’d gleaned from the README, pull requests, and other parts of the repository on GitHub. Then I opened a pull request that I think would be a fine addition to Inspect.

Over the next 30 days, I’ll be sharing weekly updates on my Inspect contributions – what I’m working on, challenges I hit, and practical lessons for anyone trying to break into AI safety work. No academic jargon, no assumed PhDs, just real engineering challenges and solutions.

44 Attorneys General Stand Up to AI Companies. Is it enough?
https://roadfiresoftware.com/2025/09/44-attorneys-general-stand-up-to-ai-companies-is-it-enough/
Tue, 30 Sep 2025 19:03:31 +0000

I’m glad to hear that attorneys general are standing up to AI companies to protect children.

A few weeks ago, Reuters reported that Meta’s policy on chatbot behavior said it was OK to “engage a child in conversations that are romantic or sensual.” This is outrageous.

I’m glad I’m not the only one who feels this way. In a bipartisan effort, 44 attorneys general signed a letter to AI companies with a very clear message:

Exposing children to sexualized content is indefensible. And conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine.

I couldn’t agree more. They continue:

You will be held accountable for your decisions. Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned. The potential harms of AI, like the potential benefits, dwarf the impact of social media. We wish you all success in the race for AI dominance. But we are paying attention. If you knowingly harm kids, you will answer for it.

I hope this strongly worded letter from the attorneys general is enough to get the AI companies to change their behavior and protect our kids. But I fear it won’t be. This might have changed Meta’s stance on this particular issue, but will they and these other multi-billion-dollar companies suddenly become focused on safety in AI models as a result?

I’d like us all to work to incentivize AI companies to prioritize safety. Anthropic agrees, or at least they did in 2022 when they said we need to “improve the incentive structure for developers building these models” in order to get AI companies to build and deploy safer models. California Governor Gavin Newsom recently signed SB 53, which requires AI developers to publish their safety and security protocols in addition to providing whistleblower protections for people working in AI companies. Requiring transparency like this around safety is another key factor in building AI that’s safe and beneficial to everyone.

But I’m not convinced that policy alone is enough. We need engineers, researchers, journalists, and advocates — everyone, really — to work together to ensure the AI companies prioritize people over profits.

The Future of AI from BlueDot Impact
https://roadfiresoftware.com/2025/08/the-future-of-ai-from-bluedot-impact/
Thu, 14 Aug 2025 19:57:43 +0000

BlueDot Impact’s Future of AI course is a brisk, thought-provoking, zero-cost introduction to how AI could change our world – and what we can do to have a hand in shaping our future.

The course dives into big questions like “how might AI benefit us?” and “what risks does it pose?” before getting into things like “how can I help make it go well?”

It’s crisp and full of thoughtful ideas on what our future may look like as AI continues to develop. Most importantly, it asks questions about how AI may be used nefariously and what we might be able to do about it.

AI is everywhere, and it feels like it’s only just begun. It looks like we’re on the verge of seeing what it can really do as the big AI companies keep pushing it forward, releasing more powerful models every few months.

But there’s a problem. For every person working on AI safety, there are 100 people racing to build more powerful systems, according to BlueDot Impact. They also estimate that for every $250 spent on making AI more powerful, only $1 goes to making it safer.

That’s a huge gap, and if we want to be sure AI is developed safely and can’t be used to build explosives or create bioweapons or whatever else terrorist organizations can imagine, we need to close that gap and get more people thinking about and working on AI safety.

And that brings us back to BlueDot Impact’s course, which explores these issues, open questions, and potential solutions, whether you’re in tech, policy, education, or just a regular old (or young) citizen of this planet we all share.

Want to contribute meaningfully to the conversation? Get started with BlueDot’s Future of AI course.

P.S. I don’t make any commission on their free course (obviously?). I just fully believe this is an issue we need more people to start thinking about and working on.

The Search for More Meaningful Work
https://roadfiresoftware.com/2025/07/the-search-for-more-meaningful-work/
Tue, 29 Jul 2025 15:52:49 +0000

I’ve been loving Moral Ambition by Rutger Bregman. It’s inspiring, persuasive, and echoes what I’ve been thinking and feeling lately: there has to be more to life than just doing any old job. Earning money to give to effective charities might be a good start, but there’s potential to do so much more with our time, skills, and talents.

I’ve been wondering lately what it might look like to use my skills to work on solving serious problems in the world. There are so many to choose from, but most recently I’ve been looking at ways to prevent disease, fight poverty, and reduce the effects of climate change. I’m still not sure where I’ll end up, but I’m excited about the prospect of using my skills to solve one of the world’s biggest problems.

Why aren’t we measuring success by positive impact, rather than lofty titles or dollars earned?

Going forward, I intend to focus more on work that feels meaningful, even though I’m still figuring out what that looks like.

If you’ve been looking for more meaning in your work, or you’ve been feeling like there just has to be more to life, do yourself a favor and read Moral Ambition. It might just change your life.

Books With Yellow Covers That Changed My Life
https://roadfiresoftware.com/2024/09/books-with-yellow-covers-that-changed-my-life/
Wed, 11 Sep 2024 02:02:38 +0000
Unit Testing Pro Tip: Verify that your tests make assertions about the method under test and nothing else
https://roadfiresoftware.com/2020/07/unit-testing-pro-tip-verify-that-your-tests-make-assertions-about-the-method-under-test-and-nothing-else/
Fri, 31 Jul 2020 14:51:03 +0000

Nothing else. Don’t make assertions about the code you wrote in the test. Don’t make assertions about other methods in other classes. Isolate the method you’re testing using mocks or stubs, and only test the method you’re testing. Your tests will be clearer and easier to debug.

To do this, start by looking at each line in the method you’re testing. Make an assertion about that line and nothing else. Repeat until you’ve made assertions about each line. Don’t add any other assertions.
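Here’s a small sketch of that approach in Python – the `ReportService` class and its calculator collaborator are invented for illustration, not from any real project:

```python
import unittest
from unittest.mock import Mock

# Hypothetical class under test: formats a total produced by an
# injected calculator collaborator.
class ReportService:
    def __init__(self, calculator):
        self.calculator = calculator

    def summary(self, amount):
        # Line 1: delegate to the collaborator.
        total = self.calculator.with_tax(amount)
        # Line 2: format the result.
        return f"Total: ${total:.2f}"

class ReportServiceTests(unittest.TestCase):
    def test_summary_formats_the_calculators_result(self):
        # Stub the collaborator so assertions cover only summary().
        calculator = Mock()
        calculator.with_tax.return_value = 107.5
        service = ReportService(calculator)

        result = service.summary(100)

        # One assertion per line of summary() -- and nothing else.
        calculator.with_tax.assert_called_once_with(100)
        self.assertEqual(result, "Total: $107.50")
        # No assertions about how with_tax itself works -- that
        # belongs in the calculator's own tests.

if __name__ == "__main__":
    unittest.main()
```

Because the stub pins down the collaborator’s output, these assertions can only fail if `summary()` itself misbehaves.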

Oh and if you want to ensure you always do this, just practice test-driven development. Write the tests first, then make them pass. It’s a lot easier to mess up tests if you write them after the production code.

Questions? I’d love to help. josh@roadfiresoftware.com

Detect Displays on macOS
https://roadfiresoftware.com/2019/06/detect-displays-on-macos/
Mon, 17 Jun 2019 18:05:32 +0000

Sometimes when I use an external display with my MacBook Pro in clamshell mode, nothing shows up — I just get a black screen. And when this happens, forcing macOS to “Detect Displays” can help.

To trigger “Detect Displays” on macOS, just hit Command+F2 (brightness up).

You can also do this using the trackpad:

  • Navigate to System Preferences > Displays
  • Hold down the Option key to reveal the Detect Displays button
  • Click Detect Displays

This works in macOS Mojave, and others have reported it works with previous versions of macOS.
