
THE README PODCAST // EPISODE 29

The open/closed equilibrium

Striking a balance between openness and control in open source projects, preserving the integrity of community insights, and how humor can transform communities.


The ReadME Project amplifies the voices of the open source community: the maintainers, developers, and teams whose contributions move the world forward every day.

The ReadME Project // @GitHub

This month, we consider the evolution of openness in open source. The ReadME Project’s Senior Editor, Mike Melanson, joins hosts Martin and Neha to discuss expert advice on why “closed to contributions” sometimes makes sense and how that model aligns with open source expectations. Additionally, maintainer, founder, and CEO of Scarf, Avi Press, highlights the benefit of analytics to maintainers and the open source community, and discusses the metrics that matter most. Also, Jessica Januik, Senior Software Engineer at Google, answers a listener question and shares insight into why humor is paramount when building team chemistry.

Here’s what’s in store for this episode:

  • 00:00 - The hosts examine what’s new in open source, highlighting new communities like Mastodon and Bluesky. 

  • 01:37 - First Commit: Open source saves the day! From climate change to nuclear radiation, open source is empowering communities to adapt to catastrophe. 

  • 05:48 - Feature Release: The ReadME Project’s Mike Melanson welcomes Ben Johnson to share key considerations when deciding how to approach project contributions. 

  • 20:00 - The Interview: Avi Press, maintainer, founder, and CEO of Scarf, shares his perspective on how the open source community, and maintainers in particular, can benefit from improved community analytics.

  • 34:00 - #AskRMP: Jessica Januik highlights why humor is so important when building a team or community.

Looking for more stories and advice from the open source community? To learn more from the authors and experts featured on this episode, check out:

Special thanks to Avi Press for detailing community analytics for maintainers, Jessica Januik for sharing insight into building team chemistry, and Ben Johnson for walking us through his decision to limit contributions to Litestream.


Martin Woodward: So Neha?

Neha Batra: Yes?

Martin: Are you one of the cool kids yet? Have you got one of these Bluesky invite codes, or would you like one?

Neha: I got a DM about a Bluesky invitation, and I've got to be honest, when I first got it, I didn't know what it was. I had to do a little bit of research on it, and I haven't moved or responded to that DM yet. What about you?

Martin: Yeah. Martin.social, that's me. I'm there.

Neha: Of course, I would expect nothing less.

Martin: This is the ReadME Podcast, the show dedicated to the topics, trends, stories, and culture in and around the developer community on GitHub. I'm Martin Woodward from the GitHub Developer Relations Team.

Neha: I'm Neha Batra from GitHub's Core Productivity Team.

Martin: Hey, so Neha, I want to get a bit existential today maybe. Why are we here? Why do we create? Why are we innovating?

Neha: Where are we going with this?

Martin: I don't know, I've just been thinking a lot lately just how exciting everything is, you know. I can't believe I get paid to do this job. I just love working in this space. There's always something to learn, something to get excited about.

One of the things I've been really getting excited about, is the change we're seeing within social media and seeing the rise of some of these more open social platforms like Mastodon, which is a fully open source project. It's just reminding me of the early days of open source when things were getting created. I don't know, it's just an exciting time.

Neha: I know what you mean now.

Martin: Okay.

Neha: I've actually been interested in it too. I think especially what I'm curious about, is I've been seeing things start to shape up with Mastodon and Bluesky.

And, I'm wondering, as we've learned the lessons in open source on how to make things more federated and grow and scale those systems, if we'll be applying the same lessons, and if we'll have the same markers of evolution? I'm curious to see what old is new and what new is new, you know what I mean?

Martin: Yeah. Just like how sometimes the lack of control and centralization, the distributed nature of things actually leads to greater creativity. What's really cool about open source, is just how much the space changes all the time because you've got access to all these really creative minds all around the world and nobody's telling you how to do something.

You can just go try things and see what works. It's the same for open source itself, how we define open source, what we use it for, the parameters that are acceptable in the community. All that's just constantly evolving.

Neha: Yeah. Actually for today's show, we're going to be doing a little ode to the industry and diving into the evolution of the culture in and around open source. There are some that I would like to call spicy topics, if I do say so myself.

We're going to be talking about things like why open source developers shouldn't be quite so afraid of analytics and tracking. What it means to have an open source project that's “closed to contributions,” and why the missing ingredient in your open source project might be humor. That's all going to be coming up.

Martin: But first, first commit.

Neha: Today, a little story about how open source can save the day. I don't know about you, Martin, but the idea of saving the rainforest and the ongoing threat to the globe of losing all of the trees and biodiversity was something that I learned about in elementary school and we're still hearing about it. It's one of the biggest environmental issues out there.

Martin: Yeah. It was always “save the rainforest,” wasn't it? Yeah, the Amazon is probably the poster child of rainforests. We hear the most about it, but this issue is happening in lots of other places.

If you take, for example, the Maya Forest, it's the second-biggest continuous rainforest in the Americas. In this case, second place isn't the first loser. It's still really important.

Neha: Yeah. Actually, since the year 2000 alone, the forest has shrunk by 15%, which is a pretty huge amount in a short amount of time. That loss has real impacts.

Fewer trees to help in the fight against global warming, a loss of biodiversity, and even the loss of plants that could go on to be the basis of life-saving medicines.

Martin: Part of a trick of trying to protect all this land, is that it's so remote and it's just so hard to monitor. The Maya Forest alone takes up to 35 million hectares. That's a whole lot of trees to keep track of. Neha, what do we know that's really good at handling large amounts of data?

Neha: The answer has to be open source, Martin. Open source, ding, ding.

Martin: Correct answer. Enter Global Forest Watch, an open source platform managed by the World Resources Institute that uses data from satellites and elsewhere to monitor forests around the world and share that data with governments and non-governmental organizations, NGOs.

Neha: That allows journalists, nonprofit workers, and even law enforcement to know when areas are degrading or losing trees and take action. They even have a public-facing map that you can check out.

Martin: This is just one way that open source is being used to tackle major global environmental issues. There are also projects monitoring things like radiation information near power plants, or tracking volcanic ash in the wake of eruptions to see how people and animals might be affected regionally.

Open source data is really empowering people to get the kind of essential information they need to understand how we live and work.

Neha: Which means, I guess, that tracking data isn't always a bad thing.

Martin: See what you did there, Neha? That's actually a preview to a discussion we're having later on in this show, so bonus points on that.

Neha: Yeah, I'll take it.

Neha: Martin, I think one of the things that I really love about open source is that at first glance, people are often talking about how it's open and how it's free. But what is really interesting to me is how it's actually helped form these entire societies and organizations of people who are all collaborating together. And I think there's just so much more than meets the eye when it comes to managing successful open source projects. It's really cool to see how these things manifest.

Martin: Yeah. I was drawn to it from the philosophy and principles of working in the open and things. But then also selfishly, when I'm doing my own stuff sometimes I'm just throwing it out there in the open because hey, if I do it in the open, somebody might get use of it. But I'm not really trying to start the next big project or anything. I'm just throwing it out there just in case, kind of thing. 

But I also think it's a bit dangerous when people get a bit too rigid about how they define open. That's always like a red flag to me. There's never only one way to do something, and yet you get these people who sometimes would criticize you if you don't do it fully open, from the get-go, and it's quite confusing.

Today, we're going to look a bit at the story of the Litestream project. It was started by a guy named Ben Johnson, and it's open source but…

Ben Johnson: It's an open source project in that all the source is open. Anyone can use it, it's free to use. It's all Apache 2-licensed, so you're free to take it and mix it up and use it for your own stuff. We have people add encryption and do all kinds of stuff that's outside the main line of the project.

And in that sense, it's open. But at the same time, we also have some restrictions around what kinds of contributions we accept. We do accept small things, but typically if you want any larger feature, just open an issue, we'll discuss it.

Neha: Restrictions around contributions are not the biggest deal though.

Martin: Yeah. No, maybe not. But at one point, the project was entirely closed to code contributions of any kind whatsoever.

Neha: Doesn't that go against basically the main tenet of open source, the fact that it's open?

Martin: Yeah. We talk about this quite a bit internally, because we always have pull requests switched on and we leave them on unless the project's archived. We've stumbled into this philosophical conversation, and it's one that I think is really worth having.

Can you be open source but not open to contributions? We're going to dig into this with the ReadME Project's very own Mike Melanson. Hey, Mike.

Mike Melanson: Hey, how's it going?

Neha: I'm good. I'm really intrigued by this concept and we're going to talk about this in a broader sense and what it means for the community in a second.

But first, can you tell us why someone would make the decision for closed contributions? When it comes to a project like Litestream, why would a developer close it to contributions when it's supposed to be open source?

Mike: Sure. You guys have talked about it, but there's a lot of assumptions when it comes to open source. We have our ways of thinking about what open means. The big one is obviously around code contributions, a lot of times because that's a metric that we use, it's something we really focus on. If you talk to Ben, there's lots of different types of contributions that go beyond code. I've spoken to a lot of maintainers both for this story and for a recent Q&A we worked on.

And for many of them, code is actually often the thing they want last. They would like documentation, they would like testing, they would like all sorts of other things. They often say that code is the easiest part. Seeing him close it off to code, after talking to him and others, wasn't surprising. But it goes against that basic assumption that people have about what it means to be open source and what it means to contribute to open source.

And when you get a code contribution, the thing about it is that it comes with downstream maintenance burdens, it comes with all this other stuff. The person who's making that contribution may not have the full scope of the project in mind, they may not have the roadmap in mind, all sorts of things. These are all the considerations that everybody has to have when it comes to contributions. Ben weighed them and came down on the side of not accepting.

Ben: There's a lot of things where if someone can add a change in there, it might have a lot of adverse effects around performance. We really have to be careful about every little change that goes in there. They take a lot of testing just to make sure it doesn't break on a lot of people's systems. The amount of testing that goes into every change, is so much larger than the actual change itself.

There's all this additional work that people are asking of us on the maintainer side. Part of it comes down to the philosophy of what you think tools should really be. A lot of people really just appreciate having a database that works pretty fast and it just works. I think a lot of that is limiting how many features we add.

Martin: I've talked with a bunch of maintainers, and they'll often not accept code contributions without associated unit tests, because it might be a feature they don't ever use themselves, and they need tests to make sure they don't accidentally break it in the future. Why do you think this was controversial, whether or not it counted as open source? What would people actually take issue with?

Mike: I think it again comes down to the idea that there are assumptions around what open source means and what it entails. I actually spoke to Julia Ferraioli, and she has this idea of the social model of open source versus the technical model. The technical model is essentially the Open Source Initiative's definition of open source, which has been around since the late '90s and hasn't actually changed since 2007.

The first three points are really the most pertinent ones for our discussion. They make clear that open source doesn't mean just access to the source code. You have to be able to redistribute the source code, sell it, use it in other code. You have to be able to include the source code and allow distribution in compiled form. Perhaps most importantly, you must allow modifications. Really, this is about the idea of forking a project.

Ben's project, Litestream, is 100% open source. It meets all 10 points. If you want to make a code contribution and Ben doesn't want it, you're more than welcome to fork the project and make it your own, and there are upsides and downsides to that. Do you want your project to be forked? Do you want a split ecosystem? Like you said, some people might criticize this decision, but really it's not often the maintainers; it's the people that want to make those contributions themselves.

If you talk to a lot of maintainers about contributions, what it comes down to is that a lot of people are just there to solve their own problems. A lot of times it's a hit-and-run thing too: you come in, you make your contribution, you don't worry about what the downstream effect is, and then you're gone. But beyond Ben doing it, there's also historical precedent.

Neha: Yeah. What you're saying is that Ben isn't the only one who's done this, when it comes to playing with the levers of different ways of being open and what you're open to?

Mike: Absolutely. Name a project, really, where you could just go and contribute anything and it's automatically accepted. That's not common. Litestream actually builds on a project called SQLite, which is the original project that had the "open source, not open contribution" clause. It says that SQLite is open source, meaning that you can make as many copies of it as you want and do whatever you want with those copies without limitation, but it's not open contribution.

The Lua programming language has been around a long time; it's grown up alongside open source and it is open source. But I spoke to the maintainer and creator of the language, one of maybe three people who have ever contributed to the language in its lifetime. He just doesn't accept code contributions, because part of the goal of Lua is to prioritize speed, portability, and simplicity. He makes the point that nobody ever brags about being the person to remove a feature.

You want to be the person to add something. Still, Ben noted that he was worried about doing this with the Litestream project. He was worried about pushback, but really, in the end, he found that people were okay with it.

Ben: I definitely had some worries about that, and I thought that was going to get a lot more pushback. A lot of people were very appreciative that someone wasn't accepting contributions so openly. I think a lot of people have had burnout and could see themselves in that same situation, and they appreciate having that out there.

I think it helps to have verbiage in the README just to really say, "Hey, I'm not trying to be a jerk here. I think you're great and all, but I don't want to take on that maintenance burden." I think it does jibe with how other people view open source.

Martin: That's one of the things, I think, people have to understand when they're coming in, that it's up to the maintainer of the project to actually dictate the terms by which they're giving this gift of the code and how they want to run the community. I think it's all about setting the expectations early.

Mike: Yeah. That's a point we actually come to in the article. Julia had a proposed framework for giving maintainers a way to easily communicate the status of their project. She had nine different states that a project could fall under, so maintainers could more easily communicate that a project is in a mode where it's not accepting contributions, or it's archived, or it's experimental, and so on.

Neha: I think I've seen that also when it comes to some of the newest features that came out from GitHub. There's issue forms, there's interaction limits. I know that before as a contributor and then becoming a maintainer, my mind was completely blown by how much responsibility these maintainers take when it comes to the code.

I think it's really valuable to hear directly from the maintainers themselves how it's not just about not wanting contributions, but about really understanding all of the downstream effects, and being able to chart out which pieces are open.

It's just so easy to see it from a contributor's perspective, "Why won't you let me help you?" There's a lot more to that story and there are now a lot of different ways to express that.

Martin: We've alluded to this shift in the definition of open source. Are we changing the parameters, do you think, of open source or have they always been changing?

Mike: I think people are always pushing the boundaries of open source. People are always trying to introduce new licenses to change what open source is. But I think really in some ways, it comes back to that social definition versus the technical definition.

Nothing about what Ben has done, like we said, is not open source. It's just that's not how people might understand it. For example, there are research projects that are open source, but they're there for verification purposes. You want to be able to verify that the code produces a certain output.

By definition, you can't actually change that code. Otherwise, you're changing the equation that you're trying to prove. There's various other types of open source that are similar to what Ben has done just with different reasoning, but they all fit under those same 10 points.

Neha: Yeah. I also think that coming back to what Martin said initially, sometimes you just put your code out there and you don't realize all of the responsibilities that you incur. Is there some kind of middle ground when it comes to being fully open in different ways?

Mike: Absolutely. We did talk to Bartek Płotka, who spoke at one of our open source maintainer summits. He came to the point that there is a middle ground. He started like many people do, very naive in some ways and very just open to accepting everything. Then he realized the effects of that. Then he went the opposite way and he went completely closed, and he realized the effects of that. You might limit new maintainers, you might stifle your community from growing, all sorts of things.

He came to the idea that you need to be in this middle ground where you enable people to help somehow, but not necessarily by making direct code contributions to your core code. He offered ideas such as using feature flags to hide experimental features behind, so that you could add certain things that people suggest, but not necessarily have it affect everybody that uses that code. Or you could have a well-structured API so people can build integrations and build add-ons.

Neha: Right.

Mike: There's definitely other ways of limiting contributions but not being closed. At the same time, obviously not being 100% open and dealing with the burden of that.
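As a rough illustration of the feature-flag approach Bartek describes, here is a minimal sketch in Python, assuming a hypothetical EXPERIMENTAL_PARALLEL_EXPORT environment variable; the flag name and functions are invented for the example rather than taken from any real project.

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Hypothetical opt-in flag: only users who set it see the experimental path.
EXPERIMENTAL_PARALLEL_EXPORT = os.environ.get("EXPERIMENTAL_PARALLEL_EXPORT") == "1"


def export_records(records):
    """Export records, routing through the experimental path only when opted in."""
    if EXPERIMENTAL_PARALLEL_EXPORT:
        # A contributed, experimental feature lives behind the flag, so a
        # regression here cannot affect anyone using the default path.
        return _export_parallel(records)
    # Stable default path that every user gets.
    return _export_serial(records)


def _export_serial(records):
    return [str(r) for r in records]


def _export_parallel(records):
    with ThreadPoolExecutor() as pool:
        return list(pool.map(str, records))
```

The point is that a suggested feature can be merged behind the flag without changing the default behavior for everyone else who uses the project.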

Neha: I really like that summary. I like how you really can think about the different levers of enabling a community and how you might want to do that. It's all about making that a conscious decision. Before we wrap up, Mike, can you give us a preview of what's ahead for you?

Mike: Sure, happy to. This month we have a guide from Anton Mirgorodchenko. He's a developer with cerebral palsy who, rather than spending his time typing one character at a time, uses ChatGPT and GitHub Copilot to help him communicate and code. He highlights what he's learned and offers tips and tricks for how you can best incorporate AI into your own software development workflows.

We have another guide to check out from Josh Goldberg that dives into the world of static analysis tools within the JavaScript and TypeScript ecosystems. Learn all about formatters, linters, and type checkers, and discover how tools like Prettier and ESLint enhance code quality, ensure consistency, and prevent errors, all while saving valuable development time. As always, you can find all this and more at github.com/readme.

Neha: Awesome. Mike, thanks so much as always for coming.

Martin: Hang on. Actually, one more thing I want to throw in there, now that I think about it.

Neha: All right.

Martin: I'm going to be on an episode of the Sustain Podcast soon as part of our celebration of maintainer month. It's a show about the health and sustainability of the open source ecosystem, and it's obviously something we're all passionate about here. This month, they're actually highlighting the work of eight maintainers, including me. That will be exciting.

I'm going to be talking with host Richard Littauer about longevity in open source. Some of my work with open source communities and some of the things I've seen along the way. If you want to check it out, go over to maintainermonth.GitHub.com or you can find Sustain in your favorite podcast app. Okay. Now you can go, Mike. Thanks.

Mike: Great chatting with you.

Martin: Neha, how do you feel about being tracked online?

Neha: I feel like, in thinking about it, I hate it, but also I feel like that ship has sailed for me. I do a lot of public talks and I'm on this podcast. I'm a little bit more public when it comes to open source as well. I don't like it, but it's also something I can't help. What about you?

Martin: Yeah, it's similar. It feels like I don't like it, but then also I'm a product manager. I'm trying to manage the building of things, and make them better. I need the data to let me know if I'm making a system better or worse. Otherwise, I'm just firing in the blind, you know what I mean?

Firing in the dark. I understand the need for data and ways of collecting it unobtrusively, but I also don't like the massive tracking, the kind that means people know what I've bought from the grocery store by the way I sit in my chair, or something like that.

Neha: Yeah. I think it's an interesting question, especially when it comes to open source, because with these founding principles about open source and how things should be, we really embrace the concept of openness and freedom. That is why we're all working together, why we're really excited about the future of software. Yet there's that weird piece we have to reconcile, which is that the minute you are open and working out in the open, you are also being tracked.

Martin: That's just the issue we're going to talk about today with Avi Press. Avi is the founder and CEO of Scarf, a data analytics company. Avi, hey, good to have you here.

Avi Press: Excited to be here. Thanks for having me.

Martin: Scarf is a company that provides user and customer intelligence for open source projects, helping maintainers and businesses to understand how their projects are being used. 

There are particular use cases for commercialization here, I guess, but of course, you're also working on privacy concerns. Avi, let's start with analytics and tracking: why do you think they're important to the open source community?

Avi: The main thing here is that open source is used everywhere and so much stuff relies on it. I think if we're building a product and we want to make it better, how are we going to know how to make it better if we don't have information about how it's being used? While I don't want to fall into the trap of saying that open source software is necessarily a product, because it is software at the end of the day, it's a very similar idea, I think.

If your code is being used in a bank or a hospital or somewhere that's very critical that it is doing the right thing, it's very obvious that the people who are building it should have some visibility into it. But of course, with all the historical, cultural underpinnings of open source, it's just not trivial how we can do that and how we can all both do that safely and in a way that everyone is comfortable with.

Neha: There is this inherent pushback that you were describing from people around that analytics and tracking. Why do you think there's such negativity around tracking for most people in the community?

Avi: No one wants to be tracked. It's the same thing here in that way, but I agree that it's a lot more extreme here. The tolerance for it is so much less. I also find that very interesting. It's paradoxical that we preach openness, except when it's how you use the software. Then it's just like, "God, don't look at me. Don't look at me at all, not even a little bit." I think that's the history of open source.

This code is ownerless, it's permissionless, anyone can use it. Analytics, the model of that, very much has an owner. That data's going somewhere. That data is owned by somebody, that data can be mishandled, exploited, et cetera. I think the fact that we came from a place where that was incompatible with a lot of the ideals we started with just made it so that we never moved that part of the cultural dynamic forward, even though the usage of open source really, really exploded and it is now everywhere.

Those two things, I think, are in more and more conflict over time. To me, the question is not should we or should we not track this stuff, because it's already happening. NPM has lots of data on how people download JavaScript packages and they collect a lot of information about that and they always have. It's how should that data be handled and who should have access to it? My argument here is that if anyone is going to have access to it, maintainers must be one of those groups of people.

Martin: Yeah. I've been in the position where I've had to do this in a few different open source projects, and I've been through different rounds of trying to figure out how to do it in a fair way, in an ethical way. Some of the options we came up with are like, "Well, how do we make sure that the community has summaries of this data, but not enough data that you can de-anonymize it and individually track people?" And how do we share this with the community so that we are clearly saying, "Hey, this is not our data, this is the community's data"? And how do we provide opt-outs and all that, so that people can easily exclude themselves from that tracking? Are there specifics about how we might do this that you think are best practices in ethical, fair ways?

Avi: Yeah, absolutely. Internet standards already have some opinions on how this can be done. Things like Do Not Track headers, which I guess we're now migrating away from, more towards Global Privacy Control. There are web standards for saying, "Hey, I want to make a request to this server, but please do not track me."

That's built into Scarf natively by default, because we always want to give people ample ways to opt out. The standards that we've all agreed on already, in some form or another, are a great place to start. Of course, things like takedown requests, as GDPR specifies them, are also something that can be an improvement here as well.

Pretty much every possible thing that we can do to protect privacy, we have to do that by default. Because otherwise, no one would ever give Scarf a second thought, because that's the bare minimum you have to do. We go even further by making sure that anything that can resemble PII, personally identifiable information, is completely wiped from our system after we process it.

We might look up, "This download came from this company in this country," but then we're not going to keep the PII on hand. Our hope is that over time, more and more platforms can follow suit and offer similar kinds of privacy controls, so that people can get a better understanding of how their work is being used while not sacrificing end user privacy, which we all care about and agree is very, very important.
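To make the opt-out and PII handling Avi describes a bit more concrete, here is a minimal sketch in Python, not Scarf's actual implementation, of a download handler that honors the Do Not Track (DNT) and Global Privacy Control (Sec-GPC) request headers and keeps only coarse, derived fields; the lookup_country helper and the event shape are illustrative assumptions.

```python
def record_download(headers: dict, client_ip: str, package: str, events: list) -> None:
    """Log one download event while honoring opt-out signals and discarding PII."""
    # Honor the Do Not Track header and the newer Global Privacy Control signal:
    # if either is present, record nothing at all for this request.
    if headers.get("DNT") == "1" or headers.get("Sec-GPC") == "1":
        return

    # Derive only coarse, aggregate-friendly metadata from the request.
    country = lookup_country(client_ip)

    # The raw IP address (the PII here) is never stored; only derived,
    # non-identifying fields are kept for the maintainer-facing metrics.
    events.append({"package": package, "country": country})


def lookup_country(client_ip: str) -> str:
    # Placeholder enrichment step; a real system would consult a geo database
    # and then discard the IP, as described above.
    return "unknown"
```

A request that arrives with Sec-GPC: 1, for example, would be dropped before any lookup happens at all.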

Neha: Yeah. I think when we think about privacy and data and tracking, we think about the specific user and all of this really detailed information, but there's also common metrics. You can go to a repo and you can see the number of stars, you can see a lot of packages out there, you can see the number of downloads.

There are common metrics that are available to everyone. That involves sharing a little bit of data. I was curious about your thoughts around some of the more common metrics. For example, what are some common challenges that you see when it comes to utilizing common metrics like stars to interpret success?

Avi: The thing with stars is that they're just a distant proxy for what you actually care about. If someone saw my project on Hacker News, thought it was cool, and gave it a star, that's nice to know for sure, love that. But if I'm trying to assess, "What impact is my work having? Should I keep doing it?" Or further, "Do I start a business around this? Do I quit my job for this?" For all of these kinds of questions that one might find themselves asking, stars are, I think, a bit distant from that.

Getting more towards usage: is this deployed in production somewhere? Are people relying on this? Do they use it daily? Do they use it sometimes? There's so much there that we can be exposing to maintainers in a better fashion. Among the common metrics that I think are really important here, one is very easy: How many users do you have? Did that grow in the last month? I am just shocked, just really shocked, by the size of companies, the size of organizations, that cannot answer that question, which is really wild.

I think that if we want to see open source thrive in the long run, we need to understand the value and the impact that it has on the world. Part of doing that is being able to measure some aspects of the usage of that software.

Martin: I'm going to ask you two questions in one here, really. For somebody who's starting out as an open source maintainer, what metrics should they be looking at? Which are the most meaningful for them? Then as a follow-up, if you could wave your magic wand, because I'm with you, I hate stars. I hate stars as a metric.

I get very disappointed with my own community on GitHub about how obsessed some projects are with stars, because I'm like, "No, stars are a terrible metric." I run queries all the time that benchmark stars against other things, such as audience size, and there's very little correlation between the two.

Anyway, so that I don't just rant about stars: if you could wave a magic wand, what data would you love GitHub to be able to provide to open source maintainers to help them make more meaningful decisions?

Avi: I wish we had an hour. Really, what we want to get at is assessing production usage. That's one of the big ones: "Okay. Someone's using my project, but are they really relying on it, or are they just playing around with it? What stage are they at?" That's hard because that doesn't mean the same thing for all different kinds of projects. A library versus a database versus a CLI tool, it's not apples to apples in all of those cases, I think.

Things like unique sources of various kinds of traffic would be really great. I know that there have been updates to things like understanding documentation access, but going even deeper would be really good. One of the things that we've been doing is helping projects understand things like conversion rates from documentation, to actually downloading, to actual proliferation and usage, these kinds of things.

Having better analytics around docs, I think, will be very, very important for GitHub in the long run. We're already seeing some of these things in motion, and that's been really cool to see. Things like version adoption over time, I think, are very, very important. Are people picking up every new version that you put out there? That's a really good sign. If people aren't, well, it's not even that simple either.

You don't want people just pulling down the latest version of your Docker container all the time. They're probably not using it in production if that's the case. But really what I'm getting at here, is we want to get closer and closer to understanding the actual impact your software is having, not just these kinds of off-to-the-side proxies of that information.

Neha: I think coming back to what we were talking about at the beginning, it feels really icky to talk about data. Because especially when you're like, "You want my data, I don't know what you're going to use that for."

I feel like what's really illuminating in this conversation is that once I hear what you want to use it for, you're trying to make the best decisions possible as a maintainer and potentially for your own career or how much time you want to spend on it. Then I'm like, "Wait, if you want my data for that, that's fine. That sounds great. Actually, I can benefit from that." Yeah.

Avi: I think that this kind of data sharing, and we can really talk about it as data sharing, is a way that more people can contribute to open source. Maybe I'm not going to contribute code, because I just don't have time or there are too many projects I use to contribute code to all of them. I don't have enough money to donate to all of them or pay them for all the stuff they do, but I can just tell them, "Hey, I use this, I make use of it."

For those listening, if you have a project that you really like, you should tell the maintainers that. It will make their day, it will make their week. I think people often open issues to complain about stuff. I was in a position where every time I got a GitHub notification, I'd just get a tinge of anxiety: what has broken this time?

Neha: Yep. What is it? What is it? What did I do?

Avi: Yeah, and that's terrible. That is such a tough position to be in. But with better kinds of data-sharing initiatives, we can actually better understand the impact of our work and prioritize it more effectively. If someone says, "Hey, this doesn't work well on Windows and X, Y, and Z." I say, "Well, how many of my users are actually on Windows?" I don't know.

Right now, even if you're a very sophisticated project, you probably have to put out a survey to answer that question, and that's crazy. I think these kinds of questions, like how is this stuff actually being used in practice and how can I prioritize my time most effectively, are how we get less burnt-out maintainers, more effectively maintained projects, better software, and just a healthier ecosystem.

Neha: Well, I think that's a great way to end. Thank you so much for coming. It was really cool to hear all of the different parts of the spectrum when it comes to thinking about this. I feel like I learned a lot.

Avi: Thanks so much for having me.

Martin: Now for Ask RMP, the place in the show where we get to grab a listener question from you and get an expert to give us their advice. This month, we're looking at the assumptions around how we're supposed to act as serious developers. Don't know what one of those is.

Moises from Fortaleza, Brazil asks, "I want to make sure I'm building chemistry and community on my team. What are some good ways to make sure that happens?" Well, for answers, we went to none other than Jessica Januik. She's a senior software engineer at Google working on the Angular framework team, and she's also a self-proclaimed pun fanatic.

Jessica Januik: For me, humor has been an essential tool. I guess when people laugh together, they just feel closer together. For me, it is most definitely using humor of some sort to disarm and, I guess, ingratiate myself with other team members. Make it very clear to them that I'm not a really difficult person to work with. I like to have fun with my work and I like to laugh. I make a lot of very silly, lighthearted wordplay jokes, and it works really well for me.

I'm constantly processing everybody’s language and searching for puns. As you might imagine, on the Angular framework team, we make a lot of web-based jokes. There’s a lot of reactions and people have views on things. Anybody who is in the web framework space will be like, “Aha, I know what Jessica’s doing,” but it didn’t start off that way. I came in and started making these jokes and suddenly people were aware that they could make these jokes.

We all get to laugh together and contribute to the “yes and” of humor. One person feeds off of the other person and then we’re all smiling and laughing. I think that's part of it. It is a group contribution that people can make and feel closer to their peers.

Martin: Now, not everyone has to be a pun machine either. Jessica says it's about making it clear you can be lighthearted and have fun at what you do.

Jessica: I actually think it has a positive impact on our user base and our community when we reach out and do videos. We get a pretty positive reaction from people when we introduce humor into a lot of our content, be it during conferences on stage or through our YouTube content.

I think people think of it as maybe more accessible. By adding humor to our content, we actually make people want to stay and watch it more. It makes it easier for people to learn. I think it has a huge impact overall in the positive direction.

Neha: Jessica says that beyond building a level of comfort with your team by introducing humor, it can actually have real impacts on your work or your products or your users.

Jessica: Being authentic is really important and I think it's really a useful tool to just be yourself. I think it builds trust, and I think it overall is just the right approach for team cohesiveness. So I always encourage being as authentic as you can whenever you're at work. That is my advice. This is why whenever I do a talk, I usually sign off with, “live long and prosper” because it's very authentic to my nerd self.

Martin: That's it for this episode of The ReadME Podcast. Thanks so much to this month's guests, Jessica Januik, Ben Johnson, Mike Melanson, and Avi Press.

Thanks to you for listening. Join us each month for a new episode, and if you're a fan of the show, you can find more episodes wherever you get your podcasts.

Make sure to subscribe, rate, and review, or drop us a note at the [email protected]. You can also learn more about all that we do at GitHub by heading to github.com/readme.

CREDITS: May is maintainer month here at GitHub. In celebration, we have a special episode dropping May 23rd featuring Kubernetes superstar and all-around great human, Kelsey Hightower. Here’s a sneak peek:

Kelsey Hightower: I think it's the whole person that I try to bring to this development process. My job before tech was like fast food. And so for me, getting into technology was like a means of survival. So self-taught was the preferred option, mainly because it was the most accessible option. You could go to a bookstore in 1999 and buy a book on any topic, and it felt like you were getting that college degree that other people had access to.

CREDITS: GitHub's The ReadME Podcast is hosted by Neha Batra and Martin Woodward. Stories for this episode were reported by senior editors, Klint Finley and Mike Melanson. Audio production and editing by Reasonable Volume. Original theme music composed by Xander Singh.

Executive producers for The ReadME Project and The ReadME Podcast are Robb Mapp, Melissa Biser, and Virginia Bryant. Our staff includes Stephanie Moorhead, Kevin Sundstrom, and Grace Beatty. Please visit github.com/readme for more community-driven articles and stories. Join us again next month and let's build from here.

Martin: How many programmers does it take to change a lightbulb?

Neha: How many?

Martin: None. It's a hardware problem.

Meet the hosts

Neha Batra

Growing up in South Florida, Neha Batra has always loved building things. She dug into robotics in high school and earned a mechanical engineering degree, then jumped into a role as an energy consultant—but wanted a faster loop between ideation and rolling out new creations. Accordingly, she taught herself to program (through free online courses and through Recurse Center) and worked as a software engineer at several companies, including Pivotal Labs and Rent the Runway. She also volunteered on the board of Write/Speak/Code to make the world of open source more inclusive for marginalized genders. Neha now lives in San Francisco, where she’s a Senior Engineering Director at GitHub designing products to improve the world of OSS. She’s also a foodie who’s into planning trips and collecting national park magnets.

Martin Woodward

As the Vice President of Developer Relations at GitHub, Martin helps developers and open source communities create delightful things. He originally came from the Java world, but after his small five-person start-up was acquired by Microsoft in 2009, he helped build Microsoft’s tooling for DevOps teams and advised numerous engineering groups across the business on modernising their engineering practices, as well as learning how to work as part of the open source community. He was the original creator of the Microsoft org on GitHub and helped set up the .NET Foundation, bringing in other companies like Amazon, Google, Samsung, and Red Hat to help drive the future direction of the open source platform. Martin joins the podcast from a field in the middle of rural Northern Ireland and is never happier than when he’s out walking, kayaking, or sitting with a soldering iron in hand, working on some overly complicated electronics-based solution to a problem his family didn’t even know they had.


About The ReadME Project

Coding is usually seen as a solitary activity, but it’s actually the world’s largest community effort led by open source maintainers, contributors, and teams. These unsung heroes put in long hours to build software, fix issues, field questions, and manage communities.

The ReadME Project is part of GitHub’s ongoing effort to amplify the voices of the developer community. It’s an evolving space to engage with the community and explore the stories, challenges, technology, and culture that surround the world of open source.


