The Burden of More

Posted on 2020-11-21

Technology has potentially endless scope. There always seems to be something more to do, something to optimize, some new functionality to add, some new idea to chase or some new problem to solve. Too rarely do we stop to ask ourselves what the costs and tradeoffs are, or whether something is really worth it. So let's do that in this article!

What's wrong with more?

This post is inspired by the bottom half of this very interesting blog post and my experiences in tech circles throughout the past few years. I'll first try to describe my experiences and observations with an example:

I have recently discovered the Gemini project, which is basically a protocol and document format similar to the Web with HTTP and HTML. The striking difference is that while the web is unfathomably complex, Gemini aims to have a simple protocol that is easy to implement for coders, as well as a simple text format that is easy to write for users. While the web is at a point where it would be practically impossible for even a big, well-funded team of people to write a full-featured web browser from scratch, Gemini aims to be simple enough for a single person to implement a full-featured client in a weekend.
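To make that claim concrete, here is a rough sketch of what a minimal Gemini fetch can look like in Python. The helper names are my own inventions, and the relaxed certificate check reflects Gemini's common trust-on-first-use practice rather than proper verification; still, the whole protocol exchange fits in a few dozen lines:

```python
import socket
import ssl
from urllib.parse import urlparse

def build_request(url: str) -> bytes:
    # A Gemini request is just the absolute URL followed by CRLF.
    return (url + "\r\n").encode("utf-8")

def parse_response_header(header: str) -> tuple[int, str]:
    # The response header is a two-digit status code, a space, and a
    # "meta" string (the MIME type on success, e.g. "text/gemini").
    status, _, meta = header.strip("\r\n").partition(" ")
    return int(status), meta

def fetch(url: str) -> str:
    # Connect with TLS on Gemini's port 1965, send the one-line
    # request, and read the response until the server closes.
    host = urlparse(url).hostname
    ctx = ssl.create_default_context()
    # Many Gemini servers use self-signed certificates (trust on
    # first use), so strict CA verification is relaxed here.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, 1965)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(build_request(url))
            data = tls.makefile("rb").read()
    header, _, body = data.partition(b"\r\n")
    status, meta = parse_response_header(header.decode("utf-8"))
    if status == 20:  # 2x means success
        return body.decode("utf-8")
    raise RuntimeError(f"status {status}: {meta}")
```

Compare that with the thousands of person-years behind a modern web browser's networking stack alone.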

I think that this is a fantastic idea, because a simpler medium means less work and less time spent for both writers and programmers, while putting the focus back on the content.

While learning about Gemini, I also came across the project's mailing list and decided to subscribe. What I did not expect is that most emails there seem to revolve around new feature proposals, which rather contradicts the project's goal of keeping things simple. The suggestions usually sound sensible and harmless, such as "add italics to enable writers to be more expressive". Sounds reasonable, right? But what are the consequences of such a simple-sounding feature? Why am I saying that it seems contradictory to the project's goal if it's just something so basic?

Let's consider a syntax like this:
The last word is *italicised* -> The last word is italicised

To make this work, clients now have to parse every character of a line to check for the asterisks, requiring more complex code. And what happens if there is only one asterisk instead of two? Should everything after the asterisk be italicised? Or nothing? Or maybe just everything until the end of the line? What if the user wrote the asterisk for a different purpose, not for emphasis? Do we then also need to implement a way of escaping a character, to make the asterisk invalid? Then users would have to remember to escape every asterisk that isn't supposed to make something italic, and programmers would also have to implement the escape character.
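To see how the edge cases pile up, here is a hypothetical sketch of what every client might end up implementing. The function name, the backslash-escape rule, and the "unpaired asterisk stays literal" policy are all my own assumptions, not anything Gemini specifies:

```python
def render_italics(line: str) -> str:
    """Replace *text* with <i>text</i>, honouring backslash escapes.

    Hypothetical rules: a backslash makes the next character literal,
    and an unpaired asterisk is kept as-is. Even this toy version has
    to walk every character of every line.
    """
    out = []
    italic_open = False
    i = 0
    while i < len(line):
        ch = line[i]
        if ch == "\\" and i + 1 < len(line):
            out.append(line[i + 1])  # escaped character, kept literal
            i += 2
            continue
        if ch == "*":
            out.append("</i>" if italic_open else "<i>")
            italic_open = not italic_open
            i += 1
            continue
        out.append(ch)
        i += 1
    if italic_open:
        # Unpaired asterisk: undo the markup, restore the literal "*".
        idx = len(out) - 1 - out[::-1].index("<i>")
        out[idx] = "*"
    return "".join(out)
```

Even after settling every question above with the simplest possible answers, the result is a character-by-character state machine where a plain line-by-line format needed none.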

This example is supposed to illustrate that something that initially seemed so simple and harmless can very quickly snowball into something far more complex for both users and programmers. Something that is supposed to make things easier and more convenient suddenly has the opposite effect, making applications more error-prone and giving users more things to worry about.

I have sunk days (or perhaps even weeks) into designing Regrow.Earth. Time that I could have spent actually filling the site with content or spending time with my partner. I spent days just trying to understand the static site generator that is powering this site. And then some more days learning to set up a server to make it accessible on the internet.

The fancy features of the web aren't just a fantastic opportunity to create something beautiful and rich, they are also pressuring me to waste my time trying to make my content look as good as all the other sites. The endless possibilities end up being a burden, requiring me to spend more and more time and effort on things that are not actually the content that I am trying to present.

Besides the cost of human time and effort, we also rarely consider the environmental cost of processing all the CSS and JavaScript that countless modern websites are littered with, or the cost of the server dynamically generating a site each time a user tries to access it, or the energy and materials required to build the servers, computers and phones in the first place, or the massive infrastructure needed to send this increasing amount of data all over the world.

This experience is representative of an attitude that seems to be widely shared in tech communities. Features are often just chosen based on what would be practical and convenient, what is present in other projects, or what is scratching the developer's personal itch or attracting their curiosity right now. Little thought is given to the consequences this has for sustainability, both in regards to the effect it has on people, as well as the effect it has on our planet.

Is it worth it?

I find that we usually don't even get to asking whether the consequences are worth it, because we don't even know the consequences. Our technologies are so complex that most of us have absolutely no clue what the consequences of their creation and existence are. We are too busy using them or creating ever more of them to stop and think about what their effects are.

Let's take the newest tech trend: Is it worth having lots of "smart home" and "internet of things" devices if they present a massive new attack surface that impacts our privacy and security? Even if we assume perfect privacy and security, is it worth manufacturing countless environmentally damaging devices that offer nothing revolutionary over the non-smart devices that are significantly less harmful? Does the difference justify the cost? Is it worth committing such massive amounts of resources and developer time to something that has so little additional use and creates more issues than it solves? The very obvious but unsexy answer is no.

"Smart home" is an easy target to point at because of how bizzarely huge the difference between environmental costs and the gained value is. But how about social media for example? We can point and laugh at the known issues of Facebook, Instagram, Twitter and co, from the negative impact on mental health, to privacy violations, to polarization, to bullying and social division, even to manipulation of elections. But what about our free and open source, community-driven, federating, amazing alternatives like Mastodon, Pleroma, Pixelfed, Hubzilla and the countless others? Don't they fundamentally function in the same way? Don't they come with many of the same and even some new issues?

We can extend this question to FOSS in general, and open hardware too. Isn't it the same, except with "freedom" slapped on it? Does it matter how open it is if it steals my time and negatively impacts my mental health or the environment? Is the time and effort we are spending here really worth it? Are we really creating something different and worthwhile, or just the same thing but "free"?

You may find these questions provocative. If it reassures you: I am myself a massive fan of FOSS and open hardware. I simply think that we should genuinely confront ourselves with these questions and consider whether what we are working on is truly worth it. It may be uncomfortable for me to look at all the time I spent on things that I would consider not worthwhile or even harmful today, but that is my opportunity to learn. My perspective is that it wasn't wasted time if I learn from it, so let's stop, take a very good look at what we are doing and learn from it.

Helpful questions

To conclude this article, I will list some questions that we can ask ourselves to determine whether engaging with a piece of technology is worth it. If you have more ideas, just email me at unicorn@regrow.earth and I will update this list!

Health

Utility

Sustainability