The Social Responsibility of Software Engineers

We live in amazing times. Everywhere you look there is an ingredient for another industrial revolution, one that very soon could exponentially push the human race to the next level… or perhaps nowhere.

First we had the steam engine, changing how we organised our society and how we saw the world. It took us many decades to understand it and learn how to live with it. Then came the revolution of the microchip, and with it the internet. We still have the hangover from that era, even though on the grand scale of time it happened yesterday. It took us several decades to transform our traditional businesses into digital ones. “Digital transformation consultant” is, according to LinkedIn, still a thing, as if anything were “not digital” anymore.

Today, however, we have more than ten different revolutions colliding at the same time: artificial intelligence, robots, deep learning, new materials, 3D printing, space exploration, bioengineering, and so on and so forth. Everywhere you look, we are at the brink of a new disruption that will truly change humans forever.

What is the common denominator in all these changes? Most of them are led by either engineers or scientists, and most of them have a huge software component running them, whether it is the firmware inside the hardware or the algorithms that power artificial intelligence decisions. Luckily, we are asking ourselves the right questions and trying to stay on top of the implications.

Science fiction gave us a head start on all this (the Three Laws of Robotics, anyone?). Today we can play mind games with our friends about how an artificial intelligence should behave if it had to choose between killing one person or another. But we are still at the very beginning of it, and all those internet memes and mind games mean nothing if we don’t use them for good in actual applications. If we don’t move past our “Black Mirror” science fiction and put it into practice, we are bound for a complete rethink of our ethical values. In the end, what implications will what we build have, and how will it affect society?

We are still thinking in very abstract terms, or better said, very high-level ones. And I believe we are not truly getting into the specific implications for us, the software engineers. The ones who build that code.

Most of us don’t really touch software that could “kill” people. We tend to tell ourselves that our job is not critical enough to affect many lives. However, more and more we are getting involved in life-changing systems on all fronts, and we are not thinking about it enough. One algorithm could change the way somebody finds a key piece of information, a new job or a training course they need. It could decide whether you can get a loan for your kid to study or not, or even whether you are a good person or not. It could be embedded in a car that might decide your life has more or less value than somebody else’s, or worse, in a smart-city traffic-light system that creates chaos and hundreds of accidents. This is not just about flight controller software or banking transaction systems, where if you screw up your code people can lose millions or hundreds of people could lose their lives. I’m talking about the normal, everyday stuff that we code. Yes, Black Mirror / Yuval Noah Harari stuff.

And even if your code is spotless, bugless, bulletproof, what that software does could be used in the wrong way, or just be plainly evil. You could be working for a company that, although legal, does not create value but destroys it. Or worse, transfers it to somebody else in an unfair manner.

My question is: are our methodologies up to speed for what’s to come? Do we have the industry maturity needed for this? And most importantly, do we have the ethical fundamentals needed in our industry to make the jump to the next level?

As an industry we are extremely young. We are barely beginning to know ourselves and our ways of working. Perhaps we think agile is the next big thing, but it is nothing more than today’s trendy approach. We can discuss the best way to develop software, craftsmanship, or specific techniques like TDD or DDD. But all of that is only meant to help us write better software, aligned with business needs and problems, in a controlled way. It doesn’t tell us whether we should be building that software in the first place.

We are (hopefully) becoming better at building software, but not necessarily at deciding whether that software should be doing what we have been asked to program it to do. Is that up to us to decide, or are we just soldiers following orders?

Let me give you just a few examples:

  • It has been shown that many machine learning algorithms have biases that can be traced back to the original development team, no matter how much feedback you provide (see the sketch after this list).
  • You can work for legal trading companies where your software knowingly lets people lose money, because the company charges per transaction, not per client profit. In some cases, those people are individuals about to lose everything.
  • You can work on automating marketing campaigns that mislead people into making wrong buying or life decisions, perhaps using algorithms fed with data they don’t even know you have.
  • You might be working on software that promotes hate, slavery, political repression or war.
  • Maybe you build software that favours a specific race, gender or sexual orientation above others.
  • You could be working on control systems (for carbon dioxide emissions, for example) that fake results when passing tests.
  • Perhaps you build casual games that lure people into buying virtual goods they don’t need instead of other things they or their families really need.

(All of those examples are real jobs you could be doing as a software developer today, and of course there are many others.)
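To make the first bullet concrete, here is a minimal, hypothetical sketch in Python of how a team’s choice of training data bakes bias into a model. All the numbers, group labels and the toy “model” below are invented for illustration; the point is only that a system trained to reproduce biased historical decisions will faithfully reproduce the bias, and feeding it more data from the same process won’t fix it.

```python
# Hypothetical sketch: bias in training data survives into the trained model.
# Groups, thresholds and numbers are all invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Two groups of loan applicants with identical income distributions.
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
income = rng.normal(50, 10, n)         # same distribution for both groups

# The historical decisions we train on were made by biased humans:
# group B was held to a higher bar at the same income level.
historical_approved = (income - 8 * group) > 45

def best_threshold(x, y):
    """Pick the income cut-off that best reproduces the historical labels."""
    candidates = np.linspace(x.min(), x.max(), 200)
    scores = [((x > t) == y).mean() for t in candidates]
    return candidates[int(np.argmax(scores))]

# "Train" the simplest possible model per group and see what it learned.
for g, name in ((0, "group A"), (1, "group B")):
    mask = group == g
    t = best_threshold(income[mask], historical_approved[mask])
    print(f"{name}: learned approval threshold ~ {t:.1f}")

# Output (approximately): group A ~ 45, group B ~ 53.
# The model dutifully learns a higher bar for group B even though the
# underlying incomes are identical; collecting more labels from the same
# biased process only reinforces the pattern.
```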

As you can see, we software engineers (all engineers, in fact) have great power. We study, train ourselves and keep improving both our skills and our knowledge (with methodologies, philosophies, katas, dojos, tools, frameworks, etc.). It is only natural that, over time, we accomplish ever bigger things. The greater our computational power, the better our tools, and the better connected we are with everything (even with ourselves), the fewer boundaries stand between us and either saving the human race or destroying it completely. We could be the Oppenheimers of our age.

You probably see where I am going. Do we need a sort of deontological code in our profession? Do we need it today, or will we need it in the future? And if so, what should it look like?

Again, as an immature industry, we are not the first ones to get here. Hippocrates did it more than two millennia ago for doctors. As a doctor, with your skills, you can cause pain, you can kill, you can do anything you want with a human body. And at the same time you have to make critical decisions. A doctor arrives at a car accident with two unconscious victims: who does he or she attend to first? There is no room for questions or guesses. There is a well-defined protocol that covers all of that. Those protocols can change over time with improvements and new discoveries, but in principle, all doctors abide by them.

What does the Hippocratic Oath say, in short? (Any doctors here, feel free to add to or correct me! This is a free interpretation.)

  • To respect the scientists and physicians who came before me and on whose knowledge I build.
  • To use my skills to help the sick according to my ability and judgment, avoiding overtreatment.
  • To never use my skills to cause injury or wrongdoing, even if asked to do so.
  • To understand that there is a social element in medicine, and that understanding first might be better than the knife.
  • To acknowledge when I don’t know something and to call on colleagues or other professionals for support with a patient’s recovery.
  • To never divulge secrets that I may learn by caring for the sick, respecting my patients’ privacy.
  • To not play at God.
  • To treat not a specific illness but a patient, whose illness also affects their family and finances. My responsibility includes all those related problems.
  • To understand that prevention is preferable to cure.

Why do doctors have this, with the majority of them adhering to one version or another of the oath, while few other professions do? Is it because doctors are supposed to be a public service and other professionals are not? (Marketers, engineers and accountants are bound to business rules and hence to shareholders.) If so, what are the ethics of a company? To increase shareholder value above anything else?

We software engineers have great power. Hackers have known this for a long time and have applied it in certain companies, although perhaps focused mostly on security and privacy. Groups of developers have used their knowledge for the “greater good” by building open source tools, or software free of private tracking and control: GNU, the Open Office foundation or the Tor community, to mention just a few. Bitcoin and the origin of the blockchain started as a way to prevent another financial meltdown caused by central banks. A quick waltz through GitHub can provide hundreds of examples of what I’m talking about.

The Hippocratic Oath is not as specific to the profession as it seems. Take the text above, change some keywords, and you can get this:

As a Software Engineer I swear to:

  • Respect the scientists and engineers who came before me and on whose knowledge I build, and try to follow in their footsteps to improve our profession.
  • Use my skills to help people and companies according to my ability and judgment, avoiding over-engineering.
  • Never use my skills to cause injury, pain or wrongdoing even if asked to do so.
  • Understand that there is a social element in software development, and understanding first might be better than coding directly. The best line of code is the one that is not written.
  • Acknowledge when I don’t know something and call on colleagues or other professionals for support on a specific problem; to avoid hubris and stay humble.
  • Never divulge secrets that I may attain by exercising my profession and that might belong to clients or companies I work for, unless they are public knowledge or developed for sharing (open source).
  • Not play at God.
  • Not work for a specific technology, but to solve a problem that has a context and a scope. My responsibility includes caring for the whole picture and not just a part of it.
  • Understand that building properly is preferable to fixing.

So, are we responsible for our actions, and ideally for our code as well? Is it time for a deontological code for the software engineering profession? I am not the first one to ask this question; I believe Gotlieb started more than 40 years ago with “Social Issues in Computing”. But at the end of the day, in the trenches of the code, not many people really think or talk about it.

I’m building a community of professionals who are interested in this topic and who will hopefully build on the concepts in this article. Please follow and comment if you think we should work more on this. I would very much like to hear your opinions, ideas, articles and research.

Thanks for reading!
