Software Engineering Code of Ethics (acm.org)
88 points by jervisfm on Dec 1, 2013 | 35 comments


My main contact with the ACM has been attending (pricey) ACM conferences during my PhD and occasionally publishing work in ACM proceedings. I had no contact with them when working as a software engineer, apart from bumping into their paper paywall.

"6.02. Promote public knowledge of software engineering."

ACM papers on software engineering are typically paywalled.

"6.05. Not promote their own interest at the expense of the profession, client or employer."

I often hear that researchers don't want to limit access to the paper, but feel they have to, to publish and progress.

'Pre-prints' mitigate this issue somewhat, and I acknowledge that the ACM has recently taken steps to open things up (e.g. author-pays options), and that while it's easy to say 'make it free', it costs money to run an organisation that manages publications.

I'd still be concerned that conflicts of interest remain until open access is the standard.


> I often hear that researchers don't want to limit access to the paper, but feel they have to, to publish and progress.

I have published in several ACM conferences and this is how I feel. I much prefer USENIX's policies.

> and that while it's easy to say 'make it free', it costs money to run an organisation that manages publications.

The actual research is funded by the government or, in increasingly rare situations, by private companies. Reviews are volunteer labor. Post-review editing (what they used to call 'typesetting') is done by the author. Putting a PDF up somewhere costs more or less nothing. So yes, the essential parts could (and should) be free.

The actual money goes to printing paper (who cares), subsidizing conferences (events where you hang out with your friends, then give your speech to an audience of people waiting to give their own speeches), and subsidizing other ACM stuff (a burden it's inappropriate for the taxpayer to incur).

Until the ACM makes our work free to those who have paid for it, I will not be a member of their organization. I will publish in their conferences (you have to...), but I will never link to or reference their 'official' copy.


I'm a CS researcher and educator (details in profile). Software engineering ethics has been a passion of mine lately. I teamed up with a philosophy prof to write an essay on why it's important to teach ethics in CS classes [1]. (It's an invited column for CACM.)

My coauthor has released a self-contained module with some theory and various hypotheticals that educators can use in classes [2].

I've been trying to crowdsource a set of real-world case studies with a broad coverage of various types of ethical issues [3]. I'm also gradually trying to incorporate this in my own teaching.

We'd appreciate feedback and suggestions.

[1] https://dl.dropboxusercontent.com/u/131764/web/sw_engg_ethic...

[2] http://www.scu.edu/ethics/practicing/focusareas/technology/s...

[3] https://freedom-to-tinker.com/blog/randomwalker/ethical-dile...


I don't think that ethics should be taught as part of all CS classes, and I don't think the essay you wrote made a convincing case for this.

All you have really shown is that ethical issues arise in CS. While this is true, I don't see the benefit of formally teaching ethical reasoning. It's not as if an engineer will never consider the ethical implications of their work just because they never took a course. It does make sense to discuss ethics where it is directly relevant, e.g. in topics like privacy and security.

EDIT: rewrote to correct a misunderstanding of the article.


That's really interesting to me. I did a CS and EE dual degree, and we had a required ethics course as part of the engineering program (it's a requirement from Engineers Canada for engineering accreditation). In the CS program, things like this were only discussed in context (e.g. privacy and security came up in the elective security class I took).

Comparing the two, I did find the formal discussion in the ethics class really interesting. It was a simple pass/fail course; we weren't really graded on anything beyond being required to show up and talk about the kinds of issues we might encounter in the field. Some were as mundane as a manager pushing you to shrink estimates and cut corners (ahem, software), some as serious as failing to do proper load analysis and having a shopping mall roof collapse after a heavy snowfall.


I imagine you might get a fair amount of interest in this kind of material, not least because ABET[1] is now requiring accredited C.S. programs to include it.

So if you're looking to put together something that others will use, my suggestion is to keep track of ABET's requirements (which are currently on the vague side in this area -- but they change every year).

[1] http://www.abet.org/


I'd like to hear your thoughts on the ethics of contributing to building the thousand-year Reich of the machine-assisted police state.


That comment probably needs some fleshing out.


Has anybody tried to assign an order of precedence to these principles?

The "consistent with the public interest" clauses of the 2nd and 6th principles suggests that the first principle takes precedence when it conflicts with either #2 or #6.

This is similar to how Asimov's famous 3 laws of robotics are written: the first law always takes precedence over the second, and the first two laws always take precedence over the third.

But the other principles don't mention any order of precedence.

Different partial orderings could yield very different interpretations of what software engineers should do in certain circumstances. For example, should we value #4 (integrity and independence) above #2 (best interests of the client) or below it? What if 4.06 (refusing to participate in corrupt orgs) has a detrimental effect on public interest in the long term?

Even if we acknowledge that it is impossible to produce a fully consistent ordering of the 8 principles and their numerous sub-principles, I think it would be interesting to identify conflicts and study how different people choose to resolve them. Everyone can nod in agreement when we talk about common moral principles, but it's only when the principles begin to conflict that things get really interesting.
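
To make that concrete, here's a minimal Python sketch (my own illustration, not anything from the code itself; the principle names come from the document, but the modelling is hypothetical). It treats the two stated "consistent with the public interest" clauses as the only precedence edges in a partial order, and reports when two principles are simply incomparable:

    # Sketch: the code's principles as a partial order. Only two
    # precedence edges are actually stated in the document (principles
    # 2 and 6 defer to principle 1); every other pair is incomparable,
    # which is exactly the gap being pointed out here.
    PRECEDENCE = {
        ("PUBLIC", "CLIENT_AND_EMPLOYER"),  # principle 2's clause
        ("PUBLIC", "PROFESSION"),           # principle 6's clause
    }

    def outranks(a, b):
        """True if principle a transitively takes precedence over b."""
        return (a, b) in PRECEDENCE or any(
            x == a and outranks(y, b) for (x, y) in PRECEDENCE)

    def resolve(a, b):
        """Return the higher-precedence principle, or None if incomparable."""
        if outranks(a, b):
            return a
        if outranks(b, a):
            return b
        return None

    print(resolve("PUBLIC", "CLIENT_AND_EMPLOYER"))    # PUBLIC
    print(resolve("JUDGMENT", "CLIENT_AND_EMPLOYER"))  # None: judgment call

Every pair that resolves to None is one of the conflicts worth cataloguing.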

For example, what would have been the principles that bothered Edward Snowden the most?


The preamble addresses how they think software engineers should handle tensions between different sections of the code:

> In some situations standards may be in tension with each other or with standards from other sources. These situations require the software engineer to use ethical judgment to act in a manner which is most consistent with the spirit of the Code of Ethics and Professional Practice, given the circumstances.

> Ethical tensions can best be addressed by thoughtful consideration of fundamental principles, rather than blind reliance on detailed regulations.


Unfortunately, this entire concept is flawed by its very first point:

1.01. Accept full responsibility for their own work.

To be a fair principle, this requires a series of other things to happen, ultimately depending on something we don't know how to do yet.

Firstly, in exchange for accepting that responsibility, software engineers need the same right as any real engineer: to veto the deployment of a project for which they are responsible until, in their professional opinion, the work has been done to a satisfactory standard.

In order to protect engineers who do exercise their professional judgement in that way, possibly against the immediate interests of their employer/client, it must be impossible to replace them with someone else who doesn't have the same right or to just fire them and carry on without anyone else taking their place. This requires a mechanism for recognising sufficiently capable people who are qualified to take on such jobs, akin to other chartered professions.

Establishing a recognition programme of practical value in turn requires that some impartial body exist that can evaluate individuals and determine whether they are sufficiently capable to merit recognition. That evaluation must necessarily be based on objective criteria to guarantee impartiality.

And in order to do that, we need to have objective criteria to identify "good" software developers and appropriate professional practices in the first place, which we don't. And that's why software development isn't ready to be a real engineering profession yet.


Texas licenses software engineers, as do some Canadian provinces, and IIRC (from my SE ethics course) some other US states are implementing similar programs. https://en.wikipedia.org/wiki/Software_engineering_professio...


I appreciate that some places do treat the term "software engineer" rather more seriously, but unfortunately the fundamental problem remains: it is difficult to license someone to recognise their ability if you don't have some objective means of determining that ability first, and I don't think we do yet. The existing efforts, at least those I've seen before, seem to be more concerned with general professional ethics than any particular competence with developing software specifically.


Anything that moves us towards the professionalization of Software Engineering is a bad thing, and against the public good.

Nothing has served the interests of privacy, openness and security more than the fact that anyone can become a programmer, without any certification or required education. We are a bottom up industry where big projects can be built and maintained by the efforts of engineers, without the approval of any corporation or professional body.

Ethics are not always something that is valuable to formalize and teach.

We should resist professionalization not just because it is driven by rent seekers who want to make their cut from accreditation and mandatory courses, but because it attacks the very roots of the hacker tradition.


If by "professionalize" you mean state licensing, then I would agree. Compulsory licensing is simply a way to cartelize the industry by lobbying the state to create hoops and barriers to entry into the craft, under the guise of legislation "serving the public good". It's thuggery: raising prices by hampering competition. Hopefully the ACM doesn't go that route, because that would be stupid.


I don't understand the point.

Replace "software engineer" with any profession and the meaning doesn't materially change.

Why do we need this?


The ACM is the type of organization that feels obligated to write a code of ethics, and dutifully pay lip-service to it. This isn't about having an actual effect on the world; it's pure social signaling.


We'll have software ethics when we have business ethics and political ethics.


Big committees sure can produce lots of words.

It is good and proper to do the good and proper thing under the circumstances of the situation in which you are presented with the opportunity to do good and proper things.

I do have some sympathy for the problem here -- it's impossible to be actionable and specific in a document like this. This document could be the springboard for other more relevant work. Maybe this needs to exist as a first step. Maybe it requires 25 people to produce. Maybe my ACM dues (and IEEE dues!) are adding value. Maybe.


When you're told to do something you can hardly go, 'Well, I'll get back to you in two weeks once I've done the preliminary public impact analysis.'

Codes of ethics for engineering are relatively simple to apply because the consequences are relatively simple to foresee (not to say that engineering is simple, but that it remains largely an engineering problem): build it shoddily and you've broken them. And they're emotive, because the consequences are both simple to foresee and catastrophic. Most people don't want to see someone killed.

But most people don't set out to write buggy code, or to cause bad things to happen in the software world - it's just that the degree of insight required to avoid such things is enormous. Especially so when working with highly restricted degrees of freedom under management that doesn't give you all the facts.

The ethics for software engineers need to be more than collections of principles pattern-matched off the ethics of other professions. They need to be different from the ethics for mechanical engineers, just as the ethics for lawyers are. For all that we both deal with complex systems, they are very different professions, each presenting its own particular types of problems with turning ethical feelings into consistent sets of principles and actions.


Did you even read the bullet points? None of them are about writing perfect code or non-buggy code. They are so high level that, as someone else noted, they are not a software engineering code of ethics but simply a professional code of ethics.

>When you're told to do something you can hardly go, 'Well, I'll get back to you in two weeks once I've done the preliminary public impact analysis.'

You can if that's part of your process. You do plan things out before you work, right?


> Did you even read the bullet points? None of them are about writing perfect code or non-buggy code.

A: I don't appreciate insults with a question mark stuck after them.

B: While they don't tell you how to write non-buggy code, many of the points clearly relate to code quality:

"1.03. Approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests, and does not diminish quality of life, diminish privacy or harm the environment. The ultimate effect of the work should be to the public good."

Safe, meets specifications, passes appropriate tests. In other words, isn't buggy. Bugs are just behaviour that doesn't meet the spec; software that meets the spec but doesn't do what you want is poorly designed - i.e. the spec is wrong. Indeed, one can advance a meaningful argument that as long as it meets the spec, software is never fixed, only redesigned.

"3.01. Strive for high quality, acceptable cost and a reasonable schedule"

High quality. What would you call a piece of code with bugs in it? Low quality, one would rationally assume.

"3.10. Ensure adequate testing, debugging, and review of software and related documents on which they work."

Adequate testing, debugging. The purpose of tests is to make sure you meet the spec across the expected input space without producing undesired behaviour - any deviation from this is a bug. Debugging - well, heck - that's directly in there.
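
To put the spec-vs-bug framing above into concrete terms, here's a quick hypothetical example (a Python illustration of the definition, not anything from the code of ethics): treat the spec as an executable predicate, and a bug is any input where the implementation deviates from it.

    # Hypothetical illustration: the spec as an executable predicate.
    def is_leap_year(year):   # implementation, with a deliberate bug:
        return year % 4 == 0  # it forgets the century rules

    def spec(year):           # the actual Gregorian rule
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # "Adequate testing" means checking the expected input space; any
    # deviation from the spec is, by definition, a bug.
    bugs = [y for y in range(1, 2401) if is_leap_year(y) != spec(y)]
    print(bugs[:3])           # [100, 200, 300]

Software that passes this check yet still does the wrong thing isn't buggy; its spec is wrong, which is the distinction drawn above.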

"6.03. Extend software engineering knowledge by appropriate participation in professional organizations, meetings and publications."

Okay, so this one doesn't strictly speaking bear on bugs alone. Still, writing non-buggy software should be its natural consequence, and it fits with the theme of code quality established in the rest of the document. I don't think I've met any knowledgeable programmers who write buggy code.

"6.08. Take responsibility for detecting, correcting, and reporting errors in software and associated documents on which they work."

Errors in software, aka bugs.

"7.04. Review the work of others in an objective, candid, and properly-documented way."

As with 6.03, not solely concerning bugs, but doubtless relating to them at least partially.

"8.01. Further their knowledge of developments in the analysis, specification, design, development, maintenance and testing of software and related documents, together with the management of the development process."

Maintenance and testing. Okay, that could relate to performance, but one rather hopes you're catching any bugs that might be in there too.

"8.02. Improve their ability to create safe, reliable, and useful quality software at reasonable cost and within a reasonable time."

Reliable software, i.e. software without bugs. Heck, however poor your software is in terms of efficiency and user interface, if it has no bugs then it's at least reliable - it keeps to the contract in the spec.

> You can if that's part of your process. You do plan things out before you work, right?

To an extent. A client is not gonna be super-happy to see two weeks work down as 'Corporate Social Responsibility Report - £6k' on their invoice though. Nor is your boss, if you work in a team, liable to be pleased when you tell them to wait two weeks while you make sure they're not telling you to do evil.


> To an extent. A client is not gonna be super-happy to see two weeks work down as 'Corporate Social Responsibility Report - £6k' on their invoice though. Nor is your boss, if you work in a team, liable to be pleased when you tell them to wait two weeks while you make sure they're not telling you to do evil.

For many cases of software development, just keeping your ethical principles in the back of your mind as you work should be enough. A formal study and report of the ethical considerations of your work is not necessary.

However, there are certainly cases where a more formal review would be appropriate - generally high-impact/high-cost-of-failure systems, e.g. the electrical grid, car/plane control systems, or systems that handle money or private information. In many of these cases there are existing legal or professional standards, such as PCI, SOX, or HIPAA. Those standards consist of more specific guidelines than this code of conduct, but they generally follow the same themes. They also generally require independent review and certification that you're following the rules, so in these cases formal reports are already the norm.

There's a spectrum. Near one extreme of the spectrum, your client/employer is going to be happy to see two weeks down as 'Corporate Social Responsibility Report - £6k', or something similar.


This is great, and I'd be happy to sign it (well, other than being a college dropout) but the reality I've seen in industry is that 30 seconds after adopting this pledge, I would be asked to do something that contradicts it. Then get weird stares when I inform them that their request "goes against the Software Engineering Code of Ethics" while they fill out my pink slip.


Exactly. I don't see how it would be practical to work in the software field in any existing society consistently with ACM's Principle 1. The most salient example: to make a living in their occupation in a capitalist country, most software engineers (and developers, architects, managers, and related categories) must work for commercial companies whose business models rely on lock-in tactics, licensing that forces users to submit to data-mining in order to use the products, undermining functionality for users to benefit third parties, and other arguable abuses.

Even using a licence more restrictive than GPL seems to me unethical and against public interest, but would exclude the majority of employment prospects.

Of course not everyone agrees on the preceding point - many believe commercialization of software is perfectly ethical. Similarly there is disagreement over whether working for, say, the NSA would be ethical. And such differences in turn point up the hazard of any one organization's vision of ethical conduct being pushed on an industry.

The ultimate hazard is codes of conduct becoming de facto enforced through hard or soft state policies. The ACM does not appear to be advocating that at the moment (nor is the parent poster I'm replying to), but that is where this sort of thing leads.


> Even using a licence more restrictive than GPL seems to me unethical and against public interest, but would exclude the majority of employment prospects.

No, it would not.

You may believe it would because the vast majority of the software we use is widely distributed (OS, web browser, office suite, games…). On the other hand, the vast majority of software produced is actually custom software. (As you may have noticed, the two occurrences of "vast majority" don't refer to the same quantity.)

Custom software doesn't need to be proprietary to sustain its developers' income.


I prefer the IEEE's code: http://www.ieee.org/about/corporate/governance/p7-8.html

It spends more time on the nature of the work, and less time covering every possible base someone on a committee thought was important.


I like the emphasis that the duty to the nebulous public interest is about actual safety and potential disasters.

"and to undertake technological tasks for others only if qualified by training or experience, or after full disclosure of pertinent limitations"

That's similar to "Engineers shall perform services only in the areas of their competence" from http://www.nspe.org/Ethics/CodeofEthics/index.html. If software engineering is to become a profession, engineering is the right model, but with recognition that the vast majority of software does not have deadly failure modes like a bridge or airplane.


"and does not diminish quality of life, diminish privacy or harm the environment."

This is a very strange meaning of "ethics". I understand ethics as relating to what the profession itself does, i.e. a lawyer or priest cannot ethically divulge certain communications. By pulling in this abstract public interest, the software engineer is supposed to conform to some specific balance of environmental damage and privacy against other goods. Does this make every Facebook engineer and mining-software developer unethical?


I am shocked that the ACM doesn't have the code of ethics behind their traditional "serving both professional and public interests" paywall.


> 1. PUBLIC - Software engineers shall act consistently with the public interest.

In some places, "public interest" can run counter to "human rights". So I would rather have (order of precedence applies):

1a. PUBLIC - Software engineers shall act consistently with human rights.

1b. PUBLIC - Software engineers shall act consistently with the public interest.


"In some places, 'public interest' can go counter to 'human rights'"

Would you give an example of how you think that's possible?


Any place you can think of where a government will invoke "public interest" to silence those who challenge this government's trampling of human rights.


Doesn't count. Reality doesn't change because they use spooky definitions for their newspeak terms.

On the other hand, I'm sure there could be cases where public interest and human rights are at odds. Say, torturing an enemy soldier over an upcoming attack from his own unit. (Assuming war was the solution in the first place…)

What would you choose? I personally might choose torture, if I estimated it to be the lesser of two evils. (The other evil would have to be pretty big, and I'd better be sure it was really the right call, as opposed to being motivated by revenge or something; moral dilemmas suck.)


Now what? If people actually cared about these codes, Facebook would shut down. (And personally, I think that would be awesome.)

I am all for a more ethical world but it just won't work like this. The best thing a software engineer can do to boost ethical standards is to __not work at an unethical organization__.



