Yes, software. Broadly construed. Computer programs.
Some of you are thinking "ones and zeros!" and some are thinking "electrostatic charges!", which are in this case the same thing, and they're not wrong. But that's the kind of answer which is remarkably unhelpful in all but the most limited circumstances. It is most especially useless for furthering the understanding of people who do not program and who have never programmed.
And since the whole of human life around the globe is reorienting to being intermediated and organized around programmed (and sometimes programmable) artifacts, it would seem understanding programming (if not necessarily understanding how to program) would be a very important thing for all people, if only to understand the world they now live in.
But to return to the question: some of you are thinking, "instructions!" Some of you who are thinking that mean it in the sense of what assembly language is expressed in, and some of you mean it in terms of the old analogy long used to explain programming to non-programmers, of it being like a shopping list – and those may or may not be different things.
But let's talk about the shopping list analogy for a moment. If you haven't heard it, it goes something like this. Imagine you were sending a robot to the store to go grocery shopping for you. So you want to give the robot a list of things to buy. Let's say that you need a loaf of bread, a gallon of milk, and a dozen eggs. You could make a list that says "a loaf of bread, a gallon of milk, and a dozen eggs". But the robot is really stupid. It's just a robot. So you need to be more specific than that. You need to tell the robot which sort of bread you want, or whether the robot should just pick the first loaf it comes to. You need to tell the robot whether to get whole or skim or low-fat milk. You need to tell the robot whether the eggs should be the cheap ones, the free-range ones, the extra omega-3 fatty acids ones, or what.
You are probably also going to want to tell the robot - because if you don't tell it, it won't know – that it should not place the milk on top of either the bread or the eggs.
You may also want to tell your robot to inspect the bread for mold, the milk for the expiration date, and the eggs for breakage, before accepting any specific item. And because the robot is so dumb, you have to specify that if the first package of Dan's Bakery Six Grain with Extra Gluten is moldy, it should replace that package on the shelf and pick a different package of Dan's Bakery Six Grain with Extra Gluten and check it – and then repeat that process until it either finds an acceptable (non-moldy) package of Dan's Bakery Six Grain with Extra Gluten or runs out of Dan's Bakery Six Grain with Extra Gluten to check, in which case it needs to.... what?
What do you want your robot to do if all of the available bread of the type you want is moldy?
Do you want it to pick a different type of bread? Do you have a prioritized list of bread preferences you could have the robot work down?
Do you want it not to buy bread at all?
Do you want it to buy based on some other criteria, like, "just pick the cheapest wheat bread"?
It's up to you. It's your robot. It's your shopping list. You decide.
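The robot's errand can even be sketched as actual code. Here is a minimal sketch in Python – the function name, the shelf contents, and the preference list are all hypothetical illustrations, not any real robot's API:

```python
def pick_bread(shelf):
    """Buy bread the way the owner decided: preferred brand first,
    inspect for mold, fall back down the preference list, or give up."""
    preferences = [
        "Dan's Bakery Six Grain with Extra Gluten",  # first choice -- a decision
        "any wheat bread",                           # fallback -- another decision
    ]
    for wanted in preferences:
        for loaf in shelf:
            matches = (loaf["name"] == wanted or
                       (wanted == "any wheat bread" and loaf["wheat"]))
            if matches and not loaf["moldy"]:  # inspect before accepting -- a decision
                return loaf
    return None  # buy no bread at all -- also a decision

# A hypothetical shelf:
shelf = [
    {"name": "Dan's Bakery Six Grain with Extra Gluten", "wheat": True, "moldy": True},
    {"name": "Dan's Bakery Six Grain with Extra Gluten", "wheat": True, "moldy": True},
    {"name": "Wonder Wheat", "wheat": True, "moldy": False},
]
print(pick_bread(shelf)["name"])  # both preferred loaves are moldy -> "Wonder Wheat"
```

Every line of it encodes one of the decisions above: the prioritized preference list, the mold inspection, and what to do when everything acceptable runs out.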
And that's what software is made of: software is made of decisions.
I'm not just saying that one makes decisions when making software. That's true of the making of all made things, from knitting tiny sweaters for dolls to erecting valley-drowning hydroelectric dams. But dolls' tiny knit sweaters are made out of yarn or other fiber, and valley-drowning hydroelectric dams are made of masonry and metal. They are made of decisions, too, but if you subtracted out the decisions, there would still be the constituent matter: a skein of floss, a pile of concrete. If you subtracted out the decisions from a computer program, there wouldn't be anything at all.
Decisions are all there is to software. That's what it's built out of. A computer program is an instantiation of a set of decisions. There is nothing else in a computer program but decisions. To make software is to make decisions – carefully crafting them, and trying to express them as clearly, precisely, and rigorously as possible.
A computer program is a specification of the behaviors of an inanimate object. And since inanimate objects, definitionally, do not have behaviors of their own, every single one of those behaviors specified in a computer program – and every single aspect of every one of those behaviors – has to be deliberately chosen and imbued into the object by code. Every single behavior, at every level of abstraction, represents a decision as to what the program is to do in that moment being described.
There is nothing else. If the programmer with their hands on the keyboard does not know what the program is to do, their hands remain still, the edit window remains empty.
A computer program is an expression of the set of decisions as to what behaviors an inanimate object should have, and under what circumstances.
Those decisions can be highly contingent and circumstantial; that is, a program can be told to examine momentary conditions and respond differently to different conditions, and these contingencies can be based on anything the computer program can perceive. Those decisions can be almost limitlessly sophisticated – they can be Turingly-completely contingent. They can even be (pseudo-)random. But however abstracted those decisions are, they are decisions.
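Expressed in code, such contingencies are just conditionals. A minimal, hypothetical sketch:

```python
import random

def greet(hour, name=None):
    # Respond differently to different momentary conditions -- each branch a decision.
    if hour < 12:
        salutation = "Good morning"
    elif hour < 18:
        salutation = "Good afternoon"
    else:
        salutation = "Good evening"
    if name is None:  # contingency: do we even know who this is?
        name = random.choice(["friend", "stranger"])  # (pseudo-)randomness is a decision too
    return f"{salutation}, {name}."

print(greet(9, "Ada"))  # -> Good morning, Ada.
```

Trivial as it is, every branch of it is a decision somebody had to make about what the program should do in that circumstance.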
If software is a building, then decisions are the concrete of the foundation, the wood of the framing, the nails and the hangers, the sheet-rock, the moisture barrier, the siding, the insulation, the wiring, the plumbing, the paint, the carpets, the carpet tacks, the tiling, the grout, the window glass and sash, the roofing, the gutters. Everything, everything, everything in a computer program – in software – is decisions. There is nothing else for a computer program to be made of.
This is the crucial thing that needs to be understood both by programmers and by the non-programmers who hire programmers to program for them.
Communications breakdowns and other strife between programmers and those they program for are overwhelmingly about conflict over decision-making.
Sometimes it's because the person retaining the programmer – whom I'll call the client – and the programmer are struggling over who gets to make a decision. But far, far more often, the conflict is over who has to make a decision.
We, in our culture, are used to thinking of making decisions as a privilege. It can be. But it's not intrinsically, or even often. Often, decision-making is an obligation. A responsibility. It is work.
("Where do you want to go to eat?" "Ehh, I don't know. Where do you want to go to eat?" "Oh, I don't know. What do you feel like?" "Anything is fine by me. Whatever you want." "I'm good with anything. You pick." "No, you pick.")
It does one a world of good to get over the often self-serving notion that "letting" other people decide things is always doing them a favor, and not, as it sometimes is, an abdication of responsibility that dumps additional work on their shoulders.
The client – that is the person commissioning work from the programmer – is the person who ultimately determines what the program should do. After all, it's their program. This means the programmer is going to need the client to be willing to answer questions about what the client wants and needs the program to do.
These questions are often terribly annoying, being both persnickety and difficult. Questions like, "On the screen that shows the listing of widgets with their frobs, will it ever be the case that a widget has more than one frob?"
The client may be tempted to respond, "Why do you even need to know?" The answer is that the programmer can make it work either way, but there are consequences of that choice. Huge consequences, that can involve a lot of money. And changing your mind later can mean throwing away a lot of work.
For instance, if your program can have an unlimited number of frobs per widget, every interface that presents the user with both widgets and the widgets' frobs needs some sort of scrolling or paging or other way of presenting an effectively infinite list of items, because the screen can only show so many. If you then change your mind and say, "You know what, let's limit widgets to one frob each", all of the work that went into making those screens know what to do when there was more than one frob per widget – decisions like "what should it do when there are more frobs per widget than can be displayed on one screen?" and "what order should the frobs appear in?" and "should the listing of frobs be optional in some way, so the user can just see the widgets without being drowned in their frobs? If so, how should the user get to control that?" and so on and so on – all of that work gets discarded. But you're probably stuck paying for it now, if it's already happened, and the programmer's quality of life didn't get improved by having something they worked on thrown out.
(If you're not a programmer: imagine that your boss orders you to prepare a big report, and you spend a week doing so, and then when you submit it to your boss, your boss says, "Oh, this. Right. I don't actually need this after all. But thanks.")
Decisions build upon decisions. If you decide that the program you're commissioning needs to support multiple frobs per widget, then all these subsequent decisions about how the program will handle multi-frobs-widgets will need to get made – it's absolutely non-optional. Because software is made of decisions, the program literally cannot exist without these decisions getting made, in precisely the same way that your general contractor cannot pour the foundation for your house unless somebody comes up with some concrete.
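The difference between the two choices shows up in the very shape of the data, and in the extra machinery the multi-frob choice drags in. A hypothetical sketch (widgets and frobs are, of course, invented for illustration):

```python
# Decision A: exactly one frob per widget -- a simple field, nothing more to decide.
widget_single = {"name": "widget-1", "frob": "red"}

# Decision B: unlimited frobs per widget -- now a list, and every screen that
# shows it inherits decisions about ordering, paging, and overflow.
widget_multi = {"name": "widget-1", "frobs": ["red", "blue", "green"]}

PAGE_SIZE = 2  # "how many frobs fit on one screen?" -- another decision

def frobs_for_screen(widget, page=0):
    """Paging logic that only has to exist because of Decision B."""
    frobs = sorted(widget["frobs"])  # "what order should they appear in?" -- another decision
    start = page * PAGE_SIZE
    return frobs[start:start + PAGE_SIZE]

print(frobs_for_screen(widget_multi, 0))  # -> ['blue', 'green']
```

Delete Decision B later and `frobs_for_screen`, with all the decisions inside it, goes in the bin.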
The client might say, "Well, just make it (with/without) multi-frob capability, and we'll change it later." You can totally do that, and sometimes that's absolutely the right thing to do. But not making a decision is making a decision. Saying "just do this for now" is a decision, and it will get built into the software, because that is what the software is made of, that is all software has to be constructed of.
It is like telling an architect, "Just make one with one bedroom for now, and if we change our mind, we can add one later." Yes, you can do that. Maybe that's a good idea, maybe that's a bad idea. That's up to you to evaluate in light of your own circumstances. But the one thing it unquestionably is, is a decision.
Because decisions layer on top of decisions, the longer ago a decision was made (and built into the software) – that is, the further a software development project has proceeded past any given decision-making point – the deeper that decision gets built into the structure of the thing. And, consequently, the more expensive, in every sense, it is to change. Changing the structure of the foundation – especially changing how it bears loads – gets harder and harder the more building is built on top of it.
Again, that doesn't mean one shouldn't make changes late in the process. It does mean, however, one shouldn't be shocked to find that it's hugely expensive. This is a major cause of budget overruns.
When a programmer gives a quote or time estimate for the project, the programmer makes one of two assumptions. There are decisions – a lot of them – that only the client can make, because it's the client's software and up to the client to dictate how the software should behave. Either the programmer prepares that quote predicated on the assumption that the client has and can readily provide all the client-side decisions, or the programmer figures that the client is not going to have figured out all these sorts of decisions, and pads the estimate to accommodate the client figuring these things out as the project proceeds. That extra budget/schedule padding may be to allow for the programmer to help the client figure these things out (called "requirements discovery", and personally one of my favorite parts of programming, actually) or it may be to allow for the programmer to do work over when the client realizes a decision needs to be changed.
Clients, obviously, prefer smaller estimates to bigger ones. Everyone wants their software done faster and cheaper. This puts pressure on programmers to make the former assumption: that the client will be able to rapidly provide all these decisions as to the behavior of their program. This is typically unrealistic to expect of clients.
For one thing, clients, not being programmers, have almost never thought through the behavior of their envisioned software to the level of detail and with the level of rigor necessary to instantiate it in actual code. That's not a fault – non-programmers don't think that way – but it's a huge problem when non-programmers decide not to fund the activities which compensate for it.
Because those decisions that only the client can make need to be supplied to build the software out of. If they're not supplied by the client, well, either work halts for want of materials, or the programmer makes a wild-ass guess based on no knowledge of how the thing should work because it is not their thing, and that guess – surprise, surprise – turns out to be wrong and has to be redone, much later when it's finally discerned how wrong it is.
One way or the other, the client inevitably winds up paying for not knowing – either by forking over up-front to find out what their decisions should be, or paying for work to be redone when they belatedly find out the hard way what their decisions should have been. Or, you know, both.
That's not a punishment. That's just the cost of materials sourcing. If the client happens to have the decisions already made and lying around, and they prove sound – "Oh, hey, I already have all these two-by-fours you can use to frame the house" – the programmer can just use those. If not, somebody's got to go get some.
But here's the other thing, and an awful thing it is. Regardless of whether the programmer makes the lean estimate (with no allowance for client requirements discovery/re-writing) or the padded one (with such an allowance), the client may wind up exceeding the estimated allowance.
This is not because the client is bad or wrong or stupid. (Though to be certain, the client being bad or wrong or stupid will make the overrun so much worse.) This is because making a new thing in the world is hard. It is super amazingly hard. That people don't get it right is not a surprise; it's the expectation.
Right now, there is a huge controversy brewing about the programming of self-driving cars, because they take the famed "Trolley Problem" of philosophy and make it bone-crunchingly real. The decisions need to be made, and made in advance and instantiated in software: what should a self-driving car do when it cannot avoid an accident, and can only choose between striking living beings? Continue and crash into a family of five, or swerve to hit a single-occupant car? We, society as the client, don't have an answer to that. We literally do not have a consensus, or, honestly, even a vague sense of what the right answers are to the question, "What should the car do in this horrible no-win situation?"
Even if you're developing something less life-and-death than the software that drives a car, the questions about what your program should do in certain circumstances can be absolute stumpers. Sometimes we only can find out that we got them wrong by building our best guesses in and trying them out, and getting to experience them for real.
If you're going to make software, a peace needs to be made with that fact. Among people who make software professionally and commercially, this is well known. The culture of professional software development has a variety of approaches to dealing with it, and they all cost money.
For instance, there is the approach described in the maxim, "Build one to throw away". Yes, that's a real thing. It was described, possibly for the first time, by Fred Brooks in his legendary The Mythical Man-Month: Essays on Software Engineering, in 1975, where he observes (synopsis from wikipedia):
When designing a new kind of system, a team will design a throw-away system (whether it intends to or not). This system acts as a "pilot plant" that reveals techniques that will subsequently cause a complete redesign of the system. This second, smarter system should be the one delivered to the customer, since delivery of the pilot system would cause nothing but agony to the customer, and possibly ruin the system's reputation and maybe even the company.

It is also the motivation behind the doctrine, attributed to Mark Zuckerberg of Facebook, "Move fast and break things". It was what motivated the Agile software development methodology, which was, as the name suggests, an approach which attempts to be more robust and less expensive in the face of the often painful emergent lessons as a project unfolds.
At the end of the day, it's just plain hard to anticipate how you're going to want your software to behave. The decisions, of which there can seem to be a nigh-infinite number on even the simplest, humblest programs, can be extremely hard to get right when made in advance.
It can be hard to tell what you're going to want your program to do in advance, and the decisions you make can turn out to interact with one another and with unanticipated environments, in surprising ways. Super surprising ways.
Trying to imbue an artifact with useful or pleasing behavior by making a whole bunch of decisions about how it should behave in advance is, necessarily, to encounter the unknown: you don't entirely know what the environment is that your programmed object will find itself in, and you don't entirely know how your choices of prescribed behavior will serve in that environment. You may think you know what you want your program to do, and then when it does it, you find you didn't.
True story: I once worked on a website for an organization which had lots and lots of research groups. The organization, reasonably enough, decided that a lot of people coming to its home page were actually looking for specific research groups, so it wanted to present the visitor with a convenient list of the research groups, all with links to the individual research groups' pages. The client representative and the designer decided the home page should have a pull-down menu on the left side of the page, that would have all the research groups listed by name. They knew it would be long because there were a lot of research groups, so the menu would need to scroll, but that would be fine.
So we implemented this thing and populated it with real data.
When the designer came to fetch me to see how it worked, he was laughing: the menu stretched halfway across the page.
They'd thought about how long the menu would be – a function of how many research groups there were – but had failed to reckon how wide the menu would be. The width of the menu was a function of how long the names were of the research groups. And while all the research groups had short nicknames the people in the organization use to refer to them internally, their official full names – the ones on their research grants – were often these two- to three-hundred character monstrosities.
This precipitated a minor political crisis in the organization. Should they be using the full names? ("But our users don't know the official names, they use the nicknames too!") Or the nicknames? ("But OMG what if the NSF saw it?! What if our NSF grant officer is trying to find us through the web site and doesn't know the nickname?!") Or let each research group choose for themselves? ("But what about consistency?!") Should we not be using a menu? Should we use a menu but be line-wrapping the names?
(We dropped the menu, and made it a link to a separate page with a listing of all the projects, in alphabetical order.)
Whenever you embark on a programming project, you are just about certain to discover you have made some bad decisions that got built into your program, and now have to be changed, possibly at considerable expense. This is true regardless of whether or not you're a programmer. It is true not because there is something wrong with you. It is true because reaching the very end edges of our ability to anticipate how we are going to want things to be and coping with the unknowns we encounter is part of the fundamental nature of developing software.
There is a joke in science, "If we knew what we were doing, we wouldn't call it 'research'." I have long wished we had an expression for a similar sentiment about programming. Because programming naturally, regularly, and just about always involves an encounter with the unknown, and having to reckon with what you didn't reckon with.
The unknown always brings uncertainty. To budgets. To schedules. To results.
So what is to be done about this? One broad consensus that software development has managed to come to is that the most important thing for keeping costs down is the client being as deeply involved in the development project as possible. If the client abdicates responsibility for the decisions that only the client has a chance of making right, and the programmer halts work – or, worse, guesses – that does not bode well for the project. If the client stays engaged in the project, making the decisions they need to make, that increases the chances that the decisions the software is built out of are the right ones. If the client is constantly trying out draft versions of the software, they have a chance of catching bad decisions earlier, when it is cheaper to fix them, rather than later, when it is not.
This is absolutely not what clients expect. Most people who are not previously familiar with software development think, when they retain a programmer, that they are going to have a little meeting, or even a long meeting, and tell the programmer what they want, and then the programmer will go away and make it. And maybe the programmer will come back once or twice with some questions for further elaboration ("did you want this in blue or green?"), but then the programmer will present the client with the completed program.
Clients with this expectation find it vexingly confounded. They find their programmer trying to get their attention with questions much more frequently than they anticipated. They did not expect to have to spend so much time interacting with the programmer. They did not expect to have to spend so much time "trying out" parts of the program. They did not plan their schedules to allow for all that time to be sunk into the project, and now find themselves harried and frazzled as they try to respond to all the questions and still do all the other things they had on their plate. Or they put off answering the programmer's questions; their project stalls, while the programmer waits for them to get back in touch with the decisions the programmer needs, and the anticipated completion date slips and slips and slips.
Here is a question no client has ever asked me: "how much of my time will you need?" I wish I'd figured out earlier that it was a discussion that needed having explicitly.
The expectation that one can retain a programmer, tell them once, up front, everything one wants the program to do, and then some time later receive an acceptable working program, is largely an unreasonable expectation.
Professionals in software development have a term for this: "throwing it over the wall". It refers to the idea you can throw the project over a metaphorical wall to the programmer, and when they're done, they throw the completed program back over the wall to you.
There is a vast history of clients who, having been confronted for themselves with the decision-making-intensive nature of software development, cling rather desperately to the throwing-it-over-the-wall approach. They conclude that the process could work if only they were to sit themselves down and write down all the decisions they possibly could, thinking through every contingency they possibly can, and putting it all in one big document they could then hand to their eventual programmer. The idea being that then the programmer would just look up all the client's decisions for the program in a handy document.
This kind of document is called a specification, or "spec" for short.
Now, thinking through things deeply, and even in advance, is not a bad thing, and I do not ever want to discourage it in any domain. And in software development specs can be quite helpful. When I see a project with a good and thoughtful spec, I, as a programmer, think, "These are people who will be a pleasure to work with: they are forward thinking, attentive to detail, serious about this project, and have some idea of what it is they want their program to do."
But the idea that a spec can eliminate all of a programmer's questions for the client is a false fond fancy. The idea that if a spec is just detailed and thorough enough the client won't ever have to talk to the programmer again, except to say "thank you" when the golden master disk is put in their hands, borders on the delusional.
No one will ever write a spec that is so foresightful that it will correctly anticipate all the decisions that need to be made, and all the revelations that emerge along the way about what the decisions should have been.
A spec is a plan, and no plan has ever survived contact with the end users.
The software industry also has a term for this approach: it's called the "waterfall model", because, as the water in a waterfall only flows in one direction, this approach to software development presumes a unidirectional development process, and, as the pools of a cataract, has certain predictable, ordered stages. Also, and I have never seen this discussed explicitly, it depicts a process where work only moves from higher status people to lower status people, the way water flows down hill: the waterfall model is one with certain ideas of authority baked in, where the workers at each next stage are expected to build on the previous stage's work, and cannot question those decisions or send it back for revision.
Essential to the waterfall model is that there are no iterative processes. It is a plan for what will happen in software development that entirely denies the possibility that some important decision will turn out, somewhere later down the line, to have been made incorrectly. It simply doesn't allow for that scenario. It makes no provision for what to do when that happens – even though that happening is perfectly normal in software development.
"Waterfall" is something of a swear word among many programmers today.
I understand there are those who argue that "waterfall" is a strawman invented by partisans of other, competing software development methods, and that nobody really ever does or did waterfall.
This is an incorrect representation of the complaints against waterfall. Everyone who criticizes waterfall knows that nobody ever actually does it. Because waterfall is impossible. Waterfall is such a poor match for the reality of software development exigencies that attempts to employ the waterfall approach nigh-inevitably fail. You may think you're going to do waterfall, but that just means you are going to be painfully wrong – merely the latest in a long line.
No, the problem with waterfall isn't that clients do it. It's that clients perennially try to do it.
True story: I knew a researcher who hired a programmer to develop some software for her. She told me that she went to a conference once, for people who do her kind of research. Or so she thought: it turned out that the conference was for people who program the sorts of programs she used in her research. She was not a programmer, so (she told me) a lot of it went right over her head. But (she told me) she found it revelatory. "I learned that QA [testing] is an iterative process!" she told me, half conspiratorially and half proudly. (That was a great thing to have learned, I assured her.)
Part of why I am writing this is to confront this pattern. In software development, the client is the one who calls the shots, because it's their money. The client, reasonably enough, is very protective of their money. They would rather not spend any more of it than is necessary; they might, reasonably enough, not even want to spend the necessary part.
The problem is that, by controlling the purse strings, the client has the authority. And that puts the client in a precarious psychological situation: they are very vulnerable to wishful thinking. The programmer has very little leverage to tell the client, no, we shouldn't plan on doing it that way, because it will probably cost you more money in the long term. Quite to the contrary, the programmer may be under pressure – say, in a competitive bid situation – to indulge and affirm the client's wishful thinking, just to get the job at all.
The client is extremely vulnerable to convincing themselves, "None of that will happen on my project. It will be fine. I will write a great and professional spec, and I will hold the programmer to it, and it will come out great, on time and under budget, with no substantial corrections necessary along the way." There's nobody to tell them no, nobody to tell them they're being foolish, nobody to tell them they will not be the exception.
So I am. That is my purpose in writing this. One of them, anyway.
Now, non-programmers reading this perhaps are wondering, if the client is making all these decisions, what is the programmer doing?
The programmer is also making decisions. A different kind of decision. The client is the expert in what the program should do to fulfill the client's needs. The programmer is the expert in how the program should do those things. The programmer makes decisions about implementation. The programmer makes the decisions about "how am I going to express this to the computer?"
Here's an example: consider threaded comments, like those here on LiveJournal. LiveJournal is a program (well, a group of programs, but run with it) that needs to keep track of which comment is the "parent" of which other comment. Whenever anybody asks to see a page with comments on it, the LiveJournal program has to cough up the right comments, in the right order, and all threaded correctly. Threaded comments are an example of a kind of data which is arranged in a "tree". It turns out there are a bunch of ways of representing tree-shaped data in the sort of database LiveJournal uses to store comments (a type of SQL database). In fact, there are enough different ways to do it that somebody wrote an entire book on the topic, discussing the various ways of doing it and their comparative merits. LiveJournal's programmer (Brad) had to decide which way he wanted to store comments in LiveJournal's database, hopefully based on which approach he thought most advantageous in his particular case.
And if a client has any sort of tree-like data that they're going to want stored – threaded comments, org charts, outlines, Linnaean classification charts, for example – and the decision is made to use a SQL database, then the programmer is going to have to decide which way of representing the data in the database is best for their project.
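The simplest of those ways – commonly called the "adjacency list," in which each comment simply records which comment is its parent – can be sketched like this. This is a hypothetical toy schema (here via Python's built-in sqlite3), not LiveJournal's actual one:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE comments (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES comments(id),  -- NULL for a top-level comment
        body      TEXT
    )
""")
db.executemany("INSERT INTO comments VALUES (?, ?, ?)", [
    (1, None, "First!"),
    (2, 1,    "A reply to the first comment"),
    (3, 2,    "A reply to the reply"),
    (4, None, "Another top-level comment"),
])

def children_of(parent_id):
    """Direct replies are one easy query; assembling a whole thread takes
    repeated queries (or a cleverer representation) -- exactly the sort of
    trade-off the programmer has to decide about."""
    return db.execute(
        "SELECT id, body FROM comments WHERE parent_id IS ? ORDER BY id",
        (parent_id,),
    ).fetchall()

print(children_of(None))  # the top-level comments: rows 1 and 4
```

Other representations (nested sets, path enumeration, and so on) make different operations cheap and expensive – which is precisely why there's a whole book about choosing among them.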
The programmer takes the decisions provided by the client and extrapolates from them to make further decisions about how best to put together a computer program to suit the client. The programmer makes decisions that trade off between conflicting program values. Should the program be fast to load at the expense of being slower to run? Should the code be simple and easier for subsequent programmers to read, or more complex and thus easier to extend? Scalability vs. portability? Security vs. archiveability? Write from scratch or use pre-made components? And so forth, and so forth.
True story: I once, on a contract job, was handed a pile of tcl (it's a language) code that dynamically generated DHTML for displaying a huge tree of data in a web browser, and told to debug it. I took a look at it: it was beyond bizarre, almost unreadably baroque. I was like, WTF? Clearly, I thought, this needed re-writing, into something sane and logical, something using recursion, or even just iteration, something that was legible. I started in on doing this. And then the team lead stopped me.
"Siderea, how's that tree?"
"This code is unbelievable. The damn thing walks the outside of the tree to figure out how to render it, making use of this system of flag variables to keep track of where it is. It's pure spaghetti. I'm rewriting it to climb down the tree."
"Don't do that."
"Wha?"
"If you use recursion, every time you call the function, it will have to query the database again."
"Yes?"
"We have a really slow database."
"That slow?"
"That slow."
"Oh. So that's why it's like that."
"Yeah, grody as it is, it's a solution that gets all the data out of the database in a single call."
"...I'll put it back the way it was then."
In this circumstance, the elegant, clean readable way to do things would come with an intolerable performance cost: the end user, sitting at the web browser waiting for the page to load, would be waiting a long time. Meanwhile the poor server would be over-exerting itself, and every other page on the site that also needed to query the database would slow down, as it waited in line for the database to get back to it.
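To make the tradeoff concrete, here's a hypothetical reconstruction of the two approaches (the real code was tcl; the data and names here are all invented). The pretend "database" is a dict, and each lookup in it stands in for one round-trip to a slow database server:

```python
# Pretend database: parent_id -> list of (id, body) child comments.
FAKE_DB = {
    None: [(1, "root post")],
    1: [(2, "reply"), (3, "another reply")],
    2: [(4, "nested reply")],
}
query_count = 0

def children_of(parent_id):
    """Simulate 'SELECT id, body FROM comments WHERE parent_id = ?'."""
    global query_count
    query_count += 1
    return FAKE_DB.get(parent_id, [])

def render_recursive(parent_id=None, depth=0):
    # The clean, readable version: recurse down the tree. But every
    # call hits the database again -- one query per node visited.
    lines = []
    for node_id, body in children_of(parent_id):
        lines.append("  " * depth + body)
        lines.extend(render_recursive(node_id, depth + 1))
    return lines

def render_single_query():
    # The "grody" version's key idea: get ALL the rows out of the
    # database in one call, then do the tree-walking in memory.
    global query_count
    query_count += 1  # simulates 'SELECT id, parent_id, body FROM comments'
    by_parent = {}
    for parent_id, kids in FAKE_DB.items():
        for node_id, body in kids:
            by_parent.setdefault(parent_id, []).append((node_id, body))

    def walk(parent_id, depth):
        lines = []
        for node_id, body in by_parent.get(parent_id, []):
            lines.append("  " * depth + body)
            lines.extend(walk(node_id, depth + 1))
        return lines

    return walk(None, 0)

recursive_lines = render_recursive()
recursive_queries = query_count   # one query per parent visited

query_count = 0
single_lines = render_single_query()
single_queries = query_count      # exactly one query

print(f"{recursive_queries} queries vs. {single_queries}; "
      f"same output: {recursive_lines == single_lines}")
```

Same output either way; the difference is five round-trips versus one, and on a really slow database that difference is the whole ballgame.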
And that is the sort of decision that our hypothetical programmer is making. Hopefully, the programmer is making them based on what they anticipate the client will best want, and so these too can be decisions that the programmer wants to run by the client.
Sometimes technical decisions need to be shared between the programmer and client. For instance, if the client has no reason to have a preference, the choice of what programming language to write a program in is often left to the programmer. But clients often have reasons to care what language a program is in. Maybe the client needs the program to run on a platform that only supports certain languages. Maybe the client has standardized all their custom programs to be in the same few languages. Maybe the client wants the program in the language they know best, so they can read it and edit it for themselves (sometimes a client is a programmer!). Maybe the client is concerned about having to hire a replacement programmer and wants their program to be in a popular language with lots of programmers available to hire.
But mostly, the client relies on the programmer to make good decisions about how to implement the decisions of the client. This is both a good thing and a bad thing. The client who thinks, "Oh, good, I'm not responsible for those decisions!" should be introduced to the client who thinks, "You mean I have to trust my programmer to make important decisions?" Quite aside from the mutual edification they can provide each other, the entertainment value for observers can be considerable.
At this juncture, I think we can easily imagine a client and a programmer who have read all this. Neither is too happy about it.
We can imagine the client somewhat stunned and rather dismayed in the wake of this wake-up call, looking at the programmer – their programmer – who is looking a bit apologetic and defiant and despondent at once.
"But look," says the client to the programmer, "Okay, I can see the merit in all this. It has a hideous kind of sense. But I really don't see how this can work. I know generally what it is that I want, but you keep asking me questions – what I now realize are good questions, reasonable questions – about the specifics, and I just don't know the answers.
"Like take the business with the widgets and their frobs. I understand from your estimate that if we build the widget-wrangler to have multi-frob capabilities, that will be a really substantial increase in time and money. But I also understand that if we go with the cheaper option of single-frob widgets, and it turns out that we really ought to have gone with multi-frob capabilities, that's going to be even more expensive to fix. But the thing is, I don't know whether we need multi-frob capabilities!
"I included the frobs in the spec in the first place because I read what seemed to me to be a reasonable and persuasively argued article about widget-wrangling software that cautioned developers not to forget to include the widgets' frobs. I can send you the URL and you can read it for yourself. Then you'll know as much as me about the relationship of frobs to widgets.
"I simply don't know if multi-frob capabilities are necessary for all widget-wrangling software. Or if multi-frob capability is necessary for our specific users. Or if multi-frob capability is really attractive to some important subset of our users, like those belonging to big organizations who pay us large amounts of money for enterprise class site licenses. Or if multi-frob capability is a nice-to-have that only matters in weird exceptional cases, and the vast majority of our users wouldn't ever miss it if it weren't there. Or, frankly, if the article is full of crap, frobs are totally passé among widget-wranglers, and multi-frob handling is an active detriment to our users. I just don't know.
"I just don't know," continues the miserable client, "And without this knowledge, I can't begin to make a cost-benefit analysis of whether or not it's worth the money to take one risk or another. I can't tell whether it would be best to go ahead without multi-frob capability, whether it would be best to build out multi-frob capability, whether it would be best to do something in-between like build without multi-frob but with a little extra investment in doing it in such a way that changing to multi-frob isn't as expensive, or whether it would be best to put off doing the project entirely until I figure this out.
"I'm sure there's a best choice, but I don't know what it is or how to figure out which it is."
I have brought you to this low place for a reason. It's a lot like being trapped in a cave filling up with water, where you can't see any way out – so long as your head is above the surface. So long as you keep trying to keep your face from getting wet, you'll never escape. But once you entirely immerse yourself in the problem, get right to the bottom of it, open your eyes and look around, the way out – indeed, multiple ways out – is right there.
The software development industry has developed a whole host of specialists to assist clients with producing the quality decisions necessary for quality software.
The scenario I described of a client and a programmer, which is what organically arises when non-programmers hire programmers to program for them, is not actually the staffing model used in most professional organizations that produce software. Among people who make software professionally, there is not an assumption that a client and a programmer are sufficient between them to make well all the decisions that go into a software project. Among commercial producers of software it is common, if not conventional, for there to be other sorts of professional involved.
For figuring out how software that is to be used by humans (as opposed to used by other software) should be organized and represented to the user on the screen so that the user can make sense of it, and what ways the user should be able to interact with the software, there are User Experience designers (UX for short); UX experts not only make decisions based on research into Human-Computer Interaction (yes, that's a field of study), but conduct various forms of ad hoc study – interviews with would-be users, observations of how people do the task without the software, making physical mockups on paper to test, etc. – to find out how your users, actual and prospective, can best be served by the software.
Relatedly there are Information Architects and Visual Designers, both of which fall under the umbrella of UX. Information Architects help decide the conceptual organization of text-rich or otherwise content-rich, or navigationally complex software – websites, primarily. (Both LJ and Patreon could use some more IA love!) Visual Designers make decisions about how things should look; they're the interior decorators of software, choosing colors, fonts, decoration, illustration. What should the icons on the buttons look like? Ask a visual designer.
Understanding the needs and desires of people who might want to pay you to use your software – potential customers – falls under the rubric of marketing. Market research consultants can tell you about what extant research says about the market for various products, or conduct new research. As part of this they can help make technical decisions about compatibility with hardware, operating systems, and networks your target market uses. They may also be able to report to you what competitors are doing.
Sometimes software is created to automate or virtualize a manual, paper-based business process that already exists. This can be extremely challenging, because many fine details of how the process works – and sometimes how the process needs to work – aren't written down anywhere; they are knowledge passed from employee to employee as part of on-the-job training. It can also be challenging because business processes can be enormously complex and have very many different parties participating in them; in such cases, there are effectively many different kinds of users with different needs. People who are specialists in figuring out what a business process exactly is, how it works, and what its requirements are for reproduction or replacement in software, are called business analysts. Usually, business analysts are found in very large organizations with huge internal IT needs, where they often also have oversight authority for software development processes – indeed, the "client" for an institutional software development initiative, that is, the person representing the institutional interests and charged with bringing the software into existence for their institution, may have the job title "business analyst".
"QA" stands for quality assurance. QA testers or QA engineers can be thought of as specialists who help the programmer make technical decisions about how the software should function. They do two sorts of things. They check the software that's been developed to make sure it does what it was intended to – the QA tester will make sure that when clicked on, the button that says "MAKE GREEN" makes something green, rather than blue, or doing nothing, or crashing the computer. They also check the software that's been developed to make sure that it doesn't do what it wasn't intended to. The best description of that is probably this. (There's a wee bit of computer code at the beginning that you can skip with impunity. Just read the embedded tweets.) In performing this latter function, the QA engineer not only checks the program for mistakes, but helps the programmer (and the client) think through what the program should do under really weird circumstances.
If you're going to make software that deals with accounting, you should probably have an accountant involved. If you're going to build a digital library, you would be prudent to have a librarian contributing to the project. If you're going to put together a system for medical records, you might want to involve a physician. *coff* The term for a team member who brings specialist knowledge of some domain the software is supposed to be competent in is a subject-matter expert, or SME (often pronounced "smee"). "SME" is not a job title like the previous examples, but a role on a software development team. SMEs are able to make or inform decisions about how the software needs to work.
These are some of the kinds of specialists that our hypothetical client might seek out for assistance on this project.
Thing is, they all cost money.
Nobody's willing to pay money for something they don't see the purpose of. Clients who don't understand that software is made of decisions, who approach their development project with the assumption that decisions don't really need to be made, or can be put off, or aren't important, or are infinitely changeable – and that somehow the software will still get made, and it will be alright, and also on-time and under budget – don't see any need to pay money to get people to help them make higher quality decisions than they can make on their own.
Heck, clients who don't understand that software is made of decisions, and all that follows from it, often disdain, in their ignorance, these professional resources when they're effectively provided for free. There are corporate environments in which designers and testers and market analysts and business analysts and SMEs are on salary, and yet they have trouble giving away their insight and industry to their colleagues – both client and programmer.
Loose change thoughts:
• The technical term for what I'm talking about is "design". "Design" is the decision-making of making things out of decisions. The problem is that the "design" operator is overloaded: it means too many different things in the software development space. It is too often (erroneously) equated with visual design.
I don't think that's wholly a mistake. I think the culture of the Anglosphere can be amazingly design-hostile. (The UK may be a partial counter-example.) There is in our culture a thread of contempt for concerning oneself with how good the thing one is making is. It's considered prissy and fussy, unmanly, shrewish, unreasonable.
• This is because design is emotional labor. Any attempt to anticipate the "goodness" of the appearance, function, behavior, etc. of an artifact to its users is, definitionally, an attempt to anticipate another's subjective experience and moderate one's own choices on their behalf.
• People love being the beneficiaries of emotional labor, but that doesn't seem to stop many of them from holding it – and those who perform it – in contempt. Most especially when the prospect of doing some themselves crops up.
• This is, alas, why Steve Jobs had to be a bastard. Our culture says that his preoccupations, his sensibilities are worthless and unworthy. Steve Jobs' secret sauce was treating all design decisions as just as important as they actually are. Since the culture here deprecates doing so, getting an army of people to go along with it in prospect, before they see what the results are like (retrospect is so much easier), probably requires a lot of coercion.
• You're not wrong in surmising that I'm saying a program is a hole in a computer chip into which you pour money. Cost-control of software development is a notorious problem. This post is my own effort to answer the titular question of Tom DeMarco's essay, "Why Does Software Cost So Much?" (in the book of the same name). His answer, btw, is "Compared to what?"
• I'm not surprised I liked doing requirements discovery; it's a lot like being a therapist! It involves eliciting thoughtful, detailed exploration of complicated social phenomena about which the people involved have trouble remaining calm.
• Making software is probably more like writing fiction than any branch of engineering, in how it's made of decisions. Fiction, too, is 100% made of decisions.
• How much the artifacts of our lives are made of decisions has only gone up and up through the 20th century. It is amazing and mind-blowing to contemplate one's immediate environment and reflect on how much of what one sees looks the way it does – is the way it is – because in each and every particular, some human chose for it to be that way, and not some other way. It's called "plastic" for a reason; and when there are no defaults, everything must be chosen.
When I heard there was a podcast called "99% Invisible", it took me about a half a second to conclude (correctly) it must be about design.
• Our whole world now – for most of us, excepting visits to nature – is a "built environment": nothing is natural/default, everything is artificial/decided.
Not that those decisions are made for you; mostly they were made for other people, advancing other people's agendas. They are made as cheaply, as profitably, as seductively, as minimally tolerable as possible. Your appliances are all made to be as beguiling as possible, and to last not a minute longer than the point at which you (or your demographic cohort) can be anticipated to want a replacement.
• It is fundamental to the nature of the endeavor that when you hire a programmer to program for you, you will have to trust their programming judgment. Now I wish to be clear here: I am not telling you clients to trust your programmers. I am telling you not to hire programmers you cannot trust. If you find you cannot trust any programmers, maybe get out of the software development business. That advice holds, whether it is the case that all the programmers you encounter really are scoundrels and incompetents, or whether it's just that the notion of entrusting any part of your business/project to someone else gives you the heebie-jeebies.
(If the latter, I totally feel you. I'm the queen of control freak untrusting heebie-jeebies, myself.)
Nor am I suggesting you be uncritical. But when you hire a skilled professional to work on your behalf, ultimately, to get the benefit you're paying for, you have to let the professional do what you hired them to do, which rests on them using their professional judgment.
This post brought to you by the 63 readers who funded my writing it – thank you all so much! You can see who they are at my Patreon page. If you're not one of them, and would be willing to chip in so I can write more things like this, please do so there.
Please leave comments on the Comment Catcher comment, instead of the main body of the post – unless you are commenting to get a copy of the post sent to you in email through the notification system, then go ahead and comment on it directly. Thanks!
