Yesterday I tweeted a link to an article about overloading that I’d just finished. In that article, all my examples look a bit like this:
using System;

class Test
{
    static void Foo(int x, int y = 5)
    {
        Console.WriteLine("Foo(int x, int y = 5)");
    }

    static void Foo(double x)
    {
        Console.WriteLine("Foo(double x)");
    }

    static void Main()
    {
        Foo(10);
    }
}
Each example is followed by an explanation of the output.
Fairly soon afterwards, I received an email from a reader who disagreed with my choices for sample code. Here are a few extracts from the email exchange. Please read them carefully – they really form the context of the rest of this post.
This is really not proper. When a method can do more than one thing, you might offer what are called ‘convenience overloads’, which make it easier for the consuming developer. When you start swaying away so much that you have wildly different arguments, then it’s probably time to refactor and consider creating a second method. With your example with "Foo", it’s hard to tell which is the case.
My point is, the ‘convenience overloads’ should all directly or indirectly call the one REAL method. I’m not a fan of "test", "foo", and "bar", because they rarely make the point clearer, and often make it more confusing. So let me use something more realistic. This is a nonsensical example, but hopefully it’s clear: [code snipped, but it was an OrderProcessor, referring to an OrderDetail class]
…
The point here was to make you aware of the oversight. I do what I can to try to stop bad ideas from propagating, particularly now that you’re writing books. When developers read your book and consider it an "authority" on the topic, they take your example as if it’s a model for what they should do. I just hope you’re more mindful of that in your code samples in the future.
…
Specific to this overload issue, this has come up many times for me. Developers will write 3 overloads that do wildly different things or worse, will have 98% of the same code repeated. We try to catch this in a code review, but sometimes we will get pushback because they read it in a book (hence, my comments).
…
I assume your audience is the regular developer, right? In other words, the .NET Framework developers at Microsoft perhaps aren’t the ones reading your books, but it’s thousands of App Developer I and App Developer II that do business development? I just mean that there are far, far more "regular developers" than seasoned, expert developers who will be able to discern the difference and know what is proper. You are DEFINING what is proper in your book, you become an authority on the matter!
Anyhow, my point was simply to make you realize how far your influence goes once you become an author. Even the simplest, throwaway example can be seen as a best-practice beacon.
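As an aside, the delegation pattern the emailer describes – convenience overloads all funnelling, directly or indirectly, into the one "real" method – might look something like this. The original snippet was cut, so this OrderProcessor shape and its parameters are entirely my own invention, just to make the principle concrete:

```csharp
using System;

// A hypothetical sketch of the emailer's principle: each convenience
// overload simply fills in a default and delegates down the chain,
// so all the actual logic lives in exactly one place.
public class OrderProcessor
{
    // Convenience overload: default priority
    public string Process(int orderId)
    {
        return Process(orderId, 1);
    }

    // Convenience overload: no notes
    public string Process(int orderId, int priority)
    {
        return Process(orderId, priority, "");
    }

    // The single "real" method containing the actual logic
    public string Process(int orderId, int priority, string notes)
    {
        return string.Format("Processed order {0} (priority {1}){2}",
                             orderId, priority,
                             notes.Length == 0 ? "" : ": " + notes);
    }
}
```

The benefit is that a bug fix or behaviour change in the real method is automatically picked up by every overload.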
Now, this gave me pause for thought. Indeed, I went back and edited the overloading article – not to change the examples, but to make the article’s scope clearer. It’s describing the mechanics of overloading, rather than suggesting when it is and isn’t appropriate to use overloading at all.
I don’t think I’m actually wrong here, but I wanted to explore it a little more in this post, and get feedback. First I’d like to suggest a few categorizations – these aren’t the only possible ones, of course, but I think they divide the spectrum reasonably. Here I’ll give some examples in another area: overriding and polymorphism. I’ll just describe the options first, and then we can talk about the pros and cons afterwards.
Totally abstract – no code being presented at all
Sometimes we talk about code without actually giving any examples at all. In order to override a member, it has to be declared as `virtual` in a base class, and then the overriding member uses the `override` modifier. When the virtual member is called, it is dispatched to the most specific implementation which overrides it, even if the caller is unaware of the existence of the implementation class.
Working but pointless code
This is the level my overloading article worked at. Here, you write code whose sole purpose is to demonstrate the mechanics of the feature you’re describing. So in this case we might have:
using System;

public class C1
{
    public virtual void M()
    {
        Console.WriteLine("C1.M");
    }
}

public class C2 : C1
{
    public override void M()
    {
        Console.WriteLine("C2.M");
    }
}

public class C3
{
    static void Main()
    {
        C1 c = new C2();
        c.M();
    }
}
Now this is a reasonably extreme example; as a matter of personal preference I tend to use class names like "Test" or "Program" as the entry point, perhaps "BaseClass" and "DerivedClass" where "C1" and "C2" are used here, and "Foo" instead of "M" for the method name. Obviously "Foo" has no more real meaning than "M" as a name – I just get uncomfortable for some reason around single character identifiers other than for local variables. Arguably "M" is better as it stands for "method" and I could use "P" for a property etc. Whatever we choose, we’re talking about metasyntactic variables really.
Complete programs indicative of design in a non-business context
This is the level at which I would probably choose to demonstrate overriding. It’s certainly the one I’ve used for talking about generic variance. Here, the goal is to give the audience a flavour of the purpose of the feature as well as demonstrating the mechanics, but to stay in the simplistic realm of non-business examples. To adapt one of my normal examples – where I’d actually use an interface instead of an abstract class – we might end up with an example like this:
using System;
using System.Collections.Generic;

public abstract class Shape
{
    public abstract double Area { get; }
}

public class Square : Shape
{
    private readonly double side;

    public Square(double side)
    {
        this.side = side;
    }

    public override double Area { get { return side * side; } }
}

public class Circle : Shape
{
    private readonly double radius;

    public Circle(double radius)
    {
        this.radius = radius;
    }

    public override double Area { get { return Math.PI * radius * radius; } }
}

public class ShapeDemo
{
    static void Main()
    {
        List<Shape> shapes = new List<Shape>
        {
            new Square(10),
            new Circle(5)
        };
        foreach (Shape shape in shapes)
        {
            Console.WriteLine(shape.Area);
        }
    }
}
Now these are pretty tame shapes – they don’t even have a location. If I were really going to demonstrate an abstract class I might try to work out something I could do in the base class to make it sensibly a non-interface… but at least we’re demonstrating the property being overridden.
Business-like partial example
Here we’ll use classes which sound like they could be in a real business application… but we won’t fill in all the useful logic, or worry about any properties that aren’t needed for the demonstration.
using System;

public abstract class Employee
{
    private readonly DateTime joinDate;
    private readonly decimal salary;

    // Most employees don’t get bonuses any more
    public virtual int BonusPercentage { get { return 0; } }

    public decimal Salary { get { return salary; } }
    public DateTime JoinDate { get { return joinDate; } }

    public int YearsOfService
    {
        // TODO: Real calculation
        get { return DateTime.Now.Year - joinDate.Year; }
    }

    public Employee(decimal salary, DateTime joinDate)
    {
        this.salary = salary;
        this.joinDate = joinDate;
    }
}

public abstract class Manager : Employee
{
    // Managers always get a 15% bonus
    public override int BonusPercentage { get { return 15; } }
}

public abstract class PreIpoContract : Employee
{
    // The old style contracts were really generous
    public override int BonusPercentage
    {
        get { return YearsOfService * 2; }
    }
}
Now this particular code sample won’t even compile: we haven’t provided the necessary constructors in the derived classes. Note how the employees don’t have names, and there are no relationships between employees and their managers, either.
Obviously we could have filled in all the rest of the code, ending up with a complete solution to an imaginary business need. Other examples at this level may well include customers and orders. One interesting thing to note here: admittedly I’ve only been working in the industry for 16 years, and only 12 years full time, but I don’t think I’ve ever written a Customer or Order class as part of my job.
Full application example
No, I’m not going to provide an example of this. Usually this is the sort of thing which a book might work up to over the course of the complete text, and you’ll end up with a wiki, or an e-commerce site, or an indexed library of books with complete web site around it. If you think I’m going to spend days or even weeks coding something like that just for this blog post, you’ll be disappointed 🙂
Anyway, the idea of this is that it does something genuinely useful, and you can easily lift whole sections of it into other projects – or at least the design of it.
Which approach is best?
I’m sure you know what’s coming here: it depends. In particular, I believe it depends on:
Your readership
Are they likely to copy and paste your example into production code without further thought? Arguably in that case the first option might be the best: they may not understand it, but at least it means your code won’t be injuring a project.
Simply put, didactic code is not production code. The parables in the Bible aren’t meant to be gripping stories with compelling characterization: they’re meant to make a point. Scales aren’t meant to sound like wonderful music: they’re meant to help you improve your abilities to make a nice sound when you’re playing real music.
The point you’re trying to put across
If I’m trying to explain the mechanics of a feature, I find the second option to be useful. The reader doesn’t need to try to take in the context of what the code is trying to accomplish, because it’s explicitly not trying to do anything of any use. It’s just demonstrating how the language or platform behaves in a particular scenario.
If, on the other hand, you’re trying to explain a design principle, then the third or fourth options are useful. The third option can also be useful for the mechanics of a feature which is particularly abstract – like generic variance, as I mentioned earlier. That goes somewhere between "complete guide to where this feature should be used" and "no guidance whatsoever" – a sort of "here’s a hint at the kind of situation where it could be useful."
If you’re trying to explore a technology for fun, I find the third option works very well for that situation too. For example, while looking at Reactive Extensions, I’ve written programs to:
- Group lines in a file by length
- Give the results of a UK general election
- Simulate the 1998 Brazil vs Norway World Cup football match
- Implement drag and drop using event filtering
None of these is likely to be directly useful in a real business app – but they were more appealing than solely demonstrating a sequence of numbers being generated (although with an appropriate marble diagram generator, that can be quite fun too).
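To give a flavour of the first of those, grouping lines by length boils down to something like this – shown here with plain LINQ over an in-memory array rather than the observable sequences the Rx version would use, just to keep the sketch self-contained:

```csharp
using System;
using System.Linq;

class GroupByLengthDemo
{
    static void Main()
    {
        // In the Rx version these lines would arrive as an observable
        // sequence read from a file; an array keeps the example simple.
        string[] lines = { "the", "quick", "brown", "fox", "jumps" };

        // Group the lines by their length, shortest group first.
        var groups = lines.GroupBy(line => line.Length)
                          .OrderBy(group => group.Key);

        foreach (var group in groups)
        {
            Console.WriteLine("{0}: {1}", group.Key,
                              string.Join(", ", group));
        }
    }
}
```

The shape of the query is the same in both worlds; the interesting part of the Rx exercise is seeing the groups arrive as the source pushes values at you.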
The technology you’re demonstrating
This is clearly related to the previous point, but I think it bears a certain amount of separation. I believe that language topics are fairly easily demonstrated with the second and third options. Library topics often deserve a slightly higher level of abstraction – and if you’re going to try to demonstrate that a whole platform is worth investing time and energy in, it’s useful to have something pretty real-world to show off.
Your time and skills
You know what? I suck at the fourth and fifth options here. I can’t remember ever writing a complete, independent system as a software engineer, and the systems I have worked on haven’t been line-of-business applications anyway. The closest I’ve come is writing standalone tools which certainly have been useful, but which often take shortcuts in terms of design that I wouldn’t countenance in other applications. (And yes, I’m sure there’s some discussion to be had around that as well, but it’s not the point of this article.)
You may think my employee example above was lousy – and I’d agree with you. It’s not really a great fit for inheritance, in my view – and the bonus calculation is certainly a dubious way of forcing in some polymorphism. But it was the best I could come up with in the time available to me. This wasn’t some attempt to make it appear less worthy than the other options; I really am that bad at coming up with business-like examples. Other authors (by which I mean anyone writing at all, not just book authors) may well have found much better examples, either by spending more time on them, being more experienced with line-of-business apps, or having a better imagination. Or all three.
I’m not too proud to admit the things I suck at 🙂 If I spent many extra hours coming up with examples for everything I write about, I would get a lot less written. I’m doing this in notional "spare time" after all. So even if you would prefer the fourth option over the third, would you rather have that but see less of my (ahem) "wisdom"? Personally I think everyone’s better off with me braindumping using examples in forms which I’m better at.
How to read examples
Most of this post has been from the point of view of an author. Briefly, I’d like to suggest what this might mean for readers. The onus is on the author to make this clear, of course, but I think it’s worth trying to be actively better readers ourselves.
- Understand what the author is trying to achieve. Don’t assume that every example will fit nicely in your application. Example code often doesn’t come with any argument validation or error handling – and very rarely does it have an appropriate set of unit tests. If you’re reading about how something works, don’t assume that the examples are in any way realistic. They may well be simplified to demonstrate the behaviour as clearly as possible without the extra "fluff" of useful functionality.
- Think about what may be missing, particularly if the context is an evangelical one. If someone is trying to sell you on a particular technology, then of course they’ll try to show it in its best possible light. Where are the pitfalls? Where does it not stack up?
- Don’t assume authority means anything. I was quite happy to take Jeffrey Richter to task on boxing for example. Jeffrey Richter is a fabulous author and clearly a smart cookie, but that doesn’t mean he’s right about everything… and I really, really don’t like the idea of anyone appealing to my supposed abilities to justify some bad decision. Judge any argument on its merits… find out what people think and why they think it, but then see how well their reasoning actually hangs together.
Conclusion
This was always going to be a somewhat biased look at this topic, because I hold a certain viewpoint which is clearly contrary to the one held by the chap who emailed me. That’s why I included a reasonable chunk of his emails – to give at least some representation to the alternatives. This post has effectively been a longwinded justification of the form my examples have taken… but does it ring true?
I can’t guarantee to change my writing style drastically on this front – at least not quickly – but I would very much appreciate your thoughts on this. I’m reluctant to exaggerate, but I think it may be even more important than working out whether "Jedi" was meant to be plural or singular – and I certainly received a lot of feedback on that topic.
For me, the example using Squares, Circles and Shapes is a clear standout in clarity. The relationship between Squares, Circles and the base Shape concept mirrors the relationship between the classes exactly. This gives me as a reader important context. The “business” example is cluttered and the relationship is not as clear, requiring more mental work to separate the extraneous code from the code required to make your point.
The examples using Foo, Bar and C1 are meaningless and thus the code is harder to parse mentally. Worse, the naming of C1, C2 is similar to that of C3, falsely implying a similar function.
On the use of Foo? It is used when you have not put the effort in to find a better name. Even a simple change to DoSomething or RememberedValue provides more clarity to a reader.
Personally, I think you should use the most succinct example possible in most circumstances. A more “realistic” but verbose example wastes the time of beginner and expert alike as they try to separate out the bits that you actually want them to see and the extra fluff that’s entirely tangential to the point at hand. Yes, some people will pick up crazy ideas by taking an example to mean something it does not, but I don’t think there’s any reason to believe that making an example more complete and realistic will necessarily *reduce* the number of ways it can be misinterpreted.
I think the most important point here is: make sure your brain is engaged. This is helpful for the author, but it is especially important for the reader! As you imply, a person who is trying to simply absorb information without thinking critically about what they are reading isn’t fulfilling their responsibility.
As for the examples, I’m skeptical of the “business-like” example strategy at all. In fact, your example here is a good demonstration of why: your “YearsOfService” property sucks (no offense intended 🙂 ), because it’s just subtracting the year number, not calculating an actual count of years in service.
Examples that attempt to show real-world business logic are invariably contrived and too simplistic to present any real insight as to how real-world programs are written. At the same time, bugs in the implementation distract readers from the point, as they take time to worry about those aspects that have nothing to do with the point that’s really trying to be made.
If a person wants to learn about real-world design and implementation strategies, they should read a book about real-world design and implementation strategies. Or better yet, gain experience in the real world, exercising their design and implementation skills. It does not seem important at all to me, never mind critical, for a text attempting to explain the basic mechanics of a programming language to at the same time try to address higher-level design concepts. And in fact, I would be concerned that trying to address those things at the same time can hinder the effectiveness of both.
Put another way: an example on overloading taken to an extreme can in fact present the mechanics of overloading in a far more effective way than a simpler, more subtle example that follows better design principles ever could.
Which is not to say a single book can’t address both. But one should be careful to keep in mind what fact one is actually trying to present, and stay focused on presenting that fact in the most effective way.
It’s just like design of class methods: a single method should do one single, simple thing and do it very well.
@Nat: I’m curious as to why you think DoSomething() would be better than Foo(). For me, both convey the same amount of information about what the method will do: zero. But Foo() has the benefit of being a conventional metasyntactic name, so (to me) it implies that you shouldn’t even *look* for real meaning there.
Obviously this is all subjective, so I’m not saying you’re wrong – I’m just intrigued as to where you see the benefit in DoSomething().
Trying to find ANY good example of OOP is hard. Trying to find a pithy one is basically a fool’s errand, because the sorts of things it is good for – 3D scene graph libraries, widget toolkits and the like – benefit from OOP because they are huge and have complex interactions requiring clear boundaries, whilst having a real object-to-concept mapping.
Previously, in pre-2.0 C#, the best examples would be single-function interfaces which required some state – in other words, working around the lack of closures. Once closures are there and the compiler deals with writing the classes involved, you’re done.
Accepting this means either no longer caring about the feature’s rationale and simply using the minimal implementation to indicate the behaviour, or coming up with a decent meta-example to which you can refer. As such your current technique, given the gratis nature of this medium, is fine.
Oh and I like Foo personally
I find the third example the easiest to parse. I find that my brain can interpret things much faster when there is a little meaning to hang onto. Things that are purely abstract require a lot more concentration to really get the hang of.
@skeet
‘Foo’ could be the name of a variable, a field, a property, a class, or a method; perhaps not in the context of your writing but certainly in the context of others. ‘DoSomething’ seems to be a method, so while reading the example perhaps less short term memory will be required (versus recalling that ‘Foo’ is a method, ‘Bar’ is a class, ‘Baz’ is a field).
Take a look at the way the C# spec does this.
It uses utility or library types like Point, Pair, Stack, Expression; and a compilable Test class with a Main() method to demonstrate usage.
For language features, I find the second option extremely useful, and I like the method names Foo, Bar, etc. I find single-letter names very disquieting as well.
But if you’re going to do that, *especially* if you’re named Jon Skeet, then I agree you should add a quick aside someplace in the article or chapter to point out that this is bad design and not how one should use this feature.
While it may seem tedious, it’s just a few short words. “Skeet did it this way” carries a lot of weight in code reviews, and it’s extremely useful to be able to point out an explicit disclaimer from the source.
Personally, I think the second or third options are the most illustrative, with the second highlighting “this is what’s going on”, and the third highlighting “here’s when you want to do this.”
One issue that I’m conflicted about is example after example of code that has been simplified for discussion, ignores performance issues, and leaves out the error checking etc. that would be in production code. While stripping all of that “extraneous” material away helps us to focus on the point of the discussion, we never get an example of how we should be writing our code as a whole.
I suppose the philosophy is that if you want to learn about a language structure, you read a book on the language; if you want info on error handling, find an article on that; and there are plenty of options when it comes to performance and algorithm resources. Synthesizing all the material into your product is an exercise left to the reader. But my concern is that without at least the occasional full-blown sample, someone without exposure to (for example) exception handling best practices isn’t going to recognize that they need to brush up on it.
Anyhow, as I mentioned, option two or three are generally where I’m happiest (and from your comments, you are too, and most productive as well), but I can’t help but wonder if options four or five, or even “6” — a complete production-ready system — doesn’t have some value from time to time.
Why do people still use these Foos and Bars in their source code samples?
There is a difference between indicating that something is meaningless and being meaningless itself. Not everyone “gets” foos and bars – I for one recently had to read a Wikipedia article on how these came into being and what they actually represent. Have any of you read it?
I think using something like WriteSomething() or WriteNonsense() would be much more readable than Foo() in your code sample.
I’ll start a “No more foos and bars in source code” movement. Please, please, do not use foos and bars in your book. Please.
I agree with Igor. I have trouble understanding any code example that contains Foos and Bars. I have to always think about what kind of object/method/whatever this foo or bar should represent.
It’s not that hard to find some concrete example to illustrate a concept (with shapes, animals, or some other concrete objects).
I see the least usefulness in the “full application” category. As you said, it works best in a book that gradually works up to it. The rest of the categories have different uses, because they help answer different questions.
It’s fascinating to see how many people still criticize the second category (working but pointless code). I wonder why that is. I find this category to be the best for answering “what’s going on in here” or “how does this work under the hood”. In other words, mechanics of something.
For that kind of explanation, it works better than any other category precisely because it strips all semantics and leaves only the bare bones. Obviously, your example here was a bit extreme (C1, C2 and M), but it works just fine precisely because you have so few elements and there should be no problem for an average human mind to focus on all of them.
If there are more elements, then their names should probably be slightly more descriptive (Base and Derived come to mind), but no more than necessary. The idea, to me, is to focus on mechanics of something, not on how or why or when it should be used. That comes later, once the reader understands the fundamentals.
For non-native English speakers (or to be more precise, those who are learning English) it is difficult to parse a code sample with foos and bars. They don’t know whether that is a word they don’t understand or just a fake name. It is much easier if you use simple words (from lesson 1 of an English book) to name your classes and methods – like DoSomething, SetValue, etc.
The use of Foo is usually okay on its own, but as Alex says, you can intimate more information with a name like DoSomething than you can with Foo. I speculate that it is easier for readers to fit meaningful names into the mental model they build up as they read the code than it is to apply the “Foo = ignore” rule. My main issue is when there is more than one use of Foo, or more than say three uses of garbage phrases. It becomes clutter more than it helps.
I’m on the pro-Foo side. When I read sample code with Foo/Bar in it, I expect that I can safely ignore the method names and concentrate on the other portions of the code. When I see the shape example, I expect that there is a purpose to the class and method names and I should spend more time on them.
I do have a separate question about the naming convention used in the Shape example — What’s the purpose of this line:
public Square(double side) {this.side = side;}
I know it is Microsoft convention to have the parameter name match the member variable name, but I’m not a fan. My reasoning is simple: C# does a lot to try to prevent standard C/C++ mistakes. One good example is CS0136 – variable declaration hiding. So why recommend something that can easily introduce non-obvious mistakes?
@Joel: My reason for using the same name is that it’s the best available name for both the parameter and the field. I can’t remember ever seeing it cause a problem in my production code – this only ever happens in a constructor (or in a “setter” method in Java) and it’s trivial to get right. Tools tend to tell you if something’s wrong, too 🙂
The alternative is to either find two names for the same concept, or use prefixes/suffixes – I’m not a fan of either of those alternatives.
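To make that trade-off concrete, here’s a small sketch of the pattern under discussion. The Side property is added purely so the result is observable; the point is the constructor body, where the classic mistake is a self-assignment of the parameter – which, if memory serves, the C# compiler flags with a warning (CS1717), which is one reason it rarely survives long:

```csharp
public class Square
{
    private readonly double side;

    public Square(double side)
    {
        // Correct: "this." disambiguates the field from the parameter.
        this.side = side;

        // The classic mistake would be "side = side;" – a self-assignment
        // of the parameter which leaves the field at its default value.
        // The compiler warns about self-assignment (CS1717, I believe),
        // so in practice the shared-name convention rarely causes bugs.
    }

    public double Side { get { return side; } }
}
```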
Let me start by saying that I agree with much of what you wrote … this topic is something I have quite a bit of interest in, since I am currently working on creating training materials for mobile platform developers. Much of my example code falls into your third classification – with the exception that the code is not always a complete program.
The guidelines (in no particular order) I try to apply to my own writing are:
* Know your audience.
* Partial (incomplete) code examples are ok.
* Avoid unnecessary details in example code.
* Avoid complicated fake “real-world” business models.
* Avoid excessive brevity in class/method/variable names.
* Use simple, relatable concepts when describing abstract things.
Before going into the reasoning behind the guidelines above, I would make the observation that there are (at least) three dimensions along which you can rate sample code:
1. Abstract <----> Concrete
2. Concise <----> Verbose
3. Simple <----> Complex
There is a fourth dimension which I believe emerges from these three (as well as other factors like writing style, naming choices, commenting, etc)(+) – namely:
Clear <----> Confusing
As authors, we all want our example code to be as clear as possible. Unfortunately, there’s no simple recipe that says where we should strive to be in the other dimensions in order to get there. Some topics lend themselves better to abstract explanations rather than concrete ones (you can lose sight of the forest for the trees). Other topics are by their nature complex, and require a necessary level of complexity in their treatment if we wish to avoid confusing oversimplification. Similarly, it’s possible to be “too concise” in our examples – using one-letter class, method, and variable names where more descriptive names would add clarity (*). Furthermore, it’s possible for readers to infer notions from even the best crafted example that the author did not intend – imitating naming choices, code structure, and other coding practices in contexts where they don’t apply.
Having said that, here are the rationales for some of the guidelines I’ve put forth.
* Know Your Audience – This one is extremely important, and informs virtually all of the other choices I would make. You make different choices if your audience consists of entry level developers vs. developers with experience but new to a technology vs. developers with experience looking to gain new insights. These are three very different audiences – with different needs, expectations, and comprehension levels. These are merely examples, by the way – the range of audience backgrounds is extremely diverse, and a good author will have an intimate understanding of the concerns and point of view of his audience. It’s extremely rare for an example to be able to serve multiple audiences; therefore, I believe a good technical author must strive to identify what knowledge and background his readers will have, before crafting an example.
* Partial Examples: I’ve read more than one technical book where the author tries to provide a fully working example for each concept explained. These books tend to be chock full of dense, uncommented, hard-to-digest code. Often, the concept being taught is lost in a jungle of variables and method calls. Little is gained by forcing readers to hack and slash through this jungle – and it makes it far more likely for readers to infer intentions or practices the author may not intend. Instead, I believe that it’s OK to use an incomplete, fragmentary code sample to illustrate a point. This is a case of “less is more.” In a world where linking information is easy, it’s always possible to make more complete examples available as a download … but if you do so, make sure it compiles 🙂
* Avoid Unnecessary Detail: This is a corollary to the rule above. However, it extends into choices about: which problem domain to use in an example, which analogies to use, and how to name constructs in your code. For instance, if the purpose of an example is to demonstrate the “syntax” of a language feature, crafting a complete set of business classes is unnecessary detail. Alternatively, if the purpose of your sample code is to show how to connect related classes into a working object-model, then by all means, show that code. The more detail an example includes, the harder it is for a reader to follow the essential details, and the more likely they are to draw unintended conclusions from the code.
* Fake “Real-World” Examples: I try to avoid these whenever I can. When you’re not a domain expert, it’s easy to craft an example that people who DO work in that domain will see as malformed (or worse, copy as gospel). “Real world” examples rarely live up to their eponym – they rarely capture the interesting challenges or relationships in the problem domain, and can distract or misdirect readers from the core concept you want to convey … unless, of course, what you’re trying to convey IS how to solve a business domain problem. Sometimes authors find themselves in a situation where they have to use a “real world” example in order to bridge an abstract concept to a concrete scenario to demonstrate the real-world applicability of the concept (demonstrating design patterns often requires this). In these cases, I try to choose simple domains that everyone can relate to and which are sufficiently flexible that there are no right/wrong modeling choices (I tend to use the Product/Customer/Order domain in my examples).
* Naming Brevity: This is loosely related to the unnecessary detail rule. Readers need some grounding in order to mentally bridge the gap from their current understanding of a topic to what an example attempts to convey. Using overly brief names can be confusing – it makes it hard to follow and often hard to “visually parse”. Speaking for myself, I find it challenging to keep track of more than a few simple elements with names like M/C/X, or Foo/Bar/Baz. The names start to blend together and mix in my mind. For really simple examples (say, demonstrating overloading syntax in C#) that may be fine – but even then, it’s not that hard to come up with better examples (ToString(), Parse(), or constructor overloading). Personally, I try to avoid these fake names in favor of simple but relatable ones, whenever I can.
* Simple, Relatable Concepts: Sometimes it’s the case that you want to explain something using an analogy or idiomatic case. When doing so, I prefer to choose simple examples that everyone is able to intuitively grasp. Picking good analogies is hard … they often break down when taken too far, or convey different ideas to different people depending on social or societal factors. However, some domains tend to work better than others. The Product/Order/Customer domain is one that most people (as consumers) are familiar with. Taxonomic or phylogenic hierarchies of animals are also easy to reason about – and are good at expressing inheritance relationships. Shapes and construction metaphors also tend to offer a rich body of analogies to draw from. When using such a concept, it’s important to make sure that the kernel of the concept is effectively conveyed. For instance, using “class Square : Rectangle {}” may be a bad choice for demonstrating inheritance (I’ve seen more than one author craft such an example that badly breaks the LSP and other S.O.L.I.D. principles in the implementation).
Ultimately, crafting good example code is hard. It requires balancing multiple considerations against *your understanding* of your *audience’s understanding*. We could easily segue into an epistemological examination of how we know what we know (or don’t know) … but at the end of the day, the written word is a static, non-interactive medium in which to convey knowledge. The best we can do is to keep things simple and make sure we always keep sight of the core idea we are trying to capture.
(+) There are many factors that contribute to the clarity and teaching effectiveness of sample code. For instance, use of good analogy choices for the problem domain, comments in the right places, etc. However, I’ve excluded these as dimensions because I don’t think they fall on a clear continuum.
(*) On the subject of super-concise code, I will say that occasionally I see examples from Eric Lippert that involve single letter code snippets. I actually think that these work well in the context that Eric employs them, because they are used to convey some interesting situation about how a language or compiler works. Reducing the example to a few simple types strips away the unnecessary detail that would (IMO) divert attention away from the concept explored.