Tuesday, November 17, 2015

The Brick Wall of C++ Source Code Transformation

In 1992, I was responsible for organizing the Advanced Topics Workshop that accompanied the USENIX C++ Technical Conference. The call for workshop participation said:
The focus of this year's workshop will be support for C++ software development tools. Many people are beginning to experiment with the idea of having such tools work off a data structure that represents parsed C++, leaving the parsing task to a single specialized tool that generates the data structure. 
As the workshop approached, I envisioned great progress in source code analysis and transformation tools for C++. Better lints, deep architectural analysis tools, automatic code improvement utilities--all these things would soon be reality! I was very excited.

By the end of the day, my mood was different. Regardless of how we approached the problem of automated code comprehension, we ran into the same problem: the preprocessor. For tools to understand the semantics of source code, they had to examine the code after preprocessing, but to produce acceptable transformed source code, they had to modify what programmers work on: files with macros unexpanded and preprocessor directives intact. That means tools had to map from preprocessed source files back to unpreprocessed source files. That's challenging even at first glance, but when you look closer, the problem gets harder. I found out that some systems #include a header file, modify preprocessor symbols it uses, then #include the header again--possibly multiple times. Imagine back-mapping from preprocessed source files to unpreprocessed source files in such systems!

Dealing with real C++ source code means dealing with real uses of the preprocessor, and at that workshop nearly a quarter century ago, I learned that real uses of the preprocessor doomed most tools before they got off the drawing board. It was a sobering experience.

In the ensuing 23 years, little has changed. Tools that transform C++ source code still have to deal with the realities of the preprocessor, and that's still difficult. In my last blog post, I proposed that the C++ Standardization Committee take into account how source-to-source transformation tools could reduce the cost of migrating old code to new standards, thus permitting the Committee to be more aggressive about adopting breaking changes to the language. In this post, I simply want to acknowledge that preprocessor macros make the development of such tools harder than my last post implied.

Consider this very simple C++:
#define ZERO 0

auto x = ZERO;
int *p = ZERO;
In the initialization of x, ZERO means the int 0. In the initialization of p, ZERO means the null pointer. What should a source code transformation tool do with this code if its job is to replace all uses of 0 as the null pointer with nullptr? It can't change the definition of ZERO to nullptr, because that would change the semantics of the initialization of x. It could, I suppose, get rid of the macro ZERO and replace all uses with either the int 0 or nullptr, depending on context, but (1) that's really outside its purview (programmers should be the ones to determine if macros should be part of the source code, not tools whose job it is to nullptr-ify a code base), and (2) ZERO could be used inside other macros that are used inside other macros that are used inside other macros..., and especially in such cases, reducing the macro nesting could fill the transformed source code with redundancies and make it harder to maintain. (It'd be the moral equivalent of replacing all calls to inline functions with the bodies of those functions.)
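To make the nesting problem concrete, here's a small sketch of the kind of macro layering I have in mind. The names (NO_HANDLE, RESET, resetState) are made up for illustration; the point is that ZERO's single definition must mean the int 0 in one expansion and the null pointer in another:

```cpp
#define ZERO 0
#define NO_HANDLE ZERO                  // hypothetical nested use of ZERO
#define RESET(count, handle) \
    do { (count) = ZERO; (handle) = NO_HANDLE; } while (0)

// A nullptr-ifying tool can't simply rewrite ZERO's definition: in the
// expansion below, one use of ZERO must remain the int 0, while the
// other must become nullptr.
inline void resetState(int& count, void*& handle) {
    RESET(count, handle);   // count gets the int 0, handle the null pointer
}
```

Flattening the macros instead would scatter that 0-versus-nullptr decision across every expansion site.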

I don't recall a lot of talk about templates at the workshop in 1992. At that time, few people had experience with them. (The first compiler to support them, cfront 3.0, was released in 1991.) Nevertheless, templates can give rise to the same kinds of problems as the preprocessor:
template<typename T>
void setToZero(T& obj) { obj = 0; }

int x;
setToZero(x);    // "0" in setToZero means the int

int *p;
setToZero(p);    // "0" in setToZero means the null pointer
I was curious about what clang-tidy did in these situations (one of its checks is modernize-use-nullptr), but I was unable to find a way to enable that check in the version of clang-tidy I downloaded (LLVM version 3.7.0svn-r234109). Not that it matters. The way that clang-tidy approaches the problem isn't the only way, and one of the reasons I propose a decade-long time frame to go from putting a language feature on a hit list to actually getting rid of it is that it's likely to take significant time to develop source-to-source translation tools that can handle production C++ code, macros and templates and all.
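For what it's worth, there is a semantics-preserving rewrite of setToZero that a tool (or a human) could apply: value-initialization via T{} yields 0 for int and the null pointer for pointer types. This is just an illustrative sketch, not a claim about what clang-tidy actually does:

```cpp
template<typename T>
void setToZero(T& obj) { obj = T{}; }  // value-initialization: the int 0 for
                                       // ints, nullptr for pointer types
```

Of course, that rewrite changes the template's source text for every instantiation, which a nullptr-focused tool arguably has no business doing.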

The fact that the problem is hard doesn't mean it's insurmountable. The existence of refactoring tools like clang-tidy (far from the only example of such tools) demonstrates that industrial-strength C++ source transformation tools can be developed. It's nonetheless worth noting that such tools have to take the existence of templates and the preprocessor into account, and those are noteworthy complicating factors.

-- UPDATE --

A number of comments on this post include references to tools that chip away at the problems I describe here. I encourage you to pursue those references. As I said, the problem is hard, not insurmountable.

Friday, November 13, 2015

Breaking all the Eggs in C++

If you want to make an omelet, so the saying goes, you have to break a few eggs. Think of the omelet you could make if you broke not just a few eggs, but all of them! Then think of what it'd be like to not just break them, but to replace them with newer, better eggs. That's what this post is about: breaking all the eggs in C++, yet ending up with better eggs than you started with.

NULL, 0, and nullptr

NULL came from C. It interfered with type-safety (it depends on an implicit conversion from void* to typed pointers), so C++ introduced 0 as a better way to express null pointers. That led to problems of its own, because 0 isn't a pointer, it's an int. C++11 introduced nullptr, which embodies the idea of a null pointer better than NULL or 0. Yet NULL and 0-as-a-null-pointer remain valid. Why? If nullptr is better than both of them, why keep the inferior ways around?
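The classic demonstration of why 0 and NULL are inferior is overload resolution. Given overloads for int and for a pointer type, 0 selects the int overload even when a null pointer was intended, NULL's behavior depends on how the implementation defines it, and only nullptr reliably selects the pointer overload:

```cpp
const char* pick(int)   { return "int overload"; }
const char* pick(void*) { return "pointer overload"; }

// pick(0)       calls pick(int), even if a null pointer was intended
// pick(NULL)    implementation-dependent: often pick(int), sometimes ambiguous
// pick(nullptr) always calls pick(void*)
```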

Backward-compatibility, that's why. Eliminating NULL and 0-as-a-null-pointer would break existing programs. In fact, it would probably break every egg in C++'s basket. Nevertheless, I'm suggesting we get rid of NULL and 0-as-a-null-pointer, thus eliminating the confusion and redundancy inherent in having three ways to say the same thing (two of which we discourage people from using).

But read on.

Uninitialized Memory

If I declare a variable of a built-in type and I don't provide an initializer, the variable is sometimes automatically set to zero (null for pointers). The rules for when "zero initialization" takes place are well defined, but they're a pain to remember. Why not just zero-initialize all built-in types that aren't explicitly initialized, thus eliminating not only the pain of remembering the rules, but also the suffering associated with debugging problems stemming from uninitialized variables?

Because it can lead to unnecessary work at runtime. There's no reason to set a variable to zero if, for example, the first thing you do is pass it to a routine that assigns it a value.
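A sketch of the current rules: objects with static storage duration are zero-initialized, while automatic variables of built-in type are left with indeterminate values unless you explicitly initialize them:

```cpp
int g;        // static storage duration: zero-initialized
int* gp;      // likewise: guaranteed null

void f() {
    static int s;   // static local: zero-initialized
    int x;          // automatic: indeterminate; reading it is undefined behavior
    int y{};        // explicitly value-initialized: 0
    (void)s; (void)x; (void)y;   // silence unused-variable warnings
}
```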

So let's take a page out of D's book (in particular, page 30 of The D Programming Language) and zero-initialize built-ins by default, but specify that void as an initial value prevents initialization:
int x;              // always zero-initialized
int x = void;       // never zero-initialized
The only effect such a language extension would have on existing code would be to change the initial value of some variables from indeterminate (in cases where they currently would not be zero-initialized) to specified (they would be zero-initialized). That doesn't lead to any backward-compatibility problems in the traditional sense, but I can assure you that some people will still object. Default zero initialization could lead to a few more instructions being executed at runtime (even taking into account compilers' ability to optimize away dead stores), and who wants to tell developers of a finely-tuned safety-critical realtime embedded system (e.g., a pacemaker) that their code might now execute some instructions they didn't plan on?

I do. Break those eggs!

This does not make me a crazy man. Keep reading.

std::list::remove and std::forward_list::remove

Ten standard containers offer a member function that eliminates all elements with a specified value (or, for map containers, a specified key): list, forward_list, set, multiset, map, multimap, unordered_set, unordered_multiset, unordered_map, unordered_multimap. In eight of these ten containers, the member function is named erase. In list and forward_list, it's named remove. This is inconsistent in two ways. First, different containers use different member function names to accomplish the same thing. Second, the meaning of "remove" as an algorithm is different from that as a container member function: the remove algorithm can't eliminate any container elements, but the remove member functions can.
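Both inconsistencies in one small example: list's remove member actually erases elements, set's member for the same job is named erase, and the remove algorithm erases nothing, which is why the erase-remove idiom exists for std::vector:

```cpp
#include <algorithm>
#include <list>
#include <set>
#include <vector>

std::list<int>   l{1, 2, 3, 2};
std::set<int>    s{1, 2, 3};
std::vector<int> v{1, 2, 3, 2};

void demo() {
    l.remove(2);   // list's member: really erases the 2s; l is now {1, 3}
    s.erase(2);    // set's member: same job, different name; s is now {1, 3}

    // The remove *algorithm* can't erase anything; it only shuffles elements
    // and returns the new logical end -- hence the erase-remove idiom:
    v.erase(std::remove(v.begin(), v.end(), 2), v.end());  // v is now {1, 3}
}
```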

Why do we put up with this inconsistency? Because getting rid of it would break code. Adding a new erase member function to list and forward_list would be easy enough, and it would eliminate the first form of inconsistency, but getting rid of the remove member functions would render code calling them invalid. I say scramble those eggs!

Hold your fire. I'm not done yet.


override

C++11's override specifier enables derived classes to make explicit which functions are meant to override virtual functions inherited from base classes. Using override makes it possible for compilers to diagnose a host of overriding-related errors, and it makes derived classes easier for programmers to understand. I cover this in my trademark scintillating fashion (ahem) in Item 12 of Effective Modern C++, but in a blog post such as this, it seems tacky to refer to something not available online for free, and that Item isn't available for free--at least not legally. So kindly allow me to refer you to this article as well as this StackOverflow entry for details on how using override improves your code.
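In brief, the kind of error override catches is a derived-class function that was meant to override but silently doesn't, because its signature differs from the base class version:

```cpp
struct Base {
    virtual const char* f() const { return "Base::f"; }
    virtual ~Base() = default;
};

struct Derived : Base {
    const char* f() const override { return "Derived::f"; }  // OK: overrides Base::f
    // const char* f() override;        // error: missing const, overrides nothing
    // const char* ff() const override; // error: no Base::ff to override
};
```

Without override, the two commented-out declarations would compile silently as new virtual functions rather than overriders.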

Given the plusses that override brings to C++, why do we allow overriding functions to be declared without it? Making it possible for compilers to check for overriding errors is nice, but why not require that they do it? It's not like we make type checking optional, n'est-ce pas?

You know where this is going. Requiring that overriding functions be declared override would cause umpty-gazillion lines of legacy C++ to stop compiling, even though all that code is perfectly correct. If it ain't broke, don't fix it, right? Wrong!, say I. Those old functions may work fine, but they aren't as clear to class maintainers as they could be, and they'll cause inconsistency in code bases as newer classes embrace the override lifestyle. I advocate cracking those eggs wide open.

Backward Compatibility 

Don't get me wrong. I'm on board with the importance of backward compatibility. Producing software that works is difficult and expensive, and changing it is time-consuming and error-prone. It can also be dangerous. There's a reason I mentioned pacemakers above: I've worked with companies who use C++ as part of pacemaker systems. Errors in that kind of code can kill people. If the Standardization Committee is going to make decisions that outlaw currently valid code (and that's what I'd like to see it do), it has to have a very good reason.

Or maybe not. Maybe a reason that's merely decent suffices as long as existing code can be brought into conformance with a revised C++ specification in a way that's automatic, fast, cheap, and reliable. If I have a magic wand that allows me to instantly and flawlessly take all code that uses NULL and 0 to specify null pointers and revises the code to use nullptr instead, where's the downside to getting rid of NULL and 0-as-a-null-pointer and revising C++ such that the only way to specify a null pointer is nullptr? Legacy code is easily updated (the magic wand works instantly and flawlessly), and we don't have to explain to new users why there are three ways to say the same thing, but they shouldn't use two of them. Similarly, why allow overriding functions without override if the magic wand can instantly and flawlessly add override to existing code that lacks it?

The eggs in C++ that I want to break are the old ways of doing things--the ones the community now acknowledges should be avoided. NULL and 0-as-a-null-pointer are eggs that should be broken. So should variables with implicit indeterminate values. list::remove and forward_list::remove need to go, as do overriding functions lacking override. The newer, better eggs are nullptr, variables with indeterminate values only when expressly requested, list::erase and forward_list::erase, and override. 

All we need is a magic wand that works instantly and flawlessly.

In general, that's a tall order, but I'm willing to settle for a wand with limited abilities. The flawless part is not up for negotiation. If the wand could break valid code, people could die. Under such conditions, it'd be irresponsible of the Standardization Committee to consider changing C++ without the above-mentioned very good reason. I want a wand that's so reliable, the Committee could responsibly consider changing the language for reasons that are merely decent.

I'm willing to give ground on instantaneousness. The flawless wand must certainly run quickly enough to be practical for industrial-sized code bases (hundreds of millions of lines or more), but as long as it's practical for such code bases, I'm a happy guy. When it comes to speed, faster is better, but for the speed of the magic wand, good enough is good enough.

The big concession I'm willing to make regards the wand's expressive power. It need not perform arbitrary changes to C++ code bases. For Wand 1.0, I'm willing to settle for the ability to make localized source code modifications that are easy to algorithmically specify. All the examples I discussed above satisfy this constraint:
  • The wand should replace all uses of NULL and of 0 as a null pointer with nullptr. (This alone won't make it possible to remove NULL from C++, because experience has shown that some code bases exhibit "creative" uses of NULL, e.g., "char c = (char) NULL;". Such code typically depends on undefined behavior, so it's hard to feel too sympathetic towards it, but that doesn't mean it doesn't exist.)
  • The wand should replace all variable definitions that lack explicit initializers and that are currently not zero-initialized with an explicit initializer of void. 
  • The wand should replace uses of list::remove and forward_list::remove with uses of list::erase and forward_list::erase. (Updating the container classes to support the new erase member functions would be done by humans, i.e., by STL implementers. That's not the wand's responsibility.)
  • The wand should add override to all overriding functions.
Each of the transformations above is semantics-preserving: the revised code would have exactly the same behavior under C++ with the revisions I've suggested as it currently does under C++11 and C++14.


The magic wand exists--or at least the tool needed to make it does. It's called Clang. All hail Clang! Clang parses and performs semantic analysis on C++ source code, thus making it possible to write tools that modify C++ programs. Two of the transformations I discussed above appear to be part of clang-tidy (the successor to clang-modernize): replacing NULL and 0 as null pointers with nullptr and adding override to overriding functions. That makes clang-tidy, if nothing else, a proof of concept. That has enormous consequences.

Revisiting Backward Compatibility 

In recent years, the Standardization Committee's approach to backward compatibility has been to preserve it at all costs unless (1) it could be demonstrated that only very little code would be broken and (2) the cost of the break was vastly overcompensated for by a feature enabled by the break. Hence the Committee's willingness to eliminate auto's traditional meaning in C and C++98 (thus making it possible to give it new meaning in C++11) and its C++11 adoption of the new keywords alignas, alignof, char16_t, char32_t, constexpr, decltype, noexcept, nullptr, static_assert, and thread_local.

Contrast this with the perpetual deprecation of setting bool variables to true by applying ++ to them. When C++14 was adopted, that construct had been deprecated for some 17 years, yet it remains part of C++. Given its lengthy stint on death row, it's hard to imagine that a lot of code still depends on it, but my guess is that the Committee sees nothing to be gained by actually getting rid of the "feature," so, failing part (2) of the break-backward-compatibility test, they leave it in.

Incidentally, code using ++ to set a bool to true is another example of the kind of thing that a tool like clang-tidy should be able to easily perform. (Just replace the use of ++ with an assignment from true.)
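The mechanical fix is as local as transformations get. A sketch (the function and its names are mine, just to frame the before and after):

```cpp
#include <vector>

// Before (deprecated since C++98):  found++;
// After the mechanical fix:         found = true;
bool containsKey(const std::vector<int>& v, int key) {
    bool found = false;
    for (int e : v)
        if (e == key)
            found = true;   // was: found++
    return found;
}
```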

Clang makes it possible for the Standardization Committee to retain its understandable reluctance to break existing code without being quite so conservative about how they do it. Currently, the way to avoid breaking legacy software is to ensure that language revisions don't affect it. The sole tool in the backward-compatibility toolbox is stasis: change nothing that could affect old code. It's a tool that works, and make no mistake about it, that's important. The fact that old C++ code continues to be valid in modern C++ is a feature of great importance to many users. It's not just the pacemaker programmers who care about it.

Clang's contribution is to give the Committee another way to ensure backward compatibility: by recognizing that tools can be written to automatically modify old code to conform to revised language specifications without any change in semantics. Such tools, provided they can be shown to operate flawlessly (i.e., they never produce transformed programs that behave any differently from the code they're applied to) and at acceptable speed for industrial-sized code bases, give the Standardization Committee more room to get rid of the parts of C++ where there's consensus that we'd rather not have them in the language.

A Ten-Year Process

Here's how I envision this working:
  • Stage 1a: The Standardization Committee identifies features of the language and/or standard library that they'd like to get rid of and whose use they believe can be algorithmically transformed into valid and semantically equivalent code in the current version or a soon-to-be-adopted version of C++. They publish a list of these features somewhere. The Standard is probably not the place for this list. Perhaps a technical report would be a suitable avenue for this kind of thing. 
  • Stage 1b: Time passes, during which the community has the opportunity to develop tools like clang-tidy for the features identified in Stage 1a and to get experience with them on nontrivial code bases. As is the case with compilers and libraries, the community is responsible for implementing the tools, not the Committee.
  • Stage 2a: The Committee looks at the results of Stage 1b and reevaluates the desirability and feasibility of eliminating the features in question. For the features where they like what they see, they deprecate them in the next Standard.
  • Stage 2b: More time passes. The community gets more experience with the source code transformation tools needed to automatically convert bad eggs (old constructs) to good ones (the semantically equivalent new ones).
  • Stage 3: The Committee looks at the results of Stage 2b and again evaluates the desirability and feasibility of eliminating the features they deprecated in Stage 2a. Ideally, one of the things they find is that virtually all code that used to employ the old constructs has already been converted to use the new ones. If they deem it appropriate, they remove the deprecated features from C++. If they don't, they either keep them in a deprecated state (executing the moral equivalent of a goto to Stage 2b) or they eliminate their deprecated status. 
I figure that the process of getting rid of a feature will take about 10 years, where each stage takes about three years. That's based on the assumption that the Committee will continue releasing a new Standard about every three years.

Ten years may seem like a long time, but I'm not trying to optimize for speed. I'm simply trying to expand the leeway the Standardization Committee has in how they approach backward compatibility. Such compatibility has been an important factor in C++'s success, and it will continue to be so.

One Little Problem

The notion of algorithmically replacing one C++ construct with a different, but semantically equivalent, construct seems relatively straightforward, but that's only because I haven't considered the biggest, baddest, ruins-everythingest aspect of the C++-verse: macros. That's a subject for a post of its own, and I'll devote one to it in the coming days. [The post now exists here.] For now, I'm interested in your thoughts on the ideas above.

What do you think?

Saturday, October 31, 2015

Effective Modern C++ in Korean!

The latest translation to reach my door is another two-color version, this time in Korean. Knowing no Korean, I can't assess the quality of the translation, but I can say that during the translation process, the Korean publisher found an error in the index. That's a rare event—one that indicates that the translator and publisher were paying very close attention. I take that as a good sign.

I hope you enjoy EMC++ in Korean.


PS - O'Reilly and I fixed the indexing error in the latest release of the English edition of the book, so it's not just Korean readers who will benefit from the book's newest translation.

Friday, October 23, 2015

Effective Modern C++ in Japanese!

Another day, another translation of Effective Modern C++--this time in Japanese.

Unlike the other translations I've seen, the Japanese edition uses two colors, so it's closer in appearance to the four-color American edition. That's a nice feature. A notable difference between the Japanese translation and the English original, however, is that the Japanese version uses a lot more Kanji :-)

I hope my Japanese readers enjoy this new translation. I certainly enjoy having a copy on my bookshelf.


Tuesday, September 15, 2015

Effective Modern C++ in Polish!

The family of Effective Modern C++ translations continues to grow. The latest member is in Polish.

As with all the translations I've seen so far, the Polish edition uses only one ink color (black). I therefore believe that if you're comfortable with technical English, you'll probably prefer the English (American) edition. If you prefer your C++ in Polish (including code comments!), however, I'm pleased to report that you now have that option.


Should you be using something instead of what you should use instead?

The April 2000 C++ Report included an important article by Matt Austern: "Why You Shouldn't Use set—and What to Use Instead." It explained why lookup-heavy applications typically get better performance from applying binary search to a sorted std::vector than they'd get from the binary search tree implementation of a std::set. Austern reported that in a test he performed on a Pentium III using containers of a million doubles, a million lookups in a std::set took nearly twice as long as in a sorted std::vector.
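The sorted-vector approach Austern described boils down to this sketch: sort once, then look things up with the standard binary-search algorithms, getting the same O(lg n) comparison count as std::set but over contiguous storage:

```cpp
#include <algorithm>
#include <vector>

// One-time setup: sort the data.
void prepare(std::vector<double>& v) {
    std::sort(v.begin(), v.end());
}

// Lookup: O(lg n), like std::set, but cache-friendly.
bool contains(const std::vector<double>& sorted, double x) {
    return std::binary_search(sorted.begin(), sorted.end(), x);
}
```

The trade-off, of course, is that insertions and erasures in the sorted std::vector are O(n), which is why this approach suits lookup-heavy workloads.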

Austern also reported that the std::set used nearly three times as much memory as the std::vector. On Pentium (a 32-bit architecture), doubles are 8 bytes in size, so I'd expect a std::vector storing a million of them to require about 7.6MB. Storing the same million doubles in a std::set would call for allocation of a million nodes in the underlying binary tree, and assuming three 4-byte pointers per node (pointer to left child, pointer to right child, pointer to parent), each node would take 20 bytes. A million of them would thus require about 19MB. That's only 2.5 times the memory required for the std::vector, but in 2000, I believe it was fairly common for each dynamic memory allocation to tack on a word indicating the size of the allocated storage, and if that was the case in Austern's test, each tree node would require 24 bytes—precisely three times the memory needed to store a single double in a std::vector.
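The arithmetic above can be laid out explicitly. The sizes assumed for Austern's 32-bit test machine are 8-byte doubles and 4-byte pointers, plus my speculative 4-byte per-allocation size header:

```cpp
#include <cstddef>

constexpr std::size_t n          = 1000000;
constexpr std::size_t doubleSize = 8;   // assumed: 8-byte double
constexpr std::size_t ptrSize    = 4;   // assumed: 32-bit pointers

constexpr std::size_t vectorBytes = n * doubleSize;            // 8,000,000 B ~ 7.6 MiB
constexpr std::size_t nodeSize    = doubleSize + 3 * ptrSize;  // 20 B: data + 3 pointers
constexpr std::size_t setBytes    = n * nodeSize;              // 20,000,000 B ~ 19 MiB
constexpr std::size_t withHeaders = n * (nodeSize + 4);        // 24 B/node with headers
static_assert(withHeaders == 3 * vectorBytes,
              "with allocation headers, the set needs exactly 3x the memory");
```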

The difference in memory utilization explains why searching a sorted std::vector can be faster than searching a std::set holding the same data. In a std::vector, the per-element data structure overhead present in a search tree is missing, so more container elements fit onto a memory page or into a cache line. We'd thus expect fewer page faults and/or cache misses when looking things up in a std::vector, and faster memory accesses mean faster lookups.

Many people were influenced by Austern's article and by independent experiments bolstering his conclusions. I was among them. Item 23 of Effective STL is "Consider replacing associative containers with sorted vectors." Boost was convinced, too: Boost.Container offers the flat_(multi)map/set associative containers, citing as inspiration both Austern's article and the discussion of the sorted std::vector-based AssocVector in Andrei Alexandrescu's Modern C++ Design. (In his book, Alexandrescu references the C++ Report article. All roads in this area lead to Matt Austern.)

There's nothing in the article about containers based on hash tables (i.e., the standard unordered containers), presumably because there were no hash-table-based containers in the C++ standard library in 2000. Nevertheless, the same basic reasoning would seem to apply. Hash tables are based on nodes, and node-based containers incur overhead for pointers and dynamic allocations that std::vectors avoid.

On the other hand, hash tables offer O(1) lookup complexity, while sorted std::vectors offer only O(lg n). This suggests that there should be a point at which the more conservative memory usage of a std::vector is compensated for by the superior computational complexity of an unordered container.

After a recent presentation I gave discussing these issues, Paul Beerkens showed me data he'd collected showing that on simple tests, the unordered containers (i.e., hash tables) essentially always outperformed sorted std::vectors for container sizes beyond about 50. I was surprised that the crossover point was so low, so I did some testing of my own. My results were consistent with his. Here's the data I got for containers between size 20 and 1000; the X-axis is container size, the Y-axis is average lookup time:
The lines snaking across the bottom of the graph (click the image to see a larger version) are for the unordered containers. The other lines are for the tree-based containers (std::set and std::map) and for Boost's flat containers (i.e., sorted std::vectors). For containers in this size range, the superiority of the hash tables is clear, but the advantage of the flat containers over their tree-based counterparts isn't. (They actually tend to be a bit slower.) If we bump the maximum container size up to 20,000, that changes:
Here it's easy to see that for containers with about 5000 elements or more, lookups in the flat containers are faster than those in the binary trees (though still slower than those in the hash tables), and that remains true for containers up to 10 million elements (the largest I tested):
For very small containers (no more than 100 elements), Beerkens added a different strategy to the mix: linear search through an unsorted std::vector. He found that this O(n) approach performed better than everything else for container sizes up to about 35 elements, a result that's consistent with the conventional wisdom that, for small data sets, linear search runs faster than more complicated hash-based and binary-search-based algorithms.

The graphs above correspond to tests I ran on an Intel Core i7-820 using GCC 5.1.0 and MinGW under 64-bit Windows 7. Optimization was set to -O3. I ran the same tests using Visual C++ 2015's compiler, but for reasons I have yet to investigate, all the timings for the hash tables were zero under that compiler. I've therefore omitted data for VC++. Interestingly, the code to perform linear searches through unsorted std::vectors took zero time under GCC, though non-zero time under VC++. This is why I show no data comparing lookup speeds for all of binary search trees, hash tables, sorted std::vectors, and unsorted std::vectors: neither GCC nor VC++ generated non-zero lookup times for all approaches.

Maybe GCC optimized out the loop doing the lookups in the unsorted std::vectors, while VC++ optimized away the loops doing the lookups in the hash tables. Or maybe my test code is flawed. I don't know. Perhaps you can figure it out: here's the test code. (It's based on code Paul Beerkens shared with me, but I made substantial changes, so if there's something wrong with the test code, it's my fault.) Feel free to play around with it. Let me know what you find out, either about the code or about the results of running it.

If Paul Beerkens' and my results are valid and generalize across hardware, compilers, standard library implementations, and types of data being looked up, I'd conclude that the unordered standard associative containers (i.e., hash tables) should typically be the ones to reach for when you need high-performance lookups in an associative container and element ordering is not important. I'd also conclude that for very small containers, unsorted std::vectors are likely to be the way to go.

As for the title of this blog post, it looks like what you should use instead of using std::set (and std::map and their multi cousins) these days is probably a truly "unordered" container: a hash table or, for small containers, an unsorted std::vector.



-- UPDATE --

Jonathan Wakely posted source code showing how to replace my Windows-specific timing code with portable standards-conformant C++, and Tomasz Kamiński took my code incorporating it and sent a revised program that (1) uses the modify-a-volatile trick to prevent compilers from optimizing loops away and (2) checks the result of the lookup against the container's end iterator (because, in practice, you'd always need to do that). When I run Kamiński's code, I get non-zero lookup times for all containers for both GCC and VC++.
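The modify-a-volatile trick looks roughly like this (my sketch, not Kamiński's actual code): because every write to a volatile object is an observable side effect, the compiler can't eliminate the lookup loop as dead code:

```cpp
#include <cstddef>
#include <unordered_set>

volatile std::size_t sink = 0;   // writes to a volatile can't be optimized away

void timeLookups(const std::unordered_set<int>& c, int iterations) {
    for (int i = 0; i < iterations; ++i)
        sink = sink + (c.find(i) != c.end() ? 1 : 0);  // observable effect
}
```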

Here are the results I got for containers of up to 100 elements with GCC:
And here they are for VC++:
With both compilers, linear search through an unsorted std::vector is fastest for very small containers, but it doesn't take more than about 20-50 elements for the hash tables to take the lead. Whether that remains the case across hardware, compilers, standard library implementations, and types of data being looked up, I don't know. (That caveat is present in the original post above, but some commenters below seem to have overlooked it.)

Thursday, September 10, 2015

Interview with me on CppCast

I've been a loyal listener to CppCast since its launch earlier this year, so I was pleased to be asked to be a guest on the show. The result is now live. Among other things, hosts Rob Irving and Jason Turner asked me about my recent blog post on inconsistencies in C++ initialization syntax (their idea, not mine), my role as Consulting Editor for the Effective Software Development Series, common misconceptions C++ developers have about the workings of the language, Items from my books I consider especially noteworthy, advice to would-be authors and presenters, aspects of C++ I'd prefer didn't exist, and how I found myself lecturing about C++ from a nightclub stage normally used for belly dancing.

It was a fun interview for me, and I hope you enjoy listening to it.


Monday, September 7, 2015

Thoughts on the Vagaries of C++ Initialization

If I want to define a local int variable, there are four ways to do it:
int x1 = 0;
int x2(0);
int x3 = {0};
int x4{0};
Each syntactic form has an official name:
int x1 = 0;              // copy initialization
int x2(0);               // direct initialization
int x3 = {0};            // copy list initialization
int x4{0};               // direct list initialization
Don't be misled by the word "copy" in the official nomenclature. Copy forms might perform moves (for types more complicated than int), and in practice, implementations often elide both copy and move operations in initializations using the "copy" syntactic forms.

(If you engage in written communication with a language lawyer about these matters and said lawyer has its pedantic bit set, you'll be reprimanded for hyphen elision. I speak from experience. The official terms are "copy-initialization," "direct-initialization," "copy-list-initialization," and "direct-list-initialization." When dealing with language lawyers in pedantic mode, it's wise to don a hazmat suit or to switch to oral communication.)

But my interest here isn't terminology, it's language design.

Question #1: Is it good language design to have four ways to say the same thing?

Let's suppose that instead of wanting to define an int, we want to define a std::atomic<int>. std::atomics don't support copy initialization (the copy constructor is deleted), so that syntactic form becomes invalid. Copy list initialization continues to succeed, however, because for std::atomic, it's treated more or less like direct initialization, which remains acceptable. So:
std::atomic<int> x5 = 0;    // error!
std::atomic<int> x6(0);     // fine
std::atomic<int> x7 = {0};  // fine
std::atomic<int> x8{0};     // fine
(I frankly expected copy list initialization to be treated like copy initialization, but GCC and Clang thought otherwise, and [over.match.list] in C++14 backs them up. Live and learn.)

Question #2: Is it good language design to have one of the four syntaxes for defining an int be invalid for defining a std::atomic<int>?

Now let's suppose we prefer to use auto for our variable instead of specifying the type explicitly. All four initialization syntaxes compile, but two yield std::initializer_list<int> variables instead of ints:
auto x9 = 0;                // x9's type is int
auto x10(0);                // x10's type is int
auto x11 = {0};             // x11's type is std::initializer_list<int>
auto x12{0};                // x12's type is std::initializer_list<int>
This would be the logical place for me to pose a third question, namely, whether these type deductions represent good language design. The question is moot; it's widely agreed that they don't. Since C++11's introduction of auto variables and "uniform" braced initialization syntax, it's been a common error for people to accidentally define a std::initializer_list when they meant to define, e.g., an int.

The Standardization Committee acknowledged the problem by adopting N3922 into draft C++17. N3922 specifies that an auto variable, when coupled with direct list initialization syntax and exactly one value inside the braces, no longer yields a std::initializer_list. Instead, it does what essentially every programmer originally expected it to do: define a variable with the type of the value inside the braces. However, N3922 leaves the auto type deduction rules unchanged when copy list initialization is used. Hence, under N3922:
auto x9 = 0;                // x9's type is int
auto x10(0);                // x10's type is int
auto x11 = {0};             // x11's type is std::initializer_list<int>
auto x12{0};                // x12's type is int
Several compilers have implemented N3922. In fact, it can be hard--maybe even impossible--to get such compilers to adhere to the C++14 standard, even if you want them to. GCC 5.1 follows the N3922 rule even when expressly in C++11 or C++14 modes, i.e., when compiled with -std=c++11 or -std=c++14. Visual C++ 2015 is similar: type deduction is performed in accord with N3922, even when /Za ("disable language extensions") is used.

Question #3: Is it good language design for copy list initialization (i.e., braces plus "=") to be treated differently from direct list initialization (i.e., braces without "=") when deducing the type of auto variables?

Note that these questions are not about why C++ has the rules it has. They're about whether the rules represent good programming language design. If we were designing C++ from scratch, would we come up with the following?
int x1 = 0;                 // fine
int x2(0);                  // fine
int x3 = {0};               // fine
int x4{0};                  // fine
std::atomic<int> x5 = 0;    // error!
std::atomic<int> x6(0);     // fine
std::atomic<int> x7 = {0};  // fine
std::atomic<int> x8{0};     // fine
auto x9 = 0;                // x9's type is int
auto x10(0);                // x10's type is int
auto x11 = {0};             // x11's type is std::initializer_list<int>
auto x12{0};                // x12's type is int
Here's my view:
  • Question #1: Having four ways to say one thing constitutes bad design. I understand why C++ is the way it is (primarily backward-compatibility considerations with respect to C or C++98), but four ways to express one idea leads to confusion and, as we've seen, inconsistency.
  • Question #2: Removing copy initialization from the valid initialization syntaxes makes things worse, because it introduces a seemingly gratuitous inconsistency between ints and std::atomic<int>s.
  • Non-question #3: I thought the C++11 rule about deducing std::initializer_lists from braced initializers was crazy from the day I learned about it. The more times I got bitten by it in practice, the crazier I thought it was. I have a lot of bite marks.
  • Question #3: N3922 takes the craziness of C++11 and escalates it to insanity by eliminating only one of two syntaxes that nearly always flummox developers. It thus replaces one source of programmer confusion (auto + braces yields counterintuitive type deduction) with an even more confusing source (auto + braces sometimes yields counterintuitive type deduction). One of my earlier blog posts referred to N2640, where deducing a std::initializer_list for auto variables was deemed "desirable," but no explanation was offered as to why it's desirable. I think that much would be gained and little would be lost by abandoning the special treatment of braced initializers for auto variables. For example, doing that would reduce the number of sets of type deduction rules in C++ from five to four.
But maybe it's just me. What do you think about the vagaries of C++ initialization?


Wednesday, July 15, 2015

Little Progress on the Keyhole Front

In 2003, I published a draft chapter of what I hoped would eventually become a book called The Keyhole Problem. "Keyholes" are technically unjustified restrictions on what you can say or express (you can read more about them here), and one of the examples I included in the chapter was a shot of a full-screened web page showing driving directions to the Hynes Convention Center in Boston:
I complained about how the designers of this page, presumably in an attempt to ensure it would look good on hand-held devices, actually ensured that it would look good only on hand-held devices.

But that was 12 years ago. Surely things have improved, especially at companies with keen design sensibilities. Or perhaps not. I recently used Apple's chat support, and this is what I saw on my screen:

On the plus side, the page for the Hynes Convention Center has improved:


Tuesday, July 14, 2015

Effective Modern C++ in Italian!

Today I received my copy of the Italian translation of Effective Modern C++. It proudly joins the German translation on my bookshelf and, I hope, soon the translations into several other languages.

Like the German translation, the Italian edition uses only one ink color (black), so I believe that if you're comfortable with technical English, you're probably better off with the English (American) edition. However, if you prefer your C++ in Italian (including the comments--they translated those, too!), it's nice to know that you now have that option.


Wednesday, July 8, 2015

EMC++ Outtake: __PRETTY_FUNCTION__ and __FUNCSIG__

Item 4 of Effective Modern C++ is "Know how to view deduced types." It's part of Chapter 1 ("Deducing Types"), which is available for free download here. In the draft of that Item published in July 2014, I discussed how the compiler-dependent constructs __PRETTY_FUNCTION__ and __FUNCSIG__ could be used to produce printable representations of types deduced for and used in template functions. By the time I published the next draft of the Item about six weeks later, __PRETTY_FUNCTION__ and __FUNCSIG__ had disappeared. In a comment on my blog post announcing the existence of the new draft, Benoit M wrote:
I noticed that the paragraph named “beyond typeid” has disappeared in this version. It was about __PRETTY_FUNCTION__, __FUNCSIG__ and other implementation-specific ways to know the types of the variables involved.

I thought it was very interesting. I suppose that you removed it because it was not about C++11 or C++14 new language features, so it is out of scope for a book called “… Modern C++”, and also because compiler extra features are by essence not portable and could even be removed in the future, so it could be a bad habit to rely on those.

Nevertheless I would appreciate if you restored that subsection, at least in a footnote.
I replied:
I decided that Boost.TypeIndex was a preferable thing to mention, because it has the same interface across multiple platforms. The information I used to have on __PRETTY_FUNCTION__, etc., won't go back in the book, but I might publish it online as a blog entry.
I filed the idea away for a time when work on the book was mostly behind me. Nearly a year later, that time has finally come.

My original plan was to take the information on __PRETTY_FUNCTION__, etc., from the July 2014 draft and turn it into a blog post, but I now think it makes more sense to simply republish the Item 4 draft that existed at that time. That way you can see the discussion of __PRETTY_FUNCTION__ and __FUNCSIG__ in context, and if you're really bored, you can also compare the initial published draft of Item 4 with the version that appears in the book. The July 2014 draft of Item 4 is available HERE.

Either way, I hope you find the information on __PRETTY_FUNCTION__ and __FUNCSIG__ interesting.


Monday, June 22, 2015

Effective Modern C++ Now in German!

I'm told that contracts have been signed to translate Effective Modern C++ into Korean, simplified Chinese, traditional Chinese, Russian, Japanese, German, Polish, Italian, and Portuguese, but the only translation that's made its way to my house is the German one.

For my past books, bilingual readers have told me that the original (English) version is the best. Even if I didn't know some German, I'd be inclined to say that the same thing applies here, too, if for no other reason than that the American version uses multiple ink colors, while the German version uses only one. (Highlighted code is still highlighted, because it's put in bold face.) If you're comfortable reading technical English, I therefore suggest you go with the English version. Wenn Sie aber finden, dass Sie technische Information besser auf Deutsch aufnehmen und verstehen, dann ist es natürlich sinnvoll, die deutsche Ausgabe vorzuziehen.

I suspect that that last sentence demonstrates why I don't handle the German translation myself :-)

O'Reilly has made the first chapter of the German translation available at the book's web site, so if you'd like to try before you buy, give it a look. That PDF is currently in black and white, but my understanding is that it's supposed to incorporate multiple colors, so you might check back from time to time to see if it's been updated.

Viel Spaß beim Effektives modernes C++!


Publishing Effective Modern C++, Part 2

In part 1, I divided technical publishing into four tasks (manuscript creation, production, distribution, and marketing), and I gave my perspective on publishing economics from a traditional technical publisher's point of view. In that post, my financial analysis was based on the assumption that most technical books sell no more than 5000 copies. For an author who's already been published, that assumption can be replaced with (or at least tempered by) the author's past performance. In my case, I have a history of five C++ books dating back to 1991. Each has sold more than 5000 copies.

If I wanted to impress you, I'd tell you that Effective C++, Third Edition, has sold over 225,000 copies, which, if my royalty statements are to be believed and my spreadsheet summarizing them is accurate, it has. But then I'd worry that you'd think I light fires with 100-dollar bills and own an island in a tropical paradise. So I'd hasten to add that a great many of those sales took place in foreign countries, where my royalties are...well, let's put it this way: my most recent royalty statement has an entry for 10,000 books in Chinese, and my take was about 27 cents per book. That's not the kind of money that buys islands.

Still, if an author has sold well in the past, that suggests that a publisher is unlikely to lose money on a similar book project by the same author. (I say "similar" projects, because an author's success with topic X may say little about that author's success with topic Y. Donald Knuth is a legendary author in Computer Science, but I don't think his novel burned up the best-seller lists.)

For an established author, what's a reasonable approach to sharing book revenues with a publisher? I can think of three basic designs:
  • Flat rate: The author gets a flat percentage of the income produced by the book. Presumably this is a higher rate than an unproven author would receive.
  • Increasing tiers: The author gets a flat rate for the first n books, then a higher rate after that. The lower initial rate permits the publisher to quickly recover their up-front costs. There may be multiple tiers: the more you sell, the higher the rate. For example, Apress's default contract has five tiers, ranging from 10% to 20%.
  • Decreasing tiers: The author gets a flat rate for the first n books, then a lower rate after that. The higher initial rate acknowledges that the proven author can sell some books on his or her reputation alone. For such sales, the publisher brings little to the party. As sales rise, the role of the publisher in finding buyers increases, so the publisher deserves a greater share of the revenue.
I don't know anybody who's used a decreasing tiered royalty scheme, but I think it makes sense. The looks on the faces of the few authors and publishers I've mentioned it to suggest that I may be alone in this belief.

So here's where we stand, or, more accurately, where I stood in early 2014 when I started thinking about signing a publishing contract for Effective Modern C++:
  • Of the four tasks involved in publishing the book, I wanted to be responsible only for manuscript creation. However, I wanted a voice in some aspects of production (e.g., page layout, use of color, usability on mobile devices) and marketing (e.g., how the book is pitched to prospective buyers).
  • I believed that the publisher's distribution and marketing efforts would be an important factor in determining the success of the book, measured both in terms of revenue generated and programmers reached.
  • I understood that the publisher had to make a nontrivial investment to bring the book to market, but I felt that my track record as a C++ author (and my lack of a request for an advance) essentially guaranteed that the investment would be recouped.
Addison-Wesley and I undertook negotiations, and they offered generous terms. Compared to the deals most technical authors receive, I was offered more of everything: higher royalties, a bigger voice in production and marketing decisions, greater autonomy as an author--you name it. Had I signed, it would have been the most generous contract I'd had with AW in my nearly quarter century of working with them.

Unfortunately, I felt that what AW offered didn't embody a principle I had begun to realize was very important to me: fairness. I believed that the terms of the publishing agreement should fairly reflect our respective contributions to the success of the book. But how do you evaluate that? How do you disentangle the author's contribution to the book's success from that of the publisher?

Being an author, I felt that the component I was tackling--the writing--was the most important of the publishing tasks. Without a manuscript, the world's best production, distribution, and marketing teams have nothing to do. Given a good manuscript, however, anybody can produce PDF and sell it online. That is, they can produce, distribute, and market a book. They may not do it well, but they can do it. It's no longer that difficult to publish something on your own.

At the same time, professionals versed in production, distribution, and marketing could take my manuscript, make it available in multiple formats on multiple platforms, and sell a lot more copies than I could myself. Since 2010, for example, I've sold my annotated training materials through an arrangement with Artima that has me handle manuscript creation and production, while Artima is responsible for online sales and PDF downloads. Compared to "full service" publishing, production and distribution in this arrangement are quite limited (the materials are available only in PDF and at only one web site), and marketing barely exists. Feedback from readers indicates that the content of these publications is good, but sales have been underwhelming.

I conclude that it's not necessary to have skilled people working on production, distribution, and marketing to bring a book to market, but that doesn't mean those aspects of publishing aren't important. They are.

So what's fair? If a book bombs, the revenue involved isn't enough to make much difference to either party. The publisher loses money, and the author loses a chunk of his or her life. But if a book is successful--if it earns a zillion dollars over its lifetime--how much of that zillion should go to the author and how much should go to the publisher? What's a fair revenue split?

I ultimately decided that AW's offer, though generous, didn't reflect what I thought was fair. We tried to bridge the gap in various ways, but we weren't able to come to an accord. That's when I approached O'Reilly. I explained what I was looking for in a publishing agreement, i.e., what I considered a fair apportionment between author and publisher of the work to be done and the revenues ultimately received. We quickly agreed on terms, and that, in a nutshell, is why EMC++ is an O'Reilly book.

In my Advice to Prospective Book Authors, I say:
Given the uncertain financial return on your authoring effort, I suggest you figure out what's most important to you. When your book comes out, what will make you say, "I'm happy with the way things went, even if the book sells poorly"? Do you want to have designed your own cover? To have had complete editorial control? To have specified how the book would be marketed? To have made the text of the book available at your web site? If these things are important to you, you may want to trade them off against financial aspects of the contract. After all, even if the book sells zero copies, you'll still have designed your own cover, have exercised complete editorial control, have specified how the book would be marketed, etc. If those things have value to you, I encourage you to find agreement with your editor on them.
For EMC++, I ultimately decided that having contract terms I considered fair--not just generous--was important enough to warrant changing to a different publisher. I knew that such a shift was accompanied by a Pandora's box of unknowns, but I'm sincere in my advice above, so I decided to follow it myself.

I still work with AW, and I still enjoy doing it. It's still home for my three earlier C++ books, and I still act as consulting editor for the increasingly active Effective Software Development Series. (In the past nine months, we've published both Effective Python and Effective Ruby.) My decision to publish EMC++ with O'Reilly wasn't an expression of displeasure with AW. It was a reflection of the fact that, taking everything into account, O'Reilly was a better match for the kind of arrangement I wanted to have with my publisher for Effective Modern C++.


Wednesday, May 13, 2015

Publishing Effective Modern C++, Part 1

Between 1992 and 2005, I published five books with Addison-Wesley. When I started writing what became Effective Modern C++ (EMC++), I assumed I'd publish with them again. That didn't happen. Instead, I worked with O'Reilly. Several people have asked about my change in publishers. In this post, I offer background information for my decision to change. The decision itself will be part of a later post.

This post is about publishing, not C++. Unless you're interested in some of the business aspects of publishing, I suggest moving on to xkcd now.
The post is long. You might want to sit down.

Book Publication Tasks

To me, technical book publication consists of four primary tasks:
  1. Manuscript creation is the writing of the book, including diagrams, tables, code examples, etc. It also includes having draft manuscripts reviewed for comprehensibility and technical accuracy.
  2. Production transforms a manuscript into products for consumption by book-buying customers. It includes typesetting, page layout, and cover design, as well as the generation of files suitable for printing or for delivery to end users, e.g., PDF, HTML, epub, and mobi (for Kindle). Production also includes having physical books printed.
  3. Distribution takes print books and digital book files and makes them accessible to prospective customers. That primarily means getting them into bookstores (both physical and virtual, both domestic and international), but it also includes arranging for translations into foreign languages.
  4. Marketing works to bring the book to the attention of prospective customers and to get them to take a look at it. Glossy brochures, promotional web sites, social media activities, organization of author appearances, etc.--that's all marketing stuff. So is the distribution of sample copies of the book to "big mouths": reviewers, bloggers, high-profile community personalities, and other "thought leaders."
In a traditional author-publisher relationship, authors are primarily responsible for creating a manuscript, and publishers focus on production, distribution, and marketing. In reality, who does what is fuzzier. Publishers usually provide developmental editors as well as copyediting, artwork, and indexing services that help authors produce good manuscripts, and authors typically assist publishers in the creation of marketing materials that will appeal to the book's target audience. It's also assumed that authors will engage in their own promotional activities to complement those performed by the publisher.

For my print books with AW, I took care of both manuscript creation and production (my deliverables were print-ready PDFs), and I wrote almost all the copy used for marketing. For EMC++, I wanted to produce a book that looked good in both print and digital form, but I knew from people who'd been there that creating digital books that work well on multiple devices is a nontrivial undertaking. I therefore decided to let my publisher handle production.

The Cost of Bringing a Book to Market

For a publisher, taking on a book involves financial risk. It costs money to create a book, but there's no guarantee that the book will sell well enough to cover the expenses. One of the things publishers do is fork out the money needed to go from manuscript to book. But how big a fork are we talking about?

During my work on EMC++, I looked into self-publishing, and I spoke fairly extensively with a freelance firm with experience producing and distributing programming books. They ultimately quoted me a price of $25K to take my manuscript and create files for print-ready PDF, directly-distributable PDF, device-independent epub, and device-independent mobi. That fee also included getting the book into physical and online bookstores. Let's assume a publisher would incur roughly the same costs for production and distribution. We'll further assume that this covers the cost of preparing files for Safari Books Online, something that the freelance firm I talked to didn't do, but which I consider to be an important outlet for reaching corporate customers.

After digging a hole $25K deep, the publisher has final book files, but the digging's not done. If the sales data for my books in recent years is representative, print books account for about two thirds of the revenue for a programming book. That means the publisher has to pony up cash to get books printed. The per-unit cost goes down as the size of the print run increases, but a common refrain in technical publishing circles is that most technical books sell at most 5000 copies, so unless there's good reason to believe that a book will sell better than average, a publisher won't want to print more than 5000 copies out of the gate. For a book the size of EMC++ (about 300 pages), I was quoted about $2.50/book for a 5000-copy print run--a total of about $12,500. (Bumping the print run up to 10K dropped the per-book price to about $1.90, but the cost of the total print run increased to $19,000.)

Those quotes were for a book with a two-color interior. That's the format for Effective STL and the current edition of Effective C++, and it was my plan for EMC++. O'Reilly decided to print EMC++ with a full-color interior, which makes the print book look much nicer, but also increases their per-unit printing cost. I don't know how their costs compare to the ones I was quoted.

I didn't look into the costs associated with marketing, but it's fair to say they're greater than zero. I'll pick a number out of a hat and say $5000 covers a publisher's cost to get marketing copy written, web sites implemented, press releases created, tweets generated, big mouths' egos stroked, etc. 

For a random 300-page two-color technical book with an "average" author and an initial print run of 5000 copies, then, the publisher is out $25K for production and distribution costs other than printing, $12,500 for printing, and an assumed $5000 for marketing--a total of $42.5K. But your average author probably wants an advance to compensate him or her for the 1000+-hour investment needed to produce the manuscript (per my blog post on how long it took me to write EMC++). Let's assume a $5000 advance. (I don't ask for advances, but if I had received one for $5000, that'd mean the 1350 hours I put into creating a publication-ready manuscript would have yielded a sweet $3.70 per hour. And they say higher education doesn't pay.)

The advance pushes the publisher's up-front cost to $47.5K, which I'll round up to $50K, because I'm sure there are expenses I'm overlooking. Note that I'm not considering costs publishers incur after a book has been released, such as handling returns and tracking and issuing royalties. I'm looking only at the financial situation as of the day the first books are made available to the buying public.

Now, a one-color book would cost less than the two-color book in my scenario, but a longer book would cost more, and there are a lot of books with more than about 300 pages. For the kind of back-of-the-envelope analysis of this blog post, I'm going to go with $50K as the total up-front cost to take a manuscript and bring the resulting book to market.

Per-Book Revenue

My books generally list for about $50, so let's assume our theoretical new book lists for the same amount. For books purchased from the publisher at full price, that's also the publisher's revenue, but very few books are purchased at list price. O'Reilly routinely offers coupons for 40% off, and AW is currently running a sale on some of my books offering 50% off if you buy more than one title that's part of the promotion.

When the publisher sells a book to a retailer (e.g., Amazon, your local technical bookstore, etc.), the retailer pays the wholesale price, not the retail price. My understanding is that the discount off list to wholesalers is comparable to sale prices offered to end-customers who buy from the publisher directly, so let's assume the wholesale price of the book is 40% below list. That is, let's assume that for a book with a list price of $50, the publisher actually gets an average of about $30, regardless of whether they sell directly to individual programmers or they sell to corporate juggernauts like Amazon.

Those numbers are for the print version of a book. The list prices for digital versions are typically lower. For example, the digital version of EMC++ lists for about 15% less than the print version, and the list price for the digital Effective C++ is 20% below that for the print book. Furthermore, the discounts offered off list price for digital purchases are often larger than for print books. O'Reilly often runs sales on digital titles at 50% off list, for example, in contrast to the 40% off they usually offer for print books.

If we assume that our print book with a $50 list price has a digital list price of $41.25 (17.5% lower than print--halfway between how AW and O'Reilly treat Effective C++ and EMC++, respectively) and that digital sales typically take place at 45% off list (as opposed to 40% for print books), the publisher's revenue on digital sales is $22.69 per book. Let's call it $23.

If we now assume that print books outsell digital versions by a 2:1 margin, we come up with an average per-book revenue of $27.67 (((2 × $30) + $23) / 3), which I'm going to round down to $27.50, because that matches the number I used in an earlier version of this post where I inadvertently counted the printing cost for print books twice. (Oops.)

Break-Even Points and Royalty Rates

A $50K up-front investment and an average revenue of $27.50 per book means that the break-even point for the publisher is 1819 books. Well, it would be if authors would work for their advances only. Most don't, and that brings us to royalty rates. O'Reilly used to put their standard contract online (they don't seem to do that any longer), and in that contract, the default royalty rate was 10% of the gross revenue they got from sale of the book. In the example we've been discussing, that'd be 10% of $27.50/book. With such a royalty rate, the publisher's take would drop to $24.75/book, the break-even sales number would increase to 2021 books, and we'd find we need a term to describe the per-book revenue a publisher gets after royalty costs are taken into account. I'll call it ARC ("After Royalty Costs") revenue.
When I first signed with AW in 1991, the default royalty rate was 15% (I have no idea what it is now), and with that kind of author royalty, our theoretical publisher's ARC revenue drops to $23.38/book. The break-even point rises to 2140 books.

But let's dream big. A 50-50 revenue split--a 50% royalty rate!--would drop the publisher's ARC revenue to $13.75, thus pushing the break-even sales number to 3637 books. If it's true that most programming books sell no more than 5000 copies, that implies that the chances of the book making money for the publisher are comparatively small, especially when you recall that I'm considering only costs that are incurred up to the point where the book is initially released. Also note that the break-even point includes no profit, and publishers generally take a dim view of projects unlikely to make money. We conclude (or at least I do) that for a "typical" technical book published under the conditions I've described above, the royalty rate must be below 50% for the project to make financial sense for the publisher.

Royalty Calculations

Interestingly, at least one technical publisher offers a 50% royalty as part of its default terms: The Pragmatic Bookshelf. Look what they say:
We pay 50% royalties for our books. ... We take what we receive for a book, subtract direct costs (printing, copy edit, artwork if any, that sort of thing) and split it with you.
From an author's point of view, this is exciting, but note that before "the Prags" pay 50%, they subtract costs associated with the production of the book, including printing, copy editing, etc. That is, some of the $50K in up-front costs that technical publishers traditionally absorb as well as the cost of printing the book itself--something else publishers traditionally absorb--gets subtracted before the 50% royalty is calculated. There's nothing wrong or underhanded about that, but it's important to recognize that what The Pragmatic Bookshelf means by a 50% royalty rate is different from what a publisher like AW or O'Reilly would mean.


Self-Publication

If you want to keep more of the money that comes from selling your book (or if you simply don't want to cede control over production, distribution, and marketing), you need to take on more of the tasks that publishers traditionally handle. In essence, you need to be the publisher, and we live in a world where self-publication is easier than it's ever been.

If you publish using Amazon's Kindle Direct Publishing, for example, you reach the entire world in one fell swoop. The royalty jumps to 70%, too, which looks pretty darned attractive. At least it does at first glance. Of the three tasks that publishers traditionally tend to, however, Amazon addresses only distribution, and it addresses it only partially. The only digital formats it covers are PDF and mobi, and it covers them only for the specific files you provide. That means no epub, no Safari Books Online, no foreign translation, and of course no print books. Taking 30% of the revenue for such a limited service seems kind of pricey to me, though in fairness Amazon also covers the rather critical job of taking customers' money and giving you your share. Amazon also distributes book updates to people who've bought your book (e.g., to correct errata), which I consider an important service.

But look more closely at the 70% royalty rate. First, it's not available worldwide. For example, it's available in the USA, England, Germany, Japan, and several other countries, but not in China, Russia, Norway, or Sweden (among others). For details, consult this page. Perhaps more importantly, it applies only to books priced between $2.99 and $9.99. Where the 70% rate doesn't apply, the royalty drops to 35%. That means that a book priced between $10 and $20 yields a lower per-book royalty than one priced at $9.99! It's clear that Amazon wants books for Kindle priced below ten bucks, which is good for book buyers and good for Amazon's market share, but whether it's good for technical book authors is a different question.

If you succumb to Amazon's arm-twisting and charge $9.99 for your book, you get a $7/book royalty. If it took you 1500 hours to write, produce, and market the book (i.e., about 150 hours--more or less a full-time month--beyond what it took me to write EMC++), you need to sell over 3200 digital copies to have earned $15/hour (a number that is beginning to gain some traction as the new minimum wage in the USA). In view of the conventional wisdom that most programming books sell no more than 5000 copies and my experience that book buyers tend to prefer their book in paper form, that 70% royalty rate doesn't look as enticing as it originally did.

If you color outside the pricing lines laid down by Amazon or if you sell a lot of books outside the 70% countries, you're in 35% territory, and under those conditions, you should probably consider talking with The Pragmatic Bookshelf folks. (This assumes that your motivation for self-publication is to increase your royalty rate. If your motivation is to retain control over all aspects of your book's production, distribution, and marketing, you'll probably find that all roads lead to self-publication.)

Self-publication can encompass more than just Kindle, of course. Kindle Direct Publishing can be one piece of a larger self-publishing strategy, whereby you supplement Kindle sales with epub and print sales through other channels. An example of an author who's gone that route is Bob Nystrom with Game Programming Patterns, which is available in a number of formats and through a number of channels. (The Kindle version costs $24.95. Bob didn't knuckle under. On the other hand, he complements his book-purchasing options with free online access to the book's content, so I think it's safe to say that neither Amazon nor anybody else will push Bob around.)

Another option is to bypass Amazon entirely, assuming the burden of distributing and selling your book directly. This lets you keep 100% of the proceeds from sales, though you then incur the cost of the online transactions (e.g., fees for credit card processing) as well as the rewards, such as they are, of customer service. But it puts you in the driver's seat of every aspect of your book's publication. That's the approach Bruce Eckel and Diane Marsh took for Atomic Scala.

Publishing Effective Modern C++

I briefly considered self publication via Kindle Direct Publishing, because I thought it would be an interesting experiment to publish EMC++ for $9.99 and see what happened. The $7/book royalty would have yielded more income than I'd get from a traditional publisher selling the book to Amazon for $23 unless I could have negotiated a 30% royalty rate (which is very high for traditional technical publishers), and the lower sales price would presumably have led to more sales, hence greater total revenue. I fantasized that it would also have shaken up the market for programming books and paved the way to a world where $10 was the new normal for high-quality technical books in digital form. The result would be parades honoring me, statues of me in Meyers Town Squares around the world, and a recurring role on The Big Bang Theory! (Hey, it was a fantasy, okay?)

However, I really wanted a traditional publisher. I didn't want to deal with production, distribution, or marketing, nor did I want to find people to subcontract those pieces to and have to manage them. Furthermore, the authors I know who've done the self-publishing thing still generally skip some distribution channels that I consider important, e.g., Safari Books Online or foreign translations. Working with a traditional publisher lets me focus on what I want to do (produce a manuscript), secure in the knowledge that it's somebody else's job to address the aspects of book publication that have to be done, but that I don't want to spend time on.

What I was looking for, then, was a publishing partner for Effective Modern C++ that would do the heavy lifting in the following areas:
  • Production: Turn my manuscript into high-quality files for print and digital publication. The digital files should work with and look good on multiple kinds of e-reading software (e.g., Acrobat Reader for PDF, Kindle for mobi, iBooks for epub), on multiple physical devices (e.g., tablets of various sizes and capabilities as well as conventional computer monitors), under multiple OSes, and under a variety of user configurations (e.g., portrait or landscape orientation, varying font sizes, etc.). The book should also look good in Safari Books Online (which is an HTML-based platform). When I update the book to address errata, all forms of the book should be correspondingly updated. (If you check out the EMC++ errata list, you'll see that my work on the book was far from over when I sent the "final" files to O'Reilly last September.)
  • Distribution: Get the book into online as well as brick-and-mortar bookstores, both in the United States and internationally, ideally in both print and digital form. Get the book into Safari Books Online. Arrange for the book to be translated into foreign languages. Push book updates for customers of digital versions out to those customers when updates are made available.
  • Marketing: Get the word about the book out in ways that I can't or that would not occur to me. I have no interest in marketing and, probably not coincidentally, I'm not any good at it, but I recognize that just because a book gets published doesn't mean anybody pays attention to it. 

Monday, May 4, 2015

"Effective Programming" Sale at Addison-Wesley

Those wacky marketing gurus at Addison-Wesley are at it again, offering a variety of "Effective Programming" books at 40% off -- 50% off if you buy two or more. Two of my books--Effective C++ (both print and digital) and the digital collection of all my Addison-Wesley C++ books--are part of the promotion, but the sale includes C++ books and videos by others, including Andrei Alexandrescu, Herb Sutter, Nicolai Josuttis, John Lakos, and Bartosz Milewski.

The promotion isn't limited to C++. You'll also find books and videos on Java, Python, Ruby, and C#, including several from my Effective Software Development Series. If you're a programmer, you're sure to find something you can use.

To initiate your shopping spree, click here, and don't forget to use the discount code EPSALE at checkout. No, I don't get a kickback (other than my usual book royalties).

The promotion runs until May 18.


Sunday, April 26, 2015

New EMC++ Excerpt Now Online

We've just put a new excerpt from Effective Modern C++ online. This time it's
  • Item 41: Consider pass by value for copyable parameters that are cheap to move and always copied.
It's available at the same place as all the other Items we've put up, namely here.

This Item grew out of my concern that move semantics is causing pass by value to lose its traditional reputation for being unreasonably expensive for user-defined types. More and more, I see coding examples from "thought leaders" (i.e., people who write articles or books or who give presentations at conferences, etc. -- you know, people like me :-}) where common types (e.g., std::vector, std::string, std::unique_ptr) are passed by value with nary a second thought. This worries me. I think it's a bad habit to get into. I blogged about why I think this is a particularly unjustifiable practice for move-only types like std::unique_ptr here, and the ensuing comment stream makes for interesting reading, IMO.

What I really object to isn't the practice itself, but the risk that people will employ it without being aware of its consequences. I also worry about people taking an idea that can be reasonable under fairly constrained circumstances and overly generalizing it. In my view, pass by value is a strategy that should typically be considered only if (1) you have a parameter whose type is copyable (i.e., isn't a move-only type), (2) its move operations are cheap, and (3) the parameter will be unconditionally copied. Even then, I argue that you should only consider the use of pass by value, because even if all three criteria are fulfilled, there are still situations where it can incur a significant performance penalty.
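For the record, here's a hypothetical example of the kind of case where the three criteria line up (my illustration, not code from the Item):

```cpp
#include <string>
#include <utility>

class Widget {
public:
    // Criteria (1)-(3) hold: std::string is copyable, cheap to move, and
    // unconditionally copied into the data member. Lvalue arguments cost
    // a copy plus a move; rvalue arguments cost only two moves.
    void setName(std::string name) { name_ = std::move(name); }

    const std::string& name() const { return name_; }

private:
    std::string name_;
};
```

Even here, "consider" is the operative word: if setName assigned to a member whose existing capacity could be reused, pass by value could still incur an allocation that pass by reference-to-const would avoid.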

I hope you find this latest excerpt from the book interesting and useful. I also hope you enjoy my finding a way to work allusions to both Mary Poppins and Jabberwocky into a single Item. I haven't checked, but I'd like to think I'm the only author who's found a reason to use supercalifragilisticexpialidocious in a book on C++.


Wednesday, April 22, 2015

The Time Needed to Write Effective Modern C++

[Image: Initial cover design.]
Nobody asked me about the writing of Effective Modern C++ (EMC++), but I wanted to talk about it a little, so here we go.

In my now-somewhat-dated article for prospective technical book authors, I mention how much time authors invest in writing a book, with estimates ranging from 1.7 to 6 hours per finished book page. I was curious about how much time I spent writing EMC++, so I tracked it, sort of. What I actually tracked was the days when working on the book was my primary activity, and the result was that I spent 29 weeks from the day I started writing to the day I had a complete draft of the entire book. (The weeks were not always consecutive.) During those weeks, writing the book was essentially my full-time job.

If we figure a 40-hour work week, that'd yield about 1160 hours, but although writing EMC++ was my primary activity during those weeks, it wasn't my only activity. Let's knock that number down by 20% to account for my occasionally having to spend time on other things. That yields 928 hours to produce a full draft of the book.

[Image: Sending in files for publication.]
During that time, Item and chapter drafts were being reviewed by outside readers, but I hadn't had time to revise the manuscript to take all their comments into account. Doing that (i.e., going from a full book draft to a "final" manuscript), revising the "final" manuscript to take the copyeditor's comments into account, and marking up the manuscript for indexing took another 11 weeks, i.e., 352 hours (again assuming an 80% time investment and 40-hour weeks). That yields a total of 1280 hours.

At that point, I'd marked up the manuscript for indexing by the publisher, but I hadn't reviewed the resulting index, nor had I reviewed the typeset pages or digital files for the rest of the book. That work took place in bits and pieces over the course of about 8 weeks. I didn't track my time, but I figure it took at least two full-time weeks on my part, so let's call it another 64 hours. That pushes the total "writing" time (which includes reviewing and processing comments from outside readers of pre-publication manuscripts as well as reviewing pre-publication files from the publisher) to about 1344 hours. Let's round up and call it 1350.

That amount of time, viewed as a full-time 40-hour-per-week job, corresponds to 33.75 weeks, which is a little under eight full-time months. EMC++ has about 310 final printed pages that I wrote (i.e., excluding pages whose content was generated entirely by the publisher), so my productivity was roughly 4.3 hours per final printed page.

My book has about 310 pages. Bjarne's fourth edition of The C++ Programming Language has about 1340. Do the math and marvel at the effort such a book requires. Even if he's twice as productive as I am, that represents 2880 hours--sixteen full-time months! I'm glad I don't have his job.

Before you can write a book on modern C++, you have to learn about C++11 and C++14. For me, that work started in 2009--four years before I felt able to write a book on it. Here are some EMC++-related milestones:
  • 2009: Started studying C++0x (the nascent C++11).
  • July 1, 2013: Started writing what was then known as Effective C++11/14.
  • June 20, 2014: Completed full draft of Effective Modern C++.
  • September 5, 2014: Submitted final manuscript and index information to O'Reilly.
  • November 2, 2014: Approved print and digital versions of the book for publication.
  • December 4, 2014: Received first printed copy of EMC++.
As an aside, preparing materials for a technical talk generally takes me about 30 minutes per slide, so the one-hour talk I gave at CppCon last year probably took about 23 hours to put together.