Dr. Sanders has been programming since the early '60s, when assembler was the chosen language, and computers with 16K of core memory were room-sized mainframes. Combining academic pursuits with his long career as a computer professional, he earned M.S. and Ph.D. degrees at Lehigh University. After twelve years of teaching Computer Science at SUNY Geneseo (with an occasional summer contract as a Systems Engineer at IBM in Rochester), he founded Sanders-Indev, Inc. His e-mail address is firstname.lastname@example.org.
When an error occurs during a run, the customer has two immediate questions: "What caused the error?" and "What did the program do when it found the error?" A complete answer to the first question can prevent a recurrence of the error. The second question must be answered so the customer can assess correctly the value of any data produced before (and after) the error.
Instead of calling the vendor's technical support lines, which can be frustrating and expensive for everyone, customers should be able to get answers to the above questions from error messages and other output produced by the program. Ideally, the product documentation provides a section or appendix explaining error messages.
This brings product quality into sharp focus. Specifically, how well do the error message and the product documentation answer the two questions mentioned above? Is the error message too brief to be useful? Does it contain unnecessary jargon? Is it misleading? Is it written with a professional tone; that is, without personal references, condescension, or arrogance? Does it reveal low quality in language usage, spelling, or grammar? Is there an "errors and warnings" section in the product documentation that explains the error message?
Most programmers don't like to write documentation. And since we tend to believe that errors will be rare occurrences, the quality of error messages and documentation is likely to be sacrificed (or even omitted) whenever deadlines force shortcuts. Many additional factors can combine to produce poor error messages and documentation, some of which are listed below:
With such tendencies and forces operating, it isn't surprising to find software packages in which error messages show one or more of the following problems:
The first two problems can be solved by providing an extended explanation of each message in an "Errors and Warnings" section of the documentation. The last can be solved by reviewing each error message for bad tone, grammar, etc., prior to release of the product.
An appendix to the user's guide or technical manual is the most common way to explain error messages. An "Errors and Warnings" appendix should list alphabetically each error message that could be displayed by the product, together with an explanation. Where possible and appropriate, the explanation should give information about the cause of the error, its consequences for the program's output, suggestions for salvaging work that might otherwise be lost, and suggestions for preventing a recurrence of the error. Although the error message itself must often sacrifice completeness for brevity, the documentation of it should be more complete -- yet without excessive technical jargon. This pragmatic balance of brevity and completeness is often difficult to achieve, but it is a hallmark of good technical writing.
To construct an "Errors and Warnings" appendix, it's necessary to get a list of every error message that could be output during operation of the product. In a small project with few source files, this can be done by examining the source code. To write explanations of an error message, it's usually necessary to analyze the context in which the message might be displayed. Although the string of characters representing an error message is hosted by a specific function or macro in the source, that routine may be called from any number of distinct locations in the program -- in response to any number of different error situations. This means that the explanation of an error message can depend on the source location from which its host routine was called.
As the number of source lines grows, so does the number of error messages. It soon becomes unlikely that manual methods will be able to maintain complete and accurate information for error documentation. This is especially true when responsibility for writing and maintaining project modules must be re-assigned.
A program is needed that will write error messages found in a C source file onto a report file. The messages for the report file should be formatted to match, as closely as possible, the output of the programs from which the error-message strings are extracted.
To extract error messages automatically from the source files of a C project, some assumptions about the structure of the source code to be examined are necessary. Although most of these assumptions are guaranteed to hold by the syntax rules of C, others must rely on coding standards established within the organization.
In a C program, error messages are coded as quotation-mark-delimited strings in calls to a formatted-output function (printf(), fprintf(), sprintf(), etc.), or to some other routine that ultimately calls one of those functions. To make it possible to distinguish such calls from ordinary calls to formatted-output functions, a simple expedient such as #define ERRMSG printf can be implemented easily. With such a convention in place, the first quotation-delimited argument of a call to ERRMSG can be assumed to be a printf-formatted error message. Note that the parser must distinguish calls to ERRMSG from mere occurrences of its name, and must be able to recognize the end of its argument list.
An error-message string shouldn't be copied verbatim onto the screen or report file, because it's printf-formatted. Escape sequences and format specifiers must be removed, to avoid ruining the appearance and readability of the output. Further, line breaks encoded within the string (by interior newline escapes) should be honored by writing newlines to the output. Finally, the C rules for string continuation and automatic concatenation should be observed, in order to assemble the complete string without superfluous line breaks.
As an example of some of the techniques useful in extracting strings from C routines, a program that extracts the first string occurring in each function and executable macro definition of a C source file is presented in Listings 1 through 6.
Listing 1 (exstrng.h) contains some variables, constants, executable macros, and function prototypes used in the program. Macro CKENDRTN keeps track of blocks during parsing of the input file. SAVPOS and RSTPOS are used to save and restore the position of the parser within the file. READLINE reads the next non-empty line from the input file, strips any leading blanks from it, and stores it in the line buffer. GETNOJNK increments the line-buffer index to a character that isn't whitespace, a line-continuation backslash, or (optionally) part of a quote-delimited string, reading from the input file as needed. MATCHDLM increments the line-buffer index to the closing parenthesis or brace that matches the one indexed when it was invoked. During its scan for the matching delimiter, MATCHDLM will call function qstring() (Listing 5) to output the first n strings it encounters, where n is an argument passed from the caller.
Listing 2 (parse2.h) contains some general parsing constants and macros. NEXTLINE uses fgets() to fill a string buffer from the next line of a file, while maintaining a line count. NOTMEMLN increments the buffer index to a character that is not a member of a caller-supplied set, stopping at the end of the buffer. NOTMEMFL does the same, except that it replenishes the string buffer from the input file if the end of the buffer is reached. ISMEMLN and ISMEMFL operate like their counterparts, except the index is incremented to a character that does belong to the caller-supplied set. NOBLANKR saves the file position in a caller-supplied variable, reads the next line from an input file into a string buffer, then increments the buffer index past whitespace and empty lines, reading additional lines from the file as needed.
Listing 3 (exstrng) is the main function. It examines each character of the input file named on the command line, assembling valid C identifiers in a buffer. Each time it has a complete identifier, it calls tstfmhdr() (Listing 4) to see if the identifier is the name of a function or macro in the context of the routine's header line.
Listing 4 (tstfmhdr) checks the pattern of braces and parentheses that follow the latest identifier. By also taking into account the preceding identifier and checking for certain keywords, tstfmhdr() determines whether the current identifier is the name of a function or macro in the context of its definition header. It can also detect a "local" macro, that is, one whose definition occurs within the statement block of a function definition. If tstfmhdr() detects a function or macro header, it uses MATCHDLM (Listing 1) to output the first quoted string encountered within the current block. MATCHDLM calls qstring() (Listing 5) to filter the string and write it on the report file.
Listing 5 (qstring) will either skip a quoted string or output a filtered version of it, according to an option parameter passed by its caller. It will honor all automatic concatenations and line continuations encountered. The string is assumed to be printf-formatted, making it necessary to filter escape sequences and formatting codes from the output. Except for leading and trailing newline escapes, those found embedded in the string are executed by writing a newline to the output device. This ensures the line breaks in the string will appear on the output device as originally coded. To avoid extraneous line spacing on the output device, leading and trailing newline escapes are not executed.
Listing 6 (skipcomm) skips one or more contiguous comment lines. It recognizes both C and C++ syntax for comments.
Once a report of error messages is available, it's possible to review them for accuracy, content, literacy, tone, grammar, and conformance to other standards of quality. Tone and literacy in error messages are especially important. Most people recognize bad grammar, misspelling, and inappropriate tone as signs of low quality, even when they might not realize the exact cause. When work flow is interrupted by an error, customers become highly sensitive to the way the error is handled by the software, including the associated messages. In such situations, enduring judgments of quality (good or bad) are formed, which are attributed not only to the product itself, but to everyone associated with it.
At minimum, messages displayed by professional software should be literate, and should have a tone that shows respect for the customers. If they seem to be the work of people who are either unaware of the importance of those messages, or simply unable to write [2], permanent damage to your professional image can be done.
There are many ways to make the wrong impression on your customer through messages, including the following:
Reviewers and testers should be alert to these kinds of problems in error messages. Programmers might benefit from discussions or training sessions that emphasize the importance of programmed messages in the overall quality of the product. The sidebar "Programmed Messages, Tone, And Respect For The Customer" offers some opinions and advice on writing messages for software designed as professional tools.
Unlike customers, programmers and quality-assurance personnel need to know where each error-message string occurs in the source code, in case a correction must be made. To meet this need, an extraction program would have to report the name of the host function or macro, the name of the file, and the line number. (The example program presented with this article captures this information, but doesn't report it.)
A program that successfully locates and reports error messages might be extended to report information about the source context of the error message. This information would be useful in discovering code that may have been overlooked in the normal test plan for the project. It should include the file name and line number of the statement that would cause the error message to be output, along with the name of the routine that would be executing at the time. If the host routine (the routine in which the message string resides) is a function, it's clear that the error message is issued in the context of that function alone.
But if the host routine is a macro, the search for the context of the error message becomes recursive. This is because the compiler's pre-processor copies the host macro's definition (which has the message string in it) into every source location at which the host macro is invoked -- and the host macro can be invoked from other macros, any or all of which can be invoked by others, and so on. Ultimately, that host macro, with its embedded error-message string, is compiled into the code of one or more functions, at one or more places within those functions. Thus, the hosted error message can be triggered from any of those places. The function name, file name, and line number of each different place from which the error message can be triggered should be reported.
This would require mapping the caller/called relationships among all functions and macros in the project. Further, because these relationships span both .C and .H files, the parser would be required to follow recursively any #include statements, wherever encountered. Although the details of these processes are beyond the scope of this article, the ideas outlined in the preceding sections suggest ways of improving the quality of error messages and documentation, and the techniques offered in the supplied code can make that effort easier.
[2] Poor writing is pervasive. Awkward slang like "opt for" rather than "choose", blatant misuse like "impacted" for "affected", and padded phrases like "on a weekly basis" instead of "weekly" are examples. Incorrect grammar, spelling, and punctuation are equally common.