March 21, 2011

Unit Tests: taking a step back with 9 simple guidelines


Although Unit Testing should be common knowledge, many people still do not write effective Unit Tests or miss some basic concepts. In this post I'll touch upon some basic guidelines for writing effective Unit Tests, using my own rules for writing guidelines. Of course this is subjective and based on my own experience, so feel free to add your own as a comment. If you have no idea how to sell Unit Testing to your manager, you might also want to read one of my previous posts.

#1 Dependency Injection/Inversion of Control MUST be used everywhere

The biggest advantage of DI with respect to testing is the ability to inject classes of your own choice for testing purposes. This makes it much easier to really test a single unit, with all its exceptions, and to mock (see below) certain classes.

Although frameworks such as Spring and Guice make it easier to use DI, you can apply DI perfectly without those frameworks as well by having a dedicated controller/configuration class that simply injects objects into other objects instead of letting the classes instantiate new objects themselves. When testing certain methods, you can easily inject other objects instead.
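As a sketch of this idea (all class names here are hypothetical, not taken from any framework), constructor injection without a DI container could look like this:

```java
// Hypothetical example: constructor injection without any DI framework.

interface PriceDao {
    double priceOf(String productId);
}

// The production implementation would hit a database; a placeholder here.
class JdbcPriceDao implements PriceDao {
    public double priceOf(String productId) {
        throw new UnsupportedOperationException("would query the database");
    }
}

// The service never instantiates its own DAO; it receives one.
class InvoiceService {
    private final PriceDao priceDao;

    InvoiceService(PriceDao priceDao) {
        this.priceDao = priceDao;
    }

    double totalFor(String productId, int quantity) {
        return priceDao.priceOf(productId) * quantity;
    }
}

public class DiExample {
    public static void main(String[] args) {
        // In a test, inject a stub instead of the real DAO.
        PriceDao stub = new PriceDao() {
            public double priceOf(String productId) { return 2.50; }
        };
        InvoiceService service = new InvoiceService(stub);
        System.out.println(service.totalFor("apple", 4)); // prints 10.0
    }
}
```

The dedicated configuration class mentioned above would simply do `new InvoiceService(new JdbcPriceDao())` for production, while every test is free to pass in something else.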

#2 Tests SHOULD be small and fast

There is a very thin line between actually testing a unit of code and writing a simple integration test. Certain dependencies, such as DAOs that load data from a database, SHOULD be mocked and injected into the actual code so that there is no dependency on a database when simply testing some business logic. This guarantees that the test code executes fast and that only a single unit of code is being tested.

Mocking simply means mimicking the behaviour of real objects in a controlled way. A very good mocking framework is Mockito (http://mockito.org/).
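To make the idea concrete, here is a hand-rolled fake in plain Java (all names hypothetical): it returns canned data and records how it was called, so the business logic is tested without any database. A framework like Mockito generates such objects for you instead of you writing them by hand.

```java
// A fake DAO that mimics the real one in a controlled way.
import java.util.ArrayList;
import java.util.List;

interface CustomerDao {
    String findName(int customerId);
}

class GreetingService {
    private final CustomerDao dao;

    GreetingService(CustomerDao dao) {
        this.dao = dao;
    }

    String greet(int customerId) {
        return "Hello, " + dao.findName(customerId) + "!";
    }
}

public class MockExample {
    public static void main(String[] args) {
        // The fake records interactions and returns a canned answer.
        final List<Integer> recordedIds = new ArrayList<Integer>();
        CustomerDao fake = new CustomerDao() {
            public String findName(int customerId) {
                recordedIds.add(customerId);
                return "Alice"; // no database involved
            }
        };

        GreetingService service = new GreetingService(fake);
        System.out.println(service.greet(42));   // prints Hello, Alice!
        System.out.println(recordedIds);         // prints [42]
    }
}
```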

#3 Unit Tests MUST NOT have side effects

A Unit Test MUST clean up any mess it makes when testing code so that the system is restored to the original state before the test was executed. This makes sure every single test is guaranteed to start from the same deterministic starting position and there are no surprises because a previously executed test has left the system in an inconsistent state.

You MUST use the setup and cleanup methods to handle these kinds of things without repeating the same code or polluting the actual test.
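A minimal sketch of the save-and-restore pattern, without any test framework (in JUnit the same hooks would be @Before/@After, or setUp()/tearDown() in older versions; the property name is hypothetical):

```java
// Setup remembers the original state, cleanup restores it, so every test
// starts from the same deterministic position.
public class CleanupExample {
    private static final String KEY = "app.mode"; // hypothetical property
    private static String savedValue;

    static void setUp() {
        savedValue = System.getProperty(KEY);  // remember the original state
        System.setProperty(KEY, "test");       // the state this test needs
    }

    static void tearDown() {
        if (savedValue == null) {
            System.clearProperty(KEY);         // restore "not set"
        } else {
            System.setProperty(KEY, savedValue);
        }
    }

    public static void main(String[] args) {
        setUp();
        // ... the actual test would run here, seeing app.mode=test ...
        System.out.println(System.getProperty(KEY)); // prints test
        tearDown();
        System.out.println(System.getProperty(KEY)); // prints null
    }
}
```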

#4 Unit Tests MUST NOT depend on each other

The order in which Unit Tests are run MUST NOT be assumed, because the order is non-deterministic in standard Unit Test frameworks. Even if you ignored item #3 and deliberately left state behind, a test could never safely depend on the result of a previous test, because you have no idea which one will be executed first.

Unit Tests cannot depend on each other by design, to keep things simple and clean. This is not a bug!

#5 For every bug found, a new Unit Test MUST be implemented

The whole idea behind Unit Testing is to make sure, or prove, that code is fit for use. If a bug is found, the code is clearly not fit for use and must be fixed. Once the bug is fixed, a new Unit Test MUST be written to prove that the code is now fit for use. This must be repeated for every bug found.

Each new Unit Test SHOULD also be documented with some background information about the bug and with a reference to the bug or issue tracking system when applicable.

#6 Every method implementing business logic MUST at least have one Unit Test

Getters and setters SHOULD NOT be tested since they generally do not contain any business logic, but all other methods MUST have at least one Unit Test implemented to prove that the code is actually fit for use. Whenever a bug is found, item #5 MUST be applied.

Unit Testing user interfaces is much more difficult and MAY be skipped, but at least the controller or presenter (in MVC or MVP respectively) MUST be tested. MVP has some advantages that make Unit Testing easier, but that's a whole other discussion (see also http://www.martinfowler.com/eaaDev/uiArchs.html).

#7 Unit Tests MUST be kept in the same package as the source code

When Unit Test classes are kept in the same package, it is much easier to test all the methods (except for private methods) of a certain class.

You MUST keep the Unit Tests in a separate directory, though, to keep them apart from your actual application code. This makes releasing much easier.
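With Maven, for example, the conventional layout already follows this guideline: the same package, but a different directory tree (a sketch; the package name is hypothetical):

```
src/main/java/com/example/app/OrderService.java       <- application code
src/test/java/com/example/app/OrderServiceTest.java   <- same package, separate directory
```

Because both classes live in `com.example.app`, the test can call all package-private methods, while the release artifact is built from `src/main` only.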

#8 Unit Tests MUST be time independent

This might sound strange, but when a test is written to run only on February 29th, or only in the morning because your application behaves differently at that specific time, your tests are not guaranteed to reflect reality when they are executed at another time. When a job is scheduled to run at a certain time, for example, you MUST NOT test the scheduled job itself (the actual code may never be executed), but you MUST test the actual business logic in the job.

For Java-specific Unit Testing, also be very careful when you depend on the Date or Calendar classes. There are many problems with those standard implementations (http://parleys.com/#st=5&id=100&sl=1) and pretty much any alternative, such as Joda-Time (http://joda-time.sourceforge.net/), is better suited, even in normal business logic outside Unit Tests!
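A common way to make code time-independent is to inject a time source rather than calling the clock directly. A sketch (interface and class names are my own, not a library API):

```java
// Business logic depends on an interface, so a test can freeze time.
interface TimeSource {
    long nowMillis();
}

// Production implementation: the real clock.
class SystemTimeSource implements TimeSource {
    public long nowMillis() {
        return System.currentTimeMillis();
    }
}

// Hypothetical business logic: is a maintenance window currently open?
class MaintenanceWindow {
    private final TimeSource time;
    private final long start;
    private final long end;

    MaintenanceWindow(TimeSource time, long start, long end) {
        this.time = time;
        this.start = start;
        this.end = end;
    }

    boolean isOpen() {
        long now = time.nowMillis();
        return now >= start && now < end;
    }
}

public class TimeExample {
    public static void main(String[] args) {
        // A frozen time source: the result no longer depends on when tests run.
        TimeSource frozen = new TimeSource() {
            public long nowMillis() { return 1500L; }
        };
        System.out.println(new MaintenanceWindow(frozen, 1000L, 2000L).isOpen()); // prints true
        System.out.println(new MaintenanceWindow(frozen, 2000L, 3000L).isOpen()); // prints false
    }
}
```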

#9 Unit Tests MUST NOT load data from a hard-coded location on a filesystem 

All files MUST be put next to the actual test classes and retrieved as a stream through the classloader (i.e. in Java: TestClass.class.getClassLoader().getResourceAsStream("<package path>/<filename>"), where the package path uses slashes instead of dots).

When files need to be manipulated, you MUST first copy them to a temporary directory (i.e. in Java: System.getProperty("java.io.tmpdir")) and manipulate the files there, injecting the reference to the resource into your actual business logic. You MUST delete the created files in this directory at the end of your test to comply with item #3.
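A sketch of this copy-then-clean-up routine (the resource name is hypothetical; the main method feeds an in-memory stream so the example is self-contained):

```java
// Copy a classpath resource to the temp directory, work on the copy,
// and delete it afterwards to comply with item #3.
import java.io.*;

public class TempFileExample {

    // In a real test, 'in' would come from
    // TestClass.class.getClassLoader().getResourceAsStream("com/example/data.txt")
    static File copyToTempDir(InputStream in, String fileName) throws IOException {
        File target = new File(System.getProperty("java.io.tmpdir"), fileName);
        OutputStream out = new FileOutputStream(target);
        try {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        } finally {
            out.close();
            in.close();
        }
        return target;
    }

    public static void main(String[] args) throws IOException {
        InputStream fakeResource =
                new ByteArrayInputStream("some data".getBytes("UTF-8"));
        File copy = copyToTempDir(fakeResource, "unit-test-sample.txt");

        System.out.println(copy.exists()); // prints true: safe to manipulate here

        // Clean up at the end of the test (item #3).
        System.out.println(copy.delete()); // prints true
    }
}
```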




February 16, 2011

Separating the sheep from the goats during an interview (part 2 - Responsibility)

In a previous post I wrote about the different qualities we look for in an employee. Although "Learnability" is a very important one, it is closely followed by "Responsibility" and "Social". In this post I will elaborate on "Responsibility" and how it relates to and actually complements "Learnability". 

Responsibility

Responsibility is all about getting things done and delivering quality at the same time. Getting things done or finishing the job alone is not enough. You do not want people to operate in an uncontrollable way and take shortcuts just to deliver. They must at least respect some basic rules set by your company, or by the industry you operate in, so that they can be held accountable both for getting things done and for finishing the job in an acceptable, standardized way.

There are a lot of people with high learnability who just never get things done. This is definitely a no-go; even PhD students, for whom learnability is more important than for anyone else, have to deliver.

How do you test whether someone gets things done? This is not simple, but a first indication can already be found in the CV. How long do they stay on a project? If they only stayed on the job for a short time, was it feasible to actually deliver something useful during that time? If you are not sure, ask during the interview. How do they write/talk about their previous projects? Does it sound like they have actually finished something? (e.g. I made... I completed... I wrote... I delivered...). During the interview, also ask whether they encountered any problems after finishing the project. How did they handle those issues? Were there any follow-up projects they worked on, etc.?

The fact that someone gets things done does not mean (s)he should take shortcuts all the time to deliver. Some people work very fast because they take shortcuts, but the quality of the work is very low. The delivered work is way below any acceptable standard, and especially on larger projects this will hurt a lot at some point in the (near) future.

This can easily be tested by checking their work methods. Have they followed well-known procedures? Can they deliver proof that what they've done is up to or even above industry standards? See also my previous post about how Toyota has built a culture of stopping to fix problems to get quality right the first time. Not every company has such a culture, and it might be hard to find out whether somebody works towards quality if it is not supported by the companies they worked for. To find out whether these candidates would be able to deliver quality, you should check whether they at least know about the industry standards and what they would do to ensure those standards can be met. The latter is again closely related to their learnability.

At some point, people must be given the responsibility to deliver quality, and they must be given the means to prove that quality. Only then can they be held accountable for the quality they deliver. Getting things done and delivering quality seem contradictory, but if you take the quality part into account from the very beginning, you'll gain a lot in the long run.

So cut the crap and find people who get things done while meeting quality standards! That is what we do!



February 08, 2011

Using logging in Java libraries

While logging can be very useful in an application, it's still a question to me whether it is desirable to use logging statements in a library. If the library is well-designed, is there a need for logging? Is it not up to the application to log what seems relevant to it?

Use cases for logging

There are use cases for logging events in your application, such as performance logging or audit logs. These can be seen as the implementation of functional requirements, and are typically implemented using AOP. The application signals some sort of event, and one implementation could be to simply write out the event to a logfile. But these kinds of logging are hardly ever implemented in libraries, as they are the result of functional requirements.

Another typical case of logging is error handling. But a well-designed library should signal to the application using the library that something is wrong, and let the application decide what to do when an error occurs. Often this involves at least logging it (once and only once, by the way!).


So why include logging in a library at all? It might be useful for a closed-source library you can't easily debug. But for open-source libraries, using a debugger is the way to go if you want to know what's going on.


Logging in a library

If a library has logging in it anyway, how should it be implemented ideally? There are some useful tips listed in the dark art of logging about what and how to log in general, but there's more...

In a library, please think about the logging library dependency. When writing code, an important part of controlling dependencies is depending on interfaces instead of implementations. So when depending on a logging library, please depend on an interface or facade (such as slf4j) instead of an implementation (such as log4j). A single dependency on e.g. slf4j-api is sufficient; let the application using the library decide which implementation it wishes to use.

Unfortunately there are still a lot of libraries out there that depend on commons-logging, log4j or other implementations. Using these libraries in an application involves a process of excluding the logging dependencies and including the slf4j bridging implementations, e.g. using the Maven dependency management. This way the application and its libraries all log via the slf4j-api, and a single logging implementation can be chosen by the application or the environment it is deployed on (OSGi container, anyone?).
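In Maven terms the exclude-and-bridge dance could look like this (a sketch: `some.library` is a placeholder, and the slf4j version is only illustrative; `jcl-over-slf4j` and `slf4j-log4j12` are the real bridge artifacts):

```xml
<dependencies>
  <dependency>
    <groupId>some.library</groupId>
    <artifactId>some-library</artifactId>
    <version>1.0</version>
    <exclusions>
      <!-- keep the library's logging implementation off the classpath -->
      <exclusion>
        <groupId>commons-logging</groupId>
        <artifactId>commons-logging</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
  <!-- bridge: the commons-logging API re-implemented on top of slf4j -->
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>jcl-over-slf4j</artifactId>
    <version>1.6.1</version>
  </dependency>
  <!-- the one logging implementation the application chooses -->
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.6.1</version>
  </dependency>
</dependencies>
```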

Using log4j
Finally a special note to the log4j adepts out there. If you insist on using log4j in your library, try to avoid depending on the log4j appenders and filters, or on the PropertyConfigurator. Depending on those makes it impossible to use the slf4j bridging.

It's also not a good idea to include a log4j.properties file in your library jar. Since log4j looks for a config file on the classpath, your config file might end up being used for the entire application, even though the application adds its own config file to the classpath. In that case it's a matter of who comes first on the classpath, which can lead to unexpected logging behavior that is really hard to figure out.

Here is a tip if you're using log4j and your application's logging doesn't seem to use your own log4j.properties: search the classpath, including third-party jars, for other config files.

February 07, 2011

ActorRegistry scope using Akka in OSGi bundles

When building a truly modular software application, OSGi really is an obvious choice these days. The modules are OSGi bundles, each exporting services other bundles can consume. Clean and simple, what else can one want?

Things get a bit more complicated when developing OSGi bundles using Scala, and to be more specific, when using Scala actors. The Scala libraries are available as OSGi bundles, so that's a no-brainer. But what happens when you want to use actors in Scala, and you decide to use the Scala actors from the Akka project?


Akka itself consists of multiple modules, and each of them is an OSGi bundle. Some of the Akka 3rd party dependencies are, on the other hand, not available as OSGi bundles, which is why Akka provides one big dependency OSGi bundle. Not really the nicest solution, but the Akka guys can't help it that their dependencies aren't available as OSGi bundles. When creating actors in Akka, each actor is registered in the ActorRegistry so the actor can later easily be looked up, sent messages, started and stopped, etc. This ActorRegistry is a singleton, so there's one registry for all actors in an application.


Using Akka actors and the ActorRegistry in an OSGi environment triggers some interesting architecture questions, however. An OSGi application bundle acts as a sort of mini-application, and allows you to export services while keeping all the other functionality of the bundle private. If this bundle creates actors, however, they're registered in the application-wide ActorRegistry and are available to other bundles without being exported explicitly as OSGi services. One way to look at it is that this makes the use of OSGi services superfluous. But at the same time the bundle can no longer control which actors (acting as services) are made available publicly: every actor is accessible via the ActorRegistry.


In an OSGi environment the application bundles are mini-applications themselves. They can be started and stopped, they can execute logic when started and stopped (using a bundle activator), they come and go. When a bundle starts it typically creates its actors; when it is stopped, its actors should be stopped. Be careful, however, when using the ActorRegistry to stop the actors of a bundle, as using shutdownAll would stop the actors of all the bundles.


There are things you can try to achieve more control over what your bundle makes available to other bundles. By using TypedActors or Scala case classes as messages you achieve more strongly-typed behavior, which is a good thing anyway. But it also allows you to withhold these classes from other bundles by not including them in the OSGi bundle's export-package. Using the functionality of your actor without the message classes becomes impossible this way, but the actor is still available in the ActorRegistry, and can still be controlled that way by other bundles (stopped, started, ...).


Ideally, in an OSGi environment each bundle should have its own ActorRegistry and make actors available to other bundles by exporting them as OSGi services. Or at least it should be possible to use Akka actors this way. Or maybe there are other solutions to achieve this more application-bundle-like behavior?


There's a thread on the Akka-users mailinglist on this topic. There's a suggestion to use classloader isolation for the different bundles and use remote actors to communicate between bundles. Feel free to provide any other insights or suggestions in the comments.

February 04, 2011

Effectively deliver your message when writing a job opening (part 2): the candidate profile

So you decided to hire people to work for your company or project, as did we. In a previous post we talked about identifying the target audience for your job opening, we'll now deal with the profile your ideal candidate should match.

Think about what you find really important

Instead of writing down a long list of required knowledge, to us how you work is more important. We like the getting-things-done mentality, but the quality of the work delivered is equally important to us. It's mandatory that you think about the design and architecture of software, and not just start coding. Modularity, scalability, reliability and other -ilities will come back to haunt you if you don't pay attention to them. This is also the experience we were talking about earlier.

And of course we can't afford to ignore all of this, since this is the core of our business. We are a team of highly specialized experts providing real quality solutions to our customers. This is the way we work, but it's also who we are and how we are known in the market. So this is also our way to check whether you'll fit in with the team and match the philosophy of our company.

So when composing the ideal profile of our candidates, we don't list required knowledge of this or that framework. We do, however, provide some pointers that should give you as a candidate an idea of what knowledge we think is required to do the job we have in mind for you. Don't worry if you don't already know all of it; that's OK. But at least some topics should ring a bell, and ideally these are exactly the things you already know a little about but always wanted to know more about. As a consequence, your ability and willingness to continuously learn is very important to us.

So think about which aspects you find really important for the profile of a candidate. It's easy to write an entire page of must-have knowledge and experience; it's much more difficult to bring a consistent message of what your requirements really are and at the same time effectively deliver your message as a future employer. It should breathe the culture and atmosphere of your company, and even this profile part of a job opening should offer a clear view of what a candidate can expect when actually working in the company.

We'll talk about this perspective, and what to think about when formulating your offer as an employer, in the next follow-up post.


February 02, 2011

Effectively deliver your message when writing a job opening

In case you didn't know yet, at Xume we have a job opening for Java experts. Senior developers or technical architects as well as young potentials are most welcome to contact us and find out if we can create the perfect job experience for each other.

As Xume consultants we are often involved in the process of hiring people for our customers, both in composing job profiles and in screening potential candidates. We also have a lot of experience in analyzing what a customer is really looking for when we receive a request for consultancy work formulated as a vacancy. But writing a job profile is not an easy task. Now that we've written one ourselves, we'll share some thoughts and tips on how to do it, and motivate the choices we've made.


So what do you have to think about when you want to write a vacancy to hire people? In this post we'll focus on identifying your target audience. Follow-ups will deal with key requirements, offering perspective to a candidate and finally some words about the selection procedure.


Identify your target audience

When writing a text in general, you always have to think about your target audience. This is especially important when writing a job vacancy. In our case, we deliberately chose to write a single job profile for both senior and more junior candidates. The motivation behind this is very simple: how will you distinguish between both?


We don't believe the age of a candidate or the years of work experience matters very much in our sector (IT). It's all about relevant experience, and you can have more relevant experience after one year of professional work than some other people have after ten years.


Another option is to list required knowledge of this or that framework. If you do so, be aware that you are defining criteria in terms of knowledge and not necessarily in terms of experience. This might be relevant if you actually need your candidate to bring specific knowledge aboard, or if you don't want to invest too much in the education of your candidate, e.g. when hiring a consultant. But in our case we're hiring employees and we do want to invest in training and education, so the required knowledge boils down to knowing Java and JEE. We believe you as a candidate will acquire further knowledge quickly if you have relevant experience in general, when you are part of a team of experts and if you have proven you're eager to learn.


In a follow-up post we'll talk about defining the key requirements for our ideal candidate. In the meantime do check our job opening, and if you know anyone who might be interested in joining us, please spread the word!

January 30, 2011

Selling unit testing to managers/executives

A few years ago, I had to defend to an executive the extra cost of implementing unit tests in a project compared to the return it would give. This was during an architecture board meeting and he pretty much caught me off guard, so I had to come up with something really quickly.

Luckily I remembered an article I read a long time ago that compared Japanese and Western players in the automotive industry. Overall, during the sixties and seventies (maybe even the eighties), Japanese-built cars were of a higher quality than the cars built in the Western world. The number of 'broken' cars produced by American or European manufacturers was incredibly high compared to what Japan produced. Those cars could not be sold as-is and had to be taken apart and repaired before they had even left the factory, which was very expensive!

To explain unit testing to the executive, I told him the above story. Then I elaborated on the way Toyota solves this by implementing something similar to 'unit tests' to check the quality of the most recent addition to the intermediate product (i.e. the unfinished car). They prefer to stop the process and get the quality right the first time rather than take the car apart at the end of the line for repairs, which is obviously very costly. This saved Toyota a lot of money and made them much more productive at the same time.

By comparing this to an easier-to-understand case and showing the financial benefit of using unit tests, the executive was immediately convinced. I didn't even have to give figures, because it made so much sense. He didn't know that Toyota did this and was even surprised to hear it.

I never found the article again, but I did find a book on the Toyota Way (http://www.amazon.co.uk/Toyota-Way-Management-Principles-Manufacturer/dp/0071392319/) which elaborates more on the case above and on how Toyota works in general, which is also a very interesting read. They might have to re-read it themselves considering the recent events/recalls ;-).

Principle 5 is the one I am referring to and here is a summary of that principle:


Principle 5. Build a culture of stopping to fix problems, to get quality right the first time.
  • Quality for the customer drives your value proposition.
  • Use all the modern quality assurance methods available.
  • Build into your equipment the capability of detecting problems and stopping itself. Develop a visual system to alert team or project leaders that a machine or process needs assistance. Jidoka (machines with human intelligence) is the foundation for “building in” quality.
  • Build into your organization support systems to quickly solve problems and put in place countermeasures.
  • Build into your culture the philosophy of stopping or slowing down to get quality right the first time to enhance productivity in the long run.

You might argue that Toyota was doing integration tests rather than unit tests, but for an executive this is pretty much the same. I hope this helps whenever you have to defend unit testing in your company!



January 16, 2011

Separating the sheep from the goats during an interview (part 1 - Learnability)

I previously talked about the perfect CV and how an applicant can survive an interview. Although both posts are more oriented towards candidates, they are also useful for employers. When you receive a CV that is written as it should be, and when a candidate can act normally without suffering too much from stress, you are already halfway in evaluating the candidate. In this post, I want to go deeper into what - in my opinion - a company should look for in an ideal employee and how to test this during a one-hour interview.

How can you actually evaluate a candidate in one hour?

During the interview, I always let the candidate elaborate on his latest job/function, and I only explain the content of the job at the end of the interview if he is actually a worthy candidate. Most of the time I interrupt him within the first few minutes to steer the candidate towards the things I want to know that are important for the job, and to look for the 3 major qualities of the ideal candidate.

1. Learnability
2. Responsibility
3. Social

In this post I will elaborate on the first quality: learnability. In next posts, I'll go deeper into the other two qualities.

Learnability

Learnability is all about being smart and the willingness to learn. Being smart is not enough by itself. I rather check the candidate's learnability as a whole: they should be smart enough to actually understand things AND should also be willing to learn.

How do you test smartness? I always have a question up my sleeve that can easily be answered at first sight, but which is used as a trigger for a lot of follow-up questions. If they know the answer, good for them; they'll get the next question, which builds upon the first question and the answer they gave. So far we are probably only testing their encyclopedic knowledge, things they know already anyway.

It really gets interesting when they do not know the answer immediately. This is where you go beyond encyclopedic knowledge and can see if they really understand what they are talking about. Just give them the answer yourself, explain it in detail and see if they can follow. You may explicitly ask them if they understood the answer. Then you can hit them with another question that builds upon your own answer. Look at their reaction. Have they really understood everything you said? Do they ask questions themselves to follow your reasoning? Can they reason with you? Can they actually answer the new question, or do they blank out? Even if they cannot answer again, repeat the process: give the answer yourself, explain it in detail and hit them with the next question. See how far you can go. If they stick with you until the end and don't get frustrated or completely blank out, you have a smart person in front of you.

Don't try to be too smart with your questions though, or you could end up like Google, expecting an answer to one of their interview questions that is in fact wrong. Just choose something quirky or not so obvious that you encountered in your own experience.

Being smart does not mean they are actually willing to learn new things, though. To test their willingness, I ask what they do at home to stay up to date: are they reading articles, blogs, sites, books, etc. in their free time? Even if they only do this during their daytime job, reading a blog or article does not take a lot of time and keeps them sufficiently informed about what is happening in their domain of expertise. If they do stay up to date, I always ask them to tell me something about the things they've read and what they like or don't like about it. What are the evolutions? What do they think the future will bring?

Learnability can be a double-edged sword, though. If you have a lot of people with high learnability, they might never get things done, because they are always looking for the next best thing and never finish their actual work. You might not always need people who want to learn all the time. Sometimes it is sufficient that they are smart enough to understand the task at hand and ensure a smooth continuation of existing tasks. More on this when I talk about responsibility.

I generally reserve 20-30 minutes for this part during a one hour interview, because I think this is really important.

In a next post I'll go deeper into responsibility. If you think your learnability is high, you are already one step closer to working for us.

January 05, 2011

A foundation for writing guidelines

I've found myself several times in a situation where I had to write guidelines for one of our clients. Guidelines, instructions, procedures, protocols... some seem stricter than others, but they all suffer from the same problem: how to clearly indicate the do's and don'ts.

A few years ago, I got inspired by what is known as an RFC (Request for Comments) in the internet world. Very roughly speaking, RFCs describe how machines must communicate over the internet. One RFC in particular (RFC 2119), written by Scott Bradner from Harvard University, describes some key words to signify the requirements in an (RFC) document.

Although the RFC clearly states that "they must not be used to try to impose a particular method on implementors where the method is not required for interoperability" (roughly speaking, for the internet again), I found them very useful for guidelines in general.

The key words are as follows (excerpt from the original document):
  1. MUST - This word, or the terms "REQUIRED" or "SHALL", mean that the definition is an absolute requirement of the specification.

  2. MUST NOT - This phrase, or the phrase "SHALL NOT", mean that the definition is an absolute prohibition of the specification.

  3. SHOULD - This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.

  4. SHOULD NOT - This phrase, or the phrase "NOT RECOMMENDED" mean that there may exist valid reasons in particular circumstances when the particular behaviour is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behaviour described with this label.

  5. MAY - This word, or the adjective "OPTIONAL", mean that an item is truly optional.  

Let's look at a simple example of how these key words can be used in practice. Although I don't have grass in my own garden, I found a simple article on "How to mow grass like a pro" with some straightforward instructions on - you guessed it - how to mow grass like a pro to make it truly look slick. Here is the adapted version, which uses the key words to emphasize certain points about what to do and what not to do:

  1. The lawn MUST NOT be wet before mowing grass. Being wet only causes grass to clod up and create mounds of grass throughout your lawn. When cutting dry grass, the clippings spread out evenly, fall into the lawn and disappear. Of course this depends on how high the grass is before cutting. If grass is exceptionally high, you SHOULD bag the grass and dispose of it.

  2. You MUST NOT cut grass too short. Mowing grass too short may scalp the ground and leave dead spots. As a safe measure, you SHOULD cut grass at about 2 1/2 to 3 inches. Some even prefer 3 1/2 inches. Depending on how level the ground is, if there are unlevel mounds and drainage trenches, you SHOULD consider cutting as high as possible to avoid scalping.

  3. You SHOULD mow grass before weed eating or trimming. This will ensure you don't have to go over the lawn twice. By weed eating after you mow, those corner spots will stick out like a sore thumb, and you will be able to do a more professional job and not miss anything.

  4. Weed killer comes in a number of brands. When you choose a weed killer, you MUST be sure to mix it properly. You MUST read the instructions on the label for mixing it correctly. You MUST NOT spray if the wind is 10 mph or above. You MUST NOT spray around young shrubs and flowers; large trees will handle these weed killers, but take precautions anyway.

  5. You SHOULD spray around your home and barns or storage sheds. You SHOULD spray a strip about 3 inches wide. This will look neat and will not leave a wide, ugly dead space. You SHOULD NOT spray weed killer in drainage areas on your property, as this will eventually cause erosion. You MAY spray ditches, but be advised to only spray the very bottom; you MUST NOT spray the sides.

  6. You MUST use an edger for your sidewalks and walkways. You MUST NOT spray these areas or even weed eat them. This will only cause the edge of the grass to get wider, and will not look professional. Edging these areas will give a neat, straight look.

This might look like overkill, but by using the key words, you make sure that no misinterpretation is possible. Using the key words does not prevent you from adding arguments or elaborating on some parts of the document to clarify things. The key words are there to make sure the requirements are well understood.

All guidelines I write now start with a chapter (e.g. "Keywords to indicate requirement levels") describing the above key words. Throughout the whole document, I apply the key words consistently to emphasize requirements. The key words are also in uppercase to emphasize them even more.

It is now much easier to write more to-the-point and stricter guidelines without sounding arrogant. By describing the rationale behind the key words in an introductory chapter, they are perceived as a formal part of the document and not as me pointing a finger.