Sönke Albers*

Esteem Indicators: Membership in Editorial Boards or Honorary Doctorates – Discussion of "Quantitative and Qualitative Rankings of Scholars" by Rost and Frey

1 Introduction
It is very commendable that Rost and Frey (2011) and in a similar paper already Frey
and Rost (2010) have alerted us once again to the fact that any “quantitative” ranking of
researchers suffers from substantial shortcomings. The authors argue along two lines. First, quantity does not represent quality. But quality is hard to measure, and since publications are comparatively easy to measure, more and more rankings are being published
based on publication output. Rost and Frey (2011), building on Frey and Rost (2010), give many reasons why such quantitative indicators do not provide qualitative results,
although attempts have been made to also take qualitative aspects into account. In partic-
ular, output has been weighted with the quality of the journal. On the one hand, Albers
(2009) identifies rankings such as those by Fabel et al. (2008), which weight journals
only by citation impact and impute journals that are not listed in the Social Science Cita-
tion Index, as not even providing face validity. On the other hand, rankings based also
on judgments of journal reputation such as Jourqual2 (Schrader and Hennig-Thurau
(2009)) have proven to be more plausible (Handelsblatt (2009)). Frey and Rost (2010)
list many arguments for why the impact factor of citations is not a measurement of
quality. However, I note that Hirsch (2005) introduced the aspects of quantity and citation impact as quality indicators by combining them into what he calls the "H-Index." This index measures the largest number H such that H papers of an author each received H or more
citations. This is a non-compensatory way of taking both productivity and impact into
account. It is also interesting to note that the H-Index and its variants can be applied to
the database of Google Scholar by the software "Publish or Perish" provided by Anne-
Wil Harzing on her webpage http://www.harzing.com/pop.htm. She also discusses that
web-based analyses result in many more citations than those based on any other proprie-
tary database like the Web of Science. The availability of software like the one by Harzing
implies that the impact can be measured not only based on journal articles as appearing
in established databases like the Social Science Citation Index, but also on monographs,
textbooks, book chapters, articles in non-included journals and even working or discussion papers. This overcomes one of the main critiques of impact analysis (Frey and Rost (2010)).
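The H-Index definition above translates directly into a short computation. A minimal sketch in Python (the citation counts are invented for illustration):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Invented example: five papers with 10, 8, 5, 4, and 3 citations
print(h_index([10, 8, 5, 4, 3]))  # prints 4: four papers have at least 4 citations
```

Because the index only grows when additional papers accumulate enough citations, it combines productivity and impact non-compensatorily, as noted above.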
* Sönke Albers, Professor of Marketing and Innovation and Dean of Research, Kühne Logistics University, Brooktorkai 20, 20457 Hamburg, phone +49 40 328707-211, fax: +49 40 328707-209, e-mail: soenke.albers@the-
sbr 63 January 2011 92-98
Concerning the quality of articles, Armstrong (1995) argues that B-level journals often
publish more innovative ideas than do A-level journals. This is because reviewers of A-level
journals favor extreme rigor at the expense of innovativeness or relevance. This observa-
tion could explain why some journals now have editorial policies that accept that an inno-
vative idea may not need to reflect the same degree of rigor as a standard execution of
a research question. From this discussion, I conclude that quantitative measures do not provide sufficient information on the ultimate quality of an article, i.e., whether it has led to innovative ideas.
Second, researchers must also assume many roles other than conducting research.
For example, researchers not only do their own research, but must also review the research
of others. Since members of editorial boards are typically chosen because of their valued
contributions, Frey and Rost (2010) suggest using the number of memberships on edito-
rial boards – perhaps weighted with the reputation of the journal – as an overall measure
of research quality. The authors show that for the population of economics researchers,
this measure has a small overlap with rankings based on the number of publications, and
thus measures a different aspect. They replicate this study with reversed authorship for the
area of management in Schmalenbach Business Review (sbr). They confirm a main result
of their 2010 study that the ranking position depends heavily on the kind of measures or
indicators applied. In this current article, they go further by showing that for the popu-
lation of management researchers, this measure is inversely U-shaped with respect to the
number of publications, and thus measures indeed a different aspect. Rost and Frey (2011)
believe that their esteem indicator, which measures the standing of a researcher, is a truly
qualitative measure. They do not introduce this indicator as a better measure of the value
of a researcher, but as an additional one covering additional aspects.
To better evaluate the advantages of this new measure we have to address what purpose
these rankings can or should serve. Are they helpful for established researchers in their
field? In most cases, the answer is no, because these researchers themselves are easily able to survey the research that is being carried out and to assess how valuable the resulting articles are. In contrast, quantitative rankings may provide some information to those people
who are not fully informed, but would like to be. For example, students may obtain
information on where professors who are active in research are located. Companies may
also use these rankings to find out where they can hire promising students and engage
in joint projects. The main beneficiaries of these rankings are university administrations.
They do not have expert knowledge to assess whether someone is a good researcher and
may use the rankings to gain such information. Administrators can use this information
for budget allocation, compensating researchers, and promotion decisions. Rost and Frey (2011), building on Frey and Rost (2010), object to the fact that these major decisions are
based on questionable indicators and will therefore worsen rather than improve research
quality. They are in line with several other researchers such as Adler and Harzing (2009)
and Kieser (2010). In the end, everybody chases after the highest possible number of
publications in A-level journals, regardless of whether or not they have something of value
to report. Although this scenario might occur, we also see that many researchers do not
strive for publications in reputed journals at all. According to a database operated by KOF Konjunkturforschungsstelle of ETH Zürich (formerly "Forschungsmonitoring"), 502 out
of 1,479 researchers from Germany, Austria, and Switzerland have not published a single
article in a B-level journal or better. Thus, these rankings may direct us to strive for more
rigorous research instead of following the easy way of "selling" non-rigorous ideas. Hence,
quantitative rankings can be especially beneficial for young researchers and young univer-
sities, both of whom have to prove that they can conduct rigorous research. However, the
more mature a researcher becomes, the more likely it is that additional aspects such as
services to the university, associations, and practitioners will gain importance.
Taking the benefits of the various stakeholders into account, it is clear that university
administrators gain the most from quantitative indicators, because the rankings give
immediate insights into the productivity of researchers. Whether this productivity also
implies value-creation is a second question that might be answered by other indicators
such as the one suggested by Frey and Rost (2010) and subsequently by Rost and Frey
(2011). Hence, the proposed esteem indicator truly has something to offer. However, only
a minority of researchers serve on the editorial boards of reputed journals such as those
investigated by Rost and Frey (2011). Thus, discriminatory information is not available for the majority of researchers.
While I agree in principle that the indicator “number of memberships of editorial boards”
offers some information, I am not convinced that it is a good idea to validate this measure
by investigating its relation to the number of publications per year. The course of a reputed
researcher’s career will generally display a phase of increasing productivity, followed by some kind of plateau, and then a decrease in productivity. Normally, members of editorial boards are chosen after they have published a number of articles in very good journals,
thus showing that they can judge the quality of the articles they review. This observation
points to the cumulative number of articles as a better measure against which the editorial
memberships should be compared. I wonder whether the inverse U-shaped relation would
then still hold. In addition, I do not understand Figure 2 in Rost and Frey (2011) which
exhibits a functional relation between number of articles per year of a researcher and the
number of editorial board memberships. This relation does not predict more than one editorial board membership for all possible numbers of publications per year, although
the observed number varies from zero to six. Finally, the membership indicator may also
be subjective, because it is unclear which journals to include. The fewer highly prestigious
journals are taken into account, the more likely it is that the indicator is similar to quan-
titative research measures and also not discriminating across the majority of researchers.
Including more journals reduces the likelihood that the indicator measures research repu-
tation, but makes it more discriminatory. Unfortunately, Rost and Frey (2011) do not
give detailed advice on how to construct their esteem indicator.
Although I have so far taken only research excellence into account, it is important to
note that professors have many responsibilities, such as teaching, supporting younger scholars, serving the university and, in the case of business administration, the business world (Rost and Frey (2011)). These responsibilities make multitasking necessary. Thus,
any serious ranking must be based on multiple criteria, which requires using different
methods of assessing the overall productivity of researchers. Editorial board membership is
a measure close to research reputation. A more holistic view of a professor’s achievements
can be obtained with the help of honorary doctorates, which are popular in Germany.
Such honorary degrees can be received not only for significant research contributions,
but also, for example, by establishing research relationships with foreign universities, or
helping other universities to establish themselves.
Therefore, I investigated whether honorary doctorates reflect only research productivity
or if they also include various other aspects. I used the data of members of VHB as given
in their directory. In addition, I obtained data from KOF Konjunkturforschungsstelle of
ETH Zürich (formerly “Forschungsmonitoring”) for 1,479 researchers from universities
in the German-speaking countries, Germany, Austria, and Switzerland, on the number
of publications for the Jourqual2 categories A+, A, B, C, D, and E, as well as the score
according to the scheme of Handelsblatt 2009. Instead of using correlation coefficients, I computed the mean rank, among all 1,479 researchers, of those holding an honorary doctorate, which came to 346. Given that 51 researchers hold an honorary doctorate, this discrepancy means that many researchers with an equivalent quantity of research output do not hold an honorary degree, which implies that factors other than publication output also lead to such a degree; one of these factors may be a strong relationship with a foreign university. Splitting the researchers with an honorary degree into one group that obtained their degrees
from a university in Germany, Austria, or Switzerland (GAS) and another group that
obtained their degrees from somewhere else, I find that the mean rank for the GAS
group is 268 compared to 533 for the others. This difference implies that the holders of
an honorary doctorate from abroad have a weaker publication record, on average, and
must have obtained the degree for other reasons. Another issue might be that because
of the German tradition, researchers have obtained the degree more for their German
publications in the main journals Zeitschrift für Betriebswirtschaft (ZfB), Zeitschrift für betriebswirtschaftliche Forschung (zfbf), and Die Betriebswirtschaft (DBW), which are classified as B-level journals according to Jourqual2 (Schrader and Hennig-Thurau (2009)).
A logistic regression that I ran provides strong evidence for this.

Table 1: Results of a logistic regression (columns: regression coefficient B, standard error, Wald statistic, degrees of freedom, significance, Exp(B))
Table 1 shows that I regressed the existence of an honorary doctorate (1/0) on the number
of publications weighted according to the number of authors n per publication with
2/(1+n) in the respective classes A+, A, B, C, D, and E of Jourqual2. The results (see Wald
statistic) show that the researchers with an honorary doctorate can best be separated from
those without an honorary degree by the weighted number of publications not only in A+ journals but also in B-level journals, to which the German ones belong. This result
provides evidence that, especially for older researchers who are also the ones most likely to
hold an honorary degree, publications in the German journals are indeed important.
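The author-count weighting used in this regression can be sketched as follows; the publication records below are hypothetical, and the actual regression in Table 1 was of course estimated with standard statistical software:

```python
from collections import defaultdict

def weighted_counts(publications):
    """Sum the per-paper weight 2/(1+n) within each Jourqual2 class,
    where n is the number of authors of the paper."""
    totals = defaultdict(float)
    for jq_class, n_authors in publications:
        totals[jq_class] += 2.0 / (1.0 + n_authors)
    return dict(totals)

# Hypothetical record: one solo-authored A+ paper and two co-authored B papers
pubs = [("A+", 1), ("B", 2), ("B", 3)]
print(weighted_counts(pubs))  # A+ counts 1.0; the two B papers count 2/3 + 1/2
```

These per-class totals then serve as the regressors, with the existence of an honorary doctorate (1/0) as the dependent variable.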
On the other hand, the classification table (see Table 2) shows that while the result is significant, 205 researchers have published as much as those holding an honorary doctorate without holding one. In addition, there are 19 out of 51 honorary doctorate holders
whose publication record would not allow the inference of such a doctorate.
Table 2: Classification result of the logistic regression (columns: predicted no honorary doctorate (0), predicted honorary doctorate (1), correctly classified)
Another point that I find questionable is the use of correlations for comparing rankings: if two rankings are highly correlated with each other, they are considered equivalent. Such an equivalence does not help an individual who is ranked very differently in the two rankings, even when everybody else is ranked more or less the same.
While it may appear scientifically sound not to worry about different measures as long
as they are correlated, it is not appropriate when individuals are evaluated who can, as a
result, end up at quite different ranks. Therefore, Frey and Rost (2010) show the differ-
ences of various rankings by displaying the full ranking order so that the reader can accu-
rately assess the kind of overlap present. I cannot show this full ranking order for honorary doctorate holders because the data by "KOF Konjunkturforschungsstelle" of ETH Zürich
As is shown by Frey and Rost (2010) and the replication in Rost and Frey (2011), there
are indicators for qualitative aspects of research. Clearly, to a certain extent, they measure
something beyond research quantity, but due to the small number of researchers involved, neither indicator gives discriminatory information on the majority of researchers.
Moreover, these indicators do not directly address the issue of whether research has
led to innovative ideas that change behavior or thinking. Because business administra-
tion is an applied science, any valuable article should have an impact on practice. But
this impact is difficult to measure, because it is more often indirect than direct. Ideas
published in journals are rarely read by practitioners; they must be “translated” by people
who specialize in knowledge transfer because researchers and practitioners speak different
“languages” (Kieser and Leiner (2009)). The results of research diffuse through many chan-
nels, including consulting companies, and thence into practice. It appears to be very diffi-
cult to assess such a process and trace back the origins of a valuable idea. It might be a
challenging task for future research.
What can be done? In any case, many aspects must be evaluated. Should evaluation be
compensatory or non-compensatory? The H-Index mentioned earlier is non-compen-
satory because the index increases only if additional papers receive even more citations.
However, any evaluation of academics should be compensatory. Some researchers have
talents for research, some others for teaching or writing textbooks, and yet others are
perfect in managing research or translating it for practitioners. In my opinion, everybody should perform the activity at which he or she excels, which would not only be most beneficial for economic welfare, but would also better address the original idea of "universitas". The only condition is that a portfolio of different talents results. However, such a
portfolio is important for the entire system, not for just a single university. In the latter
case, it might be good to have a rather homogeneous group of professors at a certain
place because homogeneous people tend to collaborate more easily. For example, if high
performers want to be among other high performers, then ranking lists might offer a
certain information value to a hiring committee. But in many cases, quantitative ranking
lists may be misused. Instead of working with ranking lists, it is often better for university managers to come up with detailed target agreements with professors that ask for specific behavior targeting achievements that would benefit the university. Both Rost
and Frey (2011) and Kieser (2010) argue that hiring decisions should be based on expert
evaluations that truly represent qualitative judgments instead of quantitative rankings.
However, especially for the non-experts who are very often members of hiring committees, quantitative rankings provide a transparent benchmark of information, making it
harder for the experts to engage in selfish behavior.

References
Adler, Nancy and Anne-Wil Harzing (2009), When Knowledge Wins: Transcending the Sense and Nonsense of Academic Rankings, Academy of Management Learning and Education 8, 72-95.
Albers, Sönke (2009), Three Failed Attempts of Joint Rankings of Research in Economics and Business, German Economic Review.
Armstrong, J. Scott (1995), Quality Control Versus Innovation in Research on Marketing, Journal of Marketing.
Fabel, Oliver, Miriam Hein, and Robert Hofmeister (2008), Research Productivity in Business Economics: An Investigation of Austrian, German and Swiss Universities, German Economic Review 9, 506-531.
Frey, Bruno S. and Katja Rost (2010), Do rankings reflect research quality?, Journal of Applied Economics 13, 1-38.
Handelsblatt (2009), Handelsblatt-Ranking Betriebswirtschaftslehre (BWL), http://www.handelsblatt.com/politik/
bwl-ranking/bwl-ranking-methodik-und-interpretation;%202175006.
Harzing, Anne-Wil (2010), Publish or Perish, Software for quantifying research productivity and discussion on
various measures on http://www.harzing.com/pop.htm.
Hirsch, Jorge E. (2005), An index to quantify an individual’s scientific research output, Proceedings of the National Academy of Sciences 102, 16569-16572.
Kieser, Alfred and Lars Leiner (2009), Why the Rigour–Relevance Gap in Management Research Is Unbridgeable,
Journal of Management Studies 46, 516-533.
Kieser, Alfred (2010), Unternehmen Wissenschaft?, Leviathan 38, 347-367.
Rost, Katja and Bruno S. Frey (2011), Quantitative and Qualitative Rankings of Scholars, sbr 63, 63-91.
Schrader, Ulf and Thorsten Hennig-Thurau (2009), VHB-JOURQUAL2: Method, Results, and Implications of the
German Academic Association for Business Research’s Journal Ranking, BuR – Business Research 2, 180-204.