Sérgio D. Sousa, Elaine M. Aspinwall, A. Guimarães Rodrigues
The Authors
Sérgio D. Sousa, School of Engineering, University of Minho, Braga,
Portugal
Elaine M. Aspinwall, School of Engineering, University of Minho, Braga,
Portugal
A. Guimarães Rodrigues, School of Engineering, University of Minho, Braga,
Portugal
Acknowledgements
The authors are grateful to all the companies that participated in this
survey. This work was partially supported by British Chevening
Scholarships (Grant POR 0100109) and Fundação para a Ciência e a
Tecnologia (Grant SFRH/BD/6939/2001). This paper is an extended version of
the work presented at the First International Conference on Performance
Measures, Benchmarking and Best Practices in New Economy – Business
Excellence 2003, 10-13 June 2003, University of Minho, Portugal.
Abstract
Purpose – To determine the current state of knowledge related to
performance measures and their degree of implementation in small and
medium enterprises (SMEs) in England.
Design/methodology/approach – The paper starts with a literature review
and then goes on to discuss the methodology used. The survey is briefly
presented together with the analysis of the resultant data. General
opinions regarding performance measurement in English SMEs are described,
including the most important measures and the biggest obstacles to the
adoption of new ones. Hypotheses about differences between groups are
tested and discussed.
Findings – This work concludes that there is a gap between the
theory/knowledge of performance measures and the practice in English SMEs.
Training of employees and difficulty in defining new performance measures
were highlighted as the major obstacles to the adoption of new performance
measures.
Research limitations/implications – The low response rate of the survey
precludes the generalisation of the findings.
Practical implications – Innovation and learning measures should be
applied more widely.
Originality/value – This paper is relevant to academics and SME managers
because it supports the existence of a gap between the theory of
performance measurement and its degree of implementation. In addition, it
introduces both theoretical information on performance measurement,
including that based on the balanced scorecard perspectives, and practical
information from a survey conducted in English SMEs.
Article Type: Research paper
Keyword(s): Performance measures; Small to medium-sized enterprises;
Balanced scorecard; Total quality management; England.
Benchmarking: An International Journal
Volume 13 Number 1/2 2006 pp. 120-134
Copyright © Emerald Group Publishing Limited ISSN 1463-5771
Literature review
For the purpose of this research “performance measurement” (Neely et al.,
1995) has been defined as the process of quantifying the efficiency and
effectiveness of action, and “performance measure” as a metric used to
quantify that action. Small and medium enterprises (SMEs) were taken to be
those companies with fewer than 250 employees (50 for small enterprises)
(Commission of the European Communities, 2003) and:
no more than 25 per cent of the capital or voting rights were held by
one or more enterprises which were not, themselves, SMEs; and
the annual turnover was less than €40m (€7m for small companies) or
the total balance sheet was less than €27m (€5m for small companies).
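The size and independence criteria above can be expressed as a simple classification check. This is an illustrative sketch only: the function name and the simplified ownership test are ours, and the full legal definition contains further detail.

```python
def classify_enterprise(employees, turnover_m=None, balance_sheet_m=None,
                        external_ownership_pct=0.0):
    """Classify an enterprise under the EC SME thresholds used in this
    study. Monetary figures are in millions of euros. Illustrative
    sketch only; the legal definition has further detail."""
    # Independence criterion: no more than 25 per cent of capital or
    # voting rights held by enterprises that are not themselves SMEs.
    if external_ownership_pct > 25:
        return "not an SME"

    # Either the turnover ceiling or the balance-sheet ceiling may be met.
    def within(turnover_cap, balance_cap):
        return ((turnover_m is not None and turnover_m < turnover_cap) or
                (balance_sheet_m is not None and balance_sheet_m < balance_cap))

    if employees < 50 and within(7, 5):
        return "small"
    if employees < 250 and within(40, 27):
        return "medium"
    return "not an SME"
```

For example, `classify_enterprise(30, turnover_m=5)` returns `"small"`, while a 120-employee firm majority-owned by a large group falls outside the definition.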
Traditional methods of measuring a company’s performance by financial
indices alone have virtually disappeared from large organisations (Basu,
2001). Non-financial measures are at the heart of describing strategy and
of developing a unique set of performance measures that clearly
communicate strategy (Kaplan and Norton, 1992, 1996), and help in its
execution (Frigo, 2002).
Frigo (2002) reported the existence of a gap between strategy and
performance measures, which failed to support the communication of
strategy within an organisation. André and Saraiva (2000) noted that there
was quite a large gap between available models and current company
practices in Portuguese companies. Hudson et al. (2001) concluded that
although there was a widespread acceptance of the value of strategic
performance measurement amongst the SMEs that they studied, none had taken
steps to redesign or update their current performance measurement systems.
Many excellence models and performance measurement frameworks, like the
EFQM (2001) excellence model, Kanji’s business scorecard (Kanji and Sá,
2002), the performance prism (Neely et al., 2002), and the balanced
scorecard (Kaplan and Norton, 1992), have proposed ways of using the TQM
philosophy. According to Ahmed (2002), the most popular ones to have drawn
the attention of researchers include the balanced scorecard and the EFQM.
Kanji and Sá (2002) state, for example, that the new approach to
performance measurement suggested in the balanced scorecard is consistent
with business excellence and TQM. The balanced scorecard is relevant to
both small and large organisations; however, neither a comprehensive
literature review nor any empirical research exists on implementing the
balanced scorecard in SMEs (Andersen et al., 2001).
The interest, over the last decade, in TQM and quality awards has
highlighted the importance of performance indicators in achieving quality
excellence. Quality measures represent the most positive step taken to
date in broadening the basis of business performance measurement (Bogan
and English, 1994). Models of excellence and improvement initiatives based
on TQM principles reflect the importance not only of complying with
specifications but also of delighting an organisation's stakeholders.
The relationship between TQM practice and organisational performance is
significant (Samson and Terziovski, 1999), and TQM implementation
correlates with quality performance (Brah et al., 2002), despite some
contradictory cases (Shaffer and Thomson, 1992; Ittner and Larcker, 1997;
Sterman et al., 1997; Wilbur, 2002). Many of the failures of TQM in small
organisations are related to bad implementation strategies and processes
(Hansson and Klefsjo, 2003). Wood and Childe (2003) showed that it was
possible to establish relationships between process improvement actions
and performance requirements.
The adoption of the process approach to quality management systems (QMS)
was one of the most important aspects of the year 2000 revisions of ISO
9001 and ISO 9004 (Hooper, 2001). The new ISO 9001 standard (ISO, 2002)
requires fact-based decisions and continual measurement and improvement of
performance results (Karapetrovic and Willborn, 2002). These changes have
narrowed the gap between the requirements of a QMS and those of the EFQM
excellence model. Both reinforce the need to measure not only the critical
success factors of an organisation but also the satisfaction of its
stakeholders, to allow and assure continuous improvement aligned with
strategy.
Juran and Godfrey (1999) and Campanela (1999) considered quality costs to
be the main driver when selecting quality improvement projects. This can
also be done with the support of the balanced scorecard, making it a
strategic management tool as suggested by Cobbold and Lawrie (2002).
The EFQM (2003) recognises that organisations, on their journey to
excellence, may show different levels of maturity. The selection of the
best approach to measure the effectiveness of a system will ultimately be
based on the maturity of the quality efforts, the type of organisation or
process, and other TQM tools applied concurrently (Campanela, 1999). Brah
et al. (2002) reported that the size of a company and the extent of its
experience with TQM affect the rigor of implementation and the resulting
level of performance quality. However, the nature of a company
(manufacturing or service) does not seem to have a significant effect on
either aspect.
Hudson et al. (2001) identified a discrepancy between theory and practice
in the development processes employed by SMEs, including a lack of
strategic forethought, a lack of communication between managers and the
lack of a structured process for development. They also
suggest that there are substantial barriers to strategic PM systems’
development in SMEs. Neely et al. (1995) pointed out that measurement is a
luxury for SMEs – success and failure are obvious. They concluded that
the cost of measurement is an issue of great concern to managers in
SMEs.
Methodology
The steps followed in this research are similar to those followed by
Saraph et al. (1989) and Yusof and Aspinwall (2000b).
Following a literature review, the subject of performance measurement was
discussed with both academic and non-academic specialists and hypotheses
were formulated. This provided the basis for the construction of a
questionnaire which was pre-tested and revised. The final survey form was
sent by e-mail, to privately owned SMEs in England (both from the service
and industrial sectors).
The data was analysed using the SPSS package v11.0. The reliability and
validity of the questionnaire were also verified. A test for possible bias
from respondents was analysed as suggested by Armstrong and Overton
(1977).
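The Armstrong and Overton approach treats late respondents as a proxy for non-respondents and compares the two waves on key variables, typically with a two-sample t-test. A minimal pooled-variance sketch (the scores below are invented for illustration, not the survey data):

```python
import math
from statistics import mean, variance

def pooled_t(sample_a, sample_b):
    """Two-sample t statistic with pooled variance: a simple way to
    compare early and late respondents on a survey variable."""
    na, nb = len(sample_a), len(sample_b)
    # Pooled estimate of the common variance of the two waves.
    sp2 = ((na - 1) * variance(sample_a) +
           (nb - 1) * variance(sample_b)) / (na + nb - 2)
    return (mean(sample_a) - mean(sample_b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

early = [3, 4, 3, 4, 4]   # first wave of returns (illustrative scores)
late = [3, 4, 4, 3, 4]    # second wave, proxy for non-respondents
t_stat = pooled_t(early, late)  # near zero here, i.e. no evident bias
```

A t statistic near zero, as in this toy case, is consistent with the paper's finding that non-respondents would have similar characteristics to respondents.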
Survey
The questionnaire consisted of three main sections: the company
background, the level of knowledge about performance measures, and the use
of specific performance measures. The first section was intended to
determine general information like number of employees, sector of
activity, number of clients, types of product made, whether a certified
quality system was held, the level of TQM and quality measures adoption
and confirmation that the company was indeed an SME. Each respondent was
also asked to select, from a list of nine, the quality initiatives that
had been adopted in their company. In addition, they were asked to state
their company's strategic objectives to establish whether or not adequate
performance measures had been adopted to track their evolution.
The second section consisted of 22 statements about the performance
measurement system of the company, including aspects such as the company’s
strategy, the selection of performance measures, their implementation and
the results. The respondents were asked to rate their degree of agreement
with each statement according to a five-point Likert scale from 1
“strongly disagree” to 5 “strongly agree”. A zero option was included for
respondents in doubt. This section also contained a question to determine the most
important performance measures used in the company, and one for the
obstacles likely to be encountered if adopting new ones. The actual
criteria that allow companies to win new orders, as suggested by Neely et
al. (1994), were also assessed.
The balanced scorecard (Kaplan and Norton, 1992, 1993) was chosen as the
basis for the third section of the questionnaire mainly because of its
simplicity, general acceptance among practitioners and researchers, and
its close association with strategy (Kaplan and Norton, 1996). The
objective of this section was to investigate the importance and use of
different performance measures. A Likert scale similar to that used in the
second section was used to rate the importance and the use of each
measure.
Questionnaire reliability and validity
The reliability of the questionnaire, which measures internal consistency,
was studied through Cronbach’s α. This method allows for the calculation
of the α coefficient if one variable is removed from the original set,
making it possible to identify the subset that has the highest reliability
coefficient. If all the results are above 0.7, the scales are judged to be
reliable.
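Cronbach's α and the "α if item deleted" diagnostic described above can be computed directly from the item scores. A pure-Python sketch with invented Likert responses (three items, five respondents):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists (one list per
    item, one score per respondent)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

def alpha_if_deleted(items):
    """Alpha recomputed with each item removed in turn, used to find
    the subset with the highest reliability coefficient."""
    return [cronbach_alpha(items[:i] + items[i + 1:])
            for i in range(len(items))]

# Invented responses; each inner list is one item's scores.
items = [[1, 2, 3, 4, 5], [2, 2, 4, 4, 5], [1, 3, 3, 5, 5]]
alpha = cronbach_alpha(items)  # about 0.96, above the 0.7 threshold
```

Items whose removal raises α are candidates for deletion, which is how the 2 of 22 statements mentioned below would be identified.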
In the second section of the questionnaire, all four groups (components)
were considered reliable after deleting 2 of the 22 statements
(variables). The α coefficients varied between 0.744 and 0.890.
Measures in the third section were organised as suggested in the balanced
scorecard and, as can be seen in Table I, all groups of measures were
considered reliable.
Within the customer measures group, delivery was not considered reliable,
and therefore, was removed from further analysis. This is not critical to
this study because other components regarding customer performance
measures are being considered.
Content validity is always subjectively evaluated by the researcher
(Churchill, 1979; Saraph et al., 1989). An instrument has content validity
if it contains a representative collection of items and if sensible
methods of test construction were used (Yusof and Aspinwall, 2000b). It is
strongly believed that the second and third sections of this survey
instrument have content validity as they were well received by the pilot
respondents and by several academics and company managers who assessed
them.
Construct validity was tested for the second and third sections using
principal components analysis. Each measure or variable within a component
should have a significant correlation with variables of the same component
and low correlation with others (Hair et al., 1998). The objective of
construct validity analysis is to verify if all the statements that
translate the concept under study are unifactorial. If this happens the
group is considered homogeneous.
In the second section, only one variable was deleted to assure that all
groups were unifactorial (Table II), i.e. in each group only one component
was extracted, thus all groups were considered homogeneous. The
Kaiser-Meyer-Olkin (KMO) indicator, which is a measure of sampling
adequacy and should not be lower than 0.5, was also verified in all cases.
Variables within each component gave correlations higher than 0.635 in all
cases.
Eight variables out of 61 were deleted in the third section to make each
group unifactorial (Table III). The results indicate that in both sections
each set of variables constitutes a homogeneous group. Thus each one
translates one concept.
Predictive or criterion-related validity was tested as suggested by Owlia
and Aspinwall (1998) and Yusof and Aspinwall (2000a). A greater use of
performance measures should correspond to a greater understanding of the
company’s performance measurement system.
A linear regression analysis was performed on the overall use of
performance measures (from the second section) against the components
identified in the third section. The adjusted R2 value was 68.2 per cent,
suggesting a good fit. To improve this value, a reduction in the number of
factors was considered. Using the stepwise method to select the variables
to be added to, or removed from, the regression model, the adjusted R2
value increased marginally to 68.6 per cent. The overall perception of the
performance measurement system (OPPMS) can be expressed through the
following model (Equation 1). A residual analysis was carried out to
validate the assumptions of normality, constant variance and zero mean.
The model suggests that English SMEs report a higher use of performance
measures if they use financial, quality performance and training of
employees’ measures. The negative relationship associated with the use of
customer performance and innovation measures, suggests that these measures
may not be perceived as performance measures.
The results, overall, show that the instrument reflects predictive
validity.
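The adjusted R² quoted above penalises R² for the number of predictors: adj R² = 1 − (1 − R²)(n − 1)/(n − p − 1). A single-predictor ordinary least squares sketch illustrates the calculation (toy data, not the survey responses or the paper's multi-factor model):

```python
from statistics import mean

def ols_fit(x, y):
    """Simple least-squares fit y = a + b*x, returning the intercept,
    slope, R-squared and adjusted R-squared."""
    n = len(x)
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    sst = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - sse / sst
    p = 1  # one predictor in this sketch
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
    return a, b, r2, adj_r2

# Toy data: overall use of measures against one component score.
a, b, r2, adj_r2 = ols_fit([1, 2, 3, 4, 5], [2.1, 2.9, 4.2, 4.8, 6.1])
```

Because adj R² subtracts a penalty that grows with p, removing weak predictors (as the stepwise method does) can raise it even when raw R² barely changes, which is exactly the 68.2 → 68.6 per cent movement reported above.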
Results
The questionnaire was sent to 400 companies and 52 were returned
completed. Four of the respondents were not classified as SMEs, leaving 48
valid responses and a response rate of 12 per cent. This is low for a
postal survey and so caution must be exercised when generalising
conclusions. The returns were
organised into two groups to test possible bias of respondents. No bias
was found and so it can be assumed that non-respondents would have similar
characteristics to the respondents.
Figure 1 shows the breakdown of respondent companies by number of
employees.
The wide range of activities covered by respondent companies is shown in
Figure 2, and includes SMEs from the service sector.
The majority of respondents were certified to ISO standards (Figure 3),
but only 14 per cent had completed the transition to ISO 9001:2000.
Continuous improvement or total quality management can be implemented
following a Plan-Do-Check-Act (PDCA) cycle. Thus it is fundamental in the
planning phase to define activities to improve strategic objectives, which
will then be monitored. Respondents selected profitability (53 per cent)
as the main strategic objective followed by quality (22 per cent) and
flexibility (10 per cent). When asked about the criteria that most helped
their companies to win orders, manufacturing quality came in first
followed by price (Table IV). It appears that despite other important
factors, the quality/price relationship is still of major importance for
English SMEs and cannot be forgotten when initiatives are deployed within
an organisation.
Table V presents the quality initiatives already implemented in the
respondents’ companies. Setting up a quality department can be explained
as a result of ISO standards or simply as a means of implementing the
necessary activities to improve quality and to track their evolution.
Employee involvement to improve quality and establishing measures of
quality progress were selected by 65 and 46 per cent of respondents,
respectively, so approximately half of the companies can be expected to
use measures to assess quality progress. The same data show that
statistical process control, an efficient tool for understanding the
variation of a process, is used in only 23 per cent of companies.
General opinions about performance measurement were sought on strategy,
selection of measures, implementation and results (Figure 4), as all of
these are important in the process of continuous improvement. An ANOVA
test on the four means showed a significant difference between them at the
5 per cent level. The assumption of homogeneity of variances was verified
through Levene’s test. The results group has the lowest score, meaning
that the consequence of using performance measures is not well understood,
and a balance amongst these groups should result in better performance
measurement systems.
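The one-way ANOVA used throughout this analysis compares between-group to within-group variability: F = MS_between / MS_within. A minimal version on invented Likert scores (not the survey data):

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over lists of observations."""
    all_obs = [x for g in groups for x in g]
    grand = mean(all_obs)
    # Variability of group means around the grand mean.
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Variability of observations around their own group mean.
    ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Invented scores for three opinion groups; clearly separated means
# give a large F statistic.
f_stat = one_way_anova_f([[4, 5, 4, 5], [2, 3, 2, 3], [3, 3, 4, 4]])  # F = 12
```

A large F relative to the critical value at the 5 per cent level leads to the conclusion, as above, that the group means differ.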
Obstacles to the adoption of new performance measures in SMEs include
computer systems issues, lack of top management commitment and the
existing accounting system (Bourne et al., 2000; Neely et al., 1997). The
respondents considered (Figure 5) training of employees to be the most
important obstacle, followed by difficulty in defining new measures, which
could result from a lack of skills among employees and leadership,
confirming the importance of top management commitment. The cost of the
performance measurement system must always be analysed and is of great
concern to SMEs.
According to the literature, companies should adopt a balanced use of the
four groups of measures, as organised in the balanced scorecard. However,
respondents considered some measures more important than others, as shown
in Figure 6.
It is curious to note that on-time delivery is not perceived to be a
relevant criterion to win new orders (Table IV) but it is considered the
most important performance measure. This may be because, if a problem
occurs in the process or with the supplier it will be reflected in this
measure. In-process quality was perceived to be the second most important
measure.
Balanced scorecard
Grouping all the performance measures together, importance was rated by
the respondents as 3.55, on average, and use as 3.18. This implies that
although the respondents considered performance measures important, they
are not used accordingly. After verifying the homogeneity of the
variances, an ANOVA was performed. This resulted in a p (or significance)
value of 6.3 per cent which, being just above 5 per cent, was too large to
conclude a real difference. However, looking at the
four groups separately, financial measures are considered the most
important and are widely used, while innovation and learning measures are
rated less important and are less used (Figure 7).
The four groups of measures analysed in this study were assessed to find
out if there was a gap between the perceived importance and the practice
or use for each group. Tests were performed, using the ANOVA with a 5 per
cent significance level, to see if there were any differences between the
means of:
importance and use of each group of measures;
use for companies from the service and industrial sectors;
use for small versus medium enterprises; and
use for companies certified according to a quality standard and others.
The internal business process group exhibited a significant difference
between the importance and use of productivity measures; thus there are
measures in this group that should be put to more use, such as “output per
employee or per labour-hour”, “time spent on each stage of product
development”, “time to process an operation”, “number of errors per unit”,
“number of billing errors per unit”, “production volume”, “absenteeism”,
and “injury lost days”. There was insufficient evidence to conclude
differences between the importance and use of quality performance
measures, meaning that if they are considered to be important they are
being used. The same was also true for the financial measures group.
A significant difference was found between the importance and use of both:
Employee training measures (i.e. in the innovation and learning group),
which include measures such as “quality related training provided to
employees”, “percent of employees who have quality as a major
responsibility”, “surveys of employee satisfaction/attitudes” and
“improvement of employee skill/knowledge levels”.
Customer requirement measures (i.e. the customer group), which include
measures such as “ability to adapt or tailor products to customer
needs”; “response time to customer requests for ‘specials’”; and
“accuracy of interpretation of customer requirements”.
Again, there was insufficient evidence to suggest differences in the level
of use of performance measures between industry and service enterprises,
or between small and medium enterprises. However, in this sample, medium
enterprises make greater use of internal business process and financial
measures, while small ones make greater use of innovation and customer
measures.
Companies certified to a quality standard and those that were not did not
show any significant differences between their mean levels of use of
performance measures. Levene’s test for the homogeneity of variances was
violated in customer performance measures. Figure 8 shows this difference
in variance, suggesting that SMEs working to a quality standard are more
likely to adopt customer performance measures. A similar conclusion can be
drawn from other measures but this was the only case that was
statistically significant.
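Levene's test, used above to check homogeneity of variances, is equivalent to a one-way ANOVA run on the absolute deviations of each observation from its group centre (the mean, in the original formulation). A sketch with invented scores, contrasting a high-spread and a low-spread group:

```python
from statistics import mean

def levene_w(groups):
    """Levene's W statistic (mean-centred form): a one-way ANOVA F
    computed on absolute deviations from each group's mean."""
    # Transform each observation to its absolute deviation.
    z = [[abs(x - mean(g)) for x in g] for g in groups]
    all_z = [v for g in z for v in g]
    grand = mean(all_z)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in z)
    ss_within = sum(sum((v - mean(g)) ** 2 for v in g) for g in z)
    df_between = len(z) - 1
    df_within = len(all_z) - len(z)
    return (ss_between / df_between) / (ss_within / df_within)

# Invented scores: one widely spread group, one tightly clustered group.
w = levene_w([[1, 5, 2, 6], [3, 4, 3, 4]])
```

A large W signals unequal variances, which is the kind of difference Figure 8 shows between certified and non-certified SMEs' use of customer performance measures.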
Conclusions
The study investigated the current level of knowledge of performance
measures and their degree of implementation in English SMEs. It identified
differences between some groups of companies and presented the biggest
obstacles to the introduction of new measures.
Results indicate that the SMEs surveyed recognise the importance of the
performance measurement system but their level of use was significantly
lower. This implies that there is a gap between theory and practice, which
could be considered an improvement opportunity for English SMEs.
Performance measures can be used to influence behaviour and, thus, affect
the implementation of strategy (Neely et al., 1994). The OPPMS, as part
of a continuous improvement process linking strategy to results, is not
balanced, meaning that this cycle is not fully understood by SMEs’
managers. Although it is not necessary to use all the measures suggested
in the questionnaire, an alignment between strategy and performance
measures makes them more effective (McAdam and Bailie, 2002).
Training of employees and difficulty defining new performance measures
were highlighted as the most important obstacles to the adoption of new
performance measures. This may reflect a lack of skills by employees and a
difficulty in understanding the process. Only a minority of the respondent
SMEs were applying statistical process control and cultural change
programmes.
The data collected from this survey suggest that there are no significant
differences in the use of performance measures between industry and
service enterprises, or between small and medium enterprises. However,
this requires further
study, since one limitation of this study was the low response rate, which
precludes a generalisation of these findings.
Overall, financial measures were the most widely used, while innovation
and learning measures were rated less important and were less used. The
most important performance measures were not consistent with criteria to
win new orders.
Based on the data collected, a gap was detected between the importance and
use of some measures suggesting that SMEs should use more productivity,
employee training and customer requirement measures. In particular, the
level of use of innovation and learning measures should increase if SMEs
can resolve the major obstacles, identified in this work, to the adoption
of new measures: training of employees and difficulty defining new
measures.
This research is part of a PhD programme to develop a simple and
easy-to-use framework to allow SMEs to create their own performance
measurement system, aligned with strategy, to allow the achievement of
pre-determined goals.
Equation 1
Figure 1 Number of workers
Figure 2 Sectors of activity
Figure 3 SMEs’ quality assurance system
Figure 4 Overall perception of the performance measurement system
Figure 5 Obstacles to the adoption of new performance measures
Figure 6 Most important performance measures
Figure 7 Importance and use of the balanced scorecard
Figure 8 Use of customer performance measures for SMEs working/not
working according to a quality standard
Table I Reliability of measures in the third section
Table II Principal component analysis of the second section
Table III Principal component analysis of the third section
Table IV Criteria to win new orders
Table V Quality initiatives adopted by English SMEs
References
Ahmed, A.M. (2002), “Virtual integrated performance measurement”,
International Journal of Quality & Reliability Management, Vol. 19
No.4, pp.414-41.
Andersen, H., Cobbold, I., Lawrie, G. (2001), “Balanced scorecard
implementation in SMEs: reflection on literature and practice”,
Proceedings of the Fourth SMESME International Conference, Aalborg
University, Aalborg, Denmark, pp.103-12.
André, M., Saraiva, P. (2000), “Approaches of Portuguese companies
for relating customer satisfaction with business results”, Total
Quality Management, Vol. 11 No.7, pp.929-39.
Armstrong, J.S., Overton, T.S. (1977), “Estimating nonresponse bias
in mail surveys”, Journal of Marketing Research, Vol. 14 pp.396-402.
Basu, R. (2001), “New criteria of performance management”, Measuring
Business Excellence, Vol. 5 No.4, pp.7-12.
Bogan, C.E., English, M.J. (1994), Benchmarking for Best Practices –
Winning through Innovative Adaptation, McGraw-Hill, New York, NY.
Bourne, M., Mills, J., Wilcox, M., Neely, A., Platts, K. (2000),
“Designing, implementing and updating performance measurement
systems”, International Journal of Operations & Production
Management, Vol. 20 No.7, pp.754-71.
Brah, S.A., Tee, S.S.L., Rao, B. (2002), “Relationship between TQM
and performance of Singapore companies”, International Journal of
Quality & Reliability Management, Vol. 19 No.4, pp.356-79.
Campanela, J. (Ed.) (1999), Principles of Quality Costs – Principles,
Implementation and Use, ASQ Quality Press, Milwaukee, WI.
Churchill, G.A. (1979), “A paradigm for developing better measures of
marketing constructs”, Journal of Marketing Research, Vol. 16
pp.64-73.
Cobbold, I.M., Lawrie, G.J.G. (2002), “The development of the
balanced scorecard as a strategic management tool”, Proceedings of
the PMA 2002, Boston, MA, USA, pp.125-32.
Commission of the European Communities (2003), “Creating an
entrepreneurial Europe – the activities of the European Union for
small and medium-sized enterprises (SMEs)”, Commission of the
European Communities, Brussels, available at:
http://europa.eu.int/comm/enterprise/entrepreneurship/promoting_
entrepreneurship/.
EFQM (2001), “Moving from the SME model to the EFQM excellence model
– SMEs version”, EFQM, available at:
www.efqm.org/publications/downloads/MovingModelsPDF.pdf.
EFQM (2003), “Introducing excellence”, available at:
www.efqm.org/Downloads/pdf/0723-InEx-en.pdf.
Frigo, M.L. (2002), “Nonfinancial performance measures and strategy
execution”, Strategic Management, August, pp.6-9.
Hair, J.F., Anderson, R.E., Tatham, R.L., Black, W.C. (1998),
Multivariate Data Analysis, Prentice-Hall, Englewood Cliffs, NJ.
Hansson, J., Klefsjo, B. (2003), “A core value model for
implementing total quality management in small organisations”, The
TQM Magazine, Vol. 15 No.2, pp.71-81.
Hooper, J. (2001), “The process approach to QMS in ISO 9001 and ISO
9004”, Quality Progress, Vol. 34 No.12, pp.70-3.
Hudson, M., Smart, A., Bourne, M. (2001), “Theory and practice in
SME performance measurement systems”, International Journal of
Operations & Production Management, Vol. 21 No.8, pp.1096-115.
ISO (2002), “ISO 9000 – quality management systems”, International
Organisation for Standardization, ISO, Geneva, available at:
www.iso.ch/iso/en/iso9000-14000/iso9000/qmp.html.
Ittner, C.D., Larcker, D.F. (1997), “The performance effects of
process management techniques”, Management Science, Vol. 43 No.4,
pp.522-34.
Juran, J.M., Godfrey, A.B. (1999), Juran’s Quality Handbook,
McGraw-Hill, New York, NY.
Kanji, G., Sá, P. (2002), “Kanji’s business scorecard”, Total
Quality Management, Vol. 13 No.1, pp.13-27.
Kaplan, R., Norton, D. (1992), “The balanced scorecard – measures
that drive performance”, Harvard Business Review, Vol. 70 pp.71-9.
Kaplan, R., Norton, D. (1993), “Putting the balanced scorecard to
work”, Harvard Business Review, Vol. 71 pp.134-47.
Kaplan, R., Norton, D. (1996), “Using the balanced scorecard as a
strategic management system”, Harvard Business Review, Vol. 74
pp.75-85.
Karapetrovic, S., Willborn, W. (2002), “Self-audit of process
performance”, International Journal of Quality & Reliability
Management, Vol. 19 No.1, pp.24-45.
McAdam, R., Bailie, B. (2002), “Business performance measures and
alignment impact on strategy”, International Journal of Operations &
Production Management, Vol. 22 No.9, pp.972-96.
Neely, A., Adams, C., Kennerley, M. (2002), The Performance Prism:
The Scorecard for Measuring and Managing Business Success, Financial
Times/Prentice-Hall, Harlow.
Neely, A., Gregory, M., Platts, K. (1995), “Performance measurement
system design”, International Journal of Operations & Production
Management, Vol. 15 No.4, pp.80-116.
Neely, A., Mills, J., Gregory, M., Richards, H. (1994), “Realising
strategy through measurement”, International Journal of Operations &
Production Management, Vol. 14 No.3, pp.140-52.
Neely, A., Richards, H., Mills, H., Platts, K., Bourne, M. (1997),
“Designing performance measures: a structured approach”,
International Journal of Operations & Performance Management, Vol.
17 No.11, pp.1131-52.
Owlia, M.S., Aspinwall, E.M. (1998), “A framework for measuring
quality in engineering education”, Total Quality Management, Vol. 9
No.6, pp.501-18.
Samson, D., Terziovski, M. (1999), “The relationship between total
quality management practices and operational performance”, Journal
of Operations Management, Vol. 17 No.3, pp.393-409.
Saraph, J., Benson, P., Schroeder, R. (1989), “An instrument for
measuring the critical factors of quality management”, Decision
Sciences, Vol. 20 No.4, pp.810-29.
Shaffer, R.H., Thomson, H.A. (1992), “Successful change programs
begin with results”, Harvard Business Review, Vol. 70 No.1, pp.80-9.
Sterman, J.D., Repenning, N.P., Kofman, F. (1997), “Unanticipated
side effects of successful quality programs: exploring a paradox of
organizational improvement”, Management Science, Vol. 43 No.4,
pp.503-34.
Wilbur, J.H. (2002), “Is time running out for quality”, Quality
Progress, Vol. 35 No.7, pp.75-9.
Wood, C., Childe, S. (2003), “Strategic performance measures for
business process re-design”, Proceedings of Business Excellence I –
Performance Measures, Benchmarking and Best Practices in New
Economy, University of Minho, Portugal, pp.102-7.
Yusof, S.M., Aspinwall, E.M. (2000a), “TQM implementation issues:
review and case study”, International Journal of Operations &
Production Management, Vol. 20 No.6, pp.634-55.
Corresponding author
Sérgio D. Sousa can be contacted at: sds@dps.uminho.pt