Benchmarking the Postgraduate Admission Process

October 2000
Volume 7 • Number 4

This article reports the main findings of a benchmarking study on the postgraduate admission process of higher education institutions. The project has involved three departments at the University of Manchester Institute of Science and Technology (UMIST), eight specialist master's programs within the Manchester School of Management (MSM), and five other universities in the United Kingdom, Germany, Hong Kong, and Spain. The work was undertaken as part of a European Union Leonardo da Vinci-funded project. The first four steps of the benchmarking process have been completed, namely the selection of the subject and the partners, and the collection and analysis of data. While the process is not yet complete, it is felt that important lessons are worth reporting. These include the applicability and feasibility of benchmarking to higher education, and the suitability of the various types of benchmarking, in particular internal benchmarking, to this environment.


Key words: benchmarking in higher education

by T. Fiekers, Research Institute Technology and Work, University of Kaiserslautern; B. G. Dale, Manchester School of Management, UMIST; D. A. Littler, Manchester School of Management, UMIST; W. Voß, Research Institute Technology and Work, University of Kaiserslautern

INTRODUCTION

The concept of benchmarking was popularized by the seminal work of Camp (1989), based on the experiences of Rank Xerox; it was the first comprehensive documentation of a structured approach to benchmarking. Since the publication of this work many books of a similar nature have been published (for example, Bogan and English 1994; Cook 1993; and Zairi and Leonard 1994). In the main, these have tended to concentrate on the benchmarking process and its main steps, the code of conduct of benchmarking, and the benefits of the process, using examples to pinpoint best practice and potential pitfalls. By and large, however, the texts are repetitive and fail to extend the concepts outlined in Camp's (1989) earlier work. Zairi (1995) points out that there has been a lack of empirical research on benchmarking, and what has been conducted has tended to concentrate on questionnaire surveys of the concept. Apart from a small handful of papers (for example, Hanson and Voss 1995; Le Sueur and Dale 1997; Leonard 1996; Love et al. 1998; Prasad, Tata, and Thorn 1996; Voss, Ahlström, and Blackmon 1997), there is little to sensitize the people involved in carrying out a benchmarking exercise to the issues that need to be faced and the means of resolving some of the main problems encountered.

Since 1996, the Manchester School of Management (MSM) at the University of Manchester Institute of Science and Technology (UMIST) has been involved in a European Union Leonardo da Vinci project undertaken jointly with Mondragon Eskola Politeknikoa (MEP), Institut National Polytechnique de Grenoble (INPG), and the University of Kaiserslautern. The broad aims of this project were to consider quality issues in higher education, including the applicability of the ISO 9000 series to higher education, the use of the European Foundation for Quality Management (EFQM) Model for Excellence, and a study of the applicability of benchmarking to key processes in higher education institutions.

The focus of this paper is the examination of benchmarking. The concept originated in manufacturing, and after the publication of Camp's (1989) original work it was employed in a variety of different sectors. Camp's (1998) recent book outlines examples from not only manufacturing and services but also the nonprofit sector, including government and higher education. As Engelkemeyer (1998) points out, however, benchmarking in higher education is still relatively uncommon and there are only a few published examples of projects. She also notes that benchmarking can be more easily applied to nonacademic activities such as administrative processes (for example, payroll and billing) and surrounding activities like catering and computing services than to teaching and research. However, as Fram and Camp (1995) point out: "Using benchmarking to only improve the peripherals in higher education would be tantamount to having a company use benchmarking to only improve administrative processes. Benchmarking has to extend to the core business unit."

As part of the Leonardo da Vinci project, MSM launched a pilot benchmarking project of its postgraduate admission process. MSM is the largest of UMIST's 20 departments, with currently some 800 undergraduates and 300 postgraduates. It has been a pioneer in management education since 1918. The school is one of two UK management and business schools to have received the highest rankings for both research (that is, international significance) and teaching (that is, excellent). UMIST, as a university, is ranked sixth overall in the United Kingdom, and has some 5000 undergraduates and 1500 postgraduates.


The first four steps of Camp’s (1989) benchmarking process–the selection of the subject and the partners, and the collection and analysis of data–have been completed. The results achieved and the experiences gained are examined in this paper. The paper, based on this experience, also comments on the applicability of benchmarking to higher education. The outcomes and the process are described under the broad headings of the four steps, and this is followed by a discussion and an analysis of the main findings.

BENCHMARKING OF THE POSTGRADUATE ADMISSION PROCESS

Step 1–Selection of the Benchmarking Subject

The first step of the approach advocated by Camp (1989) is the selection of the subject to be benchmarked, which relates to the problem being addressed. Usually, a process that is causing a bottleneck or giving concern is chosen. The following are the main reasons why the UMIST principal investigators decided to focus on the postgraduate admission process of MSM.

  • The admission process is a key process, since this is the first direct encounter between student and university.
  • The admission process is a major determinant of the quality of student intake and has far-reaching implications on many of the university’s processes and activities.
  • It was felt that the data relating to the process were readily available both within UMIST and MSM, thereby aiding the testing of the benchmarking concept.

In outlining these reasons it should be noted that just because a process can be benchmarked does not necessarily mean it should be benchmarked.

Although the admission process represents only a small part of the educational process, it is a complex activity. It begins with the marketing of the individual specialist master's and research programs, followed by the processing of inquiries and applications, and finally includes administrative matters such as formal registration, fee payment, and so on. Admissions to the UMIST programs are undertaken on a rolling basis, and an application can be considered up to the start date of the new postgraduate academic year.

To keep the benchmarking project as simple as possible, and to facilitate the testing of its applicability to higher education, it was decided to concentrate on application handling, covering all activities from the arrival of an application to the point at which the student registers. Furthermore, the focus was restricted to taught master's degrees (that is, a one-year program involving an approximately six-and-a-half-month taught-and-examination period followed by a five-and-a-half-month dissertation period). For example, in MSM there are eight specialist master's programs covering accounting and finance, business economics, international business, marketing, operations management, organizational psychology, personnel management, and technology management. UMIST currently has more than 50 different master's programs.

According to writers such as Zairi and Leonard (1994) and Love et al. (1998), understanding the business process of the subject under study is one of the major prerequisites for a successful benchmarking study. The first step was to produce a flowchart of the postgraduate admission process of UMIST and MSM. Figure 1 is a simplified version of the complete chart, highlighting the main stages of the process. It shows that, in the first place, each application is seen by the Graduate School (GS) Tutors, who determine whether or not the applicant fulfills the minimum educational requirements. The main criteria are the first degree result, evidenced by the transcripts of the previous university, and/or appropriate work experience. Following this, the application is passed to the respective department, where it is checked by the Program Tutor for special program requirements. In MSM, there is one Program Tutor for each specialist master's program. The Program Tutors require additional qualifications, such as English language tests for overseas applicants and the Graduate Management Admission Test (GMAT). Finally, after the decision of the Program Tutor, the postgraduate office within MSM takes over the handling of the application. This office informs the student of the decision and, for those offered a place, is responsible for the communication process until the registration date.

Flowcharting the admission process was crucial, helping to highlight key activities, their interfaces, and areas of responsibility. It also enabled the researchers to identify process-based measures that provide the basis for performance indicators. From a wide range of process measures the following three were selected.

  1. The time taken to make key decisions on a master’s program application. Speed of response is regarded as key in the student decision-making process and is related to the likelihood of the student taking up a place offered on a program. This measure is also an indication of any inefficiencies and bottlenecks in the process.
  2. The percentage of applications that receive an offer of a place on a master's program. This provides the basis for the calculation of two key ratios: (1) offer rate (that is, the number of applicants who receive an offer as a percentage of the total number of applications); and (2) take-up rate (that is, the number of applicants who register for a program after receiving an offer as a percentage of the total number of offers made).
  3. The quality of the applicants with respect to their undergraduate degree results. The School Curriculum and Assessment Authority (1997) has identified previous academic achievement as the most appropriate indicator of future academic performance. It is also a measure that is common across subject areas and countries.

Figure 2 relates these three process-based measures to the postgraduate admission process. A1-A8 represent the time-based measures; B1-B17 represent the percentage of application measures; and C1-C11 represent the quality measures.

From these three process-based measures five performance indicators were developed. These indicators are viewed as critical success factors of the postgraduate admission process, giving insight into its efficiency and effectiveness. These five performance indicators, including their subperformance indicators, are as follows:

  1. Conversion rate
    1.1 Conversion rate: that is, the number of registrations/number of applications.
    1.2 Conversion rate: that is, the number of registrations/number of offers.
  2. Predictability and capacity planning ability
    2.1 Reply rate: that is, the ratio of applicants who replied to the university (offer accepted, offer declined, and so on) after receiving an offer.
    2.2 Reliability of reply: that is, the ratio of applicants who registered on a program after acceptance of an offer.
    2.3 Time of reply: that is, the average number of days, from receipt of an offer, taken by applicants to reply.
    2.4 Surprise rate: that is, those applications that were not foreseen by the university and/or school. The students within this classification had received an offer but had not replied and appeared, without warning, at the start of a semester wishing to enroll on a program.
  3. Speed of response
    3.1 Time to reject: that is, the average number of days after the arrival of an application at the university until a rejection decision is made and the applicant is informed accordingly.
    3.2 Time to offer: that is, the average number of days after the arrival of an application at the university until an offer is made and the applicant informed.
  4. Decision rate: that is, a measure of those applications about which a decision is not taken.
  5. Quality of intake
    5.1 Percentage of "Firsts": that is, the percentage of students to whom a First Class degree was awarded.
    5.2 Percentage of "Lower Seconds": that is, the percentage of students to whom a Lower Second Class degree was awarded.

The conversion rate and quality of intake, which are interrelated, are outcome-related indicators, reflecting the performance of the admission process. These two measures need to be considered together since it can be assumed that, at least in the short term, the conversion rate can be increased at the expense of the quality of the intake and vice versa. The remaining three performance indicators are in-process measures, and these explain the means and process by which the results are achieved.
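To make the arithmetic behind these indicators concrete, the following sketch shows one way they might be computed from individual application records. It is illustrative only: the record structure, field names, and example values are assumptions made for the example and do not reflect the actual UMIST admission database.

```python
from datetime import date

# Hypothetical application records; fields and values are illustrative assumptions.
applications = [
    {"received": date(1999, 1, 10), "decision": "offer",  "decided": date(1999, 2, 1),
     "replied": True,  "accepted": True,  "registered": True,  "degree_class": "First"},
    {"received": date(1999, 1, 15), "decision": "offer",  "decided": date(1999, 2, 20),
     "replied": False, "accepted": False, "registered": False, "degree_class": "Upper Second"},
    {"received": date(1999, 2, 3),  "decision": "reject", "decided": date(1999, 2, 25),
     "replied": False, "accepted": False, "registered": False, "degree_class": "Lower Second"},
]

offers     = [a for a in applications if a["decision"] == "offer"]
rejects    = [a for a in applications if a["decision"] == "reject"]
accepted   = [a for a in offers if a["accepted"]]
registered = [a for a in applications if a["registered"]]

def mean_days(records):
    """Average number of days between arrival of an application and the decision."""
    return sum((r["decided"] - r["received"]).days for r in records) / len(records)

indicators = {
    # 1. Conversion rate (both variants defined above)
    "registrations/applications": len(registered) / len(applications),
    "registrations/offers":       len(registered) / len(offers),
    # 2. Predictability and capacity planning ability (selected sub-indicators)
    "reply rate":           sum(a["replied"] for a in offers) / len(offers),
    "reliability of reply": len([a for a in accepted if a["registered"]]) / len(accepted),
    # 3. Speed of response
    "time to offer (days)":  mean_days(offers),
    "time to reject (days)": mean_days(rejects),
    # 4. Decision rate (share of applications on which a decision was taken)
    "decision rate": (len(offers) + len(rejects)) / len(applications),
    # 5. Quality of intake
    "percentage of Firsts": len([a for a in registered if a["degree_class"] == "First"])
                            / len(registered),
}

for name, value in indicators.items():
    print(f"{name}: {value:.2f}")
```

In practice the same ratios would be calculated per program and per academic year so that the indicators can be tracked over time and compared between partners.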

Step 2–Selection of Benchmarking Partners

Having agreed on the subject to be benchmarked, the next step, according to Camp (1989), is selection of the benchmarking partners. There are a number of different types of benchmarking: internal, competitive, functional, and generic, reflecting the nature of the partners.

Internal benchmarking is the easiest form of benchmarking to conduct, and involves benchmarking between businesses or functions within the same group of companies. Cases of internal benchmarking are rarely documented in the literature. In a university environment this means between different departments of the same university (for example, benchmarking the postgraduate admission process between MSM and the Department of Mechanical Engineering at UMIST) or between functions within one department (for example, benchmarking the admission process of MSM’s specialist master’s programs, such as between international business and marketing).


Competitive benchmarking is against direct competitors. In a university context this means benchmarking between different universities or their departments. In some situations benchmarking between departments of the same university could be regarded in a similar context to competitive benchmarking. A case in point is when departments of the same university compete for centrally distributed money or resources depending on their performance with respect to research and teaching or if they compete for the same students. The extent to which benchmarking with overseas universities can be considered as competitive is subject to debate. Totally different circumstances regarding educational policy, degree structure, competitive nature of the environment, and so on have to be taken into account, and often processes and practices are so different that benchmarking becomes more akin to functional/generic. Almost all higher education benchmarking exercises documented in the literature (for example, Camp 1998 and Weller 1996) apply this form of benchmarking.


Functional/generic benchmarking is a comparison with the "best-in-class" in different industries, often considered to be world-class in their own right. Functional benchmarking involves comparisons with similar functions within the same broad band of business, while generic benchmarking looks at the broader similarities of processes and functions, usually in disparate operations. One typical example of this, with respect to the postgraduate admission process, is the means by which large international consulting firms deal with a considerable number of applications from undergraduates for their trainee consultant positions.

The advantages of starting with internal benchmarking are that internal partners are not only relatively easy to approach, but once they have agreed to cooperate the structure and type of the data are more comparable than those between different universities, with regional differences not distorting the comparison. In addition, differences in image between universities, which, especially in the area of admissions, have to be taken into account, are small, even though they can exist between different departments of the same university.


Despite these advantages, the shortcomings of focusing only on internal benchmarking are equally obvious. It is more likely that many small improvements will result, rather than any major breakthroughs, as the way of doing things within the same university is likely to be similar. In addition, departmental differences, both internal (for example, structure, size, and so on) and external (for example, subject popularity, student profile, and habits in terms of the number of applications placed, take-up rate of offers, and so on), are an issue that needs to be taken into consideration.

Internal Benchmarking

To assess the relevance of the benchmarking concept it was decided to focus, in the first place, on internal benchmarking. It was impossible to include all UMIST departments, and, therefore, some degree of preselection was necessary. A matrix was developed with the annual number of applications for entry to a master's program on one axis and the percentage of actual registrations resulting from these applications on the other (Figure 3). This helped to pinpoint departments that were similar to MSM (that is, Department 13) with respect to external factors such as subject popularity as well as student profile and habits.


In Figure 3 both axes have been divided into three categories–small, medium, and high. The data indicate that there is a negative relationship between the two variables in the case of departments that have a higher application rate. In general, they appear to be more selective in their acceptance of candidates. It was felt that Department 6 was the best benchmarking partner for MSM since it had a high number of applications like MSM but with a higher percentage of successful registrations.


A second preselection criterion was the average time it took a department to offer a place or to reject an application. Figure 4 shows the results of the first internal screening. As can be seen, Department 5 exhibits the best performance. This department also showed similarities to Department 6 and MSM regarding the relationship between the number of applications and percentage of registration, ranking medium on both axes.

From this analysis, Departments 5 and 6 were subsequently approached to cooperate in the project.
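A minimal sketch of this preselection screening, under assumed data, is shown below. The department figures and the band boundaries are hypothetical and are not the data behind Figures 3 and 4; only the screening logic (banding both axes and recording the average decision time) follows the approach described above.

```python
# Hypothetical department data: (annual applications, registrations, average days to decision).
departments = {
    "Dept 5":        (420, 90, 21),
    "Dept 6":        (950, 160, 35),
    "Dept 13 (MSM)": (1020, 120, 40),
    "Dept 2":        (180, 60, 55),
}

def band(value, low, high):
    """Assign a value to one of the three categories used on the matrix axes."""
    if value < low:
        return "small"
    return "medium" if value < high else "high"

for name, (apps, regs, days) in departments.items():
    reg_rate = 100.0 * regs / apps
    print(f"{name}: applications={band(apps, 300, 800)}, "
          f"registration rate={band(reg_rate, 12, 20)} ({reg_rate:.0f}%), "
          f"average days to decision={days}")
```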


External Benchmarking

The second stage of the project was extended to include external benchmarking partners. To avoid differences in terms of subject popularity, student profile, and student habits, the focus was on business and management schools. There is, however, considerable competition among schools, and partner selection is not without competitive considerations.

The benchmarking literature (Camp 1989; Zairi and Leonard 1994) stresses that to get maximum benefit from a benchmarking project, the partner should be "best-in-class." However, as there is little information available on who and what represents best practice regarding postgraduate admission, the first step was to obtain some performance data. For this reason, a questionnaire was prepared focusing on the postgraduate admission process. It covered items such as: the parts of the postgraduate (PG) admission process that are centralized; the composition of applicants' degree results; the average number of days after arrival of an application until a reject decision is taken and the candidate informed; the percentage of applicants who withdraw their application; and the percentage of applicants who register.

The questionnaire asked for data not usually exchanged between universities, and it was felt that sending it to universities on a random basis would not generate a high response rate. Love et al. (1998) found that completion of the questionnaire by the organization wishing to engage potential partners in a benchmarking project was a good means of eliciting effective responses and thereby pinpointing cases of best practice for subsequent follow-up visits. It was thought, however, that this approach would not work here, as a potential partner university might take the data provided in the completed questionnaire and not respond. Instead, a more focused approach was chosen, using personal contacts with potential partners to arrange an appointment with the staff responsible for the admission process. To facilitate this, a presentation was prepared giving an insight into MSM's postgraduate admission activities and an indication of the results achieved. In this way it was hoped trust would be built up, a vital prerequisite for successful future cooperation in a benchmarking project.

Initially, four UK universities were approached to take part in the project, and the presentation was given to them. Two "old" universities, drawn from the top 10 UK research universities (Higher Education Funding Council for England 1998), were approached. These are traditional universities whose activities are balanced in terms of research and teaching. Two "new" universities were also approached. These are former polytechnics whose activities are primarily focused on teaching. The two old universities expressed their interest in joining the study for the following reasons.

  • To make their admission process more visible
  • To relate their performance to that of the benchmarking partners
  • To compare performances between different departments using an approved set of key performance indicators
  • To establish the relationships between the partner universities as a basis for similar future activities

The two new universities did not wish to participate in the benchmarking project mainly because of differences in the admission process and the student profile (for example, the number of part-time students enrolled in master's programs).

In addition, three universities from outside the United Kingdom–from Germany, Hong Kong, and Spain–participated in the study. The same presentation as that given to the UK universities was delivered to the contact person at each university, and further communication was undertaken by e-mail and by searching sources such as Web pages.

Step 3–Data Collection

According to Camp (1989), the final step of the planning phase of the benchmarking process is the planning and execution of data collection. Higher education in the United Kingdom is in the public sector and as a consequence a considerable amount of data is kept for statistical reasons and performance comparisons. However, generating primary data, particularly comparative quantitative performance data, is difficult. Morgan and Murgatroyd (1994) point out that academics prefer qualitative to quantitative data. In addition, a considerable amount of data in higher education is of a personal nature. This applies, in particular, to admission data, which are covered by the Data Protection Act. Nevertheless, the two internal partners, the two from the United Kingdom, and the three overseas partners agreed to share their admissions data.

Following the initial presentation, the partners were aware of the kind of data that would be needed and how to approach their collection. To support the benchmarking activity, the researchers visited all the internal and UK partners. In the first place, the focus was on facilitating the data-gathering activity and on flowcharting the respective admission processes to ensure data comparability. In addition, further common activities were agreed upon, such as developing a common questionnaire to obtain students' perceptions of the admission system. This included questions such as: In general, how many days after the university receives an application would you normally expect a decision to be made? When were you asked to reply to your offer? When did you reply? When did you finally decide to come to the university?

The gathering of the data was not as difficult as initially envisaged. Most of the data were available from the admission system database. The main problem was related to the process-based measures. This was due to the considerable variation of admission practices with respect to key stages and activities undertaken, along with some degree of inconsistency in the terms used. However, flowcharting the process helped to reduce this problem and helped to identify the appropriate measures.

In contrast to the approach employed in the United Kingdom, data exchange with the overseas partners was restricted to the questionnaire, with only one site visit to the German partner carried out. Two main differences regarding the postgraduate admission process were identified. Firstly, few programs comparable to the UK master’s type programs were found. Secondly, admission practices were found to be very different. For example, in German universities, there is a date by which all applications have to be handed to the university at which the applicant wishes to study and only after that date does the processing of applications begin. In this situation speed of response comparisons are meaningless.

Two main problems with respect to data gathering were encountered. First, most admission data came within the 1998 Data Protection Act. This act limits the use of personal data exclusively to a prestated purpose, and only if the person concerned (that is, the applicant) agrees. Fortunately, the benchmarking project could be registered under the less-restrictive 1984 version of the act. The second problem relates to the comparability of the process-based measures due to the variation of admission practices.

Step 4–Gap Analysis

Camp (1989) describes the first step of the analysis phase as the identification of the performance gap. This is the difference that exists between the current performance level and the benchmark with respect to the critical success factors of the process. Percentages are usually used to express the size of the gap in specific time horizons.

An example of the gap analysis for the speed of response for benchmarking partners A and B is shown in Figure 5. This measure comprises the time to reject and time to offer.

It can be seen that benchmarking partner A is faster than partner B regarding both the time to reject an applicant (that is, 18 compared to 46 days) and the time taken to offer a place on a program (24 compared to 33 days). Meaningful conclusions, however, can only be drawn when a target performance measure is introduced. The target, according to Balm (1996), should be the equivalent of "total customer satisfaction," and, therefore, allows not only a comparison between the partners themselves but also a relationship of all performance levels to the target. It also challenges the assumption that the benchmark is always the best performance level found among the partners. In most cases this target is easy to derive (for example, for the decision rate indicator it would be 100 percent of the applications). In cases where the target is not so clear-cut, as, for example, with speed of response, Balm (1996) suggests gathering data on customer needs and expectations by means of a questionnaire or focus group and using these results as a guideline for target setting.

This approach was followed in the study. A questionnaire was sent to a sample of applicants by UMIST and the two UK benchmarking partners, covering decision speed and response along with a range of factors including the information provided, communication with the university and admitting department, the welcome package, and the reputation of the university and department. Analysis of the data received from 72 applicants showed that a target of 14 days to decide an application and inform the student of the decision would meet applicants' expectations. Against this target, the time to reject gives performances of 78 percent and 30 percent for partners A and B, respectively, and the time to offer gives 58 percent and 42 percent, respectively.

The final step is to summarize the relative performance levels of the various subindicators to the overall level of the respective indicator. For this it is necessary to use a weighting, which reflects the respective importance of the subindicators. The weightings–which are pre-set–should be decided at the same time as the performance indicators, thereby preventing any manipulation of the data. For example, in the case of speed of response, the time to offer was weighted three times as important as the time to reject. This weighting is obviously subjective but was based on what was perceived to be important by applicants to master’s programs. With this 3-to-1 weighting, partner A achieves a percentage achievement against target of 63 percent while partner B’s achievement is 39 percent, identifying potential improvement opportunities in both cases. Similar calculations revealed the gaps for the remaining four indicators–conversion rate, predictability and capacity planning ability, decision rate, and quality of intake.
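The calculation behind these figures can be expressed compactly. The sketch below simply reproduces the numbers quoted in the text (a 14-day target, partner A at 18 and 24 days and partner B at 46 and 33 days for time to reject and time to offer, and the 3-to-1 weighting in favor of time to offer); the function and variable names are illustrative, not part of the original study.

```python
TARGET_DAYS = 14                       # target derived from the applicant questionnaire
WEIGHTS = {"time to offer": 3, "time to reject": 1}

partners = {
    "A": {"time to reject": 18, "time to offer": 24},
    "B": {"time to reject": 46, "time to offer": 33},
}

def against_target(actual_days, target_days=TARGET_DAYS):
    """Performance against target as a percentage, capped at 100 percent."""
    return min(100.0, 100.0 * target_days / actual_days)

for name, times in partners.items():
    scores = {measure: against_target(days) for measure, days in times.items()}
    weighted = sum(WEIGHTS[m] * s for m, s in scores.items()) / sum(WEIGHTS.values())
    detail = ", ".join(f"{m} {s:.0f}%" for m, s in scores.items())
    print(f"Partner {name}: {detail}, weighted speed of response {weighted:.0f}%")
```

Running this reproduces the figures above: partner A scores 78 percent and 58 percent on the two sub-indicators (63 percent weighted), and partner B scores 30 percent and 42 percent (39 percent weighted).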

Having calculated the gap, authors such as Zairi (1995) stress the importance of visualization, in particular when more than one indicator needs to be considered to capture the level of complexity; Ahmed and Rafiq (1998) recommend the use of a spider-web diagram to highlight multiple gaps. The spider-web diagram also has the advantage of visualizing performance not only in comparison with the "best-in-class" but also against the expanded benchmark. Figure 6 shows the spider-web diagram of the postgraduate admission project for the three UMIST departments, and Figure 7 reflects a comparison for benchmarking partners A and B.
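Such a spider-web (radar) diagram is straightforward to reproduce; the following matplotlib sketch plots the five indicators for two partners. The scores used here are invented for illustration and are not the actual values behind Figures 6 and 7.

```python
import numpy as np
import matplotlib.pyplot as plt

indicators = ["Conversion rate", "Predictability and\ncapacity planning",
              "Speed of response", "Decision rate", "Quality of intake"]

# Hypothetical percentage-against-target scores, chosen only to illustrate the plot.
scores = {
    "Partner A": [70, 75, 45, 95, 60],
    "Partner B": [50, 55, 70, 90, 60],
}

angles = np.linspace(0, 2 * np.pi, len(indicators), endpoint=False).tolist()
angles += angles[:1]                      # repeat the first angle to close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for partner, values in scores.items():
    ax.plot(angles, values + values[:1], marker="o", label=partner)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(indicators)
ax.set_ylim(0, 100)
ax.set_title("Performance against target (%)")
ax.legend(loc="lower right")
plt.show()
```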


Figure 7 shows that partner B has a lower conversion rate and its admission process is less predictable, but it is quicker in making decisions. This indicates that partner B has a competitive disadvantage with respect to conversion rate and predictability and capacity planning ability, but has an advantage with respect to speed of response. These findings represented the starting point for the investigation of the drivers that account for these superior practices.

The following are the prime reasons for the superior performance of university A with respect to conversion rate.

  • A higher share of applicants had previously studied for their undergraduate degree at university A in comparison to university B.
  • The entrance requirements, particularly for the above category of students, were lower at university A.
  • University A sends applicants not only an offer letter, but also an offer package containing detailed information about the university, the course, accommodation, and so on.
  • The English Language Teaching Centre of university A is heavily promoted in the postgraduate material, indicating that coordinated facilities to upgrade the language skills of students from outside the United Kingdom are available.

The higher predictability and capacity planning ability is achieved by the following factors.

  • University A charges a deposit of £100 (approximately $150.00 U.S.) for accommodation–in the case of some overseas students, charging for the course is against the law.
  • The offer package of university A contains a prepaid acceptance card, which has to be completed and returned by the student within one month of receipt of the offer. In contrast, university B sends its acceptance cards much closer to the registration date.
  • At university B, three weeks prior to registration it is impossible to update the status of students in the database.

The superior performance of University B regarding speed of response was facilitated by the following factors.

  • The postgraduate admission process at university B is simpler and the flow of data smoother than at university A.
  • The admission system database at university B, even though it has the limitations indicated above, is superior to that at university A. The system at university B is based on a client-server principle, enabling all departments to access the central database. In contrast, the central database at university A is accessible only by the Graduate School Office; departments run their own databases, which lack compatibility.

The completion of the gap analysis, including its visualization, represents the current state of the project. At this stage in the project it was agreed that the primary focus would be on benchmarking with the two UK partners. The main reasons for this were as follows:

  • Performance gaps were identified between the three internal partners but, despite careful selection, departmental differences with respect to admission policies were significant. For example, the offer rate of one of the internal partners was more than twice that of MSM, but at the same time, its take-up rate of offers was less than half. These types of issues had a considerable impact on the comparability of performance measures.
  • In benchmarking with the three overseas partners, differences in the educational systems and the institutional specifics of the partner universities led to significant variations in the chosen performance measures. For example, two of them enjoyed a conversion rate (registrations/offers) of almost 100 percent. The size of the gap with the UK universities indicates there is some potential for improvement, akin to generic benchmarking, but the amount of time and resources required to examine this further, in particular with respect to the quality of students, exceeded the budget of the research project.
  • Benchmarking with the two direct competitors was the most promising because of similarities in organization structure and market conditions.

DISCUSSION AND ANALYSIS

The following is a summary of the results achieved.

  • Identification of key performance indicators (KPIs): From the qualitative and quantitative analysis, a set of KPIs for the postgraduate admission process has been developed. When monitored over a period of time these can serve as a framework for self-evaluation and subsequent improvement.
  • Pinpointing opportunities for improvement: The KPIs provided the basis for comparison with the benchmarking partners and also with customer expectations. This not only highlighted key improvement areas but also gave confidence that improvement will result in added value for customers. At a recent workshop of the Leonardo da Vinci partners, the assistant registrar at UMIST commented: "As a result of this benchmarking project we now have a better idea of what matters to postgraduate students, rather than what people who were themselves students in a different era think [about] matters, and of why eventually they decide to come to UMIST and apply in the first place" (Beresford 1999, 5).
  • Development of improvement actions: On discovery of a performance gap, best practice was shared and internalized, leading to various suggested improvements. For example, to improve the conversion rate it was decided by UMIST that the offer letter should be replaced by an offer package, including details about the English Language Teaching Centre, providing potential students with additional information to place them in a better position for making decisions. Changes have also been made to the admission system database to make it more flexible.
  • The contribution of enabling practices toward results was recognized and acknowledged: The set of performance indicators, and scope of the comparison, contained not only output-related but also in-process measures, thereby producing a focus on the means to achieve improved results.

The following two key factors have contributed to the success of the benchmarking project.

  1. Top level support: There was strong support from within UMIST (that is, the dean of postgraduate studies and assistant registrar) for the identification of potential improvement areas for postgraduate-related matters, including admissions. Benchmarking was seen as appropriate for this purpose, because it highlighted which issues need to be improved (that is, strategic focus), why (that is, customer focus), and how (that is, operational focus).
  2. Strategic focus: The process chosen is important to the success of UMIST, which was confirmed by the findings of the student questionnaire survey. This ensured that the process and its measurement got appropriate attention from all those required to provide the relevant data.

Lessons Learned

From the application of the first four steps of Camp’s (1989) benchmarking concept to the postgraduate admission process the following lessons have been learned with regard to its applicability to higher education.

  • In general, the methodology of best practice benchmarking is applicable to higher education. The application is time-consuming and, even though the concept is simple, practice is needed to utilize it for maximum success. Completing these first four steps has taken six months. This is similar to the finding of Love et al. (1998) when the concept was applied to the business practices of a utility organization.
  • The project has attracted attention from all levels of UMIST, from the Graduate School Office staff and Tutors to the Dean of UMIST. However, a climate of continuous improvement–one of the success factors of best practice benchmarking–is not yet adequately established in higher education, and this puts constraints on the implementation of findings. The drive to put changes to processes into place tends to depend on the motivation of individuals in a system in which people can have different and often conflicting objectives, a much different picture from that painted by Dale (1999), in which all staff are motivated and aligned in a common direction.
  • A set of indicators was introduced on the basis of which performance could be evaluated and compared. There are, however, other important areas of higher education, such as lecturing and research, where performance cannot be measured as easily as the postgraduate admission process. Therefore, other benchmarking studies focusing on different processes of a university might fail due to the lack of agreement, understanding, and acceptance of the selected KPIs.
  • When performance indicators consist of subperformance indicators, it is necessary to select and use weightings, no matter how subjective. Otherwise, subconsciously, all subperformance indicators tend to be weighted equally. As this weighting has an impact on the outcome, it needs to be agreed at the stage when the indicators are selected (that is, before data gathering and analysis), thereby preventing any potential bias in the weightings chosen.
  • It should not be forgotten that the analysis of the postgraduate admission process was based on data from a single point in time. To identify trends it is important to use data covering more than one academic cycle. Therefore, the results obtained should be treated with caution.
  • In a business situation it is usual to use a benchmarking team for the process under investigation. In the project described in this paper a facilitator approach, in the form of the researcher, was employed. This worked well but was dependent upon the acceptability of the researcher to both the academic and the nonacademic staff. This was aided by the sponsorship and supervision of the project by the head of MSM and a professor who is a specialist in quality management.

With regard to the type of benchmarking the following are the main findings.

  • There is a high diversity of practices within a university, which justifies internal benchmarking as a first source of performance improvement. This facilitated the identification of departments displaying best practices. However, considerable differences in departmental policy, structure, market conditions, and background have tended to limit the transfer and application of these practices. Nevertheless, internal benchmarking was crucial to staff becoming familiar with the methodology of benchmarking, the process under study, and identifying opportunities for improvement.
  • The findings that emerged from competitive benchmarking have been more beneficial than the internal analysis. However, increasing “friendly” competition in the UK higher education market, use of performance measures by the government to allocate resources to universities, and the lack of history in information sharing have the potential to undermine this type of benchmarking.

CONCLUSIONS

With the introduction of national targets as one feature of quality assurance, the terms benchmarks and benchmarking have become increasingly widespread in higher education in recent years, in particular in the United Kingdom. While a university can position and map its courses against the appropriate recognized standards, this will not necessarily result in the desired performance improvements, as the enabling practices behind superior performance remain uncovered. In addition, the focus tends to be restricted merely to course design and delivery, while other activities, such as student admission and staff development, are neglected. An analogy can be drawn with the ISO 9000 series of quality management system standards, which is based on control, audit, and review, in comparison with self-assessment against a recognized excellence model such as the Malcolm Baldrige National Quality Award or the EFQM Model for Excellence, which requires self-diagnosis of strengths and opportunities for improvement.

The concept of best practice benchmarking gives a wider choice of the process to be benchmarked, focuses on identification and sharing of best practices and their underlying enabling factors with selected partners, and, finally, results in implementation of the findings. Best practice benchmarking goes beyond mere standards and measurement against them. Furthermore, as best practice benchmarking focuses on processes it gives the opportunity to extend the search for best practices beyond the higher education sector.


REFERENCES

Ahmed, P. K., and M. Rafiq. 1998. Integrated benchmarking: A holistic examination of select techniques for benchmarking analysis. Benchmarking for Quality Management and Technology 5, no. 3:225-242.

Balm, G. J. 1996. Benchmarking and gap analysis: What is the next milestone? Benchmarking for Quality Management and Technology 3, no. 4:28-33.

Beresford, A. J. 1999. From ignorance to enlightenment. Working paper, UMIST.

Bogan, C. E., and M. J. English. 1994. Benchmarking for best practice: Winning through innovative adaptation. New York: McGraw-Hill.

Camp, R. C. 1989. Benchmarking: The search for industry best practices that lead to superior performance. Milwaukee: ASQC Quality Press.

–––, ed. 1998. Global cases in benchmarking: Best practices from organizations around the world. Milwaukee: ASQ Quality Press.

Cook, S. 1993. Practical benchmarking: A manager’s guide to creating a competitive advantage. London: Kogan Page.

Dale, B. G., ed. 1999. Managing quality, 3d ed. Oxford: Blackwell Publishers.

Engelkemeyer, S. W. 1998. Applying benchmarking in higher education: A review of three case studies. Quality Management Journal 5, no. 4:23-31.

Fram, E. H., and R. C. Camp. 1995. Finding and implementing best practice in higher education. Quality Progress 28, no. 2:69-73.

Hanson, P., and C. Voss. 1995. Benchmarking best practice in European manufacturing sites. Business Process Re-Engineering and Management Journal 1, no. 1:60-74.

Higher Education Funding Council for England. 1998. Research league tables. London: HEFCE.

Leonard, K. J. 1996. Information systems and benchmarking in the credit scoring industry. Benchmarking for Quality Management and Technology 3, no. 1:38-44.

Le Sueur, M., and B. G. Dale. 1997. Benchmarking: A study in the supply and distribution of spare parts in a utility. Benchmarking for Quality Management and Technology 4, no. 3:189-201.

Love, R., H. S. Bunney, M. Smith, and B. G. Dale. 1998. Benchmarking in water supply services: The lessons learnt. Benchmarking for Quality Management and Technology 5, no. 1:59-70.

Morgan, C., and S. Murgatroyd. 1994. TQM in the public sector. Buckingham: Open University Press.

Prasad, S., J. Tata, and R. Thorn. 1996. Benchmarking maquiladora operations relative to those in the USA. International Journal of Quality and Reliability Management 13, no. 9:8-17.

School Curriculum and Assessment Authority. 1997. Target setting and benchmarking in schools. Consultation Paper, Department of Education, London.

Voss, C. A., C. Ahlström, and K. Blackmon. 1997. Benchmarking and operational performance: Some empirical results. Benchmarking for Quality Management and Technology 4, no. 4:273-285.

Weller, L. D. 1996. Benchmarking: A paradigm for change to quality education. The TQM Magazine 8, no. 6:24-29.

Zairi, M. 1995. The integration of benchmarking and BPR: A matter of choice or necessity? Business Process Re-Engineering and Management Journal 1, no. 3:3-9.

Zairi, M., and P. Leonard. 1994. Practical benchmarking: A complete guide. London: Chapman and Hall.

BIOGRAPHIES

Thomas Fiekers holds a degree in industrial engineering from the University of Kaiserslautern, Germany, with part of it read at the State Marine Technical University of Saint Petersburg, Russia, and at the Manchester School of Management, UMIST. Since 1996, he has been involved in the European Union-funded ISOTRAIN, a project dealing with quality improvements in higher education. He has recently completed an appointment for Schlumberger in Paris, in the area of business processes and benchmarking, and has now taken up a position with Andersen Consulting.

Barrie Dale is the United Utilities Professor of Quality Management and head of the Operations Management Group at the Manchester School of Management, UMIST. He is an academician of the International Academy for Quality, editor of the International Journal of Quality and Reliability Management, and a director of the Trafford Park Business Forum.

Dale has been researching the subject of quality and its management since 1981. Dale is the person to contact for discussion of this paper. He may be contacted as follows: University of Manchester Institute of Science and Technology (UMIST), PO Box 88, Manchester M60 1QD, United Kingdom; 0161-200-3424; Fax: 0161-200-8787; E-mail: Barrie.Dale@umist.ac.uk.

Dale Littler is head of, and professor of marketing in, the Manchester School of Management. He specializes in marketing strategy, new product development, and consumer behavior toward innovative offerings. He has led a program of research financed by the Economic and Social Research Council (ESRC) on information and communication technologies, and is currently engaged on research on marketing strategy supported by the Teaching Company Scheme. He is a member of the Academic Senate of the Chartered Institute of Marketing.

Wolfgang Voß is a researcher and consultant on self-assessment and organization development at the Research Institute Technology and Work, University of Kaiserslautern. He is currently involved in implementing quality management concepts in education institutions such as universities and vocational schools.
