Volume 7 • Number 4
Benchmarking the Postgraduate Admission Process
This article reports the main findings of a benchmarking study of the postgraduate admission process of higher education institutions. The project has involved three departments at the University of Manchester Institute of Science and Technology (UMIST), eight specialist master's programs within the Manchester School of Management (MSM), and five other universities in the United Kingdom, Germany, Hong Kong, and Spain. The work was undertaken as part of a European Union Leonardo da Vinci-funded project. The first four steps of the benchmarking process have been completed, namely the selection of the subject and the partners, and the collection and analysis of data. While the process is not yet complete, it is felt that important lessons are worth reporting. These include the applicability and feasibility of benchmarking in higher education, and the suitability of the various types of benchmarking, in particular internal benchmarking, to this environment.
Key words: benchmarking in higher education
by T. Fiekers, Research Institute Technology and Work, University of Kaiserslautern; B. G. Dale, Manchester School of Management, UMIST; D. A. Littler, Manchester School of Management, UMIST; W. Voß, Research Institute Technology and Work, University of Kaiserslautern
The concept of benchmarking was popularized by the seminal work of Camp (1989), based on the experiences of Rank Xerox; it was the first comprehensive documentation of a structured approach to benchmarking. Since the publication of this work many books of a similar nature have been published (for example, Bogan and English 1994; Cook 1993; and Zairi and Leonard 1994). In the main, these have tended to concentrate on the benchmarking process and its main steps, the code of conduct of benchmarking, and the benefits of the process, using examples to pinpoint best practice and potential pitfalls. By and large, however, the texts are repetitive and fail to extend the concepts outlined in Camp's (1989) earlier work. Zairi (1995) points out that there has been a lack of empirical research on benchmarking, and what has been conducted has tended to concentrate on questionnaire surveys of the concept. Apart from a small handful of papers (for example, Hanson and Voss 1995; Le Sueur and Dale 1997; Leonard 1996; Love et al. 1998; Prasad, Tata, and Thorn 1996; Voss, Ahlström, and Blackmon 1997), there is little to sensitize the people involved in carrying out a benchmarking exercise to the issues that need to be faced and the means of resolving some of the main problems encountered.
Since 1996, the School of Management (MSM) at the University of Manchester Institute of Science and Technology (UMIST) has been involved in a European Union Leonardo da Vinci project undertaken jointly with Mondragon Eskola Politeknikoa (MEP), Institut National Polytechnique de Grenoble (INPG), and the University of Kaiserslautern. The broad aims of this project were to consider quality issues in higher education, including the applicability of the ISO 9000 series to higher education, the use of the European Foundation for Quality Management (EFQM) Model for Excellence, and a study of the applicability of benchmarking to key processes in higher education institutions.
The focus of this paper is the examination of benchmarking, which originated in manufacturing; after the publication of Camp's (1989) original work the concept was employed in a variety of different sectors. Camp's (1998) recent book outlines examples from not only manufacturing and services but also the nonprofit sector, including government and higher education. As Engelkemeyer (1998) points out, however, benchmarking in higher education is still relatively uncommon and there are only a few published examples of projects. She also notes that benchmarking can be more easily applied to nonacademic activities such as administrative processes (for example, payroll and billing) and surrounding activities like catering and computing services than to teaching and research. However, as Fram and Camp (1995) point out: "Using benchmarking to only improve the peripherals in higher education would be tantamount to having a company use benchmarking to only improve administrative processes. Benchmarking has to extend to the core business unit."
As part of the Leonardo da Vinci project, MSM launched a pilot benchmarking project of its postgraduate admission process. MSM is the largest of UMIST's 20 departments, with currently some 800 undergraduates and 300 postgraduates. It has been a pioneer in management education since 1918. The school is one of two UK management and business schools to have received the highest rankings for research (that is, of international significance) and teaching (excellent). UMIST, as a university, is ranked sixth overall in the United Kingdom, and has some 5000 undergraduates and 1500 postgraduates.
The first four steps of Camp's (1989) benchmarking process (the selection of the subject and the partners, and the collection and analysis of data) have been completed. The results achieved and the experiences gained are examined in this paper. The paper, based on this experience, also comments on the applicability of benchmarking to higher education. The outcomes and the process are described under the broad headings of the four steps, and this is followed by a discussion and an analysis of the main findings.
BENCHMARKING OF THE POSTGRADUATE ADMISSION PROCESS
Step 1: Selection of the Benchmarking Subject
The first step of the approach advocated by Camp (1989) is the selection of the subject to be benchmarked, which relates to the problem being addressed. Usually, a process that is causing a bottleneck or giving concern is chosen. The following are the main reasons why the UMIST principal investigators decided to focus on the postgraduate admission process of MSM.
In outlining these reasons it should be noted that just because a process can be benchmarked does not necessarily mean it should be benchmarked.
Although the admission process represents only a small part of the educational process, it is a complex activity. It begins with the marketing of individual specialist master's and research programs, followed by the processing of inquiries and applications, and finally includes administrative matters such as formal registration, fee payment, and so on. Admissions to the UMIST programs are undertaken on a rolling basis, and an application can be considered up to the start date of the new postgraduate academic year.
To keep the benchmarking project as simple as possible, and to facilitate the testing of its applicability to higher education, it was decided to concentrate on application handling, covering all activities from the arrival of an application to the point at which the student registers. Furthermore, the focus was restricted to taught master's degrees (that is, a one-year program involving an approximately six-and-a-half month taught-and-examination period followed by a five-and-a-half month dissertation period). For example, in MSM there are eight specialist master's programs covering accounting and finance, business economics, international business, marketing, operations management, organizational psychology, personnel management, and technology management. UMIST currently has more than 50 different master's programs.
According to writers such as Zairi and Leonard (1994) and Love et al. (1998), understanding the business process of the subject under study is one of the major prerequisites for a successful benchmarking study. The first step was to produce a flowchart of the postgraduate admission process of UMIST and MSM. Figure 1 is a simplified version of the complete chart, highlighting the main stages of the process. This shows that, in the first place, each application is seen by the Graduate School (GS) Tutors, who determine whether or not the minimum educational requirements are fulfilled by the applicant. The main criteria are the first degree result, provided by the transcripts of the previous university, and/or appropriate work experience. Following this, the application is passed to the respective department, where it is checked by the Program Tutor for special program requirements. In MSM, there is one Program Tutor for each specialist master's program. The Program Tutors may require additional qualifications, such as English language tests for overseas applicants and the Graduate Management Admission Test (GMAT). Finally, after the decision of the Program Tutor, the postgraduate office within MSM takes over the handling of the application. That office informs the student about the decision, and in the case of those offered a place, the postgraduate office is responsible for the communication process until the registration date.
Flowcharting the admission process was crucial, helping to highlight key activities, their interfaces, and areas of responsibility. It also enabled the researchers to identify process-based measures that provide the basis for performance indicators. From a wide range of process measures the following three were selected.
Figure 2 relates these three process-based measures to the postgraduate admission process. A1-A8 represent the time-based measures; B1-B17 represent the percentage of application measures; and C1-C11 represent the quality measures.
From these three process-based measures five performance indicators were developed. These indicators are viewed as critical success factors of the postgraduate admission process, giving insight into its efficiency and effectiveness. These five performance indicators, including their subperformance indicators, are as follows:
The conversion rate and quality of intake, which are interrelated, are outcome-related indicators, reflecting the performance of the admission process. These two measures need to be considered together since it can be assumed that, at least in the short term, the conversion rate can be increased at the expense of the quality of the intake and vice versa. The remaining three performance indicators are in-process measures, and these explain the means and process by which the results are achieved.
Step 2: Selection of Benchmarking Partners
Having agreed on the subject to be benchmarked, the next step, according to Camp (1989), is selection of the benchmarking partners. There are a number of different types of benchmarking: internal, competitive, functional, and generic, reflecting the nature of the partners.
Internal benchmarking is the easiest form of benchmarking to conduct, and involves benchmarking between businesses or functions within the same group of companies. Cases of internal benchmarking are rarely documented in the literature. In a university environment this means between different departments of the same university (for example, benchmarking the postgraduate admission process between MSM and the Department of Mechanical Engineering at UMIST) or between functions within one department (for example, benchmarking the admission process of MSM's specialist master's programs, such as between international business and marketing).
Competitive benchmarking is against direct competitors. In a university context this means benchmarking between different universities or their departments. In some situations benchmarking between departments of the same university could be regarded in a similar context to competitive benchmarking. A case in point is when departments of the same university compete for centrally distributed money or resources depending on their performance with respect to research and teaching or if they compete for the same students. The extent to which benchmarking with overseas universities can be considered as competitive is subject to debate. Totally different circumstances regarding educational policy, degree structure, competitive nature of the environment, and so on have to be taken into account, and often processes and practices are so different that benchmarking becomes more akin to functional/generic. Almost all higher education benchmarking exercises documented in the literature (for example, Camp 1998 and Weller 1996) apply this form of benchmarking.
Functional/generic benchmarking is a comparison with the best-in-class in different industries, often considered to be world-class in their own right. Functional benchmarking is comparisons to similar functions within the same broad band of business, while generic benchmarking looks at the broader similarities of processes and functions, usually in disparate operations. One typical example of this, with respect to the postgraduate admission process, is the means by which large international consulting firms deal with a considerable number of applications from undergraduates for their trainee consultant positions.
The advantages of starting with internal benchmarking are that internal partners are not only relatively easy to approach, but once they have agreed to cooperate the structure and type of the data are more comparable than that between different universities, with regional differences not distorting the comparison. In addition, differences in image between universities, which, especially in the area of admissions, have to be taken into account, are small, even though they can exist between different departments of the same university.
Despite these advantages, the shortcomings of focusing only on internal benchmarking are equally obvious. It is more likely that many small improvements will result, rather than any major breakthroughs, as the way of doing things within the same university is likely to be similar. In addition, departmental differences, both internal (for example, structure, size, and so on) and external (for example, subject popularity, student profile, and habits in terms of the number of applications placed, take-up rate of offers, and so on), are an issue that needs to be taken into consideration.
To assess the relevance of the benchmarking concept it was decided to focus, in the first place, on internal benchmarking. It was impossible to include all UMIST departments, and, therefore, some degree of preselection was necessary. A matrix was developed with the annual number of applications for entry to a master's program on one axis and the percentage of actual registrations resulting from these applications on the other (Figure 3). This helped to pinpoint departments that were similar to MSM (that is, Department 13) with respect to external factors such as subject popularity as well as student profile and habits.
In Figure 3 both axes have been divided into three categories: small, medium, and high. The data indicate that there is a negative relationship between the two variables in the case of departments that have a higher application rate. In general, they appear to be more selective in their acceptance of candidates. It was felt that Department 6 was the best benchmarking partner for MSM since it had a high number of applications like MSM but with a higher percentage of successful registrations.
A second preselection criterion was the average time it took a department to offer a place or to reject an application. Figure 4 shows the results of the first internal screening. As can be seen, Department 5 exhibits the best performance. This department also showed similarities to Department 6 and MSM regarding the relationship between the number of applications and percentage of registration, ranking medium on both axes.
From this analysis, Departments 5 and 6 were subsequently approached to cooperate in the project.
The second stage of the project was extended to include external benchmarking partners. To avoid differences in terms of subject popularity, student profile, and student habits, the focus was on business and management schools. There is, however, considerable competition among schools, and partner selection is not without competitive considerations.
The benchmarking literature (Camp 1989; Zairi and Leonard 1994) stresses that to get maximum benefit from a benchmarking project, the partner should be best-in-class. However, as there is little information available on who and what is best practice regarding the subject of postgraduate admission, the first step was to obtain some performance data. For this reason, a questionnaire was prepared focusing on the postgraduate admission process. It included questions such as: the parts of the postgraduate (PG) admission process that are centralized; the composition of applicants' degree results; the average number of days after arrival of an application until a reject decision is taken and the candidate informed; the percent of applicants who withdraw their application; and the percent of applicants who register.
The questionnaire contained data not usually exchanged between universities, and it was felt that sending it to universities on a random basis would not generate a high response rate. Love et al. (1998) found that completion of the questionnaire by the organization wishing to engage potential partners in a benchmarking project was a good means of eliciting effective responses and thereby pinpointing cases of best practice for subsequent follow-up visits. It was thought, however, that this approach would not work, as a potential partner university may take the data as provided in the completed questionnaire and not respond. Instead, a more focused approach was chosen using personal contacts with potential partners to arrange an appointment with staff responsible for the admission process. To facilitate this, a presentation was prepared giving an insight into MSM's postgraduate admission activities and an indication of the results achieved. In this way it was hoped trust would be built up, a vital prerequisite for successful future cooperation in a benchmarking project.
Initially, four UK universities were approached to take part in the project and the presentation given to them. Two old universities, which were from the top 10 UK research universities (Higher Education Funding Council for England 1998), were approached. These are traditional universities whose activities are balanced in terms of research and teaching. Two new universities were also approached. These are former polytechnics whose activities are primarily focused on teaching. The two old universities expressed their interest in joining the study for the following reasons.
The reasons the two new universities did not wish to participate in the benchmarking project were mainly because of differences in the admission process and the student profile (for example, the number of part-time students enrolled in masters programs).
In addition, three universities from outside the United Kingdom (Germany, Hong Kong, and Spain) participated in the study. The same presentation, as per the UK universities, was given to the contact person at the respective university and further communication was undertaken by e-mail and searching of sources such as Web pages.
Step 3: Data Collection
According to Camp (1989), the final step of the planning phase of the benchmarking process is the planning and execution of data collection. Higher education in the United Kingdom is in the public sector and as a consequence a considerable amount of data is kept for statistical reasons and performance comparisons. However, generating primary data, particularly comparative quantitative performance data, is difficult. Morgan and Murgatroyd (1994) point out that academics prefer qualitative to quantitative data. In addition, a considerable amount of data in higher education is of a personal nature. This applies, in particular, to admission data, which are covered by the Data Protection Act. Nevertheless, the two internal partners, the two from the United Kingdom, and the three overseas partners agreed to share their admissions data.
Following the initial presentation, the partners were aware of the kind of data that would be needed and how to approach their collection. To support the benchmarking activity, the researchers visited all the internal and UK partners. In the first place, the focus was on facilitating the data-gathering activity and on flowcharting the respective admission process to ensure data comparability. In addition, further common activities were agreed, such as developing a common questionnaire to obtain students' perceptions of the admission system. This included questions such as: in general, how many days after receiving the application would you normally expect a decision to be made by a university? When were you asked to reply to your offer? When did you reply? When did you finally decide to come to the university?
The gathering of the data was not as difficult as initially envisaged. Most of the data were available from the admission system database. The main problem was related to the process-based measures. This was due to the considerable variation of admission practices with respect to key stages and activities undertaken, along with some degree of inconsistency in the terms used. However, flowcharting the process helped to reduce this problem and helped to identify the appropriate measures.
In contrast to the approach employed in the United Kingdom, data exchange with the overseas partners was restricted to the questionnaire, with only one site visit to the German partner carried out. Two main differences regarding the postgraduate admission process were identified. Firstly, few programs comparable to the UK masters type programs were found. Secondly, admission practices were found to be very different. For example, in German universities, there is a date by which all applications have to be handed to the university at which the applicant wishes to study and only after that date does the processing of applications begin. In this situation speed of response comparisons are meaningless.
Two main problems with respect to data gathering were encountered. First, most admission data came within the 1998 Data Protection Act. This act limits the use of personal data exclusively to a prestated purpose, and only if the person concerned (that is, the applicant) agrees. Fortunately, the benchmarking project could be registered under the less-restrictive 1984 version of the act. The second problem relates to the comparability of the process-based measures due to the variation of admission practices.
Step 4: Gap Analysis
Camp (1989) describes the first step of the analysis phase as the identification of the performance gap. This is the difference that exists between the current performance level and the benchmark with respect to the critical success factors of the process. Percentages are usually used to express the size of the gap in specific time horizons.
An example of the gap analysis for the speed of response for benchmarking partners A and B is shown in Figure 5. This measure comprises the time to reject and time to offer.
It can be seen that benchmarking partner A is faster than partner B regarding both the time to reject an applicant (that is, 18 compared to 46 days) and the time taken to offer a place on a program (24 compared to 33 days). Meaningful conclusions, however, can only be drawn when a target performance measure is introduced. The target, according to Balm (1996), should be the equivalent of total customer satisfaction and, therefore, allows not only a comparison between the partners themselves but also the relationship of all performance levels to the target. It also challenges the assumption that the benchmark is always the best performance level found among the partners. In most cases, this target is easy to derive (for example, for the performance indicator decision rate, this would be 100 percent of the applications). In cases where the target is not so clear-cut, as in the case of speed of response, Balm (1996) suggests gathering data on customer needs and expectations by means of a questionnaire or focus group and using these results as a guideline for target setting.
This approach was followed in the study. A questionnaire was sent to a sample of applicants by UMIST and the two UK benchmarking partners, on issues surrounding decision speed and response along with a range of factors including information provided, communication with the university and admitting department, welcome package, and reputation of the university and department. Analysis of data received from 72 applicants showed that a target of 14 days to decide an application and to inform the student of the decision would meet applicants' expectations. Against this target, partners A and B achieve 78 percent and 30 percent, respectively, for time to reject, and 58 percent and 42 percent, respectively, for time to offer.
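The performance-against-target figures follow directly from the reported cycle times. As an illustration only (the partner labels and measure names are shorthand for this sketch, not part of the study's data systems), the calculation can be expressed as:

```python
# Performance against target, computed as target/actual so that 100 percent
# means the 14-day target is met or beaten. Cycle times (in days) are those
# reported in the gap analysis for partners A and B.
TARGET_DAYS = 14

def performance_against_target(actual_days, target_days=TARGET_DAYS):
    """Percentage achievement against target, capped at 100 percent."""
    return min(100.0, 100.0 * target_days / actual_days)

cycle_times = {
    "A": {"time_to_reject": 18, "time_to_offer": 24},
    "B": {"time_to_reject": 46, "time_to_offer": 33},
}

for partner, measures in cycle_times.items():
    for measure, days in measures.items():
        print(f"{partner} {measure}: {performance_against_target(days):.0f}%")
# A time_to_reject: 78%, A time_to_offer: 58%
# B time_to_reject: 30%, B time_to_offer: 42%
```

This reproduces the 78/30 percent and 58/42 percent figures quoted above.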
The final step is to summarize the relative performance levels of the various subindicators into the overall level of the respective indicator. For this it is necessary to use a weighting, which reflects the respective importance of the subindicators. The weightings, which are pre-set, should be decided at the same time as the performance indicators, thereby preventing any manipulation of the data. For example, in the case of speed of response, the time to offer was weighted three times as important as the time to reject. This weighting is obviously subjective but was based on what was perceived to be important by applicants to master's programs. With this 3-to-1 weighting, partner A achieves a percentage achievement against target of 63 percent while partner B's achievement is 39 percent, identifying potential improvement opportunities in both cases. Similar calculations revealed the gaps for the remaining four indicators: conversion rate, predictability and capacity planning ability, decision rate, and quality of intake.
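The 3-to-1 roll-up is a simple weighted average. A minimal sketch, using the weights and percentage scores quoted above (the dictionary names are illustrative only):

```python
# Roll up subindicator scores into the overall "speed of response" indicator,
# with time to offer weighted three times as heavily as time to reject.
WEIGHTS = {"time_to_offer": 3, "time_to_reject": 1}

def weighted_score(sub_scores, weights=WEIGHTS):
    """Weighted average of subindicator percentages."""
    total_weight = sum(weights.values())
    return sum(weights[k] * sub_scores[k] for k in weights) / total_weight

# Percentage achievement against the 14-day target, from the gap analysis.
scores = {
    "A": {"time_to_reject": 78, "time_to_offer": 58},
    "B": {"time_to_reject": 30, "time_to_offer": 42},
}

for partner, sub in scores.items():
    print(f"{partner}: {weighted_score(sub):.0f}%")
# A: 63%
# B: 39%
```

Fixing the weights before any data are seen, as the text recommends, means the aggregation cannot be tuned afterward to flatter a particular partner.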
Having calculated the gap, authors such as Zairi (1995) stress the importance of visualization, in particular when more than one indicator needs to be considered to capture the level of complexity; Ahmed and Rafiq (1998) recommend the use of a spider-web diagram to highlight multiple gaps. The spider-web diagram also has the advantage of visualizing performance not only in comparison with best-in-class but also against the expanded benchmark. Figure 6 shows the spider-web diagram of the postgraduate admission project for the three UMIST departments, and Figure 7 reflects a comparison for benchmarking partners A and B.
Figure 7 shows that partner B has a lower conversion rate and its admission process is less predictable, but it is quicker in making decisions. This indicates that partner B has a competitive disadvantage with respect to conversion rate and predictability and capacity planning ability, but has an advantage with respect to speed of response. These findings represented the starting point for the investigation of the drivers that account for these superior practices.
The following are the prime reasons for the superior performance of University A with respect to conversion rate.
The higher predictability and capacity planning ability is achieved by the following factors.
The superior performance of University B regarding speed of response was facilitated by the following factors.
The completion of the gap analysis, including its visualization, represents the current state of the project. At this stage in the project it was agreed that the primary focus would be on benchmarking with the two UK partners. The main reasons for this were as follows:
DISCUSSION AND ANALYSIS
The following is a summary of the results achieved.
The following two key factors have contributed to the success of the benchmarking project.
From the application of the first four steps of Camp's (1989) benchmarking concept to the postgraduate admission process the following lessons have been learned with regard to its applicability to higher education.
With regard to the type of benchmarking the following are the main findings.
With the introduction of national targets as one feature of quality assurance, the terms benchmarks and benchmarking have become increasingly widespread in higher education in recent years, in particular in the United Kingdom. While a university can position and map its courses against the appropriate recognized standards, this will not necessarily result in the desired performance improvements, as the enabling practices for superior performance remain uncovered. In addition, the focus tends to be restricted merely to course design and delivery, while other activities, such as student admission and staff development, are neglected. An analogy can be drawn with the ISO 9000 series of quality management system standards, which are based on control, audit, and review, in comparison to self-assessment against a recognized excellence model, such as the Malcolm Baldrige National Quality Award or the European Foundation for Quality Management model, which requires self-diagnosis of strengths and opportunities for improvement.
The concept of best practice benchmarking gives a wider choice of the process to be benchmarked, focuses on identification and sharing of best practices and their underlying enabling factors with selected partners, and, finally, results in implementation of the findings. Best practice benchmarking goes beyond mere standards and measurement against them. Furthermore, as best practice benchmarking focuses on processes it gives the opportunity to extend the search for best practices beyond the higher education sector.
Ahmed, P. K., and M. Rafiq. 1998. Integrated benchmarking: A holistic examination of select techniques for benchmarking analysis. Benchmarking for Quality Management and Technology 5, no. 3:225-242.
Balm, G. J. 1996. Benchmarking and gap analysis: What is the next milestone? Benchmarking for Quality Management and Technology 3, no. 4:28-33.
Beresford, A. J. 1999. From ignorance to enlightenment. Working paper, UMIST.
Bogan, C. E., and M. J. English. 1994. Benchmarking for best practice: Winning through innovative adaptation. New York: McGraw-Hill.
Camp, R. C. 1989. Benchmarking: The search for industry best practices that lead to superior performance. Milwaukee: ASQC Quality Press.
Camp, R. C., ed. 1998. Global cases in benchmarking: Best practices from organizations around the world. Milwaukee: ASQ Quality Press.
Cook, S. 1993. Practical benchmarking: A manager's guide to creating a competitive advantage. London: Kogan Page.
Dale, B. G., ed. 1999. Managing quality, 3d ed. Oxford: Blackwell Publishers.
Engelkemeyer, S. W. 1998. Applying benchmarking in higher education: A review of three case studies. Quality Management Journal 5, no. 4:23-31.
Fram, E. H., and R. C. Camp. 1995. Finding and implementing best practice in higher education. Quality Progress 28, no. 2:69-73.
Hanson, P., and C. Voss. 1995. Benchmarking best practice in European manufacturing sites. Business Process Re-Engineering and Management Journal 1, no. 1:60-74.
Higher Education Funding Council for England. 1998. Research league tables. London: HEFCE.
Leonard, K. J. 1996. Information systems and benchmarking in the credit scoring industry. Benchmarking for Quality Management and Technology 3, no. 1:38-44.
Le Sueur, M., and B. G. Dale. 1997. Benchmarking: A study in the supply and distribution of spare parts in a utility. Benchmarking for Quality Management and Technology 4, no. 3:189-201.
Love, R., H. S. Bunney, M. Smith, and B. G. Dale. 1998. Benchmarking in water supply services: The lessons learnt. Benchmarking for Quality Management and Technology 5, no. 1:59-70.
Morgan, C., and S. Murgatroyd. 1994. TQM in the public sector. Buckingham: Open University Press.
Prasad, S., J. Tata, and R. Thorn. 1996. Benchmarking maquiladora operations relative to those in the USA. International Journal of Quality and Reliability Management 13, no. 9:8-17.
School Curriculum and Assessment Authority. 1997. Target setting and benchmarking in schools. Consultation Paper, Department of Education, London.
Voss, C., C. Ahlström, and K. Blackmon. 1997. Benchmarking and operational performance: Some empirical results. Benchmarking for Quality Management and Technology 4, no. 4:273-285.
Weller, L. D. 1996. Benchmarking: A paradigm for change to quality education. The TQM Magazine 8, no. 6:24-29.
Zairi, M. 1995. The integration of benchmarking and BPR: A matter of choice or necessity? Business Process Re-Engineering and Management Journal 1, no. 3:3-9.
Zairi, M., and P. Leonard. 1994. Practical benchmarking: A complete guide. London: Chapman and Hall.
Thomas Fiekers holds a degree in industrial engineering from the University of Kaiserslautern, Germany, with part of it read at the State Marine Technical University of Saint Petersburg, Russia, and at the Manchester School of Management, UMIST. Since 1996, he has been involved in the European Union-funded ISOTRAIN, a project dealing with quality improvements in higher education. He has recently completed an appointment for Schlumberger in Paris, in the area of business processes and benchmarking, and has now taken up a position with Andersen Consulting.
Barrie Dale is the United Utilities Professor of Quality Management and head of the Operations Management Group at the Manchester School of Management, UMIST. He is an academician of the International Academy for Quality, editor of the International Journal of Quality and Reliability Management, and a director of the Trafford Park Business Forum.
Dale has been researching the subject of quality and its management since 1981. Dale is the person to contact for discussion of this paper. He may be contacted as follows: University of Manchester Institute of Science and Technology (UMIST), PO Box 88, Manchester M60 1QD, United Kingdom; 0161-200-3424; Fax: 0161-200-8787; E-mail: Barrie.Dale@umist.ac.uk.
Dale Littler is head of, and professor of marketing in, the Manchester School of Management. He specializes in marketing strategy, new product development, and consumer behavior toward innovative offerings. He has led a program of research financed by the Economic and Social Research Council (ESRC) on information and communication technologies, and is currently engaged on research on marketing strategy supported by the Teaching Company Scheme. He is a member of the Academic Senate of the Chartered Institute of Marketing.
Wolfgang Voß is a researcher and consultant on self-assessment and organization development at the Research Institute Technology and Work, University of Kaiserslautern. He is currently involved in implementing quality management concepts in education institutions such as universities and vocational schools.