Empirically Testing Some Main User-Related Factors for Systems Development Quality

Tor Guimaraes, Tennessee Technological University, and D. Sandy Staples and James D. McKeen, Queen’s University

The importance of user-related factors to system success has long been recognized by various researchers. This study tests the importance of these variables as determinants of system quality, bringing together user-related variables (degree of user participation, user expertise, user/developer communication, user training, user influence, and user conflict) previously studied separately by different authors into a more cohesive model. Data from 228 systems were used to test the proposed relationships between the independent variables and system quality. The results confirm the importance of user participation, user training, and user expertise as significant variables for system quality. User/developer communication, user influence, and user conflict are found to possibly have only an indirect effect on system quality.

Key words: information systems, quality measurement, system quality, user satisfaction

INTRODUCTION

As business dependence on software systems increases, so does the need to ensure that these systems perform according to specifications and/or user needs and wants. Despite continuous efforts to improve the software development process, controlling software quality remains difficult in today’s software development environment. Fok, Fok, and Hartman (2001) found that total quality management (TQM) programs can be helpful in information system quality improvement. However, a study by Pearson, McCahon, and Hightower (1995) found that it normally takes three to five years for a quality program to yield significant benefits in customer satisfaction and the quality of products and services. Meanwhile, a study by Jones (1986) found the costs of defect removal to be among the top expenses in software development projects. Furthermore, the scarcity and limitations of published empirical studies on software quality have made it difficult for project managers to effectively apply available software metrics and strategies in management and quality control.

Many of the present difficulties come from the relatively complex nature of software quality management. Humphrey (1989) classifies the measurement of software quality into five general areas: development, product, acceptance, usage, and repair. These areas are to be measured in terms of objectivity, timeliness, availability, representativeness, and controllability by developers.

The importance of system quality cannot be overestimated given the enormous amount of company resources spent on information systems and the degree of company dependence on the increasing collection of system applications. System quality should be construed as a primary surrogate of system success. Some important research questions are: Will system quality be affected by some of the same variables found important to system success? How important are user participation and other user characteristics for system quality?

The importance of user participation in systems development as an ingredient for system success has been studied widely (Hwang and Thorn 1999; Mahmood et al. 2000; Robey 1994; Barki and Hartwick 1994a and 1994b). There are many published reviews of this literature (for example, Ives and Olson 1984; Pettingell, Marshall, and Remington 1989; McKeen, Guimaraes, and Wetherbe 1994; and Cavaye 1995). Additionally, there have been a few studies of contextual factors, such as the complexity of the business problem being supported by the system, the complexity of the system being developed, and user training, that affect these interrelationships (for example, Lin and Shao 2000; McKeen, Guimaraes, and Wetherbe 1994).

One may expect that finding evidence to corroborate the essential role that users play during development should be simple. Surprisingly, this is not the case. While the majority of research evidence finds user participation/involvement correlated with various measures of system success (for example, Mahmood et al. 2000; Hwang and Thorn 1999), the literature has often presented conflicting results. Some studies have shown user participation to be positively correlated with system success, others have found it negatively correlated, and still others have found no significant correlation (see Brodbeck 2001). As mentioned previously, other user characteristics beyond mere participation in the system development process have been found to be important factors by various authors. The main objective of this study is to focus on user-related factors that have been found significant for system success, to test their validity as determinants of system quality, and to propose and test an expanded model in this important area.

In the next section, the authors define the primary constructs studied here (system quality, user participation, user expertise, user/developer communication, user training, user influence, and user conflict) (see Figure 1). They explain the nature of each variable and develop testable hypotheses. Following that, the authors explain the methodology used and present the results of their tests. Finally, they discuss the implications of the results for managers and researchers.

THE THEORETICAL BACKGROUND AND HYPOTHESES TESTED
The Dependent Variable

System quality From an engineering perspective, the quality of a product or service is commonly measured in terms of its fitness for intended use, that is, it must be adequate for the application the customer has in mind (Dilworth 1988). According to the American National Standards Institute (ANSI), quality “is the totality of features and characteristics of a product or service that bears on its ability to satisfy given needs” (ANSI/ASQC 1978). Quality control activities are undertaken with the objective of designing, developing, and tailoring a product to satisfy users’ requirements (Evans and Lindsay 1980). Enterprises that have attained high levels of quality state that the ultimate yardstick of quality is attaining maximal satisfaction of customers’ needs and expectations (CIO 1991). Thus, the importance of assessing quality as something perceived by the end user of the product or service is widely recognized in industry.

Similarly, measures of user information satisfaction (UIS) with computerized systems have been widely used as measures of system quality (Guimaraes, Igbaria, and Lu 1992; Yoon, Guimaraes, and O’Neal 1995). UIS is defined as the extent to which users believe the information system available to them meets their information requirements. The summary results obtained from the UIS instrument provide a subjective assessment of system quality: it captures users’ perception of the information services provided rather than a direct assessment of the technical quality or functional capabilities of the system. The construct is widely used (Dilworth 1988), and it has been rigorously tested and validated by many researchers (Bailey and Pearson 1983; Baroudi and Orlikowski 1988; Gallagher 1974; Ives, Olson, and Baroudi 1983; Jenkins and Ricketts 1986; Larcker and Lessig 1980). Using Likert scales, it collects user perceptions about the system, such as the accuracy of information produced, the timeliness of reports, and the attitude of support staff.

Gatian (1994) tested the validity of using user satisfaction as a surrogate measure of system effectiveness and confirmed its construct validity. Following the rationale presented previously, the authors chose user satisfaction with the output of the system as the measure of system quality, the dependent variable in this study. The specific items included in the measures for this and the other constructs in this study are presented later in the variable measurement section and are listed in Appendix A.

The Independent Variables

User participation This refers to the extent to which non-information system members of an organization are engaged in activities related to systems development (Robey, Farrow, and Franz 1989). According to Barki and Hartwick (1994a), participation can therefore be measured by “assessing the specific assignments, activities, and behaviors that users or their representatives perform during the systems development process.” Using meta-analytical techniques, Hwang and Thorn (1999) reviewed the information system literature and concluded that user participation has a positive correlation with system success as measured by system quality, use, and user satisfaction. Thus, the authors propose the following hypothesis: H1: User participation is directly related to system quality.

User expertise User expertise is a user’s acquired experience and skill level with regard to computer usage and development (Igbaria, Guimaraes, and Davis 1995). Not all users are equal in their ability to participate meaningfully within the system development process. It seems intuitive that their level of expertise in the development of systems would be important. User expertise is gained through experience on previous development efforts and through training in preparation for the tasks they are required to perform. Experienced users are expected to perform to higher standards given their facility with the “tools of the trade” (for example, methodologies, notation, processes, language, tools, acronyms, documents, deliverables, and pro-forma analysis). The authors expect this facility (that is, expertise) to have a positive effect on the nature of their participation, its impact on system quality, as well as the formation of beliefs. That is, user expertise will have an impact on the behavioral aspect, as well as the psychological aspect, of system development. Users with high expertise are able to participate more efficiently and effectively during the development process and, through this participation, are able to form more accurate expectations about the functioning of the resultant system (and its impact on their working lives) than users with less expertise. For these reasons, the authors expect that the relationship between user participation and system quality will be stronger when user expertise is higher.

Previous research has established how user expertise raises expectations and performance levels within the systems development process. Saleem (1996) found that “users who perceive themselves as functional experts are unlikely to accept a system unless they exerted a substantive influence on its design.” This result was found to hold in both experimental and field research. It is based on the belief that the participation of expert users in system design should result in a better quality system through integration of employee expertise, better understanding of users’ information requirements, superior evaluation of the system, and more accurate formation of expectations regarding the new system and its impact on the organization. Thus, the authors propose the following: H2: User expertise is directly related to system quality.

User-developer communication User-developer communication indicates the quality of the communication that exists between the systems designers and the user participants (Monge et al. 1983; Guinan 1988). Communication plays a key facilitating role within the process of application system development. According to McKeen, Guimaraes, and Wetherbe (1994),

“What facilitates productive, collaborative effort in the conduct of systems development is effective communication…due to the necessity of users to convey their understanding and insight of business practice accurately and completely to developers who, in turn, must receive this information and translate it into a working computer system. Accordingly, effective communication works to the benefit of both parties.”

It is through articulation, conveyance, reception, and feedback that user/system requirements gain currency and have effect. Communication, to be effective, must flow both ways—from sender to receiver and vice versa. With effective user-developer communication, participation will be more meaningful. Users’ inputs will be heard and understood by developers and users will be able to understand technical tradeoffs as described by developers. As a result, effective communication will provide clarity. Beliefs will be based on a more comprehensive understanding of the system deliverables and the system itself will be implemented as articulated. In situations where effective communication is lacking, the benefit of user participation is lessened—users fail to convey their needs for (and understanding of) the system under development accurately and developers fail to seek, explain, and translate user needs into system requirements effectively. As a result, ineffective communication weakens the relationship between user participation and system quality. Conversely, the authors argue that the relationship between user participation and system quality is stronger when user-developer communication is of high quality. Empirical research bears this out. In a study of 151 application systems, McKeen, Guimaraes, and Wetherbe (1994) found that user-developer communication moderated the relationship between user participation and user satisfaction and had a direct impact on user satisfaction. They found that, in situations where there was effective user-developer communication, the relationship between user participation and user satisfaction was stronger than in situations where communication was less effective.

The quality of communication has a psychological impact on systems development as well. With ineffective communication, users convey/form ideas, impressions, and expectations of the end system based on incomplete (or inaccurate) information due to misunderstandings between themselves and the design team. Although the authors are not able to cite empirical evidence to support this assertion, they expect that the relationship between user participation and user involvement will be stronger where there is effective communication and weaker where there is not. In sum, they propose: H3: User-developer communication is directly related to system quality.

User training The importance of user training for system success has been widely recognized (Nelson and Cheney 1987; Santhanam, Guimaraes, and George 2000; Igbaria et al. 1995; Yoon, Guimaraes, and O’Neal 1995). Training is important to give users a general background in computer technology and the systems development process, and to help them use the specific system under development effectively. Based on this, the authors propose: H4: User training is directly related to system quality.

User influence Robey, Farrow, and Franz (1989) define user influence as the extent to which members of an organization affect decisions related to the final design of an information system. Furthermore, they argue that it is through participation that users exercise this influence. McKeen, Guimaraes, and Wetherbe (1994) concur and claim, “without participation, there can be no influence.” Saleem (1996) outlines the role of user influence within system development by differentiating it from user participation as follows:

“Participation varies in degree, that is, in the extent of user influence on the system design…this variation may be conceived as a continuum. On the low end of this continuum, user input is not solicited or is ignored; and, on the high end, user input forms the basis of system requirements…. Thus, participation and influence are not synonymous; a participant user may or may not have any influence on the system development.”

With high levels of influence, users become active decision makers within the system development process. Through the exercise of their responsibilities, these instrumental players are able to shape the resultant system to function in ways that best advance their vision of automation. As compared to users with low levels of influence, these users participate (that is, the behavioral dimension) much more effectively and form beliefs about the system (that is, the psychological aspect) with greater acumen based solely on their ability to affect the end product of development. Thus, the authors expect the relationship between user participation and user satisfaction (the behavioral impact) to be stronger where user influence is high and weaker where it is not.

Empirical research has demonstrated the importance of user influence in systems development. Hunton and Beeler (1997) found that participation by mandatory users was significantly related to user performance leading them to conclude, “participation by mandatory users may be ineffective, particularly if the users do not gain a sense of overall responsibility (that is, control).” Barki and Hartwick (1994b) identified three components of user participation—overall responsibility, user-information system relationship, and hands-on activity—but found that overall responsibility was the key dimension of user participation. Interestingly, overall responsibility (which refers to user activities and assignments reflecting overall leadership or accountability for the system development project) is closely related to the concept of user influence.

Doll and Torkzadeh (1989) argued the importance of user influence because of the likelihood that “without adequate influence to change things and affect results, users are likely to see their participation as a waste of time or, worse still, as an act of social manipulation.” By differentiating user participation and user influence, it is possible to understand how user participation is most useful when balanced appropriately with user influence. Such a balance gives rise to “meaningful” participation (Barki and Hartwick 1994b). Imbalanced situations (that is, high participation accompanied by low influence or low participation accompanied by high influence) would result in “hollow” participation (in the first instance) and “coercive” participation (in the second instance). According to Saleem (1996), users caught in the “hollow” participation role may feel manipulated, while those in the “coercive” participation role would exert undue influence over the system development without participating fully.

Closely related to influence/control and the preceding argument is the concept of “voice.” Hunton and Price (1997) differentiate participation by voice (the probabilistic control over the decision-making process) from participation by choice (the deterministic control because the degree to which choice impacts the decision outcome is known in advance). In another work, Hunton and Beeler (1997) articulate instrumental voice as the opportunity for users to express their opinions, preferences, and concerns to decision makers, thus providing users with a sense of control during the development process since the expression of instrumental voice is expected to become manifest in the decision outcome. The exercise of voice engenders feelings of ownership, relevance, and importance on the part of users. For all these reasons, the authors propose: H5: User influence is directly related to system quality.

User conflict As pointed out by Hartwick and Barki (1994), multiple definitions of conflict exist (Putnam and Wilson 1982; Hocker and Wilmot 1985), and the various definitions reveal three key facets: conflict occurs among interacting parties; there is divergence of interests, opinions, or goals among these parties; and these differences appear incompatible to the parties. Such conditions occur frequently during systems development (DeBrabander and Thiers 1984; Kaiser and Bostrom 1982; Smith and McKeen 1992). In every case, conflict between users and system developers is expected to produce negative results during the system development process. Ultimately, such conflict may impair communication during the development process, discourage user participation, and lead to dysfunctional behavior. For these reasons the authors propose: H6: User conflict is inversely related to system quality.

A quantitative research design was chosen to examine the proposed relationships among the various constructs in the research model. The next section describes the sample, measures, and analysis methods employed to test the research model.

METHODOLOGY
The Sample

Given the variables being studied, the authors’ sample is focused on application systems developed by information system professionals for a definable set of business users within an organization. A letter describing the research project was sent to 30 chief information officers (CIOs) from companies in a single geographic area to seek their potential interest in collaborating. Of the 23 who responded favorably, each was asked to provide “political” support for the project by distributing a one-page document that described the project (its goal, timelines, and deliverables) and introduced the researchers. Among those who declined to participate, the reasons given were company policy against divulging company information (n=1), lack of interest in the topic (n=2), and being too busy at the time (n=4). At each company, 10 application systems were selected according to the following criteria: 1) each had been implemented; 2) each had been fully operational for at least six months; and 3) each had been developed by the internal information systems department.

The primary contact for each application system was the project manager (and/or project leader) responsible for its development. These individuals were asked to complete the first part of the questionnaire pertaining to the identification of the system, operational platform, development cost/time, system complexity, and business processes supported. The research team worked with the project managers to identify a primary user of the system (that is, an individual who was part of the project development team and a current user of the system). This individual provided all additional information for each system (see description under section “Construct Measurement”). The researchers met with the primary user to explain the project briefly, identify the system under scrutiny, and distribute the questionnaire. Completed questionnaires were collected internally and returned to the researchers. Researchers conducted necessary follow-up telephone calls. Of the 230 completed questionnaires, only two sets were deemed unusable due to the inability to locate either the project manager or the project leader. The final sample size was 228, and a summary of the characteristics of the systems is presented in Table 1.

Construct Measurement

Details on how each construct was operationalized in this study are provided next. Appendix A contains a list of the questions used for each construct.

System quality Quality was measured by a 10-item scale adapted from Yoon, Guimaraes, and O’Neal (1995) and previously used by Guimaraes, Yoon, and Clevenson (2001). The scale is a measure of end-user satisfaction with various aspects of the system, including items regarding output information content; accuracy, usefulness, and timeliness; system response/turnaround time; system friendliness (ease of learning and ease of use); and documentation usefulness. Each item was measured on a five-point Likert scale indicating the extent of user satisfaction along each item. The scale ranged from “1” (no extent) to “5” (great extent). End users answered these questions.

User participation The measure of end-user participation in the system development process was adapted from Doll and Torkzadeh (1989) and Santhanam, Guimaraes, and George (2000). Respondents were asked to what extent they were primary players in each of nine specific activities, such as initiating the project, establishing the objectives for the project, determining the system availability/access, and outlining information flows. The five-point scale ranged from “1” (not at all) to “5” (great extent). End users answered these questions.

User experience This measure was adapted from Igbaria, Guimaraes, and Davis (1995). It assessed user computer experience by asking respondents to rate the extent of their experience relative to their peers along five dimensions: using systems of this type, using the specific system, using computers in general, serving as a member of a system development team, and serving as a member of the development team for the specific system being studied. The rating scale ranged from “1” (not at all) to “5” (to a great extent).

User/developer communication The measure was originally developed by Monge et al. (1983) and modified by Guinan (1988) to assess communication quality. Subsequently, it was used by McKeen, Guimaraes, and Wetherbe (1994). Using a scale ranging from “7” (very strong agreement), through “4” (neutral feelings or don’t know), to “1” (very strong disagreement), users were asked to rate the communication process between themselves and the systems developers along 12 statements regarding whether developers had “a good command of the language,” were “good listeners,” and “expressed their ideas clearly.”

User training This measure was proposed by Nelson and Cheney (1987) and has been used extensively (Santhanam, Guimaraes, and George 2000; Igbaria et al. 1995; Yoon et al. 1995). Respondents were asked to report the extent of training that affected their use of the specific system. There were five sources: college courses, vendor training, on-site training, self-study using tutorials, and self-study using manuals and printed documents. Each source was rated on a five-point scale ranging from “1” (not at all) to “5” (to a great extent).

User influence Based on the work of Robey and Farrow (1982), Robey, Farrow, and Franz (1989), and Robey et al. (1993), Hartwick and Barki (1994) used a measure for user influence composed of three items: How much influence did you have in decisions made about this system during its development? To what extent were your opinions about this system actually considered by others? Overall, how much personal influence did you have on this system? For this study, end users were asked to rate the degree of influence along each item with a scale ranging from “1” (not at all) to “5” (very much).

User conflict Based on the work of Robey and Farrow (1982) and Robey et al. (1989; 1993), this study adopted the measure for user/developer conflict used by Hartwick and Barki (1994). It is composed of three items that asked: Was there much conflict concerning this system between yourself and others? To what extent were you directly involved in disagreements about this system? Was there much debate about the issues concerning this system between yourself and others? For this study, end users were asked to rate the degree of conflict along each of these items using a scale ranging from “1” (not at all) to “5” (very much).

In this study the authors chose measures that had demonstrated reliability and validity in previous studies. The number of items used to measure each construct along with indicators of reliability and correlations among the constructs are summarized in Table 2. As discussed in the results section, psychometric properties of all constructs were acceptable.
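
To make the measurement approach concrete, the following sketch (in Python, using pandas) shows how multi-item Likert responses of this kind are typically combined into one composite score per construct and per system. It is a minimal illustration only: the file name, column names, and item groupings are hypothetical stand-ins, not the authors’ actual coding scheme.

    import pandas as pd

    # Hypothetical survey file: one row per system, one column per questionnaire item.
    # Most items use a 1-5 Likert scale; the 12 communication items use a 1-7 scale.
    responses = pd.read_csv("survey_responses.csv")

    construct_items = {
        "system_quality":     [f"qual_{i}" for i in range(1, 11)],   # 10 satisfaction items
        "user_participation": [f"part_{i}" for i in range(1, 10)],   # 9 participation activities
        "user_experience":    [f"exp_{i}" for i in range(1, 6)],     # 5 experience dimensions
        "user_communication": [f"comm_{i}" for i in range(1, 13)],   # 12 communication statements
        "user_training":      [f"train_{i}" for i in range(1, 6)],   # 5 training sources
        "user_influence":     [f"infl_{i}" for i in range(1, 4)],    # 3 influence items
        "user_conflict":      [f"conf_{i}" for i in range(1, 4)],    # 3 conflict items
    }

    # Average the items belonging to each construct into a single score per system.
    scores = pd.DataFrame({
        name: responses[items].mean(axis=1) for name, items in construct_items.items()
    })
    print(scores.describe())

Averaging (rather than summing) keeps each composite on the original response scale, which makes the descriptive statistics easier to read; either choice leaves the correlations among constructs unchanged.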

Data Analysis

To test the proposed hypotheses, the relationship between each independent variable and the dependent variable was first assessed by calculating Pearson’s correlation coefficients. To account for the possibility that the independent variables are themselves interrelated, multiple regression analysis was then undertaken to produce a model explaining the largest possible share of the variance in the dependent variable.
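
As a rough sketch of these two steps, and assuming composite construct scores like those built in the earlier example (the data frame and variable names are hypothetical), the analysis could be run as follows with SciPy and statsmodels; the authors’ actual procedure may have differed in detail.

    from scipy import stats
    import statsmodels.api as sm

    predictors = ["user_participation", "user_experience", "user_training",
                  "user_communication", "user_influence", "user_conflict"]

    # Step 1: bivariate Pearson correlations between each independent variable
    # and system quality (one test per hypothesis, H1 through H6).
    for var in predictors:
        r, p = stats.pearsonr(scores[var], scores["system_quality"])
        print(f"{var:20s} r = {r:+.2f}  p = {p:.4f}")

    # Step 2: multiple regression of system quality on all predictors jointly,
    # so that intercorrelations among the independent variables are taken into account.
    X = sm.add_constant(scores[predictors])
    model = sm.OLS(scores["system_quality"], X).fit()
    print(model.summary())   # coefficients, t-statistics, and R-squared

The model’s R-squared is the proportion of variance in the dependent variable explained by the predictors, the quantity reported later from Table 3.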

RESULTS

Table 2 reports Cronbach’s alpha for each of the constructs in the research model. Cronbach’s alpha should exceed 0.7, which it does for all scales in Table 2, indicating adequate reliability. Discriminant validity was assessed by conducting exploratory factor analysis with all the items from all the constructs. Appendix B contains the pattern matrix from this analysis. A clear pattern of factors emerged. Each item loaded highly on the intended factor (that is, construct), along with the other items designed to tap into that construct, and cross-loadings were acceptably low (that is, with one exception, no item loaded higher on any factor other than its target construct). These findings demonstrate good discriminant validity among the constructs, indicating that the questions used in this study tap into the meaning of the intended construct but do not substantially tap into the meaning of any of the other constructs. The results from the regression analysis of the research model are summarized in Table 3.
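
For readers who want to reproduce the reliability check, Cronbach’s alpha for a k-item scale is alpha = (k / (k - 1)) * (1 - sum of the item variances / variance of the summed scale). A short sketch follows, reusing the hypothetical responses data frame and construct_items mapping from the measurement example; it is illustrative only and not the authors’ code.

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha for a set of item columns (one row per respondent)."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Internal-consistency check for each construct; values above the usual
    # 0.7 threshold indicate adequate reliability, as reported in Table 2.
    for name, items in construct_items.items():
        print(f"{name:20s} alpha = {cronbach_alpha(responses[items]):.2f}")

The discriminant-validity check described above would additionally require an exploratory factor analysis with a rotation, for which a dedicated factor-analysis package is typically used.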

Results From Hypothesis Testing

Based on the results presented in Table 2, the following hypotheses are accepted at a statistically significant level:

  • H1: User participation is directly related to system quality.
  • H2: User expertise is directly related to system quality.
  • H4: User training is directly related to system quality.

The following hypotheses cannot be accepted:

  • H3: User-developer communication is directly related to system quality.
  • H5: User influence is directly related to system quality.
  • H6: User conflict is inversely related to system quality.

Table 3 shows that user participation, user training, and user experience combined can explain 61 percent of the variance in user satisfaction.

Other Interesting Results

As one would expect, Table 2 also indicates that more experienced and/or more trained users tend to participate more in system development activities and tend to communicate better with systems developers. Further, users reporting to have more influence over the system development process tend to have better communication with system developers. Users with more training and/or reporting better communication with system developers tend to have less conflict during the system development process.

DISCUSSION AND MANAGERIAL IMPLICATIONS

The main objective of this study was to test a set of hypotheses regarding user characteristics proposed by various authors as important determinants of systems success, in this case defined as system quality. The importance of user participation in the system development process, user training, and user experience has been strongly corroborated. The other variables (user/developer communication, user influence, and user conflict) seem to have no significant direct relationships with system quality. Without user participation in systems development, obviously, user training and experience will not have an effect on system quality. The same can be said about user/developer communication, user influence, and conflict. Even with formal user participation, it is possible that developers dominate the development process, allowing for little user influence without sacrificing system quality in cases where user requirements are well defined from the outset.

Previous user experience with computer technology and the system development process is directly related to system quality, user participation, and user/developer communication. Managers have to strike a balance between employing experienced users too often and providing inexperienced users the opportunity to participate in system development projects and to develop their computer technology knowledge and skills useful for future projects. For the more critical projects, managers must ensure that experienced users are available to participate. In cases where user requirements are not clear, managers must promote user influence, user/developer communication, and user conflict resolution to enhance system quality.

The importance of user training comes across not only as a determinant of system quality but also as a significant factor for increasing user participation in the system development process, improving user/developer communication, and reducing user conflict during development. Needless to say, managers must take the importance of user training more seriously in order to improve system quality, improve relations with the user community, and use company information technology (IT) resources more effectively in the long run. Attention to the importance of user training as a factor for system quality is particularly important now, as IT expenditures are being cut and managers are more likely to cut less “tangible” items first.

While user/developer communication seems to have no direct relationship to system quality, it is a significant factor in reducing user conflict during the system development process and in giving users a feeling that they can actually influence the process of system development. On the other hand, users will be more likely to strive for better communication with system developers if they believe they can influence the development process and get the system they want. As mentioned earlier, for new systems with complex or poorly understood user requirements, it is critical that managers ensure strong user participation, user influence, and user conflict resolution by promoting user/developer communication. This process may lead to considerable changes to system requirements and design, which in turn may call for a prototyping approach to systems development that requires flexible tools and methodologies. While that may increase systems development costs and time, it is preferable to developing systems that are unused or useless.

REFERENCES

ANSI/ASQC. 1978. Quality systems terminology. Milwaukee, Wisconsin: American Society for Quality Control.

Bailey, J. E., and S. W. Pearson. 1983. Development of a tool for measuring and analyzing computer user satisfaction. Management Science 29, no. 5: 530-545.

Barki, H., and J. Hartwick. 1994a. Measuring user participation, user involvement and user attitude. MIS Quarterly: 59-79.

Barki, H., and J. Hartwick. 1994b. User participation, conflict, and conflict resolution: The mediating roles of influence. Information Systems Research 5, no. 4: 422-438.

Baroudi, J. J., and W. J. Orlikowski. 1988. A short-form measure of user information satisfaction. Journal of Management Information Systems 4, no. 4: 44-59.

Brodbeck, F. C. 2001. Communication and performance in software development projects. European Journal of Work & Organizational Psychology 10, no. 1: 73-94.

Cavaye, A. L. M. 1995. User participation in system development revisited. Information & Management 28: 311-323.

Chief Information Officer: The Magazine for Information Executives. 1991. Special Issue on Companies Where Quality Counts (August) 1-32.

DeBrabander, D., and G. Thiers. 1984. Successful information systems development in relation to situational factors which affect effective communication between MIS users and EDP-specialists. Management Science 30, no. 2: 137-155.

Dilworth, J. B. 1988. Production and operations management, 3rd edition. New York: Random House.

Doll, W. J., and G. Torkzadeh. 1989. A discrepancy model of end-user computing involvement. Management Science 35, no. 10: 1151-1171.

Evans, J. R., and W. M. Lindsay. 1980. The management and control of quality. St. Paul, Minn.: West Publishing.

Fok, L. Y., W. M. Fok, and S. J. Hartman. 2001. Exploring the relationship between total quality management and information system development. Information and Management: Amsterdam 38, no. 6: 355-371.

Gatian, A. W. 1994. Is user satisfaction a valid measure of system effectiveness? Information and Management 26, no. 3: 119-131.

Gallagher, C. A. 1974. Perceptions of the value of a management information system. Academy of Management Journal 17, no. 1: 46-55.

Guimaraes, T., M. Igbaria, and M. Lu. 1992. The determinants of DSS success: An integrated model. Decision Sciences 23, no. 2: 409-430.

Guimaraes, T., Y. Yoon, and A. Clevenson. 2001. Exploring some determinants of ES quality. Quality Management Journal 8, no. 1: 23-33.

Guinan, P. J. 1988. Patterns of excellence for IS professionals: An analysis of communication behavior. Washington, D.C.: ICIT Press.

Hartwick, J., and H. Barki. 1994. Explaining the role of user participation in information system use. Management Science 40, no. 4: 440-465.

Hocker, J. L., and W. W. Wilmot. 1985. Interpersonal conflict, 2nd edition. Dubuque, Iowa: William C. Brown.

Humphrey, W. S. 1989. Managing the software process. Reading, Mass.: Addison-Wesley.

Hunton, J. E., and J. D. Beeler. 1997. Effects of user participation in systems development: A longitudinal field experiment. MIS Quarterly 21, no. 4: 359-388.

Hunton, J. E., and K. H. Price. 1997. Effects of the user participation process and task meaningfulness on key information system outcomes. Management Science 43, no. 6: 797-812.

Hwang, M. I., and R. G. Thorn. 1999. The effect of user engagement on system success: A meta-analytical integration of research findings. Information & Management 35, no. 4: 229-236.

Igbaria, M., T. Guimaraes, and G. Davis. 1995. Testing the determinants of microcomputer usage via a structural equation model. Journal of Management Information Systems 11, no. 4: 87-114.

Ives, B., and M. Olson. 1984. User involvement and MIS success: A review of research. Management Science 30, no. 5: 586-603.

Ives, B., M. H. Olson, and J. J. Baroudi. 1983. The measurement of user information satisfaction. Communications of the ACM 26, no. 10: 785-793.

Jenkins, J. M., and J. A. Ricketts. 1986. The development of an MIS satisfaction questionnaire: An instrument for evaluating user satisfaction with turnkey decision support systems. Working Paper #295, Indiana University, Bloomington, Ind.

Jones, C. 1986. Programming productivity. New York: McGraw-Hill.

Kaiser, K. M., and R. P. Bostrom. 1982. Personality characteristics of MIS project teams: An empirical study and action-research design. MIS Quarterly 6, no. 4: 43-60.

Larcker, D. F., and V. P. Lessig. 1980. Perceived usefulness of information: A psychometric examination, Decision Sciences 11, no. 1: 121-134.

Lin, W. T., and B. B. M. Shao. 2000. The relationship between user participation and system success: A simultaneous contingency approach. Information & Management 37, no. 6: 283-295.

Mahmood, M. A, J. M. Burn, L. A. Gemoets, and C. Jacquez. 2000. Variables affecting information technology end-user satisfaction: A meta-analysis of the empirical literature. International Journal of Human-Computer Studies 52, no. 4: 751-771.

McKeen, J. D., T. Guimaraes, and J. C. Wetherbe. 1994. The relationship between user participation and user satisfaction: An investigation of four contingency factors. MIS Quarterly 18, no. 4: 427-451.

Monge, T. R., S. G. Buckman, J. P. Dillard, and E. M. Eisenberg. 1983. Communicator competence in the workplace: Model testing and scale developments. Communication Yearbook 5: 505-527.

Nelson, R., and P. Cheney. 1987. Training end-users: An exploratory study. MIS Quarterly 11, no. 4: 547-559.

Pearson, J. M., C. S. McCahon, and R. T. Hightower. 1995. Total quality management: Are information systems managers ready? Information and Management 29, no. 5: 251-263.

Pettingell, K., T. Marshall, and W. Remington. 1989. A review of the influence of user involvement on system success. In ICIS Proceedings, Boston, Mass.: 227-236.

Putnam, L. L., and C. Wilson. 1982. Communicative strategies in organizational conflict: Reliability and validity of a measurement scale. Communication Yearbook 6. Newbury Park, Calif.: Sage Publications.

Robey, D. 1994. Modeling interpersonal processes during systems development: Further thoughts and suggestions. Information Systems Research 5, no. 4: 439-445.

Robey, D., and D. Farrow. 1982. User involvement in information system development: A conflict model and empirical test. Management Science 26, no. 1: 73-85.

Robey, D., D. Farrow, and C. R. Franz. 1989. Group process and conflict in system development. Management Science 35, no. 10: 1172-1189.

Robey, D., C. R. Franz, L. A. Smith, and L. R. Vijayasarathy. 1993. Perception of conflict and success in information system development projects. Journal of Management Information Systems 10, no. 1: 123-139.

Saleem, N. 1996. An empirical test of the contingency approach to user participation in information systems development. Journal of Management Information Systems 13, no. 1: 145-166.

Santhanam, R., T. Guimaraes, and J. George. 2000. An empirical investigation of ODSS impact on individuals and organizations. Decision Support Systems 30: 1-72.

Smith, H. A., and J. D. McKeen. 1992. Computerization and management: A study of conflict and change. Information & Management 22: 53-64.

Yoon, Y., T. Guimaraes, and Q. O’Neal. 1995. Exploring the factors associated with expert systems success. MIS Quarterly 19, no. 1: 83-106.

BIOGRAPHIES

Tor Guimaraes holds the J. E. Owen Chair of Excellence at Tennessee Technological University. He has a doctorate in MIS from the University of Minnesota and a master’s of business administration degree from California State University, Los Angeles. Guimaraes was a professor and department chairman at St. Cloud State University. Before that, he was assistant professor and director of the MIS Certificate Program at Case-Western Reserve University. He has been the keynote speaker at numerous national and international meetings sponsored by organizations such as the Information Processing Society of Japan, Institute of Industrial Engineers, the American Society for Quality, IEEE, ASM, and Sales and Marketing Executives. Guimaraes has consulted with many leading organizations including TRW, American Greetings, AT&T, IBM, and the Department of Defense. He was also the editor-in-chief of Computer Personnel, an Association for Computing Machinery journal. Working with partners throughout the world, Guimaraes has published more than 100 articles about the effective use and management of information systems and other technologies. He can be reached by e-mail at tguimaraes@tntech.edu .

Sandy Staples is an associate professor in the School of Business at Queen’s University, Kingston, Ontario, Canada. His research interests include the enabling role of information systems for virtual work and knowledge management, and assessing the effectiveness of information systems and information systems practices. Staples has published articles in various journals and magazines including Organization Science, Information & Management, Journal of Strategic Information Systems, Journal of Management Information Systems, Communications of the Association of Information Systems, International Journal of Management Reviews, Business Quarterly, Journal of End-User Computing, OMEGA, and KM Review. He is currently an associate editor of MIS Quarterly and serves on the editorial board of other journals.

James D. McKeen is currently a professor of MIS at the School of Business, Queen’s University in Kingston, Ontario, Canada, and is the founding director of the Queen’s Centre for Knowledge-Based Enterprises, a research think-tank for the knowledge economy. He received his doctorate in business administration from the University of Minnesota. His research interests include IT strategy, user participation, the management of IT, and knowledge management in organizations. His research has been published in the MIS Quarterly, Journal of Information Technology Management, Communications of the Association of Information Systems, Journal of Systems and Software, International Journal of Management Reviews, Information and Management, Communications of the ACM, Computers and Education, OMEGA, Canadian Journal of Administrative Sciences, Journal of MIS, KM Review, and Database. He currently serves on the editorial board of the Journal of End User Computing, regularly reviews articles for many MIS journals, and was the MIS area editor for the Canadian Journal of Administrative Sciences for seven years. McKeen has been working in the field of information systems for many years as a practitioner, researcher, and consultant. He is a frequent speaker at business and academic conferences.

 
