The early 1990s were an especially difficult time for American higher education. Colleges and universities were faced with resource reallocation decisions growing out of an economic recession that had gripped most areas of the country. In making determinations as to how faculty and financial resources should be allocated - or reallocated - institutions confronted a decision-making context further complicated by external criticism of the academic enterprise. That criticism is perhaps best summarized in the 1990 Change magazine article by Robert Zemsky of the University of Pennsylvania and William Massy of Stanford University, describing what they refer to as the "academic ratchet":
"A term to describe the steady, irreversible shift of faculty allegiance away from the goals of a given institution, toward those of an academic specialty. The ratchet denotes the advance of an independent, entrepreneurial spirit among faculty nationwide, leading to increased emphasis on research and publication, and on teaching one's specialty in favor of general introduction courses, often at the expense of coherence in an academic curriculum. Institutions seeking to increase their own prestige may contribute to the ratchet by reducing faculty teaching and advising responsibilities across the board, thus enabling faculty to pursue their individual research and publication with fewer distractions. The academic ratchet raises an institution's costs and results in undergraduates paying more to attend institutions in which they receive less than in previous decades."
The University of Delaware's Office of Institutional Research and Planning had for years been collecting detailed data on teaching workloads, instructional costs, and externally funded scholarly activity. Metrics such as the proportion of undergraduate courses taught by tenured or tenure track faculty, FTE students taught per FTE faculty, instructional expenditures per student credit hour, and externally funded research per FTE faculty, among others, were readily accessible. These measures largely enabled the University to answer the question, "Who is teaching what to whom, and at what cost?" and positioned the institution to respond to critics such as Zemsky and Massy. Moreover, senior administration at the University used these metrics to compare instructional activity between and among departments within related disciplines, e.g., humanities, physical sciences, social sciences, etc., and to frame questions with regard to instructional costs and productivity over time. These questions formed the basis for resource allocation and reallocation decisions.
When the current University of Delaware President, David P. Roselle, arrived in 1990, he indicated that, as useful as these interdepartmental comparisons were within the University, the data would be even more valuable if comparisons could be made between and among comparable disciplines at colleges and universities across the country. The Office of Institutional Research and Planning was charged with responsibility for collecting interinstitutional cost and productivity data at the level of the academic discipline.
The initial data collection was undertaken in 1992 and, in retrospect, it was fairly primitive. Data were collected from 14 research universities, 15 doctoral universities, and 48 comprehensive colleges and universities. The data were analyzed to test the assumption that comprehensive colleges have higher student credit hour production and lower costs than doctoral universities, which, in turn, would teach more and at lower costs than research universities. The data revealed an altogether different pattern, clearly the result of sample dependency. The results were presented at the 1994 national meetings of the Association for Institutional Research (AIR) and the Society for College and University Planning (SCUP), and the limitations of the 1992 data collection were fully described (Middaugh, 1994).
What was remarkable about the reaction to the findings was not concern over the stated results, but rather enthusiasm that colleges and universities were prepared to share detailed information on teaching loads, instructional costs, and externally funded scholarly activity. The Office of Institutional Research and Planning at the University of Delaware was encouraged to replicate the study, refining the methodology where appropriate to correct for issues of sample dependency and other potential sources of error.
The University of Delaware absorbed the full cost of the 1992 study, and external funding for subsequent iterations of the Delaware Study was essential. In 1995, the Office of Institutional Research and Planning received a Cooperative Research Grant from TIAA/CREF to underwrite administrative costs associated with a second data administration. These funds were used, in part, to disseminate information to expand the study sample. Equally important, the funds were used to assemble an Advisory Committee to examine the 1992 data collection instruments, methodology, and calculation conventions, and to make appropriate recommendations for modifications and enhancements. The Advisory Committee was comprised of individuals with national reputations for expertise in collecting data on faculty workloads, and on budgetary issues associated with collecting data on instructional, research, and public service expenditures. The Advisory Committee, which has a rotating membership, continues to meet to this date, ensuring the ongoing relevance and viability of the project. The 1995-96 data collection embraced 32 research universities, 43 doctorate-granting universities, and 85 comprehensive and baccalaureate colleges and universities. The results of the data collection were not only sensible, reflecting a much broader sample; they demonstrated that, nationally, faculty teach far more than was the popular perception.
The 1995 TIAA/CREF grant provided the substantive basis for applying for a much larger grant from the Fund for Improvement of Postsecondary Education (FIPSE), which was awarded to the Office of Institutional Research and Planning in 1996, and allocated in excess of $100,000 to fund the project over a three-year period. As the result of this infusion of resources, over 250 colleges and universities have participated in the Delaware Study, and the 1998-99 data collection moved the project to a state of self-sufficiency. The data sharing project has emerged as the tool of choice nationally for collecting consistent and reliable information on teaching loads, instructional costs, and productivity at the academic discipline level of analysis.
Essential Elements of the Delaware Study
As noted earlier, the single greatest challenge confronting the Delaware Study following the initial 1992 data collection was the development of a methodology, data definitions, and calculation and reporting conventions that would yield consistent and reliable data with significant utility to participating institutions. The TIAA/CREF and FIPSE grants allowed the creation of an Advisory Committee, which confronted this challenge head-on. The Committee has had a rotating membership over the years, but several members serve on a continuing appointment. Dr. Paul Brinkman, Director of Institutional Planning at the University of Utah, and author of several books and articles on costing in higher education, has been a continuing member, as has Robert Kuhn, Vice Chancellor for Budget Planning and Analysis at Louisiana State University. Deborah Teeter, Director of Institutional Research and Planning at the University of Kansas and a nationally recognized expert on faculty activity analysis, has also served on the Advisory Committee since its inception.
The Delaware Study Advisory Committee has developed a reporting convention that is consistent with the best practices in the areas of instructional workload and financial analysis, and which has met with consistent approval from academic and financial planning officers. It has eliminated the ambiguity from the 1992 data collection, and has strengthened and expanded the scope of data collection to the point where the Delaware Study is no longer experimental. It is an established, state-of-the-art data collection consortium that will continue to evolve over time to meet the changing needs of academic and financial planners at colleges and universities.
Appendix A to this paper contains the Delaware Study Data Collection form. It is useful to examine the data elements being collected before moving into a discussion of how they are analyzed, reported, and used. The Delaware Study collects data by academic discipline, as defined by the National Center for Education Statistics' "Classification of Instructional Programs" (CIP) taxonomy. Colleges and universities are required to submit data at the 4-digit CIP code level, e.g., 27.01 is mathematics, 38.01 is philosophy, 40.08 is physics, 45.11 is sociology, and so on. Every course at every college and university in the country is assigned a CIP code. Consequently, it is possible to track each of those courses to a discipline within the CIP taxonomy. As a result, in looking at departments and programs across institutions, there is confidence in the comparability of those units.
Data are also collected on the highest degree offered. This was one of the initial Advisory Committee modifications to the data collection back in 1995. The 1992 effort did not collect this data element, to the detriment of the subsequent analysis. Once graduate study is entered into the equation, both teaching loads and instructional expenditures are profoundly affected. Because of this modification, the Delaware Study is now able to report national benchmarks in two important and different arrays - by Carnegie institution type, and by highest degree offered. Data are also collected on academic calendar type, as the Office of Institutional Research at the University of Delaware has developed an algorithm to make data from semester and quarter calendar institutions comparable.
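The calendar-conversion algorithm itself is not reproduced in this paper. As a purely illustrative sketch, the following applies the widely used convention that one quarter credit hour equals two-thirds of a semester credit hour; the function name and the assumption that the Delaware Study normalizes toward a semester basis are the author's illustration, not the Study's published method.

```python
def normalize_credit_hours(credit_hours: float, calendar: str) -> float:
    """Express student credit hours on a semester-calendar basis.

    Illustrative only: assumes the common convention that one quarter
    credit hour counts as two-thirds of a semester credit hour.
    """
    if calendar == "semester":
        return credit_hours
    if calendar == "quarter":
        # Three quarter terms span the same year as two semesters,
        # hence the 2/3 weighting per credit hour.
        return credit_hours * 2.0 / 3.0
    raise ValueError(f"unknown academic calendar: {calendar!r}")

# A quarter-calendar department reporting 3,000 quarter credit hours
# would be credited with 2,000 semester-equivalent credit hours.
print(normalize_credit_hours(3000, "quarter"))  # 2000.0
```

Whatever the exact weighting used, the point of such a conversion is that "student credit hours taught" from quarter and semester institutions can then be benchmarked on a common scale.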
The core teaching load data are collected in Part A of the Data Collection Form. Data reflect the fall semester or quarter in the academic year immediately preceding the data collection, thereby ensuring timely and "fresh" data. As is evident from the matrix in Part A, teaching activity is measured in terms of student credit hours and organized class sections taught. Teaching activity is measured for four discrete categories of faculty. The primary concern nationally, of course, is whether - or how much - tenured and tenure track faculty teach. They are reflected in the first row of the matrix. The remaining faculty categories include "other regular faculty," i.e., individuals who are on recurring contracts with the institution but who will never be eligible for tenure. Faculty unions increasingly assert that college administrations are attempting to cut instructional costs through increased use of lower-salaried, non-tenurable faculty. The extent to which this assertion is accurate can be assessed directly within the metrics of the Delaware Study. The matrix also collects teaching information for "supplemental faculty," i.e., non-recurring faculty such as adjuncts, administrators who teach, etc., and for graduate teaching assistants.
The decision to measure teaching activity in terms of student credit hours taught was a deliberate one on the part of the Delaware Study Advisory Committee. After extended discussion about the prospect of using contact hours, it was determined that the contact hour unit lacked consistency and stability across disciplines on a single campus, never mind across institutional boundaries. The student credit hour, on the other hand, is a derivative of the Carnegie course unit, and has consistency and integrity at institutions throughout the country. Thus measures such as "number of student credit hours taught" or "expenditures per student credit hour" have common definitions and meanings across campuses.
The Delaware Study Advisory Committee was fully cognizant, however, that not all instructional activity is measured in terms of student credit hours. Consequently, in addition to student credit hour generation, data are collected on "organized class sections." In a great many instances, the organized class section reported will be the lecture-based section that carries student credit hour value. But the Delaware Study also collects data on other, zero-credit organized class sections, typically laboratory, recitation, and discussion sections that are associated with the credit-bearing lecture portion of the course. These zero-credit sections meet at regularly scheduled times and consume instructional resources just like the lecture portion of the course, but would be totally obscured and lost in the analysis if the data focused solely on student credit hours. The volume of teaching activity would be significantly understated, and the cost data associated with the course would be distorted.
Student credit hour and organized class section data are collected by level of instruction, i.e., lower division (typically freshman and sophomore level courses), upper division (typically junior and senior), and graduate level. The data are further arrayed by organized class and individualized instruction methods of delivery. This latter distinction enables the Delaware Study to measure instructional activity such as master's thesis and doctoral dissertation supervision. Representative benchmarks on teaching loads, taken from the Delaware Study, are found in Appendix B.
Part B of the Data Collection Form collects information on full academic year and fiscal year teaching productivity and instruction, research, and public service expenditures. Data on fiscal year instructional expenditures are broken out into salaries, benefits, and other-than-personnel expenses (e.g., travel, supplies, non-capital equipment, etc.). This allows for determination of the personnel intensity of instructional expenditures, comparison of benefits packages as components of instructional costs, and so on. Total fiscal year expenditure data for research and public service activity are also collected. Representative benchmarks on instructional costs and productivity, as well as externally funded scholarly activity, are found in Appendix C.
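The metrics derived from Part B can be sketched as simple ratios. In this hedged illustration, the function names and the dollar figures are hypothetical; the calculations simply show how breaking direct instructional expenditures into salaries, benefits, and other-than-personnel costs supports both a cost-per-credit-hour benchmark and a measure of personnel intensity.

```python
def direct_cost_per_sch(salaries: float, benefits: float,
                        other: float, student_credit_hours: float) -> float:
    """Direct instructional expense per student credit hour taught."""
    return (salaries + benefits + other) / student_credit_hours

def personnel_intensity(salaries: float, benefits: float,
                        other: float) -> float:
    """Share of direct instructional expense devoted to personnel."""
    return (salaries + benefits) / (salaries + benefits + other)

# Hypothetical department: $1.2M salaries, $300K benefits,
# $100K other-than-personnel, 12,000 student credit hours taught.
cost = direct_cost_per_sch(1_200_000, 300_000, 100_000, 12_000)
share = personnel_intensity(1_200_000, 300_000, 100_000)
print(f"${cost:.2f} per SCH; personnel intensity {share:.1%}")
```

Comparing these two ratios across departments is what allows an analyst to see, for example, whether a high cost per credit hour reflects rich benefits packages, heavy non-personnel spending, or simply low teaching volume.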
It is important to note that the Delaware Study Advisory Committee initially determined, and has repeatedly reaffirmed the decision, to collect data on direct expenditures for instruction, research, and service. The definitions as to what constitutes a direct expenditure are clear and precise, rooted in policy statements from the National Association of College and University Business Officers (NACUBO). Indirect costs are far murkier. Indirect cost rates vary by institution, and by discipline within the institution. Consequently, in talking about instructional costs and productivity, the Delaware Study Advisory Committee opted to use metrics with consistency and integrity across institutions. By definition, the Delaware Study is not a full cost model. It is, however, a consistent and reliable tool for assessing the direct costs associated with teaching, research, and service, and their relative relationships with overall faculty activity.
Using Data From the Delaware Study
The Delaware Study is intended primarily to be a tool of inquiry for framing questions as to why teaching loads, instructional costs, and faculty productivity in a given academic department or program at an institution are similar to or different from national benchmarks for that department or program. And while the Delaware Study was designed primarily for institutional use, the national data base that underpins the benchmarks yields a rich source of information as to how much faculty actually teach, and the relative costs of instruction.
The University of Delaware, for example, uses data from the Delaware Study as one component in an overall process of academic program review. The University Provost focuses much of the Delaware Study data analysis on the activities of tenured and tenure track faculty. He opts to do this because this category of faculty is a "fixed cost"; that is, they are permanently employed until they retire or resign. Consequently, the Provost is interested in the return on investment. Figure 1 is a sample of a single-page "department profile" provided to the Provost. The chart captures two years of Delaware Study data, displaying the University's measures as a percentage of the national benchmark in each year for key teaching workload, instructional cost, and research activity categories.
The single-page snapshot quickly tells the Provost that, for the two iterations of the Delaware Study under examination, the department in question has tenured and tenure track faculty who teach in excess of the national benchmarks, as measured in terms of undergraduate student credit hours, total student credit hours, and organized class sections taught. Direct expense per student credit hour taught is higher than the national benchmark. However, this may be acceptable for several reasons. Tenured and tenure track faculty in this department teach more than the national average for this particular discipline. And the national benchmark is a mean score from data reported (with appropriate outliers excluded) that has not been adjusted for cost of living considerations. The University of Delaware is located in the Washington, D.C. to Boston corridor, where the cost of living is significantly higher than in the rest of the nation. Since faculty salaries - and especially tenured and tenure track faculty salaries - account, on average, for between 85 and 90 percent of direct instructional expenses, this is a very real consideration. Finally, it is obvious from the charts that there is significant external research activity in this department, which provides further context for considering the acceptability of instructional workload and cost indicators.
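The percent-of-benchmark display described above amounts to a simple normalization. The sketch below is illustrative only: the metric names and values are hypothetical, not taken from Figure 1 or from actual Delaware Study benchmarks.

```python
def percent_of_benchmark(institution_value: float,
                         national_benchmark: float) -> float:
    """Express a department's measure as a percentage of the
    national benchmark, so 100% means parity with the norm."""
    return 100.0 * institution_value / national_benchmark

# Hypothetical department profile: (department value, national benchmark)
profile = {
    "Undergrad SCH per FTE T/TT faculty": (260.0, 225.0),
    "Total SCH per FTE T/TT faculty": (310.0, 280.0),
    "Direct expense per SCH": (175.0, 150.0),
}
for metric, (dept_value, benchmark) in profile.items():
    pct = percent_of_benchmark(dept_value, benchmark)
    print(f"{metric}: {pct:.0f}% of national benchmark")
```

Note that a value above 100 percent reads differently depending on the metric: for workload measures it means the department teaches more than the norm, while for cost measures it means the department spends more per credit hour, which is why the profile must be interpreted as a whole rather than metric by metric.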
While the University of Delaware does not make cost of living adjustments in using the data, other institutions participating in the Delaware Study do. This is one of the attractive features of the Delaware Study - each institution has the capability of adjusting the data to meet its own needs. Indeed, the University of Oregon uses the national benchmark data to create a mirror image of itself in terms of constituent departments and programs, and uses that mirror to project costs and teaching loads into the future based upon trend data. And the Delaware Study is of value to state systems. Major higher education systems including California State University, the State University of New York, the University of North Carolina, and the Louisiana Board of Higher Education have participated, or are participating, in the Delaware Study. Indeed, the Delaware Study Project Director devoted an entire chapter in a recently published volume, The Multicampus System, to a discussion of productivity in systems (Middaugh, 1999a).
At the institutional level, participants in the Delaware Study are strongly urged not to use the national benchmarks in any given year to reward or penalize departments or programs. As noted earlier, the Delaware Study is intended to be a tool of inquiry for framing questions as to where and why, over time, a given department or program is positioned relative to national benchmarks. It must be continually underscored that the Delaware Study is purely a quantitative analysis, and in no way addresses the qualitative dimension of an academic department or program. Certain academic departments, including some at the University of Delaware, have higher costs and lower teaching loads than national Delaware Study benchmarks - and they wouldn't have it any other way for purely qualitative reasons. At the University of Delaware, departments are challenged annually to provide the measurable qualitative dimensions to their operations that provide the context within which Delaware Study data should be considered.
The Delaware Study is also receiving national attention as a tool for better understanding the extent to which faculty are engaged with students, particularly undergraduates. This is especially important as colleges and universities deal with external criticism similar to that of Zemsky and Massy, as cited in the introduction to this paper. Equally important, the data enable colleges and universities to speak quantitatively with skeptical legislators and parents, who are called upon to fund those institutions.
In the Winter 1998-99 issue of Planning for Higher Education, the journal of the Society for College and University Planning, three years of Delaware Study data were examined (Middaugh, 1999b). The article demonstrated remarkable stability and consistency in the Delaware Study data, as it reflected instructional activity across 25 academic disciplines typically found at most colleges and universities. The data demonstrated that, on average across those disciplines, over half of the lower division student credit hours generated at research universities are taught by tenured and tenure track faculty, with two out of three undergraduate student credit hours being generated by this faculty category. The proportions were progressively higher as the analysis moved from research universities to doctoral universities, to comprehensive institutions, and to baccalaureate colleges.
This is not trivial. The currency used to determine full-time/part-time student status is the number of student credit hours taken in a given term. Progress toward a degree is measured in terms of student credit hours successfully completed. And that currency - the student credit hour - is being generated, in far larger proportions than popularly perceived, by the faculty group in whom colleges and universities have most invested, i.e., tenured and tenure track faculty.
At the same time, the Delaware Study data on organized class sections taught reveal lower division and undergraduate proportions for tenured and tenure track faculty that are lower than those for student credit hour generation. This simply confirms what is common knowledge: graduate teaching assistants frequently meet the zero-credit laboratory, recitation, and discussion sections of a course, while the tenured or tenure track faculty member teaches the lecture section. There is no body of research to suggest that this practice is pedagogically unsound. Indeed, the Delaware Study data suggest that it frees tenured and tenure track faculty to help institutions realize their missions. Faculty at research universities bring in three times the external funding of their counterparts at doctoral universities, and 25 times that of faculty at comprehensive institutions. When the economic impact of research and public service activity at a major university is examined, the job and tax revenue generation is substantial, and underscores the necessity of this type of institutional mission. On the other hand, faculty at comprehensive institutions - with primary teaching missions - do teach significantly heavier loads than doctoral and research university faculty, and do so at lower costs.
The Delaware Study of Instructional Costs and Productivity has matured over the past decade into a major data sharing consortium that is the preeminent national data source for information on teaching loads, instructional costs, and overall faculty productivity. The prime mover in this maturation process has been the Delaware Study Advisory Committee, which has systematically refined and enhanced the data collection instruments, methodology, definitions, and analytical conventions. The result is a sophisticated and comprehensive data base that is being used by nearly 300 institutions across the country.
In looking toward the future, the Delaware Study has a number of planning objectives. Naturally, an increased institutional participation rate is a goal. The Advisory Committee has targeted two constituent groups for increased participation - private and independent colleges and universities, and historically black colleges and universities. Private and independent institutions currently comprise only about one-fourth of the participant pool. There is less of a culture of data sharing among these institutions than is the case for state-assisted and state-related institutions. Private institutions have also expressed a concern that they will somehow be disadvantaged should the data indicate that their costs are higher than those at public institutions. While these concerns are real, they are not valid. Cost per student credit hour taught is not a function of whether the revenue source for dollars spent is tuition or state appropriation. Rather, it is a function of what goes into the instructional function. This includes variables such as class sizes, student/faculty ratios, and other factors which private institutions can use to their advantage in discussing costs with parents and benefactors.
Historically black institutions have minimally participated in the Delaware Study over the years. The benefits of management data of the sort generated by the Delaware Study are self-evident both with regard to enhancing efficiency and cost effectiveness, and in making a case for more equitable resources using comparative data as the foundation.
In expanding the participant pool, the greatest obstacle is not institutional concern over possible misuse of the data. That issue has been fully addressed by the fact that the data reported by a given institution are confidential and the data set has institutional identities fully masked. Moreover, in order to access the data, an institution must actually participate in the Delaware Study, hence there is a commonality of interest in responsible use of the data.
The challenge to expanded participation rests largely on the issue of data sophistication. The data collection process at the institutional level is not trivial. Many smaller schools find themselves without the computing hardware, software, and sometimes even the personnel needed to disaggregate teaching loads and instructional costs to the CIP code level of analysis. The author of this paper and the Delaware Study Advisory Committee are constantly exploring ways to simplify the data collection process and increase participation, while at the same time ensuring that the quality and amount of information from the Study is in no way compromised.
In terms of additional data elements, the Delaware Study Advisory Committee is carefully exploring strategies for measuring and quantifying non-externally funded faculty activity in areas other than instruction. Certainly teaching loads and instructional costs are affected by the extent to which faculty devote time to out-of-classroom activities such as academic advising, institutional committee work, curriculum development, etc. Moreover, faculty in areas such as the fine arts and humanities are expected to engage in scholarly activity as a prerequisite to promotion and tenure. But because these disciplines do not attract the volume of external research funding that the hard sciences and engineering do, much of this activity is not currently being captured in the Delaware Study. The Advisory Committee is fully cognizant of these issues and is developing appropriate metrics for providing fuller contextual information for examining instructional costs.
The demand for consistent and reliable information on productivity and accountability at higher education institutions is not a passing fad. It is information that is long overdue, and the Delaware Study of Instructional Costs and Productivity will continue to play a focal role in describing the effectiveness and efficiency of institutional stewardship of financial and human resources. We continue to publicize the results of the Delaware Study at regional and national meetings of the Association for Institutional Research and the Society for College and University Planning, through articles in respected national journals such as Planning for Higher Education, and through forums such as that being sponsored by the Institute for Higher Education Policy.
The Delaware Study is now a permanent fixture in the repertoire of data collection and analytical tools. As its visibility continues to grow and interest in participation continues to expand, it will meet the needs of a greater and greater segment of the higher education community.
References

Middaugh, M.F. Interinstitutional Comparison of Instructional Costs and Productivity, by Academic Discipline: A National Study. Paper presented at the Annual Forum of the Association for Institutional Research, May 1994.
Middaugh, M.F. Instructional Costs and Productivity, by Academic Discipline: A National Study Revisited. Paper presented at the Annual Forum of the Association for Institutional Research, Albuquerque, New Mexico, May 1996.
Middaugh, M.F. Productivity in Systems. The Multicampus System: Perspectives on Practice and Prospects (G. Gaither, ed.). Sterling, Virginia: Stylus Press, 1999a, pp. 128-141.
Middaugh, M.F. How Much Do Faculty Really Teach? Planning for Higher Education, 27(2): 1-11, 1999b.
Zemsky, R. and Massy, W. Cost Containment: Committing to a New Economic Reality. Change, 22(6): 16-22, 1990.
Copyright © University of Delaware: 2002