Occasional Papers

Occasional Papers examine the current state of the art in learning outcomes assessment. Occasional Papers are organized by topic area. Click on each banner to expand the selection, read a brief synopsis, view icons for intended audiences, and download the papers.

Latest NILOA Occasional Paper

NILOA’s 41st Occasional Paper, “A Comprehensive Approach to Assessment of High-Impact Practices,” co-released with the Association of American Colleges and Universities (AAC&U), outlines a process for effectively assessing high-impact practices at your institution. High-impact practices, such as learning communities, capstones, undergraduate research, and community-based experiences, are effective pedagogies. Most of these practices have been around for decades, and the vast majority of campuses can proudly point to multiple high-impact practices happening somewhere within their institutions. Given the intense focus across higher education on identifying, tagging, and touting high-impact practices, assessment is what separates committed practitioners from casual adopters. A good assessment plan for high-impact practices starts by acknowledging three things: one, the name alone does not make a practice high-impact; two, evidence of effect requires assessing more than outcomes alone; and three, assessment must be, at every stage, attentive to equity.

History & Future Directions of Assessment

The Occasional Papers in this section provide a foundation upon which to build an understanding of assessment that is focused on improvement, along with emerging trends in the assessment landscape. If you are new to assessment, we recommend beginning with the Foundational Readings on Assessment and working your way down the list.

November 2009

Assessment, Accountability, and Improvement: Revisiting the Tension

Many of the same tensions that characterized the accountability and improvement purposes of student learning outcomes assessment when the assessment movement began in the mid-1980s still exist today. This paper examines these tensions and how they can be managed, if not completely resolved.


January 2011

From Gathering to Using Assessment Results:
Lessons from the Wabash National Study

The Wabash Study is a longitudinal research and assessment project designed to provide participating institutions with extensive evidence about teaching practices, student experiences, and institutional conditions that promote student growth. Despite the abundant information they received from the study, most Wabash Study institutions had difficulty identifying and implementing changes. In this paper, we review the faulty assumptions we made about assessment in creating the Wabash Study, including our initial thoughts about the primary obstacles to good assessment, and distill the lessons learned from our experience into five practical steps.

February 2013

Changing Institutional Culture to Promote Assessment of Higher Learning

While critics reproach the academy, a more fundamental problem looms: how to address higher education’s shortfall in higher learning. To say it plainly: in both quantity and quality, college learning is inadequate. At most institutions, the campus culture does not prioritize transformative learning. The purpose of this paper is to help realign the assessment conversation by arguing for institutional culture change that puts higher learning first and simultaneously embraces systemic assessment as a prerequisite of, and central condition for, a culture in which learning is the priority.

July 2011

Learning Outcomes Assessment in Community Colleges

The open access mission of community colleges demands working with individuals with widely diverse academic skill levels and educational backgrounds. Learning outcomes assessment in community colleges presents an array of opportunities and challenges. This paper analyzes the findings from two recent surveys, one of institutional researchers and one of chief academic officers from community colleges, to better understand the state of student learning outcomes assessment in this increasingly important sector. 

January 2012

From Denial to Acceptance: The Stages of Assessment

In some ways, the assessment movement is similar to what individuals experience as they move through Kübler-Ross’s (1997) stages of grief: denial, anger, bargaining, depression, and acceptance. Articles on assessment published in Change between 1986 and 2011 illustrate the analogy, since the magazine has been a congenial venue for papers focused on learning in higher education. During the initial denial stage, faculty and staff could not understand why assessment was necessary, which led to anger that outside forces were trying to mandate it. However, demands for accountability continued to create pressure for colleges and universities to assess student learning, leading institutions to try bargaining with state officials and regional accreditation agencies. Unflattering national evaluations of American higher education such as the Spellings Commission report propelled many institutions into depression. But eventually, reluctantly, slowly, and unevenly, many institutions came to an acceptance of assessment and its role in higher education.

April 2018

Assessment and Accreditation: An Imperiled Symbiosis

This paper reviews the accomplishments of higher education accreditation relative to its symbiotic relationship with assessment, acknowledges serious criticisms and proposed reforms, and indicates how accreditation might reform itself so as to disarm calls for radical change, improve its performance, strengthen the institutions and programs it serves, and enhance public understanding of and appreciation for higher education.

October 2016

Tracing Assessment Practice as Reflected in Assessment Update

At some future point, when a definitive history of the assessment movement is written, one of the most frequently cited, influential publications will be Assessment Update (AU). Since 1989, this bimonthly newsletter has been published by Jossey-Bass in partnership with Indiana University-Purdue University Indianapolis (IUPUI). It is no coincidence that the two most frequent contributors to AU, Trudy Banta—AU’s founding editor and intellectual muse—and Peter Ewell, are also among the most prolific thinkers and writers shaping the scholarship and practice of student learning outcomes assessment. In this featured NILOA occasional paper, Banta and Ewell, with the assistance of Cynthia Cogswell, mine the pages of AU from 2000 through 2015 to distill the major themes and advances that characterize the evolution of assessment as a field of professional practice.

October 2010

Regional Accreditation and Student Learning Outcomes: Mapping the Territory

While institutions engage in assessment for various reasons, one principal reason is to meet the expectations of accreditors. Accreditation in the United States serves as both a quality assurance and an accountability mechanism, and it has been the focus of much discussion since the Spellings Report and the reauthorization of the Higher Education Act, the common contention being that regional accreditation organizations should assure high levels of quality education from the institutions they accredit. This paper examines the policies and procedures of the seven regional accreditors as they relate to student learning outcomes assessment.

January 2017

Equity and Assessment: Moving Towards Culturally Responsive Assessment

As colleges educate a more diverse and global student population, there is increased need to ensure every student succeeds regardless of their differences. This paper explores the relationship between equity and assessment, addressing the question: how consequential can assessment be to learning when assessment approaches may not be inclusive of diverse learners? The paper argues that for assessment to meet the goal of improving student learning and authentically documenting what students know and can do, a culturally responsive approach to assessment is needed. In describing what culturally responsive assessment entails, this paper offers a rationale as to why change is necessary, proposes a way to conceptualize the place of students and culture in assessment, and introduces three ways to help make assessment more culturally responsive.

November 2018

Assessment 2.0: An Organic Supplement to Standard Assessment Procedure

The discipline of assessment has matured to the point where there is general agreement on best practices. However, the field has made little progress in developing a theoretical basis. Without a generalizable theory, assessment professionals remain focused on the details of practice—getting it done—instead of systems thinking in the service of improving, revising, growing, or otherwise developing a field that is still far from perfect. We bring sociological theory to bear on learning outcomes assessment to understand its strengths and challenges from a systems point of view. Using this theoretical understanding, we propose an alternative method of assessment (Assessment 2.0) designed to supplement the assessment work already being done while at the same time avoiding its most difficult challenges. Assessment 2.0 is organic and grows naturally from the professional judgment and experience of instructors rather than from the highly structured, linear procedure commonly followed in standard assessment practice.

November 2013

Sharpening Our Focus on Learning: The Rise of Competency-Based
Approaches to Degree Completion

This occasional paper by Rebecca Klein-Collins examines competency-based education in the higher education system. The author defines competency-based education as education that focuses on what students know and can do rather than on how they learned it or how long it took to learn. This paper identifies unifying concepts shared by different competency-based education programs, describes current competency-based models using the direct assessment approach, and examines the national policy context that could determine the extent to which these programs are able to go to scale.

February 2018

Using ePortfolio to Document and Deepen the Impact of HIPs
on Learning Dispositions

There is growing awareness of the importance of dispositional attributes to effective performance, both during college and in the workplace. In this paper, we examine multiple facets of dispositional learning such as fluid intelligence and interpersonal and intrapersonal competencies, and explain why participation in well-designed High-Impact Practices (HIPs)—activities such as learning communities, service learning, undergraduate research, and community engagement—can help students cultivate conscientiousness, resilience, self-regulation, reflection and other learning dispositions. In addition, we demonstrate how and why the use of ePortfolio practice can extend, deepen, and document the impact of HIPs on these essential but often overlooked and difficult-to-measure attributes.

Our Roles in Assessment Work

Various actors throughout institutions are involved in assessment of student learning. “Faculty and Assessment” provides resources and insights on faculty involvement in assessment, addressing topics such as academic freedom, pedagogy, and non-tenure-track faculty. “Partners in Assessment and Their Roles” offers a breadth of knowledge on how assessment professionals, student affairs, support services, librarians, and others can collaborate with assessment efforts.

November 2014

Assessment and Academic Freedom: In Concert, Not Conflict

Scholars and practitioners of learning outcomes assessment widely recognize the importance of faculty engagement with the planning and implementation of assessment activities. Yet garnering participation by the majority of faculty has remained a significant challenge, due in part to faculty concerns over the purposes of assessment, the value that it holds, and the costs of its implementation. This paper considers another claim that contributes to faculty resistance: that learning outcomes assessment is a fundamental abridgment of academic freedom. The paper argues that faculty control of the curriculum and effective shared governance set the stage for assessment that supports and builds on the faculty’s ongoing efforts while protecting their historic and essential right to academic freedom.

April 2010

Opening Doors to Faculty Involvement in Assessment

Much of what has been done in the name of assessment has failed to engage large numbers of faculty. This paper examines the dynamics behind this reality, including the mixed origins of assessment and a number of obstacles that stem from the culture and organization of higher education. Recent developments are identified that promise to alter those dynamics, including and especially the rising level of interest in teaching and learning as scholarly, intellectual work. The paper closes by proposing six ways to bring the purposes of assessment and the regular work of faculty closer together.

July 2016

Pedagogical Choices Make Large Classes Feel Small

Many students begin their college experience enrolled in large introductory classes. These classes are likely to enroll students at risk of leaving college without a degree, and they have the potential to reach first-generation, undeclared, and underrepresented minority (URM) students. Unfortunately, large lecture classes can make it difficult for students to develop meaningful relationships with faculty members or peers, even though strong faculty-student relationships are known to predict student engagement. Additionally, efforts to increase students’ engagement and persistence have largely taken place outside the classroom. We believe that some evidence-based practices developed outside the classroom are ripe for use in large lectures. In this paper we describe the integration of academic content with practices that support student engagement and success in a large general education course, Child Development, and offer principles that might guide the redesign of other large classes.

May 2011

What Faculty Unions Say About Student Learning Outcomes Assessment

Three major national faculty unions—the American Association of University Professors (AAUP), the American Federation of Teachers (AFT), and the National Education Association (NEA)—help shape the working conditions of faculty. In this paper, representatives from each organization describe their group’s positions on student learning and educational attainment and the role of assessing student learning outcomes.

July 2014

Student Outcomes Assessment Among the New
Non-Tenure-Track Faculty Majority

The faculty today is dramatically different from that of 30 years ago. It is largely non-tenure-track; faculty work has been unbundled into teaching-only, research-only, or service-only roles; and faculty may be provided little institutional support and have minimal connection to the institution and enterprise. While this change has been occurring over several decades, leaders on many college campuses have not responded to the shift. The absence of policies and practices aligned with the realities faced by this new majority faculty has significant implications for how faculty can be involved in student learning outcomes assessment. This paper explores the potential for non-tenure-track faculty to meaningfully contribute to student learning outcomes assessment and outlines policies and practices that can facilitate such contributions.

September 2019

Co-Designing Assessment and Learning:
Rethinking Employer Engagement in a Changing World

The U.S. Chamber of Commerce Foundation’s Talent Pipeline Management® (TPM) is a partnership model that allows employers to more meaningfully signal their competency needs to educators and allows educators, in turn, to describe their evidence of learning in ways that employers understand relative to those competency needs. This Occasional Paper describes the unique challenges a dynamic, changing labor market poses for employer-education partnerships and how employers and education partners can use TPM® as a framework for engaging one another in co-designing learning pathways that produce evidence of learning meaningful to both sides.

April 2018

A Portrait of the Assessment Professional in the United States:
Results from a National Survey

While the systematic assessment of student learning has been undertaken since the 1980s, scant research is available that outlines a profile of assessment professionals or the roles and responsibilities these individuals perform in institutions of higher education. This paper presents the results of the Assessment Professional Survey (n = 305). By examining the demographics, range of roles and responsibilities, types of methodological skills, and service contributions of these professionals, this study provides the first national portrait of the assessment professional.

December 2010

The Role of Student Affairs in Student Learning Assessment

Assessment in student affairs has been around for nearly as long as student affairs has played a formal role in student learning. But as the student affairs role in and contributions to student learning have evolved, so too have the purposes of assessment in student affairs. This paper describes the contributions student affairs professionals can make in campus-wide student learning outcomes assessment—by linking the student affairs mission to the institution’s mission, purpose, and strategic plan; by forming partnerships with faculty and administrators; and by sharing their expertise on student learning and development. 

February 2019

Creating Student-Centered Learning Environments and Changing Teaching Culture: Purdue University’s IMPACT Program

How does a large research university establish a culture supporting student-centered, evidence-based teaching? This paper describes Purdue University’s IMPACT course design program, now in its 7th year, which has involved 321 instructors, 529 courses, and in some semesters as many as 95.1% of first-time, full-time undergraduate students. IMPACT uses assessment on multiple levels: What should we examine in addition to grades to document achievement of learning outcomes in individual courses? How do we measure the learning climate and student engagement in a class? Most important, how does a faculty development program focused on course redesign lead to meaningful and lasting institutional change? Through this story, including lessons learned, readers will discover ways to enhance and evaluate their own faculty development programs to effect evidence-based and teaching-centric culture changes on their own campuses.

September 2011

Gaining Ground: The Role of Institutional Research in Assessing Student Outcomes and Demonstrating Institutional Effectiveness

Student learning outcomes are central to the purpose of educational organizations, and the assessment of these outcomes supplies some of the most important evidence demonstrating institutional effectiveness. Drawing on the results of a national survey of institutional research (IR) offices, this paper describes the varied organizational characteristics and analytical activities of these offices, giving special attention to IR’s role in assessing student outcomes. The paper describes the variable maturity among IR offices, summarizes the roles and responsibilities of IR staff, identifies some of the complexities and challenges associated with assessment and evaluation, and suggests strategies for demonstrating institutional effectiveness and building a culture of evidence.

April 2012

An Essential Partner: The Librarian’s Role in Student Learning Assessment

The authors argue that librarians, both independently and in partnership with other stakeholders, are systematically and intentionally creating learning outcomes, designing curriculum, assessing student achievement of learning goals, using assessment results to identify practices that impact learning, and employing those practices to positively impact student experience. Focusing on information literacy as a student learning outcome, the authors begin by outlining ideas behind information literacy and how it connects with general education, credit course, and discipline outcomes. Examples are provided throughout of how institutions have developed student learning assessment processes, concluding with possible challenges and solutions of librarians engaging in student learning assessment and contributing to overall student success.

November 2017

Creating Sustainable Assessment Through Collaboration:
A National Program Reveals Effective Practices

While the value of collaboration among diverse campus constituents is widely recognized, it is not easily achieved. This paper synthesizes the results of Assessment in Action: Academic Libraries and Student Success (AiA), a program of the Association of College and Research Libraries. Five compelling findings emerged from an assessment process grounded in collaborative planning, decision-making, and implementation. We assert that the AiA experience serves as a framework for designing assessment approaches that build partnerships and generate results for improving student learning and success through action research, and that the program results demonstrate how libraries contribute to fostering broad student outcomes essential to contemporary postsecondary education.

Tools & Resources

When determining your assessment processes and practices, there are many decision points to explore, such as which measures, approaches, and models of assessment to utilize, how best to leverage technology, and how to allocate resources to support assessment efforts. “Measures and Approaches to Assessing Student Learning” provides information on different means by which to assess student learning. The next category offers insight into the relationship between “Technology and Assessment” in the digital age, including choosing technology software to fit your needs, tips for assessment in online courses, and using technology to be more transparent about learning. “Cost and Resource Allocation” explores the various points of consideration for cost-benefit analysis as well as the overall cost of assessment processes and practices.

November 2019

A Comprehensive Approach to Assessment of High-Impact Practices

High-impact practices, such as learning communities, capstones, undergraduate research, and community-based experiences, are effective pedagogies. Most of these practices have been around for decades, and the vast majority of campuses can proudly point to multiple high-impact practices happening somewhere within their institutions. Given the intense focus across higher education on identifying, tagging, and touting high-impact practices, assessment is what separates committed practitioners from casual adopters. A good assessment plan for high-impact practices starts by acknowledging three things: one, the name alone does not make a practice high-impact; two, evidence of effect requires assessing more than outcomes alone; and three, assessment must be, at every stage, attentive to equity. Building on these three ideas, this Occasional Paper, co-released with the Association of American Colleges and Universities (AAC&U), outlines a process for effectively assessing high-impact practices at your institution.

December 2014

A Simple Model for Learning Improvement: Weigh Pig, Feed Pig, Weigh Pig

Assessing learning does not by itself result in increased student accomplishment, much as a pig is never fattened merely by being weighed. Indeed, recent research shows that while institutions are more regularly engaging in assessment, they have little to show in the way of stronger student performance. This paper clarifies how assessment results are related to improved learning—assess, effectively intervene, re-assess—and contrasts this process with mere changes in assessment methodology and changes to pedagogy and curriculum. It also explores why demonstrating improvement has proven difficult for higher education. We propose a solution whereby faculty, upper administration, pedagogy/curriculum experts, and assessment specialists collaborate to enhance student learning.

October 2013

All-in-One: Combining Grading, Course, Program,
and General Education Outcomes Assessment

Most U.S. colleges and universities are conducting some form of student learning outcomes assessment. Yet, many institutions are stymied in their attempts to fully engage with assessment findings. Institutions typically have created separate, coexisting assessment practices, handling the assessment of courses, programs, and general education as isolated pieces rather than as interconnected components of the evaluation of students’ knowledge and skills. This paper describes the system developed and implemented by Prince George’s Community College (PGCC), in Largo, Maryland. PGCC’s assessment system—called “All-in-One”—allows faculty to capture students’ discrete skills and requires a technology platform that incorporates rubrics for grading student work. The system has allowed PGCC to conduct assessment on a large scale and avoid duplicated efforts. Additionally, it has produced a robust data set, which makes it possible to track the development of individual student skills and aggregate data at the course, program, and institutional level.

November 2013

Sharpening Our Focus on Learning: The Rise of Competency-Based
Approaches to Degree Completion

This occasional paper by Rebecca Klein-Collins examines competency-based education in the higher education system. The author defines competency-based education as education that focuses on what students know and can do rather than on how they learned it or how long it took to learn. This paper identifies unifying concepts shared by different competency-based education programs, describes current competency-based models using the direct assessment approach, and examines the national policy context that could determine the extent to which these programs are able to go to scale.

July 2017

Internships, Integrative Learning and the Degree Qualifications Profile (DQP)

Internships are among the most beneficial out-of-classroom experiences designated as High-Impact Practices (HIPs). Yet, due to the diverse and unscripted nature of internship experiences, as well as the many different models for facilitating them, outcomes assessment practices are a long way from capturing the full power of internships as learning experiences. This paper draws upon the framework of the Degree Qualifications Profile (DQP) to sketch three different curricular pathways. This allows for mapping specific learning outcomes expected in internships, as well as the identification of appropriate forms of evidence for documenting achievement—including evidence from intentionally designed assignments. It concludes with suggestions for collaboration on- and off-campus that can help facilitate meaningful learning through internship experiences.

February 2018

Using ePortfolio to Document and Deepen the Impact of HIPs
on Learning Dispositions

There is growing awareness of the importance of dispositional attributes to effective performance, both during college and in the workplace. In this paper, we examine multiple facets of dispositional learning such as fluid intelligence and interpersonal and intrapersonal competencies, and explain why participation in well-designed High-Impact Practices (HIPs)—activities such as learning communities, service learning, undergraduate research, and community engagement—can help students cultivate conscientiousness, resilience, self-regulation, reflection and other learning dispositions. In addition, we demonstrate how and why the use of ePortfolio practice can extend, deepen, and document the impact of HIPs on these essential but often overlooked and difficult-to-measure attributes.

September 2012

The Seven Red Herrings About Standardized Assessments in Higher Education

This occasional paper by Roger Benjamin outlines the merit and role of standardized tests for assessment in higher education by addressing familiar arguments against standardized assessments that have confused participants on each side of the debate about the need for, and the possibility of, new benchmarks on student learning outcomes. Benjamin argues that these seven key assertions, or red herrings, need to be set aside in order to achieve progress toward the goal of continuous improvement in student learning outcomes. In his foreword, Peter Ewell sets the context for Benjamin’s position. Four commentaries by higher education thought leaders knowledgeable about assessment further examine the promise and pitfalls of using standardized tests to measure and enhance student learning.

February 2015

To Imagine a Verb: The Language and Syntax
of Learning Outcomes Statements

This essay provides language-centered principles, guidelines, and tools for writing student learning outcomes statements. Focused on syntax and semantics, it takes issue both with the lack of guidance in earlier literature and with the specific words, phrases, tenses, voices, levels of abstraction in diction, ellipses, and tautologies found in extant attempts to set forth learning outcomes. While placing the verb at the center of all student learning outcomes, it distinguishes between active and operational verbs, favoring the latter on the grounds that they are more likely to lead, naturally and logically, to assignments that allow genuine judgment of student performance. As more constructive cores of student learning outcomes, it offers 20 sets of operational verbs corresponding to cognitive activities in which students engage and which faculty seek to elicit.

January 2016

Aligning Educational Outcomes and Practices

The notion of alignment has become increasingly prominent in efforts to improve student learning. Alignment is the linking of intended student learning outcomes with the processes and practices needed to foster those outcomes. Alignment is not a new idea, but it has become more salient as more campuses have devised institution-level learning outcomes, and as frameworks such as the Association of American Colleges and Universities’ (AAC&U) Essential Learning Outcomes (ELOs), Lumina Foundation’s Degree Qualifications Profile (DQP), and Tuning USA have become widely adopted. It has also become more important as students swirl through multiple institutions, stop out and return, and take advantage of the growing set of providers offering courses, badges, and certificates. This paper explores what campuses can do to facilitate this process in a way that makes a difference in the experience and achievements of learners.

January 2013

The Lumina Degree Qualifications Profile (DQP): Implications for Assessment

In January 2011, the Lumina Foundation published its Degree Qualifications Profile (DQP) to challenge faculty and academic leaders in the U.S. to think deeply about aligning expectations for student learning outcomes. Since then, the DQP has kindled extensive discussions about what the postsecondary degrees granted by American colleges and universities really mean with respect to what graduates know and can do. This paper explores some of what needs to be done, and provides a few tools and techniques that may help us move forward. Finally, we invite faculties to carefully examine what the DQP asks us to do in designing more aligned and integrated approaches to teaching, learning, and determining student competence—as well as to actively experiment with these ideas and techniques with their colleagues.

December 2009

Three Promising Alternatives for Assessing
College Students’ Knowledge and Skills

This paper discusses three promising alternatives that afford authentic, information-rich, meaningful assessments that are essential for improving student learning, and at the same time provide reportable data for comparisons. First, ePortfolios provide an in-depth, long-term view of student achievement on a range of skills and abilities. Second, a system of rubrics used to evaluate student writing and depth of learning has been combined with faculty learning and team assessments, and is now being used at multiple institutions. Third, online assessment communities link local faculty members in collaborative work to develop shared norms and teaching capacity, and then link local communities with each other in a growing system of assessment. 

October 2019

Assessing Student Learning in the Online Modality

The first part of this paper provides an in-depth discussion of the Open SUNY Course Quality Review Rubric (OSCQR), an online course design rubric and review process that is openly licensed for anyone to use and adapt. The OSCQR Rubric and Process aim to help online instructional designers and online faculty improve the quality and accessibility of their online courses. OSCQR also provides a system-wide approach to collecting data that informs faculty development and supports large-scale online course design review and refresh efforts in a systematic and consistent way. The paper then explores general considerations of online teaching as they pertain to the assessment of student learning outcomes. Finally, it offers specific examples of how online course instructors and distance learning administrators have designed their courses and programs to ensure appropriate assessment of learning outcomes.

September 2018

Technology Solutions to Support Assessment

Multiple software systems offer institutions rich and nuanced information about students—most schools have learning management systems (LMS) and student information systems (SIS), often supported by analytics programs that integrate data. Faculty rely on LMS and other tools like student response systems (i.e., “clickers”), Scantron, and e-portfolios to assess students’ work. Many schools have adopted Assessment Management Systems (AMS) to streamline assessment processes and enrich their evidence about student learning. Yet “meaningful implementation remains elusive.” How can institutions select assessment technologies and integrate them with existing tools? What elements should we consider when selecting technologies? Do any systems exist that address the requirements of authentic assessment in one solution?

October 2011

Assessing Learning in Online Education:
The Role of Technology in Improving Student Outcomes

The national learning outcomes assessment (LOA) movement and online learning in higher education emerged during roughly the same period. What has not yet developed is a sophisticated understanding of the power of online learning and its concomitant technologies to change how we view, design, and administer LOA programs. This paper considers how emerging techniques allow the use of performance and behavioral data to improve student learning. We postulate that technology will enable educators to design courses and programs that learn in the same way that individual students learn, and we offer some conditions that we believe are important to further this goal. 

May 2015

Improving Teaching, Learning, and Assessment
by Making Evidence of Achievement Transparent

Technology can change higher education by empowering students to make an impact on the world as undergraduates. Done systematically, this would allow institutions to close the credibility gap with an increasingly dubious public. Authentic student achievements that are addressed to a real world audience can lead to richly detailed Resume 2.0 portfolios of work that add value to degrees and the granting institutions. A guide is provided for implementation of new high-impact practices, including structured assignment creation.

December 2018

Towards a Model for Assessment in an Information and Technology-Rich 21st Century Learning Environment

What should we assess if learners can Google the answers? If learning is defined as being able to do something afterward that could not be done before, the problem is that technology now enables us to do things we could not do before, simply by using an app or Google Translate. Nevertheless, actual learning is hard to define, and the person with the best technology might fare best. It is not just individual learners who learn; the whole system learns, including the devices learners use and the cloud to which those devices are connected. The constraint has shifted from our ability to provide learners with information to learners’ ability to process and use information, and the locus of learning has shifted from the learner to the rhizome. We should therefore shift the emphasis from evaluating learners’ collection of knowledge to evaluating their connection to the system.

January 2010

Connecting the Dots Between Learning and Resources

Almost every institution is currently struggling to find ways to restructure its costs. Institutional and policy leaders are asking for guidance, and for data that tell them how to focus scarce resources in the areas that make the biggest difference in access, attainment, and learning outcomes. Conventional assumptions about college finances, such as the notion that more money means better quality, appear to be so commonly held that they are not seriously analyzed by institutions or addressed by researchers. The problem occurs on both sides of the equation: work on student success pays too little attention to cost, and work on cost pays too little attention to learning. This paper presents a conceptual approach for analyzing the relation of spending to student success, examines what the existing research says about the topic, and concludes by recapping the research themes and suggesting directions for future work.

August 2013

What Are Institutions Spending on Assessment?
Is It Worth the Cost?

Assessment activities have proliferated over the last decade at institutions of higher education. This proliferation is due in part to greater pressure from regional accreditors to meet assessment requirements, and it has led to new expenditures on college campuses. We discuss a research project aimed at determining how much institutions spend annually on assessment and whether the perceived benefit is worth the cost. An online survey was administered across the country to determine institutions’ spending in seven expenditure categories. We present expenditures by category and broadly describe the differences in spending between institutions with different enrollments, between two-year and four-year institutions, and between public and private institutions.

May 2010

Valuing Assessment: Cost-Benefit Considerations

Colleges are left guessing how much they should spend on assessment. Planning assessment budgets grows more complex as institutions engage in a widening array of assessment activities; whether deciding on direct or indirect resource allocations, there are far more opportunities for spending than resources available. So how can a campus know when enough spending is really enough? Unfortunately, campuses may focus too much on controlling their spending on assessment without an equal focus on maximizing the value of the benefits derived from it. The true cost of assessment is determined by weighing costs against benefits. A campus has two opportunities to influence the cost of assessment: prudence in using campus resources (controlling expenditures) and assurance that assessment results produce tangible benefits (increasing the value). Applying basic cost accounting principles and cost-saving approaches can inform decisions about resource allocations in support of assessment.