National Institute for Learning Outcomes Assessment

NILOA Guest Paper Response

We’ve invited thought leaders to respond to NILOA’s Occasional Papers. We hope that these pieces will spark further conversations and actions that help advance the field. Sign up for NILOA’s monthly newsletter to receive these pieces along with NILOA updates, current news items, and upcoming conferences.

Response to Equity and Assessment: Moving towards Culturally Responsive Assessment

By Thomas F. Nelson Laird and Allison BrckaLorenz
Indiana University Center for Postsecondary Research

As higher education institutions simultaneously deal with calls for greater equity across student groups and separate calls for greater accountability in terms of demonstrating that students receive a high-quality education, it remains quite possible that actions taken to make progress in one area will hinder efforts to make progress in the other. For example, it is not difficult to imagine the implementation of assessment processes and practices that satisfy the calls for greater accountability while further marginalizing students who know well the inequities built into many collegiate experiences. Montenegro and Jankowski’s occasional paper, Equity and Assessment: Moving towards Culturally Responsive Assessment, helps outline ways to avoid this problematic possibility and provides an excellent starting place for conversations about creating and using more inclusive assessment processes and practices.

As Montenegro and Jankowski argue, the diversity of cultures and practices within, and coming to, higher education institutions demands assessment processes that more readily recognize, engage, and value various aspects of diversity. We agree with the authors that such processes must look for potential inequity in customary forms of assessment, seek validation and involvement of students in the assessment process, and include data disaggregation.

Many norms for assessing student learning were created with particular groups of students in mind. When considering the vast array of forms of diversity, compounded by considerations of intersectionality, it may feel impossible to create more inclusive and equitable assessments. The suggestions offered by Montenegro and Jankowski point to some practical options for instructional staff and assessment professionals to begin their quest for more culturally responsive assessment (CRA). Offering students choices in how they prefer to demonstrate their learning, allowing students to demonstrate their knowledge in multiple ways (such as in student portfolios), and using rubrics to evaluate the quality of student-chosen artifacts are all ways to give students more options and make them collaborators in the assessment process.

The disaggregation of data is a critical component of better understanding how students with different identities and backgrounds learn and create knowledge. Although aggregate measures ranging from course grades to institution-wide graduation rates can be useful starting points for conversations, these averages often mask the successes and challenges of student subpopulations. As Montenegro and Jankowski suggest, without disaggregation, assessment professionals continue to validate the learning styles and abilities of the “normal” or “typical” student and further lead those students who fall outside that norm or conception of typical to feel that their ways of knowing and demonstrating their knowledge are less important and possibly even wrong. Further, their connection of culture and intersectionality reminds us that a more traditional view of “diversity,” a focus on racial or ethnic identification, is only one part of a person’s identity and that it is the entanglement of various aspects of a person’s identity that may be most important in understanding students’ experiences and needs for support.

By proposing and describing the contours of CRA, Montenegro and Jankowski take several important first steps in helping faculty members and assessment professionals improve their thinking and practice in the area of assessment. As they are likely quite aware, there are many steps still to be taken. From our vantage point, urgent next steps for scholars and professionals include specifying principles, assumptions, definitions, models, and particular practices for CRA. For example, CRA principles may include the following:

  • CRA accounts for students’ multiple cultures

  • CRA uses multiple techniques sensitive to the varied ways students describe and demonstrate their experience, knowledge, and learning

  • CRA seeks just judgments based on collected information

In terms of assumptions, Montenegro and Jankowski point out two flawed assumptions cooked into many current assessment processes and practices. First, folks assume “students need to demonstrate learning in specific ways for it to count” (p. 6). Second, folks assume that one assessment approach is enough at any particular instance of assessment. So, it would seem that CRA’s assumptions could start with reversals of these two flawed assumptions. To be specific, CRA assumes that (a) students can demonstrate their learning in many ways and (b) in particular assessment moments, students can be given multiple ways to demonstrate learning.

Montenegro and Jankowski also give the beginnings of a definition of CRA. They write that CRA “involves assuring that the assessment process—beginning with student learning outcome statements and ending with improvements in student learning—is mindful of student differences and employs assessment methods appropriate to different student groups” (p. 9) and “can help reinforce a sense of belonging” (p. 10). These and other fragments of a definition exist throughout the document. It seems time to stitch them together into a definition that can be taken up, used, critiqued, and revised.

In the quote from page 9, Montenegro and Jankowski point to assessment processes “beginning with student learning outcome statements and ending with improvements in student learning.” In this way, the authors acknowledge that there are models of assessment, but they do not take on such models directly. We believe scholars and professionals should take on the task of describing a model of CRA and explaining how it differs from existing models. Similarly, we need to know how specific practices can be adapted or created for CRA. The authors point to assessment tools like rubrics as promising examples. However, as is the case with most tools, they can be used well or poorly. Simply using a rubric does not make one’s assessment culturally responsive. So, we need to know more about how a full range of tools, from surveys and rubrics to interviews and observations, can be used as part of CRA.

In another definitional passage in their paper, Montenegro and Jankowski explain that CRA “is thus thought of as assessment that is mindful of the student populations the institution serves, using language that is appropriate for all students” (p. 10, emphasis added). While this is an ideal to reach for, it may not be possible, and it will often not be practical, to be “appropriate for all students.” This highlights a significant dilemma facing faculty and professionals who take up culturally responsive practice, one that feels quite daunting. In fact, there are many examples of folks who have taken a step or two down the road toward cultural responsiveness and then stopped their journey when faced with making something that works for all. So, it may, in fact, be better to think of this type of assessment as culturally negotiated instead of culturally responsive. Best practice may be about entering into the negotiation process instead of reaching for and always missing the ideal. Making the shift in language from responsive to negotiated also requires faculty and assessment professionals to ask who should be involved in the negotiation, a challenging question that invites changing how power is generally distributed in the assessment process.

As scholars and professionals strive for more inclusive assessment, many challenges remain to be faced and questions to be answered. We highlighted some above but believe two more areas of challenge deserve mention. First, scaling up CRA to larger vantage points, such as assessing institutions, systems, states, or sectors, will not be simple. Although more inclusive assessment tools are excellent suggestions for assessing learning in courses and even in departments or disciplines, such tools may be overwhelming to implement at scale. More traditional forms of large-scale assessment, such as surveys, also need to be re-examined. Ensuring that questions are sensitive to the nuances of student experiences, removing cultural bias and assumptions from items, and ensuring that response options reflect the varying aspects of students’ identities and backgrounds are all critical for more inclusive data collection.

Second, traditional methods of using assessment results, particularly in analysis and reporting, should also be reevaluated. Conventional methods can further marginalize minority or nontraditional groups of students. Combining different groups of students (for example, treating students of color as a single group) and focusing on comparisons to a normative group (for example, continually comparing to White students) can give the impression that all students of color are the same and that they should somehow be like White students. As Montenegro and Jankowski recommend, assessment professionals and data analysts need to be careful about the assumptions and choices they make in analyzing and reporting data from assessment efforts. This raises important questions for analysts who are trained in techniques and methods that can discourage cultural responsiveness. Researchers and assessment professionals should strive to find a reasonable balance between conducting rigorous analyses and respecting cultural differences.

As should be clear, we greatly appreciate Montenegro and Jankowski’s effort in outlining and delineating CRA. We also strongly encourage the authors and others to continue this line of work, building out CRA (or CNA, culturally negotiated assessment) into a full set of resources that help make culturally responsive/negotiated assessment common within higher education.

About the authors:

Thomas F. Nelson Laird is an associate professor in the Higher Education and Student Affairs Program and Director of the Center for Postsecondary Research at Indiana University Bloomington. Tom’s current work concentrates on improving teaching and learning at colleges and universities, with a special emphasis on the design, delivery, and effects of curricular experiences with diversity. He is principal investigator for the Faculty Survey of Student Engagement, a companion project to the National Survey of Student Engagement. Tom has authored dozens of articles and reports, and his work has appeared in key scholarly and practitioner publications. He also consults with institutions of higher education and related organizations on topics ranging from effective assessment practices to the inclusion of diversity in the curriculum.

Allison BrckaLorenz is the project manager for the Faculty Survey of Student Engagement and a research analyst for the National Survey of Student Engagement. In her work, she helps people use data to make improvements on their campuses, uses data to highlight the experiences of traditionally marginalized subpopulations, and provides professional development opportunities and mentoring to graduate students. Her research interests focus on the teaching and learning of college students, the accompanying issues faced by faculty, and the experiences of small and understudied populations, with an emphasis on the engagement of queer and gender variant students.
For additional responses to this paper, please click below:

Video response by Dr. Eboni Zamani-Gallaher

Response by Jan McArthur

Response by Melissa Wright

Response by Jodi Fisler

Response by Pamela Felder

Response by Darby Roberts