National Institute for Learning Outcomes Assessment

NILOA Guest Viewpoints

We’ve invited learning outcomes experts and thought leaders to craft a Viewpoint. We hope that these pieces will spark further conversations and actions that help advance the field. To join the conversation, click the link below the Viewpoint. You can also sign up here to receive monthly newsletters that headline these pieces along with NILOA updates, current news items, and upcoming conferences.


Why Are We Assessing?
Higher Education Assessment Professionals


We the undersigned have all dedicated a portion of our careers to helping our institutional colleagues assess student learning. Many of us are or have been teaching faculty, and it’s our passion for teaching and helping students learn that drew us to this work.

We work at all kinds of institutions, large and small, public and private, research universities and two-year colleges. Our common bond is a conviction that, as good as American higher education is, today’s students—and society—need not just a good but the best possible education. We see assessment as a vital tool to making that happen.

We’ve found that assessment, when done well, can benefit students, faculty, co-curricular staff, and higher education institutions in a number of ways, including contributing to better learning.

For students, the clear expectations for learning that are part of good assessment practices help them understand where they should focus their learning time and energies. When learning outcomes, learning activities, assignments, and other assessments are clear and integrated with each other, student learning is more meaningful and long-lasting. Assessment, especially through grading and other feedback processes, motivates students to do their best. And feedback from assessment helps students understand their strengths and how they might try to improve.

For faculty and co-curricular staff, assessment helps them understand and thereby improve student learning by encouraging reflection on questions such as the following: What do you most want your students to learn? Why? How are you helping them learn those things? How well are they learning those things? How do you know? How might you help them learn more effectively? Assessment encourages faculty and co-curricular staff to collaborate with students and each other in discussing these questions and deciding how best to help students learn. These conversations help faculty and staff see how courses and other learning experiences link together to form coherent, integrated programs and how the courses and learning experiences they offer contribute to student success in subsequent pursuits.

For colleges and universities in an era when American higher education is sometimes criticized as expensive or irrelevant, assessment enables them to provide evidence to students, their families, taxpayers, donors and, yes, accreditors that, if students successfully complete this course or program, they will indeed have learned the important things that faculty and staff commit to in the institutional mission, catalog and course syllabi. Many of us who work in assessment see translation as an important part of our responsibilities; we aim to translate the work of faculty and co-curricular staff into terms that students and other stakeholders—including accreditors—easily understand and appreciate, showing them that everyone’s investments in higher education are worthwhile.

Students, faculty, co-curricular staff, and colleges and universities will generally see these benefits of assessment only when assessment is reasonably well done. So what are good assessment practices? The movement to articulate and assess learning outcomes systematically is about 25 years old—a blink of an eye in the history of higher education. We’re still figuring assessment out, and we readily acknowledge that there’s plenty of room for improvement in how we assess. But we have learned that assessment is most effective under the following circumstances.

Students, faculty, and co-curricular staff share responsibility for student learning. An impressive body of research demonstrates that “learning-centered” strategies—those in which students are actively engaged in their learning and faculty and students share responsibility for learning—are remarkably effective in helping today’s students learn and succeed. We cannot force students to learn, but we can create motivating and effective educational environments that make learning more likely to occur, and evidence from assessment can help us do so.

Institutional leaders make student learning a valued priority. They actively encourage faculty and co-curricular staff to employ research-informed educational strategies and to use assessment and other systematic evidence to decide how best to do so. They invest institutional resources to help faculty and staff do this. They help create time and space for faculty and staff to collaborate on discussions and decisions on teaching, learning, and assessment. They make sure that faculty and staff receive clear guidance, helpful coordination, resources, and constructive feedback that help them decide what and how to assess. They ensure that faculty and staff are recognized in meaningful ways for their work to systematically assess and improve student learning.

Faculty and co-curricular staff are respected leaders and partners in the assessment process. Those who determine curricula, teaching methods, and learning strategies collaborate to determine the best ways to assess student learning.

Everyone takes a flexible approach to assessment. Teaching is a human endeavor, and every institution, program, and student cohort is unique, so one size does not fit all. Faculty and co-curricular staff help choose and use assessment tools and strategies that are appropriate to their discipline and setting and that will give them useful information on student learning.

Assessment respects and builds on what faculty and staff are already doing well. For literally thousands of years, faculty have been assessing student learning through grading and feedback processes. Today, assessment simply builds this work into processes of collaborative, systematic inquiry.

Everyone focuses on collecting information that’s genuinely useful in understanding and improving student learning. If anyone finds that something hasn’t been helpful, they try to figure out what went wrong and implement an alternative approach.

Assessment is kept as cost-effective as possible. Everyone routinely compares the time spent on assessment with the usefulness of the process and results in understanding and improving student learning. Everyone aims to minimize fruitless or time-intensive assessment activities. Reports on assessment activities and findings have clear purposes and audiences and are kept to the bare-bones minimum needed to meet those needs.

Everyone recognizes that the perfect can be the enemy of the good. While assessment is a form of systematic inquiry, it does not necessarily have to be approached as empirical research; it’s designed to collect reasonably good quality information to help everyone make better decisions. Common sense applies here; assessments that may lead to major, expensive changes may need to be more rigorously designed than those informing minor adjustments to a learning activity. Of course, if you want to conduct research on how best to help students learn, great! The higher education community needs more scholarship on teaching, learning, and assessment.

Disappointing outcomes are viewed as opportunities for improvement and are addressed fairly, supportively, and compassionately. Resources are available to help faculty and co-curricular staff identify and implement strategies to try to improve student learning, and those who make assessment-informed changes are recognized for their work.

There is an institution-wide commitment to innovation and improvement. If everyone is satisfied with the status quo, there’s no point in assessing.

Is all this worthwhile? Here are a few examples of assessment work making a big impact:

After using rubrics to assess student learning in its writing-intensive, capstone, and service-learning courses, Daemen College hired a writing coordinator and writing-in-the-disciplines specialist, added an information literacy component to its first-year writing course, increased the proportion of first-year writing courses taught by full-time faculty from 35 to 90 percent, and offered workshops to faculty teaching writing-intensive courses. (For more information, see www.aacu.org/sites/default/files/files/VALUE/daemen.pdf)

After assessment results suggested the need to improve students’ digital literacy, Carlow University implemented an extensive faculty professional development program. (For more information, see http://nsse.indiana.edu/NSSE_2016_Results/index.cfm)

After assessing first-year students’ writing and finding disappointing outcomes for critical thinking and information competence, Norco College appointed course mentors and created a handbook and model assignments for faculty teaching first-year writing courses. (For more information, see “Can Assessment Loops Be Closed?” in the July-August 2014 issue of Assessment Update.)

To sum all this up: assessment is most effective and useful when faculty and co-curricular staff are valued, respected, supported, and engaged as part of a community that focuses purposefully and collaboratively on helping every student receive the best possible education. We are all committed to helping everyone at our institutions make that happen. Call on us—we are here to help.

March 24, 2018

Click here to view the full list of the 130+ cosigners.


_________________________________________________________________________________________


Check out our past Viewpoints:

In Appreciation of Clifford Adelman
Peter T. Ewell

Why Are We Assessing?
Higher Education Assessment Practitioners

The EEQ CERT: A New Way to Assure and Communicate Program Quality, Relevance, & Value
Melanie Booth

Bringing Student Voices to the Table: Collaborating with our Most Important Stakeholders
Ann E. Damiano

Responses to “The Misguided Drive to Measure ‘Learning Outcomes’”
Natasha Jankowski

Collaborating for Individual and Institutional Success: Libraries as Strategic Campus Partners
Jennifer Duncan, Kacy Lundstrom & Becky Thoms

Rethinking the Role of Work in Higher Education
David W. Marshall

Demand interoperability to dramatically improve the educational ecosystem
Jeff Grann

The Comprehensive Student Record at Dillard University
Demetrius Johnson

NILOA Remembers Assessment Pioneer Sister Joel Read of Alverno College
Peter Ewell, Pat Hutchings, & Russ Edgerton

The Neuroscience of Learning and Development: How can Evidence Legitimize Self-Reflection?
Marilee Bresciani Ludvik

Taking Stock of the Assessment Movement – Liberal Education, Winter, 2017
Peter Ewell, Pat Hutchings, Jillian Kinzie, George Kuh & Paul Lingenfelter

Eight Years On: Early—and Continuing—Lessons from the Tuning Project
Daniel J. McInerney

Real-time Student Assessment: Prioritizing Enrolled Students’ Equitable Progress toward Achieving a High-Quality Degree
Peggy Maki

Academic and Student Affairs Sides of the House: Can We Have an Open Concept Learning Design?
Darby Roberts

Just Assessment. Nothing More. Nothing Less.
Wayne Jacobson

Design for a Transparent and Engaging Assessment Website
Frederick Burrack and Chris Urban

Improvement Matters
Peter Felten

Working Together to Define and Measure Learning in the Disciplines
Amanda Cook, Richard Arum, and Josipa Roksa

The Simplicity of Cycles
Mary Catharine Lennon

Helping Faculty Use Assessment Data to Provide More Equitable Learning Experiences
Mary-Ann Winkelmes

Ignorance is Not Bliss: Implementation Fidelity and Learning Improvement
Sara J. Finney and Kristen L. Smith

Student Learning Outcomes Alignment through Academic and Student Affairs Partnerships
Susan Platt and Sharlene Sayegh

The Transformation of Higher Education in America: Understanding the Changing Landscape
Michael Bassis

Learning-Oriented Assessment in Practice
David Carless

Moving Beyond Anarchy to Build a New Field
Hamish Coates

The Tools of Intentional Colleges and Universities: The DQP, ELOs, and Tuning
Paul L. Gaston, Trustees Professor, Kent State University

Addressing Assessment Fatigue by Keeping the Focus on Learning
George Kuh and Pat Hutchings, NILOA

Evidence of Student Learning: What Counts and What Matters for Improvement
Pat Hutchings, Jillian Kinzie, and George D. Kuh, NILOA

Using Evidence to Make a Difference
Stan Ikenberry and George Kuh, NILOA

Assessment - More than Numbers
Sheri Barrett

Challenges and Opportunities in Assessing the Capstone Experience in Australia
Nicolette Lee

Making Assessment Count
Maggie Bailey

Some Thoughts on Assessing Intercultural Competence
Darla K. Deardorff

Catalyst for Learning: ePortfolio-Based Outcomes Assessment
Laura M. Gambino and Bret Eynon

The Interstate Passport: A New Framework for Transfer
Peter Quigley, Patricia Shea, and Robert Turner

College Ratings: What Lessons Can We Learn from Other Sectors?
Nicholas Hillman

Guidelines to Consider in Being Strategic about Assessment
Larry A. Braskamp and Mark E. Engberg

An "Uncommon" View of the Common Core
Paul L. Gaston

Involving Undergraduates in Assessment: Documenting Student Engagement in Flipped Classrooms
Adriana Signorini & Robert Ochsner

The Surprisingly Useful Practice of Meta-Assessment
Keston H. Fulcher & Megan Rodgers Good

Student Involvement in Assessment: A 3-Way Win
Josie Welsh

Internships: Fertile Ground for Cultivating Integrative Learning
Alan W. Grose

What if the VSA Morphed into the VST?
George Kuh

Where is Culture in Higher Education Assessment and Evaluation?
Nora Gannon-Slater, Stafford Hood, and Thomas Schwandt

Embedded Assessment and Evidence-Based Curriculum Mapping: The Promise of Learning Analytics
Jane M. Souza

The DQP and the Creation of the Transformative Education Program at St. Augustine University
St. Augustine University

Why Student Learning Outcomes Assessment is Key to the Future of MOOCs
Wallace Boston & Jennifer Stephens

Measuring Success in Internationalization: What are Students Learning?
Madeleine F. Green

Demonstrating How Career Services Contribute to Student Learning
Julia Panke Makela & Gail S. Rooney

The Culture Change Imperative for Learning Assessment
Richard H. Hersh & Richard P. Keeling

Comments on the Commentaries about "Seven Red Herrings"
Roger Benjamin

Ethics and Assessment: When the Test is Life Itself
Edward L. Queen

Discussing the Data, Making Meaning of the Results
Anne Goodsell Love

Faculty Concerns About Student Learning Outcomes Assessment
Janet Fontenot

What to Consider When Selecting an Assessment Management System
R. Stephen RiCharde

AAHE Principles of Good Practice: Aging Nicely
A Letter from Pat Hutchings, Peter Ewell, and Trudy Banta

The State of Assessment of Learning Outcomes
Eduardo M. Ochoa

What is Satisfactory Performance? Measuring Students and Measuring Programs with Rubrics
Patricia DeWitt

Being Confident about Results from Rubrics
Thomas P. Judd, Charles Secolsky & Clayton Allen

What Assessment Personnel Need to Know About IRBs
Curtis R. Naser

How Assessment and Institutional Research Staff Can Help Faculty with Student Learning Outcomes Assessment
Laura Blasi

Why Assess Student Learning? What the Measuring Stick Series Revealed
Gloria F. Shenoy

Putting Myself to the Test
Ama Nyamekye

From Uniformity to Personalization: How to Get the Most Out of Assessment
Peter Stokes

Transparency Drives Learning at Rio Salado College
Vernon Smith

Navigating a Perfect Storm
Robert Connor

It is Time to Make our Academic Standards Clear
Paul E. Lingenfelter

In Search for Standard of Quality
Michael Bassis

Avoiding a Tragedy of the Commons in Postsecondary Education
Roger Benjamin