Frequently Asked Questions
Academic Departments and Programs
- What is being required and when?
- Should we include intangibles?
- How much detail needs to be included in our statements of student learning goals and objectives?
- Are we supposed to set goals for majors, courses, or what?
- What level of detail goes to Middle States?
- What happened to Course Evaluations?
- What is the difference between "Direct" and "Indirect" assessment?
- Can graded coursework be used in assessment?
- Are we supposed to address students' improvement or absolute levels of abilities?
- How rigorous does this assessment work need to be?
- Do we have to submit assessment projects for IRB review?
- Don't we do this already?
Administrative Departments
- What is being required and when?
- What is the difference between departmental assessment and performance assessment?
- What is the difference between "Direct" and "Indirect" assessment?
- Can we just do a survey?
- How does this relate to Middle States?
- Where can we find assistance?
Academic Departments and Programs
What is being required and when?
Each academic department articulated its departmental goals and objectives for student learning in 2011-12, and began assessing them in 2012-13. Now each department should be fully engaged in an annual cycle of assessment, as outlined in our Academic Assessment Plan. Programs articulated goals in 2015-16 and began assessment in 2016-17. The Provost's office provides feedback, guidance, and assistance with these efforts.
Should we include intangibles?
As a Liberal Arts institution, Swarthmore places great value on many qualities that are intangible. Departments' goals should reflect what is important to them, and so we have taken the view (contrary to "best practices") that we ought to include intangibles. Certainly there are things that can't be measured directly, although a variety of evidence might give us confidence that we're on the right track. But omitting them would be inconsistent with our mission. For some interesting reading on "Essential Learning Outcomes" of a Liberal Education, see AAC&U's VALUE Initiative.
How much detail needs to be included in our statements of student learning goals and objectives?
It is logical to start with broad goals, and then to refine each into more detailed objectives. The more detail you are able to articulate, the better positioned you will be when you set out to assess them. However, it is fine to leave a goal statement at somewhat of a broad level until your department is ready to engage with it more fully. Once that happens, we would expect the goal to be more clearly defined with more specific objectives/outcomes.
Are we supposed to set goals for majors, courses, or what?
We should articulate goals for majors, minors, and courses (which would include non-majors).
What level of detail goes to Middle States?
We report on assessment to Middle States in narrative form, describing our processes and activities, providing examples, and referencing supplemental materials as appropriate. They do not require departmental submissions directly to them, but would expect to see evidence of our work in our materials, on our website, and in internal reporting that might be reviewed by an external team of evaluators for our decennial self-studies and interim reports.
What happened to Course Evaluations?
Course evaluations should continue to be administered. Please see the Provost's clarification about our current policy. However, course evaluations constitute indirect evidence of student learning — they tell us about student perceptions of what has been learned, which are important, but they don't show in a direct or objective way what has actually been learned. Therefore departments should also be assessing student learning in direct ways.
What is the difference between "Direct" and "Indirect" assessment?
Direct assessment requires looking at actual student work, such as projects, exams, and presentations (graded to reflect individual learning goals), for evidence of learning. Indirect assessment uses other information that suggests learning is taking place, such as self-reported learning gains in a course evaluation, or graduation rates. Middle States provides a useful sheet of examples, "Evidence of Student Learning" [pdf], that helps illustrate the distinction. Indirect assessment is valuable and important, but it cannot substitute for direct assessment.
Can graded coursework be used in assessment?
Traditionally, a grade reflects a combination of achievements, such as a range of materials covered on an exam, or multiple competencies demonstrated in a paper. Used this way, grades are not useful for this kind of assessment. But if a grade can be broken down (compartmentalized, deconstructed, unpacked...) into components that reflect individual learning goals, it could yield very useful feedback when aggregated across all students. In this configuration, grades would be entirely appropriate for assessment. See under Resources-Tools: Embedded assignments, and Rubrics.
Are we supposed to address students' improvement or absolute levels of abilities?
It depends on your goals. Your department may have a goal that all students will improve in a particular competency, and not be concerned about the absolute level of achievement. Another goal may suggest that there is a threshold level of achievement that all students must meet.
How rigorous does this assessment work need to be?
The purpose of assessment is to provide for ourselves compelling information that will help us understand what students are learning, and where changes might be needed and effective. We do not need scientifically developed procedures, double-blind studies, multiple independent raters, etc., or to produce publishable, generalizable research. We do need good information that helps us in making curricular decisions. Do what you reasonably can to maximize objectivity and rigor, but assessment should support your primary work — not deter you from it.
Do we have to submit assessment projects for IRB review?
As a general rule, assessment of student learning (including course evaluation) is considered "quality improvement" work, and projects do not need to be reviewed by the IRB.
Don't we do this already?
Swarthmore has a long history of self-reflection and continuous improvement. Our Honors Program and capstone experiences in every major are evidence of our commitment to meaningful assessment. However, in other areas we are not always as systematic as we might be, nor do we document the careful consideration that is behind curricular changes. The framework of assessment provides a way of linking pedagogy and results that is useful for both our own illumination and for others to better understand what we do.
Administrative Departments
What is being required and when?
Each administrative and co-curricular department was required to articulate its mission, goals, and objectives for institutional effectiveness by summer 2016, and began assessing them annually in 2016-17, with reports due in the summer. (Contact your supervisor for exact dates.) Many departments have already been engaged in this work, but in the past each division had its own approach to reporting. An institution-wide process has now been established and all units are expected to report on their work in the established cycle, using the standard Template provided. The Director of Institutional Effectiveness, Research & Assessment provides feedback, guidance, and assistance with these efforts.
What is the difference between departmental assessment and performance assessment?
Assessment of goals and objectives in the functional areas of the College should not be confused with performance assessments of employees. These processes differ significantly in focus and purpose. Departmental assessment, which we are calling "Assessment of Institutional Effectiveness" (IE), focuses on the goals for a functional area of the College, and should be a transparent process that aims to improve the effectiveness of that area. Employee performance assessment is a private process, focused on evaluating an individual employee's performance of job responsibilities.
What is the difference between "Direct" and "Indirect" assessment?
Direct evidence is that which reflects the actual performance of tasks, rather than a secondary measure. For example, direct evidence for the goal of processing transcripts efficiently might be the volume, turnaround time, and the error rate of transcripts processed, while a satisfaction rating by students requesting transcripts would provide an indirect reflection of efficiency. (If a department has a goal specifically about the perceptions of those it serves, such a rating could constitute a direct assessment of that goal.)
Can we just do a survey?
Though a survey often seems an obvious way for an administrative area to identify needs and collect feedback about its effectiveness at meeting them, surveys should be approached cautiously for several reasons. First, our populations are already over-surveyed, and each additional survey contributes to a cycle of fatigue, suppressed response rates, and reduced value for all of our surveys. Second, survey findings often yield indirect rather than direct evidence of effectiveness (see above). If you decide that a survey is absolutely necessary, please visit the Institutional Research website to find out what other surveys are going on (Recent, Current, and Upcoming Surveys), register your own (Register Your Survey), and learn about resources and best practices (Survey Resources).
How does this relate to Middle States?
The Middle States Commission on Higher Education is our accrediting body. It requires that colleges and universities demonstrate adherence to a set of specific standards, the "Standards for Accreditation and Requirements of Affiliation." Standard V, "Educational Effectiveness Assessment," and Standard VI, "Planning, Resources, and Institutional Improvement," focus particularly on assessment, although each of the seven standards includes a component of assessment. We provide evidence that we are meeting all of the standards through our decennial Self-Study and provide updates in our interim Periodic Review Report. For a brief overview (seven pages) from Middle States, see Assessing Student Learning and Institutional Effectiveness: Understanding Middle States Expectations [pdf].
Where can we find assistance?
The Director of Institutional Effectiveness, Research & Assessment is charged with guiding administrative areas in designing their assessment processes and providing advice about assessment projects. Please contact her (rshores1) for assistance.