Center for Teaching

Grading Student Work


What Purposes Do Grades Serve?

Barbara Walvoord and Virginia Anderson identify the multiple roles that grades serve:

  • as an evaluation of student work;
  • as a means of communicating to students, parents, graduate schools, professional schools, and future employers about a student’s performance in college and potential for further success;
  • as a source of motivation to students for continued learning and improvement;
  • as a means of organizing a lesson, a unit, or a semester, in that grades mark transitions in a course and bring closure to it.

Additionally, grading provides students with feedback on their own learning, clarifying for them what they understand, what they don’t understand, and where they can improve. Grading also provides feedback to instructors on their students’ learning, information that can inform future teaching decisions.

Why is grading often a challenge? Because grades are used as evaluations of student work, it’s important that grades accurately reflect the quality of student work and that student work is graded fairly. Grading with accuracy and fairness can take a lot of time, which is often in short supply for college instructors. Students who aren’t satisfied with their grades can sometimes protest their grades in ways that cause headaches for instructors. Also, some instructors find that their students’ focus or even their own focus on assigning numbers to student work gets in the way of promoting actual learning.

Given all that grades do and represent, it’s no surprise that they are a source of anxiety for students and that grading is often a stressful process for instructors.

Incorporating the strategies below will not eliminate the stress of grading for instructors, but it will decrease that stress and make the process of grading seem less arbitrary — to instructors and students alike.

Source: Walvoord, B. & V. Anderson (1998). Effective Grading: A Tool for Learning and Assessment. San Francisco: Jossey-Bass.

Developing Grading Criteria

  • Consider the different kinds of work you’ll ask students to do for your course. This work might include: quizzes, examinations, lab reports, essays, class participation, and oral presentations.
  • For the work that’s most significant to you and/or will carry the most weight, identify what’s most important to you.  Is it clarity? Creativity? Rigor? Thoroughness? Precision? Demonstration of knowledge? Critical inquiry?
  • Transform the characteristics you’ve identified into grading criteria for the work most significant to you, distinguishing excellent work (A-level) from very good (B-level), fair to good (C-level), poor (D-level), and unacceptable work.

Developing criteria may seem like a lot of work, but having clear criteria can

  • save time in the grading process
  • make that process more consistent and fair
  • communicate your expectations to students
  • help you to decide what and how to teach
  • help students understand how their work is graded

Sample criteria are available via the following link.

  • Analytic Rubrics from the CFT’s September 2010 Virtual Brownbag
Making Grading More Efficient

  • Create assignments that have clear goals and criteria for assessment. The better students understand what you’re asking them to do, the more likely they’ll do it!

Use different grading scales for different kinds of work, for example:

  • letter grades with pluses and minuses (for papers, essays, essay exams, etc.)
  • 100-point numerical scale (for exams, certain types of projects, etc.)
  • check +, check, check- (for quizzes, homework, response papers, quick reports or presentations, etc.)
  • pass-fail or credit-no-credit (for preparatory work)
  • Limit your comments or notations to those your students can use for further learning or improvement.
  • Spend more time on guiding students in the process of doing work than on grading it.
  • For each significant assignment, establish a grading schedule and stick to it.

Light Grading – Bear in mind that not every piece of student work may need your full attention. Sometimes it’s sufficient to grade student work on a simplified scale (minus / check / check-plus or even zero points / one point) to motivate them to engage in the work you want them to do. In particular, if you have students do some small assignment before class, you might not need to give them much feedback on that assignment if you’re going to discuss it in class.

Multiple-Choice Questions – These are easy to grade but can be challenging to write. Look for common student misconceptions and misunderstandings you can use to construct answer choices for your multiple-choice questions, perhaps by looking for patterns in student responses to past open-ended questions. And while multiple-choice questions are great for assessing recall of factual information, they can also work well to assess conceptual understanding and applications.

Test Corrections – Giving students points back for test corrections motivates them to learn from their mistakes, which can be critical in a course in which the material on one test is important for understanding material later in the term. Moreover, test corrections can actually save time grading, since grading the test the first time requires less feedback to students and grading the corrections often goes quickly because the student responses are mostly correct.

Spreadsheets – Many instructors use spreadsheets (e.g., Excel) to keep track of student grades. A spreadsheet program can automate most or all of the calculations you might need to perform to compute student grades. A grading spreadsheet can also reveal informative patterns in student grades. To learn a few tips and tricks for using Excel as a gradebook, take a look at this sample Excel gradebook.
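As a rough illustration of the arithmetic such a spreadsheet automates, the weighted-average calculation can be sketched in a few lines of code. The category names, weights, and scores below are made-up examples, not a recommended scheme:

```python
# A minimal sketch of the calculation a grading spreadsheet automates.
# Category weights and scores are hypothetical examples.

weights = {"exams": 0.50, "assignments": 0.25, "participation": 0.25}
scores = {"exams": 84, "assignments": 92, "participation": 100}

def final_grade(scores, weights):
    """Weighted average of category scores, each on a 0-100 scale."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[cat] * w for cat, w in weights.items())

print(final_grade(scores, weights))  # 90.0
```

A dedicated spreadsheet or gradebook tool adds the bookkeeping around this core: per-student rows, dropped lowest scores, and exportable reports.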

Providing Meaningful Feedback to Students

  • Use your comments to teach rather than to justify your grade, focusing on what you’d most like students to address in future work.
  • Link your comments and feedback to the goals for an assignment.
  • Comment primarily on patterns — representative strengths and weaknesses.
  • Avoid over-commenting or “picking apart” students’ work.
  • In your final comments, ask questions that will guide further inquiry by students rather than provide answers for them.

Maintaining Grading Consistency in Multi-sectioned Courses (for course heads)

  • Communicate your grading policies, standards, and criteria to teaching assistants, graders, and students in your course.
  • Discuss your expectations about all facets of grading (criteria, timeliness, consistency, grade disputes, etc) with your teaching assistants and graders.
  • Encourage teaching assistants and graders to share grading concerns and questions with you.
  • To promote consistency across sections, consider one or more of the following approaches:
  • have teaching assistants grade assignments for students not in their section or lab to curb favoritism (N.B. this strategy puts the emphasis on the evaluative, rather than the teaching, function of grading);
  • have each section of an exam graded by only one teaching assistant or grader to ensure consistency across the board;
  • have teaching assistants and graders grade student work at the same time in the same place so they can compare their grades on certain sections and arrive at consensus.
Minimizing Student Complaints about Grading

  • Include your grading policies, procedures, and standards in your syllabus.
  • Avoid modifying your policies, including those on late work, once you’ve communicated them to students.
  • Distribute your grading criteria to students at the beginning of the term and remind them of the relevant criteria when assigning and returning work.
  • Keep in-class discussion of grades to a minimum, focusing rather on course learning goals.

For a comprehensive look at grading, see the chapter “Grading Practices” from Barbara Gross Davis’s Tools for Teaching.

The Ultimate Guide to Grading Student Work

Strategies, best practices and practical examples to make your grading process more efficient, effective and meaningful


Top Hat Staff


This ultimate guide to grading student work offers strategies, tips and examples to help you make the grading process more efficient and effective for you and your students. The right approach can save time for other teaching tasks, like lecture preparation and student mentoring. 

Grading is one of the most painstaking responsibilities of postsecondary teaching. It’s also one of the most crucial elements of the educational process. Even with an efficient system, grading requires a great deal of time—and even the best-laid grading systems are not entirely immune to student complaints and appeals. This guide explores some of the common challenges in grading student work along with proven grading techniques and helpful tips to communicate expectations and set you and your students up for success, especially those who are fresh out of high school and adjusting to new expectations in college or university. 

What is grading?

Grading is only one of several indicators of a student’s comprehension and mastery, but understanding what grading entails is essential to succeeding as an educator. It allows instructors to provide standardized measures to evaluate varying levels of academic performance while providing students valuable feedback to help them gauge their own understanding of course material and skill development. Done well, effective grading techniques show learners where they performed well and in what areas they need improvement. Grading student work also gives instructors insights into how they can improve the student learning experience.

Grading challenges: Clarity, consistency and fairness

No matter how experienced the instructor is, grading student work can be tricky. No single grade can perfectly reflect a student’s overall comprehension or learning; some grades end up being inaccurate representations of actual comprehension and mastery. This is often the case when instructors use an inappropriate grading scale, such as a pass/fail structure for an exam when a 100-point system would give a more accurate or nuanced picture.

Grading students’ work fairly and consistently presents other challenges. For example, grades for creative projects or essays might suffer from instructor bias, even with a consistent rubric in place. Instructors can employ every strategy they know to ensure fairness, accessibility, accuracy and consistency, and even so, some students will still complain about their grades. Handling grade appeals can pull instructors away from other tasks that need their attention.

Many of these issues can be avoided by breaking things down into logical steps. First, get clear on the learning outcomes you seek to achieve; then ensure the coursework students will engage in is well suited to evaluating those outcomes; and last, identify the criteria you will use to assess student performance.

What are some grading strategies for educators?

There are a number of grading techniques that can alleviate many problems associated with grading, including the perception of inconsistent, unfair or arbitrary practices.

Grading can consume a large portion of educators’ time, and the results may not improve even if the time spent does. Grading, particularly in large classes, can leave instructors feeling burnt out. Those who are new to higher education can fall into a grading trap, where far too much of their allocated teaching time is spent on grading. And after graded assignments have been handed back, a rush of students may want either to contest a grade or to understand why they received it, which takes up even more of the instructor’s time.

With some dedicated preparation time, careful planning and thoughtful strategies, grading student work can be smooth and efficient. It can also provide effective learning opportunities for students and give the instructor good information about the student learning (or lack thereof) taking place in the course. The grading strategies below can help instructors capture student performance more accurately.

Establishing clear grading criteria

Setting grading criteria helps reduce the time instructors spend on actual grading later on. Such standards add consistency and fairness to the grading process, making it easier for students to understand how grading works. Students also have a clearer understanding of what they need to do to reach certain grade levels.

Establishing clear grading criteria also helps instructors communicate their performance expectations to students, and gives educators a better picture of what content to focus on and how to assess subject mastery. This can help avoid so-called ‘busywork’ by ensuring each activity aligns clearly with a desired learning outcome.

Step 1: Determine the learning outcomes and the outputs to measure performance. Does assessing comprehension require quizzes and/or exams, or will written papers better capture what the instructor wants to see from students’ performance? Perhaps lab reports or presentations are an ideal way of capturing specific learning objectives, such as behavioral mastery.

Step 2: Establish criteria to determine how you will evaluate assigned work. Is it precision in performing steps, accuracy in information recall, or thoroughness in expression? To what extent will creativity factor in the assessment?

Step 3: Determine the grade weight or value for each assignment. These weights represent the relative importance of each assignment toward the final grade and a student’s GPA. For example, how much will the final exam count relative to a research paper or essay? Once the weights are in place, establish grade levels that distinguish performance. For example:

  • A grade = excellent
  • B grade = very good
  • C grade = adequate
  • D grade = poor but passing
  • F grade = unacceptable
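The stratification above can be sketched as a small lookup. The numeric cutoffs here are illustrative assumptions, not a recommended standard:

```python
# A sketch of mapping a 0-100 score to the letter levels listed above.
# The cutoff values are illustrative assumptions.

CUTOFFS = [
    (90, "A"),  # excellent
    (80, "B"),  # very good
    (70, "C"),  # adequate
    (60, "D"),  # poor but passing
]

def letter_grade(score):
    """Return the letter grade for a 0-100 score; below all cutoffs is F."""
    for floor, letter in CUTOFFS:
        if score >= floor:
            return letter
    return "F"  # unacceptable
```

For example, `letter_grade(84)` returns `"B"`. Publishing the cutoffs alongside the weights removes most ambiguity about how a final grade is reached.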

Making grading efficient

Grading efficiency depends a great deal on devoting appropriate amounts of time to certain grading tasks. Some assignments deserve less attention than others: items like attendance or participation can save time by receiving a simple pass/fail grade, an acknowledgment of completion, or a mark on a check/check-plus/check-minus scale.

However, other assignments like tests or papers need to show more in-depth comprehension of the course material. These items need more intricate scoring schemes and require more time to evaluate, especially if student responses warrant feedback.

When appropriate, multiple-choice questions can provide a quick grading technique. They also provide the added benefit of grading consistency among all students completing the questions. However, multiple-choice questions are more difficult to write than most people realize. These questions are most useful when information recall and conceptual understanding are the primary learning outcomes.

Instructors can maximize their time for more critical educational tasks by creating a grading schedule and sticking to it. A spreadsheet is also invaluable for calculating many students’ grades quickly and exporting data to other platforms.

Making grading more meaningful in higher education


Grading student work is more than just routine, despite what some students believe. The better students understand what instructors expect them to take away from the course, the more meaningful the grading structure will be. Meaningful grading strategies reflect effective assignments, which have distinct goals and evaluation criteria. They also help keep the grading process from taking priority over teaching and mentoring.

Leaving thoughtful and thorough comments does more than rationalize a grade. Providing feedback is another form of teaching and helps students better understand the nuances behind the grade. Suppose a student earns a ‘C’ on a paper. If the introduction was outstanding, but the body needed improvement, comments explaining this distinction will give a clearer picture of what the ‘C’ grade represents as opposed to ‘A-level’ work.

Instructors should limit comments to elements of their work that students can actually improve or build upon. Above all, comments should pertain to the original goal of the assignment. Excessive comments that nitpick a student’s work are often discouraging and overwhelming, leaving the student less able or willing to improve their effort on future projects. Instead, instructors should provide comments that point to patterns of strengths and areas needing improvement. It’s also helpful to leave a summary comment at the end of the assignment or paper.

Maintaining a complaint-free grading system

In many instances, an appropriate response to a grade complaint might simply be, “It’s in the syllabus.” Nevertheless, one of the best strategies to curtail grade complaints is to limit or prohibit discussions of grades during class time. Inform students that they can discuss grades outside of class or during office hours.

Instructors can do many things before the semester or term begins to reduce grade complaints. This includes explaining the grading system in detail in the syllabus: the criteria for earning a particular letter grade, policies on late work, and other standards that inform grading. It also doesn’t hurt to remind students of each assignment’s specific grading criteria before it comes due. Instructors should avoid changing their grading policies mid-course; doing so will likely lead to grade complaints.

Assigning student grades


Since not all assignments may count equally toward a final course grade, instructors should figure out which grading scales are appropriate for each assignment. They should also consider that various assignments assess student work differently; therefore, their grading structure should reflect those differences. For example, some exams might warrant a 100-point scale rather than a pass/fail grade. Requirements like attendance or class participation might be used to reward effort; therefore, merely completing that day’s requirement is sufficient.

Grading essays and open-ended writing

Some writing projects might seem like they require more subjective grading standards than multiple-choice tests. However, instructors can implement objective standards to maintain consistency while acknowledging students’ individual approaches to the project.

Instructors should create a rubric or chart against which they evaluate each assignment. A rubric contains specific grading criteria and the point value for each. For example, out of 100 points, a rubric specifies that a maximum of 10 points are given to the introduction. Furthermore, an instructor can include even more detailed elements that an introduction should include, such as a thesis statement, attention-getter, and preview of the paper’s main points.
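A rubric like the one described can be represented as a simple mapping from criterion to maximum points. The criteria and point values below are assumptions for the sake of illustration:

```python
# Illustrative 100-point essay rubric; the criteria and point values
# are hypothetical examples, not a prescribed standard.

RUBRIC = {
    "introduction": 10,         # thesis statement, attention-getter, preview
    "body": 60,                 # argument, evidence, organization
    "conclusion": 10,
    "style_and_mechanics": 20,
}

def score_essay(earned):
    """Total the earned points, capping each criterion at its maximum."""
    return sum(min(earned.get(criterion, 0), maximum)
               for criterion, maximum in RUBRIC.items())
```

For instance, `score_essay({"introduction": 8, "body": 45, "conclusion": 9, "style_and_mechanics": 15})` yields 77 out of 100, and the per-criterion breakdown doubles as itemized feedback for the student.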

Grading creative work

While exams, research papers, and math problems tend to have more finite grading criteria, creative works like short films, poetry, or sculptures can seem more difficult to grade. Instructors might apply technical evaluations that adhere to disciplinary standards. However, there is the challenge of grading how students apply their subject talent and judgment to a finished product.

For creative projects that are more visual, instructors might ask students to submit a written statement along with their assignment. This statement can provide a reflection or analysis of the finished product, or describe the theory or concept the student used. This supplement can add insight that informs the grade.

Grading for multi-section courses

Professors or course coordinators who oversee several sections of a course have the added responsibility of managing other instructors or graduate student teaching assistants (TAs) in addition to their own grading. Course directors need to communicate regularly and consistently with all teaching staff about the grading standards and criteria to ensure they are applied consistently across all sections.

If possible, the course director should address students from all sections in one gathering to explain the criteria, expectations, assignments, and other policies. TAs should continue to communicate grading-related information to the students in their classes. They also should maintain contact with each other and the course director to address inconsistencies, stay on top of any changes and bring attention to problems.

To maintain consistency and objectivity across all sections, the course director might consider assigning TAs to grade other sections besides their own. Another strategy that can save time and maintain consistency is to have each TA grade only one exam portion. It’s also vital to compare average grades and test scores across sections to see if certain groups of students are falling behind or if some classes need changes in their teaching strategies.

Types of grading

  • Absolute grading: A grading system in which instructors explain performance standards before the assignment is completed. Grades are based on predetermined cutoff levels, with each point range assigned a letter grade. Most schools adopt this system, in which it’s possible for all students to receive an A.
  • Relative grading: An assessment system in which higher education instructors determine student grades by comparing them against those of their peers.
  • Weighted grades: A method used in higher education to determine how different assessments count toward the final grade. An instructor may choose to make the results of an exam worth 50 percent of a student’s total class grade, while assignments account for 25 percent and participation marks are worth another 25 percent.
  • Grading on a curve: This system adjusts student grades to ensure that a test or assignment has a desired distribution throughout the class (for example, only 20% of students receive As, 30% receive Bs, and so on), as well as a desired overall average (for example, a C average for a given test). We’ve covered this type of grading in more detail in the blog post The Ultimate Guide to Grading on a Curve.

What is ungrading?

Ungrading is an education model that prioritizes giving feedback and encouraging learning through self-reflection rather than a letter grade. Some instructors argue that grades cannot objectively assess a student’s work. Even when calculated down to the hundredth of a percentage point, a “B+” on an English paper doesn’t paint a complete picture of what a student can do, what they understand or where they need help. Alfie Kohn, lecturer on human behavior, education, and parenting, says that the basis for grades is often subjective and uninformative. Even the final grade on a STEM assignment can reflect how the assignment was written more than the student’s mastery of the subject matter.

So what are educators who have adopted ungrading actually doing? Here are some practices and strategies that decentralize the role of assessments in the higher ed classroom.

  • Frequent feedback: Rather than a final paper or exam, encourage students to write letters to reflect on their progress and learning throughout the term. Students are encouraged to reflect on and learn from both their successes and their failures, both individually and with their peers. In this way, conversations and commentary become the primary form of feedback, rather than a letter grade. 
  • Opportunities for self-reflection: Open-ended questions help students to think critically about their learning experiences. Which course concepts have you mastered? What have you learned that you are most excited about? Simple questions like these help guide students towards a more insightful understanding of themselves and their progress in the course.
  • Increasing transparency: Consider informal drop-in sessions or office hours to answer student questions about navigating a new style of teaching and learning.  The ungrading process has to begin from a place of transparency and openness in order to build trust. Listening to and responding to student concerns is vital to getting students on board. But just as important is the quality of feedback provided, ensuring both instructors and students remain on the same page.

Grading on a curve

Instructors grade on a curve to produce a specific distribution of scores, often referred to as a “normal distribution.” To ensure a specific percentage of students receive As, Bs, Cs and so forth, the instructor can manually adjust grades.

When displayed visually, the distribution of grades ideally forms the shape of a bell. A small number of students will do poorly, another small group will excel and most will fall somewhere in the middle. Students whose grades settle in the middle will receive a C-average. Students with the highest and the lowest grades fall on either side.

Some instructors will only grade assignments and tests on a curve if it is clear that the entire class struggled with the exam. Others use the bell curve to grade for the duration of the term, combining every score and putting the whole class (or all of their classes, if they have more than one) on a curve once the raw scores are tallied.
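There are many ways to implement a curve. One simple (and deliberately simplified) approach is a linear shift that moves the class mean onto a target average; the target value below is an assumption:

```python
# A simplified linear curve: shift every raw score so the class mean
# lands on a target average, capping results at 100.
# The target mean of 75 (a C average) is an illustrative assumption.

def curve(raw_scores, target_mean=75.0):
    """Return curved scores whose mean is (at most) target_mean."""
    shift = target_mean - sum(raw_scores) / len(raw_scores)
    return [min(100.0, score + shift) for score in raw_scores]

print(curve([55, 65, 70, 80, 90]))  # [58.0, 68.0, 73.0, 83.0, 93.0]
```

Note that a linear shift preserves the spread of the raw scores rather than forcing a bell shape; producing a fixed percentage of As, Bs and Cs would instead require a rank-based or z-score method.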

How to make your grading techniques easier

Grading is a time-consuming exercise for most educators. Here are some tips to help you become more efficient and to lighten your load.

  • Schedule time for grading: Pay attention to your rhythms and create a grading schedule that works for you. Break the work down into chunks and eliminate distractions so you can stay focused.
  • Don’t assign ‘busy work’: Each student assignment should map clearly to an important learning outcome. Planning up front ensures each assignment is meaningful and will avoid adding too much to your plate.
  • Use rubrics to your advantage: Clear grading criteria for student assignments will help reduce the cognitive load and second guessing that can happen when these tools aren’t in place. Having clear standards for different levels of performance will also help ensure fairness.
  • Prioritize feedback: It’s not always necessary to provide feedback on every assignment. When you do, consider bucketing feedback into what was done well, what needs improvement and how to improve it. Clear, pointed feedback is less time-consuming to provide and often more helpful to students.
  • Reward yourself: Grading is taxing work. Be realistic about how much you can do and in what time period. Stick to your plan and make sure to reward yourself with breaks, a walk outside or anything else that will help you refresh. 

How Top Hat streamlines grading

There are many tools available to college educators to make grading student work more consistent and efficient. Top Hat’s all-in-one teaching platform allows you to automate a number of grading processes, including tests and quizzes using a variety of different question types. Attendance, participation, assignments and tests are all automatically captured in the Top Hat Gradebook, a sophisticated data management tool that maintains multiple student records.

In the Top Hat Gradebook, you can access individual and aggregate grades at a glance while taking advantage of many different reporting options. You can also sync grades and other reporting directly to your learning management system (LMS). 

Grading is one of the most essential components of the teaching and learning experience. It requires a great deal of strategy and thought to be executed well. While it certainly isn’t without its fair share of challenges, clear expectations and transparent practice ensure that students feel included as part of the process and can benefit from the feedback they receive. This way, they are able to track their own progress towards learning goals and course objectives.

Click here to learn more about Gradebook, Top Hat’s all-in-one solution designed to help you monitor student progress with immediate, real-time feedback.


APS Teaching Tips
Dealing With Students Missing Exams and In-Class Graded Assignments

Teachers often become more aware of students’ out-of-class activities than they might wish. Announcements and memos from the dean of students inform them about sporting teams and their games and tournaments, forensics, service learning conferences, community-based work, and the like. And teachers quickly become familiar with student lifestyles and illnesses: mono, strep throat, hangovers, the opening of deer and fishing seasons, quilting bees, family vacations, and family mortality statistics. The correlation between exams and mandatory in-class work and the death of students’ cousins and grandparents is so high it should be a concern of the Centers for Disease Control. Given all this, it is a certainty that students will miss exams and other required activities. What is a teacher to do?

If you want to hear colleagues express frustration, ask them about make-up exams and assignments. Despite knowing intellectually that such absences will occur, teachers hope and pray, even in public institutions, that all of their students will take exams as scheduled. Alas, such prayers are rarely answered, and teachers are faced with the practical issues of keeping track of students who miss exams and assignments, as well as managing make-ups.

All of our advice, except that related to ethics, should be read through the filter of the type of institution where you teach, and the types of courses you teach and how large they are. For example, at a small liberal arts school, where teaching is a faculty member’s primary responsibility, more time may be spent with students who miss exams or assignments, and more creative (time consuming) alternatives may be practical as compared with someone teaching classes of 300 or 500 or more in a Research I institution.

Ethics

Teachers are not to cause students harm; we must treat them fairly and equitably, and they must be allowed to maintain their dignity (Keith-Spiegel, Whitley, Balogh, Perkins, & Wittig, 2002). Whatever your procedures are for students who miss exams and required in-class work, they must be equitable, providing students equal chances to earn a good grade by demonstrating equal knowledge. The hard part may be balancing academic rigor and accountability for what students are to learn with a fair and manageable process for those who miss required exams and assignments.

Make-up Exams

These should not be more difficult than the original test but must be, as best as you can design, alternate forms of the same exam. Exam banks that accompany texts make designing such alternate forms of multiple-choice tests relatively easy, and colleagues teaching two or more sections of the same course in a semester, who give alternate forms of exams, are often a good source of advice on this matter. Be thoughtful about the following:

  • An essay make-up exam may be unethical if regular exams are multiple choice or short answer (or vice versa), since students must study differently for each format and one format may be more difficult than the other.
  • An oral exam may “punish” students who do not think well on their feet, or are more socially anxious.
  • Scheduling make-up exams at inconvenient or undesirable times may express your frustration, but you or someone else will have to be there at that “inconvenient” time as well, and such arrangements raise issues of fairness.
  • It may be inequitable to students who meet all course requirements to allow their peers to do extra credit or drop their lowest grade instead of making up a missed exam.

In-Class Assignments

The same considerations exist for students who miss in-class required presentations or other graded work. If possible, students who were to present should be given opportunities to make up the assignment using the same grading criteria.

Planning Ahead

Spell Out Missed-Exam Procedures in Course Policies

No matter how well you teach or what inducements or penalties you impose, some students will miss exams and required class activities. Good educational practice argues that you plan for this reality as you design your course, not two days before (or after) your first exam. You want as few surprises as possible once the course begins.

Put your policies in your syllabus. Include a section in your syllabus on exams and other graded work. Specify what students should do if they know in advance they will be absent, how to notify you if they miss work for any reason, and what effect, if any, absences will have on their grades.

Keep your policy clear and simple. Before finalizing your syllabus, ask a few students to read your make-up policy to determine if it can be easily understood. If your explanation of what students are to do in the case of missing an exam, and how their grade is affected, is not easily understood, revise it. In developing your policy, do you want students to:

  • Notify you if they know they will miss, preferably at least 24 hours in advance, and give you the reason? Talking with you before or after class offers the best opportunity to provide feedback if the reason is questionable, to work out alternatives, and so forth. E-mail also can be useful.
  • Notify you as soon as possible after missing an exam or required assignment and give the reason? Again, in person or e-mail work best.
  • Present a letter from an authority (e.g., physician) documenting the reason? Keep in mind any student can “forge” such documentation or manipulate it in other ways, e.g., “Fred came to see me complaining of a severe headache.”
  • Have their grades lowered if their absence is not “acceptable” (e.g., overslept versus seriously ill)? How will you decide what is acceptable? Our experience suggests that “legitimate” reasons for absence include, but are not limited to: illness of the student or a close relative, accident, court appearance, military duty, broken auto, hazardous weather, and university activities (e.g., athletics, forensics).

Policies should reflect the nature of the exam or graded assignment. If you are teaching an introductory course and each module largely stands alone, it may be appropriate for students to make up a missed exam late in the semester. But if you want students to demonstrate knowledge or competency on an exam or assignment because future course material builds on that which comes earlier, you want to give the students much less time to make up the missed work.

Common policies. A common procedure is for the teacher, teaching assistant, or departmental secretary to distribute and proctor make-up exams during prearranged times (Perlman & McCann, in press). You might also consider allowing students to take make-up exams during exam periods in other courses you are teaching.

Make your policies easy to implement. To maintain your sanity and keep your stress level manageable, you must be able to implement your policies easily. Even if you, a secretary, or a graduate student distribute and proctor make-up exams, problems can arise. For example:

  • The secretary is ill or on vacation, or you are ill or have a conference to attend. You never want to change the time make-ups are available to students once these are listed in the course syllabus. Have backups available who know where make-up exams are stored, can access them, and can administer and proctor them.
  • Too many students for the make-up space. Investigate room sizes and number of rooms available. You may need more than one room if some students have readers because of learning disabilities.
  • Students often forget there is a common make-up the last week of the semester. Remind them often and announce this policy on class days when students are taking an exam, as this may be the only time some students who have missed a previous exam come to class.

Encourage appropriate, responsible, mature behaviors. Take the high road and let students know how they “should” behave. For example, one colleague includes this statement in the syllabus:

I expect students to make every effort to take required exams and make course presentations as scheduled. If you know in advance you will miss such a requirement, please notify me. If you are ill or other circumstances cause you to miss a required graded activity, notify me as soon as possible.

One of our colleagues states in her syllabus for a psychology of aging class, “It is very bad form to invent illnesses suffered by grandparents!” By giving students exemplars on how to behave appropriately, you can then thank them for their courtesy and maturity if they follow through, positively reinforcing such behaviors.

God lives in the details. Always err on the side of being “concrete.” If a make-up exam is at the university testing center, tell students where the testing center is. If you or a secretary hold make-up exams in an office, you may want to draw a map on how to get there. It is not uncommon for students to fail to find the office at the time of the exam, and wander around a large university building.

Students Who Miss Exams

You have a variety of alternatives for treating students who miss a scheduled exam. Select those that fit your course and the learning students must demonstrate.

Requiring make-up exams. If you collect all copies of your multiple choice or short answer exams, you may be able to use the same exam for make-ups. Our experience is that it is extremely rare that students deliberately miss an exam to have more time to study, whereas asking peers about specific exam questions more commonly occurs. Your experiences may be different. However, if you put exams on file at the university testing center, and students can take them weeks apart, you may want different forms. If you have concerns, you will need to prepare an equivalent, alternative form of the regular exam, as is often the case for essay tests.

Using procedures other than a make-up exam. Some faculty have students outline all text chapters required for an exam, use daily quiz scores to substitute for a missed exam, use the average of a student’s other exams to substitute for the one missed, score relevant questions on the comprehensive final to substitute for the missed test, or substitute a weighted score from the entire comprehensive final for the missed exam. Some teachers simply drop one test grade without penalty (Buchanan & Rogers, 1990; Sleigh & Ritzer, 2001). Before adopting such procedures, consider whether students will learn what you want from these alternatives and whether the substitute work is equivalent to what students must demonstrate on exams. Such a process makes sense only if your course contains numerous graded assignments of equal difficulty and you consider it equitable for students to skip a course module by not studying for or taking its exam.

Other teachers build extra credit into the course. They allow all students opportunities to raise their grades, offering a safety net of sorts for those who need to “make-up” a missed exam by doing “additional” assignments such as outlining unassigned chapters in the text.

Scheduling make-ups. Pick one or two times a week that are convenient for you, a department secretary, or teaching assistant, and schedule your make-ups then. Some faculty use a common time midway through the semester and at the end of the semester as an alternative.

Students Who Miss Other In-Class Assignments

Allowing students to demonstrate learning on non-exam graded assignments can be tricky. Such assignments often measure different kinds of learning than exams: the ability to work in groups, critical thinking as demonstrated in a poster, or an oral presentation graded in part on professional use of language. But you do have some alternatives.

Keeping the required assignment the same. If the assignment is a large one and due near the end of the semester, consider using an “incomplete” grade for students who miss it. Alternatively, students can present their oral work or poster in another course you are teaching if the content is relevant and time allows it. The oral required assignment also can be delivered just to the teacher or videotaped or turned in on audiotape.

Alternative assignments. As with missed exams, you can weigh other assignments disproportionately to substitute for in-class graded work — by doubling a similar assignment if you have more than one during the semester, for example. The dilemma, of course, is not allowing students easy avenues to avoid a required module or assignment without penalty. For example, oral assignments can be turned in as written work, although this may negate some of the reasons for the assignment.

When we asked colleagues about alternatives for missed in-class graded assignments (as compared with exams), almost everyone cautioned against listing them in the course syllabus. They felt that students could then weigh the make-up assignment versus the original and choose the one that gave them the greatest chance of doing well, and also the least amount of anxiety (in-class presentations often make students nervous). They recommended simply telling students that arrangements would be made for those missing in-class required graded work on a case-by-case basis.

Students Who Miss the “Make-Up”

On occasion, students will miss a scheduled make-up. Say something about this event in your syllabus, emphasizing the student’s responsibility to notify the instructor. We recommend that instructors reserve the right to lower a student’s grade by “x” number of points or “x” letter grades. If you place exams at a university testing center, you may not find out that the work has not been made up until the course is over, leaving you little choice but to give the student an “F” on that exam or assignment.

When the Whole Class Misses a Required Exam or Assignment

On rare but very memorable occasions, the entire class may miss an exam or assignment. For example, both authors have had the fire alarm go off during an exam. After a bomb threat cleared the building during his exam, the campus police contacted one author to identify whether a person caught on camera at a service station was a student calling in the bomb scare. (It was not.) The other author experienced the bomb squad closing a classroom building during finals week after the discovery of old, potentially explosive laboratory chemicals. Of course, the blizzard of the century or a flood might occur the night before your exam. What is a teacher to do?

The exam or graded assignment must be delayed. Prepare beforehand. Always build a make-up policy into your syllabus for the last exam or student presentation in a course. Talk with your department chair or dean about college or university policy. State that if weather or other circumstances force a make-up, it will occur at a certain time and place. This forethought is especially important if you teach at a northern institution where bad winter weather is not unusual. For exams and assignments during the semester, the policy that works best is to reschedule them (again, stating this in your syllabus) for the next regular class period. Call attention to this policy early in the semester, and post it on your course Web site. The last thing you want to do is call or e-mail everyone in the class to tell them an exam has been cancelled.

An exam or graded assignment is interrupted. Graded assignments such as oral presentations are easily handled. If time allows, continue after the interruption; if not, continue the next class period or during your designated “make-up” time.

If something interrupts an exam, ask students to leave their exams and answers on their desks or hand them in to you, take all personal materials, and leave immediately. A teacher can easily collect everything left in most classes in a few moments. Leave materials on desks if the class is large, or be the first person back to the room after the interruption. Fire alarms, bomb scares, and the like usually cause a lot of hubbub. Only if you have a lengthy two- or three-hour class, with time to allow students to collect themselves and refocus, and no concern about their comparing answers to questions during the delay, should the exam be continued that same day or evening.

If the interruption occurs late in the class period, you might tell students to turn in their work as they leave. You can then determine how you want to grade exams or the assignment, using pro-rated points or percentages, and assign grades accordingly.

If the interruption is earlier in the hour, the exam will have to be delayed, usually until the next class period. With a multiple-choice exam, we advise giving students the full (next) class period to finish their exams. If you are concerned about students comparing questions they have already answered, you will have to quickly develop an alternate exam.

A teacher’s decisions are more complicated if the exam is short answer or essay. Students may have skimmed all essay or short answer questions before an interruption. Will they prepare for those questions before the next class period? What if some students only read the first essay question but do not know the others they must answer? Preparing an alternate exam may be feasible, but students need to know you will do so, so they do not concentrate their studying on specific topics you will not ask about.

We know that such class interruptions are rare, but they can wreak havoc with students and teachers, be stressful, and raise issues of fairness that echo throughout the rest of the course. We advise teachers to talk with colleagues, and we have found a department brown bag on the topic fascinating. Your colleagues may have some creative and sound advice.

Summary

A teacher needs to plan ahead. Take some time to think about what it means for you and your students when someone misses required in-class work. A little preparation can save a lot of time and hassle later in the semester. Students deserve and will appreciate policies that are equitable and manageable.

Author’s Note: The authors are interested in how teachers deal with missed or interrupted graded in-class work (and their horror stories). Contact us with your ideas and experiences at [email protected].

References and Recommended Reading

  • Buchanan, R. W., & Rogers, M. (1990). Innovative assessment in large classes. College Teaching, 38 , 69-74.
  • Carper, S. W. (1995). Make-up exams: What’s a professor to do? Journal of Chemical Education, 72 , 883.
  • Davis, B. G. (1993). Tools for teaching . San Francisco: Jossey-Bass.
  • Keith-Spiegel, P., Whitley, B. E., Jr., Balogh, D. W., Perkins, D. V., & Wittig, A. F. (2002). The ethics of teaching: A casebook (2nd ed.). Mahwah, NJ: Erlbaum.
  • McKeachie, W. J. (2001). Teaching tips: Strategies, research, and theory for college and university teachers (11th ed.). Boston: Houghton Mifflin.
  • Nilson, L. B. (2003). Teaching at its best: A research-based resource for college instructors (2nd ed.). Bolton, MA: Anker.
  • Perlman, B., & McCann, L. I. (in press). Teacher evaluations of make-up exam procedures. Psychology Learning and Teaching, 3 (2).
  • Sleigh, M. J., & Ritzer, D. R. (2001). Encouraging student attendance. APS Observer, 14 (9), pp. 19-20, 32.

graded assignments and exams

Do you know of any research related to taking points off an exam for students who take a make-up for whatever reason? It is mentioned in this article but I’m interested in evidence to back up that it is fair and/or punitive in a college setting with adult learners. Thank you. Gerri Russell, MS, RN

graded assignments and exams

I teach introductory nutrition and other biology classes. If a student can prove that they missed an exam or assignment for a verifiable reason, even if they let me know ahead of time (usually for technology-related reasons), I let them make it up without taking points off. If they can’t prove it, I take off points as follows: 10% off per day late during the first week after the assignment is due, and half credit earned after that. Even so, there are always students who just miss things for no apparent good reason. I feel this is fair because it gives them the responsibility for making the work up, and I’d rather people become familiar with the material than not do it at all.

graded assignments and exams

I think that mid-semester tests should be abolished from all colleges and universities in order to let students prepare for their final exams without the pressure of earning grades along the way; this would also reduce unhealthy competition. I personally feel this suggestion would be very useful and hope everyone adopts it.


About the Author

BARON PERLMAN is editor of "Teaching Tips." A distinguished teacher and University and Rosebush Professor in the department of psychology at the University of Wisconsin Oshkosh, he has taught psychology for 29 years. He continues to work to master the art and craft of teaching. LEE I. MCCANN is co-editor of "Teaching Tips." A professor and University and Rosebush Professor in the department of psychology at the University of Wisconsin Oshkosh, he has taught psychology for 38 years. He has presented numerous workshops on teaching and psychology curricula, which are his current research interests.



College of Business

Assignments, Exams, & Grading

When we hear the word “assessment,” most of us think about course assignments and exams . While assignments and assessments are not the same thing, assignments are often used as formative or summative assessments of student learning. As formative assessments, assignments can support the scaffolding of student learning, allowing students to practice new skills so they are able to apply them in more complex projects. Instructors can provide timely feedback to support skill-building and correct errors. As summative assessments, assignments can help instructors evaluate student achievement of learning outcomes.

Grades are a hot topic for students and faculty alike, as they can determine a student’s status, scholarship eligibility, and future prospects. Instructor grading practices and philosophies can vary, often causing student anxiety and concern. Faculty conversations about grades often focus on fairness, rigor, equity, and how to truly measure learning. 

The growing set of resources on this site will explore good practices for transparent and equitable assignment design, exam design and administration practices, as well as the array of conversations around grading practices. Alternative grading practices such as Ungrading and contract grading have piqued the curiosity of instructors looking for ways to engage students in their own learning and make learning rather than grades the primary focus of instruction.


Grade Calculator

Use this calculator to find out the grade of a course based on weighted averages. The calculator accepts both numerical and letter grades. It can also calculate the grade needed on remaining assignments in order to reach a desired grade in an ongoing course.
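The arithmetic behind such a calculator is a weighted average. A minimal sketch follows; the function name and example numbers are ours, not taken from any particular calculator:

```python
# Weighted-average course grade. Each (score, weight) pair is one graded
# item: score in percent, weight as its share of the course grade.
# Weights need not sum to 100; the result is normalized by their total.

def course_grade(scores_and_weights):
    total_weight = sum(w for _, w in scores_and_weights)
    if total_weight <= 0:
        raise ValueError("weights must sum to a positive value")
    weighted_sum = sum(s * w for s, w in scores_and_weights)
    return weighted_sum / total_weight

# Example: two exams worth 30% each and a paper worth 40%
grade = course_grade([(85, 30), (92, 30), (78, 40)])
print(round(grade, 1))  # 84.3
```

Letter grades can be handled the same way by first converting them to their numerical equivalents.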



Final Grade Calculator

Use this calculator to find out the grade needed on the final exam in order to get a desired grade in a course. It accepts letter grades, percentage grades, and other numerical inputs.
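This computation simply solves the weighted average for the unknown final-exam score. A minimal sketch, with names of our own choosing:

```python
# Score needed on the final exam to reach a desired overall grade.
# current: average (percent) on all work completed so far
# desired: target overall course grade (percent)
# final_weight: the final's share of the course grade, as a fraction

def needed_on_final(current, desired, final_weight):
    if not 0 < final_weight <= 1:
        raise ValueError("final_weight must be a fraction in (0, 1]")
    return (desired - current * (1 - final_weight)) / final_weight

# Example: 82% average going in, final worth 25%, aiming for 85% overall
print(round(needed_on_final(82, 85, 0.25), 1))  # 94.0
```

A result above 100 means the target is unreachable without extra credit; a negative result means the target is already secured.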


The calculators above use the following letter grades and their typical corresponding numerical equivalents based on grade points.

Letter Grade    GPA    Percentage
A+              4.3    97-100%
A               4.0    93-96%
A-              3.7    90-92%
B+              3.3    87-89%
B               3.0    83-86%
B-              2.7    80-82%
C+              2.3    77-79%
C               2.0    73-76%
C-              1.7    70-72%
D+              1.3    67-69%
D               1.0    63-66%
D-              0.7    60-62%
F               0.0    0-59%
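The letter-grade scale is straightforward to encode as a lookup. The cutoffs below mirror the typical values tabulated above; individual institutions vary, so treat them as assumptions:

```python
# Letter-grade lookup based on the typical scale tabulated above.
# Each entry is (minimum percentage, letter grade, grade points).

SCALE = [
    (97, "A+", 4.3), (93, "A", 4.0), (90, "A-", 3.7),
    (87, "B+", 3.3), (83, "B", 3.0), (80, "B-", 2.7),
    (77, "C+", 2.3), (73, "C", 2.0), (70, "C-", 1.7),
    (67, "D+", 1.3), (63, "D", 1.0), (60, "D-", 0.7),
    (0,  "F",  0.0),
]

def letter_grade(percent):
    # Entries are sorted from highest cutoff down, so the first match wins.
    for cutoff, letter, points in SCALE:
        if percent >= cutoff:
            return letter, points
    return "F", 0.0  # covers negative inputs

print(letter_grade(84.3))  # ('B', 3.0)
```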

Brief history of different grading systems

In 1785, students at Yale were ranked on a four-tier scale: "optimi" was the highest rank, followed by second optimi, inferiore (lower), and pejores (worse). At William and Mary, students were ranked as either No. 1 or No. 2, where No. 1 represented students who were first in their class and No. 2 those who were "orderly, correct and attentive." Meanwhile, at Harvard, students were graded on a numerical scale from 1-200 (except in math and philosophy, where 1-100 was used). Later, shortly after 1883, Harvard used a system of "Classes," in which students were ranked Class I, II, III, IV, or V, with V representing a failing grade. All of these examples show the subjective, arbitrary, and inconsistent ways in which different institutions graded their students, demonstrating the need for a more standardized, albeit equally arbitrary, grading system.

In 1887, Mount Holyoke College became the first college to use letter grades similar to those commonly used today. The college used a grading scale with the letters A, B, C, D, and E, where E represented a failing grade. This grading system, however, was far stricter than those commonly used today, with a failing grade defined as anything below 75%. The college later redefined its grading system, adding the letter F for a failing grade (still below 75%). Letter grading scales became increasingly popular in colleges and high schools, eventually leading to the letter grading systems typically used today. However, there is still significant variation in what constitutes an A, and in whether a system uses pluses and minuses (e.g., A+ or B-), among other differences.

An alternative to the letter grading system

Letter grades provide an easy means of generalizing a student's performance. They can be more effective than qualitative evaluations in situations where "right" or "wrong" answers can be easily quantified, such as an algebra exam, but alone they may not provide a student with enough feedback on an assessment like a written paper (which is much more subjective).

Although a written analysis of each individual student's work may be a more effective form of feedback, there is the argument that students and parents are unlikely to read the feedback and that teachers do not have the time to write such analyses. There is precedent for this type of evaluation system, however, at Saint Ann's School in New York City, an arts-oriented private school that does not have a letter grading system. Instead, teachers write anecdotal reports for each student. This method of evaluation focuses on promoting learning and improvement rather than the pursuit of a certain letter grade in a course.

For better or worse, however, these types of programs constitute a minority in the United States, and though the experience may be better for the student, most institutions still use a fairly standard letter grading system that students will have to adjust to. The time investment this type of evaluation requires of teachers is likely not viable on university campuses with hundreds of students per course. As such, although other high schools, such as Sanborn High School, approach grading in a more qualitative way, it remains to be seen whether such grading methods can scale. Until then, more generalized forms of grading like the letter grading system are unlikely to be entirely replaced. Many educators, however, already try to create an environment that limits the role grades play in motivating students. One could argue that a combination of these two systems would likely be the most realistic and effective way to provide a standardized evaluation of students while promoting learning.


Grades and Grading

The Problem with Grades

Grading is a perpetually thorny issue. No one likes to assign grades, but virtually everyone acknowledges the necessity of doing so. Grading can be the cause of sleepless nights for students and teachers alike, as well as the source of frustration and dispute when two parties disagree over the appropriateness of a grade. Why this strife?

Grading is about standards, and standards imply judgment. Quantitative disciplines are somewhat advantaged in this area because people who teach these subjects use judgments like “right and wrong,” while people in the humanities and other argument-oriented disciplines are stuck with “better and worse.” No one gets off easy in this grading game. Many students will flee quantitative subjects convinced that such disciplines are tyrannical and uncreative. Those same students run back after a semester in the humanities, hissing all the way that grading is subjective, personal, and unfair. Scylla, meet Charybdis. Rock, meet hard place.

But maybe this predicament is caused not by the standards but by the way we apply them—or fail to apply them—throughout the semester. In quantitative subjects, running through a series of problems without giving students the opportunity to consider the whence, whither, and wherefore doesn’t exactly inspire them to think, excite their curiosity, or make them feel like they’re part of the game. No wonder they’re peeved when the questions on the test require original or high-level thought. Conversely, we may be setting students up for disappointment and frustration in the humanities when their contributions in section—no matter how far afield, poorly argued, or lacking in evidence—are greeted with a smile of approval, only to be skewered when that same level of thought appears in a paper.

The point is, standards aren’t just for tests. They’re for learning, thinking, discussing, the whole shebang. View grades not solely as big red letters written atop each assignment and quiz, but within the larger context of feedback. So explain the standards to your students, apply the standards (nicely) during discussion and problem solving, and show your students that you, your section, and their hard work are the ticket to meeting them. You’ll be a lot better off at grading time.

But your students are not the only ones who should be getting feedback in the classroom. A person’s teaching, like any other activity, only improves with practice and constructive criticism. The practice will come with time; the section “The Tables Turn! Students Evaluating You” will suggest ways to elicit that constructive criticism.

Why Grading is So Hard: The Jekyll and Hyde Effect

Most TFs begin their semester with the hope that they will be great teachers: their students will be inspired, they will learn the material, they will do the work, and they will get good grades — all because we, as teachers, will guide them through. But eventually they hand in an assignment and we are suddenly transformed into the merciless arbiter of an impersonal standard. No longer are we the friendly, helpful TFs that we once were.

We are the Grim Grader, slashing at the fields of undergraduate effort with the sharp scythes of A- and B+. We change hats; we shift loyalties. No wonder there’s some emotional fallout.

Easing the Pain

The best way to alleviate some of the tension between your roles as helper and evaluator is to set clear expectations and standards at the outset, preferably in writing. Students need to know what constitutes an A paper, what constitutes a B paper, and, heaven forbid, what constitutes an F (yes, Virginia, there are grades lower than C). Are students graded on attendance? Class participation? Will papers be graded on style as well as content? You don’t necessarily have to come up with these guidelines on your own, though: check with the supervising faculty member to see if any standards or grading strategies have already been set.

With such explicit expectations in place, students will be far more understanding when you make the leap from Joe Smiley the TF to judge, jury, and executioner. (Don’t worry, there haven’t been any real executions at Yale since the 1950s.) By setting clear standards at the outset, you’ll avoid a lot of student complaints about your grading. Here are some suggestions for making those standards clear.

Have All the Answers

In grading exams, lab reports, and problem sets, consider preparing an answer key (or some clear indication of what made for an average/better/excellent answer) and making it available to your students—perhaps posted outside your office or on the class web site. Provide the correct answer to each question and indicate which responses earned partial credit and how many points you’ve deducted for certain errors. You can then have students compare their work to this model before coming to you with complaints or questions about grading.

Samples of Brilliance

Weaker writers often have no idea what a strong paper looks like, and they will have great difficulty improving their writing if they don’t see what to change. Consider distributing sample papers to let students know what you consider to be worthy of a high grade. You might also find that a few of your best writers will be willing to have their papers made available (anonymously, of course) for future students to read and learn from.

Put That Preaching into Practice

Make reference to the written guidelines you give students when you comment on their work so that they can see where their performance does or does not measure up to your expectations. In the course of your first semester of teaching, you may find that you develop a new set of instructions (written or unwritten) for preparing a good term paper, problem set, or lab report. The next time you teach, write up these standards and discuss them with students before the first assignment is due.

Approaches to, and Techniques for, Grading Fairly

Clear expectations serve a useful purpose only if they’re fairly and consistently enforced. In other words, you can’t run a classroom in which attendance counts against you only if you’re a registered Republican. In a course where all students complete identical problem sets, papers, or exams, the same criteria must be applied in grading each student’s work. Here are some suggestions for increasing consistency, which can be applied in many different situations.

Grade the Question, Not the Student

When grading an exam or assignment with multiple sections, grade all responses to the same question (or set of related questions) together. This makes it less likely that a student’s overall level of performance on the exam or assignment will cause you to give a grade for a particular section that is undeservedly high or low. Naturally, this approach is about as exciting as an evening with C-SPAN, but the good and poor performances will stand out much more clearly this way than if you alternate among topics or question formats, and it will be easier for you to develop and adhere to consistent scoring criteria.

The item-by-item approach also works well if there are multiple TFs for the same course. If each of you grades a particular question or group of questions on every exam, you’ll be in a better position to assure students that everyone’s work has been evaluated in the same way.

Waiting for Godot

As you begin grading a particular assignment or exam question, read through several students’ answers without marking grades. At the very least, restrict yourself to tentative marks in pencil. This will give you a sense of the overall range of students’ responses before you start inscribing final grades in indelible red ink.

Although you’ll probably think about the components of a good answer before reading any exams at all, students will occasionally surprise you by interpreting the question very differently from what you or the professor had in mind. Similarly, a question will sometimes prove to be much more difficult than you anticipated. Because such problems are often the fault of the testing instrument rather than of the student, it’s important that you understand how students are actually approaching the question before you begin to grade.

After you finish grading, review the first few assignments you graded. You will often find that you were much nastier with the red ink at the beginning of the grading process than at the end, and you may be pleasantly surprised to find that some of the first assignments you graded made points other students failed to mention. You will also have developed a more refined sense of a “good” as opposed to an “average” or “weak” performance over the course of your grading, and you may realize that the first assignments you read were better (or worse) than you initially thought. For these reasons, you may not want to mark any grades in pen until you’ve finished with the whole set of exams or papers and are happy with the distribution of grades as a whole.

Grade Blind

If you’ve come to know your students well in section or lab, you may have definite expectations, hopes, or fears about their performance on major assignments. In order to avoid being influenced by what you know or anticipate about a student’s work, you might want to keep the grading as anonymous as possible: just fold back the cover sheet of each paper or exam so that you can’t see the student’s name. (If you want to do this with papers, you should make a point of asking students to include their name only on the cover page.)

Grading without regard to students’ identities does not prevent you from commenting on how students’ work has progressed (or degenerated) over the course of the semester. Once the actual assigning of letter grades is complete, you can always go back to your written comments and praise students who have made notable improvements (or caution students who have done the reverse).

Partners in Crime

No TF is an island. Cooperation with other TFs can take different forms, each of which has its own advantages and disadvantages.

  • Assuming your supervising faculty member does not hold a practice grading session, one possibility is for the TFs in a course to grade an exam or other assignment together, dividing the questions among themselves and conferring in unusual or borderline cases.
  • A second alternative is to exchange exams or papers with other TFs, ensuring that nobody grades their own students’ work.
  • Third, TFs in the same course may seek to standardize their grade distributions even if they do the grading individually. This might involve attempting to equalize (roughly) the percentage of students in each TF’s sections who receive a particular grade, or having TFs share with one another student work they feel exemplifies each grade category (an A paper, a B paper), or some combination of these strategies. After conferring with your fellow TFs, you might then need to go back and adjust borderline students’ grades up or down to ensure balance across sections.
  • Finally, the professor may have instructed all TFs to adhere to a certain grade distribution. This can bring problems of its own, but it does eliminate the issue of consistency across sections.

What to Do When Students Challenge Your Grade

A common scenario: you return students’ papers and, after the usual period of sighing and moaning, a student approaches you with the dreaded “I’d like to talk to you about my grade.” What then?

Wait a Minute

The first thing to do is stall for time. No joke. Don’t be pressured into hearing a case and making a decision on the spot. There will probably be other students around, and you might be in a rush to get out of the classroom. Unless the grade change is truly minor and unquestionable, set up another time when you can give the student your full consideration (within a few days, to be fair). Then, before you meet the student, take some time to remind yourself what your grading standards are. Also, if you have the student’s paper available, reconsider how the paper fits those standards (it’s always a good idea to make copies of your comments for future reference).

Another option is to have the student write out his or her side of the story and turn it in with a copy of the exam or paper. That way, you’ll have time to review the case before meeting to discuss it. If the case really is clear-cut and simple, it won’t take long to explain it, and it won’t take you long to make a decision on the merits of the student’s case.

Let students talk during such conferences. In fact, let them talk a lot. Resist the temptation to jump in with your defense. Shouting, “Zip it! You failed!” will only exacerbate the situation. Many students take getting a bad grade very personally, so don’t escalate things by making the grading process personal as well.

Why do students complain about a grade? There are several possibilities.

  • The student is embarrassed about getting a low grade and is trying to win your approval as a person, or perhaps trying to show you that she is smarter than the grade reflects.
  • The student is genuinely trying to learn how to write better papers or do better on exams.
  • The student is trying to figure out how to get a better grade in the future.
  • The student is just trying to get a higher grade right now.

Dealing with the last possibility can be frustrating, but don’t assume that that’s the reason when in fact any of the other possibilities might be the case. (We don’t have to tell you what happens when you assume, do we?) Always imagine that your student has higher motives, and aim your conversation at that level. You can always give the student the option of having the supervising professor read and re-evaluate the paper or exam. Just be sure to remind the student that the grade could go down even further.

One last thing: if you allow a student to rewrite a paper, make sure that you allow every student that opportunity. In this case, it can’t be only the squeaky wheel that gets the grease. You gotta grease ’em all.

“Grade Compression” (Or, Grading from A to…C?)

Once upon a time, in universities far, far away, getting a C meant that your performance was average. A was at the top, F was at the bottom, and C was cleverly placed right in the middle (don’t ask what happened to E). But now we live in the world of Upward Grade Homogenization (UGH, commonly called Grade Inflation or, as Yale prefers it, “grade compression”), where most students, TFs, and professors consider C to be a bad grade—not the worst possible grade, mind you, but certainly less than average. So, the question is, what are the possible effects of UGH on your grading procedures?

  • If you reserve A’s for truly excellent work and give out C’s only for truly terrible work, you might end up cramming all of your students into an over-crowded B/B+ range that doesn’t really differentiate between their levels of work.
  • You might start throwing out A’s like a clown with a bag of candy—everybody gets one. The truly exceptional work is mixed in with the mediocre.
  • Because C is considered a bad grade, you might be reluctant to give out D’s and F’s even if a student truly deserves them.
  • You might get crazy and start inventing new grades like “Super A+++.” These grades, while eye catching, will not be recognized by the Registrar.

There are divergent, strongly-held beliefs about grades and grade inflation, and the grading policies of individual TFs are unlikely to be the catalyst for any institutional change. In short, there’s not a lot to be said about UGH except that it exists. Bummer, babe. About the only thing we can say is that there is still a distinct spectrum of grades. If a student is blowing off the class or handing in terrible work, don’t be afraid to give out D’s or F’s. If a student hands in a term paper three pages long, poorly written, and smelling suspiciously like cheap beer, give it the grade it deserves: B-, of course.

When a Student Is Failing

Before midterm, the Registrar sends each faculty instructor or PTAI copies of a form that requests information about students doing unsatisfactory work, particularly those who are in danger of failing the course. However, it’s not always clear by midterm whether a student is on track to fail the course. Nevertheless, if you suspect that a student is in danger of failing, or even in danger of getting a D, let your supervising faculty know; if you’re a PTAI, you can use the form to alert the student’s residential college dean or you can contact the dean directly.

Unsatisfactory grades—D’s and F’s—are taken very seriously. If you give a student a D or an F for a final grade, there is a chance that the grade will be challenged. This is another reason to have crystal clear standards for how you plan to grade student papers, exams, and assignments. It’s also a reason to keep copies of assignments and exams and detailed records of each student’s performance for future reference.

Grading Student Writing

Grading student writing, especially for a first-time TF, can be a nerve-wracking experience. Let’s face it, some of us have gotten to the point where we don’t never think about the mechanics of writing papers no more.

Here are some suggestions that might help you get gooder at dealing with student writing.

Check in with Others

If you’re concerned about grading written work, share a few student papers with the instructor or other TFs in order to compare their responses with your own. TFs in some courses routinely exchange essays they feel are particularly good examples of each grade level. These can be accompanied by stale gum to give them a real collector’s feel: “I’ll give you a B+ from my Shakespeare course if you give me an A- from Contemporary Lit.”

Read Me, Please

Getting students to read your comments is one of the most difficult challenges in the grading process. Almost all students turn first to the grade, and some barely glance at the marginal comments, much less re-read the essay and reflect on how it might have been improved. This is less of a problem if every paper assignment includes the opportunity for consultation and revision, but that process can be very time-consuming and impractical.

One partial solution is to require small revisions for every paper. You might ask students to rewrite an introduction or conclusion, or to rework a key paragraph, which they would hand in at the next meeting. Students are more responsive to feedback when it is ongoing, focused, and clear.

Another solution is to summarize your margin notes or pick up on one or two big issues in a discursive comment (2-4 sentences) at the end of the assignment. This gives you a chance to refer back to previous work (“This concluding paragraph is stronger than last week’s”), give suggestions (“Review the passé simple!”), or even slip in an old chestnut or two (“Try harder!” or “Keep up the good work!”).

Don’t Overdo It

It’s usually a massive waste of time to overhaul a paper until it works well. Most TFs do want to point out basic grammatical and stylistic mistakes but don’t feel obliged to cover every last detail in the paper. You don’t want a situation in which you hand back a paper with more red ink on it than black (not that you should necessarily use red ink, but you get the point). In most cases there is a limit to the amount of constructive criticism a student can absorb. Consider choosing a few major points to emphasize instead of trying to be comprehensive. Also, be sure to respond to some positive aspects of the writer’s work (however much of a stretch this is) rather than only pointing out negatives.

Splitting the Atom

If it’s appropriate for your class, you might consider using split grades (e.g., B/A-) to indicate separate evaluations for style and content. Students who make great arguments but write their papers like VCR instructions might get an A for content and a C for style (averaging out to a B for the final grade). On the other hand, students who write beautifully but with an argument so obvious that it doesn’t need arguing (“Metallica rocks hard”) might get an A for style and a C for content.

Other Grading Issues Not to Be Dealt with in This Section

Certain Yale policies and procedures that significantly influence grading are not discussed in this section. These include issues such as “cut restriction,” dean’s excuses, withdrawals from a course, temporary incompletes, reading week, and Credit/D/Fail. For more on these fascinating topics, see Chapter 7.


A Beginner’s Guide to Standards Based Grading

By Kate Owens , Instructor, Department of Mathematics, College of Charleston

In the past, I was frustrated with grades. Usually they told me very little about what a student did or didn’t know. Also, my students didn’t always know what topics they understood and on what topics they needed more work. Aside from wanting to do well on a cumulative final exam, students had very little incentive to look back on older topics. Through many conversations on Twitter, I learned about Standards Based Grading (SBG) and I implemented an SBG system in several consecutive semesters of Calculus II.

The goal of SBG is to shift the focus of grades from a weighted average of scores earned on various assignments to a measure of mastery of individual learning targets related to the content of the course. Instead of informing a student of their grade on a particular assignment, a standards-based grade aims to reflect that student’s level of understanding of key concepts or standards. Additionally, students are invited to improve their course standing by demonstrating growth in their skills or understanding as they see fit. In this article I will explain the way I implemented SBG and describe some benefits and some drawbacks of this method of assessment.

I chose Calculus II to try an SBG approach because it was my first time teaching the course, so I could build my materials from the ground up. Also, unlike several other courses I teach, the student count remains low — approximately 25 per section.

Before the start of the semester, I created a list of thirty course “standards” or learning goals. Roughly, each goal corresponded to one section of the textbook. I organized the thirty standards around six Big Questions that I felt were the heart of the course material. One Big Question was, “What does it mean to add together infinitely many numbers?” The list of standards served as answers to these Big Questions. The list of standards and a description of the grading system were distributed to the students on the first day of class.

During the semester, students were given in-class assessments in the form of weekly quizzes, monthly examinations, and a cumulative final examination. The assignments themselves were similar to those found in courses using a traditional grading scheme, but they were assessed differently. Rather than track a student’s total percentage on each particular assignment, for every problem I examined each student’s response and then assigned a score to one or more associated course standards. I provided suggested homework problems both from the textbook and using an online homework platform, but homework did not factor directly into a student’s grade. Instead, if I noticed a student needed more practice at a particular sort of problem, I would direct her to the associated homework problems for additional practice.

During in-class assessments, a single quiz or exam question asking a student to determine if an infinite series converged might also require the student to demonstrate knowledge of (a) “The Integral Test,” a strategy for determining if a series converges or diverges; (b) “Improper Integrals,” the process used to evaluate integrals over an infinite interval; (c) some method of integration, such as “Integration by Parts”; and (d) some prior knowledge about how to evaluate limits learned earlier in Calculus I. For each of these concepts, I assigned a different score (on a 0-4 scale), roughly correlated with a GPA or letter-grade system. During the semester, I tracked how well each student did on each of the thirty standards.

Since some standards appeared in a multitude of questions throughout the semester, a student’s current score on a standard was computed as the average of the student’s most recent two attempts. Outside of class, each student could re-attempt up to one course standard per week. Usually these re-attempts occurred during office hours and were in the form of a one- or two-question quiz. My rationale for continually updating student scores is that I want grades to reflect a current level of understanding since I want students to aim for a continued mastery of course topics.   Over the course of the semester, their scores on standards can move up or down several times. Students are motivated to continue reviewing old material since they know that they might be assessed on those ideas again and their previous grades could go in either direction.
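The bookkeeping behind the “most recent two attempts” rule is simple to automate. Here is a minimal Python sketch of that rule; the class and method names are my own invention for illustration, not the author’s actual tooling.

```python
from collections import defaultdict

# Illustrative sketch only: track 0-4 scores per standard and report the
# current score as the mean of the (at most) two most recent attempts.
class StandardsTracker:
    def __init__(self):
        self.attempts = defaultdict(list)  # standard name -> list of scores

    def record(self, standard, score):
        if not 0 <= score <= 4:
            raise ValueError("scores are on a 0-4 scale")
        self.attempts[standard].append(score)

    def current_score(self, standard):
        recent = self.attempts[standard][-2:]  # two most recent attempts
        return sum(recent) / len(recent)

tracker = StandardsTracker()
tracker.record("Improper Integrals", 2)  # first attempt, on a quiz
tracker.record("Improper Integrals", 4)  # re-attempt during office hours
print(tracker.current_score("Improper Integrals"))  # → 3.0
```

Note that under this rule a score can move down as well as up, which is exactly the incentive to keep reviewing old material that the author describes.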

At the end of the term, each student had scores on approximately thirty course standards. To determine a student’s letter grade, I used the following system:

  • To guarantee a grade of “A”, a student must earn 4s on 90% of standards, and have no scores below a 3.
  • To guarantee a grade of “B” or higher, a student must earn 3s or higher on 80% of course standards, and have no scores below a 2.
  • To guarantee a grade of “C” or higher, a student must earn 2s or higher on at least 80% of course standards.

I adapted this system from one Joshua Bowman used. I like it because it captures my feeling that an “A-level” student is a student who shows mastery of nearly all concepts and shows good progress toward mastery on the others; meanwhile, a “B-level” student is one who consistently does B-level work. Also, this system requires that students earn at least a passing score on each course topic. In a traditional system, a student might do very well in some parts of the course, very poorly in others, and still earn an “above average” grade. In the system I used, for a student to earn an “above average” grade, they must display at least a passing level of understanding of all course concepts. While students aren’t initially thrilled with this requirement, most are happy once I explain that they can re-attempt concepts often (within some specific boundaries), so the only limit on improving their performance is their motivation to do so.
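The three guarantees above translate directly into a short function. This is a sketch under my own reading of the thresholds (the function name and “below C” label are hypothetical), not the author’s actual gradebook code:

```python
# Sketch: map a list of 0-4 standard scores to the guaranteed letter grade,
# per the thresholds described above.
def letter_grade(scores):
    n = len(scores)
    frac_4 = sum(s >= 4 for s in scores) / n  # share of standards at 4
    frac_3 = sum(s >= 3 for s in scores) / n  # share at 3 or higher
    frac_2 = sum(s >= 2 for s in scores) / n  # share at 2 or higher
    if frac_4 >= 0.90 and min(scores) >= 3:
        return "A"
    if frac_3 >= 0.80 and min(scores) >= 2:
        return "B"
    if frac_2 >= 0.80:
        return "C"
    return "below C"

# 28 of 30 standards mastered at 4, the other two at 3 -> guaranteed A
print(letter_grade([4] * 28 + [3, 3]))  # → A
```

One consequence, visible in the `min(scores)` checks, is that a single very weak standard caps the guaranteed grade no matter how strong the rest are — which is precisely the “passing level on every topic” requirement.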

There are three major advantages of tracking scores on standards:

  • I can quickly assess student performance.
  • I can give meaningful advice to students.
  • I can determine what topics are in need of review or additional instruction.

(In the original post, each of these points is illustrated with a screenshot.)

Students have noted that SBG has several benefits for them as well. They aren’t limited by past performance and can always improve their standing in the course. Many students who describe themselves as “not math people” or those who say they suffer from test anxiety appreciate that their grades can continue to improve, thereby lowering the stakes on any particular assessment. In my office, conversations are almost always about mathematical topics instead of partial credit, why they lost points here or there, or what grade they need on the next test to bring their course average above some threshold. The change in types of conversations during my office hours has been amazing, and for this reason alone I will stick with SBG in the future. Students review old material without prompting, they feel less stress over any individual assignment, we don’t have conversations about partial credit or lost points, and they are able to diagnose their own weaknesses.

With that said, the SBG system also has some disadvantages. First, it takes a thorough and careful explanation to students about the way the system works, why it was chosen, and why I believe it is to their benefit. Student buy-in is critical and it isn’t always easy to attain. I have found that spending a few minutes of class time discussing SBG every day for the first one or two weeks is more helpful than giving a lot of explanation on any particular day. Students need some time to think about what questions and concerns they have, and I encourage them to voice these in class whenever they like. Initially, students think that this system will be too much work for them, or that their course grades will suffer since past strong performance could be wiped out in the future. (In contrast, by the end of the semester, almost all students say they really appreciated this method and felt they learned more calculus than they would have in a traditionally graded course.) Second, several students complained that their grades were not available through our online learning management system; I still haven’t found a way to convince our online gradebook to work in an SBG framework. Instead, students must come to my office to review their scores with me outside of class time. Third, choosing both the correct number of course standards as well as a thorough description of each standard has been challenging. It’s difficult to balance wanting each standard to be as specific as possible while keeping the total number of standards workable from both my viewpoint and that of the students.

After several semesters of using an SBG framework, I believe the benefits to the students outweigh the disadvantages. At this point, I don’t have any firm data about student learning outcomes, but I do have some anecdotal evidence. The feedback from my students about this method of grading and, in particular, the details of my implementation has been very positive. I have received several e-mails from former students who, even semesters later, realize how much SBG changed their perspective on the learning process, or who wish their new instructors would switch to an SBG system. Students have mentioned in course evaluations that they feel their grade accurately reflects how much calculus they know, rather than how well they performed on a particular assignment or how much they were punished for making arithmetic mistakes. As one student noted, “this class was not about how well you could take a test or quiz or do homework online that sucked. It was about the amount of calculus you understood and your effort to be better at it.” That is exactly the goal I have for my course.

If you are interested in trying an SBG approach in your own courses, here are five questions to jump-start your journey:

  • What are the core ideas of your course? What concepts or ideas do you want students to master?
  • How many standards do you think you can track? You need them to be specific enough that students can understand exactly what each one means, but you also need to have few enough that your grading workload is manageable. I have 30 for a 16-week semester.
  • Will you allow re-attempts? What kinds of limits will you set, if any? I found that limiting students to re-attempting only one standard per week was essential in cutting down my grading workload. This limit also gave students the opportunity to focus on one topic at a time, rather than re-attempting several at once just to see what would stick.
  • How will a final assessment, project, or exam count? In my course, a student’s course score on each standard is a weighted average: 80% comes from their pre-final exam score and the remaining 20% comes from the score earned on the final itself. In this way, the final exam contributes about 20% to the student’s letter grade in the class, a figure in line with what is commonly used in my department.
  • How will you convert all the scores on standards into a letter grade?
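The 80/20 weighting described in the final-exam bullet amounts to a one-line computation. This sketch (with an invented function name) simply restates it:

```python
# Sketch of the weighting described above: a standard's course score is
# 80% pre-final-exam score plus 20% final-exam score, both on the 0-4 scale.
def course_score(pre_final, final):
    return 0.8 * pre_final + 0.2 * final

print(round(course_score(3.5, 4.0), 2))  # → 3.6
```

Because every standard is weighted the same way, the final exam ends up contributing about 20% to the letter grade overall, as noted above.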

Online SBG Resources

  • Twitter hashtags: #sbg, #sbgchat, #sblchat
  • http://tinyurl.com/SBGLiterature , Scholarly articles related to SBG (list maintained by Matt Townsley)
  • http://thalestriangles.blogspot.com/search/label/sbg , SBG blog posts by Joshua Bowman (@Thalesdisciple)
  • http://shawncornally.com/wordpress/?p=673 , Standards-Based Grading FAQ by Shawn Cornally
  • http://blogs.cofc.edu/owensks/tag/sbg/ , my own blog posts about SBG
  • https://plus.google.com/communities/117099673102877564377 , a newly formed Google Plus community for anyone interested in conversations about standards-based or specifications-based grading

19 Responses to A Beginner’s Guide to Standards Based Grading


Hello – I am a senior studying math education at the University of Illinois. I will be student teaching Algebra 1 and Geometry next semester, both of which use Standards Based Grading methods. To be honest, prior to reading your blog post I did not have a very positive opinion of SBG. To me, it seemed like too discrete a way of assigning students an assessment score. However, the comment you stated that I really liked and will stick with me is, “I want grades to reflect a current level of understanding since I want students to aim for a continued mastery of course topics.” This really got me thinking, since I remember all the times both in high school and college when I thought, “If only I had another chance…I really knew that material, but I wasn’t in the right mindset in that moment.” You’re right…SBG allows this to happen, and from a student’s perspective, I can see why this would probably be preferred. It seems like it’s worked really well at the college level with your Calculus students. My one worry is that since I will be using this with freshmen/sophomore students in high school, they will avoid doing well on exams from the start since they know they can just retake it if they don’t do well. While it is clear your students’ motivation increased with your SBG implementation at the college level, I’m not so sure about how to make it work so effectively next semester with my high school students. Do you have any advice on strategies I can use to make it seem like the optimal strategy and have students get the most out of it?


Hi Cam! Thanks for your comment. I hope to throw together some of my thoughts in reply, but please ask me again if I miss a key concern or question.

As it turns out, many of the educators pioneering non-traditional grading approaches are in the K12 community. For example, Frank Noschese (@fnoschese on Twitter; website https://fnoschese.wordpress.com/about/) is a physics teacher at a secondary school whose standards-based grading philosophy inspired me to make the leap. I have joined a Google+ community of standards-based learning educators and we would love to have your insight as you navigate your own path — join us at https://plus.google.com/communities/117099673102877564377 . Indeed, as the community formed, a few people in the K12 community were happy to sign up. Their experiences will possibly be more aligned with what you’ll see next year than my own. I consider myself a relative newcomer to the SBG/non-traditional grading movement.

As far as your specific concern: “[T]hey will avoid doing well on exams from the start since they know they can just retake it if they don’t do well.” I was worried about this, too. What I found is that limiting the number of standards that could be attempted weekly helped quite a bit. I have 30 standards per 16 weeks, and at a one standard per week cap, students realize they must get close to mastery on at least some topics. Additionally, after the first exam, I try to encourage students as much as possible to come to my office, even if they believe they aren’t ready to re-attempt yet. Sometimes I find that what they are lacking isn’t mathematics, but instead confidence; after a brief chat, I can tell they know the material, and what they seek is encouragement instead of insight.

I think at the heart of your concern is something every educator must face — occasionally we all have students who, for whatever reason, don’t put 100% of their effort into their studies. I wish SBG was a magic wand for this issue, but it isn’t. In my experience, a student who earns a C-minus in a traditional course is very likely to earn a C-minus in my standards-based course for exactly the same reasons. As an instructor, my target is those B or C level students who have a lot of motivation & work ethic (but perhaps who lack confidence) to improve their standing. If a student is determined to fail a course, there isn’t much I can do — but if a student really wants to learn and demonstrate mastery of the material, I see it as my job to cheer them on as they work toward this goal.

I hope that this helps and that you’ll come join our Google+ community and conversation; or find me on Twitter: @katemath.


I am also a senior studying Math Education at the University of Illinois at Urbana-Champaign. I will be student teaching next semester in a high school in Champaign that also uses SBG.

I think it’s really interesting that you were able to implement this at the college level. I’ve only heard of this being used in K-12, as you’ve mentioned in your reply to another commenter. Do you feel that this method of grading can be applied more widely at the college level? I feel that for students who were recently introduced to SBG after a traditional style, going back to traditional grading in college is unhelpful in the long run. What are your thoughts on this?

Hi Peter, I am hoping to develop an SBG approach in many of my college courses. Next semester, I’ll be implementing an SBG system in a very different course — “College Algebra”, which is the lowest level mathematics course offered at the College of Charleston. My process of switching to an SBG philosophy has been strongly supported by the advice, knowledge, and experience of several online colleagues. I have found that asking lots of questions has led to many fruitful conversations about these issues, so I encourage you to keep asking whatever pops to mind.

As far as students switching from SBG to traditional (or the other direction), this is something I have also wondered about. My own conclusion is that my students face a similar transition between any two instructors. For example, one instructor might focus a lot on grading homework, whereas another doesn’t grade homework but has daily graded quizzes. These challenges are common in every college experience, regardless of grading approach or philosophy. My own experience makes me believe that I should do what I feel is in the best interest of my students, even if this is a different approach than the one taken by my colleagues. I believe that having an open and honest dialogue with both groups — both my colleagues and my students — is important.

Lastly, I’ve received a lot of feedback from prior students that my SBG implementation has changed the way they approach their education for the better. They value our conversations on what it means to learn, on why I think the SBG approach is in both their interest and my own, and also on how their education is essentially their responsibility. It is my job to give them a clear picture as to what they know, where they can improve, and support that improvement whenever possible; it is their job to “do the work,” face the challenge head on, and strive to do the best that they can. I hope to be more of a cheerleader or coach for them, rather than a judge & jury. Students seem to agree that an SBG philosophy allows me to do this and they appreciate the extra work it takes, on their side and my own.

Come join our Google+ community if you’re interested in perspectives apart from my own! We are looking forward to continuing this conversation.


I have been slightly exposed to standards based grading in my last two years of college, and I like it for a few reasons. Namely, I like that it allows for better understanding of individual progress in actual learning than traditional grading, and that it redefines success by allowing students to retest and continue to demonstrate learning and improvement. You also mention several disadvantages, but many of them are results of SBG not being “mainstream”. Clearly, this post shows that standards based grading is a success for Calculus II, and probably for most other math courses, so why is it so difficult to facilitate a switch to SBG several orders of magnitude larger than a single classroom? I understand that education reform is slow to begin with, and gets slower the more you try to reform, but don’t many educators share your perspective on SBG? I know that as a student, I would appreciate standards based grading far more, as it just feels more like learning than traditional grading does. As a future teacher, I want to afford my students this opportunity, but I fear the community and department backlash for being a “new teacher” with new tools. Is there a strategy better than just buckling down and grading in this manner regardless of what anyone else says?

Hi Kyle, I am in the process of planning for next semester. I’ll be teaching several sections of our “College Algebra” course for the first time, and I’m developing an SBG-approach for this class. The class will be quite different than Calculus II. First, there will be many more first-year students. Second, many of them won’t be in science or mathematical majors. I am excited to see how they respond. Third, I’ll have many more students than I did in Calculus II. I’m hoping that it goes well; I plan to blog about what I learn at my own blog (http://blogs.cofc.edu/owensks) and also share my experiences with our Google+ community (https://plus.google.com/communities/117099673102877564377).

You mention: “I fear the community and department backlash for being a “new teacher” with new tools.” I’d be lying if I said this hadn’t crossed my mind as well, especially considering that this semester (Fall 2015) I faced my Third Year Review as part of our Tenure & Promotion process. With that said, I believe it’s my job to use my best professional judgement to figure out what I think is best for my students — meanwhile focusing on being completely transparent about the hows & whys of my choices, whether to my department, my administration, or my students. For me, I can’t imagine going back to a traditional grading philosophy because of the experiences I’ve had in my SBG courses. In outlining the “hows and whys” in my T&P documents, I found that my colleagues were very supportive of my non-traditional approach. After ten years in the university classroom, I have found all the departments I’ve worked with to be places that welcome innovation, so long as that innovation is well-supported by strong professional judgment and honest, ongoing conversations.

Come join our Google+ community and see if everyone else will echo my experience. (I’d be curious to know what they have thought throughout their careers!) Check out https://plus.google.com/communities/117099673102877564377 .


The SBG system is a great step forward in the way teachers and professors approach learning. Speaking from personal experience, this system of grading allows students to learn at their own pace, puts them in charge of their own mastery of the material, and ultimately reinforces the subject matter. It also prevents one “bad day” from tanking a student’s grade. Of course there are limits to where and how the SBG system can be applied, but for Calc II, it worked beautifully. Dr. Owens was able to teach one of the best — and yet one of the hardest — classes I’ve ever taken, while allowing me to learn at a rate that suited me and promoted my learning. At least for every math class I have ever taken, SBG would’ve improved the experience by promoting learning as opposed to memorizing.


I have been exposed to SBG along with the concept of visual learning, and I have fallen in love with both of these ideas, but I am getting nervous about implementing them in my classroom. I appreciated that you highlighted the pros and the cons that you discovered. The thing that encourages me the most about your review is that you said discussion in your office went from partial credit to math topics. Isn’t that what the discussions should be? Student learning seems like it would increase so much if students were concerned about learning, not their grade. Your point that student buy-in is crucial, and your four questions, are exactly on point. One thing I am really nervous about is the amount of time it seemed you put in – and you mentioned that it was for smaller class sizes. Do you have any advice for SBG in a high school with 35 kids to a classroom and six different classes? I think it would be a great benefit to my students and school to move toward SBG, but I am afraid to take that first step.

Hi Ali, I was nervous too, before my first SBG class. I think this is just part of the process we all go through when making big changes to our courses. As far as particular advice about your high student count (35*6), I would suggest designing a system that is easy (perhaps Pass/No Pass?) and somewhat automated — for example, if you have access to test generation software, using that to create multiple versions of a single re-assessment rather than having to write each one individually. You’re welcome to join our Google+ community (the link is above) and there you might find people whose SBG experience is more akin to your situation & who can offer even more insight than I can. Good luck 🙂


This is amazing! I do have one question for you: How do you go about recording your grades? What gradebook program do you use?


Same question. Also, on the retakes … were there problems about access? I mean, that some students could make the office hours and others could not?

Hi Kevin. I didn’t have any access problems. I tried to schedule my office hours around times I knew the students would be free. Occasionally, I’d set up an appointment to meet with someone if they really had a conflict. In cases of a busy week, our admin assistants help us proctor, so rarely (once or twice) I left a re-assessment quiz with them for a student to take during normal business hours. I didn’t like this option since I always wanted to sit down and chat with the student before they tried another problem, just to help clear up any underlying misunderstandings of the material.

Since writing this post, I’ve started using our online LMS gradebook. It isn’t a great fix. For example, since students take quizzes a different number of times, this data can’t really be stored in the gradebook. We have a D2L product. I did figure out how to do a “Selectbox” grade, so I have my EMRN system there. I have one column per standard and I update the dropdown menu each time a student makes an attempt at a standard. I also save some data in Excel on my office computer where I feel like I have more control over how calculations are handled.

Hope this helps!


Hi, I am a high school math teacher who teaches a variety of classes, from Algebra 1 to co-teaching Dual Credit College Algebra. I am really interested in SBG because it sounds like it focuses students more on math topics than on their grade all the time. I come across the topic of grades almost every day, and I really feel that students are so wrapped up in the grade that they aren’t really learning as much; instead they are trying to memorize. I already implement a rework process within my regular grading system because I really like to see my students find their errors and learn from them. However, I don’t know how well SBG would work in the high school setting. Is SBG something that should be implemented school-wide to help the students understand the process, or will it not matter if I am the only one in the high school to implement it? How many of your colleagues use this same system?

Hi Kristie,

So far, none of my colleagues in my department are using an SBG approach. However, there are a few folks around the university who are trying either specifications grading or SBG outside of the math department. I think your students would benefit from this approach even if you’re the only one using it.


Thank you for sharing!! I have just started my third year at a high school and am teaching a new prealgebra course with students who have failed math classes in the past. I felt SBG would be a good way to get these students back into a growth mindset. So far they have fought against it quite a bit, but mainly because they don’t like change. They also seem opposed to not having extra credit opportunities. I am curious: do you have anything in SBG that is similar to extra credit?

I felt like EC wouldn’t really fit an SBG approach, where learning must be shown: since once you show you have mastered a standard, your grade will reflect that growth, the EC would not be needed.


Kate, This post is one of the most concise and comprehensive sources on Mastery-Based Grading I’ve read. And I’ve read a lot, because I am currently creating my Mastery model. Right now I am struggling with balancing my general and specific outcomes. This is why I am especially interested in your 6 Big Questions and how your specific standards answer these questions. Here is my question: Have you used these Big Questions in your grading in any way? How were they present in your system? In other words, did they play any other role other than helping to create a meaningful structure of the course (which is already a lot!)? Thank you!!!

One of the big struggles with moving to an SBG system is you really have to figure out what it is you want your students to learn. For me, using Big Questions has been really helpful in my course prep because it focuses my attention on what the point of the course is. Also, in past semesters, I’ve often asked students on the final exam “What were the Big Questions in this course?” and I’ve been really impressed with their responses.

I realize that my students will probably, at some point, forget how to do things I’ve taught them (think: quotient rule! integration by parts!). And I think I’m OK with this. What I would like them to remember from my course, even if they forget the details about the methods we’ve implemented, is what kinds of questions we were asking. So in my instruction and documentation, I try to make clear “This is the Big Question we’re struggling with right now”.

So, specifically: 1. I don’t think the Big Questions really are part of their grade (although sometimes I ask my students if they remember them or not). 2. They are present in my system as an organization tool, both when I’m writing my standards at the start of the course, and also as a structure within the semester to bring the conversation back to “What are we even trying to do today?”

I hope this helps!

Thank you Kate, it definitely does! And I can see how important it is for a Calculus course. My general outcomes/big questions for a high school Algebra 2 course may not be so profound, although just like in Calculus, I’d like my students to internalize big ideas about relationships between real-world processes, functions, equations, graphs, etc., rather than necessarily particular types of equations and graphs. So I can totally accept your philosophy and practice! Thanks again, Yelena


This sounds great for math or science classes. I teach high school history, and we focus on citizenship along with reading and writing using evidence to support claims. I find that many of my students come to ninth grade without any knowledge about their country due to more emphasis on reading and math. Elementary grades have stopped teaching history to make sure kids are reading better (and they are not) or working on new math techniques. In using standards-based grading for history, how will I be able to assess students’ knowledge based on the state standards accurately? Much of history is based on knowledge. One cannot write about history without learning the basics. When students reach the high school level, they should be able to read and write in an acceptable manner, but we all know students are passed on, and I believe part of the problem is this re-do until they get it. I do not think all students will always get it. Life does not always offer second chances, but that is what we are teaching them. That is why we have students who fail, especially when they try college. I know this will not be read or responded to, but I am very skeptical of passing everyone.


Opinions expressed on these pages were the views of the writers and did not necessarily reflect the views and opinions of the American Mathematical Society.


Berkeley Graduate Division


Grading and Time Management

Many GSIs are familiar with the difficulty of keeping grading within the time constraints of graduate student work and other academic and personal commitments throughout the semester. If you are directly in charge of course design and assessment, consider what is reasonable for you to do in a semester and scale your and your students’ workload accordingly. Or, you might need to speak with your Instructor of Record if you anticipate the amount of grading will go beyond your contract hours. Below are a few strategies that can help you manage your time and make the process less stressful.

Before Students Submit Their Work

Remind students of the course grading policies when you announce an exam or introduce a new assignment. This can head off superfluous regrade requests and late work.

Review and clarify the assignment design. Clearly worded assignments and clear learning objectives will help students have a much better sense of what is expected of them and thus help them achieve those expectations. Make sure that exam questions are vetted thoroughly prior to the exam.

Clarify how you plan to grade the exam or assignment – either on your own or with your Instructor of Record and fellow GSIs if teaching collectively. Design and distribute a grading rubric, and test it out on a sampling of papers. It may also be helpful to look at a representative sampling of student work to get a sense of the common errors prior to creating your rubric.

For longer assignments, build peer review and revision into the assignment process. Not only does this head off the temptation for students to finish their assignment the night before a deadline, it helps students better recognize the separate stages and revision processes that necessarily go into making great work. See: Drafts, Edits, Revisions and Review and Revision .

Have students double-check their own work for careless errors or mistakes rather than catching them yourself. You might want to include a checklist that students are required to fill out and attach to their submitted assignment. Here are some examples:

  • I proofread this paper at least twice for grammar and punctuation.
  • I asked at least one other person to proofread the paper.
  • I ran the paper through a spell checker.
  • I have formatted this assignment according to the citational standards accepted in this course (MLA, APA, etc.).
  • My paper is formatted in 12 pt font, Times New Roman (or other accepted font), and double spaced with regular margins. 
  • My paper has a title. 
  • I have included my name, student ID, the date, and section number for this course. 

Adapted from: Barbara E. Walvoord and Virginia Johnson Anderson (1998), Effective Grading: A Tool for Learning and Assessment (San Francisco: Jossey-Bass), 128–29.

Consider asking students to turn in a cover letter  with their own evaluation of their work’s strongest and weakest points as well as the students’ thoughts on how they could improve the work. This may help you later on when writing comments tailored to each student’s concerns about their work. (See Commenting on Student Work, below).

Specify the means of submission . Most likely, submissions will be uploaded to bCourses or Gradescope, though if paper submissions are preferred, then specify any requirements.

Consider anonymous grading. Have your students label their assignments and exams with their SIDs and not their names.

Set up a grading technology tool like bCourses or Gradescope. Tips for using Gradescope can be found below. 

While You Are Grading

Grade while you are in a good mood.

Grade with company! In addition to being more fun, the other GSIs are a resource for grading questions. Also, if you are grading a large lecture course, it can streamline the grading consistency checks. To ensure consistency, exchange a few papers in each score range with the other GSIs, and grade them independently. Compare the scores and take corrective action if necessary.
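The consistency check just described (exchange a few papers across score ranges and grade them independently) can be sketched as a small comparison of the two graders' scores. The score data and the 2-point tolerance below are hypothetical illustrations, not part of the guide:

```python
# Sketch: two GSIs grade the same exchanged papers independently and
# compare.  A large average gap signals that the group should recalibrate.
# The scores and the 2-point tolerance are hypothetical.

def mean_abs_gap(grader_a, grader_b):
    """Average absolute score difference between two graders, paper by paper."""
    assert len(grader_a) == len(grader_b)
    return sum(abs(a - b) for a, b in zip(grader_a, grader_b)) / len(grader_a)

gsi_1 = [18, 14, 9, 20]   # scores on four exchanged papers (out of 20)
gsi_2 = [17, 15, 12, 20]

if mean_abs_gap(gsi_1, gsi_2) > 2:
    print("Graders disagree; discuss the rubric before continuing.")
```

Even without any code, the principle is the same: quantify the disagreement on a shared sample before splitting up the rest of the stack.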

Time yourself. Try to limit how long you spend grading each assignment (e.g., “I want to grade on average 20 problems per hour” or “I want to spend at most 15 minutes per essay”). If you find yourself puzzling over a particular paper, set it aside to grade last, when your sense of all of the students’ work has been fully developed.

If the assignment has disjoint parts, grade each part separately (e.g., if an assignment consists of three problems, grade the first problem for the entire class before you proceed to grading the second problem, etc.).

For essays, avoid the temptation to mark up or edit each page as you read and instead read the paper quickly once for argument and organization. Then, identify one or two areas where you will focus your comments and feedback during a “second pass” reading. Using a rubric will help you limit your remarks to the most essential learning outcomes you are assessing.

Sort the assignments into stacks as you grade (one stack for each grade). When you are done, check through each stack for consistency. Once you are satisfied, mark the assignments with the scores.

Make notes to yourself as you grade. This will help with consistency and make it easier to find student work if you change your mind.

You are likely to take a break in the middle of the grading task. When you resume grading, first look at papers you’ve already graded to reset your mental scale.

When you are finished grading, look again at the first few assignments you graded to see if you still agree with yourself.

For additional tips for writing-based courses, see Time Management Suggestions for Grading Student Writing .

Commenting on Student Work

Identify common problems students had with an assignment and prepare a handout addressing those problems. This helps you to avoid having to write the same comments multiple times. It also enables you to address the problem in more detail and helps students realize that others share the same problems.

Type your comments. This has a number of advantages. It allows you to keep a computer record of each student’s progress over the semester; comments can be more detailed; longer comments on common problems can be cut and pasted from one assignment to another; and it is easier for the students to read what you have written.

Do not comment on every problem or point. Focus on a couple of major points, keeping in mind the overall learning objectives for the assignment. This not only helps you to grade more efficiently, it also avoids overwhelming the students. It enables them to focus more effectively on the areas of their work where they need to put more effort to accomplish the learning outcomes for the course.

If you have asked students to submit a cover letter with their own assessment of the work’s strong and weak points, you can take these points into consideration when responding to that student work. This emphasizes that feedback is more of a dialogue with the student and less of a one-sided evaluation. 

Make sure you’ve included enough comments that the students can discern why they received a particular grade and how to direct their future effort to better achieve learning outcomes.

Instead of writing lengthy explanations for a complicated issue, write a brief note asking the student to come see you (“See me” or “Please see me in office hours”). A live conversation gives you the chance to hear what the student found specifically difficult about the assignment and to tailor your feedback accordingly. It also gives students a chance to ask clarifying questions about complicated issues. Keep track, and remind students if they forget to follow through.

After You’ve Graded

If appropriate for your course or section, use a spreadsheet or the Grades tool in bCourses to calculate grades. If using the bCourses grader or Gradescope, these can be directly imported to the Grades tool.

If a student consistently turns in unsatisfactory work, meet with him or her to figure out why and develop a plan of action. Often a student just needs guidance on how to approach an assignment or a more effective study strategy. If there are additional barriers preventing a student from doing their best work, this may be a time they can discuss those with you and formulate a plan to address them.

If returning work in person, try to do it at the end of section to maintain student focus, but be sure to leave enough time to discuss common problems with the class in sufficient detail.

Final Thoughts on Time Management

Document how much time you are spending, and on what, and re-evaluate. If you feel you are spending more time on grading than is warranted, speak with the Instructor of Record and discuss options. As your supervisor, the Instructor of Record is responsible for making sure the time expectations do not exceed what is stated in your appointment letter. Perhaps you can change the grading criteria to streamline the process. Is it necessary to grade every problem on an assignment? Occasionally, instructors in problem-based disciplines decide to grade a random subset of problems on an assignment (after informing the students, of course). Are comments (instead of a grade) sufficient on rough drafts? Can you use a simpler rubric (e.g., pass/not pass instead of a five-point scale)? Can you have the students grade each other’s quizzes in section? Remember, the focus should be on helping students move forward in their learning. Sometimes the best ways to do this are low-stakes, ungraded assessments such as quizzes. These steps can also help reduce stress and anxiety both for students and for you as the instructor.
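Grading a random subset of problems, as mentioned above, is easy to do reproducibly with a seeded random choice, so every student's assignment is graded on the same problems and the selection can be audited later. The assignment size, subset size, and seed here are hypothetical:

```python
import random

# Sketch: choose which problems to grade on a hypothetical 10-problem
# assignment, grading only 3 of them.  Seeding the generator makes the
# same subset reproducible (and auditable) across the whole class.

def problems_to_grade(num_problems, num_graded, seed):
    """Return a sorted list of distinct problem numbers to grade."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(1, num_problems + 1), num_graded))

chosen = problems_to_grade(10, 3, seed=2015)
print("Grading problems:", chosen)
```

The key design point is the fixed seed: rerunning the script yields the same subset, so all sections and all GSIs grade identical problems.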

In the event you have already discussed these matters with your Instructor of Record, you can speak with your department’s Faculty Advisor for GSI Affairs.

Additional Tips: Getting To Know Gradescope

Likely you will be dealing with digital submissions, so it will pay dividends to familiarize yourself with the grading tools. Here, we’ll describe some useful tips about Gradescope that can speed up some aspects of grading.

Map Out Student Work With A Grading Outline And Template

To organize student submissions by problem, Gradescope needs to know which pages of a submission correspond to a particular prompt or problem. These demarcations can be made either by the student or by the grader. As a GSI you can save a lot of time by having students mark up their own submissions.

To allow students to tag their assignment, you must add an outline of the assignment. After you upload a template of the assignment, Gradescope will prompt you to add an outline of the problems. Click the “+” sign to add a box for each sub-part. Even if the point values are not finalized, completing this step enables students to mark which regions of their submissions correspond to different problems. This information is what lets you grade one problem at a time across all students efficiently.

For exams where students do not handle the submissions themselves, it is recommended that the exam be spaced out so that students’ work fits neatly in the blank space without the need for scratch paper. In Gradescope, you can upload a blank copy of the exam and label the regions that correspond to the outline of problems. The scanned exams will then all have the same format as the template, and Gradescope can tag them automatically according to the template version. This consistency also allows Gradescope to handle multiple exams scanned and uploaded as a single file! For more information on instructor-uploaded submissions, see their documentation page.

Keyboard shortcuts

  • When grading an assignment, you can set up various rubric items. Instead of clicking on each rubric item that applies to a student’s submission, you can press keys 0–9 corresponding to the items.
  • To navigate between students (on the same problem), use the left and right arrow keys
  • To navigate between problems (for the same student), use the ‘.’ and ‘,’ keys
  • If a student’s solution to a single problem is spread over multiple pages, use the ‘j’ and ‘k’ keys to move between pages.

For a visual reference, Gradescope has several video tutorials.

Swarthmore College - ITS Blog


How to Grade Online Assignments and Exams

An increasing number of Swarthmore faculty are using online grading for assignments and exams.  Since students create most of their work digitally, it makes sense to submit an assignment electronically and avoid the need to print a hard copy.  ITS provides many different ways to collect and grade assignments.

Moodle has several tools for electronic assignments and exams.

Moodle Assignments

The Moodle assignment activity is an easy way to collect assignments from students and simplify the process of setting up due dates, granting individual extensions, and keeping track of which students have turned in their assignment.  Moodle also provides a basic interface for viewing and grading the assignment.  It is possible to set up more complex grading forms, including rubrics.  The Moodle assignment also works well for courses with student graders because it eliminates the need to deal with stacks of paper assignments and makes it easy for instructors to see the grading status of each assignment.

Moodle Turnitin Assignments

In addition to the built-in Moodle Assignment, there is a Moodle Turnitin Assignment.  Turnitin is best known for plagiarism detection, but it also includes a grading tool that makes it easy to mark up papers with frequently used comments, enter text feedback, and even record audio notes for a student.

Moodle Quizzes

The Moodle Quiz activity can be used for an online exam.  Quizzes are set up to be taken within a certain time period.  Students can be prompted to either enter text directly in the quiz, upload a document with their responses, or answer multiple-choice type questions.  This tool is handy for keeping track of a timed exam while allowing for extended time or alternate test days for certain students.  If you dread reading student handwriting in a blue book, this is worth a look.

Other tools

In addition to Moodle, ITS provides a number of other ways to grade assignments electronically.

Academic Technology has been running the Teaching with Tablets  program for several years and has provided many faculty with iPad Pros and Microsoft Surface tablets to facilitate teaching and electronic grading.  The tablets come with a stylus that is perfect for marking up student papers.

Honors Exam Software

If a higher level of security is needed, ITS can set up the same software as used for the Honors Exams for a midterm or final.  This software blocks the use of other applications and connections to the Internet.

Grading Code

The following tools are aimed at assignments and exams that involve programming.

GitHub Enterprise

If you are asking students to submit computer code, Swarthmore has its own installation of GitHub for use by community members, available at github.swarthmore.edu.  It is possible to set up and share repositories with students, collect the code, then grade it and enter feedback in the repository.  The Computer Science department has done a great job of scripting this workflow to make it easy for faculty and students.

MATLAB Grader

If your students code in MATLAB, MathWorks has just released MATLAB Grader to make it possible to assign and collect student work.  This is a new program; if you are interested in trying it out, please email [email protected] and we’ll be in touch.

If you are asking students to code in Python, Swarthmore’s JupyterHub installation could be useful to provide a web-based Python coding environment for students.  ITS is working to set up the nbgrader plugin to facilitate distributing, collecting, and grading assignments.  If you’d like to check it out, contact Andrew Ruether in ITS (aruethe2, x8254).

Featured image: Assignment by Nick Youngson CC BY-SA 3.0 Alpha Stock Images


How You'll Be Graded in an Online Course


Some subjects lend themselves better to sit-down exams, while other subject knowledge is better evaluated through presentations and papers. Your area of study and your degree level will probably affect the types of assignments you receive in online courses.

If you’re concerned about grades and grading, take a moment to review the following assessment methods. Once you’re familiar with the possibilities, you may want to ask several college enrollment advisors which methods are most commonly used within their programs.

Proctored Exams

Some online courses require you to go to a testing center in your area when it’s time to take tests. At the testing center, students need to present photo identification so test proctors can ensure that no one is cheating. Proctors then oversee the exam for its entire duration. This process, along with a monitored testing environment, can make some students nervous. And the nearest testing center may be a considerable distance from your home.

You can avoid proctored tests by checking with online programs, before you enroll, to determine if any on-site testing is required. If you do wind up selecting a school that requires proctored exams, make sure you prepare for them with “test runs”. Try driving to the site a day or two in advance; note how long the commute takes, so you won’t be late or lost on test day. Ask your instructor if it’s possible to try a practice exam at home. Tests are always easier when you’re familiar with the format.

Timed Quizzes and Exams

Timed, at-home exams are more common than proctored tests. You’ll be asked to log in to a site where you’ll gain access to a quiz or exam, which is usually in a multiple-choice format. As soon as you log in, the clock starts, so be sure to log in only when you’re ready to begin. Many times, you’ll have the option of trying practice exams that will help you prepare for the material and the test format. Remember that one cause of test anxiety is a lack of familiarity with the process. If you want to get rid of test nerves, a great way to do so is to take a practice test at least twice.

Discussion Board Posts

Some instructors evaluate your performance based on the quality of your interaction with others in the discussion board forum. If you’re a quiet person, you’ll have to push yourself to vocalize thoughts and questions. Start a habit of reading with a pen in hand. Jot down your reactions and questions, so you’ll have comments to post. Keep in mind that both quantity and quality are important. Five posts a day won’t do you any good if you’re not expressing anything worthwhile. Try responding to your classmates’ questions; pose your own questions; reference outside readings and websites that relate to the class topic. If you agree or disagree with points being made, offer examples that support your position.

Collaborative Activities

Group projects challenge you to work together with your classmates. Whether you’re creating a paper, a presentation, or a portfolio, your group will need to divide roles and responsibilities in a clear and effective way. You may wish to use the space that’s provided by your college or university (like a group discussion board), or you may prefer to work via email and/or conference call. The key is to understand your instructor’s expectations, and to develop a checklist that corresponds with the grading rubric.

Don’t be afraid to speak up if you’re concerned about the project’s direction, or if some group members aren’t pulling their weight. Remember that group assignments are intended to prepare you for the working world and for the everyday collaboration that business professionals must accomplish. Get in the habit of taking your work seriously. Read our article on how to survive group work for additional help.

Projects

Project assignments vary a great deal. They are usually assigned to showcase the practical skills you are learning. So, depending on your major, projects may involve writing a new computer program, designing a new website, or inventing a new recipe. It’s okay to get creative with your class projects. After all, you should be enjoying the things you’re learning. But be careful not to stray too far from the assignment parameters. As your project evolves, check in with your instructor to make sure that you are on track.

Essays

Essays are formal papers that are required in many different courses. Essays are shorter than research papers, but their requirements can vary. Some essays are opinion papers; you can write them in the first person (using “I”), and base your arguments on your own beliefs and experiences. Other essays are comparison papers, summaries, or critiques that require scholarly references.

Before you start writing, make sure you understand the assignment and all of its required components. Draft an outline. Have a classmate review your first draft, and take advantage of your college writing center if one is available. Most importantly, do not wait until the last minute to start writing. Teachers can tell when a paper has been thrown together. Read our tips for college essays for additional help.

Research Papers

Research papers are a common form of assessment at the college level. Most research papers will require you to identify a research question, develop an outline, do research in online libraries, and write a paper that answers your original question. Some of the most common pitfalls are selecting a topic that is too broad, and becoming over-reliant on Wikipedia or online sources that are not peer-reviewed.

It is a good idea to let someone review your research paper at each stage of its development. In fact, some instructors will grade you on your research process (not just the end result). Work through multiple drafts, and clearly document your research, so you can cite your sources in the final paper. Read our article on how to write a research paper for more information.

E-Portfolios

Electronic portfolios, also called e-portfolios, are like academic scrapbooks. E-portfolios are often assigned as capstone projects, towards the end of a degree program. They can contain a series of projects, including research reports, presentations, video clips of your work in action, or links to websites where your work has been featured.

Education majors often create e-portfolios to encompass all of their lesson plans, student teaching videos, teaching research, and more. Business majors and other graduate students might also complete portfolios in lieu of a senior thesis or dissertation.


Beyond “the Grade”: Alternative Approaches to Assessment

While so-called "alternative" approaches to grading are not new, attention to them has increased in recent years. This has been especially true since 2020, when COVID's disruption of our conventional modes of in-person education forced many instructors to rethink their approaches to assessment. Hand in hand with this more pragmatic rethinking came ethical considerations, as living through a pandemic unfolded alongside ongoing protests throughout the US against systemic racism and police violence, further leading instructors to question the biases inherent in, and efficacy of, the models they had long been using.

Among the alternative grading approaches that have received the most attention are specifications grading, contract grading, mastery grading, and “ungrading.” Each of these approaches is “alternative” insofar as it diverges in some way from a so-called traditional model of grading, which in its simplest (and oversimplified) form generally includes many of the following features:

Grades are given by the instructor to each individual student

The grade is often, but not necessarily, accompanied by more substantive feedback

Graded assignments are "high stakes," often because they are few in number, come later in the term, and/or may not be revised or resubmitted

Students have little say in creating assignments or in which assignments they complete

Students have little say in setting their own learning goals and few opportunities to reflect on their work in a course.

In practice, these general features of so-called "traditional" grading show up in different combinations in any given course. Overall, courses employing "traditional" grading tend to be more oriented towards product over process, and instructors in these courses hold more power over the assessment process than students do. Nonetheless, courses that employ traditional grading are not uniform in the ways in which student learning is assessed and graded.       

We encourage Harvard instructors to learn about, and consider adopting, some or all of the features of these alternative approaches to grading. This is not because we consider traditional approaches inherently flawed, ineffective, or obsolete, but because contemplating alternatives in tandem with more conventional practices inevitably raises valuable questions: not only about the particulars of how we assess our students' learning, but also about why we ask students to perform in the ways that we do. To recognize that there is a wide array of plausible approaches to grading is to recognize that perhaps the single most important attribute of a successful assessment scheme is its intentionality.

Why Consider Alternative Grading?

Criticisms of traditional grading systems include: 

Grading systems exacerbate stress and mental health challenges among students (Horowitz and Graf, 2019; Jones, 1993).

Grades decrease students' intrinsic motivation (Pulfrey et al, 2011; Chamberlain et al, 2018).

Grading decreases students' ability to learn from feedback, as students tend to focus on a letter/numerical grade and not the accompanying feedback (Schinske & Tanner, 2017; Kuepper-Tetzel & Gardner, 2021).

Grading perpetuates inequities between students (Link & Guskey 2019; Malouff & Thorsteinsson, 2016; Feldman, 2018).

Grades may encourage students to be risk averse, nudging them toward courses and assignments in which they feel they can do well, at the expense of new areas of potential interest and inquiry.

To combat these challenges, in recent years a significant number of individual faculty, educational researchers, and institutions from across higher education have invested in developing alternative approaches to grading—often referred to, broadly, as ungrading. While the exact details vary, these approaches typically:

Offer clear learning objectives that are aligned with how assignments are graded.

Provide transparent expectations for success.

Offer students regular and actionable feedback on their work.

Emphasize process over product, by providing students with multiple opportunities to meet expectations. If a student's first effort is not satisfactory, they may be able to revise and resubmit the work or complete another similar assignment.

Help students feel responsible for their learning and their grades by giving them some say over the breadth and depth of the work they undertake, and by inviting them to define their own goals and reflect on their own growth as learners.

Offer a range of lower-stakes assignments, as opposed to a small number of higher-stakes assessments such as exams.

Overall, alternative grading aspires to recalibrate the way we evaluate and give feedback on students' work to incentivize learning and effort (rather than performance alone). These approaches provide clarity about expectations and provide students with the freedom to make mistakes as part of the natural process of learning.

A Brief Typology of Alternative Grading Approaches

Below we briefly describe four alternative grading strategies, which can be employed in a wide range of disciplines. We note that there is a lot of flexibility as to how instructors might implement any of these approaches, and that the approaches overlap with each other.

Specifications grading

In specifications grading, grades are based on the combination and number of assignments that students satisfactorily complete. The instructor designates bundles of assignments that map to different letter grades. Bundles that require more work and are more challenging correspond to higher grades. Students can choose which bundle(s) they would like to complete. 

Similar to mastery grading, the instructor defines clear learning objectives for all aspects of the course. Grading is based on meeting these objectives (satisfactory/unsatisfactory). Students typically have a small number of opportunities to resubmit work that didn't meet the standards.
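As a rough illustration of specifications grading, the bundle-to-grade mapping might look like the following. The bundle contents are invented for illustration, not drawn from any actual course:

```python
# Hypothetical specifications-grading bundles: each letter grade maps to
# a bundle of satisfactorily completed work. All requirements invented.
BUNDLES = {
    "A": {"essays": 4, "problem_sets": 8, "project": True},
    "B": {"essays": 3, "problem_sets": 6, "project": True},
    "C": {"essays": 2, "problem_sets": 4, "project": False},
}

def bundle_grade(essays: int, problem_sets: int, project: bool) -> str:
    """Return the highest letter whose bundle requirements are all met."""
    for letter in "ABC":
        req = BUNDLES[letter]
        if (essays >= req["essays"] and problem_sets >= req["problem_sets"]
                and (project or not req["project"])):
            return letter
    return "D/F"
```

Because each assignment is judged satisfactory/unsatisfactory, the only bookkeeping needed is a count of completed work per category.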

Contract grading

With contract grading, the criteria for grades are determined by an agreement between the instructor and students at the beginning of the term. Each student signs a contract indicating what grade they plan to work towards, and contracts can be revisited during the term. Grades may correspond to completion of a certain percentage of work or completion of designated bundles of assignments (similar to specifications grading). Contract grading often emphasizes the learning process over the product, and as such, grading schemes may reward completion of activities (such as completing drafts and meeting individually with the instructor) as well as behaviors (such as being thoughtful in peer reviews and participating in discussions). Student work is graded on a satisfactory/unsatisfactory basis.

Mastery grading

In mastery grading, grades are directly based on the degree to which students have met the course learning objectives. An instructor first develops an extensive list of learning objectives, and then creates assessments that are aligned with these objectives. Student work is assessed on the basis of whether or not it meets a specified subset of the course objectives; partial credit is not awarded. Students are allowed multiple attempts to show mastery; depending on the nature of the assignment, students might revise their original submission or submit new work in response to related questions. The final course grade is based on the total number of objectives that the student has mastered. An instructor might designate essential objectives that everyone must meet to receive a certain grade, as well as bonus objectives that students could meet for a higher grade.
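A minimal sketch of such a mastery scheme, with hypothetical objective names, thresholds, and letter cutoffs:

```python
# Hypothetical mastery-grading scheme: the final grade depends on how
# many objectives a student has met, and the top grades additionally
# require a set of essential objectives. All names/cutoffs are invented.
ESSENTIAL = {"obj1", "obj2", "obj3"}                     # required for A or B
THRESHOLDS = [(12, "A"), (9, "B"), (6, "C"), (3, "D")]   # objectives mastered

def course_grade(mastered: set[str]) -> str:
    for count, letter in THRESHOLDS:
        if len(mastered) >= count:
            if letter in ("A", "B") and not ESSENTIAL <= mastered:
                continue  # missing an essential objective caps the grade
            return letter
    return "F"
```

Under this sketch, a student who masters nine objectives including all three essentials earns a B, while the same count without the essentials caps out at a C.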

Ungrading

In classes that utilize ungrading, students are responsible for reflecting on and assessing their own learning. Instructors provide regular feedback on student work, but feedback on individual assignments does not include a grade. Instructors provide extensive guidance to help students reflect on their progress towards meeting their own learning goals. At the end of the term (and often at the midterm), students assemble a portfolio of work and assign themselves an overall grade for their course work. Final grades are at the discretion of the instructor; many instructors report that it is more common that they decide to increase, rather than decrease, the grade that students assigned themselves.

Support for Alternative Grading

Harvard faculty members who employ alternative grading strategies see themselves as mentors and coaches; they note that providing extensive feedback and mentoring can be more time-intensive than traditional grading. Faculty also note that alternative grading requires a high degree of trust between students and instructors. Nonetheless, the benefits are great: faculty feel that they can focus on fostering students' growth and learning, without judging or ranking their students. Moreover, students develop a sense of agency about their learning.

The Bok Center would be happy to meet with faculty who are interested in modifying their approaches to grading. We encourage faculty to identify elements that resonate with their goals and to incorporate small changes into their teaching.

For more information ...

Blum, S. D., & Kohn, A. (2020). Ungrading (First edition). West Virginia University Press.

Chamberlin, K., Yasué, M., & Chiang, I.-C. A. (2018). The impact of grades on student motivation. Active Learning in Higher Education .

Feldman, J. (2018). Grading for equity: What it is, why it matters, and how it can transform schools and classrooms . Corwin Press.  

Horowitz, J. M., & Graf, N. (2019). Most US teens see anxiety and depression as a major problem among their peers. Pew Research Center, 20.

Jones, R. W. (1993). Gender-specific differences in the perceived antecedents of academic stress. Psychological Reports, 72(3), 739-743.

Malouff, J., & Thorsteinsson, E. (2016). "Bias in grading: A meta-analysis of experimental research findings." Australian Journal of Education.

Pulfrey, C., Buchs, C., & Butera, F. (2011). "Why grades engender performance-avoidance goals: The mediating role of autonomous motivation." Journal of Educational Psychology , 103(3), 683.

Schinske, J., & Tanner, K. (2017). "Teaching more by grading less (or differently)." CBE Life Sciences Education, 13(2), 159–166.

Stanny, & Nilson, L. B. (2014). Specifications grading: Restoring rigor, motivating students, and saving faculty time . Stylus Publishing, LLC.

Streifer, & Palmer, M. (2020). "Alternative grading: Practices to support both equity and learning." University of Virginia: Center for Teaching Excellence.

Supiano, B. (2019). "Grades Can Hinder Learning: What Should Professors Use Instead?" Chronicle of Higher Education .

  • Designing Your Course
  • In the Classroom
  • When/Why/How: Some General Principles of Responding to Student Work
  • Consistency and Equity in Grading
  • Assessing Class Participation
  • Assessing Non-Traditional Assignments
  • Decreasing Student Anxiety about Grades
  • Getting Feedback
  • Equitable & Inclusive Teaching
  • Advising and Mentoring
  • Teaching and Your Career
  • Teaching Remotely
  • Tools and Platforms
  • The Science of Learning
  • Bok Publications
  • Other Resources Around Campus

Duke Learning Innovation and Lifetime Education

Alternative Strategies for Assessment and Grading

Research shows that grades are often not a good reflection of student learning and growth, and that being graded can be stressful for students. In addition, many traditional grading practices can exacerbate existing academic inequalities. We encourage faculty to design assessments that directly support student learning first, with their evaluative role considered secondarily.

We have compiled some options for creating assessment activities and policies which are learning-focused, while also being equitable and compassionate. The suggestions are loosely grouped by expected faculty time commitment. Many suggest ways faculty can provide students with skills practice, feedback on their performance, and opportunities for reflection on their learning processes and growth. In all cases, the suggestions below assume some course design fundamentals, including assessments and assignments aligned with course learning objectives.

Incremental Approaches 

The assessment practices in this section can be implemented in any class without substantially redesigning the course.

  • Include practice assignments which are specifically aligned with the types of things students will be graded on (and make the connection explicit). These could be ungraded practice tests, sample essays or portions of essays, mock debates, or any other relevant “practice” activities. As an added benefit, have students practice using the assessment technologies so they are comfortable with those tools before being graded on assignments that use them.
  • Scaffold larger assignments (i.e., break large assignments into smaller pieces which build to the complete product, with instructions and feedback provided for each part). An additional benefit is that scaffolding assignments allows more opportunities for student-instructor interaction. 
  • Offer a larger number of lower-stakes assignments instead of a few high-stakes assignments.
  • Include more formative, ungraded assessment opportunities such as Classroom Assessment Techniques (ideally those will also provide you with insight to adjust teaching, based on how the formative assessment demonstrates student progress and understanding). 
  • Where relevant (for papers, projects, presentations, etc.) use rubrics for grading, and share the rubrics with the students before the assignment due date (or even have the students create the rubrics). Using criterion-referenced assessment/grading rather than norm-referenced means students are each graded in comparison to standardized performance goals rather than in comparison to each other.
  • Provide example submissions for assignments including your comments about what makes them good/poor.
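To illustrate the criterion-referenced idea from the list above, here is a hypothetical rubric in which every student is scored against the same fixed performance levels rather than against each other; the criteria and point values are invented:

```python
# A sketch of criterion-referenced rubric scoring: each criterion has
# fixed performance levels, so the standard, not the cohort, sets the
# score. Criteria, levels, and points are all hypothetical.
RUBRIC = {
    "thesis":   {"excellent": 4, "adequate": 3, "developing": 2, "missing": 0},
    "evidence": {"excellent": 4, "adequate": 3, "developing": 2, "missing": 0},
    "clarity":  {"excellent": 2, "adequate": 1, "missing": 0},
}

def rubric_score(ratings: dict[str, str]) -> int:
    """Total a student's per-criterion ratings against the fixed levels."""
    return sum(RUBRIC[criterion][level] for criterion, level in ratings.items())

print(rubric_score({"thesis": "excellent", "evidence": "adequate", "clarity": "adequate"}))  # 8
```

Sharing a table like this with students before the due date makes the grading criteria transparent and the scoring mechanical.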

Embracing UDL: blog post

  • Drop the lowest in a category of assignments
  • Allow retakes of all or some assignments
  • Substitute a test grade for a final exam grade
  • Offer varied approaches to weighting assignments in grade calculations, and perhaps even allow students to choose which weighting approach they prefer for their own grade.
  • In addition to asking students to complete content-related questions or problems, ask them to explain how they got the answer they did, what their thought process was, what resources they used to respond to the assignment, how the assignment connects to prior learning in your course or earlier courses, how they felt when they were working on the assignment, or other reflective or “meta” questions. 
  • Provide written or verbal feedback instead of or in addition to a score or letter grade, and require students to reflect on and respond to the feedback.
  • Periodically ask students to step back and reflect on progress related to course learning goals. 
  • If giving tests or quizzes using a test platform such as Sakai or Gradescope, review student performance and assessment statistics to determine whether any questions need to be revised.
  • Share with students how the assessment went and get their feedback about whether it accurately reflected their learning.
  • Take time to reflect and make notes for yourself about how to improve the assessment in the future to better align with the course learning objectives and to better support student learning.
  • Review instructions and assignment description for completeness, clarity, and lack of jargon, acronyms or culturally-inaccessible statements. 
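Two of the policies above, dropping the lowest score in a category and letting each student's grade use whichever of several weighting schemes favors them, reduce to small calculations. A sketch with invented weights and scores:

```python
# Two flexible grading policies sketched: drop the lowest score in a
# category, and give each student the most favorable of several allowed
# weighting schemes. All numbers are illustrative.

def average_drop_lowest(scores: list[float]) -> float:
    """Average a category after discarding its single lowest score."""
    kept = sorted(scores)[1:] if len(scores) > 1 else scores
    return sum(kept) / len(kept)

SCHEMES = [
    {"quizzes": 0.5, "final": 0.5},   # balanced weighting
    {"quizzes": 0.3, "final": 0.7},   # final-heavy weighting
]

def best_weighted(category_avgs: dict[str, float]) -> float:
    """Return the highest total any allowed weighting scheme produces."""
    return max(
        sum(w * category_avgs[cat] for cat, w in scheme.items())
        for scheme in SCHEMES
    )
```

A student who improves over the term automatically benefits from the final-heavy scheme, without the instructor adjudicating individual requests.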

Course Component Redesign

The approaches in this section may need somewhat higher time-commitment than those above, but shouldn’t require full course redesigns.

Using podcasts in your classroom: blog post

  • Check the students’ level of baseline knowledge before the course and/or before each unit of the course, and adjust teaching and assessments/assignments accordingly.
  • Redesign tests and quizzes to focus more on application and analysis and less on knowledge-check questions, and make tests and quizzes open book and open note, to reduce stress and more accurately replicate expectations in the working world (also provide students with a link to the Academic Resource Center in case they would benefit from advice about good note-taking skills).
  • Allow students to use varied means of demonstrating progress and learning, such as projects, multimedia, oral exam, papers (see the example highlighted in the callout box to have students create a podcast ).
  • Have students themselves create sample questions , sample prompts and sample answers/responses, and use some in your graded assessments.
  • Use a “public exam” system, in which students are given an incomplete version of the exam to study from and to augment. 
  • Switch from high-stress infrequent, high-stakes exams and tests to a relevant mixture of frequent quizzes, essays/papers/lab reports, projects, presentations, research.
  • To improve assessments in the future, review statistics about student performance and student feedback, and revise questions/prompts and assignments where needed.
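The statistics review suggested above can be as simple as computing each question's difficulty (the fraction of students answering correctly) and flagging outliers as candidates for revision. A sketch on made-up response data:

```python
# Illustrative item-difficulty check for reviewing assessment statistics:
# flag questions nearly everyone missed or nearly everyone got right.
# The response matrix and thresholds are invented.

def item_difficulty(responses: list[list[int]]) -> list[float]:
    """responses[s][q] is 1 if student s answered question q correctly."""
    n = len(responses)
    return [sum(student[q] for student in responses) / n
            for q in range(len(responses[0]))]

def flag_items(difficulty: list[float], low=0.2, high=0.95) -> list[int]:
    """Return indices of questions whose difficulty falls outside [low, high]."""
    return [q for q, p in enumerate(difficulty) if p < low or p > high]

data = [[1, 0, 1], [1, 0, 1], [1, 0, 1], [1, 0, 0]]
print(flag_items(item_difficulty(data)))  # [0, 1]: q0 too easy, q1 too hard
```

Test platforms such as Gradescope report similar per-question statistics; the point is to act on them by revising or discarding flagged questions.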

Course Redesign

The approaches in this section either need intensive course redesign work (which could be time-consuming) or may need to be incorporated as part of a larger discussion about assessment within a department or major. 

Going Gradeless: blog post

  • Redesign a course with authentic assessments based on real-world problems and issues, incorporating real methodologies used in your discipline.
  • Use ungrading, an approach in which individual assignments are not given numerical or letter grades.
  • Redesign the course to be competency-based .
  • Offer significantly more feedback to students on their work, more 1:1 coaching and learning facilitation.
  • Use learning contracts (all or partly student-generated), or even individualized learning goals and learning plans.
  • Use contract grading (an example).
  • Use specifications grading, a form of contract grading based on how much work students choose to complete in a course.
  • Changing the course to S/U can reduce student stress and increase focus on learning rather than grades. In certain circumstances, faculty may want to advocate for changing a course to S/U, but this is a decision that must be approved by the appropriate academic committee for the major, department and/or school.

Last updated 06/09/2022.

Utah State University


Grading Assignments in Canvas

On This Page

  • Enter Score on Grades Page
  • View Online Submission and Submit a Score in SpeedGrader
  • Sort and Name Display Options
  • Annotation Tools
  • Grade Entry
  • Use a Rubric
  • Additional Feedback
  • Moving to Next Student
  • More Resources

If students aren't submitting anything, like for an in-class activity or participation points, you will need to create a "no submission" assignment on the Assignments page. Then, you can go right to the Grades page and enter a grade for each student, just as you would in spreadsheet software.

Enter "EX" (excused) for students who do not need to complete the assignment so that the score won't count as a "0" when it comes time to calculate a final grade.
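
As an illustrative sketch (not Canvas's internal logic), the snippet below shows why the "EX" entry matters: an excused item is skipped entirely when averaging, while a 0 is counted and pulls the average down. The `average` helper is hypothetical, named for illustration only.

```python
# Hypothetical sketch: an excused ("EX") score is ignored entirely,
# while a 0 is counted and drags the average down.

def average(scores):
    """Average the numeric scores, skipping excused ("EX") entries."""
    counted = [s for s in scores if s != "EX"]
    return sum(counted) / len(counted)

print(average([90, 80, "EX"]))  # 85.0 -- the excused item is ignored
print(average([90, 80, 0]))     # about 56.7 -- the zero lowers the average
```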

When students submit artifacts online (text, document, picture, etc.) you will use the SpeedGrader to grade the assignment. The SpeedGrader will show a preview of the student submission and give you a box to enter the score and a comments field.

You can access the SpeedGrader in several ways:

  • On the Canvas Dashboard, click the assignment title in the To Do list on the right side of the dashboard.
  • On the course home page, click the assignment title in the To Do list on the left side of the page.
  • In the course, open the Assignments page, click the title of the assignment, and then click the SpeedGrader button on the right side of the assignment page.
  • In the course, click Grades, then the three dots in the assignment column, and select SpeedGrader.

Once you are in the SpeedGrader, you can sort the submissions alphabetically, by date of submission, or by submission status, and you can hide the student names.

If students submitted documents for the assignment, you have the option to use a pointer, highlighter, text entry, text strikethrough, or box outline to provide feedback. When using these tools, be sure to add a comment for students to view the feedback. Here is sample verbiage you might use:

Please be sure to view the comments on the body of your paper by clicking on the "View Feedback" link from the assignment page.

If you prefer to use annotation tools in an application on your computer, you can download the file and upload the edited version. For example, you could download the submitted file, open it in a word processor such as Microsoft Word, and use the Track Changes feature. Save the annotated document, then upload it to SpeedGrader using the File Attachment button underneath the comment box on the right-hand panel.

Enter the score in the grade field on the right panel.

You can use a rubric to grade the assignment. See Rubrics for more details.

You can provide additional feedback to students in the form of comments, an uploaded file, video, or audio recording.

You can move between students by clicking the arrows at the top of the page, or use the dropdown menu to select a specific student.

  • Canvas guide: How do I use SpeedGrader?
  • Canvas guide: How do I post grades for an assignment in the Gradebook?  

If you’re new to online school or just want to know how Penn Foster works before enrolling in a program, you may be wondering how, exactly, do online tests work? What are the expectations and how is your work graded? Here’s everything you need to know about Penn Foster exams, proctored exams, and how your tests and assignments are graded.

Laura Amendola

Aug 12, 2024

We follow strict quality and ethics policies to make sure our content is honest, accurate, and helpful. Learn more about our editorial guidelines.

Taking tests in online classes isn’t that different from taking exams at in-person schools! You need to put in the study time, review your notes, and make sure you’re prepared before you start your exam. Unlike an in-person, classroom setting, however, you can take your exams whenever you’re ready at Penn Foster – no scheduled test dates. That means if you’re not feeling confident you’ve got a handle on the subject you studied, you have plenty of time to reread your study guides and notes and really prepare to ace that exam. On the other hand, if you feel like you know your stuff and don’t need extra study time, you don’t have to wait for a test date to jump in and take the exam!

No matter what Penn Foster program you’re working on, the majority of your tests will be short, multiple-choice exams that you’ll take after each lesson. There are some writing assignments scattered throughout depending on the program you’re enrolled in, but they are minimal. Most classes have about 2-6 lessons each.

Read more: How High School Math & English Help Students Succeed

Penn Foster’s online programs don’t have what you’d traditionally consider “final exams.” In fact, career diploma programs and our high school diploma don’t have any “finals” at all! As long as you’ve passed your courses for high school and reached an overall passing lesson average for the career diploma, you’re set to graduate. However, for college-level programs like undergraduate certificates, associate degrees, and bachelor’s degrees, you may have to take something called proctored exams. These exams act like finals for classes.

For some of your online college classes, you’ll have a longer exam you need to take before you can move on to your next semester. Proctored exams are timed, monitored tests that can be a mix of multiple choice, short answer, and short essay questions to test what you learned throughout the class. You won’t have to take a proctored exam for every class in your semester, so it’s not as overwhelming as finals week in a traditional college setting! However, depending on the degree program you’re enrolled in, you may have at least one or two proctored exams each semester.

Proctored exams are completed remotely in most cases, but for certain exams or approved circumstances they can be completed as a paper exam through the mail. The two ways to complete proctored exams are detailed below:

  • Through the mail. If the exam is a required paper proctor, or if you are approved to take the exam as a paper exam as a result of the Americans with Disabilities Act, there are some requirements to follow. You’ll need to choose a person to be your proctor and administer the exam to you. A proctor can be a friend or coworker, but can’t be a relative, boss, partner, or live at the same address as you do. Whomever you choose as your proctor will need to have at least an associate degree – it doesn't matter in what subject – to qualify. Many students who struggle to find a proctor find luck with their local librarian. Once you’ve chosen a proctor, you’ll submit the proctor form to the school for approval. When it comes time for your exams, we’ll mail them out to your proctor and they’ll set up a time with you to sit down and take the test. Once you’ve finished your test, the proctor will then mail it back to us for grading. The downside to the mail-in option for these exams is that it can take quite a while to get the exam, send it back, and receive your grade. Generally, it can take about 14 business days to get to your proctor, 14 business days to get back to us in the mail, and as long as three weeks to be graded. This can mean you’ll have a longer period of time before you can move on to your next semester.
  • Online. Most of our proctored exams will be taken online! They still have the same format as paper proctored exams – short answer, short essay, and multiple-choice questions – but you’re able to complete them at home. Instead of having to choose a proctor, you’ll use a third-party service that monitors the exams, usually through video format. This means you’ll need a computer with webcam access in order to complete your proctored exams. You’ll also need proof of ID. Once you’ve completed your exam, the test and the video recording are submitted for review and then graded. This is a much faster process, so exams can possibly be graded in as short a time as one week.

We use a number-letter system of grading for all Penn Foster programs. Completed exams and assignments receive a number grade. Passing grades, and grading in general, work a little differently depending on what type of program you’re taking.

  • High school grading . In online high school classes, 65 is considered the minimum passing grade and is equivalent to a letter grade of D.
  • Career program grading . Similar to high school grading, you’ll generally need a minimum grade of 65 to pass your exams and classes. However, some career programs and certificates require that you earn at least a 70 to pass the overall program.
  • College-level grading . Grading for college-level courses is a little more complicated! We still use the number-letter system, so you’ll receive number grades for your exams and assignments, including any proctored exams you need to take. The letter grade for each course varies, as exams are weighted in our college programs, so you should always check the syllabus of a course for this info. Letter grades are converted to grade points for the purpose of computing the Grade Point Average (GPA) for each semester and the cumulative Grade Point Average for more than one semester. Grade points range from 4.0 for an A grade to 0.0 for an F grade.
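
As a hedged illustration of that final conversion, the sketch below computes a GPA on a standard 4.0 scale. The point map and example credit hours are common conventions chosen for illustration, not Penn Foster's official tables or weightings.

```python
# Illustrative GPA calculation on a standard 4.0 scale. The point
# values and example credits are assumptions, not official figures.

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def gpa(courses):
    """courses: list of (letter_grade, credit_hours) pairs."""
    total_points = sum(GRADE_POINTS[grade] * credits for grade, credits in courses)
    total_credits = sum(credits for _, credits in courses)
    return total_points / total_credits

semester = [("A", 3), ("B", 3), ("C", 4)]
print(gpa(semester))  # (12 + 9 + 8) / 10 = 2.9
```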

Read more: Your Guide to Credit Recovery for High School

Not every subject will come as easily to you as others, so you aren’t expected to earn perfect grades on every exam and assignment, and that’s okay! If you fail an exam within your program – or even if you’re not satisfied with your grade and know you can do better – you'll have the opportunity to retake that exam. If you’re a high school student, you’ll have two optional retakes available! After you retake an exam, we honor your highest grade: for example, if you score a 70 on a test and then retake it and earn a 90, the 90 is your final grade for that lesson. For college and career programs, you’ll have one optional retake for your exams.

If you fail a class, you may have to retake the class or potentially be given additional attempts on exams to graduate. These are reviewed by our Education department.

Read more: Your Guide to High School English at Penn Foster

Since Penn Foster is completely online and self-paced, there’s plenty of flexibility that allows you to be successful in your courses. You can study at the pace that works best for you and take exams only when you’re ready, ensuring you’ve got the tools you need to pass your classes. However, some things can help set you up for success in your program from day one, including:

  • Setting a schedule. Even though your courses are self-paced, setting a goal and creating a study schedule can help you better absorb what you learn and keep you on track toward graduation. When you first get started, consider when you’d like to complete your diploma or degree. Then, consider how much time you realistically have to study, whether that’s 15 minutes a day or three hours per week. From there, you can create a clear schedule that helps you manage your time and expectations.
  • Having a dedicated study space. While you can take your classes and study anywhere you go, having a dedicated study space can help you focus on what you’re learning.
  • Using Penn Foster resources. You’re not alone when you’re studying online! If you’re stuck or need help, we’ve got several resources to get you through your program including our Learning Resource Center and our dedicated instructors!

Alex Thome , Penn Foster High School’s Vice President, says, “For many of our students, we were their plan B. We appreciate that a lot of folks come to us because something didn’t work or they weren’t given the education they need. But we’re the plan A for a lot of people who just want to do something different. They don’t want the same high school experience as everyone else.”

Read more: 15 Tips for Online Learning (Your Guide to Taking Online Classes)

Whether you have questions about a grade you received, don’t quite understand the material you’re learning, or just need a little help to stay motivated, we’re here for you! Our dedicated instructors are available to help you through course-specific problems by phone, email, or through your Learner Center. You can also set up appointments to speak one-on-one with an instructor! Besides our supportive staff, you can get peer support and encouragement through our Student Community, where you can connect with alumni and other students currently working on the same program as you are. Brian Brown , Academic Manager of Penn Foster High School, says "We want to help you, call us. You're not going to have bad experiences here."

If you haven’t enrolled in a Penn Foster program  and want to know more, or aren’t sure where to get started, reach out to our Admissions Specialists today at 1-888-427-6500 !

Calculator Genius

Weighted Grade Calculator

To determine what grade you need to get on your remaining assignments (or on your final exam), enter the total weight of all of your class assignments (often the total weight is 100). Then enter the desired grade you would like to get in the class.

Instructions

You can use the calculator above to calculate your weighted grade average. For each assignment, enter the grade you received and the weight of the assignment. If you have more than 10 assignments, use the "Add Row" button to add additional input fields. Once you have entered your data, press the "calculate" button and you will see the calculated average grade in the results area.

If you want to calculate the average grade you need on your remaining assignments (or on your final exam) in order to get a certain grade in the class, enter the desired grade you would like to get in the class. Then enter the total weight of all your class assignments. Often the total weight of all class assignments is equal to 100, but this is not always the case. Press either the “Calculate” or “Update” button, and the results area will display your current average along with the grade you need on your remaining assignments.
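
The "grade needed" calculation the instructions describe can be sketched as follows; the function name and structure are my own, chosen for illustration.

```python
# Sketch of the "grade needed on remaining work" calculation: subtract
# the weighted points already earned from the desired total, then
# divide by the weight still outstanding.

def required_average(completed, desired, total_weight=100):
    """completed: list of (grade, weight) pairs already earned.
    Returns the average needed on the remaining weight to reach
    `desired` overall."""
    earned = sum(grade * weight for grade, weight in completed)
    remaining_weight = total_weight - sum(weight for _, weight in completed)
    return (desired * total_weight - earned) / remaining_weight

# A 90% on work worth 10% and an 80% on work worth 20%: to finish with
# an 85 overall, the remaining 70% of the grade must average about 85.7.
print(round(required_average([(90, 10), (80, 20)], desired=85), 2))  # 85.71
```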

How to Calculate a Weighted Grade Average

  • First, multiply the grade received by the weight of the assignment. Repeat this for each completed assignment.
  • Then add each of the calculated values from step 1 together.
  • Next, add the weights of all the completed assignments together.
  • Finally, divide the value from step 2 by the value from step 3. This gives you the weighted grade average.

Weighted Grade Formula

Weighted Grade = (w₁ × g₁ + w₂ × g₂ + w₃ × g₃ + …) / (w₁ + w₂ + w₃ + …)

Example Calculation

Here is an example. Let's say you received a 90% on your first assignment and it was worth 10% of the class grade. Then let's assume you took a test and received an 80% on it. The test was worth 20% of your grade.

To calculate your average grade, follow these steps:

  • Multiply each grade by its weight. In this example, you received a 90% on the first assignment and it was worth 10%. So multiply 90 x 10 = 900. You also received an 80% on the test and it was worth 20% of the class grade. So multiply 80 x 20 = 1600.
  • Add the calculated values from step 1 together. We now have 900 + 1600 = 2500.
  • Add the weight of all the completed assignments together. To do this, add 10% for the first assignment and 20% for the second assignment. That gives us 10 + 20 = 30.
  • Finally, divide the value from step 2 by the value from step 3. That gives us 2500 / 30 = 83.33. Therefore our weighted grade average is 83.33%.
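
The worked example above can be checked with a short function; this is a sketch using names of my own choosing, mirroring the four steps exactly.

```python
# Weighted grade average: the sum of grade-times-weight divided by the
# sum of the weights, as in the four steps above.

def weighted_average(grades_and_weights):
    """grades_and_weights: list of (grade, weight) pairs."""
    total = sum(grade * weight for grade, weight in grades_and_weights)
    weight_sum = sum(weight for _, weight in grades_and_weights)
    return total / weight_sum

# A 90 worth 10% and an 80 worth 20%, as in the example:
print(round(weighted_average([(90, 10), (80, 20)]), 2))  # 83.33
```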


An artist's rendering of the Imaging X-Ray Polarimetry Explorer (IXPE) mission traveling through space.

‘E.Z. Science’ Video Series

We Asked a NASA Scientist: Do Aliens Exist?

We Asked a NASA Expert

A portion of the Moon and the words "Why the Moon?"

Why We're Going Back

A rocket booster with flames beginning to shoot out

Rocket Science Video Series

Title screen for "No Small Steps" with Stephen Granade

The Space Launch System

Cartoon SLS rocket on launchpad and the words "How We Are Going to the Moon"

Getting Back to the Moon

Cartoon illustration of the SLS rocket next to an illustration of a brain

The Brains of NASA’s SLS Rocket

Now Available: NASA+, a new ad-free, no-cost, family-friendly streaming service that embeds you into our missions through new original video series.

Screenshots of NASA+ are shown on a mobile phone and computer against a space-themed background

Make a Corsi-Rosenthal Filter

During wildfire events, smoke can enter your home even if your windows and doors remain closed. This can lead to respiratory issues, itchy eyes, and illness. If you’re concerned about your indoor air quality due to wildfire smoke in your area an easy-to-make Corsi-Rosenthal Box can help clean your air. These homemade air filters can also help remove virus particles.

Photo of a finished assembly of a Corsi-Rosenthal air filter box

Trace Space Back to You!

Hubble's Name that Nebula game graphic

Test Your Nebulae Knowledge

Deep Space Network 35 Meter Antenna 3D Model

Models by JPL

NASA App

What Kind of Exoplanet Explorer Are You?

Screenshot from the Access Mars virtual reality experience showing the Curiosity rover on the surface of Mars

WebVR: Access Mars

GLOBE Observer

Get the GLOBE Observer App

Aero AR for Android Icon

Aeronautics AR (Android)

Aero AR for Android IOS

Aeronautics AR (IOS)

Two mosaics of Antarctica

Make and Share Arts and Crafts Inspired by Landsat

Illustration of the Orion spacecraft flying back to Earth

Build the Orion Spacecraft

Silhouettes of two people in front of a Mars landscape

Take a Virtual Trip to Mars

Computer-generated rover

Interactive 3D Rover Experience

Blueprint drawing of a six-wheeled rover with an animated face

Build Your Own Mars Rover

A tetrahedral kite

Build a Tetrahedral Kite

Reach Across the Stars banner.

Female Space Science Heroes Featured in New Interactive App

A user playing the new NeMO-Net game that helps NASA classify the world's coral from their own home

NASA Calls on Gamers, Citizen Scientists to Help Map World’s Corals

graded assignments and exams

Tracking and Data Relay Satellite Game

James Webb Space Telescope Basic Paper Model

Download, Print and Build Paper Spacecraft Models

VR view of the underwater Neutral Buoyancy Lab

Virtual Reality Tours of Commercial Crew Facilities

Explore the universe and discover our home planet with NASA through a collection of sounds from historic spaceflights and current missions.

graded assignments and exams

Read the Latest Issue of Astrobiology: The Story of Our Search for Life in the Universe

graded assignments and exams

Calling All Adventurers!

ESD Poster Set 4up

Decorate Your Space With Artemis

illustration of bees and flowers, with text: Earth Day - from big to small, we're all connected

Download Earth Day Posters

Artist concept of various aircraft in flight over an urban area.

Aeronautics Virtual Backgrounds

Orion Nebula from Wise Spacecraft

NASA Image Galleries

Credit NASA/JPL-Caltech

Download a Poster of NASA's James Webb Space Telescope

Microsoft Teams Artemis Background

Virtual Backgrounds

NASA’s Space Launch System Rocket infographic for capabilities in deep space

Space Launch System Infographics

Image of NGC2024 taken by the Hubble Space Telescope

Hubble Resource Gallery

Low-boom airplane minposter

Aero Tech Mini Posters

Collage of retro-style space posters

Space Tourism Posters

Posters of planets and moons of the solar system

Solar System and Beyond Poster Set

Astronauts onboard the ISS in a circle

Astronaut Posters

Spirit and Opportunity rovers infographic

Create or Download an Infographic

Cutaway of the Space Launch System rocket and comparisons of its height and weight

Poster: Meet the Rocket

James Webb Space Telescope-themed pumpkin

NASA-Themed Pumpkin-Carving Templates and Stencils

Join the artemis mission to the moon.

Make, launch, teach, compete and learn. Find your favorite way to be part of the Artemis mission.

Scene of NASA Artemis astronauts on the Moon conducting scientific research with a rover and power system behind them.

Get Social With NASA

Social network icons

Social Media at NASA

Houston We Have a Podcast with Astronaut Mark Van De Hei

NASA Podcasts

Virtual Guest Program

Be NASA's Virtual Guest!

Earth at Night front cover

NASA e-books

Map of Earth

From longform interviews with astronauts and engineers to narrative shows that take you on a tour of the galaxy, NASA’s diverse podcast portfolio lets you experience the thrill of space exploration without ever leaving Earth.

People on stage speaking to each other during a live broadcast

get social with nasa stem engagement

YouTube logo

NASA STEM YouTube

Flip icon

NASA STEM Flip

Pinterest

NASA STEM Pinterest

graded assignments and exams

NASA STEM X

Facebook logo

NASA STEM Facebook

Discover more topics from nasa.

NASA STEM Opportunities and Activities For Students

graded assignments and exams

Latest STEM News and Features

graded assignments and exams

NASA Centers and Facilities

graded assignments and exams

  • Open access
  • Published: 06 September 2024

Study preferences and exam outcomes in medical education: insights from renal physiology

  • Sofie Fagervoll Heltne,
  • Sigrid Hovdenakk,
  • Monika Kvernenes &
  • Olav Tenstad

BMC Medical Education, volume 24, Article number: 973 (2024)

Abstract

Background

Efficient learning strategies and resource utilization are critical in medical education, especially for complex subjects like renal physiology. This is increasingly important given the rise in chronic renal diseases and the decline in nephrology fellowships. However, the correlations between study time, perceived utility of learning resources, and academic performance are not well-explored, which led to this study.

Methods

A cross-sectional survey was conducted with second-year medical students at the University of Bergen, Norway, to assess their preferred learning resources and study time dedicated to renal physiology. Responses were correlated with end-of-term exam scores.

Results

The study revealed no significant correlation between time spent studying and overall academic performance, highlighting the importance of study quality over quantity. Preferences for active learning resources, such as Team-Based Learning, interactive lessons and formative assignments, were positively correlated with better academic performance. A notable correlation was found between students’ valuation of teachers’ professional competence and their total academic scores. Conversely, perceived difficulty across the curriculum and reliance on self-found online resources in renal physiology correlated negatively with academic performance. ‘The Renal Pod’, a locally produced renal physiology podcast, was popular across grades. Interestingly, students who listened to all episodes once achieved higher exam scores compared to those who listened to only some episodes, reflecting a strategic approach to podcast use. Textbooks, while less popular, did not correlate with higher exam scores. Despite the specific focus on renal physiology, learning preferences are systematically correlated with broader academic outcomes, reflecting the interconnected nature of medical education.

Conclusions

The study suggests that the quality and strategic approaches to learning significantly impact academic performance. Successful learners tend to be proactive, engaged, and strategic, valuing expert instruction and active participation. These findings support the integration of student-activating teaching methods and assignments that reward deep learning.

Introduction

Medical students are under pressure to acquire knowledge and skills in many fields during medical school, which means they need to prioritize their time and study efforts wisely. To study effectively, students must be able to choose among a variety of learning resources. Traditionally, students’ preferred mode of learning has been attributed to learning styles described as relatively stable personality traits [ 1 ]. However, recent studies indicate that students adapt their study approaches based on contextual factors, such as the perceived importance of the study topic, its difficulty, stress levels, assessment methods, and identified learning needs [ 2 , 3 ]. Research on teaching preferences among students yields mixed results regarding their inclination toward active learning methods versus passive formats such as lecturing [ 4 , 5 , 6 , 7 ].

Notably, students do not always utilize learning resources that correlate with better exam performance. This discrepancy might stem from misaligned assessment methods or students’ challenges in accurately assessing their learning needs [ 4 , 8 , 9 , 10 ]. Investigating the relationship between students’ perceptions of learning resource usefulness and exam performance is likely to offer valuable insights for educators and students in selecting the most effective learning tools.

Our study specifically targets the learning of renal physiology, a subject of increasing importance due to the rising prevalence of chronic renal diseases, which poses a significant socioeconomic challenge [ 11 ]. This concern is compounded by a notable decline in nephrology fellowship enrollments [ 12 , 13 , 14 ], suggesting a potential future strain on the nephrology workforce. The complexity of renal physiology, and nephrology as a specialty, contributes to this issue. A survey of internal medicine subspecialty fellows revealed that 31% of the respondents found nephrology to be the most challenging physiology course in medical school, and 24% would have considered nephrology had it been taught effectively [ 13 ]. A revamped renal physiology course, incorporating diverse learning resources, yielded positive student attitudes toward both renal physiology and nephrology [ 12 ].

It is plausible that students who engage in active learning and utilize multiple resources also achieve better academically. Incorporating a range of active learning resources into the curriculum could enhance educational quality and, in the long run, boost the healthcare system, including nephrology recruitment. To our knowledge, the relationship between academic success and preferred learning resources has not been explored in the context of renal physiology. Our study aimed to examine the association between students’ study time, their perceptions of learning resource efficacy, and their summative assessment outcomes to identify characteristics of successful learners.

Study context

The study was conducted at the Faculty of Medicine, University of Bergen, which offers a six-year medical program. In 2015, an integrated curriculum was implemented, emphasizing basic sciences during the first two years and progressively integrating clinical exposure. The program is structured into 12 instructional units, divided into spring and fall semesters, with each year consisting of two semesters. Additionally, there are elective periods from the 6th semester (3rd year), consisting of four weeks each January, that allow students to delve into specific topics of interest.

The program is divided into two study tracks: one based entirely in Bergen and another called ‘Vestlandslegen,’ where students spend the first three years in Bergen and the last three years in Stavanger. This structure allows for diverse clinical exposure in different healthcare settings. Clinical practice is a significant component of the program, starting early in the education. From the fourth year, students are placed in extended clinical rotations across various specialties, including psychiatry, internal medicine, surgery, obstetrics/gynecology, pediatrics, general practice, and community medicine. These rotations take place in multiple hospitals, including Haukeland University Hospital in Bergen, Stavanger University Hospital, Førde Central Hospital, and Haugesund Hospital. The program also offers opportunities for international clinical placements in Uganda and Thailand during the final years.

Our study focused on MED4 (the 4th semester), where students encounter a broad spectrum of medical sciences, including the renal and urinary systems; cardiovascular, respiratory, endocrine, digestive, and reproductive systems; as well as energy and thermoregulation, nutrition, microbiology, environmental medicine, and general practice. Environmental medicine and general practice are assessed on a continuous basis with a pass/fail evaluation, while the rest of the subjects are assessed through a 33-credit, six-hour summative exam, held at the end of the semester. This exam includes a mix of short-answer questions requiring reasoning and predominantly reasoning-based multiple-choice questions (MCQs). The grading scale for MED4 ranges from A to F, where A represents excellent performance, B is very good, C is good, D is satisfactory, E is sufficient, and F is fail.

Teaching methods in MED4 include traditional lectures, TBL sessions, practical lab courses, quizzes, and online teacher-moderated discussion forums. At least half of the teaching time is dedicated to active learning methods to enhance student engagement and understanding. All lab courses and dissections are mandatory and include compulsory assignments that must be passed.

A new and unique feature of the MED4 curriculum is the introduction of formative assignments with feedback. These activities are also mandatory and are designed to train students in presenting solutions to academic questions to an instructor who is not a specialist in the field. The instructors receive prior instruction from specialists and are provided with a written guide to ensure consistency in feedback.

Sessions are held early, midway, and towards the end of each semester. During these sessions, students receive two central assignments from specific subject areas that have recently been covered. Students have 90 min to prepare their responses, and the preparation can involve any resources, including artificial intelligence tools. After the preparation period, students present their solutions in pairs to a faculty member. Each pair is allotted 15 min, with each student presenting one of the assignments for a maximum of five minutes. Feedback is provided on whether the presentation meets, falls below, or exceeds expectations regarding clarity and accuracy for non-specialists. After these sessions, a discussion forum is created for further questions, monitored by the task owner, with participation encouraged from the entire cohort.

The renal physiology segment of MED4 encompasses an in-depth exploration of renal function, including water and electrolyte balance and acid-base homeostasis. Students are required to engage in 3 mandatory Team-Based Learning (TBL) sessions and a full-day practical lab course. Additionally, they have the option to attend 4 interactive lessons structured as 1.5-hour flipped classroom sessions. These interactive lessons require preparation in the form of reading textbook chapters, watching instructional videos, or listening to selected podcasts that succinctly cover the content traditionally delivered in lectures. These optional interactive lessons are popular, with about two-thirds of the cohort attending regularly. Complementary learning resources for this segment include suggested study group assignments, a FAQ with previously answered student questions, and ‘The Renal Pod’ podcast series. This multi-faceted approach aims to provide a comprehensive understanding of renal physiology and prepare students for clinical practice.

Data collection and study design

A 13-question survey was developed using SurveyXact to collect data for this cross-sectional study (supplementary file). The survey was designed in accordance with established guidelines [ 15 , 16 ] while being sensitive to the study context to optimize relevance for the students being surveyed. To ensure validity, we informally tested the questions with students and colleagues, and consulted with experts from the Center for Medical Education at the University of Bergen. Reliability was promoted through the use of clear and unambiguous wording to ensure consistent responses.

The survey comprised both closed- and open-ended questions, designed to identify factors students considered most important for learning renal physiology, assess the perceived difficulty of renal physiology relative to other MED4 subjects, and determine the time dedicated to studying these areas. Additionally, it evaluated the perceived utility of different learning resources, the time invested in preparing for learning activities in renal physiology, the effectiveness and optimal episode length of the renal podcast, and recommendations for the formative assignment pilot. The learning resources included in the survey were chosen based on their availability to students, previous usage in the course, and their relevance as identified in preliminary discussions with faculty and students.

The survey was distributed to 201 students enrolled in MED4. Participation was voluntary, and unique exam identifiers enabled the correlation of survey responses with exam outcomes. Anonymity was ensured, as these identifiers could not be traced back to individual students’ identities. Respondents were informed that their participation would have no bearing on their grades, and no incentives were provided. Of the 201 students, 6 withdrew before the exam, leaving 195; of these, 189 attended the exam and were considered eligible for analysis. The 72 completed responses correspond to a response rate of 38.1%.

Data analysis

The survey responses were compared to academic performance, as indicated by exam grades and scores, to discern patterns in learning efficacy and resource utilization. Respondents were categorized into three performance groups based on their MED4 exam grade A-F: ‘high performers’ (A or B, n  = 24), ‘mid performers’ (C, n  = 24), and ‘low performers’ (D, E, or F, n = 24). For each group, the percentage of responses to each closed-ended question was calculated and presented as bar charts (Figs. 2–7). The respondents’ exam scores were then organized from lowest to highest and plotted to show the cumulative distribution across the response categories (right panels of Figs. 1–2 and 4–7). In cases where fewer than eight respondents selected the same option, exam scores from respondents in adjacent categories with similar responses were combined for a more robust data representation.
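The grouping and cumulative-score construction described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' analysis code; the grade-to-group mapping follows the text, and the function and variable names are hypothetical.

```python
# Sketch of the performance grouping and cumulative-distribution steps
# described above. Names are illustrative, not from the study's code.

# Map exam grades to the three performance groups used in the analysis.
PERFORMANCE_GROUP = {"A": "high", "B": "high", "C": "mid",
                     "D": "low", "E": "low", "F": "low"}

def cumulative_distribution(scores):
    """Sort scores ascending and pair each with its cumulative fraction,
    i.e. values 1/n, 2/n, ..., n/n, ready for plotting."""
    ordered = sorted(scores)
    n = len(ordered)
    return ordered, [(i + 1) / n for i in range(n)]
```

Plotting the sorted scores against these fractions reproduces the right-panel curves: a cohort whose curve sits further right scores higher across its whole distribution.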

To further analyze the data, Likert-scale responses from the survey were converted into numerical values on scales of 1–3, 1–5, or 1–7, depending on the question. These numerical values were subsequently correlated with total exam scores and renal physiology scores using Spearman’s rho analysis. The ‘not used’ responses from Sect. 4 of the survey were excluded from this analysis. To address potential biases introduced by merging different response categories (e.g., ‘useful’ and ‘very useful’), we re-analyzed the data without merging the categories and observed similar patterns, supporting the consistency of these findings.
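As a sketch, the Likert conversion and Bonferroni-adjusted Spearman correlation might look like this. The five-point label mapping and the function names are illustrative assumptions; only `scipy.stats.spearmanr` is a real library call.

```python
# Sketch of the Likert-to-numeric conversion and Spearman/Bonferroni
# analysis described above; mapping and names are illustrative.
from scipy.stats import spearmanr

LIKERT_5 = {"not useful": 1, "not very useful": 2, "somewhat useful": 3,
            "useful": 4, "very useful": 5}

def likert_to_numeric(responses, scale=LIKERT_5):
    """Convert Likert labels to numbers, dropping 'not used' responses."""
    return [scale[r] for r in responses if r in scale]

def spearman_bonferroni(x, y, n_tests):
    """Spearman's rho with a Bonferroni-adjusted p-value (capped at 1)."""
    rho, p = spearmanr(x, y)
    return rho, p, min(p * n_tests, 1.0)
```

Multiplying each p-value by the number of tests (capped at 1) is the standard Bonferroni adjustment the paper reports alongside the uncorrected values.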

Statistical tests

Exam scores are presented as the mean values ± standard deviations (SD). The number of participants (n) in the different response cohorts is indicated in the figure legends. The differences in exam scores between the whole class ( n  = 189), the sample population ( n  = 72) and the nonresponders ( n  = 117) were assessed using one-way ANOVA and linear regression analyses, assuming a normal distribution of scores. Differences in exam scores among the different response cohorts were evaluated using one-way ANOVA, with the assumption of homogeneity of variances being met, followed by Tukey’s post hoc test for multiple comparisons. Additionally, Spearman’s rho analysis was employed to examine the relationships between students’ preferences for various learning resources and their academic performance. We used Bonferroni correction for the multiple correlations to adjust for the increased risk of Type I error and report both uncorrected and adjusted p-values. All the statistical analyses were performed using GraphPad Prism software version 10.1.1. A p-value of less than 0.05 was considered to indicate statistical significance.
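A minimal sketch of the cohort comparison described above, using SciPy's one-way ANOVA followed by Tukey's HSD. The scores below are hypothetical, not the study data, and `compare_cohorts` is an illustrative name.

```python
# Sketch of the group comparison described above: one-way ANOVA across
# three preference cohorts, then Tukey's HSD for pairwise comparisons.
# Hypothetical data; not the study's scores or code.
from scipy.stats import f_oneway, tukey_hsd

def compare_cohorts(*cohorts, alpha=0.05):
    """Return (F, p, tukey_result); run Tukey only if the ANOVA rejects."""
    f_stat, p = f_oneway(*cohorts)
    tukey = tukey_hsd(*cohorts) if p < alpha else None
    return f_stat, p, tukey

# Hypothetical exam scores (%) for 'very useful/useful', 'somewhat useful',
# and 'not very useful/not useful' cohorts.
a = [78, 72, 69, 81, 75, 70, 77, 73]
b = [74, 70, 68, 75, 71, 69, 73, 72]
c = [60, 65, 58, 66, 62, 63, 61, 64]
f_stat, p, tukey = compare_cohorts(a, b, c)
```

Gating Tukey's test on a significant omnibus ANOVA mirrors the paper's reporting, where post hoc comparisons follow a rejected ANOVA.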

Sample population and grade comparison

Out of 189 eligible MED4 students, 72 (a 38% response rate) completed the questionnaire and provided exam identifiers for anonymous grade linkage. The grade distribution centered around a ‘C’ for both the sample and the entire class, as shown in Fig. 1A. Regression analyses showed that the linear relationship between total exam scores and renal physiology scores explained a substantial share of the variance (R² = 0.61) across the whole class, the sample population, and the non-respondents. The slopes and intercepts were similar across these groups (p = 0.96), indicating a consistent pattern in the relationship between total scores and renal physiology scores. However, the sample population tended toward higher performance, particularly in the lower-performing group, as depicted in Fig. 1B. Compared with the non-respondents (n = 117), the difference in total and renal scores was statistically significant (F (5, 750) = 10.24, p < 0.0001), indicating some degree of self-selection among the respondents.
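The per-group slope, intercept, and R² comparison reported above can be sketched with `scipy.stats.linregress`; the data below are synthetic, and `fit_group` is an illustrative helper name.

```python
# Sketch of the per-group regression described above: regress
# renal-physiology scores on total exam scores and report the fit.
# Illustrative only; not the study's data or code.
from scipy.stats import linregress

def fit_group(total_scores, renal_scores):
    """Return (slope, intercept, R^2) for one group's paired scores."""
    res = linregress(total_scores, renal_scores)
    return res.slope, res.intercept, res.rvalue ** 2
```

Fitting each group separately and comparing the resulting slopes and intercepts is one way to check, as the paper does, that the total-score/renal-score relationship is consistent across respondents and non-respondents.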

Figure 1

Comparison of exam grades and scores between the whole class and the sample population. Left panel: Grades are categorized into AB (n = 24 / n = 41), C (n = 24 / n = 71), and DEF (n = 24 / n = 77). Right panel: Scores, represented as fractions of the total obtainable score, are plotted in ascending order against the cumulative fraction of students for the whole class (n = 189) and the sample population (n = 72), with values ranging from 1/n to n/n

Perceived determinants of success in renal physiology learning

When evaluating factors that are critical to successful learning of renal physiology, 83% of the students across all the performance groups considered the communication skills of teachers to be ‘Very important’. Self-effort and teachers’ professional competence followed, with 67% and 58%, respectively, of the students rating these as ‘Very important’.

Notably, all the ‘high performers’ considered self-effort ‘Important’ or ‘Very important’, while 4% of ‘mid performers’ and 8% of the ‘low performers’ rated it merely ‘Somewhat important’. Teachers’ charisma was deemed the least influential, with 42% of ‘high performers’, 16% of ‘mid performers’, and 36% of ‘low performers’ reporting it ‘very important’.

Spearman’s rho analysis showed that students’ perceived importance of teachers’ professional competence correlated positively with their performance in renal physiology (ρ = 0.23, p = 0.05), with their perceived importance of self-effort (ρ = 0.25, p = 0.03) and of teachers’ communication skills (ρ = 0.23, p = 0.05), and with their valuation of the following methods for learning renal physiology: TBL (ρ = 0.26, p = 0.03), interactive lessons (ρ = 0.29, p = 0.01), and instructive videos (ρ = 0.25, p = 0.04). However, none of these correlations remained significant after Bonferroni correction for repeated analyses (p adjusted > 0.05).

Additionally, self-effort correlated positively with the perceived usefulness of interactive lessons (ρ = 0.32, p = 0.007, p adjusted > 0.05) and instructive videos (ρ = 0.30, p = 0.01, p adjusted > 0.05). In contrast, teachers’ charisma correlated negatively with the time spent preparing for the renal physiology lab (ρ = -0.34, p = 0.004, p adjusted > 0.05) and for TBL (ρ = -0.24, p = 0.04, p adjusted > 0.05), as well as with the valuation of ‘The Renal Pod’ (ρ = -0.29, p = 0.01, p adjusted > 0.05) and the renal physiology lab (ρ = -0.23, p = 0.05, p adjusted > 0.05).

Interestingly, the reported time ‘spent learning renal physiology compared to other subjects in MED4’ correlated positively with its perceived difficulty (ρ = 0.43, p < 0.001, p adjusted < 0.01) and with the perceived difficulty of heart physiology (ρ = 0.45, p < 0.001, p adjusted < 0.01). Further, the perceived difficulty of renal physiology correlated positively with that of heart physiology (ρ = 0.51, p < 0.001, p adjusted < 0.01) and negatively with the valuation of TBL (ρ = -0.34, p = 0.004, p adjusted > 0.05), the textbook (ρ = -0.33, p = 0.02, p adjusted > 0.05), and the recommendation to continue formative assignments with feedback (ρ = -0.30, p = 0.04, p adjusted > 0.05).

Linking academic performance with students’ valuation of learning tools

The most favored educational materials among the respondents were instructive videos (96%), interactive lessons (94%), renal physiology labs (89%), and The Renal Pod (79%), each rated as ‘very useful’, ‘useful’ or ‘somewhat useful’.

As shown in Figs.  2 and 3 , ‘high performers’ and ‘mid performers’ predominantly rated ‘Interactive lessons’ (79%), ‘Asynchronous videos’ (75%), and ‘TBL’ (60%) as ‘very useful’ or ‘useful’. Conversely, ‘low performers’ preferred ‘Asynchronous videos’ (79%), ‘The Renal Pod’ (63%), and ‘Renal physiology lab’ (58%) as their top learning resources.

Figure 2

Comparison of the perceived usefulness of active learning methods and academic performance. The left panel shows the distribution of students’ preferences across different grade categories: high (AB), mid (C), and low (DEF) performers. The bars are divided into segments representing preference cohorts as indicated by the legends. The right panels depict exam scores sorted from low to high as a function of the cohort-normalized number of students. Upper panel: significant differences in exam scores were observed among the preference cohorts for ‘TBL’. Mean exam scores for the three cohorts (right panel): ‘Very useful/useful’; 74.6% ± 9.7%, n = 40, ‘Somewhat useful’; 72.8% ± 10.0%, n = 20, ‘Not very useful/not useful’; 63.4% ± 9.5%, n = 12. ANOVA: F (2, 69) = 6.121, p = 0.0036. Post hoc tests: **p = 0.0024 between the ‘Very useful/useful’ and ‘Not very useful/not useful’ cohorts, and *p = 0.0286 between the ‘Somewhat useful’ and ‘Not very useful/not useful’ cohorts. Middle and lower panels: no significant differences in exam scores were seen among the preference cohorts for ‘Interactive lessons’ or ‘Renal physiology lab’

Students who rated active learning methods (TBL, interactive lessons and the renal physiology lab) as most useful also obtained the highest exam scores (Fig. 2). This trend was particularly pronounced for TBL, where the average exam score was 74.6% ± 9.7% in the ‘Very useful/useful’ cohort, compared with 63.4% ± 9.5% in the ‘Not very useful/not useful’ cohort (p = 0.002). The ‘Somewhat useful’ cohort also outperformed those who found TBL ‘not useful’ or ‘not very useful’ (p = 0.029).

‘Other resources’, representing self-found online materials, and ‘Textbook’ were perceived as the least useful across all groups (Fig.  3 ). Notably, a trend emerged where ‘high performers’ seemed to value textbooks more than ‘low performers’ (21% vs. 8%), while ‘low performers’ showed a greater inclination towards ‘Other resources’ compared to ‘high performers’ (50% vs. 25%).

Figure 3

Comparison of perceived usefulness of indicated learning methods and academic performance ( p  > 0.05). See Fig.  2 for a detailed explanation of the panel structure

Spearman’s rho analysis further supported these observations, revealing positive correlations between overall MED4 exam score and students’ valuation of TBL (ρ = 0.28, p = 0.02, p adjusted > 0.05) and interactive lessons (ρ = 0.27, p = 0.03, p adjusted > 0.05). Conversely, MED4 exam score correlated negatively with the perceived usefulness of self-found online resources (ρ = − 0.27, p = 0.04, p adjusted > 0.05), suggesting that reliance on these resources might not be as beneficial for academic success.

Perception of subject difficulty and its relationship with academic performance

The survey data showed that circulatory and renal physiology were the most challenging subjects, with only 4% of students finding them ‘Very easy’ or ‘easy’. Specifically, 71% rated circulatory physiology and 33% rated renal physiology as ‘difficult’ or ‘very difficult’. In contrast, nutrition was perceived as the least challenging, with 58% considering it ‘easy’ or ‘very easy’, and none finding it ‘very difficult’.

Students who perceived subjects as less challenging tended to perform better academically. For instance, only 4% of ‘high performers’ found renal physiology ‘very difficult,’ compared to 8% of ‘mid performers’ and 21% of ‘low performers’. This expected trend across disciplines is exemplified in Fig.  4 , which shows that perceived difficulty in renal physiology and endocrinology is associated with lower exam scores.

Figure 4

Comparison of perceived difficulty of renal physiology and academic performance. See Fig. 2 for a detailed explanation of the panel structure. Upper panel: significant differences in exam scores were observed based on the perceived difficulty of renal physiology. Mean exam scores for the three cohorts (right panel): ‘Easy/moderate’; 75.1% ± 9.7%, n = 32, ‘Hard’; 71.8% ± 10.3%, n = 33, ‘Very hard’; 64.3% ± 10.8%, n = 8. ANOVA: F (2, 70) = 3.793, p = 0.0273. Post hoc tests: *p = 0.0228 between the ‘Easy/moderate’ and ‘Very hard’ cohorts. Lower panel: significant differences in exam scores were observed based on the perceived difficulty of endocrinology. Mean exam scores for the three cohorts (right panel): ‘Very easy/easy’; 79.5% ± 7.8%, n = 13, ‘Moderate’; 73.8% ± 9.2%, n = 39, ‘Hard/very hard’; 64.9% ± 9.7%, n = 21. ANOVA: F (2, 70) = 11.49, p < 0.0001. Post hoc tests: ****p < 0.0001 between the ‘Very easy/easy’ and ‘Hard/very hard’ cohorts, and **p = 0.0017 between the ‘Moderate’ and ‘Hard/very hard’ cohorts

These observations align with Spearman’s rho analysis, which revealed negative correlations between MED4 exam score and perceived difficulty across the MED4 curriculum: endocrinology (ρ = −0.52, p < 0.0001, adjusted p < 0.01), heart physiology (ρ = −0.35, p = 0.002, adjusted p > 0.05), renal physiology (ρ = −0.30, p = 0.01, adjusted p > 0.05), and the digestive system (ρ = −0.25, p = 0.04, adjusted p > 0.05).

Similarly, perceived difficulty in endocrinology correlated positively with perceived difficulty in other topics such as the digestive system (ρ = 0.46, p < 0.001, adjusted p < 0.01), respiratory system (ρ = 0.29, p = 0.01, adjusted p > 0.05), heart (ρ = 0.27, p = 0.02, adjusted p > 0.05), and circulation (ρ = 0.26, p = 0.03, adjusted p > 0.05). These correlations, along with negative correlations with MED4 exam score (ρ = −0.52, p < 0.001, adjusted p < 0.01), renal score (ρ = −0.34, p = 0.003, adjusted p > 0.05), and the valuation of TBL (ρ = −0.24, p = 0.04, adjusted p > 0.05), support the conclusion that students’ perceptions of subject difficulty align with both their preferred learning methods and their academic outcomes in the broader MED4 curriculum.

The influence of preparation time on academic performance in active learning settings

The left panels of Fig. 5 indicate that mid performers (grade C) are more likely to dedicate extensive time to preparation (more than 2 h), whereas high performers (grades A and B) tend to be more efficient, with shorter preparation times (less than 30 min). Among low performers (grades D, E, and F), the most frequently reported preparation time is less than 30 min, followed by 1–2 h, with a notable portion also reporting no preparation.

figure 5

Comparison of time spent preparing for active learning sessions and academic performance. See Fig. 2 for a detailed explanation of the panel structure. Upper and middle panels: no overall significant difference for TBL and interactive lessons (p > 0.05). Lower panel: mean exam scores for the four cohorts (right panel) based on time spent preparing for the ‘Renal physiology lab’: ‘Not prepared’, 72.4% ± 7.9%, n = 10; ‘Less than 30 min’, 75.1% ± 8.9%, n = 31; ‘More than 1 h’, 67.2% ± 12.5%, n = 19; ‘More than 2 h’, 73.0% ± 9.7%, n = 14. ANOVA: F(3, 70) = 2.498, p = 0.0667. Post hoc tests: *p = 0.0400 between the ‘Less than 30 min’ and ‘More than 1 h’ cohorts

As shown in Fig. 5 (lower right panel), an 11% lower exam score was noted in the cohort spending more than 1 h (67.2% ± 12.5%) compared to those spending less than 30 min (75.1% ± 8.9%) on preparation for the ‘Renal physiology lab’ (adjusted p = 0.04). This suggests that while time spent preparing is a factor, the quality and effectiveness of study strategies are likely more crucial for academic performance.
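
The ‘11% lower’ figure is a relative difference between cohort means, not a percentage-point gap; the arithmetic is simply:

```python
# Cohort means from Fig. 5, expressed as fractions of the maximum score
less_than_30min = 0.751   # 'Less than 30 min' cohort
more_than_1h = 0.672      # 'More than 1 h' cohort

# Relative drop, taking the higher-scoring cohort as the baseline
relative_drop = (less_than_30min - more_than_1h) / less_than_30min
print(f"{relative_drop:.1%}")  # about 10.5%, reported as ~11%
```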

Notably, a positive correlation was observed between the reported time students spent on interactive lectures and their perceived usefulness (ρ = 0.38, p = 0.01, adjusted p > 0.05), as well as with the time dedicated to preparation for these lectures (ρ = 0.51, p < 0.0001, adjusted p < 0.01) and the ‘Renal physiology lab’ (ρ = 0.43, p < 0.0001, adjusted p < 0.01). Conversely, a negative correlation was found with ‘self-found online resources’ (ρ = −0.38, p = 0.008, adjusted p > 0.05). However, as with time spent on other available learning methods, no correlation was found between reported preparation time and exam score.

These findings highlight the interconnectedness between the time spent on preparation for active learning methods, their perceived value, and academic performance.

Comparison of the time spent on renal physiology relative to other subjects

While approximately 70% of respondents reported spending more time on renal physiology compared to other MED4 subjects, this increased study time did not translate into significantly different academic performance across the cohorts (Fig.  6 upper panel). Specifically, the average exam score for respondents who spent considerably more time on renal physiology was 69.6% ± 12.4%, compared to 72.2% ± 9.9% for those who spent more time, and 73.4% ± 11.0% for those who spent a similar amount or less time studying the subject ( p  = 0.69).

figure 6

The impact of study time and podcast engagement on academic performance. See Fig. 2 for a detailed explanation of the panel structure. Upper panel: no significant difference in academic performance based on time spent studying renal physiology (p > 0.05). Lower panel: significant differences in performance based on podcast usage. Mean cohort exam scores (right panel): ‘All the episodes multiple times’, 71.4% ± 10.0%, n = 13; ‘All episodes once’, 76.3% ± 8.7%, n = 22; those who listened to only some episodes, 66.2% ± 8.7%, n = 16. ANOVA: F(4, 67) = 2.958, p = 0.0259. Post hoc test: *p = 0.0217

Podcast engagement and its impact on academic performance

As depicted in Fig. 6 (lower left panel), a substantial majority (85%) of respondents engaged with ‘The Renal Pod’, with 63% having listened to most or all episodes at least once, and 18% reporting multiple listenings of all episodes. Notably, high-performing students (AB), in contrast to their low-performing peers (DEF), predominantly listened to all the episodes once (46%) or not at all (21%), suggesting a strategic approach to podcast utilization.

The results revealed a statistically significant difference (p = 0.026) in academic performance based on podcast usage (Fig. 6, lower right panel). Specifically, students who listened to all episodes once achieved higher mean exam scores (76.3% ± 8.7%) compared to those who listened to only some episodes (66.2% ± 8.7%). This finding suggests that moderate and consistent engagement with the podcast is more beneficial for academic success than sporadic listening.

Despite these nuanced usage patterns, a strong desire for more podcast-based learning resources was expressed across all performance levels, with 69 out of 72 respondents advocating for broader podcast availability.

The impact of formative assignment perceptions on academic performance

The introduction of the formative assignment pilot in Spring 2022 allowed us to examine its impact on the academic performance of MED4 students. This assignment involved collaborative problem-solving and individual feedback sessions, as described in the study context.

Students across all grade categories expressed a preference for the expansion or continuation of the formative assignment. Specifically, 50% of AB, 42% of C, and 46% of DEF students supported this view (Fig.  7 , left panel). However, 29% of DEF students recommended discontinuation compared to only 4% of AB students (red segments in Fig.  7 , left panel). Additionally, students could respond with ‘other’, which required a free-text explanation. The majority of these free-text responses also supported the formative assignment, indicating a general preference for its continuation or expansion.

figure 7

Comparison of academic performance and recommendations for the formative assignment, showing significant differences in exam scores based on students’ perceptions of the formative assignment. See Fig. 2 for a detailed explanation of the panel structure. Mean exam scores for the three cohorts (right panel): ‘Should be expanded or continued’, 73.2% ± 10.3%, n = 33; ‘Should be discontinued’, 63.4% ± 10.4%, n = 12; and ‘Other’, 75.0% ± 8.7%, n = 27. ANOVA: F(2, 69) = 6.136, p = 0.0035. Post hoc tests: *p = 0.0117 between the ‘Should be expanded or continued’ and ‘Should be discontinued’ cohorts; **p = 0.0029 between the ‘Other’ and ‘Should be discontinued’ cohorts

The right panel of Fig. 7 shows that students advocating for the continuation or expansion of the formative assignment had a 13% higher mean exam score (73.2% ± 10.3%) than those who preferred its discontinuation (63.4% ± 10.4%, adjusted p = 0.01). ‘Other’ respondents, who largely supported the pilot, had a 15% higher mean exam score (75.0% ± 8.7%, adjusted p = 0.003) than the discontinuation cohort. Altogether, these results indicate potential benefits of the formative assignment approach.

Discussion

Our study explored the correlations between time spent studying, students’ perceptions of the utility of learning resources, and their academic performance within the MED4 curriculum, aiming to identify characteristics of successful learners in medical education. While our focus was on renal physiology, primarily due to its comprehensive use of active learning resources, the exam results we analyzed encompassed the entire MED4 curriculum, including scores in specific subjects.

The findings reveal a correlation between students’ performance in renal physiology and their overall academic success, as well as an interconnectedness between time spent on preparation for active learning methods, their perceived value, and academic performance. This suggests that the learning strategies and resources employed in this challenging subject have transferable value to other academic areas.

In line with our findings, Bin Abdulrahman et al. [ 17 ] highlighted that highly effective medical students tend to employ structured study habits and strategic use of learning resources, which are critical for academic success. In their study, top-performing students reported regular revision, active engagement with learning materials, and a preference for diverse study methods. These habits align with our observation that high-performing students (grades A and B) are more likely to engage with active learning resources such as Team-Based Learning (TBL), interactive lessons, and formative assignments.

We observed that both high and low performers prepared for teaching, with a tendency for most students to report investing more time in learning renal physiology compared to other subjects. However, a significant correlation between time spent studying and academic performance was not evident. This observation, reflecting the heterogeneity within both high and low performer groups, highlights how individual differences in learning strategies, cognitive capabilities, and motivation levels can influence the effectiveness of study time. Although we did not measure students’ prior knowledge, it seems that the essence of what distinguishes more effective learners is not the quantity, but rather the quality of their study time. Aligning with this concept, findings from West and Sadoski emphasize the importance of effective time management skills in higher education [18]. Several other studies support this [19, 20, 21, 22]. For example, Liles et al. reported that 77% of students achieving ‘A’ grades reviewed lecture material on the same day, compared to just 25% of those with ‘C’ grades. Additionally, high achievers were more likely to attend classes, limit online lecture usage, and study for 6–8 hours daily [21]. In addition to time management skills, West and Sadoski suggest that self-testing skills may improve academic performance [18]. This aligns with the extensive body of work on retrieval practice, where Roediger and Karpicke, Karpicke and Blunt, and Dobson have demonstrated the efficacy of retrieval practice in enhancing long-term retention and academic performance [23, 24, 25]. This may partly explain why TBL was one of the most popular resources among high performers in our study, as it provided questions that enabled students to test their understanding of the subject. Emke et al. also found that team-based learning can enhance short-term knowledge acquisition, though its long-term effects are less clear without continued practice [26].

Bansal et al. also found that high-performing students had a preference for deep and strategic learning strategies, contrary to low-performing students, who mostly used the surface approach to learning [19]. The classification of deep and surface learning approaches, along with their implications for educational practice, is based on the seminal work of Marton and Säljö [27]. While the deep approach emphasizes understanding concepts and relating ideas, the surface approach emphasizes rote memorization [28].

While TBL shares some similarities with flipped classrooms, such as pre-class preparation and active learning during class, there are key differences. In a flipped classroom, students typically engage with lecture material at home through videos or readings, and then participate in interactive activities during class to deepen their understanding. TBL, on the other hand, emphasizes team-based activities where students work together to solve problems and apply concepts during class. Both approaches aim to enhance student engagement and learning outcomes, but TBL specifically focuses on collaborative learning and peer-to-peer teaching, which can foster a deeper understanding through group discussion and problem-solving.

We observed an 11% improvement in academic performance, equivalent to an absolute difference of 20 points out of 179, among respondents who favored TBL and who supported the continuation of formative assignments compared to those who found TBL less favorable and recommended discontinuation of the formative assignment pilot. Notably, formative assignments were not conducted in renal physiology but in other subjects such as cardiology, endocrinology, and digestion and nutrition. Therefore, the potential influence of formative feedback on renal physiology exam performance was not directly evaluated. This contrast suggests that some students might be less adept at interpreting and constructively utilizing the feedback inherent in TBL and formative assignments. In the literature, this ability is described as feedback literacy, denoting “the understandings, capacities and dispositions needed to make sense of information and use it to enhance work or learning strategies” [29]. Furthermore, our findings align with studies on flipped classroom methodologies in physiology, which found improved performance in new students [30] and enhanced students’ learning effectiveness and skills like self-study and problem-solving [31].

In addition, low performers tended to rank ‘self-effort’ and ‘textbook’ as less important factors for learning renal physiology and to rely more on ‘self-found online materials’ and ‘The Renal Pod’ compared to their better-performing counterparts. Their perception of subject difficulty, especially in challenging areas like renal physiology, was negatively correlated with academic performance. This attitude, combined with their lower satisfaction with TBL and formative assignments, characterizes low-performing students and aligns with their preference for passive learning resources [20, 32, 33]. This preference may stem from their perception of active learning methods as more cognitively demanding, as found by Deslauriers et al. [7].

While these methods are associated with better academic performance, as supported by the present study, the perceived challenge might drive some students towards passive resources which they believe are less demanding. This highlights a potential disconnect between student perceptions of learning resource effectiveness and their actual impact on academic performance. Moreover, the distinction aligns with the findings of Roediger and Karpicke, who demonstrated that students’ predictions of their own learning are often uncorrelated with actual performance, emphasizing the critical role of retrieval practice in consolidating learning [ 34 ].

An interesting finding in our study is the distinct podcast listening habits among the different student performance groups. While low performers rated ‘The Renal Pod’ as highly useful and 92% engaged with it, their listening patterns varied, with a notable proportion not completing the episodes. In contrast, high performers predominantly listened to all episodes once or not at all, suggesting that a strategic approach to using podcasts as a learning tool is associated with better academic performance in medical education.

Previous literature shows conflicting reports on whether educational podcasts can help students improve their examination scores [ 35 ]. McCarthy et al. argue that podcasts have potential as a supplement to existing curricula, where they can fulfill the need for interested learners [ 36 ]. However, if students perceive podcasts as a replacement for other learning resources, they risk reduced learning efficiency. In particular, since students often engage in other activities while listening to podcasts and listen at double speed, the educational impact may be limited [ 36 ].

In our study, textbooks were perceived as the least useful learning resource, regardless of academic performance. Previous research has indicated that replacing textbooks with evidence-based articles and summary questions does not have a negative impact on students’ academic performance or satisfaction [ 37 ]. Furthermore, studies have revealed that older students do not advise new students to buy many textbooks but rather focus on PowerPoints from professors, old exams and summary notes [ 38 ]. Since teachers typically create exams, this may lead to bias toward the lecture material and undermine students’ motivation to use textbooks. Moreover, when experiencing curriculum overload, many medical students may be compelled to adopt coping strategies and surface approaches to learning.

Although this study was conducted at a single medical school, the findings may have broader applicability. The curriculum at the University of Bergen shares similarities with many European medical schools, particularly in the preclinical phase, where integrated curricula and active learning methods are widely used. This suggests that our findings could provide valuable insights for other educational contexts employing similar strategies. Extending this research to a variety of settings would further validate these results and enhance their generalizability.

Our findings suggest that educators in similar contexts consider placing greater emphasis on active learning resources, such as TBL and formative assignments, to foster deeper learning. By strategically integrating active teaching methods into the curriculum, medical schools may enhance student engagement and academic performance, particularly in complex subjects like renal physiology.

In summary, our findings highlight the relationship between student engagement with learning materials and academic performance. Although we measured correlations between reported use or preferences and academic success, rather than the quality of engagement, we argue that the strategic use of active learning methods and resources like ‘The Renal Pod’ is more critical than the quantity of study time. This emphasizes the need to teach students effective study habits. Consequently, educational approaches should extend beyond content delivery to include fostering skills in resource selection, time management, and feedback utilization. Finally, our study implies that assignment methods should be carefully considered for their educational impact, as they are likely to influence student learning behavior and outcomes.

Limitations

This study, while providing valuable insights into the learning resource preferences of successful learners in medical education, has several limitations.

Correlation vs. causation

It is important to acknowledge that correlation does not imply causation. While our correlation analyses highlight relationships between learning resource usage and exam scores, these associations may be influenced by other factors such as students’ intrinsic motivation, prior knowledge, and engagement levels.

Self-selection bias

The response rate of 38% raises concerns about potential self-selection bias, as the respondents might have been more engaged or motivated, potentially influencing the study’s outcomes. This is evidenced by our finding that the sample population exhibited a higher average total score compared to non-respondents, indicating some level of self-selection among the respondents. Even though the sample population’s academic performance in renal physiology was consistent with the overall class, the self-selection bias might limit the generalizability of our findings. Future studies with larger sample sizes are needed to validate these findings.

Exam scores vs. deep understanding

High exam scores do not necessarily equate to a deep understanding of the subject matter. The exam’s broad scope and its combination of essay questions and reasoning-based MCQs aim to reduce the likelihood of achieving good results solely through memorization. Further studies with practical assessments and long-term retention tests are recommended to better evaluate deep understanding and learning achievements.

Survey bias

The reliance on surveys for data collection introduces potential biases as students might not accurately report their study behaviors. These biases should be considered when interpreting the correlations between learning resource preferences and academic performance.

Demographic data

The absence of demographic data leaves potential factors influencing students’ preferences and performances unexplored. This decision was made to maintain participant anonymity and streamline the survey process, but we recognize that demographic factors could influence the results. The absence of questions regarding traditional lectures, which were not used in renal physiology teaching, may also limit the comprehensiveness of our findings.

Generalizability

While the study was conducted in a single institution, the relevance of the findings has been discussed in detail earlier. However, caution should still be exercised when applying these results to different educational contexts without further validation.

Conclusions

Students who perform well on exams tend to prefer active learning strategies and make strategic use of resources, suggesting that the quality of study time impacts academic performance more than the quantity. Based on these findings, we recommend that educators consider integrating student-active teaching methods into the curriculum and providing guidance on effective study practices to enhance learning outcomes.

Data availability

All data supporting the findings of this study are contained within the manuscript, the accompanying figures, and the supplementary file. Additional data related to this study are available from the corresponding author upon reasonable request.

Abbreviations

UiB: The University of Bergen

ECTS: European Credit Transfer and Accumulation System

MCQ: Multiple choice questions

SD: Standard deviation

TBL: Team-Based Learning

CBL: Case-Based Learning

Reference list

Newble DI, Gordon MI. The learning style of medical students. Med Educ. 1985;19(1):3–8.

Abdulghani HM, Al-Drees AA, Khalil MS, Ahmad F, Ponnamperuma GG, Amin Z. What factors determine academic achievement in high achieving undergraduate medical students? A qualitative study. Med Teach. 2014;36(sup1):S43–8.

Pashler H, McDaniel M, Rohrer D, Bjork R. Learning styles: concepts and evidence. Psychol Sci Public Interest. 2008;9(3):105–19.

Wynter L, Burgess A, Kalman E, Heron JE, Bleasel J. Medical students: what educational resources are they using? BMC Med Educ. 2019;19(1):36.

Bhalli MA, Khan IA, Sattar A. Learning style of medical students and its correlation with preferred teaching methodologies and academic achievement. J Ayub Med Coll Abbottabad. 2015;27(4):837–42.

Bansal S, Bansal M, Ahmad KA, Pandey J. Effects of a flipped classroom approach on learning outcomes of higher and lower performing medical students: a new insight. Adv Educational Res Evaluation. 2020;1(1):24–31.

Deslauriers L, McCarty LS, Miller K, Callaghan K, Kestin G. Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom. Proc Natl Acad Sci U S A. 2019;116(39):19251–7.

Holland JC, Pawlikowska T. Undergraduate medical students’ usage and perceptions of anatomical case-based learning: comparison of facilitated Small Group discussions and eLearning resources. Anat Sci Educ. 2019;12(3):245–56.

Minhas PS, Ghosh A, Swanzy L. The effects of passive and active learning on student preference and performance in an undergraduate basic science course. Anat Sci Ed. 2012;5(4):200–7.

Ramnanan CJ, Pound LD. Advances in medical education and practice: student perceptions of the flipped classroom. Adv Med Educ Pract. 2017;8:63–73.

Lv JC, Zhang LX. Prevalence and disease burden of chronic kidney disease. Adv Exp Med Biol. 2019;1165:3–15.

Roberts JK, Sparks MA, Lehrich RW. Medical student attitudes toward kidney physiology and nephrology: a qualitative study. Ren Fail. 2016;38(10):1683–93.

Jhaveri KD, Sparks MA, Shah HH, Khan S, Chawla A, Desai T, et al. Why not Nephrology? A survey of US Internal Medicine Subspecialty fellows. Am J Kidney Dis. 2013;61(4):540–6.

Nair D, Pivert KA, Baudy A, Thakar CV. Perceptions of nephrology among medical students and internal medicine residents: a national survey among institutions with nephrology exposure. BMC Nephrol. 2019;20(1):146.

Artino AR, La Rochelle JS, Dezee KJ, Gehlbach H. Developing questionnaires for educational research: AMEE Guide 87. Med Teach. 2014;36(6):463–74.

Artino AR, Phillips AW, Utrankar A, Ta AQ, Durning SJ. The questions shape the answers: assessing the quality of published survey instruments in health professions education research. Acad Med. 2018;93(3):456–63.

Bin Abdulrahman KA, Khalaf AM, Bin Abbas FB, Alanazi OT. Study habits of highly effective medical students. Adv Med Educ Pract. 2021;12:627–33.

West C, Sadoski M. Do study strategies predict academic performance in medical school? Med Educ. 2011;45(7):696–703.

Bansal S, Bansal M, White S. Association between learning approaches and medical student academic progression during preclinical training. Adv Med Educ Pract. 2021;12:1343–51.

Bickerdike A, O’Deasmhunaigh C, O’Flynn S, O’Tuathaigh C. Learning strategies, study habits and social networking activity of undergraduate medical students. Int J Med Educ. 2016;7:230–6.

Liles J, Vuk J, Tariq S. Study habits of medical students: an analysis of which study habits most contribute to success in the preclinical years [version 1]. MedEdPublish. 2018;7(61).

Zhou Y, Graham L, West C. The relationship between study strategies and academic performance. Int J Med Educ. 2016;7:324–32.

Roediger HL, Karpicke JD. Test-enhanced learning: taking memory tests improves long-term retention. Psychol Sci. 2006;17(3):249–55.

Karpicke JD, Blunt JR. Retrieval practice produces more learning than elaborative studying with concept mapping. Science. 2011;331(6018):772–5.

Dobson JL. Retrieval practice is an efficient method of enhancing the retention of anatomy and physiology information. Adv Physiol Educ. 2013;37(2):184–91.

Emke AR, Butler AC, Larsen DP. Effects of team-based learning on short-term and long-term retention of factual knowledge. Med Teach. 2016;38(3):306–11.

Marton F, Säljö R. On qualitative differences in learning: I. Outcome and process. Br J Educ Psychol. 1976;46(1):4–11.

Samarakoon L, Fernando T, Rodrigo C, Rajapakse S. Learning styles and approaches to learning among medical undergraduates and postgraduates. BMC Med Educ. 2013;13(1):42.

Carless D, Boud D. The development of student feedback literacy: enabling uptake of feedback. Assess Evaluation High Educ. 2018;43(8):1315–25.

Sanchez JC, Lopez-Zapata DF, Pinzon OA, Garcia AM, Morales MD, Trujillo SE. Effect of flipped classroom methodology on the student performance of gastrointestinal and renal physiology entrants and repeaters. BMC Med Educ. 2020;20(1):401.

Lu C, Xu J, Cao Y, Zhang Y, Liu X, Wen H, et al. Examining the effects of student-centered flipped classroom in physiology education. BMC Med Educ. 2023;23(1):233.

Cen XY, Hua Y, Niu S, Yu T. Application of case-based learning in medical student education: a meta-analysis. Eur Rev Med Pharmacol Sci. 2021;25(8):3173–81.

Goyal P, Parmar C, Udhan V. Active learning in renal physiology: a students’ perspective and its outcome. Natl J Physiol Pharm Pharmacol. 2020;10(9):701–4.

Karpicke JD, Roediger HL 3rd. The critical importance of retrieval for learning. Science. 2008;319(5865):966–8.

Cho D, Cosimini M, Espinoza J. Podcasting in medical education: a review of the literature. Korean J Med Educ. 2017;29(4):229–39.

McCarthy J, Porada K, Treat R. Educational Podcast Impact on Student Study Habits and Exam Performance.

Ju C, Bove J, Hochman S. Does the removal of textbook reading from emergency medicine resident education negatively affect in-service scores? West J Emerg Med. 2020;21(2):434–40.

Schlenker M, Joseph P. Are printed textbooks obsolete in medical education? University of Toronto Medical Journal. 2008.

Acknowledgements

We thank the students who participated in the survey for their contributions. Special thanks to Professor Arne Tjølsen for his assistance with data analysis, and to Birgitte Skjeldal Hageseter and Bianca Cecilie Nygård for their help in distributing the survey. We also gratefully acknowledge Media City Bergen for providing the facilities and support necessary to produce ‘The Renal Pod’ podcast.

Open access funding provided by University of Bergen.

Author information

Sofie Fagervoll Heltne and Sigrid Hovdenakk contributed equally to this work.

Authors and Affiliations

Department of Biomedicine, University of Bergen, Jonas Lies vei 91, Bergen, N- 5009, Norway

Sofie Fagervoll Heltne, Sigrid Hovdenakk & Olav Tenstad

Center for Medical Education, Department of Clinical Medicine, University of Bergen, Bergen, Norway

Monika Kvernenes

Contributions

SFH, SH, and OT conceived the idea for the study. SFH, SH, OT, and MK designed the survey questionnaire. The data analysis and interpretation were carried out by SFH, SH, OT, and MK. The initial draft of the manuscript was written by SFH and SH, while MK and OT provided guidance throughout the study, with OT making the final revision to the manuscript. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Olav Tenstad .

Ethics declarations

Ethics approval and consent to participate

Research ethics guidelines were followed, and the project, identified as S1872, is registered in RETTE (System for Risk and Compliance), a system for processing personal data in research and student projects at the University of Bergen. The introduction to the survey questionnaire informed participants about the collection of candidate numbers and the linking of answers to exam grades. Participants gave informed consent by completing the questionnaire.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Heltne, S.F., Hovdenakk, S., Kvernenes, M. et al. Study preferences and exam outcomes in medical education: insights from renal physiology. BMC Med Educ 24, 973 (2024). https://doi.org/10.1186/s12909-024-05964-4

Received: 22 December 2023

Accepted: 27 August 2024

Published: 06 September 2024

DOI: https://doi.org/10.1186/s12909-024-05964-4

Keywords

  • Renal physiology
  • Teaching methods
  • Medical school
  • Learning resources
  • Academic achievement
  • Medical students
  • Active learning

BMC Medical Education

ISSN: 1472-6920
