Original Post
Changing The Grade Boundaries
Apparently in the UK, our government changes the grade boundaries every year based on the last GCSE results. So if you got 30 marks worth a C this year, those same 30 marks would be worth a D the next year if they raised the boundaries, or a B if they lowered them. My English teacher told me that a student at the school had done a test and got an E. She redid it a year later, got the same marks, and it was worth a D. Is this fair? I think not. What are your opinions?
I was under the impression that the grade boundaries were adjusted based on how well students did on the exams, as a way of dealing with the fact that different exam papers are not equal in difficulty. So if loads of students get low scores one year then they will lower the percentages needed for a certain grade, in order to compensate for the fact that they obviously had a difficult exam paper. So, with the student you mention, while they got the same marks, presumably the second paper was considered more challenging, and grade boundaries were moved accordingly.
That is the aim, but in practice it is basically impossible to produce 2 exams of identical difficulty, let alone year after year. If you're interested in the details, have a read of this: http://store.aqa.org.uk/over/pdf/GUIDETOSTANDARDSETTING.PDF

It's for the AQA exam board, but I believe similar techniques are used by the other boards.
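
To make that boundary-shifting mechanic concrete, here is a minimal Python sketch. The boundary numbers are invented purely for illustration (real GCSE boundaries are published per paper by the exam board): the same 30 raw marks land on a different grade simply because the cut-offs moved.

```python
# Minimal sketch of grade lookup against a set of boundaries. The boundary
# values below are made up for illustration, not real published figures.

def grade_for(mark, boundaries):
    """Return the highest grade whose minimum mark is met or exceeded."""
    for grade, minimum in sorted(boundaries.items(), key=lambda kv: kv[1], reverse=True):
        if mark >= minimum:
            return grade
    return "U"  # unclassified

boundaries_year_1 = {"A": 45, "B": 38, "C": 30, "D": 22, "E": 15}  # hypothetical
boundaries_year_2 = {"A": 48, "B": 41, "C": 33, "D": 25, "E": 18}  # hypothetical, raised cut-offs

print(grade_for(30, boundaries_year_1))  # C
print(grade_for(30, boundaries_year_2))  # D - same mark, higher boundaries
```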
Originally Posted by Vorlons View Post
I was under the impression that the grade boundaries were adjusted based on how well students did on the exams, as a way of dealing with the fact that different exam papers are not equal in difficulty. So if loads of students get low scores one year then they will lower the percentages needed for a certain grade, in order to compensate for the fact that they obviously had a difficult exam paper. So, with the student you mention, while they got the same marks, presumably the second paper was considered more challenging, and grade boundaries were moved accordingly.

I'll talk about the Australian system since I know it better, but it's going to be similar whatever the system.

The mechanism you are talking about is actually two things. The first is normalization. They want the same distribution of marks each year, so once they have everyone's marks they adjust them up or down: if the test was too easy, people get marked down; if it was too hard, they get marked up. It's pretty rare for an entire test to be too easy or too hard - usually some sections are too hard and some too easy - so instead of a nice spread of marks you get a lot of people getting the same questions right and wrong, which leads to, for example, a big cluster of people around 70%, which is undesirable because that is too high. The marks are adjusted to give the same distribution each year. In general this means people near the middle hardly move, people lower down get pushed further down, and people with higher marks get pushed even higher.
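
Roughly, in code, that normalization step looks something like the sketch below: rank everyone's raw mark and read the adjusted mark off a fixed target curve, so every year ends up with the same shape of results. The target mean and spread, and the tie handling, are made-up assumptions for the example, not any board's actual procedure.

```python
# Rough sketch of normalization by quantile mapping: each raw mark is replaced by
# the mark at the same percentile of a fixed target curve. Target mean/spread are
# invented for illustration; ties between equal raw marks are broken arbitrarily.
from statistics import NormalDist

def normalise(raw_marks, target_mean=60.0, target_sd=12.0):
    """Map raw marks onto a fixed normal curve by percentile rank."""
    n = len(raw_marks)
    order = sorted(range(n), key=lambda i: raw_marks[i])   # indices from lowest to highest mark
    target = NormalDist(mu=target_mean, sigma=target_sd)
    adjusted = [0.0] * n
    for rank, i in enumerate(order):
        pct = (rank + 0.5) / n              # mid-rank percentile, keeps values strictly in (0, 1)
        adjusted[i] = target.inv_cdf(pct)   # mark at the same percentile on the target curve
    return adjusted

# A paper where most people bunched up around 70%: the cluster gets spread back out.
raw = [68, 70, 71, 69, 72, 45, 90, 70, 66, 73]
print([round(m, 1) for m in normalise(raw)])
```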

In Australia, the final grades used for university entrance are based on both your school marks and your exam marks, so there is a second phase known as moderation, in which your school marks are also normalized to match the national standard. If you go to a school known for being hard, your marks will be increased, because your school's averages will probably sit below the national average. But because a school that simply has a weak group of students may also have low averages, the school marks are actually normalized against the exam marks - for example, if a school has an average class mark of 60 but an average exam mark of 80, the class marks are scaled up so the average becomes 80. Usually it's only a few points, not 20, and if the gap is that big then someone is sent out to review the classwork and tests from that school.
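
A very simplified sketch of that moderation step, assuming a plain mean shift (real schemes also match the spread, and as noted above a gap that large would trigger a manual review rather than an automatic adjustment):

```python
# Simplified moderation: shift a school's class marks so their average lines up
# with that school's average exam mark. Only the mean is matched here; a real
# scheme would also adjust the spread.

def moderate(class_marks, exam_marks):
    """Shift class marks so the class average equals the school's exam average."""
    class_mean = sum(class_marks) / len(class_marks)
    exam_mean = sum(exam_marks) / len(exam_marks)
    shift = exam_mean - class_mean
    return [min(100.0, max(0.0, m + shift)) for m in class_marks]

# The example from above: class average 60, exam average 80, so class marks move up by 20.
print(moderate([55, 60, 65], [75, 80, 85]))   # [75.0, 80.0, 85.0]
```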

Stratification is the division of the raw mark pool into grades (e.g. mapping raw marks like 40, 50 or 90 onto letter grades). I don't think we do this with our national university entrance testing, but we do it for other testing (for example our national literacy and numeracy testing, which is done every couple of years from year 3). It involves dividing the data into meaningful groups: for example, "excellent" might be the top 10%, "very good" the next 10%, and so on. Note that in this scheme, being slightly under 50% isn't a big deal, because being slightly under average might not be a problem, but being significantly under average is.
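
As a sketch of that stratification idea, the snippet below cuts a cohort into named bands by the fraction of the cohort each mark beats or equals. Only the top two bands match the percentages mentioned above; the remaining band names and cut-offs are invented for illustration.

```python
# Stratification sketch: label each mark by the fraction of the cohort at or below
# it. "excellent" = top 10%, "very good" = next 10%, as in the example above; the
# remaining band names and cut-offs are invented for illustration.

def stratify(marks, bands=(("excellent", 0.90), ("very good", 0.80),
                           ("sound", 0.40), ("developing", 0.10))):
    """Assign each mark a band based on its position within the cohort."""
    n = len(marks)
    labels = []
    for m in marks:
        frac = sum(1 for x in marks if x <= m) / n   # share of cohort at or below this mark
        label = "limited"                            # default bottom band
        for name, cutoff in bands:
            if frac >= cutoff:
                label = name
                break
        labels.append(label)
    return labels

marks = [95, 72, 51, 49, 30, 88, 60, 41, 77, 66]
print(list(zip(marks, stratify(marks))))
```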

In Australia, A's and B's don't have any special meaning. They don't affect university entrance, they aren't moderated or normalized, and I don't think anyone pays them much attention. In some years fewer students might get A's, in others more. It's not a problem for us since we go off the normalized data.



If letter grades mean something in your system, and what you say is true, then your system is completely fucked. For example, if the GCSEs go really well one year and lots of people get A's, then the next year the exam will be made more difficult and the grade requirement raised, resulting in more B's, and the system will keep swinging back and forth. To repeat: in Australia the marks are normalized against an ideal curve and then used in moderation to remove school bias, so the system is stable within any given year and stays stable over time.
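
A toy illustration of that swing, with every number invented: the cohort's ability is the same every year, but the boundary for an A is moved purely in reaction to last year's A-rate and over-corrects, so the outcome oscillates instead of settling.

```python
# Toy model of the feedback loop: same cohort ability every year, but the A boundary
# reacts to last year's outcome with a deliberately heavy-handed gain, so the A-rate
# swings back and forth. All numbers here are invented for illustration.
from statistics import NormalDist

ability = NormalDist(mu=65, sigma=12)   # assumed cohort ability, identical every year
target_a_rate = 0.20
boundary = 70.0

for year in range(1, 7):
    a_rate = 1 - ability.cdf(boundary)                   # share of cohort at or above the boundary
    print(f"year {year}: boundary {boundary:5.1f}, A-rate {a_rate:.0%}")
    boundary += 150 * (a_rate - target_a_rate)           # over-reaction to last year's result
```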
Originally Posted by ImmortalPig View Post
If letter grades mean something in your system, and what you say is true, then your system is completely fucked. For example, if the GCSEs go really well one year and lots of people get A's, then the next year the exam will be made more difficult and the grade requirement raised, resulting in more B's, and the system will keep swinging back and forth. To repeat: in Australia the marks are normalized against an ideal curve and then used in moderation to remove school bias, so the system is stable within any given year and stays stable over time.

Sounds like the system used in Australia is pretty similar to the one we have here. I should point out, though, that while letter grades are important here (your grades at A-level determine which universities will accept you), the system is designed to remove unfairness. The grade requirement is not set until after the exams have been marked. Then they sit down with the data on the raw marks for the exams and normalise it. This ensures that, while exam difficulty may vary, no one is treated unfairly: if the exam is hard, lots of students will do worse, and so the grade boundaries will be lower.
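
Picking up the point that the boundaries are only fixed after the papers are marked, here is a minimal sketch; the target grade shares and the mark lists are invented for illustration. The same proportions produce higher cut-offs after an easy paper and lower ones after a hard paper.

```python
# Sketch of setting boundaries after marking: pick cut-off marks so that roughly
# the same share of candidates lands in each grade, whatever the raw marks were.
# The grade shares and mark lists below are invented for illustration.

def set_boundaries(raw_marks, grade_shares=(("A", 0.10), ("B", 0.25),
                                            ("C", 0.55), ("D", 0.80))):
    """Return the minimum raw mark for each grade, using cumulative shares from the top."""
    ordered = sorted(raw_marks, reverse=True)
    n = len(ordered)
    boundaries = {}
    for grade, share_from_top in grade_shares:
        cut_index = max(0, round(n * share_from_top) - 1)
        boundaries[grade] = ordered[cut_index]
    return boundaries

easy_paper = [82, 75, 71, 68, 66, 64, 61, 58, 50, 41]
hard_paper = [64, 57, 53, 50, 48, 46, 43, 40, 32, 23]
print(set_boundaries(easy_paper))   # higher cut-offs after an easy paper
print(set_boundaries(hard_paper))   # lower cut-offs after a hard paper, same shares
```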
Where I live, grades don't matter, only percentage of correct answers. This whole "adjust the whole test to suit students" is fucking retarded.
Originally Posted by ynvaser View Post
Where I live, grades don't matter, only percentage of correct answers. This whole "adjust the whole test to suit students" is fucking retarded.

So what if one school has tests that are harder than other schools?

What if one subject's exams are harder than the others?

Even within one subject they usually have multiple exams to discourage cheating, how do you propose they deal with multiple exams of varying difficulties?

Normalization is necessary.
Originally Posted by ImmortalPig View Post
So what if one school has tests that are harder than other schools?

What if one subject's exams are harder than the others?

Even within one subject they usually have multiple exams to discourage cheating, how do you propose they deal with multiple exams of varying difficulties?

Normalization is necessary.

I'm sorry, I was thinking about A-levels/matura for some reason. Let me adjust what I said: grades do matter.

Schools differ, but we've more or less solved that by introducing a central test after primary school. Based on the percentage of points you achieve on that test, you can get into different secondary schools.
There are some secondary schools created specifically for gifted students, and some which deal with the not-so-gifted. The learning material is the same everywhere; only the intensity of the teaching differs.

If you have learned the given material properly, does it really matter how hard the test is? If everyone fails a test, they'll make the next one easier. There's no need to adjust the grading just because no one could achieve an acceptable level. The only thing they can really mess up in a test is the time given to solve it.
But each year they need to write new tests, so how can they be sure that the new test is the same difficulty as the old one?


It seems in the UK system they adjust the benchmarks to fit the grades, so that makes sense, but I'm certain that behind the scenes they still do testing to make sure the difficulty is similar between years.