This feature is designed for use on practice or formative quizzes.
It is available for quizzes that use Interactive or Immediate feedback
behaviour.
If the teacher turns this on in the quiz settings, then once a student
has finished a question, they get a 'Redo question' button beside the
question. If they click it, then the question they finished is replaced
by a new one so they can try again to practise that particular skill or
bit of knowledge a bit more.
When randomisation is involved, the students will be given a question or
variant that they have not seen before if possible.
This commit also fixes MDL-32049 about the lack of a validation message
when an incomplete response is submitted.
AMOS BEGIN
CPY [pleaseananswerallparts,qtype_match],[pleaseananswerallparts,qtype_multianswer]
AMOS END
Two quiz tests create forms with text editors manually. They need to
ensure a valid $PAGE->url is set, so that they do not trigger debugging
warnings from the $PAGE->url magic method called by Atto.
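For example, a test might do something like this before constructing the
form (a minimal sketch; the URL is just an arbitrary placeholder):

    // Give $PAGE a valid URL before building a form with text editors,
    // so Atto's use of the $PAGE->url magic method does not trigger
    // debugging warnings.
    $PAGE->set_url('/mod/quiz/tests/fixtures/fake.php');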
If validation failed, so that the manual grading page was re-displayed
with a validation error, then any comment that had just been typed in
was getting lost. This fixes that.
Whether the comments on manually graded questions were visible to
students should have been controlled by the 'Specific feedback' Review
option in the quiz settings. However, the quiz was not setting
$displayoptions->manualcomment, so it did not work.
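The fix amounts to something along these lines when the quiz builds its
display options (a sketch; the review-option check on the right is an
illustrative stand-in for the real code):

    // Let comments on manually graded questions follow the
    // 'Specific feedback' review option.
    $displayoptions->manualcomment = $specificfeedbackvisible
            ? question_display_options::VISIBLE
            : question_display_options::HIDDEN;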
Certainty -1 has never been used in standard Moodle, but is
used in Tony Gardiner-Medwin's patches to mean 'No idea', which
we intend to implement: MDL-42077. In the meantime, these changes
avoid errors for people who have used TGM's patches.
We now compute the average CBM score, accuracy, CBM bonus and enhanced
accuracy, both for the entire quiz, and for just the questions answered.
Note that these calculations must work correctly in the presence of
descriptions, ungraded questions, and manually graded questions. For
example, imagine an essay added at the end of the quiz: "Summarise what
you learned attempting this exercise." This might have a max mark of
zero or non-zero. The CBM statistics simply ignore questions like that.
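In outline, the averaging skips anything that cannot contribute (a
sketch with illustrative names, not the real code):

    // Average the CBM score over gradable questions only. Descriptions
    // and zero-weight questions are simply not counted.
    $total = 0;
    $count = 0;
    foreach ($slots as $slot) {
        if (!$slot->is_graded() || $slot->maxmark == 0) {
            continue;
        }
        $total += $slot->cbm_score();
        $count += 1;
    }
    $averagecbm = $count > 0 ? $total / $count : null;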
We now change things so that minfraction is -6 and maxfraction is 3, so
getting the question right at low certainty gives maxmark marks, and you
get a bonus for being more confident (rather than being penalised for
being unconfident). Mathematically it is the same, but the difference is
important psychologically.
We also change how partially correct scores are handled.
It is too harsh to penalise a partially correct score with full
certainty by doing a linear interpolation between -6 and +3. Instead,
any partially correct score (e.g. 0.5) becomes that fraction of the
correct score (e.g. 0.5 * 3 = 1.5). Also, any incorrect score is treated
as 0, so if you have a multiple choice question that normally gives a
negative score for a wrong choice, this will now never give a score of
less than -6.
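Putting those two changes together, the adjustment works roughly like
this (an illustrative sketch, not the exact code; certainty levels run
from 1 to 3):

    function cbm_adjust_fraction(float $fraction, int $certainty): float {
        $bonus   = array(1 => 1, 2 => 2, 3 => 3);   // Multiplier when right.
        $penalty = array(1 => 0, 2 => -2, 3 => -6); // Score when wrong.
        if ($fraction <= 0) {
            // Incorrect scores are treated as 0, so the result is
            // never less than -6.
            return $penalty[$certainty];
        }
        // Partial credit: that fraction of the fully correct CBM score,
        // e.g. 0.5 * 3 = 1.5 at full certainty.
        return $fraction * $bonus[$certainty];
    }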
Finally we change how this is displayed to students beside the question.
Rather than saying "Marked out of 1.00", we say "Base mark 1.00", and
then later we say "CBM mark 3.00" (or whatever it is).
This parallels question_attempt->minfraction, which allows the
fractional mark to go below zero.
This is needed to allow the certainty-based marking behaviours to work
better.
At the top of the quiz review page, there is a table that summarises
information about the quiz attempt as a whole. For some question
behaviours, we would like to be able to add additional information to
that summary.
This commit introduces a generic method for the behaviour to provide
summary information about an entire question usage.
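In outline, the hook looks something like this (the name and signature
here are a sketch of the idea, not necessarily the final API):

    // A behaviour can override this to contribute extra rows to the
    // summary table at the top of the quiz review page. By default
    // there is nothing to add.
    public function summarise_usage(question_usage_by_activity $quba,
            question_display_options $options) {
        return array(); // Map of row label => value to display.
    }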
It was always a bit of a hack to use static methods on the
qbehaviour_whatever classes to return metadata about the behaviour. It
is better design to have real qbehaviour_whatever_type classes to report
that metadata, particularly now that we are planning to add more of it.
For example, inheritance works better with real classes. See, for
example, the improvements in
question_engine::get_behaviour_unused_display_options().
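For example, the metadata for a behaviour can now come from a small
class like this (a sketch):

    // A real class rather than static methods, so inheritance works:
    // subclasses of a behaviour can reuse or override its metadata.
    class qbehaviour_deferredfeedback_type extends question_behaviour_type {
        public function is_archetypal() {
            return true;
        }
    }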
This change has been implemented in a backwards-compatible way. Old
behaviours will continue to work. There will just be some developer debug
output to prompt people to upgrade their code properly.
When doing Each attempt builds on last, we need to copy any response
files into a draft file area, and then re-save them.
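The copy step is along these lines (a sketch using the standard Files
API; the component, file area and item id here are illustrative):

    // Pull the previous attempt's response files into a fresh draft
    // area, so they can be re-saved as part of the new attempt's step.
    $draftitemid = 0;
    file_prepare_draft_area($draftitemid, $context->id,
            'question', 'response_attachments', $oldstep->get_id());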
While writing the unit test for this, I had to deal with a TODO in the
question engine so that questions with files in the response could be
unit-tested.
I also found and fixed a bug in qtype_essay_question::is_same_response,
and fixed some notices in the existing essay/manually graded unit tests.
At the moment, when an attempt is built on the last one, a "Not yet
answered" message is shown, which confuses many people. This patch
changes the state to "complete" for attempts based on a previous one,
and modifies the output string.
Many thanks to Tim Hunt for guiding me through quiz infrastructure and some code
suggestions.
1. Change behaviour admin settings so you can only select enabled
behaviours.
2. During the upgrade, change admin settings that might currently be set
to manual grading, so that instead they are set to deferredfeedback (if
that is available; if not, we just take the first alphabetically, as
sketched below).
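The fallback choice is along these lines (a sketch; the variable names
are illustrative):

    // Replace a 'manualgraded' default with deferredfeedback when it is
    // enabled, otherwise fall back to the first enabled behaviour
    // alphabetically.
    if (in_array('deferredfeedback', $enabledbehaviours)) {
        $newdefault = 'deferredfeedback';
    } else {
        sort($enabledbehaviours);
        $newdefault = reset($enabledbehaviours);
    }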
The ability to set all your quiz questions to require manual grading is
an interesting possibility, but practically almost useless.
If you set that accidentally, then you are badly stuck. There is no way
to fix it after the students have answered the quiz.
Therefore, we should set the config option to hide that option from the
UI. We do this for all Moodle sites as part of the upgrade, not just for
new installs.
If any admin wants to re-enable this, they can later.
1. Autosave works in some ways just like a normal save. We ultimately
call $behaviour->process_save() to do the work, and create a new step to
hold the data.
2. However, we come in through a completely different route through the
API, starting with separate auto-save methods. This keeps the auto-save
changes mostly separate, and so reduces the chance of breaking existing
working code.
3. When the time comes to store the auto-save step in the database, we
save it using a negative sequence number.
This is a clever trick that not only distinguishes these steps, but also
avoids unique key errors when an auto-save and a real action happen
simultaneously. (There are unit tests for these tricky edge cases; see
the sketch after this list.)
4. When we load the data back from the database, most of the time the
auto-save steps are loaded back as if they were a real save, and so the
auto-saved data is used when the question is then rendered.
5. However, before we process another action, we remove the auto-saved
step, so it does not appear in the final history.
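The trick from point 3 looks roughly like this (a sketch with
illustrative names):

    // A real action saved at the same moment gets sequence number $seq;
    // the auto-save is stored as -$seq. Both rows then satisfy the
    // unique key on (questionattemptid, sequencenumber), and negative
    // numbers make the auto-saved steps easy to recognise on load.
    $record->sequencenumber = -$seq;
    $DB->insert_record('question_attempt_steps', $record);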