1. getMock()
2. setExpectedException()
3. checkForUnintentionallyCoveredCode renamed to beStrictAboutCoversAnnotation
4. beStrictAboutTestSize renamed to enforceTimeLimit
5. UnitTestCase class is now fully removed.
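For reference, the usual replacements for the first two deprecated
methods look like this (a sketch; the class names are placeholders):

    // Was: $mock = $this->getMock(some_class::class);
    $mock = $this->getMockBuilder(some_class::class)->getMock();

    // Was: $this->setExpectedException(coding_exception::class);
    $this->expectException(coding_exception::class);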
dirname() is a slow function compared with using __DIR__ and
'/../'. Moodle has a large number of legacy files that are included
each time a page loads; they cannot use an autoloader because they are
functional code. This change allows those required includes to perform
as well as possible in this situation.
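A minimal illustration of the pattern (the included file name is just
an example):

    // Before: dirname() is a function call executed on every include.
    require_once(dirname(__FILE__) . '/../config.php');

    // After: __DIR__ is a compile-time constant, so this is cheaper.
    require_once(__DIR__ . '/../config.php');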
Previously, the Check button was often shown disabled when it
could not be used (e.g. when the question was finished, or when an
interactive question was in the try-again state). Eventually we
realised that hiding it in these cases is better usability.
Note that when a teacher reviews an in-progress quiz attempt, they will
see a disabled Check button if the student doing the quiz can see the
button.
The change in MDL-51090 broke manually commenting on questions for which
no mark is given (max mark == 0). This fixes it, with unit tests.
Thanks to Nick Phillips for the original suggestion of the fix.
* A method to change the max mark for one question_attempt in the
usage.
* A method to replace one question in a usage with another, moving the
old question_attempt to the end.
* Methods to set and get metadata (string name/value pairs) for each
question_attempt in the usage. This gets stored in the first step in a
way that should not interfere with anything else. (A usage sketch
follows below.)
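To make the shape of the new API concrete, here is a hypothetical
usage sketch; the method names and signatures are assumptions based on
the description above:

    $quba = question_engine::load_questions_usage_by_activity($qubaid);

    // Change the max mark for the question_attempt in slot 3.
    $quba->set_max_mark(3, 2.0);

    // Store, then read back, a metadata name/value pair for slot 3.
    $quba->set_question_attempt_metadata(3, 'originalslot', '7');
    $value = $quba->get_question_attempt_metadata(3, 'originalslot');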
There are several improvements over what we had before:
1. We track all the questions seen in the student's previous
quiz attempts, so that when they start a new quiz attempt, they get
questions they have not seen before if possible.
2. When there are no more unseen questions, we start repeating, but
always taking from the questions with the fewest attempts so far.
3. Similar logic is applied to variants within one question. (The
selection strategy is sketched below.)
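An illustrative sketch of that selection strategy (this is not the
actual Moodle implementation; arrow functions need PHP 7.4+):

    // $availableids: candidate question ids. $timesseen: question id =>
    // times seen in the student's previous quiz attempts.
    function choose_question(array $availableids, array $timesseen): int {
        // 1. Prefer questions the student has never seen.
        $unseen = array_filter($availableids,
                fn($id) => empty($timesseen[$id]));
        if ($unseen) {
            return $unseen[array_rand($unseen)];
        }
        // 2. Otherwise repeat, but only among the least-seen questions.
        $fewest = min(array_map(fn($id) => $timesseen[$id], $availableids));
        $leastseen = array_filter($availableids,
                fn($id) => $timesseen[$id] == $fewest);
        return $leastseen[array_rand($leastseen)];
    }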
There is lots of credit to go around here. Oleg Sychev's students Alex
Shkarupa, Sergei Bastrykin and Darya Beda all worked on this over
several years, helping to clarify the problem and shape the best
solution. In the end, their various attempts were rewritten into this
final patch by me.
The new steps make it more efficient to create questions.
While making the changes, I took the opportunity to alter the tests to
follow Behat best practices, and only test one thing per scenario.
The MySQL (and MariaDB) query 'optimizer' was failing to handle this
query, so for those databases only, we give it an equivalent query that
they can get right.
With a unit test.
Whether the comments on manually graded questions were visible to
students should have been controlled by the 'Specific feedback' Review
option in the quiz settings. However, the quiz was not setting
$displayoptions->manualcomment, so it did not work.
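A sketch of the kind of fix implied here; the review option name on
the right-hand side is an assumption for illustration:

    // Make manual comments follow the 'Specific feedback' review option.
    $displayoptions->manualcomment = $reviewoptions->specificfeedback
            ? question_display_options::VISIBLE
            : question_display_options::HIDDEN;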
Before this, autosave only served to restore data when a student went
back in to continue an attempt. If the student, having crashed out,
never went back in and continued the attempt, their auto-saved
responses were not used when the attempt was automatically finished.
That was a rather bad oversight, which should now be fixed.
Adds a set of options to the essay question type which implement
the following new features:
- Adds an input format which accepts only file uploads, and no
inline text.
- Adds an option to make the inline text response optional when
attachments are enabled, so students can choose to upload
an essay file.
- Adds an option to make attachments required, so essays without
attachments will be marked incomplete.
We now change so that minfraction is -6 and maxfraction is 3, so
getting the question right at low certainty gives maxmark marks, and
you get a bonus for being more confident (rather than being penalised
for being unconfident). Mathematically it is the same, but the
difference is important psychologically.
We also change how partially correct scores are handled.
It is too harsh to penalise a partially correct score with full
certainty by doing a linear interpolation between -6 and +3. Instead,
any partially correct score (e.g. 0.5) becomes that fraction of the
correct score (e.g. 0.5 * 3 = 1.5). Also, any incorrect score is treated
as 0, so if you have a multiple choice question that normally gives a
negative score for a wrong choice, this will now never give a score of
less than -6.
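An illustrative sketch of the new scoring, assuming the classic CBM
factors of 1/2/3 when right and 0/-2/-6 when wrong (the function name
here is a placeholder, not the actual Moodle code):

    function cbm_adjust_fraction(float $fraction, int $certainty): float {
        $factors   = [1 => 1, 2 => 2, 3 => 3];   // Multiplier when right.
        $penalties = [1 => 0, 2 => -2, 3 => -6]; // Score when wrong.

        if ($fraction <= 0) {
            // Any incorrect score is treated as 0, so the result can
            // never go below -6, even for negatively marked choices.
            return $penalties[$certainty];
        }
        // Partial credit scales the full-certainty reward,
        // e.g. 0.5 * 3 = 1.5.
        return $factors[$certainty] * $fraction;
    }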
Finally we change how this is displayed to students beside the question.
Rather than saying "Marked out of 1.00", we say "Base mark 1.00", and
then later we say "CBM mark 3.00" (or whatever it is).
This parallels question_attempt->minfraction, which allows the
fractional mark to go below zero.
This is needed to allow the certainty-based marking behaviours to work
better.
At the moment, when an attempt is built on the last one, the "not yet
answered" message is shown, which confuses many people. This patch
changes the state to "complete" for attempts based on a previous one,
and modifies the output string.
Many thanks to Tim Hunt for guiding me through quiz infrastructure and some code
suggestions.
1. Autosave works in some ways just like a normal save. We ultimately
call $behaviour->process_save() to do the work, and create a new step to
hold the data.
2. However, we come in through a completely different route through the
API, starting with separate auto-save methods. This keeps the auto-save
changes mostly separate, and so reduces the chance of breaking existing
working code.
3. When the time comes to store the auto-save step in the database, we
save it using a negative sequence number (sketched after this list).
This is a clever trick that not only distinguishes these steps, but also
avoids unique key errors when an auto-save and a real action happen
simultaneously. (There are unit tests for these tricky edge cases.)
4. When we load the data back from the database, most of the time the
auto-save steps are loaded back as if they were a real save, and so the
auto-saved data is used when the question is then rendered.
5. However, before we process another action, we remove the auto-saved
step, so it does not appear in the final history.
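A sketch of the negative sequence number trick from point 3. The table
and column names follow the question engine schema, but this simplified
code is illustrative, not the actual implementation:

    // Find the highest sequence number used by real steps so far.
    $maxseq = (int) $DB->get_field_sql(
            'SELECT MAX(sequencenumber)
               FROM {question_attempt_steps}
              WHERE questionattemptid = ?', [$qaid]);

    // A real action would insert $maxseq + 1. The auto-save inserts
    // -($maxseq + 1) instead, which is easy to recognise and can never
    // collide with a real step on the unique (questionattemptid,
    // sequencenumber) key, even if both happen at the same moment.
    $step = new stdClass();
    $step->questionattemptid = $qaid;
    $step->sequencenumber = -($maxseq + 1);
    $DB->insert_record('question_attempt_steps', $step);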
1. Split the question_attempt tests into one class per file.
2. Improve the API to give tests more control, and to test more of the
important code. Some of this is not used here, but it is about to be.
Comment format (FORMAT_...) was correctly being processed when the
manual grading happened as the result of a form submission. It was only
when done using the question_usage or question_attempt API method that
there was no way to specify the format. (Although I think the only
place this API was used was in the unit tests.)
Note that the question_attempt::manual_grade API had to change, but I don't
think that is a real API change. Calling code should be using
question_usage::question_attempt, which is backwards compatible.
Note that now, if you don't pass the format, no error is generated,
but a developer debugging message is shown.
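For illustration, a call through the usage-level API now looks
something like this (a sketch; the exact signature is assumed from the
description above):

    // Passing the comment format explicitly avoids the debugging message.
    $quba->manual_grade($slot, 'Good answer, but cite your sources.',
            $mark, FORMAT_HTML);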
The problem was incorrect use of PARAM_CLEANHTML for these inputs.
This fix also adds some unit tests to try to verify that this does not
break again in future.
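The essence of the fix as a sketch (the parameter name here is a
placeholder): read the submitted text as raw data instead of passing it
through PARAM_CLEANHTML:

    // Before (assumed): PARAM_CLEANHTML silently mangled valid input.
    // $comment = optional_param('comment', '', PARAM_CLEANHTML);
    $comment = optional_param('comment', '', PARAM_RAW);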
In the case where either a question_attempt had no steps, or a
question_usage had no question_attempts, the load_from_records methods
could get stuck in an infinite loop.
This fix ensures that does not happen, with unit tests to verify it. At
the same time, I noticed an error in the existing tests, which this
patch fixes.
The test data was wrong, and was triggering the work-around code that
MDL-32062 introduced. I fixed the test data.
Also, I fixed one of the tests, which had been broken.
Three things:
1. Fixes to the select expectation.
2. Fixes to the match walkthrough tests. (No idea how these managed to
pass under SimpleTest!)
3. Fixes to the expected values for the multianswer upgrade tests.