The sequence of questions that made up a quiz used to be stored as a
comma-separated list in quiz.questions. Now the same information is
stored as rows in the quiz_slots table. This is not just better in a
database-design sense; it also allows for the future changes we will
need as we enhance the quiz in the MDL-40987 epic.
Having changed the database structure, we need to change all the rest
of the code to account for it, and that is done here.
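For example, where the code used to explode($quiz->questions), it now
reads the slots table. A sketch of the new pattern (not a literal hunk
from this commit):

    // Load this quiz's questions in slot order from quiz_slots.
    $slots = $DB->get_records('quiz_slots', array('quizid' => $quiz->id),
            'slot', 'slot, questionid, page, maxmark');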
Note that there are not many unit tests for the changed bit. That is
because as part of MDL-40987 we will be changing the code further, and
we will add unit tests then.
This data should all have been upgraded when moving to Moodle 2.1. It
was only kept as a back-up, and now, after 3 years have passed, we can
clean it up.
In Moodle 2.1, there was a major DB upgrade relating to questions, and
it was possible to delay some of that upgrade. Now, those DB tables are
changing again, and the time has come to insist that all the data has
been upgraded (or deleted).
If validation failed, so that the manual grading page was re-displayed
with a validation error, then any comment that had just been typed in
was lost. This fixes that.
Whether the comments on manually graded questions were visible to
students should have been controlled by the 'Specific feedback' Review
option in the quiz settings. However, the quiz was not setting
$displayoptions->manualcomment, so it did not work.
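The fix amounts to something like this when the review options are
built (a simplified sketch: reviewspecificfeedback is really a bitmask
of review states, and $specificfeedbackvisible is a hypothetical name):

    // Tie the visibility of manual-grading comments to the
    // 'Specific feedback' review option.
    $displayoptions->manualcomment = $specificfeedbackvisible ?
            question_display_options::VISIBLE : question_display_options::HIDDEN;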
This was breaking with Oracle master/master replication. Fortunately,
all the places that needed to be changed were private to datalib.php.
There is still some ordering by id there, but only in places where we
want a consistent, rather than meaningful, order, so that is OK.
The queries changed by this patch all have subqueries in aggregate
queries that pull out the latest step for a question_attempt. Those
queries used to look for MAX(id), but now they look for
MAX(sequencenumber). This is equivalent (for databases where ids always
increase with time), except for auto-saved steps. In the past, an
auto-saved step might have been considered latest. Now the latest step
will always be one that has been properly processed. You can argue that
this change is an improvement. Anyway, it is a moot point: all these
queries are only used in reports, which are run on completed attempts,
where there will not be any autosaved data.
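For example, the latest-step subquery now looks roughly like this
(illustrative, not a verbatim hunk):

    // Find the latest properly-processed step for each question_attempt.
    $latestsql = "
        SELECT qas.questionattemptid, MAX(qas.sequencenumber) AS maxseq
          FROM {question_attempt_steps} qas
      GROUP BY qas.questionattemptid";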
Before this, auto-saved data was only used when a student went back in
to continue an attempt. If the student, having crashed out, never went
back in and continued the attempt, their auto-saved responses were not
used when the attempt was automatically finished. That was a rather bad
oversight, which should now be fixed.
Adds a set of options to the essay question type which implement
the following new features:
- Adds an input format which accepts only file uploads, and no
  inline text.
- Adds an option to make the inline text response optional when
  attachments are enabled, so students can choose to upload
  an essay file.
- Adds an option to make attachments required, so essays without
  attachments will be marked incomplete.
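A sketch of how the completeness check can treat the new options (the
setting names, like $this->attachmentsrequired, are assumptions here):

    public function is_complete_response(array $response) {
        $hasinlinetext = array_key_exists('answer', $response)
                && $response['answer'] !== '';
        $hasattachments = array_key_exists('attachments', $response)
                && count($response['attachments']->get_files()) > 0;

        // Inline text is compulsory unless it was made optional, or
        // unless the format is attachments-only.
        if ($this->responserequired && $this->responseformat !== 'noinline'
                && !$hasinlinetext) {
            return false;
        }
        // Attachments may now be required too.
        if ($this->attachmentsrequired && !$hasattachments) {
            return false;
        }
        return true;
    }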
Plain responses, without files, were getting messed up by the
fix for MDL-39980. Something that looked like an HTML comment was being
appended.
This fix works by avoiding appending anything if there are no files. The
new unit test (which was failing before I fixed the code) confirms that
this works. The other tests should be enough to verify that there are no
regressions.
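The guard is essentially this (a sketch, not the exact diff):

    // If the response contains no files, return the text untouched,
    // instead of appending the HTML-comment-like file marker.
    if (empty($files)) {
        return $text;
    }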
We now compute the average CBM score, accuracy, CBM bonus and enhanced
accuracy, both for the entire quiz and for just the questions answered.
Note that these calculations must work correctly in the presence of
descriptions, ungraded questions, and manually graded questions. For
example, imagine an essay added at the end of the quiz: "Summarise what
you learned attempting this exercise." This might have max mark zero or
non-zero. The CBM statistics just ignore questions like that.
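In outline, the averaging skips anything CBM cannot apply to (a sketch
using the standard question engine API):

    $totalcbm = 0;
    $numquestions = 0;
    foreach ($quba->get_slots() as $slot) {
        $qa = $quba->get_question_attempt($slot);
        // Skip descriptions, and questions without a grade yet
        // (e.g. not-yet-marked essays return a null fraction).
        if (!$qa->get_question()->qtype->is_real_question_type()
                || is_null($qa->get_fraction())) {
            continue;
        }
        $totalcbm += $qa->get_fraction();
        $numquestions += 1;
    }
    $averagecbm = $numquestions ? $totalcbm / $numquestions : null;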
We now change so that minfraction is -6 and maxfraction is 3, so
getting the question right at low certainty gives maxmark marks, and
you get a bonus for being more confident (rather than being penalised
for being unconfident). Mathematically it is the same, but the
difference is important psychologically.
We also change how partially correct scores are handled.
It is too harsh to penalise a partially correct score with full
certainty by doing a linear interpolation between -6 and +3. Instead,
any partially correct score (e.g. 0.5) becomes that fraction of the
correct score (e.g. 0.5 * 3 = 1.5). Also, any incorrect score is treated
as 0, so if you have a multiple choice question that normally gives a
negative score for a wrong choice, this will now never give a score of
less than -6.
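Putting that together, the adjustment now works roughly like this (a
sketch of the logic, not the literal code):

    function cbm_adjust_fraction($fraction, $certainty) {
        $bonus = array(1 => 1, 2 => 2, 3 => 3);     // right: scale up
        $penalty = array(1 => 0, 2 => -2, 3 => -6); // wrong: flat penalty
        if ($fraction <= 0) {
            // Incorrect (even negatively scored) responses are treated
            // as 0, so the result can never go below -6.
            return $penalty[$certainty];
        }
        // Partial credit scales the certainty bonus, e.g. 0.5 * 3 = 1.5.
        return $fraction * $bonus[$certainty];
    }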
Finally we change how this is displayed to students beside the question.
Rather than saying "Marked out of 1.00", we say "Base mark 1.00", and
then later we say "CBM mark 3.00" (or whatever it is).
This parallels question_attempt->minfraction, which allows the
fractional mark to go below zero.
This is needed to allow the certainty-based marking behaviours to work
better.
At the top of the quiz review page, there is a table that summarises
information about the quiz attempt as a whole. For some question
behaviours, we would like to be able to add additional information to
that summary.
This commit introduces a generic method for the behaviour to provide
summary information about an entire question usage.
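Something along these lines (the method and parameter names are
illustrative, not necessarily the exact new API):

    /**
     * Let a behaviour contribute rows to the attempt summary table
     * for a whole usage. The default is to add nothing.
     * @return array of 'heading' => 'content' rows.
     */
    public function summarise_usage(question_usage_by_activity $quba,
            question_display_options $options) {
        return array();
    }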
It was always a bit of a hack to use static methods on the
qbehaviour_whatever classes to return metadata about the behaviour. It
is better design to have real qbehaviour_whatever_type classes to
report that metadata, particularly now that we are planning to add more
of it. For example, inheritance works better with real classes: see the
improvements in
question_engine::get_behaviour_unused_display_options().
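The shape of the new arrangement is roughly this (a simplified sketch):

    // Each behaviour now has a companion _type class for its metadata.
    class qbehaviour_deferredfeedback_type extends question_behaviour_type {
        public function is_archetypal() {
            return true;
        }
    }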
This change has been implemented in a backwards-compatible way. Old
behaviours will continue to work. There will just be some developer debug
output to prompt people to upgrade their code properly.
quiz_question_statistics_stats has been renamed to
question_statistics_calculator. A separate class, question_statistics,
is now used to store the calculated stats. The API has changed, and the
code has generally been cleaned up.
Move the statistics code to the question bank:
- move the cron that cleans up old cache records, move code and rename
  classes
- further review of the quiz reports statistics code
- start to separate the calculations of quiz stats, question stats and
  response analysis
- introduce a hashcode db field for cached stats
- convert the code to use the qubaids hashcode for caches
We just drop the old tables (removing the previous upgrade steps too)
and re-create the new ones, because these tables just cache calculated
statistics. No important data is stored in them.
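In upgrade.php terms, that is just the standard XMLDB calls (a sketch,
using one of the old stats cache tables as an example):

    $dbman = $DB->get_manager();

    // The cached statistics can safely be thrown away and recalculated.
    $table = new xmldb_table('quiz_question_statistics');
    if ($dbman->table_exists($table)) {
        $dbman->drop_table($table);
    }
    // ... then re-create the new tables from the install.xml definitions.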
I think this used to work because mod/forum/lib.php used to be included
everywhere, and in turn included lib/formslib.php. We should declare the
dependency explicitly.
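That is, the file that actually uses formslib should include it itself:

    require_once($CFG->libdir . '/formslib.php');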
This slightly changes the format in which answers are stored in the DB
in the case where there is some HTML content, but no files. This should
not cause any problems.
When doing 'Each attempt builds on last', we need to copy any response
files into a draft file area, and then re-save them.
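In outline, using the standard file API ($olditemid and $newitemid are
placeholders; the real code loops over each question attempt):

    // Copy the previous attempt's response files into a draft area...
    $draftitemid = 0;
    file_prepare_draft_area($draftitemid, $contextid, 'question',
            'response_attachments', $olditemid);

    // ...then re-save them against the new attempt.
    file_save_draft_area_files($draftitemid, $contextid, 'question',
            'response_attachments', $newitemid);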
While writing the unit test for this, I had to deal with a todo in the
question engine so that questions with files in the response could be
unit-tested.
I also found and fixed a bug with
qtype_essay_question::is_same_response, and fixed some notices in the
existing essay/manual graded unit tests.