Adds optional methods to the search engine API for deleting indexed
data for courses and contexts, and implements them for the two core
search plugins (simpledb and solr).
The new API is called automatically when courses or contexts are
deleted. When a whole course is deleted, only a single course deletion
is sent, rather than potentially thousands of separate context
deletions as each activity/block is removed.
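As an illustration, here is a minimal sketch of what an engine plugin
might implement; the method names, signatures and Solr-style
delete-by-query call are assumptions based on the description above,
not confirmed API:

```php
// Sketch only: optional deletion methods an engine plugin could provide.
class engine extends \core_search\engine {

    /** Deletes all indexed documents belonging to a single context. */
    public function delete_index_for_context(int $oldcontextid) {
        $this->get_search_client()->deleteByQuery('contextid:' . $oldcontextid);
        return true;
    }

    /** Deletes all documents for a whole course in one call, instead of
     *  one deletion per activity/block context. */
    public function delete_index_for_course(int $oldcourseid) {
        $this->get_search_client()->deleteByQuery('courseid:' . $oldcourseid);
        return true;
    }
}
```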
When searching using mock results (the 'global search expects the
query' Behat step), the result count is not set correctly. As a
result, the page wrongly reports that there are no results and does
not show the first page of multi-page results correctly.
Additionally, some of the core Behat tests can now be moved to use
real searching with the simpledb engine, rather than using mock
results at all. This makes the tests more realistic.
Unfortunately it was not possible to move all of the core Behat tests
and deprecate the mock step, because some of the tests cover the UI
for 'special' features (searching by user or group), neither of which
is supported by the simpledb engine.
In MDL-59039 we changed add_documents() so that it should return an
extra $partial boolean. Implementations returning only 4 elements have
still been supported since then; this issue removes that 4-element
compatibility.
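As a sketch of the updated contract (the variable names are
illustrative, not necessarily the exact core names), every engine's
add_documents() must now return all five elements:

```php
// Returning only the first four elements is no longer accepted; the
// $partial flag indicating an incomplete indexing run is mandatory.
return [$numrecords, $numdocs, $numdocsignored, $lastindexeddoc, $partial];
```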
Added static caching of classes to reduce load times, and reduced
calls to `get_component_classes` by altering it to accept a null
component value so that the classmap is searched only once.
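A sketch of the caching pattern; the wrapper function name and the
exact `get_component_classes` signature are assumptions:

```php
// Compute the class list once per request and reuse it afterwards.
public static function get_search_area_classes(): array {
    static $classes = null;
    if ($classes === null) {
        // Passing null as the component searches the classmap once,
        // rather than once per component.
        $classes = \core_component::get_component_classes(null, 'search');
    }
    return $classes;
}
```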
Creates a new 'Users' field in the search filters form. This field
requires new JavaScript and, to support it, a new AJAX-callable web
service that searches for users by name, with detailed restrictions
based on the current user's access to view profiles.
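A hypothetical sketch of how such a service might be registered; the
function, class and method names here are illustrative only:

```php
// Hypothetical entry in lib/db/services.php for the AJAX user search.
$functions = [
    'core_search_get_relevant_users' => [
        'classname'   => 'core_search\external',
        'methodname'  => 'get_relevant_users',
        'description' => 'Searches for users by name, limited to those ' .
                'whose profiles the current user is allowed to view.',
        'type'        => 'read',
        'ajax'        => true,
    ],
];
```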
When content is restored, it is added to a queue for indexing. If the
restored content was then deleted before the indexing took place, the
scheduled task threw an exception. This change makes it continue
safely past missing contexts.
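A sketch of the fix inside the scheduled task, assuming the queue
table name; IGNORE_MISSING makes instance_by_id() return false rather
than throwing:

```php
foreach ($requests as $request) {
    $context = \context::instance_by_id($request->contextid, IGNORE_MISSING);
    if (!$context) {
        // The content was deleted after it was queued; drop the request
        // and carry on instead of throwing an exception.
        $DB->delete_records('search_index_requests', ['id' => $request->id]);
        continue;
    }
    // ... index the context as normal ...
}
```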
Implements a mechanism by which search engines can provide different
result orderings, and implements a 'by location' ordering within the
Solr search engine (available whenever the user starts their search
from within a course or activity).
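A sketch of how an engine could advertise the extra ordering; the
method name, string identifiers and exact mechanism are assumptions:

```php
// In the Solr engine: offer 'location' ordering only when the search
// starts from inside a course or activity context.
public function get_supported_orders(\context $context) {
    $orders = parent::get_supported_orders($context);
    if ($context->contextlevel !== CONTEXT_SYSTEM) {
        $orders['location'] = get_string('order_location', 'search',
                $context->get_context_name());
    }
    return $orders;
}
```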
Adds group support to the core search API and the Solr search engine.
This allows for:
* User searching by group (in the API only, no interface yet)
* Automatic restriction of search results by group (in some cases,
such as separate-groups forums), as sketched below
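The sketch below shows how a search area might attach a group to each
document so the engine can filter on it; the 'groupid' field name is
an assumption based on the description above:

```php
// Building a document in a group-aware search area (e.g. forum posts).
$doc = \core_search\document_factory::instance($record->id,
        $this->componentname, $this->areaname);
$doc->set('contextid', $context->id);
$doc->set('courseid', $record->course);
// New: the group this result belongs to, used to restrict results in
// separate-groups forums.
$doc->set('groupid', $record->groupid);
```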
Adds a new 'Gradual reindex' link to the search areas page for each
area. When clicked, this shows a confirmation prompt and then adds
each context from that search area to the indexing queue.
The search areas page now displays the 'Additional indexing queue'
(if it is non-empty). The table shows the first 10 items in the
queue, and it also indicates the total number in case there are
more. (I don't think people really need to see the entire
contents of it, so I didn't implement paging.)
Adds an indexpriority field to the database table which holds the
queue of indexing requests. This allows potentially large area
reindexes to be given a lower priority, so that they do not hold up
the special indexing that runs after a course restore.
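As a sketch (the constant names are assumptions), callers can now pass
a priority when queueing work:

```php
// A gradual whole-area reindex is queued at a lower priority...
\core_search\manager::request_index($context, $areaid,
        \core_search\manager::INDEX_PRIORITY_REINDEXING);
// ...so it does not delay the default-priority requests made after a
// course restore.
\core_search\manager::request_index($restoredcontext);
```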
This new API returns a list of contexts for each search area. It
allows the areas to be reindexed in a sensible order (roughly
speaking, newest first) and lets each area control that order.
An implementation in the forum module means that forums are ordered
by the date of the most recent discussion, so that active forums
will be reindexed early even if they were created a long time ago.
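A simplified sketch of what the forum implementation might look like,
assuming the method is named get_contexts_to_reindex; the real query
would also need to handle forums without any discussions:

```php
public function get_contexts_to_reindex() {
    global $DB;
    // Order forum contexts by their most recent discussion so active
    // forums are reindexed first, however old they are.
    $sql = "SELECT ctx.id
              FROM {context} ctx
              JOIN {course_modules} cm ON cm.id = ctx.instanceid
              JOIN {modules} m ON m.id = cm.module AND m.name = 'forum'
              JOIN {forum_discussions} d ON d.forum = cm.instance
             WHERE ctx.contextlevel = ?
          GROUP BY ctx.id
          ORDER BY MAX(d.timemodified) DESC";
    $contexts = [];
    foreach ($DB->get_records_sql($sql, [CONTEXT_MODULE]) as $rec) {
        $contexts[] = \context::instance_by_id($rec->id);
    }
    return new \ArrayIterator($contexts);
}
```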
Without this change, it's possible for the unit tests to fail at any
time. Before this change, indexing time was measured in real time, not
the test's fake time, making all index timings 0. The failure can then
happen because PHP offers no guarantee about the sort order of any two
array members that compare as equal; it just happens to pass for the
current array of search areas in vanilla Moodle.
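A self-contained demonstration of the hazard (before PHP 8.0 made
sorts stable):

```php
// All-zero timings compare as equal, so uasort() may return the equal
// keys in any order from run to run.
$timings = ['forum-post' => 0, 'glossary-entry' => 0, 'book-chapter' => 5];
uasort($timings, function ($a, $b) {
    return $a <=> $b;
});
var_dump(array_keys($timings)); // First two keys may appear in either order.
```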
The recordsets used for search indexing sometimes return results
which are invalid (e.g. cannot be found in the database). When this
happens, the result in the iterator for the recordset will be false.
Due to a bug, the iterator used to stop when it encountered a false
value, which prevented indexing from getting past the problematic
record.
In addition, the iterator that skips future data caused the current()
function of its parent iterator to be called twice per entry, which
meant that search indexing called get_document() twice as many times
as necessary.
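A simplified, self-contained sketch of the fixed behaviour (the class
name is illustrative, not the real core class): cache the parent's
current() so each document is built only once, and skip false entries
instead of stopping at them.

```php
class skip_invalid_iterator implements \Iterator {
    private $parent;
    private $current = null;

    public function __construct(\Iterator $parent) {
        $this->parent = $parent;
    }

    public function rewind(): void {
        $this->parent->rewind();
        $this->fetch();
    }

    public function next(): void {
        $this->parent->next();
        $this->fetch();
    }

    /** Caches current() and skips false entries rather than stopping. */
    private function fetch(): void {
        while ($this->parent->valid()) {
            $value = $this->parent->current(); // Called exactly once per entry.
            if ($value !== false) {
                $this->current = $value;
                return;
            }
            $this->parent->next(); // Skip the invalid record and carry on.
        }
        $this->current = null;
    }

    #[\ReturnTypeWillChange]
    public function current() { return $this->current; }
    #[\ReturnTypeWillChange]
    public function key() { return $this->parent->key(); }
    public function valid(): bool { return $this->current !== null; }
}
```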
We were previously testing that the parent is valid, which it was, and
then fetching the current record, before fetching data from it.
However, in the way the recordset walk works, the valid() function
checks whether the _record_ itself is valid, while current() allows a
callback to be applied.
In this instance, the data entry was failing because the count of
indexfields was < 2. The recordset data itself was valid, but the view
was not, and as a result the current() function returned false.
This false was not previously handled.
I've changed the logic so that we handle this case, and have removed a
double-negative in the process.
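A before/after sketch of the changed logic (variable names are
illustrative):

```php
// Before: only the parent's validity was checked, so a false from
// current() slipped through.
if ($documents->valid()) {
    $document = $documents->current(); // Can be false if the callback
                                       // rejected the record.
    $docdata = $document->export_for_engine(); // Fatal error on false.
}

// After: handle the false explicitly, without the double negative.
$document = $documents->valid() ? $documents->current() : false;
if ($document) {
    $docdata = $document->export_for_engine();
}
```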
The search area API now includes a new function get_document_recordset
which should be implemented in preference to the older
get_recordset_by_timestamp. (It's also possible to implement both in
plugin search areas which need to work against older Moodle versions.)
Existing search areas without the new function will continue to work as
before (obviously without the new functionality).
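A sketch of a plugin search area implementing both functions; the
{widget} table is hypothetical and the context-restriction helper is
an assumption following the pattern used by core areas:

```php
public function get_document_recordset($modifiedfrom = 0, \context $context = null) {
    global $DB;
    list($contextjoin, $contextparams) = $this->get_context_restriction_sql(
            $context, 'widget', 'w');
    if ($contextjoin === null) {
        // The requested context cannot contain any of this area's content.
        return null;
    }
    return $DB->get_recordset_sql("
            SELECT w.*
              FROM {widget} w
            $contextjoin
             WHERE w.timemodified >= ?
          ORDER BY w.timemodified ASC",
            array_merge($contextparams, [$modifiedfrom]));
}

/** Kept for compatibility with Moodle versions that predate the new API. */
public function get_recordset_by_timestamp($modifiedfrom = 0) {
    return $this->get_document_recordset($modifiedfrom);
}
```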
New API \core_search\manager::request_index($context, $areaid = '')
adds the given context to a list which is intended to be indexed
later by the scheduled task.
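Example usage (the callers and area id are illustrative):

```php
// Queue a restored course for reindexing by all search areas.
\core_search\manager::request_index(\context_course::instance($courseid));
// Queue a single activity for one specific area only.
\core_search\manager::request_index(\context_module::instance($cmid),
        'mod_forum-post');
```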
New function \core_search\manager::is_indexing_enabled(), analogous
to the existing is_global_search_enabled().
This replaces existing duplicated code, ready for more use in
following commits.
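Typical guard, mirroring the existing is_global_search_enabled()
checks:

```php
if (!\core_search\manager::is_indexing_enabled()) {
    return; // Indexing is switched off; nothing to do.
}
```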