* The arrow characters in the link_arrow_right() and link_arrow_left()
functions are announced by screen readers. This causes confusion and
is unnecessary.
This hack was introduced to work around a bug in MySQL 5.6.14 and
MariaDB at the time.
https://bugs.mysql.com/bug.php?id=69882
It was addressed a few months later in MySQL 5.6.16 and 5.7.4.
MariaDB merged version 5.6.16 of MySQL's InnoDB engine in MariaDB
10.0.11 and got the patch from there.
Moodle has required MySQL 5.7 and MariaDB 10.2.29 since Moodle 3.11,
so it is now safe to remove these hacks for these versions.
In some places we prevented cache poisoning, in others we did not. We
also did not place any restriction on the minimum value for a revision.
This change introduces a new set of functions for configonly endpoints
which validate the revision numbers passed in. If the revision is
either too old, or too new, it is rejected and the file content is not
cached. The content is still served, but caching headers are not sent,
and any local storage caching is prevented.
The current time is used as the maximum revision, with 60 seconds added
to allow for any clock skew between cluster nodes. Previously some
locations used one hour, but there should never be such a large clock
skew on a correctly configured system.
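As a rough sketch of the idea (the helper name and the exact header
values here are illustrative, not the actual new functions):

    // Accept only sane revisions: positive, and no more than 60 seconds
    // ahead of the current time (allowing for clock skew between nodes).
    function is_valid_revision(int $revision): bool {
        return $revision > 0 && $revision <= time() + 60;
    }

    if (is_valid_revision($rev)) {
        header('Cache-Control: public, max-age=604800, immutable');
    } else {
        // Still serve the content, but send no caching headers and
        // prevent any local storage caching.
        header('Cache-Control: no-store');
    }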
Co-authored-by: Andrew Nicols <andrew@nicols.co.uk>
The selection gets lost while opening the modal dialogue to update
embedded media. Caching the current selection allows us to update the
previously selected node instead of the first embedded media element.
Signed-off-by: Gregor Eichelberger <gregor.eichelberger@tuwien.ac.at>
The short names of the cc licenses are suffixed with the version
number (currently 3.0 and 4.0). The old cc* licenses become the new
cc-*-3.0 licenses and are disabled, because the new cc-*-4.0 licenses
are the current ones.
This is a backport of MDL-43195.
When `log_out` is called from `\core\oauth2\client`, it deletes the
refresh token, when what it actually needs is to use it to get a new
access token. Actually logging out is not needed here; the only thing
we need to make sure of is that the invalid access token is removed
from the session, and that is done by storing `null`.
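A minimal sketch of that idea, using the store_token() helper from the
base oauth2 client class (the exact patch may differ):

    // Drop only the stale access token from the session. The refresh
    // token is kept, so it can still be used to get a new access token.
    $this->store_token(null);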
This new parameter / property will decide if we want to reduce
the run data before processing it:
- By default it will be disabled in table mode.
- By default it will be enabled in graph mode.
- The defaults can be changed by adding reducedata=[0|1] to the URLs
  (see the example below).
- Once data reduction is enabled, it stays enabled while
navigating within the xhprof reports.
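For example (the host and run id here are placeholders):

    https://example.com/admin/tool/profiling/index.php?runid=abc123&reducedata=1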
This covers the 2 new functions with unit tests:
- xhprof_topo_sort()
- reduce_run_data()
Note that the example graph used in the data provider is the
one shown in the issue to explain the reduction procedure.
Here we are reducing the xhprof runs data by removing the
__Mustache==>__Mustache calls and all the orphaned data.
To avoid N iterations, what we do is:
0. The information is "topologically" sorted, so we ensure that
all the parents in the data are processed before the children.
(This will help a lot when cleaning orphaned data; see below.)
1. First pass: all the candidate calls (matched by regexp) are removed
from the run data.
2. Second pass: all the orphaned information (calls that have ended up
losing their parent) is also removed, so the data stays consistent.
Note that we would normally need N passes to remove all the orphaned
data (because each pass creates new orphan candidates) but, since we
have ensured that the information is topologically sorted (see point 0
above), all this can be done in a single pass (see the sketch below).
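A simplified sketch of the two passes (the names here are illustrative,
not the real implementation). It assumes the run data is keyed by
'parent==>child' strings, as xhprof produces, and is already
topologically sorted:

    function reduce_run_data_sketch(array $data, string $pattern): array {
        // Pass 1: remove all the candidate calls matching the regexp,
        // e.g. '/^__Mustache.*==>__Mustache/'.
        foreach (array_keys($data) as $key) {
            if (preg_match($pattern, $key)) {
                unset($data[$key]);
            }
        }
        // Pass 2: remove orphans. Because parents are processed before
        // children (topological order), a single pass is enough.
        $reachable = [];
        foreach (array_keys($data) as $key) {
            $parts = explode('==>', $key);
            if (count($parts) === 1) {
                $reachable[$key] = true; // Root entry, e.g. 'main()'.
                continue;
            }
            [$parent, $child] = $parts;
            if (!isset($reachable[$parent])) {
                unset($data[$key]); // Caller is gone: orphaned call.
                continue;
            }
            $reachable[$child] = true;
        }
        return $data;
    }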
TODO:
- Add unit tests.
- Enable some system to decide which utilities should get the
  reduced data and which ones will continue using the complete
  data. Right now the reduction is applied to all the utilities
  (both table and graph views).
- Document the change and, if implemented, the way to select
between complete/reduced data.
- Consider adding some caching to speed-up the reduction process
(some TODOs have been left in the code pointing to the critical
points).
If a quiz had a long job to calculate statistics running, this would
cause pages that may also attempt a recalculation (the statistics report
page or question bank) to load very slowly, and possibly result in a
database deadlock.
This change will firstly prevent the question bank page from
performing analysis calculations at all, since these are not required
for this page, which will speed up loading and prevent deadlocks there.
Secondly, this adds a lock to the recalculation process so that it
cannot run twice concurrently (sketched below). The user is shown a
message indicating that the page is waiting for a running calculation
to complete; eventually the wait will time out with a message and
debugging output.
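The locking part, sketched with Moodle's lock API (the lock type,
resource name and timeout here are illustrative):

    $lockfactory = \core\lock\lock_config::get_lock_factory('quiz_statistics');
    // Wait (up to 60s here) for any concurrently running recalculation.
    if ($lock = $lockfactory->get_lock('recalculate_' . $quizid, 60)) {
        try {
            // ... perform the statistics recalculation ...
        } finally {
            $lock->release();
        }
    } else {
        // Timed out: report it with a message and debugging output.
    }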
The external test file URL concerns itself only with HTTP_USER_AGENT
matching, not with sending response headers, which can differ
according to the HTTP protocol in use by the endpoint (1.1 vs 2).
Given that the returned response code itself is irrelevant to the
testcase, there's not much benefit in asserting it and risking random
failures.
The above syntax is defined as supported by the class; for example,
the format '5/10' means:
"At every 10th <unit> from 5 through <max>."
It is analogous to '5-<max>/10'.
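For illustration (a hypothetical helper, not the class's actual API),
'5/10' over the minutes field (0-59) expands to 5, 15, 25, 35, 45, 55:

    // Expand 'start/step' for a field bounded by $max, i.e. 'start-<max>/step'.
    function expand_slash_spec(int $start, int $step, int $max): array {
        return range($start, $max, $step);
    }

    expand_slash_spec(5, 10, 59); // [5, 15, 25, 35, 45, 55]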
Instead of doing an exact check of the page title in
\behat_hooks::before_step(), do a more lenient check verifying that
the page title contains the acceptance test's site name.
* Use the page title separator constant when displaying the page title
during upgrade and installation.
* No need to display the site name during install because it hasn't
been set at this point.
* Page titles should display the most unique information first. For
admin pages it would be useful to display the information that
is unique to the page first before the broader categories that the
page belongs to.
* Also use the new page title separator constant.
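For example, a sketch of the intended ordering (the concrete strings
here are hypothetical; moodle_page::TITLE_SEPARATOR is the new
constant):

    // Most unique information first, broader context last.
    $PAGE->set_title(implode(moodle_page::TITLE_SEPARATOR, [
        get_string('pluginname', 'tool_example'), // The specific admin page.
        get_string('administrationsite'),
        $SITE->fullname,
    ]));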