If there is a PHP fatal error and destructors do not run (this can
happen with out-of-memory errors, and possibly when an earlier
destructor itself fails), then Postgres cursors may be left open.
Usually this does not cause a problem because the connection is
closed anyway, but with persistent connections a future request may
reuse the connection while a cursor is still open. That request then
gets errors when it tries to declare a new cursor with the same name.
This change closes all cursors at the start of a persistent
connection.
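A minimal sketch of the idea using the raw pgsql extension rather
than Moodle's driver (the connection string is illustrative):

```php
// Reusing a persistent connection may inherit cursors left open by a
// previous request that died before its destructors ran.
$conn = pg_pconnect('host=localhost dbname=moodle user=moodle');

// CLOSE ALL closes every open cursor in the session and is harmless
// when none are open.
pg_query($conn, 'CLOSE ALL');
```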
MySQL 8 has added "groups" to its reserved word list. That word is
used as a table name in Moodle, so we have to quote it with backticks
in the SQL so that MySQL knows it is being used as an identifier.
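For example (illustrative SQL; real Moodle table names also carry the
configured prefix):

```php
// Unquoted, MySQL 8 parses GROUPS as a keyword and raises a syntax
// error.
$sql = "SELECT id, name FROM groups WHERE courseid = 3";

// Backtick-quoted, it is unambiguously an identifier.
$sql = "SELECT id, name FROM `groups` WHERE courseid = 3";
```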
It is more memory efficient to fetch each row with `pg_fetch_assoc`
than to call `pg_fetch_all` and try to release memory afterwards. The
assoc fetch behaves like an iterator, bringing only the current
record into memory one at a time, whilst the all fetch loads every
record at once and never unsets them. Attempting to unset them is
extremely time consuming.
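A sketch of the iterator-style pattern with raw pgsql calls (query
and handler are illustrative):

```php
$result = pg_query($conn, 'SELECT id, name FROM some_table');

// Only the current row is materialised in PHP memory at any time.
while (($row = pg_fetch_assoc($result)) !== false) {
    process_row($row); // Hypothetical per-row handler.
}
pg_free_result($result);
```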
This is a follow-up to 85f47ba, where we relaxed the strict (===)
isEqual() comparison for strings that is new since PHPUnit 7.x.
Copying the explanation here for easier understanding.
Link: https://github.com/sebastianbergmann/phpunit/issues/3185
The solution here is one of:
a) Return to the previous situation, making the comparison
softer. That can be achieved by forcing different types, so
that float == string works.
b) Change the APIs (both forms and database return strings) to
perform some conversion to floats. That would make float
comparison (with floats or strings) work too.
The patch here follows the a) approach. Changing all the internals
for proper float handling sounds excessive when it has been working
perfectly forever. So we went the easier route, just getting
rid of the new === comparisons where needed by changing expectation
types to float.
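For instance, with hypothetical test values, the fix is purely on the
expectation side:

```php
// Fails under PHPUnit >= 7.x: both sides are strings, so the strict
// (===) comparison applies and '10.0' !== '10.00'.
$this->assertEquals('10.0', $record->grade);

// Passes: a float expectation forces the loose (==) numeric
// comparison, so 10.0 == '10.00' as before.
$this->assertEquals(10.0, $record->grade);
```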
- Clumsy fallback only when there is no full-text search support
- Mimic the Solr tests
- pgsql tokenization using the simple configuration (see the sketch
  after this list)
- Workaround for the mysql '*' search issue
- Proper calculation of total results
- SQL Server FTS support
- Standardize dml full-text search checks
- Upgrade note about the new dml method
- Set search_simpledb as the default engine if there is no Solr config
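For the pgsql tokenization item above, here is a hedged sketch of the
kind of matching involved, using the 'simple' text search
configuration (table and column names are illustrative):

```php
// 'simple' tokenizes without language-specific stemming or stop
// words, which keeps behaviour predictable across installations.
$sql = "SELECT id
          FROM mdl_search_simpledb_index
         WHERE to_tsvector('simple', title || ' ' || content)
               @@ plainto_tsquery('simple', :query)";
```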
If the file does not have Unix line endings then the regular expression
in oci_native_moodle_database::attempt_oci_package_install() does
not split it correctly.
This leads to an invalid package being created in Oracle.
The .gitattributes file changes for oci_native_moodle_package.sql
force it to have Unix-style line endings when the branch is checked
out and the file does not already exist.
The file has been modified so that the Unix-style line endings are
applied even if the file already exists, for example when pulling
this change into an existing branch.
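The relevant .gitattributes entry is along these lines (eol=lf forces
LF line endings on checkout; path shown as in the Moodle tree):

```
lib/dml/oci_native_moodle_package.sql text eol=lf
```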
On Postgres, get_recordset_sql loads all the results into memory
(within the Postgres library, which doesn't count towards the PHP
memory limit, but does count towards making your server run out of
memory) as soon as the query completes.
This commit changes the code to use cursors, which in Postgres
allow the results to be returned in smaller chunks (by default
100,000 rows).
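In terms of raw Postgres commands the driver now does roughly this
(sketch only; cursor name and chunk size are illustrative):

```php
pg_query($conn, 'BEGIN');
pg_query($conn, 'DECLARE crs1 NO SCROLL CURSOR FOR SELECT * FROM big_table');

do {
    // Each FETCH moves at most one chunk from the server to the
    // client, instead of the whole result set at once.
    $result = pg_query($conn, 'FETCH 100000 FROM crs1');
    $fetched = pg_num_rows($result);
    while (($row = pg_fetch_assoc($result)) !== false) {
        // ... process $row ...
    }
    pg_free_result($result);
} while ($fetched > 0);

pg_query($conn, 'CLOSE crs1');
pg_query($conn, 'COMMIT');
```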
The following InnoDB file format configuration parameters were deprecated
in MySQL 5.7.7 and are now removed:
- innodb_file_format
- innodb_file_format_check
- innodb_file_format_max
- innodb_large_prefix
File format configuration parameters were necessary for creating tables
compatible with earlier versions of InnoDB in MySQL 5.1.
Now that MySQL 5.1 has reached the end of its product lifecycle,
the parameters are no longer required.
The FILE_FORMAT column was removed from the INNODB_SYS_TABLES and
INNODB_SYS_TABLESPACES Information Schema tables.
Ref: https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-0.html
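In practice this means the settings must be dropped from my.cnf
before upgrading, because MySQL 8.0 refuses to start on an unknown
variable; for example:

```
[mysqld]
# Both removed in MySQL 8.0; delete these lines before upgrading.
innodb_file_format = Barracuda
innodb_large_prefix = ON
```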
When updating the MySQL system to utf8mb4, not all tables are
converted to the Compressed or Dynamic row format. If a new
index is created, there is a possibility that the table is still
using Compact or Redundant, and an error is then shown saying
that the index size is too large. This fix handles that exception
and converts the table over to Compressed.
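A simplified sketch of the recovery path using plain mysqli (Moodle's
actual fix lives in its DDL layer; the table and index names are
illustrative):

```php
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);
$db = new mysqli('localhost', 'user', 'pass', 'moodle');

try {
    $db->query('CREATE INDEX idx_name ON example (name(255))');
} catch (mysqli_sql_exception $e) {
    // 1709 "Index column size too large": the table still uses the
    // Compact/Redundant row format, which caps index keys at 767
    // bytes; Compressed raises the limit so the index fits.
    if ($e->getCode() == 1709) {
        $db->query('ALTER TABLE example ROW_FORMAT=Compressed');
        $db->query('CREATE INDEX idx_name ON example (name(255))');
    } else {
        throw $e;
    }
}
```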