The discussion here describes restrictions that apply to the use of MySQL features such as stored programs, subqueries, and views. The restrictions covered first apply to the features described in Chapter 21, Stored Programs and Views.
Some of the restrictions noted here apply to all stored routines; that is, both to stored procedures and stored functions. There are also some restrictions specific to stored functions but not to stored procedures.
The restrictions for stored functions also apply to triggers. There are also some restrictions specific to triggers.
The restrictions for stored procedures also apply to the DO clause of Event Scheduler event definitions. There are also some restrictions specific to events.
Stored routines cannot contain arbitrary SQL statements. The following statements are not permitted:
The locking statements LOCK TABLES and UNLOCK TABLES.

LOAD DATA and LOAD TABLE.
SQL prepared statements (PREPARE, EXECUTE, DEALLOCATE PREPARE) can be used in stored procedures, but not stored functions or triggers. Thus, stored functions and triggers cannot use dynamic SQL (where you construct statements as strings and then execute them).
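For example, a stored procedure can build and run a statement dynamically; note that the statement text is assembled in a user variable, because prepared statements cannot refer to local variables (a minimal sketch; the procedure and table names are hypothetical):

CREATE PROCEDURE count_rows(IN tbl_name VARCHAR(64))
BEGIN
  -- Build the statement text in a user variable, then prepare and execute it
  SET @s = CONCAT('SELECT COUNT(*) FROM ', tbl_name);
  PREPARE stmt FROM @s;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END;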
Generally, statements not permitted in SQL prepared statements are also not permitted in stored programs. For a list of statements supported as prepared statements, see Section 14.5, “SQL Syntax for Prepared Statements”. Exceptions are SIGNAL, RESIGNAL, and GET DIAGNOSTICS, which are not permissible as prepared statements but are permitted in stored programs.
Because local variables are in scope only during stored program execution, references to them are not permitted in prepared statements created within a stored program. Prepared statement scope is the current session, not the stored program, so the statement could be executed after the program ends, at which point the variables would no longer be in scope. For example, SELECT ... INTO local_var cannot be used as a prepared statement. This restriction also applies to stored procedure and function parameters. See Section 14.5.1, “PREPARE Syntax”.
Within all stored programs (stored procedures and functions, triggers, and events), the parser treats BEGIN [WORK] as the beginning of a BEGIN ... END block. To begin a transaction in this context, use START TRANSACTION instead.
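For example (a minimal sketch; the accounts table and its columns are hypothetical):

CREATE PROCEDURE transfer(IN amt DECIMAL(10,2))
BEGIN
  -- START TRANSACTION, not BEGIN, starts the transaction inside a stored program
  START TRANSACTION;
  UPDATE accounts SET balance = balance - amt WHERE id = 1;
  UPDATE accounts SET balance = balance + amt WHERE id = 2;
  COMMIT;
END;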
The following additional statements or operations are not permitted within stored functions. They are permitted within stored procedures, except stored procedures that are invoked from within a stored function or trigger. For example, if you use FLUSH in a stored procedure, that stored procedure cannot be called from a stored function or trigger.
Statements that perform explicit or implicit commit or rollback. Support for these statements is not required by the SQL standard, which states that each DBMS vendor may decide whether to permit them.
Statements that return a result set. This includes SELECT statements that do not have an INTO var_list clause and other statements such as SHOW, EXPLAIN, and CHECK TABLE. A function can process a result set either with SELECT ... INTO var_list or by using a cursor and FETCH statements; a minimal example of the SELECT ... INTO approach follows this list. See Section 14.2.9.1, “SELECT ... INTO Syntax”, and Section 14.6.6, “Cursors”.
FLUSH statements.
Stored functions cannot be used recursively.
A stored function or trigger cannot modify a table that is already being used (for reading or writing) by the statement that invoked the function or trigger.
If you refer to a temporary table multiple times in a stored function under different aliases, a Can't reopen table: 'tbl_name' error occurs, even if the references occur in different statements within the function.
HANDLER ... READ statements that invoke stored functions can cause replication errors and are disallowed.
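For example, a stored function can obtain a value from a table with SELECT ... INTO rather than returning a result set (a minimal sketch; the table t is hypothetical):

CREATE FUNCTION count_t() RETURNS INT
READS SQL DATA
BEGIN
  DECLARE n INT;
  -- Store the single-row result in a local variable instead of returning a result set
  SELECT COUNT(*) INTO n FROM t;
  RETURN n;
END;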
For triggers, the following additional restrictions apply:
Triggers are not activated by foreign key actions.
When using row-based replication, triggers on the slave are not activated by statements originating on the master. The triggers on the slave are activated when using statement-based replication. For more information, see Section 18.4.1.35, “Replication and Triggers”.
The RETURN statement is not permitted in triggers, which cannot return a value. To exit a trigger immediately, use the LEAVE statement.
Triggers are not permitted on tables in the mysql database.
The trigger cache does not detect when metadata of the underlying objects has changed. If a trigger uses a table and the table has changed since the trigger was loaded into the cache, the trigger operates using the outdated metadata.
The same identifier might be used for a routine parameter, a local variable, and a table column. Also, the same local variable name can be used in nested blocks. For example:
CREATE PROCEDURE p (i INT)
BEGIN
  DECLARE i INT DEFAULT 0;
  SELECT i FROM t;
  BEGIN
    DECLARE i INT DEFAULT 1;
    SELECT i FROM t;
  END;
END;
In such cases, the identifier is ambiguous and the following precedence rules apply:
A local variable takes precedence over a routine parameter or table column.
A routine parameter takes precedence over a table column.
A local variable in an inner block takes precedence over a local variable in an outer block.
The behavior that variables take precedence over table columns is nonstandard.
Use of stored routines can cause replication problems. This issue is discussed further in Section 21.7, “Binary Logging of Stored Programs”.
The --replicate-wild-do-table=db_name.tbl_name option applies to tables, views, and triggers. It does not apply to stored procedures and functions, or events. To filter statements operating on the latter objects, use one or more of the --replicate-*-db options.
There are no stored routine debugging facilities.
The MySQL stored routine syntax is based on the SQL:2003 standard. The following items from that standard are not currently supported:
UNDO handlers

FOR loops
To prevent problems of interaction between sessions, when a client issues a statement, the server uses a snapshot of routines and triggers available for execution of the statement. That is, the server calculates a list of procedures, functions, and triggers that may be used during execution of the statement, loads them, and then proceeds to execute the statement. While the statement executes, it does not see changes to routines performed by other sessions.
For maximum concurrency, stored functions should minimize their side-effects; in particular, updating a table within a stored function can reduce concurrent operations on that table. A stored function acquires table locks before executing, to avoid inconsistency in the binary log due to mismatch of the order in which statements execute and when they appear in the log. When statement-based binary logging is used, statements that invoke a function are recorded rather than the statements executed within the function. Consequently, stored functions that update the same underlying tables do not execute in parallel. In contrast, stored procedures do not acquire table-level locks. All statements executed within stored procedures are written to the binary log, even for statement-based binary logging. See Section 21.7, “Binary Logging of Stored Programs”.
The following limitations are specific to the Event Scheduler:
Event names are handled in case-insensitive fashion. For example, you cannot have two events in the same database with the names anEvent and AnEvent.
An event may not be created, altered, or dropped by a stored routine, trigger, or another event. An event also may not create, alter, or drop stored routines or triggers. (Bug #16409, Bug #18896)
DDL statements on events are prohibited while a LOCK TABLES statement is in effect.
Event timings using the intervals YEAR, QUARTER, MONTH, and YEAR_MONTH are resolved in months; those using any other interval are resolved in seconds. There is no way to cause events scheduled to occur at the same second to execute in a given order. In addition, due to rounding, the nature of threaded applications, and the fact that a nonzero length of time is required to create events and to signal their execution, events may be delayed by as much as 1 or 2 seconds. However, the time shown in the INFORMATION_SCHEMA.EVENTS table's LAST_EXECUTED column or the mysql.event table's last_executed column is always accurate to within one second of the actual event execution time. (See also Bug #16522.)
Each execution of the statements contained in the body of an event takes place in a new connection; thus, these statements have no effect in a given user session on the server's statement counts such as Com_select and Com_insert that are displayed by SHOW STATUS. However, such counts are updated in the global scope. (Bug #16422)
Events do not support times later than the end of the Unix Epoch; this is approximately the beginning of the year 2038. Such dates are specifically not permitted by the Event Scheduler. (Bug #16396)
References to stored functions, user-defined functions, and tables in the ON SCHEDULE clauses of CREATE EVENT and ALTER EVENT statements are not supported. These sorts of references are not permitted. (See Bug #22830 for more information.)
Stored routines and triggers in MySQL Cluster.
Stored procedures, stored functions, and triggers are all supported by tables using the NDB storage engine; however, it is important to keep in mind that they do not propagate automatically between MySQL Servers acting as Cluster SQL nodes. This is because of the following:
Stored routine definitions are kept in tables in the mysql system database using the MyISAM storage engine, and so do not participate in clustering.
The .TRN and .TRG files containing trigger definitions are not read by the NDB storage engine, and are not copied between Cluster nodes.
Any stored routine or trigger that interacts with MySQL Cluster tables must be re-created by running the appropriate CREATE PROCEDURE, CREATE FUNCTION, or CREATE TRIGGER statements on each MySQL Server that participates in the cluster where you wish to use the stored routine or trigger. Similarly, any changes to existing stored routines or triggers must be carried out explicitly on all Cluster SQL nodes, using the appropriate ALTER or DROP statements on each MySQL Server accessing the cluster.
Do not attempt to work around the issue described in the first item mentioned previously by converting any mysql database tables to use the NDB storage engine. Altering the system tables in the mysql database is not supported and is very likely to produce undesirable results.
SIGNAL, RESIGNAL, and GET DIAGNOSTICS are not permissible as prepared statements. For example, this statement is invalid:
PREPARE stmt1 FROM 'SIGNAL SQLSTATE "02000"';
SQLSTATE values in class '04' are not treated specially. They are handled the same as other exceptions.
Standard SQL has a diagnostics area stack, containing a diagnostics area for each nested execution context. Standard SQL syntax includes GET STACKED DIAGNOSTICS for referring to stacked areas. MySQL does not support the STACKED keyword because there is a single diagnostics area containing information from the most recent statement that wrote to it. See also Section 14.6.7.7, “The MySQL Diagnostics Area”.
In standard SQL, the first condition relates to the SQLSTATE value returned for the previous SQL statement. In MySQL, this is not guaranteed, so to get the main error, you cannot do this:
GET DIAGNOSTICS CONDITION 1 @errno = MYSQL_ERRNO;
Instead, do this:
GET DIAGNOSTICS @cno = NUMBER;
GET DIAGNOSTICS CONDITION @cno @errno = MYSQL_ERRNO;
Server-side cursors are implemented in the C API using the mysql_stmt_attr_set() function. The same implementation is used for cursors in stored routines. A server-side cursor enables a result set to be generated on the server side, but not transferred to the client except for those rows that the client requests. For example, if a client executes a query but is only interested in the first row, the remaining rows are not transferred.
In MySQL, a server-side cursor is materialized into an internal temporary table. Initially, this is a MEMORY table, but is converted to a MyISAM table when its size exceeds the minimum value of the max_heap_table_size and tmp_table_size system variables.
The same restrictions apply to internal temporary tables created
to hold the result set for a cursor as for other uses of internal
temporary tables. See Section 9.4.4, “Internal Temporary Table Use in MySQL”.
One limitation of the implementation is that for a large result
set, retrieving its rows through a cursor might be slow.
Cursors are read only; you cannot use a cursor to update rows. UPDATE WHERE CURRENT OF and DELETE WHERE CURRENT OF are not implemented, because updatable cursors are not supported.
Cursors are nonholdable (not held open after a commit).
Cursors are asensitive.
Cursors are nonscrollable.
Cursors are not named. The statement handler acts as the cursor ID.
You can have open only a single cursor per prepared statement. If you need several cursors, you must prepare several statements.
You cannot use a cursor for a statement that generates a result set if the statement is not supported in prepared mode. This includes statements such as CHECK TABLE, HANDLER READ, and SHOW BINLOG EVENTS.
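As an illustration of these properties, the following is a minimal sketch of a forward-only, read-only cursor in a stored procedure (the table t and its column i are hypothetical):

CREATE PROCEDURE sum_values()
BEGIN
  DECLARE done INT DEFAULT FALSE;
  DECLARE v INT;
  DECLARE total INT DEFAULT 0;
  -- Cursor declaration must follow variable declarations and precede handler declarations
  DECLARE cur CURSOR FOR SELECT i FROM t;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO v;
    IF done THEN
      LEAVE read_loop;
    END IF;
    SET total = total + v;
  END LOOP;
  CLOSE cur;
  SELECT total;
END;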
In general, you cannot modify a table and select from the same table in a subquery. For example, this limitation applies to statements of the following forms:
DELETE FROM t WHERE ... (SELECT ... FROM t ...);
UPDATE t ... WHERE col = (SELECT ... FROM t ...);
{INSERT|REPLACE} INTO t (SELECT ... FROM t ...);
Exception: The preceding prohibition does not apply if you are using a subquery for the modified table in the FROM clause. Example:
UPDATE t ... WHERE col = (SELECT * FROM (SELECT ... FROM t...) AS _t ...);
Here the result from the subquery in the FROM clause is stored as a temporary table, so the relevant rows in t have already been selected by the time the update to t takes place.
Row comparison operations are only partially supported:
For expr [NOT] IN subquery, expr can be an n-tuple (specified using row constructor syntax) and the subquery can return rows of n-tuples. The permitted syntax is therefore more specifically expressed as row_constructor [NOT] IN table_subquery.

For expr op {ALL|ANY|SOME} subquery, expr must be a scalar value and the subquery must be a column subquery; it cannot return multiple-column rows.

In other words, for a subquery that returns rows of n-tuples, this is supported:

(expr_1, ..., expr_n) [NOT] IN table_subquery

But this is not supported:

(expr_1, ..., expr_n) op {ALL|ANY|SOME} subquery

The reason for supporting row comparisons for IN but not for the others is that IN is implemented by rewriting it as a sequence of = comparisons and AND operations. This approach cannot be used for ALL, ANY, or SOME.
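For example, assuming hypothetical tables t1 and t2 that each have columns a and b, the first of these statements is permitted and the second is not:

SELECT * FROM t1 WHERE (a, b) IN (SELECT a, b FROM t2);
SELECT * FROM t1 WHERE (a, b) = ANY (SELECT a, b FROM t2);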
Subqueries in the FROM clause cannot be correlated subqueries. They are materialized in whole (evaluated to produce a result set) during query execution, so they cannot be evaluated per row of the outer query. The optimizer delays materialization until the result is needed, which may permit materialization to be avoided. See Section 9.2.1.18.3, “Optimizing Derived Tables and View References”.
MySQL does not support LIMIT in subqueries for certain subquery operators:

mysql> SELECT * FROM t1
    -> WHERE s1 IN (SELECT s2 FROM t2 ORDER BY s1 LIMIT 1);
ERROR 1235 (42000): This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'
MySQL permits a subquery to refer to a stored function that has data-modifying side effects such as inserting rows into a table. For example, if f() inserts rows, the following query can modify data:
SELECT ... WHERE x IN (SELECT f() ...);
This behavior is an extension to the SQL standard. In MySQL, it can produce indeterminate results because f() might be executed a different number of times for different executions of a given query depending on how the optimizer chooses to handle it.
For statement-based or mixed-format replication, one implication of this indeterminism is that such a query can produce different results on the master and its slaves.
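For illustration, such a function f() might be defined as follows (a hypothetical sketch; the table log_t is assumed, and with binary logging enabled, creating a nondeterministic data-modifying function may also require enabling log_bin_trust_function_creators):

CREATE FUNCTION f() RETURNS INT
MODIFIES SQL DATA
BEGIN
  -- The side effect: every invocation inserts a row
  INSERT INTO log_t (msg) VALUES ('f() was called');
  RETURN 1;
END;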
View processing is not optimized:
It is not possible to create an index on a view.
Indexes can be used for views processed using the merge algorithm. However, a view that is processed with the temptable algorithm is unable to take advantage of indexes on its underlying tables (although indexes can be used during generation of the temporary tables).
Before MySQL 5.7.7, subqueries cannot be used in the FROM clause of a view.
There is a general principle that you cannot modify a table and select from the same table in a subquery. See Section C.4, “Restrictions on Subqueries”.
The same principle also applies if you select from a view that selects from the table, if the view selects from the table in a subquery and the view is evaluated using the merge algorithm. Example:
CREATE VIEW v1 AS
SELECT * FROM t2 WHERE EXISTS (SELECT 1 FROM t1 WHERE t1.a = t2.a);
UPDATE t1, v1 SET t1.a = 1 WHERE t1.b = v1.b;
If the view is evaluated using a temporary table, you can select from the table in the view subquery and still modify that table in the outer query. In this case the view will be stored in a temporary table and thus you are not really selecting from the table in a subquery and modifying it “at the same time.” (This is another reason you might wish to force MySQL to use the temptable algorithm by specifying ALGORITHM = TEMPTABLE in the view definition.)
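For example, the view from the preceding example could be forced to use the temptable algorithm (a sketch):

CREATE ALGORITHM = TEMPTABLE VIEW v1 AS
SELECT * FROM t2 WHERE EXISTS (SELECT 1 FROM t1 WHERE t1.a = t2.a);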
You can use DROP TABLE or ALTER TABLE to drop or alter a table that is used in a view definition. No warning results from the DROP or ALTER operation, even though this invalidates the view. Instead, an error occurs later, when the view is used. CHECK TABLE can be used to check for views that have been invalidated by DROP or ALTER operations.
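For example (a sketch with hypothetical names), dropping a base table leaves the view in place, and the problem is reported only when the view is checked or used:

CREATE VIEW v AS SELECT * FROM t;
DROP TABLE t;
CHECK TABLE v;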
With regard to view updatability, the overall goal for views is that if any view is theoretically updatable, it should be updatable in practice. This includes views that have UNION in their definition. Currently, not all views that are theoretically updatable can be updated. The initial view implementation was deliberately written this way to get usable, updatable views into MySQL as quickly as possible. Many theoretically updatable views can be updated now, but limitations still exist:
Updatable views with subqueries anywhere other than in the WHERE clause. Some views that have subqueries in the SELECT list may be updatable.
You cannot use UPDATE to update more than one underlying table of a view that is defined as a join.

You cannot use DELETE to update a view that is defined as a join.
There exists a shortcoming with the current implementation of views. If a user is granted the basic privileges necessary to create a view (the CREATE VIEW and SELECT privileges), that user will be unable to call SHOW CREATE VIEW on that object unless the user is also granted the SHOW VIEW privilege.
That shortcoming can lead to problems backing up a database with mysqldump, which may fail due to insufficient privileges. This problem is described in Bug #22062.
The workaround to the problem is for the administrator to manually grant the SHOW VIEW privilege to users who are granted CREATE VIEW, since MySQL doesn't grant it implicitly when views are created.
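For example (hypothetical account and database names; the account is assumed to exist already):

GRANT SHOW VIEW ON db1.* TO 'dev'@'localhost';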
Views do not have indexes, so index hints do not apply. Use of index hints when selecting from a view is not permitted.
SHOW CREATE VIEW displays view definitions using an AS alias_name clause for each column. If a column is created from an expression, the default alias is the expression text, which can be quite long. Aliases for column names in CREATE VIEW statements are checked against the maximum column length of 64 characters (not the maximum alias length of 256 characters). As a result, views created from the output of SHOW CREATE VIEW fail if any column alias exceeds 64 characters. This can cause problems in the following circumstances for views with too-long aliases:
View definitions fail to replicate to newer slaves that enforce the column-length restriction.
Dump files created with mysqldump cannot be loaded into servers that enforce the column-length restriction.
A workaround for either problem is to modify each problematic view definition to use aliases that provide shorter column names. Then the view will replicate properly, and can be dumped and reloaded without causing an error. To modify the definition, drop and create the view again with DROP VIEW and CREATE VIEW, or replace the definition with CREATE OR REPLACE VIEW.
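For example, a long expression can be given an explicit short alias when the view is redefined (a sketch with hypothetical names):

CREATE OR REPLACE VIEW v AS
SELECT CONCAT(last_name, ', ', first_name) AS full_name FROM people;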
For problems that occur when reloading view definitions in dump files, another workaround is to edit the dump file to modify its CREATE VIEW statements. However, this does not change the original view definitions, which may cause problems for subsequent dump operations.
XA transaction support is limited to the InnoDB storage engine.

For “external XA,” a MySQL server acts as a Resource Manager and client programs act as Transaction Managers. For “Internal XA”, storage engines within a MySQL server act as RMs, and the server itself acts as a TM. Internal XA support is limited by the capabilities of individual storage engines. Internal XA is required for handling XA transactions that involve more than one storage engine. The implementation of internal XA requires that a storage engine support two-phase commit at the table handler level, and currently this is true only for InnoDB.
For XA START, the JOIN and RESUME clauses are not supported.

For XA END, the SUSPEND [FOR MIGRATE] clause is not supported.
The requirement that the bqual part of the xid value be different for each XA transaction within a global transaction is a limitation of the current MySQL XA implementation. It is not part of the XA specification.
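For reference, a minimal XA transaction sequence looks like this (the xid shown, with gtrid part 'g1' and bqual part 'b1', and the table t are hypothetical):

XA START 'g1', 'b1';
INSERT INTO t VALUES (1);
XA END 'g1', 'b1';
XA PREPARE 'g1', 'b1';
XA COMMIT 'g1', 'b1';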
Prior to MySQL 5.7.7, XA transactions were not compatible with replication. This was because an XA transaction that was in PREPARED state would be rolled back on clean server shutdown or client disconnect. Similarly, an XA transaction that was in PREPARED state would still exist in PREPARED state if the server was shut down abnormally and then started again, but the contents of the transaction could not be written to the binary log. In both of these situations the XA transaction could not be replicated correctly.
In MySQL 5.7.7 and later, there is a change in behavior and an XA transaction is written to the binary log in two parts. When XA PREPARE is issued, the first part of the transaction up to XA PREPARE is written using an initial GTID. A XA_prepare_log_event is used to identify such transactions in the binary log. When XA COMMIT or XA ROLLBACK is issued, a second part of the transaction containing only the XA COMMIT or XA ROLLBACK statement is written using a second GTID. Note that the initial part of the transaction, identified by XA_prepare_log_event, is not necessarily followed by its XA COMMIT or XA ROLLBACK, which can cause interleaved binary logging of any two XA transactions. The two parts of the XA transaction can even appear in different binary log files. This means that an XA transaction in PREPARED state is now persistent until an explicit XA COMMIT or XA ROLLBACK statement is issued, ensuring that XA transactions are compatible with replication.
The following restrictions exist for using XA transactions in MySQL 5.7.7 and later:
XA is not fully resilient to an unexpected halt with respect to the binary log (on the master). If there is an unexpected halt before XA PREPARE, between XA PREPARE and XA COMMIT (or XA ROLLBACK), or after XA COMMIT (or XA ROLLBACK), the server and binary log are correctly recovered and taken to a consistent state. However, if there is an unexpected halt in the middle of the execution of one of these statements, the server may not be able to recover to a correct state, leaving the server and the binary log in an inconsistent state.
XA does not work with relay-log-info-repository=TABLE.
XA does not work with replication filters or binary log filters. Filters are permitted as long as they do not render any XA transactions empty. Filters that filter out XA transactions may cause the slave to stop with an error.
If GTIDs are enabled and the slave does not use log-bin=OFF or does not use log-slave-updates, XA transactions are not crash-safe with respect to GTIDs on the slave. If the slave stops unexpectedly while applying an XA PREPARE or XA COMMIT, then after recovery @@GLOBAL.GTID_EXECUTED may not correctly describe the transactions that have been applied on the slave.
Identifiers are stored in mysql database tables (user, db, and so forth) using utf8, but identifiers can contain only characters in the Basic Multilingual Plane (BMP). Supplementary characters are not permitted in identifiers.
The ucs2, utf16, utf16le, and utf32 character sets have the following restrictions:
They cannot be used as a client character set, which means that they do not work for SET NAMES or SET CHARACTER SET. (See Section 11.1.5, “Connection Character Sets and Collations”.)
It is currently not possible to use LOAD DATA INFILE to load data files that use these character sets.
FULLTEXT indexes cannot be created on a column that uses any of these character sets. However, you can perform IN BOOLEAN MODE searches on the column without an index.
The use of ENCRYPT() with these character sets is not recommended because the underlying system call expects a string terminated by a zero byte.
The REGEXP and RLIKE operators work in byte-wise fashion, so they are not multibyte safe and may produce unexpected results with multibyte character sets. In addition, these operators compare characters by their byte values and accented characters may not compare as equal even if a given collation treats them as equal.
The Performance Schema avoids using mutexes to collect or produce data, so there are no guarantees of consistency and results can sometimes be incorrect. Event values in performance_schema tables are nondeterministic and nonrepeatable.
If you save event information in another table, you should not assume that the original events will still be available later. For example, if you select events from a performance_schema table into a temporary table, intending to join that table with the original table later, there might be no matches.
mysqldump and BACKUP DATABASE ignore tables in the performance_schema database.
Tables in the performance_schema database cannot be locked with LOCK TABLES, except the setup_xxx tables.
Tables in the performance_schema database cannot be indexed.
Results for queries that refer to tables in the performance_schema database are not saved in the query cache.
Tables in the performance_schema database are not replicated.
The Performance Schema is not available in libmysqld, the embedded server.
The types of timers might vary per platform. The performance_timers table shows which event timers are available. If the values in this table for a given timer name are NULL, that timer is not supported on your platform.
Instruments that apply to storage engines might not be implemented for all storage engines. Instrumentation of each third-party engine is the responsibility of the engine maintainer.
The first part of this section describes general restrictions on the applicability of the pluggable authentication framework described at Section 7.3.8, “Pluggable Authentication”. The second part describes how third-party connector developers can determine the extent to which a connector can take advantage of pluggable authentication capabilities and what steps to take to become more compliant.
The term “native authentication” used here refers to authentication against passwords stored in the Password column of the mysql.user table. This is the same authentication method provided by older MySQL servers, before pluggable authentication was implemented. It remains the default method, although now it is implemented using plugins.

“Windows native authentication” refers to authentication using the credentials of a user who has already logged in to Windows, as implemented by the Windows Native Authentication plugin (“Windows plugin” for short).
Connector/C, Connector/C++: Clients that use these connectors can connect to the server only through accounts that use native authentication.
Exception: A connector supports pluggable authentication if it was built to link to libmysqlclient dynamically (rather than statically) and it loads the current version of libmysqlclient if that version is installed, or if the connector is recompiled from source to link against the current libmysqlclient.
Connector/J: Clients that use this connector can connect to the server only through accounts that use native authentication.
Connector/Net: Before Connector/Net 6.4.4, clients that use this connector can connect to the server only through accounts that use native authentication. As of 6.4.4, clients can also connect to the server through accounts that use the Windows plugin.
Connector/ODBC: Before Connector/ODBC 3.51.29 and 5.1.9, clients that use this connector can connect to the server only through accounts that use native authentication. As of 3.51.29 and 5.1.9, clients that use binary releases of this connector for Windows can also connect to the server through accounts that use the PAM or Windows plugins. (These capabilities result from linking the Connector/ODBC binaries against the MySQL 5.5.16 libmysqlclient rather than the MySQL 5.1 libmysqlclient used previously. The newer libmysqlclient includes the client-side support needed for the server-side PAM and Windows authentication plugins.)
Connector/PHP: Clients that use this connector can connect to the server only through accounts that use native authentication, when compiled using the MySQL native driver for PHP (mysqlnd).
MySQL Proxy: Before MySQL Proxy 0.8.2, clients can connect to the server only through accounts that use native authentication. As of 0.8.2, clients can also connect to the server through accounts that use the PAM plugin. As of 0.8.3, clients can also connect to the server through accounts that use the Windows plugin.
MySQL Enterprise Backup: MySQL Enterprise Backup before version 3.6.1 supports connections to the server only through accounts that use native authentication. As of 3.6.1, MySQL Enterprise Backup can connect to the server through accounts that use nonnative authentication.
Windows native authentication: Connecting through an account that uses the Windows plugin requires Windows Domain setup. Without it, NTLM authentication is used and then only local connections are possible; that is, the client and server must run on the same computer.
Proxy users: Proxy user support is available to the extent that clients can connect through accounts authenticated with plugins that implement proxy user capability (that is, plugins that can return a user name different from that of the connecting user). For example, the native authentication plugins do not support proxy users, whereas the PAM and Windows plugins do.
Replication: Replication slaves can employ not only master accounts using native authentication, but can also connect through master accounts that use nonnative authentication if the required client-side plugin is available. If the plugin is built into libmysqlclient, it is available by default. Otherwise, the plugin must be installed on the slave side in the directory named by the slave plugin_dir system variable.
FEDERATED tables: A FEDERATED table can access the remote table only through accounts on the remote server that use native authentication.
Third-party connector developers can use the following guidelines to determine readiness of a connector to take advantage of pluggable authentication capabilities and what steps to take to become more compliant:
An existing connector to which no changes have been made uses native authentication and clients that use the connector can connect to the server only through accounts that use native authentication. However, you should test the connector against a recent version of the server to verify that such connections still work without problem.
Exception: A connector might work with pluggable authentication without any changes if it links to libmysqlclient dynamically (rather than statically) and it loads the current version of libmysqlclient if that version is installed.
To take advantage of pluggable authentication capabilities, a connector that is libmysqlclient-based should be relinked against the current version of libmysqlclient. This enables the connector to support connections through accounts that require client-side plugins now built into libmysqlclient (such as the cleartext plugin needed for PAM authentication and the Windows plugin needed for Windows native authentication). Linking with a current libmysqlclient also enables the connector to access client-side plugins installed in the default MySQL plugin directory (typically the directory named by the default value of the local server's plugin_dir system variable).
If a connector links to libmysqlclient dynamically, it must be ensured that the newer version of libmysqlclient is installed on the client host and that the connector loads it at runtime.
Another way for a connector to support a given authentication method is to implement it directly in the client/server protocol. Connector/Net uses this approach to provide support for Windows native authentication.
If a connector should be able to load client-side plugins from a directory different from the default plugin directory, it must implement some means for client users to specify the directory. Possibilities for this include a command-line option or environment variable from which the connector can obtain the directory name. Standard MySQL client programs such as mysql and mysqladmin implement a --plugin-dir option. See also Section 25.8.14, “C API Client Plugin Functions”.
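For example (the plugin directory path and user name shown are hypothetical):

mysql --plugin-dir=/usr/local/mysql/lib/plugin --user=user_name -p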
Proxy user support by a connector depends, as described earlier in this section, on whether the authentication methods that it supports permit proxy users.
This section lists current limits in MySQL 5.7.
The maximum number of tables that can be referenced in a single join is 61. This includes a join handled by merging derived tables (subqueries) and views in the FROM clause into the outer query block (see Section 9.2.1.18.3, “Optimizing Derived Tables and View References”). It also applies to the number of tables that can be referenced in the definition of a view.
MySQL has no limit on the number of databases. The underlying file system may have a limit on the number of directories.
MySQL has no limit on the number of tables. The underlying file system may have a limit on the number of files that represent tables. Individual storage engines may impose engine-specific constraints. InnoDB permits up to 4 billion tables.
The effective maximum table size for MySQL databases is usually determined by operating system constraints on file sizes, not by MySQL internal limits. The following table lists some examples of operating system file-size limits. This is only a rough guide and is not intended to be definitive. For the most up-to-date information, be sure to check the documentation specific to your operating system.
Operating System | File-size Limit
---|---
Win32 w/ FAT/FAT32 | 2GB/4GB
Win32 w/ NTFS | 2TB (possibly larger)
Linux 2.2-Intel 32-bit | 2GB (LFS: 4GB)
Linux 2.4+ (using ext3 file system) | 4TB
Solaris 9/10 | 16TB
OS X w/ HFS+ | 2TB
Windows users, please note that FAT and VFAT (FAT32) are not considered suitable for production use with MySQL. Use NTFS instead.
On Linux 2.2, you can get MyISAM tables larger than 2GB in size by using the Large File Support (LFS) patch for the ext2 file system. Most current Linux distributions are based on kernel 2.4 or higher and include all the required LFS patches. On Linux 2.4, patches also exist for ReiserFS to get support for big files (up to 2TB). With JFS and XFS, petabyte and larger files are possible on Linux.
For a detailed overview about LFS in Linux, have a look at Andreas Jaeger's Large File Support in Linux page at http://www.suse.de/~aj/linux_lfs.html.
If you do encounter a full-table error, there are several reasons why it might have occurred:
The disk might be full.
The InnoDB storage engine maintains InnoDB tables within a tablespace that can be created from several files. This enables a table to exceed the maximum individual file size. The tablespace can include raw disk partitions, which permits extremely large tables. The maximum tablespace size is 64TB.
You are using InnoDB tables and have run out of room in the InnoDB tablespace. In this case, the solution is to extend the InnoDB tablespace. See Section 15.5.2, “Changing the Number or Size of InnoDB Redo Log Files”.
You are using MyISAM tables on an operating system that supports files only up to 2GB in size and you have hit this limit for the data file or index file.
You are using a MyISAM table and the space required for the table exceeds what is permitted by the internal pointer size. MyISAM permits data and index files to grow up to 256TB by default, but this limit can be changed up to the maximum permissible size of 65,536TB (256^7 − 1 bytes).
If you need a MyISAM table that is larger than the default limit and your operating system supports large files, the CREATE TABLE statement supports AVG_ROW_LENGTH and MAX_ROWS options. See Section 14.1.18, “CREATE TABLE Syntax”. The server uses these options to determine how large a table to permit.
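For example (a sketch; the table name and the option values shown are hypothetical):

CREATE TABLE big_log (
  id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  msg TEXT
) ENGINE = MyISAM MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 500;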
If the pointer size is too small for an existing table, you can change the options with ALTER TABLE to increase a table's maximum permissible size. See Section 14.1.8, “ALTER TABLE Syntax”.

ALTER TABLE tbl_name MAX_ROWS=1000000000 AVG_ROW_LENGTH=nnn;
You have to specify AVG_ROW_LENGTH only for tables with BLOB or TEXT columns; in this case, MySQL can't optimize the space required based only on the number of rows.
To change the default size limit for MyISAM tables, set the myisam_data_pointer_size system variable, which sets the number of bytes used for internal row pointers. The value is used to set the pointer size for new tables if you do not specify the MAX_ROWS option. The value of myisam_data_pointer_size can be from 2 to 7. A value of 4 permits tables up to 4GB; a value of 6 permits tables up to 256TB.
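For example, to use 6-byte pointers for MyISAM tables created afterward (a sketch; setting a global variable requires the SUPER privilege):

SET GLOBAL myisam_data_pointer_size = 6;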
You can check the maximum data and index sizes by using this statement:
SHOW TABLE STATUS FROM db_name LIKE 'tbl_name';
You also can use myisamchk -dv /path/to/table-index-file. See Section 14.7.5, “SHOW Syntax”, or Section 5.6.3, “myisamchk — MyISAM Table-Maintenance Utility”.
Other ways to work around file-size limits for MyISAM tables are as follows:
If your large table is read only, you can use myisampack to compress it. myisampack usually compresses a table by at least 50%, so you can have, in effect, much bigger tables. myisampack also can merge multiple tables into a single table. See Section 5.6.5, “myisampack — Generate Compressed, Read-Only MyISAM Tables”.
MySQL includes a MERGE library that enables you to handle a collection of MyISAM tables that have identical structure as a single MERGE table. See Section 16.7, “The MERGE Storage Engine”.
You are using the MEMORY (HEAP) storage engine; in this case you need to increase the value of the max_heap_table_size system variable. See Section 6.1.4, “Server System Variables”.
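For example (the value shown is arbitrary; the new limit applies to MEMORY tables created or altered after the change):

SET max_heap_table_size = 128 * 1024 * 1024;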
There is a hard limit of 4096 columns per table, but the effective maximum may be less for a given table. The exact limit depends on several interacting factors.
Every table (regardless of storage engine) has a maximum row size of 65,535 bytes. Storage engines may place additional constraints on this limit, reducing the effective maximum row size.
The maximum row size constrains the number (and possibly size) of columns because the total length of all columns cannot exceed this size. For example, utf8 characters require up to three bytes per character, so for a CHAR(255) CHARACTER SET utf8 column, the server must allocate 255 × 3 = 765 bytes per value. Consequently, a table cannot contain more than 65,535 / 765 = 85 such columns.
Storage for variable-length columns includes length bytes, which are assessed against the row size. For example, a VARCHAR(255) CHARACTER SET utf8 column takes two bytes to store the length of the value, so each value can take up to 767 bytes.
BLOB and TEXT columns count from one to four plus eight bytes each toward the row-size limit because their contents are stored separately from the rest of the row.
Declaring columns NULL can reduce the maximum number of columns permitted. For MyISAM tables, NULL columns require additional space in the row to record whether their values are NULL. Each NULL column takes one bit extra, rounded up to the nearest byte. The maximum row length in bytes can be calculated as follows:

row length = 1
             + (sum of column lengths)
             + (number of NULL columns + delete_flag + 7)/8
             + (number of variable-length columns)
delete_flag is 1 for tables with static row format. Static tables use a bit in the row record for a flag that indicates whether the row has been deleted. delete_flag is 0 for dynamic tables because the flag is stored in the dynamic row header. For information about MyISAM table formats, see Section 16.2.3, “MyISAM Table Storage Formats”.
For InnoDB tables, storage size is the same for NULL and NOT NULL columns, so the preceding calculations do not apply.
The following statement to create table t1 succeeds because the columns require 32,765 + 2 bytes and 32,766 + 2 bytes, which falls within the maximum row size of 65,535 bytes:

mysql> CREATE TABLE t1
    -> (c1 VARCHAR(32765) NOT NULL, c2 VARCHAR(32766) NOT NULL)
    -> ENGINE = MyISAM CHARACTER SET latin1;
Query OK, 0 rows affected (0.02 sec)
The following statement to create table t2 fails because the columns are NULL and MyISAM requires additional space that causes the row size to exceed 65,535 bytes:

mysql> CREATE TABLE t2
    -> (c1 VARCHAR(32765) NULL, c2 VARCHAR(32766) NULL)
    -> ENGINE = MyISAM CHARACTER SET latin1;
ERROR 1118 (42000): Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535. You have to change some columns to TEXT or BLOBs
The following statement to create table t3 fails because, although the column length is within the maximum length of 65,535 bytes, two additional bytes are required to record the length, which causes the row size to exceed 65,535 bytes:

mysql> CREATE TABLE t3
    -> (c1 VARCHAR(65535) NOT NULL)
    -> ENGINE = MyISAM CHARACTER SET latin1;
ERROR 1118 (42000): Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535. You have to change some columns to TEXT or BLOBs
Reducing the column length to 65,533 or less permits the statement to succeed.
Individual storage engines might impose additional restrictions that limit table column count. Examples:
InnoDB permits up to 1000 columns.
InnoDB restricts row size to slightly less than half of a database page for 4KB, 8KB, 16KB, and 32KB page sizes. For a page size of 64KB, InnoDB restricts row size to about 16000 bytes. Row size restrictions differ for variable-length columns (VARBINARY, VARCHAR, BLOB, and TEXT). For more information, see Section 15.6.7, “Limits on InnoDB Tables”.
Different InnoDB storage formats (COMPRESSED, REDUNDANT) use different amounts of page header and trailer data, which affects the amount of storage available for rows.
Each table has an .frm file that contains the table definition. The server uses the following expression to check some of the table information stored in the file against an upper limit of 64KB:
if (info_length+(ulong) create_fields.elements*FCOMP+288+
    n_length+int_length+com_length > 65535L || int_count > 255)
The portion of the information stored in the .frm file that is checked against the expression cannot grow beyond the 64KB limit, so if the table definition reaches this size, no more columns can be added.

The relevant factors in the expression are:
info_length is space needed for “screens.” This is related to MySQL's Unireg heritage.
create_fields.elements is the number of columns.
FCOMP is 17.
n_length is the total length of all column names, including one byte per name as a separator.
int_length is related to the list of values for ENUM and SET columns. In this context, “int” does not mean “integer.” It means “interval,” a term that refers collectively to ENUM and SET columns.
com_length is the total length of column comments.
The expression just described has several implications for permitted table definitions:
Using long column names can reduce the maximum number of columns, as can the inclusion of ENUM or SET columns, or use of column comments.
A table can have no more than 255 unique ENUM and SET definitions. Columns with identical element lists are considered the same against this limit. For example, if a table contains these two columns, they count as one (not two) toward this limit because the definitions are identical:
e1 ENUM('a','b','c') e2 ENUM('a','b','c')
The sum of the length of element names in the unique ENUM and SET definitions counts toward the 64KB limit, so although the theoretical limit on number of elements in a given ENUM column is 65,535, the practical limit is less than 3000.
The following limitations apply to use of MySQL on the Windows platform:
Process memory
On Windows 32-bit platforms, it is not possible by default to use more than 2GB of RAM within a single process, including MySQL. This is because the physical address limit on Windows 32-bit is 4GB and the default setting within Windows is to split the virtual address space between kernel (2GB) and user/applications (2GB).
Some versions of Windows have a boot-time setting that enables larger applications by reducing the portion of the address space reserved for the kernel. Alternatively, to use more than 2GB, use a 64-bit version of Windows.
File system aliases
When using MyISAM tables, you cannot use aliases within Windows links to the data files on another volume and then link back to the main MySQL datadir location. This facility is often used to move the data and index files to a RAID or other fast solution, while retaining the main .frm files in the default data directory configured with the datadir option.
Limited number of ports
Windows systems have about 4,000 ports available for client connections, and after a connection on a port closes, it takes two to four minutes before the port can be reused. In situations where clients connect to and disconnect from the server at a high rate, it is possible for all available ports to be used up before closed ports become available again. If this happens, the MySQL server appears to be unresponsive even though it is running. Ports may be used by other applications running on the machine as well, in which case the number of ports available to MySQL is lower.
For more information about this problem, see http://support.microsoft.com/default.aspx?scid=kb;en-us;196271.
DATA DIRECTORY and INDEX DIRECTORY

The DATA DIRECTORY option for CREATE TABLE is supported on Windows only for InnoDB tables, as described in Section 15.5.5, “Creating a File-Per-Table Tablespace Outside the Data Directory”. For MyISAM and other storage engines, the DATA DIRECTORY and INDEX DIRECTORY options for CREATE TABLE are ignored on Windows and any other platforms with a nonfunctional realpath() call.
You cannot drop a database that is in use by another session.
Case-insensitive names
File names are not case sensitive on Windows, so MySQL database and table names are also not case sensitive on Windows. The only restriction is that database and table names must be specified using the same case throughout a given statement. See Section 10.2.2, “Identifier Case Sensitivity”.
Directory and file names
On Windows, MySQL Server supports only directory and file names that are compatible with the current ANSI code pages. For example, the following Japanese directory name will not work in the Western locale (code page 1252):
datadir="C:/私たちのプロジェクトのデータ"
The same limitation applies to directory and file names referred to in SQL statements, such as the data file path name in LOAD DATA INFILE.
The “\” path name separator character

Path name components in Windows are separated by the “\” character, which is also the escape character in MySQL. If you are using LOAD DATA INFILE or SELECT ... INTO OUTFILE, use Unix-style file names with “/” characters:
mysql> LOAD DATA INFILE 'C:/tmp/skr.txt' INTO TABLE skr;
mysql> SELECT * INTO OUTFILE 'C:/tmp/skr.txt' FROM skr;
Alternatively, you must double the “\” character:

mysql> LOAD DATA INFILE 'C:\\tmp\\skr.txt' INTO TABLE skr;
mysql> SELECT * INTO OUTFILE 'C:\\tmp\\skr.txt' FROM skr;
Problems with pipes
Pipes do not work reliably from the Windows command-line prompt. If the pipe includes the character ^Z / CHAR(24), Windows thinks that it has encountered end-of-file and aborts the program.
This is mainly a problem when you try to apply a binary log as follows:
C:\> mysqlbinlog binary_log_file | mysql --user=root
If you have a problem applying the log and suspect that it is because of a ^Z / CHAR(24) character, you can use the following workaround:

C:\> mysqlbinlog binary_log_file --result-file=/tmp/bin.sql
C:\> mysql --user=root --execute "source /tmp/bin.sql"
The latter command also can be used to reliably read in any SQL file that may contain binary data.