9. Events
The events feature is a MySQL extension, not part of standard SQL. An
event, which should not be confused with a binlog event, is a stored
program that is executed regularly by a special event scheduler.
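Note that the event scheduler itself has to be running for any events
to execute; depending on the server version and configuration, it may
be off by default. A minimal sketch of turning it on and checking its
state:
master> SET GLOBAL event_scheduler = ON;
master> SHOW VARIABLES LIKE 'event_scheduler';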
As with all other stored programs, event definitions are logged with a
DEFINER clause. Since events are invoked by the event scheduler, they
are always executed as the definer and therefore do not pose the
security risk that stored functions do.
When events are executed, the statements are written to the binary
log directly.
Since the events are executed on the master, they are automatically
disabled on the slave and will therefore not be executed there. If the
events were not disabled, their changes would be applied twice on the
slave: once through the master executing the event and replicating the
changes to the slave, and once through the slave executing the event
directly.
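As an illustration, consider a minimal event definition (the
cleanup_log name and its schedule here are hypothetical, assuming the
log table from the earlier examples):
master> CREATE EVENT cleanup_log
-> ON SCHEDULE EVERY 1 DAY
-> DO DELETE FROM log WHERE ts < NOW() - INTERVAL 30 DAY;
When this definition replicates, the slave stores the event with status
SLAVESIDE_DISABLED, so the event scheduler on the slave skips it.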
Warning:
Since the events are disabled on the slave, it is necessary to enable
them if the slave, for some reason, has to take over for the master.
So when promoting a slave, don't forget to enable the events that were
replicated from the master. This is easiest to do using the following
statement:
UPDATE mysql.event
SET Status = 'ENABLED'
WHERE Status = 'SLAVESIDE_DISABLED';
The purpose of the WHERE clause is to enable only the events that were
disabled when they were replicated from the master; there might be
events that have been disabled for other reasons.
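After running the update, you can verify that the events were enabled
by querying the Events view of the information schema, for example:
master> SELECT EVENT_SCHEMA, EVENT_NAME, STATUS
-> FROM information_schema.EVENTS;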
10. Special Constructions
Even though statement-based replication is normally straightforward,
some special constructions have to be handled with care. Recall that
for a statement to execute correctly on the slave, its context has to
be correct as well. The context events discussed earlier handle part of
that context, but some constructions depend on additional context that
is not transferred as part of the replication process.
10.1. The LOAD_FILE function
The LOAD_FILE function allows you to fetch a file and use its contents
as part of an expression. Although quite convenient at times, the file
has to exist on the slave server as well for the statement to replicate
correctly, because the file is not transferred during replication the
way the file given to LOAD DATA INFILE is.
With some ingenuity, you can rewrite a statement involving the
LOAD_FILE function either to use
the LOAD DATA INFILE statement or
to define a user-defined variable to hold the contents of the file.
For example, take the following statement that inserts a document into
a table:
master> INSERT INTO document(author, body)
-> VALUES ('Mats Kindahl', LOAD_FILE('go_intro.xml'));
You can rewrite this statement to use LOAD DATA INFILE instead. In this case, you
have to take care to specify character strings that cannot exist in
the document as field and line delimiters, since you are going to read
the entire file contents as a single column.
master> LOAD DATA INFILE 'go_intro.xml' INTO TABLE document
-> FIELDS TERMINATED BY '@*@' LINES TERMINATED BY '&%&'
-> (body) SET author = 'Mats Kindahl';
An alternative is to store the file contents in a user-defined
variable and then use it in the statement. Since the variable's value
is replicated through a User var event, the slave does not need access
to the file at all.
master> SET @document = LOAD_FILE('go_intro.xml');
master> INSERT INTO document(author, body) VALUES ('Mats Kindahl', @document);
11. Nontransactional Changes and Error Handling
So far we have considered only transactional changes and have not
looked at error handling at all. For transactional changes, error
handling is pretty uncomplicated: a statement that tries to change
transactional tables and fails will have no effect at all on the
tables. That is the entire point of a transactional system: the changes
that a failed statement attempted to introduce can be safely ignored.
The same applies to transactions that are rolled back: they have no
effect on the tables and can therefore simply be discarded without
risking inconsistencies between the master and the slave.
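To make this concrete, here is a minimal sketch (the txn_demo table is
hypothetical): a transaction that touches only a transactional table
and is rolled back leaves no trace in the binary log, since the
statements buffered for it are simply discarded.
master> CREATE TABLE txn_demo (id INT PRIMARY KEY) ENGINE = InnoDB;
master> START TRANSACTION;
master> INSERT INTO txn_demo VALUES (1);
master> ROLLBACK;
Note that the CREATE TABLE statement itself is still logged; it is the
rolled-back INSERT that never reaches the binary log.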
A specialty of MySQL is its support for nontransactional storage
engines. These can offer a speed advantage, since the storage engine
does not have to maintain the transactional log that the transactional
engines use, and they allow some optimizations of disk access. From a
replication perspective, however, nontransactional engines require
special consideration.
The most important aspect to note is that replication cannot handle
arbitrary nontransactional engines, but has to make some assumptions
about how they behave. Some of those limitations are lifted with the
introduction of row-based replication in version 5.1, but even in that
case, replication cannot handle arbitrary storage engines.
One of the features that complicates the issue further, from a
replication perspective, is that it is possible to mix transactional and
nontransactional engines in the same transaction, and even in the same
statement.
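For example, a single multi-table update can change rows in both kinds
of engines at once. A sketch, assuming hypothetical tables frob
(InnoDB) and frob_log (MyISAM):
master> UPDATE frob, frob_log
-> SET frob.value = 0, frob_log.note = 'reset'
-> WHERE frob_log.frob_id = frob.id;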
To continue with the example used earlier, consider Example 13, where the
log table from Example 5 is given a
nontransactional storage engine while the employee
table is given a transactional one. We use the nontransactional MyISAM
storage engine for the log table to
improve its speed, while keeping the transactional behavior for the
employee table.
We can further extend the example to track unsuccessful attempts
to add employees by creating a pair of insert triggers: a before trigger
and an after trigger. If an administrator sees an entry in the log with
a status field of FAIL, it means the before trigger ran, but the
after trigger did not, and therefore an attempt to add an employee
failed.
Example 13. Definition of log and employee tables with storage
engines
CREATE TABLE employee (
    name CHAR(64) NOT NULL,
    email CHAR(64),
    password CHAR(64),
    PRIMARY KEY (email)
) ENGINE = InnoDB;

CREATE TABLE log (
    id INT AUTO_INCREMENT,
    email CHAR(64),
    message TEXT,
    status ENUM('FAIL', 'OK') DEFAULT 'FAIL',
    ts TIMESTAMP,
    PRIMARY KEY (id)
) ENGINE = MyISAM;
delimiter $$
CREATE TRIGGER tr_employee_insert_before BEFORE INSERT ON employee FOR EACH ROW
BEGIN
    INSERT INTO log(email, message)
    VALUES (NEW.email, CONCAT('Adding employee ', NEW.name));
    SET @LAST_INSERT_ID = LAST_INSERT_ID();
END $$
delimiter ;

CREATE TRIGGER tr_employee_insert_after AFTER INSERT ON employee FOR EACH ROW
    UPDATE log SET status = 'OK' WHERE id = @LAST_INSERT_ID;
What are the effects of this change on the binary log?
To begin, let’s consider the INSERT statement from
Example 6. Assuming
the statement is not inside a transaction and AUTOCOMMIT is 1, the statement will be a
transaction by itself. If the statement executes without errors,
everything will proceed as planned and the statement will be written to
the binary log as a Query
event.
Now, consider what happens if the INSERT is repeated with the same employee.
Since the email column is the primary key, this
will generate a duplicate key error when the insertion is attempted, but
what will happen with the statement? Is it written to the binary log or
not?
Let’s have a look…
master> SET @pass = PASSWORD('xyzzy');
Query OK, 0 rows affected (0.00 sec)
master> INSERT INTO employee(name,email,password)
-> VALUES ('mats','[email protected]',@pass);
ERROR 1062 (23000): Duplicate entry '[email protected]' for key 'PRIMARY'
master> SELECT * FROM employee;
+------+------------------+-------------------------------------------+
| name | email | password |
+------+------------------+-------------------------------------------+
| mats | [email protected] | *151AF6B8C3A6AA09CFCCBD34601F2D309ED54888 |
+------+------------------+-------------------------------------------+
1 row in set (0.00 sec)
master> SHOW BINLOG EVENTS FROM 38493\G
*************************** 1. row ***************************
Log_name: master-bin.000038
Pos: 38493
Event_type: User var
Server_id: 1
End_log_pos: 38571
Info: @`pass`=_latin1 0x2A313531414636423843334136414130394346434...
COLLATE latin1_swedish_ci
*************************** 2. row ***************************
Log_name: master-bin.000038
Pos: 38571
Event_type: Query
Server_id: 1
End_log_pos: 38689
Info: use `test`; INSERT INTO employee(name,email,password) VALUES ...
2 rows in set (0.00 sec)
As you can see, the statement is written to the binary log even though
the employee table is transactional and the statement failed. The
SELECT shows that there is still only a single employee, proving that
the statement was rolled back. So why is the statement written to the
binary log? Looking into the log table reveals the reason.
master> SELECT * FROM log;
+----+------------------+------------------------+--------+---------------------+
| id | email | message | status | ts |
+----+------------------+------------------------+--------+---------------------+
| 1 | [email protected] | Adding employee mats | OK | 2010-01-13 15:50:45 |
| 2 | [email protected] | Name change from ... | OK | 2010-01-13 15:50:48 |
| 3 | [email protected] | Password change | OK | 2010-01-13 15:50:50 |
| 4 | [email protected] | Removing employee | OK | 2010-01-13 15:50:52 |
| 5 | [email protected] | Adding employee mats | OK | 2010-01-13 16:11:45 |
| 6 | [email protected] | Adding employee mats | FAIL | 2010-01-13 16:12:00 |
+----+------------------+------------------------+--------+---------------------+
6 rows in set (0.00 sec)
Look at the last line, where the status is FAIL. This line was added to the table by the
before trigger tr_employee_insert_before. For the binary log
to faithfully represent the changes made to the database on the master,
it is necessary to write the statement to the binary log if there are
any nontransactional changes present in the statement or in triggers
that are executed as a result of executing the statement. Since the
statement failed, the after trigger tr_employee_insert_after was not executed, and
therefore the status is still FAIL
from the execution of the before trigger.
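Given this behavior, an administrator can list all failed attempts to
add employees with a simple query, for example:
master> SELECT email, message, ts FROM log WHERE status = 'FAIL';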
Since the statement failed on the master, information about the
failure needs to be written to the binary log as well. The MySQL server
handles this by recording the exact error code that caused the
statement to fail in an error code field of the Query event, so the
error code is written to the binary log together with the rest of the
event.
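The error code is also visible when the binary log is examined with the
mysqlbinlog utility, which prints it in the comment line preceding each
Query event. A sketch of the relevant line for the failed INSERT above
(the thread_id and exec_time values are illustrative):
shell> mysqlbinlog master-bin.000038
...
#100113 16:12:00 server id 1  end_log_pos 38689  Query  thread_id=27  exec_time=0  error_code=1062
When the slave re-executes the statement, it compares the outcome with
this error code; if they do not match, the slave stops replication and
reports an error.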