diff --git a/bin/pt-archiver b/bin/pt-archiver index b97ff620..db5542f1 100755 --- a/bin/pt-archiver +++ b/bin/pt-archiver @@ -3570,7 +3570,7 @@ sub main { if ( $o->get('stop') ) { my $sentinel_fh = IO::File->new($sentinel, ">>") or die "Cannot open $sentinel: $OS_ERROR\n"; - print $sentinel_fh "Remove this file to permit mk-archiver to run\n" + print $sentinel_fh "Remove this file to permit pt-archiver to run\n" or die "Cannot write to $sentinel: $OS_ERROR\n"; close $sentinel_fh or die "Cannot close $sentinel: $OS_ERROR\n"; @@ -4065,7 +4065,7 @@ sub main { my $bulkins_file; if ( $o->get('bulk-insert') ) { require File::Temp; - $bulkins_file = File::Temp->new( SUFFIX => 'mk-archiver' ) + $bulkins_file = File::Temp->new( SUFFIX => 'pt-archiver' ) or die "Cannot open temp file: $OS_ERROR\n"; } @@ -4281,7 +4281,7 @@ sub main { $first_row = $row ? [ @$row ] : undef; if ( $o->get('bulk-insert') ) { - $bulkins_file = File::Temp->new( SUFFIX => 'mk-archiver' ) + $bulkins_file = File::Temp->new( SUFFIX => 'pt-archiver' ) or die "Cannot open temp file: $OS_ERROR\n"; } } # no next row (do bulk operations) @@ -4420,7 +4420,7 @@ sub main { # Subroutines. # ############################################################################ -# Catches signals so mk-archiver can exit gracefully. +# Catches signals so pt-archiver can exit gracefully. sub finish { my ($signal) = @_; print STDERR "Exiting on SIG$signal.\n"; @@ -4575,13 +4575,13 @@ if ( !caller ) { exit main(@ARGV); } =head1 NAME -mk-archiver - Archive rows from a MySQL table into another table or a file. +pt-archiver - Archive rows from a MySQL table into another table or a file. =head1 SYNOPSIS -Usage: mk-archiver [OPTION...] --source DSN --where WHERE +Usage: pt-archiver [OPTION...] --source DSN --where WHERE -mk-archiver nibbles records from a MySQL table. The --source and --dest +pt-archiver nibbles records from a MySQL table. 
The --source and --dest arguments use DSN syntax; if COPY is yes, --dest defaults to the key's value from --source. @@ -4589,13 +4589,13 @@ Examples: Archive all rows from oltp_server to olap_server and to a file: - mk-archiver --source h=oltp_server,D=test,t=tbl --dest h=olap_server \ + pt-archiver --source h=oltp_server,D=test,t=tbl --dest h=olap_server \ --file '/var/log/archive/%Y-%m-%d-%D.%t' \ --where "1=1" --limit 1000 --commit-each Purge (delete) orphan rows from child table: - mk-archiver --source h=host,D=db,t=child --purge \ + pt-archiver --source h=host,D=db,t=child --purge \ --where 'NOT EXISTS(SELECT * FROM parent WHERE col=child.col)' =head1 RISKS @@ -4620,20 +4620,20 @@ L<"--bulk-insert"> that may cause data loss. The authoritative source for updated information is always the online issue tracking system. Issues that affect this tool will be marked as such. You can see a list of such issues at the following URL: -L. +L. See also L<"BUGS"> for more information on filing bugs and getting help. =head1 DESCRIPTION -mk-archiver is the tool I use to archive tables as described in +pt-archiver is the tool I use to archive tables as described in L. The goal is a low-impact, forward-only job to nibble old data out of the table without impacting OLTP queries much. You can insert the data into another table, which need not be on the same server. You can also write it to a file in a format suitable for LOAD DATA INFILE. Or you can do neither, in which case it's just an incremental DELETE. -mk-archiver is extensible via a plugin mechanism. You can inject your own +pt-archiver is extensible via a plugin mechanism. You can inject your own code to add advanced archiving logic that could be useful for archiving dependent data, applying complex business rules, or building a data warehouse during the archiving process. @@ -4648,12 +4648,12 @@ rows. 
Specifying the index with the 'i' part of the L<"--source"> argument can be crucial for this; use L<"--dry-run"> to examine the generated queries and be sure to EXPLAIN them to see if they are efficient (most of the time you probably want to scan the PRIMARY key, which is the default). Even better, profile -mk-archiver with mk-query-profiler and make sure it is not scanning the whole +pt-archiver with mk-query-profiler and make sure it is not scanning the whole table every query. You can disable the seek-then-scan optimizations partially or wholly with L<"--no-ascend"> and L<"--ascend-first">. Sometimes this may be more efficient -for multi-column keys. Be aware that mk-archiver is built to start at the +for multi-column keys. Be aware that pt-archiver is built to start at the beginning of the index it chooses and scan it forward-only. This might result in long table scans if you're trying to nibble from the end of the table by an index other than the one it prefers. See L<"--source"> and read the @@ -4663,16 +4663,16 @@ documentation on the C<i> part if this applies to you. If you specify L<"--progress">, the output is a header row, plus status output at intervals. Each row in the status output lists the current date and time, -how many seconds mk-archiver has been running, and how many rows it has +how many seconds pt-archiver has been running, and how many rows it has archived. -If you specify L<"--statistics">, C<mk-archiver> outputs timing and other +If you specify L<"--statistics">, C<pt-archiver> outputs timing and other information to help you identify which part of your archiving process takes the most time. =head1 ERROR-HANDLING -mk-archiver tries to catch signals and exit gracefully; for example, if you +pt-archiver tries to catch signals and exit gracefully; for example, if you send it SIGTERM (Ctrl-C on UNIX-ish systems), it will catch the signal, print a message about the signal, and exit fairly normally. 
It will not execute L<"--analyze"> or L<"--optimize">, because these may take a long time to finish. @@ -4724,7 +4724,7 @@ Ascend only first column of index. If you do want to use the ascending index optimization (see L<"--no-ascend">), but do not want to incur the overhead of ascending a large multi-column index, -you can use this option to tell mk-archiver to ascend only the leftmost column +you can use this option to tell pt-archiver to ascend only the leftmost column of the index. This can provide a significant performance boost over not ascending the index at all, while avoiding the cost of ascending the whole index. @@ -4768,7 +4768,7 @@ will not be called. Instead, its C method is called later. B<WARNING>: if you have a plugin on the source that sometimes doesn't return true from C<is_archivable()>, you should use this option only if you understand -what it does. If the plugin instructs C<mk-archiver> not to archive a row, +what it does. If the plugin instructs C<pt-archiver> not to archive a row, it will still be deleted by the bulk delete! =item --[no]bulk-delete-limit @@ -4828,10 +4828,10 @@ default: yes Ensure L<"--source"> and L<"--dest"> have same columns. -Enabled by default; causes mk-archiver to check that the source and destination +Enabled by default; causes pt-archiver to check that the source and destination tables have the same columns. It does not check column order, data type, etc. It just checks that all columns in the source exist in the destination and -vice versa. If there are any differences, mk-archiver will exit with an +vice versa. If there are any differences, pt-archiver will exit with an error. To disable this check, specify --no-check-columns. @@ -4855,7 +4855,7 @@ short form: -c; type: array Comma-separated list of columns to archive. Specify a comma-separated list of columns to fetch, write to the file, and -insert into the destination table. If specified, mk-archiver ignores other +insert into the destination table. 
If specified, pt-archiver ignores other columns unless it needs to add them to the C queries so they seek into the index where the previous query ended, then scan along it, rather than scanning from the beginning of the table every time. This is enabled by default because it is generally a good strategy @@ -5080,11 +5080,11 @@ interacts with plugins. Do not delete archived rows. -Causes C<mk-archiver> not to delete rows after processing them. This disallows +Causes C<pt-archiver> not to delete rows after processing them. This disallows L<"--no-ascend">, because enabling them both would cause an infinite loop. If there is a plugin on the source DSN, its C<before_delete> method is called -anyway, even though C<mk-archiver> will not execute the delete. See +anyway, even though C<pt-archiver> will not execute the delete. See L<"EXTENDING"> for more on plugins. =item --optimize @@ -5191,15 +5191,15 @@ type: int; default: 1 Number of retries per timeout or deadlock. -Specifies the number of times mk-archiver should retry when there is an +Specifies the number of times pt-archiver should retry when there is an InnoDB lock wait timeout or deadlock. When retries are exhausted, -mk-archiver will exit with an error. +pt-archiver will exit with an error. Consider carefully what you want to happen when you are archiving between a mixture of transactional and non-transactional storage engines. The INSERT to L<"--dest"> and DELETE from L<"--source"> are on separate connections, so they do not actually participate in the same transaction even if they're on the same -server. However, mk-archiver implements simple distributed transactions in +server. However, pt-archiver implements simple distributed transactions in code, so commits and rollbacks should happen as desired across the two connections. @@ -5220,23 +5220,23 @@ default: yes Do not archive row with max AUTO_INCREMENT. 
-Adds an extra WHERE clause to prevent mk-archiver from removing the newest +Adds an extra WHERE clause to prevent pt-archiver from removing the newest row when ascending a single-column AUTO_INCREMENT key. This guards against re-using AUTO_INCREMENT values if the server restarts, and is enabled by default. The extra WHERE clause contains the maximum value of the auto-increment column as of the beginning of the archive or purge job. If new rows are inserted while -mk-archiver is running, it will not see them. +pt-archiver is running, it will not see them. =item --sentinel -type: string; default: /tmp/mk-archiver-sentinel +type: string; default: /tmp/pt-archiver-sentinel Exit if this file exists. -The presence of the file specified by L<"--sentinel"> will cause mk-archiver to -stop archiving and exit. The default is /tmp/mk-archiver-sentinel. You +The presence of the file specified by L<"--sentinel"> will cause pt-archiver to +stop archiving and exit. The default is /tmp/pt-archiver-sentinel. You might find this handy to stop cron jobs gracefully if necessary. See also L<"--stop">. @@ -5278,7 +5278,7 @@ type: float Calculate L<"--sleep"> as a multiple of the last SELECT time. -If this option is specified, mk-archiver will sleep for the query time of the +If this option is specified, pt-archiver will sleep for the query time of the last SELECT multiplied by the specified coefficient. This is a slightly more sophisticated way to throttle the SELECTs: sleep a @@ -5296,7 +5296,7 @@ Socket file to use for connection. type: DSN DSN specifying the table to archive from (required). This argument is a DSN. -See L for the syntax. Most options control how mk-archiver +See L for the syntax. Most options control how pt-archiver connects to MySQL, but there are some extended DSN options in this tool's syntax. The D, t, and i options select a table to archive: @@ -5308,14 +5308,14 @@ option specifies pluggable actions, which an external Perl module can provide. 
The only required part is the table; other parts may be read from various places in the environment (such as options files). -The 'i' part deserves special mention. This tells mk-archiver which index +The 'i' part deserves special mention. This tells pt-archiver which index it should scan to archive. This appears in a FORCE INDEX or USE INDEX hint in the SELECT statements used to fetch archivable rows. If you don't specify -anything, mk-archiver will auto-discover a good index, preferring a C<PRIMARY KEY> if one exists. +anything, pt-archiver will auto-discover a good index, preferring a C<PRIMARY KEY> if one exists. In my experience this usually works well, so most of the time you can probably just omit the 'i' part. -The index is used to optimize repeated accesses to the table; mk-archiver +The index is used to optimize repeated accesses to the table; pt-archiver remembers the last row it retrieves from each SELECT statement, and uses it to construct a WHERE clause, using the columns in the specified index, that should allow MySQL to start the next SELECT where the last one ended, rather than @@ -5334,24 +5334,24 @@ purge job on the master and prevent it from happening on the slave using your method of choice. B<NOTE>: Using a default options file (F) DSN option that defines a -socket for L<"--source"> causes mk-archiver to connect to L<"--dest"> using +socket for L<"--source"> causes pt-archiver to connect to L<"--dest"> using that socket unless another socket for L<"--dest"> is specified. This -means that mk-archiver may incorrectly connect to L<"--source"> when it +means that pt-archiver may incorrectly connect to L<"--source"> when it is meant to connect to L<"--dest">. For example: --source F=host1.cnf,D=db,t=tbl --dest h=host2 -When mk-archiver connects to L<"--dest">, host2, it will connect via the +When pt-archiver connects to L<"--dest">, host2, it will connect via the L<"--source">, host1, socket defined in host1.cnf. =item --statistics Collect and print timing statistics. -Causes mk-archiver to collect timing statistics about what it does. 
These +Causes pt-archiver to collect timing statistics about what it does. These statistics are available to the plugin specified by L<"--plugin"> -Unless you specify L<"--quiet">, C prints the statistics when it +Unless you specify L<"--quiet">, C prints the statistics when it exits. The statistics look like this: Started at 2008-07-18T07:18:53, ended at 2008-07-18T07:18:53 @@ -5386,7 +5386,7 @@ on reasonably new Perl releases. Stop running instances by creating the sentinel file. -Causes mk-archiver to create the sentinel file specified by L<"--sentinel"> and +Causes pt-archiver to create the sentinel file specified by L<"--sentinel"> and exit. This should have the effect of stopping all running instances which are watching the same sentinel file. @@ -5397,7 +5397,7 @@ type: int; default: 1 Number of rows per transaction. Specifies the size, in number of rows, of each transaction. Zero disables -transactions altogether. After mk-archiver processes this many rows, it +transactions altogether. After pt-archiver processes this many rows, it commits both the L<"--source"> and the L<"--dest"> if given, and flushes the file given by L<"--file">. @@ -5406,14 +5406,14 @@ server, which for example is doing heavy OLTP work, you need to choose a good balance between transaction size and commit overhead. Larger transactions create the possibility of more lock contention and deadlocks, but smaller transactions cause more frequent commit overhead, which can be significant. To -give an idea, on a small test set I worked with while writing mk-archiver, a +give an idea, on a small test set I worked with while writing pt-archiver, a value of 500 caused archiving to take about 2 seconds per 1000 rows on an otherwise quiet MySQL instance on my desktop machine, archiving to disk and to another table. Disabling transactions with a value of zero, which turns on autocommit, dropped performance to 38 seconds per thousand rows. 
If you are not archiving from or to a transactional storage engine, you may -want to disable transactions so mk-archiver doesn't try to commit. +want to disable transactions so pt-archiver doesn't try to commit. =item --user @@ -5444,16 +5444,16 @@ L<"--where"> 1=1. Print reason for exiting unless rows exhausted. -Causes mk-archiver to print a message if it exits for any reason other than +Causes pt-archiver to print a message if it exits for any reason other than running out of rows to archive. This can be useful if you have a cron job with -L<"--run-time"> specified, for example, and you want to be sure mk-archiver is +L<"--run-time"> specified, for example, and you want to be sure pt-archiver is finishing before running out of time. If L<"--statistics"> is given, the behavior is changed slightly. It will print the reason for exiting even when it's just because there are no more rows. This output prints even if L<"--quiet"> is given. That's so you can put -C<mk-archiver> in a C<cron> job and get an email if there's an abnormal exit. +C<pt-archiver> in a C<cron> job and get an email if there's an abnormal exit. =back @@ -5549,13 +5549,13 @@ User for login if not current user. =head1 EXTENDING -mk-archiver is extensible by plugging in external Perl modules to handle some +pt-archiver is extensible by plugging in external Perl modules to handle some logic and/or actions. You can specify a module for both the L<"--source"> and the L<"--dest">, with the 'm' part of the specification. For example: --source D=test,t=test1,m=My::Module1 --dest m=My::Module2,t=test2 -This will cause mk-archiver to load the My::Module1 and My::Module2 packages, +This will cause pt-archiver to load the My::Module1 and My::Module2 packages, create instances of them, and then make calls to them during the archiving process. 
@@ -5568,22 +5568,22 @@ The module must provide this interface: =item new(dbh => $dbh, db => $db_name, tbl => $tbl_name) The plugin's constructor is passed a reference to the database handle, the -database name, and table name. The plugin is created just after mk-archiver +database name, and table name. The plugin is created just after pt-archiver opens the connection, and before it examines the table given in the arguments. This gives the plugin a chance to create and populate temporary tables, or do other setup work. =item before_begin(cols => \@cols, allcols => \@allcols) -This method is called just before mk-archiver begins iterating through rows +This method is called just before pt-archiver begins iterating through rows and archiving them, but after it does all other setup work (examining table structures, designing SQL queries, and so on). This is the only time -mk-archiver tells the plugin column names for the rows it will pass the +pt-archiver tells the plugin column names for the rows it will pass the plugin while archiving. The C<cols> argument is the column names the user requested to be archived, either by default or by the L<"--columns"> option. The C<allcols> argument is -the list of column names for every row mk-archiver will fetch from the source +the list of column names for every row pt-archiver will fetch from the source table. It may fetch more columns than the user requested, because it needs some columns for its own use. When subsequent plugin functions receive a row, it is the full row containing all the extra columns, if any, added to the end. @@ -5596,21 +5596,21 @@ If the method returns true, the row will be archived; otherwise it will be skipped. Skipping a row adds complications for non-unique indexes. Normally -mk-archiver uses a WHERE clause designed to target the last processed row as +pt-archiver uses a WHERE clause designed to target the last processed row as the place to start the scan for the next SELECT statement. 
If you have skipped -the row by returning false from is_archivable(), mk-archiver could get into +the row by returning false from is_archivable(), pt-archiver could get into an infinite loop because the row still exists. Therefore, when you specify a -plugin for the L<"--source"> argument, mk-archiver will change its WHERE clause +plugin for the L<"--source"> argument, pt-archiver will change its WHERE clause slightly. Instead of starting at "greater than or equal to" the last processed row, it will start "strictly greater than." This will work fine on unique indexes such as primary keys, but it may skip rows (leave holes) on non-unique indexes or when ascending only the first column of an index. -C<mk-archiver> will change the clause in the same way if you specify +C<pt-archiver> will change the clause in the same way if you specify L<"--no-delete">, because again an infinite loop is possible. If you specify the L<"--bulk-delete"> option and return false from this method, -C<mk-archiver> may not do what you want. The row won't be archived, but it will +C<pt-archiver> may not do what you want. The row won't be archived, but it will be deleted, since bulk deletes operate on ranges of rows and don't know which rows the plugin selected to keep. @@ -5675,20 +5675,20 @@ This method's return value etc is similar to the L<"custom_sth()"> method. =item after_finish() -This method is called after mk-archiver exits the archiving loop, commits all +This method is called after pt-archiver exits the archiving loop, commits all database handles, closes L<"--file">, and prints the final statistics, but -before mk-archiver runs ANALYZE or OPTIMIZE (see L<"--analyze"> and +before pt-archiver runs ANALYZE or OPTIMIZE (see L<"--analyze"> and L<"--optimize">). =back -If you specify a plugin for both L<"--source"> and L<"--dest">, mk-archiver +If you specify a plugin for both L<"--source"> and L<"--dest">, pt-archiver constructs, calls before_begin(), and calls after_finish() on the two plugins in the order L<"--source">, L<"--dest">. 
-mk-archiver assumes it controls transactions, and that the plugin will NOT +pt-archiver assumes it controls transactions, and that the plugin will NOT commit or roll back the database handle. The database handle passed to the -plugin's constructor is the same handle mk-archiver uses itself. Remember +plugin's constructor is the same handle pt-archiver uses itself. Remember that L<"--source"> and L<"--dest"> are separate handles. A sample module might look like this: @@ -5748,7 +5748,7 @@ installed in any reasonably new version of Perl. =head1 BUGS -For a list of known bugs see L. +For a list of known bugs see L. Please use Google Code Issues and Groups to report bugs or request support: L. You can also join #maatkit on Freenode to diff --git a/bin/pt-checksum-filter b/bin/pt-checksum-filter index 9d722468..a8564764 100755 --- a/bin/pt-checksum-filter +++ b/bin/pt-checksum-filter @@ -1194,22 +1194,22 @@ exit $exit_status; =head1 NAME -mk-checksum-filter - Filter checksums from mk-table-checksum. +pt-checksum-filter - Filter checksums from mk-table-checksum. =head1 SYNOPSIS -Usage: mk-checksum-filter [OPTION]... FILE +Usage: pt-checksum-filter [OPTION]... FILE -mk-checksum-filter filters checksums from mk-table-checksum and prints those +pt-checksum-filter filters checksums from mk-table-checksum and prints those that differ. With no FILE, or when FILE is -, read standard input. Examples: - mk-checksum-filter checksums.txt + pt-checksum-filter checksums.txt - mk-table-checksum host1 host2 | mk-checksum-filter + mk-table-checksum host1 host2 | pt-checksum-filter - mk-checksum-filter db1-checksums.txt db2-checksums.txt --ignore-databases + pt-checksum-filter db1-checksums.txt db2-checksums.txt --ignore-databases =head1 RISKS @@ -1218,7 +1218,7 @@ whether known or unknown, of using this tool. The two main categories of risks are those created by the nature of the tool (e.g. read-only tools vs. read-write tools) and those created by bugs. 
-mk-checksum-filter is read-only and very low-risk. +pt-checksum-filter is read-only and very low-risk. At the time of this release, we know of no bugs that could cause serious harm to users. @@ -1226,7 +1226,7 @@ users. The authoritative source for updated information is always the online issue tracking system. Issues that affect this tool will be marked as such. You can see a list of such issues at the following URL: -L. +L. See also L<"BUGS"> for more information on filing bugs and getting help. @@ -1237,7 +1237,7 @@ sorts it, then filters it so you only see lines that have different checksums or counts. You can pipe input directly into it from L, or you can -save the mk-table-checksum's output and run mk-checksum-filter on the +save the mk-table-checksum's output and run pt-checksum-filter on the resulting file(s). If you run it against just one file, or pipe output directly into it, it'll output results during processing. Processing multiple files is slightly more expensive, and you won't see any output until they're @@ -1347,7 +1347,7 @@ reasonably new version of Perl. =head1 BUGS -For a list of known bugs see L. +For a list of known bugs see L. Please use Google Code Issues and Groups to report bugs or request support: L. You can also join #maatkit on Freenode to diff --git a/bin/pt-config-diff b/bin/pt-config-diff index fbfeace3..ef0a8084 100755 --- a/bin/pt-config-diff +++ b/bin/pt-config-diff @@ -2921,27 +2921,27 @@ if ( !caller ) { exit main(@ARGV); } =head1 NAME -mk-config-diff - Diff MySQL configuration files and server variables. +pt-config-diff - Diff MySQL configuration files and server variables. =head1 SYNOPSIS -Usage: mk-config-diff [OPTION...] CONFIG CONFIG [CONFIG...] +Usage: pt-config-diff [OPTION...] CONFIG CONFIG [CONFIG...] -mk-config-diff diffs MySQL configuration files and server variables. +pt-config-diff diffs MySQL configuration files and server variables. CONFIG can be a filename or a DSN. At least two CONFIG sources must be given. 
Like standard Unix diff, there is no output if there are no differences. Diff host1 config from SHOW VARIABLES against host2: - mk-config-diff h=host1 h=host2 + pt-config-diff h=host1 h=host2 Diff config from [mysqld] section in my.cnf against host1 config: - mk-config-diff /etc/my.cnf h=host1 + pt-config-diff /etc/my.cnf h=host1 Diff the [mysqld] section of two option files: - mk-config-diff /etc/my-small.cnf /etc/my-large.cnf + pt-config-diff /etc/my-small.cnf /etc/my-large.cnf =head1 RISKS @@ -2950,7 +2950,7 @@ whether known or unknown, of using this tool. The two main categories of risks are those created by the nature of the tool (e.g. read-only tools vs. read-write tools) and those created by bugs. -mk-config-diff reads MySQL's configuration and examines it and is thus very +pt-config-diff reads MySQL's configuration and examines it and is thus very low risk. At the time of this release there are no known bugs that pose a serious risk. @@ -2958,19 +2958,19 @@ At the time of this release there are no known bugs that pose a serious risk. The authoritative source for updated information is always the online issue tracking system. Issues that affect this tool will be marked as such. You can see a list of such issues at the following URL: -L. +L. See also L<"BUGS"> for more information on filing bugs and getting help. =head1 DESCRIPTION -mk-config-diff diffs MySQL configurations by examining the values of server +pt-config-diff diffs MySQL configurations by examining the values of server system variables from two or more CONFIG sources specified on the command line. A CONFIG source can be a DSN or a filename containing the output of C, C, C, or an option file (e.g. my.cnf). -For each DSN CONFIG, mk-config-diff connects to MySQL and gets variables +For each DSN CONFIG, pt-config-diff connects to MySQL and gets variables and values by executing C. This is an "active config" because it shows what server values MySQL is actively (currently) running with. 
@@ -2987,7 +2987,7 @@ Option file and DSN configs provide the best results. =head1 OUTPUT There is no output when there are no differences. When there are differences, -mk-config-diff prints a report to STDOUT that looks similar to the following: +pt-config-diff prints a report to STDOUT that looks similar to the following: 2 config differences Variable my.master.cnf my.slave.cnf @@ -3001,13 +3001,13 @@ comparison fails, the tool prints a warning to STDERR, such as the following: Comparing log_error values (mysqld.log, /tmp/12345/data/mysqld.log) caused an error: Argument "/tmp/12345/data/mysqld.log" isn't numeric - in numeric eq (==) at ./mk-config-diff line 2311. + in numeric eq (==) at ./pt-config-diff line 2311. Please report these warnings so the comparison functions can be improved. =head1 EXIT STATUS -mk-config-diff exits with a zero exit status when there are no differences, and +pt-config-diff exits with a zero exit status when there are no differences, and 1 if there are. =head1 OPTIONS @@ -3218,7 +3218,7 @@ You need the following Perl modules: DBI and DBD::mysql. =head1 BUGS -For a list of known bugs see L. +For a list of known bugs see L. Please use Google Code Issues and Groups to report bugs or request support: L. You can also join #maatkit on Freenode to diff --git a/bin/pt-deadlock-logger b/bin/pt-deadlock-logger index 9c03acfd..332fd8d5 100755 --- a/bin/pt-deadlock-logger +++ b/bin/pt-deadlock-logger @@ -2206,32 +2206,32 @@ if ( !caller ) { exit main(@ARGV); } =head1 NAME -mk-deadlock-logger - Extract and log MySQL deadlock information. +pt-deadlock-logger - Extract and log MySQL deadlock information. =head1 SYNOPSIS -Usage: mk-deadlock-logger [OPTION...] SOURCE_DSN +Usage: pt-deadlock-logger [OPTION...] SOURCE_DSN -mk-deadlock-logger extracts and saves information about the most recent deadlock +pt-deadlock-logger extracts and saves information about the most recent deadlock in a MySQL server. 
Print deadlocks on SOURCE_DSN: - mk-deadlock-logger SOURCE_DSN + pt-deadlock-logger SOURCE_DSN Store deadlock information from SOURCE_DSN in test.deadlocks table on SOURCE_DSN (source and destination are the same host): - mk-deadlock-logger SOURCE_DSN --dest D=test,t=deadlocks + pt-deadlock-logger SOURCE_DSN --dest D=test,t=deadlocks Store deadlock information from SOURCE_DSN in test.deadlocks table on DEST_DSN (source and destination are different hosts): - mk-deadlock-logger SOURCE_DSN --dest DEST_DSN,D=test,t=deadlocks + pt-deadlock-logger SOURCE_DSN --dest DEST_DSN,D=test,t=deadlocks Daemonize and check for deadlocks on SOURCE_DSN every 30 seconds for 4 hours: - mk-deadlock-logger SOURCE_DSN --dest D=test,t=deadlocks --daemonize --run-time 4h --interval 30s + pt-deadlock-logger SOURCE_DSN --dest D=test,t=deadlocks --daemonize --run-time 4h --interval 30s =head1 RISKS @@ -2240,7 +2240,7 @@ whether known or unknown, of using this tool. The two main categories of risks are those created by the nature of the tool (e.g. read-only tools vs. read-write tools) and those created by bugs. -mk-deadlock-logger is a read-only tool unless you specify a L<"--dest"> table. +pt-deadlock-logger is a read-only tool unless you specify a L<"--dest"> table. In some cases polling SHOW INNODB STATUS too rapidly can cause extra load on the server. If you're using it on a production server under very heavy load, you might want to set L<"--interval"> to 30 seconds or more. @@ -2251,13 +2251,13 @@ users. The authoritative source for updated information is always the online issue tracking system. Issues that affect this tool will be marked as such. You can see a list of such issues at the following URL: -L. +L. See also L<"BUGS"> for more information on filing bugs and getting help. =head1 DESCRIPTION -mk-deadlock-logger extracts deadlock data from a MySQL server. Currently only +pt-deadlock-logger extracts deadlock data from a MySQL server. 
Currently only InnoDB deadlock information is available. You can print the information to standard output, store it in a database table, or both. If neither L<"--print"> nor L<"--dest"> are given, then the deadlock information is @@ -2287,7 +2287,7 @@ C string. Keys are a single letter: If you omit any values from the destination host DSN, they are filled in with values from the source host, so you don't need to specify them in both places. -C<mk-deadlock-logger> reads all normal MySQL option files, such as ~/.my.cnf, so +C<pt-deadlock-logger> reads all normal MySQL option files, such as ~/.my.cnf, so you may not need to specify username, password and other common options at all. =head1 OUTPUT @@ -2379,7 +2379,7 @@ in those columns. It may also be the case that the deadlock output is so long Though there are usually two transactions involved in a deadlock, there are more locks than that; at a minimum, one more lock than transactions is necessary to -create a cycle in the waits-for graph. mk-deadlock-logger prints the +create a cycle in the waits-for graph. pt-deadlock-logger prints the transactions (always two in the InnoDB output, even when there are more transactions in the waits-for graph than that) and fills in locks. It prefers waited-for over held when choosing lock information to output, but you can @@ -2414,7 +2414,7 @@ type: string Use this table to create a small deadlock. This usually has the effect of clearing out a huge deadlock, which otherwise consumes the entire output of -C<SHOW INNODB STATUS>. The table must not exist. mk-deadlock-logger will +C<SHOW INNODB STATUS>. The table must not exist. pt-deadlock-logger will create it with the following MAGIC_clear_deadlocks structure: CREATE TABLE test.deadlock_maker(a INT PRIMARY KEY) ENGINE=InnoDB; @@ -2448,7 +2448,7 @@ first option on the command line. Create the table specified by L<"--dest">. Normally the L<"--dest"> table is expected to exist already. 
This option -causes mk-deadlock-logger to create the table automatically using the suggested +causes pt-deadlock-logger to create the table automatically using the suggested table structure. =item --daemonize @@ -2476,7 +2476,7 @@ By default, whitespace in the query column is left intact; use L<"--[no]collapse"> if you want whitespace collapsed. The following MAGIC_dest_table is suggested if you want to store all the -information mk-deadlock-logger can extract about deadlocks: +information pt-deadlock-logger can extract about deadlocks: CREATE TABLE deadlocks ( server char(20) NOT NULL, @@ -2516,7 +2516,7 @@ Connect to host. type: time How often to check for deadlocks. If no L<"--run-time"> is specified, -mk-deadlock-logger runs forever, checking for deadlocks at every interval. +pt-deadlock-logger runs forever, checking for deadlocks at every interval. See also L<"--run-time">. =item --log @@ -2565,7 +2565,7 @@ the last deadlock's fingerprint, then it is printed. type: time -How long to run before exiting. By default mk-deadlock-logger runs once, +How long to run before exiting. By default pt-deadlock-logger runs once, checks for deadlocks, and exits. If L<"--run-time"> is specified but no L<"--interval"> is specified, a default 1 second interval will be used. @@ -2691,7 +2691,7 @@ installed in any reasonably new version of Perl. =head1 BUGS -For a list of known bugs see L. +For a list of known bugs see L. Please use Google Code Issues and Groups to report bugs or request support: L. You can also join #maatkit on Freenode to diff --git a/bin/pt-duplicate-key-checker b/bin/pt-duplicate-key-checker index a0d49138..cf37e033 100755 --- a/bin/pt-duplicate-key-checker +++ b/bin/pt-duplicate-key-checker @@ -3873,16 +3873,16 @@ if ( !caller ) { exit main(@ARGV); } =head1 NAME -mk-duplicate-key-checker - Find duplicate indexes and foreign keys on MySQL tables. +pt-duplicate-key-checker - Find duplicate indexes and foreign keys on MySQL tables. 
=head1 SYNOPSIS -Usage: mk-duplicate-key-checker [OPTION...] [DSN] +Usage: pt-duplicate-key-checker [OPTION...] [DSN] -mk-duplicate-key-checker examines MySQL tables for duplicate or redundant +pt-duplicate-key-checker examines MySQL tables for duplicate or redundant indexes and foreign keys. Connection options are read from MySQL option files. - mk-duplicate-key-checker --host host1 + pt-duplicate-key-checker --host host1 =head1 RISKS @@ -3891,7 +3891,7 @@ whether known or unknown, of using this tool. The two main categories of risks are those created by the nature of the tool (e.g. read-only tools vs. read-write tools) and those created by bugs. -mk-duplicate-key-checker is a read-only tool that executes SHOW CREATE TABLE and +pt-duplicate-key-checker is a read-only tool that executes SHOW CREATE TABLE and related queries to inspect table structures, and thus is very low-risk. At the time of this release, there is an unconfirmed bug that causes the tool @@ -3900,7 +3900,7 @@ to crash. The authoritative source for updated information is always the online issue tracking system. Issues that affect this tool will be marked as such. You can see a list of such issues at the following URL: -L. +L. See also L<"BUGS"> for more information on filing bugs and getting help. @@ -4213,7 +4213,7 @@ You need the following Perl modules: DBI and DBD::mysql. =head1 BUGS -For a list of known bugs see L. +For a list of known bugs see L. Please use Google Code Issues and Groups to report bugs or request support: L. You can also join #maatkit on Freenode to diff --git a/bin/pt-fifo-split b/bin/pt-fifo-split index 5de73715..e8f97b8d 100755 --- a/bin/pt-fifo-split +++ b/bin/pt-fifo-split @@ -1353,19 +1353,19 @@ if ( !caller ) { exit main(@ARGV); } =head1 NAME -mk-fifo-split - Split files and pipe lines to a fifo without really splitting. +pt-fifo-split - Split files and pipe lines to a fifo without really splitting. =head1 SYNOPSIS -Usage: mk-fifo-split [options] [FILE ...] 
+Usage: pt-fifo-split [options] [FILE ...] -mk-fifo-split splits FILE and pipes lines to a fifo. With no FILE, or when FILE +pt-fifo-split splits FILE and pipes lines to a fifo. With no FILE, or when FILE is -, read standard input. Read hugefile.txt in chunks of a million lines without physically splitting it: - mk-fifo-split --lines 1000000 hugefile.txt - while [ -e /tmp/mk-fifo-split ]; do cat /tmp/mk-fifo-split; done + pt-fifo-split --lines 1000000 hugefile.txt + while [ -e /tmp/pt-fifo-split ]; do cat /tmp/pt-fifo-split; done =head1 RISKS @@ -1374,7 +1374,7 @@ whether known or unknown, of using this tool. The two main categories of risks are those created by the nature of the tool (e.g. read-only tools vs. read-write tools) and those created by bugs. -mk-fifo-split creates and/or deletes the L<"--fifo"> file. Otherwise, no other +pt-fifo-split creates and/or deletes the L<"--fifo"> file. Otherwise, no other files are modified, and it merely reads lines from the file given on the command-line. It should be very low-risk. @@ -1384,13 +1384,13 @@ users. The authoritative source for updated information is always the online issue tracking system. Issues that affect this tool will be marked as such. You can see a list of such issues at the following URL: -L. +L. See also L<"BUGS"> for more information on filing bugs and getting help. =head1 DESCRIPTION -mk-fifo-split lets you read from a file as though it contains only some of the +pt-fifo-split lets you read from a file as though it contains only some of the lines in the file. When you read from it again, it contains the next set of lines; when you have gone all the way through it, the file disappears. This works only on Unix-like operating systems. @@ -1414,7 +1414,7 @@ first option on the command line. =item --fifo -type: string; default: /tmp/mk-fifo-split +type: string; default: /tmp/pt-fifo-split The name of the fifo from which the lines can be read. 
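The chunked-consumption pattern that pt-fifo-split provides through this fifo can be imitated with plain shell on a regular file. The sketch below is illustrative only, not the tool itself: the 3-line chunk size and /tmp path are arbitrary, and it assumes a head(1) that repositions a seekable input after reading (as GNU coreutils does), so each loop iteration picks up where the last one stopped.

```shell
# Not pt-fifo-split itself: a sketch of consuming a file in fixed-size
# chunks, the way a reader of the fifo would see them.
seq 1 10 > /tmp/demo-input.txt
exec 3< /tmp/demo-input.txt   # one shared read position across chunks
chunk=0
while lines=$(head -n 3 <&3) && [ -n "$lines" ]; do
  chunk=$((chunk + 1))
  count=$(printf '%s\n' "$lines" | wc -l)
  printf 'chunk %d: %d line(s)\n' "$chunk" "$count"
done
exec 3<&-
```

The point of the real tool is that the fifo makes this work for consumers (such as LOAD DATA INFILE) that only know how to read a file from the beginning, without physically splitting the input.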
@@ -1495,7 +1495,7 @@ installed in any reasonably new version of Perl. =head1 BUGS -For a list of known bugs see L. +For a list of known bugs see L. Please use Google Code Issues and Groups to report bugs or request support: L. You can also join #maatkit on Freenode to diff --git a/bin/pt-find b/bin/pt-find index 60531b59..78e21529 100755 --- a/bin/pt-find +++ b/bin/pt-find @@ -3101,46 +3101,46 @@ if ( !caller ) { exit main(@ARGV); } =head1 NAME -mk-find - Find MySQL tables and execute actions, like GNU find. +pt-find - Find MySQL tables and execute actions, like GNU find. =head1 SYNOPSIS -Usage: mk-find [OPTION...] [DATABASE...] +Usage: pt-find [OPTION...] [DATABASE...] -mk-find searches for MySQL tables and executes actions, like GNU find. The +pt-find searches for MySQL tables and executes actions, like GNU find. The default action is to print the database and table name. Find all tables created more than a day ago, which use the MyISAM engine, and print their names: - mk-find --ctime +1 --engine MyISAM + pt-find --ctime +1 --engine MyISAM Find InnoDB tables that haven't been updated in a month, and convert them to MyISAM storage engine (data warehousing, anyone?): - mk-find --mtime +30 --engine InnoDB --exec "ALTER TABLE %D.%N ENGINE=MyISAM" + pt-find --mtime +30 --engine InnoDB --exec "ALTER TABLE %D.%N ENGINE=MyISAM" Find tables created by a process that no longer exists, following the name_sid_pid naming convention, and remove them. 
- mk-find --connection-id '\D_\d+_(\d+)$' --server-id '\D_(\d+)_\d+$' --exec-plus "DROP TABLE %s" + pt-find --connection-id '\D_\d+_(\d+)$' --server-id '\D_(\d+)_\d+$' --exec-plus "DROP TABLE %s" Find empty tables in the test and junk databases, and delete them: - mk-find --empty junk test --exec-plus "DROP TABLE %s" + pt-find --empty junk test --exec-plus "DROP TABLE %s" Find tables more than five gigabytes in total size: - mk-find --tablesize +5G + pt-find --tablesize +5G Find all tables and print their total data and index size, and sort largest tables first (sort is a different program, by the way). - mk-find --printf "%T\t%D.%N\n" | sort -rn + pt-find --printf "%T\t%D.%N\n" | sort -rn As above, but this time, insert the data back into the database for posterity: - mk-find --noquote --exec "INSERT INTO sysdata.tblsize(db, tbl, size) VALUES('%D', '%N', %T)" + pt-find --noquote --exec "INSERT INTO sysdata.tblsize(db, tbl, size) VALUES('%D', '%N', %T)" =head1 RISKS @@ -3149,7 +3149,7 @@ whether known or unknown, of using this tool. The two main categories of risks are those created by the nature of the tool (e.g. read-only tools vs. read-write tools) and those created by bugs. -mk-find only reads and prints information by default, but L<"--exec"> and +pt-find only reads and prints information by default, but L<"--exec"> and L<"--exec-plus"> can execute user-defined SQL. You should be as careful with it as you are with any command-line tool that can execute queries against your database. @@ -3160,29 +3160,29 @@ users. The authoritative source for updated information is always the online issue tracking system. Issues that affect this tool will be marked as such. You can see a list of such issues at the following URL: -L. +L. See also L<"BUGS"> for more information on filing bugs and getting help. 
=head1 DESCRIPTION -mk-find looks for MySQL tables that pass the tests you specify, and executes +pt-find looks for MySQL tables that pass the tests you specify, and executes the actions you specify. The default action is to print the database and table name to STDOUT. -mk-find is simpler than GNU find. It doesn't allow you to specify +pt-find is simpler than GNU find. It doesn't allow you to specify complicated expressions on the command line. -mk-find uses SHOW TABLES when possible, and SHOW TABLE STATUS when needed. +pt-find uses SHOW TABLES when possible, and SHOW TABLE STATUS when needed. =head1 OPTION TYPES There are three types of options: normal options, which determine some behavior or setting; tests, which determine whether a table should be included in the -list of tables found; and actions, which do something to the tables mk-find +list of tables found; and actions, which do something to the tables pt-find finds. -mk-find uses standard Getopt::Long option parsing, so you should use double +pt-find uses standard Getopt::Long option parsing, so you should use double dashes in front of long option names, unlike GNU find. =head1 OPTIONS @@ -3245,7 +3245,7 @@ Combine tests with OR, not AND. By default, tests are evaluated as though there were an AND between them. This option switches it to OR. -Option parsing is not implemented by mk-find itself, so you cannot specify +Option parsing is not implemented by pt-find itself, so you cannot specify complicated expressions with parentheses and mixtures of OR and AND. =item --password @@ -3315,7 +3315,7 @@ of k, M or G (1_024, 1_048_576, and 1_073_741_824 respectively). All patterns are Perl regular expressions (see 'man perlre') unless specified as SQL LIKE patterns. -Dates and times are all measured relative to the same instant, when mk-find +Dates and times are all measured relative to the same instant, when pt-find first asks the database server what time it is. 
All date and time manipulation is done in SQL, so if you say to find tables modified 5 days ago, that translates to SELECT DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 5 DAY). If you @@ -3323,11 +3323,11 @@ specify L<"--day-start">, if course it's relative to CURRENT_DATE instead. However, table sizes and other metrics are not consistent at an instant in time. It can take some time for MySQL to process all the SHOW queries, and -mk-find can't do anything about that. These measurements are as of the +pt-find can't do anything about that. These measurements are as of the time they're taken. If you need some test that's not in this list, file a bug report and I'll -enhance mk-find for you. It's really easy. +enhance pt-find for you. It's really easy. =over @@ -3392,7 +3392,7 @@ a pattern. The argument to this test must be a Perl regular expression that captures digits like this: (\d+). If the table name matches the pattern, these captured digits are taken to be the MySQL connection ID of some process. If the connection doesn't exist according to SHOW FULL PROCESSLIST, the test -returns true. If the connection ID is greater than mk-find's own +returns true. If the connection ID is greater than pt-find's own connection ID, the test returns false for safety. Why would you want to do this? If you use MySQL statement-based replication, @@ -3406,7 +3406,7 @@ can assume the connection died without cleaning up its tables, and this table is a candidate for removal. This is how I manage scratch tables, and that's why I included this test in -mk-find. +pt-find. The argument I use to L<"--connection-id"> is "\D_(\d+)$". That finds tables with a series of numbers at the end, preceded by an underscore and some @@ -3414,9 +3414,9 @@ non-number character (the latter criterion prevents me from examining tables with a date at the end, which people tend to do: baron_scratch_2007_05_07 for example). It's better to keep the scratch tables separate of course. 
-If you do this, make sure the user mk-find runs as has the PROCESS privilege! +If you do this, make sure the user pt-find runs as has the PROCESS privilege! Otherwise it will only see connections from the same user, and might think some -tables are ready to remove when they're still in use. For safety, mk-find +tables are ready to remove when they're still in use. For safety, pt-find checks this for you. See also L<"--server-id">. @@ -3767,7 +3767,7 @@ You need the following Perl modules: DBI and DBD::mysql. =head1 BUGS -For a list of known bugs see L. +For a list of known bugs see L. Please use Google Code Issues and Groups to report bugs or request support: L. You can also join #maatkit on Freenode to diff --git a/bin/pt-fk-error-logger b/bin/pt-fk-error-logger index a79d9c09..6a9168ce 100755 --- a/bin/pt-fk-error-logger +++ b/bin/pt-fk-error-logger @@ -2099,22 +2099,22 @@ if ( !caller ) { exit main(@ARGV); } =head1 NAME -mk-fk-error-logger - Extract and log MySQL foreign key errors. +pt-fk-error-logger - Extract and log MySQL foreign key errors. =head1 SYNOPSIS -Usage: mk-fk-error-logger [OPTION...] SOURCE_DSN +Usage: pt-fk-error-logger [OPTION...] SOURCE_DSN -mk-fk-error-logger extracts and saves information about the most recent foreign +pt-fk-error-logger extracts and saves information about the most recent foreign key errors in a MySQL server. Print foreign key errors on host1: - mk-fk-error-logger h=host1 + pt-fk-error-logger h=host1 Save foreign key errors on host1 to db.foreign_key_errors table on host2: - mk-fk-error-logger h=host1 --dest h=host1,D=db,t=foreign_key_errors + pt-fk-error-logger h=host1 --dest h=host1,D=db,t=foreign_key_errors =head1 RISKS @@ -2123,7 +2123,7 @@ whether known or unknown, of using this tool. The two main categories of risks are those created by the nature of the tool (e.g. read-only tools vs. read-write tools) and those created by bugs. -mk-fk-error-logger is read-only unless you specify L<"--dest">. 
It should be +pt-fk-error-logger is read-only unless you specify L<"--dest">. It should be very low-risk. At the time of this release, we know of no bugs that could cause serious harm to @@ -2132,20 +2132,20 @@ users. The authoritative source for updated information is always the online issue tracking system. Issues that affect this tool will be marked as such. You can see a list of such issues at the following URL: -L. +L. See also L<"BUGS"> for more information on filing bugs and getting help. =head1 DESCRIPTION -mk-fk-error-logger prints or saves the foreign key errors text from +pt-fk-error-logger prints or saves the foreign key errors text from C. The errors are not parsed or interpreted in any way. Foreign key errors are uniquely identified by their timestamp. Only new (more recent) errors are printed or saved. =head1 OUTPUT -If L<"--print"> is given or no L<"--dest"> is given, then mk-fk-error-logger +If L<"--print"> is given or no L<"--dest"> is given, then pt-fk-error-logger prints the foreign key error text to STDOUT exactly as it appeared in C. @@ -2378,7 +2378,7 @@ installed in any reasonably new version of Perl. =head1 BUGS -For a list of known bugs see L. +For a list of known bugs see L. Please use Google Code Issues and Groups to report bugs or request support: L. You can also join #maatkit on Freenode to diff --git a/bin/pt-heartbeat b/bin/pt-heartbeat index 3843825a..954578c1 100755 --- a/bin/pt-heartbeat +++ b/bin/pt-heartbeat @@ -3281,7 +3281,7 @@ sub main { MKDEBUG && _d('Creating sentinel file', $sentinel); my $file = IO::File->new($sentinel, ">>") or die "Cannot open $sentinel: $OS_ERROR\n"; - print $file "Remove this file to permit mk-heartbeat to run\n" + print $file "Remove this file to permit pt-heartbeat to run\n" or die "Cannot write to $sentinel: $OS_ERROR\n"; close $file or die "Cannot close $sentinel: $OS_ERROR\n"; @@ -3399,7 +3399,7 @@ sub main { . "the heartbeat table $db_tbl uses the server_id column " . 
"for --update or --check but the server's master could " . "not be automatically determined.\n" - . "Please read the DESCRIPTION section of the mk-heartbeat POD.\n"; + . "Please read the DESCRIPTION section of the pt-heartbeat POD.\n"; } $pk_col = 'server_id'; $pk_val = $master_server_id; @@ -3429,13 +3429,13 @@ sub main { die "The heartbeat table is empty.\n" . "At least one row must be inserted into the heartbeat " . "table.\nPlease read the DESCRIPTION section of the " - . "mk-heartbeat POD.\n"; + . "pt-heartbeat POD.\n"; } else { die "No row found in heartbeat table for server_id $pk_val.\n" . "At least one row must be inserted into the heartbeat " . "table for server_id $pk_val.\nPlease read the " - . "DESCRIPTION section of the mk-heartbeat POD.\n"; + . "DESCRIPTION section of the pt-heartbeat POD.\n"; } } } @@ -3833,29 +3833,29 @@ if ( !caller ) { exit main(@ARGV); } =head1 NAME -mk-heartbeat - Monitor MySQL replication delay. +pt-heartbeat - Monitor MySQL replication delay. =head1 SYNOPSIS -Usage: mk-heartbeat [OPTION...] [DSN] --update|--monitor|--check|--stop +Usage: pt-heartbeat [OPTION...] [DSN] --update|--monitor|--check|--stop -mk-heartbeat measures replication lag on a MySQL or PostgreSQL server. You can +pt-heartbeat measures replication lag on a MySQL or PostgreSQL server. You can use it to update a master or monitor a replica. If possible, MySQL connection options are read from your .my.cnf file. 
Start daemonized process to update test.heartbeat table on master: - mk-heartbeat -D test --update -h master-server --daemonize + pt-heartbeat -D test --update -h master-server --daemonize Monitor replication lag on slave: - mk-heartbeat -D test --monitor -h slave-server + pt-heartbeat -D test --monitor -h slave-server - mk-heartbeat -D test --monitor -h slave-server --dbi-driver Pg + pt-heartbeat -D test --monitor -h slave-server --dbi-driver Pg Check slave lag once and exit (using optional DSN to specify slave host): - mk-heartbeat -D test --check h=slave-server + pt-heartbeat -D test --check h=slave-server =head1 RISKS @@ -3864,7 +3864,7 @@ whether known or unknown, of using this tool. The two main categories of risks are those created by the nature of the tool (e.g. read-only tools vs. read-write tools) and those created by bugs. -mk-heartbeat merely reads and writes a single record in a table. It should be +pt-heartbeat merely reads and writes a single record in a table. It should be very low-risk. At the time of this release, we know of no bugs that could cause serious harm to @@ -3873,24 +3873,24 @@ users. The authoritative source for updated information is always the online issue tracking system. Issues that affect this tool will be marked as such. You can see a list of such issues at the following URL: -L. +L. See also L<"BUGS"> for more information on filing bugs and getting help. =head1 DESCRIPTION -mk-heartbeat is a two-part MySQL and PostgreSQL replication delay monitoring +pt-heartbeat is a two-part MySQL and PostgreSQL replication delay monitoring system that measures delay by looking at actual replicated data. This avoids reliance on the replication mechanism itself, which is unreliable. (For example, C on MySQL). 
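At its core the measurement is a subtraction between the replicated heartbeat timestamp and the local clock. A minimal sketch in whole epoch seconds (the 3-second offset is a fabricated stand-in for a replicated heartbeat row; the real tool resolves to 0.01 second and relies on NTP-synchronized clocks):

```shell
# Pretend the most recent heartbeat row replicated 3 seconds ago.
heartbeat_ts=$(( $(date +%s) - 3 ))
now=$(date +%s)
lag=$(( now - heartbeat_ts ))
echo "replication lag: ${lag}s"
```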
-The first part is an L<"--update"> instance of mk-heartbeat that connects to +The first part is an L<"--update"> instance of pt-heartbeat that connects to a master and updates a timestamp ("heartbeat record") every L<"--interval"> seconds. Since the heartbeat table may contain records from multiple masters (see L<"MULTI-SLAVE HIERARCHY">), the server's ID (@@server_id) is used to identify records. -The second part is a L<"--monitor"> or L<"--check"> instance of mk-heartbeat +The second part is a L<"--monitor"> or L<"--check"> instance of pt-heartbeat that connects to a slave, examines the replicated heartbeat record from its immediate master or the specified L<"--master-server-id">, and computes the difference from the current system time. If replication between the slave and @@ -3907,7 +3907,7 @@ row is inserted if it doesn't exist. This feature can be disabled with the L<"--[no]insert-heartbeat-row"> option in case the database user does not have INSERT privileges. -mk-heartbeat depends only on the heartbeat record being replicated to the slave, +pt-heartbeat depends only on the heartbeat record being replicated to the slave, so it works regardless of the replication mechanism (built-in replication, a system such as Continuent Tungsten, etc). It works at any depth in the replication hierarchy; for example, it will reliably report how far a slave lags @@ -3915,18 +3915,18 @@ its master's master's master. And if replication is stopped, it will continue to work and report (accurately!) that the slave is falling further and further behind the master. -mk-heartbeat has a maximum resolution of 0.01 second. The clocks on the +pt-heartbeat has a maximum resolution of 0.01 second. The clocks on the master and slave servers must be closely synchronized via NTP. By default, L<"--update"> checks happen on the edge of the second (e.g. 00:01) and L<"--monitor"> checks happen halfway between seconds (e.g. 00:01.5). 
As long as the servers' clocks are closely synchronized and replication -events are propagating in less than half a second, mk-heartbeat will report +events are propagating in less than half a second, pt-heartbeat will report zero seconds of delay. -mk-heartbeat will try to reconnect if the connection has an error, but will +pt-heartbeat will try to reconnect if the connection has an error, but will not retry if it can't get a connection when it first starts. -The L<"--dbi-driver"> option lets you use mk-heartbeat to monitor PostgreSQL +The L<"--dbi-driver"> option lets you use pt-heartbeat to monitor PostgreSQL as well. It is reported to work well with Slony-1 replication. =head1 MULTI-SLAVE HIERARCHY @@ -3945,16 +3945,16 @@ specify the L<"--master-server-id"> to use. For example, if the replication hierarchy is "master -> slave1 -> slave2" with corresponding server IDs 1, 2 and 3, you can: - mk-heartbeat --daemonize -D test --update -h master - mk-heartbeat --daemonize -D test --update -h slave1 + pt-heartbeat --daemonize -D test --update -h master + pt-heartbeat --daemonize -D test --update -h slave1 Then check (or monitor) the replication delay from master to slave2: - mk-heartbeat -D test --master-server-id 1 --check slave2 + pt-heartbeat -D test --master-server-id 1 --check slave2 Or check the replication delay from slave1 to slave2: - mk-heartbeat -D test --master-server-id 2 --check slave2 + pt-heartbeat -D test --master-server-id 2 --check slave2 Stopping the L<"--update"> instance one slave1 will not affect the instance on master. @@ -4155,14 +4155,14 @@ Print all output to this file when daemonized. type: string Calculate delay from this master server ID for L<"--monitor"> or L<"--check">. -If not given, mk-heartbeat attempts to connect to the server's master and +If not given, pt-heartbeat attempts to connect to the server's master and determine its server id. =item --monitor Monitor slave delay continuously. 
-Specifies that mk-heartbeat should check the slave's delay every second and +Specifies that pt-heartbeat should check the slave's delay every second and report to STDOUT (or if L<"--file"> is given, to the file instead). The output is the current delay followed by moving averages over the timeframe given in L<"--frames">. For example, @@ -4223,7 +4223,7 @@ Possible methods are: The processlist method is preferred because SHOW SLAVE HOSTS is not reliable. However, the hosts method is required if the server uses a non-standard -port (not 3306). Usually mk-heartbeat does the right thing and finds +port (not 3306). Usually pt-heartbeat does the right thing and finds the slaves, but you may give a preferred method and it will be used first. If it doesn't find any slaves, the other methods will be tried. @@ -4244,7 +4244,7 @@ Time to run before exiting. =item --sentinel -type: string; default: /tmp/mk-heartbeat-sentinel +type: string; default: /tmp/pt-heartbeat-sentinel Exit if this file exists. @@ -4281,19 +4281,19 @@ Stop running instances by creating the sentinel file. This should have the effect of stopping all running instances which are watching the same sentinel file. If none of -L<"--update">, L<"--monitor"> or L<"--check"> is specified, C +L<"--update">, L<"--monitor"> or L<"--check"> is specified, C will exit after creating the file. If one of these is specified, -C will wait the interval given by L<"--interval">, then remove +C will wait the interval given by L<"--interval">, then remove the file and continue working. You might find this handy to stop cron jobs gracefully if necessary, or to replace one running instance with another. 
For example, if you want to stop -and restart C every hour (just to make sure that it is restarted +and restart C every hour (just to make sure that it is restarted every hour, in case of a server crash or some other problem), you could use a C line like this: - 0 * * * * mk-heartbeat --update -D test --stop \ - --sentinel /tmp/mk-heartbeat-hourly + 0 * * * * pt-heartbeat --update -D test --stop \ + --sentinel /tmp/pt-heartbeat-hourly The non-default L<"--sentinel"> will make sure the hourly C job stops only instances previously started with the same options (that is, from the @@ -4416,7 +4416,7 @@ installed in any reasonably new version of Perl. =head1 BUGS -For a list of known bugs see L. +For a list of known bugs see L. Please use Google Code Issues and Groups to report bugs or request support: L. You can also join #maatkit on Freenode to diff --git a/bin/pt-index-usage b/bin/pt-index-usage index b68e405a..cbc9c6b0 100755 --- a/bin/pt-index-usage +++ b/bin/pt-index-usage @@ -5323,21 +5323,21 @@ sub _d { =head1 NAME -mk-index-usage - Read queries from a log and analyze how they use indexes. +pt-index-usage - Read queries from a log and analyze how they use indexes. =head1 SYNOPSIS -Usage: mk-index-usage [OPTION...] [FILE...] +Usage: pt-index-usage [OPTION...] [FILE...] -mk-index-usage reads queries from logs and analyzes how they use indexes. +pt-index-usage reads queries from logs and analyzes how they use indexes. Analyze queries in slow.log and print reports: - mk-index-usage /path/to/slow.log --host localhost + pt-index-usage /path/to/slow.log --host localhost Disable reports and save results to mk database for later analysis: - mk-index-usage slow.log --no-report --save-results-database mk + pt-index-usage slow.log --no-report --save-results-database mk =head1 RISKS @@ -5356,7 +5356,7 @@ users. The authoritative source for updated information is always the online issue tracking system. Issues that affect this tool will be marked as such. 
You can see a list of such issues at the following URL: -L. +L. See also L<"BUGS"> for more information on filing bugs and getting help. @@ -5496,7 +5496,7 @@ type: Hash; default: non-unique Suggest dropping only these types of unused indexes. -By default mk-index-usage will only suggest to drop unused secondary indexes, +By default pt-index-usage will only suggest to drop unused secondary indexes, not primary or unique indexes. You can specify which types of unused indexes the tool suggests to drop: primary, unique, non-unique, all. @@ -5602,7 +5602,7 @@ exist, it can be auto-created with L<"--create-save-results-database">. In this case the connection is initially created with no default database, then after the database is created, it is USE'ed. -mk-index-usage executes INSERT statements to save the results. Therefore, you +pt-index-usage executes INSERT statements to save the results. Therefore, you should be careful if you use this feature on a production server. It might increase load, or cause trouble if you don't want the server to be written to, or so on. @@ -5885,7 +5885,7 @@ reasonably new version of Perl. =head1 BUGS -For a list of known bugs see L. +For a list of known bugs see L. Please use Google Code Issues and Groups to report bugs or request support: L. You can also join #maatkit on Freenode to diff --git a/bin/pt-kill b/bin/pt-kill index aa286f8f..5a26c54d 100755 --- a/bin/pt-kill +++ b/bin/pt-kill @@ -3668,7 +3668,7 @@ sub main { MKDEBUG && _d('Creating sentinel file', $sentinel); open my $fh, '>', $sentinel or die "Cannot open $sentinel: $OS_ERROR\n"; - print $fh "Remove this file to permit mk-kill to run.\n" + print $fh "Remove this file to permit pt-kill to run.\n" or die "Cannot write to $sentinel: $OS_ERROR\n"; close $fh or die "Cannot close $sentinel: $OS_ERROR\n"; @@ -4043,36 +4043,36 @@ if ( !caller ) { exit main(@ARGV); } =head1 NAME -mk-kill - Kill MySQL queries that match certain criteria. 
+pt-kill - Kill MySQL queries that match certain criteria. =head1 SYNOPSIS -Usage: mk-kill [OPTION]... [FILE...] +Usage: pt-kill [OPTION]... [FILE...] -mk-kill kills MySQL connections. mk-kill connects to MySQL and gets queries +pt-kill kills MySQL connections. pt-kill connects to MySQL and gets queries from SHOW PROCESSLIST if no FILE is given. Else, it reads queries from one or more FILE which contains the output of SHOW PROCESSLIST. If FILE is -, -mk-kill reads from STDIN. +pt-kill reads from STDIN. Kill queries running longer than 60s: - mk-kill --busy-time 60 --kill + pt-kill --busy-time 60 --kill Print, do not kill, queries running longer than 60s: - mk-kill --busy-time 60 --print + pt-kill --busy-time 60 --print Check for sleeping processes and kill them all every 10s: - mk-kill --match-command Sleep --kill --victims all --interval 10 + pt-kill --match-command Sleep --kill --victims all --interval 10 Print all login processes: - mk-kill --match-state login --print --victims all + pt-kill --match-state login --print --victims all See which queries in the processlist right now would match: - mysql -e "SHOW PROCESSLIST" | mk-kill --busy-time 60 --print + mysql -e "SHOW PROCESSLIST" | pt-kill --busy-time 60 --print =head1 RISKS @@ -4081,7 +4081,7 @@ whether known or unknown, of using this tool. The two main categories of risks are those created by the nature of the tool (e.g. read-only tools vs. read-write tools) and those created by bugs. -mk-kill is designed to kill queries if you use the L<"--kill"> option is given, +pt-kill is designed to kill queries if you use the L<"--kill"> option is given, and that might disrupt your database's users, of course. You should test with the <"--print"> option, which is safe, if you're unsure what the tool will do. @@ -4091,13 +4091,13 @@ users. The authoritative source for updated information is always the online issue tracking system. Issues that affect this tool will be marked as such. 
You can see a list of such issues at the following URL: -L. +L. See also L<"BUGS"> for more information on filing bugs and getting help. =head1 DESCRIPTION -mk-kill captures queries from SHOW PROCESSLIST, filters them, and then either +pt-kill captures queries from SHOW PROCESSLIST, filters them, and then either kills or prints them. This is also known as a "slow query sniper" in some circles. The idea is to watch for queries that might be consuming too many resources, and kill them. @@ -4105,12 +4105,12 @@ resources, and kill them. For brevity, we talk about killing queries, but they may just be printed (or some other future action) depending on what options are given. -Normally mk-kill connects to MySQL to get queries from SHOW PROCESSLIST. +Normally pt-kill connects to MySQL to get queries from SHOW PROCESSLIST. Alternatively, it can read SHOW PROCESSLIST output from files. In this case, -mk-kill does not connect to MySQL and L<"--kill"> has no effect. You should +pt-kill does not connect to MySQL and L<"--kill"> has no effect. You should use L<"--print"> instead when reading files. The ability to read a file (or - for STDIN) allows you to capture SHOW PROCESSLIST and test it later with -mk-kill to make sure that your matches kill the proper queries. There are a +pt-kill to make sure that your matches kill the proper queries. There are a lot of special rules to follow, such as "don't kill replication threads," so be careful to not kill something important! @@ -4128,7 +4128,7 @@ Usually you need to specify at least one C<--match> option, else no queries will match. Or, you can specify L<"--match-all"> to match all queries that aren't ignored by an C<--ignore> option. -mk-kill is a work in progress, and there is much more it could do. +pt-kill is a work in progress, and there is much more it could do. 
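The file-reading mode can be approximated with ordinary text tools to sanity-check what would match before arming L<"--kill">. This is a hedged sketch over a fabricated tab-separated PROCESSLIST capture (in `mysql -e "SHOW PROCESSLIST"` output, field 5 is Command, 6 is Time, 8 is Info):

```shell
# Fabricated capture of `mysql -e "SHOW PROCESSLIST"` (tab-separated).
{
  printf '1\tapp\tweb1\tshop\tQuery\t120\tSending data\tSELECT * FROM orders\n'
  printf '2\tapp\tweb2\tshop\tSleep\t5\t\tNULL\n'
  printf '3\trepl\tslave1\tNULL\tBinlog Dump\t99999\tsending\tNULL\n'
} > /tmp/processlist.txt
# Queries busier than 60s. Requiring Command == "Query" means the
# replication thread (Binlog Dump) can never match -- the "don't kill
# replication threads" rule in miniature.
awk -F'\t' '$5 == "Query" && $6 > 60 { print $1, $8 }' /tmp/processlist.txt
```

This mirrors the workflow the SYNOPSIS suggests: capture the processlist once, confirm the filter selects only the intended victims, and only then run the same criteria with L<"--kill">.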
 =head1 GROUP, MATCH AND KILL
 
@@ -4174,7 +4174,7 @@ If both L<"--kill"> and L<"--print"> are given, then matching queries are
 killed and a line for each like the one above is printed.
 
 Any command executed by L<"--execute-command"> is responsible for its own
-output and logging. After being executed, mk-kill has no control or interaction
+output and logging. After being executed, pt-kill has no control or interaction
 with the command.
 
 =head1 OPTIONS
 
@@ -4303,19 +4303,19 @@ Remove SQL comments from queries in the Info column of the PROCESSLIST.
 
 type: time
 
-How long to run before exiting. By default mk-kill runs forever, or until
+How long to run before exiting. By default pt-kill runs forever, or until
 its process is killed or stopped by the creation of a L<"--sentinel"> file.
-If this option is specified, mk-kill runs for the specified amount of time
+If this option is specified, pt-kill runs for the specified amount of time
 and sleeps L<"--interval"> seconds between each check of the PROCESSLIST.
 
 =item --sentinel
 
-type: string; default: /tmp/mk-kill-sentinel
+type: string; default: /tmp/pt-kill-sentinel
 
 Exit if this file exists.
 
 The presence of the file specified by L<"--sentinel"> will cause all
-running instances of mk-kill to exit. You might find this handy to stop cron
+running instances of pt-kill to exit. You might find this handy to stop cron
 jobs gracefully if necessary. See also L<"--stop">.
 
 =item --set-vars
 
@@ -4335,7 +4335,7 @@ Socket file to use for connection.
 
 Stop running instances by creating the L<"--sentinel"> file.
 
-Causes mk-kill to create the sentinel file specified by L<"--sentinel"> and
+Causes pt-kill to create the sentinel file specified by L<"--sentinel"> and
 exit. This should have the effect of stopping all running instances which
 are watching the same sentinel file.
 
@@ -4482,7 +4482,7 @@ See L<"--match-info">.
 
 default: yes; group: Query Matches
 
-Don't kill mk-kill's own connection.
+Don't kill pt-kill's own connection.
 =item --ignore-state
 
@@ -4650,7 +4650,7 @@ These actions are taken for every matching query from all classes.
 
 The actions are taken in this order: L<"--print">, L<"--execute-command">,
 L<"--kill">/L<"--kill-query">. This order allows L<"--execute-command">
 to see the output of L<"--print"> and the query before
-L<"--kill">/L<"--kill-query">. This may be helpful because mk-kill does
+L<"--kill">/L<"--kill-query">. This may be helpful because pt-kill does
 not pass any information to L<"--execute-command">.
 
 See also L<"GROUP, MATCH AND KILL">.
 
@@ -4663,10 +4663,10 @@ type: string; group: Actions
 
 Execute this command when a query matches.
 
-After the command is executed, mk-kill has no control over it, so the command
+After the command is executed, pt-kill has no control over it, so the command
 is responsible for its own info gathering, logging, interval, etc. The
 command is executed each time a query matches, so be careful that the command
-behaves well when multiple instances are ran. No information from mk-kill is
+behaves well when multiple instances are run. No information from pt-kill is
 passed to the command.
 
 See also L<"--wait-before-kill">.
 
@@ -4677,12 +4677,12 @@ group: Actions
 
 Kill the connection for matching queries.
 
-This option makes mk-kill kill the connections (a.k.a. processes, threads) that
+This option makes pt-kill kill the connections (a.k.a. processes, threads) that
 have matching queries. Use L<"--kill-query"> if you only want to kill
 individual queries and not their connections.
 
 Unless L<"--print"> is also given, no other information is printed that shows
-that mk-kill matched and killed a query.
+that pt-kill matched and killed a query.
 
 See also L<"--wait-before-kill"> and L<"--wait-after-kill">.
 
@@ -4692,7 +4692,7 @@ group: Actions
 
 Kill matching queries.
 
-This option makes mk-kill kill matching queries. This requires MySQL 5.0 or
+This option makes pt-kill kill matching queries. This requires MySQL 5.0 or
 newer.
 Unlike L<"--kill"> which kills the connection for matching queries, this
 option only kills the query, not its connection.
 
@@ -4797,7 +4797,7 @@ installed in any reasonably new version of Perl.
 
 =head1 BUGS
 
-For a list of known bugs see L.
+For a list of known bugs see L.
 
 Please use Google Code Issues and Groups to report bugs or request support:
 L. You can also join #maatkit on Freenode to
diff --git a/bin/pt-log-player b/bin/pt-log-player
index 886cd3a3..2ea451f1 100755
--- a/bin/pt-log-player
+++ b/bin/pt-log-player
@@ -3049,21 +3049,21 @@ if ( !caller ) { exit main(@ARGV); }
 
 =head1 NAME
 
-mk-log-player - Replay MySQL query logs.
+pt-log-player - Replay MySQL query logs.
 
 =head1 SYNOPSIS
 
-Usage: mk-log-player [OPTION...] [DSN]
+Usage: pt-log-player [OPTION...] [DSN]
 
-mk-log-player splits and plays slow log files.
+pt-log-player splits and plays slow log files.
 
 Split slow.log on Thread_id into 16 session files, save in ./sessions:
 
-  mk-log-player --split Thread_id --session-files 16 --base-dir ./sessions slow.log
+  pt-log-player --split Thread_id --session-files 16 --base-dir ./sessions slow.log
 
 Play all those sessions on host1, save results in ./results:
 
-  mk-log-player --play ./sessions --base-dir ./results h=host1
+  pt-log-player --play ./sessions --base-dir ./results h=host1
 
 Use L to summarize the results:
 
@@ -3079,19 +3079,19 @@ tools) and those created by bugs.
 
 This tool is meant to load a server as much as possible, for stress-testing
 purposes. It is not designed to be used on production servers.
 
-At the time of this release there is a bug which causes mk-log-player to
+At the time of this release there is a bug which causes pt-log-player to
 exceed max open files during L<"--split">.
 
 The authoritative source for updated information is always the online issue
 tracking system. Issues that affect this tool will be marked as such.
 You can see a list of such issues at the following URL:
-L.
+L.
 See also L<"BUGS"> for more information on filing bugs and getting help.
 
 =head1 DESCRIPTION
 
-mk-log-player does two things: it splits MySQL query logs into session files
+pt-log-player does two things: it splits MySQL query logs into session files
 and it plays (executes) queries in session files on a MySQL server. Only
 session files can be played; slow logs cannot be played directly without
 being split.
 
@@ -3102,7 +3102,7 @@ L<"--split">. Multiple sessions are saved into a single session file. See
 L<"--session-files">, L<"--max-sessions">, L<"--base-file-name"> and
 L<"--base-dir">. These session files are played with L<"--play">.
 
-mk-log-player will L<"--play"> session files in parallel using N number of
+pt-log-player will L<"--play"> session files in parallel using N number of
 L<"--threads">. (They're not technically threads, but we call them that
 anyway.) Each thread will play all the sessions in its given session files.
 The sessions are played as fast as possible--there are no delays--because the
 
@@ -3132,7 +3132,7 @@ queries grouped into sessions. For example:
 
 The format of these session files is important: each query must be a single
 line separated by a single blank line. And the "-- START SESSION" comment
-tells mk-log-player where individual sessions begin and end so that L<"--play">
+tells pt-log-player where individual sessions begin and end so that L<"--play">
 can correctly fake Thread_id in its result files.
 
 The result files written by L<"--play"> are in slow log format with a minimal
 
@@ -3295,7 +3295,7 @@ type: int; default: 5000000; group: Split
 
 Maximum number of sessions to L<"--split">.
 
-By default, C tries to split every session from the log file.
+By default, C tries to split every session from the log file.
 For huge logs, however, this can result in millions of sessions. This
 option causes only the first N number of sessions to be saved.
 All sessions after this number are ignored, but sessions split before this number will
 
@@ -3589,7 +3589,7 @@ reasonably new version of Perl.
 
 =head1 BUGS
 
-For a list of known bugs see L.
+For a list of known bugs see L.
 
 Please use Google Code Issues and Groups to report bugs or request support:
 L. You can also join #maatkit on Freenode to
diff --git a/bin/pt-online-schema-change b/bin/pt-online-schema-change
index 96187e3c..7dce0a26 100755
--- a/bin/pt-online-schema-change
+++ b/bin/pt-online-schema-change
@@ -4720,31 +4720,31 @@ if ( !caller ) { exit main(@ARGV); }
 
 =head1 NAME
 
-mk-online-schema-change - Perform online, non-blocking table schema changes.
+pt-online-schema-change - Perform online, non-blocking table schema changes.
 
 =head1 SYNOPSIS
 
-Usage: mk-online-schema-change [OPTION...] DSN
+Usage: pt-online-schema-change [OPTION...] DSN
 
-mk-online-schema-change performs online, non-blocking schema changes to a table.
+pt-online-schema-change performs online, non-blocking schema changes to a table.
 The table to change must be specified in the DSN C part, like C. The table can
 be database-qualified, or the database can be specified with the
 L<"--database"> option.
 
 Change the table's engine to InnoDB:
 
-  mk-online-schema-change \
+  pt-online-schema-change \
     h=127.1,t=db.tbl \
     --alter-table "ALTER TABLE db.tbl ENGINE=InnoDB" \
     --drop-tmp-table
 
 Rebuild but do not alter the table, and keep the temporary table:
 
-  mk-online-schema-change h=127.1,t=tbl --database db
+  pt-online-schema-change h=127.1,t=tbl --database db
 
 Add column to parent table, update child table foreign key constraints:
 
-  mk-online-schema-change \
+  pt-online-schema-change \
     h=127.1,D=db,t=parent \
     --alter-table 'ALTER TABLE parent ADD COLUMN (foo INT)' \
     --child-tables child1,child2 \
 
@@ -4757,7 +4757,7 @@ whether known or unknown, of using this tool. The two main categories of risks
 are those created by the nature of the tool (e.g. read-only tools vs.
 read-write tools) and those created by bugs.
-mk-online-schema-change reads, writes, alters and drops tables. Although
+pt-online-schema-change reads, writes, alters and drops tables. Although
 it is tested, do not use it in production until you have thoroughly tested
 it in your environment!
 
 At the time of this release there are no known bugs that pose a serious risk.
 
 The authoritative source for updated information is always the online issue
 tracking system. Issues that affect this tool will be marked as such.
 You can see a list of such issues at the following URL:
-L.
+L.
 
 See also L<"BUGS"> for more information on filing bugs and getting help.
 
 =head1 DESCRIPTION
 
-mk-online-schema-change performs online, non-blocking schema changes to tables.
+pt-online-schema-change performs online, non-blocking schema changes to tables.
 Only one table can be altered at a time because triggers are used to capture
 and synchronize changes between the table and the temporary table that will
 take its place once it has been altered. Since triggers are used, this
 
@@ -5252,7 +5252,7 @@ installed in any reasonably new version of Perl.
 
 =head1 BUGS
 
-For a list of known bugs see L.
+For a list of known bugs see L.
 
 Please use Google Code Issues and Groups to report bugs or request support:
 L. You can also join #maatkit on Freenode to
 
@@ -5295,12 +5295,12 @@ their version called C as explained by their blog post:
 L. Searching for "online schema change" will return other relevant pages
 about this concept.
 
-This implementation, C, is a hybrid of Shlomi's
+This implementation, C, is a hybrid of Shlomi's
 and Facebook's approach. Shlomi's code is a full-featured tool with command
 line options, documentation, etc., but its continued development/support is
 not assured. Facebook's tool has certain technical advantages, but it's not
 a full-featured tool; it's more a custom job by Facebook for Facebook. And
-neither of those tools is tested. C is a
+neither of those tools is tested.
 C is a full-featured, tested tool with stable development and support.
 
 This tool was made possible by a generous client of Percona Inc.
diff --git a/bin/pt-profile-compact b/bin/pt-profile-compact
index d005a3b1..bbfc452a 100755
--- a/bin/pt-profile-compact
+++ b/bin/pt-profile-compact
@@ -1186,22 +1186,22 @@ if ( !caller ) { exit main(@ARGV); }
 
 =head1 NAME
 
-mk-profile-compact - Compact the output from mk-query-profiler.
+pt-profile-compact - Compact the output from mk-query-profiler.
 
 =head1 SYNOPSIS
 
-Usage: mk-profile-compact [OPTION...] [FILE...]
+Usage: pt-profile-compact [OPTION...] [FILE...]
 
-mk-profile-compact aligns query profiler results side by side for easy
+pt-profile-compact aligns query profiler results side by side for easy
 comparison. With no FILE, or when FILE is -, read from standard input.
 
 To view queries 2, 4 and 6 side by side:
 
-  mk-profile-compact --queries 2,4,6 profile-results.txt
+  pt-profile-compact --queries 2,4,6 profile-results.txt
 
 To view summaries from two runs side by side:
 
-  mk-profile-compact --mode SUMMARY results-1.txt results-2.txt
+  pt-profile-compact --mode SUMMARY results-1.txt results-2.txt
 
 =head1 RISKS
 
@@ -1210,7 +1210,7 @@ whether known or unknown, of using this tool. The two main categories of risks
 are those created by the nature of the tool (e.g. read-only tools vs.
 read-write tools) and those created by bugs.
 
-mk-profile-compact is read-only and very low-risk.
+pt-profile-compact is read-only and very low-risk.
 
 At the time of this release, we know of no bugs that could cause serious harm
 to users.
 
@@ -1218,13 +1218,13 @@ users.
 
 The authoritative source for updated information is always the online issue
 tracking system. Issues that affect this tool will be marked as such.
 You can see a list of such issues at the following URL:
-L.
+L.
 
 See also L<"BUGS"> for more information on filing bugs and getting help.
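As a rough analogy for the side-by-side alignment pt-profile-compact performs (this uses plain paste(1), not the tool itself, and the metric names and numbers are invented):

```shell
# Two single-column result files from separate runs (values invented):
printf 'Questions 1\nTable_locks 2\n' > run1.txt
printf 'Questions 3\nTable_locks 4\n' > run2.txt

# Put the second run's column next to the first, row by row,
# so corresponding metrics line up for comparison:
paste run1.txt run2.txt
```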
 =head1 DESCRIPTION
 
-mk-profile-compact slices and aligns the output from mk-query-profiler
+pt-profile-compact slices and aligns the output from mk-query-profiler
 so you can compare profile results side by side easily. It prints the
 first profile result intact, but each subsequent result is trimmed to be
 as narrow as possible, then aligned next to the first.
 
@@ -1312,7 +1312,7 @@ reasonably new version of Perl.
 
 =head1 BUGS
 
-For a list of known bugs see L.
+For a list of known bugs see L.
 
 Please use Google Code Issues and Groups to report bugs or request support:
 L. You can also join #maatkit on Freenode to
diff --git a/bin/pt-query-advisor b/bin/pt-query-advisor
index e441038a..62436010 100755
--- a/bin/pt-query-advisor
+++ b/bin/pt-query-advisor
@@ -6669,23 +6669,23 @@ if ( !caller ) { exit main(@ARGV); }
 
 =head1 NAME
 
-mk-query-advisor - Analyze queries and advise on possible problems.
+pt-query-advisor - Analyze queries and advise on possible problems.
 
 =head1 SYNOPSIS
 
-Usage: mk-query-advisor [OPTION...] [FILE]
+Usage: pt-query-advisor [OPTION...] [FILE]
 
-mk-query-advisor analyzes queries and advises on possible problems.
+pt-query-advisor analyzes queries and advises on possible problems.
 Queries are given either by specifying slowlog files, --query, or --review.
 
-  # Analyzer all queries in the given slowlog
-  mk-query-advisor /path/to/slow-query.log
+  # Analyze all queries in the given slowlog
+  pt-query-advisor /path/to/slow-query.log
 
   # Get queries from tcpdump using mk-query-digest
-  mk-query-digest --type tcpdump.txt --print --no-report | mk-query-advisor
+  mk-query-digest --type tcpdump.txt --print --no-report | pt-query-advisor
 
   # Get queries from a general log
-  mk-query-advisor --type genlog mysql.log
+  pt-query-advisor --type genlog mysql.log
 
 =head1 RISKS
 
@@ -6694,7 +6694,7 @@ whether known or unknown, of using this tool. The two main categories of risks
 are those created by the nature of the tool (e.g. read-only tools vs.
 read-write tools) and those created by bugs.
-mk-query-advisor simply reads queries and examines them, and is thus
+pt-query-advisor simply reads queries and examines them, and is thus
 very low risk.
 
 At the time of this release there is a bug that may cause an infinite (or
 
@@ -6703,13 +6703,13 @@ very long) loop when parsing very large queries.
 
 The authoritative source for updated information is always the online issue
 tracking system. Issues that affect this tool will be marked as such.
 You can see a list of such issues at the following URL:
-L.
+L.
 
 See also L<"BUGS"> for more information on filing bugs and getting help.
 
 =head1 DESCRIPTION
 
-mk-query-advisor examines queries and applies rules to them, trying to
+pt-query-advisor examines queries and applies rules to them, trying to
 find queries that look bad according to the rules. It reports on
 queries that match the rules, so you can find bad practices or hidden
 problems in your SQL. By default, it accepts a MySQL slow query log
 as input.
 
 =head1 RULES
 
-These are the rules that mk-query-advisor will apply to the queries it
+These are the rules that pt-query-advisor will apply to the queries it
 examines. Each rule has three bits of information: an ID, a severity
 and a description.
 
@@ -7232,7 +7232,7 @@ You need the following Perl modules: DBI and DBD::mysql.
 
 =head1 BUGS
 
-For a list of known bugs see L.
+For a list of known bugs see L.
 
 Please use Google Code Issues and Groups to report bugs or request support:
 L. You can also join #maatkit on Freenode to
diff --git a/bin/pt-query-digest b/bin/pt-query-digest
index 0bcc1e12..014bb4fb 100755
--- a/bin/pt-query-digest
+++ b/bin/pt-query-digest
@@ -13880,44 +13880,44 @@ if ( !caller ) { exit main(@ARGV); }
 
 =head1 NAME
 
-mk-query-digest - Analyze query execution logs and generate a query report,
+pt-query-digest - Analyze query execution logs and generate a query report,
 filter, replay, or transform queries for MySQL, PostgreSQL, memcached, and
 more.
 =head1 SYNOPSIS
 
-Usage: mk-query-digest [OPTION...] [FILE]
+Usage: pt-query-digest [OPTION...] [FILE]
 
-mk-query-digest parses and analyzes MySQL log files. With no FILE, or when
-FILE is -, it read standard input.
+pt-query-digest parses and analyzes MySQL log files. With no FILE, or when
+FILE is -, it reads standard input.
 
 Analyze, aggregate, and report on a slow query log:
 
-  mk-query-digest /path/to/slow.log
+  pt-query-digest /path/to/slow.log
 
 Review a slow log, saving results to the test.query_review table in a MySQL
 server running on host1. See L<"--review"> for more on reviewing queries:
 
-  mk-query-digest --review h=host1,D=test,t=query_review /path/to/slow.log
+  pt-query-digest --review h=host1,D=test,t=query_review /path/to/slow.log
 
 Filter out everything but SELECT queries, replay the queries against another
 server, then use the timings from replaying them to analyze their performance:
 
-  mk-query-digest /path/to/slow.log --execute h=another_server \
+  pt-query-digest /path/to/slow.log --execute h=another_server \
     --filter '$event->{fingerprint} =~ m/^select/'
 
 Print the structure of events so you can construct a complex L<"--filter">:
 
-  mk-query-digest /path/to/slow.log --no-report \
+  pt-query-digest /path/to/slow.log --no-report \
     --filter 'print Dumper($event)'
 
 Watch SHOW FULL PROCESSLIST and output a log in slow query log format:
 
-  mk-query-digest --processlist h=host1 --print --no-report
+  pt-query-digest --processlist h=host1 --print --no-report
 
 The default aggregation and analysis is CPU and memory intensive. Disable it
 if you don't need the default report:
 
-  mk-query-digest --no-report
+  pt-query-digest --no-report
 
 =head1 RISKS
 
@@ -13926,13 +13926,13 @@ whether known or unknown, of using this tool. The two main categories of risks
 are those created by the nature of the tool (e.g. read-only tools vs.
 read-write tools) and those created by bugs.
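The synopsis examples above all consume a MySQL slow query log. For reference, a minimal one-event fragment in that format looks like the following; the timestamps and timings are invented, and the grep at the end is just a quick way to count events by their Query_time header lines:

```shell
# Write a one-event slow-log fragment (layout per MySQL's slow log;
# all values are invented for illustration):
cat > slow.log <<'EOF'
# Time: 110401 12:00:01
# User@Host: app[app] @ localhost []
# Query_time: 2.500000  Lock_time: 0.000000  Rows_sent: 1  Rows_examined: 10000
SET timestamp=1301659201;
SELECT c FROM tbl WHERE id = 5;
EOF

# pt-query-digest slow.log would aggregate this event by fingerprint;
# here we just count events via their Query_time header lines:
grep -c '^# Query_time' slow.log
```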
-By default mk-query-digest merely collects and aggregates data from the files
+By default pt-query-digest merely collects and aggregates data from the files
 specified. It is designed to be as efficient as possible, but depending on the
 input you give it, it can use a lot of CPU and memory. Practically speaking,
 it is safe to run even on production systems, but you might want to monitor it
 until you are satisfied that the input you give it does not cause undue load.
 
-Various options will cause mk-query-digest to insert data into tables, execute
+Various options will cause pt-query-digest to insert data into tables, execute
 SQL queries, and so on. These include the L<"--execute"> option and
 L<"--review">.
 
@@ -13942,7 +13942,7 @@ to users.
 
 The authoritative source for updated information is always the online issue
 tracking system. Issues that affect this tool will be marked as such.
 You can see a list of such issues at the following URL:
-L.
+L.
 
 See also L<"BUGS"> for more information on filing bugs and getting help.
 
@@ -13950,7 +13950,7 @@ See also L<"BUGS"> for more information on filing bugs and getting help.
 
 This tool was formerly known as mk-log-parser.
 
-C is a framework for doing things with events from a query
+C is a framework for doing things with events from a query
 source such as the slow query log or PROCESSLIST. By default it acts as a very
 sophisticated log analysis tool. You can group and sort queries in many
 different ways simultaneously and find the most expensive queries, or create a
 
@@ -13966,7 +13966,7 @@ incompatible changes in the future.
 
 =head1 ATTRIBUTES
 
-mk-query-digest works on events, which are a collection of key/value pairs
+pt-query-digest works on events, which are a collection of key/value pairs
 called attributes. You'll recognize most of the attributes right away:
 Query_time, Lock_time, and so on. You can just look at a slow log and see them.
 However, there are some that don't exist in the slow log, and slow logs
 
@@ -14070,7 +14070,7 @@ select the reviewed query's details from the database with a query like C
 statement which clears
+causes pt-upgrade to execute a successful C