Compare commits

...

40 Commits

Author SHA1 Message Date
Carlos
750d2ef9d9 Updated version to 3.2.1 in all tools 2020-08-12 14:54:55 -03:00
Carlos
232263f176 Updated Percona::Toolkit version and changelog 2020-08-12 14:46:09 -03:00
Carlos
5d00a14b94 Updated changelog 2020-08-12 13:48:58 -03:00
Mateus Dubiela Oliveira
d6ada6a7bf PT-1869: Enable slave list reloading (#456)
* PT-1869: Enable slave list reloading

* PT-1869: Fix pt-osc/slave_lag sample sizes for more consistent testing results

* PT-1869: Move slaves_to_skip to get_slaves_cb
2020-08-12 11:30:56 -03:00
Paul Jacobs
f9b510e22f PT-1870 Font differences 2020-07-30 10:34:30 +03:00
Carlos Salguero
002303fc2e PMM-6256 Updated ExplainCmd for MongoDB (#459)
* PMM-6256 Updated ExplainCmd for MongoDB

In order to make explain for MongoDB work, we need to remove the "$db"
field from the explain command generated in proto.system.profile since
it is a duplicated field that triggers a MongoDB error.

* PMM-6256 New test

* PMM-6256 Removed commented out code

Co-authored-by: Carlos <cfsalguero@gmail.com>
2020-07-29 08:39:07 -03:00
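The PMM-6256 change drops the duplicated "$db" field from the captured command before it is wrapped in an explain. A minimal sketch of that filtering, using a simplified stand-in type for the driver's ordered document (the real code uses `bson.D`/`bson.E` from go.mongodb.org/mongo-driver, and the function name here is illustrative):

```go
package main

import "fmt"

// elem mirrors one key/value pair of an ordered BSON document
// (a stand-in for bson.E from the official MongoDB driver).
type elem struct {
	Key   string
	Value interface{}
}

// stripDB removes the "$db" field from a captured profiler command,
// since re-sending it inside an explain triggers a duplicate-field
// error from MongoDB.
func stripDB(cmd []elem) []elem {
	out := make([]elem, 0, len(cmd))
	for _, e := range cmd {
		if e.Key == "$db" {
			continue
		}
		out = append(out, e)
	}
	return out
}

func main() {
	cmd := []elem{{"find", "coll"}, {"$db", "test"}, {"limit", int32(10)}}
	fmt.Println(stripDB(cmd))
}
```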
ovidiustanila
8d59ef2051 PT-1518 pt-table-checksum gives error CRC32 (#415) 2020-07-23 23:40:53 -03:00
Sergey Kuzmichev
8b5e885173 PT-1859 ( PT-1868 ) and general pt-pg-summary improvements (#455)
* Fix for PT-1868 and general pt-pg-summary improvements

This is a rather large piece of changes to pt-pg-summary, which
includes:
* Corrected dependency for models in lib/pginfo
* Fixed existing testing infrastructure and implemented new tests:
** Test for New in pginfo
** Test for TestCollectGlobalInfo
** Test for TestCollectPerDatabaseInfo
* Fixed models to reflect PG12 changes (datid 0 with no name)
** Modified gen.sh to include PG12 containers and work with the same
docker-compose that tests use
* Updated templates and helper functions
* Fixed standby detection and template output

With these changes, pt-pg-summary works correctly with PG12 hosts.

* Extra port in pt-pg-summary models gen.sh, removing unused
2020-07-15 09:24:13 -03:00
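One of the PG12 model fixes above concerns the shared-objects row that PostgreSQL 12 added to pg_stat_database, reported with datid 0 and no database name; a per-database collector has to skip it because there is no real database to connect to. A rough sketch of that filtering (type and function names are mine, not pt-pg-summary's actual API):

```go
package main

import "fmt"

// dbRow is a simplified stand-in for a row of pg_stat_database.
type dbRow struct {
	DatID   int
	DatName string
}

// perDatabaseTargets drops the PG12 shared-objects row (datid 0,
// empty name) so per-database collection only targets connectable
// databases.
func perDatabaseTargets(rows []dbRow) []string {
	var names []string
	for _, r := range rows {
		if r.DatID == 0 || r.DatName == "" {
			continue
		}
		names = append(names, r.DatName)
	}
	return names
}

func main() {
	rows := []dbRow{{0, ""}, {13593, "postgres"}, {16384, "app"}}
	fmt.Println(perDatabaseTargets(rows))
}
```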
Andrii Skomorokhov
72c5d2af82 Merge pull request #458 from percona/PMM-6213-remove-go-1.13
PMM-6213 Remove go 1.13 from travis.
2020-07-14 19:28:28 +03:00
Andrii Skomorokhov
d23ea0fe92 PMM-6213 Try go tip. 2020-07-14 16:22:55 +03:00
Andrii Skomorokhov
dbe87040a3 PMM-6213 Remove go 1.13 from travis. 2020-07-13 18:17:20 +03:00
Carlos Salguero
c7eed08e33 MongoDB EXPLAIN JSON APIs (#454)
* WIP

* PMM-4192 Updated MongoDB fingerprint

SystemProfile has been changed to use the new bson.D from the official
MongoDB driver instead of the old BsonD. Updated the fingerprinter
module and all tests

* PMM-4192 Updated MongoDB explain tests

Updated test to use bson.D instead of BsonD

* PMM-4192 Code clean-up

* PMM-4192 Changes for CR

* PMM-4192 Changes for CR

* PMM-4192 Removed unused deps

Co-authored-by: Carlos <cfsalguero@gmail.com>
2020-07-08 10:25:12 -03:00
Carlos Salguero
de27179da8 Merge pull request #453 from percona/PT-1853_self_ref_fks
PT-1853 Added --no-check-foreing-keys to pt-osc
2020-06-30 21:50:50 -03:00
Carlos
8ff3451362 PT-1853 Changed wording 2020-06-30 20:54:08 -03:00
Carlos
9f2b72e0df PT-1853 Added disable fk checks in MySQL 2020-06-30 20:09:39 -03:00
Carlos
2e62d07ba0 PT-1853 Disabled FK checks in MySQL 2020-06-30 10:12:27 -03:00
Carlos Salguero
c6b4bd747e PT-1852 Added --no-check-foreing-keys to pt-osc 2020-06-21 18:53:47 -03:00
Carlos Salguero
89440c1ad1 Merge pull request #446 from percona/PT-1822_pt-mongodb-summary.fails.on.standalone
PT-1822 Fixed get hostnames for standalone
2020-06-15 06:36:16 -03:00
PaulJacobs-percona
ec7c62b289 PT-1836 replace U+2019 with U+0027
Apostrophe change
2020-06-04 09:50:02 +03:00
PaulJacobs-percona
dd921fd657 PT-1836 replace U+2019 with U+0027 (apostrophe) 2020-06-04 09:00:14 +03:00
PaulJacobs-percona
1dc85c3160 PT-1836 Change apostrophe to standard ascii. 2020-06-04 08:52:38 +03:00
PaulJacobs-percona
14698e6045 PT-1836 fix lintian warnings: spelling error 2020-06-04 08:38:02 +03:00
Carlos Salguero
3530c7bccd PT-1822 Do not exit if rs is not enabled 2020-06-02 11:59:26 -03:00
PaulJacobs-percona
7002246cd3 Merge pull request #451 from percona/PT-1851-missing-backslash
PT-1851 Formatting escape chars as code. Other fixes for Sphinx warni…
2020-06-02 08:42:39 +03:00
Carlos Salguero
1da2cc944b PT-1822 fixed test 2020-06-01 14:57:36 -03:00
Carlos Salguero
1f62be3279 Fixes fro CR 2020-06-01 11:50:13 -03:00
Nurlan Moldomurov
a91a8decac PMM-5723 reviewdog check (#450)
* PMM-5723 Reviewdog checks.

* PMM-5723 Github token for reviewdog.

* PMM-5723 Remove dep check.

* PMM-5723 Comment for secure.

* PMM-5723 Remove unnecessary flags.
2020-05-29 15:57:51 +03:00
Paul Jacobs
c9836d5962 PT-1851 Formatting escape chars as code. Other fixes for Sphinx warnings. 2020-05-29 15:03:43 +03:00
PaulJacobs-percona
b230a9da96 Update release_notes.rst 2020-05-28 15:49:40 +03:00
PaulJacobs-percona
4101d45484 Merge pull request #449 from percona/PT-1833-missing-rn-3-1-0
PT-1833 missing Release Notes 3.1.0
2020-05-28 15:38:21 +03:00
Carlos Salguero
596b62c23b PT-1822 Fixed test 2020-05-27 21:24:18 -03:00
Paul Jacobs
feb79c37c8 PT-1833 3.1.0 release notes missing from documentation 2020-05-26 17:04:56 +03:00
Carlos Salguero
1f33cb97e6 PT-1822 Fixed for CR 2020-05-25 22:35:35 -03:00
Carlos Salguero
40f28d977a Merge branch '3.0' into PT-1822_pt-mongodb-summary.fails.on.standalone 2020-05-25 22:00:15 -03:00
Carlos Salguero
b97436f0d5 Merge pull request #448 from percona/PT-1829
PT-1829 Fixed reconnection in heartbeat
2020-05-20 11:51:24 -03:00
Carlos Salguero
5efb3bd6f1 PT-1829 Fixed reconnection in heartbeat 2020-05-20 10:53:24 -03:00
Carlos Salguero
55502267d6 PT-1822 Fixed get hostnames for standalone 2020-05-14 23:53:01 -03:00
Carlos Salguero
8e7113d457 Merge branch '3.0' of percona.github.com:percona/percona-toolkit into 3.0 2020-05-06 11:16:57 -03:00
Alexander Tymchuk
2c866898ee Merge pull request #445 from percona/docs-remove-redundant-parenthesis
docs: remove a trailing parenthesis
2020-04-28 22:43:53 +03:00
Alexander Tymchuk
64d6b61132 docs: remove a trailing parenthesis 2020-04-22 23:14:41 +03:00
274 changed files with 1432 additions and 13745 deletions

.gitignore

@@ -24,3 +24,4 @@ src/go/.env
config/deb/control.bak
config/rpm/percona-toolkit.spec.bak
config/sphinx-build/percona-theme/*
coverage.out

.travis.yml

@@ -1,7 +1,8 @@
language: go
go:
- 1.13.x
- 1.14.x
- tip
services:
- docker
@@ -26,23 +27,26 @@ env:
- TEST_MONGODB_S2_SECONDARY1_PORT: 17005
- TEST_MONGODB_S2_SECONDARY2_PORT: 17006
- TEST_MONGODB_CONFIGSVR_RS: csReplSet
- TEST_MONGODB_CONFIGSVR1_PORT: 17007 ce
- TEST_MONGODB_CONFIGSVR1_PORT: 17007
- TEST_MONGODB_CONFIGSVR2_PORT: 17008
- TEST_MONGODB_CONFIGSVR3_PORT: 17009
- TEST_MONGODB_S3_RS: rs3
- TEST_MONGODB_S3_PRIMARY_PORT: 17021
- TEST_MONGODB_S3_SECONDARY1_PORT: 17022
- TEST_MONGODB_S3_SECONDARY2_PORT: 17023
- MINIO_ENDPOINT: http://localhost:9000/
- MINIO_ACCESS_KEY_ID: example00000
- MINIO_SECRET_ACCESS_KEY: secret00000
matrix:
# REVIEWDOG_GITHUB_API_TOKEN
- secure: "px8XYeNEAFTSTb1hYZuEOxqOXUxvp3EoU+KCtPck/KNozkoS95eBd9klgr3Os4wPKloLdMhrr0VE98lukogUxA/NmnYnos01kegjWgwwM6fkob8JxaN5KK4oUFF1wmirBlrjGlw8vUErPwINmrK4BywKpDbw6Yip6FzxdlWESHI="
matrix:
include:
- MONGODB_IMAGE=mongo:3.0
- MONGODB_IMAGE=mongo:3.2
- MONGODB_IMAGE=mongo:3.4
- MONGODB_IMAGE=percona/percona-server-mongodb:3.0
- MONGODB_IMAGE=percona/percona-server-mongodb:3.2
- MONGODB_IMAGE=percona/percona-server-mongodb:3.4
allow_failures:
- go: tip
# skip non-trunk PMM-XXXX branch builds, but still build pull requests
branches:
@@ -58,8 +62,14 @@ before_install:
install:
- go get -u github.com/golang/dep/cmd/dep
# install reviewdog and golangci-lin
- curl https://raw.githubusercontent.com/reviewdog/reviewdog/master/install.sh| sh -s
- curl https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s latest
before_script:
# static analyze
- bin/golangci-lint run -c=.golangci-required.yml --out-format=line-number | bin/reviewdog -f=golangci-lint -level=error -reporter=github-pr-check
- bin/golangci-lint run -c=.golangci.yml --out-format=line-number | bin/reviewdog -f=golangci-lint -level=error -reporter=github-pr-review
# log versions
- docker --version
- docker-compose --version
@@ -69,7 +79,7 @@ before_script:
- dep ensure
script:
- docker ps
- docker ps
- go test -timeout 20m ./src/go/...
allow_failures:

Changelog

@@ -1,5 +1,15 @@
Changelog for Percona Toolkit
* Fixed bug PT-1859: pt-pg-summary fails for Postgres12 (Thanks Sergey Kuzmichev)
* Improvement PT-1853: Added --no-check-foreing-keys to pt-osc
* Improvement PT-1851: Backslashes missing from documentation
* Improvement PT-1836: Review and consider lintian reported issues
* Fixed bug PT-1829: pt-heartbeat doesn't reconnect for check-read-only
* Fixed bug PT-1822: pt-mongodb-summary fails on standalone mongodb instances
* Fixed bug PT-1518: pt-table-checksum gives error CRC32. (Thanks @ovidiustanila)
v3.2.0 release 2020-04-23
* Fixed bug PT-1824: Name of a constraint can exceed 64 chars (Thanks Iwo Panowicz)
* Fixed bug PT-1793: Protocol parser cannot handle year 2020 (Thanks Kei Tsuchiya)
* Fixed bug PT-1782: pt-online-schema-change: FK keys warning, but there are no foreign keys

Gopkg.lock

@@ -9,14 +9,6 @@
revision = "c7af12943936e8c39859482e61f0574c2fd7fc75"
version = "v1.4.2"
[[projects]]
digest = "1:c39fbf3b3e138accc03357c72417c0153c54cc1ae8c9f40e8f120a550d876a76"
name = "github.com/Percona-Lab/pt-pg-summary"
packages = ["models"]
pruneopts = ""
revision = "f06beea959eb00acfe44ce39342c27582ad84caa"
version = "v0.1.9"
[[projects]]
digest = "1:f82b8ac36058904227087141017bb82f4b0fc58272990a4cdae3e2d6d222644e"
name = "github.com/StackExchange/wmi"
@@ -52,6 +44,14 @@
pruneopts = ""
revision = "c3de453c63f4bdb4dadffab9805ec00426c505f7"
[[projects]]
digest = "1:0deddd908b6b4b768cfc272c16ee61e7088a60f7fe2f06c547bd3d8e1f8b8e77"
name = "github.com/davecgh/go-spew"
packages = ["spew"]
pruneopts = ""
revision = "8991bc29aa16c548c550c7ff78260e27b9ab7c73"
version = "v1.1.1"
[[projects]]
digest = "1:c3f4f5bc77b998746a8061a52a2b8fb172c1744801165227692d702483995d58"
name = "github.com/go-ini/ini"
@@ -119,6 +119,20 @@
pruneopts = ""
revision = "bf9dde6d0d2c004a008c27aaee91170c786f6db8"
[[projects]]
digest = "1:f02c9b5d6682e16bb484f964445b22af73c0af1c66a7d3f75503adda88610d26"
name = "github.com/klauspost/compress"
packages = [
"fse",
"huff0",
"snappy",
"zstd",
"zstd/internal/xxhash",
]
pruneopts = ""
revision = "a8f778f32d263b3a95d1e5a90534f5f680560abe"
version = "v1.10.10"
[[projects]]
digest = "1:0f51cee70b0d254dbc93c22666ea2abf211af81c1701a96d04e2284b408621db"
name = "github.com/konsorten/go-windows-terminal-sequences"
@@ -127,22 +141,6 @@
revision = "f55edac94c9bbba5d6182a4be46d86a2c9b5b50e"
version = "v1.0.2"
[[projects]]
digest = "1:3108ec0946181c60040ff51b811908f89d03e521e2b4ade5ef5c65b3c0e911ae"
name = "github.com/kr/pretty"
packages = ["."]
pruneopts = ""
revision = "73f6ac0b30a98e433b289500d779f50c1a6f0712"
version = "v0.1.0"
[[projects]]
digest = "1:11b056b4421396ab14e384ab8ab8c2079b03f1e51aa5eb4d9b81f9e0d1aa8fbf"
name = "github.com/kr/text"
packages = ["."]
pruneopts = ""
revision = "e2ffdb16a802fe2bb95e2e35ff34f0e53aeef34f"
version = "v0.1.0"
[[projects]]
digest = "1:f4216047c24ab66fb757045febd7dac4edc6f4ad9f6c0063d0755d654d04f25e"
name = "github.com/lib/pq"
@@ -188,12 +186,20 @@
revision = "197f4ad8db8d1b04ff408042119176907c971f0a"
[[projects]]
digest = "1:1d7e1867c49a6dd9856598ef7c3123604ea3daabf5b83f303ff457bcbc410b1d"
digest = "1:c45802472e0c06928cd997661f2af610accd85217023b1d5f6331bebce0671d3"
name = "github.com/pkg/errors"
packages = ["."]
pruneopts = ""
revision = "ba968bfe8b2f7e042a574c888954fccecfa385b4"
version = "v0.8.1"
revision = "614d223910a179a466c1767a985424175c39b465"
version = "v0.9.1"
[[projects]]
digest = "1:256484dbbcd271f9ecebc6795b2df8cad4c458dd0f5fd82a8c2fa0c29f233411"
name = "github.com/pmezard/go-difflib"
packages = ["difflib"]
pruneopts = ""
revision = "792786c7400a136282c1664665ae0a8db921c6c2"
version = "v1.0.0"
[[projects]]
digest = "1:55dcddb2ba6ab25098ee6b96f176f39305f1fde7ea3d138e7e10bb64a5bf45be"
@@ -225,6 +231,17 @@
revision = "839c75faf7f98a33d445d181f3018b5c3409a45e"
version = "v1.4.2"
[[projects]]
digest = "1:83fd2513b9f6ae0997bf646db6b74e9e00131e31002116fda597175f25add42d"
name = "github.com/stretchr/testify"
packages = [
"assert",
"require",
]
pruneopts = ""
revision = "f654a9112bbeac49ca2cd45bfbe11533c4666cf8"
version = "v1.6.1"
[[projects]]
branch = "master"
digest = "1:ad74f33a69bd6ab0bd7287003b7c1069b94cfb5213eb5597005fe2963d7dfca9"
@@ -242,11 +259,12 @@
revision = "73f8eece6fdcd902c185bf651de50f3828bed5ed"
[[projects]]
digest = "1:5a7738096093da28b02967e9f29b380341c02baa4dc3104731a62be4290369b7"
digest = "1:6085253f6bc0d9e4761ce971e02849b626de51735b35f362a34dbe5dbc3a2168"
name = "go.mongodb.org/mongo-driver"
packages = [
"bson",
"bson/bsoncodec",
"bson/bsonoptions",
"bson/bsonrw",
"bson/bsontype",
"bson/primitive",
@@ -262,23 +280,23 @@
"x/bsonx",
"x/bsonx/bsoncore",
"x/mongo/driver",
"x/mongo/driver/address",
"x/mongo/driver/auth",
"x/mongo/driver/auth/internal/gssapi",
"x/mongo/driver/connstring",
"x/mongo/driver/description",
"x/mongo/driver/dns",
"x/mongo/driver/mongocrypt",
"x/mongo/driver/mongocrypt/options",
"x/mongo/driver/operation",
"x/mongo/driver/session",
"x/mongo/driver/topology",
"x/mongo/driver/uuid",
"x/network/address",
"x/network/command",
"x/network/compressor",
"x/network/connection",
"x/network/connstring",
"x/network/description",
"x/network/result",
"x/network/wiremessage",
"x/mongo/driver/wiremessage",
]
pruneopts = ""
revision = "0d1270edf53072da4da781b76d2e1db58831152f"
version = "v1.0.4"
revision = "4ce2db174a8ec022f504b9bc0e768e284e44708f"
version = "v1.3.4"
[[projects]]
branch = "master"
@@ -325,19 +343,25 @@
revision = "342b2e1fbaa52c93f31447ad2c6abc048c63e475"
version = "v0.3.2"
[[projects]]
branch = "v3"
digest = "1:2e9c4d6def1d36dcd17730e00c06b49a2e97ea5e1e639bcd24fa60fa43e33ad6"
name = "gopkg.in/yaml.v3"
packages = ["."]
pruneopts = ""
revision = "eeeca48fe7764f320e4870d231902bf9c1be2c08"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
input-imports = [
"github.com/Masterminds/semver",
"github.com/Percona-Lab/pt-pg-summary/models",
"github.com/alecthomas/kingpin",
"github.com/go-ini/ini",
"github.com/golang/mock/gomock",
"github.com/google/uuid",
"github.com/hashicorp/go-version",
"github.com/howeyc/gopass",
"github.com/kr/pretty",
"github.com/lib/pq",
"github.com/mattn/go-shellwords",
"github.com/montanaflynn/stats",
@@ -346,6 +370,8 @@
"github.com/pkg/errors",
"github.com/shirou/gopsutil/process",
"github.com/sirupsen/logrus",
"github.com/stretchr/testify/assert",
"github.com/stretchr/testify/require",
"go.mongodb.org/mongo-driver/bson",
"go.mongodb.org/mongo-driver/bson/primitive",
"go.mongodb.org/mongo-driver/mongo",

Makefile.PL

@@ -2,7 +2,7 @@ use ExtUtils::MakeMaker;
WriteMakefile(
NAME => 'percona-toolkit',
VERSION => '3.2.0',
VERSION => '3.2.1',
EXE_FILES => [ <bin/*> ],
MAN1PODS => {
'docs/percona-toolkit.pod' => 'blib/man1/percona-toolkit.1p',

bin/pt-align

@@ -1359,6 +1359,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-align 3.2.0
pt-align 3.2.1
=cut

bin/pt-archiver

@@ -45,7 +45,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -7696,8 +7696,8 @@ Example:
The file's contents are in the same format used by SELECT INTO OUTFILE, as
documented in the MySQL manual: rows terminated by newlines, columns
terminated by tabs, NULL characters are represented by \N, and special
characters are escaped by \. This lets you reload a file with LOAD DATA
terminated by tabs, NULL characters are represented by C<\N>, and special
characters are escaped by C<\>. This lets you reload a file with LOAD DATA
INFILE's default settings.
If you want a column header at the top of the file, see L<"--header">. The file
@@ -7856,8 +7856,10 @@ type: string
Used with L<"--file"> to specify the output format.
Valid formats are:
dump: MySQL dump format using tabs as field separator (default)
csv : Dump rows using ',' as separator and optionally enclosing fields by '"'.
- dump: MySQL dump format using tabs as field separator (default)
- csv : Dump rows using ',' as separator and optionally enclosing fields by '"'.
This format is equivalent to FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'.
=item --password
@@ -7887,10 +7889,10 @@ Specify the Perl module name of a general-purpose plugin. It is currently used
only for statistics (see L<"--statistics">) and must have C<new()> and a
C<statistics()> method.
The C<new( src => $src, dst => $dst, opts => $o )> method gets the source
The C<new( src =E<gt> $src, dst =E<gt> $dst, opts =E<gt> $o )> method gets the source
and destination DSNs, and their database connections, just like the
connection-specific plugins do. It also gets an OptionParser object (C<$o>) for
accessing command-line options (example: C<$o->get('purge');>).
accessing command-line options (example: C<$o-E<gt>get('purge');>).
The C<statistics(\%stats, $time)> method gets a hashref of the statistics
collected by the archiving job, and the time the whole job started.
@@ -8230,7 +8232,7 @@ Percona Toolkit. Second, it checks for and warns about versions with known
problems. For example, MySQL 5.5.25 had a critical bug and was re-released
as 5.5.25a.
A secure connection to Perconas Version Check database server is done to
A secure connection to Percona's Version Check database server is done to
perform these checks. Each request is logged by the server, including software
version numbers and unique ID of the checked system. The ID is generated by the
Percona Toolkit installation script or when the Version Check database call is
@@ -8652,6 +8654,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-archiver 3.2.0
pt-archiver 3.2.1
=cut

bin/pt-config-diff

@@ -43,7 +43,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -5747,7 +5747,7 @@ Percona Toolkit. Second, it checks for and warns about versions with known
problems. For example, MySQL 5.5.25 had a critical bug and was re-released
as 5.5.25a.
A secure connection to Perconas Version Check database server is done to
A secure connection to Percona's Version Check database server is done to
perform these checks. Each request is logged by the server, including software
version numbers and unique ID of the checked system. The ID is generated by the
Percona Toolkit installation script or when the Version Check database call is
@@ -5912,6 +5912,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-config-diff 3.2.0
pt-config-diff 3.2.1
=cut

bin/pt-deadlock-logger

@@ -42,7 +42,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -5532,7 +5532,7 @@ Percona Toolkit. Second, it checks for and warns about versions with known
problems. For example, MySQL 5.5.25 had a critical bug and was re-released
as 5.5.25a.
A secure connection to Perconas Version Check database server is done to
A secure connection to Percona's Version Check database server is done to
perform these checks. Each request is logged by the server, including software
version numbers and unique ID of the checked system. The ID is generated by the
Percona Toolkit installation script or when the Version Check database call is
@@ -5702,6 +5702,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-deadlock-logger 3.2.0
pt-deadlock-logger 3.2.1
=cut

bin/pt-diskstats

@@ -38,7 +38,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -5677,6 +5677,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-diskstats 3.2.0
pt-diskstats 3.2.1
=cut

bin/pt-duplicate-key-checker

@@ -39,7 +39,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -5765,6 +5765,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-duplicate-key-checker 3.2.0
pt-duplicate-key-checker 3.2.1
=cut

bin/pt-fifo-split

@@ -1648,6 +1648,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-fifo-split 3.2.0
pt-fifo-split 3.2.1
=cut

bin/pt-find

@@ -35,7 +35,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -5126,6 +5126,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-find 3.2.0
pt-find 3.2.1
=cut

bin/pt-fingerprint

@@ -2239,6 +2239,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-fingerprint 3.2.0
pt-fingerprint 3.2.1
=cut

bin/pt-fk-error-logger

@@ -37,7 +37,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -4688,6 +4688,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-fk-error-logger 3.2.0
pt-fk-error-logger 3.2.1
=cut

bin/pt-heartbeat

@@ -44,7 +44,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -6386,19 +6386,6 @@ sub main {
sleep $next_interval - $time;
PTDEBUG && _d('Woke up at', ts(time));
if ( $o->get('check-read-only') && $o->get('update') ) {
my $read_only_interval = $o->get('read-only-interval') || $interval;
while (server_is_readonly($dbh)) {
PTDEBUG && _d("Server is read only. Sleeping for $read_only_interval seconds...");
sleep($read_only_interval);
if (
-f $sentinel
) {
return 0;
}
}
}
# Connect or reconnect if necessary.
if ( !$dbh->ping() ) {
$dbh = $dp->get_dbh($dp->get_cxn_params($dsn), { AutoCommit => 1 });
@@ -6409,6 +6396,17 @@ sub main {
$heartbeat_sth = undef;
}
if ( $o->get('check-read-only') && $o->get('update') ) {
my $read_only_interval = $o->get('read-only-interval') || $interval;
while (server_is_readonly($dbh)) {
PTDEBUG && _d("Server is read only. Sleeping for $read_only_interval seconds...");
sleep($read_only_interval);
if (-f $sentinel) {
return 0;
}
}
}
if ( $o->get('monitor') ) {
$heartbeat_sth ||= $dbh->prepare($heartbeat_sql);
my ($delay) = $get_delay->($heartbeat_sth);
@@ -7386,6 +7384,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-heartbeat 3.2.0
pt-heartbeat 3.2.1
=cut
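The PT-1829 hunk above moves the read-only wait loop so it runs only after the connection has been checked and re-established; previously a dropped connection could leave `server_is_readonly()` polling a dead handle. A rough Go rendering of the corrected ordering (the helper names and the in-memory `server` are hypothetical, for illustration only):

```go
package main

import "fmt"

// server stands in for a MySQL connection plus its read_only state.
type server struct {
	alive    bool
	readOnly bool
}

// heartbeatTick mirrors the fixed loop ordering: reconnect first,
// then wait out read-only state on a known-good connection, then
// perform the heartbeat update.
func heartbeatTick(s *server, log *[]string) {
	if !s.alive { // ping failed: reconnect before anything else
		s.alive = true
		*log = append(*log, "reconnect")
	}
	for s.readOnly {
		*log = append(*log, "wait-read-only")
		s.readOnly = false // pretend the server became writable
	}
	*log = append(*log, "update-heartbeat")
}

func main() {
	var log []string
	heartbeatTick(&server{alive: false, readOnly: true}, &log)
	fmt.Println(log) // reconnect comes before the read-only wait
}
```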

bin/pt-index-usage

@@ -45,7 +45,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -7695,6 +7695,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-index-usage 3.2.0
pt-index-usage 3.2.1
=cut

bin/pt-ioprofile

@@ -1127,7 +1127,7 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-ioprofile 3.2.0
pt-ioprofile 3.2.1
=cut

bin/pt-kill

@@ -47,7 +47,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -8323,9 +8323,9 @@ Print information to STDOUT about what is being done.
These actions are taken for every matching query from all classes.
The actions are taken in this order: L<"--print">, L<"--execute-command">,
L<"--kill">/L<"--kill-query">. This order allows L<"--execute-command">
L<"--kill"> / L<"--kill-query">. This order allows L<"--execute-command">
to see the output of L<"--print"> and the query before
L<"--kill">/L<"--kill-query">. This may be helpful because pt-kill does
L<"--kill"> / L<"--kill-query">. This may be helpful because pt-kill does
not pass any information to L<"--execute-command">.
See also L<"GROUP, MATCH AND KILL">.
@@ -8554,6 +8554,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-kill 3.2.0
pt-kill 3.2.1
=cut

bin/pt-mext

@@ -804,7 +804,7 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-mext 3.2.0
pt-mext 3.2.1
=cut

bin/pt-mysql-summary

@@ -3289,7 +3289,7 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-mysql-summary 3.2.0
pt-mysql-summary 3.2.1
=cut

bin/pt-online-schema-change

@@ -56,7 +56,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -4255,7 +4255,7 @@ sub get_slaves {
else {
die "Unexpected recursion methods: @$methods";
}
return $slaves;
}
@@ -5015,10 +5015,32 @@ sub wait {
my $worst; # most lagging slave
my $pr_callback;
my $pr_first_report;
### refresh list of slaves. In: self passed to wait()
### Returns: new slave list
my $pr_refresh_slave_list = sub {
my ($self) = @_;
my ($slaves, $refresher) = ($self->{slaves}, $self->{get_slaves_cb});
return $slaves if ( not defined $refresher );
my $before = join ' ', sort map {$_->name()} @$slaves;
$slaves = $refresher->();
my $after = join ' ', sort map {$_->name()} @$slaves;
if ($before ne $after) {
$self->{slaves} = $slaves;
printf STDERR "Slave set to watch has changed\n Was: %s\n Now: %s\n",
$before, $after;
}
return($self->{slaves});
};
$slaves = $pr_refresh_slave_list->($self);
if ( $pr ) {
# If you use the default Progress report callback, you'll need to
# to add Transformers.pm to this tool.
$pr_callback = sub {
my ($fraction, $elapsed, $remaining, $eta, $completed) = @_;
my $dsn_name = $worst->{cxn}->{dsn_name};
my $dsn_name = $worst->{cxn}->name();
if ( defined $worst->{lag} ) {
print STDERR "Replica lag is " . ($worst->{lag} || '?')
. " seconds on $dsn_name. Waiting.\n";
@@ -5033,21 +5055,34 @@ sub wait {
};
$pr->set_callback($pr_callback);
# If a replic is stopped, don't wait 30s (or whatever interval)
# to report this. Instead, report it once, immediately, then
# keep reporting it every interval.
$pr_first_report = sub {
my $dsn_name = $worst->{cxn}->{dsn_name};
my $dsn_name = $worst->{cxn}->name();
if ( !defined $worst->{lag} ) {
if ($self->{fail_on_stopped_replication}) {
die 'replication is stopped';
}
print STDERR "(2) Replica $dsn_name is stopped. Waiting.\n";
print STDERR "(2) Replica '$dsn_name' is stopped. Waiting.\n";
}
return;
};
}
my @lagged_slaves = map { {cxn=>$_, lag=>undef} } @$slaves;
# First check all slaves.
my @lagged_slaves = map { {cxn=>$_, lag=>undef} } @$slaves;
while ( $oktorun->() && @lagged_slaves ) {
PTDEBUG && _d('Checking slave lag');
### while we were waiting our list of slaves may have changed
$slaves = $pr_refresh_slave_list->($self);
my $watched = 0;
@lagged_slaves = grep {
my $slave_name = $_->{cxn}->name();
grep {$slave_name eq $_->name()} @{$slaves // []}
} @lagged_slaves;
for my $i ( 0..$#lagged_slaves ) {
my $lag;
eval {
@@ -5066,8 +5101,10 @@ sub wait {
}
}
# Remove slaves that aren't lagging.
@lagged_slaves = grep { defined $_ } @lagged_slaves;
if ( @lagged_slaves ) {
# Sort lag, undef is highest because it means the slave is stopped.
@lagged_slaves = reverse sort {
defined $a->{lag} && defined $b->{lag} ? $a->{lag} <=> $b->{lag}
: defined $a->{lag} ? -1
@@ -5078,6 +5115,10 @@ sub wait {
$worst->{lag}, 'on', Dumper($worst->{cxn}->dsn()));
if ( $pr ) {
# There's no real progress because we can't estimate how long
# it will take all slaves to catch up. The progress reports
# are just to inform the user every 30s which slave is still
# lagging this most.
$pr->update(
sub { return 0; },
first_report => $pr_first_report,
@@ -8594,6 +8635,12 @@ sub main {
# ########################################################################
my $set_on_connect = sub {
my ($dbh) = @_;
if (!$o->get('check-foreign-keys')) {
my $sql = "SET foreign_key_checks=0";
PTDEBUG && _d($sql);
print $sql, "\n" if $o->get('print');
$dbh->do($sql);
}
return;
};
@@ -8753,13 +8800,42 @@ sub main {
channel => $o->get('channel'),
);
$slaves = $ms->get_slaves(
dbh => $cxn->dbh(),
dsn => $cxn->dsn(),
make_cxn => sub {
return $make_cxn->(@_, prev_dsn => $cxn->dsn());
},
);
my $slaves_to_skip = $o->get('skip-check-slave-lag');
my $get_slaves_cb = sub {
my ($intolerant) = @_;
my $slaves =$ms->get_slaves(
dbh => $cxn->dbh(),
dsn => $cxn->dsn(),
make_cxn => sub {
return $make_cxn->(
@_,
prev_dsn => $cxn->dsn(),
errok => (not $intolerant)
);
},
);
if ($slaves_to_skip) {
my $filtered_slaves = [];
for my $slave (@$slaves) {
for my $slave_to_skip (@$slaves_to_skip) {
if ($slave->{dsn}->{h} eq $slave_to_skip->{h} && $slave->{dsn}->{P} eq $slave_to_skip->{P}) {
print "Skipping slave " . $slave->description() . "\n";
} else {
push @$filtered_slaves, $slave;
}
}
}
$slaves = $filtered_slaves;
}
return $slaves;
};
### first ever call only: do not tolerate connection errors
$slaves = $get_slaves_cb->('intolerant');
PTDEBUG && _d(scalar @$slaves, 'slaves found');
if ( scalar @$slaves ) {
print "Found " . scalar(@$slaves) . " slaves:\n";
@@ -8783,6 +8859,7 @@ sub main {
#prev_dsn => $cxn->dsn(),
);
$slave_lag_cxns = [ $cxn ];
$get_slaves_cb = undef;
}
else {
PTDEBUG && _d('Will check slave lag on all slaves');
@@ -8790,31 +8867,9 @@ sub main {
}
if ( $slave_lag_cxns && scalar @$slave_lag_cxns ) {
if ($o->get('skip-check-slave-lag')) {
my $slaves_to_skip = $o->get('skip-check-slave-lag');
my $filtered_slaves = [];
for my $slave (@$slave_lag_cxns) {
my $found=0;
for my $slave_to_skip (@$slaves_to_skip) {
if ($slave->{dsn}->{h} eq $slave_to_skip->{h} && $slave->{dsn}->{P} eq $slave_to_skip->{P}) {
$found=1;
}
}
if ($found) {
print "Skipping slave ". $slave->description()."\n";
} else {
push @$filtered_slaves, $slave;
}
}
$slave_lag_cxns = $filtered_slaves;
}
if (!scalar @$slave_lag_cxns) {
print "Not checking slave lag because all slaves were skipped\n";
} else{
print "Will check slave lag on:\n";
foreach my $cxn ( @$slave_lag_cxns ) {
print $cxn->description()."\n";
}
print "Will check slave lag on:\n";
foreach my $cxn ( @$slave_lag_cxns ) {
print $cxn->description()."\n";
}
}
else {
@@ -8925,11 +8980,12 @@ sub main {
}
$replica_lag = new ReplicaLagWaiter(
slaves => $slave_lag_cxns,
max_lag => $o->get('max-lag'),
oktorun => sub { return $oktorun },
get_lag => $get_lag,
sleep => $sleep,
slaves => $slave_lag_cxns,
get_slaves_cb => $get_slaves_cb,
max_lag => $o->get('max-lag'),
oktorun => sub { return $oktorun },
get_lag => $get_lag,
sleep => $sleep,
);
my $get_status;
@@ -9102,6 +9158,15 @@ sub main {
$child_table->{name},
$child_table->{row_est} || '?';
}
# TODO: Fix self referencing foreign keys handling.
# See: https://jira.percona.com/browse/PT-1802
# https://jira.percona.com/browse/PT-1853
if (_has_self_ref_fks($orig_tbl->{db}, $orig_tbl->{tbl}, $child_tables) && $o->get('check-foreign-keys')) {
print "The table has self-referencing foreign keys and that might lead to errors.\n";
print "Use --no-check-foreign-keys to disable this check.\n";
return 1;
}
if ( $alter_fk_method ) {
# Let the user know how we're going to update the child table
@@ -10396,6 +10461,20 @@ sub check_alter {
return;
}
sub _has_self_ref_fks {
my ($orig_db, $orig_table, $child_tables) = @_;
my $db_tbl = sprintf('`%s`.`%s`', $orig_db, $orig_table);
foreach my $child_table ( @$child_tables ) {
if ("$db_tbl" eq "$child_table->{name}") {
return 1;
}
}
return 0;
}
# This function tries to detect if the --alter param is adding unique indexes.
# It returns an array of arrays, having a list of fields for each unique index
# found.
@@ -11918,7 +11997,7 @@ The tool exits with an error if the host is a cluster node and the table
is MyISAM or is being converted to MyISAM (C<ENGINE=MyISAM>), or if
C<wsrep_OSU_method> is not C<TOI>. There is no way to disable these checks.
=head1 MySQL 5.7+ Generated columns
=head1 MySQL 5.7 + Generated columns
The tools ignores MySQL 5.7+ C<GENERATED> columns since the value for those columns
is generated according to the expresion used to compute column values.
@@ -12123,7 +12202,7 @@ type: string
Channel name used when connected to a server using replication channels.
Suppose you have two masters, master_a at port 12345, master_b at port 1236 and
a slave connected to both masters using channels chan_master_a and chan_master_b.
If you want to run pt-table-sync to syncronize the slave against master_a, pt-table-sync
If you want to run pt-table-sync to synchronize the slave against master_a, pt-table-sync
won't be able to determine which is the correct master, since SHOW SLAVE STATUS
will return 2 rows. In this case, you can use --channel=chan_master_a to specify
the channel name to use in the SHOW SLAVE STATUS command.
@@ -12168,6 +12247,15 @@ L<"--print"> and verify that the triggers are correct.
=back
=item --[no]check-foreign-keys
default: yes
Check for self-referencing foreign keys. Currently self-referencing FKs are
not fully supported, so, to prevent errors, this program won't run if the table
has self-referencing foreign keys. Use this parameter to disable the
self-referencing FK check.
=item --check-interval
type: time; default: 1
@@ -13291,6 +13379,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-online-schema-change 3.2.0
pt-online-schema-change 3.2.1
=cut

View File

@@ -896,7 +896,7 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-pmp 3.2.0
pt-pmp 3.2.1
=cut

View File

@@ -64,7 +64,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -16957,6 +16957,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-query-digest 3.2.0
pt-query-digest 3.2.1
=cut

View File

@@ -2613,6 +2613,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-show-grants 3.2.0
pt-show-grants 3.2.1
=cut

View File

@@ -1245,7 +1245,7 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-sift 3.2.0
pt-sift 3.2.1
=cut

View File

@@ -40,7 +40,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -4602,7 +4602,7 @@ server. Before using this tool, please:
C<pt-slave-delay> watches a slave and starts and stops its replication SQL
thread as necessary to hold it at least as far behind the master as you
request. In practice, it will typically cause the slave to lag between
L<"--delay"> and L<"--delay">+L<"--interval"> behind the master.
L<"--delay"> and L<"--delay"> + L<"--interval"> behind the master.
It bases the delay on binlog positions in the slave's relay logs by default,
so there is no need to connect to the master. This works well if the IO
@@ -4988,6 +4988,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-slave-delay 3.2.0
pt-slave-delay 3.2.1
=cut

View File

@@ -4523,6 +4523,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-slave-find 3.2.0
pt-slave-find 3.2.1
=cut

View File

@@ -41,7 +41,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -6159,6 +6159,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-slave-restart 3.2.0
pt-slave-restart 3.2.1
=cut

View File

@@ -1993,7 +1993,7 @@ then compared to L<"--threshold"> as usual. The C<$EXT_ARGV> variable
contains the MySQL options mentioned in the L<"SYNOPSIS"> above.
The file should not alter the tool's existing global variables. Prefix any
file-specific global variables with "PLUGIN_" or make them local.
file-specific global variables with C<PLUGIN_> or make them local.
=item --help
@@ -2419,7 +2419,7 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-stalk 3.2.0
pt-stalk 3.2.1
=cut

View File

@@ -2723,7 +2723,7 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-summary 3.2.0
pt-summary 3.2.1
=cut

View File

@@ -58,7 +58,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -6190,7 +6190,10 @@ sub _get_crc_type {
$type = $sth->{mysql_type_name}->[0];
$length = $sth->{mysql_length}->[0];
PTDEBUG && _d($sql, $type, $length);
if ( $type eq 'bigint' && $length < 20 ) {
if ( $type eq 'integer' && $length < 11 ) {
$type = 'int';
}
elsif ( $type eq 'bigint' && $length < 20 ) {
$type = 'int';
}
};
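The fix for PT-1518 extends the checksum-column type detection: besides a short `bigint`, a short `integer` result (as some driver versions report the CRC32() type) is now also narrowed to a plain `int`. The decision can be sketched as a small pure function (hypothetical helper, shown in Go rather than the tool's Perl):

```go
package main

import "fmt"

// crcType mirrors the updated _get_crc_type branch: short integer and
// bigint results are stored in a plain INT column; anything else keeps
// its reported type.
func crcType(typ string, length int) string {
	if typ == "integer" && length < 11 {
		return "int"
	}
	if typ == "bigint" && length < 20 {
		return "int"
	}
	return typ
}

func main() {
	fmt.Println(crcType("integer", 10)) // int
	fmt.Println(crcType("bigint", 19))  // int
	fmt.Println(crcType("bigint", 20))  // bigint
}
```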
@@ -13324,7 +13327,8 @@ first option on the command line.
See the L<"--help"> output for a list of default config files.
=item --[no]create-replicate-table
=item --create-replicate-table
=item --no-create-replicate-table
default: yes
@@ -13687,7 +13691,7 @@ structure (MAGIC_create_replicate):
Note: lower_boundary and upper_boundary data type can be BLOB. See L<"--binary-index">.
By default, L<"--[no]create-replicate-table"> is true, so the database and
By default, L<"--create-replicate-table"> is true, so the database and
the table specified by this option are created automatically if they do not
exist.
@@ -14178,6 +14182,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-table-checksum 3.2.0
pt-table-checksum 3.2.1
=cut

View File

@@ -55,7 +55,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -4747,7 +4747,10 @@ sub get_crc_type {
$type = $sth->{mysql_type_name}->[0];
$length = $sth->{mysql_length}->[0];
PTDEBUG && _d($sql, $type, $length);
if ( $type eq 'bigint' && $length < 20 ) {
if ( $type eq 'integer' && $length < 11 ) {
$type = 'int';
}
elsif ( $type eq 'bigint' && $length < 20 ) {
$type = 'int';
}
};
@@ -13080,6 +13083,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-table-sync 3.2.0
pt-table-sync 3.2.1
=cut

View File

@@ -8509,6 +8509,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-table-usage 3.2.0
pt-table-usage 3.2.1
=cut

View File

@@ -61,7 +61,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -11444,6 +11444,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-upgrade 3.2.0
pt-upgrade 3.2.1
=cut

View File

@@ -44,7 +44,7 @@ BEGIN {
{
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';
@@ -6257,6 +6257,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-variable-advisor 3.2.0
pt-variable-advisor 3.2.1
=cut

View File

@@ -3303,6 +3303,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-visual-explain 3.2.0
pt-visual-explain 3.2.1
=cut

View File

@@ -50,7 +50,7 @@ copyright = u'2020, Percona LLC and/or its affiliates'
# The short X.Y version.
version = '3.2'
# The full version, including alpha/beta/rc tags.
release = '3.2.0'
release = '3.2.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.

View File

@@ -4,7 +4,7 @@
===============================
Percona Toolkit is a collection of advanced command-line tools
used by `Percona <http://www.percona.com/>`_) support staff
used by `Percona <http://www.percona.com/>`_ support staff
to perform a variety of MySQL, MongoDB, and system tasks
that are too difficult or complex to perform manually.
@@ -59,6 +59,8 @@ Miscellaneous
:maxdepth: 2
bugs
ipv6_support
special_option_types
authors
copyright_license_and_warranty
version

View File

@@ -567,6 +567,6 @@ Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
Percona Toolkit v3.2.0 released 2020-04-23
Percona Toolkit v3.2.1 released 2020-08-12
=cut

View File

@@ -1,8 +1,8 @@
.. _pt-mongodb-query-digest:
=======================
pt-mongodb-query-digest
=======================
==================================
:program:`pt-mongodb-query-digest`
==================================
``pt-mongodb-query-digest`` reports query usage statistics
by aggregating queries from the MongoDB query profiler.
@@ -89,11 +89,11 @@ Output Example
# Time range: 2017-01-11 12:58:26.519 -0300 ART to 2017-01-11 12:58:26.686 -0300 ART
# Attribute pct total min max avg 95% stddev median
# ================== === ======== ======== ======== ======== ======== ======= ========
# Count (docs) 36
# Exec Time ms 0 0 0 0 0 0 0 0
# Docs Scanned 0 148.00 0.00 74.00 4.11 74.00 16.95 0.00
# Docs Returned 2 148.00 0.00 74.00 4.11 74.00 16.95 0.00
# Bytes recv 0 2.11M 215.00 1.05M 58.48K 1.05M 240.22K 215.00
# Count (docs) 36
# Exec Time ms 0 0 0 0 0 0 0 0
# Docs Scanned 0 148.00 0.00 74.00 4.11 74.00 16.95 0.00
# Docs Returned 2 148.00 0.00 74.00 4.11 74.00 16.95 0.00
# Bytes recv 0 2.11M 215.00 1.05M 58.48K 1.05M 240.22K 215.00
# String:
# Namespaces samples.col1
# Fingerprint $gte,$lt,$meta,$sortKey,filter,find,projection,shardVersion,sort,user_id,user_id

View File

@@ -1,8 +1,8 @@
.. pt-mongodb-summary:
==================
pt-mongodb-summary
==================
=============================
:program:`pt-mongodb-summary`
=============================
``pt-mongodb-summary`` collects information about a MongoDB cluster.
It collects information from several sources
@@ -58,14 +58,14 @@ Output Example
.. code-block:: none
# Instances ####################################################################################
ID Host Type ReplSet
0 localhost:17001 PRIMARY r1
1 localhost:17002 SECONDARY r1
2 localhost:17003 SECONDARY r1
0 localhost:18001 PRIMARY r2
1 localhost:18002 SECONDARY r2
ID Host Type ReplSet
0 localhost:17001 PRIMARY r1
1 localhost:17002 SECONDARY r1
2 localhost:17003 SECONDARY r1
0 localhost:18001 PRIMARY r2
1 localhost:18002 SECONDARY r2
2 localhost:18003 SECONDARY r2
# This host
# Mongo Executable #############################################################################
Path to executable | /home/karl/tmp/MongoDB32Labs/3.0/bin/mongos
@@ -79,9 +79,9 @@ Output Example
Started | 2016-10-30 00:18:49 -0300 ART
Datadir | /data/db
Process Type | mongos
# Running Ops ##################################################################################
Type Min Max Avg
Insert 0 0 0/5s
Query 0 0 0/5s
@@ -89,21 +89,21 @@ Output Example
Delete 0 0 0/5s
GetMore 0 0 0/5s
Command 0 22 16/5s
# Security #####################################################################################
Users 0
Roles 0
Auth disabled
SSL disabled
# Oplog ########################################################################################
Oplog Size 18660 Mb
Oplog Used 55 Mb
Oplog Length 0.91 hours
Last Election 2016-10-30 00:18:44 -0300 ART
# Cluster wide #################################################################################
Databases: 3
Collections: 17

View File

@@ -1,5 +1,7 @@
pt-pg-summary
=============
========================
:program:`pt-pg-summary`
========================
**pt-pg-summary** collects information about a PostgreSQL cluster.
Usage

View File

@@ -1,5 +1,3 @@
.. program:: pt-secure-collect
============================
:program:`pt-secure-collect`
============================
@@ -65,7 +63,7 @@ COMMANDS
Include this dir into the sanitized tar file.
.. option:: --config-file
Path to the config file. Default: ``~/.my.cnf``
.. option:: --mysql-host
@@ -133,7 +131,7 @@ COMMANDS
.. option:: --outfile
Write the output to this file. If omitted, the output file
Write the output to this file. If omitted, the output file
name will be the same as the input file, adding the ``.aes`` extension.
* **Encrypt command**
@@ -146,7 +144,7 @@ COMMANDS
.. option:: --outfile
Write the output to this file. If omitted, the output file
Write the output to this file. If omitted, the output file
name will be the same as the input file, without the ``.aes`` extension.
* **Sanitize command**

View File

@@ -1,7 +1,7 @@
Percona Toolkit
***************
v3.2.0 released 2019-04-23
v3.2.0 released 2020-04-23
==========================
Improvements:
@@ -27,6 +27,39 @@ Bug fixes:
* :jirabug:`PT-1793`: ``pt-query-digest`` was unable to handle the year 2020 because of wrong ``tcpdump`` parsing. (Thank you, Kei Tsuchiya.)
v3.1.0 released 2019-09-12
==========================
New Features:
* :jirabug:`PT-1663`: Implement retention by bytes for pt-stalk
Improvements:
* :jirabug:`PT-1705`: Make pt-online-schema-change exit with different codes depending on the status
* :jirabug:`PT-1761`: Prevent pt-osc to run under MySQL 8.0.14+ & 8.0.17
* :jirabug:`PT-1746`: diskstats not working for kernel 4.18+
Bugs Fixed:
* :jirabug:`PT-1736`: pt-kill ignores --busy-time and --kill-busy-commands=Query when there is a process with Command=Execute
* :jirabug:`PT-1575`: pt-mysql-summary does not print PXC section for PXC 5.6 and 5.7
* :jirabug:`PT-1728`: Pt-table-checksum failing to scan small tables that get wiped out often
* :jirabug:`PT-1720`: pt-pmp parses configuration files that lead to errors
* :jirabug:`PT-1114`: LP #1182180: pt-table-checksum fails when table is empty
* :jirabug:`PT-1715`: pt-upgrade documentation doesn't have the type tcpdump
* :jirabug:`PT-1344`: LP #1580428: pt-online-schema-change: Use of uninitialized value $host in string
* :jirabug:`PT-1492`: pt-kill in version 3.0.7 seems not to respect busy-time any longer
* :jirabug:`PT-1798`: CLONE - yum repos do not contain 3.1.1 of percona toolkit
* :jirabug:`PT-1797`: yum repos do not contain 3.1.1 of percona toolkit
* :jirabug:`PT-1633`: pt-config-diff doesn't handle innodb_temp_data_file_path correctly
* :jirabug:`PT-1630`: pt-table-checksum not working with galera cluster anymore since 3.0.11
* :jirabug:`PT-1734`: Tailing log_error in pt-stalk doesn't work
* :jirabug:`PT-1732`: Typo in link on percona.com
v3.0.13 released 2019-01-03
===========================
@@ -77,8 +110,7 @@ New features
* :jirabug:`PT-1571`: Improved hostname recognition in ``pt-secure-collect``
* :jirabug:`PT-1569`: Disabled ``--alter-foreign-keys-method=drop_swap`` in ``pt-online-schema-change``
* :jirabug:`PT-242`: (``pt-stalk``) Include ``SHOW SLAVE STATUS`` on MySQL 5.7 (Thanks `Marcelo Altmann <https://www.p
ercona.com/blog/author/marcelo-altmann/>`_)
* :jirabug:`PT-242`: (``pt-stalk``) Include ``SHOW SLAVE STATUS`` on MySQL 5.7 (Thanks `Marcelo Altmann <https://www.percona.com/blog/author/marcelo-altmann/>`_)
Fixed bugs
@@ -1105,17 +1137,17 @@ pt-query-digest --output json includes query examples as of v2.2.3. Some people
When using drop swap with pt-online-schema-change there is some production impact. This impact can be measured because the tool outputs the current timestamp on lines for operations that may take a while.
* Fixed bug #1163735: pt-table-checksum fails if explicit_defaults_for_timestamp is enabled in 5.6
pt-table-checksum would fail if variable explicit_defaults_for_timestamp was enabled in MySQL 5.6.
pt-table-checksum would fail if variable explicit_defaults_for_timestamp was enabled in MySQL 5.6.
* Fixed bug #1182856: Zero values causes "Invalid --set-vars value: var=0"
Trying to assign 0 to any variable by using --set-vars option would cause “Invalid --set-vars value” message.
Trying to assign 0 to any variable by using --set-vars option would cause “Invalid --set-vars value” message.
* Fixed bug #1188264: pt-online-schema-change error copying rows: Undefined subroutine &pt_online_schema_change::get
Fixed the typo in the pt-online-schema-change code that could lead to a tool crash when copying the rows.
* Fixed bug #1199591: pt-table-checksum doesn't use non-unique index with highest cardinality
pt-table-checksum was using the first non-unique index instead of the one with the highest cardinality due to a sorting bug.
pt-table-checksum was using the first non-unique index instead of the one with the highest cardinality due to a sorting bug.
Percona Toolkit packages can be downloaded from
http://www.percona.com/downloads/percona-toolkit/ or the Percona Software

44
docs/rn.3-1-0.txt Normal file
View File

@@ -0,0 +1,44 @@
.. _PT-3.1.0:
================================================================================
*Percona Toolkit* 3.1.0
================================================================================
:Date: September 12, 2019
:Installation: `Installing Percona Toolkit <https://www.percona.com/doc/percona-toolkit/LATEST/installation.html>`_
New Features
================================================================================
* :jirabug:`PT-1663`: Implement retention by bytes for pt-stalk
Improvements
================================================================================
* :jirabug:`PT-1705`: Make pt-online-schema-change exit with different codes depending on the status
* :jirabug:`PT-1761`: Prevent pt-osc to run under MySQL 8.0.14+ & 8.0.17
* :jirabug:`PT-1746`: diskstats not working for kernel 4.18+
Bugs Fixed
================================================================================
* :jirabug:`PT-1736`: pt-kill ignores --busy-time and --kill-busy-commands=Query when there is a process with Command=Execute
* :jirabug:`PT-1575`: pt-mysql-summary does not print PXC section for PXC 5.6 and 5.7
* :jirabug:`PT-1728`: Pt-table-checksum failing to scan small tables that get wiped out often
* :jirabug:`PT-1720`: pt-pmp parses configuration files that lead to errors
* :jirabug:`PT-1114`: LP #1182180: pt-table-checksum fails when table is empty
* :jirabug:`PT-1715`: pt-upgrade documentation doesn't have the type tcpdump
* :jirabug:`PT-1344`: LP #1580428: pt-online-schema-change: Use of uninitialized value $host in string
* :jirabug:`PT-1492`: pt-kill in version 3.0.7 seems not to respect busy-time any longer
* :jirabug:`PT-1798`: CLONE - yum repos do not contain 3.1.1 of percona toolkit
* :jirabug:`PT-1797`: yum repos do not contain 3.1.1 of percona toolkit
* :jirabug:`PT-1633`: pt-config-diff doesn't handle innodb_temp_data_file_path correctly
* :jirabug:`PT-1630`: pt-table-checksum not working with galera cluster anymore since 3.0.11
* :jirabug:`PT-1734`: Tailing log_error in pt-stalk doesn't work
* :jirabug:`PT-1732`: Typo in link on percona.com

View File

@@ -29,22 +29,22 @@ use constant PTDEBUG => $ENV{PTDEBUG} || 0;
# Sub: check_recursion_method
# Check that the arrayref of recursion methods passed in is valid
sub check_recursion_method {
sub check_recursion_method {
my ($methods) = @_;
if ( @$methods != 1 ) {
if ( grep({ !m/processlist|hosts/i } @$methods)
&& $methods->[0] !~ /^dsn=/i )
{
die "Invalid combination of recursion methods: "
. join(", ", map { defined($_) ? $_ : 'undef' } @$methods) . ". "
. "Only hosts and processlist may be combined.\n"
}
}
else {
if ( @$methods != 1 ) {
if ( grep({ !m/processlist|hosts/i } @$methods)
&& $methods->[0] !~ /^dsn=/i )
{
die "Invalid combination of recursion methods: "
. join(", ", map { defined($_) ? $_ : 'undef' } @$methods) . ". "
. "Only hosts and processlist may be combined.\n"
}
}
else {
my ($method) = @$methods;
die "Invalid recursion method: " . ( $method || 'undef' )
unless $method && $method =~ m/^(?:processlist$|hosts$|none$|cluster$|dsn=)/i;
}
die "Invalid recursion method: " . ( $method || 'undef' )
unless $method && $method =~ m/^(?:processlist$|hosts$|none$|cluster$|dsn=)/i;
}
}
sub new {
@@ -73,7 +73,7 @@ sub get_slaves {
my $methods = $self->_resolve_recursion_methods($args{dsn});
return $slaves unless @$methods;
if ( grep { m/processlist|hosts/i } @$methods ) {
my @required_args = qw(dbh dsn);
foreach my $arg ( @required_args ) {
@@ -86,7 +86,7 @@ sub get_slaves {
{ dbh => $dbh,
dsn => $dsn,
slave_user => $o->got('slave-user') ? $o->get('slave-user') : '',
slave_password => $o->got('slave-password') ? $o->get('slave-password') : '',
slave_password => $o->got('slave-password') ? $o->get('slave-password') : '',
callback => sub {
my ( $dsn, $dbh, $level, $parent ) = @_;
return unless $level;
@@ -118,7 +118,7 @@ sub get_slaves {
else {
die "Unexpected recursion methods: @$methods";
}
return $slaves;
}
@@ -798,7 +798,7 @@ sub short_host {
# Returns:
# True if the proclist item is the given type of replication thread.
sub is_replication_thread {
my ( $self, $query, %args ) = @_;
my ( $self, $query, %args ) = @_;
return unless $query;
my $type = lc($args{type} || 'all');
@@ -814,7 +814,7 @@ sub is_replication_thread {
# On a slave, there are two threads. Both have user="system user".
if ( ($query->{User} || $query->{user} || '') eq "system user" ) {
PTDEBUG && _d("Slave replication thread");
if ( $type ne 'all' ) {
if ( $type ne 'all' ) {
# Match a particular slave thread.
my $state = $query->{State} || $query->{state} || '';
@@ -831,7 +831,7 @@ sub is_replication_thread {
|Reading\sevent\sfrom\sthe\srelay\slog
|Has\sread\sall\srelay\slog;\swaiting
|Making\stemp\sfile
|Waiting\sfor\sslave\smutex\son\sexit)/xi;
|Waiting\sfor\sslave\smutex\son\sexit)/xi;
# Type is either "slave_sql" or "slave_io". The second line
# implies that if this isn't the sql thread then it must be
@@ -919,7 +919,7 @@ sub get_replication_filters {
replicate_do_db
replicate_ignore_db
replicate_do_table
replicate_ignore_table
replicate_ignore_table
replicate_wild_do_table
replicate_wild_ignore_table
);
@@ -931,7 +931,7 @@ sub get_replication_filters {
$filters{slave_skip_errors} = $row->[1] if $row->[1] && $row->[1] ne 'OFF';
}
return \%filters;
return \%filters;
}

View File

@@ -18,7 +18,7 @@
# ###########################################################################
package Percona::Toolkit;
our $VERSION = '3.2.0';
our $VERSION = '3.2.1';
use strict;
use warnings FATAL => 'all';

View File

@@ -40,7 +40,7 @@ use Data::Dumper;
# slaves - Arrayref of <Cxn> objects
#
# Returns:
# ReplicaLagWaiter object
# ReplicaLagWaiter object
sub new {
my ( $class, %args ) = @_;
my @required_args = qw(oktorun get_lag sleep max_lag slaves);
@@ -80,6 +80,26 @@ sub wait {
my $worst; # most lagging slave
my $pr_callback;
my $pr_first_report;
### refresh list of slaves. In: self passed to wait()
### Returns: new slave list
my $pr_refresh_slave_list = sub {
my ($self) = @_;
my ($slaves, $refresher) = ($self->{slaves}, $self->{get_slaves_cb});
return $slaves if ( not defined $refresher );
my $before = join ' ', sort map {$_->name()} @$slaves;
$slaves = $refresher->();
my $after = join ' ', sort map {$_->name()} @$slaves;
if ($before ne $after) {
$self->{slaves} = $slaves;
printf STDERR "Slave set to watch has changed\n Was: %s\n Now: %s\n",
$before, $after;
}
return($self->{slaves});
};
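The `$pr_refresh_slave_list` closure detects a changed replica set by joining the sorted slave names into a single string before and after the reload and comparing the two. The same change-detection idea, sketched as a standalone Go function (names here are illustrative):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// slaveSetChanged compares the sorted, space-joined slave names before
// and after a reload, the way the Perl closure above builds $before and
// $after. Order of discovery does not matter, only membership.
func slaveSetChanged(before, after []string) bool {
	key := func(names []string) string {
		s := append([]string(nil), names...)
		sort.Strings(s)
		return strings.Join(s, " ")
	}
	return key(before) != key(after)
}

func main() {
	fmt.Println(slaveSetChanged([]string{"s1", "s2"}, []string{"s2", "s1"})) // false
	fmt.Println(slaveSetChanged([]string{"s1"}, []string{"s1", "s3"}))       // true
}
```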
$slaves = $pr_refresh_slave_list->($self);
if ( $pr ) {
# If you use the default Progress report callback, you'll need to
# to add Transformers.pm to this tool.
@@ -116,11 +136,26 @@ sub wait {
}
# First check all slaves.
my @lagged_slaves = map { {cxn=>$_, lag=>undef} } @$slaves;
my @lagged_slaves = map { {cxn=>$_, lag=>undef} } @$slaves;
while ( $oktorun->() && @lagged_slaves ) {
PTDEBUG && _d('Checking slave lag');
### while we were waiting our list of slaves may have changed
$slaves = $pr_refresh_slave_list->($self);
my $watched = 0;
@lagged_slaves = grep {
my $slave_name = $_->{cxn}->name();
grep {$slave_name eq $_->name()} @{$slaves // []}
} @lagged_slaves;
for my $i ( 0..$#lagged_slaves ) {
my $lag = $get_lag->($lagged_slaves[$i]->{cxn});
my $lag;
eval {
$lag = $get_lag->($lagged_slaves[$i]->{cxn});
};
if ($EVAL_ERROR) {
die $EVAL_ERROR;
}
PTDEBUG && _d($lagged_slaves[$i]->{cxn}->name(),
'slave lag:', $lag);
if ( !defined $lag || $lag > $max_lag ) {

View File

@@ -338,7 +338,10 @@ sub _get_crc_type {
$type = $sth->{mysql_type_name}->[0];
$length = $sth->{mysql_length}->[0];
PTDEBUG && _d($sql, $type, $length);
if ( $type eq 'bigint' && $length < 20 ) {
if ( $type eq 'integer' && $length < 11 ) {
$type = 'int';
}
elsif ( $type eq 'bigint' && $length < 20 ) {
$type = 'int';
}
};

View File

@@ -88,7 +88,10 @@ sub get_crc_type {
$type = $sth->{mysql_type_name}->[0];
$length = $sth->{mysql_length}->[0];
PTDEBUG && _d($sql, $type, $length);
if ( $type eq 'bigint' && $length < 20 ) {
if ( $type eq 'integer' && $length < 11 ) {
$type = 'int';
}
elsif ( $type eq 'bigint' && $length < 20 ) {
$type = 'int';
}
};

View File

@@ -18,7 +18,7 @@ BIN_DIR=$(shell git rev-parse --show-toplevel)/bin
SRC_DIR=$(shell git rev-parse --show-toplevel)/src/go
LDFLAGS="-X main.Version=${VERSION} -X main.Build=${BUILD} -X main.GoVersion=${GOVERSION} -X main.Commit=${COMMIT} -s -w"
TEST_PSMDB_VERSION?=3.6
TEST_PSMDB_VERSION?=4.0
TEST_MONGODB_FLAVOR?=percona/percona-server-mongodb
TEST_MONGODB_ADMIN_USERNAME?=admin
TEST_MONGODB_ADMIN_PASSWORD?=admin123456

View File

@@ -4,13 +4,10 @@ services:
standalone:
network_mode: host
image: ${TEST_MONGODB_FLAVOR}:${TEST_PSMDB_VERSION}
environment:
MONGO_INITDB_ROOT_USERNAME: ${TEST_MONGODB_ADMIN_USERNAME}
MONGO_INITDB_ROOT_PASSWORD: ${TEST_MONGODB_ADMIN_PASSWORD}
command: --port=27017
volumes:
- ./docker/test/entrypoint-mongod.sh:/entrypoint.sh:ro
- ./docker/test/entrypoint-mongod.sh:/usr/local/bin/docker-entrypoint.sh:ro
- ./docker/test/mongod.key:/mongod.key:ro
- ./docker/test/ssl/rootCA.crt:/rootCA.crt:ro
- ./docker/test/ssl/mongodb.pem:/mongod.pem:ro
s1-mongo1:
network_mode: host
image: ${TEST_MONGODB_FLAVOR}:${TEST_PSMDB_VERSION}

View File

@@ -30,6 +30,8 @@ const (
envMongoDBConfigsvr3Port = "TEST_MONGODB_CONFIGSVR3_PORT"
//
envMongoDBMongosPort = "TEST_MONGODB_MONGOS_PORT"
envMongoDBStandalonePort = "TEST_MONGODB_STANDALONE_PORT"
//
envMongoDBUser = "TEST_MONGODB_ADMIN_USERNAME"
envMongoDBPassword = "TEST_MONGODB_ADMIN_PASSWORD"
@@ -39,46 +41,49 @@ var (
// MongoDBHost is the hostname. Since it runs locally, it is localhost
MongoDBHost = "127.0.0.1"
// Port for standalone instance
MongoDBStandalonePort = getEnvDefault(envMongoDBStandalonePort, "27017")
// MongoDBShard1ReplsetName Replicaset name for shard 1
MongoDBShard1ReplsetName = os.Getenv(envMongoDBShard1ReplsetName)
MongoDBShard1ReplsetName = getEnvDefault(envMongoDBShard1ReplsetName, "rs1")
// MongoDBShard1PrimaryPort is the port for the primary instance of shard 1
MongoDBShard1PrimaryPort = os.Getenv(envMongoDBShard1PrimaryPort)
MongoDBShard1PrimaryPort = getEnvDefault(envMongoDBShard1PrimaryPort, "17001")
// MongoDBShard1Secondary1Port is the port for the secondary instance 1 of shard 1
MongoDBShard1Secondary1Port = os.Getenv(envMongoDBShard1Secondary1Port)
MongoDBShard1Secondary1Port = getEnvDefault(envMongoDBShard1Secondary1Port, "17002")
// MongoDBShard1Secondary2Port is the port for the secondary instance 2 of shard 1
MongoDBShard1Secondary2Port = os.Getenv(envMongoDBShard1Secondary2Port)
MongoDBShard1Secondary2Port = getEnvDefault(envMongoDBShard1Secondary2Port, "17003")
// MongoDBShard2ReplsetName Replicaset name for shard 2
MongoDBShard2ReplsetName = os.Getenv(envMongoDBShard2ReplsetName)
MongoDBShard2ReplsetName = getEnvDefault(envMongoDBShard2ReplsetName, "rs2")
// MongoDBShard2PrimaryPort is the port for the primary instance of shard 2
MongoDBShard2PrimaryPort = os.Getenv(envMongoDBShard2PrimaryPort)
MongoDBShard2PrimaryPort = getEnvDefault(envMongoDBShard2PrimaryPort, "17004")
// MongoDBShard2Secondary1Port is the port for the secondary instance 1 of shard 2
MongoDBShard2Secondary1Port = os.Getenv(envMongoDBShard2Secondary1Port)
MongoDBShard2Secondary1Port = getEnvDefault(envMongoDBShard2Secondary1Port, "17005")
// MongoDBShard2Secondary2Port is the port for the secondary instance 1 of shard 2
MongoDBShard2Secondary2Port = os.Getenv(envMongoDBShard2Secondary2Port)
MongoDBShard2Secondary2Port = getEnvDefault(envMongoDBShard2Secondary2Port, "17006")
// MongoDBShard3ReplsetName Replicaset name for the 3rd cluster
MongoDBShard3ReplsetName = os.Getenv(envMongoDBShard3ReplsetName)
MongoDBShard3ReplsetName = getEnvDefault(envMongoDBShard3ReplsetName, "rs3")
// MongoDBShard3PrimaryPort is the port for the primary instance of 3rd cluster (non-sharded)
MongoDBShard3PrimaryPort = os.Getenv(envMongoDBShard3PrimaryPort)
MongoDBShard3PrimaryPort = getEnvDefault(envMongoDBShard3PrimaryPort, "17021")
// MongoDBShard3Secondary1Port is the port for the secondary instance 1 on the 3rd cluster
MongoDBShard3Secondary1Port = os.Getenv(envMongoDBShard3Secondary1Port)
MongoDBShard3Secondary1Port = getEnvDefault(envMongoDBShard3Secondary1Port, "17022")
// MongoDBShard3Secondary2Port is the port for the secondary instance 2 on the 3rd cluster
MongoDBShard3Secondary2Port = os.Getenv(envMongoDBShard3Secondary2Port)
MongoDBShard3Secondary2Port = getEnvDefault(envMongoDBShard3Secondary2Port, "17023")
// MongoDBConfigsvrReplsetName Replicaset name for the config servers
MongoDBConfigsvrReplsetName = os.Getenv(envMongoDBConfigsvrReplsetName)
MongoDBConfigsvrReplsetName = getEnvDefault(envMongoDBConfigsvrReplsetName, "csReplSet")
// MongoDBConfigsvr1Port Config server primary's port
MongoDBConfigsvr1Port = os.Getenv(envMongoDBConfigsvr1Port)
// MongoDBConfigsvr2Port = os.Getenv(envMongoDBConfigsvr2Port)
// MongoDBConfigsvr3Port = os.Getenv(envMongoDBConfigsvr3Port)
MongoDBConfigsvr1Port = getEnvDefault(envMongoDBConfigsvr1Port, "17007")
// MongoDBConfigsvr2Port = getEnvDefault(envMongoDBConfigsvr2Port)
// MongoDBConfigsvr3Port = getEnvDefault(envMongoDBConfigsvr3Port)
// MongoDBMongosPort mongos port
MongoDBMongosPort = os.Getenv(envMongoDBMongosPort)
MongoDBMongosPort = getEnvDefault(envMongoDBMongosPort, "17000")
// MongoDBUser username for all instances
MongoDBUser = os.Getenv(envMongoDBUser)
MongoDBUser = getEnvDefault(envMongoDBUser, "admin")
// MongoDBPassword password for all instances
MongoDBPassword = os.Getenv(envMongoDBPassword)
MongoDBPassword = getEnvDefault(envMongoDBPassword, "admin123456")
// MongoDBTimeout global connection timeout
MongoDBTimeout = time.Duration(10) * time.Second
@@ -120,6 +125,13 @@ func init() {
MongoDBSSLCACertFile = filepath.Join(MongoDBSSLDir, "rootCA.crt")
}
func getEnvDefault(key, defVal string) string {
if val := os.Getenv(key); val != "" {
return val
}
return defVal
}
// BaseDir returns the project's root dir by asking git
func BaseDir() string {
if basedir != "" {

View File

@@ -5,7 +5,7 @@ import (
"regexp"
"time"
"github.com/Percona-Lab/pt-pg-summary/models"
"github.com/percona/percona-toolkit/src/go/pt-pg-summary/models"
"github.com/hashicorp/go-version"
"github.com/pkg/errors"
"github.com/shirou/gopsutil/process"
@@ -94,11 +94,11 @@ func new(db models.XODB, databases []string, sleep int, logger *logrus.Logger) (
serverVersion, err := models.GetServerVersion(db)
if err != nil {
return nil, errors.Wrap(err, "Cannot get the connected clients list")
return nil, errors.Wrap(err, "Cannot get server version")
}
if info.ServerVersion, err = parseServerVersion(serverVersion.Version); err != nil {
return nil, fmt.Errorf("cannot get server version: %s", err.Error())
return nil, fmt.Errorf("Cannot parse server version: %s", err.Error())
}
info.logger.Infof("Detected PostgreSQL version: %v", info.ServerVersion)
@@ -198,7 +198,7 @@ func (i *PGInfo) CollectGlobalInfo(db models.XODB) []error {
}
}
if !i.ServerVersion.LessThan(version10) {
if i.ServerVersion.GreaterThanOrEqual(version10) {
i.logger.Info("Collecting Slave Hosts (PostgreSQL 10+)")
if i.SlaveHosts10, err = models.GetSlaveHosts10s(db); err != nil {
errs = append(errs, errors.Wrap(err, "Cannot get slave hosts in Postgre 10+"))

View File

@@ -7,6 +7,8 @@ import (
"os/exec"
"regexp"
"strings"
"go.mongodb.org/mongo-driver/bson"
)
const (
@@ -45,7 +47,7 @@ func LoadJson(filename string, destination interface{}) error {
return nil
}
func LoadBson(filename string, destination interface{}) error {
func LoadBsonold(filename string, destination interface{}) error {
file, err := os.Open(filename)
if err != nil {
return err
@@ -85,6 +87,26 @@ func LoadBson(filename string, destination interface{}) error {
return nil
}
func LoadBson(filename string, destination interface{}) error {
file, err := os.Open(filename)
if err != nil {
return err
}
defer file.Close()
buf, err := ioutil.ReadAll(file)
if err != nil {
return err
}
err = bson.UnmarshalExtJSON(buf, true, destination)
if err != nil {
return err
}
return nil
}
func WriteJson(filename string, data interface{}) error {
buf, err := json.MarshalIndent(data, "", " ")

View File

@@ -11,7 +11,7 @@ import (
"time"
"github.com/Masterminds/semver"
"github.com/kr/pretty"
"github.com/stretchr/testify/assert"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/bson/primitive"
"go.mongodb.org/mongo-driver/mongo"
@@ -153,28 +153,21 @@ func TestExplain(t *testing.T) {
ex := New(ctx, client)
for _, file := range files {
t.Run(file.Name(), func(t *testing.T) {
eq := proto.ExampleQuery{}
err := tutil.LoadBson(dir+file.Name(), &eq)
if err != nil {
t.Fatalf("cannot load sample %s: %s", dir+file.Name(), err)
}
pretty.Println(eq)
query, err := ioutil.ReadFile(dir + file.Name())
assert.NoError(t, err)
query, err := bson.MarshalExtJSON(eq, true, true)
if err != nil {
t.Fatalf("cannot marshal json %s: %s", dir+file.Name(), err)
}
got, err := ex.Run("", query)
expectErrMsg := expectError[file.Name()]
idx := strings.TrimSuffix(file.Name(), ".new.bson")
expectErrMsg := expectError[idx]
if (err != nil) != expectErrMsg {
t.Fatalf("explain error for %q \n %s\nshould be '%v' but was '%v'", string(query), file.Name(), expectErrMsg, err)
t.Errorf("explain error for %q \n %s\nshould be '%v' but was '%v'", string(query), file.Name(), expectErrMsg, err)
}
if err == nil {
result := proto.BsonD{}
err = bson.UnmarshalExtJSON(got, true, &result)
if err != nil {
t.Fatalf("cannot unmarshal json explain result: %s", err)
t.Errorf("cannot unmarshal json explain result: %s", err)
}
}
})

@@ -1,24 +1,21 @@
package fingerprinter
import (
"encoding/json"
"fmt"
"regexp"
"sort"
"strings"
"github.com/percona/percona-toolkit/src/go/mongolib/proto"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/bson/primitive"
"github.com/percona/percona-toolkit/src/go/mongolib/proto"
"github.com/percona/percona-toolkit/src/go/mongolib/util"
)
var (
MAX_DEPTH_LEVEL = 10
DEFAULT_KEY_FILTERS = []string{"^shardVersion$"}
const (
maxDepthLevel = 10
)
// Fingerprint models the MongoDB query fingerprint result fields.
type Fingerprint struct {
Namespace string
Operation string
@@ -28,24 +25,28 @@ type Fingerprint struct {
Fingerprint string
}
// Fingerprinter holds unexported fields and public methods for fingerprinting queries.
type Fingerprinter struct {
keyFilters []string
}
// DefaultKeyFilters returns the default set of regular expressions used to
// filter keys out of the fingerprint.
func DefaultKeyFilters() []string {
return []string{"^shardVersion$"}
}
// NewFingerprinter returns a new Fingerprinter object
func NewFingerprinter(keyFilters []string) *Fingerprinter {
return &Fingerprinter{
keyFilters: keyFilters,
}
}
// Fingerprint processes a query input to build its fingerprint.
func (f *Fingerprinter) Fingerprint(doc proto.SystemProfile) (Fingerprint, error) {
realQuery, err := util.GetQueryField(doc)
realQuery, err := GetQueryFieldD(doc)
if err != nil {
// Try to encode doc.Query as json for prettiness
if buf, err := json.Marshal(realQuery); err == nil {
return Fingerprint{}, fmt.Errorf("%v for query %s", err, string(buf))
}
// If we cannot encode as json, return just the error message without the query
return Fingerprint{}, err
}
retKeys := keys(realQuery, f.keyFilters)
@@ -55,26 +56,22 @@ func (f *Fingerprinter) Fingerprint(doc proto.SystemProfile) (Fingerprint, error
// however MongoDB 3.0 doesn't have that field
// so we need to detect protocol by looking at actual data.
query := doc.Query
if doc.Command.Len() > 0 {
if len(doc.Command) > 0 {
query = doc.Command
}
// if there is a sort clause in the query, we have to add all fields in the sort
// fields list that are not in the query keys list (retKeys)
if sortKeys, ok := query.Map()["sort"]; ok {
if sortKeysMap, ok := sortKeys.(bson.M); ok {
sortKeys := keys(sortKeysMap, f.keyFilters)
retKeys = append(retKeys, sortKeys...)
}
sortKeys := keys(sortKeys, f.keyFilters)
retKeys = append(retKeys, sortKeys...)
}
// if there is an orderby clause in the query, we have to add all fields in the sort
// fields list that are not in the query keys list (retKeys)
if sortKeys, ok := query.Map()["orderby"]; ok {
if sortKeysMap, ok := sortKeys.(bson.M); ok {
sortKeys := keys(sortKeysMap, f.keyFilters)
retKeys = append(retKeys, sortKeys...)
}
sortKeys := keys(sortKeys, f.keyFilters)
retKeys = append(retKeys, sortKeys...)
}
// Extract operation, collection, database and namespace
@@ -88,6 +85,7 @@ func (f *Fingerprinter) Fingerprint(doc proto.SystemProfile) (Fingerprint, error
if len(ns) == 2 {
collection = ns[1]
}
switch doc.Op {
case "remove", "update":
op = doc.Op
@@ -110,7 +108,7 @@ func (f *Fingerprinter) Fingerprint(doc proto.SystemProfile) (Fingerprint, error
}
op = "find"
case "command":
if query.Len() == 0 {
if len(query) == 0 {
break
}
// first key is operation type
@@ -120,21 +118,20 @@ func (f *Fingerprinter) Fingerprint(doc proto.SystemProfile) (Fingerprint, error
case "group":
retKeys = []string{}
if g, ok := query.Map()["group"]; ok {
if m, ok := g.(bson.M); ok {
if f, ok := m["key"]; ok {
if keysMap, ok := f.(bson.M); ok {
retKeys = append(retKeys, keys(keysMap, []string{})...)
}
}
if f, ok := m["cond"]; ok {
if keysMap, ok := f.(bson.M); ok {
retKeys = append(retKeys, keys(keysMap, []string{})...)
}
}
if f, ok := m["ns"]; ok {
if ns, ok := f.(string); ok {
collection = ns
}
m, err := asMap(g)
if err != nil {
return Fingerprint{}, err
}
if f, ok := m["key"]; ok {
retKeys = append(retKeys, keys(f, []string{})...)
}
if f, ok := m["cond"]; ok {
retKeys = append(retKeys, keys(f, []string{})...)
}
if f, ok := m["ns"]; ok {
if ns, ok := f.(string); ok {
collection = ns
}
}
}
@@ -210,14 +207,14 @@ func getKeys(query interface{}, keyFilters []string, level int) []string {
switch v := query.(type) {
case primitive.M:
q = append(q, v)
case []bson.M:
q = v
case primitive.A:
case primitive.D:
for _, intval := range v {
ks = append(ks, getKeys(intval, keyFilters, level+1)...)
}
return ks
case proto.BsonD:
case []bson.M:
q = v
case primitive.A:
for _, intval := range v {
ks = append(ks, getKeys(intval, keyFilters, level+1)...)
}
@@ -233,13 +230,13 @@ func getKeys(query interface{}, keyFilters []string, level int) []string {
return ks
}
if level <= MAX_DEPTH_LEVEL {
if level <= maxDepthLevel {
for i := range q {
for key, value := range q[i] {
if shouldSkipKey(key, keyFilters) {
continue
}
if matched, _ := regexp.MatchString("^\\$", key); !matched {
if !strings.HasPrefix(key, "$") {
ks = append(ks, key)
}
@@ -250,6 +247,7 @@ func getKeys(query interface{}, keyFilters []string, level int) []string {
return ks
}
// shouldSkipKey reports whether a key should be excluded from the analysis based on the filters.
func shouldSkipKey(key string, keyFilters []string) bool {
for _, filter := range keyFilters {
if matched, _ := regexp.MatchString(filter, key); matched {
@@ -271,3 +269,71 @@ func deduplicate(s []string) (r []string) {
return r
}
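Only the tail of `deduplicate` appears in this hunk. Its contract (remove duplicates while preserving first-seen order) can be sketched with the usual map-tracking implementation; this body is an assumption, not the toolkit's actual source:

```go
package main

import "fmt"

// deduplicate returns s with duplicates removed, preserving first-seen order.
// The body is elided in the diff; this is the common map-tracking sketch.
func deduplicate(s []string) (r []string) {
	seen := map[string]struct{}{}
	for _, v := range s {
		if _, ok := seen[v]; ok {
			continue
		}
		seen[v] = struct{}{}
		r = append(r, v)
	}
	return r
}

func main() {
	fmt.Println(deduplicate([]string{"a", "b", "a", "c", "b"})) // prints: [a b c]
}
```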
// GetQueryFieldD returns the correct field to build the fingerprint, based on the operation.
func GetQueryFieldD(doc proto.SystemProfile) (primitive.M, error) {
// Proper way to detect if protocol used is "op_msg" or "op_command"
// would be to look at "doc.Protocol" field,
// however MongoDB 3.0 doesn't have that field
// so we need to detect protocol by looking at actual data.
query := doc.Query
if len(doc.Command) > 0 {
query = doc.Command
if doc.Op == "update" || doc.Op == "remove" {
return asMap(query.Map()["q"])
}
}
// "query" in MongoDB 3.0 can look like this:
// {
// "op" : "query",
// "ns" : "test.coll",
// "query" : {
// "a" : 1
// },
// ...
// }
//
// but also it can have "query" subkey like this:
// {
// "op" : "query",
// "ns" : "test.coll",
// "query" : {
// "query" : {
// "$and" : [
// ]
// },
// "orderby" : {
// "k" : -1
// }
// },
// ...
// }
//
if squery, ok := query.Map()["query"]; ok {
return asMap(squery)
}
// "query" in MongoDB 3.2+ is better structured and always has a "filter" subkey:
if squery, ok := query.Map()["filter"]; ok {
return asMap(squery)
}
// {"ns":"test.system.js","op":"query","query":{"find":"system.js"}}
if len(query) == 1 && query[0].Key == "find" {
return primitive.M{}, nil
}
return query.Map(), nil
}
func asMap(field interface{}) (primitive.M, error) {
switch v := field.(type) {
case primitive.M:
return v, nil
case primitive.D:
return v.Map(), nil
default:
return nil, fmt.Errorf("don't know how to handle %T", v)
}
}
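The new `asMap` helper normalizes either an unordered document (`primitive.M`) or an ordered one (`primitive.D`) into a map via a type switch. A self-contained sketch of the same pattern, where `pair` is a plain stand-in for the driver's `bson.E`:

```go
package main

import "fmt"

// pair mirrors one element of an ordered document (a stand-in for bson.E).
type pair struct {
	Key   string
	Value interface{}
}

// asMap normalizes either an unordered map or an ordered []pair into a map,
// mirroring the type switch the fingerprinter uses for primitive.M / primitive.D.
func asMap(field interface{}) (map[string]interface{}, error) {
	switch v := field.(type) {
	case map[string]interface{}:
		return v, nil
	case []pair:
		m := make(map[string]interface{}, len(v))
		for _, p := range v {
			m[p.Key] = p.Value
		}
		return m, nil
	default:
		return nil, fmt.Errorf("don't know how to handle %T", v)
	}
}

func main() {
	m, err := asMap([]pair{{"a", 1}, {"b", 2}})
	fmt.Println(m["a"], m["b"], err) // prints: 1 2 <nil>
}
```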

@@ -5,16 +5,20 @@ import (
"io/ioutil"
"log"
"os"
"path/filepath"
"reflect"
"strings"
"testing"
"github.com/percona/percona-toolkit/src/go/lib/tutil"
"github.com/percona/percona-toolkit/src/go/mongolib/proto"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.mongodb.org/mongo-driver/bson"
)
const (
samples = "/src/go/tests/"
samples = "/testdata/"
)
type testVars struct {
@@ -32,27 +36,11 @@ func TestMain(m *testing.M) {
os.Exit(m.Run())
}
func ExampleFingerprint() {
doc := proto.SystemProfile{}
err := tutil.LoadBson(vars.RootPath+samples+"fingerprinter_doc.json", &doc)
if err != nil {
panic(err)
}
fp := NewFingerprinter(DEFAULT_KEY_FILTERS)
got, err := fp.Fingerprint(doc)
if err != nil {
panic(err)
}
fmt.Println(got.Fingerprint)
// Output: FIND sbtest3 c,k,pad
}
func TestFingerprint(t *testing.T) {
func TestSingleFingerprint(t *testing.T) {
doc := proto.SystemProfile{}
doc.Ns = "db.feedback"
doc.Op = "query"
doc.Query = proto.BsonD{
doc.Query = bson.D{
{"find", "feedback"},
{"filter", bson.M{
"tool": "Atlas",
@@ -77,8 +65,8 @@ func TestFingerprint(t *testing.T) {
func TestFingerprints(t *testing.T) {
t.Parallel()
dir := vars.RootPath + samples + "/doc/out/"
dirExpect := vars.RootPath + samples + "/expect/fingerprints/"
dir := filepath.Join(vars.RootPath, "/src/go/tests/doc/profiles")
dirExpect := filepath.Join(vars.RootPath, "/src/go/tests/expect/fingerprints/")
files, err := ioutil.ReadDir(dir)
if err != nil {
t.Fatalf("cannot list samples: %s", err)
@@ -87,16 +75,19 @@ func TestFingerprints(t *testing.T) {
for _, file := range files {
t.Run(file.Name(), func(t *testing.T) {
doc := proto.SystemProfile{}
err = tutil.LoadBson(dir+file.Name(), &doc)
if err != nil {
t.Fatalf("cannot load sample %s: %s", dir+file.Name(), err)
}
fp := NewFingerprinter(DEFAULT_KEY_FILTERS)
err = tutil.LoadBson(filepath.Join(dir, file.Name()), &doc)
assert.NoError(t, err)
fp := NewFingerprinter(DefaultKeyFilters())
got, err := fp.Fingerprint(doc)
require.NoError(t, err)
if err != nil {
t.Errorf("cannot create fingerprint: %s", err)
}
fExpect := dirExpect + file.Name()
fExpect := filepath.Join(dirExpect, file.Name())
fExpect = strings.TrimSuffix(fExpect, ".bson")
if tutil.ShouldUpdateSamples() {
err := tutil.WriteJson(fExpect, got)
if err != nil {
@@ -105,6 +96,7 @@ func TestFingerprints(t *testing.T) {
}
var expect Fingerprint
err = tutil.LoadJson(fExpect, &expect)
if err != nil {
t.Fatalf("cannot load expected data %s: %s", fExpect, err)
}

@@ -13,6 +13,7 @@ import (
"github.com/percona/percona-toolkit/src/go/mongolib/fingerprinter"
"github.com/percona/percona-toolkit/src/go/mongolib/stats"
"github.com/percona/percona-toolkit/src/go/pt-mongodb-query-digest/filter"
"github.com/stretchr/testify/assert"
"go.mongodb.org/mongo-driver/bson/primitive"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
@@ -62,7 +63,8 @@ func TestRegularIterator(t *testing.T) {
if res.Err() != nil {
t.Fatalf("Cannot enable profiler: %s", res.Err())
}
client.Database(database).Drop(ctx)
err = client.Database(database).Drop(ctx)
assert.NoError(t, err)
// re-enable the profiler
res = client.Database("admin").RunCommand(ctx, primitive.D{{"profile", 2}, {"slowms", 2}})
@@ -73,7 +75,8 @@ func TestRegularIterator(t *testing.T) {
// run some queries to have something to profile
count := 1000
for j := 0; j < count; j++ {
client.Database("test").Collection("testc").InsertOne(ctx, primitive.M{"number": j})
_, err := client.Database("test").Collection("testc").InsertOne(ctx, primitive.M{"number": j})
assert.NoError(t, err)
time.Sleep(20 * time.Millisecond)
}
@@ -83,7 +86,7 @@ func TestRegularIterator(t *testing.T) {
}
filters := []filter.Filter{}
fp := fingerprinter.NewFingerprinter(fingerprinter.DEFAULT_KEY_FILTERS)
fp := fingerprinter.NewFingerprinter(fingerprinter.DefaultKeyFilters())
s := stats.New(fp)
prof := NewProfiler(cursor, filters, nil, s)
prof.Start(ctx)

@@ -99,7 +99,7 @@ func (d BsonD) MarshalJSON() ([]byte, error) {
}
// marshal key
key, err := bson.MarshalExtJSON(v.Key, true, true)
key, err := bson.MarshalExtJSON(v.Key, false, true)
if err != nil {
return nil, err
}
@@ -120,7 +120,7 @@ func (d BsonD) MarshalJSON() ([]byte, error) {
val = append(val, '"')
} else {
// marshal value
val, err = bson.MarshalExtJSON(v.Value, true, true)
val, err = bson.MarshalExtJSON(v.Value, false, true)
if err != nil {
return nil, err
}

@@ -62,7 +62,7 @@ type OplogColStats struct {
MaxSize int64
IndexSizes bson.M
GleStats struct {
LastOpTime int64
LastOpTime time.Time
ElectionId string
} `bson:"$gleStats"`
StorageSize int64

@@ -79,10 +79,10 @@ type SystemProfile struct {
NumYield int `bson:"numYield"`
Op string `bson:"op"`
Protocol string `bson:"protocol"`
Query BsonD `bson:"query"`
UpdateObj BsonD `bson:"updateobj"`
Command BsonD `bson:"command"`
OriginatingCommand BsonD `bson:"originatingCommand"`
Query bson.D `bson:"query"`
UpdateObj bson.D `bson:"updateobj"`
Command bson.D `bson:"command"`
OriginatingCommand bson.D `bson:"originatingCommand"`
ResponseLength int `bson:"responseLength"`
Ts time.Time `bson:"ts"`
User string `bson:"user"`
@@ -104,10 +104,10 @@ func NewExampleQuery(doc SystemProfile) ExampleQuery {
type ExampleQuery struct {
Ns string `bson:"ns" json:"ns"`
Op string `bson:"op" json:"op"`
Query BsonD `bson:"query,omitempty" json:"query,omitempty"`
Command BsonD `bson:"command,omitempty" json:"command,omitempty"`
OriginatingCommand BsonD `bson:"originatingCommand,omitempty" json:"originatingCommand,omitempty"`
UpdateObj BsonD `bson:"updateobj,omitempty" json:"updateobj,omitempty"`
Query bson.D `bson:"query,omitempty" json:"query,omitempty"`
Command bson.D `bson:"command,omitempty" json:"command,omitempty"`
OriginatingCommand bson.D `bson:"originatingCommand,omitempty" json:"originatingCommand,omitempty"`
UpdateObj bson.D `bson:"updateobj,omitempty" json:"updateobj,omitempty"`
}
func (self ExampleQuery) Db() string {
@@ -124,7 +124,7 @@ func (self ExampleQuery) ExplainCmd() bson.D {
switch self.Op {
case "query":
if cmd.Len() == 0 {
if len(cmd) == 0 {
cmd = self.Query
}
@@ -137,15 +137,15 @@ func (self ExampleQuery) ExplainCmd() bson.D {
// "$explain" : true
// },
if _, ok := cmd.Map()["$explain"]; ok {
cmd = BsonD{
cmd = bson.D{
{"explain", ""},
}
break
}
if cmd.Len() == 0 || cmd[0].Key != "find" {
if len(cmd) == 0 || cmd[0].Key != "find" {
var filter interface{}
if cmd.Len() > 0 && cmd[0].Key == "query" {
if len(cmd) > 0 && cmd[0].Key == "query" {
filter = cmd[0].Value
} else {
filter = cmd
@@ -157,7 +157,7 @@ func (self ExampleQuery) ExplainCmd() bson.D {
coll = s[1]
}
cmd = BsonD{
cmd = bson.D{
{"find", coll},
{"filter", filter},
}
@@ -178,7 +178,6 @@ func (self ExampleQuery) ExplainCmd() bson.D {
} else {
cmd = append(cmd[:i], cmd[i+1:]...)
}
break
}
}
}
@@ -188,13 +187,13 @@ func (self ExampleQuery) ExplainCmd() bson.D {
if len(s) == 2 {
coll = s[1]
}
if cmd.Len() == 0 {
cmd = BsonD{
if len(cmd) == 0 {
cmd = bson.D{
{Key: "q", Value: self.Query},
{Key: "u", Value: self.UpdateObj},
}
}
cmd = BsonD{
cmd = bson.D{
{Key: "update", Value: coll},
{Key: "updates", Value: []interface{}{cmd}},
}
@@ -204,34 +203,34 @@ func (self ExampleQuery) ExplainCmd() bson.D {
if len(s) == 2 {
coll = s[1]
}
if cmd.Len() == 0 {
cmd = BsonD{
if len(cmd) == 0 {
cmd = bson.D{
{Key: "q", Value: self.Query},
// we can't determine if limit was 1 or 0 so we assume 0
{Key: "limit", Value: 0},
}
}
cmd = BsonD{
cmd = bson.D{
{Key: "delete", Value: coll},
{Key: "deletes", Value: []interface{}{cmd}},
}
case "insert":
if cmd.Len() == 0 {
if len(cmd) == 0 {
cmd = self.Query
}
if cmd.Len() == 0 || cmd[0].Key != "insert" {
if len(cmd) == 0 || cmd[0].Key != "insert" {
coll := ""
s := strings.SplitN(self.Ns, ".", 2)
if len(s) == 2 {
coll = s[1]
}
cmd = BsonD{
cmd = bson.D{
{"insert", coll},
}
}
case "getmore":
if self.OriginatingCommand.Len() > 0 {
if len(self.OriginatingCommand) > 0 {
cmd = self.OriginatingCommand
for i := range cmd {
// drop $db param as it is not supported in MongoDB 3.0
@@ -245,16 +244,18 @@ func (self ExampleQuery) ExplainCmd() bson.D {
}
}
} else {
cmd = BsonD{
cmd = bson.D{
{Key: "getmore", Value: ""},
}
}
case "command":
if cmd.Len() == 0 || cmd[0].Key != "group" {
cmd = sanitizeCommand(cmd)
if len(cmd) == 0 || cmd[0].Key != "group" {
break
}
if group, ok := cmd[0].Value.(BsonD); ok {
if group, ok := cmd[0].Value.(bson.D); ok {
for i := range group {
// for MongoDB <= 3.2
// "$reduce" : function () {}
@@ -284,3 +285,28 @@ func (self ExampleQuery) ExplainCmd() bson.D {
},
}
}
func sanitizeCommand(cmd bson.D) bson.D {
if len(cmd) < 1 {
return cmd
}
key := cmd[0].Key
if key != "count" && key != "distinct" {
return cmd
}
for i := range cmd {
// drop $db param as it is not supported in MongoDB 3.0
if cmd[i].Key == "$db" {
if len(cmd)-1 == i {
cmd = cmd[:i]
} else {
cmd = append(cmd[:i], cmd[i+1:]...)
}
break
}
}
return cmd
}
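`sanitizeCommand` drops the `$db` element in place with the `append(cmd[:i], cmd[i+1:]...)` idiom, special-casing a match at the final index. A minimal sketch of that removal idiom over a plain key/value slice (`pair` and `dropKey` are illustrative names, with `pair` standing in for `bson.E`):

```go
package main

import "fmt"

type pair struct {
	Key   string
	Value interface{}
}

// dropKey removes the first element whose Key matches, using the same
// append(cmd[:i], cmd[i+1:]...) idiom as sanitizeCommand, including the
// special case when the match is the final element.
func dropKey(cmd []pair, key string) []pair {
	for i := range cmd {
		if cmd[i].Key == key {
			if len(cmd)-1 == i {
				cmd = cmd[:i]
			} else {
				cmd = append(cmd[:i], cmd[i+1:]...)
			}
			break
		}
	}
	return cmd
}

func main() {
	cmd := []pair{{"count", "people"}, {"query", nil}, {"$db", "sbtest"}}
	// The $db element is gone; count and query remain in order.
	fmt.Println(dropKey(cmd, "$db"))
}
```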

@@ -0,0 +1,44 @@
package proto_test
import (
"testing"
"github.com/percona/percona-toolkit/src/go/mongolib/proto"
"github.com/stretchr/testify/assert"
"go.mongodb.org/mongo-driver/bson"
)
func TestExplainCmd(t *testing.T) {
tests := []struct {
inDoc []byte
want []byte
}{
{
inDoc: []byte(`{"ns":"sbtest.orders","op":"command","command":{"aggregate":"orders",` +
`"pipeline":[{"$match":{"status":"A"}},{"$group":{"_id":"$cust_id","total":{"$sum":"$amount"}}},` +
`{"$sort":{"total":-1}}],"cursor":{},"$db":"sbtest"}}`),
want: []byte(`{"explain":{"aggregate":"orders","pipeline":[{"$match":{"status":"A"}},` +
`{"$group":{"_id":"$cust_id","total":{"$sum":"$amount"}}},` +
`{"$sort":{"total":-1}}],"cursor":{},"$db":"sbtest"}}`),
},
{
inDoc: []byte(`{"ns":"sbtest.people","op":"command","command":` +
`{"count":"people","query":{},"fields":{},"$db":"sbtest"}}`),
want: []byte(`{"explain":{"count":"people","query":{},"fields":{}}}`),
},
}
for _, tc := range tests {
var want bson.D
err := bson.UnmarshalExtJSON(tc.want, false, &want)
assert.NoError(t, err)
var doc proto.SystemProfile
err = bson.UnmarshalExtJSON(tc.inDoc, false, &doc)
assert.NoError(t, err)
eq := proto.NewExampleQuery(doc)
assert.Equal(t, want, eq.ExplainCmd())
}
}

@@ -138,7 +138,7 @@ func TestStats(t *testing.T) {
t.Fatalf("cannot load samples: %s", err.Error())
}
fp := fingerprinter.NewFingerprinter(fingerprinter.DEFAULT_KEY_FILTERS)
fp := fingerprinter.NewFingerprinter(fingerprinter.DefaultKeyFilters())
s := New(fp)
err = s.Add(docs[1])
@@ -184,7 +184,7 @@ func TestStatsSingle(t *testing.T) {
t.Fatalf("cannot list samples: %s", err)
}
fp := fingerprinter.NewFingerprinter(fingerprinter.DEFAULT_KEY_FILTERS)
fp := fingerprinter.NewFingerprinter(fingerprinter.DefaultKeyFilters())
for _, file := range files {
f := file.Name()
@@ -217,7 +217,6 @@ func TestStatsSingle(t *testing.T) {
}
})
}
}
func TestStatsAll(t *testing.T) {
@@ -231,7 +230,7 @@ func TestStatsAll(t *testing.T) {
t.Fatalf("cannot list samples: %s", err)
}
fp := fingerprinter.NewFingerprinter(fingerprinter.DEFAULT_KEY_FILTERS)
fp := fingerprinter.NewFingerprinter(fingerprinter.DefaultKeyFilters())
s := New(fp)
for _, file := range files {
@@ -440,7 +439,7 @@ func TestAvailableMetrics(t *testing.T) {
fExpect := dirExpect + "cmd_metric.md"
if tutil.ShouldUpdateSamples() {
err = ioutil.WriteFile(fExpect, bufGot.Bytes(), 0777)
err = ioutil.WriteFile(fExpect, bufGot.Bytes(), os.ModePerm)
if err != nil {
fmt.Printf("cannot update samples: %s", err.Error())
}

@@ -2,7 +2,6 @@ package util
import (
"context"
"fmt"
"sort"
"strings"
@@ -13,8 +12,13 @@ import (
"go.mongodb.org/mongo-driver/mongo/options"
)
const (
shardingNotEnabledErrorCode = 203
)
var (
CANNOT_GET_QUERY_ERROR = errors.New("cannot get query field from the profile document (it is not a map)")
CannotGetQueryError = errors.New("cannot get query field from the profile document (it is not a map)")
ShardingNotEnabledError = errors.New("sharding not enabled")
)
func GetReplicasetMembers(ctx context.Context, clientOptions *options.ClientOptions) ([]proto.Members, error) {
@@ -92,7 +96,7 @@ func GetReplicasetMembers(ctx context.Context, clientOptions *options.ClientOpti
membersMap[m.Name] = m
}
client.Disconnect(ctx)
client.Disconnect(ctx) //nolint
}
for _, member := range membersMap {
@@ -119,6 +123,9 @@ func GetHostnames(ctx context.Context, client *mongo.Client) ([]string, error) {
var shardsMap proto.ShardsMap
smRes := client.Database("admin").RunCommand(ctx, primitive.M{"getShardMap": 1})
if smRes.Err() != nil {
if e, ok := smRes.Err().(mongo.CommandError); ok && e.Code == shardingNotEnabledErrorCode {
return nil, ShardingNotEnabledError // standalone instance
}
return nil, errors.Wrap(smRes.Err(), "cannot getShardMap for GetHostnames")
}
if err := smRes.Decode(&shardsMap); err != nil {
@@ -134,7 +141,8 @@ func GetHostnames(ctx context.Context, client *mongo.Client) ([]string, error) {
}
}
return nil, fmt.Errorf("cannot get shards map")
// Some MongoDB servers won't return ShardingNotEnabledError for standalone instances.
return nil, nil // standalone instance
}
func buildHostsListFromReplStatus(replStatus proto.ReplicaSetStatus) []string {
@@ -257,7 +265,7 @@ func GetQueryField(doc proto.SystemProfile) (primitive.M, error) {
// however MongoDB 3.0 doesn't have that field
// so we need to detect protocol by looking at actual data.
query := doc.Query
if doc.Command.Len() > 0 {
if len(doc.Command) > 0 {
query = doc.Command
if doc.Op == "update" || doc.Op == "remove" {
if squery, ok := query.Map()["q"]; ok {
@@ -265,7 +273,7 @@ func GetQueryField(doc proto.SystemProfile) (primitive.M, error) {
if ssquery, ok := squery.(primitive.M); ok {
return ssquery, nil
}
return nil, CANNOT_GET_QUERY_ERROR
return nil, CannotGetQueryError
}
}
}
@@ -301,7 +309,7 @@ func GetQueryField(doc proto.SystemProfile) (primitive.M, error) {
if ssquery, ok := squery.(primitive.M); ok {
return ssquery, nil
}
return nil, CANNOT_GET_QUERY_ERROR
return nil, CannotGetQueryError
}
// "query" in MongoDB 3.2+ is better structured and always has a "filter" subkey:
@@ -309,7 +317,7 @@ func GetQueryField(doc proto.SystemProfile) (primitive.M, error) {
if ssquery, ok := squery.(primitive.M); ok {
return ssquery, nil
}
return nil, CANNOT_GET_QUERY_ERROR
return nil, CannotGetQueryError
}
// {"ns":"test.system.js","op":"query","query":{"find":"system.js"}}

@@ -14,24 +14,34 @@ import (
func TestGetHostnames(t *testing.T) {
testCases := []struct {
name string
uri string
want []string
name string
uri string
want []string
wantError bool
}{
{
name: "from_mongos",
uri: fmt.Sprintf("mongodb://%s:%s@%s:%s", tu.MongoDBUser, tu.MongoDBPassword, tu.MongoDBHost, tu.MongoDBMongosPort),
want: []string{"127.0.0.1:17001", "127.0.0.1:17002", "127.0.0.1:17004", "127.0.0.1:17005", "127.0.0.1:17007"},
name: "from_mongos",
uri: fmt.Sprintf("mongodb://%s:%s@%s:%s", tu.MongoDBUser, tu.MongoDBPassword, tu.MongoDBHost, tu.MongoDBMongosPort),
want: []string{"127.0.0.1:17001", "127.0.0.1:17002", "127.0.0.1:17004", "127.0.0.1:17005", "127.0.0.1:17007"},
wantError: false,
},
{
name: "from_mongod",
uri: fmt.Sprintf("mongodb://%s:%s@%s:%s", tu.MongoDBUser, tu.MongoDBPassword, tu.MongoDBHost, tu.MongoDBShard1PrimaryPort),
want: []string{"127.0.0.1:17001", "127.0.0.1:17002", "127.0.0.1:17003"},
name: "from_mongod",
uri: fmt.Sprintf("mongodb://%s:%s@%s:%s", tu.MongoDBUser, tu.MongoDBPassword, tu.MongoDBHost, tu.MongoDBShard1PrimaryPort),
want: []string{"127.0.0.1:17001", "127.0.0.1:17002", "127.0.0.1:17003"},
wantError: false,
},
{
name: "from_non_sharded",
uri: fmt.Sprintf("mongodb://%s:%s@%s:%s", tu.MongoDBUser, tu.MongoDBPassword, tu.MongoDBHost, tu.MongoDBShard3PrimaryPort),
want: []string{"127.0.0.1:17021", "127.0.0.1:17022", "127.0.0.1:17023"},
name: "from_non_sharded",
uri: fmt.Sprintf("mongodb://%s:%s@%s:%s", tu.MongoDBUser, tu.MongoDBPassword, tu.MongoDBHost, tu.MongoDBShard3PrimaryPort),
want: []string{"127.0.0.1:17021", "127.0.0.1:17022", "127.0.0.1:17023"},
wantError: false,
},
{
name: "from_standalone",
uri: fmt.Sprintf("mongodb://%s:%s@%s:%s", tu.MongoDBUser, tu.MongoDBPassword, tu.MongoDBHost, tu.MongoDBStandalonePort),
want: nil,
wantError: true,
},
}
@@ -49,12 +59,12 @@ func TestGetHostnames(t *testing.T) {
}
hostnames, err := GetHostnames(ctx, client)
if err != nil {
t.Errorf("getHostnames: %v", err)
if err != nil && !test.wantError {
t.Errorf("Expecting error=nil, got: %v", err)
}
if !reflect.DeepEqual(hostnames, test.want) {
t.Errorf("Invalid hostnames from mongos. Got: %+v, want %+v", hostnames, test.want)
t.Errorf("Invalid hostnames. Got: %+v, want %+v", hostnames, test.want)
}
})
}
@@ -81,24 +91,34 @@ func TestGetServerStatus(t *testing.T) {
func TestGetReplicasetMembers(t *testing.T) {
testCases := []struct {
name string
uri string
want int
name string
uri string
want int
wantErr bool
}{
{
name: "from_mongos",
uri: fmt.Sprintf("mongodb://%s:%s@%s:%s", tu.MongoDBUser, tu.MongoDBPassword, tu.MongoDBHost, tu.MongoDBMongosPort),
want: 7,
name: "from_mongos",
uri: fmt.Sprintf("mongodb://%s:%s@%s:%s", tu.MongoDBUser, tu.MongoDBPassword, tu.MongoDBHost, tu.MongoDBMongosPort),
want: 7,
wantErr: false,
},
{
name: "from_mongod",
uri: fmt.Sprintf("mongodb://%s:%s@%s:%s", tu.MongoDBUser, tu.MongoDBPassword, tu.MongoDBHost, tu.MongoDBShard1PrimaryPort),
want: 3,
name: "from_mongod",
uri: fmt.Sprintf("mongodb://%s:%s@%s:%s", tu.MongoDBUser, tu.MongoDBPassword, tu.MongoDBHost, tu.MongoDBShard1PrimaryPort),
want: 3,
wantErr: false,
},
{
name: "from_non_sharded",
uri: fmt.Sprintf("mongodb://%s:%s@%s:%s", tu.MongoDBUser, tu.MongoDBPassword, tu.MongoDBHost, tu.MongoDBShard3PrimaryPort),
want: 3,
name: "from_non_sharded",
uri: fmt.Sprintf("mongodb://%s:%s@%s:%s", tu.MongoDBUser, tu.MongoDBPassword, tu.MongoDBHost, tu.MongoDBShard3PrimaryPort),
want: 3,
wantErr: false,
},
{
name: "from_standalone",
uri: fmt.Sprintf("mongodb://%s:%s@%s:%s", tu.MongoDBUser, tu.MongoDBPassword, tu.MongoDBHost, tu.MongoDBStandalonePort),
want: 0,
wantErr: true,
},
}
@@ -109,7 +129,7 @@ func TestGetReplicasetMembers(t *testing.T) {
defer cancel()
rsm, err := GetReplicasetMembers(ctx, clientOptions)
if err != nil {
if err != nil && !test.wantErr {
t.Errorf("Got an error while getting replicaset members: %s", err)
}
if len(rsm) != test.want {
@@ -146,7 +166,7 @@ func TestGetShardedHosts(t *testing.T) {
},
}
for _, test := range testCases {
for i, test := range testCases {
t.Run(test.name, func(t *testing.T) {
clientOptions := options.Client().ApplyURI(test.uri)
ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
@@ -156,6 +176,10 @@ func TestGetShardedHosts(t *testing.T) {
if err != nil {
t.Errorf("Cannot get a new client for host %s: %s", test.uri, err)
}
if client == nil {
t.Fatalf("mongodb client is nil i: %d, uri: %s\n", i, test.uri)
}
if err := client.Connect(ctx); err != nil {
t.Errorf("Cannot connect to host %s: %s", test.uri, err)
}
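The updated `TestGetHostnames` and `TestGetReplicasetMembers` add a `wantError` column so the standalone case can expect a failure. The general shape of that table-driven pattern, with a toy `divide` function standing in for the MongoDB calls:

```go
package main

import (
	"errors"
	"fmt"
)

// divide is a trivial stand-in for a call that can fail.
func divide(a, b int) (int, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	tests := []struct {
		name    string
		a, b    int
		want    int
		wantErr bool
	}{
		{"ok", 6, 3, 2, false},
		{"by_zero", 1, 0, 0, true},
	}
	for _, test := range tests {
		got, err := divide(test.a, test.b)
		// Only an *unexpected* error state fails the case, mirroring
		// `if err != nil && !test.wantError` in the updated tests.
		if (err != nil) != test.wantErr {
			fmt.Printf("%s: unexpected error state: %v\n", test.name, err)
			continue
		}
		if got != test.want {
			fmt.Printf("%s: got %d, want %d\n", test.name, got, test.want)
		}
	}
	fmt.Println("done") // prints: done
}
```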

@@ -161,7 +161,7 @@ func main() {
panic(err)
}
fp := fingerprinter.NewFingerprinter(fingerprinter.DEFAULT_KEY_FILTERS)
fp := fingerprinter.NewFingerprinter(fingerprinter.DefaultKeyFilters())
s := stats.New(fp)
prof := profiler.NewProfiler(cursor, filters, nil, s)
prof.Start(ctx)

@@ -38,13 +38,19 @@ const (
DefaultRunningOpsSamples = 5
DefaultOutputFormat = "text"
typeMongos = "mongos"
// Exit Codes
cannotFormatResults = 1
cannotParseCommandLineParameters = 2
cannotGetHostInfo = 3
cannotGetReplicasetMembers = 4
)
var (
Build string = "2020-04-23" // nolint
GoVersion string = "1.14.1" // nolint
Version string = "3.2.0" // nolint
Commit string // nolint
Version string = "3.2.0"
Commit string
)
type TimedStats struct {
@@ -158,7 +164,7 @@ func main() {
opts, err := parseFlags()
if err != nil {
log.Errorf("cannot get parameters: %s", err.Error())
os.Exit(2)
os.Exit(cannotParseCommandLineParameters)
}
if opts == nil && err == nil {
return
@@ -206,7 +212,7 @@ func main() {
defer client.Disconnect(ctx) // nolint
hostnames, err := util.GetHostnames(ctx, client)
if err != nil {
if err != nil && errors.Is(err, util.ShardingNotEnabledError) {
log.Errorf("Cannot get hostnames: %s", err)
}
log.Debugf("hostnames: %v", hostnames)
@@ -217,12 +223,11 @@ func main() {
if err != nil {
message := fmt.Sprintf("Cannot get host info for %q: %s", opts.Host, err.Error())
log.Errorf(message)
os.Exit(2)
os.Exit(cannotGetHostInfo)
}
if ci.ReplicaMembers, err = util.GetReplicasetMembers(ctx, clientOptions); err != nil {
log.Warnf("[Error] cannot get replicaset members: %v\n", err)
os.Exit(2)
}
log.Debugf("replicaMembers:\n%+v\n", ci.ReplicaMembers)
@@ -270,10 +275,9 @@ func main() {
out, err := formatResults(ci, opts.OutputFormat)
if err != nil {
log.Errorf("Cannot format the results: %s", err.Error())
os.Exit(1)
os.Exit(cannotFormatResults)
}
fmt.Println(string(out))
}
func formatResults(ci *collectedInfo, format string) ([]byte, error) {

@@ -50,9 +50,9 @@ var (
IPv6PG12Port = getVar("PG_IPV6_12_PORT", ipv6PG12Port)
PG9DockerIP = getContainerIP(pg9Container)
PG10DockerIP = getContainerIP(pg9Container)
PG11DockerIP = getContainerIP(pg9Container)
PG12DockerIP = getContainerIP(pg9Container)
PG10DockerIP = getContainerIP(pg10Container)
PG11DockerIP = getContainerIP(pg11Container)
PG12DockerIP = getContainerIP(pg12Container)
DefaultPGPort = "5432"
)

@@ -127,12 +127,33 @@ func connect(dsn string) (*sql.DB, error) {
func funcsMap() template.FuncMap {
return template.FuncMap{
"trim": func(s string, size int) string {
"trim": func(size int, s string) string {
if len(s) < size {
return s
}
return s[:size]
return s[:size] + "..."
},
"convertnullstring": func(s sql.NullString) string {
if s.Valid {
return s.String
} else {
return ""
}
},
"convertnullint64": func(s sql.NullInt64) int64 {
if s.Valid {
return s.Int64
} else {
return 0
}
},
"convertnullfloat64": func(s sql.NullFloat64) float64 {
if s.Valid {
return s.Float64
} else {
return 0.0
}
},
}
}

@@ -6,30 +6,38 @@ import (
"testing"
"github.com/percona/percona-toolkit/src/go/pt-pg-summary/internal/tu"
"github.com/percona/percona-toolkit/src/go/lib/pginfo"
"github.com/sirupsen/logrus"
)
type Test struct {
name string
host string
port string
username string
password string
}
var tests []Test = []Test{
{"IPv4PG9", tu.IPv4Host, tu.IPv4PG9Port, tu.Username, tu.Password},
{"IPv4PG10", tu.IPv4Host, tu.IPv4PG10Port, tu.Username, tu.Password},
{"IPv4PG11", tu.IPv4Host, tu.IPv4PG11Port, tu.Username, tu.Password},
{"IPv4PG12", tu.IPv4Host, tu.IPv4PG12Port, tu.Username, tu.Password},
}
var logger = logrus.New()
func TestMain(m *testing.M) {
logger.SetLevel(logrus.WarnLevel)
os.Exit(m.Run())
}
func TestConnection(t *testing.T) {
tests := []struct {
name string
host string
port string
username string
password string
}{
{"IPv4PG9", tu.IPv4Host, tu.IPv4PG9Port, tu.Username, tu.Password},
{"IPv4PG10", tu.IPv4Host, tu.IPv4PG10Port, tu.Username, tu.Password},
{"IPv4PG11", tu.IPv4Host, tu.IPv4PG11Port, tu.Username, tu.Password},
{"IPv4PG12", tu.IPv4Host, tu.IPv4PG12Port, tu.Username, tu.Password},
// use IPV6 for PostgreSQL 9
//{"IPV6", tu.IPv6Host, tu.IPv6PG9Port, tu.Username, tu.Password},
// use an "external" IP to simulate a remote host
{"remote_host", tu.PG9DockerIP, tu.DefaultPGPort, tu.Username, tu.Password},
}
// use an "external" IP to simulate a remote host
tests := append(tests, Test{"remote_host", tu.PG9DockerIP, tu.DefaultPGPort, tu.Username, tu.Password})
// use IPV6 for PostgreSQL 9
//tests := append(tests, Test{"IPV6", tu.IPv6Host, tu.IPv6PG9Port, tu.Username, tu.Password})
for _, test := range tests {
test := test
t.Run(test.name, func(t *testing.T) {
@@ -42,3 +50,77 @@ func TestConnection(t *testing.T) {
}
}
func TestNewWithLogger(t *testing.T) {
for _, test := range tests {
test := test
t.Run(test.name, func(t *testing.T) {
dsn := fmt.Sprintf("host=%s port=%s user=%s password=%s sslmode=disable dbname=%s",
test.host, test.port, test.username, test.password, "postgres")
db, err := connect(dsn)
if err != nil {
t.Errorf("Cannot connect to the db using %q: %s", dsn, err)
}
if _, err := pginfo.NewWithLogger(db, nil, 30, logger); err != nil {
t.Errorf("Cannot run NewWithLogger using %q: %s", dsn, err)
}
})
}
}
func TestCollectGlobalInfo(t *testing.T) {
for _, test := range tests {
test := test
t.Run(test.name, func(t *testing.T) {
dsn := fmt.Sprintf("host=%s port=%s user=%s password=%s sslmode=disable dbname=%s",
test.host, test.port, test.username, test.password, "postgres")
db, err := connect(dsn)
if err != nil {
t.Fatalf("Cannot connect to the db using %q: %s", dsn, err)
}
info, err := pginfo.NewWithLogger(db, nil, 30, logger)
if err != nil {
t.Fatalf("Cannot run NewWithLogger using %q: %s", dsn, err)
}
errs := info.CollectGlobalInfo(db)
if len(errs) > 0 {
logger.Errorf("Cannot collect info")
for _, err := range errs {
logger.Error(err)
}
t.Errorf("Cannot collect global information using %q", dsn)
}
})
}
}
func TestCollectPerDatabaseInfo(t *testing.T) {
for _, test := range tests {
test := test
t.Run(test.name, func(t *testing.T) {
dsn := fmt.Sprintf("host=%s port=%s user=%s password=%s sslmode=disable dbname=%s",
test.host, test.port, test.username, test.password, "postgres")
db, err := connect(dsn)
if err != nil {
t.Fatalf("Cannot connect to the db using %q: %s", dsn, err)
}
info, err := pginfo.NewWithLogger(db, nil, 30, logger)
if err != nil {
t.Fatalf("Cannot run NewWithLogger using %q: %s", dsn, err)
}
for _, dbName := range info.DatabaseNames() {
dsn := fmt.Sprintf("host=%s port=%s user=%s password=%s sslmode=disable dbname=%s",
test.host, test.port, test.username, test.password, dbName)
conn, err := connect(dsn)
if err != nil {
t.Fatalf("Cannot connect to the %s database using %q: %s", dbName, dsn, err)
}
if err := info.CollectPerDatabaseInfo(conn, dbName); err != nil {
t.Errorf("Cannot collect information for the %s database using %q: %s", dbName, dsn, err)
}
conn.Close()
}
})
}
}
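The table-driven subtests above re-declare the loop variable (`test := test`) before each `t.Run`. In Go versions before 1.22 the loop variable is shared across iterations, so any closure created in the loop would otherwise observe the final entry. A minimal, self-contained sketch of the same capture pattern (the `collectNames` helper is hypothetical, for illustration only):

```go
package main

import "fmt"

// collectNames builds one closure per case and invokes them afterwards,
// mimicking how t.Run callbacks may outlive the loop iteration. The
// `tc := tc` shadow (as with `test := test` above) gives each closure
// its own copy of the loop variable in pre-1.22 Go.
func collectNames(cases []string) []string {
	var fns []func() string
	for _, tc := range cases {
		tc := tc // re-scope, as in `test := test` above
		fns = append(fns, func() string { return tc })
	}
	out := make([]string, 0, len(fns))
	for _, f := range fns {
		out = append(out, f())
	}
	return out
}

func main() {
	fmt.Println(collectNames([]string{"IPv4PG9", "IPv4PG10"}))
}
```

Without the shadow line, a pre-1.22 compiler would make every closure return the last case name.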

View File

@@ -12,7 +12,7 @@ import (
type ClusterInfo struct {
Usename string // usename
Time time.Time // time
ClientAddr string // client_addr
ClientAddr sql.NullString // client_addr
ClientHostname sql.NullString // client_hostname
Version string // version
Started time.Time // started

View File

@@ -27,7 +27,7 @@ func GetCounters(db XODB) ([]*Counters, error) {
var err error
// sql query
var sqlstr = `SELECT datname, numbackends, xact_commit, xact_rollback, ` +
var sqlstr = `SELECT COALESCE(datname, '') datname, numbackends, xact_commit, xact_rollback, ` +
`blks_read, blks_hit, tup_returned, tup_fetched, tup_inserted, ` +
`tup_updated, tup_deleted, conflicts, temp_files, ` +
`temp_bytes, deadlocks ` +

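The `COALESCE(datname, '')` above (and the `sql.NullString` model fields elsewhere in this change) exist because PostgreSQL 12 reports shared objects in `pg_stat_database` under `datid` 0 with a NULL `datname`. A small sketch of why a plain `string` destination is not enough (the `scanDatname` helper is illustrative, not part of pt-pg-summary):

```go
package main

import (
	"database/sql"
	"fmt"
)

// scanDatname mimics scanning pg_stat_database.datname, which on PG 12
// is NULL for the shared-objects row (datid 0). sql.NullString accepts
// the NULL, whereas rows.Scan into a plain string would fail -- hence
// either COALESCE(datname, '') in SQL or sql.NullString in the model.
func scanDatname(v interface{}) sql.NullString {
	var ns sql.NullString
	_ = ns.Scan(v) // Scan(nil) records Valid=false instead of erroring
	return ns
}

func main() {
	fmt.Println(scanDatname(nil).Valid)         // NULL datname: Valid is false
	fmt.Println(scanDatname("postgres").String) // regular datname scans through
}
```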
View File

@@ -15,7 +15,8 @@ func GetDatabases(db XODB) ([]*Databases, error) {
// sql query
var sqlstr = `SELECT datname, pg_size_pretty(pg_database_size(datname)) ` +
`FROM pg_stat_database`
`FROM pg_stat_database ` +
`WHERE datid <> 0`
// run query
XOLog(sqlstr)

View File

@@ -3,9 +3,10 @@ USERNAME=postgres
PASSWORD=root
PORT9=6432
PORT10=6433
PORT12=6435
DO_CLEANUP=0
if [ ! "$(docker ps -q -f name=pt-pg-summary_postgres9_1)" ]; then
if [ ! "$(docker ps -q -f name=go_postgres9_1)" ]; then
DO_CLEANUP=1
docker-compose up -d --force-recreate
sleep 20
@@ -53,7 +54,7 @@ xo pgsql://${USERNAME}:${PASSWORD}@127.0.0.1:${PORT9}/?sslmode=disable \
ORDER BY 1
ENDSQL
FIELDS='Usename string,Time time.Time,ClientAddr string,ClientHostname sql.NullString,Version string,Started time.Time,IsSlave bool'
FIELDS='Usename string,Time time.Time,ClientAddr sql.NullString,ClientHostname sql.NullString,Version string,Started time.Time,IsSlave bool'
COMMENT='Cluster info'
xo pgsql://${USERNAME}:${PASSWORD}@127.0.0.1:${PORT9}/?sslmode=disable \
--query-mode \
@@ -77,7 +78,7 @@ SELECT usename, now() AS "Time",
ENDSQL
COMMENT="Databases"
xo pgsql://${USERNAME}:${PASSWORD}@127.0.0.1:${PORT9}/?sslmode=disable \
xo pgsql://${USERNAME}:${PASSWORD}@127.0.0.1:${PORT12}/?sslmode=disable \
--query-mode \
--query-trim \
--query-interpolate \
@@ -87,6 +88,7 @@ xo pgsql://${USERNAME}:${PASSWORD}@127.0.0.1:${PORT9}/?sslmode=disable \
--out ./ << ENDSQL
SELECT datname, pg_size_pretty(pg_database_size(datname))
FROM pg_stat_database
WHERE datid <> 0
ENDSQL
xo pgsql://${USERNAME}:${PASSWORD}@127.0.0.1:${PORT9}/?sslmode=disable \
@@ -101,14 +103,14 @@ xo pgsql://${USERNAME}:${PASSWORD}@127.0.0.1:${PORT9}/?sslmode=disable \
GROUP BY 1
ENDSQL
xo pgsql://${USERNAME}:${PASSWORD}@127.0.0.1:${PORT9}/?sslmode=disable \
xo pgsql://${USERNAME}:${PASSWORD}@127.0.0.1:${PORT12}/?sslmode=disable \
--query-mode \
--query-interpolate \
--query-trim \
--query-type Counters \
--package models \
--out ./ << ENDSQL
SELECT datname, numbackends, xact_commit, xact_rollback,
SELECT COALESCE(datname, '') datname, numbackends, xact_commit, xact_rollback,
blks_read, blks_hit, tup_returned, tup_fetched, tup_inserted,
tup_updated, tup_deleted, conflicts, temp_files,
temp_bytes, deadlocks
@@ -116,9 +118,9 @@ xo pgsql://${USERNAME}:${PASSWORD}@127.0.0.1:${PORT9}/?sslmode=disable \
ORDER BY datname
ENDSQL
FIELDS='Relname string, Relkind string,Datname string,Count sql.NullInt64'
FIELDS='Relname string, Relkind string, Datname sql.NullString, Count sql.NullInt64'
COMMENT='Table Access'
xo pgsql://${USERNAME}:${PASSWORD}@127.0.0.1:${PORT9}/?sslmode=disable \
xo pgsql://${USERNAME}:${PASSWORD}@127.0.0.1:${PORT12}/?sslmode=disable \
--query-mode \
--query-trim \
--query-type TableAccess \
@@ -128,7 +130,7 @@ xo pgsql://${USERNAME}:${PASSWORD}@127.0.0.1:${PORT9}/?sslmode=disable \
--query-allow-nulls \
--package models \
--out ./ << ENDSQL
SELECT c.relname, c.relkind, b.datname, count(*) FROM pg_locks a
SELECT c.relname, c.relkind, b.datname datname, count(*) FROM pg_locks a
JOIN pg_stat_database b
ON a.database=b.datid
JOIN pg_class c

View File

@@ -9,10 +9,10 @@ import (
// Table Access
type TableAccess struct {
Relname string // relname
Relkind string // relkind
Datname string // datname
Count sql.NullInt64 // count
Relname string // relname
Relkind string // relkind
Datname sql.NullString // datname
Count sql.NullInt64 // count
}
// GetTableAccesses runs a custom query, returning results as TableAccess.
@@ -20,7 +20,7 @@ func GetTableAccesses(db XODB) ([]*TableAccess, error) {
var err error
// sql query
var sqlstr = `SELECT c.relname, c.relkind, b.datname, count(*) FROM pg_locks a ` +
var sqlstr = `SELECT c.relname, c.relkind, b.datname datname, count(*) FROM pg_locks a ` +
`JOIN pg_stat_database b ` +
`ON a.database=b.datid ` +
`JOIN pg_class c ` +

View File

@@ -5,8 +5,10 @@ var TPL = `{{define "report"}}
{{ template "tablespaces" .Tablespaces }}
{{ if .SlaveHosts96 -}}
{{ template "slaves_and_lag" .SlaveHosts96 }}
{{ else if .SlaveHosts10 -}}
{{- else if .SlaveHosts10 -}}
{{ template "slaves_and_lag" .SlaveHosts10 }}
{{- else -}}
{{ template "slaves_and_log_none" }}
{{- end }}
{{ template "cluster" .ClusterInfo }}
{{ template "databases" .AllDatabases }}
@@ -43,34 +45,35 @@ var TPL = `{{define "report"}}
` +
`{{ define "slaves_and_lag" -}}
##### --- Slave and the lag with Master --- ####
{{ if . -}}
+----------------------+----------------------+----------------------------------------------------+
+----------------------+----------------------+--------------------------------+-------------------+
| Application Name | Client Address | State | Lag |
+----------------------+----------------------+----------------------------------------------------+
+----------------------+----------------------+--------------------------------+-------------------+
{{ range . -}}` +
`| {{ printf "%-20s" .ApplicationName }} ` +
`| {{ printf "%-20s" .ClientAddr }} ` +
`| {{ printf "%-50s" .State }} ` +
`| {{ printf "% 4.2f" .ByteLag }}` +
`{{ end -}} {{/* end define */}}
`| {{ convertnullstring .ApplicationName | printf "%-20s" }} | ` +
`{{ convertnullstring .ClientAddr | printf "%-20s" }} | ` +
`{{ convertnullstring .State | printf "%-30s" }} | ` +
`{{ convertnullfloat64 .ByteLag | printf "% 17.2f" }} |` + "\n" +
`{{ end -}}
+----------------------+----------------------+----------------------------------------------------+
{{- else -}}
{{ end -}} {{/* end define */}}
` +
`{{- define "slaves_and_log_none" -}}
##### --- Slave and the lag with Master --- ####
There are no slave hosts
{{ end -}}
{{ end -}}
{{ end -}} {{/* end define */}}
` +
`{{ define "cluster" -}}
##### --- Cluster Information --- ####
{{ if . -}}
+------------------------------------------------------------------------------------------------------+
{{- range . }}
Usename : {{ printf "%-20s" .Usename }}
Time : {{ printf "%v" .Time }}
Client Address : {{ printf "%-20s" .ClientAddr }}
Client Hostname: {{ trim .ClientHostname.String 80 }}
Version : {{ trim .Version 80 }}
Started : {{ printf "%v" .Started }}
Is Slave : {{ .IsSlave }}
{{- range . }}
Usename : {{ trim 20 .Usename }}
Time : {{ printf "%v" .Time }}
Client Address : {{ convertnullstring .ClientAddr | trim 20 }}
Client Hostname: {{ convertnullstring .ClientHostname | trim 90 }}
Version : {{ trim 90 .Version }}
Started : {{ printf "%v" .Started }}
Is Slave : {{ .IsSlave }}
+------------------------------------------------------------------------------------------------------+
{{ end -}}
{{ else -}}
@@ -97,7 +100,7 @@ Database: {{ $dbname }}
+----------------------+------------+
| Index Name | Ratio |
+----------------------+------------+
| {{ printf "%-20s" .Name }} | {{ printf "% 5.2f" .Ratio.Float64 }} |
| {{ printf "%-20s" .Name }} | {{ convertnullfloat64 .Ratio | printf "% 5.2f" }} |
+----------------------+------------+
{{ else -}}
No stats available
@@ -144,10 +147,10 @@ Database: {{ $dbname }}
+----------------------+------------+---------+----------------------+---------+
{{ range . -}}` +
`| {{ printf "%-20s" .Usename }} | ` +
`{{ printf "%-20s" .Client.String }} | ` +
`{{ printf "%-20s" .State.String }} | ` +
`{{ printf "% 7d" .Count.Int64 }} |` + "\n" +
`{{ end -}}
`{{ convertnullstring .Client | printf "%-20s" }} | ` +
`{{ convertnullstring .State | printf "%-20s" }} | ` +
`{{ convertnullint64 .Count | printf "% 7d" }} |` + "\n" +
`{{ end -}}
+----------------------+------------+---------+----------------------+---------+
{{ else -}}
No stats available
@@ -266,8 +269,8 @@ Database: {{ $dbname }}
`{{ range . -}}
| {{ printf "%-50s" .Relname }} ` +
`| {{ printf "%1s" .Relkind }} ` +
`| {{ printf "%-30s" .Datname }} ` +
`| {{ printf "% 7d" .Count.Int64 }} ` +
`| {{ convertnullstring .Datname | printf "%-30s" }} ` +
`| {{ convertnullint64 .Count | printf "% 7d" }} ` +
"|\n" +
"{{ end }}" +
"+----------------------------------------------------" +
@@ -286,7 +289,7 @@ Database: {{ $dbname }}
" Value \n" +
`{{ range $name, $values := . -}}` +
` {{ printf "%-45s" .Name }} ` +
`: {{ printf "%-60s" .Setting }} ` +
`: {{ printf "%s" .Setting }}` +
"\n" +
"{{ end }}" +
"{{ end }}" +

View File

@@ -3,12 +3,9 @@ package sanitize
import (
"reflect"
"testing"
"github.com/kr/pretty"
)
func TestSanitizeHostnames(t *testing.T) {
want := []string{
"top - 20:05:17 up 10 days, 16:27, 1 user, load average: 0.01, 0.15, 0.19",
"Tasks: 115 total, 1 running, 114 sleeping, 0 stopped, 0 zombie",
@@ -24,8 +21,6 @@ func TestSanitizeHostnames(t *testing.T) {
copy(lines, want)
sanitizeHostnames(lines)
if !reflect.DeepEqual(lines, want) {
pretty.Println(want)
pretty.Println(lines)
t.Error("structures don't match")
}
@@ -48,7 +43,5 @@ func TestSanitizeHostnames(t *testing.T) {
sanitizeHostnames(lines)
if !reflect.DeepEqual(lines, want) {
t.Error("structures don't match")
pretty.Println(want)
pretty.Println(lines)
}
}

src/go/tests/doc/out/aggregate_2.6.12 Normal file → Executable file

@@ -1,40 +1 @@
{
"op" : "command",
"ns" : "test.$cmd",
"command" : {
"aggregate" : "coll",
"pipeline" : [
{
"$match" : {
"a" : {
"$gte" : 2
}
}
}
],
"cursor" : {
}
},
"keyUpdates" : 0,
"numYield" : 0,
"lockStats" : {
"timeLockedMicros" : {
"r" : NumberLong(1234),
"w" : NumberLong(4321)
},
"timeAcquiringMicros" : {
"r" : NumberLong(9876),
"w" : NumberLong(6789)
}
},
"responseLength" : 385,
"millis" : 42,
"execStats" : {
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.$cmd","op":"command","command":{"aggregate":"coll","pipeline":[{"$match":{"a":{"$gte":2}}}],"cursor":{}}}

src/go/tests/doc/out/aggregate_3.0.15 Normal file → Executable file

@@ -1,53 +1 @@
{
"op" : "command",
"ns" : "test.$cmd",
"command" : {
"aggregate" : "coll",
"pipeline" : [
{
"$match" : {
"a" : {
"$gte" : 2
}
}
}
],
"cursor" : {
}
},
"keyUpdates" : 0,
"writeConflicts" : 0,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(6)
}
},
"MMAPV1Journal" : {
"acquireCount" : {
"r" : NumberLong(3)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(3)
}
},
"Collection" : {
"acquireCount" : {
"R" : NumberLong(3)
}
}
},
"responseLength" : 385,
"millis" : 42,
"execStats" : {
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.$cmd","op":"command","command":{"aggregate":"coll","pipeline":[{"$match":{"a":{"$gte":2}}}],"cursor":{}}}

src/go/tests/doc/out/aggregate_3.2.19 Normal file → Executable file

@@ -1,49 +1 @@
{
"op" : "command",
"ns" : "test.coll",
"command" : {
"aggregate" : "coll",
"pipeline" : [
{
"$match" : {
"a" : {
"$gte" : 2
}
}
}
],
"cursor" : {
}
},
"keyUpdates" : 0,
"writeConflicts" : 0,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(6)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(3)
}
},
"Collection" : {
"acquireCount" : {
"r" : NumberLong(3)
}
}
},
"responseLength" : 388,
"protocol" : "op_command",
"millis" : 42,
"execStats" : {
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.coll","op":"command","command":{"aggregate":"coll","pipeline":[{"$match":{"a":{"$gte":2}}}],"cursor":{}}}

src/go/tests/doc/out/aggregate_3.4.12 Normal file → Executable file

@@ -1,50 +1 @@
{
"op" : "command",
"ns" : "test.coll",
"command" : {
"aggregate" : "coll",
"pipeline" : [
{
"$match" : {
"a" : {
"$gte" : 2
}
}
}
],
"cursor" : {
}
},
"keysExamined" : 8,
"docsExamined" : 8,
"cursorExhausted" : true,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(8)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(4)
}
},
"Collection" : {
"acquireCount" : {
"r" : NumberLong(3)
}
}
},
"nreturned" : 8,
"responseLength" : 370,
"protocol" : "op_command",
"millis" : 42,
"planSummary" : "IXSCAN { a: 1 }",
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"appName" : "MongoDB Shell",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.coll","op":"command","command":{"aggregate":"coll","pipeline":[{"$match":{"a":{"$gte":2}}}],"cursor":{}}}

src/go/tests/doc/out/aggregate_3.6.2 Normal file → Executable file

@@ -1,51 +1 @@
{
"op" : "command",
"ns" : "test.coll",
"command" : {
"aggregate" : "coll",
"pipeline" : [
{
"$match" : {
"a" : {
"$gte" : 2
}
}
}
],
"cursor" : {
},
"$db" : "test"
},
"keysExamined" : 8,
"docsExamined" : 8,
"cursorExhausted" : true,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(4)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(2)
}
},
"Collection" : {
"acquireCount" : {
"r" : NumberLong(2)
}
}
},
"nreturned" : 8,
"responseLength" : 370,
"protocol" : "op_msg",
"millis" : 42,
"planSummary" : "IXSCAN { a: 1 }",
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"appName" : "MongoDB Shell",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.coll","op":"command","command":{"aggregate":"coll","pipeline":[{"$match":{"a":{"$gte":2}}}],"cursor":{},"$db":"test"}}

src/go/tests/doc/out/count_2.6.12 Normal file → Executable file

@@ -1,34 +1 @@
{
"op" : "command",
"ns" : "test.$cmd",
"command" : {
"count" : "coll",
"query" : {
},
"fields" : {
}
},
"keyUpdates" : 0,
"numYield" : 0,
"lockStats" : {
"timeLockedMicros" : {
"r" : NumberLong(1234),
"w" : NumberLong(4321)
},
"timeAcquiringMicros" : {
"r" : NumberLong(9876),
"w" : NumberLong(6789)
}
},
"responseLength" : 48,
"millis" : 42,
"execStats" : {
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.$cmd","op":"command","command":{"count":"coll","query":{},"fields":{}}}

src/go/tests/doc/out/count_3.0.15 Normal file → Executable file

@@ -1,47 +1 @@
{
"op" : "command",
"ns" : "test.$cmd",
"command" : {
"count" : "coll",
"query" : {
},
"fields" : {
}
},
"keyUpdates" : 0,
"writeConflicts" : 0,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(2)
}
},
"MMAPV1Journal" : {
"acquireCount" : {
"r" : NumberLong(1)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(1)
}
},
"Collection" : {
"acquireCount" : {
"R" : NumberLong(1)
}
}
},
"responseLength" : 44,
"millis" : 42,
"execStats" : {
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.$cmd","op":"command","command":{"count":"coll","query":{},"fields":{}}}

src/go/tests/doc/out/count_3.2.19 Normal file → Executable file

@@ -1,43 +1 @@
{
"op" : "command",
"ns" : "test.coll",
"command" : {
"count" : "coll",
"query" : {
},
"fields" : {
}
},
"keyUpdates" : 0,
"writeConflicts" : 0,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(2)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(1)
}
},
"Collection" : {
"acquireCount" : {
"r" : NumberLong(1)
}
}
},
"responseLength" : 47,
"protocol" : "op_command",
"millis" : 42,
"execStats" : {
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.coll","op":"command","command":{"count":"coll","query":{},"fields":{}}}

src/go/tests/doc/out/count_3.4.12 Normal file → Executable file

@@ -1,57 +1 @@
{
"op" : "command",
"ns" : "test.coll",
"command" : {
"count" : "coll",
"query" : {
},
"fields" : {
}
},
"keysExamined" : 0,
"docsExamined" : 0,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(2)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(1)
}
},
"Collection" : {
"acquireCount" : {
"r" : NumberLong(1)
}
}
},
"responseLength" : 29,
"protocol" : "op_command",
"millis" : 42,
"planSummary" : "COUNT",
"execStats" : {
"stage" : "COUNT",
"nReturned" : 0,
"executionTimeMillisEstimate" : 0,
"works" : 1,
"advanced" : 0,
"needTime" : 0,
"needYield" : 0,
"saveState" : 0,
"restoreState" : 0,
"isEOF" : 1,
"invalidates" : 0,
"nCounted" : 10,
"nSkipped" : 0
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"appName" : "MongoDB Shell",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.coll","op":"command","command":{"count":"coll","query":{},"fields":{}}}

src/go/tests/doc/out/count_3.6.2 Normal file → Executable file

@@ -1,58 +1 @@
{
"op" : "command",
"ns" : "test.coll",
"command" : {
"count" : "coll",
"query" : {
},
"fields" : {
},
"$db" : "test"
},
"keysExamined" : 0,
"docsExamined" : 0,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(2)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(1)
}
},
"Collection" : {
"acquireCount" : {
"r" : NumberLong(1)
}
}
},
"responseLength" : 29,
"protocol" : "op_msg",
"millis" : 42,
"planSummary" : "COUNT",
"execStats" : {
"stage" : "COUNT",
"nReturned" : 0,
"executionTimeMillisEstimate" : 0,
"works" : 1,
"advanced" : 0,
"needTime" : 0,
"needYield" : 0,
"saveState" : 0,
"restoreState" : 0,
"isEOF" : 1,
"invalidates" : 0,
"nCounted" : 10,
"nSkipped" : 0
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"appName" : "MongoDB Shell",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.coll","op":"command","command":{"count":"coll","query":{},"fields":{},"$db":"test"}}

src/go/tests/doc/out/count_with_query_2.6.12 Normal file → Executable file

@@ -1,36 +1 @@
{
"op" : "command",
"ns" : "test.$cmd",
"command" : {
"count" : "coll",
"query" : {
"a" : {
"$gt" : 5
}
},
"fields" : {
}
},
"keyUpdates" : 0,
"numYield" : 0,
"lockStats" : {
"timeLockedMicros" : {
"r" : NumberLong(1234),
"w" : NumberLong(4321)
},
"timeAcquiringMicros" : {
"r" : NumberLong(9876),
"w" : NumberLong(6789)
}
},
"responseLength" : 48,
"millis" : 42,
"execStats" : {
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.$cmd","op":"command","command":{"count":"coll","query":{"a":{"$gt":5}},"fields":{}}}

src/go/tests/doc/out/count_with_query_3.0.15 Normal file → Executable file

@@ -1,49 +1 @@
{
"op" : "command",
"ns" : "test.$cmd",
"command" : {
"count" : "coll",
"query" : {
"a" : {
"$gt" : 5
}
},
"fields" : {
}
},
"keyUpdates" : 0,
"writeConflicts" : 0,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(2)
}
},
"MMAPV1Journal" : {
"acquireCount" : {
"r" : NumberLong(1)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(1)
}
},
"Collection" : {
"acquireCount" : {
"R" : NumberLong(1)
}
}
},
"responseLength" : 44,
"millis" : 42,
"execStats" : {
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.$cmd","op":"command","command":{"count":"coll","query":{"a":{"$gt":5}},"fields":{}}}

src/go/tests/doc/out/count_with_query_3.2.19 Normal file → Executable file

@@ -1,45 +1 @@
{
"op" : "command",
"ns" : "test.coll",
"command" : {
"count" : "coll",
"query" : {
"a" : {
"$gt" : 5
}
},
"fields" : {
}
},
"keyUpdates" : 0,
"writeConflicts" : 0,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(2)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(1)
}
},
"Collection" : {
"acquireCount" : {
"r" : NumberLong(1)
}
}
},
"responseLength" : 47,
"protocol" : "op_command",
"millis" : 42,
"execStats" : {
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.coll","op":"command","command":{"count":"coll","query":{"a":{"$gt":5}},"fields":{}}}

src/go/tests/doc/out/count_with_query_3.4.12 Normal file → Executable file

@@ -1,79 +1 @@
{
"op" : "command",
"ns" : "test.coll",
"command" : {
"count" : "coll",
"query" : {
"a" : {
"$gt" : 5
}
},
"fields" : {
}
},
"keysExamined" : 0,
"docsExamined" : 10,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(2)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(1)
}
},
"Collection" : {
"acquireCount" : {
"r" : NumberLong(1)
}
}
},
"responseLength" : 29,
"protocol" : "op_command",
"millis" : 42,
"planSummary" : "COLLSCAN",
"execStats" : {
"stage" : "COUNT",
"nReturned" : 0,
"executionTimeMillisEstimate" : 0,
"works" : 12,
"advanced" : 0,
"needTime" : 11,
"needYield" : 0,
"saveState" : 0,
"restoreState" : 0,
"isEOF" : 1,
"invalidates" : 0,
"nCounted" : 4,
"nSkipped" : 0,
"inputStage" : {
"stage" : "COLLSCAN",
"filter" : {
"a" : {
"$gt" : 5
}
},
"nReturned" : 4,
"executionTimeMillisEstimate" : 0,
"works" : 12,
"advanced" : 4,
"needTime" : 7,
"needYield" : 0,
"saveState" : 0,
"restoreState" : 0,
"isEOF" : 1,
"invalidates" : 0,
"direction" : "forward",
"docsExamined" : 10
}
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"appName" : "MongoDB Shell",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.coll","op":"command","command":{"count":"coll","query":{"a":{"$gt":5}},"fields":{}}}

src/go/tests/doc/out/count_with_query_3.6.2 Normal file → Executable file

@@ -1,80 +1 @@
{
"op" : "command",
"ns" : "test.coll",
"command" : {
"count" : "coll",
"query" : {
"a" : {
"$gt" : 5
}
},
"fields" : {
},
"$db" : "test"
},
"keysExamined" : 0,
"docsExamined" : 10,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(2)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(1)
}
},
"Collection" : {
"acquireCount" : {
"r" : NumberLong(1)
}
}
},
"responseLength" : 29,
"protocol" : "op_msg",
"millis" : 42,
"planSummary" : "COLLSCAN",
"execStats" : {
"stage" : "COUNT",
"nReturned" : 0,
"executionTimeMillisEstimate" : 0,
"works" : 12,
"advanced" : 0,
"needTime" : 11,
"needYield" : 0,
"saveState" : 0,
"restoreState" : 0,
"isEOF" : 1,
"invalidates" : 0,
"nCounted" : 4,
"nSkipped" : 0,
"inputStage" : {
"stage" : "COLLSCAN",
"filter" : {
"a" : {
"$gt" : 5
}
},
"nReturned" : 4,
"executionTimeMillisEstimate" : 0,
"works" : 12,
"advanced" : 4,
"needTime" : 7,
"needYield" : 0,
"saveState" : 0,
"restoreState" : 0,
"isEOF" : 1,
"invalidates" : 0,
"direction" : "forward",
"docsExamined" : 10
}
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"appName" : "MongoDB Shell",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.coll","op":"command","command":{"count":"coll","query":{"a":{"$gt":5}},"fields":{},"$db":"test"}}

src/go/tests/doc/out/delete_2.6.12 Normal file → Executable file

@@ -1,33 +1 @@
{
"op" : "remove",
"ns" : "test.coll",
"query" : {
"a" : {
"$gte" : 2
},
"b" : {
"$gte" : 2
}
},
"ndeleted" : 1,
"keyUpdates" : 0,
"numYield" : 0,
"lockStats" : {
"timeLockedMicros" : {
"r" : NumberLong(1234),
"w" : NumberLong(4321)
},
"timeAcquiringMicros" : {
"r" : NumberLong(9876),
"w" : NumberLong(6789)
}
},
"millis" : 42,
"execStats" : {
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.coll","op":"remove","query":{"a":{"$gte":2},"b":{"$gte":2}}}

src/go/tests/doc/out/delete_3.0.15 Normal file → Executable file

@@ -1,47 +1 @@
{
"op" : "remove",
"ns" : "test.coll",
"query" : {
"a" : {
"$gte" : 2
},
"b" : {
"$gte" : 2
}
},
"ndeleted" : 1,
"keyUpdates" : 0,
"writeConflicts" : 0,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(1),
"w" : NumberLong(1)
}
},
"MMAPV1Journal" : {
"acquireCount" : {
"w" : NumberLong(2)
}
},
"Database" : {
"acquireCount" : {
"w" : NumberLong(1)
}
},
"Collection" : {
"acquireCount" : {
"W" : NumberLong(1)
}
}
},
"millis" : 42,
"execStats" : {
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.coll","op":"remove","query":{"a":{"$gte":2},"b":{"$gte":2}}}

src/go/tests/doc/out/delete_3.2.19 Normal file → Executable file

@@ -1,42 +1 @@
{
"op" : "remove",
"ns" : "test.coll",
"query" : {
"a" : {
"$gte" : 2
},
"b" : {
"$gte" : 2
}
},
"ndeleted" : 1,
"keyUpdates" : 0,
"writeConflicts" : 0,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(1),
"w" : NumberLong(1)
}
},
"Database" : {
"acquireCount" : {
"w" : NumberLong(1)
}
},
"Collection" : {
"acquireCount" : {
"w" : NumberLong(1)
}
}
},
"millis" : 42,
"execStats" : {
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.coll","op":"remove","query":{"a":{"$gte":2},"b":{"$gte":2}}}

src/go/tests/doc/out/delete_3.4.12 Normal file → Executable file

@@ -1,113 +1 @@
{
"op" : "remove",
"ns" : "test.coll",
"query" : {
"a" : {
"$gte" : 2
},
"b" : {
"$gte" : 2
}
},
"keysExamined" : 1,
"docsExamined" : 1,
"ndeleted" : 1,
"keysDeleted" : 2,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(1),
"w" : NumberLong(1)
}
},
"Database" : {
"acquireCount" : {
"w" : NumberLong(1)
}
},
"Collection" : {
"acquireCount" : {
"w" : NumberLong(1)
}
}
},
"millis" : 42,
"planSummary" : "IXSCAN { a: 1 }",
"execStats" : {
"stage" : "DELETE",
"nReturned" : 0,
"executionTimeMillisEstimate" : 0,
"works" : 2,
"advanced" : 0,
"needTime" : 1,
"needYield" : 0,
"saveState" : 0,
"restoreState" : 0,
"isEOF" : 1,
"invalidates" : 0,
"nWouldDelete" : 1,
"nInvalidateSkips" : 0,
"inputStage" : {
"stage" : "FETCH",
"filter" : {
"b" : {
"$gte" : 2
}
},
"nReturned" : 1,
"executionTimeMillisEstimate" : 0,
"works" : 1,
"advanced" : 1,
"needTime" : 0,
"needYield" : 0,
"saveState" : 1,
"restoreState" : 1,
"isEOF" : 0,
"invalidates" : 0,
"docsExamined" : 1,
"alreadyHasObj" : 0,
"inputStage" : {
"stage" : "IXSCAN",
"nReturned" : 1,
"executionTimeMillisEstimate" : 0,
"works" : 1,
"advanced" : 1,
"needTime" : 0,
"needYield" : 0,
"saveState" : 1,
"restoreState" : 1,
"isEOF" : 0,
"invalidates" : 0,
"keyPattern" : {
"a" : 1
},
"indexName" : "a_1",
"isMultiKey" : false,
"multiKeyPaths" : {
"a" : [ ]
},
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 2,
"direction" : "forward",
"indexBounds" : {
"a" : [
"[2.0, inf.0]"
]
},
"keysExamined" : 1,
"seeks" : 1,
"dupsTested" : 0,
"dupsDropped" : 0,
"seenInvalidated" : 0
}
}
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"appName" : "MongoDB Shell",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.coll","op":"remove","query":{"a":{"$gte":2},"b":{"$gte":2}}}

src/go/tests/doc/out/delete_3.6.2 Normal file → Executable file

@@ -1,116 +1 @@
{
"op" : "remove",
"ns" : "test.coll",
"command" : {
"q" : {
"a" : {
"$gte" : 2
},
"b" : {
"$gte" : 2
}
},
"limit" : 1
},
"keysExamined" : 1,
"docsExamined" : 1,
"ndeleted" : 1,
"keysDeleted" : 2,
"numYield" : 0,
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(1),
"w" : NumberLong(1)
}
},
"Database" : {
"acquireCount" : {
"w" : NumberLong(1)
}
},
"Collection" : {
"acquireCount" : {
"w" : NumberLong(1)
}
}
},
"millis" : 42,
"planSummary" : "IXSCAN { a: 1 }",
"execStats" : {
"stage" : "DELETE",
"nReturned" : 0,
"executionTimeMillisEstimate" : 0,
"works" : 2,
"advanced" : 0,
"needTime" : 1,
"needYield" : 0,
"saveState" : 0,
"restoreState" : 0,
"isEOF" : 1,
"invalidates" : 0,
"nWouldDelete" : 1,
"nInvalidateSkips" : 0,
"inputStage" : {
"stage" : "FETCH",
"filter" : {
"b" : {
"$gte" : 2
}
},
"nReturned" : 1,
"executionTimeMillisEstimate" : 0,
"works" : 1,
"advanced" : 1,
"needTime" : 0,
"needYield" : 0,
"saveState" : 1,
"restoreState" : 1,
"isEOF" : 0,
"invalidates" : 0,
"docsExamined" : 1,
"alreadyHasObj" : 0,
"inputStage" : {
"stage" : "IXSCAN",
"nReturned" : 1,
"executionTimeMillisEstimate" : 0,
"works" : 1,
"advanced" : 1,
"needTime" : 0,
"needYield" : 0,
"saveState" : 1,
"restoreState" : 1,
"isEOF" : 0,
"invalidates" : 0,
"keyPattern" : {
"a" : 1
},
"indexName" : "a_1",
"isMultiKey" : false,
"multiKeyPaths" : {
"a" : [ ]
},
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 2,
"direction" : "forward",
"indexBounds" : {
"a" : [
"[2.0, inf.0]"
]
},
"keysExamined" : 1,
"seeks" : 1,
"dupsTested" : 0,
"dupsDropped" : 0,
"seenInvalidated" : 0
}
}
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"appName" : "MongoDB Shell",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.coll","op":"remove","command":{"q":{"a":{"$gte":2},"b":{"$gte":2}},"limit":1}}

src/go/tests/doc/out/delete_all_2.6.12 Normal file → Executable file

@@ -1,33 +1 @@
{
"op" : "remove",
"ns" : "test.coll",
"query" : {
"a" : {
"$gte" : 2
},
"b" : {
"$gte" : 2
}
},
"ndeleted" : 8,
"keyUpdates" : 0,
"numYield" : 0,
"lockStats" : {
"timeLockedMicros" : {
"r" : NumberLong(1234),
"w" : NumberLong(4321)
},
"timeAcquiringMicros" : {
"r" : NumberLong(9876),
"w" : NumberLong(6789)
}
},
"millis" : 42,
"execStats" : {
},
"ts" : ISODate("2020-01-01T00:00:00Z"),
"client" : "127.0.0.1",
"allUsers" : [ ],
"user" : ""
}
{"ns":"test.coll","op":"remove","query":{"a":{"$gte":2},"b":{"$gte":2}}}

Some files were not shown because too many files have changed in this diff.