percona-toolkit/t/pt-summary/samples/Linux/output_008.txt
Sveta Smirnova ea6bd77501 PT-2340 - Support MySQL 8.4
- Moved data collection for THP from lib/bash/report_system_info.sh to lib/bash/collect_system_info.sh, so that pt-summary reports the THP status of the machine where the samples were collected, not of the machine where an engineer later examines them.
2024-09-06 13:08:45 +03:00
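The change above means the THP state is read from /sys on the sampled host at collection time, and the report phase only interprets the saved value. A minimal sketch of that split (a hypothetical `report_thp` helper, not the actual collect_system_info.sh code):

```shell
#!/bin/sh
# report_thp: classify the saved contents of
# /sys/kernel/mm/transparent_hugepage/enabled.
# The active setting is the bracketed word, e.g. "always madvise [never]".
report_thp() {
    case "$1" in
        *'[never]'*) echo "Transparent huge pages are currently disabled on the system." ;;
        '')          echo "Transparent huge pages are not supported on the system." ;;
        *)           echo "Transparent huge pages are currently enabled on the system." ;;
    esac
}

# Collection phase (runs on the sampled host, value saved with the samples):
#   thp_setting=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
# Report phase (runs wherever the samples are examined):
report_thp "always madvise [never]"
```

With this split, the report below ends with the THP line for host s76 rather than for the engineer's workstation.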


Hostname | s76
Uptime | 95 days, 5:41, 1 user, load average: 7,27, 6,88, 6,86
Platform | Linux
Release | Ubuntu 22.04.4 LTS (jammy)
Kernel | 6.5.0-35-generic
Architecture | CPU = 64-bit, OS = 64-bit
Threading | NPTL 2.35
SELinux | No SELinux detected
Virtualized | No virtualization detected
# Processor ##################################################
Processors | physical = 1, cores = 12, virtual = 16, hyperthreading = yes
Speeds | 1x2299.894, 1x2299.975, 1x2299.997, 1x2299.998, 1x2300.011, 1x2300.012, 1x2300.017, 1x2300.091, 1x2672.558, 1x2780.294, 4x2800.000, 1x2800.007, 1x2900.000
Models | 16x12th Gen Intel(R) Core(TM) i7-1260P
Caches | 16x18432 KB
Designation Configuration Size Associativity
========================= ============================== ======== ======================
# Memory #####################################################
Total | 62.6G
Free | 1.6G
Used | physical = 25.7G, swap allocated = 1.9G, swap used = 1.9G, virtual = 27.6G
Shared | 2.7G
Buffers | 35.4G
Caches | 33.8G
Dirty | 16372 kB
UsedRSS | 35.9G
Swappiness | 60
DirtyPolicy | 20, 10
DirtyStatus | 0, 0
Locator Size Speed Form Factor Type Type Detail
========= ======== ================= ============= ============= ===========
# Mounted Filesystems ########################################
Filesystem Size Used Type Opts Mountpoint
/dev/fuse 250G 0% fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 /run/user/1000/keybase/kbfs
/dev/mapper/vgubuntu-root 3,6T 43% ext4 rw,relatime,errors=remount-ro /
/dev/nvme0n1p1 511M 57% vfat rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro /boot/efi
/dev/nvme0n1p2 1,7G 20% ext4 rw,relatime /boot
efivarfs 64K 45% efivarfs rw,nosuid,nodev,noexec,relatime /sys/firmware/efi/efivars
tmpfs 32G 3% tmpfs rw,nosuid,nodev,inode64 /dev/shm
tmpfs 32G 3% tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k,inode64 /dev/shm
tmpfs 32G 3% tmpfs rw,nosuid,nodev,noexec,relatime,size=6567948k,mode=755,inode64 /dev/shm
tmpfs 32G 3% tmpfs rw,nosuid,nodev,noexec,relatime,size=6567948k,mode=755,inode64 /dev/shm
tmpfs 32G 3% tmpfs rw,nosuid,nodev,relatime,size=6567944k,nr_inodes=1641986,mode=700,uid=1000,gid=1000,inode64 /dev/shm
tmpfs 5,0M 1% tmpfs rw,nosuid,nodev,inode64 /run/lock
tmpfs 5,0M 1% tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k,inode64 /run/lock
tmpfs 5,0M 1% tmpfs rw,nosuid,nodev,noexec,relatime,size=6567948k,mode=755,inode64 /run/lock
tmpfs 5,0M 1% tmpfs rw,nosuid,nodev,noexec,relatime,size=6567948k,mode=755,inode64 /run/lock
tmpfs 5,0M 1% tmpfs rw,nosuid,nodev,relatime,size=6567944k,nr_inodes=1641986,mode=700,uid=1000,gid=1000,inode64 /run/lock
tmpfs 6,3G 1% tmpfs rw,nosuid,nodev,inode64 /run/user/1000
tmpfs 6,3G 1% tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k,inode64 /run/user/1000
tmpfs 6,3G 1% tmpfs rw,nosuid,nodev,noexec,relatime,size=6567948k,mode=755,inode64 /run/user/1000
tmpfs 6,3G 1% tmpfs rw,nosuid,nodev,noexec,relatime,size=6567948k,mode=755,inode64 /run/user/1000
tmpfs 6,3G 1% tmpfs rw,nosuid,nodev,relatime,size=6567944k,nr_inodes=1641986,mode=700,uid=1000,gid=1000,inode64 /run/user/1000
tmpfs 6,3G 1% tmpfs rw,nosuid,nodev,inode64 /run
tmpfs 6,3G 1% tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k,inode64 /run
tmpfs 6,3G 1% tmpfs rw,nosuid,nodev,noexec,relatime,size=6567948k,mode=755,inode64 /run
tmpfs 6,3G 1% tmpfs rw,nosuid,nodev,noexec,relatime,size=6567948k,mode=755,inode64 /run
tmpfs 6,3G 1% tmpfs rw,nosuid,nodev,relatime,size=6567944k,nr_inodes=1641986,mode=700,uid=1000,gid=1000,inode64 /run
# Disk Schedulers And Queue Size #############################
dm-0 | UNREADABLE
dm-1 | UNREADABLE
dm-2 | UNREADABLE
nvme0n1 | [none] 1023
# Disk Partitioning ##########################################
Device Type Start End Size
============ ==== ========== ========== ==================
/dev/nvme0n1 Disk 4000787030016
/dev/nvme0n1p1 Part 2048 1050623 0
/dev/nvme0n1p2 Part 1050624 4550655 0
/dev/nvme0n1p3 Part 4550656 7814035455 0
# Kernel Inode State #########################################
dentry-state | 3350735 3155181 45 0 2247869 0
file-nr | 35904 0 9223372036854775807
inode-nr | 1417972 554725
# LVM Volumes ################################################
Unable to collect information
# LVM Volume Groups ##########################################
Unable to collect information
# RAID Controller ############################################
Controller | No RAID controller detected
# Network Config #############################################
Controller | Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)
FIN Timeout | 60
Port Range | 60999
# Interface Statistics #######################################
interface rx_bytes rx_packets rx_errors tx_bytes tx_packets tx_errors
========= ========= ========== ========== ========== ========== ==========
lo 125000000000 175000000 0 125000000000 175000000 0
enp46s0 0 0 0 0 0 0
wlp0s20f3 150000000000 150000000 0 30000000000 80000000 0
docker0 12500000000 12500000 0 30000000000 15000000 0
br-a7c7f1777df1 0 0 0 0 0 0
veth0655c9f@if604 1250000000 1000000 0 2500000000 1250000 0
vethd189986@if606 50000000 250000 0 600000000 300000 0
veth2655c84@if608 500000000 250000 0 50000000 250000 0
wg0 12500000 25000 0 5000000 25000 0
br-79118f7d5a92 15000000 225000 0 800000000 350000 0
# Network Connections ########################################
Connections from remote IP addresses
3.68.61.181 3
3.124.19.111 1
10.0.0.1 1
10.0.0.5 4
18.190.135.240 1
18.205.222.128 1
34.107.243.93 1
34.202.253.140 1
35.75.32.93 6
38.111.114.11 1
44.208.19.176 1
52.35.150.14 3
52.57.79.4 1
52.73.62.51 1
52.84.151.38 1
52.193.110.50 1
54.147.17.17 1
54.158.119.218 1
54.159.113.253 1
64.233.162.188 1
74.125.205.188 1
77.223.96.250 1
89.253.240.17 2
91.105.192.100 7
108.157.214.53 1
108.157.214.118 1
127.0.0.1 80
140.82.113.26 1
142.250.184.131 1
142.251.1.138 1
149.154.167.51 1
157.240.205.60 1
162.125.6.20 1
162.125.21.2 1
162.125.21.3 8
162.125.69.12 1
162.125.69.13 1
170.114.4.224 1
170.114.15.227 1
170.114.52.2 2
170.114.52.66 1
172.17.0.2 5
172.64.152.233 1
173.194.220.108 5
173.194.221.94 1
185.166.143.36 1
188.114.98.233 2
188.114.99.233 2
194.85.125.1 1
206.247.115.239 1
209.85.233.104 1
213.180.204.148 3
Connections to local IP addresses
10.0.0.5 15
127.0.0.1 80
172.17.0.1 5
192.168.55.2 70
192.168.255.22 1
Connections to top 10 local ports
12345 3
12347 4
3306 10
42001 3
42002 3
42003 3
4647 2
60442 1
60536 1
60740 1
States of connections
CLOSE_WAIT 9
ESTABLISHED 125
LISTEN 45
SYN_SENT 3
TIME_WAIT 25
# Top Processes ##############################################
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2407273 sveta 20 0 64144 46012 13824 R 100,0 0,1 5061:20 perl
2410420 sveta 20 0 64140 46184 13952 R 100,0 0,1 5059:23 perl
783789 sveta 20 0 2822620 506824 42752 S 94,1 0,8 5786:40 mysqld
2408748 sveta 20 0 64136 46068 14080 R 94,1 0,1 5060:24 perl
2419768 sveta 20 0 64140 46088 13952 R 94,1 0,1 5055:00 perl
2504338 root 20 0 3183396 578628 283160 S 94,1 0,9 47:38.67 s1-agent
2504336 root 20 0 289740 127780 25600 S 52,9 0,2 9:50.52 s1-scan+
7288 sveta 20 0 2688236 139560 46968 S 5,9 0,2 1028:47 Xorg
622143 sveta 20 0 1132,2g 682008 88376 S 5,9 1,0 117:46.23 slack
# Notable Processes ##########################################
PID OOM COMMAND
? ? sshd doesn't appear to be running
1499134 -17 multipathd
2849628 -17 nomad
3202899 -17 sshd
# Simplified and fuzzy rounded vmstat (wait please) ##########
procs ---swap-- -----io---- ---system---- --------cpu--------
r b si so bi bo ir cs us sy il wa st
8 0 0 0 30 200 1 1 12 2 86 0 0
9 0 0 0 8 1750 15000 50000 39 3 58 0 0
5 1 0 0 0 500 15000 22500 39 1 60 0 0
5 0 0 0 0 900 17500 20000 34 1 65 0 0
5 0 0 0 0 350 17500 25000 36 2 63 0 0
# Memory management ##########################################
Transparent huge pages are currently disabled on the system.
# The End ####################################################