Support Engineer Monthly Tasks

1. MySQL Server Health Check

Check /var/lib/mysql/mysql_slow.log

Current challenge: it is growing very fast. We cleaned it up when it reached 200G and set up logrotate for it.

Quick Analysis Commands

Instead of manually checking huge log files, use these automated analysis commands:

```bash
# Generate a slow query summary (recommended)
sudo mysqldumpslow -s t -t 20 /var/lib/mysql/mysql_slow.log
```

As of 2025-09, mysql_slow.log is rotated weekly by logrotate (see MySQL setup: https://github.com/InteractionDesignFoundation/IxDF-web/blob/main/infrastructure/mysql/setup.md#config).
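For reference, a weekly rotation stanza for this file might look like the sketch below. The authoritative config is in the linked setup doc; in particular, the `postrotate` step (telling mysqld to reopen the slow log, with credentials assumed to be in root's `~/.my.cnf`) is an assumption, not a copy of the live config:

```
/var/lib/mysql/mysql_slow.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    postrotate
        # Ask mysqld to reopen the slow log after rotation
        /usr/bin/mysqladmin flush-logs slow
    endscript
}
```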

How to handle output

It outputs a summary of slow queries ordered by accumulated time (`-s t` = sort by time). Take both total time and execution count into account for every item. Example:

```text
Count: 9713  Time=0.08s (739s)  Lock=0.00s (0s)  Rows=7.9 (76473), forge[forge]@[10.132.27.217]
  select * from `masterclass__registrations` where ((`masterclass__registrations`.`attendee_type` = 'S' and `masterclass__registrations`.`attendee_id` in (N))) and `certificate_issued_at` is not null and `masterclass__registrations`.`deleted_at` is null order by `certificate_issued_at` desc

Count: 112848  Time=0.00s (323s)  Lock=0.00s (4s)  Rows=0.0 (0), forge[forge]@[10.132.27.217]
select * from `announcement__announcements` where `is_visible_for_guests` = N and `dismissible_id` not in ('S', 'S') and not exists (select * from `geo__countries` inner join `announcement__announcement_country` on `geo__countries`.`code` = `announcement__announcement_country`.`country_code` where `announcement__announcements`.`id` = `announcement__announcement_country`.`announcement_id`) and (`expires_at` is null or `expires_at` > 'S') and (`visible_after` is null or `visible_after` <= 'S') and (`expires_at` is null or `expires_at` > 'S') limit N
```

The example above reports two issues:

  • the select from `masterclass__registrations` took 739 seconds in total (across 9713 executions)
  • the select from `announcement__announcements` took 323 seconds in total (across 112848 executions!)
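To triage many such entries at once, the summary header lines can be parsed with a short script. A sketch (the sample file and `/tmp` path are illustrative; field positions follow the `Count: N  Time=...s (Ns)` format shown above):

```shell
# Write a small sample in the mysqldumpslow summary format shown above.
cat > /tmp/slow_summary.txt <<'EOF'
Count: 9713  Time=0.08s (739s)  Lock=0.00s (0s)  Rows=7.9 (76473), forge[forge]@[10.132.27.217]
Count: 112848  Time=0.00s (323s)  Lock=0.00s (4s)  Rows=0.0 (0), forge[forge]@[10.132.27.217]
EOF

# For each header line: field 2 is the execution count, field 4 is the
# accumulated time "(739s)"; strip the parentheses and print both.
awk '/^Count:/ { gsub(/[()]/, "", $4); print $4 " total across " $2 " executions" }' /tmp/slow_summary.txt
```

This prints one line per query pattern (e.g. `739s total across 9713 executions`), which is handy for pasting into a GitHub issue.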

Possible actions:

  1. Update DB query to optimize it
  2. Use Laravel Cache
  3. Add a DB index
  4. Change MySQL slow_query_log settings to reduce noise
  5. Create a GitHub issue for further investigation
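For action 4 specifically, the knobs live in the MySQL config. A sketch of the relevant my.cnf section (the values are examples for illustration, not the server's current settings):

```
[mysqld]
slow_query_log                = ON
slow_query_log_file           = /var/lib/mysql/mysql_slow.log
long_query_time               = 1    # seconds; raise to reduce noise
log_queries_not_using_indexes = OFF  # ON can flood the log with cheap queries
```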

🚨 Immediate Action Required:

  • Slow log file > 1GB: sudo ls -lh /var/lib/mysql/mysql_slow.log
  • Queries ≥ 5 seconds: sudo grep -c "Query_time: [5-9]" /var/lib/mysql/mysql_slow.log (approximate: only the first digit is checked, so e.g. a 12.3s query is missed while a 50s query matches)
  • Full table scans: sudo grep -c "Full_scan: Yes" /var/lib/mysql/mysql_slow.log (the Full_scan field only appears when extended slow-log output is enabled)
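The checks above can be combined into one script. A sketch (the `check_slow_log` function name and the `/tmp` demo file are made up for illustration; in production, point it at the real log and run with sudo):

```shell
# Hypothetical helper: run the health checks against a given log file.
check_slow_log() {
  local log="$1"
  local size_bytes
  size_bytes=$(wc -c < "$log")
  # 1 GB threshold from the checklist above
  if [ "$size_bytes" -gt $((1024 * 1024 * 1024)) ]; then
    echo "WARN: slow log is larger than 1GB"
  fi
  # Approximate, as noted above: only the first digit of Query_time is checked.
  local slow
  slow=$(grep -c "Query_time: [5-9]" "$log" || true)
  echo "queries >= 5s (approx): $slow"
}

# Demo on a tiny sample file instead of the real log:
printf '# Query_time: 6.2  Lock_time: 0.0\n# Query_time: 0.3  Lock_time: 0.0\n' > /tmp/sample_slow.log
check_slow_log /tmp/sample_slow.log   # -> queries >= 5s (approx): 1
```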

Quick performance check via New Relic (if available)

Check New Relic → APM & Services → Databases → sort by "Most Time Consuming" (for both the 1-month and 7-day windows)