How can I optimize my MySQL server?

8

MySQL is not my thing, but I need to optimize one of our servers.

Here are the requirements / specs:

  • The MySQL server has only one important database
  • We have only one "type of application" connected to it, and not many instances are connected at the same time: 15 at most (these applications are XMPP bots)
  • These applications use non-blocking I/O, which means they never "wait" on the DB server and keep handling incoming requests while DB queries are being processed. As a result, a single instance of the application can sometimes hold several (many!) connections to the database server (especially when some queries are slow).
  • All queries use indexes
  • Only MySQL runs on our host machine. It is a Xen instance (@slicehost) with 2 GB of RAM.
  • We use InnoDB tables because we need some basic transactions, but we could probably switch to MyISAM if that had a real impact on performance.

As it is currently configured, our MySQL server slowly starts consuming all available memory (we monitor it with collectd; here is a graph). At some point (after a few days / weeks) it stops executing queries altogether (last night it was stalled for 2 hours and I had to restart the MySQL server: see the 2nd image):

(Sorry, new users cannot post images and may only include 1 hyperlink :/)

Here is our current my.cnf:

#
# The MySQL database server configuration file.
#
# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# escpecially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port        = 3306
socket      = /var/run/mysqld/mysqld.sock

# Here is entries for some specific programs
# The following values assume you have at least 32M ram

# This was formally known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
socket      = /var/run/mysqld/mysqld.sock
nice        = 0

[mysqld]
#
# * Basic Settings
#

#
# * IMPORTANT
#   If you make changes to these settings and your system uses apparmor, you may
#   also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
#

user        = mysql
pid-file            = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
port        = 3306
basedir     = /usr
datadir     = /var/lib/mysql
tmpdir      = /tmp
language            = /usr/share/mysql/english
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
# yann changed this on a friday balbla
#bind-address       = 127.0.0.1
bind-address        = 0.0.0.0
#
# * Fine Tuning
#
key_buffer      = 16M
max_allowed_packet  = 16M
thread_stack        = 128K
thread_cache_size   = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover      = BACKUP
max_connections        = 2000
#table_cache            = 64
#thread_concurrency     = 10
#
# * Query Cache Configuration
#
query_cache_limit       = 1M
query_cache_size        = 16M
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# log       = /var/log/mysql/mysql.log
#
# Error logging goes to syslog. This is a Debian improvement :)
#
# Here you can see queries with especially long duration
log_slow_queries    = /var/log/mysql/mysql-slow.log
long_query_time = 3
log-queries-not-using-indexes

#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
#       other settings you may need to change.
#server-id      = 1
#log_bin            = /var/log/mysql/mysql-bin.log
expire_logs_days    = 10
max_binlog_size         = 100M
#binlog_do_db       = include_database_name
#binlog_ignore_db   = include_database_name
#
# * BerkeleyDB
#
# Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
skip-bdb
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
# You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
#skip-innodb

# Fine tunig added by JG on 06/03 based on http://www.mysqlperformanceblog.com/2007/11/01/innodb-performance-optimization-basics/
innodb_buffer_pool_size  = 1G
#innodb_log_file_size     = 256M
innodb_log_buffer_size   = 4M
innodb_flush_log_at_trx_commit = 2
innodb_thread_concurrency      = 8
innodb_flush_method            = O_DIRECT
innodb_file_per_table
transaction-isolation          = READ-COMMITTED
innodb_table_locks             = 0         

#
# * Federated
#
# The FEDERATED storage engine is disabled since 5.0.67 by default in the .cnf files
# shipped with MySQL distributions (my-huge.cnf, my-medium.cnf, and so forth).
#
skip-federated
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem



[mysqldump]
quick
quote-names
max_allowed_packet  = 16M

[mysql]
#no-auto-rehash # faster start of mysql but no tab completition

[isamchk]
key_buffer      = 16M

#
# * NDB Cluster
#
# See /usr/share/doc/mysql-server-*/README.Debian for more information.
#
# The following configuration is read by the NDB Data Nodes (ndbd processes)
# not from the NDB Management Nodes (ndb_mgmd processes).
#
# [MYSQL_CLUSTER]
# ndb-connectstring=127.0.0.1


#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
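
A point of reference when reading the numbers above: innodb_buffer_pool_size, key_buffer and query_cache_size are allocated once, while the sort/read/join buffers and the thread stack are allocated per connection, so max_connections = 2000 is the value to watch on a 2 GB slice. A minimal way to list the relevant values on the running server (a sketch; variable names as in MySQL 5.0/5.1):

-- Global buffers (counted once):
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'key_buffer_size';
SHOW VARIABLES LIKE 'query_cache_size';
SHOW VARIABLES LIKE 'innodb_log_buffer_size';

-- Per-connection buffers (multiply by the number of open connections,
-- bounded by max_connections = 2000 in the my.cnf above):
SHOW VARIABLES LIKE 'sort_buffer_size';
SHOW VARIABLES LIKE 'read_buffer_size';
SHOW VARIABLES LIKE 'read_rnd_buffer_size';
SHOW VARIABLES LIKE 'join_buffer_size';
SHOW VARIABLES LIKE 'thread_stack';
SHOW VARIABLES LIKE 'max_connections';

-- How many connections are actually open right now:
SHOW GLOBAL STATUS LIKE 'Threads_connected';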

Here is a dump of the slow queries:

$ mysqldumpslow /var/log/mysql/mysql-slow.log

Reading mysql slow query log from /var/log/mysql/mysql-slow.log
Count: 5  Time=3689348814741910528.00s (-1s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  SET insert_id=N;
  INSERT IGNORE INTO `feeds` (`url`) VALUES ('S')

Count: 41  Time=1349761761490942720.00s (-1s)  Lock=0.12s (5s)  Rows=253.0 (10373), superfeeder[superfeeder]@localhost
  SHOW GLOBAL STATUS

Count: 25  Time=737869762948382080.00s (-1s)  Lock=0.00s (0s)  Rows=18.1 (452), superfeeder[superfeeder]@[172.21.1.158]
  SELECT `feeds`.* FROM `feeds` WHERE (`fetch_session_id` = 'S')

Count: 12952  Time=1424239042133230.25s (-1s)  Lock=0.00s (1s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  SET insert_id=N;
  INSERT IGNORE INTO `entries` (`chunks`, `time`, `feed_id`, `unique_id`, `link`, `chunk`) VALUES ('S', 'S', N, 'S', 'S', 'S')

Count: 29  Time=656.55s (19040s)  Lock=5.28s (153s)  Rows=0.8 (23), superfeeder[superfeeder]@[172.21.1.175]
  select salt,crypted_password from users where login='S'

Count: 39  Time=505.23s (19704s)  Lock=2.41s (94s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  DELETE FROM `feeds` WHERE (url LIKE 'S')

Count: 2275  Time=502.50s (1143184s)  Lock=3.48s (7922s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  UPDATE `feeds` SET `next_fetch` = 'S', `fetch_session_id` = 'S' WHERE (`next_fetch` < 'S') LIMIT N

Count: 1  Time=443.00s (443s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  UPDATE `feeds` SET `next_fetch` = 'S' WHERE (`feeds`.`url` IN (NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL))

Count: 14  Time=289.43s (4052s)  Lock=0.71s (10s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  UPDATE `feeds` SET `next_fetch` = 'S' WHERE (`feeds`.`url` IN ('S','S'))

Count: 2  Time=256.00s (512s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  UPDATE `feeds` SET `next_fetch` = 'S' WHERE (`feeds`.`url` IN (NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL))

Count: 1  Time=237.00s (237s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  UPDATE `feeds` SET `next_fetch` = 'S' WHERE (`feeds`.`url` IN ('S'))

Count: 24  Time=191.58s (4598s)  Lock=1.12s (27s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  UPDATE `feeds` SET `next_fetch` = 'S' WHERE (`feeds`.`id` = 'S')

Count: 5  Time=144.20s (721s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  UPDATE `feeds` SET `next_fetch` = 'S' WHERE (`feeds`.`url` IN (NULL,NULL,NULL))

Count: 1  Time=101.00s (101s)  Lock=1.00s (1s)  Rows=1.0 (1), superfeeder[superfeeder]@[172.21.1.158]
  SELECT * FROM `users` WHERE (`login` = 'S') LIMIT N

Count: 79  Time=35.51s (2805s)  Lock=2.52s (199s)  Rows=0.2 (12), superfeeder[superfeeder]@[172.21.1.184]
  SELECT `feeds`.id FROM `feeds` WHERE (`feeds`.`url` = BINARY 'S' AND `feeds`.id <> N)  LIMIT N

Count: 1  Time=28.00s (28s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  UPDATE `feeds` SET `last_maintenance_at` = 'S', `updated_at` = 'S' WHERE `id` = N

Count: 51  Time=23.51s (1199s)  Lock=0.12s (6s)  Rows=19.2 (981), superfeeder[superfeeder]@2hosts
  SELECT version FROM schema_migrations

Count: 5  Time=20.60s (103s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  BEGIN

Count: 65  Time=15.86s (1031s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  UPDATE `feeds` SET `last_error_message` = 'S', `period` = 'S', `last_sup_update_id` = NULL, `updated_at` = 'S', `modified` = 'S', `fetch_session_id` = 'S', `streamed` = 'S', `last_parse` = 'S', `etag` = 'S', `last_entry_time` = 'S', `min_period` = 'S', `url` = 'S', `id` = 'S', `feed_type` = NULL, `sup_id` = NULL, `sup_url_id` = NULL, `next_fetch` = 'S', `hashed_content` = 'S', `last_maintenance_at` = 'S', `last_ping` = NULL, `last_http_code` = 'S', `active` = 'S', `last_fetch` = 'S', `created_at` = 'S', `max_period` = 'S' WHERE (`id` = N)

Count: 23  Time=11.52s (265s)  Lock=0.00s (0s)  Rows=231.0 (5313), superfeeder[superfeeder]@2hosts
  #

Count: 132  Time=10.53s (1390s)  Lock=0.02s (2s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  UPDATE `feeds` SET `last_error_message` = 'S', `period` = 'S', `last_sup_update_id` = NULL, `updated_at` = 'S', `modified` = 'S', `fetch_session_id` = 'S', `streamed` = 'S', `last_parse` = 'S', `etag` = 'S', `last_entry_time` = 'S', `min_period` = 'S', `url` = 'S', `id` = 'S', `feed_type` = NULL, `sup_id` = NULL, `sup_url_id` = NULL, `next_fetch` = 'S', `hashed_content` = 'S', `last_maintenance_at` = 'S', `last_ping` = NULL, `last_http_code` = 'S', `active` = 'S', `last_fetch` = 'S', `created_at` = NULL, `max_period` = 'S' WHERE (`id` = N)

Count: 62  Time=9.81s (608s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  ROLLBACK

Count: 151  Time=8.94s (1350s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@2hosts
  DELETE FROM `entries` WHERE (`time` < 'S')

Count: 25  Time=8.76s (219s)  Lock=0.00s (0s)  Rows=1.0 (24), superfeeder[superfeeder]@[172.21.1.158]
  SELECT * FROM `feeds` WHERE (`url` = 'S') LIMIT N

Count: 2  Time=8.50s (17s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  set SQL_AUTO_IS_NULL=N

Count: 8802  Time=8.44s (74319s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  INSERT IGNORE INTO `entries` (`chunks`, `time`, `feed_id`, `unique_id`, `link`, `chunk`) VALUES ('S', 'S', N, 'S', 'S', 'S')

Count: 1  Time=8.00s (8s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  INSERT IGNORE INTO `subscriptions` (`user_id`, `feed_id`) VALUES (N, N)

Count: 38  Time=7.92s (301s)  Lock=0.00s (0s)  Rows=1.0 (38), superfeeder[superfeeder]@[172.21.1.184]
  SELECT count(DISTINCT `users`.id) AS count_users_id FROM `users`  INNER JOIN `subscriptions` ON `users`.id = `subscriptions`.user_id    WHERE ((`subscriptions`.feed_id = N))

Count: 9  Time=7.67s (69s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  INSERT IGNORE INTO `feeds` (`url`) VALUES ('S')

Count: 244  Time=7.20s (1756s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  UPDATE `feeds` SET `last_error_message` = 'S', `period` = N, `last_sup_update_id` = NULL, `updated_at` = 'S', `modified` = 'S', `fetch_session_id` = 'S', `streamed` = 'S', `last_parse` = 'S', `etag` = 'S', `last_entry_time` = 'S', `min_period` = 'S', `url` = 'S', `id` = 'S', `feed_type` = NULL, `sup_id` = NULL, `sup_url_id` = NULL, `next_fetch` = 'S', `hashed_content` = 'S', `last_maintenance_at` = 'S', `last_ping` = NULL, `last_http_code` = N, `active` = 'S', `last_fetch` = 'S', `created_at` = 'S', `max_period` = 'S' WHERE (`id` = N)

Count: 336  Time=6.85s (2301s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  UPDATE `feeds` SET `last_error_message` = 'S', `period` = N, `last_sup_update_id` = NULL, `updated_at` = 'S', `modified` = 'S', `fetch_session_id` = 'S', `streamed` = 'S', `last_parse` = 'S', `etag` = 'S', `last_entry_time` = 'S', `min_period` = 'S', `url` = 'S', `id` = 'S', `feed_type` = NULL, `sup_id` = NULL, `sup_url_id` = NULL, `next_fetch` = 'S', `hashed_content` = 'S', `last_maintenance_at` = 'S', `last_ping` = NULL, `last_http_code` = N, `active` = 'S', `last_fetch` = 'S', `created_at` = NULL, `max_period` = 'S' WHERE (`id` = N)

Count: 16  Time=6.38s (102s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  UPDATE `feeds` SET `last_error_message` = 'S', `period` = N, `last_sup_update_id` = NULL, `updated_at` = NULL, `modified` = 'S', `fetch_session_id` = 'S', `streamed` = 'S', `last_parse` = 'S', `etag` = 'S', `last_entry_time` = 'S', `min_period` = 'S', `url` = 'S', `id` = 'S', `feed_type` = NULL, `sup_id` = NULL, `sup_url_id` = NULL, `next_fetch` = 'S', `hashed_content` = 'S', `last_maintenance_at` = 'S', `last_ping` = NULL, `last_http_code` = N, `active` = 'S', `last_fetch` = 'S', `created_at` = NULL, `max_period` = 'S' WHERE (`id` = N)

Count: 122  Time=5.91s (721s)  Lock=0.00s (0s)  Rows=1.0 (119), superfeeder[superfeeder]@[172.21.1.158]
  SELECT DISTINCT `users`.* FROM `users` INNER JOIN `subscriptions` ON (`subscriptions`.`user_id` = `users`.`id`) WHERE (`subscriptions`.`feed_id` = N)

Count: 299  Time=5.78s (1727s)  Lock=0.00s (0s)  Rows=1.0 (299), superfeeder[superfeeder]@[172.21.1.158]
  SELECT * FROM `feeds` WHERE (`id` = 'S')

Count: 21  Time=5.48s (115s)  Lock=0.00s (0s)  Rows=1.0 (21), superfeeder[superfeeder]@[172.21.1.158]
  SELECT * FROM `subscriptions` WHERE ((`user_id` = N) AND (`feed_id` = N)) LIMIT N

Count: 27  Time=5.37s (145s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  UPDATE `feeds` SET `last_error_message` = 'S', `period` = 'S', `last_sup_update_id` = NULL, `updated_at` = NULL, `modified` = 'S', `fetch_session_id` = 'S', `streamed` = 'S', `last_parse` = 'S', `etag` = 'S', `last_entry_time` = 'S', `min_period` = 'S', `url` = 'S', `id` = 'S', `feed_type` = NULL, `sup_id` = NULL, `sup_url_id` = NULL, `next_fetch` = 'S', `hashed_content` = 'S', `last_maintenance_at` = 'S', `last_ping` = NULL, `last_http_code` = 'S', `active` = 'S', `last_fetch` = 'S', `created_at` = NULL, `max_period` = 'S' WHERE (`id` = N)

Count: 9  Time=4.33s (39s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  UPDATE `feeds` SET `last_error_message` = 'S', `period` = 'S', `last_sup_update_id` = NULL, `updated_at` = NULL, `modified` = 'S', `fetch_session_id` = 'S', `streamed` = 'S', `last_parse` = 'S', `etag` = 'S', `last_entry_time` = 'S', `min_period` = 'S', `url` = 'S', `id` = 'S', `feed_type` = NULL, `sup_id` = NULL, `sup_url_id` = NULL, `next_fetch` = 'S', `hashed_content` = 'S', `last_maintenance_at` = 'S', `last_ping` = NULL, `last_http_code` = NULL, `active` = 'S', `last_fetch` = 'S', `created_at` = NULL, `max_period` = 'S' WHERE (`id` = N)

Count: 1  Time=4.00s (4s)  Lock=0.00s (0s)  Rows=1.0 (1), superfeeder[superfeeder]@[172.21.1.175]
  select id from users where login='S'

Count: 1  Time=3.00s (3s)  Lock=0.00s (0s)  Rows=22.0 (22), debian-sys-maint[debian-sys-maint]@localhost
  select concat("S",
  TABLE_SCHEMA, "S", TABLE_NAME, "S") 
  from information_schema.TABLES where ENGINE="S"

Count: 1056  Time=0.11s (111s)  Lock=0.00s (0s)  Rows=126.9 (133998), superfeeder[superfeeder]@[172.21.1.184]
  SELECT * FROM `feeds` WHERE (last_maintenance_at < 'S')

Count: 1049  Time=0.00s (1s)  Lock=0.00s (0s)  Rows=3.1 (3303), superfeeder[superfeeder]@[172.21.1.184]
  SELECT * FROM `users` WHERE (one_week_anniversary_sent = N AND activated_at < 'S')

Count: 21  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=0.0 (0), 0users@0hosts
  administrator command: Ping

Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=0.0 (0), debian-sys-maint[debian-sys-maint]@localhost
  select count(*) into @discard from `information_schema`.`COLUMNS`

Count: 8  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=30.0 (240), superfeeder[superfeeder]@[172.21.1.184]
  SELECT DISTINCT `feeds`.* FROM `feeds` INNER JOIN `subscriptions` ON `feeds`.id = `subscriptions`.feed_id WHERE ((`subscriptions`.user_id = N)) AND ((`subscriptions`.user_id = N))  LIMIT N, N

Count: 31  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=1.0 (31), superfeeder[superfeeder]@2hosts
  SELECT count(*) AS count_all FROM `feeds`

Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=0.0 (0), debian-sys-maint[debian-sys-maint]@localhost
  select count(*) into @discard from `information_schema`.`TRIGGERS`

Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=0.0 (0), debian-sys-maint[debian-sys-maint]@localhost
  select count(*) into @discard from `information_schema`.`VIEWS`

Count: 52  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=0.7 (34), superfeeder[superfeeder]@[172.21.1.184]
  SELECT * FROM `users` WHERE (`users`.`remember_token` = 'S')  LIMIT N

Count: 120  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=1.0 (120), superfeeder[superfeeder]@2hosts
  SELECT * FROM `feeds`  ORDER BY feeds.id DESC LIMIT N

Count: 19  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=15.7 (299), superfeeder[superfeeder]@2hosts
  SELECT count(*) AS count_all, last_http_code AS last_http_code FROM `feeds`  GROUP BY last_http_code

Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=0.0 (0), debian-sys-maint[debian-sys-maint]@localhost
  select count(*) into @discard from `information_schema`.`ROUTINES`

Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=1.0 (1), debian-sys-maint[debian-sys-maint]@localhost
  SELECT count(*) FROM mysql.user WHERE user='S' and password='S'

Table definition for feeds:

+---------------------+--------------+------+-----+---------------------+----------------+
| Field               | Type         | Null | Key | Default             | Extra          |
+---------------------+--------------+------+-----+---------------------+----------------+
| id                  | int(11)      | NO   | PRI | NULL                | auto_increment | 
| url                 | varchar(255) | YES  | UNI | NULL                |                | 
| last_parse          | datetime     | YES  |     | 2009-08-10 14:51:46 |                | 
| etag                | varchar(255) | YES  |     | etag                |                | 
| modified            | datetime     | YES  |     | 2009-08-10 14:51:46 |                | 
| active              | tinyint(1)   | YES  | MUL | 1                   |                | 
| last_fetch          | datetime     | YES  |     | 2009-08-10 14:51:46 |                | 
| next_fetch          | datetime     | YES  | MUL | 2009-08-10 14:51:46 |                | 
| fetch_session_id    | varchar(255) | YES  | MUL |                     |                | 
| period              | int(11)      | YES  |     | 240                 |                | 
| hashed_content      | varchar(255) | YES  |     |                     |                | 
| streamed            | tinyint(1)   | YES  |     | 0                   |                | 
| sup_id              | varchar(255) | YES  | MUL | NULL                |                | 
| last_sup_update_id  | varchar(255) | YES  |     | NULL                |                | 
| last_entry_time     | datetime     | YES  |     | 2009-08-10 14:51:46 |                | 
| last_ping           | datetime     | YES  |     | NULL                |                | 
| last_http_code      | int(11)      | YES  |     | NULL                |                | 
| last_error_message  | varchar(255) | YES  |     |                     |                | 
| sup_url_id          | int(11)      | YES  | MUL | NULL                |                | 
| created_at          | datetime     | YES  |     | NULL                |                | 
| updated_at          | datetime     | YES  |     | NULL                |                | 
| last_maintenance_at | datetime     | YES  |     | 2008-08-10 21:51:50 |                | 
| min_period          | int(11)      | YES  |     | 60                  |                | 
| max_period          | int(11)      | YES  |     | 900                 |                | 
+---------------------+--------------+------+-----+---------------------+----------------+

+-------+------------+--------------------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name                             | Seq_in_index | Column_name      | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+--------------------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+
| feeds |          0 | PRIMARY                              |            1 | id               | A         |         166 |     NULL | NULL   |      | BTREE      |         | 
| feeds |          0 | index_feeds_on_url                   |            1 | url              | A         |         166 |     NULL | NULL   | YES  | BTREE      |         | 
| feeds |          1 | index_feeds_on_next_fetch_and_active |            1 | next_fetch       | A         |           1 |     NULL | NULL   | YES  | BTREE      |         | 
| feeds |          1 | index_feeds_on_next_fetch_and_active |            2 | active           | A         |           1 |     NULL | NULL   | YES  | BTREE      |         | 
| feeds |          1 | index_feeds_on_sup_id                |            1 | sup_id           | A         |           1 |     NULL | NULL   | YES  | BTREE      |         | 
| feeds |          1 | index_feeds_on_sup_url_id            |            1 | sup_url_id       | A         |           1 |     NULL | NULL   | YES  | BTREE      |         | 
| feeds |          1 | index_feeds_on_fetch_session_id      |            1 | fetch_session_id | A         |           1 |     NULL | NULL   | YES  | BTREE      |         | 
+-------+------------+--------------------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+
Julien Genestoux

What does the error log say?
Shantanuo

Answers:

12

You should probably not consider MyISAM; InnoDB will work fine for you. MyISAM may be faster when it comes to SELECTs, but (for example) it locks your entire table on updates.

As for InnoDB:

  • In general, always consider adding more RAM before you start sharding (size of the DB ≈ RAM).
  • Look at the following variables (a my.cnf sketch follows this list):
    • innodb_buffer_pool_size (we use roughly 60-70% of our memory for it)
    • innodb_log_file_size
    • innodb_log_buffer_size
    • innodb_flush_log_at_trx_commit
    • innodb_thread_concurrency
    • innodb_flush_method=O_DIRECT
    • innodb_file_per_table
  • Consider switching from InnoDB to XtraDB (same API)
  • Use the Percona builds (they include performance patches from Google and others)
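
For example, a my.cnf sketch along those lines for a dedicated 2 GB box (the numbers are illustrative assumptions, not recommendations for this exact workload; note that changing innodb_log_file_size requires a clean shutdown and removing the old ib_logfile* files before restarting):

[mysqld]
# Roughly 60% of RAM on a dedicated box -- example value, adjust to whatever
# else needs memory (per-connection buffers in particular):
innodb_buffer_pool_size        = 1200M
# Larger redo logs reduce flush pressure (see the clean-shutdown caveat above):
innodb_log_file_size           = 128M
innodb_log_buffer_size         = 4M
# 2 = write the log at every commit but fsync only about once per second
# (risks losing ~1s of transactions on a crash, much cheaper on I/O):
innodb_flush_log_at_trx_commit = 2
innodb_thread_concurrency      = 8
# Bypass the OS page cache so data is not buffered twice:
innodb_flush_method            = O_DIRECT
innodb_file_per_table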

Great reads:

As a side note:

  • A 2 GB slice is simply not enough to run this
  • Also, I found the storage on Slicehost to be rather slow (I/O is a factor)
  • In the cloud it can make sense to shard earlier (because of the RAM limit)
  • I would run all queries through EXPLAIN to make sure the indexes are really being used (see the sketch below)
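
A sketch of that EXPLAIN check, using one of the slow statements from the question (the literal values are placeholders):

-- EXPLAIN only works on SELECT in MySQL 5.0/5.1, so check the SELECT
-- equivalent of the slow UPDATE ... WHERE (next_fetch < ...) LIMIT N:
EXPLAIN SELECT `id` FROM `feeds`
WHERE `next_fetch` < '2009-08-11 10:00:00'
LIMIT 10;

-- "key" should show index_feeds_on_next_fetch_and_active and "rows" should
-- stay small; "key: NULL" or a very large "rows" estimate means a scan.
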
Till
Great answer! Right now our database is 61M in size... so if I take your statement DB ≈ RAM at face value, we have enough RAM! I think the storage is indeed slow (probably because it is virtualized). I will read your links. Thanks!
Julien Genestoux
(and yes, I ran all the queries through EXPLAIN and they all use indexes)
Julien Genestoux
Hey Julien - first, vote up the answers here. ;-) Second, just having RAM is no use if MySQL does not use it. Make sure you review the settings so that it actually gets used.
Till
Yes, that is exactly where I need help ;) I think MySQL is using it (too much, actually, because after a while our server seems to run out of RAM). The DB is 61 MB in size; how come it "eats" 2 GB of RAM?
Julien Genestoux
Can you add the table definition for your "worst" query?
Till
2

I have found MySQLTuner to be very effective in the past - it can make sensible suggestions for configuration changes based on how your server is actually being used. It is based on the tuning-primer script, which is also worth a try.
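
For example (a minimal run, assuming the script has already been downloaded as mysqltuner.pl; it will ask for a MySQL login if it cannot find one):

$ perl mysqltuner.pl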

Dan
1

Does your app release the connections back to the pool when it is done with them?


No, because it is actually never done with them. It is not like a web application where the app "dies" between connections.
Julien Genestoux
0

If you can replace the WHERE clauses with specific ids (assuming there are not many of them), that can speed up the updates.
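
A sketch of that idea against the feeds table from the question (the URLs, ids and dates are made-up examples):

-- Instead of matching on the varchar url column:
UPDATE `feeds` SET `next_fetch` = '2009-08-11 10:00:00'
WHERE `feeds`.`url` IN ('http://example.com/a.xml', 'http://example.com/b.xml');

-- ...resolve the ids first (or keep them around in the application) and
-- update by the integer primary key:
SELECT `id`, `url` FROM `feeds`
WHERE `url` IN ('http://example.com/a.xml', 'http://example.com/b.xml');

UPDATE `feeds` SET `next_fetch` = '2009-08-11 10:00:00'
WHERE `feeds`.`id` IN (42, 43);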

Using a Sphinx search, which you can cluster easily, to find the items up front would also help. I have heard it can return search results faster than MySQL even knows it has indexes.

http://www.sphinxsearch.com/

Mark L.
Sorry, but I don't see the point? We don't do any searching in our database...
Julien Genestoux
You are: ... WHERE (`feeds`.`url` IN (NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)) ... and ... WHERE (`feeds`.`url` IN ('S','S')) ... These could be more expensive than looking up the integer ids via Sphinx first. With Sphinx you can also easily move that load to another server without having to worry about master-slave setups and binlog lag.
Mark L
feeds.url is an index. Is the primary key faster than an index?
Julien Genestoux
If it were an integer instead of a VARCHAR(10) or something like that (I assume), the integer would take less space and MySQL could find it faster in the index.
Mark L
But wouldn't that be really marginal?
Julien Genestoux
0

Check again the queries that take so much time. Are the tables indexed properly?

You can also run MySQLTuner to optimize your MySQL settings.

hdanniel
Yes, that happens when the server locks up, and I checked all the queries again: they all use indexes. (Plus, the slowest queries are also INSERT queries... which has nothing to do with indexes, AFAIK.)
Julien Genestoux