Introduction: Understanding Database Challenges Under High Concurrency
High-concurrency access is the norm for today's internet applications. When an application faces million-scale traffic, the MySQL database often becomes the system bottleneck. Under high concurrency the database must serve huge numbers of reads and writes at once, which drives up connection counts, CPU load, I/O pressure, and lock contention. Without targeted optimization, response times climb sharply and the service can become unavailable altogether.
The core goal of high-concurrency tuning is to maximize the database's throughput and responsiveness while preserving data consistency. That requires working across several dimensions at once: architecture design, configuration, SQL tuning, and caching strategy. This article walks through MySQL optimization strategies for high-concurrency scenarios and shows how to build a database tier that can stand up to million-scale traffic.
1. Connection Layer Optimization: Resolving the Connection Bottleneck
1.1 Connection Count Configuration
MySQL's connection settings are the first gate in high-concurrency tuning. The defaults rarely satisfy a high-concurrency workload and should be adjusted to the actual business load.
Key parameters:
- max_connections: the maximum number of concurrent connections MySQL allows
- thread_cache_size: the size of the thread cache, used to reuse connection threads
- wait_timeout: idle timeout for non-interactive connections
- interactive_timeout: idle timeout for interactive connections
Example configuration:
-- Check the current connection settings
SHOW VARIABLES LIKE 'max_connections';
SHOW VARIABLES LIKE 'thread_cache_size';
SHOW VARIABLES LIKE 'wait_timeout';
-- Adjust dynamically (requires the SUPER privilege)
SET GLOBAL max_connections = 2000;
SET GLOBAL thread_cache_size = 100;
SET GLOBAL wait_timeout = 600;
SET GLOBAL interactive_timeout = 600;
Configuration recommendations:
- max_connections: set to 1.5-2x the expected concurrent connections, but within what memory allows (each connection can use on the order of 10MB)
- thread_cache_size: 100-200 noticeably reduces the overhead of creating and destroying threads
- wait_timeout and interactive_timeout: around 600 seconds (10 minutes), so idle connections don't pin resources
1.2 Applying Connection Pooling
In high-concurrency scenarios, constantly opening and closing database connections wastes resources. A connection pool reuses connections and significantly improves performance.
Java JDBC connection pool example (HikariCP):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
public class DatabaseConnectionPool {
private static HikariDataSource dataSource;
static {
HikariConfig config = new HikariConfig();
// Database connection settings
config.setJdbcUrl("jdbc:mysql://localhost:3306/high_concurrency_db?useSSL=false&serverTimezone=UTC");
config.setUsername("dbuser");
config.setPassword("dbpassword");
// Pool settings
config.setMaximumPoolSize(50); // maximum pool size
config.setMinimumIdle(10); // minimum idle connections
config.setConnectionTimeout(30000); // connection timeout (ms)
config.setIdleTimeout(600000); // idle timeout (ms)
config.setMaxLifetime(1800000); // maximum connection lifetime (ms)
config.setLeakDetectionThreshold(60000); // leak detection threshold (ms)
// MySQL driver performance properties
config.addDataSourceProperty("cachePrepStmts", "true");
config.addDataSourceProperty("prepStmtCacheSize", "250");
config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
config.addDataSourceProperty("useServerPrepStmts", "true");
config.addDataSourceProperty("useLocalSessionState", "true");
config.addDataSourceProperty("rewriteBatchedStatements", "true");
config.addDataSourceProperty("cacheResultSetMetadata", "true");
config.addDataSourceProperty("cacheServerConfiguration", "true");
config.addDataSourceProperty("elideSetAutoCommits", "true");
config.addDataSourceProperty("maintainTimeStats", "false");
dataSource = new HikariDataSource(config);
}
public static DataSource getDataSource() {
return dataSource;
}
// Example: run a query through the pool
public void executeQuery() {
    try (Connection conn = dataSource.getConnection();
         PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE id = ?")) {
        stmt.setInt(1, 12345); // bind parameters BEFORE executing the statement
        try (ResultSet rs = stmt.executeQuery()) {
            while (rs.next()) {
                System.out.println("User: " + rs.getString("username"));
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
}
Connection pool configuration essentials:
- Maximum pool size: derive it from peak QPS and average query time (see the sketch below): max pool size = (peak QPS × average query time in ms) / 1000
- Prepared statement caching: enabling cachePrepStmts and useServerPrepStmts cuts SQL parsing overhead
- Batching: rewriteBatchedStatements=true dramatically speeds up bulk inserts
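To make the sizing formula concrete, here is a minimal Java sketch; the 20% headroom multiplier is an assumption added for burst tolerance, not part of the formula:
public final class PoolSizing {
    // max pool size = (peak QPS × average query time in ms) / 1000
    public static int estimatePoolSize(int peakQps, double avgQueryMillis) {
        int base = (int) Math.ceil(peakQps * avgQueryMillis / 1000.0);
        return (int) Math.ceil(base * 1.2); // assumed 20% headroom for traffic spikes
    }
    public static void main(String[] args) {
        // 3000 QPS at 20 ms/query => 60 busy connections => 72 with headroom
        System.out.println(estimatePoolSize(3000, 20));
    }
}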
2. Storage Engine Optimization: Choosing the Right Engine
2.1 In-Depth InnoDB Tuning
InnoDB is MySQL's default storage engine and the first choice for high concurrency. It supports row-level locking, foreign key constraints, and transactions, which suits mixed read/write workloads.
Key configuration parameters:
-- Check current InnoDB settings
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'innodb_log_file_size';
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
# Optimized configuration (my.cnf or my.ini)
[mysqld]
# Buffer pool size: 50%-70% of physical memory
innodb_buffer_pool_size = 8G
# Redo log size: 1G; too large slows crash recovery, too small forces frequent checkpoints
innodb_log_file_size = 1G
# Number of redo log files: 2 is a good default
innodb_log_files_in_group = 2
# Flush policy: 1 = flush on every commit (safest), 2 = flush about once per second (faster)
innodb_flush_log_at_trx_commit = 1
# Flush method: O_DIRECT bypasses the OS page cache
innodb_flush_method = O_DIRECT
# I/O capacity: match the storage's IOPS (HDD ~200, SSD 2000+)
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000
# Concurrency tickets: 0 means unlimited
innodb_thread_concurrency = 0
# Page size: 16K default; can only be chosen when initializing the data directory
innodb_page_size = 16384
# Enable strict mode
innodb_strict_mode = ON
2.2 Choosing Between MyISAM and InnoDB
Although InnoDB is the mainstream choice, MyISAM still has advantages in specific scenarios:
| Feature | InnoDB | MyISAM |
|---|---|---|
| Transactions | Supported | Not supported |
| Lock granularity | Row-level locks | Table-level locks |
| Foreign keys | Supported | Not supported |
| Crash recovery | Supported | Not supported |
| Full-text index | Supported (5.6+) | Supported |
| Read performance | Good | Excellent |
| Write performance | Good under concurrency | Fast for bulk loads, but table locks limit concurrent writes |
Recommendations:
- Read-heavy with some writes: InnoDB (row locks keep reads and writes from blocking each other)
- Read-only workloads: MyISAM (minimal locking overhead)
- Transactions required: InnoDB
- Full-text search required: InnoDB (5.6+) or MyISAM
3. SQL Optimization: Improving Performance at the Query Level
3.1 Index Optimization Strategies
Indexes are the most effective tool for speeding up queries. Under high concurrency, well-designed indexes can resolve 90%+ of query performance problems.
Index design principles:
- Leftmost-prefix rule: a composite index matches only from its leftmost column onward
- Selectivity: prefer columns with high cardinality (many distinct values)
- Covering indexes: include every selected column in the index to avoid row lookups
- No redundant indexes: audit and drop duplicate indexes regularly
Example: optimizing a user order query
-- Original table definition
CREATE TABLE orders (
id BIGINT PRIMARY KEY AUTO_INCREMENT,
user_id BIGINT NOT NULL,
order_no VARCHAR(64) NOT NULL,
amount DECIMAL(10,2) NOT NULL,
status TINYINT NOT NULL,
create_time DATETIME NOT NULL,
update_time DATETIME NOT NULL,
INDEX idx_user_id (user_id),
INDEX idx_create_time (create_time)
);
-- Before: at best this uses idx_user_id, then filters on status and sorts with a filesort
SELECT * FROM orders
WHERE user_id = 12345 AND status = 1
ORDER BY create_time DESC;
-- Option 1: composite index
CREATE INDEX idx_user_status_time ON orders(user_id, status, create_time);
-- Option 2: covering index (every selected column is in the index)
CREATE INDEX idx_user_status_time_cover ON orders(user_id, status, create_time, amount, order_no);
-- Optimized query (served entirely from the covering index)
SELECT order_no, amount, create_time FROM orders
WHERE user_id = 12345 AND status = 1
ORDER BY create_time DESC;
-- Inspect the execution plan
EXPLAIN SELECT order_no, amount, create_time FROM orders
WHERE user_id = 12345 AND status = 1
ORDER BY create_time DESC;
Execution plan analysis:
+----+-------------+--------+------------+------+--------------------------------------------------+----------------------------+---------+-------------+------+----------+--------------------------+
| id | select_type | table  | partitions | type | possible_keys                                    | key                        | key_len | ref         | rows | filtered | Extra                    |
+----+-------------+--------+------------+------+--------------------------------------------------+----------------------------+---------+-------------+------+----------+--------------------------+
|  1 | SIMPLE      | orders | NULL       | ref  | idx_user_status_time,idx_user_status_time_cover  | idx_user_status_time_cover | 9       | const,const |    1 |   100.00 | Using where; Using index |
+----+-------------+--------+------------+------+--------------------------------------------------+----------------------------+---------+-------------+------+----------+--------------------------+
Key columns:
- type: ref means an index lookup on a non-unique index
- key: the idx_user_status_time_cover covering index was chosen (key_len 9 = BIGINT 8 + TINYINT 1)
- Extra: Using index means the query was answered from the index alone, with no row lookups back to the table
3.2 Common Scenarios That Defeat Indexes
-- 1. Implicit type conversion (index unusable; see the JDBC sketch after this block)
-- user_id is a VARCHAR column, but the query passes a number
SELECT * FROM users WHERE user_id = 123; -- index not used
SELECT * FROM users WHERE user_id = '123'; -- index used
-- 2. Functions applied to the indexed column (index unusable)
SELECT * FROM orders WHERE DATE(create_time) = '2024-01-01'; -- index not used
SELECT * FROM orders WHERE create_time >= '2024-01-01' AND create_time < '2024-01-02'; -- index used
-- 3. LIKE with a leading wildcard
SELECT * FROM users WHERE name LIKE '%张%'; -- index not used
SELECT * FROM users WHERE name LIKE '张%'; -- index used (prefix match)
-- 4. OR conditions (index partially unusable)
SELECT * FROM orders WHERE user_id = 123 OR amount > 100; -- may scan the whole table
-- Optimization: rewrite as UNION ALL
SELECT * FROM orders WHERE user_id = 123
UNION ALL
SELECT * FROM orders WHERE amount > 100 AND user_id != 123;
-- 5. Negated predicates (!=, NOT IN, NOT LIKE)
SELECT * FROM orders WHERE status != 0; -- may scan the whole table
-- Optimization: enumerate the wanted values instead
SELECT * FROM orders WHERE status IN (1, 2, 3);
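The implicit-conversion trap in case 1 usually originates in application code: binding the parameter with the wrong Java type produces exactly that query. A minimal JDBC sketch, reusing the DatabaseConnectionPool class from section 1.2:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class TypedQueryExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DatabaseConnectionPool.getDataSource().getConnection();
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT * FROM users WHERE user_id = ?")) {
            stmt.setString(1, "123"); // type matches the VARCHAR column: the index stays usable
            // stmt.setInt(1, 123);   // would reproduce the implicit cast and skip the index
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("user_id"));
                }
            }
        }
    }
}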
3.3 Pagination Optimization
Deep pagination performs terribly under high concurrency and needs special handling.
The problem:
-- Page 1,000 at 20 rows per page
SELECT * FROM orders ORDER BY id LIMIT 20000, 20; -- reads 20,020 rows and discards the first 20,000
Option 1: deferred join (covering index)
-- Fetch the primary keys first, then join back for the full rows
SELECT o.* FROM orders o
INNER JOIN (
SELECT id FROM orders ORDER BY id LIMIT 20000, 20
) AS tmp ON o.id = tmp.id;
Option 2: keyset pagination (seek method, implemented in the application layer)
-- Use the id of the last row on the previous page as the cursor
-- SELECT * FROM orders WHERE id > 10000 ORDER BY id LIMIT 20;
// Application-layer implementation (Java; mapRow is an assumed row-mapping helper)
public List<Order> getOrders(long lastId, int pageSize) throws SQLException {
    String sql = "SELECT * FROM orders WHERE id > ? ORDER BY id LIMIT ?";
    try (Connection conn = DatabaseConnectionPool.getDataSource().getConnection();
         PreparedStatement stmt = conn.prepareStatement(sql)) {
        stmt.setLong(1, lastId); // id of the last row on the previous page
        stmt.setInt(2, pageSize);
        try (ResultSet rs = stmt.executeQuery()) {
            List<Order> orders = new ArrayList<>();
            while (rs.next()) orders.add(mapRow(rs));
            return orders;
        }
    }
}
Option 3: offload to Elasticsearch. For very large datasets, let Elasticsearch handle search and pagination while MySQL remains the system of record for row details.
4. Architecture-Level Optimization: Distribution and Read/Write Splitting
4.1 Read/Write Splitting
Read/write splitting is the classic architecture for high concurrency: replication keeps slaves in sync with the master, and read traffic is routed to the slaves.
Architecture:
Application servers
↓
Load balancer
↓
├── Master: handles writes
└── Slaves: handle reads (one or more)
MySQL replication setup:
Master configuration (my.cnf):
[mysqld]
server-id = 1
log-bin = mysql-bin
binlog_format = ROW
expire_logs_days = 7
sync_binlog = 1
Slave configuration (my.cnf):
[mysqld]
server-id = 2
relay-log = mysql-relay-bin
read-only = 1
Create a replication user on the master:
CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
FLUSH PRIVILEGES;
-- Check the master's binlog position
SHOW MASTER STATUS;
Configure and start replication on the slave:
CHANGE MASTER TO
MASTER_HOST='master_ip',
MASTER_USER='repl',
MASTER_PASSWORD='repl_password',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=12345;
START SLAVE;
SHOW SLAVE STATUS\G
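Because replication is asynchronous, reads routed to a slave can be stale. Watching Seconds_Behind_Master on each slave tells you how stale. A minimal JDBC sketch (host and credentials are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReplicationLagCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials: point this at each slave in turn
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://slave0:3306/high_concurrency_db", "monitor", "monitor_pwd");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SHOW SLAVE STATUS")) {
            if (rs.next()) {
                long lag = rs.getLong("Seconds_Behind_Master"); // NULL when replication is broken
                System.out.println(rs.wasNull() ? "replication stopped!" : "lag: " + lag + "s");
            }
        }
    }
}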
Application-layer read/write splitting (Java + ShardingSphere):
// Maven dependency
// <dependency>
//     <groupId>org.apache.shardingsphere</groupId>
//     <artifactId>shardingsphere-jdbc-core-spring-boot-starter</artifactId>
//     <version>5.2.1</version>
// </dependency>
# application.yml
spring:
shardingsphere:
datasource:
names: master,slave0,slave1
master:
type: com.zaxxer.hikari.HikariDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
jdbc-url: jdbc:mysql://master:3306/high_concurrency_db
username: root
password: password
slave0:
type: com.zaxxer.hikari.HikariDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
jdbc-url: jdbc:mysql://slave0:3306/high_concurrency_db
username: root
password: password
slave1:
type: com.zaxxer.hikari.HikariDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
jdbc-url: jdbc:mysql://slave1:3306/high_concurrency_db
username: root
password: password
rules:
readwrite-splitting:
data-sources:
ds0:
type: Static
props:
write-data-source-name: master
read-data-source-names: slave0,slave1
load-balancer-name: round_robin
load-balancers:
round_robin:
type: ROUND_ROBIN
4.2 Sharding: Splitting Databases and Tables
When a single table grows past tens of millions of rows, or a single instance's connection count becomes the bottleneck, shard the data across tables and databases.
Choosing a sharding key:
- Pick a column that appears in most queries
- Make sure data distributes evenly (avoid hot shards)
- Keep it aligned with the business model
Example: sharding the orders table (modulo on user ID)
-- Sharding rule: user_id % 4
-- Shard tables: orders_0, orders_1, orders_2, orders_3
-- Create the shard tables
CREATE TABLE orders_0 LIKE orders;
CREATE TABLE orders_1 LIKE orders;
CREATE TABLE orders_2 LIKE orders;
CREATE TABLE orders_3 LIKE orders;
Application-layer sharding logic (Java):
public class OrderSharding {
private static final int SHARD_COUNT = 4;
public String getShardingTable(Long userId) {
int shardIndex = (int) (userId % SHARD_COUNT);
return "orders_" + shardIndex;
}
public void insertOrder(Order order) {
    String tableName = getShardingTable(order.getUserId());
    String sql = String.format("INSERT INTO %s (user_id, order_no, amount, status) VALUES (?, ?, ?, ?)", tableName);
    // execute the insert against the resolved shard table...
}
public Order getOrder(Long userId, Long orderId) {
    String tableName = getShardingTable(userId);
    String sql = String.format("SELECT * FROM %s WHERE id = ? AND user_id = ?", tableName);
    // execute the query against the resolved shard table...
    return null; // stub: map and return the Order
}
}
Automatic sharding with ShardingSphere:
# ShardingSphere configuration (application.yml)
spring:
shardingsphere:
datasource:
names: ds0
ds0:
type: com.zaxxer.hikari.HikariDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
jdbc-url: jdbc:mysql://localhost:3306/high_concurrency_db
username: root
password: password
rules:
sharding:
tables:
orders:
actual-data-nodes: ds0.orders_$->{0..3}
table-strategy:
standard:
sharding-column: user_id
sharding-algorithm-name: mod4
sharding-algorithms:
mod4:
type: MOD
props:
sharding-count: 4
5. Caching Strategy: Reducing Database Pressure
5.1 Multi-Level Cache Architecture
Under high concurrency, the cache is the last line of defense protecting the database.
Design:
User request
↓
Local cache (Caffeine) → on miss
↓
Distributed cache (Redis) → on miss
↓
Database query → result backfilled into both caches
Local cache implementation (Caffeine):
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.Cache;
import java.util.concurrent.TimeUnit;
public class LocalCacheManager {
    // User cache: 5-minute TTL, at most 10,000 entries
    private static final Cache<Long, User> USER_CACHE = Caffeine.newBuilder()
            .expireAfterWrite(5, TimeUnit.MINUTES)
            .maximumSize(10000)
            .recordStats() // enable hit/miss statistics
            .build();
    public User getUser(Long userId) {
        return USER_CACHE.get(userId, id -> {
            // cache miss: load from the database
            return loadUserFromDB(id);
        });
    }
    private User loadUserFromDB(Long userId) {
        // simulated database lookup
        return new User(userId, "User_" + userId);
    }
    // Print cache statistics
    public void printStats() {
        System.out.println("hit rate: " + USER_CACHE.stats().hitRate());
        System.out.println("size: " + USER_CACHE.estimatedSize());
    }
}
Redis distributed cache implementation:
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;
import com.fasterxml.jackson.databind.ObjectMapper;
public class RedisCacheManager {
    private JedisPool jedisPool;
    private ObjectMapper objectMapper;
    public RedisCacheManager() {
        JedisPoolConfig config = new JedisPoolConfig();
        config.setMaxTotal(100);
        config.setMaxIdle(50);
        config.setMinIdle(10);
        config.setTestOnBorrow(true);
        this.jedisPool = new JedisPool(config, "localhost", 6379, 2000, "password");
        this.objectMapper = new ObjectMapper();
    }
    // Read from cache (with null-value caching to defend against penetration)
    public <T> T get(String key, Class<T> clazz, long expireSeconds) {
        try (Jedis jedis = jedisPool.getResource()) {
            String value = jedis.get(key);
            if (value == null) {
                return null;
            }
            // null sentinel: the key is known not to exist in the database
            if ("NULL".equals(value)) {
                return null;
            }
            return objectMapper.readValue(value, clazz);
        } catch (Exception e) {
            return null;
        }
    }
    // Write to cache
    public void set(String key, Object value, long expireSeconds) {
        try (Jedis jedis = jedisPool.getResource()) {
            try {
                // store strings (including the "NULL" sentinel) raw so get() can
                // recognize them; serialize everything else as JSON
                String valueStr = (value instanceof String) ? (String) value
                        : objectMapper.writeValueAsString(value);
                jedis.setex(key, expireSeconds, valueStr);
            } catch (Exception e) {
                // serialization failed: store the null sentinel briefly instead
                jedis.setex(key, 60, "NULL");
            }
        }
    }
    // Fetch a user through the full cache hierarchy
    // (localCache, redisCache, and userRepository are assumed collaborators, not shown here)
    public User getUserWithCache(Long userId) {
        String cacheKey = "user:" + userId;
        // 1. local cache
        User user = localCache.getIfPresent(cacheKey);
        if (user != null) {
            return user;
        }
        // 2. Redis
        user = redisCache.get(cacheKey, User.class, 300);
        if (user != null) {
            localCache.put(cacheKey, user);
            return user;
        }
        // 3. database
        user = userRepository.findById(userId);
        if (user != null) {
            redisCache.set(cacheKey, user, 300);
            localCache.put(cacheKey, user);
        } else {
            // cache the null sentinel to stop repeated lookups of a missing key
            redisCache.set(cacheKey, "NULL", 60);
        }
        return user;
    }
}
5.2 Handling Cache Penetration, Breakdown, and Avalanche
Cache penetration (queries for data that does not exist):
// Bloom filter + null-value caching (Guava's BloomFilter, matching the mightContain/put API used here)
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
public class CachePenetrationFilter {
    // ~1M expected ids at a 1% false-positive rate
    private final BloomFilter<Long> bloomFilter =
            BloomFilter.create(Funnels.longFunnel(), 1_000_000, 0.01);
    public boolean mightExist(Long id) {
        // false means the id definitely does not exist: skip Redis and the DB entirely
        return bloomFilter.mightContain(id);
    }
    public void addToFilter(Long id) {
        bloomFilter.put(id); // call this for every id that actually exists
    }
}
Cache breakdown (a hot key expires):
// Mutex (distributed lock) pattern
public User getUserWithLock(Long userId) {
    String lockKey = "lock:user:" + userId;
    String cacheKey = "user:" + userId;
    // try the cache first
    User user = redisCache.get(cacheKey, User.class, 300);
    if (user != null) {
        return user;
    }
    // acquire a distributed lock (redisLock is an assumed helper, e.g. SET NX + Lua-scripted unlock)
    String requestId = UUID.randomUUID().toString();
    boolean locked = redisLock.tryLock(lockKey, requestId, 10);
    if (locked) {
        try {
            // double-check: another thread may have rebuilt the cache while we waited
            user = redisCache.get(cacheKey, User.class, 300);
            if (user != null) {
                return user;
            }
            // rebuild from the database
            user = userRepository.findById(userId);
            if (user != null) {
                redisCache.set(cacheKey, user, 300);
            } else {
                redisCache.set(cacheKey, "NULL", 60);
            }
            return user;
        } finally {
            redisLock.unlock(lockKey, requestId);
        }
    } else {
        // lock not acquired: back off briefly, then retry
        // (a loop with a retry cap would be safer than unbounded recursion)
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return getUserWithLock(userId);
    }
}
Cache avalanche (many keys expire at once):
// Randomized TTLs
public void setWithRandomExpire(String key, Object value, int baseExpire) {
    // base TTL plus 0-300 seconds of jitter
    int randomExpire = baseExpire + new Random().nextInt(300);
    redisCache.set(key, value, randomExpire);
}
// Hot keys: long TTL + background refresh
// (use one shared executor; creating a new one per call would leak threads)
private static final ScheduledExecutorService REFRESH_EXECUTOR = Executors.newScheduledThreadPool(1);
public void setHotData(String key, Object value) {
    // long TTL (24 hours) so the key effectively never goes cold
    redisCache.set(key, value, 86400);
    // refresh every 23 hours, i.e. before the TTL can expire
    REFRESH_EXECUTOR.scheduleAtFixedRate(() -> {
        Object newValue = loadFromDB(key); // loadFromDB: assumed data-access helper
        redisCache.set(key, newValue, 86400);
    }, 23, 23, TimeUnit.HOURS);
}
6. In-Depth Database Parameter Tuning
6.1 Core InnoDB Parameters
-- Buffer pool (the single most important setting): 50%-70% of physical memory
-- Resizable online since MySQL 5.7:
SET GLOBAL innodb_buffer_pool_size = 8589934592; -- 8GB
-- Buffer pool instances (multiple instances reduce mutex contention)
-- Static: set in my.cnf and restart
-- innodb_buffer_pool_instances = 8
-- Page size: 16K by default; can only be chosen when initializing the data directory
-- innodb_page_size = 16384
-- Redo log files (static: my.cnf + restart)
-- innodb_log_file_size = 1073741824 (1GB)
-- innodb_log_files_in_group = 2
-- Flush policy (durability vs. speed)
-- 0: write and flush about once per second (fastest, can lose ~1s of commits)
-- 1: write and flush on every commit (safest, slowest)
-- 2: write to the OS cache on commit, flush about once per second (middle ground)
SET GLOBAL innodb_flush_log_at_trx_commit = 1;
-- I/O tuning
-- innodb_flush_method = O_DIRECT (static: bypasses the OS page cache)
SET GLOBAL innodb_io_capacity = 2000; -- HDD ~200, SSD 2000+
SET GLOBAL innodb_io_capacity_max = 4000; -- burst I/O ceiling
-- Concurrency control
SET GLOBAL innodb_thread_concurrency = 0; -- 0 = unlimited
-- innodb_read_io_threads = 8 (static: my.cnf + restart)
-- innodb_write_io_threads = 8 (static: my.cnf + restart)
-- Redo log buffer (static before MySQL 8.0)
-- innodb_log_buffer_size = 67108864 (64MB)
-- Strict mode
SET GLOBAL innodb_strict_mode = ON;
6.2 Query Cache and Thread Settings
-- Query cache (removed in MySQL 8.0; only configurable in 5.7 and earlier)
SET GLOBAL query_cache_type = 0; -- keep it off under high concurrency
SET GLOBAL query_cache_size = 0;
-- Thread cache
SET GLOBAL thread_cache_size = 100;
-- Connection timeouts
SET GLOBAL wait_timeout = 600;
SET GLOBAL interactive_timeout = 600;
-- Sort buffer (allocated per connection; SET GLOBAL affects new sessions only)
SET GLOBAL sort_buffer_size = 2097152; -- 2MB
-- Read buffer (per connection)
SET GLOBAL read_buffer_size = 1048576; -- 1MB
-- In-memory temporary table limits
SET GLOBAL tmp_table_size = 134217728; -- 128MB
SET GLOBAL max_heap_table_size = 134217728; -- 128MB
-- Join buffer (per join, per connection)
SET GLOBAL join_buffer_size = 2097152; -- 2MB
6.3 Monitoring and Diagnostics
-- Current connections
SHOW STATUS LIKE 'Threads_connected';
SHOW STATUS LIKE 'Threads_running';
-- Slow query count
SHOW STATUS LIKE 'Slow_queries';
-- InnoDB buffer pool hit rate (target > 99%)
SHOW STATUS LIKE 'Innodb_buffer_pool_read_requests';
SHOW STATUS LIKE 'Innodb_buffer_pool_reads';
-- hit rate = (read_requests - reads) / read_requests (automated in the sketch below)
-- InnoDB row lock waits
SHOW ENGINE INNODB STATUS\G
-- Currently executing queries
SHOW PROCESSLIST;
-- Tables currently locked
SHOW OPEN TABLES WHERE In_use > 0;
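The hit-rate formula above is easy to automate. A minimal JDBC sketch (connection credentials are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class BufferPoolHitRate {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/high_concurrency_db", "monitor", "monitor_pwd");
             Statement st = conn.createStatement()) {
            long requests = readStatus(st, "Innodb_buffer_pool_read_requests");
            long reads = readStatus(st, "Innodb_buffer_pool_reads");
            // hit rate = (read_requests - reads) / read_requests
            double hitRate = requests == 0 ? 0 : 100.0 * (requests - reads) / requests;
            System.out.printf("buffer pool hit rate: %.2f%%%n", hitRate);
        }
    }
    private static long readStatus(Statement st, String name) throws SQLException {
        try (ResultSet rs = st.executeQuery("SHOW GLOBAL STATUS LIKE '" + name + "'")) {
            return rs.next() ? rs.getLong("Value") : 0;
        }
    }
}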
7. Monitoring and Alerting
7.1 Key Metrics
Core metrics:
- QPS/TPS: queries/transactions per second
- Connections: active and peak connection counts
- Slow queries: statements taking longer than 1 second
- Lock waits: row-lock and table-lock wait times
- Buffer pool hit rate: InnoDB buffer pool hit rate
- Replication lag: delay between master and slaves
7.2 Example Monitoring Script
#!/bin/bash
# MySQL monitoring script
MYSQL_CMD="mysql -u root -ppassword -h localhost"
LOG_FILE="/var/log/mysql_monitor.log"
while true; do
    TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
    # QPS: delta of the cumulative Queries counter over one second
    QPS=$($MYSQL_CMD -e "SHOW STATUS LIKE 'Queries'" | awk 'NR==2{print $2}')
    sleep 1
    QPS2=$($MYSQL_CMD -e "SHOW STATUS LIKE 'Queries'" | awk 'NR==2{print $2}')
    QPS=$((QPS2 - QPS))
    # Connection count
    THREADS_CONNECTED=$($MYSQL_CMD -e "SHOW STATUS LIKE 'Threads_connected'" | awk 'NR==2{print $2}')
    # Slow queries
    SLOW_QUERIES=$($MYSQL_CMD -e "SHOW STATUS LIKE 'Slow_queries'" | awk 'NR==2{print $2}')
    # InnoDB buffer pool hit rate
    READ_REQUESTS=$($MYSQL_CMD -e "SHOW STATUS LIKE 'Innodb_buffer_pool_read_requests'" | awk 'NR==2{print $2}')
    READS=$($MYSQL_CMD -e "SHOW STATUS LIKE 'Innodb_buffer_pool_reads'" | awk 'NR==2{print $2}')
    if [ $READ_REQUESTS -gt 0 ]; then
        HIT_RATE=$(echo "scale=4; ($READ_REQUESTS - $READS) / $READ_REQUESTS * 100" | bc)
    else
        HIT_RATE=0
    fi
    # Append to the log
    echo "$TIMESTAMP QPS:$QPS Connections:$THREADS_CONNECTED SlowQueries:$SLOW_QUERIES HitRate:$HIT_RATE%" >> $LOG_FILE
    # Alert thresholds
    if [ $THREADS_CONNECTED -gt 1500 ]; then
        echo "ALERT: High connection count: $THREADS_CONNECTED" | tee -a $LOG_FILE
    fi
    if [ $QPS -gt 5000 ]; then
        echo "ALERT: High QPS: $QPS" | tee -a $LOG_FILE
    fi
    sleep 10
done
7.3 Slow Query Log Analysis
-- Enable the slow query log
SET GLOBAL slow_query_log = ON;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
SET GLOBAL long_query_time = 1; -- log anything slower than 1 second
SET GLOBAL log_queries_not_using_indexes = ON;
# Analyze the log with mysqldumpslow (shell commands, not SQL)
mysqldumpslow -s t -t 10 /var/log/mysql/slow.log
# Or with pt-query-digest (Percona Toolkit)
pt-query-digest /var/log/mysql/slow.log > slow_report.txt
8. Case Study: A Complete Plan for Million-Scale Traffic
8.1 Scenario
Business scenario: a flash-sale (seckill) system with a peak of 100,000 QPS, 50 million rows in the main table, and up to 1 million concurrent users.
8.2 The Complete Plan
1. Architecture
Nginx layer: rate limiting (token bucket)
↓
Application layer: local cache + Redis cluster
↓
Database layer: replication + sharding (4 masters, each with 2 slaves)
↓
Storage layer: SSDs + tuned InnoDB
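The service code in step 4 below calls an injected rateLimiter that the original listing does not define. A minimal per-user token-bucket sketch built on Guava's RateLimiter (one assumed design among several; the Nginx layer would normally enforce a coarser global limit):
import com.google.common.util.concurrent.RateLimiter;
import java.util.concurrent.ConcurrentHashMap;

public class UserRateLimiter {
    // one token bucket per user, 5 permits/second each (assumed limit)
    private final ConcurrentHashMap<Long, RateLimiter> limiters = new ConcurrentHashMap<>();

    public boolean allowRequest(Long userId) {
        RateLimiter limiter = limiters.computeIfAbsent(userId, id -> RateLimiter.create(5.0));
        return limiter.tryAcquire(); // false => reject immediately instead of queueing
    }
    // In production, bound this map (e.g. a Caffeine cache with expiry) so it cannot grow forever.
}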
2. Database configuration
# Master configuration (my.cnf)
[mysqld]
# Basics
server-id = 1
port = 3306
socket = /var/run/mysqld/mysqld.sock
# Connections
max_connections = 2000
thread_cache_size = 200
wait_timeout = 300
interactive_timeout = 300
# InnoDB
innodb_buffer_pool_size = 24G # 75% of 32GB RAM
innodb_buffer_pool_instances = 12
innodb_log_file_size = 2G
innodb_log_files_in_group = 3
innodb_flush_log_at_trx_commit = 2 # favor throughput over strict durability
innodb_flush_method = O_DIRECT
innodb_io_capacity = 4000
innodb_io_capacity_max = 8000
innodb_thread_concurrency = 0
innodb_read_io_threads = 16
innodb_write_io_threads = 16
innodb_log_buffer_size = 128M
# Logging
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 0.5 # log anything slower than 0.5s
log_queries_not_using_indexes = 1
# Miscellaneous
tmp_table_size = 2G
max_heap_table_size = 2G
sort_buffer_size = 4M
read_buffer_size = 2M
join_buffer_size = 4M
3. Table design
-- Orders table (sharded)
CREATE TABLE orders (
    id BIGINT PRIMARY KEY AUTO_INCREMENT,
    user_id BIGINT NOT NULL,
    product_id BIGINT NOT NULL,
    order_no VARCHAR(64) NOT NULL,
    amount DECIMAL(10,2) NOT NULL,
    status TINYINT NOT NULL DEFAULT 0,
    create_time DATETIME NOT NULL,
    update_time DATETIME NOT NULL,
    version INT NOT NULL DEFAULT 0, -- optimistic-lock version
    INDEX idx_product_id (product_id),
    INDEX idx_create_time (create_time),
    INDEX idx_user_status (user_id, status) -- also serves user_id-only lookups (leftmost prefix), so no separate idx_user_id
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
-- Stock table (separate database, to isolate the hot rows)
CREATE TABLE stock (
    product_id BIGINT PRIMARY KEY,
    stock INT NOT NULL,
    version INT NOT NULL -- optimistic-lock version; product_id is already the primary key, no extra index needed
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
4. Application code (Java + Redis + Lua script)
@Service
public class SeckillService {
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    @Autowired
    private OrderMapper orderMapper;
    @Autowired
    private StockMapper stockMapper;
    // Lua script: atomic check-and-decrement of the Redis stock counter
    private static final String LUA_SCRIPT =
            "local stock = redis.call('get', KEYS[1]) " +
            "if not stock or tonumber(stock) <= 0 then " +
            "  return -1 " +
            "end " +
            "redis.call('decr', KEYS[1]) " +
            "return 1";
    /**
     * Place a flash-sale order.
     * (rateLimiter, localCache, and rabbitTemplate are assumed injected collaborators, not shown.)
     * This method touches only Redis and MQ, so it does not need a database transaction.
     */
    public Long seckill(Long userId, Long productId) {
        // 1. validate parameters
        if (userId == null || productId == null) {
            throw new IllegalArgumentException("invalid parameters");
        }
        // 2. rate limiting (token bucket; see the sketch in step 1 above)
        if (!rateLimiter.allowRequest(userId)) {
            throw new RuntimeException("too many requests, please retry later");
        }
        // 3. local-cache check (blocks duplicate orders from the same user)
        String cacheKey = "seckill:" + userId + ":" + productId;
        if (localCache.getIfPresent(cacheKey) != null) {
            throw new RuntimeException("you have already participated in this sale");
        }
        // 4. pre-deduct stock in Redis (the Lua script guarantees atomicity)
        String stockKey = "stock:" + productId;
        Long result = redisTemplate.execute(
                new DefaultRedisScript<>(LUA_SCRIPT, Long.class),
                Collections.singletonList(stockKey)
        );
        if (result == null || result != 1) {
            throw new RuntimeException("out of stock");
        }
        // 5. create the order asynchronously (MQ absorbs the write spike)
        OrderMessage message = new OrderMessage();
        message.setUserId(userId);
        message.setProductId(productId);
        message.setTimestamp(System.currentTimeMillis());
        // hand off to the message queue
        rabbitTemplate.convertAndSend("seckill.order", message);
        // 6. mark participation in the local cache
        localCache.put(cacheKey, System.currentTimeMillis());
        return message.getTimestamp();
    }
    /**
     * MQ consumer that creates the order.
     */
    @RabbitListener(queues = "seckill.order")
    public void processOrder(OrderMessage message) {
        try {
            // 1. idempotency check: skip messages we have already handled
            String orderKey = "processed:" + message.getUserId() + ":" + message.getProductId();
            if (redisTemplate.hasKey(orderKey)) {
                return;
            }
            // 2. decrement stock in the database (optimistic lock)
            int updateCount = stockMapper.decreaseStock(message.getProductId());
            if (updateCount == 0) {
                // stock exhausted: roll back the Redis pre-deduction
                redisTemplate.opsForValue().increment("stock:" + message.getProductId());
                return;
            }
            // 3. create the order
            Order order = new Order();
            order.setUserId(message.getUserId());
            order.setProductId(message.getProductId());
            order.setOrderNo(generateOrderNo());
            order.setAmount(calculateAmount(message.getProductId()));
            order.setStatus(1);
            order.setCreateTime(new Date());
            order.setUpdateTime(new Date());
            orderMapper.insert(order);
            // 4. mark as processed
            redisTemplate.opsForValue().set(orderKey, "1", 1, TimeUnit.HOURS);
        } catch (Exception e) {
            // log and leave it for a compensation job
            log.error("order processing failed: {}", message, e);
        }
    }
    /**
     * Version-based optimistic-lock stock decrement.
     * (A MyBatis @Update like this belongs on the StockMapper interface, not on the service;
     * decreaseStock above can be the version-free variant:
     * UPDATE stock SET stock = stock - 1 WHERE product_id = ? AND stock > 0.)
     */
    @Update("UPDATE stock SET stock = stock - 1, version = version + 1 " +
            "WHERE product_id = #{productId} AND version = #{version}")
    int decreaseStockWithVersion(@Param("productId") Long productId,
                                 @Param("version") Integer version);
}
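A version-based UPDATE like decreaseStockWithVersion simply affects zero rows when another writer got there first, so callers normally retry a bounded number of times. A minimal retry wrapper (a sketch: stockMapper.selectVersion is an assumed query returning the current version for a product):
// Retry loop around the version-based decrement above (sketch)
public boolean decreaseStockWithRetry(Long productId, int maxRetries) {
    for (int i = 0; i < maxRetries; i++) {
        // assumed mapper method: SELECT version FROM stock WHERE product_id = ?
        Integer version = stockMapper.selectVersion(productId);
        if (version == null) {
            return false; // unknown product
        }
        // succeeds only if nobody changed the row since we read the version
        if (stockMapper.decreaseStockWithVersion(productId, version) == 1) {
            return true;
        }
        // version moved under us: another writer won this round, try again
    }
    return false; // still contended after maxRetries attempts
}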
5. Sharding implementation (ShardingSphere)
# application.yml
spring:
shardingsphere:
datasource:
names: ds0,ds1,ds2,ds3
ds0:
type: com.zaxxer.hikari.HikariDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
jdbc-url: jdbc:mysql://master0:3306/seckill_db
username: root
password: password
ds1:
type: com.zaxxer.hikari.HikariDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
jdbc-url: jdbc:mysql://master1:3306/seckill_db
username: root
password: password
ds2:
type: com.zaxxer.hikari.HikariDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
jdbc-url: jdbc:mysql://master2:3306/seckill_db
username: root
password: password
ds3:
type: com.zaxxer.hikari.HikariDataSource
driver-class-name: com.mysql.cj.jdbc.Driver
jdbc-url: jdbc:mysql://master3:3306/seckill_db
username: root
password: password
rules:
sharding:
tables:
orders:
actual-data-nodes: ds$->{0..3}.orders_$->{0..7}
table-strategy:
standard:
sharding-column: user_id
sharding-algorithm-name: user_id_mod
database-strategy:
standard:
sharding-column: user_id
sharding-algorithm-name: user_id_hash
sharding-algorithms:
user_id_mod:
type: MOD
props:
sharding-count: 8
user_id_hash:
type: HASH_MOD
props:
sharding-count: 4
8.3 Load Test Results
Before optimization:
- QPS: 500
- Average response time: 800ms
- 99th-percentile response time: 2000ms
- Database connections: 800+
After optimization:
- QPS: 12,000
- Average response time: 15ms
- 99th-percentile response time: 50ms
- Database connections: 200
- Cache hit rate: 98.5%
9. Summary and Best Practices
9.1 Optimization Priorities
Immediate (do now):
- Index optimization (instant payoff)
- Connection pool configuration
- Slow query cleanup
- Caching strategy
Mid-term (1-2 weeks):
- Database parameter tuning
- Read/write splitting
- Sharding design
Long-term (1-3 months):
- Architecture refactoring
- Introducing NoSQL where appropriate
- Splitting into microservices
9.2 Principles to Follow
- Monitor first, optimize second: optimization without data is guesswork
- Local before global: tune individual SQL statements before redesigning the architecture
- Software before hardware: fix configuration and code before buying bigger machines
- Stability before speed: data consistency takes priority over raw performance
9.3 Common Pitfalls
- Over-indexing: the more indexes the better? ❌ Extra indexes slow down writes
- Cache everything: put all data in the cache? ❌ Cache-consistency costs add up quickly
- Premature sharding: shard before the data demands it? ❌ It only adds complexity
- Ignoring monitoring: stop watching after optimizing? ❌ Without monitoring there is no continuous improvement
9.4 Continuous Improvement
- Regular slow-query review: analyze the slow query log weekly
- Index usage monitoring: check index utilization monthly
- Performance baselining: run a full-path load test every quarter
- Capacity planning: scale ahead of projected business growth
With these strategies applied together, MySQL can absolutely stand up to million-scale traffic. The key is to understand your workload, pick the techniques that fit it, and keep monitoring and adjusting. There is no silver bullet, only the optimization plan that best matches your own business scenario.
