Is Redis Really Single-Threaded? A Comprehensive Analysis from Source Code Perspective
Redis is single-threaded — this statement has spread so widely that many people believe Redis runs with just one thread. But if you check a running Redis process with ps -ef or top, you’ll find there’s more than one thread.
What’s going on? This article will clarify this question thoroughly from the source code perspective.
The Short Answer
What people mean by Redis being “single-threaded” is this: the main logic for command processing runs on a single thread.
But a Redis process actually contains:
- Main thread: handles network requests, executes commands, runs the event loop
- 3 background threads: asynchronously handle file closing, AOF fsync, and lazy freeing
- Child processes: forked for RDB persistence and AOF rewrite
So Redis is not strictly single-threaded, but rather “single-threaded command processing”. This design is quite clever, and we’ll explain why later.
Background Threads: bio.c
Open bio.c, and the comment at the beginning states clearly:
> This file implements operations that we need to perform in the background. Currently there is a single operation, that is a background close(2) system call.
The “currently there is a single operation” wording is left over from early versions; the mechanism has since been extended to three job types. See the definitions in bio.h:
```c
#define BIO_CLOSE_FILE 0 // Asynchronous file closing
#define BIO_AOF_FSYNC  1 // Asynchronous AOF fsync
#define BIO_LAZY_FREE  2 // Asynchronous memory freeing
#define BIO_NUM_OPS    3 // Total: 3 types of background tasks
```
Redis creates 3 background threads at startup:
```c
void bioInit(void) {
    // Initialize locks, condition variables, and task queues
    for (j = 0; j < BIO_NUM_OPS; j++) {
        pthread_mutex_init(&bio_mutex[j],NULL);
        pthread_cond_init(&bio_newjob_cond[j],NULL);
        pthread_cond_init(&bio_step_cond[j],NULL);
        bio_jobs[j] = listCreate();
        bio_pending[j] = 0;
    }

    // Create one thread per job type; the type is passed as the thread argument
    for (j = 0; j < BIO_NUM_OPS; j++) {
        void *arg = (void*)(unsigned long) j;
        if (pthread_create(&thread,&attr,bioProcessBackgroundJobs,arg) != 0) {
            serverLog(LL_WARNING,"Fatal: Can't initialize Background Jobs.");
            exit(1);
        }
        bio_threads[j] = thread;
    }
}
```
Each thread handles one type of task and has its own task queue. The main thread submits tasks via bioCreateBackgroundJob:
```c
void bioCreateBackgroundJob(int type, void *arg1, void *arg2, void *arg3) {
    struct bio_job *job = zmalloc(sizeof(*job));
    job->time = time(NULL);
    job->arg1 = arg1;
    job->arg2 = arg2;
    job->arg3 = arg3;

    pthread_mutex_lock(&bio_mutex[type]);
    listAddNodeTail(bio_jobs[type],job);
    bio_pending[type]++;
    pthread_cond_signal(&bio_newjob_cond[type]); // Wake up the corresponding thread
    pthread_mutex_unlock(&bio_mutex[type]);
}
```
The background thread’s work loop:
```c
void *bioProcessBackgroundJobs(void *arg) {
    struct bio_job *job;
    unsigned long type = (unsigned long) arg;

    /* The lock is taken once here; pthread_cond_wait() below releases it
     * while sleeping and re-acquires it on wakeup, so the top of the
     * loop always runs with the lock held. */
    pthread_mutex_lock(&bio_mutex[type]);
    while(1) {
        listNode *ln;

        // Wait if there are no pending jobs
        if (listLength(bio_jobs[type]) == 0) {
            pthread_cond_wait(&bio_newjob_cond[type],&bio_mutex[type]);
            continue;
        }

        // Take the job at the head of the queue
        ln = listFirst(bio_jobs[type]);
        job = ln->value;
        pthread_mutex_unlock(&bio_mutex[type]);

        // Execute the job
        if (type == BIO_CLOSE_FILE) {
            close((long)job->arg1);
        } else if (type == BIO_AOF_FSYNC) {
            redis_fsync((long)job->arg1);
        } else if (type == BIO_LAZY_FREE) {
            if (job->arg1)
                lazyfreeFreeObjectFromBioThread(job->arg1);
            else if (job->arg2 && job->arg3)
                lazyfreeFreeDatabaseFromBioThread(job->arg2,job->arg3);
        }
        zfree(job);

        // Re-acquire the lock and remove the finished job from the queue
        pthread_mutex_lock(&bio_mutex[type]);
        listDelNode(bio_jobs[type],ln);
        bio_pending[type]--;
    }
}
```
A classic producer-consumer pattern.
Why Are These Background Threads Needed?
BIO_CLOSE_FILE: The close() system call can block in certain situations, such as closing a large file or on NFS filesystems. If the main thread blocks, all clients would stall, so this is handled in a background thread.
BIO_AOF_FSYNC: AOF persistence requires periodic fsync. This is a disk I/O operation that can be slow. The appendfsync everysec configuration does an fsync every second, handled by a background thread.
BIO_LAZY_FREE: Used by commands like UNLINK, FLUSHDB ASYNC, FLUSHALL ASYNC. Deleting a large key (e.g., a hash with millions of elements) would block the main thread, so it’s done gradually in a background thread. This feature was introduced in Redis 4.0.
Child Processes: Persistence
RDB snapshots and AOF rewrite use fork() to create child processes:
```c
// rdb.c
if ((childpid = fork()) == 0) {
    /* Child process */
    closeListeningSockets(0);
    redisSetProcTitle("redis-rdb-bgsave");
    // Execute persistence...
    exitFromChild(0);
}
```
```c
// aof.c
if ((childpid = fork()) == 0) {
    /* Child process */
    closeListeningSockets(0);
    redisSetProcTitle("redis-aof-rewrite");
    // Execute AOF rewrite...
    exitFromChild(0);
}
```
Why use fork() instead of threads? Because the forked child sees a consistent snapshot of the parent’s memory: pages are shared copy-on-write, so the child can safely traverse all data for persistence without worrying about concurrent modifications by the main thread. With multi-threading, locks would be needed throughout, dramatically increasing complexity.
But fork has a cost: the larger the parent process’s memory, the longer the fork takes, since the kernel must copy the page tables; and writes after the fork trigger page copies that increase memory usage. That’s why Redis officially recommends not making single-instance memory too large.
Why Is the Main Thread Single-Threaded?
Back to the core question: why is the main logic for command processing single-threaded?
Several reasons:
1. No Locking Overhead
Multi-threading means shared data requires locks. Redis has complex data structures, and adding locks brings:
- Lock contention overhead
- Deadlock risks
- Increased code complexity
Single-threading completely avoids these issues.
2. The Bottleneck Isn’t CPU
Most Redis operations are in-memory operations, extremely fast. Bottlenecks are typically:
- Network bandwidth
- Number of client connections
- Operations on large keys
Multi-threading doesn’t necessarily improve performance, but adds complexity.
3. Event Loop Model
Redis uses epoll/kqueue for multiplexing — a single thread can handle tens of thousands of concurrent connections. This I/O model is inherently single-thread friendly; Nginx uses a similar design.
What About “Slow” Operations?
The biggest problem with single-threading: if one operation is slow, all subsequent requests must wait.
Redis’s strategies:
1. Break Operations Into Smaller Pieces
For example, KEYS * traverses all keys, which is slow. Redis later added SCAN, which traverses only a small portion each time, using a cursor for continuation.
2. Delegate to Background Threads
Lazy free is exactly this approach. The UNLINK command deletes keys asynchronously (shown here in simplified form):
```c
void unlinkCommand(client *c) {
    if (server.lazyfree_lazy_server_del) {
        // Asynchronous deletion
        bioCreateBackgroundJob(BIO_LAZY_FREE, NULL, NULL, key);
    } else {
        // Synchronous deletion (old behavior)
        dbDelete(c->db, key);
    }
}
```
3. Use Child Processes
Persistence is delegated to forked child processes.
4. Simply Discourage
The KEYS command is not recommended in production, and DEBUG SLEEP is for debugging only.
What About Redis 6.0’s Multi-threaded I/O?
Redis 6.0 introduced multi-threading for network I/O (reading/writing sockets), but command execution remains single-threaded.
This feature’s code is in networking.c, mainly addressing network bandwidth bottlenecks. When client data volume is large, reading/writing sockets becomes a bottleneck, and multiple threads can handle this in parallel.
But core data structure operations and command execution remain single-threaded.
Summary
| Thread/Process | Responsibility |
|---|---|
| Main thread | Event loop, command execution |
| bio thread 1 | Asynchronous file closing |
| bio thread 2 | Asynchronous AOF fsync |
| bio thread 3 | Asynchronous lazy freeing |
| Child process | RDB persistence, AOF rewrite |
Redis’s “single-threaded” nature refers to the main flow of command processing. But operations that could block — like file closing, fsync, and large key deletion — are handled by background threads or child processes.
This is a pragmatic design choice. Single-threading is simple, lock-free, and easy to maintain. Combined with async I/O and background tasks, it’s sufficient for most scenarios.
If you really need higher performance, the right approach isn’t modifying Redis code, but deploying multiple instances and using clustering to distribute the load. After all, Redis natively supports cluster mode.