Building a Distributed Key-Value Store in C++ (Part 3)

🌐 Phase 3: Networking the KV Store

Our key-value store now persists data across restarts. But it’s still trapped inside a single process.

In this part, we take a big step forward: turning our store into a networked service, accessible over TCP sockets via a custom protocol. That means clients can now connect remotely to put, get, and del keys.

🧰 New Components

We’re introducing two new binaries:

  • kvstore_server: A daemon that runs the key-value store and listens for TCP connections
  • kvstore_client: A CLI tool that connects to the server and issues commands

🧱 Server Design

The server listens on a TCP port and spawns a new thread for each client connection.

📦 KVServer Class

// server.hpp
class KVServer {
public:
  KVServer(int port, const std::string& store_file = "logs/store.log");
  void run();

private:
  int port;
  KVStore store;
  void handle_client(int client_socket);
};

🧠 Main Server Loop

// server.cpp
void KVServer::run() {
  int server_fd = socket(AF_INET, SOCK_STREAM, 0);
  // ... bind, listen

  while (true) {
    // We don't need the peer address, so pass null for the sockaddr out-params.
    int client_socket = accept(server_fd, nullptr, nullptr);
    if (client_socket < 0) continue;  // skip transient accept failures
    std::thread(&KVServer::handle_client, this, client_socket).detach();
  }
}
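The elided bind and listen steps could be filled in along these lines. This is a sketch with minimal error handling, and make_listen_socket is a hypothetical helper, not part of the KVServer class above:

```cpp
// Sketch: create a TCP socket, bind it to the given port, and start listening.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int make_listen_socket(int port) {
  int server_fd = socket(AF_INET, SOCK_STREAM, 0);
  if (server_fd < 0) return -1;

  // Allow quick restarts without "address already in use" errors.
  int opt = 1;
  setsockopt(server_fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));

  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = INADDR_ANY;   // accept on any local interface
  addr.sin_port = htons(port);         // port in network byte order

  if (bind(server_fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0)
    return -1;
  listen(server_fd, SOMAXCONN);        // queue length for pending connections
  return server_fd;
}
```

SO_REUSEADDR is worth the extra two lines during development: without it, restarting the server right after killing it can fail while the old socket lingers in TIME_WAIT.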

Each client is handled in a dedicated thread, allowing concurrent access to the store.
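Because those detached threads all call put, get, and del directly, KVStore itself must be thread-safe. A minimal sketch of one way to do this, guarding the in-memory map with a std::mutex (the std::map member is an assumption about the class from the earlier parts, and the append-only log is omitted here):

```cpp
// Sketch: a mutex-guarded in-memory store, safe to call from many threads.
#include <map>
#include <mutex>
#include <optional>
#include <string>

class KVStore {
public:
  void put(const std::string& key, const std::string& value) {
    std::lock_guard<std::mutex> lock(mtx_);  // one thread in the map at a time
    data_[key] = value;
  }

  std::optional<std::string> get(const std::string& key) {
    std::lock_guard<std::mutex> lock(mtx_);
    auto it = data_.find(key);
    if (it == data_.end()) return std::nullopt;
    return it->second;
  }

  bool del(const std::string& key) {
    std::lock_guard<std::mutex> lock(mtx_);
    return data_.erase(key) > 0;  // erase returns the number of keys removed
  }

private:
  std::mutex mtx_;
  std::map<std::string, std::string> data_;
};
```

A single coarse lock is plenty at this scale; finer-grained locking only becomes interesting once contention shows up in profiles.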

🗣️ Command Handling

The client sends simple text commands like:

PUT foo bar
GET foo
DEL foo

The server parses each command and replies with a single line:

OK           (successful PUT or DEL)
<value>      (successful GET)
NOT_FOUND    (GET or DEL on a missing key)

Here’s the heart of the logic:

// server.cpp
void KVServer::handle_client(int client_socket) {
  char buffer[1024];

  while (true) {
    int valread = read(client_socket, buffer, sizeof(buffer) - 1);
    if (valread <= 0) break;
    buffer[valread] = '\0';  // terminate here: the buffer is reused across reads

    std::istringstream iss(buffer);
    std::string cmd, key, value;
    iss >> cmd >> key;

    std::ostringstream response;
    if (cmd == "PUT") {
      iss >> value;
      store.put(key, value);
      response << "OK\n";
    } else if (cmd == "GET") {
      auto val = store.get(key);
      response << (val ? *val : "NOT_FOUND") << "\n";
    } else if (cmd == "DEL") {
      response << (store.del(key) ? "OK\n" : "NOT_FOUND\n");
    } else if (cmd == "QUIT") {
      break;
    } else {
      response << "ERROR UNKNOWN COMMAND\n";
    }

    send(client_socket, response.str().c_str(), response.str().size(), 0);
  }

  close(client_socket);
}
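One caveat: TCP is a byte stream, so a single read() is not guaranteed to line up with a single command. A long command can be split across reads, and two commands sent quickly can arrive in one. A hypothetical read_line helper (not part of the server code above) sketches a newline-buffered approach that would fix this:

```cpp
// Sketch: accumulate bytes until a full '\n'-terminated command is available.
#include <string>
#include <unistd.h>

// Returns true with one complete line in `line` (newline stripped);
// `pending` carries leftover bytes between calls. Returns false on EOF/error.
bool read_line(int fd, std::string& pending, std::string& line) {
  for (;;) {
    auto pos = pending.find('\n');
    if (pos != std::string::npos) {
      line = pending.substr(0, pos);   // one complete command
      pending.erase(0, pos + 1);       // keep the remainder for the next call
      return true;
    }
    char buf[512];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n <= 0) return false;          // client closed the connection, or error
    pending.append(buf, static_cast<size_t>(n));
  }
}
```

The current handler works fine for an interactive client typing one command per line; the buffered version matters once scripted clients start pipelining requests.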

🖥️ Interactive CLI Client

Let’s try it out from the terminal.

$ ./kvstore_server
KVServer listening on port 12345...

$ ./kvstore_client 127.0.0.1 12345
> PUT hello world
OK
> GET hello
world
> DEL hello
OK
> GET hello
NOT_FOUND
> QUIT

🔍 Client Code

// client.cpp
while (std::cout << "> ", std::getline(std::cin, input)) {
  send(sock, input.c_str(), input.length(), 0);

  char buffer[1024] = {0};
  int valread = read(sock, buffer, sizeof(buffer)-1);

  if (valread > 0) {
    std::cout << buffer;
  }

  if (input == "QUIT") break;
}

📦 CMake Changes

To support multiple executables, we updated the top-level CMakeLists.txt:

add_executable(kvstore_server src/server.cpp src/server.hpp src/server_main.cpp)
target_link_libraries(kvstore_server PRIVATE kvstore_lib)

add_executable(kvstore_client src/client.cpp)
target_link_libraries(kvstore_client PRIVATE kvstore_lib)

🧪 Testing (Still Local)

Right now, only the local store logic is unit tested via Catch2. Networking is integration-level and will be tested manually (or via scripts) for now.

🗺️ Updated Roadmap

  1. Phase 1: Local Store (✅ Done): Basic In-Memory KV Store
    • Implemented a KVStore class
    • Supports put, get, and del operations
    • Command-line usage for demoing
  2. Phase 2: Persistence (✅ Done): Durability with Append-Only Log
    • Append-only log on disk
    • Recovery by replaying log
    • Unit tests with Catch2
  3. Phase 3: Networking (✅ Done): Client-Server Communication
    • Expose KVStore via TCP sockets
    • Define simple request/response protocol
    • Build interactive CLI tool
  4. Phase 4: Multi-Node Architecture (Next): Cluster Mode
    • Connect multiple nodes
    • Add forwarding and replication
  5. Phase 5: Consensus (Planned): Leader Election and Coordination
  6. Phase 6: Testing & Resilience (Planned): Hardening the System

🚀 What’s Next?

In the next post, we’ll begin to scale horizontally by introducing a multi-node architecture.

We’ll explore:

  • Simple replication
  • Node discovery
  • Forwarding requests between servers
