Technical Laboratory Report: Distributed Systems (Go Implementation)

Chord DHT

Scalable peer-to-peer lookup service implementing the Chord protocol with O(log N) routing.

[Live protocol simulation: a 16-node ring (N0 through N15) resolving a lookup for Key_5, demonstrating O(log N) efficiency. Each step skips half the remaining distance, reaching any node in a million-node network in under 20 hops.]

Project Title: Chord Distributed Hash Table (DHT) Implementation in Go

System Overview

This project implements the Chord protocol, a scalable peer-to-peer lookup service and distributed file system. Developed in Go, the system organizes nodes into a logical ring topology to perform decentralized resource location without a central authority. The implementation demonstrates practical application of consistent hashing, RPC network programming, and concurrent state management.

Core Architecture: The Identifier Circle

The system is built upon a 160-bit identifier space generated via SHA-1 hashing.
  • Consistent Hashing: Both nodes and data keys are mapped to the same identifier ring. Each node is responsible for managing keys that fall logically between its ID and its predecessor's ID. This minimizes data movement during node churn.
  • Communication Layer: Inter-node communication is established using Go’s native net/rpc package over TCP, enabling Remote Procedure Calls that abstract network complexity.

Routing Optimization: Finger Tables

To resolve the performance limitations of linear O(N) search in a standard ring, the system implements Finger Tables to achieve O(log N) lookup scalability.
  • Exponential Routing: Each node maintains a routing table where entries (fingers) point to nodes at exponentially increasing distances ($2^{i-1}$) around the circle.
  • Recursive Forwarding: Queries are forwarded recursively through the network. This optimization allows a cluster of 1,000,000 nodes to resolve a key lookup in approximately 20 hops.

Topology Maintenance & Fault Tolerance

The system is engineered to handle a dynamic environment where nodes join, leave, or fail unpredictably.
  • Dynamic Bootstrap: New nodes require only the address of a single existing member to calculate their position and integrate into the ring.
  • Stabilization Protocols: Dedicated background Goroutines run periodic "Stabilize" and "Check Predecessor" routines. These verify network integrity and repair incorrect pointers in real time.
  • Successor Lists: To prevent network partitioning (ring breakage), nodes maintain a list of multiple nearest successors. If an immediate neighbor fails, the node automatically bridges the connection to the next available live node.

Data Storage & Security

The implementation extends the basic routing protocol into a functional distributed storage system.
  • Client-Side Encryption: Data privacy is enforced via AES-256. Files are encrypted with a user-provided passphrase before transmission, ensuring that storage nodes function as "blind" repositories without access to the raw data.
  • Interactive CLI: A custom command-line interface allows users to store, load, and look up files across the cluster interactively.
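The "blind repository" property rests on encrypting before transmission. A minimal sketch using Go's standard library follows; for brevity it derives the AES-256 key by hashing the passphrase with SHA-256, whereas a production system should use a salted KDF such as scrypt or Argon2 (the function names here are illustrative, not the project's API):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// encrypt seals plaintext with AES-256-GCM under a passphrase-derived key.
func encrypt(passphrase string, plaintext []byte) ([]byte, error) {
	key := sha256.Sum256([]byte(passphrase)) // 32 bytes -> AES-256
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so decrypt can recover it from the ciphertext.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// decrypt reverses encrypt; GCM authentication rejects wrong passphrases.
func decrypt(passphrase string, ciphertext []byte) ([]byte, error) {
	key := sha256.Sum256([]byte(passphrase))
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, sealed := ciphertext[:gcm.NonceSize()], ciphertext[gcm.NonceSize():]
	return gcm.Open(nil, nonce, sealed, nil)
}

func main() {
	ct, _ := encrypt("hunter2", []byte("file contents"))
	pt, _ := decrypt("hunter2", ct)
	fmt.Println(string(pt)) // file contents
}
```

Because only ciphertext ever reaches the ring, a storage node that serves a `load` request returns bytes it cannot read, which is exactly the "blind repository" guarantee.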

Concurrency & Reliability

Given the asynchronous nature of the network, strict concurrency controls are applied:
  • State Safety: Granular locking via sync.RWMutex protects internal node state, allowing multiple concurrent readers while ensuring exclusive access for state updates.
  • RPC Error Handling: Timeout management ensures that unresponsive nodes or network latencies do not cause cascading failures within the request handling logic.
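The reader/writer split described above can be sketched as follows; the `ringState` type and its fields are illustrative names, not the project's actual structs:

```go
package main

import (
	"fmt"
	"sync"
)

// ringState holds the mutable pointers a Chord node must protect.
type ringState struct {
	mu          sync.RWMutex
	successor   string
	predecessor string
}

// Successor takes a read lock, so many concurrent lookups can
// inspect the ring without blocking one another.
func (s *ringState) Successor() string {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.successor
}

// SetSuccessor takes the write lock, excluding all readers while
// the stabilize routine repairs the pointer.
func (s *ringState) SetSuccessor(addr string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.successor = addr
}

func main() {
	s := &ringState{successor: "10.0.0.2:8000"}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ { // concurrent readers (lookup handlers)
		wg.Add(1)
		go func() { defer wg.Done(); _ = s.Successor() }()
	}
	s.SetSuccessor("10.0.0.3:8000") // writer (stabilize goroutine)
	wg.Wait()
	fmt.Println(s.Successor())
}
```

An RWMutex fits this workload because lookups vastly outnumber pointer repairs: reads proceed in parallel, and the occasional stabilize write pays the exclusive-lock cost alone.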
Ahsan Javed

Software Engineer

© 2026 Ahsan Javed. All rights reserved.