Client/Server vs P2P is the wrong question
January 2, 2026 by Farms
For some reason there are a lot of articles out there suggesting that the main choice you need to make when selecting your multiplayer architecture is between "client/server" and "peer-to-peer".
I suspect a large part of this is driven by the idea that the choice you are making is between "running server infrastructure" and "not".
Unfortunately the internet we have today makes true peer-to-peer with zero centralized infrastructure very hard. You will always need something somewhere acting as your entrypoint to the network, introducing players, or providing other facilitation. So the idea that a p2p architecture is going to reduce the burden of maintenance is not the correct framing, nor is it the most important choice for your architecture. Both client/server and p2p topologies can be useful tools in your toolbox, and they are not mutually exclusive.
aside: if you are interested in building maximally decentralized things, check out the work going on in iroh, the local-first community, and libp2p - the building blocks are there, but beware: the rabbit-hole goes deep.
In my quest to pick a networking shape for this project I attempted to categorize things into what I considered the more useful buckets and give some names to things so I can refer to them. Maybe this will help someone else thinking about multiplayer:
- Dissemination Strategy - what is "on the wire", input or state?
- Consensus Strategy - how do players agree?
- Optimistic Strategy - how do you battle the speed of light?
The actual network topology becomes an implementation detail that mostly falls out from those decisions, and in some cases makes it clear that hybrid topologies make the most sense.
Dissemination Strategy
Dissemination is just a fancy word for "distributing information".
I find most multiplayer architectures and netcode are defined primarily by what messages are being distributed to players - what data is "on the wire". What is actually in those packets has the biggest impact on the overall design of the system and how a game is built around it.
Broadly there are two strategies to pick from:
- Input Replication
- State Replication
In an Input Replication system, the primary goal of the architecture is to disseminate messages that represent player intents to all players.
An intent could be an action like jump, a button like X, or a command like moveUnit(x,y). You can imagine the players with gamepads and really long wires connecting them together.
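As a rough sketch of what that might look like on the wire (the shapes below are my own invention, not taken from any particular engine), an input-replication message carries an intent plus just enough bookkeeping to slot it into the simulation:

```typescript
// Hypothetical input-replication messages: the wire carries intents, not state.
type PlayerInput =
  | { kind: "jump" }
  | { kind: "button"; button: "X"; pressed: boolean }
  | { kind: "moveUnit"; unitId: number; x: number; y: number };

// Each message says who acted and on which simulation tick, so every player
// can feed the same intents into the same tick of their own simulation.
interface InputMessage {
  playerId: string;
  tick: number;
  input: PlayerInput;
}

const example: InputMessage = {
  playerId: "p2",
  tick: 481,
  input: { kind: "moveUnit", unitId: 7, x: 12, y: 34 },
};
```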
In a State Replication system, the primary goal of the architecture is to disseminate messages that represent snapshots of the game state to all players.
Rather than receiving what a player intends to do, you receive the effect that the player had on the world. This could be an entire copy of the world in the naive case, but would usually be some partial update or diff to reduce the size of the data transmitted.
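A hypothetical state-replication counterpart might look like the following, with both the naive full-snapshot form and the more common delta form (again, invented shapes for illustration):

```typescript
// Hypothetical state-replication messages: the wire carries effects, not intents.
interface EntityState {
  id: number;
  x: number;
  y: number;
  health: number;
}

// Naive variant: ship the entire world every update.
interface SnapshotMessage {
  tick: number;
  entities: EntityState[];
}

// Typical variant: ship only what changed relative to a known baseline tick.
interface DeltaMessage {
  tick: number;
  baselineTick: number;
  changed: Array<{ id: number } & Partial<Omit<EntityState, "id">>>;
  removed: number[]; // ids of entities that no longer exist
}
```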
I see this choice get conflated with the network topology, where there is an assumption that a state replication system must imply a client/server model, or that an input replication system must imply a p2p model. I didn't personally find this a particularly useful mapping. CRDTs are a form of state replication that works in a peer-to-peer setting, and a central relay ordering input messages is a form of input replication in a client/server setting. The topology and transport are less important than the core message flow in my opinion.
Consensus Strategy
Consensus is just a fancy word for "agreement".
How you want your distributed group of players to stay in sync and agree on the state of the world has a big impact on the shape of things.
Not only the shape of the implementation, but the kinds of guarantees the system provides in the face of malicious actors.
Broadly I see this falling into two patterns:
- Single Authority
- Algorithmic Consensus
A Single Authority is the easiest to reason about and the one you are likely most familiar with: one participant (one of the players, a third-party server, or a blessed service) is the final arbiter of truth. They might be computing the state and distributing it, or they might be keeping a log of all the inputs, or it may be more complex, where only disagreements are disputed and resolved through the single entity. The end result is that someone somewhere has more power than everyone else.
An Algorithmic Consensus strategy is one where the players themselves form agreement about the state of the world without the need for any individual with special privileges. The simplest and most common consensus people talk about in multiplayer architectures is "lockstep". In a Lockstep consensus the rule is some variant of "Each tick all players must agree on the set of inputs before moving on to the next tick". But there are many more advanced methods of forming agreement, each offering different guarantees and performance characteristics.
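To make the lockstep rule concrete, here is a minimal toy sketch (my own code, not any real library's API) of a loop that refuses to advance until every player's input for the current tick has arrived:

```typescript
// Minimal lockstep sketch: the simulation only advances once an input from
// *every* player for the current tick has arrived.
class LockstepLoop<I> {
  private tick = 0;
  private pending = new Map<number, Map<string, I>>(); // tick -> inputs so far

  constructor(
    private players: string[],
    private step: (tick: number, inputs: Map<string, I>) => void,
  ) {}

  receive(playerId: string, tick: number, input: I) {
    const inputs = this.pending.get(tick) ?? new Map<string, I>();
    inputs.set(playerId, input);
    this.pending.set(tick, inputs);
    this.tryAdvance();
  }

  private tryAdvance() {
    // Step forward through every tick that is fully agreed; stall otherwise.
    let inputs = this.pending.get(this.tick);
    while (inputs !== undefined && inputs.size === this.players.length) {
      this.step(this.tick, inputs);
      this.pending.delete(this.tick);
      this.tick += 1;
      inputs = this.pending.get(this.tick);
    }
  }
}
```

Everything rests on the step function being deterministic: given the same inputs in the same order, every player computes an identical world.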
The vast majority of multiplayer games today use some form of Single Authority. Even many architectures claiming to be "peer-to-peer" either depend on central relays or use "promoted host" patterns, which are really just client/server with a funny hat on.
The main exception appears to be real-time strategy games with very high APM (Actions Per Minute) counts. And this hints at the advantage: algorithmic consensus unlocks as close to true peer-to-peer as you can get. With no need to defer to or route through a single node, you can get the latency of your message passing as low as physical distance will allow.
Which choice you make here, and the choices in the implementation, will have a large impact on the resulting topology and how matchmaking will need to work, but also on "trust". Do your players need to trust you? Do they need to trust each other? Can one entity change the state of the world? Can one entity alter the history of a match? Does it matter? ... There's a class of cheating that becomes impractical with an Algorithmic Consensus design, which is a nice bonus.
Optimistic Strategy
When we talk about being "optimistic" in distributed systems like multiplayer architectures, we mean we are "guessing and hoping for the best".
The speed of light is famously a constant. It's rare in everyday life to come up against it, but in multiplayer games it will stubbornly prevent you from passing your messages around to all your players at the same time. No matter what other strategies you have selected, latency between your players and your server, or between players themselves, will exist, and will be inconsistent.
A naive State Replication system with a Single Authority server will feel sluggish if you have to wait a full round trip for your button press to turn into movement on the screen.
A naive Input Replication system with Algorithmic Consensus is going to feel stuttery at best if you need to wait for everyone's inputs to arrive before moving on to the next tick of the simulation.
Optimistic Strategies are tricks we can apply to mask these issues. Which strategies are available is often so tightly coupled to the other strategy choices that I considered not including them in this categorization, but they also have such a large impact on the design of the system as a whole that I couldn't ignore them.
The desire to make use of an optimistic strategy can force your hand on one of the other strategies, and the ability to use an optimistic strategy can unlock the types of games that are a good fit for the system.
The big ones:
- None
- Eager Application
- Prediction
Maybe you just don't care about the lag. There's a huge range of multiplayer games - turn-based games, strategy games, word games, mobile games, async games - where latency has little to no impact on the experience. So maybe your architecture does not need to mitigate latency issues at all. None is a valid choice.
Allowing a player to eagerly update their own copy of the world state before knowing for sure whether that update will be agreed upon is the most obvious enhancement. But it raises the question of how you will "fix" things when the authority or group disagrees. In a State Replication system this might be as trivial as simply overwriting the state once the real one arrives if you are replicating large snapshots of the world, but it may get complicated if you are only replicating partial diffs that require an order. In an Input Replication system, preemptively running the simulation forward may require a way to rewind to an earlier version of the state and replay the real inputs on top.
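Here is one way the rewind-and-replay flavour of this could look for an Input Replication system; a toy sketch that assumes a pure, deterministic step function and invents all of the names:

```typescript
// Eager application: local inputs are applied immediately for responsiveness;
// when the agreed inputs for a tick arrive, we advance the confirmed state
// and re-derive the predicted state by replaying what is still unconfirmed.
class EagerTimeline<S, I> {
  private confirmed: S;
  private pendingLocal: { tick: number; inputs: I[] }[] = []; // not yet agreed

  constructor(
    private step: (state: S, inputs: I[]) => S, // pure, deterministic
    initial: S,
  ) {
    this.confirmed = initial;
  }

  // Apply our own input optimistically so the game feels responsive.
  applyLocal(tick: number, input: I) {
    const entry = this.pendingLocal.find((e) => e.tick === tick);
    if (entry) entry.inputs.push(input);
    else this.pendingLocal.push({ tick, inputs: [input] });
  }

  // The authority/group finalized a tick: fold the *real* inputs into the
  // confirmed state and discard our guess for that tick.
  confirm(tick: number, realInputs: I[]) {
    this.confirmed = this.step(this.confirmed, realInputs);
    this.pendingLocal = this.pendingLocal.filter((e) => e.tick !== tick);
  }

  // "Rewind and replay": the state we render is the confirmed state with all
  // still-unconfirmed local inputs replayed on top, oldest tick first.
  predicted(): S {
    return this.pendingLocal
      .slice()
      .sort((a, b) => a.tick - b.tick)
      .reduce((state, e) => this.step(state, e.inputs), this.confirmed);
  }
}
```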
Prediction takes this idea further: not only applying as-yet-unconfirmed inputs or changes, but also guessing updates before they happen. In State Replication systems this might be "dead reckoning" (predicting movement based on things like existing velocity). In Input Replication systems it might be heuristics around repeated input patterns (like holding "forward").
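Dead reckoning in particular is small enough to sketch in full; this assumes snapshots carry a velocity, which is my assumption rather than a given:

```typescript
// Dead reckoning sketch: between authoritative snapshots, extrapolate an
// entity's position from its last known position and velocity.
interface Kinematics {
  x: number;
  y: number;
  vx: number; // units per second
  vy: number;
  receivedAtMs: number; // local clock time when this snapshot arrived
}

function deadReckon(k: Kinematics, nowMs: number): { x: number; y: number } {
  const dt = (nowMs - k.receivedAtMs) / 1000; // seconds since the snapshot
  return { x: k.x + k.vx * dt, y: k.y + k.vy * dt };
}
```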
In all cases the desire to leverage these kinds of optimistic updates will usually require some kind of support from the structure of the multiplayer architecture and how state is computed.
Mixing and matching
So what do we get when we mix and match these strategies? Below is a non-exhaustive list of what I think are the more common patterns resulting from these strategy choices, and my hand-wavy opinions about when you might choose them.
Single Authority + State Replication + Eager Application
Probably the most common structure for a multiplayer architecture, and likely what you are thinking of when you say "client/server" model. A central server accepts inputs from players, computes state, and replicates state snapshots to players on a fixed update schedule.
The server is the source of truth; it is the clock. Clients are mostly dumb renderers of the state. Compatible with an eager update strategy and very easy to reason about.
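A toy version of that server loop might look like the following, where step and broadcast are assumed hooks rather than a real framework API:

```typescript
// Toy authoritative-server loop: accept inputs as they arrive, step the world
// at a fixed rate, broadcast a fresh snapshot to everyone each tick.
interface PlayerInputMsg {
  playerId: string;
  payload: unknown;
}

class AuthoritativeServer<S> {
  private inbox: PlayerInputMsg[] = [];

  constructor(
    private state: S,
    private step: (state: S, inputs: PlayerInputMsg[]) => S,
    private broadcast: (snapshot: S) => void,
    tickMs = 50, // 20 ticks per second
  ) {
    setInterval(() => this.tickOnce(), tickMs);
  }

  onInput(input: PlayerInputMsg) {
    this.inbox.push(input); // queued until the next tick
  }

  private tickOnce() {
    const inputs = this.inbox;
    this.inbox = [];
    this.state = this.step(this.state, inputs); // server is the source of truth
    this.broadcast(this.state); // clients mostly just render this
  }
}
```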
Algorithmic Consensus + Input Replication + Optimistic Prediction
The common peer-to-peer RTS, racing, or fighting-game architecture. Likely what you are thinking of when someone says "deterministic lockstep". Players send inputs to each other, predict each other's inputs, independently compute the world state deterministically, and roll back and replay inputs to recalculate state once agreement is confirmed.
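The two moving parts - predicting inputs and replaying on top of an agreed state - can be sketched like this (toy code; real rollback implementations like GGPO add a lot of machinery around frame windows and input delay):

```typescript
type TickInputs = Record<string, string>; // playerId -> serialized input

// Common prediction heuristic: assume each player keeps doing what they did
// last tick (which is exactly right while someone holds "forward").
function predictInputs(lastConfirmed: TickInputs, known: TickInputs): TickInputs {
  return { ...lastConfirmed, ...known };
}

// Deterministic replay: because every peer runs the same pure step function,
// replaying the corrected input history from the last confirmed state
// reproduces the one true timeline on every machine.
function rollbackAndReplay<S>(
  step: (state: S, inputs: TickInputs) => S,
  confirmedState: S,
  correctedInputs: TickInputs[], // one entry per speculative tick, oldest first
): S {
  return correctedInputs.reduce(step, confirmedState);
}
```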
Single Authority + Input Replication + Optimistic Prediction
Often called a "Relay" architecture and similar to "deterministic lockstep", this setup simplifies things a little by letting a central server form the canonical order of events. Players send inputs to a central authority, which broadcasts them to all players. The server can be very lightweight in this case as it is not computing state.
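A sketch of such a relay (with send as an assumed transport hook, not a real library call) shows how little the server needs to know; because the payload is opaque to it, the same relay could in principle serve many different games:

```typescript
// Toy relay sketch: the server computes no game state; it just stamps each
// input with a canonical sequence number and rebroadcasts it to every player.
function makeRelay(
  playerIds: string[],
  send: (playerId: string, message: string) => void,
) {
  let seq = 0;
  return {
    // Called whenever any player's input reaches the server. Arrival order at
    // the relay *is* the canonical order of events.
    onInput(fromPlayer: string, payload: string) {
      const message = JSON.stringify({ seq: seq++, fromPlayer, payload });
      for (const id of playerIds) send(id, message);
    },
  };
}
```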
My pick
Ultimately this little ramble was me exploring the landscape and trying to put things into boxes to help me decide on a structure for my little Multitap challenge.
My goals are almost certainly not your goals, and there is certainly no "best" architecture for all scenarios.
In my case I am optimising for:
- Reusability - I am going for quantity; I don't want lots of custom per-game code touching the netcode too much
- Deployability - I don't want my deployment and infra maintenance burden to scale too much with the number of games
- Documentability - I need to be able to clearly communicate or abstract away the netcode so that agents do not get confused, without preventing them from making a variety of games
Taking these into account I have landed quite firmly in the Input Replication camp.
I do not believe I need super low latency and believe that Eager Application optimisation strategies should be enough to get me going, so I think I can avoid the complexity of consensus and use a Single Authority design.
These choices give me a "relay" flavoured architecture, where players send their inputs to a server, which re-broadcasts them to all players. I believe this relaying can potentially be so reusable as to allow me to write a single server that can be used for all games.