Hi all, I’d really appreciate some help understanding the correct way to implement client/server prediction when using velocity.
My understanding is that there are two ways we can smooth client movement whilst waiting for a server response:
1) Client-side prediction / server reconciliation > We immediately render where the client should be. Once we receive a message from the server, we apply the position it told us (and then re-apply any inputs we’ve already run locally), ideally leading to a seamless experience when receiving server messages (explained here)
2) Linear interpolation > We always render a client 100ms (or a given time-frame) behind, and interpolate between two known server states on the client (explained here)
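For completeness, a rough sketch of (2) looks something like this (the names are illustrative only, not taken from my actual code):

```ts
// Rough sketch of (2): buffer authoritative states and render the entity
// ~100ms in the past, interpolating between the two states that straddle
// the render time. Names are illustrative, not from my actual code.
type ServerState = { time: number; x: number; y: number };

const RENDER_DELAY_MS = 100;
const stateBuffer: ServerState[] = [];

function onServerState(state: ServerState) {
  stateBuffer.push(state);
}

function getRenderPosition(nowMs: number): { x: number; y: number } | null {
  const renderTime = nowMs - RENDER_DELAY_MS;

  // Drop states older than the one just before renderTime.
  while (stateBuffer.length >= 2 && stateBuffer[1].time <= renderTime) {
    stateBuffer.shift();
  }
  if (stateBuffer.length < 2) return null; // not enough data to interpolate yet

  const [a, b] = stateBuffer;
  const t = Math.min(1, (renderTime - a.time) / (b.time - a.time));
  return { x: a.x + (b.x - a.x) * t, y: a.y + (b.y - a.y) * t };
}
```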
For the purpose of this post, I’ll be explaining my usage of (1), which I believe to be the best solution for the client.
My issue is that when we move past the simple example (where we set X,Y on the client, and the server simply sets X,Y for each frame of client input) and onto using arcade physics, we run into a problem where I cannot simulate identical velocity on the client and the server.
Perhaps this is made clearer through an example -
The client has an FPS of 60. On every frame we check:
- How long it’s been since the last frame
- Which key is pressed
We then have a nice way to apply velocity to the client on every frame a key is down (using `frame_time * speed`), meaning a fluctuating frame rate doesn’t impact the velocity.
For every frame a key is pressed, we also send this input to the server, in a packet which looks like `(input_time, key_pressed)`.
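As a rough sketch of that client loop (names here are illustrative rather than the exact repo code; I’m treating `input_time` as the frame duration, and adding a `seq` field purely so the server response can be matched up later):

```ts
// Illustrative client-side loop: predict locally using the frame time,
// send the same input to the server, and keep it for later reconciliation.
const SPEED = 100; // units per second (made-up value)

type Input = {
  seq: number;         // illustrative sequence number, used to match server acks
  input_time: number;  // how long the key was held this frame, in ms
  key_pressed: string;
};

let nextSeq = 0;
const pendingInputs: Input[] = []; // inputs the server hasn't confirmed yet
const player = { x: 0, y: 0 };

function clientUpdate(frameTimeMs: number, keyPressed: string | null) {
  if (!keyPressed) return;

  const input: Input = { seq: nextSeq++, input_time: frameTimeMs, key_pressed: keyPressed };

  applyInput(player, input);  // 1) predict immediately, so movement feels instant
  sendToServer(input);        // 2) let the server simulate the same input
  pendingInputs.push(input);  // 3) remember it for reconciliation
}

function applyInput(entity: { x: number; y: number }, input: Input) {
  const distance = SPEED * (input.input_time / 1000);
  if (input.key_pressed === "right") entity.x += distance;
  if (input.key_pressed === "left") entity.x -= distance;
  // ...same idea for up/down
}

declare function sendToServer(input: Input): void; // transport left abstract here
```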
Now the problem arises when we try to simulate these inputs on the server.
Let’s take a 10 FPS server:
- It receives all 10 inputs, and queues them ready to be processed on the next tick
- On the next tick, it adds up all of the input velocity over those 10 client frames, and then runs it over a single server frame taking 100ms.
Now the problem is that the server frame time cannot be predicted, i.e. we may run our simulation of the 10 client inputs over 100ms, or 110ms, with no way of telling until the next frame (at which point we need to return the value to the client).
At this point our client prediction is wrong, with the server’s result being slightly different from the client’s on every tick.
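To make the server side concrete, the tick I’m describing does something like this (a simplified sketch, not the exact repo code), the key point being that `deltaMs` is the server’s own frame time, which the client could never have known when it predicted locally:

```ts
// Simplified sketch of the server tick described above: queued inputs are
// drained each tick and the movement is scaled by the *server's* delta,
// which is what makes the result diverge from the client's prediction.
const SPEED = 100; // must match the client's speed constant

type ServerInput = { seq: number; input_time: number; key_pressed: string; client_id: string };

const inputQueue: ServerInput[] = [];
const players = new Map<string, { x: number; y: number }>();
let lastTickMs = Date.now();

function onClientInput(input: ServerInput) {
  inputQueue.push(input); // queued, processed on the next tick
}

function serverTick() {
  const now = Date.now();
  const deltaMs = now - lastTickMs; // ~100ms at 10 FPS, but it fluctuates
  lastTickMs = now;

  const inputs = inputQueue.splice(0, inputQueue.length);
  if (inputs.length === 0) return;

  // The tick's delta is spread across the inputs received this tick, so the
  // distance depends on a frame time the client could not have predicted.
  const perInputMs = deltaMs / inputs.length;

  for (const input of inputs) {
    const player = players.get(input.client_id);
    if (!player) continue;

    const distance = SPEED * (perInputMs / 1000);
    if (input.key_pressed === "right") player.x += distance;
    if (input.key_pressed === "left") player.x -= distance;
  }

  // ...then send each player's position (plus the last processed seq) back to its client
}
```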
This now leads to two scenarios on the client -
1) Every time we receive a server update, we reset our client X,Y to what the server tells us, and re-play the client inputs we’ve already rendered ON TOP of that server position (which no longer matches our prediction).
This leads to big jumps forward and back. For example, our player path may look like the following, with 200ms of client lag added for easier explanation:
Client
Starting position = X=0, Y=0
0ms > Client presses right key, we apply velocity to X. We also send input #1 to the server, and save it on the client for later reconciliation
16ms > On next frame, velocity has been applied, client ends up at X=1, Y=0
100ms > Client presses right key, we apply velocity to X. We also send input #2 to the server, and save it on the client for later reconciliation
116ms > On next frame, velocity has been applied, client ends up at X=2, Y=0
Server
200ms > Server processes input #1, and calculates the actual position after that frame as X=1.5, Y=0
300ms >
- We send the position for input #1 back to the client
- Server processes input #2, and calculates the actual position after that frame as X=2.5, Y=0
400ms > We send the position for input #2 back to the client
Client
600ms > Client receives the server position for input #1:
- Applies X=1.5, Y=0 to the client
- Sees we still have input #2 to re-apply, so it re-plays input #2 (which just has a key pressed, and duration) on top of the incorrect #1, resulting in something like X=2.3, Y=0
700ms > Client receives the server position for input #2, applying X=2.5, Y=0 to the client
As you can see, this means that when moving a player right by 2 positions, its X path goes something like 1 (client) > 2 (client) > 1.5 (receive server) > 2.3 (re-apply client) > 2.5 (receive server)
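Using the names from the client sketch above, the reconciliation in (1) is roughly this (simplified; I’m assuming the server echoes back the `seq` of the last input it processed):

```ts
// Simplified reconciliation for scenario (1): snap to the authoritative
// position, drop acknowledged inputs, then re-apply the rest on top.
type ServerUpdate = { x: number; y: number; last_processed_seq: number };

function onServerUpdate(update: ServerUpdate) {
  // 1) Snap to the server's authoritative position.
  player.x = update.x;
  player.y = update.y;

  // 2) Drop inputs the server has already accounted for.
  while (pendingInputs.length > 0 && pendingInputs[0].seq <= update.last_processed_seq) {
    pendingInputs.shift();
  }

  // 3) Re-apply the remaining, unacknowledged inputs on top. Because the
  // server integrated the earlier inputs with a different frame time, this
  // is where the back-and-forth jumping described above shows up.
  for (const input of pendingInputs) {
    applyInput(player, input);
  }
}
```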
Example -
In order to combat the above, I’ve tried adding a “threshold”, where the client does not re-apply the server position if it’s within a certain “safe” distance. However, this ends up with my client drifting more and more out of sync, until it eventually needs a big jump to reset itself to the authoritative server position.
Client code for this here
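Conceptually, the threshold version just wraps the reconciliation above in a distance check, something like this (again simplified, reusing the names from the earlier sketches, with a made-up `THRESHOLD` value):

```ts
// Simplified sketch of the "threshold" variant: only accept the server's
// correction when it differs from our prediction by more than a tolerance.
const THRESHOLD = 0.5; // made-up value for illustration

function onServerUpdateWithThreshold(update: ServerUpdate) {
  const dx = update.x - player.x;
  const dy = update.y - player.y;
  const error = Math.sqrt(dx * dx + dy * dy);

  // Small per-tick differences are never corrected, so they accumulate
  // until the error finally crosses THRESHOLD and the player visibly
  // snaps - which is exactly the behaviour I'm seeing.
  if (error > THRESHOLD) {
    onServerUpdate(update); // fall back to the full reconciliation above
  }
}
```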
Clearly neither of these is workable. Am I missing something obvious? Perhaps I should be interpolating the server’s position during reconciliation, or I’ve missed something in the client/server velocity calculation.
Full code is here if you want to run it locally: front-end = `yarn start`, and server = `yarn debug`.