Vino
Aug 11, 2010
You modeled your queue service after how a player would pick a server when browsing through the old 2000-style server browser, but I'm not sure that's the best route to go here. It causes a number of problems, like some lobbies never starting because they don't attract enough greedy clients. That doesn't seem like a problem you can easily solve in your model. You'll also have security problems with people hacking their client to join lower-skill games so they win more. I think you need to flip how it works - the matchmaking service, not the client, does the heuristics on which players go together, and when it has enough players to fill a game, it finds an available server somewhere near their geographic median and tells them to connect, and the servers never refuse.
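
A minimal sketch of that flipped arrangement, for illustration only (every name here - Ticket, findServerNear, notifyPlayers - is invented, not from any real service): the matchmaker owns the queue, groups players whose skill is close enough, then picks a server near the group and just tells everyone where to go.

code:

// The matchmaking service, not the client, decides who plays together and where.
interface Ticket { playerId: string; skill: number; lat: number; lon: number; }
interface GameServer { host: string; port: number; }

const PLAYERS_PER_MATCH = 8;
const MAX_SKILL_SPREAD = 200;

// Slide a window over the skill-sorted queue until a tight enough group exists.
function tryFormMatch(queue: Ticket[]): Ticket[] | null {
  const sorted = [...queue].sort((a, b) => a.skill - b.skill);
  for (let i = 0; i + PLAYERS_PER_MATCH <= sorted.length; i++) {
    const group = sorted.slice(i, i + PLAYERS_PER_MATCH);
    if (group[PLAYERS_PER_MATCH - 1].skill - group[0].skill <= MAX_SKILL_SPREAD) return group;
  }
  return null;
}

// Centroid as a cheap stand-in for the geographic median.
function centroid(group: Ticket[]) {
  return {
    lat: group.reduce((s, t) => s + t.lat, 0) / group.length,
    lon: group.reduce((s, t) => s + t.lon, 0) / group.length,
  };
}

// These stand in for whatever server pool / messaging layer actually exists.
declare function findServerNear(loc: { lat: number; lon: number }): GameServer;
declare function notifyPlayers(group: Ticket[], server: GameServer): void;

function matchmakingTick(queue: Ticket[]): void {
  const group = tryFormMatch(queue);
  if (!group) return;                          // keep waiting; thresholds can relax over time
  const server = findServerNear(centroid(group));
  notifyPlayers(group, server);                // server never refuses; clients just connect
}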

dads friend steve
Dec 24, 2004

Come to think of it, the last online FPS I played was BF1942 where the exact problems you described happened so lmao at me

You’re right, I gave way too much decision making power to clients

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
It's kind of funny that the two example cases are a matchmaking system and an airport because they're completely opposite in terms of business strategy.

With a matchmaking system you basically want to gather as much info about how it's working as you can, because there are a lot of weird human factors that go into how well it works, especially in sparser markets, and you really want to plan for its fallibility and design everything with the assumption that you might have to go back and change it.

An airport is one of the longest-term infrastructure projects you can possibly do, and every single decision made is going to stick for an extremely long time, so you'd better get everything absolutely perfect on the first attempt.

dads friend steve
Dec 24, 2004

I’ve never had the airport question. At my current job they asked me to design a system that I came to find out is one they already run in prod lol

(I actually think that’s an alright way to do it. By definition it’s a relevant design question, and since it’s one they already had it’s not like they were trying to get free work out of their interviewees)

Ranzear
Jul 25, 2013

dads friend steve posted:

You’re right, I gave way too much decision making power to clients

The correct amount of authority to give the client is zero. Don't even fully trust the inputs you do allow.

Coffee Jones
Jul 4, 2004

16 bit? Back when we was kids we only got a single bit on Christmas, as a treat
And we had to share it!

Ranzear posted:

I'm gonna read between the lines that they intended the question to be about the differences in matchmaking between those two titles.

Also how you’d structure the services and caches, and what happens if you can’t find a great skill match between players in a given geographical region and need to search surrounding regions for waiting players

Ranzear
Jul 25, 2013

Mild rant ahead.

I would almost argue that a turn-based game with no chat doesn't strictly need regional servers, but global connectivity sucks more and more lately. Even just Japan to West Coast will completely poo poo the bed twice a week for hours at a time. It's still better to have the combined pool of players from the start in that case. Splitting the pool into regions when it's unnecessary, like in a turn-based game, is rarely a good idea, because then you're even more prone to time-of-day slumps in activity affecting both match time and match quality.

Non-gameplay UX, like deck management, can still suffer from bad ping more than gameplay does, so I've flip-flopped on regional servers a lot. Regional servers with a shared PvP pool are an ideal I've been working out recently though, since my current project falls cleanly into the mentioned category. It just depends entirely on how portable your matchup data and client connections are. Matchmaking should be able to tell any set of clients to meet up on a best-case gameplay server.

I just don't like regional servers being completely isolated islands, even in ping-critical games.

mcdroid
Jun 23, 2013

Ranzear posted:

global connectivity sucks more and more lately. Even just Japan to West Coast will completely poo poo the bed twice a week for hours at a time.

I noticed that when accessing the nintendo dev site. I didn't know it was a global thing. I'm curious why, everyone web conferencing?

Ranzear
Jul 25, 2013

Nah, been like that for a long time. Fremont to Tokyo will just randomly be like 10mbit tops, from both Linode and DigitalOcean (dunno if they're just in the same datacenter though). It's entirely throughput, for that matter; latency stays pretty normal.

ChickenWing
Jul 22, 2010

:v:

Tangentially off the server question, how does game server communication actually work? My dev career so far has been exclusively RESTful CRUD servers with like 200ms response times and I've always wondered what the paradigms are when your job is to be responsive at the reflex level rather than at the "waiting for a web page to load" level. I know vaguely that you're sending the absolute minimal client deltas (at least that's how they explained it in the source engine video I saw), and I'd guess you're doing the absolute barest minimum serialization/deserialization at the communications layer, but beyond that it's one hundred percent black box.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

ChickenWing posted:

Tangentially off the server question, how does game server communication actually work? My dev career so far has been exclusively RESTful CRUD servers with like 200ms response times and I've always wondered what the paradigms are when your job is to be responsive at the reflex level rather than at the "waiting for a web page to load" level. I know vaguely that you're sending the absolute minimal client deltas (at least that's how they explained it in the source engine video I saw), and I'd guess you're doing the absolute barest minimum serialization/deserialization at the communications layer, but beyond that it's one hundred percent black box.

For lots of stuff (leaderboard results, shop info, etc) it's exactly web API.

For real time, I like the quake 3 model. https://fabiensanglard.net/quake3/network.php

It's a little different from GGPO or Unreal's model, etc., but it's not a bad jumping-off point if you want to read up on them. As you said, track state for each client and ship deltas around.

If you're thinking more about RTS/lots of units, the AoE2 article is pretty good. https://www.gamasutra.com/view/feature/131503/1500_archers_on_a_288_network_.php?print=1
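
For a rough feel of the 'track state per client, ship deltas' idea, here's a toy TypeScript sketch (nothing like the actual Quake 3 wire format, just the shape of it):

code:

// Keep the last state each client acknowledged; send only entities that changed since then.
type EntityState = { x: number; y: number; yaw: number };
type Snapshot = Map<number, EntityState>;            // entityId -> state

function diff(acked: Snapshot, current: Snapshot): Record<number, EntityState> {
  const delta: Record<number, EntityState> = {};
  for (const [id, state] of current) {
    const old = acked.get(id);
    if (!old || old.x !== state.x || old.y !== state.y || old.yaw !== state.yaw) {
      delta[id] = state;                              // changed or newly spawned
    }
  }
  return delta;
}

// Per client: remember what they last saw, ship the diff each tick.
class ClientView {
  private acked: Snapshot = new Map();
  buildUpdate(current: Snapshot): string {
    const payload = JSON.stringify(diff(this.acked, current));
    this.acked = new Map(current);                    // in reality, update on ack, not on send
    return payload;
  }
}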

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe
When it comes to building special effects, do y'all have any recommendations for teaching materials that can help me build a framework for how to approach the problem? Like, if I'm drawing something by hand, I can break it down into primitive shapes. If I'm making a 3D model, I can rough it out and then fine tune with loop cuts and extrusions. If I'm making a sound effect, I can think of sounds similar to what I'm trying to achieve and layer them together. But outside of fairly simple examples, I'm having trouble building visual effects from the ground up because I simply don't know how to approach the problem. I can write basic shaders OK, and navigate my way around particle systems; I can gain familiarity with the tools. But that doesn't tell me how to use them.

Ranzear
Jul 25, 2013

ChickenWing posted:

Tangentially off the server question, how does game server communication actually work? My dev career so far has been exclusively RESTful CRUD servers with like 200ms response times and I've always wondered what the paradigms are when your job is to be responsive at the reflex level rather than at the "waiting for a web page to load" level. I know vaguely that you're sending the absolute minimal client deltas (at least that's how they explained it in the source engine video I saw), and I'd guess you're doing the absolute barest minimum serialization/deserialization at the communications layer, but beyond that it's one hundred percent black box.

Web APIs dominate a lot of stuff. Most mobile games just have PHP backends. Beyond that I guess I'll essay and summarize:

For actively synchronized games, it depends entirely on the game. A lot of AAA titles are hilariously liberal with their netcode and that's how you get stupid easy see-through-walls hacks and whatnot. A lot of the mentioned 'minimal deltas' concept relies on giving the client way too much information in the first place. If you have a very simple arena game with no 'hidden' information, like Splatoon, this is generally okay, but some free weekends of Call of Duty were a good time to play with some of the tools and see how it lacks any sort of server-side culling or input filtering: the client knows the position, facing direction, and even movement inputs of every other player, likely because the dedicated server was the exact same code as their ad-hoc mode with just a dummy client as host.

There was a tank game called Tanarus way back that did pretty well at hiding cloaked tanks from the netcode, but it would still have an object for the engine sound, because players were supposed to be able to listen for one lurking very close. Of course this engine sound was given away no matter how distant the tank was, so there was a hack that would just automatically fire artillery at any unassociated engine sound.

That's the first slice in my framework of netcode: my comment about trusting nothing from the client was only part of a whole shtick about treating the client like a hostile adversary; give it minimal information and accept only exact inputs in return. It's my anticheat paradigm. No amount of machine-learning spyware nonsense is gonna keep the client from abusing information that the server readily provided. It's so common I wonder if it's just some repeated misinterpretation, where the instruction to send 'minimum information' gets read as minimizing bandwidth cost with deltas instead of full state updates, when in reality it was about gameplay information all along, and those optimizations work entirely against hiding information. Bandwidth is indeed a consideration, but you know what costs a lot of bandwidth? Sending information the client isn't supposed to use in the first place.
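
A toy sketch of the 'accept only exact inputs' side of that hostile-adversary stance - the client may only send a small, exactly-typed input struct, and the server clamps or rejects everything else (all field names invented):

code:

interface ClientInput {
  seq: number;        // monotonically increasing sequence number
  buttons: number;    // bitmask of the buttons we actually define
  aimYaw: number;     // desired absolute aim in degrees
  aimPitch: number;
}

function sanitizeInput(raw: unknown): ClientInput | null {
  if (typeof raw !== 'object' || raw === null) return null;
  const r = raw as Record<string, unknown>;
  const seq = r.seq, buttons = r.buttons, aimYaw = r.aimYaw, aimPitch = r.aimPitch;
  if (typeof seq !== 'number' || typeof buttons !== 'number' ||
      typeof aimYaw !== 'number' || typeof aimPitch !== 'number') return null;
  // Clamp rather than trust: the client never dictates out-of-range values.
  return {
    seq: Math.floor(seq),
    buttons: buttons & 0xff,
    aimYaw: ((aimYaw % 360) + 360) % 360,
    aimPitch: Math.max(-89, Math.min(89, aimPitch)),
  };
}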

The next slice of stuff that varies based on application is server versus client simulation. The client has to have some simulation capability just to interpolate between server updates, and the server should be doing some simulation just as the barest protection against nefarious clients. This is a huge slider in almost all cases, and the Tanarus example I gave comes up again. Tanarus was an entirely server-driven game and went at least as far as being able to cull tanks from client vision but still provide unassociated sounds to play at some location. Tanarus is too long gone for me to know much more than that, particularly how much client sim it did, but I do remember that network loss would have a tank continue driving/turning/aiming as it was before disconnecting, so there was at least minimal simulation on the client. That's the barest gist of netcode: keep the simulation state of the server and client from diverging, hopefully with minimal data and effort, but also with consideration for hidden information remaining hidden.

There is this paragon ideal I chase where the server does 100% of simulation and logic for the game, and the client is turbo-mega-dumb and receives state and raw metadata (velocity, acceleration) for what an object will do until the next update rather than fully simulating anything at all. The client's only job is to blend each state transition with whatever it was last displaying.
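
A sketch of what that turbo-dumb client might do per frame, assuming the server tags each object with a velocity and a timestamp (names invented):

code:

// Extrapolate the last server state by its velocity, then ease whatever is
// currently drawn toward that. No game rules on the client at all.
interface ServerObject { x: number; y: number; vx: number; vy: number; sentAt: number; }

function displayPosition(obj: ServerObject, drawn: { x: number; y: number },
                         nowMs: number, blend = 0.2) {
  const dt = (nowMs - obj.sentAt) / 1000;          // seconds since the server sampled this
  const targetX = obj.x + obj.vx * dt;             // where the server says it should be by now
  const targetY = obj.y + obj.vy * dt;
  return {
    x: drawn.x + (targetX - drawn.x) * blend,      // blend toward it instead of snapping
    y: drawn.y + (targetY - drawn.y) * blend,
  };
}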

That leads into the third slice, and perhaps the hardest to really pin down: Latency compensation. By the time the server tells the client something, that information is already dozens of milliseconds and multiple display frames old, but one of the best functions of a well-simulating client is to take that state information and the known delay from the server and 'forward simulate' to be in at least temporal parity with the server. This is where culling state information gets tricky, because it might delay the visibility of someone walking around a corner by a step or two and they'll seem to pop into existence. There are several more advanced topics here about how the server can look back at the state when a client gave inputs and retroactively change the current state, but the idea is that simulation lends to aligning things temporally as much as spatially and mechanically.

This is another ideal I chase where the server has three complete gameplay states it operates on: A past state that takes user inputs and applies them in proper timing to produce the present state, the present state which is the canonical state of the game and becomes the past state, and a future state which is used for information culling to prevent 'pop-in' and also gives some version of that contextless 'forward-sim' metadata I mentioned before.
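
A toy sketch of the rewind side of that: keep a short history of past states and resolve a client's shot against the tick that client was actually looking at (a simplified stand-in for the past/present/future split, not anyone's real lag compensation):

code:

interface WorldState { tick: number; players: Map<string, { x: number; y: number }>; }

class StateHistory {
  private buffer: WorldState[] = [];
  constructor(private capacity = 64) {}

  push(state: WorldState) {
    this.buffer.push(state);
    if (this.buffer.length > this.capacity) this.buffer.shift();
  }

  // Rewind by the client's latency (in ticks) and evaluate the hit there.
  stateAt(tick: number): WorldState | undefined {
    return this.buffer.find(s => s.tick === tick);
  }
}

function resolveShot(history: StateHistory, shooterTick: number, targetId: string,
                     aim: { x: number; y: number }, radius = 0.5): boolean {
  const past = history.stateAt(shooterTick);
  const target = past?.players.get(targetId);
  if (!target) return false;
  const dx = target.x - aim.x, dy = target.y - aim.y;
  return dx * dx + dy * dy <= radius * radius;
}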

So yeah, that whole 'black box' thing you encounter isn't entirely for reasons of complexity or being proprietary, but probably just raw disgusting shame at how much information they leak to the client and how abusable even AAA netcode tends to be. Their best anticheat method is usually being console-exclusive, which is why you see spyware/rootkit nonsense get slapped on the PC ports or, hell, even whole new PC releases. Looking at (and never playing) you, Valorant.

This idea that the server can do a lot more of the work in keeping the game secure is somewhat reliant on VMs getting cheaper and cheaper though; the paradigm is just really slow to shift. It used to be that a game server was some ten-thousand-dollar 1.2GHz dual-socket four-core expected to run a dozen instances, but now a decent VM is easily more powerful than any single client (without needing to do graphics at all), costs a whole forty bucks a month tops, and is still spun up only when needed.

So anyhoo, in summary: Netcode considerations depend on how much information you want to hide from the client, the distribution of work between the client and server to keep information hidden or interpolate state, and how precisely you need to keep the client(s) and server in sync temporally. Bandwidth cost and server load are almost red herrings compared to those factors, and either of those becoming a major factor is probably indicative of bigger problems.

The easiest netcode to implement is literally just serializing and shoving the whole game state to clients. That's also probably the best place to start. From there it's all about paring off the parts one doesn't need, but in some priority or order that follows those factors I described:

  • Rather than serializing the whole state, do a pass by team and filter out things that team doesn't see (see the sketch after this list).
  • Instead of taking any kind of simulation-embedded input, like dictating the vector of their gun barrel, just get an abstract 'aiming here, pressing these buttons' input object from each client and let the server sort out what it means and/or just pass it along to other clients for forward-sim.
  • With input mappings or other simple enumerations, let the client do the legwork to figure out what animation is supposed to play or other strictly visual things. Have the server handle a system, like sounds, when you want that system of things to break through the 'fog of war'. Server-controlled visibility is also great for information layers, like if you had some kind of 'sonic sensor' to let a player see footsteps through walls - one needs to cull these footstep sounds unless the player has that sensor. Splitting gameplay objects into sets and layers is fantastic for this and even simplifies team-based vision in some cases.
  • Interpolation is a good friend, and the server should run way slower than any of the usual 'display' framerates one might think of. This is where one gets into deep time slicing, e.g. a bullet fired at some particular time between updates, and as much as I've read about time slicing, it always ignores how much isolation of any given object is required to really pull it off; simulating the whole state at any arbitrary slice is silly. Anything that operates in arbitrary time should be well isolated. What this really looks like is producing a bullet already simulated to the position it'd be at as if it had been fired at that arbitrary time, but that gets into hitscan vs particle jank. I could write half a book on doing broadphase with temporal bounds to solve that, but I think you get the idea. Just run the server as slow as one can get away with.
  • As much as one can, let the server and the client run the same code. My ancient gnarly jankfest netcode-prototyping four-course-spaghetti-dinner Caliber uses node.js for the server and has all the client scripts double as node modules, so all server and client sim is absolutely 1:1.
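
As referenced in the first bullet, here's a toy sketch of the whole-state-plus-team-filter pass (teamCanSee stands in for whatever vision rules the game actually has; all names invented):

code:

// Serialize the whole state, but run a per-team pass first so a client only
// ever receives entities its team can actually see.
interface Entity { id: number; team: number; x: number; y: number; hidden?: boolean; }

declare function teamCanSee(viewerTeam: number, entity: Entity, all: Entity[]): boolean;

function buildStateFor(viewerTeam: number, all: Entity[]): string {
  const visible = all.filter(e => e.team === viewerTeam || (!e.hidden && teamCanSee(viewerTeam, e, all)));
  return JSON.stringify(visible);   // whole-state update, minus what this team shouldn't know
}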

Something that floats around conceptually for me is having the "server" be a standalone/portable thing, with the client running a local instance that stays in sync with it and does all the basic state transition and temporal work; then the visual part of the client is able to freely poll it for arbitrarily interpolated state without any worry about updates or interpolation. That way even local play has a 'server', and this enables ad-hoc connections and stuff too, while also hugely simplifying development by not needing to maintain or restart an active server on every change of interacting code. It'd be like a state negotiation model, and leads to some ideas like having a game 'upgrade' to ad-hoc by one or two players joining and doing round-robin authority and then 'upgrade' again by migrating the state to a dedicated server when even more players join.

tl;dr You're doing better than 90% by just shoving serialized state around but leaving out parts the client shouldn't know. The rest is just temporal fuckery to prevent jank and promote fairness despite lag. Your game has to have a hilariously overburdened state to need any of the crazy decades-old delta stuff and modern VM servers kinda negate the tradeoff anyway.

Rodney The Yam II
Mar 3, 2007




Amazing explanation, I've been wondering about this topic as well. My game involves a very non-deterministic AI controller that animates the game entities. I think I just have to live with the fact that the client will never see them animating smoothly, unless I cook up some brilliant forward prediction model. Thanks again!

Ranzear
Jul 25, 2013

Rodney The Yam II posted:

My game involves a very non-deterministic AI controller that animates the game entities.

What's making it nondeterministic? My only guess is RNG, but you could throw a seeded PRNG behind it instead and change the seed every so many timesteps to prevent prediction. That will let you forward-sim on the client.
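
For example, a small shared seeded PRNG (mulberry32 here, with an arbitrarily chosen reseed interval) would give client and server the same 'random' decisions for forward-sim:

code:

// mulberry32 is a small public-domain PRNG; both sides seed it identically.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const RESEED_EVERY = 600;          // ticks; server announces the new seed with its update

let rng = mulberry32(12345);
function onTick(tick: number, seedFromServer: number): number {
  if (tick % RESEED_EVERY === 0) rng = mulberry32(seedFromServer);
  return rng() * 2 - 1;            // same value on client and server for this tick
}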

nielsm
Jun 1, 2009



The network game model in OpenTTD is based on the principle of a fully deterministic simulation. When the server and all clients start from the same game state, and nobody does anything, they will all arrive at identical game states after any number of simulation ticks.

Therefore, when a client joins a server, the server sends a complete save game to the client, which it then loads.

Then, when a client does something, like build some rails, buy a train, or add some orders, those actions are sent as commands to the server. When the server has received everyone's commands it executes them to produce its own new state, and also distributes everyone's commands to all clients, so they can do the same thing.

This works great when the code is bug free, but not when there's unintended ways for clients to drift apart from the server's game state. That's a desync, and they can be difficult to detect early, and even more difficult to debug the cause of.
It turns out there are some desync bugs introduced in the current major version (1.10) that still haven't been fully diagnosed, four months after release.

Another downside of this model is that all clients have to run in lock-step with the server. If the server is underpowered to run the simulation at full speed, all clients will be forced to run at the lower simulation rate. But if a client can't keep up with the server, it just gets kicked off for falling behind.
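
A toy lockstep sketch for the flavor of it - not OpenTTD's actual code - where the server collects each tick's commands, broadcasts the batch, and everyone applies it in the same deterministic order:

code:

interface Command { playerId: string; tick: number; action: string; args: number[]; }
declare function applyCommand(cmd: Command): void;   // must be deterministic on every machine
declare function broadcast(tick: number, cmds: Command[]): void;

class LockstepServer {
  private pending = new Map<number, Command[]>();

  receive(cmd: Command) {
    const list = this.pending.get(cmd.tick) ?? [];
    list.push(cmd);
    this.pending.set(cmd.tick, list);
  }

  runTick(tick: number) {
    // Stable sort so every participant applies commands in the same order.
    const cmds = (this.pending.get(tick) ?? []).sort((a, b) => a.playerId.localeCompare(b.playerId));
    broadcast(tick, cmds);               // clients apply the same batch at the same tick
    for (const cmd of cmds) applyCommand(cmd);
    this.pending.delete(tick);
  }
}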

chglcu
May 17, 2007

I'm so bored with the USA.
While we're on the topic and there are people here more familiar with this stuff than I am: Is there a viable alternative to fully deterministic lockstep networking for a large-scale real-time strategy game, where hundreds or thousands of units may be relevant at any given time?

Chev
Jul 19, 2010
Switchblade Switcharoo
Planetary Annihilation uses a more FPS-like client-server model.
https://www.forrestthewoods.com/blog/tech_of_planetary_annihilation_chrono_cam/

chglcu
May 17, 2007

I'm so bored with the USA.

Chev posted:

Planetary Annihilation uses a more FPS-like client-server model.
https://www.forrestthewoods.com/blog/tech_of_planetary_annihilation_chrono_cam/

Yeah, that seems like a really interesting approach, though this bit about the server bandwidth requirements pretty quickly takes hosting a server on your own machine off the table, at least for any place I've lived in the US:

quote:

Servers will need to support 1 Mbit per connected player. For large games with many players this can add up quickly. This is why Uber is running dedicated servers so players never have to worry about it.

Ranzear
Jul 25, 2013

In terms of raw state-pushing, the best idea I've had is to implement both states and edges, edges being strictly a transformation between states. Optionally, multiple edges can be chained from one state to produce some new state, but the basis is that edges are simple transformations that take a state in and put a state out. They really are just deltas, I guess.

From there, one can use two different kinds of updates in their netcode. This was something I explored but never fully implemented in Caliber. There is a 'static' update that is the most recent full state, obviously supplied to joining clients and would be as complete as a save file, and dynamic updates which are just the transformation edges between states. From there, multiple edges on a single state are additive, meaning one can also produce an edge that makes corrections, like if some player's laggy input changed an outcome and their position should be adjusted to compensate.

The clever bit with this is letting objects be identified mostly by anonymizing hashes. Then one can do something like the team-vision-based culling I mentioned before, giving each client a limited state object to restrict its information, but then throw the same complete edge updates to all players. The final trick is to shuffle all the hashes on each full state update, so no continuing edges for a given object can still be associated with that object. The edge updates are useless without an associated object in the main state update, and since edge updates aren't culled they're cheap enough to crank way up in frequency.

I just never got around to implementing the edge updates or hash shuffling for Caliber, because full state updates weren't pushing more than 500kbps for thirty-plus visible players (i.e. spectating, no culling) and the server starts to eat itself for many other spaghettiriffic reasons long before tick rate becomes an issue. I was leaning toward full state updates as slow as 4Hz, and edge updates as fast as 100Hz. The other weirdness was that it shouldn't even matter if edges arrive out of order, just apply them as received, so UDP is an option. Even mild packet loss would be quickly corrected by the main update.

Theory is not implementation though. Even just calculating the edges between states and ideally trimming out 'unchanged' stuff is probably too big a tradeoff to beat out big simple state shoving.
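
Still, roughly what the state-plus-edges idea looks like as a sketch (hash generation and all names here are stand-ins, not anything implemented in Caliber):

code:

type ObjectState = { x: number; y: number; hp: number };
type FullState = Map<string, ObjectState>;            // opaque hash -> object
type Edge = Map<string, Partial<ObjectState>>;        // hash -> only the fields that changed

// Edges are additive: apply them as they arrive; unknown hashes are ignored.
function applyEdge(state: FullState, edge: Edge): void {
  for (const [hash, delta] of edge) {
    const obj = state.get(hash);
    if (obj) Object.assign(obj, delta);
  }
}

// On each full update, hand out fresh hashes so stale edge traffic can't be
// tied back to objects a client was never given.
function reshuffle(state: FullState): [FullState, Map<string, string>] {
  const next: FullState = new Map();
  const renamed = new Map<string, string>();           // old hash -> new hash, server-side only
  for (const [oldHash, obj] of state) {
    const newHash = Math.random().toString(36).slice(2);  // stand-in for a real random ID
    next.set(newHash, obj);
    renamed.set(oldHash, newHash);
  }
  return [next, renamed];
}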

nielsm
Jun 1, 2009



If you want to study this in actual game code, I think Quake 1 will work well. It's simple, and is a working example of client prediction being added to an existing game that didn't have it.

The original release version runs all game logic on the server: Each client sends their inputs to the server ("begin moving forward", "stop moving forward", "press the trigger", "release the trigger", "impulse action 4", etc.), and the server sends back a delta of all entities on the map. This makes the implementation simple and robust, just not good as soon as you get into pings of 15+ ms, i.e. anything but local network or a short-range phone call. The entity data sent to the client just consists of position, rotation, which model or sprite to use, and whether it emits dynamic light.

Some years later QuakeWorld was developed, same graphics engine, but now the game code ran on both server and client, allowing the client to predict. I'm pretty sure QW was the first game to use prediction this way to solve the internet multiplayer problem.

Later source ports of Quake have added further new protocol versions, and I'm not sure what the improvements in those are.

Rodney The Yam II
Mar 3, 2007




This is all so fascinating, and I'm reflecting on my days as a 56k modem gamer in a broadband world and how differently real-time games would feel and play with a bad connection. Lag strats!

Ranzear posted:

What's making it nondeterministic? My only guess is RNG, but you could throw a seeded PRNG behind it instead and change the seed every so many timesteps to prevent prediction. That will let you forward-sim on the client.

Thanks for the tip! The problem is that the AI is controlled by an experimental machine learning algo that runs at the physics rate to animate its body movements (none of the animations are predetermined, just the body physics). The best solution I've found so far is to run the AI on the server and just send body joint angles etc and occasionally full entity transforms to the clients. But with 12 joints per AI body, it's maybe a lot to try and keep in sync. Ends up real stuttery.

For now I'm using Photon for networking because my teammate recommended it. Is there an efficiency loss associated with using this kind of product? Would custom netcode (theoretically) result in tighter sync?

Wallet
Jun 19, 2006

Rodney The Yam II posted:

Thanks for the tip! The problem is that the AI is controlled by an experimental machine learning algo that runs at the physics rate to animate its body movements (none of the animations are predetermined, just the body physics). The best solution I've found so far is to run the AI on the server and just send body joint angles etc and occasionally full entity transforms to the clients. But with 12 joints per AI body, it's maybe a lot to try and keep in sync. Ends up real stuttery.

I have no idea what kind of game it is, but assuming it's moving in some semi-logical way instead of just twitching (and depending on the complexity of the movement) you may be able to address the stuttering by doing some simple(r) prediction and interpolation on the client side rather than just snapping between the states received from the server.
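
For instance, a minimal sketch of buffered interpolation for the joint angles: keep the last two received poses and blend between them by time instead of snapping (names invented; naive lerp only holds up if the joints don't wrap past 360 degrees):

code:

interface TimedPose { time: number; joints: number[]; }   // one angle per joint

function lerpPose(prev: TimedPose, next: TimedPose, renderTime: number): number[] {
  const span = next.time - prev.time;
  const t = span > 0 ? Math.min(1, Math.max(0, (renderTime - prev.time) / span)) : 1;
  return prev.joints.map((a, i) => a + (next.joints[i] - a) * t);
}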

nielsm
Jun 1, 2009



Rodney The Yam II posted:

This is all so fascinating, and I'm reflecting on my days as a 56k modem gamer in a broadband world and how different real-time games would feel and play differently with a bad connection. Lag strats!

The real issue with modem internet for real-time games was typically not the bandwidth as such, but rather buffers on the ISP side. If you make a direct call between two modems, and it's a local call (same telephone exchange) instead of going across the Internet via a common ISP, you should be able to get latency in the range of 5 ms or better.
However typically an ISP will have some large buffers on their end that can delay traffic by quite a lot, especially if many customers are busy. The buffers can make the connection appear to have less packet loss, which might make some applications (like web browsing) appear smoother.
So while analog modems were slow, they were not by themselves the cause of poor latency.

Rodney The Yam II
Mar 3, 2007




Wallet posted:

I have no idea what kind of game it is, but assuming it's moving in some semi-logical way instead of just twitching (and depending on the complexity of the movement) you may be able to address the stuttering by doing some simple(r) prediction and interpolation on the client side rather than just snapping between the states received from the server.

That's a great idea, thanks!


nielsm posted:

Modem stuff

The world of telecommunications will never cease to amaze me

Chev
Jul 19, 2010
Switchblade Switcharoo
It's actually super logical: low latency has always been the least prioritized property of communication networks in every domain except, specifically, action videogames. That's why netcode itself only gets you to a certain point, and the secret sauce companies like Riot use to get low latency is paying for their own specialized infrastructure. Interesting links:

https://technology.riotgames.com/news/fixing-internet-real-time-applications-part-iii
https://gafferongames.com/post/fixing_the_internet_for_games/

(gafferongames also has a whole bunch of interesting tutorials on netcode implementation)

Ranzear
Jul 25, 2013

Ping jitter is far more noticeable and harder to correct for than raw latency. A client that wildly swings from 20ms to 80ms latency likely has a far worse experience than a client that is consistently 120ms. The biggest problem with gaming on wifi isn't latency or loss, it's jitter.
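
One common way to hide jitter (a generic sketch, not from any particular engine) is a small interpolation buffer: render a fixed interval behind the newest snapshot, so irregular packet arrival never reaches the screen as long as the buffer holds:

code:

interface TimedSnapshot { serverTime: number; state: unknown; }

class JitterBuffer {
  private snaps: TimedSnapshot[] = [];
  constructor(private delayMs = 100) {}       // render 100ms behind the newest data

  push(snap: TimedSnapshot) {
    this.snaps.push(snap);
    this.snaps.sort((a, b) => a.serverTime - b.serverTime);
  }

  // Return the two snapshots bracketing (now - delay) for interpolation.
  sample(nowMs: number): [TimedSnapshot, TimedSnapshot] | null {
    const t = nowMs - this.delayMs;
    for (let i = 0; i + 1 < this.snaps.length; i++) {
      if (this.snaps[i].serverTime <= t && t <= this.snaps[i + 1].serverTime) {
        return [this.snaps[i], this.snaps[i + 1]];
      }
    }
    return null;                              // buffer underrun: jitter exceeded the delay
  }
}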

Hughlander
May 11, 2005

Heh, get me drunk some time and remind me about the project I helped out on whose entire network model, conceived by the in-house PhD, involved never dropping a packet...

You'll never guess what all the reviews talked about when it came out!

nielsm
Jun 1, 2009



It was either massive latency issues, or massive packet loss (lol), right?

Xarn
Jun 26, 2015
Don't implement your game networking on top of TCP, kids.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Xarn posted:

Don't implement your game networking on top of TCP, kids.

Eh; disable Nagle and you're mostly fine
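
For example, in Node that's one call on the socket (a sketch, plain TCP assumed; the port is arbitrary):

code:

import * as net from 'net';

const server = net.createServer((socket) => {
  socket.setNoDelay(true);   // turn off Nagle so small game messages go out immediately
  socket.on('data', (buf) => {
    // parse game messages here, without waiting for the OS to coalesce small writes
  });
});
server.listen(7777);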

Ranzear
Jul 25, 2013

I don't know of any socket library I've looked into lately that doesn't use TCP_NODELAY by default, because most are doing their own buffering. Rust's does at least.

Sounds more like UDP if packet loss was causing it to implode. Somebody pour Hughlander a drink.

Vino
Aug 11, 2010
I'll add some thoughts to the game networking discussion. Read and learn from my mistakes. (Note that my experience is mainly based around first and third person realtime shooters and networking models for other types of games will necessarily be different, I don't know how eg MOBAs or MMOs should work.)

1. If you're going to make networking support for your game then do not do not do not do the simple obvious thing where you have a single binary that is sometimes a client, sometimes a server, and sometimes a listenserver depending on parameters. This is a great way to end up with client-authoritative features and bugs everywhere the logic works differently in those three situations. Instead, make a .dll that is always the server, and a separate .dll that's always the client, and if you want a listenserver then load them both in the same memory space.

2. If you're going to develop an API to expose networking to gameplay engineers then don't allow them to serialize their own messages. If you listened to my approach above then you should be able to get away with exposing the following:

- Servers write to replicated variables that are automatically (using methods previously discussed in this thread) serialized and replicated to all clients (a rough sketch of this API follows after the list). This is how the server communicates with clients.
- Movement, shooting, and abilities if the game has them should be sent from the client to the server as a stream of inputs. If you do this then you can add prediction on top of it by running the same input stream on the server. Predict as little as you can get away with.
- Everything else should be sent as a stream of commands out of band of the input stream.
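
A rough sketch of what that kind of API surface could look like (all names invented, not any real engine): gameplay code declares replicated fields and pushes input frames, and never hand-rolls packets.

code:

type Replicated<T> = { value: T; dirty: boolean };

class ServerChannel {
  private fields = new Map<string, Replicated<unknown>>();

  // Gameplay code declares a replicated variable once and gets a handle back.
  replicated<T>(name: string, initial: T): Replicated<T> {
    const field = { value: initial, dirty: true };
    this.fields.set(name, field);
    return field;
  }

  set<T>(field: Replicated<T>, value: T) { field.value = value; field.dirty = true; }

  // Called by the networking layer each tick; gameplay code never touches serialization.
  flush(): string {
    const out: Record<string, unknown> = {};
    for (const [name, f] of this.fields) {
      if (f.dirty) { out[name] = f.value; f.dirty = false; }
    }
    return JSON.stringify(out);
  }
}

// Client -> server side: movement/shooting go only as a stream of input frames.
interface InputFrame { tick: number; move: [number, number]; buttons: number; }

// Usage by a gameplay engineer:
const chan = new ServerChannel();
const health = chan.replicated('health', 100);
chan.set(health, 87);                // replicated automatically on the next flush()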

cmdrk
Jun 10, 2013
i've been writing a game server for fun with a protocol largely modeled off of old MMOs. basically i keep a big table of opcodes to define different kinds of messages, and then I use protocol buffers to serialize/deserialize my data from my server (Erlang) to my client (Godot). this seems to be fairly efficient and work pretty well for the type of game that i'm building (multiplayer SimCity-esque game).

for my purposes im happy with this, but I was wondering what is the 'modern' way of approaching things? do people tend to just grab RPC frameworks off the shelf or do most AAA (MMO) titles do something entirely custom?
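
For illustration, the dispatch shape of that opcode-table approach in TypeScript (cmdrk's actual stack is Erlang + Godot with protobuf; the opcodes and handlers here are made up):

code:

enum Op { Move = 1, Build = 2, Chat = 3 }

type Handler = (payload: Uint8Array, playerId: string) => void;

const handlers: Partial<Record<Op, Handler>> = {
  [Op.Move]:  (payload, playerId) => { /* decode the Move message and apply it */ },
  [Op.Build]: (payload, playerId) => { /* decode Build, validate, apply */ },
  [Op.Chat]:  (payload, playerId) => { /* decode Chat, broadcast */ },
};

// Each message is framed as [opcode byte][serialized body].
function dispatch(message: Uint8Array, playerId: string): void {
  if (message.length < 2) return;
  const op = message[0] as Op;
  const handler = handlers[op];
  if (!handler) return;                  // unknown opcode: ignore (or kick the client)
  handler(message.subarray(1), playerId);
}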

Hughlander
May 11, 2005

cmdrk posted:

i've been writing a game server for fun with a protocol largely modeled off of old MMOs. basically i keep a big table of opcodes to define different kinds of messages, and then I use protocol buffers to serialize/deserialize my data from my server (Erlang) to my client (Godot). this seems to be fairly efficient and work pretty well for the type of game that i'm building (multiplayer SimCity-esque game).

for my purposes im happy with this, but I was wondering what is the 'modern' way of approaching things? do people tend to just grab RPC frameworks off the shelf or do most AAA (MMO) titles do something entirely custom?

How I've handled that is a deterministic engine that runs on both client and server. The client sends a command to the server, the server validates it and gives an output, and the client already simulated it as soon as the command went to the server. If the outputs match, everything goes on; if they're wrong, there's a weird desync (bug/hacker) and you resynchronize with the server as authoritative. Having client/server in different languages will obviously have problems there.

cmdrk
Jun 10, 2013

Hughlander posted:

How I've handled that is a deterministic engine that runs on both client and server. The client sends a command to the server, the server validates it and gives an output, and the client already simulated it as soon as the command went to the server. If the outputs match, everything goes on; if they're wrong, there's a weird desync (bug/hacker) and you resynchronize with the server as authoritative. Having client/server in different languages will obviously have problems there.

yeah i'm definitely trying to be careful with things that may differ significantly in language implementation. especially floating point numbers and how those get handled by the client and server. other things are fairly easy to predict and account for. ie in my essentially tile-based game, client sends its intent to perform an action on a tile, assumes it will be complete, server sends an authoritative yes or no back to the client. if the server disagrees, client fixes the local view appropriately.

that brings me to another question actually: if you had a networked, top-down 2D type of world with free movement (think 2D Zeldas or whatever), how would you ensure players aren't walking through walls etc? would your library running in the server need to load up the world geometry and do the whole simulation server-side? i guess it must?

Vino
Aug 11, 2010
Yes. You should be running the same code on the server and the client so that when the client says "I was at X,Y,Z and I pressed W" they can both do the same simulation and end up in the same place. This is what people mean when they say deterministic.

nielsm
Jun 1, 2009



Yes, going back to OpenTTD that's exactly how it works, a deterministic simulation that runs on all participants in the game. Floating point is banned in any code that can have an effect on the simulation, only stuff like rendering statistics windows or mixing audio is allowed to use floating point.
The command preflight does everything except actually alter game state. It's also the same code path that's used to do cost estimation for the player.

Ranzear
Jul 25, 2013

Fixed point is good for consistency but a pain in the butt to actually implement. I ended up just trimming floats all the time, but it did save a lot of raw bytes in my JSON streaming. Outside of extreme cases, like Kerbal Space Program before the origin-centering patch, nothing needs full-precision floats. Hell, few things need more than three decimal places, which would be millimeter precision in Unity.
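
For example, the float trimming can just be a rounding pass before serialization (a sketch; three decimal places assumed to be enough, per the above):

code:

// Round before serializing; three decimal places is millimeter precision for
// meter-based units and keeps the JSON small.
function trim(n: number, places = 3): number {
  const f = 10 ** places;
  return Math.round(n * f) / f;
}

const update = { x: trim(12.3456789), yaw: trim(127.000123) };
// JSON.stringify(update) -> {"x":12.346,"yaw":127}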

Don't rely entirely on determinism either; just have a periodic re-sync of clients and reduce desync as much as possible. OpenTTD is an outlier, probably for being too broad a simulation to throw a complete state around, and that level of lockstep also relies on complete information going to all clients, which, as already discussed, is a no-no in games with mechanics reliant on hidden information. I also presume OpenTTD is built for multiple players potentially interacting with the same objects, which doesn't really come up in other games.

As for implementation of input, make it something parallel to or inserted into the game state. A very simple struct like what Vino mentioned: just a set of raw input values that are tied to and fed to some particular game object. It's also not a directive like "I aim my turret 30 degrees left" but instead an intent like "I want my turret to be pointed 127 degrees absolute", which lets both the server and client independently determine how much the turret turns in real time, forward-sim time, or inter-update frame time. When abstracted this way, the client just plugs the more current local input object in for forward-sim until the next update, which makes local input immediate and smooth, but the server also just passes along inputs on other player-controlled objects instead of needing to do any 'baking in' of those inputs for forward- or inter-update sim. This also makes it easier to have bots, because they just produce the same input object as a regular player and don't need to process as much state.
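
A toy sketch of that intent-style input, with the same step function usable on server and client (the turret, turn rate, and field names are all invented):

code:

// The client only states an intent; both sides turn the turret at its own rate,
// whatever timestep they happen to be simulating with.
interface TurretInput { targetYawDeg: number; }   // "I want my turret at 127 degrees absolute"

const TURN_RATE_DEG_PER_SEC = 90;

function stepTurret(currentYawDeg: number, input: TurretInput, dtSeconds: number): number {
  let d = (input.targetYawDeg - currentYawDeg) % 360;
  if (d > 180) d -= 360;                          // take the short way around
  if (d < -180) d += 360;
  const step = Math.sign(d) * Math.min(Math.abs(d), TURN_RATE_DEG_PER_SEC * dtSeconds);
  return (currentYawDeg + step + 360) % 360;
}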

This is how Caliber runs 120fps+ on the client, but a variable tick rate based on player count server-side. Everything is delta time and any precision loss or collision damage weirdness is just chalked up to server load.

Eliminating 'instantaneous' input is nice all around, like my turret-aiming example. I don't have anything against games with instantaneous snap aiming, but I like the physical realism of not having instantaneous perfectly accurate inputs, and it makes everything easier to synchronize and keep smooth between updates. Instantaneous direction changes from client input are hard to reconcile in netcode because they're hard to reconcile in physics and reality, but an intent and a known timeframe to achieve that intent can be executed in many different temporal offsets and granular simulation steps at once. It's also a mild anticheat method because the client has zero authority over the absolute precision of an input.


RazzleDazzleHour
Mar 31, 2016

So if I'm putting in applications for Environment/Texture artist jobs, what are some things that employers like to see? I've talked to like six or seven people about this already but I seem to get different answers every time so I like to collect responses since I usually base what I'm gonna do next around the answers.
