Sigma Tau: Networking & Structure

The code for Sigma Tau separates the World and the Bridge, which makes for more intuitive and powerful networking for a bridge simulator, albeit different from typical games.  Rather than having individual players connect to the World Server, each ship is a single client which also acts as a server for the individual officers.  I have a variety of reasons for planning to do it this way.
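To picture the topology, here is a minimal in-process sketch (the class and method names are hypothetical, for illustration only): the World Server sees each ship as a single connection, and the Ship Server fans state out to its officer Terminals.

```python
class WorldServer:
    """Knows ships only as ships: one connection per ship, not per player."""
    def __init__(self):
        self.ships = []

    def connect(self, ship):
        self.ships.append(ship)

    def broadcast(self, state):
        for ship in self.ships:      # one message per ship...
            ship.on_world_state(state)

class ShipServer:
    """A client of the World Server, and a server for its own Terminals."""
    def __init__(self):
        self.terminals = []

    def attach(self, terminal):
        self.terminals.append(terminal)

    def on_world_state(self, state):
        for t in self.terminals:     # ...fanned out locally to each officer
            t.show(state)

class Terminal:
    """An officer's station; just displays what the Ship Server relays."""
    def __init__(self):
        self.seen = []

    def show(self, state):
        self.seen.append(state)

# Wire it together: one bridge with two officer stations.
world = WorldServer()
bridge = ShipServer()
world.connect(bridge)
helm, science = Terminal(), Terminal()
bridge.attach(helm)
bridge.attach(science)
world.broadcast({"tick": 1})   # World sends once; both Terminals receive it
```

The point of the sketch is just the shape: the World Server's loop never iterates over officers, only over ships.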


1. Unix Philosophy

I am a firm believer in the Unix Philosophy [1]: “Write programs that do one thing and do it well.”  The World Server only knows a ship as a ship (not as several players), and it is a game like any typical game (where ships fly around and shoot each other) without caring about complications (like system power).  The World Server does not need to be (much) more complicated than this (in theory it could be repurposed as a single-pilot game).  Because of this clear separation of concerns, contributing to the code can also be simpler.


2. No increased latency

Despite the fact that there are two layers of servers, latency is not necessarily going to be longer.  The expectation is that, except in special cases, the Ship Server will be either on the same network as the Terminals or on the same computer as the World Server.  Latency over LAN is insignificant compared with latency over the internet.  The only time latency would be increased is if the Ship Server is hosted somewhere different from both the World Server and the Terminals (an arrangement which would not even be possible with a typical single client, single server).


3. Reduced bandwidth

Having a separate World and Ship could greatly reduce bandwidth in the places where it actually matters (and therefore reduce added latency).

One of the circumstances where bandwidth would be reduced (particularly on the server’s network) is when two ships are playing together, each bridge in a different place.  If each bridge runs the Ship Server on its local network, then the World Server only needs to send data to two connections, rather than redundantly replicating everything to every Terminal, and the Ship Server can send data to the Terminals without stressing the World Server’s network.
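A back-of-the-envelope sketch of the fan-out savings (all numbers here are illustrative assumptions, not measurements):

```python
# Two bridges, six officer Terminals each, with an assumed update size and rate.
state_bytes = 2_000          # size of one world-state update (assumed)
rate_hz = 10                 # updates per second (assumed)
bridges = 2
terminals_per_bridge = 6

# Without Ship Servers: the World Server streams to every Terminal directly.
direct = state_bytes * rate_hz * bridges * terminals_per_bridge

# With Ship Servers: one stream per bridge; the LAN handles the local fan-out.
fanned = state_bytes * rate_hz * bridges

print(direct, fanned)  # 240000 vs 40000 bytes/s leaving the World Server
```

Under these assumed numbers, upstream bandwidth from the World Server drops by a factor equal to the number of Terminals per bridge.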


4. Simpler Terminal

The Sigma Tau Terminal will be a super-lightweight web interface without much latency handling.  The Terminal will basically only be an interface to change/view values on the Ship Server (thruster power level, scanner state, etc.) without predicting the effect of changing those values.  This requires a low-latency connection to the Ship Server (like LAN) but allows the Terminal to be super lightweight.  I can only practically run a few instances of EE on my computer, and, because my computer currently has a poor connection to the router, the other day I had to run only a single client!
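The "thin view/edit layer" idea can be sketched as a plain get/set interface (the names and values here are hypothetical): the Terminal only reads and writes named values, and all simulation stays on the Ship Server, so no client-side prediction is needed.

```python
class ShipState:
    """What a Terminal talks to: named values, nothing more."""
    def __init__(self):
        self.values = {"thruster_power": 0.0, "scanner_state": "idle"}

    def set_value(self, key, value):
        # What a Terminal's "change" request would do; no effects are
        # predicted here, the simulation applies them server-side.
        if key not in self.values:
            raise KeyError(key)
        self.values[key] = value

    def get_value(self, key):
        # What a Terminal's "view" request would do.
        return self.values[key]

ship = ShipState()
ship.set_value("thruster_power", 0.8)   # Terminal sends the new value...
print(ship.get_value("thruster_power")) # ...and displays whatever comes back
```

Because the Terminal never simulates anything itself, it stays trivially lightweight; the trade-off is that every interaction is a round trip, which is why the low-latency LAN link matters.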


5. Not meaningfully more complicated to use

There is no need for this to be any more complicated to set up.  It would be trivial to have the main application let you start the World Server and a connected Ship Server (or several).  Players then either navigate to the Ship Server’s URL in their browser or start another Ship Server to add another ship.  Think of the Ship Server more like a player client in a typical game, but with multiple web GUI connections for controlling it.


This is a fuller explanation of why I am implementing the networking for Sigma Tau this way.  If you have remarks or questions, I am happy to discuss.


[1] https://en.wikipedia.org/wiki/Unix_philosophy

Comments

  • edited March 11

    If each bridge runs the Ship Server on its local network, then the World Server only needs to send data to two connections, rather than redundantly replicating everything to every Terminal, and the Ship Server can send data to the Terminals without stressing the World Server’s network.

    Seems more or less reasonable from a bandwidth / throughput perspective, not so great from a latency perspective, but will probably be ok since it's very likely not the type of game to need super low latency anyway.

    And it will add complexity of course, and I think the main danger with these types of projects is biting off more than you can chew and/or taking a very long time to build up the required infrastructure to get to the point of something playable. And this non-playable infrastructure-building time is when you'll potentially get discouraged or lose interest and abandon the project, esp. if you haven't made something like this before. This almost happened to me with SNIS: I worked on it for about a month building up the lobby system, then lost steam and abandoned it for 2 years. It was really almost an accident that I ever picked it up again. There is a long tradition of new posters in game dev forums waltzing in, announcing grandiose plans, and then disappearing, never to be seen again, and it's this that was leading me to think, "Why are you making it so complicated?" But perhaps you know what you're doing.

    In any case I would suggest that as quickly as possible you get something up and running with all the basic parts (world server, ship server, client) talking to each other without getting too tangled up in details, and without it actually doing very much other than just establishing that these parts can talk to each other. Then build on that. Though one detail I would consider very early on is the coordinate system, which way is x, y, z, and right or left handed, and units. Pick whichever one matches whatever you're planning to use for graphics and stick with it throughout. What I would not recommend is starting with a 2D game with x,y and then later tacking on z to make it 3D. That is what I (very painfully) did with SNIS.

  • The unix philosophy works when tasks are independent, the code is modular, and everything is trusted. An email server is composed of different executables and code bases, and is very flexible. You do not see anyone opening up sendmail for anyone in the world to use. Just like one wouldn't trust xxxhax0rxxx to use an open email relay, I wouldn't trust anyone running a client with any authority over the game state. A little in-memory editing and this bridge server can report that the engineer got it to 200% power, while the engineer has not done so. This is a problem even in niche games, like Allegiance and early Minecraft. I also think this proposed system will violate the unix philosophy with the increased code complexity. Games can still be coded with the unix philosophy in mind (ECS for example) even though they are bundled as a single executable.

  • Flexibility comes at a cost, usually a complexity cost. Yes, the small tools are easy and simple, but the bigger system built from them can be much more complex than a monolithic application. (I'm looking at you, 2500 lines of shell script that fail at random!)

    1. Unix Philosophy

    "Do one thing and do it well" could also be translated to classes; they don't really need to be separate processes. And I think you'll quickly discover that you need things beyond the boundaries you set. Also, you'll cause data duplication and synchronization that you need to keep up to date across multiple processes, increasing complexity instead of decreasing it.

    There is also the concern of authority. You'll be splitting authority: the main server has authority over some things, but the ship servers over others. This will complicate things, especially when multiple things can have an influence. For example, take "system damage": in EE, when your hull is damaged, some of that damage is translated to systems. So if hull damage lives on the main server and system damage on the ship server, this will be added complexity in handling this. If hull damage lives on the ship server, then destroying a ship once the hull is destroyed becomes more complex.


    2. No increased latency

    There are lies, damned lies, and this. You are putting a process in between, so you will increase latency. Whether it's noticeable depends on your code, network, and even OS. I think you'll be fine. However, stating that there is no change in latency is simply a lie. (As a real-time embedded software engineer, latency is kind of a thing I deal with quite often.)

    Just the extra task switch and processing that you need to do can add up to a few ms of latency. Still, anything below 100 ms most likely feels fine.
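    A rough latency budget makes the point concrete (all numbers below are assumptions for illustration, not measurements of either game):

```python
# Rough latency budget for a Terminal action reaching the World Server.
internet_rtt_ms = 60   # assumed round trip from the bridge to the World Server
lan_hop_ms = 1         # assumed Terminal-to-Ship-Server hop over LAN
processing_ms = 2      # assumed extra task switch / relay processing

total = internet_rtt_ms + lan_hop_ms + processing_ms
print(total)  # 63 ms: the relay adds a few ms to a ~60 ms baseline
```

    So the extra hop is real but small next to the internet round trip, which is consistent with both points above: latency does increase, and it most likely stays under the ~100 ms threshold.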

    3. Reduced bandwidth

    Bandwidth is plentiful these days, and unless your connection is getting saturated it has little to no effect on latency. Bandwidth only seems to become an issue for EE when we really up the number of players, like more than ~30 connected clients.

    And, I just built a "proxy server" for EE, which allows this distribution of bandwidth as well: a single connection to the server, multiple clients connected. But the reason for building this had nothing to do with bandwidth but with reliability. If a proxy crashes, the server happily continues running. So it removes strain from the server when going near 100 connected clients. (Oh yes, kilted klingon is planning big things.)

    4. Simpler Terminal

    EE runs a pretty basic simulation on the clients. No AI, for example. And for many things it does a round trip to the server before anything updates. Notable exceptions are the power/coolant/impulse sliders, which update on the clients directly to give a snappy feeling. But the effect of these changes requires a server round trip.

    It's actually the bad rendering code that is the slow bit on EE clients. And a web client could potentially solve that.

    5. Not meaningfully more complicated to use

    Yes, I highly recommend abstracting this away from 99% of your players. Nobody wants to start 4 different things before they can start playing. Most people don't care that ships run as different processes, so don't bother them with those details.

  • I am going to leave off my point about the Unix philosophy. It wasn't really quite applicable, and the Unix philosophy is often very misunderstood; I once heard someone in a talk use the term Unix philosophy to mean the exact opposite. He claimed Linux violated the Unix philosophy because it could run on things from crock-pots to supercomputers to cow-milkers, which is precisely evidence of it fitting the Unix philosophy. If what I just said begs you to reply, do it in a new thread, please (:

    @daid Okay, yes, "No increased latency" is technically false (I knew that...). But really? A few ms of latency? In an indie project?! There are many other things which make a much bigger difference (TCP/UDP, optimizations, etc.). Is a few ms even within the margin of error when we're on a scale of hundreds of ms? Give me a break.

    @croxis I expect the ship-server to be trusted. Even EE puts a LOT of trust in the clients (did you know that with a single line of client-side code you can make science scans instant!)

    @daid Um, EE clients do a lot on their own, but that is interesting how much it lets the server do.

  • If you expect people to be running their own private gameworlds, client trust is fine. If you are planning for the servers to be publicly hosted, then any client trust is a very bad idea.

  • Note that I spoke about authority instead of trust. EE puts reasonably high trust in its clients (they know the whole game state but are trusted to only show what you should be seeing).

    But authority is a different thing. It points to the source of truth. Only for scanning and hacking does EE put authority on the clients (to keep it simpler for those cases); for everything else, the server has authority. The server decides what happens, and that is not about trust, but about complexity.

    Example: if the world server has authority over ship positions, and the ship server over system states, then how do you handle impulse engine power? That info is split between two different authorities.
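    One way to picture the split the example describes (a sketch with hypothetical names, not how Sigma Tau or EE actually handles it): the ship server owns system power and reports only a derived thrust value, while the world server owns position and integrates motion from it.

```python
class ShipSystems:
    """Authority: ship server. Owns system state like impulse power."""
    def __init__(self):
        self.impulse_power = 1.0            # 0.0 .. 2.0, set by engineering

    def effective_thrust(self, requested):
        # Derived value sent to the world server; not authoritative state.
        return requested * self.impulse_power

class WorldShip:
    """Authority: world server. Owns position; only sees thrust, not power."""
    def __init__(self):
        self.position = 0.0

    def step(self, thrust, dt):
        self.position += thrust * dt        # world integrates motion

systems = ShipSystems()
world_ship = WorldShip()
systems.impulse_power = 0.5                 # engineer halves the power
world_ship.step(systems.effective_thrust(10.0), dt=1.0)
print(world_ship.position)  # 5.0: half power yields half the movement
```

    The cost the comment points at shows up even in this toy version: every derived value crossing the boundary is one more thing the two authorities must agree on.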
