[GAME] Space Nerds In Space


Comments

  • edited November 2019
    Infinity isn't as bad as NaN, but still interesting, as it's a near total loss of precision, best case. Also, I might be wrong, but I think isnan(x) returns true if x is an infinity. Edit: I suspect I am wrong about that.
  • edited November 2019
    Most of the network code just checks "value not the same as last time? Send it." So if a NaN gets in there, it just keeps sending it at 60 updates per second, as it is never equal to the previous value.
    Huh, your network code is more sophisticated (or at least different) than mine in this regard. I don't bother checking the values. On the server, when an object's values are updated, I update a per-object timestamp, and this timestamp is compared against a per-client timestamp to decide whether to transmit updates to each client. If there's any change since the last thing sent (that is, the object's timestamp is newer), I send the complete update for that object.

    There is a super cool idea I've run across for doing client updates that goes like this: Save each client update on the server for one cycle, then when the next client update is to be sent, XOR it with the previous client update, then compress it, then send it. If it's nearly the same as the last update (which is likely the case), then the XOR'ed result will be mostly zeroes, and consequently the compression will be amazingly awesome. Haven't attempted an implementation though. On the client side, XOR with the previous update to reconstitute.
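
    Roughly, the XOR-delta step might look something like this (just a sketch of the idea, not real SNIS code; the buffer size, struct, and function names are made up, and zlib's compress() stands in for whatever compressor you'd actually use):

    /* XOR the new update against the previous one sent to this client,
     * compress the (mostly zero) result, and remember the new update for
     * next time. The client does the reverse: uncompress, then XOR with
     * its copy of the previous update to reconstitute the new one. */
    #include <stdint.h>
    #include <string.h>
    #include <zlib.h>

    #define UPDATE_SIZE 1024 /* hypothetical fixed-size client update */

    struct client_state {
        uint8_t prev_update[UPDATE_SIZE]; /* last update sent to this client */
    };

    static long make_delta_packet(struct client_state *c,
                                  const uint8_t new_update[UPDATE_SIZE],
                                  uint8_t *out, unsigned long out_capacity)
    {
        uint8_t delta[UPDATE_SIZE];
        uLongf compressed_len = out_capacity;

        for (int i = 0; i < UPDATE_SIZE; i++)
            delta[i] = new_update[i] ^ c->prev_update[i];
        if (compress(out, &compressed_len, delta, UPDATE_SIZE) != Z_OK)
            return -1;
        memcpy(c->prev_update, new_update, UPDATE_SIZE);
        return (long) compressed_len;
    }
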
  • Ah, yes, your network code is vastly different then.

    I modeled mine loosely on how the Unreal engine does it. First, let's set some terminology. "Replication" is the act of sending the server state to the clients.

    And I replicate 2 main things:
    * Objects that exist on the server and need to exist on the client as well.
    * Member variables of those objects.

    Objects are really simple in replication: just creation and destruction. On creation, all member variables are sent along with the object.
    But members are checked one-by-one. Only changed members are sent to the client after the initial replication. This all happens without the need for bookkeeping code inside the actual objects; it is all handled by a base class.


    Usage is very simple due to all kinds of magic I pull; replicating objects is as simple as adding a single macro call:
    https://github.com/daid/EmptyEpsilon/blob/master/src/spaceObjects/scanProbe.cpp#L11

    And replicating members is just a function call in the constructor:
    https://github.com/daid/EmptyEpsilon/blob/master/src/spaceObjects/scanProbe.cpp#L17
    And some objects have a lot of members:
    https://github.com/daid/EmptyEpsilon/blob/master/src/spaceObjects/spaceship.cpp#L126

    Some registerMemberReplication calls have an additional parameter. This is the minimum delay between updates. Many things are simulated on the clients as well, and updating those frequently would look bad and cause a lot of network traffic, as they are constantly changing.
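
    In spirit, the per-member bookkeeping boils down to something like this (an illustrative C sketch only, not the actual SeriousProton code, which is C++; the struct and function names here are made up):

    #include <stdint.h>
    #include <string.h>

    /* One record per registered member variable. */
    struct replicated_member {
        void *ptr;            /* points at the member inside the object */
        size_t size;          /* size of the member in bytes */
        uint8_t *last_sent;   /* copy of the value as last sent to clients */
        float min_delay;      /* minimum seconds between updates, 0 = every change */
        float time_since_send;
    };

    /* Send the member only if its value changed and the minimum delay elapsed. */
    static int member_needs_update(struct replicated_member *m, float dt)
    {
        m->time_since_send += dt;
        if (m->time_since_send < m->min_delay)
            return 0;
        if (memcmp(m->ptr, m->last_sent, m->size) == 0)
            return 0;
        memcpy(m->last_sent, m->ptr, m->size);
        m->time_since_send = 0.0f;
        return 1;
    }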


    And then there is one final replication type: position/velocity replication, which has a lot more logic to it:
    https://github.com/daid/SeriousProton/blob/master/src/multiplayer.cpp#L66
    It updates positions of objects that are close to player ships more often than those that are farther away.
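
    The general idea is to scale the update interval with distance to the nearest player ship, something like this (a made-up sketch, not the linked SeriousProton code; the constants and names are invented):

    /* Objects near a player ship get updated ~20 times/sec; distant objects
     * drop to ~2 times/sec, with a linear ramp in between. */
    static float position_update_interval(float distance_to_nearest_player)
    {
        const float min_interval = 0.05f;   /* seconds between updates when close */
        const float max_interval = 0.5f;    /* seconds between updates when far away */
        const float near = 1000.0f, far = 20000.0f; /* hypothetical world units */

        if (distance_to_nearest_player <= near)
            return min_interval;
        if (distance_to_nearest_player >= far)
            return max_interval;
        float t = (distance_to_nearest_player - near) / (far - near);
        return min_interval + t * (max_interval - min_interval);
    }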


    And then there is calling something on the server from the client. Luckily, there are not that many actions that can be done. However, I never liked this code. It looks like this:
    https://github.com/daid/EmptyEpsilon/blob/master/src/spaceObjects/playerSpaceship.cpp#L1812
    https://github.com/daid/EmptyEpsilon/blob/master/src/spaceObjects/playerSpaceship.cpp#L1482
    And large switch-case statements are never nice.



    The Unreal engine could do a few more things. It could send only the objects that are visible to clients (not that useful in space), and set an "Authority", allowing clients to own an object and actually have a client update the object to the server (used for players).
    It also allowed the server to call a function on all clients, as well as making a variable "unreliable", meaning updates could get lost (most likely sent with UDP) without problem, as the next update would be on its way.




    Oh, and you might think: yes, this works for things like ships and asteroids, but what about other game-state-related things? Well, I've put those in an object as well:
    https://github.com/daid/EmptyEpsilon/blob/master/src/gameGlobalInfo.cpp#L6
    Meaning I have only a very small set of packet types that are sent:
    https://github.com/daid/SeriousProton/blob/master/src/multiplayer_internal.h







    Now, SeriousProton2 expands on this with a few things. It allows callbacks on the client when a member gets updated, as well as custom "Replication" to be defined. It also allows the client to call a function on the server. And I still want to add the ability to call a function on all clients.
  • Just noticed this:
    https://linux.die.net/man/3/feenableexcept
    The divide-by-zero exception occurs when an operation on finite numbers produces infinity as exact answer.
    Confusing wording, or do we get a divide-by-zero exception on other things than a division by zero (or almost zero)?
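
    For reference, turning these exceptions into a trappable SIGFPE looks something like this (a minimal sketch assuming glibc, since feenableexcept() is a GNU extension; compile with something like gcc fetest.c -lm):

    #define _GNU_SOURCE
    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Trap divide-by-zero, invalid operation (NaN), and overflow. */
        feenableexcept(FE_DIVBYZERO | FE_INVALID | FE_OVERFLOW);
        double x = log(0.0); /* raises the divide-by-zero exception -> SIGFPE */
        printf("%f\n", x);   /* not reached once the trap is enabled */
        return 0;
    }
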
  • edited November 2019
    Confusing wording, or do we get a divide-by-zero exception on other things than a division by zero (or almost zero)?
    Yeah, it is confusing... There are a few other things that yield divide-by-zero exceptions; log(0) is one I can think of. I thought that tanf(x) with the right value of x should be able to do it, though when I tried:

    float f = tanf(0.5f * M_PI);
    double d = tan(0.5 * M_PI);
    it returned -22877332.000000 and 16331239353195370.000000 for f and d respectively (and did not produce a floating point exception), which is pretty weird. I guess it's due to lack of precision and landing close to PI / 2 but very slightly to one side or the other.

    log(0) got me this:

    Program received signal SIGFPE, Arithmetic exception.
    0x00007ffff7aefdb0 in __feraiseexcept_invalid_divbyzero (__excepts=4) at ../sysdeps/x86/fpu/bits/fenv.h:132
    132 ../sysdeps/x86/fpu/bits/fenv.h: No such file or directory.
    So it mentions "divbyzero".

    double e = exp(10000); is another one that returns infinity, but does not mention "divbyzero".

    Program received signal SIGFPE, Arithmetic exception.
    0x00007ffff7b32f14 in __ieee754_exp_avx (x=<optimized out>) at ../sysdeps/ieee754/dbl-64/e_exp.c:227
    227 ../sysdeps/ieee754/dbl-64/e_exp.c: No such file or directory.
    I guess it tries to distinguish between mere overflow and actual infinities in the exception, but the value returned in e, if printed via printf("%lf\n", e); comes out as "inf".

    Perhaps the compiler tried to compute that exp(some-constant) at compile time (I compiled with -O0, but it says "optimized out" anyway.)

    If you read "man 7 math_error", it says among other things:

    Range error
    A range error occurs when the magnitude of the function result means that it cannot be represented in the result type of the function. The return value of the function depends on whether the range error was an overflow or an underflow.

    A floating result overflows if the result is finite, but is too large to be represented in the result type. When an overflow occurs, the function returns the value HUGE_VAL, HUGE_VALF, or HUGE_VALL, depending on whether the function result type is double, float, or long double. errno is set to ERANGE, and an overflow floating-point exception (FE_OVERFLOW) is raised.
    Those HUGE_VAL* macros are infinities.
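
    A quick check confirms it (and also answers the earlier question in this thread about isnan() and infinity):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        printf("isinf(HUGE_VAL) = %d\n", isinf(HUGE_VAL)); /* nonzero: it is infinity */
        printf("isnan(HUGE_VAL) = %d\n", isnan(HUGE_VAL)); /* 0: infinity is not a NaN */
        return 0;
    }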

    So yeah, definitely confusing.
  • edited December 2019

    I haven't tried this myself, but I have just received word from Dan Hunsaker that SNIS runs fine at 30 FPS on a Raspberry Pi 4, including the weapons screen and even the hybrid main/nav screen (probably the most demanding). This is pretty cool news. There may be some thermal issues, so you may want/need a good heatsink and/or fan setup (I'm not up to speed on Raspberry Pi 4 stuff, so I can't really be more specific). Not sure what the resolution of the screen was, which can make a big difference, as graphics tend to be fill-rate limited.

    Edit: This was at 720p (1280×720). Also, this was running everything on the Pi (snis_server, snis_client, ssgl_server, snis_multiverse), not only snis_client. That being said, snis_client takes the lion's share of the resources, but still, interesting to see the Raspberry Pi 4 performing decently as a general purpose computer, including graphics and network heavy apps.

    Update: At 1080p (1920x1080), FPS drops into the 10 FPS range (not really acceptable). At 720p there's the occasional drop to 28 or 29 FPS (not ideal but acceptable), and this might not even happen if you only run snis_client, and not snis_server, ssgl_server, and snis_multiverse on the Pi.

  • https://www.youtube.com/watch?v=wdy4ICZqc68

    I finally got a Raspberry Pi 4B to try for myself. The summary is that the Raspberry Pi 4B with 4GB RAM is fine for the less intensive screens like NAVIGATION, ENGINEERING, DAMAGE CONTROL, SCIENCE, and COMMS, but it's still not really good enough for the MAIN VIEW, WEAPONS, or DEMON screens, despite a few optimistic reports I'd received from users. The video is long and boring, mostly taken up with installation, which went pretty smoothly for the most part. Then I try it out, first at the default resolution of 1080p and then at 720p, to see if that makes performance good enough. The "easy" screens seem to be fine at either 1080p or 720p (maybe NAV drops to 28 FPS at 1080p sometimes), and the "hard" screens are terrible at 1080p and slightly less terrible at 720p. Oh well.

    Here's a video showing today's progress building physical consoles for Space Nerds in Space. Still early stages, but I have basic control of engineering more or less working. There's a repo for the hardware design and for the portion of code that runs on the Arduino at https://github.com/smcameron/snis-consoles The code and plans there are still a bit hand-wavy and subject to change, but kind of working.

    Within the space-nerds-in-space repo, there's a new program, snis_arduino, which reads commands from the Arduino via USB serial and forwards these commands to snis_client to make it so. It is invoked like so:

    snis_arduino /dev/ttyACM0

    Assuming /dev/ttyACM0 is the serial port at which your Arduino appears. You may have to add your user to the "dialout" group ("sudo adduser username dialout") before snis_arduino can successfully open /dev/ttyACM0. You can monitor snis_arduino's stderr if it doesn't seem to work.

    Of course it's all still very much a work in progress, so I really don't expect anyone else to be trying this out at this point (or, realistically, ever, ha.)

  • edited February 2020

    Now you can set a per-client tweakable variable on the console, MAIN_VIEW_AZIMUTH_ANGLE, and with multiple main view clients, you can construct a widescreen view. About +75 or -75 degrees for the left and right views.


  • So, this sets the rotation of the camera for that window?

  • edited February 2020

    Yes. Rotation about an axis passing vertically through the ship. So 0 (the default) is straight ahead, +90 is directly to the right, -90 is directly to the left, 180 is directly to the rear, etc. (It's possible I have +/-, left/right reversed, but that's the gist of it.) It was trivial to do, as I already have the code to aim the main screen camera in the direction the ship is pointed, and general code for tweakable variables, and I already supported as many main screen instances as you want, so throwing one more client-side rotation into the camera aiming code was not a big deal at all.
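
    In essence it's just an extra yaw applied to the camera direction; as a toy illustration only (not the actual SNIS camera code, which works with quaternions, and the signs may be flipped depending on handedness):

    #include <math.h>

    struct vec3 { float x, y, z; };

    /* Rotate the ship's forward direction about the vertical (y) axis by the
     * per-client azimuth angle to get this client's camera direction. */
    static struct vec3 apply_azimuth(struct vec3 fwd, float azimuth_degrees)
    {
        float a = azimuth_degrees * (float) M_PI / 180.0f;
        struct vec3 out;

        out.x = fwd.x * cosf(a) + fwd.z * sinf(a);
        out.y = fwd.y;
        out.z = -fwd.x * sinf(a) + fwd.z * cosf(a);
        return out;
    }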

    Third person view and zooming become a bit strange: you see multiple instances of your ship from different angles, it zooms all the main screen instances in different directions, and they cease to give the illusion of being sort of adjacent windows. Using Ctrl-O or Comms buttons to mirror the various screens to the main screen ends up showing the mirrored screen on all instances of the main screen, which is a bit strange, so it's not perfect. Also the lens flare effect is obviously wrong (but it's easy to turn that off).

    I haven't really tried this on a proper 3 monitor or 3 projector setup, but I expect it should be a pretty cool effect, so I don't mind that in the abovementioned cases it falls apart a little bit.

    I gave the weapons screen the same treatment as the main screen. Now you can set weapons_azimuth_angle via the demon screen. +/- 75 degrees seems to work alright for setting up a left and right view. It's not perfect; there's some near-plane clipping of the gun turret happening in the left and right screens. You can't really set up a two screen setup with the guns in the middle, because the gauges only render if the azimuth angle is zero.


  • While cruising around, I spotted this lone ship crossing the face of a green gas giant and snapped a pic...

    It's kind of weird sightseeing in a place that doesn't exist that I made out of nothing.

  • There's a new star system, "Zaurno". Execute "make update-assets" to get it.

    Daid mentioned the addition of voice chat to Empty Epsilon... what a great idea! An idea worth borrowing! And so I've done the same for SNIS. Now you can press F12 to talk and your voice will be delivered to your crew-mates, or Ctrl-F12, and your voice will be delivered to all players within the snis_server instance. I will say this hasn't been very well tested; I don't really have a good way to test it properly, but it does work for what little testing I've done (two clients both running on the same machine). I'm using libopus to do the audio compression (same as Empty Epsilon does), and it seems to do a pretty good job; it doesn't take a huge amount of bandwidth or anything. Took about a day to code up, and 6 days to debug, lol.

  • Additional thoughts about the voice chat system...

    It occurred to me that my current code won't do anything good if multiple clients transmit audio concurrently. What I expect it will do is interleave audio packets from different clients in the order they arrive at the server, and then shoot them out to all the other clients (clients don't get their own audio packets back). This means that it will sound like garbage, because each audio packet will be around 0.04 seconds' worth of audio, and waterfall-shuffling two concurrent streams of such audio into a single sequential stream 0.04 seconds at a time... well, it won't sound good.

    I can see two solutions to this.

    The first and easiest solution is to implement a "talking stick" protocol in which before transmitting audio, a client asks the server for the talking stick, and only transmits audio once the server has given it the talking stick, and when finished, it gives the talking stick back to the server. Thus only one client can talk at a time. (of course there would need to be some timeout/renewal system to deal with a client that crashes while holding the talking stick, or with a client that talks for an inordinately long time.)

    The second solution I can think of would be to attach a sender ID to each audio packet and then sequence the audio packets received by any client into parallel streams by sender ID that get mixed together. (Currently I have only one such stream for all VOIP traffic that gets mixed with the other audio streams playing (sound effects stuff) but I think there's no reason I couldn't have multiple such streams.)

    The first solution is attractive, because I expect it might be easy to implement, and it mimics the behavior of e.g. aircraft radio, where there can be only one transmitter on a frequency at a time, which kind of fits the spacecraft theme.

    The second solution is attractive because it might make conversations go easier if everyone is able to talk whenever they want, even if that means people can talk over each other (and also for the technical challenge of it.)

    What do you guys think?

    I'm not sure what EE does in this situation. I didn't see any code to deal with it... nothing like a sender ID attached to the audio data, or a token system to exclude multiple concurrent transmitters, but I might easily have missed it, or perhaps it's doing a 3rd thing which I haven't thought of.

  • I thought about this some more, and it occurred to me that even if I do allow multiple clients to transmit at the same time, I still will need a "talking stick", or rather, "talking sticks". This is because in any case, there will be a fixed, finite small number of playback audio channels that I can mix, and if each client has N such channels for VOIP, then I will need N talking sticks. The clients can ask the server for a talking stick, be assigned one of the N sticks, and then this will determine which of the N playback channels their data goes into.
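
    Roughly what I have in mind is something like this (just a sketch, not committed code; all the names and the stick count here are made up):

    #define NTALKING_STICKS 4 /* hypothetical number of mixable VOIP channels */

    static int stick_owner[NTALKING_STICKS] = { -1, -1, -1, -1 }; /* client id holding each stick, -1 = free */

    /* A client asks for a stick; the slot number it gets back doubles as the
     * playback channel its audio will be mixed into on the other clients. */
    static int request_talking_stick(int client_id)
    {
        for (int i = 0; i < NTALKING_STICKS; i++) {
            if (stick_owner[i] < 0) {
                stick_owner[i] = client_id;
                return i;
            }
        }
        return -1; /* all sticks in use; client must wait */
    }

    static void release_talking_stick(int client_id)
    {
        for (int i = 0; i < NTALKING_STICKS; i++)
            if (stick_owner[i] == client_id)
                stick_owner[i] = -1;
    }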

    EE uses multiple streams. See here; it includes a clientID(): https://github.com/daid/SeriousProton/blob/master/src/networkRecorder.cpp#L118

    (I actually added the client IDs to make audio work behind proxies, but that's an implementation detail.)

    I also added "start" and "end" packets, as you do not want to keep an audio stream open for everyone, and you want to flush the buffers out to the audio device when the end of a stream is reached. You also want a single Opus decoder per stream, as the decoder is not stateless, as far as I understood from the documentation.
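
    In C terms, the "one decoder per stream" part amounts to something like this (illustrative only; the struct and function names are invented, and the real EmptyEpsilon/SeriousProton code is C++ and linked below):

    #include <opus/opus.h> /* header path may vary; see pkg-config --cflags opus */

    #define VOIP_SAMPLE_RATE 48000
    #define VOIP_CHANNELS 1
    #define MAX_FRAME 5760 /* 120 ms at 48 kHz, the largest Opus frame */

    struct voice_stream {
        int client_id;
        OpusDecoder *decoder; /* decoder state must persist across packets */
    };

    static int voice_stream_init(struct voice_stream *vs, int client_id)
    {
        int err;

        vs->client_id = client_id;
        vs->decoder = opus_decoder_create(VOIP_SAMPLE_RATE, VOIP_CHANNELS, &err);
        return err == OPUS_OK ? 0 : -1;
    }

    /* Returns the number of decoded samples per channel, or a negative error. */
    static int voice_stream_decode(struct voice_stream *vs,
                                   const unsigned char *packet, int len,
                                   opus_int16 pcm[MAX_FRAME])
    {
        return opus_decode(vs->decoder, packet, len, pcm, MAX_FRAME, 0);
    }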

    Playback is done by 2 classes here: https://github.com/daid/SeriousProton/blob/master/src/networkAudioStream.cpp

    The NetworkAudioStream is a single stream of audio data. And the NetworkAudioStreamManager handles distributing/creating and stopping audio streams.



    I chose this method because the "speak token" method wouldn't work very well with different voice targets (ship only vs. server wide).

    Ah, that is a good point about the Opus decoder per stream; I hadn't thought of that, and of course you're right. I think for the way my audio code works, I don't need a start and end, because I'm continuously mixing audio and I just append PCM data to the VOIP audio streams. If I don't append more, they will drain empty and cease to contribute to the mix.

    Ah, but I buffer 100ms of audio data before I start playback, to account for jitter in the TCP stream.

  • edited April 2020

    I see. I may end up having to do that as well (haven't tried this other than locally).

    Meantime, I committed code[1] to enable up to 4 clients to transmit audio at once, decoding and mixing these streams with the other audio on each of the clients.


    [1] https://github.com/smcameron/space-nerds-in-space/commit/47d30105ae19151b1ec7e80474a15ad7eb63581b

  • It occurs to me that a trivial way to buffer 100ms before initiating playback could work as follows: Upon seeing the first new audio data arrive from a particular client in "awhile", immediately inject ~100ms of artificial silent buffer in front of that first buffer ("awhile" is probably a few seconds). That will start the mixer off 100ms behind the accumulation of actual data without having to have any special queue or code to buffer it up until 100ms worth has arrived before letting the mixer at it.
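
    Something like this, maybe (a sketch only, not committed code; struct voip_stream, stream_append(), and the constants are all made up):

    #include <stdint.h>

    #define SAMPLE_RATE 48000
    #define SILENCE_MS 100
    #define SILENCE_SAMPLES (SAMPLE_RATE * SILENCE_MS / 1000)
    #define INACTIVE_NSECS (3ull * 1000000000ull) /* "awhile" = a few seconds */

    struct voip_stream {
        uint64_t last_packet_time; /* nanoseconds, time of last received audio */
        /* ... plus a ring buffer of PCM samples that the mixer drains ... */
    };

    /* Provided elsewhere by the (hypothetical) mixer: queue PCM for playback. */
    void stream_append(struct voip_stream *s, const int16_t *pcm, int nsamples);

    static void voip_append_pcm(struct voip_stream *s, const int16_t *pcm,
                                int nsamples, uint64_t now_nsecs)
    {
        static const int16_t silence[SILENCE_SAMPLES]; /* zero-initialized */

        if (now_nsecs - s->last_packet_time > INACTIVE_NSECS)
            stream_append(s, silence, SILENCE_SAMPLES); /* start ~100 ms behind */
        stream_append(s, pcm, nsamples);
        s->last_packet_time = now_nsecs;
    }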

  • edited April 2020

    So I tried out my voice chat over the internet, and uncovered a little problem... it makes a bit of horrible noise. Up to now I had only tried it locally, and it seems to work fine that way for me. So, still some work for me to do there.

    OTOH, I got some video of what it's like to run with the clients local (in Virginia) and the server remote (in New York). About 25ms latency, about 21.5k/sec per client (43k/sec total) bandwidth usage if I "set npc_ship_count = 100", "set asteroid_count = 0", and "regenerate" on the demon screen to cut out a lot of stuff. And it seemed more or less... totally playable? Color me surprised.

  • That's cool! Which port(s) would one need to forward if they host an internet game?

  • That's cool! Which port(s) would one need to forward if they host an internet game?

    Short answer is, you get to choose whatever port you want.

    I didn't run the lobby server and the rest, only snis_server. Instructions are here:

    (Ok, that's rather annoying how this board auto-formats links.)

    I will probably work on this document some more to show how to make the lobby system work as well so that auto-wrangling (starting/stopping snis server instances automatically) and warp gates will work too.

  • edited April 2020

    Yeah, the new version of the board software is weird like that. Don't necessarily like it either.

  • edited April 2020

    Ok, I got the lobby system working in the cloud. I added a way to restrict the set of ports snis_server uses to a particular range, so you don't have to open up a zillion ports on the firewall. I don't think the 'autowrangling' system of snis_multiverse that automatically starts and stops snis_server instances is working perfectly, so probably don't mess with that. It might only be the OOM killer kicking in; I didn't troubleshoot it much.

    Anyway, here's a new doc about running SNIS in the cloud:

    "https://github.com/smcameron/space-nerds-in-space/blob/master/doc/running-in-the-cloud.txt"

    (putting the link in quotes seems to stop the preview rendering).

    On the links, you can get around the big ass preview. I don't remember how, and I'm on mobile, so I cannot test. But it was possible.


    Tried to replicate the "horrible noise" problem... and I couldn't. I think it may have been my audio interface. I remembered that I have heard it making that noise before in contexts unrelated to SNIS (e.g. while playing back YouTube videos) quite some time ago, and I had forgotten that my usual cure for it was to unplug and replug my USB audio device -- a Focusrite Scarlett 2i2. Just now I tried again from here to New York and back and... it seems to work fine now.
