Pondering bridge simulation game creation.

So, why this post? First off, just to dump my mind a bit, as it's getting full in here.

As you all most likely know, I started EmptyEpsilon as a better, open, hackable/moddable clone of Artemis. And it does a pretty decent job in that area. It plays well, it's quite stable, it's pretty easy to pick up, and it runs well with or without a game master. And it does not suffer any out-of-sync problems (after the collision bug was squashed...)
It also has its share of negatives: the sound effects suck, the number of mission scripts is limited, and a few parts are a bit harder to grasp than others.
But all of that isn't a huge problem. The game works, people enjoy it.

Now, for the last year or so, I've been pondering making a new bridge-simulator game. I've made a whole bunch of silly proofs of concept/tech demos to try things out. Just to name a few:
* There is the way too complex, highly modular C++/Lua prototype that contains a lot of code for something that only simulates an unstable reactor core, but does contain its own scripting language next to Lua...
* Then there is the slightly less, but still complex, modular Python code with a web frontend, using true orbital physics. Which actually has a lot of the simulation problems that the unstable reactor core had. (Making the same mistake twice?)
* And not to forget the electrical network simulator, where components can be placed on a grid and electrical connections can be made. But I never got the UI to a point where it was a joy to work with, and it also quickly became complicated without really adding anything.
* And there is an ASCII turn-based single-player roguelike spaceship simulator, which doesn't work very well on a conceptual level.

I've been noticing a few things in what I'm building. First off, I'm trying out various technologies to build on top of. Secondly, I noticed I never touched combat. EE and Artemis are quite combat heavy, and that has its plus side, as it's clear what the goals are, and combat is generally tense action. But I think there can be more to bridge simulation than combat.
Finally, none of my prototypes had a fixed number of players, except for the single-player turn-based thing. EE is clearly optimized for 6 players (5 stations plus a captain).

I think with the right engine/mission/scripts/data, you could have a lot of interesting scenarios without ever touching combat. Just to name random ones:
* Tracing/tracking warp signatures to follow a ship
* Handling a ship with broken systems (detecting, fixing and recovering from a coolant leak in the right thruster engines)
* General maneuvering/flying in full 3D, while orbital mechanics are hard, Newtonian physics are quite easy to grasp.
* Trading goods
* Docking, with full docking communication/negotiation (please shut down your fusion reactor before getting within 5km of our station, to prevent radiation blasting our station)
* Landing on planets/moons, managing fuel, speed, angle of attack.
* Maneuvering deep space

Next, I noticed that I'm having trouble settling on technology; everything has its advantages and problems.
* C++ with SFML and OpenGL gives a lot of flexibility and speed, but is also slower to develop in. The 2nd version of SeriousProton addresses quite a few of the problems in the first version, but isn't mature yet. I do have a lot of experience in C++, though.
* Python with PyQt5 and QML, which we use a lot at the office. Quite fast to develop in, but harder to properly package and deploy. QML gives a nice markup language for UI development.
* HTML5. I'm not a huge fan of JavaScript, but the zero-deployment effort of web development does have its advantages. Performance is a clear disadvantage here. Realtime data is also a bit harder unless websockets are used, which are a bit of a complex piece of tech on their own. 3D rendering has options, but support isn't great across the board. (NodeJS is not an option for me)


  • As much as HTML5/JS sucks, it does solve one huge problem - cross platform deployment sync. That alone is what caused me to abandon Artemis. This nonsense of "oh wait a year after I push out an update for whoever I commissioned to do the Android version, if they can even be bothered" is just unbelievable. I looked at Horizons but $60 entry fee pushed me away from that without even trying (but hey his stuff just WORKS in any device). SNIS, only for extreme geeks on an OS that nobody uses. So I landed at your project, and here I am. The mishmash of languages (low level coding and higher level scripting) is confusing, but like you said, it's mature, it works, it's free, it works on my devices.
    And for all the negatives listed above, the authors are ALL better than me, have all made something entertaining, and clearly have their own fanbase. That's a win no matter what. So each and every one of you creators, thank you. Keep doing what you're doing.
  • Well, EE's Android version is built from the same source as the desktop one, and the release process builds and releases both. That's pretty much the only way to keep compatibility like this. And as EE is free, expectations are different, but offering a paid Android version and not keeping it up to date with the rest is shitty at best.
    SNIS being Linux-only is pretty much what held me back from even trying it. Linux is used by 2-3% of the end users of the company I work for, which is most likely already biased towards the higher end due to 3D printing being nerdy. So you are right that almost nobody uses it as a desktop OS.

    I'm actually quite happy with the C++ and Lua mix. Lua allows for a safe environment to quickly build mission scripts, without knowing all the details of the C++ code. The only thing I would do differently there is the HTTP API, which now hooks into the Lua scripting, which makes it difficult to use. A JSON REST API makes much more sense here, and works better with all kinds of other tools, including JavaScript frameworks.
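    As a sketch of that direction: a JSON REST endpoint could return ship state as a plain JSON document that any tool can consume. The endpoint shape and field names below are made up for illustration; this is not EE's actual API.

```python
import json

# Hypothetical shape of a JSON REST response for a ship's state.
# Field names are invented for illustration; EE's real HTTP API differs.
def ship_state_response(ship):
    """Serialize a ship's state the way a GET /api/ship endpoint might."""
    return json.dumps({
        "callsign": ship["callsign"],
        "hull": ship["hull"],
        "energy": ship["energy"],
        "systems": ship["systems"],
    })

ship = {
    "callsign": "EE-01",
    "hull": 85,
    "energy": 720,
    "systems": {"reactor": 1.0, "impulse": 0.8},
}
print(ship_state_response(ship))
```

    Any JavaScript frontend could then consume this with a plain fetch call, with no Lua glue in between.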

    Another point for pondering: users. Worldwide, there are most likely at most a few thousand people that have an interest in these kinds of games, and have the people and the space to play with 6. There are 5 or 6 different games currently in the mix, and most of those are quite the same, as Artemis was the first and people copied a lot from it. I tried to filter out a few of the things I didn't see working in EE, but at the core it's the same. Most of these games currently follow the same lines.
    So quite a lot of games are "fighting" over a small group of people. Which is why I don't think any of the paid options will survive except for Artemis (for the simple sake of being first and most well known).
    The free options are surviving for the simple fact that they are being driven by people that want to create something for themselves, and at the same time just give that away to the world.
    So any new game needs to offer something new and unique, or it won't attract players. My current idea, if I make something new, is to have much more complex systems. So it's less pick-up-and-play, and more pick-up, fail and learn.

    If you make something different, that's no guarantee for success. RogueSystem http://imagespaceinc.com/rogsys/ has some good ideas, but for an alpha game, it has a large focus on graphics, meaning there is little content right now. It also runs on only 1 of the machines I have, which is my girlfriend's laptop, due to the heavy graphics requirements. Part of its publisher funding was cut due to the limited success in sales.
    (Note that it is single player, not a crew based simulator. But it does simulate a pretty extensive spaceship, with lots and lots of buttons)

    And something completely different, on the development side: it's very, very hard to develop a game without playtesting. With EE I had the luck of quite a bit of playtesting at the office. Every session there provided valuable feedback: which controls worked, which features went unused, how the crewmates interacted, which stations were busy at what times. With the GM screen, I could keep a close eye on everything, and even spot bugs without the players noticing them.
    But when other people are playing, I'm limited to the feedback those people give here. Which is limited, and generally comes from the more experienced players, so it lacks a whole lot of detail.
    EE can do some recording of a session with the gamestate logger. But that does not capture the full state of the game, and lacks information on what the players were actually doing.
    It also does not auto-upload those logs. So I only have logs for my own sessions, which I already saw with my own eyes.
    The original idea of these logs was to do postmortems: sitting down with the crew and walking through what they did at critical moments. But we never actually did.
    As a co-op game, game balance isn't a huge problem. The players should be more powerful and smarter than the AI ships, as the players should feel smart and good at the game.
    This is where I think MMO-like bridge sims will fail. You don't have the player base and information to properly balance things out, unless you make it one giant co-op game.

    Anyhow, back to postmortems. If I make something new, I would like to collect data, shitloads of it. The status of every system, every button pressed, maybe even audio of the people playing it? Capturing players' audio is always tricky, as people generally do not like to be recorded. But it's also essential interaction in a crew-based simulator. (Not to forget data size: an uncompressed audio stream of an hour is about 300MB of data)
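    For reference, that audio figure checks out with simple arithmetic (assuming mono 16-bit samples at 44.1kHz, which is an assumption on my part):

```python
# One hour of uncompressed mono 16-bit 44.1kHz audio, in bytes.
sample_rate = 44100      # samples per second
bytes_per_sample = 2     # 16-bit mono
seconds = 3600           # one hour

size_mb = sample_rate * bytes_per_sample * seconds / 1e6
print(round(size_mb))    # ~318 MB, in the ballpark of the 300MB mentioned
```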

    Another thing in this area: if I make a more complex simulator, people will need to know why they failed/blew up/ran out of power/flew into a sun. So the game might need to be able to identify critical chains of events that lead to failure. (Shield system sabotage caused structural damage while flying into an asteroid, which in turn caused a warp-core fracture, which was not properly contained in time and thus caused a warp implosion, destroying a large part of the ship and venting all oxygen into space, as airtight bulkheads were not closed, killing everyone on the ship)
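    One minimal way to support that (a sketch, not a worked-out design): log every event with a reference to the event that caused it, and on failure walk the chain backwards. The event names below just mirror the example above.

```python
# Sketch of a causal event log: every event records which earlier event
# caused it, and the game walks the chain backwards from a failure.
events = {}

def log_event(name, caused_by=None):
    events[name] = caused_by

log_event("shield sabotage")
log_event("structural damage", caused_by="shield sabotage")
log_event("warp-core fracture", caused_by="structural damage")
log_event("warp implosion", caused_by="warp-core fracture")

def failure_chain(name):
    """Walk back from a failure event to its root cause."""
    chain = [name]
    while events.get(chain[-1]) is not None:
        chain.append(events[chain[-1]])
    return list(reversed(chain))

# prints: shield sabotage -> structural damage -> warp-core fracture -> warp implosion
print(" -> ".join(failure_chain("warp implosion")))
```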
  • You seem to think about 20 steps ahead, I'm lucky to get 2 or 3 :) Thanks for the insight into the analytics of making these things work.
  • Well, I did play chess. But thinking before doing is a big part of software engineering. However, you can also over-think things, and quite often you just need to start building prototypes to see what works and what doesn't. EE was built on top of other work, including the space-invaders clone running on our arcade cabinet at the office, and various prototypes. And it had the advantage of looking closely at Artemis.
    Our original plan was actually to build something like http://www.lhsbikeshed.com/, before we encountered Artemis and changed plans, mainly due to the large hardware/room requirements. Our office is in constant space shortage. The only reason I can keep storing our EE setup there is that most people think it's part of our inventory, as it's been there for so long already.

    Subsystems. Any game has subsystems, or components, or gameplay elements, as you could also call them. And subsystems should have relations to other subsystems. Also, subsystems generally should have equal complexity, else things start to feel out of place.
    Examples work better here. A game like... Doom (the first one) has a bunch of subsystems/gameplay elements: it has health, armor, weapons (with ammo), projectile enemies, close combat enemies. All of those are simple on their own, and are closely related and work well together.
    On the other end of the spectrum, we have Eve Online. Which also has health/armor, and weapons, and ammo. But health/armor is hull, armor and shields. Weapons have a whole bunch of statistics, and relative distance and movement are suddenly also important, as well as shield or armor strength. All subsystems there are way more complex, even though there are about the same number of subsystems and about the same relations. It does make for a whole different game.

    Back to bridge spaceship simulation. If we look at EE/Artemis, we have a whole bunch of clear subsystems:
    * Energy (+reactor)
    * Beam weapons
    * Missile weapons
    * Moving around (impulse/maneuvering)
    * Jump/warp drive
    * Long range scanning
    * Communications
    * Shields/Hull
    * System damage + repair crews
    * Probes
    And that's also why engineering is the most complex, and most important, job on the ship. Almost all these subsystems are connected to engineering in one way or another. That means engineering can influence almost everything, except scanning, communications and probes. Most other stations only have relations to a few things; relay is the crew member with the fewest direct connections, and generally has a very indirect influence.
    Note that while engineering has the most direct influence on how the ship operates, it also has no real information on what is happening, except for the main screen. So it can influence everything, but has no direct information on what to do. Talking here is vital.
    Now, there are more subsystems in EE, but not all are as visible.
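    The station/subsystem relations above can be sketched as a simple adjacency map; the grouping below is my own loose reading of EE/Artemis, not an exact spec:

```python
# Rough sketch of which station touches which subsystems in an
# EE/Artemis-style game. The exact grouping is my own approximation.
station_systems = {
    "engineering": ["energy", "beams", "missiles", "impulse", "warp",
                    "shields", "damage/repair"],
    "helms":       ["impulse", "warp"],
    "weapons":     ["beams", "missiles", "shields"],
    "science":     ["long range scanning", "probes"],
    "relay":       ["communications", "probes"],
}

# Engineering touches the most subsystems, which is why it can
# influence almost everything on the ship:
for station, systems in sorted(station_systems.items(),
                               key=lambda kv: -len(kv[1])):
    print(f"{station}: {len(systems)} subsystems")
```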

    Now, for any new game, you need to consider which main subsystems to implement. One that "popped up" in my prototypes is life support. Life support is important on any spaceship, as, well, it keeps you alive. However, after some careful thinking, it generally does not make for a very good game subsystem. Without going into the details of food/heat and oxygen: life support is generally a system where you put in power, and the result is that you stay alive.
    If you have played FTL I'll use that as an example, and if you haven't played it, stop reading and buy it on Steam. I have 126 hours in it, and finished it on easy with every ship/layout.
    Anyhow, it has a life support subsystem: the "O2" component on your ship. This refills any room of your ship with the air that the crew needs to live. If the system is disabled (damaged or no power), then the air slowly vents.
    And while venting a room of air has a bunch of uses, namely putting out fires and killing boarding crews, the O2 subsystem that refills it actually has one major use.
    And that is that it is actually a battery. It's not life support at all, it's a battery. It charges up, and when it's full, you can take out the power and use it somewhere else until the charge is too low. And as the extended content added an emergency battery as well, the game now suddenly has two backup batteries.
    With the side note that one of these batteries is essential for your ship, and any damage to it should be repaired as soon as possible.
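    The battery behavior can be sketched in a few lines; the rates here are arbitrary, not FTL's actual numbers:

```python
# Sketch of the "O2 system is really a battery" observation: the air
# level charges up while the system is powered, and you can safely
# unpower the system until the charge runs low. Rates are arbitrary.
class O2System:
    def __init__(self):
        self.air = 1.0          # fully "charged"
        self.powered = True

    def tick(self):
        if self.powered:
            self.air = min(1.0, self.air + 0.02)   # refill the rooms
        else:
            self.air = max(0.0, self.air - 0.01)   # slow venting

o2 = O2System()
o2.powered = False              # take the power elsewhere for a while...
for _ in range(30):
    o2.tick()
print(round(o2.air, 2))         # 0.7 -- still plenty of "charge" left
```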

    So, back to new simulator games. Life support is essential for a spaceship, but does not make for very good game mechanics. As it's just something that needs "upkeep", it does not tie into any other systems very well, and just becomes something that can break and needs fixing, or else you die. Unlike most other things, which, if they break, mean you need to avoid certain situations, or else you die. For example, your generator can be destroyed, but you still have time before you run out of energy. So you should get out of the way for repairs, or you die. Or if your shields are down, you need to avoid being hit, or you die.
    Life support down, you die. No "if", just no life support results in death.

    I want to think up a whole list of subsystems that spaceship simulation games can have. But I'll leave that for another day.

    Note, I spend about two hours a day on trains, which is where I find the time to write these posts. Some of the things in here actually come out of text files I've written for myself earlier. But I figured: why keep those things private? I can also share those thoughts. And I lost a few of those files... which is always a shame. (I had a whole file on "life support", with all kinds of details on what the human body can handle and what happens if you go beyond those limits, but I lost the file)
  • Huh. I recently added "life support" to Space Nerds In Space -- it's pretty much as you describe, a battery that kills you if it runs out of juice (O2) but which, when working, uses a small amount of power and charges up on O2. For my game, it's just another system added into Engineering that can break and require repair, and applies to the whole ship (no separate regions, bulkheads, etc.) I put quite a long timer on it, and it gives you plenty of warning though. So it really doesn't add much to the game either way, as it mostly just sits there quietly working, and when it doesn't work, you still have a long time before you actually need to repair it (unless you have been purposely depriving it of power). I really put it in for the ambience -- people expect a space ship to have a life support system (and it should also work most of the time) -- not that anyone plays my game, ha. :)
  • A tiny bit more on life support, scraping together some old files I had. So here are my notes on life support.

    There are in general a few things that life support needs to handle:
    * Cabin pressure
    * O2 levels
    * CO2 levels
    Less than 0.356 bar, you die due to all kinds of horrible effects on your body.
    Less than 0.5 bar, the effects of low pressure are the same as high altitude sickness.
    More than 3 bar gives you tunnel vision.
    More than 4 bar kills you.
    Less than 17% O2, death due to lack of O2.
    Less than 18.5% O2, breathing will be harder.
    More than 50% O2, while not deadly, makes breathing harder. Also, it makes everything flammable as fuck. So explosion risk!
    More than 5% CO2, the player starts to get sleepy.
    More than 7% CO2, the player will fall asleep, and sleeping players cannot do anything to survive.
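    Turned into code, those limits make a simple status check that a life support simulation could run every tick. The thresholds are straight from the notes above; the status names are my own:

```python
# Classify the cabin atmosphere from pressure (bar) and gas fractions,
# using the survival thresholds from the notes above.
def cabin_status(pressure_bar, o2_fraction, co2_fraction):
    if pressure_bar < 0.356 or pressure_bar > 4.0:
        return "dead"
    if o2_fraction < 0.17:
        return "dead"
    if co2_fraction > 0.07:
        return "asleep"          # sleeping players can't save themselves
    if co2_fraction > 0.05:
        return "sleepy"
    if (pressure_bar < 0.5 or pressure_bar > 3.0
            or o2_fraction < 0.185 or o2_fraction > 0.50):
        return "impaired"
    return "ok"

print(cabin_status(1.0, 0.2095, 0.0004))   # normal Earth air: ok
```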

    For long term survival, there is also food and water. For example, an average human produces 0.2L water vapor per hour. Which you will need to extract from the air. And humans need to eat and drink. But you can go a few days without water and weeks without food. (Although the fun quickly drops if you go without those)
    And there is the thing of going mad without light, day&night cycles and inter-human contact. But that's also quite long-term.

    Back to the short term life support:

    The standard air mixture is 20.95% O2, and the rest is almost all N2, which doesn't do anything for us in terms of survival, except for keeping up the pressure without making everything flammable as fuck. (There is about 1% of other stuff in air, but that's not really contributing anything simulation-wise)

    Humans consume about 0.6m^3 of O2 per day, which is turned mostly into CO2 and some H2O. Starting off in a 1m cube with 100% O2, you can live for a few hours before the CO2 levels kill you. If you start off with a normal 20% O2 mixture, O2 levels will drop below living conditions before CO2 becomes a problem. But only slightly: start with a 25% O2 mixture, and CO2 becomes your problem.
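    Those numbers can be checked with a quick calculation, assuming exhaled CO2 replaces the consumed O2 volume one-for-one (a simplification):

```python
# Sealed 1 m^3 cube: a human uses about 0.6 m^3 of O2 per day, turning
# it into roughly an equal volume of CO2.
o2_per_hour = 0.6 / 24          # ~0.025 m^3 per hour

def hours_survivable(o2_start):
    """Hours until O2 drops below 17% or CO2 exceeds 7% in a 1 m^3 cube."""
    hours_o2 = (o2_start - 0.17) / o2_per_hour
    hours_co2 = 0.07 / o2_per_hour
    return min(hours_o2, hours_co2)

print(round(hours_survivable(0.2095), 1))  # ~1.6h: O2 runs out first
print(round(hours_survivable(0.25), 1))    # ~2.8h: now CO2 is the problem
```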

    CO2 scrubbers are an important part of life support, and lack of them will kill you quite quickly.
    As for O2 itself, even if you don't recover the O2 from the CO2/H2O, you most likely have a shitload of O2 in the form of one of your fuel components. So "generating" that is actually quite easy.
  • Subsystems! There are so many of them. Let's write down a quick list of all I can think of right now, and steal from my previous notes. Some of these come from other games. Some of these are just ideas. Some of these might not even work in a game. I could write almost every single one of them out in as much detail as the life support.

    Games to peek at (in random order):
    FTL, SNIS, Artemis, EE, RogueSystem, Elite

    Power generation
    Fuel cell (H2 + O2 -> H2O+power, exists already today)
    Reactor (nuclear, fusion, "future tech")
    Battery: Can store energy.
    Solar panel: Generates energy when aimed at a sun.

    Engine: uses H2 and O2 to accelerate the ship.
    Reaction control system (RCS)/Stabilizers
    Docking computer
    Navigation computer
    Warp drive, variation A: move at high speed
    Warp drive, variation B: move through "sub space", a different reality where distances are shorter
    Jump drive (instantly move to new location, "spore drive ;-) ")
    Atmospheric flight

    Life support
    CO2 scrubber
    Space heater
    Pressure regulator

    Radio transmitters/receivers: omnidirectional, directional, long range, short range. Can also be seen as a "sensor".
    Communication channels (Open subchannel 5)
    Radar: omnidirectional, directional, long range, short range. Can also be seen as a sensor.
    Sensors to sense gravity, light (intensity/frequency), life signs, magnetism, radio waves, electrical waves, ...

    Beam weapons (instant hit, also called "hit trace weapons")
    Missile weapons
    Homing system
    Combat drones
    Repair system/drones

    Heat exchanger/radiator
    Electrolyzer (H2O+power -> H2 + O2)
    Computer: runs software to control various aspects of the ship.
    Database: Stores data for the computer to access.
    Light: uses electricity to light up the ship.
    Material sensor: senses the amount of a certain material; comes in gas and liquid forms.
    Doors (bulkheads, airtight. Normal, only prevents people from crossing)
    Cargo hold, carry cargo, spare parts, legal trade good, illegal trade goods.
    Cloaking device. Artemis has this; it was confusing as hell, and felt like a bug when ships popped up out of nothing without a warning/animation.
    Boarding crew
    Tractor beam
    Landing gear
  • Don't forget lidar! More resolution than radar but somewhat limited range and line of sight. Use it a lot in heavy gear in space.
  • Actually, there already is one situation in EE that can directly lead to death without being exposed to enemies or hazards: if your engineering officer decides that it would be a good idea to overpower your reactor for a while. With max power and without cooling, that needs just about half a minute (even less if the reactor is already damaged).
    So it actually is a bit weird that, of all stations, the self destruct button is in engineering, as the engineer already has a pretty effective way to blow up the ship, one that does not even require the permission of other officers.

    I would really recommend checking out SNIS. Compilation is pretty straightforward, at least on Debian-based systems.
    I'd say it is a pretty good source for inspiration.
    It is definitely not the most accessible bridge sim out there (I guess that award goes to EE, at least of those I have tested so far), and I never played a real mission with it, but it has a very distinct and cool feeling, and some fresh ideas. Also, full 3D space without being arcadey.

    A small addition about FTL:
    FTL is not only available on Steam, but also DRM-free through the Humble Store (which also includes a Steam key, so even for those who are into Steam it can be a good decision).

    Regarding technology: you said the SFML/C++ combination's advantage is speed. Maybe that's just the implementation in EE, but I had several performance issues when running on older hardware. Even with shaders disabled, the fps drops to values between 5 and 7 on some machines, even on engineering. On the title screen, however, everything is fine. So I guess there is quite some overhead there. For a new bridge sim, maybe that kind of overhead can be reduced.

    One other note: it would be cool if you could consider making Unicode the standard encoding for that new sim.
    I know you are not a big fan of translations, but with Unicode it will at least be easier to do unofficial "fan translations".
  • Uhm, yes, engineering blowing up the ship with the reactor is a bit of an issue. I did think about removing that. (It takes 17 seconds if you do it right, from the start of a game, which made for one of the shortest game sessions we ever did. Note that the self-destruct has a much bigger explosion, and thus does a lot more damage to ships surrounding you. We actually won a game with this once)

    I'm planning to check out SNIS. One of my main reasons is the fact that it is 3D. EE is 2D for a simple reason: it makes it easier to navigate and communicate about positions and headings. So I'm wondering how SNIS solves this in 3D.
    If I make something else, I feel a huge urge to make it truly 3D. But navigation is a lot more complex in 3D, so I'm still pondering how to handle that UI-wise.
    Kerbal Space Program does it (or doesn't, as navigation is a big thing there). As does Homeworld. But then there are also games that just "ignore" the fact that they are 3D; for example, the X-Wing and TIE Fighter games are fully 3D, but the "map" view is 2D. Note that this is what most games do: give some degree of 3D freedom, but mostly work on a 2D plane.

    The FPS drop in EE is all down to whether the hardware has 3D acceleration or not. The UI rendering isn't the most efficient implementation, but it's fast enough if software rendering isn't used. Note that this is one of the reasons I'm thinking about Qt for whatever I build next: it only redraws the elements that change, instead of constantly redrawing everything as I do now. The speed I actually referred to is the fact that you can run a 500-AI-ship simulation on a fast computer.
    (Possibly, it's the text rendering that is slow. I'm not sure if re-creating the sf::Text objects every draw is actually the right way for speed)
  • > I'm wondering how SNIS solves this with 3D.

    I can write about that a bit.

    There are waypoints on the science screen, where the player can either type in x,y,z coordinates, or drop a waypoint at the current location of the ship. These waypoints also show up on the Nav screen, and the ship's computer knows about them too. The science station can select most objects in the game including waypoints, and will give a bearing and mark (azimuth and elevation relative to a fixed canonical orientation, really) and distance, and closing rate (the latter might be negative) on the "details" screen. When science selects something, on the navigation screen, there's a 3D arrow that points in the direction of whatever science has selected (there's also another arrow pointing whichever way the gun turret is pointed). So to navigate towards something, science can scan it and select it, and then navigation can follow the arrow that's pointing to it. You can also use the computer and type in "set a course for blah", (In theory, you can also use the speech recognition, but it is incapable of recognizing the procedurally generated planet names or ship names, but it knows "nearest ship" and "nearest planet", etc.) There are also a couple different attitude indicators on the Nav screen, one of which is mostly cosmetic, that shows a little rendering of the ship, and shows yaw, pitch, and roll rates (copied from Apollo era attitude indicator https://www.hq.nasa.gov/alsj/alsj-FDAI.html), and then the recently added main attitude indicator with 3 large rings with degree markings. One ring (slightly different color than the other two) shows the heading (0 to 360 degrees) and the other two rings show the "mark," or elevation (-90 to +90 degrees). 
So it is possible for the science officer to call out "Bearing 320 Mark 70", and the navigation officer can, by looking at the rings, manage to bring the ship onto that course, more or less (it is more easily accomplished by the science officer selecting something, then Navigation following the green arrow that will appear on the nav screen, but it is also possible strictly by using the numbers on the rings.)
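    For the curious, a bearing/mark pair like that is just the direction vector to the target converted to spherical angles. A sketch, with axis conventions that are my own assumption (SNIS may define them differently):

```python
import math

# Convert a 3D direction vector (relative to a fixed reference
# orientation) into a "bearing X mark Y" call-out, in degrees.
def bearing_mark(dx, dy, dz):
    """dx/dz span the horizontal plane, dy points up."""
    bearing = math.degrees(math.atan2(dx, dz)) % 360
    mark = math.degrees(math.asin(dy / math.sqrt(dx*dx + dy*dy + dz*dz)))
    return round(bearing), round(mark)

print(bearing_mark(0.0, 0.0, 1.0))   # straight ahead: (0, 0)
print(bearing_mark(1.0, 1.0, 0.0))   # (90, 45)
```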

    The interaction between nav and weapons is interesting, because the weapons is a 3D turret (more Millennium Falcon style than Star Trek style, admittedly, but hey, the MF is super cool.) But if Nav is thrashing around, this makes aiming the guns very hard (they do not try to compensate for ship motion), so Nav has to be aware of what Weapons is trying to shoot, and since the guns are only on the top of the ship, not the bottom, Nav has to maneuver so weapons is actually capable of pointing in the right direction. This interaction works pretty well I think in that it forces some cooperation quite naturally. I have vaguely thought about having some deflector shield turret mechanism so someone can "angle the deflector shields" as in the MF, but haven't done any work in that direction.

    The science short range scanner screen is a little strange. It shows a 2D view in which the distances are proportional to the 3D distances. This can get some pretty strange looking things happening in the 2D view. When you select a ship on the 2D screen, it kind of "pops out" (and other ships near it also pop out) in a 3D representation of the spatial relationships. It is a bit strange. We kind of wanted something that didn't look too straightforward, and required some poking around and gave a feeling of exploring the space. Not sure how successful that part is. There is also a bug in that if you have something selected, and then it warps a great distance away, well, there's some strange behavior as the view sort of follows this warp travel in a lerpy way, and then if you deselect, the view rapidly lerps back to the locale around your ship. Eh, it's pretty harmless as bugs go.

    The 3D "demon screen" (game master) was interesting. I don't think it works great, but it works kind of ok, though it is kind of strange the way that it works. iirc, clicking the right mouse button on something in this view flies the gamemaster (distinct from the ship) towards the object (with maybe some incidental spinning that is not really intentional, just some quaternion slerping side effect, but which can be a little disorienting). If you click on no particular object but to the right, left, above or below the center of the screen, the view is rotated in that direction, so you can steer around by clicking. The mouse wheel moves you straight ahead or backwards. There is an "exaggerate scale" button so that all the objects (except planets) are rendered much larger than they really are, so you can see them from a great distance; if you move up close to something, you can turn it off and see things at normal scale. So, I did struggle with how the UI for that screen could work in 3D, and came up with something, but it's not particularly great.

  • It's been a while, but let's see if I can pick this up again.

    First off, on the 3D navigation for the SNIS "demon screen": you might want to take a look at Homeworld. It does a few things really cleverly for 3D navigation and placing orders in 3D space. I don't remember the details of the camera navigation, but 3D camera navigation in quite a few games works by selecting an object, after which you can orbit the camera around that object.

    As always, I'm just thinking out loud here.

    Now, let's take a look at what EmptyEpsilon is. EmptyEpsilon has a whole bunch of aspects, but almost everything boils down to 2 things:
    * User interface
    * World simulation
    There are other aspects, like multiplayer handling, but that's the engine handling network synchronization of the world simulation, and in relative code size it's not that big, actually. If you look at the code size, most of it is actually the user interface. The world simulation isn't even that complex. If you would take out the multi-station UI and strip the game down to the world simulation, you'd have a pretty basic simulation. Yes, there is collision detection, some 3D rendering, some AI behavior. But nothing there is particularly big or complex.

    Which brings me to the following. A lot of things in EmptyEpsilon are actually hacks. The engine that is build under it (SeriousProton) is one of my own build, and first made for a 2D game, a space-invaders clone. With the network multiplayer code added on top of it for EmptyEpsilon.
    Just to list a bunch of hacks:
    * The engine has no GUI system, so that's hacked into EE. Click events are actually handled during rendering, and rendering performance of the GUI is quite bad.
    * As the engine has no concept of different worlds/scenes, the whole repair crew thing is really hacked in there. Which really shows if you have network problems and crew on clients jump all over the place.
    * The engine has no concept of 3D rendering. 3D rendering is hacked into the GUI system, which was already hacked on top of the engine :-). To make matters worse, the rendering of ship/station names in 3D is hacked on top of the 3D view.
    * The multiplayer code cannot send over references to objects, so references always need a more complex conversion from/to IDs.
    * The script engine handles global functions and object functions completely differently. For no good reason.
    * Getting commands back from clients to the server is messy at best and requires quite a bit of glue-code. Luckily in EE the amount of client->server data is quite limited.
    * Sounds in space are a hacky thing. The 3D rendering sets the position in space, and then positional sounds are fired. If the 3D view isn't rendered, you can hear sounds from a location where your ship isn't.

    But back to my point: like 70% of EmptyEpsilon is actually GUI, and I think that's the case for most bridge simulation games. A huge part is the GUI; the whole world simulation part is quite basic. The main problem in EE is that the 3D features are pretty much hacked on top of everything else.

    Now, development of EE has been quite slow for quite a while, simply because I'm doing other things and have other priorities. But people who have been looking at my GitHub commit history will notice that I have been committing irregularly to the SeriousProton2 repository. The reason is simple: I like building these kinds of things. I know there are off-the-shelf engines like Unity3D and a whole bunch more that are better than anything I can ever build. But I don't care. I like doing things from scratch.

    SeriousProton2 is my spiritual successor to the SeriousProton engine, my attempt to solve all kinds of problems with the first engine without breaking EmptyEpsilon. Now, don't expect me to port over EmptyEpsilon, for the simple reason that SeriousProton2 has been built up from scratch and is really different. It's unstable, and gets incompatible changes almost every commit. So don't try to use it :-)

    One of the key differences is the GUI system. SP2 has a built-in GUI system, with a declarative format that allows you to define UIs in a text file instead of in code, separating layout and behavior. This GUI system just went to its 3rd version; combine that with the 2 versions in EE and that gives a total of 5 GUI systems I've made so far. Guess what, GUI systems are hard :-) But this last version finally uses the same rendering system as the rest of the code, instead of being rendered separately. This should improve GUI rendering performance, but I haven't measured it.
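    I don't know SP2's actual layout format, but the general idea of a declarative GUI file could be sketched like this: a text description is parsed into widget definitions, and behavior is bound to them by name in code afterwards. The format and all names below are made up for illustration:

```cpp
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical declarative layout: one widget per line,
// "type key=value key=value ...". Parsing yields plain data; behavior
// (callbacks) would be attached later by looking widgets up by id.
struct WidgetDef {
    std::string type;
    std::map<std::string, std::string> attrs;
};

std::vector<WidgetDef> parseLayout(const std::string& text) {
    std::vector<WidgetDef> result;
    std::istringstream lines(text);
    std::string line;
    while (std::getline(lines, line)) {
        std::istringstream words(line);
        WidgetDef def;
        if (!(words >> def.type)) continue;  // skip blank lines
        std::string token;
        while (words >> token) {
            auto eq = token.find('=');
            if (eq != std::string::npos)
                def.attrs[token.substr(0, eq)] = token.substr(eq + 1);
        }
        result.push_back(def);
    }
    return result;
}
```

The payoff of this split is that artists or modders can rearrange the layout file without touching (or recompiling) any game code.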

    Another key difference is that it has 3D rendering support, and a wholly different, more efficient rendering system overall. The code also accounts for both 2D and 3D collision, though only 2D collision has been implemented so far.

    Another key difference is scenes and nodes, forming a scene graph. In SP2 you can have multiple "scenes", worlds so to speak, each with its own set of objects living in it, which are nodes. Each node has a parent node, which allows you to attach objects to other objects. Think turrets on spaceships. Or turrets on turrets on spaceships ;-) But also, repair crew can live in their own scene, unrelated to the scene that contains the spaceships. Actually, the whole latest version of the GUI system just lives in a scene where all the widgets are nodes in a graph.
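    The parent-chain idea can be sketched in a few lines: a node's world position is its local position composed with its parent's world position, so a turret placed on a ship (or on another turret) automatically follows it. A simplified 2D sketch that ignores rotation, not SP2's actual node class:

```cpp
// Minimal scene-graph node: position is stored relative to the parent,
// and the world position is composed by walking up the parent chain.
struct Node {
    Node* parent = nullptr;
    double localX = 0, localY = 0;  // position relative to the parent

    double worldX() const { return localX + (parent ? parent->worldX() : 0.0); }
    double worldY() const { return localY + (parent ? parent->worldY() : 0.0); }
};
```

Moving the ship moves every attached node for free, since children never store absolute coordinates.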

    The script engine code has been redone from scratch; it has a lot more flexibility now, and better performance. There is no longer a difference between global and member functions. And all the internal hacks I did in the first SeriousProton to bind objects to scripts are gone (that really was a bug-prone mess).

    There is a bit of a start of multiplayer code there, but it's incomplete.

    Oh, and textures are loaded on a separate thread. Ever noticed that switching to the 3D view of EE for the first time causes a huge slowdown? That's because it's loading a shitload of textures at that point. SP2 will load these textures in the background and use a placeholder till the texture is loaded.
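    A background texture loader along those lines might look like this: the first request kicks off a load on a worker thread and returns a placeholder handle; once the worker is done, the real handle is returned. This is a toy sketch (the actual decoding is simulated), not SP2's real code:

```cpp
#include <chrono>
#include <future>
#include <map>
#include <string>

// Hypothetical async texture cache. get() never blocks: it returns a
// placeholder id until the background load has finished.
struct TextureCache {
    std::map<std::string, std::future<int>> pending;
    std::map<std::string, int> loaded;
    int nextId = 1;  // 0 is reserved for the placeholder

    int get(const std::string& name) {
        auto it = loaded.find(name);
        if (it != loaded.end()) return it->second;

        auto p = pending.find(name);
        if (p == pending.end()) {
            // Kick off loading on a worker thread (decoding simulated here).
            pending[name] = std::async(std::launch::async,
                                       [id = nextId++] { return id; });
            return 0;  // placeholder
        }
        if (p->second.wait_for(std::chrono::seconds(0)) ==
            std::future_status::ready) {
            loaded[name] = p->second.get();
            pending.erase(p);
            return loaded[name];
        }
        return 0;  // still loading
    }
};
```

The render loop just calls `get()` every frame; the placeholder silently swaps for the real texture once it's ready, so the first 3D frame no longer stalls.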

    Will I ever make an EE2 with this? I don't know. EE currently stands on its own quite well, and even with all the hacks, it works very well. Making the same game again doesn't feel right, even if the end result would be technologically better.
  • Oh, and there is more that I forgot. SP2 supports key bindings, "camera nodes" that define where you view the 2D/3D world from, basic multi-touch support, GUI theming, and a start of an animation system (though only 2D flipbook sprite animations are implemented).
  • The multiplayer code cannot send over references to objects, so references always need a more complex conversion from/to IDs.
    Out of curiosity, how else would you do it? Maybe send array indices and avoid anything that would cause the clients'/server's array index usage to get out of sync (or detect and correct such de-syncs somehow)? I can't think of a way to do it that would not require some new assumption to be true that isn't currently true (in my case, anyway). This one doesn't seem to me like quite a "hack" but more like a pretty reasonable design.
  • All objects having a unique ID, and using that for references in the multiplayer synchronization, is indeed not bad. The client<->server code uses these IDs internally for everything.
    The problem is that you need to account for the fact that it's sent over the network in other places in the code. For example, instead of having a P target, the spaceships have an int target_id. While the rest of the network replication code has been designed to be as easy to use and as invisible as possible, this falls apart with references to objects. It's a thorn in my eye :wink:
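    The ID-based referencing can at least be kept in one place: a registry maps IDs to objects, the replicated field stays a plain integer, and an accessor resolves it on demand, returning null if the target doesn't exist (or hasn't been synced yet). All names below are illustrative, not EE's actual code:

```cpp
#include <cstdint>
#include <map>

// Hypothetical replicated-object registry. Cross-object references are
// stored as ids (which serialize trivially) and resolved to pointers
// only when actually used.
struct GameObject;
std::map<uint32_t, GameObject*> g_registry;

struct GameObject {
    uint32_t id;
    uint32_t target_id = 0;  // replicated as a plain integer

    explicit GameObject(uint32_t id_) : id(id_) { g_registry[id] = this; }
    ~GameObject() { g_registry.erase(id); }

    // Resolve the id back to a pointer; null if the target does not
    // (or does not yet) exist on this side of the connection.
    GameObject* target() const {
        auto it = g_registry.find(target_id);
        return it == g_registry.end() ? nullptr : it->second;
    }
};
```

The null case is the important one: on a client, the target object may simply not have been replicated yet, so every use of a resolved reference has to tolerate it.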
  • I just had a wild idea. Bit of background info: my SeriousProton2 engine contains a much better rendering engine than the one used in EmptyEpsilon (which was pretty much put together with duct tape).
    SP2's rendering uses semi-modern OpenGL, and abstracts most of it away. Objects no longer render themselves, but only contain information on how they should be rendered.

    Which brings me to the wild idea. The rendering information is currently used to nicely render everything with OpenGL on screen. But what if I send this information through the network to a browser with WebGL? WebGL and my OpenGL renderer are not that different, and websockets can be used to stream data without polling overhead. So that would allow my game application to just send out a stream of frame updates at 60FPS.

    That would open up the ease of browser-based playing, without making my life a pain due to JavaScript/HTML.
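    The idea could be sketched as follows: each frame, the server encodes its draw information into a compact message and pushes it to the browser, which replays it with WebGL. The toy text protocol below is purely illustrative; a real implementation would presumably use a binary format over the websocket:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical render-command stream: the server describes *what* to
// draw, the browser-side WebGL client decides *how*. One message per frame.
struct DrawCommand {
    int meshId;       // mesh/texture already cached on the client
    float x, y, z;    // translation; a full version would send a 4x4 matrix
};

std::string encodeFrame(const std::vector<DrawCommand>& cmds) {
    std::ostringstream out;
    out << "frame " << cmds.size() << "\n";
    for (const auto& c : cmds)
        out << "draw " << c.meshId << " " << c.x << " " << c.y << " " << c.z << "\n";
    return out.str();
}
```

The key property is that static assets (meshes, textures) are sent once and cached client-side, so the per-frame stream stays small, just ids and transforms.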
  • That is truly brilliant! I've seen similar concepts work fabulously and allow for quick rendering on devices that would normally have a difficult time handling it.
  • That's basically what the original OpenGL specification was for: the actual 3D part was done on another system, then the screen-space GL fragments were sent to the X Window System for display. That's why early GL 1.x doesn't have direct memory access to textures or frame buffers; they may be on another machine.
  • edited April 2018
    Funny, as OpenGL 1.x would require sending a shitload of data. While with VBOs, the amount of data you need to transfer is actually quite limited, as long as the VBOs are static, which they generally are.
    Accessing textures or framebuffers directly is bad for performance anyhow, as you cause a pipeline stall. I think it's one of the reasons the radar rendering in EE is slow...
  • That's what display lists were supposed to be for, the list commands get cached, then you only send the list calls over the wire to do the draw.
  • Would that mean that you would need webgl for every station, or just for those that need 3d rendering? (though that of course would include at least weapons and helm, if you are going to full 3d movement.)

    Other question: do you plan (though "plan" might not be the right word at this early point in time) to offer an HTTP API like EE in addition?
  • The WebGL would be purely optional, but it would then be used to render everything: UI, 3D view, all. Everything that is normally on a display. It could replace a monitor, so to speak.

    As for an HTTP API, always :-)

    Right now, I'm still just building random game engine parts. The multiplayer code isn't finished (or tested) at all. So that is also still on the engine todo list before I can even build something truly new.

    I just randomly decided to add IPv6 support. That's how my development currently goes, just random new things.
  • The WebGL would be purely optional, but it would then be used to render everything: UI, 3D view, all. Everything that is normally on a display. It could replace a monitor, so to speak.
    Good to know, because WebGL-only might lower the chance of some hardware running the game. But it would be cool to have it as an option, which would actually raise the number of supported devices.

    HTTP API, yay!

    Yeah, I am aware that this is still in a brainstorming/proto-prototyping phase, but some ideas are already quite interesting nevertheless, so it's nice that you share these bits and pieces.
  • Greetings all -

    Please pardon my 'Johnny come lately' to the conversation.

    daid - you had broached the subject much earlier in the conversation about feedback to a crew after a sim run. This is one of our principal interests. The feedback, or the After Action Review (AAR), is an important step because it's where you're able to evaluate the results of decisions/actions and discuss how a better course of action would have determined a better result. Of course, no one's really interested in doing that when you're having beers and having fun blowing each other up, but our group focuses on using the bridge sim as a leadership lab/teambuilding tool, hence the AAR is an essential element to the experience. When we run our larger events, we actually assign people to serve as a Mentor/Evaluator (ME) for each ship crew. This person evaluates the crew's performance during the sim and then provides feedback after the sim is over. The ME has a scoring sheet we set up for the specific missions, and the entire event is structured as a contest between groups. (We give cake as a reward. /grin/) We use a paper based evaluation solution because the bridge sims don't come with an AAR tool, but I've been thinking that this might be a viable use for Thorium in a side-by-side setup. That is, have the ME's use a tailored Thorium deployment as the AAR tool for EE.

    Ref the part in the discussion of using Lua, I'm very happy with scripting in Lua and think the script management engine works pretty well (from my 'outside' perspective). I was not happy at all with the XML scripting solution for Artemis, and find Lua gives me broad creative license via the commonly understood OOP.

    Ref a possible EE2 with a full 3D space environment, please oh please oh please oh please oh please oh please oh please oh please oh please oh please oh please....... I'll even volunteer to be your test lead.... ;]
  • I was not happy at all with the XML scripting solution for Artemis
    I seriously do not know what the developer of Artemis was smoking when he thought that was a good idea. But it must have been a wild trip.

    I like Lua because it's quite easy to integrate into your code, doesn't bring a lot of dependencies, and is reasonably safe to include. But javascript and python are other options that people use for scripting engines. Both mature and quite good to integrate.

    About the AAR, EE has the detailed logs, and the log viewer: https://github.com/daid/EmptyEpsilon/tree/master/logs
    It allows for a basic view of what happened after the fact.

    As for EE2, don't expect me to start on it anytime soon. I simply don't have the time for multi-person play sessions anymore, so I'm currently only working on small 1-2 player things. My new engine (SP2) is a lot better suited to support a lot of things, but its network code is completely untested at the moment. So even starting an EE2 on it would require debugging that code first.
  • If we're going to volunteer testers, I'd be happy to join a testing crew on a Saturday (remotely).
  • I haven't posted here in a while. Still, things pop up in my mind from time to time.

    So, this morning, my mind was suddenly on EE, and the fact that it's mostly just a copy of Artemis, with improvements/additions. After all, it started out as a straight-up Artemis clone, copying things 1:1, stations, ship stats (no reason to lie about that), and then improving on that.

    However, this morning I found myself thinking about the Engineering station. And, that it might be interesting to just remove it. What, what? Yes, remove the engineering role and move the power and repair management to each specific station.
    This means there is no longer an engineer who can call out "But captain, I'm giving it all she's got!", but on the other hand, the players will have to collaborate on power management, as you can only add power to your systems if someone else removes it from theirs.

    Repair could be done with a button to request and release repair teams. With a limited number of repair teams available, you have the same priority issue again that you have to communicate about.
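    The "you can only add power if someone else releases it" rule is essentially a fixed shared pool. A minimal sketch of the mechanic (names illustrative, not a design commitment):

```cpp
// Shared power pool: total ship power is fixed, so one station can only
// boost its system after another station has released power back to the
// pool -- forcing the crew to talk to each other.
struct PowerPool {
    double total;          // fixed ship-wide budget
    double allocated = 0;  // currently claimed by stations

    bool request(double amount) {
        if (allocated + amount > total) return false;  // pool exhausted
        allocated += amount;
        return true;
    }

    void release(double amount) {
        allocated -= amount;
        if (allocated < 0) allocated = 0;
    }
};
```

The same structure would work for repair teams: a pool of N teams, with request/release buttons per station.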

    I came to this idea also because, in our games, engineering was generally silent and worked "solo", reacting to things happening without interaction, just managing things on its own. Which is fine, but goes a bit against the hidden goal of the game: having people interact with each other.
  • edited January 2019
    The way you've got EE designed, Daid, you could have both. Create some new consoles like Helm+ where power and coolant sliders are added for warp, jump, impulse and maneuver; Weapons+ where power and coolant sliders are added for missiles, beams and shields; Put the reactor on Science+ or Relay+. Have a repair request for each system. Engineering and/or Damage control could still be used in conjunction with these new consoles.

    Communication over the sharing of coolant would have to occur this way. Power could be added at will, but if the stations don't share coolant, the ship systems burn to the damage point. If auto-coolant is enabled, then the power allocation must be discussed.

    Instead of a slider with the fine-grained control that an Engineer gets, you could have "boost," "normalize," and "reduce" buttons for power and/or coolant. In fact, that could be done in a Lua script (which I just may have to do).
  • Oh, I know I could have both. But I'd rather not right now, as it would make things more complex and harder for the players to use properly.

    Another mind dump, on alternative setups. How does the following sound:
    Bridge simulator with 3 stations:
    * Flight
    * Sensor
    * Planning

    Flight is pretty much the same as helm, controlling the flight of the ship.
    Sensors deals with all kinds of external sensors on the ship: short range and long range sensing, wide beam (short range), narrow beam (short & long range). And it gets semi-raw results from these.
    Planning has a map view and can add notes and details to this map, as well as a database to look things up and communication systems.

    The idea being that you have very limited "world information", and that you need to build this up yourself. Example:
    On the main screen, you see a planet. It's a rocky planet, but that's all you know right now. With sensors you can then scan the planet with a wide beam using different sensors once you are close enough, giving you the size and composition of the surface: low in iron and high in silicon, no water.

    Planning can now mark this planet on its map and add these details. Also, by checking its database, it sees that there are 5 planets that could match this one. So you still do not know exactly where you are.

    Flight turns the ship and notices a reasonably close star/sun. Sensors aims its narrow-beam spectrometer to get some more data. Now planning can check its database to see which planets orbit a sun with these specifications.

    Now that there is an idea of where you are, planning can find coordinates of a nearby station. Aiming your communication dish (sensor) and hailing the station (planning) is naturally wise before heading there.

    Once at the station, planning should request docking permission and get assigned a docking bay, at which point flight and sensors need to work closely together to properly align for docking.
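    The identification gameplay described above is basically iterative filtering: each sensor reading narrows the set of database records consistent with everything measured so far. A toy sketch with made-up fields:

```cpp
#include <string>
#include <vector>

// Hypothetical planet database lookup. Empty criteria mean "not measured
// yet"; each new measurement shrinks the candidate list.
struct PlanetRecord {
    std::string name;
    std::string surface;    // e.g. "silicon"
    std::string starClass;  // e.g. "G2"
};

std::vector<PlanetRecord> match(const std::vector<PlanetRecord>& db,
                                const std::string& surface,
                                const std::string& starClass) {
    std::vector<PlanetRecord> out;
    for (const auto& p : db) {
        if (!surface.empty() && p.surface != surface) continue;
        if (!starClass.empty() && p.starClass != starClass) continue;
        out.push_back(p);
    }
    return out;
}
```

After the wide-beam scan only the surface is known (several candidates remain); once the spectrometer adds the star class, the list collapses to a single match.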
  • Sounds good. Focus is on research and coordination, less on ship combat. It would perhaps appeal to a different kind of bridge simulator enthusiast. I could see this as an adjunct ship or as adjunct bridge stations for existing ships in the context of the existing EE/Artemis game paradigm.