Pondering bridge simulation game creation.


Comments

  • I would argue that KSP always follows the anticipate-stress-relax cycle. When and what the moment of stress is will shift as the player's skill (and gameplay unlocks) increases. It's a very tight loop when the player is learning to launch rockets (good to get 'em hooked), and then it becomes more and more delayed gratification as it goes to the Mun, orbiting other planets, landing and taking off on them, etc. Some games scale the monster difficulty to match the player (Skyrim) to try and keep that stress loop constant, but that sacrifices the sense of progression of returning to a starting zone and just stomping everyone.

    I think what makes the primary loop harder for bridge sims is that not only is there a primary loop for the ship itself, but every station has its own primary loop as well. And the loops needed for LARPing might be mutually exclusive with the loops needed to make it inherently fun.

    Another book that really impacted me, both as an armchair game-mechanics ideas man and as a teacher, is A Theory of Fun by Raph Koster. His theory is that the fun of a game comes from the player optimizing the gameplay loops and the game becomes unfun when the player masters it. That's where Factorio's fun comes from.
  • I think you are right about KSP. Maybe that's also why I stopped playing it...
    His theory is that the fun of a game comes from the player optimizing the gameplay loops and the game becomes unfun when the player masters it.
    That works for Factorio (especially once you take mods into account, which sometimes turn everything upside down again)

    But that does not explain why I enjoy quite a few Zelda games, and the Metal Gear games. There is no loop I optimize there; it's a story I play. I do notice that I only play them once. So maybe it's the story.

    But yes, it seems to hold quite well as a theory for quite a few of my favorite games, including (in random order) FTL (everything done on easy/medium), Super Meat Boy (light world finished), Into the Breach (everything done on easy), Commandos (all 4 of them, done, finished, twice)
    Hollow Knight was fun. I suck at it, but I managed to beat quite a big chunk of the game by gathering upgrades off the normal path. Took a strange optimization path there.


    So, how can we apply this to bridge simulation? Or should I stop trying to see it as a game, and just see it as a simulator? Maybe that's why EE1 worked: it's a simulator, not really a game. The game aspects are quite thin and underdeveloped, but the simulator aspect makes up for that?



    I like comparing it to FTL, for one because they are both space games, and both have a whole bunch of systems you need to manage. But they have one key difference. Just one. FTL has the ever-advancing fleet at your back, and the big bad end boss at the front. Meaning the whole game is a balance between advancing to stay ahead of the fleet, while collecting enough resources along the way to defeat the boss at the end.

    Take that away, and you have a much more free-roaming game. But also a much less fun one. You get Convoy. There are a few other things wrong with Convoy as well, but not having the ever-advancing wall of doom takes a lot of pressure out of the game, and with that it takes a lot out of the game. It tries to be a resource-management game, but without time pressure it becomes a chore to collect enough resources before you can continue on. While FTL is balanced and structured so that you have a good shot at the end boss.


    And I think that's what makes EE1 a bit random. Depending on the scenario, you either have a story that you play, or an ever-advancing wall of doom. Those actually require different game setups, and EE1 is a bit of a mix of both.
  • Zelda tends to have a bit of a puzzle element, applying the treasure in the dungeons to that first- and second-level gameplay loop to get to and defeat the boss. Loop optimizations, puzzles, and stories can't be unlearned, but loop optimization tends to have a bit more longevity to it.

    Game vs simulation is something that probably gets debated a lot in gamedev 101 classes. As much as I like academic debate, I don't think the label is too important as long as someone is having fun. In table-top RPGs there is everything from short one-page systems that are just basic scaffolding for group storytelling, to multivolume tomes that simulate things in intricate detail.

    Games just have the advantage of fudging or changing things in the name of fun. I do think EE1 is very much a game, but social interaction is such a big part of it. I personally have only played the general "kill all the bad guys in the zone" style with Artemis and EE1, so I am most definitely not qualified to talk about the higher-end gameplay; my perspective is definitely that of a new player. But without the wall of doom and final boss like in FTL, this gameplay does feel a bit anti-climactic. We blow up the last ship of a pod and it's "oh we won, hurray?" It could also be that I, the host, should read the manual so we can play something a bit more structured.


  • croxis said:

    Game vs simulation is something that probably gets debated a lot in gamedev 101 classes. As much as I like academic debate I don't think the label is too important as long as someone is having fun.

    While I don't think the label is important, I do like to make the distinction. Maybe it's less about game vs simulation, and more about closed vs open world.

    For example, Minecraft is pretty much open world/simulation, even though it sort of has an "end goal". Most people ignore that end goal and that is fine.
    But Assassin's Creed, while being promoted as open-world, is not a simulation: ignoring the "end goal" only provides a very limited amount of gameplay. So I wouldn't call that a simulation.
    We blow up the last ship of a pod and it's "oh we won, hurray?" It could also be that I, the host, should read the manual so we can play something a bit more structured.
    Small tip there. I usually sit behind the GameMaster screen on the server and shuffle things around, and then make that climax happen. Moving and creating things outside of the range of the science station takes a bit of practice, and the nebulae help to hide your actions.
    Also, tweaking the enemy ship stats helps with this; the players won't notice if you boost a ship's engines for a while so it reaches them faster. They also don't notice if you re-stock the enemies' missiles. And it makes things a lot more dynamic.
  • Right! We just have the problem that no one wants to be GM because we all want to be in the game. I couldn't even split my group of 10 into different rooms to do PvP.

    I think there could be a case for interesting emergent player stories through some smart scripting and AI. I'm name-dropping Pandemic Legacy Season 1. For those who don't know, Pandemic is a cooperative board game where players are medics trying to rid the world of four different viruses. There is no GM running the game; it is all done through rules and cards. In Pandemic Legacy four players play the game 12-24 times, depending on how well they do. The first game plays like regular Pandemic, then you open a box which adds a couple of rules and stickers that impact the next game.

    During the campaign, rules are changed depending on circumstances, stickers are unlocked adding boons and busts, and stickers are added that permanently alter the game board -- most of which is only revealed when certain game conditions are met. It's been years since we played and we still talk about that campaign. The amazing thing is that there is no human GM who did this. It was all through the "Legacy" mechanics. And the absolutely amazing thing is that even though we unlocked about 90% of the same mechanics and stickers as everyone else who played, our story is very unique to us.

    I think three things made this really memorable for us. One -- it started as Pandemic, a game we had played a lot of, so it started out familiar to us. Two -- it was a campaign played on the same game board over and over, so we got very familiar with our play space and how our choices impacted it. Three -- setting. It's Earth and our cities and countries. We are familiar with it. I think that is part of the reason why The Expanse is doing so well as a book and TV series. The audience is familiar with the setting, both with the technology and the locations.

    Now I am going to go keep myself awake thinking what that would look like in a bridge crew game!
  • edited October 2019

    Greetings all.  My apologies for my tardiness in joining the discussion.  Simply too much going on.  Trying to focus. 

    @daid:  Ref your comment on 21 Oct on ship CPU and memory resource utilization; I am definitely all for trade-offs in ship design, but personally I think these should be of the larger-system variety, i.e., shields vs engines vs sensors vs weapons vs energy plant, etc.  CPU and memory resources seem too far "down in the weeds" for my taste for ship design trade-offs to be interesting/entertaining/good game play.  "Memory is cheap" as they say, and should not be a limfac (limiting factor). 

    @smcameron:  Ref your comment on 22 Oct, yes, I agree that white/background noise is important to the experience.  So much so that we have a dedicated set of speakers and sub-woofer attached to the machine running the front screen for each of our ships.  On that machine, we simply run a ship background/engine noise sound file in loop for the duration of our events.  It creates an important base sound that fills the room and adds a great deal to the overall effect.  We created the file with a spaceship sound generator we found online and paid nothing for it, yet it fits the need very well.  It has an understated yet strong engine "thrumming" feel.  We simply play the file with Win media player in the background while EE is running the front screen.  Super easy yet very effective.  Total night and day difference when using it.  Here's the sound file for anyone who'd like to check it out/use it:  https://drive.google.com/open?id=1zAvbRq4Jmke_Nu7S-TA8xvC7ZBuOgEAS    Please note that it's much more effective when played with a subwoofer. 

    @daid: ref your comment on 23 Oct on an "anticipate, stress, relax" cycle in the game.  The game scenarios that I host are very much focused on team building/problem solving.  It's the "1) here's the problem, 2) here are the constraints, and 3) you and your team must solve the problem within the constraints; oh, and yes, you will be evaluated and you will receive feedback on your performance" (or lack thereof, ha).  So the idea of "anticipate, stress, relax" occurs somewhat naturally in our "solving the problem" approach.  I try to set up our events with clear goals in mind (i.e., Capture the Flag, Rescue Us!, or Last Stand).  The teams are motivated because they know they are being evaluated and will receive feedback from peers/superiors on their performance.  They have some "skin in the game" because there will be "bragging rights" to be had with peers they interact with on a routine basis, as well as some nominal prize items. 

    @daid & @croxis: ref the conversation thread from 24-30 Oct: 

    • on "A Theory of Fun" by Raph Koster -- great book and very foundational to me!!  Quite pleasantly surprised/pleased to see it referenced.  I took a class at our company based on this book a few years ago and it's been guiding me since. 
    • By way of summarizing the discussion thread from 24-30 Oct, I see EE as a game engine that provides us the basic mechanics to host a team event.  The dilemma/storytelling is up to the script builder to create, and it's great because essentially EE presents a "blank canvas" universe with some baseline physics and allows us to create from there.  Awesome.  EXACTLY what I want.  Please do not sweat for one moment that EE doesn't have a "single ship" storyline to follow.  I'm happy that it doesn't, and that it leaves us to create the stories/universes in which to play.  Although we've only barely gotten started, I have several other storyline/script ideas that are just waiting to be produced… So, please don't consider the lack of a single ship story thread to be a detriment.  I certainly don't. 

    Going forward, I am hosting our company's first official "site-vs-site" EE event on 14 Nov.  We have multiple office sites around the country, and this is the first time we'll be hosting a "Site A" vs "Site B" competition using CtF in EE.  (Many thanks to @Xansta for his work to help create the CtF scenario/script! You rock buddy!).  Folks are stoked and it's gonna be a blast!  We're aiming for 3 full ships per site.  I hope to stream the event, and will post a link if I'm able to do so.  After this, there's only going to be bigger and better events…!  I hope to keep everyone posted on the events.  Look for my event write-ups here:  http://bridgesim.net/discussion/comment/4199/#Comment_4199

  • Awesome. EXACTLY what I want. Please do not sweat for one moment that EE doesn't have a "single ship" storyline to follow.
    Don't worry about that. I'm not planning to change EE1 in any way in that regard. It was always intended as a blank-canvas replacement for Artemis, with good GM tools. Everything else was just tweaked on top of that. For a long time the "basic" scenario was the only one present, and "waves" was the 2nd scenario. Simple, no story.

    But remember, I started this topic to dump my mind and think about a possible EE2, not to change the plans for EE1. I could make EE2 a rehash of EE1, with some better code here and there. But I would most likely never finish it, as that does not spark my interest, and it would just be more of the same.

    I do like all the comments I get on my random mindfarts here. That's why I post them; I'd rather post 20 ideas here and have 20 shot down for good reasons than only post my "this has to be a good idea" ones.
    "Memory is cheap" as they say
    In our world, yes. But we did go to space with less memory than your average Arduino has. So in an alternative universe, memory does not have to be cheap.
    Remember the mantra "Never let realism get in the way of good gameplay"
  • Making a second post about a different thing, maybe expanding on the "memory is cheap" thing.

    PLANETS! God damn it. I keep coming back to wanting proper planets that you can explore. So I'm going to dump my mind here, and type as I think.
    My earlier experiments were based on height maps, which is kind of what Kerbal Space Program also does. It uses procedurally generated height maps (well, the details are a bit more complex) and modifies a UV sphere with them. Which results in artifacts at the poles (like the Mohole, and texture stretching).
    Not a huge problem for KSP, as you are generally near the equator due to orbital mechanics. But not that nice.
    Also, you cannot have things like cave systems.


    So, what if we use a different method? What if we use voxels? No, we don't have to be blocky like Minecraft if we use voxels; we can smooth out the results: https://thatfrenchgamedev.com/games/nil-order-voxel-smoothing-engine/

    But how many voxels do we need? What if we have a small planet? The smallest moon in KSP has a 13km radius. So... let's say we have a 10km diameter planet. Yes, that's small by KSP standards. And what if we have meter-sized voxels (which is what Minecraft uses)?
    Then we get... 10,000 x 10,000 x 10,000 = 1,000,000,000,000 voxels. Even at one bit per voxel, that is still 116GB of memory. But hey, memory is cheap, right? :wink:

    Now, we don't actually need to store each voxel. Large areas will be 100% full or 100% empty. So we can group voxels and store those larger blocks as fully empty or fully filled, and you can do this quite well with octrees

    But we could also do this in a simpler way and use chunks, maybe? A 32x32x32 chunk needs 4096 bytes of memory (assuming one bit per voxel). For a 10km planet we need 312.5 chunks per axis, so let's take 320. 320^3 = 32,768,000 chunks. If empty chunks are just a null pointer, that's only ~260MB of memory for the pointers. Doable; not nice, but doable. All kinds of optimizations are possible. But at least this fits in my memory for now.
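
    A minimal sketch of what that chunked storage could look like (my own illustration, names and constants made up): empty chunks stay as null pointers, and a chunk only allocates its 4096-byte bitset when something solid is written into it.

        #include <bitset>
        #include <cstddef>
        #include <memory>
        #include <vector>

        constexpr int CHUNK = 32;             // voxels per chunk axis
        constexpr int CHUNKS_PER_AXIS = 320;  // 10km planet at 1m voxels, rounded up

        struct Chunk {
            std::bitset<CHUNK * CHUNK * CHUNK> solid;  // 32768 bits = 4096 bytes
        };

        class VoxelPlanet {
        public:
            VoxelPlanet() : chunks(std::size_t(CHUNKS_PER_AXIS) * CHUNKS_PER_AXIS * CHUNKS_PER_AXIS) {}

            bool isSolid(int x, int y, int z) const {
                const Chunk* c = chunks[chunkIndex(x, y, z)].get();
                return c && c->solid[voxelIndex(x, y, z)];   // a missing chunk counts as fully empty
            }

            void setSolid(int x, int y, int z, bool value) {
                auto& slot = chunks[chunkIndex(x, y, z)];
                if (!slot) {
                    if (!value) return;                      // don't allocate a chunk just to store "empty"
                    slot = std::make_unique<Chunk>();
                }
                slot->solid[voxelIndex(x, y, z)] = value;
            }

        private:
            static std::size_t chunkIndex(int x, int y, int z) {
                return (std::size_t(z / CHUNK) * CHUNKS_PER_AXIS + y / CHUNK) * CHUNKS_PER_AXIS + x / CHUNK;
            }
            static std::size_t voxelIndex(int x, int y, int z) {
                return (std::size_t(z % CHUNK) * CHUNK + y % CHUNK) * CHUNK + x % CHUNK;
            }
            // 32,768,000 pointers at 8 bytes each: the ~260MB mentioned above, before any chunk is allocated.
            std::vector<std::unique_ptr<Chunk>> chunks;
        };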

    How many chunks will be filled? I just need a ballpark figure. So let's assume a spherical planet. If we look at it from any direction, it has a cross-section of PI*r^2, so PI*5000^2 = 78,539,816 square voxels. A chunk face is 32x32 = 1024 square voxels. So about 76,700 chunks if we look at it from the front. Double that for looking at the back as well. We still miss a bunch of chunks by not looking at the sides, but we are only after a ballpark figure.

    So, 153,400 chunks at 4KB of memory each: ~600MB of memory for the chunks. So for 1GB of memory we can have a small planet at a reasonable resolution. And that's just for the voxels... we haven't even generated any render data yet.

    (No mention of how to actually generate and fill this voxel data, which will most likely take a few CPU cycles.)



    Ideas, step 2. We don't actually need to store the detailed voxel data unless there are terrain modifications. If we use formulas to describe the planet for procedural generation, then we don't need to keep each voxel in memory.
    But we still have the issue of generating the render data. We cannot just generate a quad for each square meter. Even a perfect sphere has 314,159,265 square meters, meaning that many quads, and double that in triangles. Even ignoring the memory requirements, only high-end hardware could render that amount at 60FPS.

    So we need level of detail, LOD for short. It's common. Back to math. We have a few "worst case" scenarios. What if we view a planet at full screen, with the edges of the screen at the edges of the planet? Assume a monitor resolution of 2000 pixels (for easy math). Anything smaller than 10,000/2000 = 5 meters will be smaller than a pixel for sure. At the edge of the planet, that is.
    The middle will be closer. How much closer? Assuming a 90-degree field of view... things suddenly become complex, as we are then not actually seeing all the way to the sides of the planet; we only see ~90 degrees of it. Now we need a bunch of trigonometry (not uncommon in 2D/3D gaming math). I won't bother you with the details, but the closest point of the planet will then be at about 2000m from your viewpoint, instead of the 5000m of the furthest visible points. Meaning 1 pixel is actually about 1 meter there.
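
    A quick sanity check on those numbers (my own back-of-the-envelope trig, same 10km planet, 90-degree field of view that exactly frames it):

        #include <cmath>
        #include <cstdio>

        int main() {
            const double r = 5000.0;                // planet radius in meters
            const double halfFov = std::atan(1.0);  // 45 degrees in radians (half of the 90 degree FOV)

            // If the planet's silhouette exactly fills the view, its angular radius equals the half FOV.
            const double distToCenter = r / std::sin(halfFov);                          // ~7071 m to the center
            const double nearestSurface = distToCenter - r;                             // ~2071 m to the closest point
            const double distToLimb = std::sqrt(distToCenter * distToCenter - r * r);   // ~5000 m to the visible edge

            std::printf("center: %.0f m, nearest surface: %.0f m, visible edge: %.0f m\n",
                        distToCenter, nearestSurface, distToLimb);
            return 0;
        }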



    So, setting the LOD so that we have an optimal density of triangles means we get back to the "we have too many voxels" issue. So how big can we make the triangles before it becomes ugly? I don't know.
    And how do we avoid having to evaluate 1,000,000,000,000 voxels just to know what we need to render? I don't know yet. Octrees will help, I guess, and a lot of pre-processing...

    Note that most games that use voxels don't deal with this scale. No Man's Sky sort of does, but from a quick look at some planetary landings I think they cheat: a spherical planet until you enter the atmosphere, and then fog to cut off the view distance. Also height-map terrain, I think.
  • edited October 2019
    I am curious how you plan to integrate the idea of exploring the planet with the bridge-crew experience. I always dismissed the idea because, well, for one it's pretty tough to implement, and also it kind of doesn't fit in with the idea of a bridge crew: if you land on a planet, then you want to get out and walk around, and you have to break the immersion, because if you leave the bridge you're just outside your house in your own yard or street. Maybe VR... but the hardware requirements for this type of game are already too limiting, I don't want to make it even *more* niche.

    Although, I suppose you can always say, "the atmosphere is not suitable" and the bridge sprouts wheels and allows you to drive away from the rest of the ship in buggy form or something, or maybe have a separate console for a shuttle craft/buggy for away teams to use. Or maybe a single VR rig to allow a crewmember to go into a closet and "beam down" to a planet (one VR rig is a lot more feasible than one per crewmember, I suppose.) Edit: or there's the FPS solution that Pulsar: Lost Colony uses.

    I think a lot of games that have variable level of detail use a cube map for the sphere (to avoid distortion at the poles), normal maps for terrain (at large distances), and start subdividing regions of the mesh as you get closer, filling them in with proc-gen meshed terrain (or from pre-generated data, I suppose). Getting the seams between regions at different levels of detail to match seems to be something people have trouble with. Well, probably nothing you didn't know already, and I can't really offer any great insights since I've never implemented such a system. I used to know someone who had implemented such a system (in Java) but since the demise of Google Plus, I lost their contact info. :( Edit: Oh, but I found a blog post: https://www.shaneenishry.com/blog/2014/08/13/level-of-detail-experiments/ Pretty sure she got further along than that blog post indicates because I remember seeing some posts on Google Plus about it.
  • I am curious how you plan to integrate the idea of exploring the planet with the bridge-crew experience.
    There are a few possibilities:
    * Don't land. Simple, removes the need for all kinds of complex interactions between your ship and the planet. What if you could only land inside landing bays of planetary bases?
    * Send a probe, as the ship is too large/unsuitable for the atmosphere. The crew controls the probe ship, so there is no way the crew can get out, as there is no crew in the probe.
    * Give no reason to get out. You don't have to explain everything. Some things just don't enter people's minds as long as you do not plant them there (for example, flying up in EE1). Land, gather your resources, and take off again. It's not like anyone asked if they could walk around inside the space station in EE1 when docked.
    * No planets. No gravity. Just large asteroids. Without gravity, landing isn't really an option.

    Also, as soon as you "step out and walk", you are most likely no longer in the same kind of bridge-sim cooperation, as everyone becomes equal (unless you do the arbitrary "skill system" that Pulsar: Lost Colony has)



    But that is also why my mind is moving away from the height-map solution. Landing on a planet is one thing. But flying down to the surface, entering the surface cave system and navigating that?

    Few random ideas around planets:
    * Thick atmosphere, unable to see the ground until you are close. Sensor scans are important to prevent unexpected lithobraking.
    * Corrosive atmosphere layer; you can see through it, but don't stay in it, or your ship will be destroyed. Maybe it is only safe inside caves or canyons?
    * Hollow planet, doughnut planet, twin planets, gas "giant"



    And the cube-map subdivision is the first thing I tried (well, I used a tetrahedron, but the concept is the same). It works. The seams are a bit more difficult, but quite solvable: you need to keep track of where the level of detail changes and make the vertices align properly there on the lower-detail side. A bit easier if you use triangle subdivision instead of quad subdivision.
  • edited October 2019
    * Don't land. Simple, removes the need for all kinds of complex interactions between your ship and the planet.
    For this case, which is what I do in SNIS, I have found dynamically varying the level of detail to be unnecessary. A reasonably tessellated sphere with six 1024x1024 normal maps and textures for terrain works pretty well. It does get a little blocky when you get super close, but generally, you never do unless you're deliberately crashing into the planet. Eh, I guess the textures do look a little more pixelated than I would like sometimes, so while "unnecessary" I suppose improving it would be desirable.
  • Well, I would like to be able to get up close and personal with planets. I just need to figure out how to actually make highly dynamic planets without memory and CPU exploding. Fun challenge.
    On octrees: if I take my 10km planet, as a raw sphere, and build an octree deep enough to get 16-meter blocks, I need 2,455,301 octree nodes. It will be more if the planet actually has features and isn't just a sphere, but quite doable memory-wise I think. The difficulty will be creating the octree; the only way to know if a node needs to exist is to fully scan it down to the smallest size. So I will need to evaluate the 1,000,000,000,000 voxels at least once...
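
    For illustration, a rough sketch of that brute-force octree build, assuming some isSolid(x, y, z) sampling function exists: a node only gets children if a scan shows it is neither fully solid nor fully empty, which is exactly the "evaluate every voxel at least once" cost mentioned above. (Made-up names, and you'd want to test it on a much smaller radius first.)

        #include <array>
        #include <cstdint>
        #include <functional>
        #include <memory>

        struct OctreeNode {
            int32_t x, y, z;   // corner of the node, in voxel coordinates
            int32_t size;      // edge length in voxels (a power of two, down to 16 here)
            bool hasContent;   // for leaves: false if the node is completely empty
            std::array<std::unique_ptr<OctreeNode>, 8> children;
        };

        // Scan every voxel in the node; stop as soon as we have seen both a solid and an empty voxel.
        // Returns -1 for "mixed", 0 for "all empty", 1 for "all solid".
        static int classify(const std::function<bool(int, int, int)>& isSolid, int x, int y, int z, int size) {
            bool seenSolid = false, seenEmpty = false;
            for (int dz = 0; dz < size; dz++)
                for (int dy = 0; dy < size; dy++)
                    for (int dx = 0; dx < size; dx++) {
                        if (isSolid(x + dx, y + dy, z + dz)) seenSolid = true; else seenEmpty = true;
                        if (seenSolid && seenEmpty) return -1;
                    }
            return seenSolid ? 1 : 0;
        }

        std::unique_ptr<OctreeNode> build(const std::function<bool(int, int, int)>& isSolid,
                                          int x, int y, int z, int size, int minSize = 16) {
            auto node = std::make_unique<OctreeNode>();
            node->x = x; node->y = y; node->z = z; node->size = size;

            int state = classify(isSolid, x, y, z, size);
            if (state >= 0 || size <= minSize) {   // uniform, or already at the smallest allowed size
                node->hasContent = (state != 0);   // a mixed minimum-size node is where the fine detail lives
                return node;
            }
            int half = size / 2;                   // mixed and still large: subdivide into 8 children
            for (int i = 0; i < 8; i++)
                node->children[i] = build(isSolid, x + (i & 1) * half, y + ((i >> 1) & 1) * half,
                                          z + ((i >> 2) & 1) * half, half, minSize);
            return node;
        }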

  • I've fiddled with the spherized-cube method myself. I didn't get very far with it, but it was working out way better than just height-mapping a sphere. This was a few years ago, but I'm seeing a few newer methods out there with a quick Google/StackExchange search. Oh, to have free time again x.x

    Space Engine and Infinity Battlescape both use procedural methods for their planets. I don't know the details but both have copious devblogs on it.

    Space Engineers uses voxels for their planets.

    Gameplay-wise there is the trope of the senior bridge crew beaming down to planets. In reality the bridge crew would stay on the bridge and the other ship personnel would beam/shuttle down to the planet to do their work. The bridge crew would then be splitting their attention between their space duties and managing the landed crew. That opens a lot of possibilities in gameplay, story, and presentation.

    It would be a lot more content and systems to make. If you want some sort of suit or rover camera view, then character animation, assets, structures, flora, fauna, etc. have to be made and coded for. There is a mantra from someone famous that it is better to make one great game than to try to make two good games fused together.
  • Going with the alternate-universe idea -- what if technology and science focused on spaceflight instead of computers? Instead of making lithium-ion batteries and 16-core CPUs, material science figured out how to make a space elevator on Earth. Getting into space becomes cheap; throw in a couple of asteroid captures for mining purposes, etc. In real life the equations for orbital mechanics are simple enough that the positions of all the planets and moons in our solar system can be calculated in real time. I have a python script for it. Computer resources could still be limited for an FTL system, though.
  • @daid:  you're on fire dude!  Ha. 

     Ok, by point/topic. 

    • EE 1 vs 2.  Sorry if I befuddled the topic by implying you wanted to change EE1.  I knew you didn't, but I thought I perhaps detected some (regret?) that there was not more of an "anticipate, stress, relax" cycle in the game, and I was trying to be encouraging that, in fact, I for one am quite satisfied with EE1, even without this cycle.  And I do -especially- love the built-in, easy to use GM tools.  Thank you a million times over.
    • Ref planet graphics.  Yes, this is an area that needs some work in EE.  I make it a point to explain to new players that planets in the main view screen always appear much closer than they actually are, however, it's a relatively minor issue to deal with.  Now, regarding how to better graphically render said planets, I'm way out of my depth there and am not able to weigh in.  Sorry.  However, I do feel qualified to say that it would be nice to have a larger variety of planetary surface textures in the standard library.  (If anyone has such a library and would like to share, I would be immensely grateful… 8-)
    •  Ref away parties to planets:
      • This is a very intriguing idea.  It would of course be an optional "add on" to the game, but it could be done as a simple first person perspective in a virtual landscape just like a gazillion games are done today, and there could be an optional VR component to it.  Obviously now you're talking a lot more development commitment/cost, but it would be a great addition.  From my simulation planner's perspective, the entities actually going to the planet would be RC droids and not the ship crew persons themselves.  That way it really doesn't matter the environment, you can send a droid to it and remote pilot it around to do whatever is required.  It gets destroyed, np, launch another (if you have more in your inventory of course, ha).   
      • This overall capability would be cool, but I think in comparison to all the other AAA titles available (No Man's Sky being first/foremost), certainly not original.  Thinking about scripting a scenario that involved possible planetary away parties, and I start to get the sinking feeling that the complexity of writing said script would be too involved for those of us mere mortals who do this on a part time/for fun basis and it would probably go unused.  8-(   I love the thought of having the freedom to tell a story through this option, but some significant thought would have to go into how to simplify the scripting of said away missions and the environments. 
      • From a team-building exercise perspective, I can definitely see how an away party element could add all kinds of cool layers to the team interactive dynamics.  Additional critical skills could be brought to bear in a team's make-up, and that would enable other team members to contribute and shine in their own ways.  
    • Ref overall EE2 desires:  I confess, my items here are much more evolutionary than revolutionary, but I would like to see the following, in no particular order: 
      • Full 3d space environment with appropriate navigational aides
      • Overhauled and streamlined ship classification system with a broader set of 3D models
      • Ability to play sound and video files at Relay console and to have them played through main screen (I would definitely take advantage of such and have some scripted acting in audio/video to create a more immersive experience)
      • Broader set of console options; think more BS Galactica's Combat Information Center (CIC) in addition/expansion to the  standard bridge; the game can be set up as either depending on the number of participants or game focus (I think I'm going to start a separate thread about this topic)
      • More significant cyber attack/defend elements
      • "Baked in" ability to control drones
      • Ability to have the game engine manage objects orbiting other objects and not requiring the script developer to manage such; for example, it's awesome that EE includes planets and these can be set to orbit other planets, but I also want to be able to have space stations and asteroids in orbit around planets and not have to manage the orbit update calculations myself
      • External visual representation of damage done to a space ship
      • Introduction of compartment decompression within a ship; this would have a direct effect on how well repair crews would perform (obviously would slow them down)
      • Consider implementing the concept of boarding parties; this could be done all via AI drones/droids; boarding parties could control access to certain areas and thus affect ship systems/ability to repair; of course this would also imply that each ship would have a number of security forces on board (simulated ship population) in order to defend/repel invaders (this begs the question what other fundamental concepts from Starfleet Battles / Starfleet Command need to be considered for implementation?)
  • edited November 2019
    However, I do feel qualified to say that it would be nice to have a larger variety of planetary surface textures in the standard library.
    Taking a quick peek...

    scameron@wombat ~/github/EmptyEpsilon/resources/planets $ file *
    atmosphere.png: PNG image data, 128 x 128, 8-bit grayscale, non-interlaced
    clouds-1.png: PNG image data, 1024 x 512, 8-bit grayscale, non-interlaced
    gas-1.png: PNG image data, 2048 x 1024, 8-bit/color RGB, non-interlaced
    moon-1.png: PNG image data, 2048 x 1024, 8-bit/color RGB, non-interlaced
    planet-1.png: PNG image data, 2048 x 1024, 8-bit/color RGB, non-interlaced
    planet-2.png: PNG image data, 2048 x 1024, 8-bit/color RGB, non-interlaced
    star-1.png: PNG image data, 512 x 512, 8-bit/color RGBA, non-interlaced
    scameron@wombat ~/github/EmptyEpsilon/resources/planets $
    It seems EE uses 2n x 1n RGB images (where n is usually 1024), and looking at a couple of them, it looks like an equirectangular mapping to a sphere.
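
    For reference, the usual direction-to-texture-coordinate mapping for such an equirectangular image looks roughly like this (a sketch, not EE's actual shader code):

        #include <cmath>

        // Map a unit direction vector on the sphere to equirectangular texture coordinates in [0, 1].
        // Longitude maps to u across the 2n width, latitude maps to v across the 1n height (y assumed "up").
        void directionToEquirectangularUV(double dx, double dy, double dz, double& u, double& v) {
            const double pi = std::acos(-1.0);
            u = 0.5 + std::atan2(dz, dx) / (2.0 * pi);  // longitude
            v = 0.5 - std::asin(dy) / pi;               // latitude
        }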

    If you want more gas giants, someone has figured out how to take the six cube map images that gaseous-giganticus generates and cram them down into a single equirectangular mapped image using Hugin. See: http://forum.celestialmatters.org/viewtopic.php?f=2&t=732

    Gaseous-giganticus is here: https://github.com/smcameron/gaseous-giganticus
    and if you scroll down, there are some instructions and links about how to use it.

    This process of using Hugin to make an equirectangular mapped image from six cube map images would presumably also work for the cubemap images that earthlike.c generates. The earthlike program is still in the SNIS codebase. How to use it is described here: https://github.com/smcameron/space-nerds-in-space/blob/master/doc/howto-generate-earthlike-planets.txt

    And if you don't want to be bothered generating new planets, you're of course welcome to plunder the assets from SNIS and run the planets in there through Hugin to make equirectangular versions. They're all CC-BY-SA-3.0 licensed.

  • Sound can already be played by scripting. I scripted some triggers (buttons on Relay) in the Ambassador Gremus scenario. I know this is not the same as video, but it's one step towards immersion
  • edited November 2019
    @smcameron - thanks for the excellent info! I admit I've not done this before, but you've certainly given me enough to investigate and figure it out. Many thanks! 8-)

    @Xansta - I should have remembered that you implemented some audio via scripting. I'll be sure to check out that script. Thanks for the reminder!
  • edited November 2019
    Want more planets? Easy, generate as many as you want: http://www.texturesforplanets.com/
    Note that EE has no specific size requirement for the textures. But powers of 2 are quite common, and more efficient on older hardware.
  • @daid - dude! that's an awesome resource. Thanks very much! 8-)
  • There is a whole list of ideas there. And I could comment on all of them. But I'll pick one first.

    Full 3d space environment with appropriate navigational aides

    What is "full 3d"?

    I currently classify EE1 as 2D with 3 degrees of freedom (x/y motion and z rotation)

    If we look at other examples: Doom (the classic one) is 2.5D. Its maps are 2D, but there is some limited amount of motion in the 3rd dimension. It also has only 3 degrees of freedom, as there is no looking up or down.

    Duke Nukem 3D (the classic one) is also 2.5D. Its maps are still only 2D; it pulls a shitload of tricks, but it is 2D. It does allow for 5 degrees of freedom in motion, as you can look up and down. And you can jump.
    But we could also say that Duke Nukem 3D has 3 degrees of full freedom (x/y motion, yaw rotation) and 2 degrees of limited motion (z jumping and pitch).
    I would classify Artemis currently as 2.5D, as it has different Z planes where you can be, but no real 3D motion.

    Going forward on the game timeline, there is Descent. Full 3D, 6 degrees of freedom. There is no defined up, and you can move in any direction and in any rotation. Yaw, pitch and roll are unlimited. And let me tell you, when I first played it, it was amazing and confusing at the same time.
    The X-Wing and TIE Fighter games also allowed for full 6 degrees of freedom. But the scenarios were laid out pretty much flat, making little use of this fact.

    Now, a lot of games after that no longer had 6 degrees of full freedom. Freelancer, for example, had full freedom in x/y/z motion (well, z was a bit limited) but very limited pitch and roll.
    And that's what we see in many space games: they limit the roll to define the up/down direction, and with that place everything on a 2D plane again. Just to make it less confusing.


    So, if you ask for full 3D, what do you mean? 6 degrees of freedom? 5 degrees with limited roll? Just 2.5D like Artemis does? There are a lot of options between the 6 degrees of freedom and the 2D that EE1 does.

    I want to aim for 6 degrees of freedom. But that breaks a lot of the existing EE1 systems, like combat. There should be a post about this in this topic already somewhere.
  • And back to planet rendering. In my mind a concept starts to form that might actually work. If I take an octree for level of detail, and say "the level of detail needs to increase if the distance to the center of an octree node is less than the size of the node", then I need a surprisingly low number of nodes in memory at any time (a few hundred in the worst case).

    I still need to set up and render each of those nodes. And "building" the render data for a node is still quite a few calculations. But I could say I render each node as 64x64x64 voxels; as the level of detail is already handled by the octree, I do not need to account for that at this point, just scale the voxels according to the octree node size.
    And then I "just" need to process the 262,144 voxels for each node, times about 400 nodes, which gives only 104,857,600 voxels to process, in real time...

    Except there is a lot of room for optimization. Caching nodes is one: I don't need to recreate the rendering data for a node if it did not change, so only when you cross octree boundaries do you get a partial re-calculation. And fully filled or fully empty nodes can be detected by processing all the edge voxels (so 6x64x64): if they are all full, the rest does not really matter, and if they are all empty, there is no need to create a floating thing in the middle if that would otherwise have existed.



    About generating the planet data: "is this point solid or not" is the main question, and that is actually not hard at all. You combine metaballs with some 3D perlin noise, and you have a planet. For more advanced stuff you can mix in some lookup tables to convert things. But in general, it's not that hard. And it will be fun to play with and tune the parameters, trust me.
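
    To make that concrete, a sketch of such a solid-or-not test: one big metaball for the planet body plus some layered noise for surface features and caves. (The parameters are made up, and I'm using a cheap stack of sines as a stand-in for real 3D perlin/simplex noise just to keep it self-contained.)

        #include <cmath>

        struct Vec3 { double x, y, z; };

        // Cheap stand-in for 3D perlin noise: a few multiplied sines, roughly in the range [-1, 1].
        static double fakeNoise(const Vec3& p, double frequency) {
            return std::sin(p.x * frequency) * std::sin(p.y * frequency * 1.3 + 2.0)
                 * std::sin(p.z * frequency * 0.7 + 5.0);
        }

        // "Is this point solid?" for a 5000m radius planet centered at the origin.
        bool isSolid(const Vec3& p) {
            const double radius = 5000.0;
            double dist = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);

            // Metaball / signed distance of the planet body: negative inside, positive outside.
            double density = dist - radius;

            // Low-frequency noise: mountains and valleys of up to ~200m.
            density += 200.0 * fakeNoise(p, 0.001);

            // Higher-frequency noise: carves caves where it pushes the density above zero near the surface.
            density += 80.0 * fakeNoise(p, 0.02);

            return density < 0.0;
        }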
  • I recall that the Star Trek Online devs tested full 6DOF early in development. They found that it was very disorienting to a lot of players, and they didn't want to limit their audience. Communication between stations will have more challenges too. The pilot will have a good mental model of the orientation, as they have control over it, but it will be harder for the other players to keep spatial awareness in a more nimble craft.
  • edited November 2019
    What I did in SNIS to mitigate confusion regarding 6DOF is to have arrows on the navigation screen showing the direction that WEAPONS is pointing and the direction to whatever SCIENCE currently has selected. It's mostly NAV that needs to be aware of these. My weapons station is a turret system which can cover a half-dome on the top side of the ship, and NAV has to try to make sure it keeps targets out of WEAPONS' blind spot beneath the ship, and SCIENCE is used for finding out where distant things are, so the SCIENCE arrow on the NAV screen lets the navigator know which direction to point the ship to navigate to distant starbases or waypoints. It seems to work pretty well; I haven't had too many complaints of confusion, apart from a few due to the "Necker cube" illusion the arrows suffered from for a while, but I think I've mitigated that by putting arrowheads and tail feathers on the arrows.

    Players other than NAV and WEAPONS don't actually *need* to have very much spatial awareness, though I have for a while been thinking about having some system of "angling the deflector shields" via ENGINEERING(?), which would require some spatial awareness, but I have not really come up with a good idea for how a UI would work for it. Maybe something like the radar that I have on WEAPONS (copied from... Freespace, I think). This WEAPONS radar also indicates which blip SCIENCE has selected.

    My SCIENCE short range view actually has a very distorted view in which 3D distances are mapped into a 2D space, so that the 2D distance on the display is always proportional to the 3D distance, which can look pretty weird, especially when things move around (straight line movement is curved on this display) and then it has this weird 3D pop-out thing that it does when items are selected. The idea was to try to make it like you were "exploring" a 3D space without just presenting it as a fait accompli. Not sure how successful that was, but it does sort of prove that it isn't necessary for SCIENCE to have an accurate 3D representation clearly presented to the user in order to do the job, since SCIENCE isn't primarily concerned with the position, but more with the distance and the attributes of what is being scanned (and yes, the position, but only in order to relay this to NAV or WEAPONS -- which in SNIS is done automatically via the arrows on NAV and the RADAR on WEAPONS)

    I think the biggest impact "proper 3D" and 6DOF have on the game is in terms of "terrain." It is no longer quite so easy to put in a minefield or nebula blocking the player's path or constraining the path in some way, as things are generally so much easier to simply go around when you can go above or below them as well as to the left or right. Or conversely constraining the path so it isn't so easy to go around is harder and takes more "stuff" and tends to seem unnatural, like you have to practically build a box around the player to present anything more than the very slightest of inconveniences.

    All this is a long way to say that I think the "confusion" caused by "proper" 3D and 6DOF, as compared to a more limited scheme, is not really so bad in my experience.

    Oh, BTW, today SNIS is 7 years old.
  • True, I am just thinking of our not-very-sober games where someone sciency shouts "enemies to the left!" or another ship yells "I'm north of you, where are you going?!" We might not be the target audience for this :P

    Your point that 3D makes space terrain more problematic also applies at realistic scales. Space is so big and things are so far apart that it is trivial to go around them. Even nebulae are still practically a vacuum, and are see-through if you are actually in them.
  • Oh, BTW, today SNIS is 7 years old.
    Party time?

    About spatial awareness: I noticed only Relay and Science have spatial awareness in EE1. Even the captain has no real clue where things are, and communication is in the form of "station X is 24U at heading 230". Science could have a list of visible objects with headings and distances, and it would play out mostly the same. Except for the "terrain" things that smcameron talked about.

    About realistic scales: I question whether that is indeed a good idea. Except for heavy simulator games, nobody really does this. KSP does it. Elite and EVE Online sort of do it, but make heavy use of "zoning": small regions of interest which you can travel to easily, with everything else being vast emptiness. Elite uses "mass lock" to allow long-distance travel while still requiring slower travel in zones. EVE uses jump gates for the same effect, and bubbles that players can place to capture ships.

    I think a different method would be "precision and winding up/down". So you have a long-distance drive, but aiming it is a bit hard, and firing it requires a minimum run time, so you cannot use it to cover short distances.



    And finally, on the terrain and emptiness: there is another class of bridge simulation which does this better, and that is submarine simulation. There you have 5 degrees of freedom, with limited pitch, and lots of possible obstacles.
    Maybe that's why I want planets with cave systems?
  • edited November 2019
    smcameron said:

    Or conversely constraining the path so it isn't so easy to go around is harder and takes more "stuff" and tends to seem unnatural, like you have to practically build a box around the player to present anything more than the very slightest of inconveniences.

    Don't build a box. Build a sphere and call it the Oort Cloud :-)
    I don't really think the lack of terrain is a problem because of 3D, but because it is space. You can fly around a minefield in 2D as well as in 3D; you just have more options in 3D. If you want to stop the players, you have to enclose an area in 2D or a volume in 3D. From that perspective, this is probably why asteroid fields and nebulae in many sci-fi stories/games are waaaaaay denser than in reality: to add some kind of terrain.
    Actually, there is some sort of terrain in games with a realistic/simulationist approach: orbital mechanics. And that's even a pretty rough/restricting kind of terrain.
    daid said:

    Maybe that's why I want planets with cave systems?

    The cool thing about an engine that supports that kind of terrain is its flexibility, so the game can easily be reskinned for other kinds of bridge simulations. A good example is DFA, as there are e.g. missions in space, under water or inside the human body.

    About orientation: in DFA there is also a "reorient" button that brings the ship's orientation back parallel to a default plane.

  • http://daid.eu/dump/record_205534_05112019.gif (large gif warning)
    Quick test of the level of detail concept. Note that the actual level of detail "camera point" is fixed on the sphere at a single point, and scales are off. The smallest details are impossible to see here.

    But concept wise, this looks good.
  • edited November 2019
    If I may interject some "thoughts about bridge sim games" that are not necessarily immediately preceded in the conversation by any provoking stimulus... but which are nevertheless on my mind lately, here are some ideas about what does and does not work in SNIS regarding NPC ship movement in full 3D.

    On the macro scale, the primary notion guiding NPC ships in SNIS is a patrol route, which is nothing more than an array of several (x, y, z) positions which are to be visited in sequence, starting over at the beginning when the end of the sequence is reached. There is an algorithm for generating this patrol route which chooses points near the available starbases and planets and arranges a sequence of waypoints of several such choices. In theory, ships then travel in a circuit from one waypoint to the next, going from one various planet or starbase to the next in some sequence that repeats.
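
    For illustration, a minimal sketch of the shape of such a patrol loop (not the actual SNIS code): an array of waypoints, an index, and motion toward the current waypoint until it is reached.

        #include <cmath>
        #include <cstddef>
        #include <vector>

        struct Vec3 { double x, y, z; };

        struct PatrollingShip {
            Vec3 position;
            std::vector<Vec3> patrolRoute;   // waypoints near starbases/planets, visited in a cycle
            std::size_t currentWaypoint = 0;
        };

        // One update step: head straight for the current waypoint and advance to the next one
        // (wrapping around) once close enough. Obstacle avoidance and roll are not handled here.
        void updatePatrol(PatrollingShip& ship, double speed, double dt, double arriveDistance = 500.0) {
            if (ship.patrolRoute.empty()) return;
            const Vec3& target = ship.patrolRoute[ship.currentWaypoint];
            Vec3 delta = { target.x - ship.position.x, target.y - ship.position.y, target.z - ship.position.z };
            double dist = std::sqrt(delta.x * delta.x + delta.y * delta.y + delta.z * delta.z);

            if (dist < arriveDistance) {
                ship.currentWaypoint = (ship.currentWaypoint + 1) % ship.patrolRoute.size();
                return;
            }
            double step = std::fmin(speed * dt, dist) / dist;   // don't overshoot the waypoint this tick
            ship.position.x += delta.x * step;
            ship.position.y += delta.y * step;
            ship.position.z += delta.z * step;
        }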

    But the devil is in the details. What if, between two waypoints, a planet lies? Or while traveling, an asteroid or a ship intercepts the path? Given a ship's current position and a destination waypoint, it is easy enough to calculate a straight line velocity to drive the ship from here to there, but this leaves the roll component of the ship completely unspecified. Some roll orientation must be chosen. Suppose the ship needs to execute a turn. How, exactly, should a turn be executed?

    In SNIS, I have tackled all of these problems, but with only partial success and some failure. There are definitely deficiencies and bugs, leading me to conclude I need to rethink much of the NPC ship movement code.

    For example, the planet avoidance code I suspect does not work very well. What I tried to do was determine if a planet lies between a ship's current position and the next waypoint, using a ray-sphere intersection test, and if so, navigate along a great circle around the planet in a plane coincident with the line to the waypoint. That's what I *tried* to do. I don't think I quite succeeded, as sometimes I see NPC ships acting quite strangely while they are ostensibly "avoiding planet".

    Regarding ship orientation... In general, the navigation code is concerned mainly with a ship's position and velocity. The ship's orientation is kind of derived from these, leaving the "roll" component more or less unspecified. This is mainly a case of suffering induced by doing the "stupidest thing that might possibly work". This leads to a variety of problems, the least of which is that the ships can roll around in somewhat unnatural-looking ways. Other problems are caused by this too. It's possible for a ship to slowly reverse direction in velocity, that is, slow down and then travel backwards. If the orientation is derived from the velocity, this appears as a *very* sudden 180-degree flip in orientation. So we try to introduce some code to limit the change in velocity direction to some angle considerably smaller than 180 degrees (e.g. 45 degrees) to get the ship's orientation changes to look less strange. This is the kind of thing you bump into.

    What if two ships come very close together? They ought to try to avoid each other, right? So, in this case, I introduced some braking and steering forces that nudge the ships this or that way to hopefully avoid collisions. How well does this work? It's hard to say. It's hard to test this kind of thing, and when you do manage to test it, it's hard to know if it's really doing what you meant. "uh... I think that ship kind of dodged out of the way. Maybe."

    I also added a debugging console, and some AI debug code to log to this console. I can at runtime "aitrace" any given NPC ship, to try to log what it is "thinking" as it makes decisions and moves about, to try to figure out why it does the strange things that it does. Let me tell you, debugging these buggers is not easy. Without this debugging code I would be completely lost. With this debugging code, I'm only mostly lost. For example, there's a bug in which NPC ships sometimes move very very slowly for no apparent reason. I have no idea why.

    So, given the mess of code I've written to control my NPC ships, and the bugs and problems I have with it, I am inclined to think I need to re-write it. OTOH, I do not actually have a reasonable idea in mind about how I might re-write it to avoid such problems. Some of this stuff is just hard. This is one area where I think 3D makes things quite a bit more difficult than is the case for a 2D game. I'm looking at the code and thinking, "this is a mess, I should re-write it", but I do not have a clear idea how to re-write it and solve all the problems without it being a mess. I don't have a unified theory of 3d NPC ship navigation.

    Edit: And while there are a *lot* of problems, even so, it all *mostly* works anyway. So it's not as if these problems destroy the game. But they are definitely enough of a problem that I'm not amply satisfied by the simple fact that they do not destroy the game.
  • If I may interject some "thoughts about bridge sim games" that are not necessarily immediately preceded in the conversation by any provoking stimulus... but which are nevertheless on my mind lately
    YES PLEASE! I started this topic as a dumping ground for thoughts. So feel free to share yours as well.


    On the planet avoidance: that's the same thing I do in EE1, except you do it in 3D instead of 2D. In principle, plotting the path is the same: just a sphere-line intersection, and then create a move-to point outside of the avoidance radius to get around the sphere. And repeat until there are no more objects to avoid.
    (The "combing" code of Cura's 3D printing slicing engine does almost exactly the same, to keep travel moves inside the 3D printed object to avoid artifacts on the outside of the object)

    I don't think the problems here are much harder in 3D than in 2D. Except for the roll part, it's just getting a point to travel to, and then traveling to that point.
    I think my code here:
    https://github.com/daid/EmptyEpsilon/blob/master/src/pathPlanner.cpp#L78
    should also work in 3D, in pretty much the same way. Except that in 2D you have 2 possible points to travel to for avoidance, while in 3D it is a whole circle around the object.
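
    In 3D that whole avoidance step could look something like this rough sketch (not the actual pathPlanner code): test the straight path against the avoidance sphere, and if it hits, push the closest point of the path out to just outside that sphere to get a detour waypoint.

        #include <cmath>
        #include <optional>

        struct Vec3 { double x, y, z; };

        static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
        static Vec3 mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
        static Vec3 cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x}; }
        static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
        static double length(Vec3 a) { return std::sqrt(dot(a, a)); }
        static Vec3 normalized(Vec3 a) { return mul(a, 1.0 / length(a)); }

        // If the straight path from 'from' to 'to' passes through the avoidance sphere around 'obstacle',
        // return a detour waypoint just outside that sphere; otherwise return nothing.
        // (A real planner would repeat this until the path is clear, as described above.)
        std::optional<Vec3> avoidanceWaypoint(Vec3 from, Vec3 to, Vec3 obstacle, double avoidRadius) {
            Vec3 dir = normalized(sub(to, from));
            double pathLength = length(sub(to, from));

            double t = dot(sub(obstacle, from), dir);            // closest approach along the path
            if (t < 0.0 || t > pathLength) return std::nullopt;  // obstacle is behind us or past the target
            Vec3 closest = add(from, mul(dir, t));
            Vec3 away = sub(closest, obstacle);
            if (length(away) >= avoidRadius) return std::nullopt;  // the path misses the sphere

            if (length(away) < 1e-6) {
                // Path goes through the exact center: pick an arbitrary direction perpendicular to it.
                Vec3 axis = std::fabs(dir.x) < 0.9 ? Vec3{1, 0, 0} : Vec3{0, 1, 0};
                away = cross(dir, axis);
            }
            // In 2D there are only two candidate points; in 3D this picks one point out of the whole
            // circle of options, staying in the plane of the original path.
            return add(obstacle, mul(normalized(away), avoidRadius * 1.1));
        }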


    The biggest problem I had with the AI "stopping" was that the planner was jittering between the 2 possible points to travel around a planet. So there is some code to keep using the older point, even if the 2nd point around the planet would be slightly shorter.




    Maybe this helps with your roll issue:
    Assuming you are using quaternions (and you should if you have 6 degrees of freedom), you should be able to get the quaternion between your current rotation and the new rotation by taking your current forward direction vector and the new forward direction vector and using this piece of code:
    https://github.com/daid/SeriousProton2/blob/master/include/sp2/math/quaternion.h#L159
    And then convert it to a required yaw/pitch/roll amount with this code: https://github.com/daid/SeriousProton2/blob/master/include/sp2/math/quaternion.h#L81
    Ignore the roll in the output for your NPC ship, and you should have navigation, I think. Roll should be processed separately if you want roll control. Maybe you want to align the roll to a planet, or a station. But that is sort of the same as yaw & pitch control, only then figuring out how much you need to change your "down" vector to align with the requested down vector.

    (OK, it is a little bit harder than "rotation_amount = target_rotation - current_rotation")
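
    Or, as a quaternion-free illustration of the same idea: transform the direction to the target into the ship's local frame and read the yaw and pitch errors straight off it, leaving roll alone. (Just a sketch with made-up names, not the SeriousProton2 code linked above.)

        #include <cmath>

        struct Vec3 { double x, y, z; };

        static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        // forward/right/up are the ship's local axes in world space (unit vectors).
        // Returns the yaw and pitch the ship still needs to turn, in radians; roll is deliberately left alone.
        void steeringErrors(Vec3 forward, Vec3 right, Vec3 up, Vec3 shipPos, Vec3 targetPos,
                            double& yawError, double& pitchError) {
            Vec3 toTarget = { targetPos.x - shipPos.x, targetPos.y - shipPos.y, targetPos.z - shipPos.z };

            // Express the target direction in the ship's own frame.
            double localForward = dot(toTarget, forward);
            double localRight   = dot(toTarget, right);
            double localUp      = dot(toTarget, up);

            yawError   = std::atan2(localRight, localForward);                       // + means "turn right"
            pitchError = std::atan2(localUp, std::hypot(localForward, localRight));  // + means "pitch up"
        }

    Feeding those two errors into the yaw and pitch controls (clamped to the ship's turn rate) steers the ship onto the target without ever touching roll.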



    EE1 ships have no concept of avoiding each other. Generally, that gets messy fast. What most RTS games do for path planning is ignore moving units and only avoid stationary ones, which generally works because collisions stop units and most units travel in the same direction. But EE's space is so big that collision with ships is generally not a huge issue.
    What EE1 does have is "formation flying"; it is something the script can set up, allowing a group of ships to fly in a set formation and keep that formation until they attack. Also, the GM screen pulls a simple trick: when you order a group of AI ships to fly somewhere, it offsets the target positions by the current relative positions between all the ships, keeping that formation sort of intact as well. (It is not as effective as the formation-flying AI order, but it gives the same illusion.)
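
    That GM screen offset trick is basically the following (a sketch; the actual implementation may differ in the details): every ship gets the ordered destination shifted by its offset from the group's center.

        #include <vector>

        struct Vec3 { double x, y, z; };

        // Order a group of ships to a destination while keeping their current relative positions.
        std::vector<Vec3> groupMoveTargets(const std::vector<Vec3>& shipPositions, Vec3 destination) {
            if (shipPositions.empty()) return {};

            Vec3 center{0, 0, 0};
            for (const Vec3& p : shipPositions) { center.x += p.x; center.y += p.y; center.z += p.z; }
            center.x /= shipPositions.size();
            center.y /= shipPositions.size();
            center.z /= shipPositions.size();

            std::vector<Vec3> targets;
            for (const Vec3& p : shipPositions)
                targets.push_back({ destination.x + (p.x - center.x),
                                    destination.y + (p.y - center.y),
                                    destination.z + (p.z - center.z) });
            return targets;
        }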


    I think, if you want to avoid another ship, you don't just have to account for its current position, but also for its projected position according to its heading and speed. A bit like the missile firing solution code in EE does, but then to avoid instead of to aim towards.

    Take the 2 rays for your own ship's projected path and the other ship's projected path, and then calculate the shortest connection between them:
    https://github.com/daid/SeriousProton2/blob/master/include/sp2/math/ray.h#L29
    This shortest connection should give you an optimal avoidance direction, and the distance between your current ship and that connection should give you an idea of how hard you need to apply avoidance.
    This does have problems when the angle between the two paths is very small, as the closest-approach point might be far away while your actual collision will be closer. So there needs to be some kind of correction for that as well...
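
    For completeness, the closest-approach calculation between the two projected paths could look like this standalone sketch of the standard formula (not the SeriousProton2 ray code linked above):

        #include <cmath>

        struct Vec3 { double x, y, z; };

        static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
        static Vec3 mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
        static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        // Closest points between two rays p1 + t1*d1 and p2 + t2*d2 (with t1, t2 >= 0).
        // The segment between 'onPath1' and 'onPath2' is the "shortest connection" described above:
        // its direction suggests where to dodge, and its length how urgent the dodge is.
        void closestApproach(Vec3 p1, Vec3 d1, Vec3 p2, Vec3 d2, Vec3& onPath1, Vec3& onPath2) {
            Vec3 r = sub(p1, p2);
            double a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
            double d = dot(d1, r), e = dot(d2, r);
            double denom = a * c - b * b;

            double t1, t2;
            if (std::fabs(denom) < 1e-9) {   // near-parallel paths: the small-angle problem mentioned above
                t1 = 0.0;
                t2 = e / c;                  // just take the point on path 2 closest to the first ship
            } else {
                t1 = (b * e - c * d) / denom;
                t2 = (a * e - b * d) / denom;
            }
            if (t1 < 0.0) t1 = 0.0;          // clamp to "in the future" along each projected path
            if (t2 < 0.0) t2 = 0.0;
            onPath1 = add(p1, mul(d1, t1));
            onPath2 = add(p2, mul(d2, t2));
        }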