Elentor
Dec 14, 2004



I've updated the Roadmap and made the spreadsheet public. You can find it here:

Roadmap for The Sky Is Dead

I also moved the smaller, superficial changes from my project.log to a tab in the Roadmap called Change Log. This will make my project.log a lot less spammy while allowing me to update the change log a lot more frequently. I'll be talking a bit more about it later this week. I have some of the next updates planned.


Last but not least, here's something:


EponymousMrYar
Jan 4, 2015

The enemy of my enemy is my enemy.


SPACE POLICE

Cheese it it's the fuzz!

Kurieg
Jul 19, 2012






...Space police?

biosterous
Feb 23, 2013







Hot dang is that a pretty spaceship

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'


Clever Betty

biosterous posted:

Hot dang is that a pretty spaceship

Elentor
Dec 14, 2004



Thanks guys! For context, here's the Destroyer in a contained, experimental lighting environment:



And here are the Destroyers in context:

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'


Clever Betty

That looks real pretty.

TooMuchAbstraction
Oct 14, 2012

Hubris

Fun Shoe

Oh dang, procgen big ships!

Have you considered having some small set of levels be assaults on gigantic ships, like this level from Aleste? I guess what that'd mean would be having "ships" that were really terrain with guns mounted on them. Maybe your procgen isn't readily adapted to that concept though.

Elentor
Dec 14, 2004



These ships are terrain with weapon turrets/buildings mounted on them! This is why they have flat surface areas.

TooMuchAbstraction
Oct 14, 2012

Hubris

Fun Shoe

Oh excellent. Now you just have to make ones that are the size of an entire stage!

Kortel
Jan 7, 2008

Nothing to see here.


I would like to add this is an awesome LP and it's been quite enjoyable to read. Keep up the work!

Elentor
Dec 14, 2004



Next chapter should be up this weekend. I've got a few chapters lined up so there should be less of a hiatus between the next 4-5.

Feral Integral
Jun 6, 2006

YOSPOS



Elentor posted:

Next chapter should be up this weekend. I've got a few chapters lined up so there should be less of a hiatus between the next 4-5.

jonesing so hard

Elentor
Dec 14, 2004



Still working on a few kinks.

I finished the universe generator (took about 10 days) with my new algorithm that fixes the old issues, but I want to debug it fully before moving on. It's a complete procgen universe, and I'm pretty happy with how it works. I've posted a few details about it in my log but I'll condense them in the next update.

I've been keeping tabs here:
https://docs.google.com/spreadsheet...#gid=1716611053

I realized that I can use the existing framework for the stage select to add a map viewer in 2-3 days instead of 7-10 so I moved it up on my timetable. I think I can show it off a bit for the next update.

Elentor
Dec 14, 2004



Chapter 28 - Inner Universe, Part I


Over the past two weeks I've been working on the procedural universe.

While creating the Space Map and filling in its algorithm, I found that my play space was somewhat limited by the placeholder algorithm that was in place.

One of the things I remember from early in Starbound's Beta is that a few changes to the structure could ruin a save file, because the entire universe would come out completely different. This makes sense - if you add, for example, a new biome type, then that biome has to fit in somewhere, which means it will overwrite something that already exists. And during a Beta, there will be radical changes.

For example, imagine we have two biomes: Space and Moon. We roll a die, or in this case, we roll a number from 0 to 1. If the value is lower than 0.5, we choose Space. If it's higher, we choose Moon.

0.0 to 0.5: Space
0.5 to 1.0: Moon

Now, if we introduce a new Biome, let's call it Gas Giant, and we again roll an equal chance amongst the three, then what will happen is this:
0.00 to 0.33: Space
0.33 to 0.66: Moon
0.66 to 1.00: Gas Giant

One third of the Space Biomes will now become a Moon, and two thirds of the Moon Biomes will change into a Gas Giant to accommodate the change.
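To make the shift concrete, here's a toy sketch in Python (hypothetical biome tables for illustration, not Starbound's or TSID's actual code):

```python
# Hypothetical sketch of the biome-table change described above.
def biome_v1(roll):
    # Two biomes with equal odds.
    return "Space" if roll < 0.5 else "Moon"

def biome_v2(roll):
    # A third biome is added; the same stored roll now lands elsewhere.
    if roll < 1 / 3:
        return "Space"
    if roll < 2 / 3:
        return "Moon"
    return "Gas Giant"

roll = 0.4  # a roll an old save file produced before the update
print(biome_v1(roll))  # Space
print(biome_v2(roll))  # Moon - the same save now generates a different biome
```

The saved roll didn't change, but the table it indexes into did, which is exactly why a structural update can silently rewrite an existing universe.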

When dealing with terrain structures like the one in Terraria (where your save file holds every single tile), this is not a big issue. However, Starbound only saves the differences in terrain tiles, not the tiles themselves. It makes sense - in a universe with trillions or quadrillions of planets, each planet is generated from the ground up whenever you visit it, and only the difference is saved. So, if a structure changes too much, things break apart.

Now, depending on the game's structure, this might not change the game much, but it also might render the save file broken. And even if the file loads, in the above example your base might now be underwater. Or floating high in the sky.

I've been taking particular care to make everything I do resistant to changes, which is not a particularly easy task. A single RNG stream cannot be used for everything linearly, because changes that occur in the middle of the process can completely alter the entire generation.

Designing a resistant system

The first thing I did in TSID was create a system that stores all the relevant information related to the "now" without ambiguity.

That wasn't very hard. In fact, the system is a bit too resistant. Right now the game spawns the next stages based on your latest position. The stages themselves are not serialized (meaning each individual enemy ship is not saved, for example), but every piece of relevant information that could be taken from a biome or external source is. So once a stage is generated, it no longer requires any pointers to the universe. Since every quadrant is valid, everything related to your "now" is safe (the stage biome, levels, attributes, your position, the next stages' positions).

The second, much harder thing to do was to create a system resistant to significant algorithmic changes. I added versioning to some classes (so the game keeps older algorithms stored as it progresses, similar to how some games with replays keep old .exes so that replays from older versions of the game still work). I then made almost every generation process modular, independent and non-linear. This involves using multiple streams of RNG (which is a feature of PCG). It was very annoying to code, but it also makes it really easy to debug what is wrong and find bugs, because every process is deterministic and easy to reproduce.


Creating the Generator

So I'm continuing from where I stopped last chapter. This image:



The temporary universe was very bad and had an enormous number of bugs. So I worked really hard on visualizing something tuned to my specifications, and came up with a few decisions:

* The starting sector, star and quadrant have a "safe" bubble around them, where levels progress somewhat (but not entirely) smoothly.
* Afterwards, the sectors become increasingly higher level, with some valleys.
* The exoticity of sectors increases the further away they are from you, also with some valleys.
* These variations exist so that items with different requirements can drop in different places.
* As stated before, levels go from 1 to 240. The starting sector goes up to level 84; the campaign is designed to take you to level 65 without side-quests.

Inputting the Excel-based algorithms took two days. I played the game for a few hours to make sure everything worked and that there were no longer any oddities like level 240. That was the easy part.

If you haven't been following, I have published my roadmap as a public spreadsheet. I update it almost every day. One of the things I had planned for the future was a map viewer, but I realized it made a lot more sense to build it sooner rather than later, since I'm already working on the universe generator. So I needed to code something to give me some basic visualization of stuff.

Since stars and quadrants are on 10x10 grids, a 10x10 grid view would suffice (except for sectors, which are on a 1000x1000 grid, but we really don't need to see many of them at once). I wrote a quick script to show me the sectors and their levels, colored along a black - red - orange - yellow - green - blue - white gradient.



Okay, this so totally did not work. After hundreds of thousands of lines of code over the past years, I've gotten better at coding stuff and having it work without errors, but holy moly did I miss the mark on this one. It's not even supposed to be animated!

I won't go into detail about what went wrong, though if I had known the answer was "everything", I'd have written it as a live puzzle-solving update, since you guys like those and I like writing them as well. Eventually I worked out the kinks and got this:



This was visualization in its crudest form, but at least it worked! This is the view of the sector stars. You can see their sizes, colors and positions. They're more or less based on real stars (you can see some brown dwarfs, some red giants, G and K-type orange/yellow/white stars and so on).

By then I had also added them as cosmetics in the Stage Select, along with a bit more fluff, like your ship floating:

Old:



New:



Seeing the aesthetic elements is cool and all, but the stars seemed relatively glitchless. I then checked the quadrant aesthetics viewer, which mainly shows me the fog and ambient light coloring, based on the nebula behind each star.

This time I was greeted with this:


Bug #1 - Too much Red



Why is it so red?

So the fog of the stages was rolling a color based off of the nebula in the background of the system (typically). For reference, here's what they were for each star system:



I think I mentioned this in the past, but most of the time I prefer to work with HSV colors instead of RGB. In HSV, H stands for Hue (Red, Yellow, Green, and so on), S for Saturation (from gray to red, for example) and V for Value, how intense the color is (from black to red). RGB is the typical Red/Green/Blue scheme.

With HSV, for example, if I want to pick a bright, random color, I will roll:

H: 0 to 360
S: 1 (Max)
V: 1 (Max)

This will yield any saturated color in the rainbow. Note that H ranges from 0 to 360, as in degrees. This is because H treats the colors as a circle since they loop:
(Red - Orange - Yellow - Green - Cyan - Blue - Magenta - Red again)

Here's an RGB color wheel, courtesy of Wikipedia, with some degree labels added by me. These colors would have max saturation and value in HSV:



You can see how treating it as a circle with 360 degrees makes sense. HSV is useful in this sense because if you instead roll each RGB channel, like:

R: 0 to 1
G: 0 to 1
B: 0 to 1

You can get colorful results, but also black, gray, white, or unsaturated colors in general.
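As a quick illustration with Python's colorsys module (which keeps all three HSV components in the 0-to-1 range):

```python
import colorsys
import random

rng = random.Random(7)

# HSV roll: hue random, saturation and value pinned to max.
h = rng.random()  # 0..1 here stands in for 0..360 degrees
bright = colorsys.hsv_to_rgb(h, 1.0, 1.0)

# RGB roll: each channel independent, so washed-out grays are possible.
murky = (rng.random(), rng.random(), rng.random())
print(murky)  # could be anything, including near-gray

# With S and V maxed, one channel is always 1 and one is always 0:
print(max(bright), min(bright))  # 1.0 0.0
```

That last property is why the HSV roll can never produce gray: gray needs all three channels close together, and a max-saturation color always spans the full 0-to-1 channel range.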

The issue was that I was converting the nebula color from RGB to HSV through Unity, and unlike the other extensions that I use, Unity treats H as 0 to 1 instead of 0 to 360. And I was converting back to Color treating H as a 0 to 360 range. Which means it always returned red, since 1/360 is pretty drat red.
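The mismatch is easy to reproduce with Python's colorsys, which also keeps hue in the 0..1 range (the variable names are mine, not the actual TSID code):

```python
import colorsys

# Unity-style conversion: hue comes back in the 0..1 range.
h, s, v = colorsys.rgb_to_hsv(0.0, 0.0, 1.0)  # pure blue
print(h)  # 0.666... (i.e. 240 degrees / 360)

# Bug: a converter that expects degrees divides by 360 internally,
# so feeding it an already-0..1 hue collapses everything to ~0 = red.
wrong = colorsys.hsv_to_rgb(h / 360.0, s, v)
print(wrong)  # (1.0, 0.011..., 0.0) - nearly pure red

# Fix: scale the 0..1 hue up to degrees before the degrees-based API.
h_degrees = h * 360.0  # 240.0
restored = colorsys.hsv_to_rgb(h_degrees / 360.0, s, v)
# blue again
```

Any hue in 0..1 interpreted as degrees lands within the first degree of the wheel, which is why every nebula came out the same shade of red.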



Here are the fixed quadrants. In the future I want to make the nebula colors not completely random, because 100% random is not super interesting, and a random Hue value is also not exactly... well, unbiased. But talking about this would enter the realm of really technical color theory, and that is a lengthy, very lengthy subject. I'm always thinking of things in terms of color theory and it's an active effort not to create entire chapters about it.


NEXT TIME:

More Universe Generation! Algebra! Exciting work with Matrices! Wait, where are you going

Elentor fucked around with this message at 10:38 on May 21, 2018

EponymousMrYar
Jan 4, 2015

The enemy of my enemy is my enemy.


Truly the Sky is dead: The universe went from blood red to color vomit

Elentor
Dec 14, 2004



EponymousMrYar posted:

Truly the Sky is dead: The universe went from blood red to color vomit


Yeah, like I said, I'm not excited about the colors right now. Regardless, they're a lot subtler in-game than that map would make it seem.

At least the color vomit I can control. The blood red was the stuff of nightmares. Luckily the solution seemed pretty obvious because of the HSV thing, so it didn't take long, but try to imagine my surprise at seeing that thing.

Elentor fucked around with this message at 16:21 on May 22, 2018

Karia
Mar 27, 2013

Self-portrait, Snake on a Plane
Oil painting, c. 1482-1484
Leonardo DaVinci (1452-1591)



College Slice

If you're not going to do a write-up on color theory do you have any good links? I know nothing about it and would love some reading.

Elentor
Dec 14, 2004



Karia posted:

If you're not going to do a write-up on color theory do you have any good links? I know nothing about it and would love some reading.


If you guys are interested I can do a write-up on it; writing lengthy stuff clearly doesn't bother me. It is a subject where I fear going overboard, because it is the kind of subject that is lengthy just to explain why it is lengthy. One of the first jobs I declined was a position as an art direction professor, because I felt the proposed curriculum did not involve color theory enough, and that was my focus in college. One of my best friends and gaming teammates is an art history professor, and he has a greater understanding of traditional color theory than I do, but most of his knowledge is very hard to apply to digital art because of how much the two diverge, for example.

There is a huge discrepancy in what color theory entails between different fields. Traditional Art has literal centuries of heated debates on every single aspect of color theory, down to what constitutes the proper color wheel. Then digital art came in, and through our understanding of the physical processes by which our eyes perceive color, you could additively create any color from red, green and blue. This, in turn, led us to understand that the secondary colors (yellow, magenta, and cyan) were perfect for creating subtractive color, which conflicted with the traditional 3-color model of red, yellow and blue. The neurologist sees color as it happens in the frontal cortex; the neuro-ophthalmologist might be worried about color as it travels along the optic nerve, and what sort of shenanigans happen in the eye before it is turned into signal (incidentally, a bunch of convolution matrices are used to pre-process the image). The biologist or ophthalmologist might see color as the reaction between light and cone cells; the physicist might be worried about the wavelength of said light.

You can see where this is going, and I'm not even telling you what these people research when it comes down to color theory, merely what their foundations are. I'm not even talking about the cultural perception of colors, the biochemical, psychological or physiological reactions that we might have to color, the emotional links one may have to color, what is acquired and what is hardwired. To an evolutionary biologist, our greatly, greatly increased sensitivity to green might be a consequence of needing to see better whatever lurks in the jungle, which is mostly green. And our low sensitivity to blue might be because of the nearly 10k cd/m^2 of light that a bright blue sky emits, or maybe because there isn't a lot of blue stuff in nature for us landlubbers, or maybe both. A businessman and their producer may only be interested in statistically significant results, and since teal and orange are proven to work, that might be the sum total of what color theory means to them.

And they're not wrong. And that's all right; from a pragmatist's point of view, color theory can be explained in fewer words than this post. You can see color theory as anything from something as simple, trivial and functional as "make everything look like Michael Bay" to an overwhelmingly complex field.

It should come as no surprise that it took a mind like Newton's - who understood science as "natural philosophy" and concerned himself with the science of nature without the limitations of specializations - to make advances in color theory and the way our eyes perceive color (I mean, the dude stuck a needle in his eye to test a theory, that's loving hardcore). Understanding color is an undertaking that requires the science of a multitude of disciplines in order to attempt an objective overview of an extremely subjective field. No one really has time for that nowadays, and the failure to do so results in an insurmountable amount of research that yields no useful result. If "scientific" studies today are barely scientific, imagine the horrors that can be read if one starts looking for modern research related to visual perception. One of the latest gems I read was a guy concluding from his electroencephalography studies that we really only see 13 frames per second, that anything above 24 is a waste, and that it "doesn't mean you can be better in the game". Anecdotally that seems very wrong by, you know, playing a game competitively, but it also completely ignores the fact that even if his research is right, it does not mean that the optic nerve does not encode motion vectors in its signal, which features an extremely complex compression. A lot of research is either done or presented with half-facts, and that makes it very hard to write an entirely neutral post about the theory of visual subjects without being heavily biased. Consider an experiment-based approach: asserting that playing at 24hz equals 60/120/144+hz would be a lot more convincing if tests were done with control groups to see whether the experiment and the theory match reality.

This is a lengthy post about why it's lengthy to talk about color theory. But if you want to start somewhere, reading about how many color wheels there are should be enjoyable and give you an idea of the rabbit hole. Wikipedia has a bunch of related links and it should make it easy to read more. You can also start at different points:

Cone cells - The stuff your eyes use to perceive color.
Color Vision - This article goes pretty in-depth about the more physical aspect of interpreting color.
Here's a not very well known condition that I have - before, during and after college I studied this condition a lot, going so far as to help with research on a related topic and to be a bit of a guinea pig for lab studies. I almost chose a career in neuro-ophthalmology over art/games. You can then take a look at all sorts of weird hallucinations that can occur, and wrap all the way back to how we deal with optical illusions. Be sure to check the watercolor illusion, in particular Figure 2.

Elentor fucked around with this message at 16:51 on May 22, 2018

frankenfreak
Feb 16, 2007

Germany in front!
Another goal!
Victory for eternal coach Löw!




Elentor posted:

The issue was that I was converting the nebula color from RGB to HSV through Unity, and unlike the other extensions that I use, Unity treats H as 0 to 1 instead of 0 to 360. And I was converting back to Color treating H as a 0 to 360 range. Which means it always returned red, since 1/360 is pretty drat red.
Oh dear! How long did it take you to figure that out? This feels like something you could get hung up on forever, because it would never cross your mind that Unity does it that way.

Elentor
Dec 14, 2004



frankenfreak posted:

Oh dear! How long did it take you to figure that out? This feels like something you could get hung up on forever, because it would never cross your mind that Unity does it that way.

It took a surprisingly short amount of time, compared to other stuff (like why the map wasn't working and was animated). I ran into some bugs during the universe procgen that took 4 hours to solve, but that bug in particular took a few minutes. I knew the primary method of finding a random color was through the Hue, and red starts at 0 degrees, so the most likely explanation was that H either wasn't being set up properly or was ranging from 0 to 1.

So one of the first things I tried was multiplying the value by 360 to see if it worked. If it didn't, I'd have to take a deeper look at the nebula code. But it did, so case solved.

Elentor
Dec 14, 2004



Chapter 29 - Inner Universe, Part II


Continuing our series of bugs.

Bug #2 - Bottomline

More importantly, I needed to check that the levels of the systems were working correctly. And they were! At least for the starting sector.

When taking a look at other sectors, the map looked like this:



Well, this doesn't look right at all. My gut instinct was that something was off with the blur, so I disabled the blur and got something that looked more proper:



These are the 10x10 stars within a sector. The individual levels are not very important for now, but the colors follow the same gradient as before (black - red - orange - yellow - and so on).

Because I didn't want there to be a huge gap between neighbouring star systems, I implemented a blur algorithm. In case you're not familiar with blurring algorithms, here's a quick explanation of how they work: you basically take every value and slowly average them. Yeah, that's it. Something interesting that should make you wary of excessive post-processing effects in games is that every blur operation results in a permanent loss of information. Here's an example:

So if you have two values, 0 and 1:



And you average them a bit, let's say they become 0.25 and 0.75:



However, the limit to averaging them is that they'll both become 0.5 and 0.5:



At which point the result is indistinguishable from whether the starting point was 0,1 or 1,0.
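The whole process fits in a few lines (a toy model of the averaging, not the game's blur code):

```python
def blur_step(a, b, amount=0.5):
    # Pull both values toward their mutual average.
    avg = (a + b) / 2
    return a + (avg - a) * amount, b + (avg - b) * amount

x, y = blur_step(0.0, 1.0)
print(x, y)  # 0.25 0.75

for _ in range(60):
    x, y = blur_step(x, y)
print(x, y)  # both essentially 0.5 now
```

Starting from (1.0, 0.0) converges to exactly the same (0.5, 0.5), which is the information loss in action: once fully averaged, the original ordering is unrecoverable.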

The property of destructive editing and information loss isn't exclusive to blurring. This in itself is another subject that could yield a lot of dedicated chapters, as information loss is quite an important subject in a lot of fields, such as music-making. But I digress. Typically, when blurring an image, you filter the image through something called a convolution matrix. For example, let's say you have a 3x3 image:



A convolution matrix is a mask which tells you how to perform an operation on a certain matrix. These kernels can be used for many effects, including sharpening an image. A blur kernel in this case would look something like:

[1, 1, 1]
[1, 1, 1]
[1, 1, 1]

Meaning that each pixel will take the adjacent pixels and weight them equally. Afterwards you divide the result (otherwise it'll just add them up). So for the central pixel...



It takes all the neighbours and mixes their values. If you divide the kernel by 9 (the sum of the kernel, which in this case also equals the number of pixels), it will yield the exact average of all surrounding pixels, including itself. If you divide by less than 9 you'll get a progressively brighter image. Here's how the transition looks from a weak blur to the final average:



As you can see, this image also averages to 50% gray
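Here's that kernel arithmetic applied to the center pixel of a tiny image (the pixel values are made up for illustration):

```python
# 3x3 box kernel applied to the center pixel of a 3x3 image.
image = [
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 1.0],
    [0.0, 1.0, 0.0],
]
kernel = [
    [1, 1, 1],
    [1, 1, 1],
    [1, 1, 1],
]

weighted = sum(image[y][x] * kernel[y][x] for y in range(3) for x in range(3))
center = weighted / 9  # divide by the kernel sum to keep brightness constant
print(center)  # 0.555... (5 bright pixels out of 9)
```

Dividing by the kernel sum is the normalization step: drop it and every pass brightens the image instead of averaging it.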

Here's something more interesting. Something that should feel very intuitive:



After blurred to infinity, becomes...



Yes, 50% gray again. I imagine some of you knew this was going to happen, but if you were expecting green, that must have been disappointing; it sure was for me the first time. This is because all blur does is average values, and blue and yellow are opposite colors in RGB. There are algorithms that mimic how paints mix together, but as far as pure, raw blur goes, there's no artistry to it. Blue is (0,0,1). Yellow is (1,1,0). Their average is (0.5, 0.5, 0.5).

Just not to end this on a gray note:



There you go!

So yeah, colors are hosed up. Again, this is a lengthy subject; if you're interested in a derail I can post more about convolution matrices in a different update.

Back to the topic at hand, or at least to why I'm talking about blur. Blur algorithms are used a lot. There are many variants, but something you need to keep in mind is that pure, high-quality, unadulterated blur is computationally expensive.

Imagine you're running a game at 1080p. That's 1920*1080 = 2,073,600 pixels. The convolution matrix I showed you is called a box blur, because it rustically averages everything in a box, and so it's not very high quality (a pixel on a diagonal is further away from the center than a pixel to the sides - think hypotenuses), but it's the cheapest we can get. Each non-border pixel needs to perform a bunch of operations: at the very least, summing 9 values and dividing by another. This would yield a blur of size 1 which, at 1080p, would be negligible.

Instead, games use a 2-pass blur. It works like this: you first do a single horizontal blur, then you take the result of that and do a vertical blur. On a 3x3 matrix you now do a calculation on 6 pixels instead of 9. There are many other forms of blur that games use to cheat for performance reasons, especially on graphics.
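A sketch of the 2-pass idea in Python (clamped edges; for interior pixels the result matches the one-shot 3x3 box blur exactly):

```python
def blur_1d(vals):
    # 1D box blur of radius 1 with clamped edges.
    n = len(vals)
    return [(vals[max(i - 1, 0)] + vals[i] + vals[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def two_pass(img):
    # Horizontal pass on each row, then vertical pass on each column.
    rows = [blur_1d(r) for r in img]
    h, w = len(img), len(img[0])
    cols = [blur_1d([rows[y][x] for y in range(h)]) for x in range(w)]
    return [[cols[x][y] for x in range(w)] for y in range(h)]

img = [[0.0, 1.0, 0.0],
       [1.0, 1.0, 1.0],
       [0.0, 1.0, 0.0]]
out = two_pass(img)
print(out[1][1])  # 0.555... - same as averaging all 9 neighbours at once
```

This works because the box kernel is separable: a horizontal average of vertical averages covers the same 3x3 neighbourhood as the full 2D kernel, while touching 6 values per pixel instead of 9 - and the savings grow with the blur radius.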

As you may have realized, instead of using blur on an image, I'm using blur on the levels of the game's 10x10 map. So if two adjacent star systems have levels:
[30, 70]
They're averaged somewhat, say to [40, 60], and now the gap between them is not as huge, and the odds of you dying to an undead bear are significantly lower.
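In code, the level smoothing might look something like this (a hypothetical helper, not the actual TSID implementation):

```python
def soften(levels, amount=0.5):
    # Pull each adjacent pair of levels toward their average.
    out = list(levels)
    for i in range(len(out) - 1):
        avg = (out[i] + out[i + 1]) / 2
        out[i] += (avg - out[i]) * amount
        out[i + 1] += (avg - out[i + 1]) * amount
    return out

print(soften([30, 70]))  # [40.0, 60.0]
```

The amount parameter is the knob: 0 leaves the original level spikes intact, 1 flattens each pair to its exact average, and anything in between narrows the gap while preserving some of the peaks and valleys.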

Back to the map, here's the non-blurred version:



And here's the map after the horizontal blur pass:



The problem is that somewhere in the vertical pass, I hosed up. The vertical pass isn't a simple copy-paste of the horizontal pass - I wanted to preserve some of the peaks and valleys, so it had a bunch of custom code, and some of it didn't work as expected because I ended up mixing the pre-horizontal-blur values with the post-horizontal-blur values and all hell broke loose.

Anyway, I eventually fixed it, and here's the algorithm in action. You'll notice it floods over some parts more than others instead of making a perfect average:



The blur bug was only affecting sectors other than the starting sector, which we haven't covered yet. So we'll take a look at it next chapter!


NEXT TIME:

The Starting Sector! Grids! Labels! Level Progression!


Qs & As

In my Project.log, I gave this answer to TooMuchAbstraction. I talk a bit more about the content of systems and quadrants and what kind of traits they can contain.

Elentor posted:

I've been working lately on the universe generation. I wrote the systems a while ago and have been working on them ever since.

The current version is not overly detailed but it is pretty much wrapped-up when it comes down to level progression. I'm very happy with it for now and am in the process of coding and testing it.

This is my first time doing something at such a large scale and it took me a long time to figure out how to implement this stuff. It's not a perfect solution (GUIDs would probably be a good thing to know in the future) but it's a lot-of-things-proof, among which:

1) It is resistant to micro changes - I can change aspects of the generation without destroying an entire save file or the general characteristics of a universe.
2) It supports extremely large universes - This was not a huge worry as I was pretty happy with a small universe, but right now the universe features 10 billion quadrants. It supports way more than that, but that's already an amount I'm very happy with, and is way more than enough.

The main drawback is that it is possible, although extremely unlikely, that some quadrants might be repeated. This would be akin to the large-scale zoom of the PRNG structure issue that I discussed in my LP.

The good news is that this drawback is solvable, checkable, and rare. I can check for it first, and if I find any issues I can solve it permanently by tweaking the algorithm a bit, though that would take me longer and maybe limit future updates. However, the odds of a single quadrant being repeated are 1 in 1.5 trillion possible universes. This means that a player would have to visit 15 sextillion (15 * 10^21) quadrants on average to find a repeated instance. Such a chance is many orders of magnitude lower than the large-scale zoom on the polynomial distribution of repeated seeds in a 64-bit space, and in fact makes the number of unique quadrants per repetition higher than 2^64 itself, which is the precision of my PRNG.

TooMuchAbstraction posted:

That's the odds of finding an exact duplicate. Do you know what the odds are of finding a substantially similar quadrant? That's a much harder problem to answer, I'm sure, but it should roughly equate to how many times a player can play a given quadrant before they've seen all the macro variations on it. Depending on how replayable your content is that might also mark the point (or linearly correspond to the point) at which the player starts getting bored.

Elentor posted:

That's a... complicated question.

A lot of quadrants are very similar by design. Each quadrant holds a pool of map types. Think of it in terms of Path of Exile's Atlas.

Imagine that there are some map types, just like PoE's maps have their own layouts and attributes. There are 20 map types planned for TSID right now, so quite a ways from PoE's 100+ maps, but not that far from the number of Rift themes in D3.

I'm not showing which map types each quadrant can spawn though I might add that in the future, but it's a straightforward system. If you're in an earth-like planet quadrant, for example, it might spawn a cityscape map, or a space map with the planet in the BG, or a map over a lush forest. If you're in a space junkyard quadrant, it might spawn a space map, or a station map.

Unlike PoE, however, the quadrant traits change the layout, appearance and content of the map. Actually, it goes a bit beyond that. The Star System itself has traits (as does the Sector you're in). For example:



One of the most obvious traits is the star color. So space maps around that quadrant will (typically) have a directional light color that trends toward the star color. Also, the closer you are to the star, the stronger the lighting.

Most stage maps are going to be more or less alike, and adding variance is going to be a never-ending task. Every quadrant has its own pool of at least a few ships (I don't recall the minimum right now, but it's at least 3) that are unique to it, so no two quadrants in the same system will have the exact same enemies. Maps take enemies from 3 different pools: the star pool (so each star system has a few enemies in common, giving it an identity), the quadrant pool, and a new random pool for each stage.

Taking a quick look at the code, the Deep Space Biome has 9 implemented characteristics that affect its maps, while the Star System has 8. There are more planned that I haven't coded yet, but you get the idea. The thing is that these characteristics have many different rolls. So a Star System may have a dense asteroid field that spans multiple quadrants, and those quadrants' maps will have a higher density of asteroids. In this case, the streak of quadrants with an asteroid background is a similarity that makes for the theme of the star system, like each one of them is part of a large forest. In some ways, quadrants understand their context as well, so an asteroid belt might get denser as you get deeper into it (each quadrant toward the center has a denser and denser asteroid field that permeates its maps). Because quadrants are part of a larger biome, they inherently have more similarities than differences, though I think that's what gives their differences a more interesting context.

So yeah, this is a very hard question to answer. I want (by design) the player to recognize patterns, so that when they find something unusual it shakes them a bit. I could assign every moon a fully randomized color, but that would mean none of them are special. Instead, most moons have similarities, so when you find one that's really different you feel like you found something new altogether. With planets and planetoids the line between what is similar and what isn't is easier to define, but with space stages more variables are similar on purpose. Two quadrants might be very similar but one is higher level; two quadrants might be completely different and sit right next to each other.

What matters to me at the moment is to shake up the frequency with which similar maps are played. This will require tuning to address your point about a player getting bored. Say a player's entire playthrough of a star system is something like this (these are temp names, but they're from the planned map pool):

Space -> Meteor Shower -> Orbit -> Station -> Desert -> Floating Cityscape Orbital -> Space -> Barren Moon -> Asteroid Terrain -> Space

The player saw a bunch of different stuff while passing through maybe 4 quadrants, maybe 10. I don't mind if those space maps are super similar because they're placing everything else in context. Some star systems have very dense nebula, so maybe those are colorful pink-BG space maps. The next star system might have space maps with battleship wrecks from a ship graveyard. Or maybe one of those 3 space maps is in a quadrant with exotic asteroids and it contrasts with the other two.

So yeah, this is pretty hard to answer, especially because I'll need to have more maps before I can fine-tune the frequency at which stuff happens. Not all biomes are created equal. Hopefully I can find a balance between similar and different.

Elentor fucked around with this message at 09:40 on May 25, 2018

Elentor
Dec 14, 2004



Karia posted:

If you're not going to do a write-up on color theory do you have any good links? I know nothing about it and would love some reading.

Not a link with content per se but a really good and relatively important book about color:

https://www.amazon.com/Business-Sci...s/dp/0471452122

I figured that if you want to go a bit deeper into the technical/scientific aspect then wiki links wouldn't suffice. This book goes fairly in-depth on the less artsy aspects of color, and it's very relevant to this day.

Elentor fucked around with this message at 06:38 on May 25, 2018

Elentor
Dec 14, 2004



Chapter 30 - Inner Universe, Part III


Let's take a look at the starting sector, shall we?

Where your journey begins



This is the first iteration of the starting sector. Black = Level 1, Bright Red = Levels 60+

You'll notice that you're in a black bubble. If we move the starting position...



The bubble moves with you. This is to ensure you never start surrounded by high level zones.



Here's the map with Level Labels toggled on. Over two days I kept tuning these values. During this time I added support for multiple universe seeds. Previously there was only one universe, but I found a way to implement multiple universes. Universe seeds are 16-bit right now. In theory they can go up to 18 million, but I'm restricting them to 65k for the time being.



Here's a different seed, with more detailed labels, featuring the level range of each star system. Tuning these values took a while.



Here's the zoomed out map, showing sectors. The sectors have pretty big ranges at the start, then it slows down a lot.



I played around for a while looking at different seeds. This seed looked pretty interesting to me, and I really liked the shapes. For a while I was hunting for a seed that looked good enough for the campaign and the things I have in mind; this was the closest, but still not good enough. Eventually I decided to work on it and create a custom starting sector. I kept notes of where the main campaign maps would be, how the plot would unfold, and so on. Eventually I finished and got this:



This is a clear view of the 100 sectors with a subgrid showing the 10k quadrants.

I decided to work on something different and perform some further smoothing on the transition between star systems. This isn't particularly easy. The way quadrant levels work is that they take the star system base level and add a random value over it. It's not easy to explain how this value ranges - there's something called the internal level, which differs from the final level. The final level (1 to 240) is a logarithmic operation over the internal level (typically ranging from 0 to 10) that smooths the curve at the peaks and makes the first levels faster to pass through.

For example, the quadrant internal level starts as (starlevel -0.6 + ((0 to 0.75)^x + (0 to 1, clamped at 0.4)) * 1.35) before the blur, then a bunch other much longer poo poo happens after the blur.

Because some of these ranges are raised to a power x, I realized I could control the variance by controlling the power to which these values are raised. A value below 1 raised to a power higher than 1 gets lower, so I can make a gradient that pushes these values toward 0 when close to a star system of lower level, for example. First, I mapped the gradient of each star system:
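For the curious, here's a Python toy of that exponent trick applied to the pre-blur formula above (the gradient values are stand-ins, and everything after the blur is omitted):

```python
import random

def quadrant_internal_level(star_level, gradient, rng):
    """The pre-blur formula from the post, with the 0..0.75 spread raised
    to a gradient-controlled power: gradient = 1 keeps the full spread,
    gradient > 1 pushes it toward 0 near lower-level neighbours."""
    spread = rng.uniform(0.0, 0.75) ** gradient
    bump = min(rng.uniform(0.0, 1.0), 0.4)
    return star_level - 0.6 + (spread + bump) * 1.35

rng = random.Random(7)
smooth = [quadrant_internal_level(5.0, 3.0, rng) for _ in range(1000)]
rough = [quadrant_internal_level(5.0, 1.0, rng) for _ in range(1000)]
# A higher exponent pulls the random bonus down, flattening the transition.
print(sum(smooth) / 1000 < sum(rough) / 1000)  # -> True
```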



Here's the gradient with softer transition, taking into account the level of the star system itself:



And here's me loving up the algorithm:



So after a few hours I added a lot of extra details to it. The formula for the starting sector is a bit big right now, but it works really well. The gradient takes the distance from the center into consideration, so the quadrants near the center of a system are closer to the star level, and then they go up and down depending on their neighbours. Here's the before/after:



The same sector with different seeds:



Last, but not least, the current map with the level ranges:



Next in the debug viewer, Exoticity. As usual, the starting systems have very low exoticity.



The quadrant exoticity is defined by two things: an entirely random value that more often than not is 0, and a weight that comes from the Sector (so the starting sector is capped; most seeds yield one or two 5-Exoticity systems scattered around, 7 at the absolute most). On top of that, the star system applies a simple 0..1 multiplier over it.
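A toy version of that roll in Python (the 80% zero chance and the 0..7 scale are illustrative guesses, not the game's real distribution):

```python
import random

def quadrant_exoticity(sector_cap, star_multiplier, rng):
    """Two-part roll as described: a random value that is usually 0,
    capped by the Sector's weight, then scaled by the star system's
    0..1 multiplier."""
    if rng.random() < 0.8:           # more often than not, the roll is 0
        return 0
    raw = rng.uniform(0.0, 7.0)      # rare nonzero roll
    return round(min(raw, sector_cap) * star_multiplier)

rng = random.Random(3)
starting_sector = [quadrant_exoticity(2.0, 0.5, rng) for _ in range(1000)]
print(max(starting_sector))  # the cap keeps the starting sector tame
```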



This is an example of the quadrant distribution, uninfluenced by external modifiers.



And this is it after the weights. The very bright yellow dots are 3..5 Exoticity. Some of these will be locked to high Exoticity values during the campaign, but for the most part the weirder content is meant to reward exploration past the initial sector.


Tweaks and Testing

After a huge amount of bug fixes I managed to get in a 4-hour-long testing session. Because I haven't posted a ship in a while, here's a pretty one:



Some of the bugs involved moving around the universe and how the movement algorithm works. I create a matrix of all possible choices from the current coordinate, then roll a random value afterwards.

quote:

float[,] moveMatrix = coord.GetMoveMatrix(playerShip, playerShip.equippedSet);

Then, and here's the cool part, the moveMatrix is multiplied by a multiplier that you can control externally based on an angle of movement.

quote:

float[,] moveMatrixBonus = SpaceCoord.GetMoveMatrixFromDirection(direction, angle, minDeviance, maxDeviance);

So, as an example of how this works in the code, one of the movement options is always toward the general direction you've been following - namely, a 120 degree arc; everything past that arc is excluded.

quote:

angle = AveragePastCoords(10);
stages.Add(GenerateStage(stageSeed, StageType.Default, angle, 120f, 0f, 1f));

Similarly, I can always allow the player to find a way back to the starting point.

quote:

Vector2 angleToCenter = CampaignManager.OriginCoord().Difference(stageSeed.spaceCoord).Normalize();
stages.Add(GenerateStage(stageSeed, StageType.Default, angleToCenter, 90f, 0f, 1f));

Or, when it comes to it, steer the player towards the next campaign mission, closing the angle over time if needs be.
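For illustration, here's what an arc-restricted move matrix could look like (a 3x3 neighbour-grid stand-in in Python, nothing like the real GetMoveMatrixFromDirection signature):

```python
import math

def move_matrix_from_direction(direction_deg, arc_deg):
    """Illustrative stand-in for the directional bonus matrix: a 3x3 grid
    of neighbouring coordinate offsets, weighted 1 inside the arc around
    direction_deg and 0 outside it, so excluded moves can't be rolled."""
    weights = [[0.0] * 3 for _ in range(3)]
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            angle = math.degrees(math.atan2(dy, dx)) % 360.0
            # Smallest signed difference between the two angles.
            diff = abs((angle - direction_deg + 180.0) % 360.0 - 180.0)
            if diff <= arc_deg / 2.0:
                weights[dy + 1][dx + 1] = 1.0
    return weights

# A 120-degree arc pointing "east" keeps only the three right-hand
# neighbours; everything past the arc is excluded, as in the post.
for row in move_matrix_from_direction(0.0, 120.0):
    print(row)
```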

I know it's not much, but I particularly like the Stage Select aesthetic. In the future I want to populate it with miniatures of all the cool stuff you're visiting (the planets, nebulae, moons, asteroids, stations, and so on).








In Conclusion:

Well, that's it about the Universe Generator for now! It was a huge undertaking but I'm really happy with the result so far and how easy it'll be to expand and add more content to it. At least until I find the next bug.


NEXT TIME:

The fantastic world of Rendering Pipelines!


Qs & As
This is something that I posted a while ago that I find relevant, since I talk a bit about the movement algorithm which I mentioned in this chapter.

Elentor posted:

I think I posted this before but this is how the tuning is done right now:

1) There are 240 levels (you can fight stuff up to 260 in some situations, but 240 is the cap for your items). The starting sector (a 100x100 area) is where the campaign happens, and it ranges from level 0 to 80.
Itemization and combat formulas are in. Universe formulas are defined (in that huge spreadsheet I showed last time) but not coded in yet.

2) There are pre-made shops spread across the campaign with accessible items and ships that provide a floor for how low your stats can be.
Game already has the support for this (the initial shop is one of these).
I haven't tuned all of the shops yet. Right now I have the script and design of the first chapter done, and three of these shops done. My idea is to have a premade Shop every two campaign stages.

3) While in the campaign you have what I call limited freedom to explore the map.
4) There's bad luck prevention in your navigation that allows you to return to the starting zones.
These systems are in, I actually refined them a few days ago.

Navigation is a somewhat complicated algorithm because it involves a bunch of systems. You choose premade locations, but these locations need to offer you a way to progress in the campaign, a way to get back to the starting zone, and a way to mathematically guarantee that you can go anywhere you want once you're "free", all while maintaining the appearance of randomness.

To achieve the limited freedom, I'm using an algorithm I created years ago. It's based on the random walk, which is typically used in games like Terraria/Minecraft to create caves and the like.

I instead create a weighted random walk that is pulled by magnetic forces, i.e. a biased random walk. Some examples:



The entry point goes to a random point in each side of the map's boundaries:



To reinforce the apparent randomness, the force of the magnet is influenced by a wave composed of multiple frequencies, and the sine of the signal dictates how strong the compass is. The power of the magnet increases with each iteration (in this case, with the ever-increasing value of the "bad luck" protection) and is thus bound to overpower the signal. I don't remember the value of the integral, but on average the walk reaches its destination at magnet = 25% (the maximum is 200%, to compensate for the negative value of the sine), meaning that the walk looks random in the high-frequency domain.

What I like about this system is that it doesn't need to pre-establish a route towards anywhere, it only needs the vector of the destination and the bad luck protection. These images show a fully traced path but the system never needs to actually trace a path anywhere, so your decisions are not based on a pre-established path. Every time you choose a different stage the final random walk would look completely different but it'd still lead to the same place.
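The magnet-plus-sine idea can be sketched in a few lines of Python (the ramp rate, frequencies, and blend are all made-up values, just to show the shape of the technique):

```python
import math
import random

def biased_walk(start, target, steps, rng, freqs=(0.05, 0.13, 0.31)):
    """Magnet-pulled random walk: each step is a random unit move plus a
    pull toward the target. The pull ramps up over the walk (the "bad
    luck" protection) and is wobbled by a multi-frequency sine so the
    early steps look random."""
    x, y = start
    path = [(x, y)]
    for i in range(steps):
        magnet = i / steps * 2.0                     # ramps toward 200%
        wobble = sum(math.sin(i * f) for f in freqs) / len(freqs)
        pull = magnet * (1.0 + wobble) / 2.0         # never negative
        theta = rng.uniform(0.0, 2.0 * math.pi)      # random unit step
        tx, ty = target[0] - x, target[1] - y
        norm = math.hypot(tx, ty) or 1.0
        x += math.cos(theta) + pull * tx / norm
        y += math.sin(theta) + pull * ty / norm
        path.append((x, y))
    return path

path = biased_walk((0.0, 0.0), (60.0, 0.0), steps=200, rng=random.Random(1))
end = path[-1]
# No path is ever pre-traced; the growing magnet alone drags the walk home.
print(math.hypot(end[0] - 60.0, end[1]) < 30.0)
```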

4b) Even if you can in theory go anywhere, it's still possible, theoretically, if your stats are impossibly bad and your stage selection poor, that you run into a very, very bad situation.
So far I've done everything, both math-wise and in the design of the stage algorithms, to avoid these kinds of situations, but at some point I'll just accept that they're part of the genre. I don't want to fill the game with extreme hand-holding such as making stages easier if you fail at them a lot.

5) Re: Tuning of the campaign:
As for numbers, these are some values:

* A L80 Proton Emitter deals 3x the damage of a L0 Proton Emitter.
* Some weapons are natively better than others and scale worse, and vice-versa, and some rare weapons are just better overall. Weapons are designed from scratch to follow the philosophy of perfect imbalance. But in general the maximum gap between weapon equivalency is a 20 level difference.
* Weapon Traits close (or widen) that gap.
* A full set of L0 weapons can kill a L80-100 ship, so as long as you can dodge stuff you can kill anything coming your way one ship at a time.
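Purely as an illustration of the stated 3x-per-80-levels ratio, here's one possible smooth curve (a guess at a shape, not the game's actual damage formula):

```python
def weapon_damage(base, level, cap=240):
    """One curve consistent with the numbers above: damage triples every
    80 levels, so a L80 weapon deals exactly 3x a L0 one, and levels are
    clamped at the stated item cap of 240."""
    return base * 3.0 ** (min(level, cap) / 80.0)

print(weapon_damage(10.0, 0))    # -> 10.0
print(weapon_damage(10.0, 80))   # 3x the L0 damage, as in the post
print(weapon_damage(10.0, 240))  # 27x at the item cap
```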

Elentor
Dec 14, 2004



I've been working on the procedural AI and ship spawning. This is the most important part of the game right now, as it defines the procedural stages, i.e. the stuff you're fighting against.

This is a fairly long endeavor. I presumed that it'd be relatively fast, but was I ever wrong. The amount of content and testing is off the charts. I've gotten all the systems in place except for a bug that's been keeping me up at night wherein some projectiles aren't spawning, but I usually solve those seconds after posting about them, so there's that.

There are about 300 AI variants that I need to write to create an interesting flow. I know which ones they are and what to do, so between the coding and testing, this is more of a grind than anything at this point.

Edit: Once again a week-long bug is solved seconds after I post about it.

Elentor fucked around with this message at 23:16 on Jun 2, 2018

KillHour
Oct 28, 2007





You need to write an AI that writes all the code for your AI.

EponymousMrYar
Jan 4, 2015

The enemy of my enemy is my enemy.


KillHour posted:

You need to write an AI that writes all the code for your AI.

Don't let the AI self iterate. Not only will you create Skynet, you'll also create an exponential occurrence of bugs that you'd have to fix.

300 AI patterns seems a bit excessive for a shmup though. Then again you kind of need that many if that's the amount of different weapons that'll be firing at you...

Usual shmups only have a few AI patterns since they typically have static weaponry, with the big variance being in how they're placed on a stage.

Elentor
Dec 14, 2004



EponymousMrYar posted:

Don't let the AI self iterate. Not only will you create Skynet, you'll also create an exponential occurrence of bugs that you'd have to fix.

300 AI patterns seems a bit excessive for a shmup though. Then again you kind of need that many if that's the amount of different weapons that'll be firing at you...

Usual shmups only have a few AI patterns since they typically have static weaponry, with the big variance being in how they're placed on a stage.

The patterns are for each of the ship classes. One per ship class per faction.

Imperial Ships have five ship classes: Microships, Fighters, Heavy Fighters, Corvettes and Destroyers.

When a ship is procedurally created, it's assigned one AI index. This defines how the ship behaves. So let's say a microship can have the following patterns:
0 = Simple Horizontal Move, 1 = Simple Vertical Move, 2 = U Move, 3 = L Move
So it has 4 AI indices, 0 to 3. These have different chances of occurring (for the sake of an example, let's say they're split 40%/30%/20%/10%).

The AI is split in 3 subcategories:
SpawnBehavior, MovementBehavior and ShootingBehavior

Now what I do is make each index roll between different behavior lambdas. Let's say we need two types of spawn: the horizontal mover spawns sideways and the others spawn at the top of the screen.
So we have 2 SpawnBehaviors. AI 0 uses Spawn 0; AIs 1/2/3 use Spawn 1.

For the most part, the ShootingBehavior doesn't need to be associated with the AI index. So a Microship AI 0 through 3 can roll the same ShootingBehavior. Here's an example:

quote:

ssa.AddWeaponAI("Proton Emitter Mod I"); // Similar to the Proton Emitter but with a pink hue, to make enemy projectiles easier to see.
ssa.weaponAIs[0].damageMul = 0.5f;
ssa.weaponAIs[0].durationMul = 4f; // Ships shoot at you

if (r.value > 0.6f)
{
    ssa.weaponAIs[0].overrideWeaponAnimation = true;
    ssa.weaponAIs[0].entityAnimationMode = r.value > 0.72f ? WeaponAnimationMode.Constant : WeaponAnimationMode.Symmetric;
    ssa.weaponAIs[0].eaDuration = r.Range(0.5f, 1f);
    ssa.weaponAIs[0].eaIntensity = r.Range(0.5f, 1.5f);
}

The MovementBehavior, on the other hand, can be shared by 0 and 1 (they're both just forward-moving ships), but needs to be custom for 2 and 3. That gives us 2 different SpawnBehaviors, 3 MovementBehaviors and 1 ShootingBehavior for our Microship example.
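The whole worked example fits in a small Python sketch (the indices, weights, and behavior names are the ones invented above, not real game data):

```python
import random

# Behavior tables from the worked example: 2 spawn behaviors and 3
# movement behaviors shared across the 4 microship AI indices, plus a
# single shooting behavior that doesn't depend on the index at all.
SPAWN_FOR_AI = {0: "spawn_sideways", 1: "spawn_top", 2: "spawn_top", 3: "spawn_top"}
MOVE_FOR_AI = {0: "move_forward", 1: "move_forward", 2: "move_u", 3: "move_l"}

def build_microship(rng):
    # Weighted roll over the example indices: 40%/30%/20%/10%.
    ai = rng.choices([0, 1, 2, 3], weights=[40, 30, 20, 10])[0]
    return {
        "ai": ai,
        "spawn": SPAWN_FOR_AI[ai],
        "movement": MOVE_FOR_AI[ai],
        "shooting": "shoot_default",  # shared by all four indices
    }

rng = random.Random(0)
for ship in [build_microship(rng) for _ in range(5)]:
    print(ship)
```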

Elentor
Dec 14, 2004



Today I stumbled upon an idea to make nebula backgrounds that are cube-mapped and thus can be used as skyboxes in a variety of angles. This came out of nowhere but I'm so glad it did that I'm gonna talk about it a bit.

First, I create a background using volumetric noise. I know that EVE Online uses Terragen for this, which seems really powerful, but alas, I don't have it. I'm using a Unity solution that makes it really convenient, although it took me a few hours to tweak the settings to my liking, on top of a few days in the past spent learning it. My idea was to write a script that would take panoramic images and stitch them into a single cubemap. This is not a very hard task: you just need to stitch 6 square images with the appropriate Field of View.

Then came tweaking it. The process took a while to figure out, but it's quick to replicate. I'm using a premade algorithm that converts the cubemap to a cylindrical projection, then I apply my Star Field filter, using the depth map of the nebula to define where stars appear and with what frequency.

Second, I conveniently have a filter that converts cylindrical projections back to cubemaps, and has that ever been a time-saver.

Afterwards, I apply a heavily modified version of a filter I wrote to create clouds. What it does in this case is take a slice of the depth map (the intersection between the dark part of the nebula and the background) and fill it with a fluffy pattern. The fluffy regions are then randomly applied to the image so as not to make it too fluffy. Adapting this filter to my idea was the longest part and took about 5 hours, but it was worth it.

This work took me the entire day, but it results in some incredibly time-efficient backgrounds with a lot of diversity.
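For anyone curious about the direction math behind the cubemap/cylindrical round-trip, here's a from-scratch Python sketch (my own derivation with an assumed face orientation, not the premade filters mentioned above):

```python
import math

def cubemap_to_equirect_uv(face, u, v):
    """Map a cubemap sample (face name, u/v in [-1, 1]) to equirectangular/
    cylindrical UVs in [0, 1]. This is the standard direction math behind
    stitching six 90-degree-FOV renders into a panorama."""
    # Per-face view directions (an OpenGL-style orientation convention).
    dirs = {
        "+x": (1.0, -v, -u), "-x": (-1.0, -v, u),
        "+y": (u, 1.0, v),   "-y": (u, -1.0, -v),
        "+z": (u, -v, 1.0),  "-z": (-u, -v, -1.0),
    }
    x, y, z = dirs[face]
    longitude = math.atan2(x, z)                                # -pi .. pi
    latitude = math.asin(y / math.sqrt(x * x + y * y + z * z))  # -pi/2 .. pi/2
    return ((longitude / math.pi + 1.0) / 2.0,
            (latitude / (math.pi / 2.0) + 1.0) / 2.0)

# Face centers land where you'd expect on the panorama:
print(cubemap_to_equirect_uv("+z", 0.0, 0.0))  # front -> (0.5, 0.5)
print(cubemap_to_equirect_uv("+y", 0.0, 0.0))  # straight up -> (0.5, 1.0)
```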



My mind is going crazy with all the possibilities that stem from this.

Xerophyte
Mar 17, 2008

This space intentionally left blank


Aw yeah, I always love seeing diffraction spikes in star fields. They're totally ingrained in what we think stars look like and also a complete fabrication in some sense. They appear in astronomy photographs because we use mirror telescopes and the top mirror in a mirror telescope is supported by a structure called a spider. A 4-legged spider causes those 4-spoke diffraction artifacts, with fatter spikes for fatter spider legs. The same patterns also appear when using regular cameras, but there the shape of the spike pattern instead depends on the aperture blade setup. For a sensor with a nice, round aperture -- like those of you with well functioning Human Eyeballs -- you'd get a symmetric, circular diffraction pattern (an Airy disk) and no spikes.

You can generate the diffraction pattern from an arbitrary aperture by doing a Fourier transform of the 2D aperture mask function and squaring the magnitude, then scaling the resulting kernel up and down for different wavelengths (presumably pick one wavelength for each of R, G and B in a fake filter). Here's a page with diffraction patterns for a couple of different telescope variants. Wikipedia has some schematic examples for different camera aperture types.

You can generate a whole heap of different filter kernels, approximating different sensor types if you want to have more than the cross pattern type. Probably you're well aware of all this and went with fat crosses because they look nice, I just think diffraction spikes are real cool and spacey.
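A minimal numpy sketch of that recipe, for anyone who wants to play with it (one wavelength only, and the aperture/vane sizes are illustrative):

```python
import numpy as np

def diffraction_kernel(n=256, vane_width=2):
    """Far-field diffraction pattern of a circular mirror aperture with a
    4-legged spider, per the recipe above: Fourier-transform the 2D
    aperture mask and square the magnitude. vane_width controls how fat
    the spikes get. For RGB you'd rescale this kernel per channel."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    aperture = ((x * x + y * y) < (n // 3) ** 2).astype(float)
    aperture[np.abs(x) < vane_width] = 0.0  # vertical spider vane
    aperture[np.abs(y) < vane_width] = 0.0  # horizontal spider vane
    psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture))) ** 2
    return psf / psf.max()

psf = diffraction_kernel()
c = psf.shape[0] // 2
# The spikes: energy along the image axes far outweighs the diagonals.
on_spike = psf[c, c + 10:c + 60].sum()
off_spike = sum(psf[c + k, c + k] for k in range(10, 60))
print(on_spike > off_spike)
```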

Elentor
Dec 14, 2004



Xerophyte posted:

Aw yeah, I always love seeing diffraction spikes in star fields. They're totally ingrained in what we think stars look like and also a complete fabrication in some sense. They appear in astronomy photographs because we use mirror telescopes and the top mirror in a mirror telescope is supported by a structure called a spider. A 4-legged spider causes those 4-spoke diffraction artifacts, with fatter spikes for fatter spider legs. The same patterns also appear when using regular cameras, but there the shape of the spike pattern instead depends on the aperture blade setup. For a sensor with a nice, round aperture -- like those of you with well functioning Human Eyeballs -- you'd get a symmetric, circular diffraction pattern (an Airy disk) and no spikes.

You can generate the diffraction pattern from an arbitrary aperture by doing a Fourier transform of the 2D aperture mask function and squaring the magnitude, then scaling the resulting kernel up and down for different wavelengths (presumably pick one wavelength for each of R, G and B in a fake filter). Here's a page with diffraction patterns for a couple of different telescope variants. Wikipedia has some schematic examples for different camera aperture types.

You can generate a whole heap of different filter kernels, approximating different sensor types if you want to have more than the cross pattern type. Probably you're well aware of all this and went with fat crosses because they look nice, I just think diffraction spikes are real cool and spacey.

I'm not sure which route I'm gonna go with the stars because background/foreground contrast is important. I'm still playing around with it but working on nebulas is not my priority right now, just something that I came up on the spot.

The Fourier transform idea is pretty great; I feel ashamed that I never considered it. I actually never looked very deeply into diffraction patterns past what you learn in a photography class. I went with 4-spike artifacts because those were the most common in my reference pictures from NASA, and I already had them done in my filter.

Your post was actually very instructive, thanks! It'd be nice to have a parametric renderer with multiple diffraction kernels for future use.

Elentor
Dec 14, 2004



Bonus Chapter - Overhead and Upcoming Content


Busywork

Right now I'm stuck with busywork. But basically:

* Corporate busywork (registering my company, and so on).
* Planning social presence on the upcoming months.
* Finishing some Game Design documents for myself.
* Finishing some Game Design documents for my composer.

Coldrice (from Interstellaria) was kind enough to share some of his stats with me. Something that I learned is that way more people actually frequent a traditional .com website than I'd expect in this age of twitter + youtube + whatever.

Also, a common piece of advice from multiple developers with released titles was that as soon as I have more graphical content from the upcoming Alpha, I should have art content ready. The general consensus is that it doesn't matter how functional your game engine is: art sells, and first impressions are everything. This is not news to me, as I've done art for previous crowdfundings/pitches for different employers.

The social presence aspect is extremely time-consuming. I intend to get it going once I have the first Alpha going.


Ongoing Challenges

Still, as I said in the past, I enjoy the idea of selling a product that I can at least show something concrete for, and that full gameplay experience is what I'm aiming for at the moment.

One of the challenges I'm facing right now is with the AI. I have finished a very robust tool - robust to the point that I'm having to learn my own tool to familiarize myself with it. This was a decision I took carefully: I could have made something simple that would be harder to expand in the future but would yield good immediate results, but I opted to err on the side of feature-completeness even if it ended up being complex.

Either way I think the code is at a point where all the main problems have been solved. I need to add in content, and I'm taking a step back this month to flesh out concepts.


Considering a more Cyclical Approach

It's been about 1 year of TSID, and before that it had been 1 year of creating the procedural generator.

Taking a look at my productivity and analysing it, I've reached the conclusion that I'm a faster starter than finisher. When I worked as a Technical Artist in the past this was a non-issue, because as a Technical Artist you're always inventing something: while CGI researchers often spend months or years on a technique, a TA can do some research and original work but the development cycle never surpasses 1-3 months. Often you're doing a few shaders or effects that take a few days to a week.

I posted about it a few days ago in the Making Games Megathread, in regards to productivity and how to hack myself into being a better worker. Apparently the concept of coding your game in modules isn't new (so that by the time you feel burned out you have finished a module and are starting a new one). Since I'm doing (mostly) everything, I've noticed I make things faster by cycling through design -> art -> code, and that's the cycle I'm hoping to lean on in the upcoming months. Most modules for TSID are already done, so eventually this cycle will change to concept/ideas -> art -> level design.


Next Steps

Recently I got the opportunity to work with my composer friend and so I'm taking my time to speed up the sound production ahead of schedule.

The social media part is something I need to develop a routine for. That's still a bit new to me, hence why I want to have a decent amount of stuff ready when I decide to start advertising the game externally. Even once I'm done with all the backend, it'll still be a while until all the content is done.

Past that I'll be fleshing out a few more designs and artwork so that by the time the Alpha is ready, I have a nice amount of content to show. The next chapters should be art-centric.

Once I get the site rolling I'm considering opening up more of the development, not just my roadmap. Make it as open a development process as possible, and open up some of my docs and other (non-spoilerific) content on google docs.

Elentor
Dec 14, 2004



Today while chilling I designed a ranking system because I always pondered the absurdity of having S++ as a thing, and decided to go further and add S* and S^.

I'm not sure S^ is achievable, but historically, being a game designer and saying something probably isn't achievable equals people achieving it on the first day of release.

EponymousMrYar
Jan 4, 2015

The enemy of my enemy is my enemy.


Add an S^S rank.

Are you a bad enough dude to achieve S Squared Rank!?

(No you are not it is a joke rank and is actually the lowest rank because of an overflow error and as such requires you to utterly fail everything ever.)

Elentor
Dec 14, 2004



LOG(F) - Kill no enemies and finish with 1 HP.

Sordas Volantyr
Jan 11, 2015

Now, everybody, walk like a Jekhar.

(God, these running animations are terrible.)

Elentor posted:

LOG(F) - Kill no enemies and finish with 1 HP.

Nah, nah, there's probably a way to drop down there quickly without dying, and then it's just a pacifist run. You gotta make the special rank for ending the stage at negative HP!

Elentor
Dec 14, 2004



Sordas Volantyr posted:

Nah, nah, there's probably a way to drop down there quickly without dying, and then it's just a pacifist run. You gotta make the special rank for ending the stage at negative HP!

Fi

nielsm
Jun 1, 2009




Fallen Rib

Sordas Volantyr posted:

Nah, nah, there's probably a way to drop down there quickly without dying, and then it's just a pacifist run. You gotta make the special rank for ending the stage at negative HP!

Absolutely do special case winning a level but dying during the win animation. It's a thing so often abused by speedrunners, "well TECHNICALLY it's a win".

Kurieg
Jul 19, 2012






nielsm posted:

Absolutely do special case winning a level but dying during the win animation. It's a thing so often abused by speedrunners, "well TECHNICALLY it's a win".

The Saitama "Okay" face.


TooMuchAbstraction
Oct 14, 2012

Hubris

Fun Shoe

The "this is fine" GIF but the dog is replaced by your ship.
