The King of Swag
Nov 10, 2005

I have finally defeated my NIH syndrome by admitting defeat to myself and really thinking about what exactly it was I wanted to do. I know that a lot of people suffer from 'Not Invented Here' syndrome, either from time to time or, as in my case, perpetually; so for those who suffer, take to heart what I have to say. It may be a bit long, but I think it has some important points, even for those who aren't sufferers themselves. If you don't want to read about this, please skip to somewhere near the end, because I have some questions I'd like to ask as well.

'Not Invented Here' is for people who like to program for the sake of programming; you will never release anything of value if you always recreate code-bases that are already freely available to you. Releases by people who have done everything themselves are the exceptions that prove the rule--in many cases they spend years developing just the framework that their software will be built upon.

So why my sudden change of heart? In a moment of triumph, I realized the futility of my work. For several years now I've been not only working on a game engine, but also on the cross-platform API that I'm simultaneously using to develop the engine. C and Obj-C are my languages; I know many, but those are the two I actually enjoy coding in. The problem is the availability of a cross-platform core API on which to build in those languages; with C, what you see is what you get. With Obj-C, Cocoa is grand for the Mac, and while cross-platform solutions exist (such as GNUstep and Cocotron), both the Foundation and AppKit classes are quite heavy-handed for games.

So I did what any severe NIH sufferer would do; I laid out the API for the library exactly how I wanted and defined how I wanted many details to work (memory allocation patterns, etc.). Then I went to work actually writing the library and game engine; first deciding what part of the engine I wanted to write, then building the part of the API that it required to function. To put things in perspective, I did everything, including writing a complete set of both basic and advanced data structures. So what did 3 years of casual development get me? It got me a game engine that in its current stage is little more than an advanced cross-platform timing and IO framework that also happens to support scripting and rendering. It also earned me a core library that's fleshed out enough that I no longer need to expand upon it every time I work on part of the engine, but it is nowhere near complete itself. Frankly, what I've written so far is pretty drat awesome--I'm particularly proud of my timing system. But being honest with myself, none of it was worth the time it took to create; all of it has been done before, and although you would need to look harder for some than others, all of it has been done better and offered for free by others.

What finally made me kick the syndrome and re-evaluate what I was doing was when I finished my particularly extensive (and honestly pretty drat good) Unicode support. I felt triumphant because of how long I had been avoiding implementing it; I had gone from only a basic understanding of how Unicode works to knowing exactly how it works. It was in my celebration that I realized exactly how much I had done, how long it had taken me to get there, and how much more I had to do. I was heartbroken; I legitimately liked doing what I was doing, but somewhere along the way I had forgotten that the reason I started this whole programming mess as a kid was so I could make games. It took a lot of introspection to come to the conclusion that although I like programming just in itself, I like making games a lot more--so to hell with doing it all myself, I'll make good use of the resources people have developed before me.

That was when I decided I would look towards middleware, and settled on Panda3D as the optimal engine for games of the level and scope I'm currently looking at making. It is here that I realized how unprepared I am to actually make games and not programs. Picking up Panda3D hasn't been the hard part; it's a well designed engine and very easy to use. I had to learn Python to use it, but learning a new language is minuscule compared to learning the concepts behind programming, and it only took a few days. No, what has taken time and is beginning to really stretch the boundaries of my skill pool is the realization that I actually need to make assets for a game. More than that, coming up with game ideas is easy, but coming up with core gameplay concepts for games that would be feasible for me to create is hard as hell. Part of what makes it so hard is that as you increase the scope of a project, it becomes easier to develop gameplay concepts for it, but less and less feasible that the project could be completed. Simplicity is key for a one-man developer, but simplicity means that your game needs one core gameplay concept that everything revolves around. Being the backbone of your game means this concept needs to be a shining star, or the game simply won't be worth playing. When you have a larger game with 3 or 4 different things going on, each individual thing can be flawed, because their interaction together is often the defining characteristic of your game, not the individual concepts themselves.

So I throw off the shackles of NIH syndrome thinking I'm just going to jump straight into developing a game, and instead find myself making a list of all the skills I need to develop first. To put things into perspective, over the last week I've simultaneously been learning how to model in 3D, texture 3D models, and do audio design (unfortunately taking a back seat to learning to model), all while continually thinking of game ideas and trying to come up with gameplay that would be both fun and possible for me to actually develop. Frankly, I'm amazed that I'm actually beginning to learn this stuff and not have it all just a horrible jumble in my head. Worst of all, I can't just settle for programmer art and call it a day. Now, when all is said and done my artwork may still be (and probably will be) terrible, but I can't stand to do anything less than my best. It doesn't feel right to put something together that will be released to others, knowing that I personally could have done better but couldn't be bothered.

Now that all that E/N garbage is out of the way, I have a few questions I would like to ask about how you guys go about game development. I figure the best way to really start is to look at how others with more experience actually developing games go about it.

• How do you come up with new projects? As I previously mentioned, my way of going about it is to come up with game ideas and then try to develop gameplay concepts that the game will be based around, but I feel like there might be much better ways to go about this.
• What do you do about assets? How do you gauge how much is too much and how many assets you can reasonably develop for your project?
• I actually had a few more questions, but now that I get to it I'm drawing a blank, so please feel free to include or mention anything you feel is important or wish you had known when you started out.

Another question that is only tangentially related: what source do you recommend for learning C++? To be more specific, learning C++ as a C developer. I know C++ in that I can write code that compiles and runs, but I write code as a C and Obj-C developer does (i.e. the polar opposite of how proper C++ is supposed to be written). Being perfectly honest, I don't like C++; I find it to be a poo poo language with a poor excuse for an OO implementation (there's a reason I like Obj-C), but having abandoned my old ways and moved into the realm of middleware, C++ is king. You can't imagine how much I wish people would just write libraries and frameworks in C; at least then it would be easy to write APIs for other languages. But you have to play the hand you're dealt, and that is C++. So like it or not, I need to learn to think like a C++ developer, and I'm hoping someone can point me in the right direction.


The King of Swag
Nov 10, 2005

I originally posted this over at the Ogre3D forums earlier today, but their Help forum moves slower than this thread and I know there's a few Ogre users here that might be able to help. You would think this would be a very simple problem, but I can't find the answer anywhere on my own.

The King of Swag posted:

Hopefully this question will be easy to answer, but I've been going through the Cg Tutorial (and compiling it as HLSL because FX Composer doesn't support Cg) and I'm currently most of the way through the Lighting chapter (5). Up to this point, I've written the per-pixel lighting example and successfully gotten it to work with Ogre. The tutorial then suggests, as good practice, that the lighting functionality be offloaded into its own function, and that most of the lighting parameters be moved into structs to make them easier to pass along and group, and to make the eventual expansion of the shader easier.

Again, I've done that and the shader compiles without errors, but Ogre complains with this error:

code:
OGRE EXCEPTION(2:InvalidParametersException): Parameter called light0.color does not exist.  ...
... in GpuProgramParameters::_findNamedConstantDefinition at ..\..\..\..\..\OgreMain\src\OgreGpuProgramParams.cpp (line 1451)
18:18:08: Compiler error: invalid parameters in TestMaterial.material(54): setting of constant failed
This error repeats for every parameter that has been moved into a struct; as you can see, I've attempted to access the struct members by using dot notation: the name of the struct in the argument list, then the struct member name. It doesn't seem to work, but it's the way sinbad said to access the members in an old post I found (from early 2008).
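
To illustrate the pattern (this is not my actual material; the program name and values here are just placeholders), the material script does something along these lines:

code:
fragment_program_ref PerPixelLightingFP
{
	// Struct members addressed with dot notation: the struct parameter name, then the member name.
	param_named light0.color float4 1.0 1.0 1.0 1.0
	param_named light0.position float4 0.0 10.0 0.0 1.0
}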

For good measure, here are the shader programs and the material script I'm using. As you might notice, I've also added single-texture support to the Cg example. Although unrelated to my current problem, I had a monstrous time trying to get texturing to work, because the first two texture registers were being used for the position and normal data needed for the lighting, and I couldn't figure out any way to make Ogre load a texture_unit into a specific register from the material file. I ended up using a shader example here on the site, which involves loading the texture UV data into the first texcoord register (since there's only one texture), and then simply reusing that same texcoord for the position data after saving the UV data.

/* I originally included the shaders and the material script here, but SA doesn't render code blocks inside of fixed width/height boxes like the Ogre3D forum does, and they take up a lot of screen-space, so in interest of not pissing everyone off with a huge wall of text, I'll just link to the original post on the Ogre3D forums if you'd like to see the code. */

How to set shader parameters that are part of structs?

The King of Swag
Nov 10, 2005

I have a quick question that I haven't really been able to find a sufficient answer to by searching Google. Do you use unit testing in your game-related projects? A very brief explanation of why or why not would help, and do you think it's of any real value in game programming, or more of a time sink with not enough benefit?

I ask because I've used unit testing for non-game-related projects, but when it comes to programming games, I just don't see that many opportunities where it's beneficial. Nearly every test I can think up that properly models a real-world scenario doesn't fit the unit-test paradigm. The problem seems to be exacerbated the further you get away from writing foundational source (data structures, IO, etc.) and into the game logic; more so the more logic is scripted instead of hard-coded. The thing is, unless you're rolling your own on everything (physics, rendering, etc.), chances are 95%+ of your program is game logic.

Maybe I'm looking at it wrong, but it seems to me that the type of tests that are actually useful in game development, are tests that bring all the systems together and actually test their interactions with each other in a scenario that closely mimics what will be seen in the final game. In other words, implementing a test map/world/whatever that's designed around your new functionality, so you can see if it reacts as expected in close to real-world scenarios. Also known as a play-test.

What made me want to ask in the first place is that I was looking at various C++ to Lua binding libraries, because I'm getting tired of manually implementing the same basic interface for every class that needs to be exposed to Lua. I found a couple that seemed to be great, but they entirely lacked documentation aside from extensive unit tests (in fact, they make the point that they don't need docs because they did unit testing). Unit tests can be helpful, but they are not and never will be a replacement for proper documentation. So I started crawling the usual places to see where people would get stupid ideas like this, and found what seems to be an overwhelming number of programmers who espouse the position that there is never a scenario in which you shouldn't unit-test everything. So overwhelming that it's making me begin to second-guess myself.

But I'd like the opinion of programmers whose interests are closer to my own (game programming), and who actually program to create things (as this thread has shown) instead of farting about with theory and toy programs (which I get the feeling, right or wrong, is what most people on sites like Stack Overflow actually do).

The King of Swag
Nov 10, 2005

I really appreciate all the insight guys, and if anyone else wants to weigh in, I'd like to hear it.

The King of Swag
Nov 10, 2005

So I've been avoiding Unity for a couple of years now, solely because of its complete lack of modding support for games made in it. But after doing some research into it, I've found what I think to be a good solution, albeit one that many people don't seem to make use of.

User scripting can be handled by making use of Jurassic or one of the Lua bindings found on the asset store (or by C# plugins, the method used by Kerbal), but options for importing new content at runtime looked bleak. But then I came across the serialization assets/libs; namely Unity Serializer and the stripped down (to just the actual serialization functionality, none of the save game stuff) subset Unity Serializer Basic. Both are MIT licensed and freely distributable, and both make the claim that they are capable of fully [de]serializing any custom or Unity class, object, component, etc., including textures, meshes, animations, whatever. The ability to import custom content at runtime is even cited as one of their features. The only catch is that instead of the end-user exporting a .fbx or .dae and then loading it at runtime, they instead need to load it into a Unity (free) project, that has a basic editor script setup, to serialize and save their custom content into external files, which are then easily loaded by the game.

So I suppose my question is if I can drop lack of modding support as a reason to avoid Unity, given what I've found regarding scripting support and runtime content loading?

I've been using Ogre3D and a collection of various open-source libs (with half-assed custom bindings to tie it all together) for a couple of years now, and I'm frankly tired of the trial and error work-flow that's so prevalent when you lack any form of IDE outside Visual Studio/CodeLite/whatever, and your game creation tools consist of a couple utilities you've written here and there to do various menial tasks. So a little over a week ago now, I decided that grass must be greener on the other side and started looking at game engines where I could just focus on the actual game creation and let the engine handle all the foundational stuff.

If there's anything I've learned since I started looking, it's that there are a ton of options right now for potential game engines; unfortunately, very few of them are particularly good. There are plenty that have nothing really wrong with them, but they never got traction and didn't develop the communities necessary to drive development of and with them, so they're essentially dead projects. Eventually I worked the list down to just a few options.

  • NeoAxis: because I was already familiar with the rendering engine it used (Ogre3D) and the various quirks it has.
  • Polycode: I think it has real potential for smaller projects, if for no other reason than it has a slick API and IDE. But it's nowhere near where it needs to be for serious projects (i.e. projects that need to get released at some point).
  • Godot: People keep billing this as Unity lite, and I can sorta see it. It's an entirely encapsulated development environment built for the sole purpose of developing games, particularly Okam Studio's games, as all of their products have been developed in and published from various iterations of the engine. It boasts a number of professional-quality features, and the community surrounding it is already growing, but the documentation right now is incredibly lacking. That won't last forever, as the team is pushing incredibly hard right now to document anything and everything that can be, but it does mean that for right now at least, developers are going in blind when using it.
  • Unity: I've avoided it for a couple of years now, just because of the apparent lack of modding support for any game produced in it. But everything else I've seen about it drew me in anyways, and after having spent 3 days going through all the "Learning" articles, tutorials and docs available, I have to say that it has impressed me in ways that I never imagined that it would. Even in just the short amount of time I've had to mess around with it, I can't believe how incredibly useful the asset store has turned out to be. I can only imagine how much better things are going to get from here on out, now that UE4 is providing some real competition in a market where Unity has been sitting comfortably as king for several years now.

The King of Swag
Nov 10, 2005

Maybe I'm misunderstanding, but wouldn't that method mean that you can't use the scene view at all? Components use asset references instead of resource paths, so deferring the actual asset (mesh/material/texture/whatever) selection entirely until runtime means you won't have a visible GameObject in your scene when developing; it'll all have to be handled programmatically.

I would imagine it would be much easier to attach a script to any game object with moddable assets that just stores an asset location. On Awake (or Start), the object checks to see whether those assets exist at that location. If they do, the custom asset loader loads the file and replaces the currently referenced mesh/material/texture with the newly loaded one; otherwise it does nothing. An alternative that does much the same is a "user-content manager" that handles this from a centralized source (which eliminates the overhead of attaching a script to every object that needs it).
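
Something along these lines is what I'm picturing (a rough sketch only; the class and field names are made up, and the mesh-loading bit is stubbed out since it depends entirely on the serializer you pick):

code:
using System.IO;
using UnityEngine;

public class ModdableMesh : MonoBehaviour
{
	public string assetLocation;	// e.g. "Mods/Meshes/crate.mesh" -- where a modder can drop a replacement

	void Awake()
	{
		// If the user supplied a replacement asset, swap it in; otherwise keep the built-in mesh.
		if (string.IsNullOrEmpty(assetLocation) || !File.Exists(assetLocation))
			return;

		Mesh modded = LoadMeshFromFile(assetLocation);
		if (modded != null)
			GetComponent<MeshFilter>().sharedMesh = modded;
	}

	Mesh LoadMeshFromFile(string path)
	{
		// Deserialize with whatever you settle on (Unity Serializer, JSON.NET proxies, etc.).
		// Stubbed here; the point is only the check-and-replace flow above.
		return null;
	}
}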

Of course, in this situation I'm pretty sure I'm misunderstanding exactly what needs to be done, because it sounds insane to simply drop the Scene view entirely in favor of designing the game 100% in scripts.

The King of Swag
Nov 10, 2005

For you Unity users out there, this may be old news, but I just ran across this tool that allows you to seamlessly use VS Express with Unity. I wish I had found it a couple of weeks ago when I started learning how to use Unity. MonoDevelop is actually a pretty nice IDE, but it has a few quirks that really throw me off, such as an autocomplete that sometimes won't let me escape from it, requiring me to accept the autocomplete, delete back and retype what I wanted.

I hadn't tried it without this tool, but apparently if you hook up VS Pro as the external editor in Unity, every script you open will open a new VS instance, and you can't open scripts from within Unity at all if you use Express. So far the tool works perfectly for me, allowing me to not only open scripts from within Unity (in the same instance of VS Express no less), but if you click an error in the console, it'll jump to the correct line in the script.

As per the Readme.md on the Github page, you need to compile the tool using the given Solution, and then link the executable in Unity as the external editor, with the proper arguments in the arguments line. The arguments I used are "$(File)" $(Line) 2013, with the 2013 being my express version. Apparently it defaults to 2010 if you omit that.


The King of Swag
Nov 10, 2005

j.peeba, how large of a chunk of your total time did the graphics take you for your entry? When I see a good looking game (which yours is) with otherwise relatively simple 2D graphics, that question always pops to mind. Especially since my programmer art skills exist in this weird limbo, where excluding organic models, I can pretty quickly put together nice looking hard-edged 3d models, but if I had to estimate, I'd say the 2D graphics you used would take me close to a week to draw (I am really bad at 2D for some reason; I can't visualize it the way I can 3d objects).

The King of Swag
Nov 10, 2005


roomforthetuna posted:

And here it is, "2048 Birds of the Dead". http://www.kongregate.com/games/RavenBlackX/2048-birds-of-the-dead
I scored 13 before releasing it, 12 in the version that records the high score.

Played 3 games, managed to get the same high score of 12. I stopped playing because the coin placement script honestly needs some work; 2 out of 3 coins are impossible to get because they're right against the walls, either before or after. I'd much rather see fewer coins that are all possible to collect.

All that said, it's pretty good for what I assume is just a few hours of work.

The King of Swag
Nov 10, 2005


nebby posted:

Man first UE4 source for 20 bucks a month and now UT development happening out in the open. Looks like I picked a pretty good time to try to break into game dev! It'll be interesting to see how the UT experiment goes. I'm sure there is going to be a ton of noise in the forums.

I know this is from a couple of pages back, but what's this now? I haven't heard anything about this (or really seen any proof that the Unity team has changed their ways). They're notoriously mum about all development, possibly because all they seem to do is come up with half-baked ideas and implementations, and then plug them into Unity, never to fix any bugs or expand upon functionality after the initial release.

The King of Swag
Nov 10, 2005


Stick100 posted:

Unreal Engine 4 is rapidly surpassing Unity3D for me. The one thing I wish I had access to in Unity3D was a performance profiler, and in Unreal Engine 4 you can just press Ctrl+Shift+Comma and they will explain everything.

I only migrated to Unity3D from Ogre3D a couple of months ago, but I'm already comfortable with Unity3D and have several hundred dollars' worth of asset store purchases to shore up the shortcomings of Unity (Shader Forge, A* Pathfinding, DFGUI, etc.)* But the more I see about UE4, and the more I don't see anything coming out of Unity Technologies, the more I'm starting to worry that I've hitched my wagon to a dying horse. The deafening silence and overall lack of response from Unity Technologies is what's worrying me the most; it tells me they don't know where to go or what they need to do to keep up with the competition.

I'm using Unity Free (because there is no way I could afford Pro right now), but I'm not so invested in store assets and development in a given project that I couldn't switch engines if I needed to. The main things holding me back right now are content pipeline (Unity is very Blender and Allegorithmic Substance Designer friendly, both programs forming the backbone of my asset generation) and C# support. Now, I'm a C programmer through and through, with C and Obj-C still being my languages of choice for personal projects where I'm more interested in the code than the final product. I've also spent a lot of time writing C++ code. That said, when it comes to projects where I actually care about getting things done and not just generating pretty solutions in code, I want C#. Unity has an old and cobbled together hunk of junk for a C# compiler, and licensing issues or not, it is one of Unity's bigger failures, but I really don't want to move back to C++ for serious projects, even if it is some form of pseudo-managed C++ environment.

If things keep going the way they have been, especially with a lack of official announcements from Unity Tech, I think we may start to see a lot of people flocking from Unity as they finish their current projects over the coming year. Competition is a very good thing, but game development middleware lives and dies by the community around it, and if Unity Technologies doesn't do anything to try and keep the current community, we may see a dramatic weakening of the Unity development environment, which will only serve to further drive more in the community towards greener pastures.

* To be fair, I picked most of them up during sales that I was fortunate enough to catch, so I haven't paid full price.

The King of Swag
Nov 10, 2005


Ragg posted:

Unity's plan is to build their own .NET environment for AOT platforms and then upgrade Mono, to get around the LGPL. So a newer C# version is planned, but who knows when it will land.

I saw them talk about that a while back in the forums; from what I understand, the new compilation pipeline actually translates the C# into C++ and then compiles that. They said that in theory it should make code execution much faster, at the cost of slower compile times and absolutely no introspection capabilities. Personally, I think it's a tremendously stupid idea, and if they actually do it, that would end my qualms about sticking around for C#.

The main issue I have with it is that Unity's C# environment is already non-standards-compliant enough that you sometimes run into issues over it. Once anything resembling introspection, and any other feature they deem "not useful", is dropped, there is no hope of ever using outside C# libs again. I couldn't imagine trying to get something like Json.NET to function if there were no introspection capabilities.

Then you have the issue that once they go this route, Unity Technologies is now maintaining their very own custom compiler and assembler, which is something traditionally tackled by groups that have many more people than Unity Technologies can throw at it, made up of the type of programmers who have specialized in exactly that. I just don't see how anything they write will be able to compete in quality. Even if they were good at it, there are going to be bugs, and a lot of them, and since we'll now be dealing with a non-standards-compliant environment, there will be no way for the Unity community to avoid them until someone trips over one.

Given Unity Technologies' previous track record with bug fixes and updates, there is little to no chance that any bugs that exist will ever be addressed, and there will never be a feature expansion beyond the initial release to try and bring it up to feature parity with increasingly newer versions of the C# language.

I know I sound all doom and gloom, but this just strikes me as possibly the worst direction they have ever decided to go in, and I don't know if there's any turning back or recovery if they go down that road.

The King of Swag
Nov 10, 2005


xgalaxy posted:

I've heard nothing about lack of introspection. Can you provide a link because I don't think you are correct. Microsoft is doing the same thing with a native C# compile option and they aren't having any problems providing full C# including introspection and dynamic language features. I don't see why Unity would have any problems. From what I know they still have a small native runtime that should be providing the introspection features you are talking about.

As I said, it was a response one of the Unity devs gave in a forum thread a while back. Their forum search sucks (I can't figure out any way to go from a search result to the found post, other than to go through every post in the thread), but you are more than welcome to dig through the forums and try to find the post. It was in one of the threads where someone was asking if they'd ever update Mono. In this case, I'd love to be wrong, simply because being right means bad things to come.

The King of Swag
Nov 10, 2005


Shalinor posted:

...and chiefly because we want mod support.

I've been playing around with the easiest ways of making this happen without having to forgo editor functionality, keeping it as invisible a process as possible. The best solution I've come up with so far is using JSON.NET for .NET 3.5 (the actual library, not the crippled version on the Asset Store) and serializing the poo poo out of meshes, audio, whatever, and saving it out as BSON.

As far as I can tell, JSON.NET is capable of fully serializing anything you throw at it, as long as reference loops are set to be ignored, and you can get away with simply feeding it raw GameObjects, meshes, whatever. That said, by default it also serializes a ton of readonly data and properties that are calculated when an instance is created, so for certain things it's much more efficient to pull out the relevant data and serialize that instead. The example that immediately comes to mind is that if you serialize a mesh, every vertex is serialized as a Vector3. When Vector3s are serialized, JSON.NET saves the magnitude, square magnitude, normalized vector, etc., and this explodes the size of the serialized data.

So far, the best solution I've found is to create proxy classes where you can intimately control exactly what is serialized and how. So to handle a Vector3, you'd create

code:
using Newtonsoft.Json;

namespace JSONProxy
{
	[JsonObject(MemberSerialization.Fields)]
	public class Vector3
	{
		public float x;
		public float y;
		public float z;

		// Fully qualify the Unity type so it doesn't collide with this proxy class.
		public void FromOriginal(UnityEngine.Vector3 original) { x = original.x; y = original.y; z = original.z; }
		public UnityEngine.Vector3 ToOriginal() { return new UnityEngine.Vector3(x, y, z); }
	}
}
...and serialize that. Unfortunately things like Shuriken, which don't expose all their state, need to be directly serialized because a proxy won't be able to properly represent the needed state.

However you go about doing it, once you have figured out how to serialize a class you want to import/export for modding, you then need to create a simple editor plugin that allows you to easily do just that. This is fortunately pretty drat simple conceptually, and isn't a whole lot more than a script attached to an otherwise empty GameObject that you point at the various assets you want serialized. So you drop a mesh onto the script, hit the serialize button, and the mesh is serialized and saved out into a SerializedAssets folder. That saved file is then ready to be imported into an actual game, deserialized and plugged into a targeted object's mesh renderer.
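
To give that a concrete (hypothetical) shape, this is roughly what I mean; I'm using a ContextMenu entry in place of an actual inspector button, writing plain JSON rather than BSON to keep the sketch short, and the class and field names are made up:

code:
using System.IO;
using Newtonsoft.Json;
using UnityEngine;

public class AssetSerializerHelper : MonoBehaviour
{
	public Mesh meshToExport;	// drag the mesh asset onto this field in the inspector

	[ContextMenu("Serialize Asset")]
	void SerializeAsset()
	{
		// Ignore reference loops, as discussed above; swap in proxy types where it matters.
		var settings = new JsonSerializerSettings { ReferenceLoopHandling = ReferenceLoopHandling.Ignore };
		string json = JsonConvert.SerializeObject(meshToExport, Formatting.Indented, settings);

		Directory.CreateDirectory("SerializedAssets");
		File.WriteAllText(Path.Combine("SerializedAssets", meshToExport.name + ".json"), json);
	}
}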

The King of Swag
Nov 10, 2005

The purpose of this post was originally to ask a simple question, but after having asked it, the post devolved into a detailed analysis of my project: the decisions I've made so far, the ones I'm still deciding on, how and why I've made them, and how I felt they would affect the commercial sales of the finished product. That further devolved into an analysis of my insecurities about devoting so much time and effort to a project that, like most indie projects, realistically needs to be looked at as an inevitable and abysmal commercial failure, with any money made from such a venture simply taken as gravy on top of the pride of having released a game.

But then I thought better of it, deleted all that, and here's my original question: what is a good 2D framework for Unity that doesn't force me into a 2D-only environment or an orthographic-only camera? I'm working on a graphical roguelike, but with 3-dimensional maps, so vertically aligning horizontal map slices (which are obviously tiles merged into a single mesh) and rendering them with a perspective camera is a must.

I've looked at a number of different options, but it seems like every one comes with its own quirks and limitations that don't jibe well with my requirements.

The King of Swag
Nov 10, 2005

Does anyone here implement version control on their projects, and if you do, do you simply use it to track revisions, or do you actually use features like issue tracking? Technically this question could pertain to just home grown programming in general, but I'm interested particularly in how it relates to indie game devs.

Personally, I never used to use version control until a few projects ago when I was introduced to it, and realized how much easier it makes my life, if only to track my progress. Since it has always just been myself on these projects, I never really made much use of the other features such as issue trackers, but on this newest project, I find myself using it (the issue tracker) almost as a way of documenting what I've done, what I need to do, what I want to do, and what's currently in-progress.

I'm just curious to hear about other people's approach at managing their projects.

The King of Swag
Nov 10, 2005


Yodzilla posted:

This. BitBucket doesn't want you pushing gigs of information up onto its servers but for smaller/indie project you're probably pretty good with everything.


Also PRO TIP .gitignore is your friend for avoiding pushing tons of Unity generated non-project files: http://kleber-swf.com/the-definitive-gitignore-for-unity-projects/

e: he even has a good getting started tutorial http://kleber-swf.com/start-new-unity-project-with-git/

GitHub maintains its own repository of .gitignore files, which includes one for Unity that is more up to date than the one in your link.

The King of Swag
Nov 10, 2005

No matter how many times I do it, it never becomes less gut-wrenching to put together enough of a large system* that you can actually test what you have as a whole, and see whether the lack of any compiler errors or warnings actually translates into a workable design. Fortunately for me, the only error I ran into when testing it was a 'calling a method on null' exception, which happened because I forgot that C# initializes reference fields to null instead of calling their default constructors.

* In this case, a layer based streaming map system for a roguelike.
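
For anyone who hasn't hit this before, the gotcha in miniature (class names made up):

code:
public class MapLayer { }

public class StreamingMap
{
	MapLayer activeLayer;			// reference-type field: defaults to null, no constructor is run

	public void Refresh()
	{
		activeLayer.ToString();		// NullReferenceException until activeLayer is assigned
	}
}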

The King of Swag
Nov 10, 2005

Maybe you guys can shed some light on how you'd design an interface, given the current situation I'm facing.

Basically, I need values in units of X, and I need them to be easily convertible into other units on a regular basis. A critical point for me is that they're structs but are treated as much like a primitive type as possible. C# actually makes this pretty easy, but I've found myself in a situation where I can either go with a slightly less intuitive interface and make my types much shorter (units of volume have the most unit types, and thus the biggest difference, at 250 vs. 750 lines of code), or deal with the extra lines of essentially filler code that make the interface much more intuitive.

e.g. (just an example I cooked up right now)

code:
// This first part is the same for either interface design.

Volume fuelLevel = new Volume(4.5f, UnitOfVolume.Gallon);
Volume maxFuelUsagePerSec = new Volume(1.0f, UnitOfVolume.Dram);

fuelLevel -= maxFuelUsagePerSec * engineLoad;
This next part is where the difference in interface design comes in. First the longer, intuitive (to me) interface.

code:
// Check fuel left
if (fuelLevel.inOunces <= 8)
{
	Log(Settings.DisplayMetric ? String.Format("Low Fuel: {0} milliliters left.", fuelLevel.inMilliliters) : String.Format("Low Fuel: {0} tablespoons left.", fuelLevel.inTablespoons));
}
Contrasted with this:

code:
// Check fuel left
if (fuelLevel.inUnit(UnitOfVolume.Ounce) <= 8)
{
	Log(Settings.DisplayMetric ? String.Format("Low Fuel: {0} milliliters left.", fuelLevel.inUnit(UnitOfVolume.Milliliter)) : String.Format("Low Fuel: {0} tablespoons left.", fuelLevel.inUnit(UnitOfVolume.Tablespoon)));
}
That first example is a lot cleaner to me, but the reason it requires so many more lines of code, is that for every conversion, I have to declare a property like so:

code:
public float InGallons
{
	get { return InUnit(UnitOfVolume.Gallon); }
}
This isn't so bad for things like Temperature, where there are only a handful of units you can convert between, but when you deal with the likes of volumes, where there are 40 different possible conversions (US Customary, Metric, and Imperial), that's a lot of code that's essentially just filler, yet each property is already a single discrete action and can't be removed without losing its benefit. One thing I was already considering, and it would eliminate about half the cruft, is implementing implicit conversion to float. Right now I have the properties as I just showed, as well as a mirrored set of methods that return a new instance of the struct instead of the value as a float. With implicit casting, I could convert the properties to return the struct and do away with the mirrored methods altogether.
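
Roughly what I have in mind (a sketch only, assuming the struct internally keeps a float value plus the unit it was created with):

code:
public partial struct Volume
{
	private readonly float value;		// the stored quantity
	private readonly UnitOfVolume unit;	// the unit 'value' is expressed in

	// Implicit conversion lets a Volume be used anywhere a float is expected,
	// so the conversion properties can return Volume structs instead of raw floats.
	public static implicit operator float(Volume v)
	{
		return v.value;
	}
}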

Just for the curious (and because I'm already talking about it); like values (Volumes, Lengths, etc.) with different units do properly compare against each other. So you can do this if you like:

code:
Mass grainMass = new Mass(7000.0f, UnitOfMass.Grain);
Mass poundMass = new Mass(1.0f, UnitOfMass.Pound);

Log(grainMass == poundMass ? "We're equal!" : "We're different!");
...will print "We're equal!"

The King of Swag
Nov 10, 2005


SupSuper posted:

Do you like Unity? Visual Studio? But struggled at putting the two together? Well, good news!

Microsoft acquires SyntaxTree, creator of UnityVS plugin for Visual Studio

Keep in mind that it doesn't work with Express; for us poors, we're still stuck with Unity VSExpress. It works great, but doesn't support debugging.

As for the convertible unit interface:

First things first: I did implement implicit conversion to float, and explicit to int. This allowed me to eliminate an entire series of methods, and now the conversion properties just return a new struct themselves, since a struct behaves as a float in other calculations. I also changed the UnitOf[X] enums to use plural names, since that flows more naturally.

ninjeff posted:

It's a bit kooky at first, but mathematically the way to get 80 out of "80 ounces" is to divide by one ounce — a unit is interchangeable with one times itself. So you could throw away or encapsulate UnitOfVolume (an enum?) and create a set of constants in Volume itself:
code:
Volume fuelLevel = 80f * Volume.Ounce;
float gallonsOfFuel = fuelLevel / Volume.Gallon;
This lets you do things like adding Volume.Gallon to itself to get two gallons, like you'd expect, or getting the ratio of ounces to gallons by dividing Volume.Gallon by Volume.Ounce.

The ratio of ounces to gallons is always 128:1; that's a defined number that will never change, although I do get what you're saying (the ratio of milliliters to drams isn't so easy to remember). What I don't get is why I would want to move the calculations outside of the struct and use constants instead of an enum (which removes any IntelliSense benefits and strong typing). The Volume/Length/etc. structs already support arithmetic operators; reading back, I don't think I emphasized enough that arithmetic on these values will always calculate the correct value, regardless of the units used. The only rule to remember is that the unit of the result is always the unit of the first operand.

code:
Volume gallonsLeft = new Volume(1.0f, UnitOfVolume.Gallons);
gallonsLeft += 1.0f; 						// 2 gallons
gallonsLeft += new Volume(64.0f, UnitOfVolume.Ounces);		// 2.5 gallons
Volume litersLeft = gallonsLeft.inLiters;			// 9.46 liters
litersLeft *= new Volume(500.0f, UnitOfVolume.Milliliters);	// 4.73 liters
Please let me know if I'm misunderstanding what you're saying though, because I feel like I'm missing the point. Looking at my code, I really wish I could do this:

code:
Volume gallonsLeft = Volume.Gallons(1.0f);
gallonsLeft += 1.0f;						// 2 gallons
gallonsLeft += Volume.Ounces(64.0f);				// 2.5 gallons
Volume litersLeft = gallonsLeft.inLiters;			// 9.46 liters
litersLeft *= Volume.Milliliters(500.f);			// 4.73 liters
Without having to create an entire set of methods, for all the different supported units.

Subjunctive posted:

My C# is rusty, but it seems like you could have an interface like

code:
var metric = myPints.as<Litre>();
where you have a baseline unit for each dimension, like mL for volume, and then as<T> would convert to the baseline and then divide by T.fromMillilitres or whatever. Making them be types would let you use using to import the ones you want without having to explicitly prefix at each point of use.

I haven't finished my coffee yet, so if I'm missing something big I apologize.

The reason this wouldn't work is that structs in C# can implement interfaces, but they cannot inherit, so unless I went crazy with extension methods, every single unit type would need to re-implement all the functionality of the others. Classes wouldn't be an option either, because obviously they're not value types. You're definitely on the money about how to proceed with the calculations though, and it's actually how it works internally already. When you perform a conversion (if you aren't converting to the same unit), everything gets converted into a native unit first, and then to the requested unit. If it wasn't done this way, either the number of calculations you'd need to write would be the number of units squared, or it'd require a rat's nest of branching. Because all calculations are a straightforward switch statement and a multiplication, doing two conversions (to native and then to the requested unit) is actually considerably cheaper (processing-wise) than the other methods.
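
For the curious, the conversion path looks something like this (a trimmed-down sketch; the real thing covers every unit, and the factors are the usual published values):

code:
using System;

public enum UnitOfVolume { Liter, Milliliter, Gallon, Ounce, Dram }

public static class VolumeConversion
{
	// Factor that takes one of the given unit to liters (the internal 'native' unit).
	static float ToLiters(UnitOfVolume unit)
	{
		switch (unit)
		{
			case UnitOfVolume.Liter:      return 1f;
			case UnitOfVolume.Milliliter: return 0.001f;
			case UnitOfVolume.Gallon:     return 3.785412f;
			case UnitOfVolume.Ounce:      return 0.0295735f;
			case UnitOfVolume.Dram:       return 0.0036967f;
			default: throw new ArgumentOutOfRangeException("unit");
		}
	}

	// value -> native unit -> requested unit; a switch lookup and a multiply/divide on each side.
	public static float Convert(float value, UnitOfVolume from, UnitOfVolume to)
	{
		float liters = value * ToLiters(from);
		return liters / ToLiters(to);
	}
}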

Subjunctive posted:

Edit: actually, the ergonomics of your existing interface get better if the caller just does using Whatever.UnitsOfMeasure as UoM or something, no?

I'm not sure I agree with this, only because I come from an Obj-C background, and abbreviations in interfaces are the devil's playthings. Especially abbreviations that are indiscernible out of context, just by reading the name and nothing else.

As always, I appreciate the input guys; it really does help to have some input from others.

The King of Swag
Nov 10, 2005


Subjunctive posted:

Hmm. Here's what I was thinking, untested and unlikely to compile:

code:
**snip**

This would actually work, but I don't know how efficient it would be, as any struct referenced through an interface has to be boxed and unboxed. Speaking of efficiency, I currently cache the last conversion performed, so you can call the same conversion repeatedly without needing to store the result in a separate variable. This unfortunately does mean that the size of the value doubles (internally it's a float and an enum), and I'm wondering if storing the cache is even worth it.

Subjunctive posted:

That said, simply Units would probably be clear and unique as a package name.

I'm not sure if you're talking about the namespace, or the UnitOf[X] enum names, so I'll address both. Right now I'm using the namespace UnitOf, but I also think your suggestion of just Units is a good one.

If you meant the enums, then I'm using names like UnitsOfLength and UnitsOfVolume because I have many different types of supported units, and I don't want to mix them into a single enum, since many conversions simply don't exist (such as from milligrams to Fahrenheit). Keeping them as separate enums gives compile-time type safety that forbids these non-conversions.

This is a list of the different unit types:

UnitOfLength, UnitOfVolume, UnitOfMass, UnitOfEnergy, UnitOfPower, UnitOfTemperature, UnitOfAngle, UnitOfPressure, UnitOfTime

I'm actually thinking of releasing this on the Unity asset store when it's done, since I don't see anything like it, and I know that at least in roguelike circles, conversion between different units is common (at least for display/internationalization purposes). Doing conversions seems simple, but there's actually a lot that goes into making them user-friendly, as you can see from me asking about interface ideas.

The King of Swag
Nov 10, 2005


ninjeff posted:

**snip**

This reads pretty clearly to me: one gallon, plus one gallon, plus 64 ounces, plus 500 mL, in L.

There's no loss of Intellisense or strong typing - a Volume is a Volume and a float is a float, and they're not interchangeable at all. You can multiply a Volume by a float, but you can't multiply a Volume by another Volume. Implementing implicit conversion to float would actually break the strong typing you're looking for here, as it would mean that two Volumes could be multiplied together.

You say "the unit of the result is always the unit of the first operand", but actually volumes themselves don't really have units - it's extracting a displayable number that requires you to refer to a unit, and there's no need for the display unit to be the same as the unit you used to create the volume. You can add a gallon to three liters and get the result in drams.

I understand it now, and I think you have the right idea on this one, although I still believe the struct should maintain a 'native' unit, which it uses to decide the returned value on access. You are also right in that units are really just a way to display a value, and that the same volume in different units is still the same volume. But at the end of the day, the volume does need to be in a unit, and asking the user to explicitly extract a float representation of the volume in the unit they want goes against treating it as much like a primitive as possible.

I understand that with the implicit cast, you can't prevent someone from multiplying volumes, but other types of units (lengths for instance) do make complete sense with all arithmetic, and have no inherent reason to prevent the implicit float cast. To me it would only be confusing if you allowed some types to be used as floats, and prevented others. Especially since they're all nothing more than fancy wrappers around a float. Forbidding the conversion for all of them, even when it makes sense for those types, doesn't make sense to me.

Combining both the previous examples:

code:
Volume gallonsLeft = 1f * Volume.Gallon;	// Volume.Gallon keeps gallons as its 'native' unit; this is inherited by gallonsLeft. The actual stored value is still in the common unit, liters. 
gallonsLeft += 1f;				// 2 gallons
gallonsLeft += 64f * Volume.Ounce;		// 2.5 gallons
gallonsLeft -= 500f * Volume.Milliliter;	// 2.37 gallons
Volume dramsLeft = gallonsLeft.inDrams;		// 'Native' unit is drams, as there's an explicit instead of implicit conversion here.

MethodThatExpectsFloat(gallonsLeft);		// Gets 2.37
MethodThatExpectsFloat(dramsLeft);		// Gets 2424.74; internally both gallonsLeft and dramsLeft store the same value: 8.96 (liters).
The reason I'm still leaning towards tracking a 'native' unit, is that I imagine it would get really ugly and tiring to put this everywhere you needed a conversion:

code:
float gallons = myVolume / Volume.Gallon;
float liters = myVolume / Volume.Liter;
// etc...
Plus, maybe others are different, but I think of all measurements as having an inseparable unit that's tied to context. My Jeep holds 20 gallons of fuel; that's also 75.7 liters. But I can calculate as many equivalent values in other units as I want; it's still a 20-gallon tank. It may even have a hidden level etched into the inside of the tank that tracks the level in liters, but when full, it still holds exactly 20 gallons, because it was intended as a 20-gallon tank. If you want to make a copy of it and call it a 75.7-liter tank, that's great: you now have a 75.7-liter tank; it also happens to hold 20 gallons, but it is a 75.7-liter tank.

I'd like to know how you feel about this hybrid style, and I'd also like to hear what others think of the different proposed styles.

The King of Swag
Nov 10, 2005

The equality comparison for all types must already be overloaded to comply with IEquatable<T> (all types implement IEquatable<T>, IComparable<T>; they'd be almost useless with keyed or sorted containers without those), and since I'm using Unity, I do the comparison between values with Mathf.Approximately. So far it seems to work great. A standardized unit is already stored internally, and converted to the format needed on request.

code:
Volume aGallon = Volume.Gallons(1f);	// Internally stores a value of 3.78541 L
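For reference, the Equals side of it is about as simple as it sounds (a sketch; assumes the value is stored internally in a standardized unit like liters):

code:
using System;
using UnityEngine;

public struct Volume : IEquatable<Volume>
{
	private readonly float liters;	// standardized internal value

	public Volume(float liters) { this.liters = liters; }

	// Tolerant comparison, since both operands have usually been through at least one conversion.
	public bool Equals(Volume other)
	{
		return Mathf.Approximately(liters, other.liters);
	}
}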
Which makes me want to note that I've discussed this with others (one of our resident programmers in TFR being particularly helpful), and the consensus is that implicit conversion to float is fine, as long as no magic-number +/- is allowed and any invalid arithmetic throws an exception. But considering that so many here are so averse to the implicit conversion, I'll drop it, even though in practice I've found that the lack of it makes my tests more verbose for little gain. Which is to say, the pretty much finalized design is:

code:
Length boxWidth = Length.Feet(2f);
boxWidth += 1f;					// Illegal; no magic numbers.
boxWidth += 1f * Length.Foot;			// No implicit conversion to float, makes this illegal. This still just looks strange to me anyway.
boxWidth += Length.Feet(1f);			// Legal; number is an explicit type.

boxWidth *= 2f;					// Legal, because multiplication of lengths by a scalar is a valid operation.
Area boxArea = boxWidth * Length.Inches(30f);		
Volume boxVolume = boxArea * Length.Inches(6f);
Length boxHeight = boxVolume / Area.SquareCentimeters(13935.456f);

MethodThatExpectsFloat(boxHeight.InInches);	// 6 inches
Finally, for those curious about the total variety of supported units: Initial Supported Units

Before anyone goes about claiming that I went overboard on the total number of supported units, remember that when this is done, I'm planning on releasing it to the asset store, where a wider variety of units will be needed by users. That said, I'm almost frightened that, barring the Imperial units (which are not the same as US Customary units), I actually see a use for the vast majority of those within my own projects.

The King of Swag
Nov 10, 2005


Jewel posted:

Why is this illegal? Why convert to float? Float multiplied by Length should return a Length, then Length plus a Length is valid. This seems like fine syntax to me.

Quick Edit: Which also lets you just say "Length boxWidth = 3f * Length.Foot;" which is really natural too.

Because overloading public static Length operator *(Length first, float second) and public static Length operator *(float first, Length second) has the potential for odd errors in complex statements. But I suppose I can add it anyway, and the end user will just have to be extremely careful to specify the order of operations.

Inverness posted:

C# doesn't have a separate *= operator though anyways.

If boxWidth *= 2f is legal then boxWidth = boxWidth * 2f is what it will actually be doing since you will have needed to define the * operator to do that.

I'm using *= and such in these examples for brevity; I don't think C# even allows you to define separate [x]= operators. They're all calculated by using the overloaded [x] operator with the source variable fed as the first operand, and then set as the result.

Inverness posted:

It seems like the best way to handle this would be to still have the default Equals() and equality operator use an exact comparison, but then have an overloaded Equals() method allow the user to specify a tolerance for the comparison.

a.Equals(b, 0.001) for manual way to do it along with a.NearlyEquals(b) that uses a common tolerance.

Two floats with the "same" value that were come across with different calculations, will more often than not, not actually equal each other. Making the default equality operator actually check for exact equality is almost never what you want, which would make feeding them into a collection a serious pain. The best solution would be having the default equality operator perform a tolerance check, and then have another ExactlyEquals() method that will perform an exact equality check.


The King of Swag
Nov 10, 2005


Inverness posted:

I realize this, but the reason I suggested this is also because of consistency. The default equality implementation for value types, including other floats or things like strings, is always an exactly-equal comparison. For case-insensitive string comparison or tolerant float comparison you have to specify how you want that handled. I think having the default equality comparison for your unit types be a tolerant one obscures what is actually happening.

The question arises as to what does a user do if they don't like the tolerance of the default equality comparison. People have different needs after all. That tolerance is also obscured from a casual glance at the code unless you go into the documentation to check what it is.

Maybe you could do something like specify the tolerance for the default comparisons statically or for a specific thread, but it still seems like it is introducing uncertainty. You're changing the behavior of the equality operator by having it be tolerant by default. It's something I specifically advise people against doing when it comes to languages that have operator overloading.

I'm going to put some serious consideration into this; probably mock something up and see how unwieldy it is to use compared to how it is now. I have the feeling (actually, I'm almost certain) that it'll make them unwieldy to use as keys in keyed collections, but might make other comparisons easier to make, if I allow for variable tolerances; either on a type or per-instance basis.

Something else I'm also considering that's tangentially related is switching internally from floats to doubles. Doing more tests, I found that the precision of floats isn't that great when dealing with the far ends of the supported units (gigajoules vs. microjoules), and while it remains equally important to use floating-point numbers smartly*, using a double does allow for much greater precision at very large and very small values. Computationally, doubles don't have the performance drawbacks over floats that they used to (at least for .NET on modern hardware), so I'm not so much worried about that. The only real issue I can find with it is that the user will likely have to cast the values to float in most situations, as Unity and games still stick to float like bees to honey.

* Don't keep performing calculations on the same value over and over, whenever you have access to the source value and can perform a fresh calculation using it instead. This prevents error creep over time.
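
To make both points concrete, here's a small demonstration of why the switch matters at the extremes and why recalculating from the source value avoids error creep (pure illustration, not library code; the magnitudes mirror the gigajoule/microjoule example):

code:
using System;

class PrecisionDemo
{
    static void Main()
    {
        // 1 gigajoule plus 1 microjoule: a float can't even see the change,
        // because a float ULP near 1e9 is 64.
        float  f = 1e9f + 1e-6f;           // still exactly 1e9f
        double d = 1e9  + 1e-6;            // representable; a double ULP near 1e9 is ~1.19e-7
        Console.WriteLine(f == 1e9f);      // True
        Console.WriteLine(d == 1e9);       // False

        // Error creep: repeatedly accumulating a value vs. one fresh calculation.
        double accumulated = 0.0;
        for (int i = 0; i < 1000000; i++)
            accumulated += 0.1;
        double fresh = 1000000 * 0.1;
        Console.WriteLine(accumulated - fresh);   // small but nonzero drift
    }
}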

The King of Swag
Nov 10, 2005

To escape the closure,
is to become the God of Swag.

OneEightHundred posted:

To be clear: Using integers will not save you from rounding-related issues. However, they will save you from problems stemming from rounding not happening at the precision you expect, and they tend to fail more consistently because round-towards-zero is less likely to make something accidentally work than round-to-nearest.

I'm generally of the opinion that if you can represent the largest and smallest values that you will ever reasonably be interested in for something using an integer, and you don't expect to multiply it by something that will cause it to overflow (which in many cases can be worked around anyway because integer multiply producing outputs at double the input precision is standard on every architecture worth mentioning), then you should use integers.

I personally think this is just asking for trouble; the IEEE 754 standard has well-defined and predictable behavior, and what you're essentially doing here is creating a custom fixed-point representation with its own rounding behavior. That rounding behavior may be better suited to your purpose than the standard's, but it's non-reproducible from the outside without duplicating the internal representation.

The King of Swag
Nov 10, 2005

To escape the closure,
is to become the God of Swag.
Except that IEEE 754 is designed specifically for cases where you're dealing with real-world measurements (such as length, mass, time, etc.); the trade-off is that any particular exact value may not be exactly representable. The solution to this is fuzzy equality, which is easier said than done, but an approach that takes into account both an acceptable margin of error and an acceptable ULP difference handles all cases while staying relatively simple to implement. The real trick is defining the acceptable margin of error (any difference smaller than that margin automatically counts as equal) and the acceptable number of ULPs (units of least precision) for large-magnitude values, where adjacent floats may be farther apart.
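
Here's a sketch of that combined check (the helper name, default margin, and default ULP count are placeholders to be tuned, not the library's actual values): treat two doubles as equal if they're within an absolute margin of error, or within a small number of ULPs of each other.

code:
using System;

static class FuzzyMath
{
    // Differences within maxError are automatically equal; otherwise fall back
    // to counting how many representable doubles sit between the two values.
    public static bool NearlyEqual(double a, double b,
                                   double maxError = 1e-12, long maxUlps = 4)
    {
        if (Math.Abs(a - b) <= maxError)
            return true;

        // Values of different sign that aren't within the margin are never equal.
        if (Math.Sign(a) != Math.Sign(b))
            return false;

        // Reinterpret the doubles as integers; for same-signed values the
        // difference of the bit patterns is the distance in ULPs.
        long aBits = BitConverter.DoubleToInt64Bits(a);
        long bBits = BitConverter.DoubleToInt64Bits(b);
        return Math.Abs(aBits - bBits) <= maxUlps;
    }
}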

Here's what Wikipedia has to say about IEEE 754.

Wikipedia posted:

As decimal fractions can often not be exactly represented in binary floating-point, such arithmetic is at its best when it is simply being used to measure real-world quantities over a wide range of scales (such as the orbital period of a moon around Saturn or the mass of a proton), and at its worst when it is expected to model the interactions of quantities expressed as decimal strings that are expected to be exact. An example of the latter case is financial calculations. For this reason, financial software tends not to use a binary floating-point number representation.[39] The "decimal" data type of the C# and Python programming languages, and the IEEE 754-2008 decimal floating-point standard, are designed to avoid the problems of binary floating-point representations when applied to human-entered exact decimal values, and make the arithmetic always behave as expected when numbers are printed in decimal.

This actually fits very well with the most common use cases for these units, as the only time you're dealing with (and expecting) an exact value is when you explicitly create a measurement with one. Most of the time you're (or at least I am) going to be taking measurements, such as the distance between the player and an object or the amount of water in a map tile, and then performing calculations, conversions and comparisons on them.

The King of Swag
Nov 10, 2005

To escape the closure,
is to become the God of Swag.
Just as a heads-up: my support for floating point isn't a dismissal of your suggestions for fixed point. In fact, I just spent quite a bit of time looking at various fixed-point options and weighing the pros and cons of both fixed and floating point. In the end I've decided to stick with floating point, because more often than not you're comparing values relative to each other rather than doing direct equality comparisons. For the cases where you do need a direct comparison, I've already written a fuzzy equality test, although I suspect it will take quite a bit of tuning to get the margins of error right for the most common use cases. On modern PC hardware, floating point arithmetic is also orders of magnitude faster than the fixed-point arithmetic in every fixed-point implementation I could find. There's a lot to be said for purpose-built hardware that does floating point math.

The King of Swag
Nov 10, 2005

To escape the closure,
is to become the God of Swag.

OneEightHundred posted:

Fuzzy testing still has some challenges in that you have to remember to use it - It's easy to forget to use fuzzy compares when you're not doing equality comparisons, i.e. when you use > or <, and get the same class of bugs.

Also, I'm not really as enthusiastic about using fixed point math compared to just using integers for storage, but I don't know what you're doing that gives you an order of magnitude of difference in performance with fixed point. Extended-precision multiplies and divides are FREE on x86/x64 because they're the only types of integer multiplication and division they even support and shifts are dirt cheap. I ported an MP2 decoder, which is basically a giant stress-test of integer multiply speed, to fixed-point not that long ago and it was almost identical in speed to the float implementation.

Fixed-point libraries are a completely different beast from extended-precision multiplication/division, and they're most certainly more than just an integer multiply/divide and a few shifts. I'm being serious; every implementation I could find warns that it's many times slower than floating point math.

The reason I'm still iffy on extended-precision integer storage, other than the overflow issues at the high extremes* and the lack of precision at the low extremes, is that every division is a truncation. Values calculated the same way will still come out the same, but the actual error will naturally be greater than with floating point math, which defaults to round-to-nearest. And it still doesn't solve the problem that the same value calculated in two different ways may not compare equal due to rounding error.
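
A trivial demonstration of the truncation point (illustration only):

code:
using System;

class TruncationDemo
{
    static void Main()
    {
        // Integer division always truncates toward zero, so the same quantity
        // computed two different ways can disagree by nearly a whole unit:
        Console.WriteLine((10 / 3) * 3);      // 9
        Console.WriteLine((10 * 3) / 3);      // 10

        // Floating point rounds each operation to the nearest representable
        // value, so the equivalent error is on the order of an ULP instead:
        Console.WriteLine(Math.Abs((10.0 / 3.0) * 3.0 - 10.0) < 1e-14);  // True
    }
}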

I'm not opposed to the idea if there are real benefits; I just don't see what the benefit is here.

* Using energy as an example: if I use 64-bit integers and store everything at 1/100 of a microjoule, I can only store about +/- 10 gigajoules. That span might be uncommon for lengths or volumes, but it's not unheard of for energy measurements, and I don't think 10 GJ is a high enough upper range, without sacrificing too much precision at the low end (you can push it to 100 GJ by reducing the precision to 1/10 of a microjoule).

Edit: To clarify further: to get enough precision for something like volume (since not all units are multiples of each other), you'd have to store your internal representation in units of 1/10,000,000 of a microliter to exactly represent all supported volume types. Let's say we don't need absolute precision, since that's impossible without arbitrary-precision numbers anyway, and we throw out the two or three units that require that much precision to represent exactly. So we reduce the precision to 1/1,000 of a microliter, i.e. 1 nanoliter; that's still pretty accurate, right?

Well, let's consider the (US customary) fluid dram, which is defined as exactly 3.6966911953125 mL, and is actually one of the units that converts to metric and back more nicely. That's 3,696.6912 µL, or 3,696,691.2 nL. Over 100,000 drams, the accumulated error becomes 20,000 nanoliters, or 20 microliters. If we were instead using a double and storing our values internally as liters (100,000 drams is 369.66912 liters), the ULP would be 5.6843418860808E-14. A reasonable first guess for the error after multiple operations on a value is 4 ULPs, which puts our error somewhere between 0 and roughly 2.3E-13 liters. Obviously it could end up larger than that, but that's not likely unless we performed tons of operations on the same value. Either way, it's nowhere near a 20 microliter error.

Of course, the caveats are that the error dramatically increases as values grow increasingly large in magnitude (and decreases as you approach 0), and that floats have drastically lower precision than doubles.
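
For anyone wanting to reproduce the ULP figures above, here's a small helper that measures the gap to the next representable double (illustrative only, and only correct for positive values to keep the sketch simple):

code:
using System;

static class UlpDemo
{
    // Distance from a positive value to the next representable double above it.
    static double Ulp(double value)
    {
        long bits = BitConverter.DoubleToInt64Bits(value);
        return BitConverter.Int64BitsToDouble(bits + 1) - value;
    }

    static void Main()
    {
        Console.WriteLine(Ulp(369.66912));   // ~5.68e-14, the figure quoted above
        Console.WriteLine(Ulp(1e9));         // ~1.19e-07
        Console.WriteLine(Ulp(1e-6));        // ~2.12e-22
    }
}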

The King of Swag fucked around with this message at 12:05 on Jul 9, 2014

The King of Swag
Nov 10, 2005

To escape the closure,
is to become the God of Swag.
Just to comment again on the floating point issues: I just finished writing (and verifying with tests) a fuzzy comparator and a hashable fuzzy comparator. Following Inverness's suggestion, I made the different types perform regular equality checks instead of fuzzy equality checks. If you explicitly want fuzzy equality or comparison, you can do this:

code:
// These use the default margin of error and ULP tolerance as defined in the settings.
myLength.Fuzzy() == otherLength.Fuzzy();
myLength.Fuzzy() < otherLength.Fuzzy();

// ... Repeat for the other types of comparison ...

// These use custom margin of error and ULP tolerance to perform the fuzzy comparisons.
myLength.Fuzzy(customError, customULPTolerance) == otherLength.Fuzzy(customError, customULPTolerance);
// ... etc ...
Now, it's a really bad idea to use a regular floating point number as a key in a collection unless it's an assigned value (i.e. no calculations are ever performed to obtain that key). Traditional fuzzy comparison solves the problem of dealing with error, but it isn't hashable and thus can't be used as a key in collections. So to match the fuzzy comparator, I put together the hashable fuzzy comparator, which makes some compromises in the equality check that allow you to generate a hash for it* (there's a rough sketch of the idea after the list below). The compromises are:

  • Both values must have the same margin of error and ULP tolerance; both are used in the hash generation, and thus become part of the equality.
  • The hashable range of values is reduced by an amount relative to the amount of error that is allowed. This opens up the potential for more hash conflicts, but admittedly only for values that are very close together (though not close enough to compare equal).
  • The hash works by introducing imaginary boundaries into the underlying bit representation of the floating point number (in contrast to the very real 0.0 boundary). That means fuzzy values that would otherwise be equal can compare as unequal if one number lies across a boundary from the other. Exactly equal values will always compare equal (they can't lie across a boundary from each other). The likelihood of this occurring is inversely related to the reduction in the hashable range of values: the less likely it is to happen, the more potential hash conflicts, and vice versa. That said, while I'm still tuning this, it occurs about 1/50th of the time in my tests, and I'm using a pretty conservative boundary scale. I could easily push the boundaries further out, greatly reducing it, while still having only minimal impact on hash conflicts.
  • The larger your margin of error and/or ULP tolerance, the more potential hash conflicts you introduce. I don't really consider this anything more than an edge case, though, because sane tolerance values are very small, and by the time it became an issue your tolerances would be so large that completely unrelated values would compare equal.
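
This isn't the actual implementation (which works on the bit pattern, as described above), but here's a minimal sketch of the boundary/bucket idea so the trade-offs are easier to see; the names and the value-based quantization are my own illustration:

code:
using System;

// Sketch: carve the number line into buckets, define equality as "same bucket",
// and hash the bucket so that equal values always produce equal hashes.
struct HashableFuzzyDoubleSketch : IEquatable<HashableFuzzyDoubleSketch>
{
    public readonly double Value;
    public readonly double Tolerance;   // boundary width; both operands must agree on it

    public HashableFuzzyDoubleSketch(double value, double tolerance)
    {
        Value = value;
        Tolerance = tolerance;
    }

    // Which bucket (between which pair of boundaries) the value falls into.
    private long Bucket
    {
        get { return (long)Math.Floor(Value / Tolerance); }
    }

    public bool Equals(HashableFuzzyDoubleSketch other)
    {
        // Values within tolerance but straddling a boundary compare unequal --
        // the caveat from the third bullet above.
        return Tolerance == other.Tolerance && Bucket == other.Bucket;
    }

    public override bool Equals(object obj)
    {
        return obj is HashableFuzzyDoubleSketch && Equals((HashableFuzzyDoubleSketch)obj);
    }

    public override int GetHashCode()
    {
        // The tolerance is folded in because it's part of the equality.
        return Bucket.GetHashCode() ^ Tolerance.GetHashCode();
    }
}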

Using the hashable fuzzy values is very similar to the regular fuzzy values:

code:
myLength.FuzzyAndHashable() == otherLength.FuzzyAndHashable();
myDictionary[myLength.FuzzyAndHashable()] = aValueToStore;	// The main purpose of hashable fuzzy values.

* When I say generate a hash, I mean generating a valid hash that meets specifications. That is, if A == B, then A.Hash == B.Hash; but if A.Hash == B.Hash, A == B doesn't need to be true.

The King of Swag
Nov 10, 2005

To escape the closure,
is to become the God of Swag.

Inverness posted:

I'm not sure about a case where you need fuzzy less-than or greater-than comparison.

If you perform fuzzy equality, then you must also perform fuzzy ordering comparisons, or you'll get inconsistent results: with an exact <, two values could simultaneously compare equal (fuzzily) and less-than, which is nonsense. The same goes for performing fuzzy equality and fuzzy comparisons with different tolerances.

Inverness posted:

Is also odd since it seems like you're making the code more verbose just to hold onto the ability to use the equality operator. By convention, custom equality checking using an instance is done with an overload of Equals() like I suggested before:

I could overload the Equals method like you suggested, but then there's no way to add an Equals overload that uses the default tolerances; it would clash with the original Equals overload, which under your suggestion now performs an exact comparison rather than a fuzzy one. Forcing the user to specify the tolerances every time (rather than falling back to user-defined defaults when omitted) is just asking for trouble. And that still leaves the question of how you handle fuzzy </=/> comparisons when the overloaded operators perform exact comparisons to match the Equals method.


Maybe I should be clearer about exactly what's happening here. There are two structs, FuzzyDouble and HashableFuzzyDouble, which, as the names suggest, are value types representing a double that should be treated as a fuzzy value and a double that should be treated as a hashable fuzzy value (using an algorithm compatible with hashing). They're separate types because the non-hashable algorithm lacks the caveats of the hashable one, and you shouldn't be using the hashable type if you don't need to generate a hash from it. Neither type has anything to do with the units, and they can be used anywhere in a program that needs fuzzy equality and comparisons.

For the units themselves: MyUnitTypeX.Fuzzy() returns a FuzzyDouble, and MyUnitTypeX.FuzzyAndHashable() returns a HashableFuzzyDouble.

Now, I agree with you that I should probably implement a FuzzyEqualityComparer for direct use with Dictionaries and other collections, but that doesn't help in the (most common) case where you simply want to take fuzzy values and compare them, without having to keep a separate *EqualityComparer instance around or feed the values to a method on a static class, which forces you to keep specifying the tolerances on every use.
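
For what it's worth, a comparer along those lines might look something like this (a sketch only; the bucket-based hashing is carried over from the earlier illustration and is not the library's actual algorithm):

code:
using System;
using System.Collections.Generic;

// Hypothetical comparer; the tolerance is fixed at construction so callers
// don't have to re-specify it on every lookup.
sealed class FuzzyEqualityComparer : IEqualityComparer<double>
{
    private readonly double tolerance;

    public FuzzyEqualityComparer(double tolerance)
    {
        this.tolerance = tolerance;
    }

    private long Bucket(double value)
    {
        return (long)Math.Floor(value / tolerance);
    }

    public bool Equals(double x, double y)
    {
        return Bucket(x) == Bucket(y);
    }

    public int GetHashCode(double value)
    {
        // Equal values (same bucket) are guaranteed to hash identically.
        return Bucket(value).GetHashCode();
    }
}

// Usage:
//   var byMeters = new Dictionary<double, string>(new FuzzyEqualityComparer(1e-9));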

After weighing all the options, the best design seemed to be what I have right now, which was to develop explicit fuzzy values. As already touched upon, these values are agnostic about where they're used in code; they're created with a value and a set of tolerances, and then they're ready to be used anywhere that needs fuzzy value comparisons. The reason methods like Fuzzy() and FuzzyAndHashable() exist on the unit structs is that there needs to be a fast and easy way for the user to get hold of fuzzy values.

I want to be clear that I understand the .NET design conventions and I'm not just implementing interfaces all willy-nilly. A lot of thought goes into the different ways something can be handled, and what I settle on is usually that way for a good reason. The reason I'm discussing it in this thread (and I appreciate the feedback, by the way) is that the feedback helps highlight any design deficiencies I may have overlooked. It also helps with naming, as sometimes you guys simply have a better name for a class/method/etc. than what I gave it.

The King of Swag
Nov 10, 2005

To escape the closure,
is to become the God of Swag.

Unormal posted:

http://www.microsoft.com/bizspark/ yourself some Ultimate licenses!

Just in case anyone else signs up for BizSpark: Microsoft says you'll be approved or declined within 5 business days, but I wouldn't believe that. I signed up right after Unormal posted this (on July 2nd), and I still haven't heard a peep out of them.

The King of Swag
Nov 10, 2005

To escape the closure,
is to become the God of Swag.

Stick100 posted:

Last I checked it said 10 business days. Must have some people out for summer or something.

If I go to "My Bizspark", it gives me: "Your account is now pending approval. We will be in touch with you regarding your status within five business days."

The King of Swag
Nov 10, 2005

To escape the closure,
is to become the God of Swag.

One Eye Open posted:

Texture arrays are an OpenGL ES 3.0 feature, so if you're targeting mobile, you're excluding a large proportion(still a majority, uptake-wise, AFAIK) of your users who are still on ES 2.0.

It's also not supported in DirectX 9, which is still the primary render target for indie desktop games, and a lot of open source game engines.

In other news, my first Unity Asset Store asset has just been approved, and it's free to boot!

Fuzzy Logic - A minimal yet robust fuzzy logic library for floating point numbers.

I needed the majority of the functionality for that unit conversion library I'm working on (and was getting help with in this thread the other week), and I realized I was so close to having another small library of its own that I decided to go ahead and flesh it out, then release it for free. It doesn't actually rely on anything Unity-related, so here's the generic repo for it. Obviously it's written in C# and not UnityScript or Boo.

The King of Swag
Nov 10, 2005

To escape the closure,
is to become the God of Swag.
Hey, so my BizSpark application finally went through; I applied just a day under a month ago. It took contacting Microsoft support to get anything to go ahead, but once that process was started, they were very nice and very quick about it. UnityVS is obviously the first plugin being installed in my new copy of VS Ultimate, but is ReSharper worth the money? I've looked at it before, and it looked awesome, but never had a copy of VS that could install plugins.

P.S. Stay away from all things GPL; it's a cancerous license that's sickening the open-source waters. When there are so many good licenses, such as BSD, MIT, Apache 2, or my favorite, NCSA, I don't understand why so many people look at the GPL as anything but poison.

The King of Swag
Nov 10, 2005

To escape the closure,
is to become the God of Swag.

Gul Banana posted:

is there a better alternative license for "use this, but don't extend it commercially without contributing back changes"?

Yeah, it's called manning up and using a permissive license. I don't actually mean that to sound hostile, just that I want to make it clear that the reasons people have for using GPL licenses are moot.

  • Your source isn't magical, no one wants to steal it.
  • Even if a company wanted to steal your source or use it without contributing back, they're not likely to. Maintaining your own private branch of a small source-set is hard enough; doing that on larger source-sets is insane and no one does it if they have a choice.
  • Expanding on the last point: companies just want to make money with their products. Your source is not their product; your source is likely a (very) small part of their product as a whole. There are absolutely no benefits, and a ton of disadvantages, to them keeping any changes to your source all to themselves.
  • Being licensed under the GPL absolutely does not stop them from packaging your source as a whole and using it as their product. As long as they publish the source, they're in the clear. The GPL does not do what most people think it does.
  • The GPL is poisonous, and it poisons your project if you attach the license to it. You're not just cutting yourself off from potential commercial users that don't want to share their source and/or changes; you're cutting yourself off from the entirety of the non-GPL open-source community. Group A with project X (under MIT/Apache/etc.) would not and could not use your project Y if it's licensed under the GPL.
  • Companies using permissively licensed source code and never contributing back changes has never been the endemic problem the Free Software Foundation would have you believe it is.

Honestly though, I think anyone looking to release their software under an open source license needs to think long and hard about why they're actually releasing it and what they want it to be. The FSF goes on and on about how they're the protectors of open source and free software and yadda yadda yadda; they're a bunch of lying assholes. If you want software to truly be free, then you release it under a permissive license. They're called permissive licenses for a reason: almost all of them allow the code to be used for any purpose, and they only really exist to protect the developer from legal liability. That is truly free software; free as in free beer, free as in free speech. The GPL gives you free as in free beer, but it strips you of free as in free speech. GPL-licensed source comes with rules and stipulations on what can be done with it; it is open source, but it is not free software.

From my point of view, I see people that choose permissive licenses as of one mindset, and people that prefer GPL as of another. I'm obviously biased here (against GPL), but I'm not actually going to make a claim as to which mindset is the right one to have, because they're purely subjective.

The permissive mindset is that open source code trends towards free, which means that most users are likely to contribute back changes even without being compelled to. Even if they don't, it doesn't matter, because if those changes were actually that critical or needed, someone else would add them eventually. Free is innate, free is powerful; it brings people in without even asking.

The copyleft (GPL) mindset is that open source code trends towards closed if it isn't actively protected. Free is weak, it must be protected or the flame will die. People must be forced to contribute or they never will; if you don't make them release their changes, they'll hoard them all to themselves to starve competition.

The King of Swag fucked around with this message at 16:53 on Aug 2, 2014

The King of Swag
Nov 10, 2005

To escape the closure,
is to become the God of Swag.

xgalaxy posted:

This is exactly the thing I'm battling now with my posts in another thread regarding ServiceStack. The stock library functions fine under Xamarin.iOS and Xamarin.Android but didn't work on Unity. Luckily this particular library was really easy to port. However, other libraries which utilize .NET 4+ that work perfectly fine on Xamarin are not portable to Unity without tremendous amounts of effort, especially if they make heavy use of new .NET features like tasks and async / await.

It seems like Unity is trying to get rid of the problem by developing their C# to native compiler. My biggest fear is that this will just make things even more incompatible, or worse, that their native implementations will exhibit inconsistent behavior with the "reference" implementation (Microsoft's).

And why are they doing their own native compiler when Microsoft is already working on this very thing? Just seems like they are falling into the same trap.

You basically just summed up why I'm so scared about the direction the Unity team is going, the future of Unity if they continue down that path, and why I have the sinking feeling that I hitched my horse to a sinking ship.

The King of Swag
Nov 10, 2005

To escape the closure,
is to become the God of Swag.
Some of you may remember from a few weeks back when I was posting about UnitOf, the unit-of-measurement conversion library I've been working on. Well, after getting sidetracked with my Fuzzy Logic library (which, despite being free, sadly has fewer than two dozen downloads after two weeks on the Asset Store), I've gone back to working on UnitOf. Most of the hard work was already done the other week; what remains is largely the tedious stuff (writing all the unit conversions themselves) and polishing. The only genuinely hard part left to solve was converting a unit of measurement into a displayable string, which I'm happy to say now works.

Given a format string and a measurement, it generates a string that nicely prints the measurement, not just in the specified unit but also in dividing units. The system goes a step further and selects the best-fitting unit for the format string and measurement you give it.

code:
Length testLength = Length.Meters(15.250004);

Debug.Log(testLength.ToString(UnitOfLength.Meters, "U3d2S"));
Debug.Log(testLength.ToString(UnitOfLength.Meters, "u3d2s"));

Debug.Log(testLength.ToString(UnitOfLength.Yards, "U3d2S"));
Debug.Log(testLength.ToString(UnitOfLength.Yards, "u3d2s"));
Debug.Log(testLength.ToString(UnitOfLength.Yards, "d3S"));
Outputs:
15 m, 25 cm, 4 µm
15 m, 2 dm, 5 cm // dm is decimeter

16 yd, 2 ft, 0.39 in
16 yd, 2′, 0.39″
16 yd, 2 ft, 393.858 mil

Here's the format arguments:

  • S -- Display symbols for units. Omitting this argument and s means that no unit symbols will be displayed.
  • s -- Display secondary symbols instead of primary symbols (S). Examples of secondary symbols are ° for degrees, and ″ for inches.
  • U# -- How many units the measurement can be split into, with # replaced by the number. Dividing units are smaller units that can be used to split up the larger unit; U3 used with a measurement of 3.324 meters would produce "3 m, 32 cm, 4 mm" (there's a rough sketch of this dividing-unit pass after this list). Omitting this and u# means the measurement will be divided as many times as possible.
  • u# -- Same as U#, but allows the use of uncommonly used units in addition to the more common units available. The previous example with uncommon units used, would instead output "3 m, 3 dm, 2.4 cm".
  • D# -- The absolute number of decimal places to display, with # replaced by the number of decimal places. Omitting this argument and d# defaults to the same behavior as d#, with the maximum supported number of decimal places.
  • d# -- The max number of decimal places to display, with # replaced by the max number of decimal places. Omitting this argument defaults to the max number of decimal places supported by Math.Round.
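
Here's a rough sketch of the greedy dividing-unit pass described above (the unit table, symbols, and the rule for skipping units that would print as zero are all assumptions for illustration; the real formatter is considerably more involved):

code:
using System;
using System.Text;

static class DividingUnitsSketch
{
    static readonly string[] Symbols       = { "m", "cm", "mm", "µm" };
    static readonly double[] SizesInMeters = { 1.0, 0.01, 0.001, 1e-6 };

    // Greedily split a length into at most maxUnits parts, largest unit first,
    // rounding only the final part to the requested number of decimals.
    public static string Format(double meters, int maxUnits, int decimals)
    {
        var result = new StringBuilder();
        double remainder = meters;

        for (int i = 0; i < SizesInMeters.Length && maxUnits > 0; i++)
        {
            bool smallestUnit = (i == SizesInMeters.Length - 1);
            bool lastSlot = (maxUnits == 1);
            double count = remainder / SizesInMeters[i];
            double part = (lastSlot || smallestUnit) ? Math.Round(count, decimals)
                                                     : Math.Floor(count);

            // Skip units that would print as zero, unless nothing smaller is left.
            if (part == 0 && !smallestUnit)
                continue;

            if (result.Length > 0) result.Append(", ");
            result.Append(part).Append(' ').Append(Symbols[i]);

            remainder -= part * SizesInMeters[i];
            maxUnits--;
            if (lastSlot) break;
        }
        return result.ToString();
    }
}

// Format(15.250004, 3, 2) -> "15 m, 25 cm, 4 µm", matching the example above.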

P.S. I pride myself a bit on writing code that doesn't allocate memory unless it absolutely needs to, purely because of how terrible the GC in Unity's Mono is. Even after two days of optimization (both performance- and code-wise), the ToString/formatting methods still allocate a relatively large amount of memory and perform multiple boxing operations per call. Working with strings in general is part of it, but the complexity needed to accomplish what you see above contributes too. Perhaps unsurprisingly, a lot of work goes into parsing both the format string and the output string, not to mention the calculations needed to decide which units to display as the measurement is divided down.

The King of Swag fucked around with this message at 04:33 on Aug 6, 2014

The King of Swag
Nov 10, 2005

To escape the closure,
is to become the God of Swag.
So apparently I have a knack for betting on the wrong horse; I purchased Daikon Forge not that long ago, and it's now been announced that it's no longer being developed and has been removed from the Asset Store: http://www.daikonforge.com/forums/threads/now-disappeared-from-the-asset-store.2235/

While the Asset Store is very much "buyer beware", I can't help but feel cheated. Daikon Forge was not a cheap asset, so purchasing it only to have it discontinued shortly after is really lovely.

The King of Swag
Nov 10, 2005

To escape the closure,
is to become the God of Swag.

Yodzilla posted:

Yeah, that sucks, but I guess it's like any other piece of software. I'm not familiar with the plugin, but reading those forums you linked makes the author seem really dodgy.

Daikon Forge was the other major alternative to NGUI.
