|
I have finally defeated my NIH syndrome by admitting defeat and really thinking about what exactly it was I wanted to do. I know that a lot of people suffer from 'Not Invented Here' syndrome, either from time to time or, as in my case, perpetually; so for those who suffer, take to heart what I have to say. It may be a bit long, but I think it has some important points, even for those who aren't themselves sufferers. If you don't want to read about this, please skip to somewhere near the end, because I have some questions I'd like to ask as well.

'Not Invented Here' is for people who like to program for the sake of programming; you will never release anything of value if you always recreate code-bases that are already freely available to you. Releases by people who have done everything themselves are the exceptions that prove the rule--in many cases they spend years developing just the framework that their software will be built upon.

So why my sudden change of heart? In a moment of triumph, I realized the futility of my work. For several years now I've been working not only on a game engine, but on the cross-platform API that I'm simultaneously using to develop the engine. C and Obj-C are my languages; I know many, but those are the two I actually enjoy coding in. The problem is the availability of a cross-platform core API to build on in those languages; with C, what you see is what you get. With Obj-C, Cocoa is grand for the Mac, and while cross-platform solutions exist (such as GNUstep and Cocotron), both the Foundation and AppKit classes are quite heavy-handed for games. So I did what any severe NIH sufferer would do: I laid out the API for the library exactly how I wanted it and defined how I wanted many details to work (memory allocation patterns, etc.). Then I went to work actually writing the library and game engine; first deciding what part of the engine I wanted to write, then building the part of the API it required to function. 
To put things in perspective, I did everything, including writing a complete set of both basic and advanced data structures. So what did 3 years of casual development get me? It got me a game engine that in its current stage is little more than an advanced cross-platform timing and IO framework that also happens to support scripting and rendering. It also earned me a core library that's fleshed out enough that I no longer need to expand upon it every time I work on part of the engine, but it is nowhere near complete itself. Frankly, what I've written so far is pretty drat awesome--I'm particularly proud of my timing system. But being honest with myself, none of it was worth the time it took to create; all of it has been done before, and although you would need to look harder for some than others, all of it has been done better and offered for free by others.

What finally made me kick the syndrome and reevaluate what I was doing was when I finished my particularly extensive (and honestly pretty drat good) Unicode support. I felt triumphant because of how long I had been avoiding implementing it; I had gone from only a basic understanding of how Unicode works to knowing exactly how it works. It was in my celebration that I realized exactly how much I had done, how long it had taken me to get there, and how much more I had left to do. I was heartbroken; I legitimately liked doing what I was doing, but somewhere along the way I had forgotten that the reason I started this whole programming mess as a kid was so I could make games. It took a lot of introspection to come to the conclusion that although I like programming in itself, I like making games a lot more--so to hell with doing it all myself, I'll make good use of the resources people have developed before me. That was when I decided to look towards middleware, and settled on Panda3D as the optimal engine for games of the level and scope I'm currently looking at making. 
It is here that I realized how unprepared I am to actually make games and not programs. Picking up Panda3D hasn't been the hard part; it's a well designed engine and very easy to use. I had to learn Python to use it, but learning a new language is minuscule compared to learning the concepts behind programming, and only took a few days. No, what has taken time and is beginning to really stretch the boundaries of my skill pool is the realization that I actually need to make assets for a game.

More than that, coming up with game ideas is easy, but coming up with core gameplay concepts for games that would be feasible for me to create is hard as hell. Part of what makes it so hard is that as you increase the scope of a project, it becomes easier to develop gameplay concepts for it, but less and less feasible that the project could ever be completed. Simplicity is key for a one-man developer, but simplicity means that your game needs one core gameplay concept that everything revolves around. Being the backbone of your game means this concept needs to be a shining star, or the game simply won't be worth playing. When you have a larger game with 3 or 4 different things going on, each individual thing can be flawed, because their interaction together is often the defining characteristic of your game, not the individual concepts themselves.

So I throw off the shackles of NIH syndrome thinking I'm just going to jump straight into developing a game, and instead find myself making a list of all the skills I need to develop first. To put things into perspective, over the last week I've simultaneously been learning how to model in 3D, texture 3D models, and design audio (unfortunately taking a back-seat to learning to model), all while continually coming up with game ideas and trying to work out gameplay that would be both fun and possible for me to actually develop. 
Frankly, I'm amazed that I'm actually beginning to learn this stuff and not have it all just a horrible jumble in my head. Worst of all, I can't just settle for programmer art and call it a day. Now I mean, when all is said and done my artwork may still be (and probably will be) terrible, but I can't stand to do anything less than my best. It doesn't feel right to put something together that will be released to others, knowing that I personally could have done better but couldn't be bothered.

Now that all that garbage is out of the way, I have a few questions I'd like to ask about how you guys go about game development. I figure the best way to really start is to look at how others with more experience actually developing a game go about it.

• How do you come up with new projects? As I previously mentioned, my way of going about it is to come up with game ideas and then try to develop gameplay concepts that the game will be based around, but I feel like there might be much better ways to go about this.

• What do you do about assets? How do you gauge how much is too much, and how many assets you can reasonably develop for your project?

• I actually had a few more questions, but now that I get to it I'm drawing a blank, so please feel free to include or mention anything you feel is important or wish you had known when you started out.

Another question that is only tangentially related: what source do you recommend for learning C++? To be more specific, learning C++ for a C developer. I know C++ in that I can write code that compiles and runs, but I write code as a C and Obj-C developer does (i.e. the polar opposite of how proper C++ is supposed to be written). Being perfectly honest, I don't like C++; I find it to be a poo poo language with a poor excuse for an OO implementation (there's a reason I like Obj-C), but having abandoned my old ways and moved into the realm of middleware, C++ is king. 
You can't imagine how much I wish people would just write libraries and frameworks in C; at least then it would be easy to write APIs for other languages. But you have to deal with what you're dealt, and that is C++. So like it or not, I need to learn to think like a C++ developer, and I'm hoping someone can point me in the right direction.
|
# ¿ Sep 8, 2012 14:19 |
|
|
I originally posted this over at the Ogre3D forums earlier today, but their Help forum moves slower than this thread, and I know there are a few Ogre users here who might be able to help. You would think this would be a very simple problem, but I can't find the answer anywhere on my own.

The King of Swag posted: Hopefully this question will be easy to answer, but I've been going through the Cg Tutorial (and compiling it as HLSL because FX Composer doesn't support Cg) and I'm currently most of the way through the Lighting chapter (5). Up to this point, I've written, and successfully gotten to work with Ogre, the per-pixel lighting example. The tutorial then suggests as good practice that the lighting functionality be offloaded into its own function, and most of the lighting parameters be offloaded into structs, to make it easier to pass them along and group them, and to make the eventual expansion of the shader easier.
|
# ¿ Mar 14, 2013 08:09 |
|
I have a quick question that I haven't really been able to find a sufficient answer to by searching Google. Do you use unit-testing in your game-related projects? Briefly, why or why not? And do you think unit tests are of any real value in game programming, or more of a time sink with not enough benefit?

I ask because I've used unit-testing for non-game related projects, but when it comes to programming games, I just don't see that many opportunities where it's beneficial. Nearly every test I can think up that properly models a real-world scenario doesn't fit the unit-test paradigm. The problem seems to be exacerbated the further you get away from writing foundational source (data structures, IO, etc.) and into the game logic; more so the more of that logic is scripted instead of hard-coded. The thing is, unless you're rolling your own everything (physics, rendering, etc.), chances are 95%+ of your program is game logic. Maybe I'm looking at it wrong, but it seems to me that the type of tests that are actually useful in game development are tests that bring all the systems together and test their interactions with each other in a scenario that closely mimics what will be seen in the final game. In other words, implementing a test map/world/whatever that's designed around your new functionality, so you can see if it reacts as expected in close to real-world scenarios. Also known as a play-test.

What made me want to ask in the first place is that I was looking at various C++-to-Lua binding libraries, because I'm getting tired of manually implementing the same basic interface for every class that needs to be exposed to Lua. I found a couple that seemed great, but they entirely lacked documentation aside from extensive unit-tests (in fact, they make the point that they don't need docs because they did unit-testing). Unit-tests can be helpful, but they are not and never will be a replacement for proper documentation. 
So I started crawling the usual places to see where people get stupid ideas like this, and found what seems to be an overwhelming number of programmers who espouse the position that there is never a scenario in which you shouldn't unit-test everything. So overwhelming that it's making me second-guess myself. But I'd like the opinion of programmers whose interests are closer to my own (game programming), and who actually program to create things (as this thread has shown) instead of farting about with theory and toy programs (which I get the feeling, right or wrong, is what most people on sites like Stack Overflow actually do).
|
# ¿ Jul 9, 2013 00:43 |
|
I really appreciate all the insight guys, and if anyone else wants to weigh in, I'd like to hear it.
|
# ¿ Jul 9, 2013 17:29 |
|
So I've been avoiding Unity for a couple of years now, solely because of its complete lack of modding support for games made in it. But after doing some research, I've found what I think is a good solution, albeit one that many people don't seem to make use of. User scripting can be handled with Jurassic or one of the Lua bindings found on the Asset Store (or by C# plugins, the method used by Kerbal), but options for importing new content at runtime looked bleak. Then I came across the serialization assets/libs; namely Unity Serializer and the stripped-down (to just the actual serialization functionality, none of the save-game stuff) subset Unity Serializer Basic. Both are MIT licensed and freely distributable, and both claim to be capable of fully [de]serializing any custom or Unity class, object, component, etc., including textures, meshes, animations, whatever. The ability to import custom content at runtime is even cited as one of their features. The only catch is that instead of the end-user exporting a .fbx or .dae and then loading it at runtime, they instead need to load it into a Unity (free) project that has a basic editor script set up, to serialize and save their custom content into external files, which are then easily loaded by the game.

So I suppose my question is whether I can drop lack of modding support as a reason to avoid Unity, given what I've found regarding scripting support and runtime content loading? I've been using Ogre3D and a collection of various open-source libs (with half-assed custom bindings to tie it all together) for a couple of years now, and I'm frankly tired of the trial-and-error work-flow that's so prevalent when you lack any form of IDE outside Visual Studio/CodeLite/whatever, and your game creation tools consist of a couple of utilities you've written here and there to do various menial tasks. 
So a little over a week ago, I decided the grass must be greener on the other side and started looking at game engines where I could just focus on actual game creation and let the engine handle all the foundational stuff. If there's anything I've learned since I started looking, it's that there are a ton of options for potential game engines right now; unfortunately, very few of them are particularly good. There are plenty that have nothing really wrong with them, but they never got traction and didn't develop the communities necessary to drive development of and with them, so they're essentially dead projects. Eventually I worked the list down to just a few options.
|
# ¿ Apr 2, 2014 18:57 |
|
Maybe I'm misunderstanding, but wouldn't that method mean you can't use the Scene view at all? Components use asset references instead of resource paths, so deferring the actual asset (mesh/material/texture/whatever) selection entirely until runtime means you won't have a visible GameObject in your scene while developing; it'll all have to be handled programmatically. I would imagine it would be much easier to attach a script to any game object with moddable assets that just stores an asset location. On Awake (or Start), the object checks to see if assets exist at that location. If they do, the custom asset loader loads the file and replaces the currently referenced mesh/material/texture with the newly loaded one; otherwise it does nothing. An alternative that does much the same is to create a "user-content manager" that performs the same checks from a centralized source (which eliminates the overhead of attaching the script to every object that needs it). Of course, in this situation I'm pretty sure I'm misunderstanding exactly what needs to be done, because it sounds insane to simply drop the Scene view entirely in favor of designing the game 100% in scripts.
|
# ¿ Apr 2, 2014 23:46 |
|
For you Unity users out there, this may be old news, but I just ran across this tool that allows you to seamlessly use VS Express with Unity. I wish I had found it a couple of weeks ago when I started learning Unity. MonoDevelop is actually a pretty nice IDE, but it has a few quirks that really throw me off, such as an autocomplete that sometimes won't let me escape from it, requiring me to accept the autocomplete, delete back, and retype what I wanted. I haven't tried it without this tool, but apparently if you hook up VS Pro as the external editor in Unity, every script you open opens a new VS instance, and you can't open scripts from within Unity at all if you use Express. So far the tool works perfectly for me, allowing me not only to open scripts from within Unity (in the same instance of VS Express, no less), but if you click an error in the console, it'll jump to the correct line in the script. As per the Readme.md on the GitHub page, you need to compile the tool using the given solution, and then link the executable in Unity as the external editor, with the proper arguments in the arguments line. The arguments I used are "$(File)" $(Line) 2013, with 2013 being my Express version. Apparently it defaults to 2010 if you omit that.

The King of Swag fucked around with this message at 15:36 on Apr 24, 2014 |
# ¿ Apr 24, 2014 15:34 |
|
j.peeba, how large a chunk of your total time did the graphics take for your entry? When I see a good-looking game (which yours is) with otherwise relatively simple 2D graphics, that question always pops to mind. Especially since my programmer-art skills exist in this weird limbo where, excluding organic models, I can pretty quickly put together nice-looking hard-edged 3D models, but if I had to estimate, I'd say the 2D graphics you used would take me close to a week to draw (I am really bad at 2D for some reason; I can't visualize it the way I can 3D objects).
|
# ¿ Apr 28, 2014 15:50 |
|
roomforthetuna posted: And here it is, "2048 Birds of the Dead". http://www.kongregate.com/games/RavenBlackX/2048-birds-of-the-dead

I played 3 games and managed the same high score of 12 each time. I stopped playing because the coin placement script honestly needs some work; 2 out of 3 coins are impossible to get because they're placed right against the walls, either just before or just after them. I'd much rather see fewer coins that are all possible to collect. All that said, it's pretty good for what I assume is just a few hours of work.
|
# ¿ Apr 29, 2014 15:18 |
|
nebby posted:Man first UE4 source for 20 bucks a month and now UT development happening out in the open. Looks like I picked a pretty good time to try to break into game dev! It'll be interesting to see how the UT experiment goes. I'm sure there is going to be a ton of noise in the forums. I know this is from a couple of pages back, but what's this now? I haven't heard anything about this (or really seen any proof that the Unity team has changed their ways). They're notoriously mum about all development, possibly because all they seem to do is come up with half-baked ideas and implementations, and then plug them into Unity, never to fix any bugs or expand upon functionality after the initial release.
|
# ¿ May 22, 2014 02:02 |
|
Stick100 posted: Unreal Engine 4 is rapidly surpassing Unity3D for me. The one thing I wish I had access to in Unity3D was a performance profiler, and in Unreal Engine 4 you can just press cntrl-shift-comma and they will explain everything.

I only migrated to Unity3D from Ogre3D a couple of months ago, but I'm already comfortable with it and have several hundred dollars' worth of Asset Store purchases to shore up Unity's shortcomings (Shader Forge, A* Pathfinding, DFGUI, etc.)*. But the more I see of UE4, and the more I see nothing coming out of Unity Technologies, the more I worry that I've hitched my wagon to a dying horse. The deafening silence and overall lack of response from Unity Technologies is what worries me most; it tells me they don't know where to go or what they need to do to compete. I'm using Unity Free (because there is no way I could afford Pro right now), but I'm not so invested in store assets and development in a given project that I couldn't switch engines if I needed to. The main things holding me back right now are the content pipeline (Unity is very Blender- and Allegorithmic Substance Designer-friendly, both programs forming the backbone of my asset generation) and C# support.

Now, I'm a C programmer through and through, with C and Obj-C still being my languages of choice for personal projects where I'm more interested in the code than the final product. I've also spent a lot of time writing C++ code. That said, when it comes to projects where I actually care about getting things done, and not just generating pretty solutions in code, I want C#. Unity has an old, cobbled-together hunk of junk for a C# compiler, and licensing issues or not, it is one of Unity's bigger failures; but I really don't want to move back to C++ for serious projects, even if it is some form of pseudo-managed C++ environment. 
If things keep going the way they have been, especially with the lack of official announcements from Unity Tech, I think we may start to see a lot of people flocking from Unity as they finish their current projects over the coming year. Competition is a very good thing, but game development middleware lives and dies by the community around it, and if Unity Technologies doesn't do anything to try to keep the current community, we may see a dramatic weakening of the Unity development environment, which will only serve to drive more of the community towards greener pastures.

* To be fair, I picked most of them up during sales that I was fortunate enough to catch, so I haven't paid full price.
|
# ¿ May 26, 2014 03:15 |
|
Ragg posted: Unity's plan is to build their own .NET environment for AOT platforms and then upgrade Mono, to get around the LGPL. So a newer C# version is planned, but who knows when it will land.

I saw them talk about that a while back in the forums; from what I understand, the new compilation pipeline actually translates the C# into C++ and then compiles that. They said that in theory it should make code execution much faster, at the cost of slower compile times and absolutely no introspection capabilities. Personally, I think it's a tremendously stupid idea, and if they actually do it, that would end my qualms about sticking around for C#. The main issue I have with it is that Unity's C# environment is already non-standards-compliant enough that you sometimes run into issues over it. Once anything resembling introspection, and any other feature they deem "not useful", is dropped, there is no hope of ever using outside C# libs again. I couldn't imagine trying to get something like Json.NET to function with no introspection capabilities. Then there's the issue that once they go this route, Unity Technologies is maintaining their very own custom compiler and assembler, which is something traditionally tackled by groups with many more people than Unity Technologies can throw at it, made up of the type of programmer who has specialized in exactly that. I just don't see how anything they write will be able to compete in quality. Even if they were good at it, there are going to be bugs, and a lot of them, and since we'll be dealing with a non-standards-compliant environment, there will be no way for the Unity community to avoid them until someone stumbles over one. 
Given Unity Technologies' previous track record with bug-fixes and updates, there is little to no chance that any bugs will ever be addressed, and there will never be a feature expansion beyond the initial release to bring it up to parity with increasingly newer versions of the C# language. I know I sound all doom and gloom, but this just strikes me as possibly the worst direction they have ever decided to go in, and I don't know if there's any turning back or recovery if they go down that road.
|
# ¿ May 26, 2014 15:11 |
|
xgalaxy posted: I've heard nothing about lack of introspection. Can you provide a link because I don't think you are correct. Microsoft is doing the same thing with a native C# compile option and they aren't having any problems providing full C# including introspection and dynamic language features. I don't see why Unity would have any problems. From what I know they still have a small native runtime that should be providing the introspection features you are talking about.

As I said, it was a response one of the Unity devs gave in a forum thread a while back. Their forum search sucks (I can't figure out any way to go from a search result to the found post, other than going through every post in the thread), but you are more than welcome to dig through the forums and try to find it. It was in one of the threads where someone was asking if they'd ever update Mono. In this case, I'd love to be wrong, simply because being right means bad things to come.
|
# ¿ May 26, 2014 19:19 |
|
Shalinor posted: ...and chiefly because we want mod support.

I've been playing around with the easiest ways of making this happen without having to forgo editor functionality, keeping it overall as invisible a process as possible. The best solution I've come up with so far is using JSON.NET for .NET 3.5 (the actual library, not the crippled version on the Asset Store) and serializing the poo poo out of meshes, audio, whatever, and saving it out as BSON. As far as I can tell, JSON.NET is capable of fully serializing anything you throw at it, as long as recursion is set to ignore, and you can get away with simply feeding it raw GameObjects, meshes, whatever. That said, by default it also serializes a ton of read-only data and properties that are calculated when an instance is created, so for certain things it's much more efficient to pull out the relevant data and serialize that instead. The example that immediately comes to mind: if you serialize a mesh, every vertex is serialized as a Vector3, and when Vector3s are serialized, JSON.NET saves the magnitude, square magnitude, normalized vector, etc., which explodes the size of the serialized data. So far, the best solution I've found is to create proxy classes where you can intimately control exactly what is serialized and how; so to handle a Vector3, you'd create a small proxy that carries only the fields you actually want written.
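Something along these lines (a bare-bones sketch, not my actual code: the Vector3 below is a stand-in for Unity's, and Serialize stands in for handing the proxy to JSON.NET):

```csharp
using System;
using System.Globalization;

// Stand-in for UnityEngine.Vector3, which also exposes derived values
// like magnitude -- exactly the stuff a naive serializer would bloat on.
struct Vector3
{
    public float x, y, z;
    public Vector3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    public float magnitude => (float)Math.Sqrt(x * x + y * y + z * z); // derived, never worth serializing
}

// The proxy carries only the state worth writing out.
struct Vector3Proxy
{
    public float x, y, z;

    public static Vector3Proxy From(Vector3 v) => new Vector3Proxy { x = v.x, y = v.y, z = v.z };
    public Vector3 ToVector3() => new Vector3(x, y, z);

    // Stand-in for passing the proxy to a real serializer such as JSON.NET.
    public string Serialize() =>
        string.Format(CultureInfo.InvariantCulture, "{{\"x\":{0},\"y\":{1},\"z\":{2}}}", x, y, z);
}

class Program
{
    static void Main()
    {
        var v = new Vector3(1f, 2f, 2f);
        var proxy = Vector3Proxy.From(v);
        Console.WriteLine(proxy.Serialize());            // {"x":1,"y":2,"z":2}
        Console.WriteLine(proxy.ToVector3().magnitude);  // 3
    }
}
```

The derived properties get recomputed on deserialization instead of round-tripped, which is what keeps the output small.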
However you go about it, once you have figured out how to serialize a class you want to import/export for modding, you then need to create a simple editor plugin that lets you easily do just that. This is fortunately pretty drat simple conceptually, and isn't a whole lot more than a script attached to an otherwise empty GameObject that you point towards various assets you want serialized. So you drop a mesh onto the script, hit the serialize button, and the mesh is serialized and saved out into a SerializedAssets folder. That saved file is now ready to be imported into an actual game, deserialized, and plugged into a targeted object's mesh renderer.
|
# ¿ May 27, 2014 17:27 |
|
The purpose of this post was originally to ask a simple question, but after having asked it, the post devolved into a detailed analysis of my project: the decisions I've made so far, the ones I'm still deciding on, how and why I've made them, and how I felt they would affect the commercial sales of the finished product. That further devolved into an analysis of my insecurities about devoting so much time and effort to a project that, like most indie projects, realistically needs to be looked at as an inevitable and abysmal commercial failure, with any money made from the venture simply taken as gravy on top of the pride of having released a game. But then I thought better of it, deleted all that, and here's my original question: what is a good 2D framework for Unity that doesn't force me into a 2D-only environment / orthographic-only camera? I'm working on a graphical roguelike, but with 3-dimensional maps, so vertically aligning horizontal map slices (which are obviously tiles merged into a single mesh) and rendering them with a perspective camera is a must. I've looked at a number of different options, but it seems like every one comes with its own quirks and limitations that don't jibe well with my requirements.
|
# ¿ Jun 4, 2014 05:12 |
|
Does anyone here use version control on their projects, and if you do, do you simply use it to track revisions, or do you actually use features like issue tracking? Technically this question could pertain to home-grown programming in general, but I'm particularly interested in how it relates to indie game devs. Personally, I never used version control until a few projects ago, when I was introduced to it and realized how much easier it makes my life, if only to track my progress. Since it has always been just me on these projects, I never made much use of the other features such as issue trackers, but on this newest project I find myself using the issue tracker almost as a way of documenting what I've done, what I need to do, what I want to do, and what's currently in progress. I'm just curious to hear about other people's approaches to managing their projects.
|
# ¿ Jun 13, 2014 14:23 |
|
Yodzilla posted: This. BitBucket doesn't want you pushing gigs of information up onto its servers but for smaller/indie project you're probably pretty good with everything.

GitHub maintains its own repository of .gitignore files, including one for Unity that is more up to date than the one in your link.
|
# ¿ Jun 14, 2014 18:10 |
|
No matter how many times I do it, it never becomes less gut-wrenching to put together enough of a large system* that you can actually test it as a whole, and see whether the lack of compiler errors or warnings actually translates into a workable design. Fortunately for me, the only error I ran into when testing was a NullReferenceException from calling a method on null, because I forgot that C# initializes reference-type fields to null instead of calling their default constructors. * In this case, a layer-based streaming map system for a roguelike.
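A trimmed-down illustration of that gotcha (hypothetical names, not my actual map code):

```csharp
using System;
using System.Collections.Generic;

class MapLayer   // hypothetical stand-in for a class in a streaming map system
{
    // C# does NOT call a default constructor for you here;
    // reference-type fields simply default to null.
    public List<int> Tiles;                                  // null until assigned
    public List<int> Heights = new List<int>();              // explicit init avoids the NRE

    // Guarded access; calling Tiles.Count directly would throw NullReferenceException.
    public int TileCount() => Tiles == null ? -1 : Tiles.Count;
}

class Program
{
    static void Main()
    {
        var layer = new MapLayer();
        Console.WriteLine(layer.Tiles == null);   // True
        Console.WriteLine(layer.Heights.Count);   // 0
        Console.WriteLine(layer.TileCount());     // -1
    }
}
```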
|
# ¿ Jun 19, 2014 00:22 |
|
Maybe you guys can shed some light on how you'd design an interface, given the situation I'm facing. Basically, I need values in units of X, and I need them to be easily convertible into other units on a regular basis. A critical point for me is that they're structs, treated as much like a primitive type as possible. C# actually makes this pretty easy, but I've found myself in a situation where I can either go with a slightly less intuitive interface and make my classes much shorter (units of volume have the most types, and thus the biggest difference, at 250 vs. 750 lines of code), or deal with the extra lines of essentially filler code that make the interface much more intuitive. e.g. (just an example I cooked up right now) code:
code:
code:
code:
Just for the curious (and because I'm already talking about it): like values (Volumes, Lengths, etc.) with different units do properly compare against each other. So you can do this if you like: code:
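To give a concrete flavor of what I'm describing, here's a bare-bones sketch along these lines (hypothetical names, only two units, nothing like the full 750-line version):

```csharp
using System;

// Hypothetical stripped-down unit struct: stored as a value plus a unit tag,
// convertible on demand, comparable across units, and usable like a float.
enum UnitsOfVolume { Ounces, Gallons }

struct Volume : IEquatable<Volume>
{
    const float OuncesPerGallon = 128f;   // fixed, defined ratio

    public readonly float Value;
    public readonly UnitsOfVolume Unit;

    public Volume(float value, UnitsOfVolume unit) { Value = value; Unit = unit; }

    // Internal canonical representation: everything goes through ounces.
    float InOunces => Unit == UnitsOfVolume.Gallons ? Value * OuncesPerGallon : Value;

    // Conversion returns a new struct, so the result keeps working in arithmetic.
    public Volume In(UnitsOfVolume unit) =>
        unit == UnitsOfVolume.Gallons
            ? new Volume(InOunces / OuncesPerGallon, unit)
            : new Volume(InOunces, unit);

    // Behaves like a float in other calculations.
    public static implicit operator float(Volume v) => v.Value;

    // The unit of the result is always the unit of the first operand.
    public static Volume operator +(Volume a, Volume b) =>
        new Volume(a.Value + b.In(a.Unit).Value, a.Unit);

    // Like values with different units compare correctly against each other.
    public bool Equals(Volume other) => Math.Abs(InOunces - other.InOunces) < 1e-4f;
    public static bool operator ==(Volume a, Volume b) => a.Equals(b);
    public static bool operator !=(Volume a, Volume b) => !a.Equals(b);
    public override bool Equals(object o) => o is Volume v && Equals(v);
    public override int GetHashCode() => InOunces.GetHashCode();
}

class Program
{
    static void Main()
    {
        var gallon = new Volume(1f, UnitsOfVolume.Gallons);
        var ounces = gallon.In(UnitsOfVolume.Ounces);
        Console.WriteLine(ounces.Value);                 // 128
        Console.WriteLine(gallon == ounces);             // True
        var sum = gallon + new Volume(128f, UnitsOfVolume.Ounces);
        Console.WriteLine(sum.Value + " " + sum.Unit);   // 2 Gallons
    }
}
```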
|
# ¿ Jul 1, 2014 23:28 |
|
SupSuper posted: Do you like Unity? Visual Studio? But struggled at putting the two together? Well, good news!

Keep in mind that it doesn't work with Express; for us poors, we're still stuck with Unity VSExpress. It works great, but doesn't support debugging.

As for the convertible unit interface: first things first, I did implement implicit conversion to float, and explicit to int. This allowed me to eliminate an entire series of methods, and now the conversion properties just return a new struct themselves, since a struct behaves like a float in other calculations. I also changed the UnitOf[X] enums to have plural names, since that flows more naturally.

ninjeff posted: It's a bit kooky at first, but mathematically the way to get 80 out of "80 ounces" is to divide by one ounce — a unit is interchangeable with one times itself. So you could throw away or encapsulate UnitOfVolume (an enum?) and create a set of constants in Volume itself:

The ratio of ounces to gallons is always 128:1; that's a defined number that will never change, although I do get what you're saying (the ratio of milliliters to drams isn't so easy to remember). What I don't get is why I would want to move the calculations outside of the struct and use constants instead of an enum (which removes any IntelliSense benefits and strong typing)? The Volume/Length/etc. structs already support arithmetic operators; reading back, I don't think I emphasized enough that arithmetic on these values will always calculate the correct value, regardless of the units used. The only rule to remember is that the unit of the result is always the unit of the first operand. code:
code:
Subjunctive posted:My C# is rusty, but it seems like you could have an interface like The reason this wouldn't work is that structs in C# can implement interfaces, but they cannot inherit, so unless I went crazy with extension methods, every single unit would need to re-implement all the functionality of the others. Classes wouldn't be an option either, because obviously they're not value types. You're definitely on the money on how to proceed with the calculations though, and it's actually how it internally works already. When you perform a conversion (if you aren't converting to the same unit), everything gets converted into a native unit first, and then to the requested unit. If it wasn't done this way, either the number of calculations you'd need to write would be the number of units^2, or it'd require a rat's nest of branching. Because all calculations are a straightforward switch statement and multiplication, doing two conversions (to native and then to the requested unit) is actually considerably cheaper (processing-wise) than the other methods. Subjunctive posted:Edit: actually, the ergonomics of your existing interface get better if the caller just does using Whatever.UnitsOfMeasure as UoM or something, no? I'm not sure I agree with this, only because I come from an Obj-C background, and abbreviations in interfaces are the devil's playthings. Especially abbreviations that are indiscernible out of context, just by reading the name and nothing else. As always, I appreciate the input, guys; it really does help to have some input from others.
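The "unit of the result is always the unit of the first operand" rule can be sketched as an operator overload; the `Length` struct and its members here are my assumptions, not the actual library code:

```csharp
using System;

// Sketch of mixed-unit arithmetic where the result takes the first
// operand's unit. All names and factors are illustrative.
public enum UnitOfLength { Meters, Feet }

public struct Length
{
    public readonly float Value;
    public readonly UnitOfLength Unit;

    public Length(float value, UnitOfLength unit) { Value = value; Unit = unit; }

    public Length ConvertTo(UnitOfLength target)
    {
        if (Unit == target) return this;
        float meters = (Unit == UnitOfLength.Meters) ? Value : Value * 0.3048f;
        return new Length(target == UnitOfLength.Meters ? meters : meters / 0.3048f, target);
    }

    // The second operand is converted into the first operand's unit, so
    // mixed-unit arithmetic always produces a correct value.
    public static Length operator +(Length a, Length b)
    {
        return new Length(a.Value + b.ConvertTo(a.Unit).Value, a.Unit);
    }
}
```

So `new Length(1f, UnitOfLength.Meters) + new Length(1f, UnitOfLength.Feet)` yields 1.3048 in meters, because the left-hand side's unit wins.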
|
# ¿ Jul 2, 2014 22:13 |
|
Subjunctive posted:Hmm. Here's what I was thinking, untested and unlikely to compile: This would actually work, but I don't know how efficient it would be, as any struct referenced through an interface has to be boxed and unboxed. Speaking of efficiency, I currently cache the last conversion performed, so you can call the same conversion repeatedly without needing to store it in a separate variable. This unfortunately does mean that the size of the value (internally a float and an enum) doubles, and I'm wondering if storing the cache is even worth it. Subjunctive posted:That said, simply Units would probably be clear and unique as a package name. I'm not sure if you're talking about the namespace, or the UnitOf[X] enum names, so I'll address both. Right now I'm using the namespace UnitOf, but I also think your suggestion of just Units is a good one. If you meant the enums, then I'm using names like UnitsOfLength and UnitsOfVolume, because I have many different types of supported units, and I don't want to mix them into a single enum, since many conversions simply don't exist (such as from milligrams to Fahrenheit). Keeping them as separate enums gives compile-time type safety that forbids these non-conversions. This is a list of the different unit types: UnitOfLength, UnitOfVolume, UnitOfMass, UnitOfEnergy, UnitOfPower, UnitOfTemperature, UnitOfAngle, UnitOfPressure, UnitOfTime. I'm actually thinking of releasing this on the Unity Asset Store when done, since I don't see anything like it, and I know that at least in Roguelike circles, conversions between different units are common (at least for display/internationalization purposes). Doing conversions seems simple, but there's actually a lot that goes into making them user friendly, as you can see from me asking about interface ideas.
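The "everything funnels through a native unit" scheme described above can be sketched like this -- N units need 2N switch cases instead of N^2 pairwise conversion functions. Grams as the native unit and the enum/class names are my assumptions:

```csharp
using System;

// Sketch of two-step conversion via a native unit.
public enum UnitsOfMass { Grams, Kilograms, Pounds }

public static class MassConversion
{
    static double ToNative(double value, UnitsOfMass unit)
    {
        switch (unit)
        {
            case UnitsOfMass.Grams:     return value;
            case UnitsOfMass.Kilograms: return value * 1000.0;
            case UnitsOfMass.Pounds:    return value * 453.59237; // defined exactly
            default: throw new ArgumentOutOfRangeException("unit");
        }
    }

    static double FromNative(double grams, UnitsOfMass unit)
    {
        switch (unit)
        {
            case UnitsOfMass.Grams:     return grams;
            case UnitsOfMass.Kilograms: return grams / 1000.0;
            case UnitsOfMass.Pounds:    return grams / 453.59237;
            default: throw new ArgumentOutOfRangeException("unit");
        }
    }

    public static double Convert(double value, UnitsOfMass from, UnitsOfMass to)
    {
        return from == to ? value : FromNative(ToNative(value, from), to);
    }
}
```

Adding a new unit means adding one case to each switch, and every existing unit can immediately convert to and from it.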
|
# ¿ Jul 3, 2014 05:19 |
|
ninjeff posted:**snip** I understand it now and I think you have the right idea on this one, although I still believe the struct should maintain a 'native' unit, which it uses to decide the returned value on access. You are also right in that units are really just a way to display a value, and that the same volume in different units is still the same volume. But at the end of the day, the volume does need to be in a unit, and asking the user to explicitly extract a float representation of the volume in the unit they want goes against treating it as much like a primitive as possible. I understand that with the implicit cast, you can't prevent someone from multiplying volumes, but other types of units (lengths for instance) do make complete sense with all arithmetic, and have no inherent reason to prevent the implicit float cast. To me it would only be confusing if you allowed some types to be used as floats, and prevented others. Especially since they're all nothing more than fancy wrappers around a float. Forbidding the conversion for all of them, even when it makes sense for those types, doesn't make sense to me. Combining both the previous examples: code:
code:
I'd like to know how you feel about this hybrid style, and I'd also like to hear what others think of the different proposed styles.
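Since the combined example block was lost in the export, here's a guessed sketch of the hybrid style being proposed: the struct keeps a native unit internally, converts on request, and implicitly casts to float so it behaves like a primitive in arithmetic. All names are assumptions:

```csharp
using System;

// Hypothetical hybrid-style unit struct.
public enum UnitsOfLength { Meters, Feet }

public struct HybridLength
{
    readonly float _value;                 // in this instance's native unit
    public readonly UnitsOfLength Unit;

    public HybridLength(float value, UnitsOfLength unit) { _value = value; Unit = unit; }

    // Behaves as a plain float in any calculation...
    public static implicit operator float(HybridLength length) { return length._value; }

    // ...while conversions remain one method call away.
    public HybridLength In(UnitsOfLength target)
    {
        if (Unit == target) return this;
        float meters = (Unit == UnitsOfLength.Meters) ? _value : _value * 0.3048f;
        return new HybridLength(target == UnitsOfLength.Meters ? meters : meters / 0.3048f, target);
    }
}
```

With this, `float total = someLength + 0.5f;` works through the implicit cast -- but so does multiplying two lengths into a unitless float, which is exactly the trade-off discussed above.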
|
# ¿ Jul 3, 2014 15:05 |
|
The equality comparison for all types must already be overloaded to comply with IEquatable<T> (all types implement IEquatable<T> and IComparable<T>; they'd be almost useless with keyed or sorted containers without those), and since I'm using Unity, I do the comparison between values with Mathf.Approximately. So far it seems to work great. A standardized unit is already stored internally, and converted to the format needed on request. code:
code:
Before anyone goes about claiming that I went overboard on the total number of supported units; remember that when this is done, I'm planning on releasing this to the asset store, where a wider variety of units would be needed by users. That said, I'm almost frightened that barring Imperial units (which are not the same as US Customary units), I actually see a use for the vast majority of those, within my own projects.
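A hedged sketch of what the approximate equality behind IEquatable<T>/IComparable<T> might look like; `Approximately()` stands in for Unity's Mathf.Approximately, and the internal standardized unit (kelvin here) is an assumption:

```csharp
using System;

// Hypothetical unit struct with tolerant equality.
public struct Temperature : IEquatable<Temperature>, IComparable<Temperature>
{
    readonly float _kelvin;   // standardized internal representation

    public Temperature(float kelvin) { _kelvin = kelvin; }

    // Relative tolerance, in the same spirit as Mathf.Approximately.
    static bool Approximately(float a, float b)
    {
        float scale = Math.Max(1f, Math.Max(Math.Abs(a), Math.Abs(b)));
        return Math.Abs(a - b) <= 1e-6f * scale;
    }

    public bool Equals(Temperature other) { return Approximately(_kelvin, other._kelvin); }

    public int CompareTo(Temperature other)
    {
        return Equals(other) ? 0 : _kelvin.CompareTo(other._kelvin);
    }

    public override bool Equals(object obj) { return obj is Temperature && Equals((Temperature)obj); }

    // Caveat: a tolerant Equals with an exact hash strains the
    // Equals/GetHashCode contract for keyed containers -- exactly the
    // tension debated in the posts that follow.
    public override int GetHashCode() { return _kelvin.GetHashCode(); }
}
```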
|
# ¿ Jul 7, 2014 11:04 |
|
Jewel posted:Why is this illegal? Why convert to float? Float multiplied by Length should return a Length, then Length plus a Length is valid. This seems like fine syntax to me. Because overloading public static Length operator *(Length first, float second) and public static Length operator *(float first, Length second) has the potential for odd errors in complex statements. But I suppose I can add it anyway, and the end-user will just have to be extremely careful to specify order of operations. Inverness posted:C# doesn't have a separate *= operator though anyways. I'm using *= and such in these examples for brevity; I don't think C# even allows you to define separate [x]= operators. They're all calculated by using the overloaded [x] operator with the source variable fed as the first operand, and then set as the result. Inverness posted:It seems like the best way to handle this would be to still have the default Equals() and equality operator use an exact comparison, but then have an overloaded Equals() method allow the user to specify a tolerance for the comparison. Two floats with the "same" value that were arrived at through different calculations will, more often than not, not actually equal each other. Making the default equality operator actually check for exact equality is almost never what you want, which would make feeding them into a collection a serious pain. The best solution would be having the default equality operator perform a tolerance check, and then have another ExactlyEquals() method that will perform an exact equality check. The King of Swag fucked around with this message at 19:25 on Jul 7, 2014 |
# ¿ Jul 7, 2014 19:20 |
|
Inverness posted:I realize this, but the reason I suggested this is also because of consistency. The default equality implementation for value types including other floats or things like strings is always an exactly-equal comparison. For case-insensitive string comparison or tolerant float comparison you have to specify how you want that handled. I think having the default equality comparison for your unit types be a tolerant one obscures what is actually happening. I'm going to put some serious consideration into this; probably mock something up and see how unwieldy it is to use compared to how it is now. I have the feeling (actually, I'm almost certain) that it'll make them unwieldy to use as keys in keyed collections, but might make other comparisons easier to make, if I allow for variable tolerances; either on a type or per-instance basis. Something else I'm also considering that's tangentially related is switching internally from floats to doubles. Doing more tests, I found that the precision of floats isn't that great when dealing at the far ends of the supported units (gigajoules vs microjoules), and while it remains equally important to use floating point numbers smartly*, using a double does allow for much greater precision at very large and very small values. Computationally, doubles don't have the performance drawbacks over floats that they used to (at least for .NET with modern hardware), so I'm not so much worried about that. The only real issue I can find with it is that the user will likely have to cast the values to float in most situations, as Unity and games still stick to float like bees to honey. * Don't keep performing calculations on the same value over and over, whenever you have access to the source value and can perform a fresh calculation using it instead. This prevents error creep over time.
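The float-vs-double point about the far ends of a unit's range can be demonstrated directly: one microjoule next to one gigajoule (values in joules) is below float's resolution but well within double's:

```csharp
using System;

// Demonstration of precision at the extremes of a unit range.
public static class PrecisionDemo
{
    public static bool FloatKeepsMicrojoule()
    {
        float sum = 1e9f + 1e-6f;   // one gigajoule plus one microjoule, in joules
        return sum != 1e9f;          // false: float's ULP at 1e9 is 64, the microjoule vanishes
    }

    public static bool DoubleKeepsMicrojoule()
    {
        double sum = 1e9 + 1e-6;
        return sum != 1e9;           // true: double's ULP at 1e9 is ~1.2e-7, so it resolves
    }
}
```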
|
# ¿ Jul 7, 2014 21:36 |
|
OneEightHundred posted:To be clear: Using integers will not save you from rounding-related issues. However, they will save you from problems stemming from rounding not happening at the precision you expect, and they tend to fail more consistently because round-towards-zero is less likely to make something accidentally work than round-to-nearest. I personally think this is just asking for trouble; the IEEE 754 standard has well defined and predictable behavior, and what you're essentially doing here is creating a custom floating point representation with its own rounding behavior. The rounding behavior may be better suited to your purpose than the standard, but it's non-reproducible from the outside without duplicating the internal representation.
|
# ¿ Jul 8, 2014 07:06 |
|
Except that IEEE 754 is designed specifically for cases where you're dealing with real-world representable measurements (such as length, mass, time, etc.); it comes at the cost that any particular exact value may not be exactly representable. The solution to this is fuzzy equality, which is easier said than done, but a solution that takes into account both an acceptable margin of error and ULP difference handles all cases while being relatively simple to implement. The real trick to it is defining what is an acceptable margin of error (any value smaller than that margin is automatically equal), and what is an acceptable number of ULPs (units of least precision) for large-magnitude values, where adjacent floats may have a larger difference between them. Here's what Wikipedia has to say about IEEE 754. Wikipedia posted:As decimal fractions can often not be exactly represented in binary floating-point, such arithmetic is at its best when it is simply being used to measure real-world quantities over a wide range of scales (such as the orbital period of a moon around Saturn or the mass of a proton), and at its worst when it is expected to model the interactions of quantities expressed as decimal strings that are expected to be exact. An example of the latter case is financial calculations. For this reason, financial software tends not to use a binary floating-point number representation.[39] The "decimal" data type of the C# and Python programming languages, and the IEEE 754-2008 decimal floating-point standard, are designed to avoid the problems of binary floating-point representations when applied to human-entered exact decimal values, and make the arithmetic always behave as expected when numbers are printed in decimal. This actually fits very well with the most common use cases for these units, as the only time you're dealing with (and expecting) an exact value is when you explicitly create a measurement with an exact value. 
Most of the time you're (or at least I am) going to be taking measurements, such as the distance between the player and an object, or the amount of water in a map tile, and then performing calculations, measurements and conversions on it.
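The margin-plus-ULP scheme described above can be sketched in a few lines; the tolerance values in the test below are illustrative, not the ones used in the actual library:

```csharp
using System;

// Minimal sketch of fuzzy equality: an absolute margin catches values
// near zero, and a ULP distance handles large magnitudes where adjacent
// doubles are far apart.
public static class FuzzyMath
{
    public static bool NearlyEqual(double a, double b, double margin, long maxUlps)
    {
        // Anything inside the absolute margin is automatically equal.
        if (Math.Abs(a - b) <= margin) return true;

        // Doubles of opposite sign can't be ULP-compared meaningfully.
        if ((a < 0) != (b < 0)) return false;

        // Reinterpret the bits: for same-signed doubles, the integer
        // distance between bit patterns is the number of ULPs apart.
        long aBits = BitConverter.DoubleToInt64Bits(a);
        long bBits = BitConverter.DoubleToInt64Bits(b);
        return Math.Abs(aBits - bBits) <= maxUlps;
    }
}
```

The bit-reinterpretation trick works because IEEE 754 doubles of the same sign are ordered the same way as their bit patterns, so "adjacent representable values" are exactly one apart as integers.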
|
# ¿ Jul 8, 2014 09:34 |
|
Just as a heads up; my support for floating point is not a discounting of your suggestions for fixed-point. In fact, I just spent quite a bit of time looking at various fixed-point options, and weighed the pros and cons of both fixed and floating point. In the end I've decided to stick with floating point, because more often than not, you're comparing values relative to each other rather than making direct equality comparisons. For the cases where you do need a direct comparison, I've already written a fuzzy equality test, although I suspect there will be quite a bit of tuning needed to get the margins of error just right for the most common use cases. On modern PC hardware, floating point arithmetic is also orders of magnitude faster than fixed point arithmetic in all of the fixed-point implementations I could find. There is a lot to be said for having purpose-built hardware that performs floating point math.
|
# ¿ Jul 8, 2014 16:11 |
|
OneEightHundred posted:Fuzzy testing still has some challenges in that you have to remember to use it - It's easy to forget to use fuzzy compares when you're not doing equality comparisons, i.e. when you use > or <, and get the same class of bugs. Fixed point libraries are a completely different beast than extended precision multiplication/division, and are most certainly more than just an integer multiplication / division and a few shifts. I'm being serious; all the implementations available even warn that they're many times slower than floating point math. The reason I'm still iffy on extended precision integer storage, other than the overflow issues at the high extremes* and lack of precision at the low extremes, is that every division is a truncation. While values calculated in the same way may still come to the same value, the actual error will naturally be greater than floating point math, which defaults to midpoint rounding. It still doesn't solve the issue that the same value calculated in two different manners, may not be equal due to rounding errors. I'm not opposed to the idea if there's real benefits, I just don't see what the benefit is here. * Using energy as an example, if I use 64 bit integers, and store everything at 1/100 a microjoule, I can only store about +/- 10 gigajoules. This span might be uncommon for lengths or volumes, but it's not unheard of for energy measurements, and I don't think 10 GJ as the upper range is high enough, without sacrificing too much precision at the low range (you can push to 100 GJ by reducing the precision to 1/10 a microjoule). Edit: To clarify further; for enough precision for the likes of a volume (since not all units are multiples of each other), you'd have to store your internal representation as 1/10,000,000 of a microliter to accurately represent all supported volume types. 
Let's say that we don't need absolute precision, as that's impossible to get without arbitrary precision numbers anyway, and say we throw out the 2 or 3 units that require so much precision to accurately represent. So we reduce that precision to just 1/1,000 of a microliter, or 1 nanoliter; that's still pretty accurate, right? Well, let's take into consideration the (US Customary) dram, which is defined as exactly 3.6966911953125 mL, and is actually one of the units that more nicely converts to metric and back. That's 3,696.6912 µL and 3,696,691.2 nL. Over 100,000 drams, the error becomes roughly 20,000 nanoliters, or 20 microliters. If we were using a double and storing our values internally as liters (100,000 drams is 369.66912 liters), the ULP would be 5.6843418860808E-14. A reasonable first guess for ULP range after multiple operations on a value would be 4 ULP, which means our error could be anywhere from 0 to roughly 2.27e-13 (4 × 5.68e-14). Obviously it's possible it could be larger than that, but it's not likely unless we performed tons of operations on the same value. Either way, it's nowhere near a 20 microliter error. Of course the caveats are that error dramatically increases as you grow increasingly large or small (and decreases as you approach 0), and that floats have drastically lower precision than doubles. The King of Swag fucked around with this message at 12:05 on Jul 9, 2014 |
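The 100,000-dram figure can be checked mechanically. The exact dram definition (3.6966911953125 mL = 3,696,691.1953125 nL) comes from the post above; the rest is plain arithmetic:

```csharp
using System;

// Truncation error of storing drams at a 1 nL fixed-point resolution.
public static class DramErrorCheck
{
    // Total error, in nanoliters, over 100,000 drams.
    public static double TotalErrorNanoliters()
    {
        double dramInNl = 3696691.1953125;              // 1 dram in nanoliters
        long storedDramNl = (long)dramInNl;             // truncates to 3,696,691 nL
        double errorPerDram = dramInNl - storedDramNl;  // 0.1953125 nL lost per dram
        return errorPerDram * 100000.0;                 // 19,531.25 nL, i.e. ~20 µL
    }
}
```

All of these intermediate values are exactly representable in binary (0.1953125 = 25/128), so the result is exact: 19,531.25 nL, which rounds to the "20 microliters" quoted above.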
# ¿ Jul 9, 2014 08:47 |
|
Just to comment again on the floating point issues; I just finished writing (and verifying it works with tests) a fuzzy comparator and a hashable fuzzy comparator. Following Inverness's suggestion, I made the different types perform regular equality checks instead of fuzzy equality checks. If you explicitly want fuzzy equality or comparison, you can do this: code:
Using the hashable fuzzy values is very similar to the regular fuzzy values: code:
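The actual FuzzyDouble code blocks were lost in the forum export, so this is a guessed reconstruction of the type's shape: an explicit wrapper whose operators perform tolerant comparisons. The real version, per the posts, also supports ULP tolerances and a separate hashable variant:

```csharp
using System;

// Hypothetical reconstruction of an explicit fuzzy value type.
public struct FuzzyDouble
{
    public readonly double Value;
    public readonly double Tolerance;

    public FuzzyDouble(double value, double tolerance) { Value = value; Tolerance = tolerance; }

    public static bool operator ==(FuzzyDouble a, FuzzyDouble b)
    {
        return Math.Abs(a.Value - b.Value) <= Math.Max(a.Tolerance, b.Tolerance);
    }
    public static bool operator !=(FuzzyDouble a, FuzzyDouble b) { return !(a == b); }

    // Fuzzy ordering must agree with fuzzy equality, or results are inconsistent.
    public static bool operator <(FuzzyDouble a, FuzzyDouble b) { return a != b && a.Value < b.Value; }
    public static bool operator >(FuzzyDouble a, FuzzyDouble b) { return a != b && a.Value > b.Value; }

    public override bool Equals(object obj) { return obj is FuzzyDouble && this == (FuzzyDouble)obj; }

    // A tolerant Equals can't produce a consistent hash, which is why a
    // separate HashableFuzzyDouble exists in the posts.
    public override int GetHashCode() { return Value.GetHashCode(); }
}
```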
|
# ¿ Jul 11, 2014 09:40 |
|
Inverness posted:I'm not sure about a case where you need fuzzy less-than or greater-than comparison. If you perform fuzzy equality, then you must perform fuzzy comparisons, or you'll receive inconsistent results. Same as if you perform fuzzy equality and comparisons with different tolerances. Inverness posted:Is also odd since it seems like you're making the code more verbose just to hold onto the ability to use the equality operator. By convention, custom equality checking using an instance is done with an overload of Equals() like I suggested before: I could overload the Equals method like you suggested, but then there's no way to overload the Equals method with default tolerances; it would clash with the original Equals overload, which under your suggestion, now performs an exact equality comparison, rather than a fuzzy comparison. Forcing the user to specify the tolerances every time (rather than using the user defined default settings when omitted), is just asking for trouble. Not to mention the issue of how do you handle fuzzy </=/> comparisons, when the overloaded operator behavior is an exact comparison, to match the Equals method? Inverness posted:**snip** Maybe I should be more clear on exactly what's happening here. There are two structs, FuzzyDouble and HashableFuzzyDouble that, as the name suggests, are value types that represent a double that should be treated as a fuzzy value, and a double that should be treated as a hashable fuzzy value, using an algorithm that works with hashing. These are separate types because the non-hashable algorithm lacks the caveats of the hashable algorithm, and you shouldn't be using the hashable type if you don't need to generate a hash from it. These types don't have anything to do with the units, and can be used anywhere in the program where you would need fuzzy equality and comparisons. 
For the units themselves: MyUnitTypeX.Fuzzy() returns a FuzzyDouble, and MyUnitTypeX.FuzzyAndHashable() returns a HashableFuzzyDouble. Now I agree with you that I should probably implement a FuzzyEqualityComparer for direct use with Dictionaries and other collections, but that doesn't help in the (most common) case, where you simply want to take fuzzy values and compare them, without having to keep a separate *EqualityComparer instance or feed them to a method in a static class, which necessitates specifying the tolerances on every use. After weighing all the options, the best design seemed to be what I have right now, which was to develop explicit fuzzy values. As already touched upon, these values are agnostic to where they're used in code; they're created with a value and a set of tolerances, and then they're ready to be used anywhere that needs fuzzy value comparisons. The reason methods like Fuzzy() and FuzzyAndHashable() exist in the Unit structs is that there needs to be a fast and easy way for the user to get a hold of fuzzy values. I want to be clear that I understand the .NET design conventions and I'm not just implementing the interface all willy-nilly. A lot of thought goes into the different ways it can be handled, and what I settle on is usually for a good reason. Why I'm discussing it in this thread (and I appreciate the feedback, by the way) is that the feedback I receive helps highlight any design deficiencies which I may have overlooked. It also helps with improving naming schemes, as sometimes you guys simply have a better name for a class/method/etc than what I gave it.
|
# ¿ Jul 12, 2014 06:42 |
|
Unormal posted:http://www.microsoft.com/bizspark/ yourself some Ultimate licenses! Just in case anyone else signs up for Bizspark; Microsoft says you'll be approved or declined within 5 business days, but I wouldn't believe that. I signed up right after Unormal posted this (on July 2nd), and I still haven't heard a peep out of them.
|
# ¿ Jul 19, 2014 06:46 |
|
Stick100 posted:Last I checked it said 10 business days. Must have some people out for summer or something. If I go to "My Bizspark", it gives me: "Your account is now pending approval. We will be in touch with you regarding your status within five business days."
|
# ¿ Jul 22, 2014 03:02 |
|
One Eye Open posted:Texture arrays are an OpenGL ES 3.0 feature, so if you're targeting mobile, you're excluding a large proportion (still a majority, uptake-wise, AFAIK) of your users who are still on ES 2.0. It's also not supported in DirectX 9, which is still the primary render target for indie desktop games and a lot of open source game engines. In other news, my first Unity Asset Store asset has just been approved, and it's free to boot! Fuzzy Logic - A minimal yet robust fuzzy logic library for floating point numbers. I needed the majority of the functionality for the unit conversion library I'm working on (and was getting help with in this thread the other week), and I realized I was so close to having another small library of its own that I decided to go ahead and flesh it out, and then release it for free. It doesn't actually rely on anything Unity related, so here's the generic repo for it. Obviously it's written in C# and not UnityScript or Boo.
|
# ¿ Jul 24, 2014 23:35 |
|
Hey, so my BizSpark application finally went through; I applied just a day under a month ago. It took contacting Microsoft support to get anything to go ahead, but once that process was started, they were very nice and very quick about it. UnityVS is obviously the first plugin being installed in my new copy of VS Ultimate, but is ReSharper worth the money? I've looked at it before, and it looked awesome, but I've never had a copy of VS that could install plugins. P.S. Stay away from all things GPL; it's a cancerous license that's sickening the open source waters. When there are so many good licenses such as BSD, MIT, Apache 2, or my favorite, NCSA, I don't understand why so many look at GPL as anything but poison.
|
# ¿ Aug 2, 2014 04:33 |
|
Gul Banana posted:is there a better alternative license for "use this, but don't extend it commercially without contributing back changes"? Yeah, it's called manning up and using a permissive license. I don't actually mean that to sound hostile, just that I want to make it clear that the reasons people have for using GPL licenses are moot.
Honestly though, I think anyone looking to license their software under an open source license needs to think long and hard about why they're actually releasing it, and what they want it to be. The FSF goes on and on about how they're the protectors of open source and free software and yadda yadda yadda; they're a bunch of lying assholes. If you want software to truly be free, then you release it under a permissive license. They're called permissive licenses for a reason; almost all of them allow users to use them for any purpose, and only really exist to protect the developer from legal liability. That is truly free software; free as in free beer, free as in free speech. The GPL gives you free as in free beer, but it strips you of the free as in free speech. GPL licensed source comes with rules and stipulations on what can be done with it; it is open source, but it is not free software. From my point of view, I see people that choose permissive licenses as of one mindset, and people that prefer GPL as of another. I'm obviously biased here (against GPL), but I'm not actually going to make a claim as to which mindset is the right one to have, because they're purely subjective. The permissive mindset is that open source code trends towards free, which means that most users are likely to contribute back changes even without being compelled to. Even if they don't, it doesn't matter, because if those changes were actually that critical or needed, someone else would add them eventually. Free is innate, free is powerful; it brings people in without even asking. The copyleft (GPL) mindset is that open source code trends towards closed if it isn't actively protected. Free is weak, it must be protected or the flame will die. People must be forced to contribute or they never will; if you don't make them release their changes, they'll hoard them all to themselves to starve competition. The King of Swag fucked around with this message at 16:53 on Aug 2, 2014 |
# ¿ Aug 2, 2014 16:49 |
|
xgalaxy posted:This is exactly the thing I'm battling now with my posts in another thread regarding SeviceStack. The stock library functions fine under Xamarin.iOS and Xamarin.Android but didn't work on Unity. Luckily this particular library was really easy to port. However, other libraries which utilize .NET 4+ that work perfectly fine on Xamarin are not portable to Unity without tremendous amounts of effort, especially if they make heavy use of new .NET features like tasks and async / await. You basically just summed up why I'm so scared about the direction the Unity team is going, the future of Unity if they continue down that path, and why I have the sinking feeling that I hitched my horse to a sinking ship.
|
# ¿ Aug 4, 2014 10:28 |
|
Some of you may remember from a few weeks back when I was posting about UnitOf, the unit-of-measurement conversion library I was working on. Well, after getting sidetracked with my Fuzzy Logic library (which, despite being free, sadly has fewer than two dozen downloads after two weeks on the Asset Store), I've gone back to working on the UnitOf library. Most of the hard work for the library had already been done the other week; the remaining work is largely just the tedious stuff (writing all the unit conversions themselves) and polishing. The only actually hard part left to solve was converting a unit of measurement to a displayable string, which I'm happy to say now works. Given a format string and a unit of measurement, it'll generate a string which nicely prints the measurement, not just in the specified unit, but also in dividing units. The system goes another step further and selects the best-fitting unit for the format string and measurement you present it. code:
15 m, 25 cm, 4 µm
15 m, 2 dm, 5 cm // dm is decimeter
16 yd, 2 ft, 0.39 in
16 yd, 2′, 0.39″
16 yd, 2 ft, 393.858 mil
Here are the format arguments:
P.S. I pride myself a bit on writing code that doesn't allocate any memory unless it absolutely needs to, just because of how terrible the GC for Unity's Mono is. Even after two days of optimization (both performance- and code-wise), the ToString/formatting methods still allocate a relatively large amount of memory and perform multiple boxing operations per call. Working with strings in general is part of it, but the complexity needed to accomplish what you see above contributes to the problem too. Perhaps not surprisingly, there's a lot of work that needs to go into both format string and output string parsing to make everything work, not to mention the calculations that need to happen to decide the proper units to display as the unit of measurement is divided down. The King of Swag fucked around with this message at 04:33 on Aug 6, 2014 |
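The "dividing units" display shown above can be sketched roughly like this: take a value in a base unit and break it down through successively smaller units, keeping the fractional remainder only on the last one. The unit table, rounding, and names are my assumptions; the real formatter also picks the best-fitting units for the format string and skips zero counts:

```csharp
using System;
using System.Collections.Generic;

// Simplified sketch of dividing-units formatting (meters/cm/mm only).
public static class UnitFormatter
{
    public static string FormatLength(double meters)
    {
        string[] symbols = { "m", "cm", "mm" };
        double[] sizes = { 1.0, 0.01, 0.001 };

        var parts = new List<string>();
        double remaining = meters;
        for (int i = 0; i < symbols.Length; i++)
        {
            double count = remaining / sizes[i];
            if (i == symbols.Length - 1)
            {
                // The last unit keeps the (rounded) fractional part.
                parts.Add(Math.Round(count, 2) + " " + symbols[i]);
            }
            else
            {
                double whole = Math.Floor(count);
                parts.Add(whole + " " + symbols[i]);
                remaining -= whole * sizes[i];
            }
        }
        return string.Join(", ", parts);
    }
}
```

For example, `UnitFormatter.FormatLength(15.254)` yields `"15 m, 25 cm, 4 mm"`. Unlike this sketch, the real system would also suppress zero-count units and handle unit symbols per the format arguments.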
# ¿ Aug 6, 2014 04:18 |
|
So I'm apparently really bad about betting on the wrong horse; I purchased Daikon Forge not that long ago, and now it's announced that they're no longer developing it and it has been removed from the Asset Store: http://www.daikonforge.com/forums/threads/now-disappeared-from-the-asset-store.2235/ While the Asset Store is very much "buyer beware", I can't help but feel cheated. Daikon Forge was not a cheap asset, so purchasing it only to have it discontinued shortly after is really lovely.
|
# ¿ Aug 7, 2014 02:31 |
|
Yodzilla posted:Yeah, that sucks, but I guess it's like any other piece of software. I'm not familiar with the plugin, but reading those forums you linked makes the author seem really dodgy. Daikon Forge was the other major alternative to NGUI.
|
# ¿ Aug 7, 2014 02:39 |