|
Canine Blues Arooo posted:I would like to use Unity as I'm already familiar with C#, but again, I don't know if it's appropriate to try and develop a 2D project in said engine. They at least have a 2D tutorial, so that's something. There also seem to be things like SpriteUI and Sprite Manager to help with such stuff. I just got the Unity stuff set up here an hour ago, though, so don't take my word for anything!
|
# ¿ Nov 2, 2011 15:46 |
|
|
On a whim I bought the Indie license for the BigWorld MMO engine (used in World of Tanks/Warplanes and...not a huge amount else). The documentation is pretty sparse, and the wiki and forums are downright depressing. Any goon out there have experience with it to tell me if I should even bother grinding through the tutorials, or if I'd be wasting time as well as the money?
|
# ¿ Feb 10, 2014 00:20 |
|
xzzy posted:I'm not a pro developer either, but the best system I've found is to write components that don't have any dependencies. That way when I get to the "gently caress this is bad code, I'm gonna rewrite it" stage I'm only messing around with one part of the project. It's the Unix style of design: each command does one thing and does it well. If your messaging between components is good enough it's actually pretty painless. This also makes it a lot easier to write good tests, which IMO is the biggest factor in whether you can go quickly in a code base or not. I've run projects ranging from 1KLOC just-me hacks to multi-million LOC systems with hundreds of developers, and the people with the most good tests are always the happiest, and almost always the most productive. If you're just starting out, or starting in a new domain (like I am with BigWorld stuff), having smaller components also makes it easier to get help, because people don't have to understand the world to help you with one weird thing. EDIT: components always have dependencies, unless they're pure functions in the math sense. Managing those dependencies (dependency injection, data-driven wiring, whatever) is a big part of software architecture. I'd argue that a good dependency strategy is actually the most important architectural attribute in a piece of software, and I'm the sort of person that gets very impatient when people start talking about abstract architectural theories. Subjunctive fucked around with this message at 23:46 on Feb 10, 2014 |
# ¿ Feb 10, 2014 23:44 |
|
Have you tried profiling? If that's not an option, some printf timestamp logging at key points can also shine a light. Performance analysis as a black box pursuit is pretty hard. :-(
|
# ¿ Feb 15, 2014 15:04 |
|
Unormal posted:They really need to divide that out into a platform team and a product team, and the product team can release and incrementally improve various pluggable engine features as (potentially paid) app-store plugins on top of the base editor, and decouple the platform from feature release cycle as much as they can. I've been down that path in (non-game) software I've developed, and it hurts. For one thing, it means that you need to make sure the engine changes are compatible with multiple versions of the plugins, and you have to make sure they can all operate in the presence/absence of the others. The combinatorial matrix explodes pretty quickly, and then you wish you'd decided to chew glass instead. You also end up spending a ton more time on API futzing, because the different components can no longer talk directly to each other and make assumptions about how the other implementations behave. There's a great agility benefit from being decoupled, for sure, but being able to release more frequent updates to individual pieces is small consolation if it gets a lot harder to build and maintain that set of pieces.
|
# ¿ Feb 24, 2014 04:33 |
|
Admiral Snuggles posted:So it seems like I've got a pair of options here, both of which are going to require some re-engineering. That's fine. Use one number to seed the RNG, and everything else derives from there (as long as they all ask for things in the same order). This is how Age of Empires (and a billion other games) do synchronized simulation, so it shouldn't strain modern machines and networks. There are three kinds of state in your game:
You need to make sure that both sides see numbers out of the PRNG in the same way (125th is the hit roll for unit 0x15af3, 126th is the check for a special item in a chest, etc.), and that player input is processed in the same order, but after that the deterministic part of your simulation takes over. Barring differences in rounding modes or something (beware GPGPU stuff here or reading back from textures), they should stay in sync. Bonus: logging player input and initial seed is enough for a full replay for debugging or sharing purposes. The player input ordering takes some care, for real-time games, but it shouldn't be too bad. Caveat: I've built a moderately complex synchronized simulation as a toy, and talked with people who have deployed their own in shipping titles, but haven't shipped it in a game.
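A toy sketch of the shared-seed scheme described above: every peer seeds an identical PRNG and consumes draws in exactly the same order, so the simulations agree without ever sending random values over the wire. The Sim class and the "attack"/"defend" commands are invented for illustration.

```python
import random

class Sim:
    def __init__(self, seed):
        self.rng = random.Random(seed)  # same seed on every peer
        self.state = 0

    def apply_input(self, cmd):
        # Inputs must be processed in the same order on every peer,
        # so each peer makes the same PRNG draws in the same sequence.
        roll = self.rng.randrange(100)  # e.g. the Nth draw is a hit roll
        self.state += roll if cmd == "attack" else -roll

# Two "machines" fed the same seed and the same ordered input stream:
a, b = Sim(42), Sim(42)
for cmd in ["attack", "defend", "attack"]:
    a.apply_input(cmd)
    b.apply_input(cmd)
```

Logging just the seed and the input stream is also exactly what makes the free replay feature work.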
|
# ¿ Mar 5, 2014 07:15 |
|
Paniolo posted:You guys have a billboard in unincorporated Redmond advertising that you are hiring. It is very strange and you might be the only game studio ever to hire via billboard. Ubi did all sorts of crazy poo poo in Montreal when I was living there. Billboards on trucks, I think some people in sandwich boards.
|
# ¿ Mar 7, 2014 05:48 |
|
SupSuper posted:- Unix-likes: Global libraries rule, and users need to get them and set them up appropriately for their distro to run your executable. Most of the time this is done through the package manager which has all the common libraries (like SDL & co.), so just tell your users to get those and they should be set. Optimally you'd wanna wrap your game in a package too, so this whole dependencies/installation procedure is handled automatically, but this depends on how you're distributing it. Note that in this case you're at the whims of whatever library version the distros have, which isn't a problem 90% of the time, but watch out for that 10%. If you have some esoteric library not available in package managers, you'll have to point users to the website so they can set it up and configure it themselves. As an alternative, you often see third-party libraries shipped with the main binary, and a wrapper script used to set LD_LIBRARY_PATH so that things are found. Especially for libraries that aren't strongly versioned, or which have meaningful compile-time flags, this can be a lot easier for users and yourself. It's also basically what's necessary if you want your own code to exist in a shared library anywhere, or take along middleware that is. Last I played games on Linux this is what was done for anything that wasn't extremely common, including libraries where a distro probably had *a* version of libfoo, but maybe they hadn't upgraded to libfoo-1.74 yet, or maybe they only offered one that was too new. If you have the source around for use in your Windows and Mac builds, I'd do this. You can use pretty much the same build mechanics as for Mac, and then have the wrapper script set LD_LIBRARY_PATH to include the directory containing the binary and ancillary libraries at startup. 
I lost many many precious weeks of my youth to loving around with Linux distro versioning and oh-you-didn't-compile-with-that-backend-gee-thanks, albeit not in a game context, and I'm now pretty firmly in the "bundle anything that's not totally standard" camp.
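For concreteness, a minimal launcher in the style described above. All the file names (mygame.bin, a lib/ directory of bundled .so files next to the script) are invented for illustration.

```shell
#!/bin/sh
# Hypothetical launcher: ship the real binary and its bundled libraries
# alongside this script, and point the loader at them before exec'ing.

bundle_path() {
  # Prepend $1 to the current LD_LIBRARY_PATH (handling the empty case).
  printf '%s' "$1${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
}

HERE="$(cd "$(dirname "$0")" && pwd)"
LD_LIBRARY_PATH="$(bundle_path "$HERE/lib")"
export LD_LIBRARY_PATH
# exec "$HERE/mygame.bin" "$@"   # commented out so the sketch is inert
```

Putting the bundled directory first means your known-good libfoo-1.74 wins over whatever the distro shipped.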
|
# ¿ Mar 15, 2014 19:40 |
|
Obsurveyor posted:$600 limited time upgrade Is that only from 4, or is there also a deal from 3? I just dusted off my 3 license and haven't upgraded yet.
|
# ¿ Mar 19, 2014 00:25 |
|
Bondematt posted:You have to pay for 12 months with Unity subscription, so it's $900 assuming you don't want to continue it. Yeah, a lot of people seem to be missing that part.
|
# ¿ Mar 22, 2014 15:58 |
|
Suspicious Dish posted:They are? Edge is a toolchain they're building out that provides a Flash-like authoring experience and produces HTML/JS/web output. It's basically looking to use modern browsers as the runtime for their authoring tools instead of the Flash VM. Some parts of the Flash shape model are still hard to represent with high fidelity on top of <canvas>, but now that WebGL is in the mix that gets better.
|
# ¿ Mar 24, 2014 00:00 |
|
dupersaurus posted:Scaleform 3+ may have changed some things, but in 2 all you need to do is tell scaleform that there's a method it can call through an fscommand or ExternalInterface. It's been a long time since I used the API, but I recall it being a real pain to handle structured data (objects with methods, life cycle management) through ExternalInterface. It's really a pretty terrible FFI through which to project the primary function of a UI: representing the state of a system, and manipulating it in response to user action. Maybe Scaleform's ExternalInterface is somehow better, though, and admits of something more effective and efficient than the XML encoding for marshaling.
|
# ¿ Mar 24, 2014 16:31 |
|
xgalaxy posted:I'd rather have lua / some subset of html / css for UI than javascript. I have an emotional attachment to JS and web tech, so grain of salt, but. JS + canvas/webgl should be able to improve on the memory profile of Scaleform at least, and with good use of shadow dom you could probably go there as well. Modern JS JITs have GCs optimized for pause time, and the rendering runtimes in web engines have all the asynchrony and composition tricks you want for keeping the UI responsive even if the game is chugging. JS is also designed for isolation and asynchronous/event-based flow, meaning that it's easier to make safe for extending. Mozilla has FirefoxOS running on 128MB, $25 phones with responsive UI and such, so fitting into even a mobile game memory budget should be fine. (Less so on iOS where you can't bring along a JIT, but for UI stuff you might not actually need it, and then even better memory savings.) There is definitely also a code-footprint issue, in that it can be 20MB to bundle it along without version-fragile build hackery. I would strongly caution against trying to do a CSS implementation for a random UI toolkit; the cascade alone is surprisingly tricky to get right, uncanny-valley APIs are a real hazard, and a bunch of things really only make sense with the CSS flow and box model. There's so much out there for JS devs, and using web tech lets UI development use the powerful browser tools and edit/reload model, so I think it's really a much better choice than Lua these days. Futures! Edit: aerique posted:It's been a good couple of years since I played with Ogre3D but didn't Awesomium do that already?: http://labs.awesomium.com/the-rationale-for-awesomium/ I think EQN also uses Awesomium, though I'm not sure if they do so for UI.
|
# ¿ Mar 24, 2014 18:25 |
|
Paniolo posted:I agree, I think the more likely solution is a stripped down HTML renderer + JavaScript VM. Awesomium took the approach of embedding an entire browser, which was okay for applications that actually wanted to have a full featured browser inside pulling content from the web (I think Guild Wars 2 auction house worked like this) but was not really appropriate for being a general purpose UI. That would be ok, if you get the stripping at the right place (hard). I don't think you get much that's different functionally, though. With HTML you want and get the DOM and event flow (though you want flexbox, really truly), and most layout engines are a little hard to bring up without the networking layer intact. Just use the parts you want, and figure out a way to not pay for a full document context for each UI window if it causes memory woes. The rest of the browser stuff (like history and cache and plugin hosting and save dialogs and so forth) is usually easy enough to disable or just not trigger.
|
# ¿ Mar 25, 2014 04:13 |
|
Suspicious Dish posted:You need to measure more. Reallocation can be thousands of times more expensive than iteration. Looping through 50 elements 60 times per second is absolutely nothing, and I'd actually expect it to be considerably less expensive than reallocating three times a second. All reasonable container implementations amortize reallocation to reduce that cost. Does your favorite STL implementation actually reallocate here other than when growing to a new maximum size? Removing an item from a vector should result in a copy for each one following in the vector, but I don't believe it'll reallocate. If it did, erase could fail due to out of memory errors, which is pretty bad ergonomics even for C++, and it would be sort of an odd fit with reserve's semantics. You typically have to explicitly trim to get containers to reduce their underlying allocations (shrink_to_fit in C++11) -- and if you do, watch out for hysteresis. This also means that a subsequent emplace_back will use the existing storage, meaning it's again just a copy operation for the new element. I don't think this is an issue requiring measurement, given the guarantees of the STL. But if what you're doing is front-to-back iteration and removal from the middle, maybe use a forward_list? (erase can throw for removals that don't include the last element, because copy constructors can throw, but if you're removing a proper tail then it provides a no-throw guarantee. I haven't looked for spec text explicitly about it, but that definitely seems to rule out reallocation on erase.)
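A small sketch of the capacity argument above: erasing from the middle of a `std::vector` copies the later elements down one slot but leaves the allocation alone, and a subsequent `push_back` reuses that storage.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Erase from the middle of a vector and confirm the storage is untouched:
// capacity() stays the same, and re-adding an element needs no reallocation.
bool erase_keeps_capacity() {
    std::vector<int> v;
    for (int i = 0; i < 50; ++i) v.push_back(i);

    const std::size_t cap = v.capacity();
    v.erase(v.begin() + 10);          // copies elements 11..49 down one slot
    bool ok = (v.capacity() == cap) && (v.size() == 49);

    v.push_back(99);                  // fits in the existing allocation
    return ok && (v.capacity() == cap);
}
```

(Iterators before the erase point stay valid by spec, which is what rules out a reallocation here; only an explicit shrink_to_fit gives memory back.)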
|
# ¿ Mar 28, 2014 17:14 |
|
Stick100 posted:If you're a C++ dev, then pay the $19 for one month and grab Unreal Engine 4 and cancel the plan. Note, though, that you're still obligated to pay the 5%-of-gross royalty when you ship.
|
# ¿ Mar 28, 2014 19:56 |
|
Shalinor posted:None whatsoever. It's just a thing. I'm way too far into my current project to retrofit that stuff in, and I didn't want to fiddle with it at the start. It'll be at the core of our next project, and anything else we do with Unity most likely. Would that cause problems with 3rd party plugins expecting to use the standard resource loading mechanisms?
|
# ¿ Apr 3, 2014 00:08 |
|
Flownerous posted:I think the main problem is that games are simulations (of fantasy universes in most cases), and so it's often hard to define a test for whether the simulation ran correctly. Even physics is difficult to predict without running the simulation yourself. Many programs are like simulations: take initial state, perform operation, capture result state. If that operation depends implicitly on a lot of non-input state, it's hard to test, but with TDD you sort of inherently avoid that, because you ipso facto have code that's reasonably testable. You probably reason about the mechanics of the game in relatively isolated ways: "when they press jump, they arc through a height of X units and end Y units farther ahead." Not "when there's lava they jump over it if they jump from this close." One test approach could be to put the player at an origin, initiate a jump, track the height as you advance the simulation, and run a few steps of the simulation beyond where they hit back to zero to make sure they don't end up too low or whatever. This is easier if your player/physics logic is distinct from rendering and game setup, but many things will be. I was a TDD skeptic for a long time, until I realized that it wasn't a way to end up with the same code and some more tests. You end up with different code, as an effect of writing it with tests front of mind. You might or might not find that different style of code to be desirable, but I found it way easier to work with, especially when I was iterating on a lot of parts and wanted confidence that changing the log format hadn't broken something else. The core of testing is that same kinda-functional input->output verification that most behaviour boils down to (for me). If the code is amenable to testing, I'm also just more likely to actually write tests.
I'll admit that I'm a bit lapsed and sometimes end up trying to retrofit tests (usually after some very frustrating bug was introduced); I almost always wish I'd had tests in mind all along. The *trivial* game stuff I've done has so far let me write non-graphical/command-line tests for most mechanics, though it was annoying to build map/board state in some cases until I wrote a bit of tooling. The systems that were tied deeply into some other state were the ones that caused clenching when I had to change them (like shaders). My background definitely influences my position here, though. I spent a decade-plus working on browsers, where the test-friendliness of things determines whether "make sure we handle network timeout well" is feasible or a fantasy, and things like system text measurement can ruin your day. Diffs that decoupled systems were acts of righteousness, because they made things easier to work with, and less likely to suffer from breakage at a distance.
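A minimal headless version of the jump test sketched above. Everything here (the Player class, the gravity constant, the numbers) is invented for illustration; the point is that the mechanic runs with no renderer in sight.

```python
GRAVITY = -10.0

class Player:
    """A toy kinematic player: jump sets vertical speed, step integrates."""
    def __init__(self):
        self.y = 0.0
        self.vy = 0.0

    def jump(self, speed=5.0):
        self.vy = speed

    def step(self, dt=0.1):
        self.vy += GRAVITY * dt
        self.y = max(0.0, self.y + self.vy * dt)  # clamp at the ground

def run_jump(player, steps=50):
    """Initiate a jump, advance the sim, return (peak height, final height)."""
    player.jump()
    peak = 0.0
    for _ in range(steps):
        player.step()
        peak = max(peak, player.y)
    return peak, player.y
```

A test then asserts "they arc through a height of X units and come back down", with a few extra steps past landing to catch fall-through bugs.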
|
# ¿ Apr 4, 2014 21:00 |
|
xgalaxy posted:That's a result of their technology stack though and avoidable. I wish Arena published more about their hot-update stuff. I mean, you see it in a lot of non-game contexts (not much scheduled outage for FB or Pinterest or GitHub or imgur, though they do a ton of maintenance and code deployment), but as you point out it's definitely not the norm in gaming. GW's instance-heavy design helps a bunch, I think, similarly to how the browser model helps some other services. I don't think it's a prerequisite, though, and most back-end services are themselves somewhat instanced. That said, if I were building a game, "eliminate production downtime" probably wouldn't be enough to get me to do the extra work for dealing with version handoffs (in both directions). The extra agility probably would, but that depends on having things you actually want to deploy all the time.
|
# ¿ Apr 23, 2014 02:16 |
|
They're only "simple" to the extent that everything is instanced. Most games don't have seams around chat, banking/storage, auction houses, the mail system, etc. Those things don't make it impossible, but they mean that you need new and old systems to collaborate pretty well for what could be a relatively lengthy time.
|
# ¿ Apr 24, 2014 07:21 |
|
roomforthetuna posted:Any remotely large-scale game (anything with multiple servers) has those things separated out anyway, because that stuff is always communicating between instances and between servers. Yes, but load-spreading for federated things is harder to manage when you have different versions in play. Dungeon instances don't have to talk to each other, so if they're each playing by slightly different rules then things are probably still fine. If your guild bank has multiple personalities because there's more than one version sharing the load, it gets harder to maintain your invariants (and really hard to do effective testing of these cross-fading scenarios before deploying). I'm not saying it's impossible by any means, and I've always enjoyed working on distributed systems, but I don't think the earlier characterization of "easy, just use the existing seams" is reasonable.
|
# ¿ Apr 24, 2014 15:21 |
|
Dr. Stab posted:That's just the convention. Push the middle button on the controller to turn on the system. It also powers on the controller, I think, so it makes sense that the other buttons wouldn't do anything while the controller was powered down.
|
# ¿ Apr 29, 2014 02:24 |
|
serious norman posted:I'm currently in the process of developing a tile-based web game coded in html5/js/pixi. However the game design requires me to have huge maps, like 1000 x 1000 tiles or so. What's the "best practice" on defining such a map? A big 2D-array is out of the question I gather. Any ideas? I'm fairly new to js. TypedArrays should give you very compact representation in JS engines that support them. Without that, most JS engines will use 4*entries bytes of contiguous storage for arrays containing primitives, so you'd be looking at on the order of 4MB for the map data. Access would be quite fast too, since array access is almost always inlined on modern engines. (64-bit engines will probably use 8MB in the normal-array case.) 4MB/8MB shouldn't be a problem.
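A sketch of the flat TypedArray layout suggested above: a 1000 x 1000 map stored as one contiguous byte per tile (swap in Uint16Array or Uint32Array if you need more tile ids), with the 2D indexing done by hand.

```javascript
// One byte per tile: ~1 MB of contiguous storage for a million tiles.
const W = 1000, H = 1000;
const tiles = new Uint8Array(W * H);

// Row-major indexing; these tiny accessors inline well in modern engines.
function getTile(x, y) { return tiles[y * W + x]; }
function setTile(x, y, id) { tiles[y * W + x] = id; }

setTile(999, 999, 7);
```

TypedArrays also come zero-initialized, which is handy for "empty tile is 0" conventions.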
|
# ¿ May 5, 2014 02:06 |
|
OneEightHundred posted:"Feeling like you actually built something" is, in my opinion, not a good reason to rewrite things that already exist. I think building something that you want to build is a perfectly fine undertaking, though it's not necessarily the most efficient way to get to some farther end goal. Doing a tutorial isn't that different. I think lots of people write games for the journey and satisfaction of creation as much as for the unique thing that's created.
|
# ¿ May 8, 2014 16:53 |
|
I'm pretty surprised that Unity and Xamarin couldn't come to terms. I know the Xamarin guys really well and they're utterly reasonable. Using .NET on Windows for development would probably be a bad idea, because they'd have to worry about things that work there but not under the Mono runtime on other platforms. C#-to-C++ could give much better optimization compared to Mono AOT, especially related to allocator pressure and memory traffic. I think that's what Unity is seeing in practice too. In theory, the gap could be small, but in practice the Mono AOT is not near the theoretical limit.
|
# ¿ May 29, 2014 02:13 |
|
OneEightHundred posted:(i.e. who the gently caress thought it was a good idea to compile shaders at runtime and trust hardware vendors to write compilers, and why are they still embracing that decision?) Wait, aren't D3D shaders still compiled at upload time from the HLSL bytecode format to the vendor's secret sauce? There's no way the drivers and cards use the general form as their internal representation. (In part because the vendors all do specific optimizations in that compilation pass because they know the hardware details.) It does suck to have to send raw shader source to a driver rather than to a utility that can be sandboxed as a first pass, though. WebGL-atop-ANGLE has nicer security characteristics than WebGL-atop-GL in that way.
|
# ¿ Jun 20, 2014 16:33 |
|
My C# is rusty, but it seems like you could have an interface like code:
where you have a baseline unit for each dimension, like mL for volume, and then as<T> would convert to the baseline and then divide by T.fromMillilitres or whatever. Making them be types would let you use using to import the ones you want without having to explicitly prefix at each point of use. I haven't finished my coffee yet, so if I'm missing something big I apologize. Edit: actually, the ergonomics of your existing interface get better if the caller just does using Whatever.UnitsOfMeasure as UoM or something, no? Subjunctive fucked around with this message at 15:58 on Jul 2, 2014 |
# ¿ Jul 2, 2014 15:56 |
|
The King of Swag posted:The reason this wouldn't work is that structs in C# can implement interfaces, but they cannot inherit, so unless I went crazy with extension methods, every single unit would need to re-implement all the functionality of the others. Classes wouldn't be an option either, because obviously they're not value types. You're definitely on the money on how to proceed with the calculations though, and it's actually how it internally works already. When you perform a conversion (if you aren't converting to the same unit), everything gets converted into a native unit first, and then to the requested unit. If it wasn't done this way, either the number of calculations you'd need to write would be number of units^2, or it'd require a rats nest of branching. Because all calculations are a straightforward switch statement and multiplication, doing two conversions (to native and then to the requested unit) is actually considerably cheaper (processing wise) than the other methods. Hmm. Here's what I was thinking, untested and unlikely to compile: code:
code:
The King of Swag posted:I'm not sure I agree with this, only because I come from an Obj-C background, and abbreviations in interfaces are the devil's playthings. Especially abbreviations that are indiscernible out of context, just by reading the name and nothing else. Oh, I don't mean that the actual API would be UoM, I mean the consumer could make a shorthand if they found the full name unwieldy. code:
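A language-agnostic sketch of the convert-through-a-baseline-unit scheme discussed above: every unit stores its size in the dimension's baseline (millilitres for volume), so any-to-any conversion is two multiplications instead of an N^2 table of pairwise rules. Unit names and factors here are illustrative.

```python
# Size of each unit expressed in the baseline unit (millilitres).
ML_PER = {
    "ml": 1.0,
    "l": 1000.0,
    "floz": 29.5735295625,  # US fluid ounce, exact by definition
}

def convert(value, src, dst):
    """Convert via the baseline: src -> mL, then mL -> dst."""
    return value * ML_PER[src] / ML_PER[dst]
```

Adding a new unit is one table entry, and the hot path stays a dictionary lookup plus a multiply and divide.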
|
# ¿ Jul 3, 2014 01:36 |
|
The King of Swag posted:This would actually work, but I don't know how efficient it would be, as any struct referenced through an interface has to be boxed and unboxed. Speaking of efficiency, I currently cache the last conversion performed, so you can call the same conversion in repetition without need to store it as a separate variable. This unfortunately does mean that the size of the value doubles (which internally is a float and an enum), and I'm wondering if storing the cache is even worth it. My understanding is that the CLR optimizes away a lot of the boxing traffic, but I can't find a reference. Do you see a lot of repeated conversions? I think cache-in-a-local is a good client code pattern if a given app sees that as a hotspot, so don't make everyone pay for the cache maintenance.
|
# ¿ Jul 3, 2014 06:45 |
|
Inverness posted:Really? RakNet source has always been free to download. You still have to pay if you want to use it commercially. Not any more!
|
# ¿ Jul 7, 2014 21:15 |
|
If you want all the conversions to reverse well, you probably have to use something that preserves them as rationals. Such libraries exist, but they aren't super fast. Fixed-point will probably get you pretty close with a 64-bit value, though, and you won't have the classic IEEE754 0.1+0.2=??? sorts of problems.
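A sketch of the exact-rational approach, using Python's fractions module as a stand-in for "some library that preserves rationals". The fl oz factor is the exact US definition; treat the specifics as illustrative.

```python
from fractions import Fraction

# 1 US fluid ounce = 29.5735295625 mL exactly, kept as a rational.
ML_PER_FLOZ = Fraction(295735295625, 10**10)

def floz_to_ml(v):
    return v * ML_PER_FLOZ

def ml_to_floz(v):
    return v / ML_PER_FLOZ
```

Because every factor is a ratio of integers, round trips reverse exactly, and you dodge the binary-float artifacts (0.1 + 0.2 != 0.3) entirely at the cost of some speed.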
|
# ¿ Jul 8, 2014 00:50 |
|
I'm puttering around with a little map-building toy, and so far I've spent all my time on the placement and movement of rectangles. I want some simple things, and I'm hoping there's some toolkit out there that's waiting to love me:
- drag rectangles from a palette of different ones
- drag rectangles from one place to another, cancelling with ESC, not allowing drag off the grid or overlapping with other buildings
- paint rectangles by dragging from corner to corner, optionally being able to overlap with buildings or other painted rects
- grouping of shapes for duplicating patterns
- super neat if I can rotate groups
- easy to label the rects
I've been playing with FabricJS, and it's OK, but I feel like I'm doing that thing where I build the engine for so long that I get sick of it before I get to the part I really care about (letting people draw out a map and then generate another format from it). Web stuff or otherwise cross-platform stuff would be great, but failing that I'll take something that works on Windows or iOS or OS X.
|
# ¿ Jul 19, 2014 22:46 |
|
Shalinor posted:Generally, anything you'd legit want a non-pow-2 for? Is something you should be texture atlasing instead. UI elements and 2D hand-animated sprites being the usuals. Render-to-texture is the most common NPOT usage, I think.
|
# ¿ Jul 24, 2014 21:49 |
|
roomforthetuna posted:UE4's "flappy chicken" demo appears to be 28MB on Android. As some kind of person from medieval times this makes me sad - you could totally have played a game like that on a machine with 16K. Even allowing for higher resolution graphics and a reasonable amount of bullshit 3D interface wrangling surely we shouldn't be excusing this "everyone has infinite everything" approach to modern software development. We'll never get nice things if we keep making the same old poo poo take up the full hundreds-of-times expanded capacity of our computer-machines. Get off my lawn. How many ABIs does it support? It'll likely include multiple (3?) copies for different ARM variants, unless they do the work of splitting it into multiple packages.
|
# ¿ Aug 4, 2014 16:44 |
|
I've been getting the "maintenance mode" error page at https://www.unrealengine.com/register for the past day or so, don't see anything in their Twitters about it. Is there another way to sign up?
|
# ¿ Aug 4, 2014 23:58 |
|
Pseudo-God posted:If I have an arbitrary polygon defined by a list of points, how do I calculate this polygon's shadow? By shadow I mean something like this: What about:
- translate the shadow polygon
- remove vertices that are occluded by the original
- take the set of visible shadow vertices and their original counterparts
- wind CW to make the poly
I guess that doesn't work for non-convex polys though, hmm.
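For the convex case, the vertex-culling idea above collapses to something simple: the shape swept by translating a convex polygon is just the convex hull of the original vertices plus the translated ones. A monotone-chain hull sketch (this does not handle non-convex input, per the caveat above):

```python
def cross(o, a, b):
    """2D cross product of OA x OB; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, reversed(pts))):
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]

def swept_shadow(poly, dx, dy):
    """Convex polygon swept along (dx, dy): hull of original + translated."""
    return convex_hull(poly + [(x + dx, y + dy) for x, y in poly])
```

For non-convex polygons you'd want a proper polygon-union operation (the hull would bridge the concavities).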
|
# ¿ Aug 12, 2014 17:14 |
|
dupersaurus posted:There's the concept of the "virtual slice" which is sort of a pre-alpha proof-of-concept: If you're looking to Google more information about this, it's more commonly known as a "vertical slice". You might also think of it as "enough game to launch a Kickstarter", I guess.
|
# ¿ Aug 13, 2014 22:05 |
|
Above Our Own posted:Just last quarter Blizzard loving Entertainment released a game in Unity; it's still an excellent game engine, and being C# based is a major quality of life advantage for the programmer. I just wouldn't want anyone reading the thread only recently to think Unity isn't still a great tool. Blizzard made that technology choice quite a long time ago, probably before UE4 entered the scene. The tone in this thread is about what technologies to choose now, not inescapable doom for people who have chosen Unity for projects that are underway.
|
# ¿ Aug 16, 2014 04:17 |
|
If UE supports C++11 and in the nearish future follows C++14, you don't lose a lot from C# other than the (arguably very important, but) conversion of crash to exception. And, I guess, whatever C# libraries you've incorporated, but C++'s free thunk to C is a big help there. (~~~rust~~~)
|
# ¿ Aug 16, 2014 04:46 |
|
|
The Laplace Demon posted:Yeah, C makes hot reloading code pretty trivial. I don't know why more things don't take advantage of it. It's easy for trivial cases, less so when you have to fix up the vtables on all your existing objects and so forth, plus code references on the stack on other threads, etc. Not a trivial undertaking, in my experience, but if you get it robust it can be a huge win.
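The easy part is the function-pointer indirection that C hot reloading hangs off of: callers go through a pointer that the reloader swaps after dlopen()ing the new build. This sketch fakes the "reload" with two in-process versions (names invented); the vtable and live-stack fixups mentioned above are exactly what this toy doesn't have to face.

```c
#include <assert.h>

typedef int (*update_fn)(int);

/* "Old" and "new" builds of the same entry point. */
static int update_v1(int x) { return x + 1; }
static int update_v2(int x) { return x + 2; }

/* All calls route through this pointer, never a direct symbol reference. */
static update_fn game_update = update_v1;

/* A real reloader would dlopen() the fresh .so and dlsym() the new entry. */
static void hot_swap(update_fn next) { game_update = next; }
```

The trouble starts when long-lived objects hold pointers into the old code (vtables, callbacks, coroutine stacks); those all need fixing up or draining before the old module can be unloaded.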
|
# ¿ Aug 16, 2014 11:14 |