Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Reason posted:

What are good places to find freelance work online? The freelance section of the OP is empty.

A lot of the online freelance sites are very mixed. You'll get a lot of people who think a better version of eBay should cost $100, and on the other side, thousands of offshore coders who just poo poo out thousands of cut-and-paste lines of code, which reinforces the belief that these things are cheap. It's a lot of stress just so you can code at $20/hour.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Reason posted:

So it's not a good thing to do? I'm interested in it because I'm a stay-at-home dad looking to make money on the weekends/evenings, and I have two skills, web development and serving legal papers, and one of those I can't do from home.

If you are absolutely desperate for cash, or need SOME experience in software development, it may be worth your time. You are competing in a market with a lot of offshore providers who are also probably desperate for cash, and ignorant customers who expect the world for pennies. If you can get a junior dev position paying $20/hour, you'd be better off in pretty much every respect. The social aspect is also important to evaluate. If I could do it just to pick up a few dollars solving problems, I would. But I don't want to spend a minute dealing with the types of people who end up posting work. I'd rather do manual labour than suffer them.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

v1nce posted:

Any reason you can't do this with keyboard navigation? Add some hot-key focus grabbers and let them jump elements with the arrows.

You could do this, as you can calculate the exact scroll position you would jump to. It would also make more UX sense, in keeping with what revmoo said. This way, you'd only be changing the behavior of the up/down keys when on that window, which is much less jarring and doesn't violate nearly as many expectations.
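
Something like this, roughly. This is only a sketch of the idea: it assumes the jumpable elements share a made-up class name and have tabindex set so they can take focus.
JavaScript code:
// Arrow-key navigation between focusable elements; '.jump-target' is a
// placeholder selector for whatever elements the page actually uses.
var targets = Array.prototype.slice.call(document.querySelectorAll('.jump-target'));
var current = 0;

document.addEventListener('keydown', function (e) {
  if (e.key !== 'ArrowDown' && e.key !== 'ArrowUp') return;
  e.preventDefault(); // only hijack the arrow keys, nothing else
  current = (e.key === 'ArrowDown')
    ? Math.min(current + 1, targets.length - 1)
    : Math.max(current - 1, 0);
  targets[current].focus(); // needs tabindex="-1" or similar on the elements
  // scrollIntoView calculates the exact jump position for you
  targets[current].scrollIntoView({ behavior: 'smooth', block: 'start' });
});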

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Ghost of Reagan Past posted:

So I'm trying to mess around with Node and Angular, and grabbed a Yeoman generator for Angular projects so I could dive in on Windows.

I then wanted to move it to a Dropbox directory so I could play with it on my laptop, but npm generates these insanely deep directory structures that are completely impossible to deal with on Windows, because Windows has file path depth limits, and as far as I can tell the npm folks haven't fixed this in years. Is there some workaround, or is this just an npm thing that nobody wants to fix? I'm not even sure I should throw it in Dropbox because it might just break that, too.

It won't 'break' Dropbox, but it might have trouble syncing, and it can take a long rear end time indexing all the files. The folder trees are not impossible to deal with, but they can be a hassle: Windows Explorer can MOVE such folders, but it cannot delete them. NPM and Node have no problem creating and reading the files, since the issue is not with NTFS but with Windows Explorer.

Maybe try a sync batch file which zips & copies all files?
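
For instance, something like this Node script, using the archiver package from npm. It's an untested sketch; both paths are placeholders.
JavaScript code:
// sync.js - zip the project into one file so Dropbox syncs a single
// archive instead of thousands of deeply nested little files.
var fs = require('fs');
var archiver = require('archiver'); // npm install archiver

var out = fs.createWriteStream('C:/Users/me/Dropbox/my-angular-app.zip');
var archive = archiver('zip');

out.on('close', function () {
  console.log('Wrote ' + archive.pointer() + ' bytes');
});

archive.pipe(out);
archive.directory('my-angular-app/', false); // false = no wrapper folder inside the zip
archive.finalize();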

NPM 3.0 is supposed to fix the issue, but they seem to be dragging their feet with it.

Ghost of Reagan Past posted:

I guess I'll just throw it on Github then...this is a really silly issue and all I find on Google is the npm devs blaming Microsoft, and Microsoft saying "tough poo poo, we aren't changing anything." I just don't know why it has to create such giant directory structures.

:laffo: I'm trying it some other way and the paths are too long for the Microsoft build tools to do their poo poo. Get your poo poo together npm.

It's a really stupid issue. The NPM guys don't want to change it because they made a decision a long time ago which works on Linux, but not so well on Windows. Instead of changing their minds, they've blamed Windows for it, while snarkily maintaining that "Node fully supports Windows; however, some 3rd party tools (Windows File Explorer) do not support Node". There are other ways to represent a dependency graph perfectly well, but they refuse to admit they might have chosen a poor method, and only very reluctantly started trying to fix it recently.

Skandranon fucked around with this message at 01:28 on Sep 15, 2015

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

piratepilates posted:

NPM 3 beta is out with flat directories, just use that

Did not know that! Will try it out tomorrow; I've had my own share of NPM/Windows issues.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Ghost of Reagan Past posted:

I don't know if it works with Yeoman yet but I just set up the project on my (Ubuntu) laptop synced to Github, so I'll just use npm3 beta and install the dependencies that way on my desktop later.

This was entirely too complicated.

I totally agree with you, but npm3 looks to solve all(most) of the issues with Windows, and the benefits are significant. Gulp is a great build tool for Angular apps (Grunt is the devil!).
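
For what it's worth, a gulp build for an Angular app can start very small. The plugin names below are real packages, but the task layout is just an illustrative sketch.
JavaScript code:
// gulpfile.js - minimal example build (gulp 3.x style)
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

gulp.task('scripts', function () {
  return gulp.src('src/**/*.js')
    .pipe(concat('app.js'))    // bundle every app script into one file
    .pipe(uglify())            // minify the bundle
    .pipe(gulp.dest('dist/'));
});

gulp.task('watch', function () {
  gulp.watch('src/**/*.js', ['scripts']); // rebuild on change
});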

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Parrotine posted:

I'm deciding between signing up for an online course in Web Design or one in UX Design. Both cost the same, but I feel that I should pursue the UX one because it would have a better chance of fleshing out my portfolio, as well as giving me a better shot at getting my foot in the door.

I was asking because I've spent the entire year trying to learn the ins and outs of Front End Web Development, and I'm kind of burned out, frankly. It's something that's taking its sweet time for me to understand, and I need to make a career jump soon since my time window is almost up. I'm glad I'm wrong about my assumptions about the collaboration between UX and Front End, because it helps cement my decision to pursue the UX design course over the Web Design one. I feel, based on my experiences throughout this year, that I can continue to teach myself more about Web Design on my own without the aid of a course, while UX is something that is a bit more of a mystery to me. I was worried that pursuing UX over Web Design would potentially be throwing money down a hole if they weren't closely related in some way.

For me to continue my studies in Front End Development I need to pull in more income, and probably work on it on the side while I do a 9-5 job in a field that keeps me close to the action, such as UX. I've done a few internships as a UI designer in the past, and I'm thinking that if I market myself as a UXer with a solid understanding of both Web Design and Development, it'll make a company want to take a chance on me so I can get my foot in the door somewhere.

But I have to step away from FE Web Development for a bit because it's just murdering me on grasping these concepts. It'll probably take me another year of hard work before these ideas are as fluid as the bread-and-butter basics of Web Design. I'm going to have to sign up for some hands-on workshops where I meet with someone in person to put together various projects step by step, because reading a course explanation with a vague set of instructions on what they want me to put together is just not cutting it on my side of things.

UX has its own strange things to learn, like how users get frustrated, how they pay attention to things, complementary colours, etc. It's a lot more than just building wireframes and HTML.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

an skeleton posted:

I'm tasked with the job of evaluating if it is feasible to transition our web app from using MySQL to using MongoDB (as well as some other technologies).

Has anyone done this, and if so, how was the transition? Any advice? Should I write a script mapping the old data to the new document-based system or start more from "scratch"?

Do you have a specific reason to do this, or does someone think MongoDB is magically faster?

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

an skeleton posted:

It's because we are trying to move our team to the MEAN stack as a standard. The previous app, including the db structure, was architected by someone who eventually got fired, so the DB is a piece of crap anyway. If we have to rewrite some of the relational crap in the new app, fine, but I just want to evaluate whether that is an absurdly difficult task (it doesn't seem horrible, but I've got <2 years experience under my belt).


The main problem you'll have with Mongo is that, instead of having the database do things like join or filter operations, you'll have to do them in your application layer, which can mean transmitting a LOT of data just to retrieve a single item. For some scenarios Mongo makes sense, but by no means all. I'd focus on moving everything else in the stack and stick with whatever DB you have, or start over in Postgres or a new MySQL database.
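
To make that concrete, here's a rough sketch (Node driver, made-up collection and field names) of what a one-line SQL join turns into on the application side:
JavaScript code:
// Two round trips plus a manual merge - the work the SQL engine used to do.
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost/app', function (err, db) {
  if (err) throw err;
  db.collection('orders').find({ userId: 42 }).toArray(function (err, orders) {
    if (err) throw err;
    var productIds = orders.map(function (o) { return o.productId; });
    db.collection('products').find({ _id: { $in: productIds } })
      .toArray(function (err, products) {
        if (err) throw err;
        orders.forEach(function (o) {
          // stitch each order to its product by hand
          o.product = products.filter(function (p) {
            return String(p._id) === String(o.productId);
          })[0];
        });
        db.close();
      });
  });
});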

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

The Merkinman posted:

I'm sorry if this is off the current discussion, but how do you tackle performance (load time, size)?
I know there are tools to measure, but what am I measuring for? What is too slow? If I don't give an actual number, then others will want to add more and more features and umpteen marketing tags.
I was at a conference with other people from other companies recently and they all mentioned actually removing features/tracking to get the site to be faster, but gave no goal number.

This is really a failure of the business analysis portion. They should have a specific goal for how fast an application should be at various things. For example, Google has a specific <1s time limit for its queries; a product doesn't get released until it's faster than that. Saying "make X faster" without a benchmark for what is good enough is impossible to really satisfy.
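
Once you have a number, you can check it mechanically. A sketch using the browser's Navigation Timing API; the 1000ms budget is just an example figure.
JavaScript code:
// Compare actual page load time against an agreed budget.
var BUDGET_MS = 1000;

window.addEventListener('load', function () {
  var t = window.performance.timing;
  var loadTime = t.loadEventStart - t.navigationStart;
  if (loadTime > BUDGET_MS) {
    console.warn('Page load took ' + loadTime + 'ms, over the ' + BUDGET_MS + 'ms budget');
  }
});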

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

The Merkinman posted:

That's my point. How do I figure out what good enough is? Since one hasn't been provided to me (through the failure of the business analysis) how do I make one of my own to then tell the business analysts?

Well, if you are the developer, and they give you vague requirements, pick the interpretation you like best. 0.01% faster IS faster. Conversely, you can send them some emails explaining how their requirements suck (need clarification). I don't know how contentious your relationship is with the people who do business analysis.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

kedo posted:

Genuinely curious – why are you folks opposed to loading screens for a website? Loading screens exist in practically every other type of application, what's wrong with using them on the web?

I'd posit that if your site is going to take more than a couple seconds to load, it'd be better to show one followed by a quick reveal rather than showing your chugging application. Besides the fact that snapdrafts has a ~2 second fade-in, it's not offensive to me in concept.

e: Should note that I'm all for doing your best to ensure your site loads quickly, but I'm talking about situations where a longer load time is unavoidable.

It's a UX thing... users will get frustrated if it takes longer than 2-3 seconds and are likely to leave your site for another one. So it's better to get something shown as fast as possible and break your loading into smaller chunks. People won't leave if just the calendar & Twitter feed are still loading while the rest of the content is visible, but they will if the whole site looks unresponsive.
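
The chunked loading can be as simple as letting each slow widget fetch itself after the shell renders. A sketch with jQuery; the URLs and element IDs are placeholders.
JavaScript code:
// Main content ships in the initial HTML; slow widgets fill in on their own.
$(function () {
  $.get('/api/calendar', function (html) {
    $('#calendar').html(html);      // calendar pops in whenever it's ready
  });
  $.get('/api/twitter-feed', function (html) {
    $('#twitter-feed').html(html);  // independent of the calendar
  });
});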

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

an skeleton posted:

Our app really isn't *that* complex. If we can't manage to get it to work decently in Mongo, it's probably more our fault than anything. Will report back crying when I've realized how right you guys were.

If you're dead set on doing it in Mongo, go nuts. It's probably a bad decision, but if you are just looking for some after-the-fact justifications, say it works great with Angular and Node because it serves up JSON well.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

kiwid posted:

I currently work in a corporate environment as the only webdev/programmer. I didn't originally start out in this role; I started out as a sysadmin. However, there just isn't enough work to keep me busy all day as a sysadmin, so I started building web applications as a hobby to ease other people's job roles as well. Well... 5 years later and I have like 15 web applications that I'm managing, all while still doing my sysadmin responsibilities. I'm finding it really hard to keep the web apps updated, and some of them are suffering from software rot. Also, I've brought up the fact that they should split the responsibilities and hire a new person to do the sysadmin stuff, but they refuse.

So, to alleviate this, I was thinking about combining all of the web applications into one monolithic web application. A lot of the web apps have similar tables like the branches tables, customers tables, users tables, products tables, etc. Is this a good idea or am I opening myself up to other problems I'm not currently considering?

Merging them could help some, if only because it will be a concentrated effort to clean up your code. However, if their purposes are not explicitly linked, maybe you shouldn't have just One WebApp To Rule Them All. If they talk to some common tables, maybe you could consolidate the data layers into a few DLLs and reference those instead of having the code in 15 places.

I would push to refactor the sites & look for a new job. You can then talk about how you merged them as part of your interviews!

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

kiwid posted:

An update to this.

Today I was called into my boss's office totally unexpectedly, and apparently they are branching off my roles and are going to hire someone. So umm... which one of you goons is my boss?

So they are hiring a manager for you? Are you being transitioned into a full-time developer role? Do you want this?

Being passed over is usually a bad sign, and it is unlikely there is anywhere to go now at this place. If you are transferred to a full developer role, this is a good way to get a formal Developer title on your resume, but you should start getting ready to leave now. Maybe stay until you finish your refactoring, as it's a good resume builder, but since they just hired a new manager, they are unlikely to spring for any more money for you.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

kiwid posted:

No, when I said "hire someone", I meant hire someone to take over either the programming side or the sysadmin side. Apparently they've decided to split the IT roles into software and hardware and have given me the option of choosing which direction I'd like to move towards. Overall, I think this is a good thing.

Ok, that is better.

If you are choosing to go the Developer route, one thing to watch out for is making sure the role is transitioned fully. You don't want to end up effectively fixing all the mistakes the new guy makes forever.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Khelmar posted:

I'm open to suggestions to raze it all and start over, although I don't know if the board will ultimately go for it. :)

There isn't much value in keeping what is already there. It would be easier for someone to start from scratch than to keep fitting things into the current design.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

revmoo posted:

Mostly looking for Git-specific stuff but I'll go with the Github stuff if I can't find anything else.

This is such a horrible argument to even be having, it's like the company wants to go from Windows NT to Dos or something. :suicide:

It's even worse because I migrated the entire company over from SVN->Git last year. We've just got a new division that's a bunch of Windows guys that are afraid of change.

Maybe offer up Mercurial as a simpler version of Git? TortoiseHg does a really good job of providing a UI that covers most of your day-to-day work, but in a Windows-friendly way.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me
There is a TortoiseGit UI; have you tried getting the Windows users to use that?

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

revmoo posted:

It's funny, I've never tried a Git GUI ever. I've used GUI tools for merge conflicts, but never an actual Git program (or cvs/svn). I've always used, and trusted, the CLI. I use a bunch of hotkeys to automate my Git workflow. Using a mouse seems so clunky.

I mean, I've seen them in use plenty of times, but never felt the need.

You may not, but if you are getting resistance from Windows users who don't like change, this may be a way to convince them it's not so bad. You can't win these fights just by proving how dumb they are for wanting to use SVN; you just make yourself look like a jerk.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Knifegrab posted:

OK, so my website uses images that constantly change/evolve (sprite sheets mostly). Problem is that often this can lead to stupid caching issues. I have learned that if I reference the file using a "?[some_number_here]" suffix and change that number when the file changes, the browser will force itself to re-download the file; it treats it as a new image.

This is great and rad, but most of my site's image updating is handled automatically in cron, and I want to be able to cron this change to my CSS or HTML (adding the ?[numbers]). I figure I have a decent idea how to do it (using nodejs), but I was wondering if anyone out there knew of or could suggest a good practice for doing this?

I have some projects where we do a similar thing with our bundled library/template/application files. We have a single gulp task which goes through any file that has a direct reference and injects a new UUID for each build, so when it's deployed, everything references the version from that build. You should be able to do a similar thing for fetching your pictures.

Edit: looks like so, uses gulp-preprocess
code:
// Assumes the usual gulpfile setup; generateGUID, folders,
// applicationfilename, cssfilename, and output are defined
// elsewhere in our build script.
var gulp = require('gulp');
var preprocess = require('gulp-preprocess');

gulp.task('cache-bust', function () {
    var cacheguid = "?" + generateGUID(); // new GUID per build busts browser caches

    return gulp.src(folders.src + "**/*.html")
        .pipe(preprocess({ context: {
            APPLICATION_BUNDLE_PATH: applicationfilename + cacheguid,
            CSS_BUNDLE_PATH: cssfilename + cacheguid
        }}))
        .pipe(gulp.dest(output.getFolder() + ""));
});

Skandranon fucked around with this message at 00:23 on Oct 22, 2015

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

LP0 ON FIRE posted:

I have a security question regarding a database and logging in. I'm using PHP and MySQL, but the concept of how best to do this probably matters the most.

I have a table of users with auto-incrementing IDs, email addresses, and hashed passwords. The emails in the user table are required to be unique. When a user logs in, the code first checks whether there actually is a user with the email address they typed in, and then checks if the password is correct: it fetches the salt from another table according to their user ID, adds it to the password, and sees if the result matches the irreversibly hashed password stored in the database.

The password stuff is all good, but now I want to get extra secure by reversibly encrypting all the users' info, including their username, which is their email address. The encryption will use openssl_encrypt, and every user will have their own IV stored in a table keyed by their user ID.

My problem is that my user lookup will no longer work if their email address is encrypted with their own unique IV. I can't know which IV they use until their user ID is looked up. Please give me advice or ideas.

How does encrypting the data help secure anything if the IV is also in the same database?

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

LP0 ON FIRE posted:

No wait. Hashing doesn't guarantee a unique value. I know the chances are extremely small, but it doesn't seem right to me.

You are hashing the passwords already, and have a similar 'problem', which is not worth worrying about.

DarkLotus posted:

When a user is added or changes their email address and you create the new hash, make sure it doesn't already exist; if it does, create a new one until it is unique.

Don't do this; you'll just be needlessly checking the database. A good cryptographic hash should have such a low chance of collision as to not be worth worrying about, especially for this use.
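
For a sense of scale, the birthday bound puts the chance of any collision among n random b-bit hashes at roughly n²/2^(b+1). Quick arithmetic for a 256-bit hash:
JavaScript code:
// Odds that ANY two of a billion 256-bit hashes collide.
var n = 1e9;                          // a billion users
var p = (n * n) / Math.pow(2, 257);   // birthday bound approximation
console.log(p);                       // ~4.3e-60 - effectively zero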

Skandranon fucked around with this message at 22:00 on Nov 3, 2015

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

LP0 ON FIRE posted:

Hashing passwords guaranteed that the user's row is unique by requiring that the username (email) is unique. Having two user passwords that have the same hash is virtually inconsequential.

If you are really worried about hash collisions, use a GUID key instead of an auto-incrementing one.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

v1nce posted:

This is a nice thought, but if you have a DB breach it probably won't matter. A dump is a dump is a dump. Chances are it'll contain everything and they'll still have all your IVs.
Again, without the password the data should be useless. IVs are used to make sure the data, once encrypted, can't be easily compared with known encrypted strings.


This is what I was getting at: unless there is a lot of work going on that's not being discussed, encrypting as asked isn't going to help anything in the event of a real breach.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Karthe posted:

Does anything look off about this Angular filter implementation?

code:
var searchFilter = function(value, index) {
    console.log(value);
    return value.shifts[DateUtils.getDBDate(vm.shift.date_requested, true)] != null;
};

var filtered = $filter('filter')(employees, searchFilter);
console.log('filtered:', filtered);
I'm trying to filter out any object in employees that has a shift on the given date. However, the console.log() inside searchFilter never appears in the console so I'm wondering if the function is even being called. I know I'm doing something wrong because the "filtered" objects that appear below the $filter call aren't filtered at all, but without being able to see the value that gets passed into the filter I'm unable to figure out what I'm screwing up.

This is sort of an aside, but there is little point in using the $filter service if you are not going to call your filters from your templates via |. If you are going to do the work in code, you might as well just do it with a for loop or something like Underscore or Lodash. Also, keep in mind that filters called via | are always executed at least twice, since they run during the $digest loop, so if a filter is in any way expensive, moving it out of there is also a good idea to keep your $digests tight.
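
For instance, a plain function in the controller does the same job as the $filter call from that snippet (reusing the names from it):
JavaScript code:
// Same filtering as before, but with a simple loop - nothing touches
// $digest until you assign the result.
function filterByDate(employees, dateKey) {
  var result = [];
  for (var i = 0; i < employees.length; i++) {
    if (employees[i].shifts[dateKey] != null) {
      result.push(employees[i]);
    }
  }
  return result;
}

vm.filtered = filterByDate(employees, DateUtils.getDBDate(vm.shift.date_requested, true));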

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Karthe posted:

Oh, I'd just started moving some filtering out of the views and into the controller after reading about that and thinking "there's no point in things filtering twice". I also viewed it as a preventative measure - what if the datasets I'm filtering balloon above the couple of records that I'm working with now? It seemed pragmatic, but the things I read left it unclear at what point filtering done in the view should be moved to the controller.

Most of the web apps I work on require a significant focus on the $digest cycle, so I prefer to keep it doing as little as possible. If you are just doing a simple app, and there are already built in filters for what you want, go ahead. I don't like $filter because it's one of those Angular features that seems like magic, but scales poorly.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Karthe posted:

Oh, your point is to skip $filter completely! I see. Then it should be just as good to angular.copy() a controller variable, for() through it to remove unneeded values, and then re-set that new object back to the controller variable?

There probably isn't a reason to even use angular.copy(); just create a new array as the result set and push the values you want into it. Keep two sets of information: the full set (probably stored in a service somewhere, changing only when you fetch from the server or whatever) and the filtered set in the controller. The controller is responsible for assigning the filtered set, but it doesn't need to create new values; it just adds references to the filtered set. When you assign the new filtered set, this will trigger any $watches you have keeping an eye on it, but you won't have created an entirely new set of records for it.
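
A rough sketch of that split (service and endpoint names are made up):
JavaScript code:
// The service owns the full list; the controller owns a filtered view
// that holds references into it, never copies.
app.factory('employeeService', function ($http) {
  var all = []; // the one true copy, replaced only on a server fetch
  return {
    load: function () {
      return $http.get('/api/employees').then(function (res) {
        all = res.data;
        return all;
      });
    },
    getAll: function () { return all; }
  };
});

app.controller('RosterCtrl', function (employeeService) {
  var vm = this;
  vm.filtered = [];
  vm.applyFilter = function (predicate) {
    // new array, same object references - $watches fire, nothing is copied
    vm.filtered = employeeService.getAll().filter(predicate);
  };
});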

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Data Graham posted:

Heh, thanks for this link. Rings nice and true for me. I just came from a day at work where I found myself unexpectedly in charge of a team spread across two cities trying to figure out why our internal system (basically nothing more than your bog-standard tabular data display) had started out fast when it was new but now was so goddamned slow even to just scroll the browser that it was basically unusable. After half an hour fumbling around in a screensharing group with timeline viewers and Django-toolbar'ing the API endpoint and adding debug statements everywhere to try to figure out why it took five seconds just to render the page after getting the API response, it comes out that they had decided to build the thing on AngularJS and were building the entire table view dynamically on the client side after pulling the entire, unpaginated data set via the API call. Well of loving course it's going to get slower over time, geniuses. That API call is returning 8000 rows now, all of which the client has to juggle and filter and render into HTML, and every time any event fires it reevaluates the entire freaking app, and you'll be lucky if the client performance only degrades linearly rather than geometrically with data size. This is like the textbook case where you absolutely do not want to use Angular. Filter and paginate the loving thing on the server like people have been doing for 20 years, you're not going to suddenly reinvent the concept of a tabular data view and create an amazing new user interface paradigm on some internal tool.

Let's fumble around for a year trying to figure out how to do Angular properly! But for heaven's sake don't even suggest not using Angular...

This should not take anywhere near a year... there are a number of readily available virtual scroll directives which will let you render arbitrarily large datasets without pagination. I've gotten a lot of use out of https://github.com/kamilkp/angular-vs-repeat.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Data Graham posted:

Nice. I assume this deals with selecting paginated slices from the API call? The demo just synthesizes an array of X size, no external data source. I assume one of the purposes of this is to avoid having to pull all 8000 rows from the API with every page load? Because goal 1 for me would be to not have to worry about what happens at 80,000.

It takes your data source and creates a virtual scrollbar. Only the visible elements are actually rendered, and as you scroll through the list they are dynamically added and removed, so rendering 1,000 is the same as 1,000,000. I've built log viewers that can filter & render up to 2 million logs in the client with no problem. The only reason there's a max at 2 million is the RAM available to hold the data in Chrome (~1 GB).
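
Wiring it up looks roughly like this, going from memory of the README (check it for the exact options):
HTML code:
<!-- vs-repeat goes on the scroll container, the normal ng-repeat on the
     child; only the rows currently in view ever hit the DOM. -->
<div vs-repeat class="log-viewer">
  <div ng-repeat="entry in vm.logs track by $index">
    {{ entry.message }}
  </div>
</div>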

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Data Graham posted:

Yeah, but I mean I'm not keen on a) using that much browser RAM or b) transferring that much data with every call to the data source. I don't want a simple page load to be pulling a million records, regardless of how it gets rendered.

Right, sorry, I misunderstood your question. There are some other directives you could use to also add loading your data in pages; I just can't think of one off the top of my head. This one directly addresses the AngularJS rendering issue.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

darthbob88 posted:

What's the Best Practices way to organize an API to use WebSockets? Working on a personal project, I've decided that I'm probably going to need a WebSocket server for sending data to the user, so I'm looking into this library, which seems easy to work with. Given the example on that page, the best option for organizing the API seems to be either a single server, with each call handled in a large OnMessage handler, or a server instance for each API call, but neither feels quite right to me.

A single OnMessage handler is just how WebSockets work, but that doesn't mean all your code has to be in there; it's just the entry point for your messages. I'd start with a simple single server, with a well-organized OnMessage handler which identifies each message and routes it to the appropriate handler.
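
Something in this shape, say. The original library link didn't survive, so this sketch uses the 'ws' package from npm and a made-up { type, payload } message convention.
JavaScript code:
// One server, one message entry point, and a routing table so the
// handler logic stays out of the OnMessage callback itself.
var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 8080 });

var handlers = {
  getScores: function (ws, payload) { /* look up and send scores */ },
  saveScore: function (ws, payload) { /* validate and store the score */ }
};

wss.on('connection', function (ws) {
  ws.on('message', function (raw) {
    var msg = JSON.parse(raw); // convention: { type: '...', payload: {...} }
    var handler = handlers[msg.type];
    if (handler) {
      handler(ws, msg.payload);
    } else {
      ws.send(JSON.stringify({ error: 'unknown message type: ' + msg.type }));
    }
  });
});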

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

huhu posted:

For my next project I'm going to help an NGO (or fail and tell them they need to pay someone) design a world map of all their projects, with a popup showing some basic information like who, where, and what. I haven't branched into this kind of stuff except for some potentially useful experience with jQuery and JavaScript. Ideally the popup information will be stored in an Excel sheet or easily converted to some other database format. What would I need to learn to get a very basic setup up and running?

Don't try to use the Excel file on the website; build something that reads it at compile time and converts it to something usable by whatever your website will use.
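
As an example of that build step, here's a sketch using the 'xlsx' (SheetJS) package from npm; the file names are placeholders.
JavaScript code:
// convert.js - run once at build time, ship the resulting JSON.
var XLSX = require('xlsx');
var fs = require('fs');

var workbook = XLSX.readFile('projects.xlsx');
var sheet = workbook.Sheets[workbook.SheetNames[0]];
var rows = XLSX.utils.sheet_to_json(sheet); // one object per spreadsheet row

fs.writeFileSync('public/projects.json', JSON.stringify(rows, null, 2));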

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

nexus6 posted:

I'm being put on a project that will essentially need to be a webapp/HTML & JS game that
  • can be used on an iPad or a larger touchscreen monitor
  • can store data entered into fields in the app
  • can store the score achieved in the game
  • can (given an internet connection) upload stored data
  • can (given an internet connection) download data (e.g. previous game scores)
  • can work offline

Basically this will be used at various field events around the country at different dates. At the end of the campaign the best score will be picked for prizes.

The game part isn't an issue because I have found an example on codecanyon that I can use.

Annoyingly, the client can't guarantee that there will be a live internet connection when people are playing the game, so I can't write scores directly into a database.

I've kind of achieved this sort of thing previously by serializing data and storing it in localStorage. Later (when online) I'd have an 'Upload' button that sends all the localStorage data via AJAX to a server script to write into the database.

It was a real pain to set up, and because I don't have a Mac I can't view the contents of an iPad's localStorage or the error console.

Does anyone have a better way to solve this issue? I'm afraid the only alternative I have to doing this is quitting my job and if I ever have to do something like this again it will be a real option.

I'm not an iOS developer! Stop pitching offline iPad apps!

LocalStorage is pretty much all you can go on; the browser doesn't really have any other options for storage. How is the game loaded into the browser if there is no internet connection? Is it purely from cache, or is a web server installed on the client device to serve up the HTML/JS assets? If so, you COULD embed a REST API into that web server that lets you store things in some sort of SQLite database, but that isn't really that different from LocalStorage.

Are you using ServiceWorkers? You could look into that, though it is again just a sort of middle-man for these types of things; it doesn't materially change what needs to be done.
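
The queue-and-upload pattern you described can stay pretty small. A sketch; the endpoint and storage key are made up.
JavaScript code:
// Append scores to a localStorage queue while offline; flush when the
// browser reports a connection again.
function queueScore(score) {
  var queue = JSON.parse(localStorage.getItem('scoreQueue') || '[]');
  queue.push(score);
  localStorage.setItem('scoreQueue', JSON.stringify(queue));
}

function flushQueue() {
  var queue = JSON.parse(localStorage.getItem('scoreQueue') || '[]');
  if (!queue.length || !navigator.onLine) return;
  $.ajax({
    url: '/api/scores',
    method: 'POST',
    contentType: 'application/json',
    data: JSON.stringify(queue)
  }).done(function () {
    localStorage.removeItem('scoreQueue'); // only clear on a confirmed upload
  });
}

window.addEventListener('online', flushQueue);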

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

nexus6 posted:

Cool, I was just checking I wasn't doing anything dumb or missing out on a better way to do it. Meteor and PouchDB both appear to offer automatic syncing but I guess in my case it's not a huge deal if it doesn't sync magically by itself.

I just get really paranoid about this sort of thing because offline apps are out of my hands once they're in use and if anything goes wrong there's not a lot I can do. I once made a tiny syntax error in one project and they basically had to stop using the iPads until they were shipped back to me and I realized there was an errant ' character.

That and the fact that the people ultimately using these things aren't reliable and tell me they've uploaded more data than is actually in the database. I get blamed because they aren't counting correctly.

If this is being deployed to actual customers, there should probably be a formal QA process before it actually gets to them. Bugs are inevitable; you have to plan for that. "Ship it and let the customers do QA" is not a great plan.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

The Merkinman posted:

I have this SPA, not written in Angular/React/Anything special.
A user presses a button and content dynamically appears above the button. Depending on how much content there is, it pushes the page down, so it gives the appearance that the page has jumped (i.e., the button is no longer in the person's view).
Is there any way around this?
I tried figuring out how far they were scrolled before/after the dynamic content, but since some of the content is an image, my numbers are off. I don't see how I could just hardcode the height of the image, since it's responsive.

You could check the top property of the button in question and then change the scroll position based on that.
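
That is, something like this (the button id and the insert function are placeholders):
JavaScript code:
// Measure where the button sits before the insert, then scroll by
// however much it moved so it stays put in the viewport.
var button = document.getElementById('reveal-button');
var before = button.getBoundingClientRect().top;

insertDynamicContent(); // whatever the SPA does to add the content

var after = button.getBoundingClientRect().top;
window.scrollBy(0, after - before);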

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

The Merkinman posted:

Isn't that just the CSS top property?

I checked how far from the top the user was right when they clicked the button... $(document).scrollTop();, and subtracted that from the whole document height, $(document).height(), to see how far from the bottom the user was.
Then after the content gets injected, I take the new $(document).height() and subtract how far from the bottom they were, to figure out how far from the top they should be now.
Trouble is, that all gets calculated before the dynamic image loads, so once the image loads, the content pushes down again.

You can either do what Lumpy suggested

Lumpy posted:

This. A ghetto solution is to give the button an ID, then after you insert your content, navigate to myApp.html#buttonID.

Or use a timeout to do the recalculation Xms after the button click happens.
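
For example (recalculateScroll standing in for whatever re-runs your scroll math; the image load handler is an alternative to guessing a delay):
JavaScript code:
// Re-run the scroll math after the late-arriving image changes the height.
setTimeout(recalculateScroll, 300); // 300ms is a guess - tune it

// More precise: recalculate the moment the injected image actually loads.
$('#dynamic-content img').on('load', recalculateScroll);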

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

an skeleton posted:

We're moving to a team standard because boss says so. I'm technically an intern, although I've been here a while, so I'm basically a regular developer, but mostly I'm just happy to have the opportunity to work with Mongo. The database structure is simple enough that it seems like it could work under a relational or non-relational DB. Anyway, I'm just following orders and trying to make the transition as smooth as possible.

Notably, we're not just moving from MySQL because we hate SQL, but because the previous DB and app design/implementation of MySQL is... lackluster.

But now you are documenting the hell out of your schemas... so it sounds like you really wanted an SQL DB from the beginning.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

an skeleton posted:

I get that, and I highly doubt the data we have is 100% optimal for Mongo; SQL would probably work just fine. However, I also don't think our data is super complicated; we do maybe *A* join here or there, never 3 or more. So my estimate is that it will be a perfectly functional but mediocre tool for the job. However, I also have basically 0 chance of influencing a change to MySQL at this point, because my boss wants a working MEAN stack app as part of our new standard. Therefore, I will just try to build the best MEAN app that I can. I'm also early in my career and trying to grasp new technologies, so it coincidentally benefits me more to use Mongo.

Will report back to let you know if this blows up in my face.


You know the M in MEAN can be MySQL, right?

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

kedo posted:

If I were SO I would immediately start injecting the following into people's clipboard when they copy something off the site.

JavaScript code:
// "FOR LOOP"
// 
// Copyright (c) 2016 StackOverflow user PHPussyDestroyer
// http://stackoverflow.com/questions/34820332/bro-how-do-you-loop
// 
// Permission is hereby granted, free of charge, to any person 
// obtaining a copy of this software and associated documentation 
// files (the "Software"), to deal in the Software without restriction, 
// including without limitation the rights to use, copy, modify, merge, 
// publish, distribute, sublicense, and/or sell copies of the Software, 
// and to permit persons to whom the Software is furnished to do so, 
// subject to the following conditions:
// 
// The above copyright notice and this permission notice shall be 
// included in all copies or substantial portions of the Software.
// 
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF 
// ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED 
// TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 
// PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT 
// SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR 
// ANY CLAIM, DAMAGES 

for(i=0; i<10; i++) {  ....

Do I have to include that for all my for-loops from now on, or just the first?
