|
Reason posted:What are good places to find freelance work online? The freelance section of the OP is empty. A lot of the online freelance sites are very mixed. You'll get a lot of people who think a better version of eBay should cost $100, and on the other side, thousands of offshore coders who just poo poo out thousands of cut-and-paste lines of code that will reinforce the belief that these things are cheap. It's a lot of stress just so you can code at $20/hour.
|
# ¿ Aug 30, 2015 03:45 |
|
Reason posted:So it's not a good thing to do? I'm interested in it because I'm a stay at home dad looking to make money on the weekends/evenings, and I have two skills, web development and serving legal papers, and one of those things I can't do from home. If you are absolutely desperate for cash, or need SOME experience in software development, it may be worth your time. You are competing in a market with a lot of offshore providers who are also probably desperate for cash, and with ignorant customers who expect the world for pennies. If you can get a junior dev position paying $20/hour, you'd be better off in pretty much every respect. The social aspect is also important to evaluate. If I could do it just to pick up a few dollars solving problems, I would. But I don't want to spend a minute dealing with the types of people who end up posting work. I'd rather do manual labour than suffer them.
|
# ¿ Aug 30, 2015 04:34 |
|
v1nce posted:Any reason you can't do this with keyboard navigation? Add some hot-key focus grabbers and let them jump elements with the arrows. You could do this, as you can calculate the exact scroll position you would jump to. It would also make more UX sense, in keeping with what revmoo said. This way, you'd only be changing the behavior of the up/down keys when on that window, which is much less jarring and doesn't violate nearly as many expectations.
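The "calculate the exact scroll position" part can be reduced to a small pure function. This is a sketch under assumptions not in the post (a uniform row height and a fixed viewport height); in the page you'd call it from an ArrowUp/ArrowDown keydown handler and assign the result to the container's scrollTop.

```javascript
const ITEM_HEIGHT = 40;      // px per row, assumed uniform
const VIEWPORT_HEIGHT = 400; // px visible at once, assumed fixed

// Scroll offset that keeps item `index` fully visible given the current
// scrollTop. Returns the current value unchanged when no scroll is needed.
function scrollTopFor(index, currentScrollTop) {
  const itemTop = index * ITEM_HEIGHT;
  const itemBottom = itemTop + ITEM_HEIGHT;
  if (itemTop < currentScrollTop) return itemTop;          // item is above the view
  if (itemBottom > currentScrollTop + VIEWPORT_HEIGHT) {
    return itemBottom - VIEWPORT_HEIGHT;                   // item is below the view
  }
  return currentScrollTop;                                 // already visible
}
```

Because the jump target is computed rather than scrolled to incrementally, the up/down keys behave predictably and normal page scrolling is untouched.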
|
# ¿ Sep 1, 2015 04:40 |
|
Ghost of Reagan Past posted:So I'm trying to mess around with Node and Angular, and grabbed a Yeoman generator for Angular projects so I could dive in on Windows. It won't 'break' Dropbox, but it might have trouble syncing, and can take a long rear end time indexing all the files. The folder trees are not impossible to deal with, but can be a hassle. Windows Explorer can MOVE such folders, but cannot delete them. NPM and Node have no problem creating and reading the files, since the issue is not with NTFS but with Windows Explorer. Maybe try a sync batch file which zips & copies all the files? NPM 3.0 is supposed to fix the issue, but they seem to be dragging their feet with it. Ghost of Reagan Past posted:I guess I'll just throw it on Github then...this is a really silly issue and all I find on Google is the npm devs blaming Microsoft, and Microsoft saying "tough poo poo, we aren't changing anything." I just don't know why it has to create such giant directory structures. It's a really stupid issue. The NPM guys don't want to change it because they made a decision a long time ago which works on Linux, but doesn't work so well on Windows. Instead of changing their minds, they've blamed Windows for it, while snarkily maintaining that "Node fully supports Windows, however, some 3rd party tools (Windows File Explorer) do not support Node". There are other ways of perfectly representing a dependency graph, but they refuse to admit they might have chosen a poor method, and only very reluctantly started trying to fix it recently. Skandranon fucked around with this message at 01:28 on Sep 15, 2015 |
# ¿ Sep 15, 2015 01:23 |
|
piratepilates posted:NPM 3 beta is out with flat directories, just use that Did not know that! Will try it out tomorrow, I've had my own share of NPM/Windows issues.
|
# ¿ Sep 15, 2015 02:50 |
|
Ghost of Reagan Past posted:I don't know if it works with Yeoman yet but I just set up the project on my (Ubuntu) laptop synced to Github, so I'll just use npm3 beta and install the dependencies that way on my desktop later. I totally agree with you, but npm3 looks to solve all(most) of the issues with Windows, and the benefits are significant. Gulp is a great build tool for Angular apps (Grunt is the devil!).
|
# ¿ Sep 15, 2015 04:24 |
|
Parrotine posted:I'm deciding to sign up for an online course between Web Design and UX Design. Both are the same cost, but I feel that I should pursue the UX one because it would have a better chance of fleshing out my portfolio, as well as giving me a better advantage to getting my foot in the door. UX has its own strange things to learn, like how users get frustrated, how they pay attention to things, complementary colours, etc. It's a lot more than just building wireframes and HTML.
|
# ¿ Sep 15, 2015 21:57 |
|
an skeleton posted:I'm tasked with the job of evaluating if it is feasible to transition our web app from using MySQL to using MongoDB (as well as some other technologies). Do you have a specific reason to do this, or does someone think MongoDB is magically faster?
|
# ¿ Sep 17, 2015 21:15 |
|
an skeleton posted:It's because we are trying to move our team to the MEAN stack as a standard. The previous app, including the db structure was architected by someone who eventually got fired so the DB is a piece of crap anyway. If we have to rewrite some of the relational crap in the new app, fine, but I just want to evaluate whether that is a retardedly difficult task (it doesn't seem horrible but I've got <2 years experience under my belt). The main problem you'll have with Mongo is that, instead of having the database doing things like join or filter operations, you'll have to do them in your application layer. Which can mean transmitting a LOT of data to retrieve just a single item. For some scenarios, Mongo makes sense, but by no means all. I'd focus on moving everything else in the stack and stick with whatever DB you have, or start over again in Postgres or a new MySQL.
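To make "do the joins in your application layer" concrete, here's an illustrative sketch (the collections and field names are invented, not from the post): once the database no longer joins for you, you fetch both collections over the wire and stitch them together yourself.

```javascript
// Join orders to customers by hand, the way you would after pulling two
// Mongo collections into the app. Indexing customers by id first keeps
// this O(n + m) instead of O(n * m).
function joinOrdersToCustomers(orders, customers) {
  const byId = new Map(customers.map(c => [c._id, c]));
  return orders.map(o => ({ ...o, customer: byId.get(o.customerId) || null }));
}
```

Note that both full collections had to be transmitted to do this, which is exactly the "a LOT of data to retrieve just a single item" problem: the filtering the database would have done for free now happens after the network transfer.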
|
# ¿ Sep 17, 2015 21:49 |
|
The Merkinman posted:I'm sorry if this is off the current discussion, but how do you tackle performance (load time, size)? This is really a failure of the business analysis portion. They should have a specific goal for how fast an application should be for various things. For example, Google has a specific < 1s time limit for its queries. A product doesn't get released until it is faster than 1s. Saying "Make X faster" without a benchmark of what is good enough is a request that's impossible to really satisfy.
|
# ¿ Sep 18, 2015 16:43 |
|
The Merkinman posted:That's my point. How do I figure out what good enough is? Since one hasn't been provided to me (through the failure of the business analysis) how do I make one of my own to then tell the business analysts? Well, if you are the developer, and they give you vague requirements, pick the interpretation you like best. 0.01% faster IS faster. Conversely, you can send them some emails explaining how their requirements suck (need clarification). I don't know how contentious your relationship is with the people who do business analysis.
|
# ¿ Sep 18, 2015 19:28 |
|
kedo posted:Genuinely curious – why are you folks opposed to loading screens for a website? Loading screens exist in practically every other type of application, what's wrong with using them on the web? It's a UX thing... users will get frustrated if it takes longer than 2-3 seconds and are likely to leave your site for another one. So, it's better to get something shown as fast as possible, and break your loading into smaller chunks. People won't leave if just the calendar & twitter feed are loading, but the other content is visible, but will if the whole site looks unresponsive.
|
# ¿ Sep 18, 2015 21:22 |
|
an skeleton posted:Our app really isn't *that* complex. If we can't manage to get it to work decently in Mongo it's probably more our fault than anything. Will report back crying when I've realized how right you guys were If you're dead set on doing it in Mongo, go nuts. It's probably a bad decision, but if you are just looking for some after-the-fact justifications, say it works great with Angular and Node because it serves up JSON well.
|
# ¿ Sep 19, 2015 02:25 |
|
kiwid posted:I currently work in a corporate environment as the only webdev/programmer. I didn't originally start out in this role, instead I started out as a sysadmin. However, there just isn't enough work to keep me busy all day as a sysadmin so I started getting into the hobby of building web applications to ease other people's job roles as well. Well... 5 years later and I have like 15 web applications that I'm managing all the while still doing my sysadmin responsibilities. I'm finding it really hard to keep web apps updated and some of them are suffering from software rot. Also, I've brought up the fact that they should split the responsibilities and hire a new person to do the sysadmin stuff but they refuse. Merging them could help some, if only as it will be a concentrated effort to clean up your code. However, if their purposes are not explicitly linked, maybe you shouldn't have just One WebApp To Rule Them All. If there are some common tables being talked to, maybe you could consolidate the data layers into a few DLLs and reference those instead of having the code in 15 places. I would push to refactor the sites & look for a new job. You can then talk about how you merged them as part of your interviews!
|
# ¿ Sep 24, 2015 15:50 |
|
kiwid posted:An update to this. So they are hiring a manager for you? Are you being transitioned into a full-time developer role? Do you want this? Being passed over is usually a bad sign and it is unlikely there is anywhere to go now at this place. If you are transferred to full developer role, this is a good way to have a formal Developer title on your resume, but you should start getting ready to leave now. Maybe stay until you finish your refactoring, as it's a good resume builder, but since they just hired a new manager, they are unlikely to spring for any more money for you.
|
# ¿ Sep 24, 2015 19:28 |
|
kiwid posted:No when I said "hire someone", I meant hire someone to take over the roles of either the programming side or the sysadmin side. Apparently they've decided to split the IT roles into software and hardware and have given me the option of choosing which direction I'd like to move towards. Overall, I think this is a good thing. Ok, that is better. If you are choosing to go the Developer route, one thing to watch out for is making sure the role is transitioned fully. You don't want to end up effectively fixing all the mistakes the new guy makes forever.
|
# ¿ Sep 24, 2015 19:52 |
|
Khelmar posted:I'm open to suggestions to raze it all and start over, although I don't know if the board will ultimately go for it. There isn't much value in keeping what is already there. It would be easier for someone to start from scratch than to keep fitting things into the current design.
|
# ¿ Sep 29, 2015 16:26 |
|
revmoo posted:Mostly looking for Git-specific stuff but I'll go with the Github stuff if I can't find anything else. Maybe offer up Mercurial as a simpler version of Git? TortoiseHg does a really good job of providing a UI that does most of what your day to day stuff will be, but in a Windows friendly way.
|
# ¿ Oct 1, 2015 20:04 |
|
There is a TortoiseGit UI, have you tried to get the Windows users to use that?
|
# ¿ Oct 1, 2015 21:25 |
|
revmoo posted:It's funny, I've never tried a Git GUI ever. I've used GUI tools for merge conflicts, but never an actual Git program (or CVS/SVN). I've always used, and trusted, the CLI. I use a bunch of hotkeys to automate my Git workflow. Using a mouse seems so clunky. You may not, but if you are getting resistance from Windows users who don't like change, this may be a way to convince them it's not so bad. You can't win these fights just by proving how dumb they are for wanting to use SVN, you just make yourself look like a jerk.
|
# ¿ Oct 1, 2015 23:35 |
|
Knifegrab posted:OK so my website uses images that constantly change/evolve (sprite sheets mostly). Problem is that oftentimes this can lead to stupid caching issues. I have learned that if I reference the file using a "?[some_number_here]" and I change that number when the file changes the browser will force itself to re-download the file, it treats it as a new image. I have some projects where we do a similar thing with our bundled library/template/application files. We have a single gulp task which goes through any file that has a direct reference and injects a new UUID for each build, so when it is deployed, everything is referencing the version from that build. Should be able to do a similar thing for fetching your pictures. Edit: looks like so, uses gulp-preprocess code:
Skandranon fucked around with this message at 00:23 on Oct 22, 2015 |
# ¿ Oct 22, 2015 00:17 |
|
LP0 ON FIRE posted:I have a security question regarding security with a database and logging in. I'm using PHP and mySQL, but the concept of how to do this best probably matters the most. How does encrypting the data help secure anything if the IV is also in the same database?
|
# ¿ Nov 3, 2015 20:45 |
|
LP0 ON FIRE posted:No wait. Hashing doesn't guarantee a unique value. I know the chances are extremely small, but it doesn't seem right to me. You are hashing the passwords already, and have a similar 'problem', which is not worth worrying about. DarkLotus posted:When a user is added or changes their email address and you create the new hash, make sure it doesn't exist, if it does create a new one until it is unique. Don't do this, you'll just be needlessly checking the database. A good cryptographic hash should have such a low chance of collision as to not be worth worrying about, especially for this use. Skandranon fucked around with this message at 22:00 on Nov 3, 2015 |
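To put a number on "such a low chance of collision as to not be worth worrying about", here's the standard birthday-bound approximation, P ≈ 1 − e^(−n²/2d), for n hashes drawn from d possible values (the specific parameters below are illustrative, not from the thread):

```javascript
// Approximate probability of at least one collision among n random
// hashes of the given bit width. -expm1(-x) computes 1 - e^(-x)
// accurately even when x is far below floating-point epsilon.
function collisionProbability(n, bits) {
  const d = Math.pow(2, bits);
  return -Math.expm1(-(n * n) / (2 * d));
}
```

For a billion 160-bit hashes this comes out around 10^-31, which is why checking the database for duplicates is wasted effort; the same function shows why short hashes (32-bit checksums, say) really do collide quickly.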
# ¿ Nov 3, 2015 21:58 |
|
LP0 ON FIRE posted:Hashing passwords guaranteed that the user's row is unique by requiring that the username (email) is unique. Having two user passwords that have the same hash is virtually inconsequential. If you are really worried about hash collisions, use a GUID key instead of an auto-incrementing one.
|
# ¿ Nov 3, 2015 22:04 |
|
v1nce posted:This is a nice thought, but if you have a DB breach it probably won't matter. A dump is a dump is a dump. Chances are it'll contain everything and they'll still have all your IVs. This is what I was getting at, unless there is a lot of work going on that's not being discussed, encrypting as asked isn't going to be helping anything in the event of a real breach.
|
# ¿ Nov 4, 2015 03:30 |
|
Karthe posted:Does anything look off about this Angular filter implementation? This is sort of an aside, but there is little point in using the $filter service if you are not going to be calling your filters from your templates via |. If you are going to do the work in code, you might as well just do it with a for loop or something like Underscore or Lodash. Also, keep in mind that filters called via | are always executed at least twice, as it happens during the $digest loop, so if it is in any way expensive, moving it out of there is also a good idea to keep your $digests tight.
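For concreteness, "just do it with a for loop" in the controller can be as small as a plain Array.prototype.filter call, no $filter service and no | pipe in the template (the item shape and function name here are invented for illustration):

```javascript
// Filter a list down to active items whose name matches the query,
// in plain JavaScript. Runs once per call, not once per $digest.
function filterActive(items, query) {
  const q = query.toLowerCase();
  return items.filter(i => i.active && i.name.toLowerCase().includes(q));
}
```

Because this only runs when you call it (e.g. on a change event), it avoids the repeated re-evaluation that template filters incur during the $digest loop.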
|
# ¿ Nov 13, 2015 21:10 |
|
Karthe posted:Oh, I'd just started moving some filtering out of the views and into the controller after reading about that and thinking "there's no point in things filtering twice". I also viewed it as a preventative measure - what if the datasets I'm filtering balloon above the couple of records that I'm working with now? It seemed pragmatic, but the things I read left it unclear at what point filtering done in the view should be moved to the controller. Most of the web apps I work on require a significant focus on the $digest cycle, so I prefer to keep it doing as little as possible. If you are just doing a simple app, and there are already built in filters for what you want, go ahead. I don't like $filter because it's one of those Angular features that seems like magic, but scales poorly.
|
# ¿ Nov 13, 2015 21:57 |
|
Karthe posted:Oh, your point is to skip $filter completely! I see. Then it should be just as good to angular.copy() a controller variable, for() through it to remove unneeded values, and then re-set that new object back to the controller variable? There probably isn't a reason to even use angular.copy(), just create a new variable as the result set, and push the values you want to it. Keep 2 sets of information: the full set (probably stored in a service somewhere, which only changes when getting from the server or whatever) and the filtered set in the controller. The controller is responsible for assigning the filtered set, but doesn't need to create new values, just needs to add the references to the filtered set. When you assign the new filtered set, this will trigger any $watches you have keeping an eye on it, but you won't have created an entirely new set of records for it.
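A minimal sketch of that two-set pattern (names are illustrative; in a real app the full set would live in an Angular service): the filtered array is new, but the records inside it are the same objects, so nothing gets copied.

```javascript
// Full set: owned by a service, replaced only on a server fetch.
const fullSet = [];
function setFullSet(records) { fullSet.length = 0; fullSet.push(...records); }

// Filtered set: built by the controller. Pushes references, never copies.
function buildFilteredSet(predicate) {
  const filtered = [];
  for (const record of fullSet) {
    if (predicate(record)) filtered.push(record);
  }
  return filtered; // assigning this to the scope triggers any $watches on it
}
```

The key check is identity: an item in the filtered set is `===` the corresponding item in the full set, so edits to a record show up in both views without any syncing code.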
|
# ¿ Nov 13, 2015 22:29 |
|
Data Graham posted:Heh, thanks for this link. Rings nice and true for me. I just came from a day at work where I found myself unexpectedly in charge of a team spread across two cities trying to figure out why our internal system (basically nothing more than your bog-standard tabular data display) had started out fast when it was new but now was so goddamned slow even to just scroll the browser that it was basically unusable. After half an hour fumbling around in a screensharing group with timeline viewers and Django-toolbar'ing the API endpoint and adding debug statements everywhere to try to figure out why it took five seconds just to render the page after getting the API response, it comes out that they had decided to build the thing on AngularJS and were building the entire table view dynamically on the client side after pulling the entire, unpaginated data set via the API call. Well of loving course it's going to get slower over time, geniuses. That API call is returning 8000 rows now, all of which the client has to juggle and filter and render into HTML, and every time any event fires it reevaluates the entire freaking app, and you'll be lucky if the client performance only degrades linearly rather than geometrically with data size. This is like the textbook case where you absolutely do not want to use Angular. Filter and paginate the loving thing on the server like people have been doing for 20 years, you're not going to suddenly reinvent the concept of a tabular data view and create an amazing new user interface paradigm on some internal tool. This should not take anywhere near a year... there are a number of readily available virtual scroll directives available which will allow you to easily render arbitrarily large datasets without pagination. I've gotten a lot of use out of https://github.com/kamilkp/angular-vs-repeat.
|
# ¿ Nov 26, 2015 01:24 |
|
Data Graham posted:Nice. I assume this deals with selecting paginated slices from the API call? The demo just synthesizes an array of X size, no external data source. I assume one of the purposes of this is to avoid having to pull all 8000 rows from the API with every page load? Because goal 1 for me would be to not have to worry about what happens at 80,000. It takes your data source and creates a virtual scrollbar. Only the elements visible are actually rendered, and as you scroll through the list they will be dynamically added/removed. So rendering 1,000 rows is the same as rendering 1,000,000. I've built log viewers that can filter & render up to 2 million logs in the client with no problem. The only reason there is a max at 2 million is the available RAM to hold onto the data in Chrome (~1 GB).
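The core trick behind a virtual scrollbar fits in a few lines (this is a miniature illustration, not angular-vs-repeat's actual implementation; row height and function name are assumptions): from scrollTop, compute which slice of the data is visible and render only that.

```javascript
// Which rows should actually be in the DOM for the current scroll
// position. Cost depends on viewport size, not on data.length.
function visibleSlice(data, scrollTop, viewportHeight, rowHeight) {
  const first = Math.floor(scrollTop / rowHeight);
  const count = Math.ceil(viewportHeight / rowHeight) + 1; // +1 for a partially visible row
  return { first, rows: data.slice(first, first + count) };
}
```

Since only the visible window is ever materialized, a million-row array renders as cheaply as a thousand-row one; the full data just has to fit in memory.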
|
# ¿ Nov 26, 2015 01:41 |
|
Data Graham posted:Yeah, but I mean I'm not keen on a) using that much browser RAM or b) transferring that much data with every call to the data source. I don't want a simple page load to be pulling a million records, regardless of how it gets rendered. Right, sorry, I misunderstood your question. There are some other directives that you could use to also add in loading your data in pages, I just can't think of one off the top of my head. This one directly addresses the AngularJS rendering issue.
|
# ¿ Nov 26, 2015 01:46 |
|
darthbob88 posted:What's the Best Practices way to organize an API to use WebSockets? Working on a personal project, I've decided that I'm probably going to need a WebSocket server for sending data to the user, so I'm looking into this library, which seems easy to work with. Given the example on that page, the best option for organizing the API seems to be either a single server, with each call handled in a large OnMessage handler, or a server instance for each API call, but neither feels quite right to me. A single OnMessage handler is just how Websockets works, but that doesn't mean all your code has to be in there, that's just the entry point for your messages. I'd start with a simple single server, with a well organized OnMessage handler which identifies the message, and routes to the appropriate handler.
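The "well organized OnMessage handler which routes to the appropriate handler" could look like the sketch below. The `{ type, payload }` message shape and handler names are assumptions for illustration, not part of any WebSocket library's API:

```javascript
// One entry point, many handlers: route each parsed message by its
// type field instead of growing a single giant switch.
const handlers = {
  'chat': payload => `chat: ${payload.text}`,
  'ping': () => 'pong',
};

function onMessage(rawJson) {
  const msg = JSON.parse(rawJson);
  const handler = handlers[msg.type];
  if (!handler) throw new Error(`unknown message type: ${msg.type}`);
  return handler(msg.payload);
}
```

Adding an API call then means adding one entry to the handlers map, so the single-server design stays manageable as the API grows.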
|
# ¿ Nov 26, 2015 17:56 |
|
huhu posted:For my next project I'm going to help an NGO (or fail and tell them they need to pay someone) design a world map of all their projects with a popup showing some basic information like who where and what. I haven't branched into this kind of stuff except for some potentially useful experience with jQuery and JavaScript. Ideally the popup information will be stored in an Excel sheet or easily converted to some other database format. What would I need to learn to get a very basic setup up and running? Don't try to use the Excel file on the website, build something that reads it at compile time and converts it to something usable by whatever your website will use.
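The "convert it at build time" idea can be as simple as exporting the Excel sheet to CSV once and running a small Node script over it. This is a naive sketch (column names are invented, and it doesn't handle quoted commas; a real build would use a proper CSV or xlsx parser):

```javascript
// Turn a CSV export of the spreadsheet into an array of objects the
// map page can load as JSON. First row is treated as the header.
function csvToProjects(csv) {
  const [headerLine, ...rows] = csv.trim().split('\n');
  const headers = headerLine.split(',');
  return rows.map(row => {
    const cells = row.split(',');
    return Object.fromEntries(headers.map((h, i) => [h, cells[i]]));
  });
}
// In a build script you'd then write the result out, e.g.:
// fs.writeFileSync('projects.json',
//   JSON.stringify(csvToProjects(fs.readFileSync('projects.csv', 'utf8'))));
```

The website then only ever sees plain JSON, which every mapping library can consume, and the NGO keeps editing their spreadsheet as before.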
|
# ¿ Nov 30, 2015 05:59 |
|
nexus6 posted:I'm being put on a project that will essentially need to be a webapp/HTML & JS game that LocalStorage is pretty much all you can go on, the browser doesn't really have any other options for storage. How is the game loaded into the browser if there is no internet connection? Is it purely from cache, or is a web server installed on the client device to serve up the HTML/JS assets? If so, you COULD embed a REST API into that web server that lets you store things into some sort of SQLite, but this isn't really that different from LocalStorage. Are you using ServiceWorkers? You could look into that, though it is again just a sort of middle-man for these type of things, doesn't materially change what needs to be done.
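Since localStorage only stores strings, the usual move is a thin JSON wrapper. In this sketch the storage backend is injected (an assumption made so it runs outside a browser); in the page you'd pass `window.localStorage`:

```javascript
// JSON get/set wrapper over any localStorage-shaped object.
function makeStore(storage) {
  return {
    set(key, value) { storage.setItem(key, JSON.stringify(value)); },
    get(key) {
      const raw = storage.getItem(key);
      return raw === null ? undefined : JSON.parse(raw);
    },
  };
}

// Map-backed stand-in with the same setItem/getItem shape, for tests
// or non-browser environments.
function memoryStorage() {
  const m = new Map();
  return {
    setItem: (k, v) => m.set(k, String(v)),
    getItem: k => (m.has(k) ? m.get(k) : null),
  };
}
```

Injecting the backend also makes it trivial to later swap localStorage for something else (the REST-to-SQLite idea above, say) without touching game code.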
|
# ¿ Dec 10, 2015 18:52 |
|
nexus6 posted:Cool, I was just checking I wasn't doing anything dumb or missing out on a better way to do it. Meteor and PouchDB both appear to offer automatic syncing but I guess in my case it's not a huge deal if it doesn't sync magically by itself. If this is being deployed to actual customers, there should probably be a formal QA process before it actually gets to them. Bugs are inevitable, you have to plan for that. "Ship it and let the customers do QA" is not a great plan.
|
# ¿ Dec 15, 2015 17:20 |
|
The Merkinman posted:I have this SPA, not written in Angular/React/Anything special. You could check the top property of the button in question and then change the scroll position based on that.
|
# ¿ Dec 17, 2015 15:34 |
|
The Merkinman posted:Isn't that just the CSS top property? You can either do what Lumpy suggested: Lumpy posted:This. A ghetto solution is to give the button an ID, then after you insert your content, navigate to myApp.html#buttonID. Or use a timeout to do the recalculation X ms after the button click happens.
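The timeout option, sketched out (names and the browser wiring are illustrative, not from the thread): the scroll correction itself is just arithmetic, and the setTimeout only exists to let the DOM settle after the insertion.

```javascript
// New scroll position after content of `insertedHeight` px is added.
// Only shifts when the insertion happened above the current viewport;
// content added below doesn't move what the user is looking at.
function correctedScrollTop(oldScrollTop, insertedHeight, insertedAboveViewport) {
  return insertedAboveViewport ? oldScrollTop + insertedHeight : oldScrollTop;
}

// In the page, roughly (untested browser sketch):
// button.addEventListener('click', () => {
//   insertContent();
//   setTimeout(() => {
//     window.scrollTo(0, correctedScrollTop(window.scrollY, addedPx, true));
//   }, 0);
// });
```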
|
# ¿ Dec 17, 2015 19:02 |
|
an skeleton posted:We're moving to a team standard because boss says so. I'm technically an intern, although I've been here a while so I'm basically a regular developer, but mostly I'm just happy to have the opportunity to work with Mongo. The database structure is simple enough that it seems like it could work under a relational or non-relational DB. Anyways I'm just following orders and trying to make the transition as smooth as possible. But now you are documenting the hell out of your schemas... so it sounds like you really wanted an SQL DB from the beginning.
|
# ¿ Jan 5, 2016 01:43 |
|
an skeleton posted:I get that, and I highly doubt the data we have is 100% fully optimal for Mongo and SQL would probably work just fine. However, I also don't think our data is super complicated, we do maybe *A* join here or there, never 3 or more. So my estimate is that it will be a perfectly functional but mediocre tool for the job. However, I also have basically 0 chance of influencing a change to MySQL at this point because my boss wants a working MEAN stack app as part of our new standard. Therefore, I will just try and build the best MEAN app that I can. I'm also early in my career and trying to grasp new technologies so it coincidentally benefits me more to use Mongo. You know the M in MEAN can be MySQL, right?
|
# ¿ Jan 7, 2016 07:16 |
|
kedo posted:If I were SO I would immediately start injecting the following into people's clipboard when they copy something off the site. Do I have to include that for all my for-loops from now on, or just the first?
|
# ¿ Jan 15, 2016 23:13 |