|
Grrr. Can't read.
|
# ? May 29, 2008 02:48 |
|
|
|
Oh snapz. I just needed to be patient after applying those indices. I guess that slowness is pretty much flat no matter the result size. So it only takes 5 minutes to dump 1.7 million records in the square table format. Hawt.
derdewey fucked around with this message at 09:35 on May 29, 2008 |
# ? May 29, 2008 09:28 |
|
This may not be a small question, but I'll ask it anyway: when is it a good idea to stop using MySQL? My company is interested in expanding a local site to a national level, and we're currently using PHP/MySQL. If the proposed plan is launched nationally, the database will grow exponentially, and rather than fix a huge problem down the road when there are more clients and more data to manage, it would be good to know what the limits of MySQL are, and when we'd hit them, so that an investment in SQL Server or Oracle could be made. Obviously this isn't a cut and dried question, but I have never used MySQL on a huge scale before.
|
# ? May 29, 2008 22:13 |
|
Stephen posted:Obviously this isn't a cut and dried question, but I have never used MySQL on a huge scale before. What do you consider "huge"?
|
# ? May 30, 2008 02:03 |
|
Stephen posted:Obviously this isn't a cut and dried question, but I have never used MySQL on a huge scale before. I worked on a PHP/MySQL site before that started small and got somewhat large. It now has about 20 million records or so in it and gets a lot of use. The performance issues we ran into had nothing to do with MySQL, though; they were all down to lovely queries, missing indexes, and amateur PHP code, so even with an Oracle backend we would have had the same troubles. So yeah, you should be fine, but plan on fixing some things and doing some tweaks here and there, as well as troubleshooting/optimizing slow queries.
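The troubleshooting loop described here is usually EXPLAIN first, index second. A minimal sketch (table and column names are made up for illustration):

```sql
-- Check how MySQL executes a suspect query.
EXPLAIN SELECT COUNT(*) FROM page_views WHERE page_id = 42;

-- If the plan shows a full table scan (type: ALL), add an index
-- on the filtered column and re-run EXPLAIN to confirm it's used.
ALTER TABLE page_views ADD INDEX idx_page (page_id);
```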
|
# ? May 30, 2008 13:07 |
|
I think the main problem we've had so far that I couldn't find a way around in MySQL was indexing a datetime column. MySQL really seems to gently caress up whenever I try to do this and I've read a lot of the same complaints. There are features to the website that are tracking page views/week etc. so querying by date on a views table with millions of entries is really problematic. Other than that, it's been working really well, I just don't want to find out 4 months down the road that MySQL is terrible at handling millions of records or something like that.
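For date-range reporting like views/week, the usual gotcha is that MySQL only uses an index on a DATETIME column when the WHERE clause compares against the bare column, not a function of it. A sketch under that assumption (schema names are hypothetical):

```sql
-- Index the datetime column directly.
CREATE INDEX idx_viewed_at ON page_views (viewed_at);

-- Sargable: the range is expressed against the bare column,
-- so the index can be used.
SELECT COUNT(*)
FROM page_views
WHERE viewed_at >= '2008-05-24 00:00:00'
  AND viewed_at <  '2008-05-31 00:00:00';

-- Not sargable: wrapping the column in a function defeats the index.
-- SELECT COUNT(*) FROM page_views WHERE WEEK(viewed_at) = 21;
```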
|
# ? May 30, 2008 14:43 |
|
I'm working with SQL Server 2005. It's the backend for an ASP page. Can I create a temporary table for each session for certain pages? Is the overhead involved insane (I'm looking at 1000 concurrent users)?
|
# ? May 30, 2008 18:51 |
|
Jethro posted:
this is exactly what I needed, thanks
|
# ? May 30, 2008 19:35 |
|
fez2 posted:I'm working with SQL Server 2005. It's the backend for an ASP page. Can I create a temporary table for each session for certain pages? Is the overhead involved insane (I'm looking at 1000 concurrent users)? I'm curious about the background of your project, and what exactly would cause you to ask if juggling ~1000 temp tables is acceptable overhead. Not saying it's not, maybe you have some crazy requirements.
|
# ? May 30, 2008 19:49 |
|
epswing posted:I'm curious about the background of your project, and what exactly would cause you to ask if juggling ~1000 temp tables is acceptable overhead. Yeah... but I don't see why fez couldn't just have one table, temp or not, keyed on session ID. What the hell is going on there?
|
# ? May 31, 2008 02:21 |
|
Actually it's not that crazy. It's a website to support agile development. Basically, we're modelling elements of that process. Each time an element is updated or inserted, we have to write it out to a history table. It's a piece of cake for the main table, but each element can have any number of references or relationships associated with it. I'm looking for a way to have users open the references for an element, change them, and then commit only one save, so that the history table doesn't get cluttered up with tons of atomic changes (it's supposed to be more of a version tracker). They kind of threw me to the wolves on this one. I'm a new hire and I hadn't touched ASP before this. I've got about 2 weeks to go before this is deployed, and this is the last major hurdle. A table keyed on session ID is a pretty good idea.
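The session-keyed staging table could look something like this on SQL Server 2005 — a sketch only, with invented table and column names. Pending reference edits accumulate per session, then get flushed to the real tables (and one history row written) in a single transaction on "save":

```sql
-- Hypothetical staging table for uncommitted reference edits.
CREATE TABLE pending_reference_changes (
    session_id  VARCHAR(64)  NOT NULL,  -- ASP session ID
    element_id  INT          NOT NULL,  -- element being edited
    change_data VARCHAR(MAX) NOT NULL,  -- serialized reference change
    changed_at  DATETIME     NOT NULL DEFAULT GETDATE(),
    PRIMARY KEY (session_id, element_id, changed_at)
);

-- On save: apply the session's changes, then clear them.
-- DELETE FROM pending_reference_changes WHERE session_id = @SessionId;
```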
|
# ? Jun 1, 2008 05:09 |
|
Is there any way to access external databases dynamically from within a stored procedure? I'm using MSSQL 2005, and I figured out that I can access external databases if I know their name ahead of time, like code:
code:
|
# ? Jun 2, 2008 00:09 |
|
Casimirus posted:What I can't figure out is how to take a database name passed to the stored procedure and then access things like Yeah, it won't let you do that directly. For what you're doing, you might be able to get away with code:
|
# ? Jun 2, 2008 02:19 |
|
Casimirus posted:Is there any way to access external databases dynamically from within a stored procedure? I've done a lot of work with sp_executesql. sp_executesql is miles above EXEC (it guards against SQL injection AND can create/store an execution plan!) and is worth the extra hassle of using it. I don't see why sp_executesql would gently caress up a transaction. I might suggest, though, that you don't do a transaction in it, but that you use @@ERROR (SQL Server 2000) or TRY/CATCH (SQL Server 2005) in the caller of the sp_executesql to handle your transaction. code:
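A database name can't itself be passed as an sp_executesql parameter, so the usual pattern is to validate it and splice it in with QUOTENAME. A sketch along those lines — the procedure and table names (GetRowCount, SomeTable) are invented for illustration:

```sql
CREATE PROCEDURE GetRowCount @DbName sysname
AS
BEGIN
    -- Only proceed for a database that actually exists;
    -- this also blocks injection through the name.
    IF NOT EXISTS (SELECT 1 FROM sys.databases WHERE name = @DbName)
        RETURN;

    DECLARE @sql nvarchar(max), @cnt int;
    SET @sql = N'SELECT @cnt = COUNT(*) FROM '
             + QUOTENAME(@DbName) + N'.dbo.SomeTable';

    -- Real parameters (here the OUTPUT variable) still go through
    -- sp_executesql's parameter list.
    EXEC sp_executesql @sql, N'@cnt int OUTPUT', @cnt = @cnt OUTPUT;
    SELECT @cnt AS row_count;
END
```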
|
# ? Jun 2, 2008 12:51 |
|
Ardhanari posted:
Or, poo poo, I thought of this this morning but didn't have time to post: is there anything keeping you from using sp_addlinkedserver?
|
# ? Jun 3, 2008 00:42 |
|
Cool thanks guys, I think I'll end up going with synonyms as suggested by that link, since this is being run manually anyways.
Casimirus fucked around with this message at 09:35 on Jun 3, 2008 |
# ? Jun 3, 2008 07:41 |
|
One thing I've always hated about the Jet engine: it has different SQL standards for local Access connections and for external ODBC/ADO/DAO connections. For example, inside Access you can use the 'Nz([FieldName])' function to replace NULL records with zeros or empty strings. However, if you are querying an Access DB externally, you can't use 'Nz' at all! Also, Jet doesn't support 'COALESCE', or anything like it. So instead I have to use 'IIf' to weed out NULLs and other undesirables. Is there a better way to do this?
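For reference, the IIf/IsNull workaround mentioned above looks like this — both are Jet SQL built-ins, so the same query should run inside Access and over ODBC (table and column names are hypothetical):

```sql
-- Jet-portable NULL replacement: IIf + IsNull instead of Nz/COALESCE.
SELECT IIf(IsNull(Amount), 0, Amount) AS AmountOrZero
FROM Orders;
```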
|
# ? Jun 3, 2008 19:13 |
|
To begin with, I'm a beginner at SQL. Is there a more elegant way to write this so that I can include PSOFT_RQMT_PAY in my WHERE clause afterwards? This is in MS Access 2003, and it doesn't seem to want to let me use the alias in the WHERE. code:
I bolded the problematic part. In a perfect world, I would like to simplify this to: code:
The tables are linking correctly and outputting the information I want; it's really just when filtering the results with the WHERE that I hit a snag. My original code works, but it's so slow.
|
# ? Jun 3, 2008 21:05 |
|
A quick one. Performance-wise, which is better: a stored procedure, or a view with a WHERE clause attached? Case in point: I'm ripping a bunch of enormous queries out of a VB.NET DLL we use for gathering purchase order data, both for performance and for debugging purposes. These queries typically take a few parameters and aren't terribly complicated [no need for dynamic SQL, just need to plug the params in], but they are rather big. Would it be better to put them into stored procedures with parameters, or into a view, filtering the view based on the params? As far as I knew, stored procedures were always better due to their speed, and views weren't really any better performance-wise.
|
# ? Jun 3, 2008 23:02 |
|
I'm not a fan of reinventing the wheel, and this problem seems of the kind that's been solved a hundred billion times before. Googling hasn't gotten me very far though. The situation is this: I have a MySQL database with a single table, with around 20 columns. Employees from different departments will want to view data from this table for different purposes, so one might want just columns 1, 5, 9 and 13 while another only wants to see columns 2 and 9. I need a simple web interface that basically says "Select the columns you want to see, the date range and press submit." I whipped together something similar to this in PHP but I'm not really happy with it, and I'm positive there's some kind of ready-made solution for this seemingly very common problem.
|
# ? Jun 4, 2008 09:17 |
|
I am making a fairly simple database at work, importing data from the work system so I can create pretty formatted Word lists. I can do the easy queries with Access, but the syntax is troubling me on more complicated ones. I believe I need to do some sort of self-referencing query, but I cannot figure it out. This is my database (Access 2000). What I want is a list of each person and their most recent location (TOP 1, determined by link_location.date_created) while other.active is true. So I will end up with something like: Joe, Bloggs, 1/1/1990, 90, Bolobo Lane, Bum, NSW, 02/06/2008 Gerry, Ger, 4/5/1945, 1, Atle Street, Heffa, QLD, 03/06/2008 ... etc. Joe Bloggs and Gerry Ger have many locations, but their most current is as above.
|
# ? Jun 4, 2008 10:43 |
|
clone posted:I am making a fairly simple database at work importing data from the work system so i can create pretty formatted word lists. I can do the easy queries with access but the syntax is troubling me on more complicated ones. When I've had to do this in Access I usually create a query, called something like link_location_maxdate, to determine the max date_created for each person_id in link_location. Then insert link_location_maxdate into the one you have above between person and link_location. You would join person.person_id to link_location_maxdate.person_id and then join link_location_maxdate.person_id to link_location.person_id and link_location_maxdate.max_date_created to link_location.date_created. I'm sure there's a better way to write it if you're just doing SQL, but this is easy enough to understand and quick to create in Access.
|
# ? Jun 4, 2008 14:53 |
|
clone posted:Some stuff Give this a try: code:
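A sketch of the max-date join described two posts up, in Access-style SQL with the required nested parentheses. The person/link_location table names come from the thread, but the column names beyond person_id and date_created are guesses at the schema:

```sql
SELECT p.first_name, p.surname, p.dob,
       l.street, l.suburb, l.state, l.date_created
FROM (person AS p
INNER JOIN (SELECT person_id, MAX(date_created) AS max_date
            FROM link_location
            GROUP BY person_id) AS m
        ON p.person_id = m.person_id)
INNER JOIN link_location AS l
        ON (l.person_id = m.person_id
       AND  l.date_created = m.max_date);
```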
Puddleboy fucked around with this message at 15:11 on Jun 4, 2008 |
# ? Jun 4, 2008 14:58 |
|
At my work we have a C#/ASP.NET application; all the developers use the same database on one development server. When I go to use the profiler to see the parameters of a stored procedure that is called ("New Trace"), all the developers' stored procs are shown, and it sometimes becomes annoying to figure out which calls are mine. There's an NTUserName column that lists the user who is calling the database command. Is there any way to have it work through the application based on the user's Windows login? I.e. if I'm logged in to Windows as jk, run the C#/ASP.NET application off localhost and then call a stored procedure through a database connection, is there any way to specify the NTUserName column that will appear in the trace?
|
# ? Jun 4, 2008 15:46 |
|
chocojosh posted:At my work we have a C#/ASP.NET application that all use the same database on one development server. On my servers, the NTUserName isn't always filled, but I prefer to filter by SPID anyway. Run sp_who2 to list the connections, decide which one you want to trace (you can see usernames and machine names there) and enter that SPID in the trace filter.
|
# ? Jun 4, 2008 15:58 |
|
Alex007 posted:On my servers, the NTUSername isn't always filled, but I prefer to filter by SPID anyway. run sp_who2 to list the connections, decide which one you want to trace (you can see usernames and machine names here) and enter that SPID in the trace filter. sp_who2 returns *10* different results for my user. I think it's making a different SPID for each "New Query" tab I have open in SQL Server Management Studio. Also, it seems that if I log out and log back into my application I have a different SPID. Is there a way to filter the results of sp_who2? Something similar to SELECT * FROM "exec sp_who2" WHERE Login = mylogin AND hostname = myhostname. Also, I just realized something -- our web.configs are set up to use a standard username/password to access the database, even though they set up Windows authentication for when I go into SQL Server 2005. Could this be a good reason why my NTUserName isn't showing up? Then again, why can't it use the hostname field returned by sp_who2? chocojosh fucked around with this message at 16:19 on Jun 4, 2008 |
# ? Jun 4, 2008 16:16 |
|
chocojosh posted:sp_who2 returns *10* different results for my user. I think that it's making one different spid for each "New Query" tab I have opened in Sql server studio. Also, it seems that if I log out and log back into my application I have a different spid. sp_who2 takes a parameter, the login name, run it like this: EXEC sp_who2 'MYDOMAIN\MyUserName' (if you're using NT Auth)
|
# ? Jun 4, 2008 16:56 |
|
Is this an efficient way to get specific rows from a database? code:
|
# ? Jun 5, 2008 00:18 |
|
drcru posted:Is this an efficient way to get specific rows from a database? Yes, but you don't need the limit on there; it's redundant. The IN clause will only grab four records, assuming that `id` is unique.
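The original query didn't survive the scrape, but presumably it was something in this shape (table and values invented) — with `id` unique, the IN list alone bounds the result:

```sql
-- The LIMIT 4 that was presumably on the end adds nothing here.
SELECT *
FROM items
WHERE id IN (3, 7, 19, 42);
```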
|
# ? Jun 5, 2008 01:14 |
|
Thank you kindly.
|
# ? Jun 5, 2008 02:18 |
|
drcru posted:Is this an efficient way to get specific rows from a database? Be careful with the word 'efficient'; that depends a lot more on how your tables are structured. Although the 'LIMIT' is useless in this case, I don't think it adds any time/CPU cycles to the query. For example, WHY do you need those specific rows? If there is something that makes those rows different from the rest, perhaps you'd be better off with an indexed 'flag' field (like a boolean or a single-letter code)? This way you don't have to hard-code the row keys into the SQL statement.
|
# ? Jun 5, 2008 15:29 |
|
I'll ask my question in a simpler way without a bunch of code that has no impact. Let's say I have in an Access 2003 query: code:
I have to do this to be able to filter by V_NAME instead of just being able to refer to the alias code:
|
# ? Jun 5, 2008 17:02 |
|
Just make it an inline view...code:
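The inline-view code didn't survive the scrape; the idea is to compute the alias in a subquery in FROM, then filter on it in the outer WHERE. A sketch using the V_NAME alias from the question (the table and expression are hypothetical):

```sql
SELECT sub.V_NAME, sub.VENDOR_ID
FROM (SELECT V.VENDOR_NAME AS V_NAME, V.VENDOR_ID
      FROM VENDORS AS V) AS sub
WHERE sub.V_NAME = 'ACME';
```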
|
# ? Jun 5, 2008 17:24 |
|
Just a reminder, kids - remember your with (nolock)s. I was trying to figure out why a really dumb, simple select query was tanking performance in one of my web apps, and... yeah, nolocking it took it from taking up to a minute to execute down to under a second.
|
# ? Jun 5, 2008 17:35 |
|
Code Jockey posted:Just a reminder, kids - remember your with (nolock)s. I was trying to figure out why a really dumb, simple select query was tanking performance in one of my web apps, and... yeah, nolocking it took it from taking up to a minute to execute down to under a second. As long as you kids are aware of the data quality/sync issues for when you use NOLOCK.
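For the record, the hint in question looks like this (names hypothetical). NOLOCK is equivalent to READ UNCOMMITTED: the read takes no shared locks, so it won't block behind writers, but it can return uncommitted or inconsistent rows — fine for a rough dashboard, risky for anything transactional:

```sql
SELECT OrderId, Status
FROM Orders WITH (NOLOCK)   -- dirty read: fast, but not guaranteed consistent
WHERE CreatedAt >= '2008-06-01';
```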
|
# ? Jun 5, 2008 18:42 |
|
MySQL 5.0. I'm trying to select 3 rows plus whatever rows are flagged stickied. I have gotten as far as this: code:
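One way to express "all stickied rows plus the three newest others" in MySQL 5.0 is a UNION where the second branch carries its own ORDER BY/LIMIT inside parentheses. A sketch with guessed table/column names:

```sql
(SELECT * FROM threads WHERE stickied = 1)
UNION ALL
(SELECT * FROM threads
 WHERE stickied = 0
 ORDER BY created_at DESC
 LIMIT 3);
```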
|
# ? Jun 5, 2008 18:57 |
|
No Safe Word posted:As long as you kids are aware of the data quality/sync issues for when you use NOLOCK. This is why I take issue with Oracle's recent addition of asynchronous commits. Sure, it's faster, and sometimes you legitimately want to trade the reliability for the performance, but I have this vision of consultants everywhere showing up to "improve performance", turning on global deferred commits, collecting a fat bonus for speeding up the system in short order, and then blowing out before any problems appear.
|
# ? Jun 5, 2008 19:05 |
|
MySQL 5.0. I'm writing a php app for my roommate to run live trivia shows. It keeps track of teams, locations, questions, and all that sort of nonsense. One of the features he wants is for it to randomly pick 25 questions that haven't been asked in the past period/league (every 6 months generally). I wrote a query that does what I want it to, but I have to imagine there's a more efficient way of doing it using a join or two instead of subqueries. I tried to make one using joins, but I couldn't figure it out. code:
code:
Bruno_me fucked around with this message at 11:47 on Jun 7, 2008 |
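The join form being asked for is usually an anti-join: LEFT JOIN the "already asked this period" rows and keep only questions with no match, which MySQL 5.0 often handles better than NOT IN with a subquery. A sketch — the table and column names are guesses at the schema:

```sql
SELECT q.*
FROM questions AS q
LEFT JOIN asked_questions AS a
       ON a.question_id = q.id
      AND a.asked_at >= DATE_SUB(NOW(), INTERVAL 6 MONTH)
WHERE a.question_id IS NULL   -- no ask recorded in the current period
ORDER BY RAND()
LIMIT 25;
```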
# ? Jun 6, 2008 09:39 |
|
Hello everyone. I'm creating an IT inventory application for our company, and I've decided to use MySQL for the database side of it. I've created two tables: one for infrastructure, with asset tag as the primary key, and one for location, also with asset tag as the PK. I wrote some table creation statements and just wanted to see what everyone else thought of these: code:
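The posted statements were lost in the scrape; a hedged sketch of the two tables as described, with every column beyond asset_tag invented. Keying both on asset_tag lets location reference infrastructure one-to-one via an InnoDB foreign key:

```sql
CREATE TABLE infrastructure (
    asset_tag   VARCHAR(20) NOT NULL PRIMARY KEY,
    description VARCHAR(255),
    purchased   DATE
) ENGINE=InnoDB;

CREATE TABLE location (
    asset_tag VARCHAR(20) NOT NULL PRIMARY KEY,
    building  VARCHAR(50),
    room      VARCHAR(50),
    FOREIGN KEY (asset_tag) REFERENCES infrastructure (asset_tag)
) ENGINE=InnoDB;
```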
|
# ? Jun 6, 2008 14:17 |
|
|
|
I'm using MS SQL 2005 here and I'm trying to get a pretty simple condition in a stored procedure working, but it keeps spouting syntax errors: code:
Msg 102, Level 15, State 1, Procedure GetCustomerList4, Line 16
Incorrect syntax near '@CompanyName'.
Msg 102, Level 15, State 1, Procedure GetCustomerList4, Line 26
Incorrect syntax near ')'.
I've tried changing about/removing the brackets, using slightly different syntax (ELSE IF instead of ELSE, not using BEGIN/END, using END IF and chopping and changing different parts), I've tried using CASE too, but all of them resulted in different but equally unhelpful errors. Is there anything glaring here? Because I'm stumped, and so is my mentor. The syntax I'm using at the moment I got from an MS SQL Server textbook, but I've also tried what I've found online too, of course. I suspect dodgy placement of BEGIN/ENDs, but I've not had any luck while playing about with those, either.
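Since the original procedure body is missing, here is a minimal IF/ELSE skeleton that parses on SQL Server 2005. The procedure name comes from the error messages; the parameter, table, and filter are illustrative guesses — the point is the BEGIN/END placement and that ELSE takes no condition:

```sql
CREATE PROCEDURE GetCustomerList4
    @CompanyName nvarchar(100) = NULL
AS
BEGIN
    IF @CompanyName IS NULL
    BEGIN
        SELECT CustomerId, CompanyName FROM Customers;
    END
    ELSE
    BEGIN
        SELECT CustomerId, CompanyName
        FROM Customers
        WHERE CompanyName LIKE @CompanyName + '%';
    END
END
```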
|
# ? Jun 9, 2008 14:08 |