|
NihilCredo posted:I guess "group by extract( day from dateadd(hour, -2, ts))" ? You're a wizard. Thank you so much.
|
# ? May 28, 2015 19:06 |
|
NihilCredo posted:Couple of SQL Server questions: 1) Think of it as SQL Server just trying to minimize the number of deadlocks on a possible resource. Say you had both foreign keys on Widgets tying back to Users - if a Widget was created and updated by the same user, then each foreign key fires when that user is deleted. You'd have plenty of deadlocks if that user created & updated more than one widget. Even if the cascade was on just one foreign key, the constraint will still check the Widgets table for a reference, but delete and update will request an exclusive lock, most likely with horrid performance. Not sure of your exact structure, but would an ON UPDATE / ON DELETE trigger on Users referencing a stored proc work for you? (I'd have the proc update/delete from Widgets where UserID = CreatedBy or UserID = UpdatedBy.) 2) Outside of saving a table called "base_template" in model, no magic bullet I've encountered. I've seen a truckload of dynamic SQL procs build a CREATE TABLE statement out of a dictionary, but it's a mess to read and debug. 3) I've seen plenty of cases where a currency was normalized into its own table - it's just good design and saves a lot in storage. The best guideline I see my colleagues following is this: is the character data static / subject to controlled change, or highly dynamic? If static, go for normalizing into a table. For example, if you have a column for a custom MSMQ message type in a .NET app and you have 10,000 different types, it'd still be worth it. Just about any flavor of SQL will chew through 100k rows with ease outside of BLOBs. But if you're tracking a few million different varchar values and your table isn't too big (under 100 GB), I wouldn't sweat it too much.
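For what it's worth, the delete-trigger idea in (1) can be sketched. The schema below is a hypothetical two-column Widgets table, not the poster's actual one, and sqlite3 is used only as an easy-to-run stand-in (SQL Server's CREATE TRIGGER syntax and its statement-level deleted pseudo-table differ):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Users (UserID INTEGER PRIMARY KEY);
CREATE TABLE Widgets (
    WidgetID  INTEGER PRIMARY KEY,
    CreatedBy INTEGER,
    UpdatedBy INTEGER
);
-- When a user is deleted, sweep out every widget referencing them in
-- either column, mirroring the suggested proc's WHERE clause.
CREATE TRIGGER users_delete_cleanup
AFTER DELETE ON Users
BEGIN
    DELETE FROM Widgets
    WHERE CreatedBy = OLD.UserID OR UpdatedBy = OLD.UserID;
END;
""")
con.execute("INSERT INTO Users VALUES (1), (2)")
con.executemany("INSERT INTO Widgets VALUES (?, ?, ?)",
                [(10, 1, 1), (11, 1, 2), (12, 2, 2)])
con.execute("DELETE FROM Users WHERE UserID = 1")
remaining = con.execute("SELECT WidgetID FROM Widgets").fetchall()
print(remaining)  # widget 12 is the only one not touching user 1: [(12,)]
```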
|
# ? Jun 4, 2015 03:19 |
|
Let's say I have something like:

Num  C
123  Y
123  N
456  Y
789  N

And I want to say "Show all where Y and N". When I do WHERE C = 'Y' AND C = 'N' I get no results. When I do OR I get all results. I just want 123.
|
# ? Jun 4, 2015 20:55 |
|
Liam Emsa posted:Let's say I have something like

code:
select distinct t1.num as num
from t as t1
where exists (select * from t as t2 where t1.num = t2.num and t2.c = 'y')
  and exists (select * from t as t3 where t1.num = t3.num and t3.c = 'n')

or, if c only has the values y and n, you can instead:

code:
select num, count(c) as num_rows
from t
group by num
having num_rows = 2
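Both versions can be sanity-checked against the sample data. sqlite3 is used here purely as a harness (table and column names come from the question), and the HAVING is spelled with COUNT(c) directly since not every engine accepts an alias there:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (num INTEGER, c TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(123, 'Y'), (123, 'N'), (456, 'Y'), (789, 'N')])

# Double-EXISTS version: works even if c can hold values other than Y/N.
exists_q = """
SELECT DISTINCT t1.num FROM t AS t1
WHERE EXISTS (SELECT * FROM t t2 WHERE t2.num = t1.num AND t2.c = 'Y')
  AND EXISTS (SELECT * FROM t t3 WHERE t3.num = t1.num AND t3.c = 'N')
"""
# GROUP BY version: assumes c is only ever Y or N, one row each.
group_q = "SELECT num FROM t GROUP BY num HAVING COUNT(c) = 2"

print(con.execute(exists_q).fetchall())  # [(123,)]
print(con.execute(group_q).fetchall())   # [(123,)]
```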
|
# ? Jun 4, 2015 21:08 |
|
Hmm, I don't think Oracle Answers will let me do that.
|
# ? Jun 4, 2015 21:17 |
|
balakadaka posted:1) Think of it as SQL Server just trying to minimize the amount of deadlocks on a possible resource. Thanks a lot for the answers! Regarding (1): I think I get it, mostly. However, updating / deleting a UserId is going to be an extremely infrequent operation; shouldn't I be able to tell SQL Server "yeah, I don't care if this is going to be hilariously expensive, go ahead anyway" somehow? I'm not sure a trigger on Users works here. Either I don't set a FK constraint - in which case you could write a Widget with invalid user-related columns - or I set it to NO ACTION, in which case, correct me if I'm wrong, I won't be able to update / delete Users at all and the trigger won't fire. I could work with the latter, but it seems like unnecessary busywork. Liam Emsa posted:Let's say I have something like A self-join should work. You basically put two copies of the same table side by side and grab the combined rows that have both Y and N.

code:
SELECT [Num]
FROM [Table] AS a
INNER JOIN [Table] AS b
    ON a.Num = b.Num AND a.C = 'Y' AND b.C = 'N'

Use "SELECT *" instead if you want to understand a little better how it works.
(Followup question for the experts: is there any difference between the above and

code:
SELECT [Num]
FROM [Table] AS a
INNER JOIN [Table] AS b ON a.Num = b.Num
WHERE a.C = 'Y' AND b.C = 'N'

? I think the latter expresses intent more clearly, but the former is more manageable when you're building longer queries. Not sure if common SQL query planners would treat the two queries any differently.) NihilCredo fucked around with this message at 21:31 on Jun 4, 2015 |
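For what it's worth, both placements can be checked quickly. sqlite3 stands in for SQL Server here (bracket quoting replaced with double quotes), with the second join condition on C as intended:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE "Table" (Num INTEGER, C TEXT)')
con.executemany('INSERT INTO "Table" VALUES (?, ?)',
                [(123, 'Y'), (123, 'N'), (456, 'Y'), (789, 'N')])

on_form = """SELECT a.Num FROM "Table" a
             JOIN "Table" b ON a.Num = b.Num AND a.C = 'Y' AND b.C = 'N'"""
where_form = """SELECT a.Num FROM "Table" a
                JOIN "Table" b ON a.Num = b.Num
                WHERE a.C = 'Y' AND b.C = 'N'"""
# For an inner join the two spellings are semantically identical.
print(con.execute(on_form).fetchall())     # [(123,)]
print(con.execute(where_form).fetchall())  # [(123,)]
```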
# ? Jun 4, 2015 21:29 |
|
Liam Emsa posted:Let's say I have something like Assuming you want all distinct numbers that are either Y or N, this should do it: code:
|
# ? Jun 4, 2015 21:34 |
|
That brings up all values that contain either Y or N. Maybe BI Answers is just bad.
|
# ? Jun 4, 2015 21:44 |
|
NihilCredo posted:A self-join should work. You basically put two copies of the same table side-by-side and grab the combined rows that have both Y and N. There's no logical difference. The difference is one of taste - do you feel that it's better style for a particular condition to be placed in the join logic (ON clause) or in the "overall" logic of the query (WHERE clause)? There is a difference for outer joins, though. Liam Emsa posted:That brings up all values that contain either Y or N. It shouldn't; are you sure you typed ">" rather than ">="?
|
# ? Jun 4, 2015 23:29 |
|
Liam Emsa posted:Let's say I have something like code:
|
# ? Jun 5, 2015 01:21 |
|
I don't really do any SQL stuff ever, but we recently got access to some databases that could be super useful to us. There are 4 remote DBs - one Oracle (accessed via SQL*Plus), one MSSQL, one MySQL, and then another one that I'm not sure about right now. I'd like to have our local server run a small query on each of those every 5 or 10 minutes, then smash it all into a MySQL db that will get purged at the end of each day. What's the best way to go about doing this? Bunch of Python scripts and a cron job?
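That plan (a small Python collector per source, scheduled with cron) is easy to sketch. Everything below is a placeholder: real code would open one connection per remote DB with the right driver (e.g. pyodbc for MSSQL, mysql-connector for MySQL, cx_Oracle for the box reached through SQL*Plus), and sqlite3 stands in for all of them so the sketch actually runs:

```python
import sqlite3
import time

def collect(sources, sink):
    """Run each source's query and append the rows into the sink table."""
    for name, conn, query in sources:
        for row in conn.execute(query).fetchall():
            sink.execute(
                "INSERT INTO snapshots (source, value, taken_at) VALUES (?, ?, ?)",
                (name, row[0], time.time()),
            )
    sink.commit()

src = sqlite3.connect(":memory:")       # stand-in for one remote DB
src.execute("CREATE TABLE metrics (value INTEGER)")
src.execute("INSERT INTO metrics VALUES (42)")

sink = sqlite3.connect(":memory:")      # stand-in for the local MySQL db
sink.execute("CREATE TABLE snapshots (source TEXT, value INTEGER, taken_at REAL)")

collect([("mssql_box", src, "SELECT value FROM metrics")], sink)
# cron would invoke this script every 5-10 minutes; a nightly job would
# run DELETE FROM snapshots to purge the day's data.
```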
|
# ? Jun 5, 2015 03:57 |
|
NihilCredo posted:Thanks a lot for the answers! Sorry, I misunderstood what you were trying to do when I mentioned the trigger on Users. After I thought about it more, it seems like you're trying to update/delete the user from Users when a widget is updated or deleted? At least, that's what CASCADE would do on a foreign key on a Widgets column referencing Users. Outside of that, I'd suggest maybe doing a stored proc to control the delete when you want to delete the user ID - set all of the references to the User ID to null, then toss the row from Users. Regarding telling SQL to ignore the cascade problem, I tried it out and had the same error. When I read up on it more, it seems like that protection is built in to prevent a big screw-up by the relational engine and the algebrizer? I'm guessing that when it translates the commands into code to execute on the tables, something must be very weird when you get multiple cascade paths.
|
# ? Jun 5, 2015 14:35 |
|
invision posted:I don't do really do any SQL stuff ever, but we recently got access to some databases that could be super useful to us. That's probably what I'd do, assuming I could find libraries for all the DBMSs involved (I expect that's easy).
|
# ? Jun 5, 2015 16:10 |
|
Posted this in the Excel thread too, but I'm about to pull my drat hair out. I absolutely cannot get Excel to talk to this MS SQL db. From Other Sources -> From SQL Server:

[DBNETLIB][ConnectionOpen (ParseConnectParams()).]Invalid Connection. // with server name as THESERVER.THEDOMAIN.ORG:PORT
[DBNETLIB][ConnectionOpen (connect()).]SQL Server does not exist or access denied // with server name as 10.20.30.40 and "Use the following User Name and Password" checked

I opened up Wireshark to watch it, and it shows that Excel is sending a bunch of SMB2 packets to the server. It's also sending my computer/domain logon to the SQL server, which should obviously reject it. Yes, "Use the following User Name and Password" is checked and filled in. I can use HeidiSQL and it connects perfectly: Wireshark shows nothing but TCP/IP, and at no time is it blasting my non-SQL credentials across the network. What's going on here?
|
# ? Jun 6, 2015 02:20 |
|
invision posted:I don't do really do any SQL stuff ever, but we recently got access to some databases that could be super useful to us.
|
# ? Jun 6, 2015 15:37 |
|
balakadaka posted:Sorry, I misunderstood what you were trying to do when I mentioned the trigger on Users. After I thought about it more, it seems like you're trying to update/delete the user from Users when a widget is updated or deleted? Sorry, no, I was trying to do the opposite: whenever a User is updated / deleted, the Widgets that reference that User should have all the referencing columns respectively updated / either set to NULL or deleted, depending on what the Widgets exactly are. (I always use surrogate, non-recyclable primary keys, so the "update" scenario shouldn't actually happen, but if it did for any reason, then I would definitely want the foreign key columns to be automatically updated.) If I insert/update on the Widgets table, it obviously shouldn't do anything to the Users table, just check that the new values match some existing UserID (or are NULL).
|
# ? Jun 6, 2015 16:48 |
|
Two non-query questions: 1. Is there any way to speed up a db connection using ssh tunneling? After only a couple of days, my average load time is up to 1.25 minutes. 2. Any resources on data scrubbing automation/utilities? I've seen the shell scripts and such, but that seems painful and not easy to maintain. Would be really nice to have so I don't have to worry about question 1 anymore.
|
# ? Jun 10, 2015 08:27 |
|
I have three tables that I'm trying to join - really only two, via an intermediary. The issue is that they use two different IDs for the same person. One table, Table_A, has ID_One. I'm linking this to a second table, Table_B, that has both ID_One and ID_Two, with the goal of then using that to finally link to Table_C, which only has ID_Two and has just one entry per person. The only issue is that going from Table_A to Table_B is a one-to-many join, with Table_B having multiple rows of ID_One for every instance of ID_Two, but I really only care about a single row in Table_B because I'm just trying to get at the second ID, ID_Two. My current solution looks like this... code:
ItBurns fucked around with this message at 01:31 on Jun 11, 2015 |
# ? Jun 11, 2015 01:27 |
|
wolffenstein posted:Two non-query questions: What sort of query are you running?
|
# ? Jun 11, 2015 03:08 |
|
ItBurns posted:I have three tables that I'm trying to join, really only two that I'm trying to join via an intermediary. code:
But if there's some particular logic by which you want to retrieve a specific value of ID_Two for each ID_One, then this obviously won't do that. FieryBalrog fucked around with this message at 19:01 on Jun 11, 2015 |
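The code block above was lost in archiving; judging from the caveat that follows, it presumably collapsed Table_B to one row per ID_One (e.g. via MIN) before joining. A runnable sketch of that idea in sqlite3, with invented sample data (table and column names follow the posts):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table_A (ID_One INTEGER, a_val TEXT);
CREATE TABLE Table_B (ID_One INTEGER, ID_Two INTEGER);
CREATE TABLE Table_C (ID_Two INTEGER, c_val TEXT);
INSERT INTO Table_A VALUES (1, 'alice');
INSERT INTO Table_B VALUES (1, 100), (1, 100), (1, 100);  -- many B rows, one mapping
INSERT INTO Table_C VALUES (100, 'record');
""")
# Collapse Table_B to a single ID_Two per ID_One, then join through it.
q = """
SELECT a.a_val, c.c_val
FROM Table_A a
JOIN (SELECT ID_One, MIN(ID_Two) AS ID_Two
      FROM Table_B GROUP BY ID_One) b ON b.ID_One = a.ID_One
JOIN Table_C c ON c.ID_Two = b.ID_Two
"""
print(con.execute(q).fetchall())  # exactly one row per person: [('alice', 'record')]
```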
# ? Jun 11, 2015 11:18 |
|
John Big Booty posted:SSH Tunneling is never going to make anything faster.
|
# ? Jun 11, 2015 19:35 |
|
I've got a very simple beginner question. When creating new tables, is there any reason for adding the foreign keys later using the ALTER TABLE command, instead of defining them in CREATE TABLE? For example, I've got...

code:
CREATE TABLE Comments (
    id int NOT NULL AUTO_INCREMENT,
    announcementId int NOT NULL,
    creatorId int NOT NULL,
    text varchar2(4000) NOT NULL,
    time timestamp(0) DEFAULT systimestamp,
    CONSTRAINT Kommentit_pk PRIMARY KEY (id),
    CONSTRAINT creator_fk FOREIGN KEY (creatorId) REFERENCES users(id),
    CONSTRAINT announcement_fk FOREIGN KEY (announcementId) REFERENCES announcements(id)
);

Also, if I've got several tables with an attribute called creatorId or announcementId, will naming the constraints in those as creator_fk cause some unexpected problems?
|
# ? Jun 14, 2015 16:48 |
|
Users and Announcements need to exist before you create your Comments table. If you add the constraints later, you can create your tables in any order. If you have circular references between tables then you need to use ALTER TABLE.
|
# ? Jun 14, 2015 17:13 |
|
Sedro posted:Users and Announcements need to exist before you create your Comments table. If you add the constraints later, you can create your tables in any order. If you have circular references between tables then you need to use ALTER TABLE.
|
# ? Jun 14, 2015 18:34 |
|
There's no problem; it's equivalent. The same goes for dropping constraints/tables (in the opposite order): it's much simpler to drop all constraints first than to figure out the proper order to drop the tables in. Also, if you're loading bulk data into a table, it's often much faster to load the data and then add the constraints, so the db can check the constraints once instead of on every insert.
|
# ? Jun 14, 2015 19:32 |
|
So I need to duplicate-check a complex object, and I'm wondering if I can do it efficiently in SQL Server, or if I should do it in my code. Structurally I have the following tables:

Claim
ClaimCaseSubTypes (mapping table for a many-to-many relationship)
ClaimDiagnosticCodes (ditto)
ClaimTreatmentCodes (ditto)

Basically, a Claim is only a duplicate if it matches on 8 fields in itself AND has the same relationships in all the mapping tables. For example, the following records would be indicated as duplicates code:
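This can be done in SQL: match candidate pairs on the scalar fields, then require set equality on each mapping table. One way to express set equality is a double NOT EXISTS / EXCEPT. The sketch below is a hypothetical mini-version (PatientID and ServiceDate stand in for the 8 fields, only one mapping table shown) run in sqlite3; the same pattern works in T-SQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Claim (ClaimID INTEGER PRIMARY KEY, PatientID INTEGER, ServiceDate TEXT);
CREATE TABLE ClaimDiagnosticCodes (ClaimID INTEGER, Code TEXT);
INSERT INTO Claim VALUES (1, 7, '2015-06-01'), (2, 7, '2015-06-01'), (3, 7, '2015-06-02');
INSERT INTO ClaimDiagnosticCodes VALUES (1,'A'),(1,'B'),(2,'B'),(2,'A'),(3,'A');
""")
# Two claims are duplicates when the scalar fields match AND neither claim
# has a diagnostic code the other lacks (set equality via double EXCEPT).
dup_q = """
SELECT c1.ClaimID, c2.ClaimID
FROM Claim c1
JOIN Claim c2 ON c2.ClaimID > c1.ClaimID
            AND c2.PatientID = c1.PatientID
            AND c2.ServiceDate = c1.ServiceDate
WHERE NOT EXISTS (SELECT Code FROM ClaimDiagnosticCodes d WHERE d.ClaimID = c1.ClaimID
                  EXCEPT
                  SELECT Code FROM ClaimDiagnosticCodes d WHERE d.ClaimID = c2.ClaimID)
  AND NOT EXISTS (SELECT Code FROM ClaimDiagnosticCodes d WHERE d.ClaimID = c2.ClaimID
                  EXCEPT
                  SELECT Code FROM ClaimDiagnosticCodes d WHERE d.ClaimID = c1.ClaimID)
"""
print(con.execute(dup_q).fetchall())  # claims 1 and 2 match: [(1, 2)]
```

In a real version you'd repeat the double NOT EXISTS once per mapping table.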
|
# ? Jun 16, 2015 15:43 |
|
I've been learning SQL and relational databases through Database System Concepts using PostgreSQL, since it's relatively compliant with the ANSI standard and has a decent reputation (mostly as a supplement to the computer science curriculum at my university, since I don't plan on taking the databases course). After working through the chapters on SQL and some theoretical topics, I feel like I've hit a wall. Are there any recommended beginner/intermediate-level books or tutorials that I could use to acquire a more practical knowledge of SQL and the design, implementation, and maintenance of actual databases? I usually appreciate a more structured approach, as found in academic textbooks; a text that goes into SQL and database design in more depth, with exercises, is what I'm looking for.
quickly fucked around with this message at 11:38 on Jun 18, 2015 |
# ? Jun 18, 2015 11:29 |
|
How do I ensure ACID transactions in T-SQL? I'm having the database generate IDs by incrementing, storing the most recent one in a field in a junction table. Basically, if you end up selecting one of the codes from said junction table, and the code is one where the DB generates the ID instead of it being user-supplied, it adds one to the existing stored most-recent ID and passes that new one up to the server - my PM wants me to make very sure that we don't have two users trying to do this at the same time and something screwing it up. I've never actually had to worry about this before now, so I'd like to know what there is to make sure of this. I know SQL is in general good at this kind of thing, but I've never needed to worry about the particularities before.
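For illustration, the race only exists if the read and the increment are separate steps. A single atomic statement (or a short exclusive transaction) removes it; in T-SQL that's typically UPDATE ... SET LastID = LastID + 1 with an OUTPUT clause, or better, an IDENTITY column or SEQUENCE. A sqlite3 sketch of the transactional version, with an invented counter table:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # explicit transactions
con.execute("CREATE TABLE counter (LastID INTEGER)")
con.execute("INSERT INTO counter VALUES (100)")

def next_id(con):
    # BEGIN IMMEDIATE takes the write lock up front, so two concurrent
    # callers can never both read 100 and both hand back 101.
    con.execute("BEGIN IMMEDIATE")
    con.execute("UPDATE counter SET LastID = LastID + 1")
    (new_id,) = con.execute("SELECT LastID FROM counter").fetchone()
    con.execute("COMMIT")
    return new_id

print(next_id(con), next_id(con))  # 101 102
```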
|
# ? Jun 18, 2015 14:54 |
|
Space Whale posted:How do I ensure ACID transactions in TSQL? I believe the answer to this question is "you use Oracle."
|
# ? Jun 18, 2015 15:08 |
|
Citizen Tayne posted:I believe the answer to this question is "you use Oracle." gently caress. That's a non-option right now.
|
# ? Jun 18, 2015 15:11 |
|
Aren't you letting the DBMS handle the sequence generation for you? What you're asking here is pretty much a non-issue with any DBMS I've ever used.
|
# ? Jun 18, 2015 15:15 |
|
Citizen Tayne posted:Aren't you letting the DBMS handle the sequence generation for you? Being a dev and not a DBA, I'm only well versed in making pretty simple queries and scripting out some basic stuff. I don't know if there are keywords to use in a stored procedure to do it atomically like I want, or not. We don't have a DBA, and the only knowledge anyone has about ensuring this is a keyword to use in Oracle, which we're not using. For all I know, MSSQL just does this by default, but I don't know.
|
# ? Jun 18, 2015 15:36 |
|
Space Whale posted:Being a dev and not a DBA I'm only well versed in making pretty simple queries and scripting out some basic stuff. I don't know if there are keywords to use in a stored procedure to do it atomically like I want, or not. I'm an Oracle dev so I'm not 100% sure about how SQL Server does it and it's been a long time since I touched it, but assuming that the DBAs did their job and made a sane database you don't need to worry about this in any modern DBMS I've ever used. You're trying to do things the hard way. Let the DB generate sequences and handle all of that for you.
|
# ? Jun 18, 2015 15:39 |
|
Citizen Tayne posted:I'm an Oracle dev so I'm not 100% sure about how SQL Server does it and it's been a long time since I touched it, but assuming that the DBAs did their job and made a sane database you don't need to worry about this in any modern DBMS I've ever used. You're trying to do things the hard way. Let the DB generate sequences and handle all of that for you. There are no DBAs, only zool. The database is not sane and poo poo like "we hit a linked oracle (lol) server for each and every row from this subquery in this big rear end report and it takes forever oh we're hitting it 140,000 times oh my god" or "we ran out of space and everything died and we're hemorrhaging money right now to the tune of millions a day" is a current recurring problem. OTOH I'll be 1000 miles away by the 6th of July
|
# ? Jun 18, 2015 15:41 |
|
Space Whale posted:There are no DBAs, only zool. The database is not sane and poo poo like "we hit a linked oracle (lol) server for each and every row from this subquery in this big rear end report and it takes forever oh we're hitting it 140,000 times oh my god" or "we ran out of space and everything died and we're hemorrhaging money right now to the tune of millions a day" is a current recurring problem. Sounds like you need to quit.
|
# ? Jun 18, 2015 15:43 |
|
Citizen Tayne posted:Sounds like you need to quit. I give the twoweeks tomorrow
|
# ? Jun 18, 2015 15:47 |
|
Space Whale posted:I give the twoweeks tomorrow Congratulations: you no longer need to care. Another problem solved by the Cavern of Cobol Crew.
|
# ? Jun 18, 2015 16:14 |
|
As long as you're not like the person here who wrote an archive and delete procedure backwards in such a way that it's possible to have already committed the delete and then fail and rollback the archive table write, you generally don't have to worry about that sort of thing.
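For reference, the right-way-around version of that archive-and-purge pattern is one transaction wrapping both statements, so the DELETE can never commit without the archive write. A sqlite3 sketch with an invented two-column table:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.executescript("""
CREATE TABLE live (id INTEGER, payload TEXT);
CREATE TABLE archive (id INTEGER, payload TEXT);
INSERT INTO live VALUES (1, 'old'), (2, 'new');
""")
try:
    con.execute("BEGIN")
    con.execute("INSERT INTO archive SELECT * FROM live WHERE payload = 'old'")
    con.execute("DELETE FROM live WHERE payload = 'old'")
    con.execute("COMMIT")
except sqlite3.Error:
    con.execute("ROLLBACK")  # a failure undoes both statements together
print(con.execute("SELECT id FROM archive").fetchall())  # [(1,)]
print(con.execute("SELECT id FROM live").fetchall())     # [(2,)]
```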
|
# ? Jun 18, 2015 16:19 |
|
Also, I'd suggest googling AUTO_INCREMENT.
|
# ? Jun 18, 2015 16:21 |
|
EDIT: CLEARLY SA COULD STAND TO HAVE TRANSACTIONS TOO
Space Whale fucked around with this message at 18:16 on Jun 18, 2015 |
# ? Jun 18, 2015 18:10 |