Tomahawk
Aug 13, 2003

HE KNOWS

NihilCredo posted:

I guess "group by extract( day from dateadd(hour, -2, ts))" ?

You're a wizard. Thank you so much.


balakadaka
Jun 30, 2005

robot terrorists WILL kill you

NihilCredo posted:

Couple of SQL Server questions:

1) Let's say we have these two tables:
code:
Users
 [UserID]
 (other columns...)

Widgets
 (other columns...)
 [CreatedBy]
 [LastEditedBy]
In this case I would want both [CreatedBy] and [LastEditedBy] to each have a foreign key relationship with Users.[UserID], with the options ON UPDATE CASCADE and either ON DELETE SET NULL or ON DELETE CASCADE, depending on the exact nature of the Widgets.

Now, SQL Server refuses to allow a second foreign key, unless both are set to ON UPDATE/DELETE NO ACTION, claiming that it might cause cycles and/or multiple cascade paths. I've read the standard Google results for this problem, but I still can't figure out what sort of cascade paths SQL Server is worried about in my scenario. Every other problematic example I've seen has either a third table involved or a different relationship diagram.

2) You're designing a new schema, and you have a whole bunch of tables that you want to set up. There's a lot of things you want to have on 95% of your tables, such as:

- a [<Table name>ID] primary key and index
- [Creation Date] and [Last Updated] columns
- triggers that autofill those two columns
- a [Status] column (e.g. 0 = new, 1 = active, 2 = deleted, or whatever)
- potentially multiple [OwnerID], [SourceID], etc. FK columns
- either a View or (in 2016) Row Level Security that only shows a user his own rows
- etc.

Now, this is a very infrequent task so copy/paste works perfectly. However, is there any feature in (T-)SQL that you're supposed to use in such a scenario?

3) Are there any guidelines for when an nchar property becomes "small enough" that you can use it as its own ID column all over your database? For an extreme example, currency is likely represented by just three ASCII characters, which aren't going to change. If you're not going to store exchange rates or other information, is it going to be faster to read/write EUR everywhere than to use a numeric CurrencyID foreign key? Or is relational storage so optimised at this point that you'd still rather join on a tinyint?

1) Think of it as SQL Server just trying to minimize the number of deadlocks on a possible resource. Say you had both foreign keys on Widgets tying back to Users - if a Widget was created and updated by the same user, then each foreign key fires when that user is deleted. You'd have plenty of deadlocks if that user created & updated more than 1 widget. Even if cascade was just on 1 foreign key, the constraint will still check the Widgets table for a reference, but delete and update will request an exclusive lock, most likely with horrid performance. Not sure of your exact structure, but would an ON UPDATE / ON DELETE trigger referencing a stored proc in Users work for you? (I'd say have the proc update/delete from Widgets where UserID = CreatedBy or UserID = UpdatedBy)
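Something like this, maybe - untested sketch, guessing at your column names, and assuming both FKs stay declared as NO ACTION and the referencing columns are nullable:

code:
-- INSTEAD OF DELETE fires in place of the actual delete, so the trigger
-- can null out the references before removing the row itself
CREATE TRIGGER trg_Users_Delete ON Users
INSTEAD OF DELETE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE w SET CreatedBy = NULL
    FROM Widgets AS w JOIN deleted AS d ON w.CreatedBy = d.UserID;
    UPDATE w SET LastEditedBy = NULL
    FROM Widgets AS w JOIN deleted AS d ON w.LastEditedBy = d.UserID;
    -- with the references cleared, the delete can't violate either FK
    DELETE u FROM Users AS u JOIN deleted AS d ON u.UserID = d.UserID;
END;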

2) Outside of saving a table called "base_template" in model, there's no magic bullet I've encountered. I've seen a truckload of dynamic SQL procs build a create table statement out of a dictionary, but it's a mess to read and debug.

3) I've seen plenty of cases where a currency was normalized into its own table - it's just good design and saves a lot in storage. I'd say the best guideline I see my colleagues follow is this: is the character data static/controlled change, or highly dynamic? If static, go for normalizing into a table. For example, if you have a column for a custom MSMQ message type in a .Net app and you have 10,000 different types, it'd still be worth it. Just about any flavor of SQL will chew through 100k rows with ease outside of BLOBs. But if you're tracking a few million different varchar values and your table isn't too big (under 100 GB), I wouldn't sweat it too much.
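For the static case the lookup table really is tiny - sketch, names made up:

code:
CREATE TABLE Currency (
    CurrencyID tinyint NOT NULL PRIMARY KEY,
    Code       nchar(3) NOT NULL UNIQUE  -- 'EUR', 'USD', ...
);
-- referencing tables store the 1-byte tinyint and join here, instead of
-- repeating the 6-byte nchar(3) code on every row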

Liam Emsa
Aug 21, 2014

Oh, god. I think I'm falling.
Let's say I have something like

Num C
123 Y
123 N
456 Y
789 N

And I want to say "Show all where Y and N"

When I do where C = "Y" and C ="N" I get no results. When I do OR I get all results. I just want 123.

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

Liam Emsa posted:

Let's say I have something like

Num C
123 Y
123 N
456 Y
789 N

And I want to say "Show all where Y and N"

When I do where C = "Y" and C ="N" I get no results. When I do OR I get all results. I just want 123.

select distinct t1.num as num from t as t1 where exists (select * from t as t2 where t1.num = t2.num and t2.c = 'y') and exists (select * from t as t3 where t1.num = t3.num and t3.c = 'n')

or if c only has the values y and n, you can instead:

select num, count(c) as num_rows from t group by num having count(c) = 2

Liam Emsa
Aug 21, 2014

Oh, god. I think I'm falling.
Hmm, I don't think Oracle Answers will let me do that.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

balakadaka posted:

1) Think of it as SQL Server just trying to minimize the number of deadlocks on a possible resource. Say you had both foreign keys on Widgets tying back to Users - if a Widget was created and updated by the same user, then each foreign key fires when that user is deleted. You'd have plenty of deadlocks if that user created & updated more than 1 widget. Even if cascade was just on 1 foreign key, the constraint will still check the Widgets table for a reference, but delete and update will request an exclusive lock, most likely with horrid performance. Not sure of your exact structure, but would an ON UPDATE / ON DELETE trigger referencing a stored proc in Users work for you? (I'd say have the proc update/delete from Widgets where UserID = CreatedBy or UserID = UpdatedBy)

2) Outside of saving a table called "base_template" in model, there's no magic bullet I've encountered. I've seen a truckload of dynamic SQL procs build a create table statement out of a dictionary, but it's a mess to read and debug.

3) I've seen plenty of cases where a currency was normalized into its own table - it's just good design and saves a lot in storage. I'd say the best guideline I see my colleagues follow is this: is the character data static/controlled change, or highly dynamic? If static, go for normalizing into a table. For example, if you have a column for a custom MSMQ message type in a .Net app and you have 10,000 different types, it'd still be worth it. Just about any flavor of SQL will chew through 100k rows with ease outside of BLOBs. But if you're tracking a few million different varchar values and your table isn't too big (under 100 GB), I wouldn't sweat it too much.

Thanks a lot for the answers!

Regarding (1): I think I get it, mostly. However, updating / deleting a UserId is going to be an extremely infrequent operation; shouldn't I be able to tell SQL Server "yeah I don't care if this is going to be hilariously expensive, go ahead anyway" somehow?

I'm not sure if a trigger on Users works here. Either I don't set a FK constraint - in which case you could write a Widget with invalid user-related columns - or I set it to ON CASCADE NO ACTION, in which case, correct me if I'm wrong, I won't be able to update / delete Users at all and the trigger won't fire. I could work with the latter but it seems unnecessary busywork.


Liam Emsa posted:

Let's say I have something like

Num C
123 Y
123 N
456 Y
789 N

And I want to say "Show all where Y and N"

When I do where C = "Y" and C ="N" I get no results. When I do OR I get all results. I just want 123.

A self-join should work. You basically put two copies of the same table side-by-side and grab the combined rows that have both Y and N.

SELECT a.[Num] FROM [Table] AS a INNER JOIN [Table] AS b ON a.Num = b.Num AND a.C = 'Y' AND b.C = 'N'

Use "SELECT *" instead if you want to understand a little better how it works.

(Followup question for the experts: is there any difference between the above and

SELECT a.[Num] FROM [Table] AS a INNER JOIN [Table] AS b ON a.Num = b.Num WHERE a.C = 'Y' AND b.C = 'N'

? I think the latter expresses intent more clearly, but the former is more manageable when you're building longer queries. Not sure if common SQL query planners would treat the two queries any differently.)

NihilCredo fucked around with this message at 21:31 on Jun 4, 2015

Ellie Crabcakes
Feb 1, 2008

Stop emailing my boyfriend Gay Crungus

Liam Emsa posted:

Let's say I have something like

Num C
123 Y
123 N
456 Y
789 N

And I want to say "Show all where Y and N"

When I do where C = "Y" and C ="N" I get no results. When I do OR I get all results. I just want 123.
That select is looking for rows in which a column has two different values at the same time, so no go.

Assuming you want all distinct numbers that have both a Y and an N, this should do it:

code:
select Num from table where C IN ('Y','N') group by Num having count(*)>1;

Liam Emsa
Aug 21, 2014

Oh, god. I think I'm falling.
That brings up all values that contain either Y or N.

Maybe BI Answers is just bad.

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

NihilCredo posted:

A self-join should work. You basically put two copies of the same table side-by-side and grab the combined rows that have both Y and N.

SELECT a.[Num] FROM [Table] AS a INNER JOIN [Table] AS b ON a.Num = b.Num AND a.C = 'Y' AND b.C = 'N'

Use "SELECT *" instead if you want to understand a little better how it works.

(Followup question for the experts: is there any difference between the above and

SELECT a.[Num] FROM [Table] AS a INNER JOIN [Table] AS b ON a.Num = b.Num WHERE a.C = 'Y' AND b.C = 'N'

? I think the latter expresses intent more clearly, but the former is more manageable when you're building longer queries. Not sure if common SQL query planners would treat the two queries any differently.)

There's no logical difference. The difference is one of taste - do you feel that it's better style for a particular condition to be placed in the join logic (on clause) or the "overall" logic of the query (where clause)? There is a difference for outer joins.
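For example (made-up table of the same shape): in an outer join, a condition in the on clause still keeps the unmatched left-hand rows, while the same condition in the where clause throws them out after the join.

code:
-- unmatched a-rows survive, with NULLs in the b columns
SELECT a.Num, b.C
FROM T AS a LEFT JOIN T AS b ON a.Num = b.Num AND b.C = 'N';

-- the NULL rows are filtered back out, so this behaves like an inner join
SELECT a.Num, b.C
FROM T AS a LEFT JOIN T AS b ON a.Num = b.Num
WHERE b.C = 'N';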

Liam Emsa posted:

That brings up all values that contain either Y or N.

Maybe BI Answers is just bad.

It shouldn't, are you sure you typed ">" rather than ">="?

Pittsburgh Fentanyl Cloud
Apr 7, 2003

I’d make eye contact with him and fart. I did it all the time for like two years. I didn’t even know his full name, just heard that his coworkers thought he was a nut for complaining about “the guy who would fart on me in the break room”

Liam Emsa posted:

Let's say I have something like

Num C
123 Y
123 N
456 Y
789 N

And I want to say "Show all where Y and N"

When I do where C = "Y" and C ="N" I get no results. When I do OR I get all results. I just want 123.

code:

SELECT num FROM (
    SELECT num,
        CASE
            WHEN C = 'Y' THEN 1
            ELSE 0
        END AS countY,
        CASE
            WHEN C = 'N' THEN 1
            ELSE 0
        END AS countN
    FROM tab
    ) x
GROUP BY num
HAVING (SUM(countY) > 0 AND SUM(countN) > 0) -- at least one of each

invision
Mar 2, 2009

I DIDN'T GET ENOUGH RAPE LAST TIME, MAY I HAVE SOME MORE?
I don't really do any SQL stuff ever, but we recently got access to some databases that could be super useful to us. There are 4 remote DBs - one SQLPLUS, one MSSQL, one MySQL, and then another one that I'm not sure about right now. I'd like to have our local server run a small query on each of those every 5 or 10 minutes, then smash it all into a MySQL db that will get purged at the end of each day. What's the best way to go about doing this? Bunch of python scripts and a cron job?

balakadaka
Jun 30, 2005

robot terrorists WILL kill you

NihilCredo posted:

Thanks a lot for the answers!

Regarding (1): I think I get it, mostly. However, updating / deleting a UserId is going to be an extremely infrequent operation; shouldn't I be able to tell SQL Server "yeah I don't care if this is going to be hilariously expensive, go ahead anyway" somehow?

I'm not sure if a trigger on Users works here. Either I don't set a FK constraint - in which case you could write a Widget with invalid user-related columns - or I set it to ON CASCADE NO ACTION, in which case, correct me if I'm wrong, I won't be able to update / delete Users at all and the trigger won't fire. I could work with the latter but it seems unnecessary busywork.

Sorry, I misunderstood what you were trying to do when I mentioned the trigger on Users. After I thought about it more, it seems like you're trying to update/delete the user from Users when a widget is updated or deleted? At least, that's what CASCADE would do on a foreign key on a Widgets column referencing Users. Outside of that, I'd suggest maybe doing a stored proc to control the delete when you want to delete the user ID - set all of the references to the User ID to null, then toss the row from Users.

Regarding telling SQL to ignore the cascade problem, I tried it out and had the same error. When I read up on that more, it seems like that protection is built in to prevent a big screw-up by the relational engine and the algebrizer? I'm guessing that when it translates the commands into code to execute on the tables, something must be very weird when you get multiple cascade paths.

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

invision posted:

I don't really do any SQL stuff ever, but we recently got access to some databases that could be super useful to us. There are 4 remote DBs - one SQLPLUS, one MSSQL, one MySQL, and then another one that I'm not sure about right now. I'd like to have our local server run a small query on each of those every 5 or 10 minutes, then smash it all into a MySQL db that will get purged at the end of each day. What's the best way to go about doing this? Bunch of python scripts and a cron job?

That's probably what I'd do, assuming I could find libraries for all the DBMSs involved (I expect that is easy).

invision
Mar 2, 2009

I DIDN'T GET ENOUGH RAPE LAST TIME, MAY I HAVE SOME MORE?
Posted this in the Excel thread too, but I'm about to pull my drat hair out. I absolutely cannot get Excel to talk to this MS SQL db.

From Other Sources -> From SQL Server

[DBNETLIB][ConnectionOpen (ParseConnectParams()).]Invalid Connection. //with server name as THESERVER.THEDOMAIN.ORG:PORT
[DBNETLIB][ConnectionOpen (connect()).]SQL Server does not exist or access denied //with server name as 10.20.30.40
with "User the following User Name and Password" checked

I opened up wireshark to watch it, and it shows that excel is sending a bunch of SMB2 packets to the server. It also is sending my computer/domain logon to the sql server, which should obviously reject it. Yes, the "Use The Following Username/Password" is checked and filled in.


I can use HeidiSQL, and it connects perfectly. Wireshark shows nothing but TCP/IP, and at no time is it blasting my not-sql credentials across the network.

What's going on here?

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!

invision posted:

I don't really do any SQL stuff ever, but we recently got access to some databases that could be super useful to us. There are 4 remote DBs - one SQLPLUS, one MSSQL, one MySQL, and then another one that I'm not sure about right now. I'd like to have our local server run a small query on each of those every 5 or 10 minutes, then smash it all into a MySQL db that will get purged at the end of each day. What's the best way to go about doing this? Bunch of python scripts and a cron job?
Use PostgreSQL. Foreign-data read support is likely stable since 9.1 for many of these, and I suspect write support into MySQL is reasonable given its popularity. Save yourself a level of complexity, though: drop the silly notion of the MySQL accumulator and just use PG as your reporting store.
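Roughly like this - every name below is made up, and mysql_fdw is an extension you'd install separately (similar wrappers exist for the other DBMSs):

code:
CREATE EXTENSION mysql_fdw;
CREATE SERVER remote_mysql FOREIGN DATA WRAPPER mysql_fdw
    OPTIONS (host '10.20.30.40', port '3306');
CREATE USER MAPPING FOR CURRENT_USER SERVER remote_mysql
    OPTIONS (username 'reporter', password 'secret');
CREATE FOREIGN TABLE remote_orders (id int, total numeric)
    SERVER remote_mysql OPTIONS (dbname 'shop', table_name 'orders');
-- a scheduled job can then snapshot straight into a local reporting table
INSERT INTO order_snapshot SELECT id, total, now() FROM remote_orders;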

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

balakadaka posted:

Sorry, I misunderstood what you were trying to do when I mentioned the trigger on Users. After I thought about it more, it seems like you're trying to update/delete the user from Users when a widget is updated or deleted? At least, that's what CASCADE would do on a foreign key on a Widgets column referencing Users.

Sorry, no, I was trying to do the opposite: whenever a User is updated / deleted, the Widgets that reference that User should have all the referencing columns respectively updated / either set to NULL or deleted, depending on what the Widgets exactly are. (I always use surrogate, non-recyclable primary keys so the "update" scenario shouldn't actually happen, but if it did for any reason, then I would definitely want the foreign key columns to be automatically updated.)

If I insert/update on the Widgets table, it obviously shouldn't do anything to the Users table, just check that the new values match some existing UserID (or are NULL).

wolffenstein
Aug 2, 2002
 
Pork Pro
Two non-query questions:
1. Is there any way to speed up a db connection using ssh tunneling? My average load time has grown to 1.25 minutes after only a couple of days.
2. Any resources on data scrubbing automation/utilities? I've seen the shell scripts and such, but that seems painful and not easy to maintain. Would be really nice to have so I don't have to worry about question 1 anymore.

ItBurns
Jul 24, 2007
I have three tables that I'm trying to join - really only two, joined via an intermediary. The issue is that they use two different IDs for the same person. One table, Table_A, has ID_One. I'm linking this to a second table, Table_B, that has both ID_One and ID_Two, with the goal of then using that to finally link to Table_C. The only issue is that this is a one-to-many join when going from Table_A to Table_B, with Table_B having multiple rows of ID_One for every instance of ID_Two. I really only care about a single row in Table_B, though, because I'm just trying to get at the second ID in this table, ID_Two. The third table only has ID_Two, and has just one entry for every person.

My current solution looks like this...

code:
select
  ID_One,
  ID_Two
from (
  select
    ID_One,
    ID_Two,
    max( ID_Two ) over ( partition by ID_One ) as max_ID_Two
  from
    Table_B ) max_B
where
  max_B.ID_Two = max_B.max_ID_Two
This gives me an unduplicated table of ID_One and ID_Two that I can use to join Table_A and Table_C. Is this the best, or even correct, approach? This is happening in Oracle, if there are better approaches for that system.

ItBurns fucked around with this message at 01:31 on Jun 11, 2015

Ellie Crabcakes
Feb 1, 2008

Stop emailing my boyfriend Gay Crungus

wolffenstein posted:

Two non-query questions:
1. Is there any way to speed up a db connection using ssh tunneling? My average load time has grown to 1.25 minutes after only a couple of days
SSH Tunneling is never going to make anything faster.

What sort of query are you running?

FieryBalrog
Apr 7, 2010
Grimey Drawer

ItBurns posted:

I have three tables that I'm trying to join - really only two, joined via an intermediary. The issue is that they use two different IDs for the same person. One table, Table_A, has ID_One. I'm linking this to a second table, Table_B, that has both ID_One and ID_Two, with the goal of then using that to finally link to Table_C. The only issue is that this is a one-to-many join when going from Table_A to Table_B, with Table_B having multiple rows of ID_One for every instance of ID_Two. I really only care about a single row in Table_B, though, because I'm just trying to get at the second ID in this table, ID_Two. The third table only has ID_Two, and has just one entry for every person.

My current solution looks like this...

code:
select
  ID_One,
  ID_Two
from (
  select
    ID_One,
    ID_Two,
    max( ID_Two ) over ( partition by ID_One ) as max_ID_Two
  from
    Table_B ) max_B
where
  max_B.ID_Two = max_B.max_ID_Two
This gives me an unduplicated table of ID_One and ID_Two that I can use to join Table_A and Table_C. Is this the best, or even correct, approach? This is happening in Oracle, if there are better approaches for that system.
You don't need the complication of the partition. Simply use this as the subquery/in-line view:

code:
select
  ID_One,
  max(ID_Two) as ID_Two
from Table_B
group by ID_One
This will work if you don't care which value of ID_Two you use for the join, so long as it's a single value (or if you really do want the max value for some reason).

But if there's some particular logic by which you want to retrieve a specific value of ID_Two for each ID_One, then this obviously won't do that.
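If you do need that, Oracle's KEEP clause picks the value by whatever ordering you want - say, the ID_Two from the newest Table_B row per ID_One (created_date is a column I'm inventing for the example):

code:
select
  ID_One,
  max(ID_Two) keep (dense_rank last order by created_date) as ID_Two
from Table_B
group by ID_One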

FieryBalrog fucked around with this message at 19:01 on Jun 11, 2015

wolffenstein
Aug 2, 2002
 
Pork Pro

John Big Booty posted:

SSH Tunneling is never going to make anything faster.

What sort of query are you running?
It's webpages full of queries. I use SSH so I can access databases with client data but still use my debugging tools. I don't need the client data for most stuff, so I'm looking into data scrubbing.

Kuule hain nussivan
Nov 27, 2008

I've got a very simple beginner question. When creating new tables, is there any reason for adding the foreign keys later using the ALTER TABLE command, instead of defining them in the CREATE TABLE?

For example, I've got...

CREATE TABLE Comments (
id int NOT NULL AUTO_INCREMENT,
announcementId int NOT NULL,
creatorId int NOT NULL,
text varchar2(4000) NOT NULL,
time timestamp(0) DEFAULT systimestamp,
CONSTRAINT Kommentit_pk PRIMARY KEY (id),
CONSTRAINT creator_fk FOREIGN KEY (creatorId) REFERENCES users(id),
CONSTRAINT announcement_fk FOREIGN KEY (announcementId) REFERENCES announcements(id)
);

Also, if I've got several tables with an attribute called creatorId or announcementId, will naming the constraints in those as creator_fk cause some unexpected problems?

Sedro
Dec 31, 2008
Users and Announcements need to exist before you create your Comments table. If you add the constraints later, you can create your tables in any order. If you have circular references between tables then you need to use ALTER TABLE.
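For example, with a hypothetical pair of tables that reference each other:

code:
CREATE TABLE a (id int PRIMARY KEY, b_id int);
CREATE TABLE b (id int PRIMARY KEY, a_id int);
-- neither FK could have been declared inside CREATE TABLE, because the
-- other table didn't exist yet at that point
ALTER TABLE a ADD CONSTRAINT a_b_fk FOREIGN KEY (b_id) REFERENCES b (id);
ALTER TABLE b ADD CONSTRAINT b_a_fk FOREIGN KEY (a_id) REFERENCES a (id);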

Kuule hain nussivan
Nov 27, 2008

Sedro posted:

Users and Announcements need to exist before you create your Comments table. If you add the constraints later, you can create your tables in any order. If you have circular references between tables then you need to use ALTER TABLE.
I figured that would be it. But as long as nothing's cyclical and the order's fine, there's no problem with declaring multiple constraints with one create command, right?

Sedro
Dec 31, 2008
There's no problem, it is equivalent.

The same thing goes for dropping constraints/tables (in the opposite order). It's much simpler to just drop all constraints rather than figure out the proper order to drop the tables.

If you are loading bulk data into a table, it is often much faster to load the data and then add the constraints, so the db can check the constraints once instead of on every insert.
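Sketch of that pattern, reusing the Comments table from above:

code:
ALTER TABLE Comments DROP CONSTRAINT creator_fk;
-- ...bulk load into Comments here...
ALTER TABLE Comments ADD CONSTRAINT creator_fk
    FOREIGN KEY (creatorId) REFERENCES users(id);
-- the FK is validated once over the whole table at ADD time, instead of
-- once per inserted row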

aBagorn
Aug 26, 2004
So I need to duplicate check a complex object, and I'm wondering if I can do it efficiently in SQL Server, or if I should go about it in my code. Structurally I have the following tables.

Claim
ClaimCaseSubTypes (mapping table for many to many relationship)
ClaimDiagnosticCodes (ditto)
ClaimTreatmentCodes (ditto)

Basically a Claim is only a duplicate if it matches on 8 fields in itself AND has the same relationships in all the mapping tables.

For example, the following records would be indicated as duplicates:

code:
Claim
Id     CreateDate
1      1/1/2015
2      6/1/2015

ClaimCaseSubTypes
ClaimId    SubTypeId
1          34
1          64
2          34
2          64

ClaimDiagnosticCodes
ClaimId    DiagnosticCodeId
1          1
2          1


ClaimTreatmentCodes
ClaimId    TreatmentCodeId
1          5
1          6
2          6
2          5
And in this case I would want to keep 1 and get rid of 2.
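(By "same relationships" I mean set equality in each mapping table - something like an EXCEPT in both directions coming up empty for a candidate pair:)

code:
-- no rows back means every subtype on claim 1 also exists on claim 2;
-- swap the IDs to check the other direction - empty both ways = identical
SELECT SubTypeId FROM ClaimCaseSubTypes WHERE ClaimId = 1
EXCEPT
SELECT SubTypeId FROM ClaimCaseSubTypes WHERE ClaimId = 2;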

quickly
Mar 7, 2012
I've been learning SQL and relational databases through Database System Concepts using PostgreSQL, since it's relatively compliant with the ANSI standard and has a decent reputation (mostly as a supplement to the computer science curriculum at my university, since I don't plan on taking databases). After working through the chapters on SQL and some theoretical topics, I feel like I've hit a wall. Are there any recommended beginner/intermediate-level books or tutorials that I could use to acquire a more practical knowledge of SQL and the design, implementation, and maintenance of actual databases? I usually appreciate a more structured approach, as found in academic textbooks; a text that went into SQL and database design in more depth, with exercises, is what I'm looking for.

quickly fucked around with this message at 11:38 on Jun 18, 2015

Space Whale
Nov 6, 2014
How do I ensure ACID transactions in TSQL?

I'm having the database generate IDs through incrementation and store the most recent one in a field in a junction table. Basically, if you end up selecting one of the codes from said junction table, and the code is one where the DB generates the ID instead of it being user-supplied, it adds one to the existing stored most-recent ID and passes that new one up to the server - my PM wants me to make very sure that we don't have two users trying to do this at the same time and something screwing it up.

I've never actually had to worry about this before now, so I'd like to know what there is to make sure of this. I know SQL is in general good at this kind of thing but I've never needed to worry about the particularities before now.

Pittsburgh Fentanyl Cloud
Apr 7, 2003

I’d make eye contact with him and fart. I did it all the time for like two years. I didn’t even know his full name, just heard that his coworkers thought he was a nut for complaining about “the guy who would fart on me in the break room”

Space Whale posted:

How do I ensure ACID transactions in TSQL?

I'm having the database generate IDs through incrementation and store the most recent one in a field in a junction table. Basically, if you end up selecting one of the codes from said junction table, and the code is one where the DB generates the ID instead of it being user-supplied, it adds one to the existing stored most-recent ID and passes that new one up to the server - my PM wants me to make very sure that we don't have two users trying to do this at the same time and something screwing it up.

I've never actually had to worry about this before now, so I'd like to know what there is to make sure of this. I know SQL is in general good at this kind of thing but I've never needed to worry about the particularities before now.

I believe the answer to this question is "you use Oracle."

Space Whale
Nov 6, 2014

Citizen Tayne posted:

I believe the answer to this question is "you use Oracle."

gently caress.

That's a non-option right now.

Pittsburgh Fentanyl Cloud
Apr 7, 2003

I’d make eye contact with him and fart. I did it all the time for like two years. I didn’t even know his full name, just heard that his coworkers thought he was a nut for complaining about “the guy who would fart on me in the break room”
Aren't you letting the DBMS handle the sequence generation for you? What you're asking here is pretty much a non-issue with any DBMS I've ever used.

Space Whale
Nov 6, 2014

Citizen Tayne posted:

Aren't you letting the DBMS handle the sequence generation for you? What you're asking here is pretty much a non-issue with any DBMS I've ever used.

Being a dev and not a DBA I'm only well versed in making pretty simple queries and scripting out some basic stuff. I don't know if there are keywords to use in a stored procedure to do it atomically like I want, or not.

We don't have a DBA and the only knowledge anyone has about ensuring this is a keyword to use in Oracle, which we're not using.

For all I know MSSQL just does this by default but I don't know :(

Pittsburgh Fentanyl Cloud
Apr 7, 2003

I’d make eye contact with him and fart. I did it all the time for like two years. I didn’t even know his full name, just heard that his coworkers thought he was a nut for complaining about “the guy who would fart on me in the break room”

Space Whale posted:

Being a dev and not a DBA I'm only well versed in making pretty simple queries and scripting out some basic stuff. I don't know if there are keywords to use in a stored procedure to do it atomically like I want, or not.

We don't have a DBA and the only knowledge anyone has about ensuring this is a keyword to use in Oracle, which we're not using.

For all I know MSSQL just does this by default but I don't know :(

I'm an Oracle dev so I'm not 100% sure about how SQL Server does it and it's been a long time since I touched it, but assuming that the DBAs did their job and made a sane database you don't need to worry about this in any modern DBMS I've ever used. You're trying to do things the hard way. Let the DB generate sequences and handle all of that for you.

Space Whale
Nov 6, 2014

Citizen Tayne posted:

I'm an Oracle dev so I'm not 100% sure about how SQL Server does it and it's been a long time since I touched it, but assuming that the DBAs did their job and made a sane database you don't need to worry about this in any modern DBMS I've ever used. You're trying to do things the hard way. Let the DB generate sequences and handle all of that for you.

There are no DBAs, only zool. The database is not sane and poo poo like "we hit a linked oracle (lol) server for each and every row from this subquery in this big rear end report and it takes forever oh we're hitting it 140,000 times oh my god" or "we ran out of space and everything died and we're hemorrhaging money right now to the tune of millions a day" is a current recurring problem.

OTOH I'll be 1000 miles away by the 6th of July :yotj:

Pittsburgh Fentanyl Cloud
Apr 7, 2003

I’d make eye contact with him and fart. I did it all the time for like two years. I didn’t even know his full name, just heard that his coworkers thought he was a nut for complaining about “the guy who would fart on me in the break room”

Space Whale posted:

There are no DBAs, only zool. The database is not sane and poo poo like "we hit a linked oracle (lol) server for each and every row from this subquery in this big rear end report and it takes forever oh we're hitting it 140,000 times oh my god" or "we ran out of space and everything died and we're hemorrhaging money right now to the tune of millions a day" is a current recurring problem.

OTOH I'll be 1000 miles away by the 6th of July :yotj:

Sounds like you need to quit.

Space Whale
Nov 6, 2014

Citizen Tayne posted:

Sounds like you need to quit.

I give the two weeks tomorrow :q:

PleasingFungus
Oct 10, 2012
idiot asshole bitch who should fuck off

Space Whale posted:

I give the two weeks tomorrow :q:

Congratulations: you no longer need to care.

Another problem solved by the Cavern of Cobol Crew.

Pittsburgh Fentanyl Cloud
Apr 7, 2003

I’d make eye contact with him and fart. I did it all the time for like two years. I didn’t even know his full name, just heard that his coworkers thought he was a nut for complaining about “the guy who would fart on me in the break room”
As long as you're not like the person here who wrote an archive and delete procedure backwards in such a way that it's possible to have already committed the delete and then fail and rollback the archive table write, you generally don't have to worry about that sort of thing.
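Done sanely it's one transaction, archive first, delete second - all names here made up:

code:
BEGIN TRANSACTION;
    -- write the archive copy first...
    INSERT INTO WidgetArchive (WidgetID, Payload)
        SELECT WidgetID, Payload FROM Widgets WHERE CreatedOn < @Cutoff;
    -- ...then delete; a failure anywhere rolls back both writes together
    DELETE FROM Widgets WHERE CreatedOn < @Cutoff;
COMMIT TRANSACTION;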

PleasingFungus
Oct 10, 2012
idiot asshole bitch who should fuck off
Also, I'd suggest googling AUTO_INCREMENT.
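(The MSSQL spelling is IDENTITY, or a standalone SEQUENCE on 2012+ - roughly, with made-up names:)

code:
-- the engine hands out the next ID atomically at insert time, so
-- concurrent inserts can't grab the same value
CREATE TABLE Codes (
    CodeID    int IDENTITY(1,1) PRIMARY KEY,
    CodeValue varchar(50) NOT NULL
);
-- or the sequence route, the closest analogue to Oracle's
CREATE SEQUENCE CodeSeq AS int START WITH 1 INCREMENT BY 1;
SELECT NEXT VALUE FOR CodeSeq;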


Space Whale
Nov 6, 2014
EDIT: CLEARLY SA COULD STAND TO HAVE TRANSACTIONS TOO :haw:

Space Whale fucked around with this message at 18:16 on Jun 18, 2015
