|
Got a bit of a hiccup, including the fact that no, this is a 2005 server, not 2008. It's an MSSQL Server and I have a table of entry IDs (not unique) which can join on a table which also has revision IDs, which are very much unique. Joining them is easy enough. But once I've done that, I want to kick out rows with duplicate EntryIDs, keeping only the most recent (so, highest-valued) RevisionID. Edit: My procedural brain is saying "Order by EntryID. If more than one exists, then go pick the highest-valued RevisionID and discard the others that match." But I don't want to just kick this poo poo out and do a loop before passing it along the rest of the way, I want to do it the right way in SQL. How should I be thinking about this? Edit2: Welp, here's a lead. ANSWER: I used a subselect, rank(), and OVER. Basically: code:
Could I do better? Fuck them fucked around with this message at 21:00 on Sep 26, 2014 |
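The posted code block was stripped, but a sketch of the subselect-plus-window-function approach described above might look like this (table names Entries and Revisions are assumed; ROW_NUMBER() rather than RANK() avoids returning two rows if RevisionIDs could ever tie, and both work on SQL Server 2005):

```sql
SELECT EntryID, RevisionID
FROM (
    SELECT e.EntryID,
           r.RevisionID,
           ROW_NUMBER() OVER (PARTITION BY e.EntryID
                              ORDER BY r.RevisionID DESC) AS rn
    FROM Entries e
    JOIN Revisions r ON r.EntryID = e.EntryID
) x
WHERE rn = 1;  -- keep only the highest RevisionID per EntryID
```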
# ? Sep 26, 2014 20:28 |
|
|
# ? Apr 25, 2024 10:41 |
|
gently caress them posted:ANSWER: 99% of the time, when your answer contains "I used a subselect", it could be done better. Subselects are bad for readability, even if not bad for performance. And often bad for both. For the purposes of what you're doing, you shouldn't need any meta columns (RANK, ROW_NUMBER, etc). Assuming a clustered PK on ID columns, I would write... code:
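The suggested code here was also stripped; the window-free version being described presumably looked something like this sketch (same assumed table names as above; note it still uses a derived table, but the MAX/GROUP BY form can seek the clustered PK directly):

```sql
SELECT e.EntryID, r.RevisionID
FROM Entries e
JOIN Revisions r ON r.EntryID = e.EntryID
JOIN (
    -- one row per EntryID: its highest revision
    SELECT EntryID, MAX(RevisionID) AS MaxRevisionID
    FROM Revisions
    GROUP BY EntryID
) m ON m.EntryID = r.EntryID
   AND m.MaxRevisionID = r.RevisionID;
```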
|
# ? Sep 27, 2014 20:30 |
|
Working with Sqlite: I have a set of decks each containing 0-n cards, and I want to select a random card from a certain subset of "checked" decks code:
code:
I'd rather not store the number of cards in the decks table, or checked status in the cards table as that would violate normalization. Vesi fucked around with this message at 13:59 on Sep 28, 2014 |
# ? Sep 28, 2014 13:57 |
|
I don't know if I'm misunderstanding, but are you trying to: SELECT card_id FROM cards WHERE card_id NOT IN (SELECT card_id FROM decks WHERE checked &lt;&gt; 0)? idk if this works but logically is that what you're trying to do?
|
# ? Sep 28, 2014 15:52 |
|
Vesi posted:Working with Sqlite: Does this not work? code:
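The code from this reply was stripped; given the schema described (decks with a checked flag, cards belonging to decks — column names assumed), the SQLite query would be along these lines:

```sql
SELECT c.card_id
FROM cards c
JOIN decks d ON d.deck_id = c.deck_id
WHERE d.checked <> 0      -- only "checked" decks
ORDER BY RANDOM()         -- shuffle the candidate rows
LIMIT 1;                  -- take one at random
```

ORDER BY RANDOM() sorts every candidate row, so it's only sensible for modest deck sizes, but it needs no stored card counts and so keeps the schema normalized as requested.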
|
# ? Sep 28, 2014 19:42 |
|
Anaxandrides posted:Does this not work? that works perfectly thank you!
|
# ? Sep 28, 2014 20:10 |
|
I have two tables:

Table A
- a_id
- a_number
- a_code

Table B
- b_id
- a_id
- b_checkpoint
- b_value
- user_id

Table B will have multiple rows per a_id per user. Sometimes, there are no rows in B for an a_id for a particular user (but the user will have other ones). My base query is: code:
I get I need to use a LEFT JOIN, but I'm a bit lost on how I want to do this.
|
# ? Sep 29, 2014 21:54 |
|
Master_Odin posted:I have two tables: Using MSSQL? code:
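The code here was stripped; an OUTER APPLY version of what's being suggested might look like this sketch (TableA/TableB and @user_id are assumed names based on the question):

```sql
SELECT a.a_id, a.a_number, a.a_code,
       b.b_checkpoint, b.b_value
FROM TableA a
OUTER APPLY (
    -- correlated per-row lookup; returns NULLs when no B rows exist,
    -- like a LEFT JOIN would
    SELECT b.b_checkpoint, b.b_value
    FROM TableB b
    WHERE b.a_id = a.a_id
      AND b.user_id = @user_id
) b;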
|
# ? Sep 29, 2014 22:18 |
|
Anaxandrides posted:Using MSSQL? It's Postgres so no Outer Apply. Also, I'd want to join a and b on a_id, not b.user_id=a.a_id
|
# ? Sep 29, 2014 22:23 |
|
Master_Odin posted:Whoops, left that out. Yeah, I edited the join after posting. For Postgres, it looks like this would work (per their docs): code:
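The Postgres code here was stripped; the standard way to get this result is a plain LEFT JOIN with the user filter in the ON clause rather than the WHERE clause (column names assumed from the question; filtering in WHERE would silently discard the A rows that have no B rows):

```sql
SELECT a.a_id, a.a_number, a.a_code,
       b.b_checkpoint, b.b_value
FROM table_a a
LEFT JOIN table_b b
       ON b.a_id = a.a_id
      AND b.user_id = :user_id   -- filter inside ON so unmatched A rows survive
ORDER BY a.a_id;
```

Postgres 9.3+ also supports LEFT JOIN LATERAL for the OUTER APPLY pattern, which is likely what the docs reference above points at.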
|
# ? Sep 29, 2014 22:59 |
|
Gatac posted:Hey guys, got a newbie question regarding PL/SQL. Suppose I've got a procedure that's got a fairly complex SELECT to query the data I want to work on, and I'd like to store the results for the runtime of the procedure so I can then run further queries on that. I've done some googling about temporary tables and such and supposedly on an MS system, building temporary tables at runtime is no big deal, but Oracle doesn't support it. The consensus pretty much seems to be "It's not supposed to work that way, don't do it", followed by (what looks to me) horribly kludge-y ways to get it done anyway by explicitly creating, filling and then dropping a table. This is the worst thing about PL/SQL. "We don't need easy temp tables! Write better SQL!" *implements half a dozen kludgy temp-table workarounds* I spent more time than I care to admit trying to get Oracle's lovely implementation of nested tables to work. The following basic query is completely unreliable in terms of performance: code:
This comes up all the time when, like you said, a stored procedure might have an initial complex query that produces a working set of rows, and subsequent logic in the procedure will operate on that working set only. So it would, in theory, be very helpful to restrict the access/updates/operations of that subsequent logic to the set of primary keys in the working set, i.e. to precompute a series of accesses. In practice, Oracle DB often fails to make proper use of information like that if it's not presented as a literal in-list (as in, actually use the info to drive access). FieryBalrog fucked around with this message at 13:29 on Oct 6, 2014 |
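The "basic query" mentioned above was stripped; the nested-table pattern being complained about generally looks like this sketch (t_id_tab and v_ids are assumed names; the type must be created at schema level, not inside PL/SQL). The common workaround for the unreliable performance is a CARDINALITY hint, since the optimizer otherwise uses a fixed block-based guess for collection row counts:

```sql
-- Assumed schema-level collection type:
--   CREATE TYPE t_id_tab AS TABLE OF NUMBER;
-- v_ids is a PL/SQL variable of that type holding the working set of keys.
SELECT t.*
FROM some_table t
WHERE t.id IN (
    SELECT /*+ CARDINALITY(ids 100) */ ids.column_value
    FROM TABLE(v_ids) ids
);
```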
# ? Oct 6, 2014 13:15 |
|
I've been tasked with converting a client's current Oracle based system into our SQL Server 2008 system. I'm much stronger in SQL Server and quite weak in Oracle so any input would be great. The Oracle system was exported using the expdp utility from an 11g database. We have a license for the Oracle Standard Edition. My plan is to use the impdp utility to pull in the schema/tables/data only then use SQL Server Migration Assistant to pull the tables/data into SQL Server and work on the mapping with the client. Sounds simple enough, but what am I missing? Here's the pieced together command I'm working on in a Windows Server 2008 environment: impdp <user>/<pass>@orcl directory=data_pump_dir dumpfile=export01.dmp,export02.dmp logfile=import.log full=y I'm worried the character set was different (and potentially other settings) and also that the old Oracle environment was on a Unix platform instead of Windows. Is there any way I can pull in only the tables/data? Is there a better approach to this?
|
# ? Oct 7, 2014 04:12 |
|
murkdotdoc posted:I've been tasked with converting a client's current Oracle based system into our SQL Server 2008 system. I'm much stronger in SQL Server and quite weak in Oracle so any input would be great. Honestly, I'd use the SSIS Import/Export Data wizard to pull from Oracle to MSSQL. It'll create the tables for you on-the-fly, and pull the data right along with it.
|
# ? Oct 7, 2014 18:35 |
|
Anaxandrides posted:Honestly, I'd use the SSIS Import/Export Data wizard to pull from Oracle to MSSQL. It'll create the tables for you on-the-fly, and pull the data right along with it. I usually use SSIS for flat file, Excel or Access imports and it works great. All I have are two .dmp files from the Oracle database though - I'm assuming the data source would be "Microsoft OLE DB Provider for Oracle"? I'll check around to see what I need to install to make this work but I get a feeling there's more to it than that. Or is there something in SSIS I'm missing? Edit: I think what you're saying is get the data into Oracle and use SSIS to pull the data from the Oracle db, not import the dump files directly into MSSQL - sorry about that. murkdotdoc fucked around with this message at 19:08 on Oct 7, 2014 |
# ? Oct 7, 2014 19:02 |
|
murkdotdoc posted:I usually use SSIS for flat file, Excel or Access imports and it works great. All I have are two .dmp files from the Oracle database though - I'm assuming the data source would be "Microsoft OLE DB Provider for Oracle"? I'll check around to see what I need to install to make this work but I get a feeling there's more to it than that. Or is there something in SSIS I'm missing? Yeah, I assumed it was already in Oracle, not present in two .dmp files. If you don't have an Oracle setup at all, I'm not aware of a magic bullet.
|
# ? Oct 8, 2014 02:20 |
|
Another stupid out-there newbie question. Using PL/SQL Developer 10, I've been messing around with substitution variables in a script so I get a nice data entry popup at runtime. Now there's plenty of options to configure that (drop lists, checkboxes, value restrictions, required fields etc), but I'm wondering: is it possible to rig this up in a way that I can lock out one variable depending on the selection in another field? For example, say I have this: &<name="Date"> and &<name="Cutoff by Date" list="0, No, 1, Yes" description="yes" restricted="yes"> If I enter a date, I want to be able to choose whether to use it as cutoff or not. However, if I leave it blank, the choice is meaningless and I'd prefer to lock it to No. I have seen the options to make a variable read-only and to supply a default value, but these can only be predefined, not made conditional. I've also found that you can base the selection for one variable on another by using a SELECT with a bind variable, but unfortunately, PL/SQL Developer only executes this query when there's actually something in that bind variable - if it's NULL (by leaving the field blank) the SELECT doesn't run. If it did, I'm pretty sure I could figure out how to build the selection so there's only one option, which would work well enough for my purposes, but as it is I'm a little stumped. It's not essential by any means - I'm honestly positively surprised at how in-depth the existing options are - but if it's possible I'd like to know how.
|
# ? Oct 8, 2014 17:33 |
|
I'm struggling to compare some rate records in our database and need a nudge in the right direction. I think it can be solved by joining the table with itself, but as I am a total novice (I've written a few nested selects and that's about the extent of my experience, so for this question Type is most certainly equal to Stupid) I'm at a loss how to proceed! Essentially I'm trying to pull a list of accounts where our customer has lapsed their contracted rate for some amount of time, so instead of receiving a contracted rate they instead receive a default. Rate table structure is (roughly) below: code:
Since 12345's newest fixed rate begins a full month after the previous expired, I'd like it to be pulled, and since 67890's started immediately after the previous one, it would need to be excluded, regardless of the fact that 67890 still has that default in there. My problem is I'm not sure how to set it up to compare the dates. code:
|
# ? Oct 13, 2014 21:03 |
|
Kumbamontu posted:
Try this: code:
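The suggested code was stripped (and per the follow-up, it used a CTE); a sketch of one way to frame the gap detection — number each account's contracted-rate rows by start date, self-join each row to the next one, and flag pairs where the next rate doesn't begin the day after the previous one ends. All column names here (AcctNum, RateType, EffBeginDate, EffEndDate) are assumed, since the original table layout was also stripped:

```sql
WITH ordered AS (
    SELECT AcctNum, EffBeginDate, EffEndDate,
           ROW_NUMBER() OVER (PARTITION BY AcctNum
                              ORDER BY EffBeginDate) AS rn
    FROM rates
    WHERE RateType = 'FIXED'   -- assumed discriminator for contracted rates
)
SELECT cur.AcctNum,
       cur.EffEndDate   AS lapsed_on,
       nxt.EffBeginDate AS resumed_on
FROM ordered cur
JOIN ordered nxt
  ON nxt.AcctNum = cur.AcctNum
 AND nxt.rn = cur.rn + 1
WHERE nxt.EffBeginDate > DATEADD(DAY, 1, cur.EffEndDate);  -- a gap exists
```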
|
# ? Oct 14, 2014 18:22 |
|
Anaxandrides posted:Try this: This was the nudge I needed in the right direction, thank you! What I ultimately ended up going with was (account# is ieinum and is actually in a different table hence the extra join): code:
|
# ? Oct 14, 2014 21:30 |
|
Kumbamontu posted:One other newbie question: the above query runs in 2 seconds if I omit the DESC in the order by clause, but with DESC it takes 22 seconds. Is there a better, less intensive way to get the results like I want them? This is mostly an academic exercise as this isn't going to be a continually run query so I don't particularly care about the 20 seconds, but I'm interested why it's such a performance hit. Broadly speaking, sorting is expensive for SQL to do, and more so the larger the record set is. Specifically, in this case, you have one join operating on the common table expression (think of this as a @table being held in memory), and another operating on a second in-memory copy of that table, ordered in a different way. Since it's a @table being held in memory, it has no indexes, and it gets to sort the slow way. There can be no indexes to help it sort, and dates are a complex type. Sorting by an INT DESC, for example, would be really fast. That's my gut feeling on this one, anyway. It's akin to memorizing a book, then having someone tell you "Now read it backwards and forwards at the same time, and tell me which sentences are the same". Edit: To tack on, if you want the results ordered in DESC order, my suggestion would be to take the first result set and put it into a #table, then select from that ORDER BY EffBeginDate DESC. I would expect that to be pretty fast, since it's a different execution step at that point. Anaxandrides fucked around with this message at 23:57 on Oct 14, 2014 |
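The two-step #table suggestion in the edit above would look roughly like this (the SELECT standing in for "the first result set" is a placeholder with assumed names; the point is that the expensive query is materialised once, and the sort becomes a separate, cheap step over the materialised rows):

```sql
-- Step 1: materialise the expensive result once.
SELECT AcctNum, EffBeginDate, EffEndDate
INTO #results
FROM rates;              -- the original expensive query goes here

-- Step 2: sort the (much smaller) materialised set.
SELECT *
FROM #results
ORDER BY EffBeginDate DESC;

DROP TABLE #results;
```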
# ? Oct 14, 2014 23:55 |
|
So I got assigned the fun task of using 11 year old report building software to build a report for some patient data at work. After a couple weeks of wrangling the software to do what I need, I have it all ready to go, with one exception. There are a few places where, instead of straight text strings, the information I need is codified as a series of 0s and 1s. ex: 0010111. I can't output that because there's no actual data there, just 0s and 1s (obviously). So I want to turn that into text, where a 0 is blank and a 1 represents a word or string. ex: 0010111 = c, e, f, g or 1101000 = a, b, d. I'm not familiar enough with SQL to build an expression more complicated than a basic CASE statement. I know that the use of substring(table, 1, 1) = 1 is probably going to be used, but my research isn't giving me anything definitive. Any ideas?
|
# ? Oct 15, 2014 17:51 |
|
atmakaraka posted:So I got assigned the fun task of using 11 year old report building software to build a report for some patient data at work. After a couple weeks of wrangling the software to do what I need, I have it all ready to go, with one exception. What kind of SQL are you using? If it was heavy duty processing then I'd be tempted to create a function to convert the binary string into an integer value and then use bitwise AND to pull out the data values, but if it's not too processor intensive, then you could probably use something like the below (assuming MS T-SQL): code:
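Both code blocks from this reply were stripped; in the spirit of what's described (SUBSTRING plus concatenation, leaving a trailing comma), the first one plausibly looked something like this sketch, where flags and patient_flags are hypothetical names for the 0/1 column and its table:

```sql
SELECT CASE WHEN SUBSTRING(flags, 1, 1) = '1' THEN 'a, ' ELSE '' END
     + CASE WHEN SUBSTRING(flags, 2, 1) = '1' THEN 'b, ' ELSE '' END
     + CASE WHEN SUBSTRING(flags, 3, 1) = '1' THEN 'c, ' ELSE '' END
     + CASE WHEN SUBSTRING(flags, 4, 1) = '1' THEN 'd, ' ELSE '' END
     + CASE WHEN SUBSTRING(flags, 5, 1) = '1' THEN 'e, ' ELSE '' END
     + CASE WHEN SUBSTRING(flags, 6, 1) = '1' THEN 'f, ' ELSE '' END
     + CASE WHEN SUBSTRING(flags, 7, 1) = '1' THEN 'g, ' ELSE '' END
       AS flag_list   -- e.g. '0010111' becomes 'c, e, f, g, '
FROM patient_flags;
```

As noted, this leaves a trailing comma, which the second (stripped) snippet dealt with.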
That does leave you with a trailing comma on the string field though; if that's a problem then you're probably looking at something a little more complex like setting up the above as an inline function and doing something like code:
|
# ? Oct 15, 2014 18:19 |
|
If I have a table whose layout is [timestamp],[accountID],[balance], where each account has a balance on a given date, what is the most efficient method to produce the following layout: code:
Is there another, faster method?
|
# ? Oct 15, 2014 18:38 |
|
Agrikk posted:If I have a table whose layout is [timestamp],[accountID],[balance] where there are each account has a balance on a given date, what is the most efficient method to produce the following layout: Is this for an arbitrary amount of accounts, or a known amount?
|
# ? Oct 15, 2014 18:48 |
|
One Swell Foop posted:What kind of SQL are you using? If it was heavy duty processing then I'd be tempted to create a function to convert the binary string into an integer value and then use bitwise AND to pull out the data values, but if it's not too processor intensive, then you could probably use something like the below (assuming MS T-SQL): That worked flawlessly. Thank you! We're on SQL Server 2012, but the report building frontend of the EMR software we use was last updated in 2003. In the next month I'll be getting CrystalReports and some paid training, so I'm excited for not having to use the lovely built in stuff.
|
# ? Oct 15, 2014 19:05 |
|
Anaxandrides posted:Is this for an arbitrary amount of accounts, or a known amount? The initial number is known (~200) but it is expected to grow randomly.
|
# ? Oct 15, 2014 22:09 |
|
Agrikk posted:Is there another, faster method? http://www.postgresql.org/docs/9.1/static/tutorial-window.html See also, cumulative sum. vvvv Sorry, I completely misread his original post. Phone doesn't display those code sections well, but it's been my only post/read approach these days. At least it prompted someone to give an answer! PhantomOfTheCopier fucked around with this message at 05:15 on Oct 16, 2014 |
# ? Oct 15, 2014 23:32 |
|
PhantomOfTheCopier posted:http://www.postgresql.org/docs/9.1/static/tutorial-window.html Not what he's looking for -- he wants a PIVOT, if I understand things correctly. Agrikk, what flavor of SQL are you using? Postgre, MSSQL, Oracle, MySQL, etc?
|
# ? Oct 16, 2014 00:50 |
|
Anaxandrides posted:Not what he's looking for -- he wants a PIVOT, if I understand things correctly. MSSQL. Yeah, from a casual google, a PIVOT is what I'm looking for.
|
# ? Oct 16, 2014 02:11 |
|
atmakaraka posted:That worked flawlessly. Thank you! I thought about this for a little while and on reflection I'd probably use a view to separate the binary digits out into separate columns in this case, but glad it worked out for you. Agrikk posted:The initial number is known (~200) but it is expected to grow randomly. Agrikk posted:MSSQL. Yeah, from a casual google, a PIVOT is what I'm looking for. With a regular PIVOT you need to know the account (column) names when you're writing the query, so it's not good (as it stands) for situations when the accounts can change. It is possible to write a Dynamic SQL Pivot to derive the SQL query from the data, but that kind of application (dynamic crosstabs) is getting more into the realms of OLAP (SSAS, on SQL Server).
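A sketch of the dynamic-pivot approach described above, assuming the source table is called balances (it builds the column list from the data with FOR XML PATH, then executes the generated PIVOT — exactly the pattern that stops working cleanly as this drifts toward OLAP territory):

```sql
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Build the column list from the data: each accountID becomes a column name.
SELECT @cols = STUFF((
    SELECT DISTINCT ',' + QUOTENAME(CAST(accountID AS NVARCHAR(20)))
    FROM balances
    FOR XML PATH('')), 1, 1, '');

SET @sql = N'SELECT [timestamp], ' + @cols + N'
FROM (SELECT [timestamp], accountID, balance FROM balances) s
PIVOT (MAX(balance) FOR accountID IN (' + @cols + N')) p
ORDER BY [timestamp];';

EXEC sp_executesql @sql;
```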
|
# ? Oct 16, 2014 06:35 |
|
So the pseudo-binary guy made me think. At what point do you say gently caress it and write something in java/c# etc rather than huge and complex sql statements? Is there a point? Is there ever a reason to (beyond the absurd)?
|
# ? Oct 16, 2014 07:37 |
|
Cast_No_Shadow posted:So the pseudo-binary guy made me think. At what point do you say gently caress it and write something in java/c# etc rather than huge and complex sql statements? Is there a point? Is there ever a reason to (beyond the absurd)? Depends on the DB platform and how it integrates with the language of choice. SQL as a language is clunky, but under the hood the optimizers are some of the most sophisticated software in the world, so for large data sets it's difficult to beat (although some NoSQL platforms will easily outperform it in specific tasks). That said, for complex logic you're probably better off looking at a higher-level SQL concept like a view, an inline function, or a stored procedure (as they typically still make use of the SQL optimization engine). If you need more complexity than that, then you could go to a higher-level language, sure; but it's not always easy to integrate procedural languages efficiently into the set-based paradigm of SQL, especially for large data sets. Also it's often better to stick with a solution that can easily be debugged and optimized using a single platform's tools. Additionally, it sounds like the binary situation was part of a reporting tool that says 'you can add some custom SQL in here if you're not getting what you want from the defaults', as a lot of reporting tools do; in that case it's easier to one-off code a slightly long-winded SQL command than to try and integrate another language or reporting environment. A lot of business tools understand SQL or a derivative so it's easy to use in-place. That said, if performance isn't an issue, you're not tied to a reporting tool, you don't need your DBAs to support it, and you're already familiar with Java or C#, then feel free to grab the data from SQL and process it however you see fit. E: Moogle posted:I know this has already been answered, but I saw this after waking up and wanted to put it in one line because reasons.
That's really a very pretty solution, and much better than mine for the problem as stated, but it locks you into the a, b, c mapping and I think you'd need a case statement to allow arbitrary strings. One Swell Foop fucked around with this message at 08:21 on Oct 16, 2014 |
# ? Oct 16, 2014 08:09 |
|
atmakaraka posted:So I got assigned the fun task of using 11 year old report building software to build a report for some patient data at work. I know this has already been answered, but I saw this after waking up and wanted to put it in one line because reasons. SELECT STUFF((SELECT ',' + CHAR(96 + (NULLIF(SUBSTRING('1011011',n,1) & 1,0) * n)) FROM (VALUES (1),(2),(3),(4),(5),(6),(7)) [mask] (n) FOR XML PATH('')),1,1,'') That's assuming left to right, but you can just reverse the VALUES figures if you'd like it the other way. Salt n Reba McEntire fucked around with this message at 08:15 on Oct 16, 2014 |
# ? Oct 16, 2014 08:13 |
|
This isn't a DB issue... I am retarded.
thegasman2000 fucked around with this message at 21:53 on Oct 20, 2014 |
# ? Oct 20, 2014 21:19 |
|
Are there any best practices to keep in mind when setting up a read-only user account? There should be no difference between using the db_datareader group and "GRANT SELECT TO John", am I correct?
|
# ? Oct 20, 2014 21:52 |
|
Pretty much. db_datareader is a database-level role, while GRANT can be super fine-grained if you need it to be. One consideration is that db_datareader (IIRC) gives read access on the database's system tables and views, so it's probably slightly better practice to create a schema for user tables and "GRANT SELECT ON SCHEMA::userschema TO John".
One Swell Foop fucked around with this message at 22:28 on Oct 20, 2014 |
# ? Oct 20, 2014 22:25 |
|
Thanks. I think I actually want read access on the system stuff in this case, so I can call sys.fn_my_permissions to verify that the credentials being used have been configured correctly (and halt the program otherwise).
|
# ? Oct 20, 2014 23:41 |
|
That'd be a "GRANT EXECUTE ON sys.fn_my_permissions". Generally speaking you could also just check to see if the items you need exist and can be read from (wrap "SELECT top 1 'test' AS TEST FROM mytable" in a try/catch). That'd be slightly better practice than providing access to system functions; in fact on reflection it's probably best practice to just wrap every SQL query that you run in a try/catch and handle it that way, as at some point your SQL server isn't going to be available and then at least you've got your handlers in place already.
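The try/catch probe suggested above would be done in the application's language, but the same check can be sketched in T-SQL itself (dbo.mytable stands in for whichever table the audit user needs):

```sql
BEGIN TRY
    -- Cheap smoke test: can we read at all?
    SELECT TOP 1 'test' AS smoke_test FROM dbo.mytable;
    PRINT 'Read access confirmed.';
END TRY
BEGIN CATCH
    PRINT 'No read access: ' + ERROR_MESSAGE();
END CATCH;
```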
|
# ? Oct 21, 2014 00:00 |
|
One Swell Foop posted:That'd be a "GRANT EXECUTE ON sys.fn_my_permissions". Generally speaking you could also just check to see if the items you need exist and can be read from (wrap "SELECT top 1 'test' AS TEST FROM mytable" in a try/catch). That'd be slightly better practice than providing access to system functions; in fact on reflection it's probably best practice to just wrap every SQL query that you run in a try/catch and handle it that way, as at some point your SQL server isn't going to be available and then at least you've got your handlers in place already. Well the primary concern is to make sure that INSERT/UPDATE/DELETE do not work, so running a test SELECT query wouldn't help with that. (I would have to try inserting/updating/deleting a dummy entry in each table and check the error I get - probably not a great idea!) FWIW, the scenario is that we need to add an audit access option to an existing application, and said access must be 100% read-only for legal reasons (minus log entries). While we're also going to intercept the DB-writing functions in the program itself (so we can display the appropriate "NO YOU CAN'T DO THAT" messages instead of the usual "database error" ones), I want to play it safe by giving the audit user read-only SQL credentials as well, and check that they are read-only before they can do anything.
|
# ? Oct 21, 2014 00:26 |
|
|
|
NihilCredo posted:Well the primary concern is to make sure that INSERT/UPDATE/DELETE do not work, so running a test SELECT query wouldn't help with that. (I would have to try inserting/updating/deleting a dummy entry in each table and check the error I get - probably not a great idea!) If your tables aren't in a schema, you can GRANT SELECT ON SCHEMA::dbo TO MyUser. Also, I really wouldn't put your own functions in the sys schema.
|
# ? Oct 21, 2014 18:43 |