|
Mox Hombre posted:yo what's up YOSPOS long time no see. code:
https://www.postgresql.org/docs/11/tutorial-window.html
|
# ? Jul 24, 2020 18:36 |
|
Mox Hombre posted:yo what's up YOSPOS long time no see. My intuitive solution to this is to group your ledger rows by person and add a row number to them, then filter that result to just payouts. Then you'd get something like... code:
However, are you sure getting the number of tips per payout is even possible? Do payouts always empty the whole balance, or are partial payouts possible?
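A minimal sketch of that shape, assuming a hypothetical ledger table with person_id, entry_type, and an increasing entry_id:

```sql
-- number each person's ledger rows in order, then keep only the payout rows;
-- the row number tells you how many ledger entries preceded each payout
-- (table and column names are assumptions, not from the original post)
SELECT *
FROM (
    SELECT person_id, entry_type, amount,
           ROW_NUMBER() OVER (PARTITION BY person_id ORDER BY entry_id) AS rn
    FROM ledger
) numbered
WHERE entry_type = 'payout';
```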
|
# ? Jul 24, 2020 18:38 |
|
I have a MySQL table like so:

CREATE TABLE `user_names` (
  `user_id` int(11) NOT NULL AUTO_INCREMENT,
  `username` varchar(75) NOT NULL,
  `team` bigint(20) NOT NULL,
  PRIMARY KEY (`user_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1114096 DEFAULT CHARSET=latin1;

with three columns: user_id, username, team. I am trying to pull a user/team combo at random, so I have a statement:

select user_id, username, team from user_names where user_id = (SELECT FLOOR(RAND()*((select count(1) FROM user_names)-1+1)+1))

that should theoretically grab a single user_id anywhere from 1 to the maximum user_id, which is count(1) (1,081,480), and its respective username and team. Right? However, when I repeatedly run the command I often get nulls, sometimes I get a single user_id (as expected), and other times I get two or three user_ids returned. What's going on here? Agrikk fucked around with this message at 06:30 on Jul 26, 2020 |
# ? Jul 26, 2020 06:27 |
|
Agrikk posted:I have a MySQL table like so: SELECT x FROM y ORDER BY RAND() LIMIT 1;
|
# ? Jul 26, 2020 08:16 |
|
Thanks for the responses. Yeah, getting them for only one payout would be doable; just limit the ids to be between two payoutIds. However, I need aggregate data for all of them. I'll look into stored procedures and window functions; my intuition is that this can't be done with just normal SQL, because it looks like you need something akin to a map-reduce to do it.
|
# ? Jul 26, 2020 13:02 |
|
Kuule hain nussivan posted:No clue what's going on with your query, especially it returning multiple rows, but the MySQL way of getting a random row is... I think maybe the mid-level query (the SELECT FLOOR one) is being evaluated for each row of the outer query, rather than just once as you intended. You could perhaps force it to do it the way you want by doing a join instead. Anyway don't do that, just do what Kuule hain nussivan said.
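For what it's worth, the join variant might look something like this (it still assumes user_ids run contiguously from 1 to COUNT(*), which is its own problem, so ORDER BY RAND() LIMIT 1 really is the simpler answer):

```sql
-- evaluate RAND() once inside a derived table, then join on the result;
-- MySQL materializes derived tables containing nondeterministic functions,
-- so the random id is computed a single time rather than per row
SELECT u.user_id, u.username, u.team
FROM user_names u
JOIN (
    SELECT FLOOR(1 + RAND() * (SELECT COUNT(*) FROM user_names)) AS rid
) r ON u.user_id = r.rid;
```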
|
# ? Jul 26, 2020 20:59 |
|
abelwingnut posted:yea, that makes sense. problem is i have no idea how to do that.
|
# ? Aug 1, 2020 02:09 |
|
I had a query that was scheduled to run at 10am EST every Monday and return data going back to 12am the prior Tuesday (so any data in a 154-hour window). I was using this code to do that: where i.dte BETWEEN DATEADD(hour, -154, GETDATE()) AND GETDATE() That worked fine, but now I need the query to run at times other than 10am on Monday; it could be 8am or 2pm, but still going back to 12am the prior Tuesday. Is there a way to make the code return data after 11:59:59 the prior Tuesday up to whatever minute the query is run? Thank you
|
# ? Aug 5, 2020 13:39 |
|
Mouse Cadet posted:I had a query that was scheduled to run at 10am est every Monday and return data going back to 12am the prior Tuesday (so any data in a 154 hour window). I was using this code to do that: the answer is probably "yes" but the way you do it might depend what RDBMS you are using
|
# ? Aug 5, 2020 13:49 |
|
yes. you effectively want to change the reference point from something relative to something absolute, and the best reference point is time 0. so something like: BETWEEN DATEADD(hour, -108, DATEADD(week, DATEDIFF(week, 0, GETDATE()), 0)) AND GETDATE() so you're saying: ok, give me how many weeks it has been since time 0, snap back to the start of that week, and from that point subtract 108 hours (i think that's how many hours it would be, can't check now). this gives you the start time. and the end time would just be a regular old getdate(). may not be exact, but shows what i'm getting at enough, i think. abelwingnut fucked around with this message at 13:53 on Aug 5, 2020 |
# ? Aug 5, 2020 13:49 |
|
Hammerite posted:the answer is probably "yes" but the way you do it might depend what RDBMS you are using SQL Server 2017. Thank you.
|
# ? Aug 5, 2020 14:19 |
|
Mouse Cadet posted:SQL Server 2017. Thank you. You can use DATEPART(dayofweek, GETDATE()) to extract the numeric value of the current day of week. Then use DATEADD() to subtract that many days from GETDATE(), so you go back to day #0 i.e. Sunday, and add two days to go to Tuesday. Finally, cast the result to DATE to go to midnight.
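A sketch along those lines, though anchored to a known Tuesday epoch rather than DATEPART (whose weekday numbering depends on the DATEFIRST setting); table and column names are hypothetical:

```sql
-- 1900-01-01 was a Monday, so 1900-01-02 was a Tuesday. Count whole days
-- since that Tuesday, truncate to whole weeks, and add them back to land on
-- the most recent Tuesday at midnight.
DECLARE @now datetime = GETDATE();
DECLARE @lastTuesday datetime =
    DATEADD(day, (DATEDIFF(day, '19000102', @now) / 7) * 7, '19000102');

SELECT *
FROM some_table i            -- hypothetical table; i.dte as in the original query
WHERE i.dte BETWEEN @lastTuesday AND @now;
```

Note that if the query happens to run on a Tuesday, @lastTuesday is that same day's midnight; whether that or the previous week's Tuesday is wanted depends on the schedule.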
|
# ? Aug 5, 2020 15:38 |
|
SQL code:
|
# ? Aug 5, 2020 15:42 |
|
Are there any query builder tools for DB2/AS400 that also display table/column remarks during autocompletion and dropdown/combobox selectors? We have Showcase Strategy Query at work, but that app is from the stone age. It was bought up ages ago and turned into a terrible web app. We're looking for an alternative, but that sort of feature seems to have gone missing, because people eventually started to slowly move to databases where you don't need to mangle names to fit within a few letters.
|
# ? Aug 5, 2020 20:13 |
|
so it seems merge is an extremely buggy command in sql server 2012 and it's best to avoid it. is that the general thought among you all as well? it could be so helpful but, man, it's just not updating like 20k of 200k rows for...i can't find a reason. e: lol, now i'm doing it the update/insert classic way and it's still not working e2: the only solution was deleting rows i wanted to update then inserting them. what the hell--never seen that. abelwingnut fucked around with this message at 20:47 on Aug 6, 2020 |
# ? Aug 6, 2020 17:54 |
|
Latest headache from my offshore team. This is supposed to be an app that can be used on mobile. Oh yeah, and none of the tables in the subqueries have a FK relationship to the main table. Or indexes. Please give me the sweet release of death. Macichne Leainig fucked around with this message at 19:03 on Aug 6, 2020 |
# ? Aug 6, 2020 19:00 |
|
abelwingnut posted:so it seems merge is an extremely buggy command in sql server 2012 and it's best to avoid it. is that the general thought among you all as well? it could be so helpful but, man, it's just not updating like 20k of 200k rows for...i can't find a reason. You sure you aren’t doing something like an inner join on a nullable column without handling the nulls? Very easy to accidentally do that and discard the records with a null on one side.
|
# ? Aug 8, 2020 04:25 |
|
Also, merge is fine and useful when you need to use the output clause not only for inserted/deleted but also the merge source. But otherwise I find it to be the least sweet syntactic sugar around
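That source-column trick is the one thing plain INSERT/UPDATE OUTPUT can't do; a sketch with hypothetical table names:

```sql
-- in a MERGE, the OUTPUT clause may reference the source table's columns
-- (src.batch_id here), which an ordinary INSERT ... OUTPUT cannot
MERGE dbo.target AS tgt
USING dbo.source AS src
    ON tgt.id = src.id
WHEN MATCHED THEN
    UPDATE SET tgt.val = src.val
WHEN NOT MATCHED THEN
    INSERT (id, val) VALUES (src.id, src.val)
OUTPUT $action, inserted.id, src.batch_id;   -- src.* is available here
```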
|
# ? Aug 8, 2020 04:27 |
|
Merge had some weird locking issues in old versions of SQL Server IIRC (can't remember how old though) but I think it's fine now. It has a bunch of caveats but sometimes it's exactly the right tool for the job.
|
# ? Aug 10, 2020 15:44 |
|
Ruggan posted:You sure you aren’t doing something like an inner join on a nullable column without handling the nulls? Very easy to accidentally do that and discard the records with a null on one side. the join was pk to pk, so it couldn't be that. i have no idea how it wasn't working properly. even the classic 'update, then insert' method wasn't working. i had to full on delete the records that needed updating, then just insert them along with the new records. the query itself is a 4k behemoth with like 50 temp tables and some other craziness going on, so maybe the server's running out of memory or processing power or...i don't know. it takes 30min to run, so it isn't too insane...or so i thought. it needs to be refactored down to something more manageable, yes. that's phase 2.
|
# ? Aug 10, 2020 18:59 |
|
I became aware recently that in SQL Server you can use a VALUES table-expression ("Table Value Constructor") to specify a collection of values without having to create and populate a temporary table. Suppose I have a similar use case to this but I have to use Oracle. What's the best I can do? I do not want to assume I am free to create temporary tables, either global or private. Assume I am going to left outer join from the pseudo-table to a real table, so a WHERE X IN (...) condition won't do.
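For reference, the SQL Server construct in question looks roughly like this (real table and column names are placeholders):

```sql
-- a Table Value Constructor used as a derived table, then left-joined
-- against a real table; no temp table needed
SELECT v.id, v.label, r.some_col
FROM (VALUES (1, 'foo'),
             (2, 'bar'),
             (3, 'baz')) AS v(id, label)
LEFT JOIN real_table r ON r.id = v.id;
```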
|
# ? Aug 11, 2020 01:01 |
|
Hammerite posted:I became aware recently that in SQL Server you can use a VALUES table-expression ("Table Value Constructor") to specify a collection of values without having to create and populate a temporary table. There isn't, as far as I know, anything that nice in Oracle yet. If the set of pseudo-table data is fixed, but you can't store it, you could put it in a WITH clause/CTE by doing UNION ALL and selecting them from Dual. Handily stolen example: code:
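The pattern being described looks roughly like this (values and table names are illustrative only):

```sql
-- Oracle: build the pseudo-table as a CTE of UNION ALL selects from DUAL,
-- then left-join it to the real table as if it were a normal table
WITH pseudo AS (
    SELECT 1 AS id, 'foo' AS label FROM dual
    UNION ALL
    SELECT 2, 'bar' FROM dual
    UNION ALL
    SELECT 3, 'baz' FROM dual
)
SELECT p.id, p.label, r.some_col
FROM pseudo p
LEFT JOIN real_table r ON r.id = p.id;
```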
|
# ? Aug 11, 2020 13:35 |
|
Moonwolf posted:There isn't, as far as I know, anything that nice in Oracle yet. If the set of pseudo-table data is fixed, but you can't store it, you could put it in a WITH clause/CTE by doing UNION ALL and selecting them from Dual. Handily stolen example: I thought that that's all that was available. Thanks for confirming.
|
# ? Aug 11, 2020 18:04 |
|
I'm not sure if this is the right thread for this but I can't think of where else it would go. I'm building an advanced filter type tool, similar to ebay where you can add a bunch of different filters to narrow down your results. I'm being asked if we can do something like this: where before applying each filter you get a preview of how many results it'll return. The only way I can think to do this is by basically pre-running each of the filter options as a COUNT query every single time the filters change, but that seems like it would be really slow. Is there a better way to do something like that or is this the kind of thing where we'd want a totally separate database layer if we want to access our data in this way?
|
# ? Aug 11, 2020 19:17 |
prom candy posted:I'm not sure if this is the right thread for this but I can't think of where else it would go. I'm building an advanced filter type tool, similar to ebay where you can add a bunch of different filters to narrow down your results. I'm being asked if we can do something like this: SELECT console, COUNT(*) FROM products GROUP BY console
|
|
# ? Aug 11, 2020 19:22 |
|
Alright I feel kinda stupid now having seen that... I have a lot more categories than just console, but I guess these count queries shouldn't be too slow? Plus it's going to be counting from a subset of the records that have the conditions already applied... not sure if that would make things faster or slower.
prom candy fucked around with this message at 19:57 on Aug 11, 2020 |
# ? Aug 11, 2020 19:29 |
prom candy posted:Alright I feel kinda stupid now having seen that... I have a lot more categories than just console, but I guess these count queries shouldn't be too slow? Plus it's going to be counting from a subset of the records that have the conditions already applied... not sure if that would make things faster or slower. I'm pretty sure it's just a general developer thing of not being able to see the simple solutions once you're in the weeds, I do the same thing all the time. Whatever aggregation stage you're pulling your filter options out at is ideally where you'd do the counts, because the queries can almost always pull double-duty. e: for example, our app has a similar function to what you're implementing here. We use java+hibernate, so our flow looks like:

{{base filtered query}}
select filter_options_1, count(*) from {{base filtered query}} group by filter_options_1
select filter_options_2, count(*) from {{base filtered query}} group by filter_options_2
etc...
return select {{base filtered query}} + filter options 1 + filter options 2

ChickenWing fucked around with this message at 20:49 on Aug 11, 2020 |
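In plain SQL that flow could be sketched with a single filtered CTE reused for each facet count (the products/console/genre names are just illustrative):

```sql
-- one base CTE holds the currently-applied filters; each facet count
-- reuses it, so the filter logic lives in exactly one place
WITH base AS (
    SELECT * FROM products
    WHERE price <= 100 AND in_stock = 1     -- the filters already applied
)
SELECT 'console' AS facet, console AS option_value, COUNT(*) AS n
FROM base GROUP BY console
UNION ALL
SELECT 'genre', genre, COUNT(*)
FROM base GROUP BY genre;
```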
|
# ? Aug 11, 2020 20:46 |
|
Is there any cool new way to write hierarchical queries in the latest versions of SQL server (ala oracle 'connect by')? Or do you still need to use CTE / TSQL etc?
|
# ? Aug 17, 2020 19:17 |
|
Can anybody suggest a solution for a query running slowly in a stored procedure but fast in a query window? It takes ~7 seconds in a query window and 1-2 minutes in a stored procedure. I've tried the following:
Those are the major things I know to check for and at least one of them has solved the problem in the past, but not this time. MSSQL 2014 is the version I'm using.
|
# ? Aug 17, 2020 21:41 |
|
Withnail posted:Is there any cool new way to write hierarchical queries in the latest versions of SQL server (ala oracle 'connect by')? Or do you still need to use CTE / TSQL etc? Still need to use a recursive CTE pretty sure
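The recursive-CTE equivalent of CONNECT BY looks roughly like this (employees table and columns are hypothetical):

```sql
-- anchor member picks the root; recursive member repeatedly joins children
-- whose manager_id matches an already-found emp_id
WITH org AS (
    SELECT e.emp_id, e.manager_id, 0 AS lvl
    FROM employees e
    WHERE e.emp_id = 16                      -- start at the root manager
    UNION ALL
    SELECT e.emp_id, e.manager_id, o.lvl + 1
    FROM employees e
    JOIN org o ON e.manager_id = o.emp_id
)
SELECT * FROM org;
```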
|
# ? Aug 18, 2020 03:48 |
|
Just-In-Timeberlake posted:Can anybody suggest a solution for a query running slowly in a stored procedure but fast in a query window? It takes ~7 seconds in a query window and 1-2 minutes in a stored procedure. Implicit database? If you're running queries with your database set to master, and master is set to a different compatibility level than your database, then running the query from master and running it from your database could produce different query plans. Any cross-server or cross-database joins? Anything like resource governor installed? Can you preview the query plans produced using estimated query plans and see a specific difference, to narrow down what might be causing the issue?
|
# ? Aug 18, 2020 03:52 |
|
Running the actual execution plan for both shows this, which I'm sure is the problem, but I'm not sure how to fix it: In the query window execution plan, there is an index seek, but in the stored procedure execution plan it becomes an index scan edit: I found the (FORCESEEK) hint and it works as it should now. Just-In-Timeberlake fucked around with this message at 15:20 on Aug 18, 2020 |
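For anyone following along, the hint goes on as a table hint; a sketch with hypothetical names (it assumes an index on the filtered column actually exists to seek on):

```sql
-- force the optimizer to use an index seek on this table rather than a scan;
-- @customer_id stands in for the stored procedure's parameter
SELECT o.order_id, o.total
FROM dbo.orders o WITH (FORCESEEK)
WHERE o.customer_id = @customer_id;
```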
# ? Aug 18, 2020 15:14 |
|
I'm a noob with CTEs but if I have the following query on a self-referencing table which returns all the employees for manager id 16, code:
Do I use multiple CTEs? edit. I was able to solve this by changing how the level is calculated, putting the CTE in a while loop, and dumping the results in a temp table. I suck at sql, I hate writing tsql Withnail fucked around with this message at 01:29 on Aug 20, 2020 |
# ? Aug 19, 2020 19:42 |
|
Not an SQL question but more of a schema question. I'm working with a group that has a bunch of different datasets that go through data prep and have roughly similar schemas at the end. For various reasons, though, things aren't always equivalent enough that you could just paste the sets together into one table. Things like: column A means one thing for set A and something slightly different everywhere else. I'm proposing creating a unified schema with additional columns highlighting the exceptions, like a separate column AExceptionReason that would be enum-valued and encode the differences. Is there a name for this kind of thing? All my searches come up with data-cleaning references, and that's not what I want, because the data is about as clean as it's going to get. This is more about allowing the data to live together with all of its differences. Currently the documentation of the differences lives in outside documents that are impossible to deal with programmatically.
|
# ? Aug 25, 2020 01:58 |
|
i'm not really sure how to say this, but hopefully this works. is there any way to run through a script line-by-line? like in a debug mode? i've seen this in other languages, where you can just say execute this line, and it'll show you how variables change with each line. anything like that without having to manually insert and remove selects with each line?
|
# ? Aug 26, 2020 19:13 |
|
abelwingnut posted:i'm not really sure how to say this, but hopefully this works. If you're using MSSQL, there's likely a Debug button next to Execute on the top bar of SSMS. I step through almost every stored procedure I write to make sure it's not going to tank something.
|
# ? Aug 26, 2020 19:32 |
|
abelwingnut posted:i'm not really sure how to say this, but hopefully this works. This only makes sense within the context of the procedural language extensions of whatever database you're using (TSQL, PL/SQL, PL/pgSQL, etc). Within those, there should be a debugger like Sir Fishbone says, probably integrated into whatever IDE you use for it, if it's a better-supported one. SQL proper is declarative; you can't step through it because the entire statement is parsed and applied as one thing. There you're best off commenting out joins and reading the explain plans while you work out why it's not doing what you expect.
|
# ? Aug 26, 2020 21:47 |
|
Yep. What you write is not what is run. It is compiled into code that is run, and there is no way to step through SQL line by line. For example, with a simple SELECT of a+b with a join, your code might get optimized into a plan with two index seeks, a preemptive sort on one of the tables, a merge join, and a scalar operator for some calculation. There isn't a way to visualize your sets at each step or correlate each step to a specific line or set of lines of declarative SQL. If your code contains multiple statements, for example a multi-part SP, you can run each separate statement individually, sure.
|
# ? Aug 26, 2020 23:22 |
|
I guess a true SQL debug mode would look like EXPLAIN ANALYZE on steroids, with the engine printing the intermediate results between each step of the execution plan, and serialising parallel steps. It would be pretty cool to have such a thing, but as noted, the execution plan you'd be stepping through would look nothing like your query, to the point that even a "source map" approach wouldn't be feasible.
|
# ? Aug 27, 2020 07:24 |
|
NihilCredo posted:I guess a true SQL debug mode would look like EXPLAIN ANALYZE on steroids, with the engine printing the intermediate results between each step of the execution plan, and serialising parallel steps. Star War Sex Parrot fucked around with this message at 17:26 on Aug 28, 2020 |
# ? Aug 28, 2020 17:01 |