|
m5 posted:Personally I think it's risky to perform relational comparisons on fields that might be null. That's what COALESCE and ISNULL (MSSQL) and CASE are for.
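A sketch of the pattern m5 describes, using hypothetical table and column names — give NULLs a concrete value with COALESCE (or ISNULL in MSSQL) before comparing:

```sql
-- Hypothetical Orders table; discount may be NULL.
-- "discount < 10" silently drops NULL rows, since NULL < 10 evaluates to UNKNOWN.
SELECT order_id
FROM Orders
WHERE COALESCE(discount, 0) < 10;   -- treat a NULL discount as 0

-- MSSQL-specific equivalent:
-- WHERE ISNULL(discount, 0) < 10
```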
|
# ¿ Nov 2, 2007 04:33 |
|
Last time I tried SQL Prompt, I found that it requires you to type ctrl-space or something to activate the dropdown (or the dropdown would appear way too often), wouldn't autocomplete after pressing space, and other such annoyances. Maybe I've just acclimated to VS2005's intellisense, but I really like it, more than any other intellisense I've used. Oh, I don't think SQL Prompt does autocompletion for INFORMATION_SCHEMA, which I use a lot.
|
# ¿ Nov 2, 2007 15:48 |
|
Why not just do the following? Hopefully you have some character that is never in the title. code:
Edit: you're kinda screwed with Regex unless you want to use it to determine whether the first word is one of the above, and if so, find the index of the first space and grab the string after that space. (MySQL regexes only do matching, not capturing of groups or replacement.)
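The non-regex fallback described above might look like this in MySQL — table and column names are invented, and the article list is an assumption:

```sql
-- Sort titles ignoring a leading article: if the first word is an article,
-- order by everything after the first space; otherwise order by the title itself.
SELECT title
FROM albums
ORDER BY CASE
    WHEN SUBSTRING_INDEX(title, ' ', 1) IN ('The', 'A', 'An')
        THEN SUBSTRING(title, LOCATE(' ', title) + 1)
    ELSE title
END;
```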
|
# ¿ Nov 3, 2007 20:47 |
|
Find one of the many table-valued udfs that split strings, and then pass the ids as a delimited list.
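Usage might look like this — `dbo.Split` here is a stand-in for whichever string-splitting table-valued UDF you pick up, and its signature is an assumption:

```sql
-- @ids arrives as a comma-delimited list such as '3,17,42'
DECLARE @ids varchar(8000);
SET @ids = '3,17,42';

SELECT w.*
FROM Widgets w                         -- hypothetical target table
JOIN dbo.Split(@ids, ',') s            -- assumed signature: (@list, @delimiter) -> table(value)
    ON w.id = s.value;
```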
|
# ¿ Nov 4, 2007 22:12 |
|
Satan's Scallion posted:What we do now is bulk load the list of values to a temporary table, create an index, and then simply query with a join against this table.
|
# ¿ Nov 5, 2007 16:54 |
|
chocojosh posted:sp_executesql will not accept dynamic table names or field names to be entered. chocojosh posted:I'm wondering if there's any "gotchas" for using temp tables or table variables (and what's the difference between the two). I want to know why Microsoft hasn't worked on passing tables around, or at least views that can be dynamically selected from without dynamic sql.
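For reference, a quick sketch of the two options being compared (the usual differences, stated with some hedging since behavior varies by version):

```sql
-- #temp tables live in tempdb, can have indexes added after creation,
-- carry statistics, and persist for the whole session:
CREATE TABLE #ids (id int NOT NULL PRIMARY KEY);

-- @table variables are batch-scoped, can only be "indexed" via constraints
-- declared up front, and have no statistics (which can hurt plans on big sets):
DECLARE @ids TABLE (id int NOT NULL PRIMARY KEY);
```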
|
# ¿ Nov 6, 2007 20:49 |
|
chocojosh posted:I am choosing my table dynamically, based on an input parameter (I *know* you shouldn't do this due to lack of query caching). To do this requires using the exec statement which can make you a victim to sql injection. You can do dynamic SQL with sp_executesql just like you can with exec(sql). sp_executesql only allows scalar parameters, which may be what you were getting at.
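A minimal sketch of parameterized dynamic SQL via sp_executesql — note that only scalar values can be parameters; the table name cannot (the Users table is made up):

```sql
DECLARE @sql nvarchar(4000);
SET @sql = N'SELECT * FROM Users WHERE name = @name';

EXEC sp_executesql
    @sql,
    N'@name varchar(50)',   -- parameter declaration list
    @name = 'O''Brien';     -- passed as data, so no injection risk

-- This does NOT work; identifiers can't be parameterized:
-- SET @sql = N'SELECT * FROM @tbl'
```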
|
# ¿ Nov 7, 2007 06:00 |
|
pig-fukr posted:They have. In 2008 you get table-valued parameters that can be passed around, provided you're willing to strongly type them and read-only access will do what you need:
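A sketch of the SQL Server 2008 feature being referenced, with invented type/procedure names — the type must be declared up front and the parameter marked READONLY:

```sql
-- Strongly typed, read-only table-valued parameter (SQL Server 2008+)
CREATE TYPE IdList AS TABLE (id int NOT NULL PRIMARY KEY);
GO
CREATE PROCEDURE GetWidgets @ids IdList READONLY
AS
    SELECT w.*
    FROM Widgets w
    JOIN @ids i ON w.id = i.id;
GO
-- Caller fills an instance of the type and passes it whole:
DECLARE @ids IdList;
INSERT INTO @ids VALUES (3), (17), (42);
EXEC GetWidgets @ids;
```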
|
# ¿ Nov 7, 2007 22:03 |
|
chocojosh posted:replace(varchar_variable, '''', '''''')
[0] assuming that anything that doesn't go through the replace function above is strongly typed into something other than a string, e.g. int, float, date, etc.
[1] that processes the input from initial text input -> when the database query optimizer gets its hands on the query
Xae posted:You would need to check for ';' as well, since they might try to submit a compound statement, i.e. "Select * from sys;Drop X;"
|
# ¿ Nov 8, 2007 00:41 |
|
Hairy == slow? Ugly?
|
# ¿ Nov 8, 2007 21:06 |
|
I don't think there is some solid limit -- the 8000 number just comes from the maximum storage length of char, varchar, binary, and varbinary in SQL 2000 and earlier (2005 has [n]varchar(max) and varbinary(max)). Selecting that many IDs from a GUI is a bit questionable, so if possible, I would just put a limit on that (say, 100 entries) and also have a "select all" option, in which case you wouldn't use IN at all. However, people can be retarded and require that users be allowed to select hundreds of entries, so this might not be an option. If so, I guess a temporary table would be the best approach, albeit ugly.
|
# ¿ Nov 8, 2007 21:52 |
|
m5 posted:In our application, it isn't a case of idiot customers, and selecting thousands of things to operate on makes perfect sense.
|
# ¿ Nov 9, 2007 01:09 |
|
Join illustrations.
|
# ¿ Nov 21, 2007 15:47 |
|
If you have an index on owner, it's not very expensive. However, that makes it more expensive to insert/remove rows from your messages table. Whether this actually matters depends on details you have not provided.
|
# ¿ Nov 24, 2007 01:10 |
|
I might make a thread about sweet SQL2005 features at some point, but I just wanted to get this out there: using cross apply/outer apply allows you to join a table-valued function to a table, using values from the table as input to the UDF. That's useful for, among other things, normalizing data with a CLR Regex Split function. I learned about this tonight, after getting pissed that there was no way to link a table-valued UDF to a table, since relations can't refer to each other outside of the join condition. SQL2005 to the rescue!
|
# ¿ Nov 27, 2007 07:14 |
|
Let's say I have deployed a table-valued function called dbo.RegexSplit that returns a table containing the column value. Let's say I have a table that is (foo int not null, list_of_bars varchar(100) not null). I want to normalize that to (foo int not null, bar int not null). First, I tried code:
code:
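The working shape of that query is presumably something like the following — dbo.RegexSplit's exact argument list is an assumption, and the source table name is invented:

```sql
-- denormalized(foo int not null, list_of_bars varchar(100) not null)
SELECT d.foo, s.value AS bar
FROM denormalized d
CROSS APPLY dbo.RegexSplit(d.list_of_bars, ',') s;  -- assumed (@input, @pattern)
-- A plain JOIN can't express this: the UDF's argument comes from the other
-- relation, which is only legal with CROSS APPLY / OUTER APPLY (SQL 2005+).
```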
|
# ¿ Nov 27, 2007 07:37 |
|
Something tells me we should write up a tutorial on joins or create a FAQ entry. I've made the point m5 notes above before -- sometimes you just want a left/right join to fail but don't want the entire row eliminated from the result set.
|
# ¿ Nov 27, 2007 16:48 |
|
Two columns: userId and contentId. The PK is both of those columns.
|
# ¿ Dec 3, 2007 23:15 |
|
Have you googled your question?
|
# ¿ Dec 4, 2007 15:16 |
|
Wow, I can see how reading EXPLAIN results would be a bit painful. Compare that to MS SQL Server query plans:
|
# ¿ Dec 4, 2007 20:23 |
|
How does the MySQL example actually do what you think it is doing? You are grouping by UserID, but only getting one row back -- do you only have one user? If you are convinced that MySQL really is doing the proper thing, do post the results, because I would be very surprised.

The TSQL version can be fixed by only assigning to variables, and not retrieving data. However, be warned that, at least if you throw in an ORDER BY clause, SQL Server is not guaranteed to populate your variable properly. If you'd like, I can yank a script that generates pretty CREATE TABLE code from INFORMATION_SCHEMA, which assigns to a @sql variable in the select statement in the style above, and breaks (only returns one row) if I add an ORDER BY ORDINAL_POSITION, in SQL 2005 (SQL 2000 works fine with that ORDER BY).

Edit: if you're using SQL 2005, do this properly by writing an SQL CLR aggregate. I have two such aggregates: dbo.JoinWithPipe and dbo.JoinWithCommaSpace.
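The "assign only, don't retrieve" pattern being described looks roughly like this (the target table name is a placeholder):

```sql
-- Concatenate column names into a variable: every item in the SELECT list
-- is an assignment, so no result set is returned.
DECLARE @sql varchar(8000);
SET @sql = '';
SELECT @sql = @sql + COLUMN_NAME + ', '
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'SomeTable';
-- Caveat from above: adding ORDER BY ORDINAL_POSITION here is NOT guaranteed
-- to concatenate in that order, and can break outright on SQL 2005.
SELECT @sql;
```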
|
# ¿ Dec 5, 2007 15:53 |
|
What version of SQL Server are you using? I realize I misread your first post after you posted sample results, so I'm pretty sure you can't do what you want with variable assignment, but I think you can another way. SQL2005 offers a lot of sweet functionality that could potentially make this a lot easier.
|
# ¿ Dec 5, 2007 17:58 |
|
This guy says cursors are the best method for doing running totals. Here are a few different ways to do running totals. I don't have time to explain them now; I can do that later if necessary. Only one of them requires SQL Server 2005; it uses recursive CTEs. code:
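One of the set-based variants presumably looks like this — cursor-free but O(n²), so it gets slow on big tables; the payments schema is invented:

```sql
-- Correlated-subquery running total over payments(id int, amount money)
SELECT p.id,
       p.amount,
       (SELECT SUM(p2.amount)
        FROM payments p2
        WHERE p2.id <= p.id) AS running_total
FROM payments p
ORDER BY p.id;
```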
|
# ¿ Dec 5, 2007 20:53 |
|
That SQL is revolting, especially the table breaking part. The SQL formatter in the OP sucks, IMHO. You have two tables in the first query that don't show up in the second query. code:
|
# ¿ Dec 6, 2007 02:47 |
|
Let's say I run a query on the table Thread. Then I add an inner join to Post. Will not the number of results balloon?
|
# ¿ Dec 6, 2007 03:10 |
|
Ryouga Inverse posted:Look at the two tables in question. A given install will only have one arch and only one sku; it's impossible to install the x64 and x86 versions simultaneously, for example. quote:Is there a way to construct a query such that I only get the results returned by the first query, that aren't returned by the second? I look at the queries and what I come up with is already in there (the OperationResult.NAME IS NULL bit) but all I'm really doing is stumbling in the dark. code:
code:
|
# ¿ Dec 6, 2007 05:19 |
|
Ok, if you have SQL2005... just comment out the columns in each query that aren't found in the other, make them the same order in each query, then stick an EXCEPT between the two queries and try to run it. With derived tables, you'd literally replace "massive query here" with your massive query. However, using EXCEPT is much cleaner and leaves much less room for error, so try that!
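The skeleton of the EXCEPT approach, with stand-in queries (the real ones would be the two massive queries above; column lists must match in count, order, and compatible type):

```sql
-- Rows produced by the first query that the second query does not produce:
SELECT machine_id, sku FROM InstallsA   -- stand-in for "massive query one"
EXCEPT
SELECT machine_id, sku FROM InstallsB;  -- stand-in for "massive query two"
```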
|
# ¿ Dec 6, 2007 12:03 |
|
noonches, you have to be kidding me. You're advocating an RDBMS which returns different results based on configuration like indexes. That means I could attempt to speed up a platform backed by MySQL by adding an index, and all of a sudden break the application. How on earth could that be construed in any way other than retarded? This query can be made to work: code:
Try this: code:
|
# ¿ Dec 8, 2007 17:21 |
|
Try the following: code:
|
# ¿ Dec 8, 2007 18:05 |
|
Mashi posted:My question is: What is the cleanest / most efficient way to retrieve this data as if it were stored the regular horizontal way?
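One common answer (all names invented here) is the MAX(CASE ...) self-pivot, which folds the vertical rows back into one horizontal row per entity:

```sql
-- Vertical storage: properties(entity_id int, name varchar(50), value varchar(100))
SELECT entity_id,
       MAX(CASE WHEN name = 'height' THEN value END) AS height,
       MAX(CASE WHEN name = 'width'  THEN value END) AS width,
       MAX(CASE WHEN name = 'color'  THEN value END) AS color
FROM properties
GROUP BY entity_id;
```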
|
# ¿ Dec 12, 2007 05:23 |
|
fletcher posted:If you have a mysql database of hundreds of thousands of articles what is the best way to pull up the top 100 articles ordered by number of views?
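The straightforward MySQL answer, sketched against a hypothetical schema — the key point is the index, not the query:

```sql
-- articles(id, title, views); with an index on views, MySQL can read the top
-- 100 straight off the index instead of sorting hundreds of thousands of rows.
SELECT id, title, views
FROM articles
ORDER BY views DESC
LIMIT 100;
```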
|
# ¿ Dec 12, 2007 05:24 |
|
On EAV: if you have SQL2005, it becomes a lot prettier with PIVOT functionality. A lot prettier. Dunno about how badly it performs... Dakha posted:This is probably really easy to fix, but I'm trying to select a list of material numbers, and their corresponding text descriptions. However, the table with the text descriptions has several entries for each material number, for different languages. I want to get the German description, unless it doesn't exist, in which case I want the English description. code:
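One way to express the German-with-English-fallback lookup — table and column names are guesses at the schema described:

```sql
-- materials(matnr); material_text(matnr, lang, description)
SELECT m.matnr,
       COALESCE(de.description, en.description) AS description
FROM materials m
LEFT JOIN material_text de ON de.matnr = m.matnr AND de.lang = 'DE'
LEFT JOIN material_text en ON en.matnr = m.matnr AND en.lang = 'EN';
```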
|
# ¿ Dec 12, 2007 19:42 |
|
deimos posted:Yeah postgresql has the tablefuncs contrib for this, I was just wondering if anyone had a better idea. I am guessing I will make a product table and just foreign key an EAV-based table to it. quote:Another question, do I separate the "design/definition" data and the actual data or do I put them in the same table with some sort of flag? quote:Since I don't think I'll ever have to do anything with the data other than load and save (in terms of RDBMS interaction, because I won't ever have to search it) I could even make it a blob and just use XML/JSON to store it (yes, this is relational heresy, I know, but I don't care).
|
# ¿ Dec 12, 2007 20:39 |
|
Triple Tech posted:1) When should business logic be written at the database level versus the programming language level? Or is it something like "always in the DB unless you can't"? quote:2) My coworker just told me of this technique I've never heard before, so I wanted to know how commonplace it is. When importing a set of data, like many rows, you create a temporary table that's the same as the table you want to insert the stuff into, then you use some crazy SQL command to import the stuff en masse, and then delete the temporary table. True? What crazy command is this? And you wouldn't just loop over the stuff and INSERT each row/item?
|
# ¿ Dec 19, 2007 18:11 |
|
Doing a bulk insert into SQL Server with the utility bcp, inserting into a temporary table and then transforming it and inserting it into a permanent table, is still much faster than doing one-by-one inserts into the permanent table. At least, that's the way it is with SQL Server. If you don't need to do any keying (transforming a string such as "Mammography" to the ID of the row in some "dictionary" table that contains that code), then yes, you can insert directly into the permanent table. However, if you need to do transforms, and the permanent table has indexes on columns needing transforms, it will most definitely be faster to use a temporary table. Sometimes one must use a temporary table: what if the permanent table has a column for the ID for the code but not the code itself? (E.g. procedure_fk, which points to a Procedure record (id: 1, description: "Mammography"). Yes, I've been looking at a lot of hospital data lately.) SQL Server has an analog to your Oracle example: OPENROWSET.
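A sketch of the staging-table workflow described above; database, table, and column names are all invented, and the bcp flags assume trusted-connection access:

```sql
-- 1) Bulk load the flat file into a staging table (run from the command line):
--    bcp StagingDb.dbo.staging_exams in exams.dat -c -T -S myserver
-- 2) Key the free text against the dictionary table, then insert in one set:
INSERT INTO exam (patient_id, procedure_fk, exam_date)
SELECT s.patient_id, p.id, s.exam_date
FROM staging_exams s
JOIN [Procedure] p
    ON p.description = s.procedure_name;  -- e.g. 'Mammography' -> its id
```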
|
# ¿ Dec 19, 2007 19:18 |
|
There's also the issue of needing to verify data integrity/quality before you shove it in a production table.
|
# ¿ Dec 19, 2007 21:03 |
|
INFORMATION_SCHEMA.TABLES might exist.
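If it does, checking for a table looks something like this:

```sql
-- Standard metadata view (SQL Server, MySQL 5+, Postgres, etc.)
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE';
```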
|
# ¿ Dec 22, 2007 00:48 |
|
Did you try code:
|
# ¿ Dec 22, 2007 01:56 |
|
Scarboy, I'd suggest doing a subquery, unless you found the performance prohibitive. Are you using MySQL? If so, you can use the ability to select non-aggregated fields that weren't grouped by to your advantage (note the use of coalesce):code:
|
# ¿ Dec 28, 2007 01:42 |
|
I'll probably make a thread on it one of these days, but for now: Using LINQ in SQLCLR.
|
# ¿ Dec 28, 2007 22:02 |