|
Dakha posted:Is there some simple way we can rework things so that the extraction can be done in a reasonable amount of time? Some of the more complicated queries are taking FAR too long to run Stuff the IDs into a temp table and select a join off it?
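A minimal sketch of that approach -- table and column names are made up here:

```sql
-- Collect the IDs once into a temp table instead of a giant IN () list
CREATE TABLE #wanted (OrderID int PRIMARY KEY);

INSERT INTO #wanted (OrderID)
SELECT OrderID FROM dbo.ExtractionList;   -- or batched inserts from the app

-- The join lets the optimizer work off the index instead of re-scanning per ID
SELECT o.*
FROM dbo.Orders o
JOIN #wanted w ON w.OrderID = o.OrderID;

DROP TABLE #wanted;
```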
|
# Nov 5, 2007 16:36 |
|
|
jwnin posted:Can anyone provide me with some more ammo as to why we would not want to go down this path? Did the devteam also volunteer to manage, troubleshoot, and update those tens of thousands of databases? It's a horrible idea and whoever suggested it should be torn into little bitsy pieces and buried alive.
|
# Nov 8, 2007 20:06 |
|
You're pretty much in the same situation as Dakha was on page 1. If you're going to need to scale it up, a temp table is the way to go. And I'm pretty sure 8000 characters isn't a limit on the query itself -- that's the max size of the usual varchar/char variable types.
|
# Nov 8, 2007 21:53 |
|
Replace the last two WHERE conditions with a BETWEEN?
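For reference, the rewrite is just this (column name assumed):

```sql
-- Before: two separate range conditions
WHERE OrderDate >= '2008-01-01' AND OrderDate <= '2008-01-31'

-- After: BETWEEN is inclusive on both ends, so it's the same result
WHERE OrderDate BETWEEN '2008-01-01' AND '2008-01-31'
```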
|
# Jan 12, 2008 17:22 |
|
Grigori Rasputin posted:Does anyone know a good replacement for query analyzer? I'm working in a .NET/SQLServer shop now and find Query Analyzer abysmal to use. I used PL/SQL Developer at my last job as an Oracle dev and found it to be robust, easy to use and cheap. It made it easy to view tables/records/store procedures and had a proper IDE for creating/debugging PL/SQL stored procedures. Basically, it was a stripped down version of TOAD. I just found this yesterday. It's got its quirks, but still seems to be pretty great. Support looks good too -- I emailed the dev over a couple issues I have with 2005 support and was told to expect a fixed build in a couple days. Pretty sure I'll be buying it (if I can't get my company to) once the evaluation is up.
|
# Feb 14, 2008 17:19 |
|
Tots posted:If I have a CSV file that matches up perfectly with a table that has FK values, how do I input that data? Are you using BULK INSERT (if this is SQL Server)? You could disable the FK constraint temporarily, load the data, then null any invalid FK values out (UPDATE newTable SET foreignKey = NULL WHERE foreignKey NOT IN (SELECT primaryKey FROM oldTable) or something like that) and re-enable the constraint.
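Roughly this, with made-up table and constraint names:

```sql
-- 1. Turn the FK check off
ALTER TABLE newTable NOCHECK CONSTRAINT FK_newTable_oldTable;

-- 2. Load the CSV here (BULK INSERT or whatever you have access to)

-- 3. Null out anything that doesn't resolve to a real key
UPDATE newTable
SET foreignKey = NULL
WHERE foreignKey NOT IN (SELECT primaryKey FROM oldTable);

-- 4. Re-enable the constraint and re-validate the existing rows
ALTER TABLE newTable WITH CHECK CHECK CONSTRAINT FK_newTable_oldTable;
```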
|
# Mar 27, 2008 02:21 |
|
Tots posted:I'm using the school's database server, which currently doesn't give me permission to do a BULK INSERT, but I am working on getting that permission. For now, I am using sed (a search and replace program) to add in "INSERT INTO MyTable VALUES (" before each line, and a ")" after each line. I know I could further use RegExp to manipulate the data to suit my needs, but I know even less about RegExp than I know about SQL. Yeah. See here under 'Disabling Foreign Key Constraints'. You'll basically just be turning off the check for data integrity while you insert the new rows.
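For the sed-generated INSERT approach, the wrapper would be along these lines (table name made up):

```sql
ALTER TABLE MyTable NOCHECK CONSTRAINT ALL;   -- stop checking FKs

-- ...your generated INSERT INTO MyTable VALUES (...) lines run here...

ALTER TABLE MyTable WITH CHECK CHECK CONSTRAINT ALL;   -- re-check everything
```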
|
# Mar 27, 2008 06:08 |
|
dest posted:I copied that, added the correct FROM, and I got the error: "Didn't expect 'when' after the select column list" Remove the A.UnitPrice from just before the WHEN. CASE [Column] WHEN takes a value, CASE WHEN... takes a conditional expression. You need the conditional since it's not an exact match. Also you should be able to set the ELSE value to NULL if you don't want it populated with a zero.
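The two CASE forms side by side (the comparison itself is invented):

```sql
-- Simple CASE: WHEN takes a value compared against the tested expression
CASE A.UnitPrice WHEN 0 THEN 'free' ELSE 'paid' END

-- Searched CASE: WHEN takes a full conditional, which is what you need here
CASE WHEN A.UnitPrice > B.UnitPrice THEN A.UnitPrice ELSE NULL END
```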
|
# Apr 2, 2008 23:39 |
|
minato posted:Something more descriptive. Because you can be sure that eventually you'll have fields like "Created by", "Last updated by" and "Maintained by" that will all refer to user IDs, so you can't just use userId for those. I'd name it UserID for consistency in the foreign key relationship (User.UserID->Article.UserID reads better than User.UserID->Article.AuthorID), and those others would be CreatedBy, ModifiedBy, etc. -- those could hold either IDs or user names; I've seen both used. Of course, it all comes down to how quickly you want other people to pick up on what you did.
|
# Apr 8, 2008 07:29 |
|
Dromio posted:This seems pretty small. I imported some legacy data into a SQL Server database. I want to add a uniqueidentifier column to that table, and populate it with random GUIDs. I've added the column, but how do I populate it with random GUIDs for the existing data? You can set the default value of the column to NEWID() in the table designer or the ALTER TABLE script, and it should add everything the way you want it.
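Since the column already exists, the default only covers rows inserted from now on; the imported rows need a one-time backfill. Something like this, with assumed names:

```sql
-- Default for future inserts
ALTER TABLE dbo.LegacyData
ADD CONSTRAINT DF_LegacyData_RowGuid DEFAULT NEWID() FOR RowGuid;

-- One-time backfill for the existing rows; NEWID() is evaluated per row
UPDATE dbo.LegacyData SET RowGuid = NEWID() WHERE RowGuid IS NULL;
```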
|
# May 16, 2008 00:04 |
|
epswing posted:I'm curious about the background of your project, and what exactly would cause you to ask if juggling ~1000 temp tables is acceptable overhead. Yeah... but I don't see why fez couldn't just have one table, temp or not, keyed on session ID. What the hell is going on there?
|
# May 31, 2008 02:21 |
|
Casimirus posted:What I can't figure out is how to take a database name passed to the stored procedure and then access things like Yeah, it won't let you do that directly. For what you're doing, you might be able to get away with code:
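Presumably something along these lines -- dynamic SQL with the database name spliced in (database and table names made up):

```sql
-- @dbName would be the stored procedure's parameter
DECLARE @dbName sysname;
SET @dbName = N'SomeOtherDatabase';

DECLARE @sql nvarchar(4000);
SET @sql = N'SELECT COUNT(*) FROM ' + QUOTENAME(@dbName) + N'.dbo.SomeTable';
EXEC sp_executesql @sql;
```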
|
# Jun 2, 2008 02:19 |
|
Ardhanari posted:
Or, poo poo, I thought of this this morning but didn't have time to post: is there anything keeping you from using sp_addlinkedserver?
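If the linked-server route works for you, the setup is roughly this (server and database names are invented):

```sql
EXEC sp_addlinkedserver
    @server = N'OTHERBOX',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'otherbox.internal';

-- After which four-part names just work:
SELECT * FROM OTHERBOX.SomeDatabase.dbo.SomeTable;
```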
|
# Jun 3, 2008 00:42 |
|
chocojosh posted:How would you go about importing a CSV file into the database with some custom logic and be able to report errors (ideally by sending an e-mail)? We're using SQL Server 2005. Look into BULK INSERT.
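The basic shape, with a made-up path and table:

```sql
BULK INSERT dbo.ImportTarget
FROM 'C:\imports\data.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);  -- FIRSTROW skips a header line
```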
|
# Jun 18, 2008 00:02 |
|
chocojosh posted:Bulk insert doesn't seem to support custom rules. I've never used the FIRE_TRIGGERS part before, but I think you could do it through that -- just set up a trigger on the destination table with whatever custom logic you need (checking that every row in Inserted has a corresponding ID, etc). You should even be able to send emails from the trigger, but make sure you only do it under strictly specified conditions unless you want whoever's running your email server to cut your throat.
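A sketch of that setup -- all the names here are invented, and the mail call assumes Database Mail is configured:

```sql
CREATE TRIGGER trg_ImportTarget_Check ON dbo.ImportTarget
AFTER INSERT
AS
BEGIN
    -- Custom rule: complain about rows whose CustomerID doesn't resolve
    IF EXISTS (SELECT *
               FROM inserted i
               LEFT JOIN dbo.Customers c ON c.CustomerID = i.CustomerID
               WHERE c.CustomerID IS NULL)
        EXEC msdb.dbo.sp_send_dbmail
             @recipients = 'you@example.com',
             @subject = 'Import rows with bad CustomerIDs';
END;
GO

-- BULK INSERT skips triggers unless you opt in:
BULK INSERT dbo.ImportTarget
FROM 'C:\imports\data.csv'
WITH (FIELDTERMINATOR = ',', FIRE_TRIGGERS);
```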
|
# Jun 18, 2008 21:22 |
|
xenilk posted:Thanks for explaining, it completely makes sense now Use EXCEPT to get rid of that innermost block comparison. code:
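Something like this, guessing at the shape of the comparison:

```sql
-- Rows in the new batch that don't already exist in the target
SELECT Col1, Col2 FROM dbo.NewBatch
EXCEPT
SELECT Col1, Col2 FROM dbo.Existing;
```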
|
# Jun 20, 2008 17:52 |
|
Factor Mystic posted:I've got some questions for a new hobby project. It's a web bookmark database type thing, and there will be user-added keywords associated with each url. I want to be able to search by keyword, and I want to do this in an intelligent way. I know not to just dump all the keywords into a big Text field, but I don't know the proper way to handle this. Can anyone post some tips or basic-intermediate articles on this type of thing? Each URL can have many keywords, and keywords can have multiple bookmarks 'tagged', right? Set up a Keywords(KeywordID, KeywordData, ...) table and a Bookmark_Keyword (BookmarkID, KeywordID) to create your many-to-many relation. Then you can get the keywords for a URL by querying through the bridge table, or get all bookmarks with a particular tag the same way.
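Sketched out, with IDENTITY keys assumed:

```sql
CREATE TABLE Bookmark (
    BookmarkID int IDENTITY PRIMARY KEY,
    Url varchar(2000) NOT NULL
);

CREATE TABLE Keyword (
    KeywordID int IDENTITY PRIMARY KEY,
    KeywordData varchar(100) NOT NULL UNIQUE
);

-- The bridge table: one row per (bookmark, keyword) pairing
CREATE TABLE Bookmark_Keyword (
    BookmarkID int NOT NULL REFERENCES Bookmark (BookmarkID),
    KeywordID int NOT NULL REFERENCES Keyword (KeywordID),
    PRIMARY KEY (BookmarkID, KeywordID)
);
```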
|
# Jun 22, 2008 16:58 |
|
Factor Mystic posted:Alright, in that setup I don't understand how a bookmark is related to more than one keyword. If each KeywordID links to only one keyword (KeywordData), how can more than one keyword be saved for a bookmark? I guess I don't understand the proper way to store a variable length list of data. Keyword.KeywordID links to Bookmark_Keyword.KeywordID. Bookmark_Keyword.BookmarkID then links to Bookmark.BookmarkID. You'll have unique keywords (if you want) but they link to multiple Bookmark_Keyword entries and thus to multiple bookmarks (and vice versa). code:
ninja edit: holy poo poo, I didn't realize I had an encoded message about markup in there. Unintended, but I'm leaving that in!
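The queries through the bridge table come out roughly like this (column names assumed):

```sql
-- All keywords for one bookmark
SELECT k.KeywordData
FROM Bookmark_Keyword bk
JOIN Keyword k ON k.KeywordID = bk.KeywordID
WHERE bk.BookmarkID = @bookmarkId;

-- All bookmarks carrying one keyword (the reverse direction)
SELECT b.Url
FROM Bookmark_Keyword bk
JOIN Bookmark b ON b.BookmarkID = bk.BookmarkID
JOIN Keyword k ON k.KeywordID = bk.KeywordID
WHERE k.KeywordData = 'sql';
```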
|
# Jun 22, 2008 21:44 |
|
|
Super Mario Shoeshine posted:I just found a way to separate that string, but I need to know how to pass the result of a string function to a temp variable. I mean, for example, if I use SELECT RIGHT() I get a number as a result; is there a way to pass this number to a variable I just defined in the session? SELECT @variable = RIGHT(whatever you're doing here)
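In full, with a made-up string:

```sql
DECLARE @num int;
SELECT @num = RIGHT('ORDER-12345', 5);  -- varchar result implicitly converts to int
SELECT @num;  -- 12345
```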
|
# Jul 18, 2008 18:34 |