|
Roundboy posted:While I can't offer you too many specifics without knowing your db setup, the answer is almost universally PowerShell. Someone on my team just wrote something like this in PowerShell for our company last week, so I can confirm this is a good approach.
|
# ? Apr 17, 2015 13:57 |
|
|
# ? Apr 29, 2024 06:38 |
|
Nth Doctor posted:I'm pretty sure it's option 1. Are there any indexes involved? Like an index on Data for the stamp column? Yes. There is an index on stamp, col1
|
# ? Apr 17, 2015 18:01 |
|
Agrikk posted:Yes. There is an index on stamp, col1 Yeah, do option 1.
|
# ? Apr 17, 2015 21:27 |
|
Kumbamontu posted:Someone on my team just wrote something like this in Powershell for our company last week, so I can confirm this is a good approach. Powershell, you say? I've been meaning to learn how that works so ... good. The current setup makes far too much use of xp_cmdshell for my liking. Thanks all!
|
# ? Apr 21, 2015 05:15 |
|
I'm not sure of the terminology for what I'm looking for - possibly agile tables? Basically, another guy and I are building a web form with .NET (MVC 4 intranet site) and we want to build it with "changes" in mind. We will receive an initial form layout, but I know they're going to change the number of drop-downs/checkboxes/free-text areas and so on. Do I build a table with the expectation that 1 row is one form input, or multiple tables? I could easily insert another column in a table if they add more questions, but that doesn't seem right. If I do multiple tables, I'll have to factor in a PK for each, as they want to be able to edit these as they fill them in over the course of a few days. TL;DR - I just want to be able to add more columns to the table structures in the future and not have it be a big ordeal to do so, but I'm unsure of what I'm looking for. I'd prefer to keep the columns in the order that the form follows, so things don't get out of whack when referencing them. IE: Section1Dropdown1 is the same between the table and the form. kloa fucked around with this message at 15:47 on Apr 27, 2015 |
# ? Apr 27, 2015 15:43 |
|
kloa posted:I'm not sure the terminology for what I'm looking for - possibly agile tables? quote:... We will receive an initial form layout, but I know they're going to change the amount of drop downs/checkboxes/free-text areas and so on. There's nothing wrong with adding a new field to a table, but be aware that some popular database engines are notoriously bad at that operation: some lock the whole table while they rebuild it. There is also no point in separating data that naturally belongs together; that is, a separate table each for first name, last name, and title would be the utmost silliness. (I assume you're talking about SQL, given which thread this is, and not some other data storage method.) quote:... I'd prefer to keep the columns in order that the form follows, so things don't get out of whack when referencing them. IE: Section1Dropdown1 is the same between the table and the form.
|
# ? Apr 27, 2015 16:13 |
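An aside for kloa's question: besides adding columns later, the other standard layout for forms whose fields keep changing is entity-attribute-value, one row per answer. The table and column names below are made up, and it's sketched in SQLite through Python purely so it's runnable; the trade-off is that queries have to pivot rows back into columns, and you lose per-column types and constraints.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One row per answer instead of one column per question, so a new
    -- question is just a new FieldName value, not an ALTER TABLE.
    CREATE TABLE FormResponse (
        ResponseId INTEGER,
        FieldName  TEXT,   -- e.g. 'Section1Dropdown1'
        FieldValue TEXT,
        PRIMARY KEY (ResponseId, FieldName)
    );
    INSERT INTO FormResponse VALUES
        (1, 'Section1Dropdown1', 'Yes'),
        (1, 'Section1FreeText1', 'some notes');
""")
rows = conn.execute(
    "SELECT FieldName, FieldValue FROM FormResponse "
    "WHERE ResponseId = 1 ORDER BY FieldName"
).fetchall()
print(rows)  # [('Section1Dropdown1', 'Yes'), ('Section1FreeText1', 'some notes')]
```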
|
Is there some way to simulate iteration in SQL (postgres specifically)? For example, if I've got a query like:SQL code:
|
# ? Apr 28, 2015 03:46 |
|
KernelSlanders posted:Is there some way to simulate iteration in SQL (postgres specifically)? For example, if I've got a query like: Some unbelievable genius added this to postgres 9.3 in psql as \watch http://michael.otacoo.com/postgresql-2/postgres-9-3-feature-highlight-watch-in-psql/
|
# ? Apr 28, 2015 04:04 |
|
KernelSlanders posted:Is there some way to simulate iteration in SQL (postgres specifically)? For example, if I've got a query like: code:
|
# ? Apr 28, 2015 04:38 |
|
One way is with a recursive CTE:SQL code:
|
# ? Apr 28, 2015 04:46 |
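The quoted queries didn't survive, but for readers following along, the general shape of a recursive CTE that simulates iteration looks like this. It's shown in SQLite via Python so the sketch actually runs; Postgres accepts the same WITH RECURSIVE syntax, and also has generate_series() for exactly this job.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The CTE references itself: the first SELECT seeds it, the second keeps
# appending rows until the WHERE clause stops the recursion.
rows = conn.execute("""
    WITH RECURSIVE seq(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM seq WHERE n < 5
    )
    SELECT n FROM seq
""").fetchall()
print([r[0] for r in rows])  # [1, 2, 3, 4, 5]
```

The generated sequence can then be joined against a real table, which is how you "iterate" without leaving SQL.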
|
Moogle posted:Just so you don't drive yourself absolutely crazy over this, if you're running on a virtual box - and I'm assuming here you are - that is simply the volume shadow copy service doing its thing in response to a host action and nothing for you to worry about. Thanks for responding Moogle. Sorry I didn't write back earlier, but I am no longer employed by that organization so it's no longer my problem! As an aside, it wasn't a virtbox but a physical server.
|
# ? May 2, 2015 04:29 |
|
Dumb question about indexes. I have tables like this.code:
create index some_index on GROUP (ValidFrom, ValidTo) include (AccountId) Or is there some better structure/set of indexes? I suck at indexes. I need to study up. If it matters I'm basically doing: code:
|
# ? May 4, 2015 22:10 |
|
The quickest way to start working out what needs to be indexed is to ensure that the join columns on both tables are covered by indexes. It sounds like eventdate and accountID are already indexed, so to speed things up you probably want to put a nonclustered index on the ValidFrom and ValidTo columns in Group.
|
# ? May 6, 2015 20:05 |
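A runnable illustration of that advice, with a hypothetical cut-down version of the Group table, in SQLite via Python. SQL Server's INCLUDE (AccountId) has no SQLite equivalent, so the column is simply appended to the index key, which approximates the same covering behaviour.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- "grp" stands in for the Group table (GROUP is a reserved word).
    CREATE TABLE grp (AccountId INTEGER, ValidFrom TEXT, ValidTo TEXT);
    -- Range columns first, AccountId appended so the query below can be
    -- answered entirely from the index (a "covering" index).
    CREATE INDEX ix_grp_valid ON grp (ValidFrom, ValidTo, AccountId);
    INSERT INTO grp VALUES (7, '2015-01-01', '2015-12-31');
""")
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT AccountId FROM grp "
    "WHERE ValidFrom <= '2015-05-04' AND ValidTo >= '2015-05-04'"
).fetchall()
print(plan)  # plan detail should mention ix_grp_valid
```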
|
Sub Par posted:Dumb question about indexes. I have tables like this.
|
# ? May 6, 2015 21:00 |
I have a fairly complex code request - I basically need to be able to run some pre-existing SQL queries against Excel data and return data from SQL. I'm willing to pay for it since it seems to me that it's more than just a stupid question. It's in SA-Mart: http://forums.somethingawful.com/showthread.php?threadid=3718618
|
|
# ? May 7, 2015 15:07 |
|
I'm looking to calculate some averages over various different date ranges all at once and trying to figure out the best approach. I have a way that works but it is ugly and involves multiple temp tables that I'm trying to clean up to make things more readable. At the core, I am attempting to calculate Daily, Monthly, and Yearly averages for various payment methods that our customers use. I'm starting off with ACH & Credit/Debit Card Payments and will expand it to other methods once I figure out a better way to do this if there is one. We have a table PaymentQueue (PK is ID) that houses all ACH & Card payments made, and a separate table ScheduledPayment that references PaymentQueue that specifies payments that were either one-time payments scheduled ahead of time or are recurring every month. I want to break out the averages by One-Time payments (so anything that is in PaymentQueue but not in ScheduledPayment), Recurring Payments, and Scheduled One-Time Payments. Here's what I have so far: code:
|
# ? May 8, 2015 21:56 |
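One way to avoid the pile of temp tables is to let GROUP BY produce each grain in a single pass. The schema below is a guess at a minimal slice of PaymentQueue, in SQLite via Python so it runs; the one-time/recurring/scheduled split would become a LEFT JOIN to ScheduledPayment plus a CASE bucket in the same query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE PaymentQueue (ID INTEGER PRIMARY KEY, Amount REAL, PostDate TEXT);
    INSERT INTO PaymentQueue (Amount, PostDate) VALUES
        (100, '2015-01-01'), (200, '2015-01-01'),
        (300, '2015-02-01'), (400, '2016-01-01');
""")
# Monthly averages in one pass; the same query with '%Y' gives yearly
# averages and '%Y-%m-%d' gives daily ones.
rows = conn.execute("""
    SELECT strftime('%Y-%m', PostDate) AS ym, AVG(Amount)
    FROM PaymentQueue
    GROUP BY ym
    ORDER BY ym
""").fetchall()
print(rows)  # [('2015-01', 150.0), ('2015-02', 300.0), ('2016-01', 400.0)]
```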
|
code:
Here's what the mysql slow log is showing for these: # Query_time: 3.947889 Lock_time: 0.000364 Rows_sent: 36 Rows_examined: 24661677 hayden. fucked around with this message at 04:19 on May 9, 2015 |
# ? May 9, 2015 03:56 |
|
hayden. posted:
code:
|
# ? May 9, 2015 19:03 |
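Since hayden.'s original queries were lost from the quote, here's the general shape of the fix being described: a scalar subquery in the select list, replacing one round-trip per row. The schema is invented for illustration, in SQLite via Python.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE threads (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, thread_id INTEGER, body TEXT);
    INSERT INTO threads VALUES (1, 'a'), (2, 'b');
    INSERT INTO posts (thread_id, body) VALUES (1, 'x'), (1, 'y'), (2, 'z');
""")
# One correlated scalar subquery per output row, computed inside the
# database, instead of issuing a separate COUNT query from PHP per thread.
rows = conn.execute("""
    SELECT t.title,
           (SELECT COUNT(*) FROM posts p WHERE p.thread_id = t.id) AS n_posts
    FROM threads t
    ORDER BY t.id
""").fetchall()
print(rows)  # [('a', 2), ('b', 1)]
```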
|
That's perfect, works great. Thanks so much, I've never used subqueries in select statements before. Good to know! edit: hooray, my VPS went from constantly ~50% CPU usage to ~5%. hayden. fucked around with this message at 21:58 on May 9, 2015 |
# ? May 9, 2015 21:46 |
|
Kumbamontu posted:I'm looking to calculate some averages over various different date ranges all at once and trying to figure out the best approach. I have a way that works but it is ugly and involves multiple temp tables that I'm trying to clean up to make things more readable. strings are not a substitute for dates and you are going to shoot yourself in the foot some day if you keep using them as such.
|
# ? May 11, 2015 15:54 |
|
Citizen Tayne posted:strings are not a substitute for dates and you are going to shoot yourself in the foot some day if you keep using them as such. A string literal 'YYYYMMDD' is implicitly cast to date, so it doesn't necessarily mean the underlying postdate type is also a string. He even made it a datetime on #temp! ElehemEare fucked around with this message at 17:20 on May 11, 2015 |
# ? May 11, 2015 17:18 |
|
ElehemEare posted:A string literal 'YYYYMMDD' is an implicit cast to date so it doesn't necessarily mean the underlying postdate type is also a string. He even made it a datetime on #temp! Depends on where you're using it; in Oracle or Teradata that doesn't work. Relying on implied casts is a real bad idea.
|
# ? May 11, 2015 17:25 |
|
Those were just example dates thrown in there to see if the calculations work, which they do - I have no intention of leaving it like that. Eventually it will likely be parameterized so a user can input a range of years to get the averages over. Also this will never be run on anything other than SQL Server.
|
# ? May 11, 2015 17:55 |
Is there a way to tell if a database restore was done? All my work last week is gone and someone is going to pay dearly for it.
|
|
# ? May 12, 2015 20:35 |
|
I'm trying to set up an after insert trigger on a SQL Server database - all it needs to do is, upon insert to a certain table, insert another row in a similar but different table in a different db, on a different server. I've just discovered that if the insert performed by the trigger fails, the original one fails as well. Is there any way to avoid this? I'm seeing suggestions that you can get around it by having the trigger call a stored procedure - but I can't see definitively whether this will work or how to implement it. Can anyone help me out? e: Or, in fact, what is the best way to achieve this, if a trigger is not it? chippy fucked around with this message at 10:34 on May 22, 2015 |
# ? May 22, 2015 08:57 |
|
chippy posted:I'm trying to set up an after insert trigger on an Sql Server database - all it needs to do is, upon insert to a certain table, insert another row in a similar but different table in a different db, on a different server. I've just discovered that if the triggered insert fails, the second one does as well. Is there any way to avoid this? I'm seeing suggestions that you can get around it by having the trigger call a stored procedure - but I can't see definitely if this will work and how to implement it. Can anyone help me out? Surely that's what you would expect an after insert trigger to do? Why does it need to be an after insert trigger and not a before insert one?
|
# ? May 22, 2015 11:48 |
|
Hammerite posted:Surely that's what you would expect an after insert trigger to do? Why does it need to be an after insert trigger and not a before insert one? OK, well, as I understand it, SQL Server only has AFTER and INSTEAD OF triggers. Surely an INSTEAD OF would have the same problem? Basically, the original insert always needs to succeed regardless of what happens. Maybe a trigger is not the best way of doing this? I think I phrased my original problem wrong - if the second insert fails (the one executed by the trigger), I need it to not roll back the original insert, the one that fired the, er, trigger.
|
# ? May 22, 2015 11:54 |
|
chippy posted:OK, well, as I understand it, SQL Server only has AFTER and INSTEAD OF triggers. Surely an instead of would have the same problem? My apologies, I only know anything about triggers in MySQL which has BEFORE and AFTER and I reasoned that if there was such a thing as an AFTER INSERT trigger in another DBMS then there must be such a thing as BEFORE as well. It turns out this was a bad assumption. I don't have anything helpful to say.
|
# ? May 22, 2015 12:01 |
|
Hmm. it seems maybe a scheduled sproc would be better. This needs to be close to real time though, every 30 seconds would probably be ok. Does anyone know of any issues with doing it this way?
|
# ? May 22, 2015 13:06 |
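The scheduled-sproc idea is essentially the outbox pattern: the trigger only ever writes to a local queue table, so a remote failure can never roll back the original insert, and a separate job drains the queue to the other server. A minimal sketch in SQLite via Python with hypothetical table names; on SQL Server the drain step would be an Agent job (or Service Broker) calling a stored procedure against the linked server, not the trigger itself.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);
    -- Local "outbox": the trigger writes here, and a scheduled job pushes
    -- these rows to the remote server, deleting them on success. The
    -- remote insert happens outside the original transaction entirely.
    CREATE TABLE outbox (order_id INTEGER, amount REAL);
    CREATE TRIGGER orders_ai AFTER INSERT ON orders
    BEGIN
        INSERT INTO outbox VALUES (NEW.id, NEW.amount);
    END;
    INSERT INTO orders (amount) VALUES (9.99);
""")
queued = conn.execute("SELECT * FROM outbox").fetchall()
print(queued)  # [(1, 9.99)]
```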
|
Okay, this feels like it should be easy, but I just can't figure out a smart way to do it. Oracle 11g, names changed for clarity, and I can't make any changes to the schema or the data, I'm just trying to query it the best way I can. My SQL knowledge is limited to what I've picked up over the last year, so if I'm being an idiot, feel free to call me out on it. Say I have a table CONTRACT with a primary key of CON_ID that is also a foreign key of another table CON_ATTRIBUTE as CONATT_CONID. CON_ATTRIBUTE contains timesliced attributes for CONTRACT that need to be retained there when changes come in, so you can see "Ah, it had these values at the beginning, then on April 1st these values, then on June 5th it changed again..." To do this, CON_ATTRIBUTE has a column CONATT_DATE. This column is null for the first timeslice belonging to each CON_ID, then subsequent rows for the same CON_ID contain the date of the change. There's a column in CON_ATTRIBUTE we'll call CONATT_VALUE which is what I actually want to find out for each CON_ID. The problems are: 1) I want to join these two tables so I can get the most recent timeslice belonging to a contract, i.e. the timeslice with the most recent date in CONATT_DATE, counting the one with the null in CONATT_DATE as oldest. Since Oracle doesn't do TOP, I've been looking at a way of getting around that, but my attempt looks super clunky. code:
2) Both tables are pretty big. I'm looking at 100k+ rows in CONTRACT and CON_ATTRIBUTE is roughly twice that, so the prototype query runs pretty long. Now, there is an index on CON_ATTRIBUTE that combines the columns CONATT_CONID and CONATT_DATE. However, considering that roughly two thirds of all entries have null as CONATT_DATE, I'm not sure if it's doing any good for this at all. Is there a way to take advantage of it, or is this doomed to a full-table scan? (For what it's worth, I only need data for maybe 5 to 10 CON_IDs at a time, and without the join to CON_ATTRIBUTE the query is plenty fast.)
|
# ? May 22, 2015 18:15 |
|
Gatac posted:Okay, this feels like it should be easy, but I just can't figure out a smart way to do it. Oracle 11g, names changed for clarity, and I can't make any changes to the schema or the data, I'm just trying to query it the best way I can. My SQL knowledge is limited to what I've picked up over the last year, so if I'm being an idiot, feel free to call me out on it. Though I've never used Oracle, a cursory google search seems to indicate Oracle can use CTEs, so I would go that route: code:
|
# ? May 22, 2015 18:30 |
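Another route that avoids both TOP and a self-join is the analytic ROW_NUMBER(), which Oracle 11g supports. Sketched in SQLite via Python so it runs; one caveat is that Oracle sorts NULLs first in descending order, so there you would write ORDER BY CONATT_DATE DESC NULLS LAST to keep the null (oldest) slice from winning.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE con_attribute (conatt_conid INTEGER, conatt_date TEXT, conatt_value TEXT);
    INSERT INTO con_attribute VALUES
        (1, NULL,         'initial'),
        (1, '2015-04-01', 'changed'),
        (1, '2015-06-05', 'latest'),
        (2, NULL,         'only');
""")
# Rank each contract's timeslices newest-first; in SQLite the NULL (first)
# slice sorts as oldest under DESC, so rn = 1 is always the current value.
rows = conn.execute("""
    WITH ranked AS (
        SELECT conatt_conid, conatt_value,
               ROW_NUMBER() OVER (
                   PARTITION BY conatt_conid
                   ORDER BY conatt_date DESC
               ) AS rn
        FROM con_attribute
    )
    SELECT conatt_conid, conatt_value FROM ranked WHERE rn = 1
    ORDER BY conatt_conid
""").fetchall()
print(rows)  # [(1, 'latest'), (2, 'only')]
```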
|
Check out the aggregate form of FIRST/LAST with MAX as a cleaner way of doing it. There are examples about halfway down. http://docs.oracle.com/cloud/latest/db112/SQLRF/functions065.htm#SQLRF00641
|
# ? May 22, 2015 20:15 |
|
Couple of SQL Server questions: 1) Let's say we have these two tables: code:
Now, SQL Server refuses to allow a second foreign key, unless both are set to ON UPDATE/DELETE NO ACTION, claiming that it might cause cycles and/or multiple cascade paths. I've read the standard Google results for this problem, but I still can't figure out what sort of cascade paths SQL Server is worried about in my scenario. Every other problematic example I've seen has either a third table involved or a different relationship diagram. 2) You're designing a new schema, and you have a whole bunch of tables that you want to set up. There's a lot of things you want to have on 95% of your tables, such as: - a [<Table name>ID] primary key and index - [Creation Date] and [Last Updated] columns - triggers that autofill those two columns - a [Status] column (e.g. 0 = new, 1 = active, 2 = deleted, or whatever) - potentially multiple [OwnerID], [SourceID], etc. FK columns - either a View or (in 2016) Row Level Security that only shows a user his own rows - etc. Now, this is a very infrequent task so copy/paste works perfectly. However, is there any feature in (T-)SQL that you're supposed to use in such a scenario? 3) Are there any guidelines for when a nchar property becomes "small enough" that you can use it as its own ID column all over your database? For an extreme example, currency is likely represented by just three ASCII characters, which aren't going to change. If you're not going to store exchange rates or other information, is it going to be faster to read/write EUR everywhere than to use a numeric CurrencyID foreign key? Or is relational storage so optimised at this point that you'd still rather join on a tinyint? NihilCredo fucked around with this message at 22:05 on May 22, 2015 |
# ? May 22, 2015 21:52 |
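On question 3: nobody can settle the join-speed question without benchmarking your workload, but here is what the natural-key version looks like, so the trade is visible. Hypothetical tables in SQLite via Python; with a three-character ASCII code the key is only slightly wider than a tinyint, and child rows become readable without any join at all.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    -- Natural key: the ISO 4217 code itself is the primary key, so
    -- invoice rows carry 'EUR' directly instead of an opaque CurrencyID.
    CREATE TABLE currency (code TEXT PRIMARY KEY CHECK (length(code) = 3));
    CREATE TABLE invoice (
        id INTEGER PRIMARY KEY,
        amount REAL,
        currency_code TEXT REFERENCES currency(code)
    );
    INSERT INTO currency VALUES ('EUR'), ('USD');
    INSERT INTO invoice (amount, currency_code) VALUES (42.0, 'EUR');
""")
rows = conn.execute("SELECT amount, currency_code FROM invoice").fetchall()
print(rows)  # [(42.0, 'EUR')]
```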
|
I'll prefix this post with saying that my PHP/SQL skills are taught from books and googling answers to problems, which results in me probably doing things in a really bad and non-efficient manner. For the last 4 years I've had a little site up for goons to vote for Eurovision entries for a bit of fun, and it's always worked fine. Each year I then just duplicate the database out and change the references in the scripts to point to the right place, so I can keep the old iterations up. However, something's up this year and when the site starts getting a bit busy, it seems that SQL is eating up all the RAM on the server (there's a bunch of sites on the server, and it's usually at about 60% used of 1.75GB and sqld eats up about 20% on average), forcing the daemon to crash and eventually get rebooted by the system. Is there anything I can do easily (i.e. not installing a ton of software on to the box) to monitor what's happening? I'm running WHM/cPanel on a VPS, and mysql is on version 5.6.23. I was on an older version of mysql previously, so I think there's something in my scripts that the newer version is less tolerant of, but I'm not sure what at the moment.
|
# ? May 23, 2015 13:04 |
|
If I need to pull some metrics (just 10 or so COUNT/AVG queries essentially) for the last X amount of days from a SQL database (each day has its own metrics), how do I do that so I'm not doing a loop in PHP that executes 10*X select queries? I can't really use an IN statement with an array to group everything I don't think, because I'm selecting each day by using BETWEEN 2 timestamps. Right now I am using a loop that decrements the time by one day, and executes a bunch of code:
Tomahawk fucked around with this message at 13:11 on May 28, 2015 |
# ? May 28, 2015 13:08 |
|
GROUP BY TRUNC(timestamp) ?
|
# ? May 28, 2015 14:25 |
|
Tomahawk posted:If I need to pull some metrics (just 10 or so COUNT/AVG queries essentially) for the last X amount of days from a SQL database (each day has it's own metrics), how do I do that so I'm not doing a loop in PHP that executes 10*X select queries? I can't really use an IN statement with an array to group everything I don't think, because I'm selecting each day by using BETWEEN 2 timestamps. group by extract(day from ts)
|
# ? May 28, 2015 17:03 |
|
Malcolm XML posted:group by extract(day from ts) Thank you. What if a "day" is defined from 2:00:00am yesterday to 1:59:59am today?
|
# ? May 28, 2015 17:34 |
|
Tomahawk posted:Thank you. What if a "day" is defined from 2:00:00am yesterday to 1:59:59am today? I guess "group by extract( day from dateadd(hour, -2, ts))" ?
|
# ? May 28, 2015 18:27 |
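Putting NihilCredo's suggestion into runnable form, in SQLite via Python (SQLite spells the shift date(ts, '-2 hours') rather than dateadd): subtract two hours from every timestamp, then truncate to the date, and each group covers 02:00:00 through 01:59:59.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE metrics (ts TEXT, value REAL);
    INSERT INTO metrics VALUES
        ('2015-05-28 01:59:59', 10),  -- before 2am: belongs to the 27th
        ('2015-05-28 02:00:00', 20),
        ('2015-05-28 23:00:00', 30);
""")
# Shift every timestamp back two hours, then truncate to the date: each
# "business day" runs from 02:00:00 to 01:59:59 the next morning.
rows = conn.execute("""
    SELECT date(ts, '-2 hours') AS day, COUNT(*), AVG(value)
    FROM metrics
    GROUP BY day
    ORDER BY day
""").fetchall()
print(rows)  # [('2015-05-27', 1, 10.0), ('2015-05-28', 2, 25.0)]
```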
|
|
NihilCredo posted:I guess "group by extract( day from dateadd(hour, -2, ts))" ? EDIT: nvm, didn't think through my approach Gatac fucked around with this message at 18:38 on May 28, 2015 |
# ? May 28, 2015 18:35 |