|
Extended Attributes are great if you ever want a data dictionary set up for your database, so that developers aren't constantly asking you "What should I put in CreateDate" or somesuch. If you've got the time, setting them up and publishing something for devs to look at will save you even more time in the long run.
|
# ? Nov 25, 2014 20:45 |
|
|
# ? Apr 25, 2024 17:16 |
|
I mean, wow; that extended attribute syntax is ugly.
https://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_4009.htm#SQLRF01109
http://www.postgresql.org/docs/9.1/static/sql-comment.html

COMMENT ON COLUMN my_table.my_column IS 'Employee ID number';

vvv No. Put a note in the commit message that says, "Read the drat manual". If you're nice, make it sound more like, "See {blah} to retrieve built-in column docs". Don't perpetuate the lazy.

PhantomOfTheCopier fucked around with this message at 06:35 on Nov 26, 2014 |
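A sketch of both flavors being linked above (my_table/my_column are placeholder names, and the MySQL MODIFY form assumes the column is an INT, since MySQL makes you restate the whole column definition):

```sql
-- PostgreSQL / Oracle style, per the links above:
COMMENT ON COLUMN my_table.my_column IS 'Employee ID number';

-- MySQL instead keeps the comment inside the column definition:
ALTER TABLE my_table
    MODIFY my_column INT COMMENT 'Employee ID number';

-- In MySQL the docs come back out of information_schema;
-- PostgreSQL exposes them via col_description() instead:
SELECT column_name, column_comment
FROM information_schema.columns
WHERE table_name = 'my_table';
```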
# ? Nov 26, 2014 00:08 |
|
Thanks guys; I knew about extended attributes but that's like putting it in a closet under a stairwell behind a door with a sign that says 'beware the leopard'. Was hoping for something more convenient. I am putting it in the code, but the only person who reads the code is me. Guess I'll put it in a ****NOTE**** on the git-commit message.
|
# ? Nov 26, 2014 01:58 |
Why am I not getting a permission denied error here? I already did a FLUSH PRIVILEGES from the root mysql account before trying this:code:
|
|
# ? Nov 26, 2014 22:06 |
|
fletcher posted:Why am I not getting a permission denied error here? I already did a FLUSH PRIVILEGES from the root mysql account before trying this: FILE permissions are for files on the server locally -- remote file access is based on rights of the server proxy user to the remote machine.
|
# ? Nov 26, 2014 23:26 |
Anaxandrides posted:FILE permissions are for files on the server locally -- remote file access is based on rights of the server proxy user to the remote machine. This was with mysqld running on my localhost and /tmp/test_data.csv is also on my local machine, shouldn't the FILE permission cover that?
|
|
# ? Nov 26, 2014 23:57 |
|
Why did you use double backslashes, then? I can't say on that one -- is there a bug on MySQL that causes it to see this as a remote server?
|
# ? Nov 28, 2014 23:04 |
Anaxandrides posted:Why did you use double backslashes, then? I can't say on that one -- is there a bug on MySQL that causes it to see this as a remote server? Hmmm, can't remember why I had the double backslashes; it seems to work with just singles. Still can't get it to throw a permissions error, though.
|
|
# ? Dec 1, 2014 23:15 |
|
fletcher posted:This was with mysqld running on my localhost and /tmp/test_data.csv is also on my local machine, shouldn't the FILE permission cover that? It is covered in the docs: quote:Using LOCAL is a bit slower than letting the server access the files directly, because the contents of the file must be sent over the connection by the client to the server. On the other hand, you do not need the FILE privilege to load local files. And it has its own page on security issues.
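Concretely, the two forms of the statement behave differently -- a sketch, assuming a LOAD DATA statement along these lines (the table name and format clauses are guesses):

```sql
-- With LOCAL, the *client* reads /tmp/test_data.csv and streams it over
-- the connection, so the FILE privilege is never checked:
LOAD DATA LOCAL INFILE '/tmp/test_data.csv'
INTO TABLE test_data
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';

-- Without LOCAL, the *server* process opens the file itself -- this is
-- the form that requires FILE and would throw the expected error:
LOAD DATA INFILE '/tmp/test_data.csv'
INTO TABLE test_data
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
```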
|
# ? Dec 2, 2014 04:40 |
|
Hey thread, a site I help out with is having some MySQL issues I thought I could run by here. We've got a table, slightly under 100k rows, where some queries work, and some queries give us the following error:code:
|
# ? Dec 4, 2014 06:59 |
|
Spraynard Kruger posted:Hey thread, a site I help out with is having some MySQL issues I thought I could run by here. We've got a table, slightly under 100k rows, where some queries work, and some queries give us the following error: Are you sure your connections are being closed properly all the time? That is, are your connections being closed and reopened after each command is issued? The problem with PHPMyAdmin looks to be a problem with the tool itself, per Google.
|
# ? Dec 4, 2014 18:48 |
|
How does MariaDB manage variables? I'm seeing this unexpected case where a key is not being used when specifying a local variable. SQL code:
MrMoo fucked around with this message at 20:03 on Dec 4, 2014 |
# ? Dec 4, 2014 18:55 |
|
Ugh, apparently character set hell. An outstanding bug; the workaround is to force the same character set everywhere. http://stackoverflow.com/q/16752922/175849 The TokuDB storage engine is nice though, 100GB source data down to 10GB table data and only 2GB compressed. MrMoo fucked around with this message at 23:18 on Dec 5, 2014 |
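A sketch of the force-the-same-character-set workaround (table and column names invented; the exact charset depends on the schema):

```sql
-- A user variable carries its own charset/collation; if it doesn't match
-- the column's, MariaDB can't use the index for the comparison.
SET @needle = 'some key value';

-- Coercing the variable to the column's charset restores index use:
SELECT *
FROM my_table
WHERE my_key = CONVERT(@needle USING utf8);
```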
# ? Dec 5, 2014 23:14 |
|
In my database, I needed the ability to get any records that are associated by distinct dates. So first I made an SQL call to get me the distinct dates (something like "DISTINCT (DATE(StartTime)) as yyyymmdd"). Sample output is:
2014-02-02
2014-01-22
2012-12-31
2012-12-30
2012-11-09
2012-11-08
1999-10-05
(Each date is clickable and I ajax all the records associated with each date.) It works fine, but now I was told that the records can be plus or minus a day. For example, in the above output, 2012-12-31 and 2012-12-30 would not be separated. It would actually be one date, 2012-12-30, and that would give me all the records associated with 12-30 thru 12-31. My question is, can SQL do something like this? Any help is appreciated.
|
# ? Dec 10, 2014 01:14 |
|
stoops posted:In my database, i needed the ability to get any records that are associated by distinct dates. Sure, easily. I assume this is after the AJAX call to retrieve records? If so, add a WHERE YourDate >= DATEADD(dd, -1, @YourInputDateTimeVariable) AND YourDate <= DATEADD(dd, 1, @YourInputTimeVariable)
|
# ? Dec 10, 2014 01:42 |
|
Yeah, DATEADD is freaking awesome if your time-stamps are parsable/can be forced into UTC.
|
# ? Dec 10, 2014 01:49 |
|
Were I doing it all in SQL, I'd do something like code:
|
# ? Dec 10, 2014 02:04 |
|
Anaxandrides posted:Sure, easily. I assume this is after the AJAX call to retrieve records? If so, add a WHERE YourDate >= DATEADD(dd, -1, @YourInputDateTimeVariable) AND YourDate <= DATEADD(dd, 1, @YourInputTimeVariable) I'm sorry I didn't make myself clear. Can SQL get me the DATES themselves first? Basically, I want to get a listing of just the distinct, plus-or-minus-a-day dates. Then from that listing of dates, I'd run your ajax call. So instead of this:
2014-02-02
2014-01-22
2012-12-31
2012-12-30
2012-11-09
2012-11-08
1999-10-05
I would get this:
2014-02-02
2014-01-22
2012-12-30
2012-11-08
1999-10-05
|
# ? Dec 10, 2014 16:54 |
|
stoops posted:I'm sorry I didn't make myself clear. Can sql get me the DATES themselves first? What would happen when there are three (or four, or five) consecutive days in the data?
|
# ? Dec 10, 2014 17:23 |
|
stoops posted:I'm sorry I didn't make myself clear. Can sql get me the DATES themselves first? It's a good case for a recursive CTE. I have no idea how you'll know which dates to include in your AJAX, but that's a different issue. code:
Anaxandrides fucked around with this message at 17:45 on Dec 10, 2014 |
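Since stoops says runs never exceed two consecutive days, a simpler non-recursive sketch also works: keep a date only when the previous calendar day is absent (SQL Server dialect to match the DATEADD suggestion above; table and column names invented):

```sql
WITH DistinctDates AS (
    SELECT DISTINCT CAST(StartTime AS date) AS d
    FROM YourRecords
)
SELECT cur.d
FROM DistinctDates cur
WHERE NOT EXISTS (
    -- drops 2012-12-31 because 2012-12-30 exists, keeps 2012-12-30, etc.
    SELECT 1
    FROM DistinctDates prev
    WHERE prev.d = DATEADD(dd, -1, cur.d)
);
```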
# ? Dec 10, 2014 17:37 |
|
Hammerite posted:What would happen when there are three (or four, or five) consecutive days in the data? In our case, it would not happen. Anaxandrides posted:It's a good case for a recursive CTE. Thanks for the help.
|
# ? Dec 10, 2014 19:02 |
|
stoops posted:In our case, it would not happen. Unless it's checked for and restricted in another place, I would assume that it's going to happen for whatever reason in your data. The reason so many reports break is because impossible things will always happen. Even if it's just a simple check of, "ok, combine at most 3 days before going to a break" or "it's ok to have 2 weeks in a row combined", at least then you have a definition and expectation when someone asks about what they see, rather than "that's an impossible scenario".
|
# ? Dec 11, 2014 04:47 |
|
Aredna posted:Unless it's checked for and restricted in another place, I would assume that it's going to happen for whatever reason in your data. The reason so many reports break is because impossible things will always happen. This is the truth. If it can happen, it will happen, and I highly doubt there's an actual constraint on your table to prevent it from happening.
|
# ? Dec 11, 2014 09:31 |
|
Running a mySQL database (well, like 4 to get around some IT policy problems) we run a dump every night at midnight and keep records for a while, which is cool. However, while it's backing up, it's still accepting queries, apparently, it's just returning 0 for results. So when we have stuff where we do a IF(SELECT WHATEVER) DOESN'T EXIST INSERT THIS THING the first query will return 0 results, so we insert some new rows, leading to (as we've discovered in the past week) rows that should be unique having thousands of duplicates from multiple clients getting 0 results on checking existence. What uhhhhhh... how do I back up this database without loving everything up? (There's probably something real clever to google that would solve this real quick but I can't seem to figure out what)
|
# ? Dec 12, 2014 17:37 |
|
There is clearly something special about the backup process; find out what it is and how it is affecting the SELECT query. This is not normal behavior, but you may have a bug in your version of MySQL too.
|
# ? Dec 12, 2014 17:42 |
|
Are you using InnoDB or not? Either way, odds are your rows are locked. This doesn't mean that the page is locked or that you can't insert records, just that your select may hit no rows. It depends what kind of concurrency controls you have running -- have you tried the single-transaction option for mySQLdump, or READ UNCOMMITTED, to see if those help?
|
# ? Dec 12, 2014 18:36 |
|
Sockser posted:Running a mySQL database (well, like 4 to get around some IT policy problems) You said "rows that should be unique" and this might be an obvious thing, but I feel like someone should check it anyway - I assume you have investigated putting unique indexes on and determined that unique indexes can't be used for whatever constraint it is you're trying to enforce?
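If a unique index does fit, the check-then-insert race disappears entirely by letting the constraint arbitrate -- a sketch with invented table and column names:

```sql
-- Declare the uniqueness you actually want:
ALTER TABLE readings
    ADD UNIQUE KEY uq_device_time (device_id, recorded_at);

-- Then replace "SELECT to check, INSERT if missing" with one atomic
-- statement -- concurrent clients can no longer both insert a row:
INSERT INTO readings (device_id, recorded_at, value)
VALUES (42, '2014-12-12 00:00:00', 3.14)
ON DUPLICATE KEY UPDATE value = VALUES(value);
```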
|
# ? Dec 12, 2014 23:22 |
|
When does "Using filesort" become a serious problem with MySQL? I've got a query that I want to be able to sort by an IF() statement in the SELECT parameters, and when I utilize that column in ORDER BY I get Using temporary; Using filesort. I'm not a huge DBA guy, but I know enough to know that "Using filesort" is basically like dirty language in church. Right now it's basically a non-issue AFAIK since there's only 2 rows in the table where the file sort occurs, but I'd rather fix it now if possible instead of being up poo poo creek when it becomes a major issue.
|
# ? Dec 29, 2014 21:17 |
|
Filesort is just the name of the thing that goes "welp, I need to sort the entire table." It, alone, is not a red flag. Same thing with "using temporary". Temporary tables are kept in working memory until they grow to such a size that they need to be written to disk. Even together, they are not inherently evil, they're just a signal that you're doing something that can't be indexed. In this case, you're generating a column value on the fly and then sorting on it. Yes, that's going to take additional work. No, it won't actually be a problem until there's a few megs of data. If every other part of the query is hitting an appropriate index, and it's only having to think hard when it comes to sorting, then you're probably good.
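The pattern being described, sketched with invented names -- the IF() value only exists at query time, so there is no index MySQL could sort on:

```sql
EXPLAIN
SELECT id,
       IF(expires_at < NOW(), 0, 1) AS is_active
FROM items
ORDER BY is_active;
-- the Extra column typically reports: Using temporary; Using filesort
```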
|
# ? Dec 29, 2014 23:48 |
I don't know if this is the thread, but I have a db that needs to be made. It is a catalog of courses. Each course will have schedules. So something like this:

Catalog
-------
catalog_id pk
year
start_date
end_date

Course
------
elective_id pk
number
name
department

Catalog_Course
--------------
catalog_id pk fk
elective_id pk fk
size

Catalog_Course_Schedule
-----------------------
catalog_id pk fk
elective_id pk fk
start_date
end_date
active

Then there needs to be a way to make blocks of schedules that can be attached to a course in a catalog. So the blocks would be something like this:

Schedule_Block
--------------
schedule_block_id pk
schedule_block_type

Schedule
--------
start_date
end_date

So you have schedules for a block that can be given to several courses, but the schedules for the course will have attributes of their own, such as whether they are active. Any ideas about how to do this?
|
|
# ? Jan 12, 2015 02:13 |
|
Every time I see someone post a pretty small and simple database that has multiple compound keys, I get a little bit worried. Can each elective only have one slot each semester? That part alone seems pretty iffy based on the college schedules I've seen. Be wary of over-normalization. Why does Catalog_Course exist rather than just rolling the Size column into Catalog_Course_Schedule? On that note, why is Schedule not FK'd in that table and instead data is duplicated? This is a pretty simple database structure to mock up. First off, is there some reason for the structures you posted below?
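The suggested simplification, sketched as DDL (MySQL-flavored; this folds Catalog_Course's size column into the schedule table and keeps the original names):

```sql
CREATE TABLE Catalog_Course_Schedule (
    catalog_id  INT NOT NULL,
    elective_id INT NOT NULL,
    size        INT,
    start_date  DATE,
    end_date    DATE,
    active      BOOLEAN NOT NULL DEFAULT TRUE,
    -- start_date in the key allows multiple slots per elective per catalog
    PRIMARY KEY (catalog_id, elective_id, start_date),
    FOREIGN KEY (catalog_id)  REFERENCES Catalog (catalog_id),
    FOREIGN KEY (elective_id) REFERENCES Course (elective_id)
);
```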
|
# ? Jan 12, 2015 17:58 |
|
Hey guys, I'm looking for a recommendation for reading on Apache Cassandra. Any of you goons come across anything good? nvm. I ended up picking up Practical Cassandra. duck hunt fucked around with this message at 00:20 on Feb 9, 2015 |
# ? Jan 16, 2015 07:12 |
|
It's possible that this was covered in the 192 pages prior to this with a similar problem.

create or replace procedure p_MakeStars as
    v_num pls_integer := 0;
    v_sal pls_integer;
    v_max pls_integer;
Begin
    select count(*) into v_max from bk_emp;
    dbms_output.put_line (v_num || ' ' || v_max); --This is just to see what's happening.
    for i in 1..v_max loop
        select sal into v_sal from bk_emp where rownum = i;
        v_num := trunc((v_sal/1000), 0);
        dbms_output.put_line (v_num);
end;

Why does this not loop to v_max but just stops at i = 1? My theory is that it has to do with the way rownum is fed in from bk_emp. Also, how can I output 'i' so I know where the loop inevitably breaks?

Output is currently:
0 14
0

I wish it were:
0 14
0 1 1 2 1 2 0 5 3

based on the salaries in that table after having been truncated. Halp.
|
# ? Feb 11, 2015 23:05 |
|
Add an end loop; and see what happens
|
# ? Feb 11, 2015 23:34 |
|
MrKatharsis posted:Add an end loop; and see what happens It compiles, but when executed I get a No Data Found error on line 11, which is this one:

select sal into v_sal from bk_emp where rownum = i;

edit: and the dbms_output is still
0 14
0

SauceNinja fucked around with this message at 16:21 on Feb 12, 2015 |
# ? Feb 12, 2015 15:35 |
|
rownum is a special pseudo column that is only populated as the rows are returned from the database. So your loop says "give me the first row returned from the database" and that works. Then your loop says "give me the second row returned from the database", and the database returns the first row and discards it because it's not the second row, then returns the new first row and discards it because it's not the second row, then returns the new first row and discards it because it's not the second row...until it has returned and discarded 14 first rows and it can't find any data.
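Concretely, the usual fix is a cursor FOR loop, which walks the result set once instead of re-querying by rownum on every iteration -- a minimal sketch of the same procedure:

```sql
create or replace procedure p_MakeStars as
begin
    -- one pass over bk_emp; no rownum arithmetic needed
    for r in (select sal from bk_emp) loop
        dbms_output.put_line(trunc(r.sal / 1000, 0));
    end loop;
end;
```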
|
# ? Feb 12, 2015 18:09 |
|
Why are you reproducing LIMIT? Shouldn't this be a cursor for loop, or a query into a %ROWTYPE followed by a loop over that (which is the same thing?)? vvv but right you are; it seems to be an SQL:2008ism (OFFSET... FETCH...) available in 12c. p.s., I don't use Oracle so I wouldn't know. PhantomOfTheCopier fucked around with this message at 04:04 on Feb 13, 2015 |
# ? Feb 13, 2015 00:11 |
|
PhantomOfTheCopier posted:Why are you reproducing LIMIT? Shouldn't this be a cursor for loop, or a query into a %ROWTYPE followed by a loop over that (which is the same thing?)? I assume it's oracle, and oracle doesn't have limit.
|
# ? Feb 13, 2015 01:49 |
|
I want to add an XML column to several of our tables so that in the future, when one dinky little customer asks to add "Size of Left Toe" to his customer records, we don't need to even consider touching the data structure. To make things straightforward for the kids who write reports (or to make them possible, since sometimes they use fairly old software), the plan is to add a view for each of those tables where the XML data is flattened out into regular columns, so they can just point to the [Customers_Reporting] view instead of the [Customers] table and they don't need to bother dealing with the XML at all.

My (probably stupid) question is: when I ALTER VIEW, does the associated SELECT query actually run and check the database contents, or is it just formally validated? In other words, as the size of the [Customers] table grows, does running "ALTER VIEW Customers_Reporting AS SELECT <all the old columns>, RandomXMLShit.value('/SizeOfLeftToe[1]', 'decimal(5, 2)') AS [SizeOfLeftToe] FROM Customers" become any slower? If the answer is "no", I may as well run the ALTER VIEW before any reporting job, to ensure the view is always up-to-date with whatever XML tags are being used right there and then. If it is "yes", I'll have to be a little more thoughtful about it. (SQL Server, if it matters.)

e: SizeOfLeftToe was probably a bad example because that could be a simple one-to-many [CustomerID]-[PropertyID]-[Value] table. The reason I want to go with an XML column is because it offers the flexibility to handle more complex properties as well.

NihilCredo fucked around with this message at 13:20 on Feb 14, 2015 |
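For what it's worth, ALTER VIEW in SQL Server only stores and validates the view definition against current metadata; the SELECT itself runs when the view is queried, not when it is (re)created. A sketch of the flattening view, using the names from the post (the parenthesized '(...)[1]' path is the singleton form the xml value() method generally wants):

```sql
ALTER VIEW Customers_Reporting AS
SELECT c.*,   -- all the old columns
       c.RandomXMLShit.value('(/SizeOfLeftToe)[1]', 'decimal(5, 2)')
           AS SizeOfLeftToe
FROM Customers c;
```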
# ? Feb 14, 2015 13:03 |
|
NihilCredo posted:I want to add a XML column to several of our tables so that in the future, when one dinky little customer asks to add "Size of Left Toe" to his customer records, we don't need to even consider touching the data structure. Please don't break first normal form.
|
# ? Feb 14, 2015 14:42 |