|
NihilCredo posted:Well the primary concern is to make sure that INSERT/UPDATE/DELETE do not work, so running a test SELECT query wouldn't help with that. (I would have to try inserting/updating/deleting a dummy entry in each table and check the error I get - probably not a great idea!) On the other hand, you say you want to audit your audit user. Why should that be done by the audit user? Why should that user have any access to system tables just so they can be told what they are allowed to do? If, on the other hand, you're saying that you need external oversight of the audit user's privileges, then you need an auditaudit user with those system table privileges and nothing else. Moreover, the auditaudit user has to understand that privileges can be fine-grained, so you'll need to duplicate the object privilege structure from SQL. Have fun with that. But how can you make sure that the auditaudit user only has read access to those system tables?
|
# ? Oct 22, 2014 00:38 |
|
|
PhantomOfTheCopier posted:Now you're approaching infinite regress. You say you want to add an audit user with fairly restricted access, but you should be aware, as has been pointed out, that SQL privileges can be very, very fine-grained. With that in mind, the audit user should expect certain things to fail, and should fail gracefully when they happen. From that point of view, you need to be handling SQL errors and warnings appropriately. I'm not sure I quite follow, or maybe I haven't explained myself well. I'll try to show how I'm thinking the process should go: code:
(Here's waiting to hear why the idea is dumb in some majestic way)
|
# ? Oct 23, 2014 08:21 |
|
It's not dumb per se, but you'll also need EXECUTE on something in order to run sys.fn_my_permissions. You can GRANT at the database, schema, or function level. GRANTing EXECUTE on all of sys opens up a whole load of possibilities, so you'd probably want to grant EXECUTE on that one function only. You'll then also need to confirm that permission in your code. There are also small loopholes: if AuditUser owns any tables, they'll be able to run INSERTs and UPDATEs on those tables all day because they're the owner, so you might want to check ownership as well. There's also the issue that stored procedures and functions run as their creator rather than their caller, which in the case of the system SPs is (IIRC) the database owner. TBH this is one of the rare cases where I'd consider using DENY to ensure that the audit user is explicitly DENYed INSERT, UPDATE, and DELETE on your database. You could then check that those DENYs are in place and that you only have access to the one function, and that might be enough.
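For the "audit user must fail gracefully on DML" behaviour discussed above, sqlite3's authorizer hook makes a compact prototype. This is only an analogue of MSSQL's DENY, not the real thing, and the table name is invented:

```python
import sqlite3

# Deny INSERT/UPDATE/DELETE at the connection level, analogous in spirit
# to DENYing DML to an audit login (names here are illustrative).
def read_only_authorizer(action, arg1, arg2, db_name, trigger):
    if action in (sqlite3.SQLITE_INSERT, sqlite3.SQLITE_UPDATE, sqlite3.SQLITE_DELETE):
        return sqlite3.SQLITE_DENY
    return sqlite3.SQLITE_OK

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.set_authorizer(read_only_authorizer)

# SELECT still works; DML raises DatabaseError, which the audit code
# should catch and handle rather than crash on.
rows = conn.execute("SELECT balance FROM accounts").fetchall()
try:
    conn.execute("DELETE FROM accounts")
    dml_blocked = False
except sqlite3.DatabaseError:
    dml_blocked = True
```

The point is the error-handling shape: the restricted user's code expects the denial and treats it as a normal outcome.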
|
# ? Oct 23, 2014 17:06 |
|
My team set up some testing tools that dump a bunch of data into a MySQL database that we set up on one of our team's fileservers. Cool. Other teams started using the tools, and there's a good amount of slowdown when they're running from other campuses in like, the UK or India or the West Coast or whatever, not because of geographic distance, but because of our company's security stuff and having to route traffic through a VPN. The best solution to this, or so we thought, was to set up database replication. Setting up MySQL as master-master across five different machines was basically a really fuckin' hard task to accomplish and we never really got it working for some reason I don't remember (this is maybe... a year ago?) So then we got SymmetricDS set up, which was cool for a while, but in the case that it ever breaks for any goddamned reason, the solution is for us to build the source and that totally sucks. Everything else we're seeing that accomplishes replication like we want it to costs in the tens of thousands of dollars, which our team isn't going to get approved for budget. Is there a half-decent way of solving our issue?
|
# ? Oct 23, 2014 18:03 |
|
Sockser posted:The best solution to this, or so we thought, was to set up database replication. Setting up MySQL as master-master across five different machines was basically a really fuckin' hard task to accomplish and we never really got it working for some reason I don't remember (this is maybe... a year ago?) Multi-master is not particularly difficult to set up. You guys should give it another shot. Try setting it up in isolation in all the environments first - a proof of concept using no actual data but living on the actual machines that would be part of the loop. The real pain in setting up multi-master for an existing database (or adding another node to an existing multi-master loop) is cloning the backups all over the place to get the new nodes in sync with the original.
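For reference, the usual starting point for each node in a MySQL multi-master ring is a my.cnf fragment like the following. The values are placeholders; each node gets a distinct server-id and auto-increment-offset so AUTO_INCREMENT values never collide between masters:

```ini
# node 1 of 5 (node 2 would use server-id = 2, auto-increment-offset = 2, etc.)
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
log-slave-updates        = 1    # needed so changes relay around a ring
auto-increment-increment = 5    # total number of nodes in the loop
auto-increment-offset    = 1    # this node's position, 1..5
```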
|
# ? Oct 23, 2014 19:56 |
|
McGlockenshire posted:Multi-master is not particularly difficult to set up. You guys should give it another shot. By which you mean the standard MySQL replication?
|
# ? Oct 23, 2014 19:59 |
|
Sockser posted:By which you mean the standard MySQL replication? Correct, the free-as-long-as-your-time-has-no-cost option.
|
# ? Oct 23, 2014 20:08 |
|
I'm trying to generate a report showing a ratio of invoiceable hours to non-invoiceable hours per employee. code:
|
# ? Oct 24, 2014 15:32 |
|
wolffenstein posted:I'm trying to generate a report showing a ratio of invoiceable hours to non-invoiceable hours per employee. You could try something like this: code:
|
# ? Oct 24, 2014 15:44 |
|
Nth Doctor posted:You could try something like this: code:
|
# ? Oct 24, 2014 16:06 |
|
Does anyone know why code:
|
# ? Oct 24, 2014 22:47 |
|
You have to give it a name. SQL code:
|
# ? Oct 24, 2014 22:54 |
|
DBAs, what is the best way to approach this problem? salisbury shake posted:I've got a web service that pulls information from a third party web service, but both require a login. salisbury shake posted:It's [the 3rd party system] an older system with a web front end that uses forms to login and cookies to manage sessions that die after an hour.
|
# ? Oct 26, 2014 00:08 |
|
e: sorry, thought this was a generic "dumb questions thread", not SQL specific
Paul MaudDib fucked around with this message at 17:52 on Oct 27, 2014 |
# ? Oct 27, 2014 17:42 |
|
salisbury shake posted:DBAs, what is the best way to approach this problem? bcrypt is a great choice for storing passwords locally. For the third party site, since you need that password accessible somewhere, there's not much you can do to make it "secure". You could theoretically 3DES the password using a randomly generated key/vector that you store somewhere (app.config, SQL, whatever), but it's ultimately going to be unsecured against anyone with access to what you have access to. If you're particularly insane, you could spin up a PostgreSQL (or other open-source) server on a VM that only you and your sysadmins have access to, create a DB there, and store the password. That seems like an awful lot of work for a password that's not yours, though, and at some point someone else in your development group is going to need the keys to the kingdom, so to speak. Our solution to third party passwords is to put 'em in a hidden branch of source control and figure that anyone with access has signed the appropriate paperwork to have access to them.
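The store-then-verify pattern for the locally stored passwords is worth sketching. bcrypt itself needs a third-party package, so this uses the stdlib's scrypt instead; the cost parameters are reasonable illustrative defaults, not a recommendation:

```python
import hashlib, hmac, os

# Store salt + hash, never the password itself. scrypt here stands in
# for bcrypt; both are deliberately slow, salted hashes.
def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=2**26)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, maxmem=2**26)
    # constant-time comparison to avoid timing leaks
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
ok = verify_password("hunter2", salt, digest)
bad = verify_password("hunter3", salt, digest)
```

Note this only works for passwords you merely need to check; the third-party password discussed above has to stay recoverable, which is why hashing can't help there.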
|
# ? Oct 27, 2014 19:45 |
|
I've noticed a lot of optimizer issues with queries that involve the PL/SQL engine via (SELECT column_value FROM TABLE(<PL/SQL nested table>)). This can be quite annoying because PL/SQL table objects are a standard tool in our system (and I imagine a lot of Oracle systems) for allowing dynamic in-lists. Here is a simple and real example that came up recently:code:
code:
I see very few resources dealing with this sort of issue online (performance problems caused by integration between SQL and PL/SQL engines) so if anyone has any suggestions I'm game. The only major issue of this type that gets talked about is context switching in loops that involve SQL code. FieryBalrog fucked around with this message at 17:36 on Oct 30, 2014 |
# ? Oct 30, 2014 17:29 |
|
Does anyone have any recommended reading regarding database design best practices? I'm beginning the foundation of a new project I'm working on and I have complete freedom in how I design the database for this application. I've designed databases before but generally just built things with lots of surrogate keys, no foreign key constraints, etc. I want to build this database with best practices in mind. For example, I don't know whether to use natural/business keys vs surrogate keys, simple keys vs compound/composite keys (identifying vs non-identifying relationships), etc. kiwid fucked around with this message at 20:30 on Oct 30, 2014 |
# ? Oct 30, 2014 19:31 |
|
FieryBalrog posted:I've noticed a lot of optimizer issues with queries that involve the PL/SQL engine with (select column_value from table(<PL/SQL nested table>). This can be quite annoying because PL/SQL table objects are a standard tool in our system (and I imagine a lot of Oracle systems) to allow dynamic in-lists. Of course, Here is a simple and real example that came up recently: From my perspective as an MSSQL guy, I can see how this query would have performance issues. The NOT EXISTS results are being spooled rather than an Anti Semi Join. If those tables are large (more than 1,000,000 rows), as I suspect they are, it'd be bad. Why are those NOT EXISTS statements not just LEFT JOINs?
|
# ? Oct 30, 2014 21:54 |
|
FieryBalrog posted:I see very few resources dealing with this sort of issue online (performance problems caused by integration between SQL and PL/SQL engines) so if anyone has any suggestions I'm game. The only major issue of this type that gets talked about is context switching in loops that involve SQL code. I haven't tuned a PL/SQL nested table performance issue, but dealing with VARRAYs to drive a lookup list for reporting in a similar way I was surprised to learn that Oracle uses a default cost for the TABLE() cardinality (block size?), which generates some pretty cruddy plans. To get around this, we have used a mix of the undocumented CARDINALITY hint (with order-of-magnitude thresholds - 10, 100, 1000, etc - so that we don't flood the shared pool), and OPTIMIZER_DYNAMIC_SAMPLING (set to level 2, since it wasn't enabled at the session level). We didn't realize/care about CARDINALITY being undocumented in the beginning, otherwise we probably would not have used it. A quick Google search also has a post by Christian Antognini mentioning index-by tables instead of nested tables, but I didn't read the whole thread to see if they're talking about pure PL/SQL, or a permanent database object. For more reading (although maybe not specific coverage), I'd Google for posts by the usual: Christian Antognini, Jonathan Lewis, Tom Kyte and Maria Colgan (maybe not her so much since she became the In-Memory lady).
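For the archives, the workaround described above looks roughly like this in Oracle SQL. Table and bind names are invented, the bucketed value (10/100/1000) would be picked by the application from the actual collection size, and CARDINALITY is undocumented, so treat it accordingly:

```sql
-- Tell the optimizer to expect ~100 rows from the collection instead of
-- its blocksize-based default estimate (8168 rows for 8K blocks).
SELECT /*+ CARDINALITY(t 100) */ o.*
FROM   orders o
WHERE  o.customer_id IN (SELECT t.column_value
                         FROM   TABLE(:id_list) t);
```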
|
# ? Oct 30, 2014 22:27 |
|
I have an application where there are several admins managing the administrative tables. I want to start tracking what admins made what changes at what time. What is the best way to do this? I take it I could store everything in one table? user_id, current_timestamp, etc. But how do I specify what change is taking place? Anyone have any examples? Another way I was thinking would be to use soft deletes on all the tables I want to track, but that would mean putting these columns on every table (deleted_at, deleted_by, updated_at, updated_by) and I wouldn't really be able to tell what the changes were. kiwid fucked around with this message at 19:00 on Nov 5, 2014 |
# ? Nov 5, 2014 18:57 |
|
kiwid posted:I have an application where there are several admins managing the administrative tables. If you are on MSSQL or any other DB with trigger support, I'd recommend looking into DML (AFTER INSERT/UPDATE/DELETE) triggers, though these should be used sparingly and only when the table doesn't get too many triggering events, otherwise performance can degrade pretty quickly. That would make it fairly simple to log change data to a table.
|
# ? Nov 5, 2014 22:36 |
|
kiwid posted:I have an application where there are several admins managing the administrative tables. Terrible answer? Triggers. They're very brute-force, but they'll do the trick. Failing that, if on PostgreSQL, this is a useful guide to auditing. If on MSSQL, start here.
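As a toy illustration of the trigger approach (sqlite3 syntax, invented table names; a real MSSQL or Postgres version would also capture the session user, which sqlite doesn't track):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE settings (id INTEGER PRIMARY KEY, value TEXT);
CREATE TABLE settings_audit (
    setting_id INTEGER,
    old_value  TEXT,
    new_value  TEXT,
    changed_at TEXT DEFAULT CURRENT_TIMESTAMP
);

-- One audit row per UPDATE; INSERT and DELETE triggers would follow
-- the same pattern with NEW-only or OLD-only values.
CREATE TRIGGER settings_update_audit
AFTER UPDATE ON settings
BEGIN
    INSERT INTO settings_audit (setting_id, old_value, new_value)
    VALUES (OLD.id, OLD.value, NEW.value);
END;

INSERT INTO settings VALUES (1, 'blue');
UPDATE settings SET value = 'green' WHERE id = 1;
""")
audit = conn.execute(
    "SELECT setting_id, old_value, new_value FROM settings_audit").fetchall()
```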
|
# ? Nov 5, 2014 22:38 |
|
At my company, we had triggers on everything that inserted into audit tables. However, it turns out that this caused all sorts of replication problems (we use a slave pool for reads) in MySQL. We ended up having to move all the audit functionality into our application layer (we added it to our ORM model class). This happened right before I started, so I don't know too many of the details. Just a word to the wise though.
|
# ? Nov 5, 2014 23:09 |
|
DimpledChad posted:At my company, we had triggers on everything that inserted into audit tables. However, it turns out that this caused all sorts of replication problems (we use a slave pool for reads) in MySQL. We ended up having to move all the audit functionality into our application layer (we added it to our ORM model class). This happened right before I started, so I don't know too many of the details. Just a word to the wise though. edit: DimpledChad posted:...huge seven-year-old codebase with massive amounts of complicated hand-written SQL, plus 20+ million hits a day in production, so switching really isn't an option without a major, expensive rewrite. PhantomOfTheCopier fucked around with this message at 05:32 on Nov 6, 2014 |
# ? Nov 6, 2014 00:13 |
|
Yeah I'd imagine Postgres would solve a lot of problems that we have (I've only used it briefly in a small project, but it seems pretty awesome). But, you know, huge seven-year-old codebase with massive amounts of complicated hand-written SQL, plus 20+ million hits a day in production, so switching really isn't an option without a major, expensive rewrite. I'd imagine there's lots of companies in a similar boat, and adding auditing programmatically is definitely doable, if a little less elegant.
DimpledChad fucked around with this message at 02:04 on Nov 6, 2014 |
# ? Nov 6, 2014 02:02 |
|
Awesome, I'll check out triggers. Thanks.
|
# ? Nov 6, 2014 02:14 |
|
SQL Server also has auditing built in, where you can audit actions filtered by user, etc., and log them to a database or flat file. But you might need the Enterprise edition for some of the more interesting bits.
|
# ? Nov 6, 2014 02:41 |
|
Slightly off topic, but is anyone in Seattle for SQL PASS this week?
|
# ? Nov 6, 2014 17:48 |
|
What's the best way to extract a date out of the middle of a text string in MSSQL? Ex: String = 'Text1 1/1/14 Text2' Text1 is always the same length, but Text2 can be of varying length. I want to be able to order my query by the date pulled out of this string if at all possible, so I assume I need to somehow get rid of Text1 & Text2 while at the same time casting the date as datetime but I am unsure how to attack it. I tried using the TRIM function from this SQL reference book I have like so: SELECT TRIM(LEADING 'Text1' FROM TRIM(TRAILING 'Text2' FROM String)) but MSSQL doesn't seem to have this function and I'm not entirely sure that function is even supposed to do what I'm trying to do with it. LTRIM and RTRIM only trim blanks so that's useless to me. What do I do here? edit: Figured it out myself! code:
kumba fucked around with this message at 20:53 on Nov 6, 2014 |
# ? Nov 6, 2014 20:23 |
|
Do Text1 and Text2 ever have spaces? If not, XML PATH is almost always faster than substringing.
|
# ? Nov 6, 2014 22:50 |
|
If you have to store a test result, and this test result could be either a few word sentence (varchar), or a count (integer), or a percentage (decimal?), but never be all three, only 1 of the three. How would you store it?
|
# ? Nov 6, 2014 22:54 |
|
Anaxandrides posted:Do Text1 and Text2 ever have spaces? If not, XML PATH is almost always faster than substringing. Both Text1 and Text2 always have spaces, so it sounds like that wouldn't work
|
# ? Nov 6, 2014 22:59 |
|
Kumbamontu posted:What's the best way to extract a date out of the middle of a text string in MSSQL? Ex: Since Text1 is of fixed length, let's say N, you can just do SUBSTRING(String, N+1, 8). Or you could, if you had dates in a fixed-length format (eg 01/01/14). Since you apparently don't, you'd need something uglier like SUBSTRING(String, N+1, PATINDEX('%/__ %', String) + 3 - (N+1)) (it has to be PATINDEX rather than CHARINDEX, since you're matching a pattern). BUT! Since you want to CAST() that stuff into a date afterwards, and MSSQL won't let you just specify a nonstandard format, I think you'll want to first get those dates into a standard format anyway, hopefully permanently, but if not you can do it in the query. Something that should work at turning 1/1/14 into a much nicer 140101 is building year + month + day: CONCAT(SUBSTRING(String, PATINDEX('%/__ %', String) + 1, 2), RIGHT('0' + SUBSTRING(String, N+1, CHARINDEX('/', String) - (N+1)), 2), RIGHT('0' + SUBSTRING(String, CHARINDEX('/', String) + 1, PATINDEX('%/__ %', String) - CHARINDEX('/', String) - 1), 2)). The zero-padding matters, since otherwise 12/2/14 would sort before 2/1/14. Then you just sort alphabetically and you're done.
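The underlying trick, getting the date into a zero-padded big-endian form before sorting, is easy to sanity-check outside the database. A sketch in Python with the same made-up "Text1 m/d/yy Text2" shape:

```python
# Text1 is fixed-length, so the date token always starts at the same offset.
PREFIX_LEN = len("Text1 ")

def sort_key(s: str) -> str:
    date_token = s[PREFIX_LEN:].split(" ", 1)[0]   # e.g. "1/1/14"
    month, day, year = date_token.split("/")
    # yymmdd with zero padding: "1/1/14" and "12/31/14" now compare correctly.
    return f"{year:0>2}{month:0>2}{day:0>2}"

rows = ["Text1 12/31/14 later entry", "Text1 1/1/14 some note", "Text1 2/14/14 x"]
ordered = sorted(rows, key=sort_key)
```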
|
# ? Nov 7, 2014 00:23 |
|
kiwid posted:If you have to store a test result, and this test result could be either a few word sentence (varchar), or a count (integer), or a percentage (decimal?), but never be all three, only 1 of the three. How would you store it? Which database?
|
# ? Nov 7, 2014 00:26 |
|
PhantomOfTheCopier posted:Which database? MySQL edit: Here is how I'm currently doing it. Is this a good way to do it? code:
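One common pattern for "exactly one of three" is three nullable columns plus a CHECK constraint that counts the non-NULLs. A sketch with invented names, in sqlite3 syntax (note MySQL only enforces CHECK constraints from 8.0.16, so older versions would need a trigger or application-side check):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE test_results (
    id          INTEGER PRIMARY KEY,
    text_value  TEXT,
    count_value INTEGER,
    pct_value   REAL,
    -- exactly one of the three columns may be non-NULL
    CHECK ((text_value IS NOT NULL) + (count_value IS NOT NULL)
           + (pct_value IS NOT NULL) = 1)
)
""")
conn.execute("INSERT INTO test_results (text_value) VALUES ('inconclusive')")
conn.execute("INSERT INTO test_results (pct_value) VALUES (42.5)")
try:
    # two values at once violates the constraint
    conn.execute("INSERT INTO test_results (text_value, count_value) VALUES ('x', 3)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```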
code:
code:
kiwid fucked around with this message at 03:28 on Nov 7, 2014 |
# ? Nov 7, 2014 01:44 |
|
How did the DROP statement get its name?
|
# ? Nov 7, 2014 11:57 |
|
I hope it's that when databases were in their infancy, anyone who accidentally deleted the wrong table had to drop their pants and run round the office as punishment. Actually I'm going to believe that's the answer now regardless of the truth.
|
# ? Nov 7, 2014 12:44 |
|
Back when computers were run on mercury/liquid analog switches you'd have to add a 'drop' of liquid to certain tubes to change how a switch operated.
|
# ? Nov 7, 2014 22:11 |
|
More questions! Looking at these two schemas, which one is the proper way to build the relationships? Each category can have many commodities and each commodity can be in many categories. Each facility can have multiple categories and commodities but does not have access to all categories or all commodities. Each category can have many facilities. Each commodity can have many facilities. An example of a category is "Food" or "Seed". An example of a commodity is "Corn", "Soybeans", or "Wheat". Soybeans can be either a food or a seed, for example, hence the category_has_commodity table. Some facilities can only process seed, not food, but will still have access to the soybeans commodity. Some facilities can process food and seed, and multiple commodities. kiwid fucked around with this message at 22:25 on Nov 7, 2014 |
# ? Nov 7, 2014 22:17 |
|
|
Unless a facility has access to every commodity in a category, then I think it's a combination of the two. There is a relation between facility and category, and then there is also a relation between facility and category-commodity.
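That facility-to-pair relation can be enforced with a composite foreign key: the facility link references the (category, commodity) pair rather than the commodity alone. A sketch in sqlite3 syntax, with table names following the posts above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE category  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE commodity (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE facility  (id INTEGER PRIMARY KEY, name TEXT);

-- which commodities exist within which category
CREATE TABLE category_has_commodity (
    category_id  INTEGER REFERENCES category(id),
    commodity_id INTEGER REFERENCES commodity(id),
    PRIMARY KEY (category_id, commodity_id)
);

-- a facility is granted a specific (category, commodity) pair, not a
-- whole category; the composite FK keeps it from claiming pairs that
-- don't exist in category_has_commodity
CREATE TABLE facility_access (
    facility_id  INTEGER REFERENCES facility(id),
    category_id  INTEGER,
    commodity_id INTEGER,
    PRIMARY KEY (facility_id, category_id, commodity_id),
    FOREIGN KEY (category_id, commodity_id)
        REFERENCES category_has_commodity (category_id, commodity_id)
);

INSERT INTO category  VALUES (1, 'Food'), (2, 'Seed');
INSERT INTO commodity VALUES (1, 'Soybeans');
INSERT INTO facility  VALUES (1, 'Seed-only plant');
INSERT INTO category_has_commodity VALUES (1, 1), (2, 1);
-- the seed-only facility gets Soybeans, but only under Seed
INSERT INTO facility_access VALUES (1, 2, 1);
""")
rows = conn.execute("""
SELECT c.name, k.name
FROM facility_access fa
JOIN category  c ON c.id = fa.category_id
JOIN commodity k ON k.id = fa.commodity_id
WHERE fa.facility_id = 1
""").fetchall()
```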
|
# ? Nov 7, 2014 22:38 |