Anaxandrides
Jun 7, 2012

Every summer day in the desert is like a vacation in hell.
Extended Attributes are great if you ever want a data dictionary set up for your database, so that developers aren't constantly asking you "What should I put in CreateDate" or somesuch. If you've got the time, setting them up and publishing something for devs to look at will save you even more time in the long run.

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!
:what: I mean, wow; that extended attribute syntax is ugly.

https://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_4009.htm#SQLRF01109
http://www.postgresql.org/docs/9.1/static/sql-comment.html

COMMENT ON COLUMN my_table.my_column IS 'Employee ID number';
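
If you go this route, it's worth also handing the devs the query that reads those comments back out. A minimal sketch, assuming PostgreSQL (per the second link); col_description() and information_schema are built-ins, and my_table is just the example name from above:

code:
SELECT c.column_name,
       col_description('my_table'::regclass, c.ordinal_position) AS column_comment
FROM information_schema.columns c
WHERE c.table_name = 'my_table'
ORDER BY c.ordinal_position;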


vvv No. Put a note in the commit message that says, "Read the drat manual". If you're nice, make it sound more like, "See {blah} to retrieve built-in column docs". Don't perpetuate the lazy.

PhantomOfTheCopier fucked around with this message at 06:35 on Nov 26, 2014

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

Thanks guys; I knew about extended attributes but that's like putting it in a closet under a stairwell behind a door with a sign that says 'beware the leopard'. Was hoping for something more convenient. I am putting it in the code, but the only person who reads the code is me. Guess I'll put it in a ****NOTE**** on the git-commit message.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Why am I not getting a permission denied error here? I already did a FLUSH PRIVILEGES from the root mysql account before trying this:

code:
mysql> SHOW GRANTS;
+---------------------------------------------------------------------------------+
| Grants for test_user@localhost                                                  |
+---------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'test_user'@'localhost'                                   |
| GRANT SELECT, INSERT ON `test_database`.`test_table` TO 'test_user'@'localhost' |
+---------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

mysql> LOAD DATA LOCAL INFILE '//tmp//test_data.csv' INTO TABLE test_database.test_table;
Query OK, 8 rows affected (0.01 sec)
Records: 8  Deleted: 0  Skipped: 0  Warnings: 0
I thought it would fail because I don't have the FILE permission

Anaxandrides
Jun 7, 2012

Every summer day in the desert is like a vacation in hell.

fletcher posted:

Why am I not getting a permission denied error here? I already did a FLUSH PRIVILEGES from the root mysql account before trying this:

code:
mysql> SHOW GRANTS;
+---------------------------------------------------------------------------------+
| Grants for test_user@localhost                                                  |
+---------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'test_user'@'localhost'                                   |
| GRANT SELECT, INSERT ON `test_database`.`test_table` TO 'test_user'@'localhost' |
+---------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

mysql> LOAD DATA LOCAL INFILE '//tmp//test_data.csv' INTO TABLE test_database.test_table;
Query OK, 8 rows affected (0.01 sec)
Records: 8  Deleted: 0  Skipped: 0  Warnings: 0
I thought it would fail because I don't have the FILE permission

FILE permissions are for files on the server locally -- remote file access is based on rights of the server proxy user to the remote machine.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Anaxandrides posted:

FILE permissions are for files on the server locally -- remote file access is based on rights of the server proxy user to the remote machine.

This was with mysqld running on my localhost and /tmp/test_data.csv is also on my local machine, shouldn't the FILE permission cover that?

Anaxandrides
Jun 7, 2012

Every summer day in the desert is like a vacation in hell.
Why did you use double slashes, then? I can't say on that one -- is there a bug in MySQL that causes it to see this as a remote server?

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Anaxandrides posted:

Why did you use double slashes, then? I can't say on that one -- is there a bug in MySQL that causes it to see this as a remote server?

Hmmm, can't remember why I had the double slashes; it seems to work with just singles. Still can't get it to throw a permissions error though.

MrMoo
Sep 14, 2000

fletcher posted:

This was with mysqld running on my localhost and /tmp/test_data.csv is also on my local machine, shouldn't the FILE permission cover that?

It is covered in the docs:

quote:

Using LOCAL is a bit slower than letting the server access the files directly, because the contents of the file must be sent over the connection by the client to the server. On the other hand, you do not need the FILE privilege to load local files.
http://dev.mysql.com/doc/refman/5.0/en/load-data.html

And it has its own page on security issues.
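
To spell out the distinction the docs are drawing, a sketch of the two forms (paths are only examples; the server-side one assumes a typical secure_file_priv directory):

code:
-- Server-side load: the file lives on the MySQL server host,
-- so the connecting account needs the FILE privilege.
LOAD DATA INFILE '/var/lib/mysql-files/test_data.csv'
INTO TABLE test_database.test_table;

-- Client-side load: the client streams the file over the connection,
-- so no FILE privilege is needed -- just INSERT on the target table
-- (and local_infile enabled on both ends).
LOAD DATA LOCAL INFILE '/tmp/test_data.csv'
INTO TABLE test_database.test_table;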

Spraynard Kruger
May 8, 2007

Hey thread, a site I help out with is having some MySQL issues I thought I could run by here. We've got a table, slightly under 100k rows, where some queries work, and some queries give us the following error:

code:
#2014 - Commands out of sync; you can't run this command now
We've found some random queries that can reproduce the issue, but we can't nail down why any of them do. Even something as simple as paging through the table in phpMyAdmin will work for a bit, only to throw the error when we try to access somewhere around halfway through the data. Google results seem to call it out as either an issue with poorly written PHP or stored procs, but we're just running some simple SELECTs here! How can we get things in sync again?

Anaxandrides
Jun 7, 2012

Every summer day in the desert is like a vacation in hell.

Spraynard Kruger posted:

Hey thread, a site I help out with is having some MySQL issues I thought I could run by here. We've got a table, slightly under 100k rows, where some queries work, and some queries give us the following error:

code:
#2014 - Commands out of sync; you can't run this command now
We've found some random queries that can reproduce the issue, but we can't nail down why any of them do. Even something as simple as paging through the table in phpMyAdmin will work for a bit, only to throw the error when we try to access somewhere around halfway through the data. Google results seem to call it out as either an issue with poorly written PHP or stored procs, but we're just running some simple SELECTs here! How can we get things in sync again?

Are you sure your connections are being closed properly all the time? That is, are your connections being closed and reopened after each command is issued?

The phpMyAdmin issue looks to be a problem with the tool itself, per Google.

MrMoo
Sep 14, 2000

How does MariaDB manage variables? I'm seeing this unexpected case where a key is not being used when specifying a local variable.
SQL code:
MariaDB [trth]> explain SELECT max(bid_price) AS bid_price, exchange FROM quotes WHERE quotes.ric = 'A' AND timestamp 
BETWEEN @start_time AND @end_time GROUP BY exchange ORDER BY NULL;
+------+-------------+--------+-------+---------------+---------------+---------+------+------+------------------------------+
| id   | select_type | table  | type  | possible_keys | key           | key_len | ref  | rows | Extra                        |
+------+-------------+--------+-------+---------------+---------------+---------+------+------+------------------------------+
|    1 | SIMPLE      | quotes | range | ric_timestamp | ric_timestamp | 9       | NULL |  659 | Using where; Using temporary |
+------+-------------+--------+-------+---------------+---------------+---------+------+------+------------------------------+
1 row in set (0.00 sec)
  
MariaDB [trth]> set @trade_ric ='A';
Query OK, 0 rows affected (0.00 sec)

MariaDB [trth]> explain SELECT max(bid_price) AS bid_price, exchange FROM quotes WHERE quotes.ric = @trade_ric AND timestamp
 BETWEEN @start_time AND @end_time GROUP BY exchange ORDER BY NULL;
+------+-------------+--------+------+---------------+------+---------+------+----------+------------------------------+
| id   | select_type | table  | type | possible_keys | key  | key_len | ref  | rows     | Extra                        |
+------+-------------+--------+------+---------------+------+---------+------+----------+------------------------------+
|    1 | SIMPLE      | quotes | ALL  | NULL          | NULL | NULL    | NULL | 10841597 | Using where; Using temporary |
+------+-------------+--------+------+---------------+------+---------+------+----------+------------------------------+
1 row in set (0.00 sec)
There is a half-explanation for T-SQL: http://www.sqlbadpractices.com/using-local-variables-in-t-sql-queries/

MrMoo fucked around with this message at 20:03 on Dec 4, 2014

MrMoo
Sep 14, 2000

Ugh, apparently character set hell. It's an outstanding bug; the workaround is to force the same character set everywhere.

http://stackoverflow.com/q/16752922/175849
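
A sketch of what "force the same character set everywhere" looks like in practice; the column charset is assumed to be latin1 here, so substitute whatever SHOW CREATE TABLE quotes reports:

SQL code:
-- Make the connection charset match the column so the comparison
-- against the user variable stays indexable.
SET NAMES latin1;
SET @trade_ric = 'A';
EXPLAIN SELECT MAX(bid_price) AS bid_price, exchange
FROM quotes
WHERE ric = @trade_ric
  AND timestamp BETWEEN @start_time AND @end_time
GROUP BY exchange ORDER BY NULL;

-- Or coerce the variable explicitly at the comparison site:
-- WHERE ric = CONVERT(@trade_ric USING latin1)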

The TokuDB storage engine is nice, though: 100GB of source data down to 10GB of table data and only 2GB compressed.

MrMoo fucked around with this message at 23:18 on Dec 5, 2014

stoops
Jun 11, 2001
In my database, I needed the ability to get any records that are associated by distinct dates.

So first I made an sql call to get me the distinct dates:

(something like "'DISTINCT (DATE(StartTime)) as yyyymmdd'")

sample output is:

2014-02-02
2014-01-22
2012-12-31
2012-12-30
2012-11-09
2012-11-08
1999-10-05

(Each date is clickable and I ajax all the records associated with each date).

It works fine, but now I was told that the records can be plus or minus a day.

For example, in the above output, 2012-12-31 and 2012-12-30 would not be separated. It would actually be one date, 2012-12-30, and that would give me all the records associated with 12-30 thru 12-31.

My question is, can SQL do something like this? Any help is appreciated.

Anaxandrides
Jun 7, 2012

Every summer day in the desert is like a vacation in hell.

stoops posted:

In my database, I needed the ability to get any records that are associated by distinct dates.

For example, in the above output, 2012-12-31 and 2012-12-30 would not be separated. It would actually be one date, 2012-12-30, and that would give me all the records associated with 12-30 thru 12-31.

My question is, can SQL do something like this? Any help is appreciated.

Sure, easily. I assume this is after the AJAX call to retrieve records? If so, add a WHERE YourDate >= DATEADD(dd, -1, @YourInputDateTimeVariable) AND YourDate <= DATEADD(dd, 1, @YourInputDateTimeVariable)

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

Yeah, DATEADD is freaking awesome if your timestamps are parsable/can be forced into UTC.

Pardot
Jul 25, 2001




Were I doing it all in sql, I'd do something like

code:
with dates as (select d::date as date from generate_series('2014-01-01'::date, '2014-02-01', '1 day') as d)
select dates.date, o.*
from dates
join othertable o on dates.date::timestamptz <@ tstzrange(o.start_time - '1 day'::interval, o.start_time + '1 day'::interval)

stoops
Jun 11, 2001

Anaxandrides posted:

Sure, easily. I assume this is after the AJAX call to retrieve records? If so, add a WHERE YourDate >= DATEADD(dd, -1, @YourInputDateTimeVariable) AND YourDate <= DATEADD(dd, 1, @YourInputDateTimeVariable)

I'm sorry I didn't make myself clear. Can sql get me the DATES themselves first?

Basically, I want to get a listing of just the distinct, plus-or-minus-a-day, dates. Then from that listing of dates, I'd run your AJAX call.


So instead of this:

2014-02-02
2014-01-22
2012-12-31
2012-12-30
2012-11-09
2012-11-08
1999-10-05

I would get this:

2014-02-02
2014-01-22
2012-12-30
2012-11-08
1999-10-05

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

stoops posted:

I'm sorry I didn't make myself clear. Can sql get me the DATES themselves first?

Basically, I want to get a listing of just the distinct, plus-or-minus-a-day, dates. Then from that listing of dates, I'd run your AJAX call.


So instead of this:

2014-02-02
2014-01-22
2012-12-31
2012-12-30
2012-11-09
2012-11-08
1999-10-05

I would get this:

2014-02-02
2014-01-22
2012-12-30
2012-11-08
1999-10-05

What would happen when there are three (or four, or five) consecutive days in the data?

Anaxandrides
Jun 7, 2012

Every summer day in the desert is like a vacation in hell.

stoops posted:

I'm sorry I didn't make myself clear. Can sql get me the DATES themselves first?

Basically, I want to get a listing of just the distinct, plus-or-minus-a-day, dates. Then from that listing of dates, I'd run your AJAX call.

It's a good case for a recursive CTE. I have no idea how you'll know which dates to include in your AJAX, but that's a different issue.

code:
;WITH cte(MinStartDate, RowNumber)
AS (SELECT StartDate AS MinStartDate,
    1 AS RowNumber
    FROM YourTable
    UNION ALL
    SELECT yt.StartDate,
    ct.RowNumber + 1
    FROM YourTable yt
        JOIN cte ct
             ON CAST(yt.StartDate AS DATE) = CAST(DATEADD(dd, 1, ct.MinStartDate) AS DATE)
)
SELECT c.MinStartDate
FROM cte c
    LEFT JOIN cte c2
         ON c.MinStartDate = c2.MinStartDate
            AND c2.RowNumber > c.RowNumber
WHERE c.RowNumber = 1
      AND c2.MinStartDate IS NULL

Anaxandrides fucked around with this message at 17:45 on Dec 10, 2014

stoops
Jun 11, 2001

Hammerite posted:

What would happen when there are three (or four, or five) consecutive days in the data?

In our case, it would not happen.


Anaxandrides posted:

It's a good case for a recursive CTE.

Thanks for the help.

Aredna
Mar 17, 2007
Nap Ghost

stoops posted:

In our case, it would not happen.

Unless it's checked for and restricted in another place, I would assume that it's going to happen for whatever reason in your data. The reason so many reports break is that impossible things will always happen.

Even if it's just a simple check of "ok, combine at most 3 days before going to a break" or "it's ok to have 2 weeks in a row combined", at least then you have a definition and an expectation when someone asks about what they see, rather than "that's an impossible scenario".

Anaxandrides
Jun 7, 2012

Every summer day in the desert is like a vacation in hell.

Aredna posted:

Unless it's checked for and restricted in another place, I would assume that it's going to happen for whatever reason in your data. The reason so many reports break is that impossible things will always happen.

Even if it's just a simple check of "ok, combine at most 3 days before going to a break" or "it's ok to have 2 weeks in a row combined", at least then you have a definition and an expectation when someone asks about what they see, rather than "that's an impossible scenario".

This is the truth. If it can happen, it will happen, and I highly doubt there's an actual constraint on your table to prevent it from happening.

Sockser
Jun 28, 2007

This world only remembers the results!




Running a mySQL database (well, like 4 to get around some IT policy problems)

we run a dump every night at midnight and keep records for a while, which is cool.

However, while it's backing up, it's still accepting queries; apparently it's just returning 0 results.

So when we have stuff where we do a
IF(SELECT WHATEVER) DOESN'T EXIST
INSERT THIS THING


the first query will return 0 results, so we insert some new rows, leading to (as we've discovered in the past week) rows that should be unique having thousands of duplicates from multiple clients getting 0 results on checking existence.



What uhhhhhh... how do I back up this database without loving everything up?


(There's probably something real clever to google that would solve this real quick but I can't seem to figure out what)

MrMoo
Sep 14, 2000

There is clearly something special about the backup process; find out what, and how it is affecting the SELECT query. It is not normal, but you may have a bug in your version of MySQL too :toot:

Anaxandrides
Jun 7, 2012

Every summer day in the desert is like a vacation in hell.
Are you using InnoDB or not? Either way, odds are your rows are locked. This doesn't mean that the page is locked or that you can't insert records, just that your select may hit no rows. It depends on what kind of concurrency controls you have running -- have you tried the single-transaction option for mysqldump, or READ UNCOMMITTED, to see if those help?
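
For reference, a minimal sketch of those two suggestions (this assumes InnoDB tables and that the nightly job uses mysqldump; names are placeholders):

code:
# Take the nightly dump as one consistent snapshot instead of locking
# tables for the duration (InnoDB only; MyISAM tables won't benefit).
mysqldump --single-transaction test_database > nightly_dump.sql

-- And/or, on the application side, loosen the isolation level for the
-- existence check, at the cost of possible dirty reads:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;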

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

Sockser posted:

Running a mySQL database (well, like 4 to get around some IT policy problems)

we run a dump every night at midnight and keep records for a while, which is cool.

However, while it's backing up, it's still accepting queries; apparently it's just returning 0 results.

So when we have stuff where we do a
IF(SELECT WHATEVER) DOESN'T EXIST
INSERT THIS THING


the first query will return 0 results, so we insert some new rows, leading to (as we've discovered in the past week) rows that should be unique having thousands of duplicates from multiple clients getting 0 results on checking existence.



What uhhhhhh... how do I back up this database without loving everything up?


(There's probably something real clever to google that would solve this real quick but I can't seem to figure out what)

You said "rows that should be unique" and this might be an obvious thing, but I feel like someone should check it anyway - I assume you have investigated putting unique indexes on and determined that unique indexes can't be used for whatever constraint it is you're trying to enforce?

cka
May 3, 2004
When does "Using filesort" become a serious problem with MySQL? I've got a query that I want to have able to sort by an IF() statement in the SELECT parameters, and when I utilize that column in ORDER BY I get Using temporary; Using filesort. I'm not a huge DBA guy, but I know enough to know that "Using filesort" is basically like dirty language in church. Right now it's basically a non-issue AFAIK since there's only 2 rows in the table where the file sort occurs, but I'd rather fix it now if possible instead of being up poo poo creek when it becomes a major issue.

McGlockenshire
Dec 16, 2005

GOLLOCKS!
Filesort is just the name of the thing that goes "welp, I need to sort the entire table." It, alone, is not a red flag. Same thing with "using temporary". Temporary tables are kept in working memory until they grow to such a size that they need to be written to disk. Even together, they are not inherently evil, they're just a signal that you're doing something that can't be indexed.

In this case, you're generating a column value on the fly and then sorting on it. Yes, that's going to take additional work. No, it won't actually be a problem until there's a few megs of data. If every other part of the query is hitting an appropriate index, and it's only having to think hard when it comes to sorting, then you're probably good.
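
A contrived sketch of the pattern being described, with made-up table and column names; the derived column can't come from an index, so the sort shows up as a filesort in EXPLAIN:

code:
EXPLAIN
SELECT id, name,
       IF(status = 'active', 0, 1) AS sort_bucket
FROM widgets
ORDER BY sort_bucket, name;
-- the Extra column will report "Using filesort" for the ORDER BY,
-- even when the rest of the query hits an appropriate index.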

NakedWithCandy
Sep 22, 2014
I don't know if this is the right thread, but I have a db that needs to be made. It is a catalog of courses. Each course will have schedules. So something like this:
Catalog
-------
catalog_id pk
year
start_date
end_date

Course
--------
elective_id pk
number
name
department

Catalog_Course
--------
catalog_id pk fk
elective_id pk fk
size

Catalog_Course_Schedule
---------
catalog_id pk fk
elective_id pk fk
start_date
end_date
active

Then there needs to be a way to make blocks of schedules that can be attached to a course in a catalog. So the blocks would be something like this:
Schedule_Block
---------
schedule_block_id pk
schedule_block_type

Schedule
--------
start_date
end_date

So you have schedules for a block that can be given to several courses, but the schedules for the course will have attributes of their own, such as whether they are active? Any ideas about how to do this?

Anaxandrides
Jun 7, 2012

Every summer day in the desert is like a vacation in hell.
Every time I see someone post a pretty small and simple database that has multiple compound keys, I get a little bit worried. Can each elective only have one slot each semester? That part alone seems pretty iffy based on the college schedules I've seen. Be wary of over-normalization. Why does Catalog_Course exist rather than just rolling the Size column into Catalog_Course_Schedule? On that note, why is Schedule not FK'd in that table and instead data is duplicated?

This is a pretty simple database structure to mock up. First off, is there some reason for the structures you posted above?
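
To make that concrete, a rough sketch of rolling Size in and FK'ing the schedule (names taken from the post above, types assumed; adjust for your engine):

code:
CREATE TABLE Schedule (
    schedule_id       INT PRIMARY KEY,
    schedule_block_id INT NULL,     -- optional membership in a block
    start_date        DATE NOT NULL,
    end_date          DATE NOT NULL
);

CREATE TABLE Catalog_Course_Schedule (
    catalog_id  INT NOT NULL,
    elective_id INT NOT NULL,
    schedule_id INT NOT NULL,       -- reference the schedule, don't copy its dates
    size        INT,
    active      BOOLEAN NOT NULL DEFAULT TRUE,
    PRIMARY KEY (catalog_id, elective_id, schedule_id),
    FOREIGN KEY (schedule_id) REFERENCES Schedule (schedule_id)
);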

duck hunt
Dec 22, 2010
Hey guys, I'm looking for a recommendation for reading on Apache Cassandra. Any of you goons come across anything good?

nvm. I ended up picking up Practical Cassandra.

duck hunt fucked around with this message at 00:20 on Feb 9, 2015

SauceNinja
Nov 8, 2002
Knock Knock.
Who's There?
You're Fired.
It's possible that this was covered in the 192 pages prior to this with a similar problem.

code:
create or replace procedure p_MakeStars
as
v_num pls_integer :=0;
v_sal pls_integer;
v_max pls_integer;
Begin
select count(*) into v_max from bk_emp;
dbms_output.put_line (v_num|| ' '||v_max); --This is just to see what's happening.
for i in 1..v_max
loop
select sal into v_sal from bk_emp where rownum = i;
v_num := trunc((v_sal/1000), 0);
dbms_output.put_line (v_num);
end;


Why does this not loop to v_max but just stops at i = 1? My theory is that it has to do with the way rownum is fed in from bk_emp. Also, how can I output 'i' so I know where the loop inevitably breaks? Output is currently
0 14
0

I wish it were
0 14
0
1
1
2
1
2
0
5
3

based on the salaries in that table after having been truncated. Halp.

MrKatharsis
Nov 29, 2003

feel the bern
Add an end loop; and see what happens

SauceNinja
Nov 8, 2002
Knock Knock.
Who's There?
You're Fired.

MrKatharsis posted:

Add an end loop; and see what happens


It compiles, but when executed I get a No Data Found error on line 11, which is this one.
select sal into v_sal from bk_emp where rownum = i;

edit: and the dbms_output is still
0 14
0

SauceNinja fucked around with this message at 16:21 on Feb 12, 2015

Jethro
Jun 1, 2000

I was raised on the dairy, Bitch!
rownum is a special pseudo column that is only populated as the rows are returned from the database. So your loop says "give me the first row returned from the database" and that works. Then your loop says "give me the second row returned from the database", and the database returns the first row and discards it because it's not the second row, then returns the new first row and discards it because it's not the second row, then returns the new first row and discards it because it's not the second row...until it has returned and discarded 14 first rows and it can't find any data.
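
For what it's worth, a minimal sketch of the loop rewritten without rownum, using the same bk_emp/sal names as above (this is the cursor FOR loop shape suggested in the next post):

code:
create or replace procedure p_MakeStars
as
  v_num pls_integer := 0;
begin
  -- the cursor hands back each row in turn; no rownum arithmetic needed
  for r in (select sal from bk_emp)
  loop
    v_num := trunc(r.sal / 1000, 0);
    dbms_output.put_line(v_num);
  end loop;
end;
/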

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!
Why are you reproducing LIMIT? Shouldn't this be a cursor for loop, or a query into a %ROWTYPE followed by a loop over that (which is the same thing?)?


vvv :psyduck: but right you are; it seems to be an SQL:2008ism (OFFSET... FETCH...) available in 12c. p.s., I don't use Oracle so I wouldn't know.
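
For the curious, the 12c syntax in question looks roughly like this (a sketch; needs 12.1+, and you'd want a deterministic ORDER BY -- emp_id here is just an assumed key column):

code:
select sal
from bk_emp
order by emp_id
offset 1 rows fetch next 1 rows only;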

PhantomOfTheCopier fucked around with this message at 04:04 on Feb 13, 2015

Pardot
Jul 25, 2001




PhantomOfTheCopier posted:

Why are you reproducing LIMIT? Shouldn't this be a cursor for loop, or a query into a %ROWTYPE followed by a loop over that (which is the same thing?)?

I assume it's oracle, and oracle doesn't have limit.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

I want to add an XML column to several of our tables so that in the future, when one dinky little customer asks to add "Size of Left Toe" to his customer records, we don't need to even consider touching the data structure.

To make things straightforward for the kids who write reports (or to make them possible, since sometimes they use fairly old software), the plan is to add a View for each of those tables where the XML data is flattened out into regular columns, so they can just point to the [Customers_Reporting] view instead of the [Customers] table and they don't need to bother dealing with the XML at all.

My (probably stupid) question is:

When I ALTER VIEW, does the associated SELECT query actually run and check the database contents or is it just formally validated? In other words, as the size of the [Customers] table grows, does running "ALTER VIEW Customers_Reporting AS SELECT < all the old columns>, RandomXMLShit.value('/SizeOfLeftToe[1]', 'decimal(5, 2)') AS [SizeOfLeftToe] FROM Customers" become any slower?

If the answer is "no", I may as well run the ALTER VIEW before any reporting job, to ensure the view is always up-to-date with whatever XML tags are being used right there and then. If it is "yes", I'll have to be a little more thoughtful about it.

(SQL Server, if it matters.)

e: SizeOfLeftToe was probably a bad example because that could be a simple one-to-many [CustomerID]-[PropertyID]-[Value] table. The reason I want to go with an XML column is that it offers the flexibility to handle more complex properties as well.

NihilCredo fucked around with this message at 13:20 on Feb 14, 2015

Nth Doctor
Sep 7, 2010

Darkrai used Dream Eater!
It's super effective!


NihilCredo posted:

I want to add an XML column to several of our tables so that in the future, when one dinky little customer asks to add "Size of Left Toe" to his customer records, we don't need to even consider touching the data structure.

To make things straightforward for the kids who write reports (or to make them possible, since sometimes they use fairly old software), the plan is to add a View for each of those tables where the XML data is flattened out into regular columns, so they can just point to the [Customers_Reporting] view instead of the [Customers] table and they don't need to bother dealing with the XML at all.

My (probably stupid) question is:

When I ALTER VIEW, does the associated SELECT query actually run and check the database contents or is it just formally validated? In other words, as the size of the [Customers] table grows, does running "ALTER VIEW Customers_Reporting AS SELECT < all the old columns>, RandomXMLShit.value('/SizeOfLeftToe[1]', 'decimal(5, 2)') AS [SizeOfLeftToe] FROM Customers" become any slower?

If the answer is "no", I may as well run the ALTER VIEW before any reporting job, to ensure the view is always up-to-date with whatever XML tags are being used right there and then. If it is "yes", I'll have to be a little more thoughtful about it.

(SQL Server, if it matters.)

e: SizeOfLeftToe was probably a bad example because that could be a simple one-to-many [CustomerID]-[PropertyID]-[Value] table. The reason I want to go with an XML column is that it offers the flexibility to handle more complex properties as well.

Please don't break first normal form.
