I have the following DB structure (simplified as much as possible for clarity):
Table "entry" ("id" integer primary key)
Table "fields" ("name" varchar primary key, and others)
Table "entry_fields" ("entryid" integer, "name" varchar, "value"; primary key on ("entryid", "name"))
I would like to get, for a given "entry.id", the detail of this entry, i.e. all the "entry_fields" rows linked to this entry, in a single SQL query.
An example would be better perhaps:
"fields":
"result"
"output"
"code"
"command"
"entry" contains:
id : 842
id : 850
"entry_fields" contains:
entryid : 842, name : "result", value : "ok"
entryid : 842, name : "output", value : "this is an output"
entryid : 842, name : "code", value : "42"
entryid : 850, name : "result", value : "ko"
entryid : 850, name : "command", value : "print ko"
The wanted output would be:
| id | command | output | code | result |
| 842 | NULL | "this is an output" | 42 | ok |
| 850 | "print ko" | NULL | NULL | ko |
The aim is to be able to add a "field" without changing anything in the "entry" table structure.
I tried something like:
SELECT e.*, (SELECT name FROM fields) FROM entry AS e
but Postgres complains:
ERROR: more than one row returned by a subquery used as an expression
Hope someone can help me!
While you're stuck with this unfortunate design, the fastest query would be with crosstab(), provided by the additional module tablefunc. Ample details in this related answer:
For the question asked:
SELECT * FROM crosstab(
$$SELECT e.id, ef.name, ef.value
FROM entry e
LEFT JOIN entry_fields ef
ON ef.entryid = e.id
AND ef.name = ANY ('{result,output,code,command}'::text[])
ORDER BY 1, 2$$
,$$SELECT unnest('{result,output,code,command}'::text[])$$
) AS ct (id int, result text, output text, code text, command text);
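If installing tablefunc is not an option, the same pivot can also be written with plain conditional aggregation (one `MAX(CASE ...)` per field), which is standard SQL. A minimal sketch of that alternative, using Python's built-in sqlite3 so it runs without a Postgres server; the same query works unchanged in Postgres:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entry (id INTEGER PRIMARY KEY);
    CREATE TABLE entry_fields (
        entryid INTEGER,
        name    TEXT,
        value   TEXT,
        PRIMARY KEY (entryid, name)
    );
    INSERT INTO entry VALUES (842), (850);
    INSERT INTO entry_fields VALUES
        (842, 'result', 'ok'),
        (842, 'output', 'this is an output'),
        (842, 'code', '42'),
        (850, 'result', 'ko'),
        (850, 'command', 'print ko');
""")

# One output row per entry; each MAX(CASE ...) picks out the value of
# one field name, and yields NULL when the entry has no such field.
rows = conn.execute("""
    SELECT e.id,
           MAX(CASE WHEN ef.name = 'command' THEN ef.value END) AS command,
           MAX(CASE WHEN ef.name = 'output'  THEN ef.value END) AS output,
           MAX(CASE WHEN ef.name = 'code'    THEN ef.value END) AS code,
           MAX(CASE WHEN ef.name = 'result'  THEN ef.value END) AS result
    FROM entry e
    LEFT JOIN entry_fields ef ON ef.entryid = e.id
    GROUP BY e.id
    ORDER BY e.id
""").fetchall()

for row in rows:
    print(row)
# (842, None, 'this is an output', '42', 'ok')
# (850, 'print ko', None, None, 'ko')
```

This is more verbose than crosstab() (and typically slower for many columns), but it needs no extension and degrades gracefully: missing fields simply come back as NULL.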
If you don't have a huge number of different fields, it will be much simpler and more efficient to merge all three tables into one simple table:
CREATE TABLE entry (
entry_id serial PRIMARY KEY
,field1 text
,field2 text
, ... more fields
);
Fields without values can be NULL. NULL storage is very cheap (basically 1 bit per column in the NULL bitmap):
Even if you have hundreds of different columns, and only a few are filled per entry, this will still use much less disk space.
Your query becomes trivial:
SELECT entry_id, result, output, code, command
FROM entry;
If you have too many columns (Postgres allows 250 - 1600 columns per table, depending on column types), and that's not just a misguided design (often, this can be folded into much fewer columns), consider the data types hstore or json / jsonb (in Postgres 9.4) for EAV storage.
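The json / jsonb route keeps all the flexible fields in a single column, so new field names need no schema change at all. A minimal sketch of the idea, shown with sqlite3's JSON1 functions so it runs anywhere (in Postgres you would use a jsonb column and the ->> operator instead of json_extract):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entry (id INTEGER PRIMARY KEY, fields TEXT)")
conn.executemany(
    "INSERT INTO entry VALUES (?, ?)",
    [
        (842, '{"result": "ok", "output": "this is an output", "code": "42"}'),
        (850, '{"result": "ko", "command": "print ko"}'),
    ],
)

# json_extract returns NULL when the key is absent -- exactly the sparse
# behaviour the EAV design was after, without a separate key/value table.
rows = conn.execute("""
    SELECT id,
           json_extract(fields, '$.command') AS command,
           json_extract(fields, '$.output')  AS output,
           json_extract(fields, '$.code')    AS code,
           json_extract(fields, '$.result')  AS result
    FROM entry
    ORDER BY id
""").fetchall()

for row in rows:
    print(row)
# (842, None, 'this is an output', '42', 'ok')
# (850, 'print ko', None, None, 'ko')
```

In Postgres you would additionally put a GIN index on the jsonb column if you need to search by field value.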
Consider this related answer with alternatives:
And this question about typical use cases / problems of EAV structures on dba.SE: