Topic: Delphi nested joins check


jobo

Registered since: Nov 29, 2010
3,072 posts
 
Delphi 2010 Enterprise
 
#6

Re: nested joins check

16 Jun 2020, 19:30
I guess you are dealing with design data, maybe from Solid Edge on an Oracle DB, mixed with two other sources, a measuring and a production database?
However, it doesn't look like there is a fine-grained consolidation area with well-designed tables, indices or an API, but rather a "pool" of import/export/ETL sources.
So maybe a lot of fragments, poorly indexed...

Tweaking (of existing, working select statements, or an approach to creating new ones)
Imagine a vehicle, its modules, module parts, down to single components, and the other way round, from the components up to a vehicle (type).
Starting from a big denormalized table (as an abstract idea) containing all the information above, one can assume that a good approach is to use a few high-impact criteria to minimize the query result. This is also how the database tries to optimize query paths.
With a few million records in such a table, selecting a specific vehicle type already narrows the data drastically; the same thing happens again when filtering on a specific module or module part.
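
As a minimal sketch (the table BOM_FLAT and its columns are made-up names for that abstract denormalized table, one row per component occurrence):

```sql
-- Sketch only; BOM_FLAT, its columns and the literal values are assumptions.
SELECT component_no,
       component_name,
       quantity
  FROM bom_flat
 WHERE vehicle_type = 'T3000'    -- high-impact criterion: one type out of millions of rows
   AND module_name  = 'Gearbox'  -- narrows the result again
   AND part_name    = 'Shaft';   -- down to the components of a single part
```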

This approach seems to be a good deal and also reflects a "natural" or "human" way of looking at the final component list.

Now, in theory, the nice thing is that you don't have to care which filter comes first; the database should know best. However, such a system naturally requires fixed points for those decisions: key fields, indexes on key fields, row-count statistics for the individual tables, and so on.
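Such fixed points might look like this (names are made up; the statistics call is Oracle's DBMS_STATS, since Oracle was mentioned above):

```sql
-- Hypothetical indexes on the filter columns used earlier.
CREATE INDEX idx_bom_vtype  ON bom_flat (vehicle_type);
CREATE INDEX idx_bom_module ON bom_flat (module_name);

BEGIN
  -- Refresh the row counts and value distributions the optimizer relies on.
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'BOM_FLAT');
END;
/
```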
When assembling a select statement, adding join after join, you will notice the runtime behaviour of each additional join. You're on a roll when an additional join speeds the query up (because it significantly limits the amount of data)! At the very least, the next join shouldn't make things much worse.
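A sketch of growing a statement stage by stage (tables VEHICLES and MODULES are hypothetical); run and time each stage before adding the next join:

```sql
-- Stage 1: hard-filtered base set.
SELECT v.vehicle_id
  FROM vehicles v
 WHERE v.vehicle_type = 'T3000';

-- Stage 2: one more join; a good sign is when it shrinks the result further.
SELECT v.vehicle_id, m.module_id
  FROM vehicles v
  JOIN modules m ON m.vehicle_id = v.vehicle_id
 WHERE v.vehicle_type = 'T3000'
   AND m.module_name = 'Gearbox';
```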
Avoid aggregations (and sorts) on large base sets that have to be filtered again later anyway. Avoid the usual, old-fashioned SQL workarounds for features that the DB engine handles better natively (CTEs, recursive queries, window functions, pivot, unpivot, ...).
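For example, a recursive CTE over a hypothetical PARTS table (PART_ID / PARENT_ID) replaces the old workaround of chaining one self-join per hierarchy level; this is the Oracle-style syntax, other engines want WITH RECURSIVE:

```sql
-- Walk the hierarchy from the vehicle (root) down to the components.
WITH part_tree (part_id, part_name, lvl) AS (
  SELECT part_id, part_name, 1
    FROM parts
   WHERE parent_id IS NULL              -- the vehicle as root
  UNION ALL
  SELECT p.part_id, p.part_name, t.lvl + 1
    FROM parts p
    JOIN part_tree t ON p.parent_id = t.part_id
)
SELECT part_id, part_name, lvl
  FROM part_tree;
```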
Avoid nasty and error-prone preprocessing of import data (ETL); just do a simple, roughly filtered data import, followed by comfortable, exact and fast filtering in SQL.
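Roughly like this (STG_IMPORT and EXT_IMPORT_FILE are made-up names; the idea, not the names, is the point):

```sql
-- Rough import into a staging table first, barely filtered.
INSERT INTO stg_import (raw_ref, raw_value, loaded_at)
SELECT raw_ref, raw_value, SYSDATE
  FROM ext_import_file;                 -- e.g. an external table

-- The exact filtering happens here, in SQL, not in the ETL tool.
SELECT raw_ref, raw_value
  FROM stg_import
 WHERE loaded_at >= TRUNC(SYSDATE)      -- only today's load
   AND raw_ref LIKE 'VEH-%';
```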
To be clear: I think the most basic point is not the combination of joins, but the parameterization (filtering) of the data.

When using (or constructing) views, I always do full selects, querying nearly all columns, building useful, reusable data layers. A view, as a predefined select statement, doesn't need to save resources, neither horizontally nor vertically; it saves thinking. For example, building a view on some genealogy tables showing all fathers with their sons, I just put the join logic there; I don't do any filtering on age or century. It's perfectly okay to do that when finally using the view. And I don't remove columns from the view to make it faster or anything; that happens in the consuming select statement, as needed.
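The genealogy example as a sketch (the PERSONS table and its columns are assumptions): the view holds only the join logic and nearly all columns, no filtering.

```sql
CREATE OR REPLACE VIEW v_father_son AS
SELECT f.person_id AS father_id,
       f.name      AS father_name,
       f.born      AS father_born,
       s.person_id AS son_id,
       s.name      AS son_name,
       s.born      AS son_born
  FROM persons f
  JOIN persons s ON s.father_id = f.person_id;

-- Filtering on age or century happens only where the view is used:
SELECT father_name, son_name
  FROM v_father_son
 WHERE father_born BETWEEN DATE '1800-01-01' AND DATE '1899-12-31';
```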
To emphasize this idea of using views, I admit that some of the resulting views are incredibly slow when selected unfiltered. That is not intended, but it does no harm as long as these views are used the way they are meant to be used. (The best way to ensure this is to make use of privileges, and/or combinations with functions returning a (PK) ID, or even packages providing a kind of session variable used for dimension filtering.)
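A rough sketch of the "package as session variable" idea in Oracle PL/SQL (all names made up): package state lives per session, so each session sets its dimension filter once and the view reuses it.

```sql
CREATE OR REPLACE PACKAGE session_ctx AS
  PROCEDURE set_vehicle_type (p_type IN VARCHAR2);
  FUNCTION  get_vehicle_type RETURN VARCHAR2;
END session_ctx;
/
CREATE OR REPLACE PACKAGE BODY session_ctx AS
  g_vehicle_type VARCHAR2(30);          -- per-session package state

  PROCEDURE set_vehicle_type (p_type IN VARCHAR2) IS
  BEGIN
    g_vehicle_type := p_type;
  END;

  FUNCTION get_vehicle_type RETURN VARCHAR2 IS
  BEGIN
    RETURN g_vehicle_type;
  END;
END session_ctx;
/
-- A view that filters itself through the session variable:
CREATE OR REPLACE VIEW v_bom_session AS
SELECT *
  FROM bom_flat
 WHERE vehicle_type = session_ctx.get_vehicle_type;
```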

But first of all, ensure correct data selection; only then start thinking about speed, elegance or comfort.

(With free support from DeepL)
Regards, Jo