So about three years ago this foxed me, and as it wasn’t particularly important at the time we went with a hard-coded number of processing engines. Then, randomly, whilst driving through the beautiful British countryside last week, I realised how simple the solution really is. Here is what I was passing at the time… (Brocket Hall – they do food, I think!)
Whoa! Hang on a minute, what am I on about? OK, the scenario is this – common in a metadata-driven system – you want to process all your data and, depending on some attribute, send it to one template or another. Fine.
BUT because we love metadata, and we love plugins, you don’t want to have to change the code just to add a brand-new template – even if it would only be a copy-and-paste operation.
Concrete example? Sure. You’re processing signal data, and you want to store the data differently depending on the signal. So you have metadata that maps each signal to a processing engine. Your options could include:
- Store all the data as-is, with no changes
- Store only the records where the signal value has changed from the previous value
- Perform some streaming FFT analysis to move the data to the frequency domain
- Perform some DCT or other to reduce the granularity of the data
- etc!
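To make the idea concrete outside of PDI, here is a minimal sketch of that scenario in Python – all the names (`store_all`, `changed_only`, `ENGINES`, `process`) are illustrative, not from the actual repository. The point is that each engine implements the same interface, and the metadata (here just a dict, but in a real system an external table) decides which one a signal gets:

```python
def store_all(records):
    """Engine 1: store every record unchanged."""
    return list(records)

def changed_only(records):
    """Engine 2: keep only records whose value differs from the previous one."""
    out, prev = [], object()  # sentinel so the first record always passes
    for value in records:
        if value != prev:
            out.append(value)
        prev = value
    return out

# Metadata: signal name -> engine. In a real metadata-driven system this
# mapping lives outside the code; adding an engine means adding a row of
# metadata (and an engine), not editing the dispatch logic.
ENGINES = {
    "temperature": store_all,
    "door_open": changed_only,
}

def process(signal, records):
    return ENGINES[signal](records)

print(process("door_open", [0, 0, 1, 1, 0]))  # -> [0, 1, 0]
```

The FFT and DCT options from the list would just be two more entries in the same mapping.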
The solution, in the end, is ridiculously simple: just use a simple mapping (sub-transformation), partition it, and use the partition ID in the mapping file name!
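In other words, the engine to run is resolved purely from the partition ID by building the file name at runtime, so adding a new engine means dropping in a new `.ktr` file plus a metadata row – no changes to the dispatching transformation. A hedged sketch of that resolution step (the directory and naming convention here are assumptions, not the actual repository layout):

```python
import os

def mapping_file(partition_id, directory="engines"):
    """Build the sub-transformation path from the partition ID,
    analogous to PDI substituting the partition ID into the
    mapping file name at runtime."""
    return os.path.join(directory, f"engine{partition_id}.ktr")

# Each partition runs its own engine file; "fft" is a made-up partition ID.
print(mapping_file("fft"))  # -> engines/enginefft.ktr (on POSIX)
```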
As always, you can find the code here. (Notice the creation of a dummy engine called enginepartition-id.ktr, which just keeps PDI happy and stops it moaning and preventing you from closing the dialog!)