#BigDataLDN – Day 2

So, BigDataLDN day 2, where to start?

A slightly quieter day made for a much more interactive experience for me – it was a pleasure catching up with so many people! Many of the vendors were just as keen to chat, although some folks were slightly jaded from the previous evening's drinks, I think! I didn't manage to find anyone doing automation and document processing, unfortunately. Any hints?

I did think the Memgraph app which tracked the #bigdataldn hashtag was a really clever use of a graph platform – I'm hoping more of that gets shared soon. There's no better way to demonstrate a product than using a domain that everyone understands!

Also, staying to the bitter end was great for swag collection!

This was also my first time on a panel discussion – The Rise of the Data Engineer – a great topic and one very relevant to my career in data. The 45 minutes covered such interesting topics as current trends and how to get into data engineering.

At the end we had Matt Parker doing the final closing keynote, and he was brilliant! So funny – and so true – that's the scary thing. His YouTube channel is here.

I for one look forward to next year – where no doubt the conference will be bigger and better.

#BigDataLDN 2022 Day 1

So, of course, BigDataLDN was back yesterday and boy did it kick off with a bang! Queues around the block and irate venue staff! Wowser!

By far and away the busiest day I’ve seen at Olympia. It was great to see so many vendors all committed to the event too. Plus of course, the very exciting new stage for DataMesh.

Needless to say I saw both of Zhamak's talks – all very interesting stuff. Some amusing predictions came out of the panel discussions, some of which I posted on Twitter, which inevitably led to the usual religious war breaking out! (Honestly, I've never seen such a polarising technology.)

Sadly, for the first year ever I was unable to get into the F1 talk – doh! But I'll look it up on YouTube. It's interesting they moved it to the AI/ML stage this year; I wonder what the angle there is.

Anyway, I'll be back today – not forgetting I'm on a panel debate where we're discussing the rise of the data engineer. See you there!

PLUG 22 – It’s back!

So, on the evening of 10th Feb, PLUG (Pentaho London Usergroup) returned! This time around we were at the National Archives, quite an impressive building.

Now, this event has been a LONG time in the making, and not just because of Covid either – it was quite some time ago that Adam first mentioned he had some plugins to show off. But we got there in the end, and it happened.

So; how were the talks?

First up – I talked data mesh! Just a 10-minute overview / introduction to the basics. There was plenty of discussion and I'm sure we could have gone on much longer with this one. One thing's for sure: it's one to watch.

Secondly – Adam covered the plugins they use with PDI – this was very interesting. You can read his slides here: https://slides.com/adamretter/pentaho-plugins-and-enhancements

Third up – Rob covered performance tuning in PDI/Kettle and it was very well presented too. We’ve all been there, and there was plenty to consider and review in this presentation – Slides here: https://hidrive.ionos.com/lnk/3j2gLA5v

Fourth, and not yet last, was Michal, who gave an excellent demo on setting up both Apache Hop and Kettle on an M1 ARM64-based Mac – he's even written up the tutorial here: https://mikekubacki.com/

And finally – we had a quick update from Benedict Faria, who is a solutions engineer with Hitachi Vantara and keen to help support the community in London. In fact, if all goes well we'll be back with Hitachi for the next meetup.

So yes, 5 talks – pretty crazy! It breaks our rule of limiting to 3, but as one was a lightning talk and one an overview, well, rules are made to be broken… Nevertheless everyone was really engaged, there were lots of questions, and it really was good to be back. There may even have been a beer or two afterwards… (And possibly even a taxi due to missing the last train – dammit, Kew is far away!)

Ah yes, one other thing: the name of the event. Well, we're continuing as a hybrid Apache Hop / Pentaho meetup for now. HPLUG or PHLUG is a bit of a mouthful, so any better ideas to encompass both communities in the name?

Hope that's useful – comment below! And AGAIN, many thanks to the National Archives for hosting!

#BigDataLDN 2021! DAY 1

Wow, what an event. It’s sooo good to be back!

So, who did I see talk and which vendors did I see?

First up was Jim Webber talking data science in Neo4j. This is always good, and had some interesting snippets – for example, that those who get the best pay are often the ones with weak links to other communities, not the ones most strongly connected within their current community. Err, yay! 🙂 The hard bit with Neo4j is the terminology; the software itself is almost comically easy to use.

Then Chris Fregly was talking quantum computing. This is excellent stuff – and for some reason it made me chuckle that they already support S3 from the AWS Braket quantum compute layer. Top prediction is that RSA will be cracked in 100 years, so your bitcoin is safe for now! We've just moved into era 2 of the 3 eras of quantum compute, so it's definitely moving on. He said to remember this talk in 20 years. Well, I'll be retired by then, so meh.

Then I saw TreasureData. Now these guys seem to be the first ever to have done customer 360 properly. They currently ingest 2Bn records a second – WOW. There's some very interesting identity resolution going on too – I also liked how they target promotions at anonymous/unknown users to try to get them registered so they become known. One client saw a $70M revenue increase from using this tool – I wonder what % of that they take? They must be VERY expensive.

Then the treat of the day – just look at this talk topic: “The fallacy of move fast break stuff”. Really good talk – check it out once it appears on YouTube. From https://twitter.com/JessicaCregg

So day 1 over; that then leaves day 2, when I'm speaking… yipes…

Updating dimensions in your DWH – Python, Kettle and Athena

I'm quite reassured that both the talks at BigDataLDN and other talks at Skillsmatter (RIP), which have been across many different technologies, share a recurring theme: building a data warehouse.

Now a few years ago everyone declared the DWH was dead. Well that couldn’t be further from the truth. It did fall poorly for some time, but it seems to me to have got a second wind.  It’s just virtualised now, but the modelling and concepts are more important than ever.

So, how do you maintain your dimensions? In a dimension record your data can be versioned, and you can choose whether you keep a history of your attributes (type 2) or just overwrite them with the latest value (type 1). For example, if a customer moves city, a type 1 dimension simply overwrites the city column, whereas a type 2 dimension closes off the current row and inserts a new version of it. There are all sorts of combinations and variants of these too.

Now, Kettle or PDI (an industry-standard open source ETL tool) has a lovely step which does ALL of this for you. So I can maintain a dimension with just two steps. It's also incredibly rich, so I can choose on a field-by-field basis exactly what I want to happen. Here's a picture:

[Screenshot: a Kettle transformation – a Table input step feeding a Dimension lookup/update step]

So, you can see here a Table input step, which runs a simple bit of SQL, and a Dimension lookup/update step which has about five fields of configuration – and boom! We have a fully maintained and updated dimension. Now rinse and repeat. Even a new user to Kettle could create this in 5-10 minutes.

Oh, and this is gloriously database-independent too. It doesn't matter what the DB is, source or target – it'll work. Your SQL may occasionally need to be tweaked, but nothing in Kettle would change.

And also, while I'm at it (this is NOT a sales pitch, btw!), Kettle will also generate the SQL for the target tables. And it'll insert a zero record for the case when you can't find the dimension (a quirk of dimensional modelling is that you must ALWAYS have a key in the fact table).

Now these days, where it's at is Python, not ETL tools. OK, so how do we do this in Python? Well actually, it's also extremely simple. This time it took me about half an hour to get it working, but that is more a reflection of my Python knowledge than of the tool itself. So how does it work? Let's work through the code line by line:

The first thing is to install the library “pygrametl” via pip (pip install pygrametl). It seems to be a mature library, still occasionally updated, so a good one to use.

Then, in our code:

[Screenshot: the usual Python imports]

So that’s the usual imports, etc, fine, boring..

[Screenshot: defining the two Postgres connections and running the source SQL]

This is quite nice: here we're defining two connections to Postgres, and running some SQL. We're also defining the mapping of the columns – something you don't need to do in Kettle. Nevertheless, with just a couple of lines we have our connection and we've run our SQL. Good stuff.
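
Since the original screenshot isn't reproduced here, below is a minimal sketch of what that step typically looks like with pygrametl – the connection strings, query and column names are made up for illustration, so treat it as a flavour of the approach rather than my exact code:

import psycopg2
import pygrametl
from pygrametl.datasources import SQLSource

# Source system and data warehouse connections (both Postgres in this example).
source_conn = psycopg2.connect("host=localhost dbname=sales user=etl password=secret")
dw_conn = psycopg2.connect("host=localhost dbname=dwh user=etl password=secret")

# The ConnectionWrapper becomes the default target connection for the dimension objects.
wrapper = pygrametl.ConnectionWrapper(connection=dw_conn)

# Run the source SQL and map the result columns onto the names the dimension expects.
source = SQLSource(connection=source_conn,
                   query="SELECT client_code, client_name, city FROM clients",
                   names=("client_code", "name", "city"))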

Now let's define the dimension itself:

[Screenshot: defining the type 1 and type 2 dimensions]

So there are two here. The first one is type 1 (overwrite); the second is type 2 (history). That's also pretty simple – we're just defining the fields and their uses. Good stuff. In fact, you could actually generate this from metadata… Ooh…
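
Again the screenshot isn't reproduced here, so here's a sketch of roughly what those two definitions look like – the obvious pygrametl classes are TypeOneSlowlyChangingDimension (overwrite) and SlowlyChangingDimension (history), though the exact classes and attribute names in my code may differ. The table and column names below are illustrative, and it continues the connection sketch above:

from pygrametl.tables import TypeOneSlowlyChangingDimension, SlowlyChangingDimension

# Type 1: changed attributes are simply overwritten in place.
clientdim = TypeOneSlowlyChangingDimension(
    name="dim_client",
    key="client_id",
    attributes=["client_code", "name", "city"],
    lookupatts=["client_code"])

# Type 2: a change closes the current row and inserts a new version of it.
clienthistdim = SlowlyChangingDimension(
    name="dim_client_hist",
    key="client_hist_id",
    attributes=["client_code", "name", "city", "valid_from", "valid_to", "version"],
    lookupatts=["client_code"],
    fromatt="valid_from",
    toatt="valid_to",
    versionatt="version")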

Next, let’s update the dimension!

[Screenshot: the loop that calls ensure for each row]

Again, that's gloriously simple. For each row in the data, we simply make one call to our library to “ensure” the dimension. This then does either an insert, an update, or nothing. (And then don't forget to commit and close our connections!)
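
Continuing the sketch above, the load loop is just a few lines – note that the type 1 and type 2 classes expose scdensure(), while the plain Dimension class has ensure():

# Load every source row into both dimensions, then commit and tidy up.
for row in source:
    clientdim.scdensure(row)        # insert if new, otherwise overwrite changed attributes
    clienthistdim.scdensure(row)    # insert a new version and close off the old one

wrapper.commit()
wrapper.close()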

Done!

So, now what? All of this is great, but there's one niggling problem here – both tools rely on the data being in a relational database, or more specifically one that supports updates. Athena (and other big data SQL engines on Hadoop) typically don't support UPDATE, so this doesn't work.

That means you can either rebuild your whole dimension each time using inserts, or find another way. And if you're going to rebuild, you need all the data to do so – sometimes the source doesn't actually hold that data because it's data you've curated, so in that case be sure to read the existing dimension first, before clearing it down!

So one way I've seen is to abuse SQL window functions (LEAD/LAG) to look forwards (or backwards) over the data and derive the valid-from / valid-to and version fields in a view. Whilst this is really nice, as it hugely simplifies the stored data, it puts an awful lot of trust in the database being happy to run some more complex SQL.

So I think that leaves a hybrid – this is actually common in data architecture, and basically the advice is: use the right tool for the job. So in a hybrid setup your golden source of dimensional data is kept and maintained in an RDBMS, but then blatted over (technical term, sorry) to S3 or HDFS after updates. Again, in Kettle this is something that is embarrassingly easy! (And probably in Python too – see the sketch below.)
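
As a rough illustration of the Python side, here's a minimal sketch of that “blat it over” step, assuming pandas, SQLAlchemy, pyarrow and s3fs are installed – the connection string, table and bucket names are made up:

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://etl:secret@localhost/dwh")

# Read the golden-source dimension from the RDBMS...
dim = pd.read_sql("SELECT * FROM dim_client", engine)

# ...and overwrite the copy that Athena/Presto actually queries (a parquet file on S3).
dim.to_parquet("s3://my-datalake/warehouse/dim_client/dim_client.parquet", index=False)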

Any other thoughts? How do you manage your dimensions when your DWH is based upon a filestore rather than an RDBMS?

What’s in a name? #PLUG

Good morning,

An off-the-cuff comment got me thinking the other day: should we change the name of the #Pentaho London Usergroup (meetup) to KLUG, i.e. Kettle London Usergroup?

Well, our keen supporters will remember that at one point we tried to rebrand as an open source BI meetup, and we brought in all sorts of complementary tools and talks – not all open source, however! That unfortunately didn't work and just spread things too thin. There is value in meetups being focused on a narrow area.

So what does the word Pentaho mean versus the word Kettle? Historically, Pentaho was of course the company name, and it was also used interchangeably as the product name, but Hitachi have now disowned the word Pentaho – in fact, Pentaho swag is now banned in the offices. So it means nothing to them. In my world it's simple – it's the original tech stack, so including the BI Server, all the modules within that, the metadata layer, Reporting and Mondrian, and of course Kettle. Oh, and ctools 🙂

Now, all of those modules apart from ctools live on independently of the BI Server and are used across various other stacks. So “Pentaho” is really just a collection of libraries at its heart (think Saiku, Tibco, etc.).

Ctools, on the other hand, is still an extremely capable tool. There's little else out there that is so flexible, despite it being positively ancient now. Just imagine if someone took the ctools architecture and rebuilt it for modern times – it would be a killer app.

I digress. So, Pentaho is just a collective name for a bunch of BI tools!  Therefore, I think PLUG (for now) can remain our acronym! What do you think?  Any other options?

PS: Of course, with Kettle being renamed to Hop and moving to ASF, that presents another renaming opportunity, but somehow HLUG just doesn’t flow off the tongue correctly 🙂


Using #Kettle (#Hop) with #Apache #Airflow

A few days ago someone mentioned using Airflow to orchestrate their messaging solution, so I figured it was worth a look. In essence, it's a scheduler that understands the individual steps in your job, and the dependencies between them. Of course, it scales out to workers, etc. All good.

In fact, I recommend watching a few videos on YouTube and you'll get a feel for it. Then, whilst searching for something else I came across this blog – do take a read, it is quite interesting.

This is particularly interesting: reading through it, the barrier to entry was learning all of Airflow's operators (and learning to love the bugs/quirks of each one!). They propose a generic operator, and the developers then just concentrate on the code. In fact the philosophy of Airflow is, and should be, to separate the actual workflow code from the job, so this makes a lot of sense.

Now, you know where I'm going with this. Kettle IS that generic operator, so this could be incredibly powerful. So let's have a go… I think the devil in the detail is at what level you allow the orchestration to happen in Airflow and at what level you do it in the job definition. My personal feeling is that jobs in Kettle should always be self-contained anyway, so if you keep to that you won't go far wrong. There'll always be a need to run jobs independently, so don't lose that feature – and you don't want duplication either, as the two will get out of sync. However, Airflow is EXACTLY the scheduler we've been looking for to manage the interactions of your hundreds of jobs. Unfortunately, if the job does all of the orchestration you lose the clever bits of Airflow, including re-runnability/resume and performance tracking. See, I told you there's a devil here. Some hybrid may be the best way to go, where you do keep a complete job for manual runs, but break out the stages for Airflow. I would only put really high-level breaks, and only very simple logic, into Airflow however – for example, at this level:

  • Load data to stage
  • Process dimensions
  • Process Facts
    • Could be argued to be broken down further, if you already have rerunnability set up.

So, having written all that out, and looked at the typical DAGs in Airflow, I have concluded that you can generate a DAG from a Kettle job – and boom! You have the best of both worlds. Of course, this would be a post-commit hook and happen automatically, so they can never get out of sync. #CI

Note: this approach will of course break down if you're ingesting data in a metadata-driven fashion (and therefore only running one job). But maybe Airflow has an API you could report progress through? One to research…
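
To give a flavour of the generation idea, here's a rough sketch that builds a DAG from a Kettle job file. It assumes the usual .kjb XML layout (an <entries> block and a <hops> block), that each stage of the master job is a sub-job entry pointing at its own .kjb via <filename>, and it uses made-up paths – so very much a sketch to adapt, not a finished post-commit hook:

import xml.etree.ElementTree as ET
from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

KITCHEN = "/opt/data-integration/kitchen.sh"  # made-up install location

def dag_from_kjb(kjb_path, dag_id):
    # One BashOperator per sub-job entry in the master .kjb.
    tree = ET.parse(kjb_path)
    dag = DAG(dag_id, start_date=datetime(2019, 1, 1), schedule_interval="@daily")

    tasks = {}
    for entry in tree.findall("./entries/entry"):
        name = entry.findtext("name")
        child_kjb = entry.findtext("filename")
        if not child_kjb:
            continue  # skip START, SUCCESS, mail entries and so on
        tasks[name] = BashOperator(
            task_id=name.replace(" ", "_"),
            bash_command="{0} -file={1}".format(KITCHEN, child_kjb),
            dag=dag)

    # Recreate the job's dependencies from its hops.
    for hop in tree.findall("./hops/hop"):
        src, dst = hop.findtext("from"), hop.findtext("to")
        if src in tasks and dst in tasks:
            tasks[src] >> tasks[dst]

    return dag

dag = dag_from_kjb("/opt/jobs/jb-load-warehouse.kjb", "jb_load_warehouse")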

OK, let's get it started. Well, that's easy – just four commands; follow the Quick Start. If you're using Anaconda then be sure to install using conda, not pip. Once done, it starts straight away – talk about easy. It's interesting that the UI is a separate service to the scheduler too. Nice – a sensible separation there.

So how are we to do this? Well, the quick and easy approach is of course to use the BashOperator and call Kitchen. There's some discussion of this approach on Stack Overflow.

And in fact, it's super simple. What I love about Airflow is that the schedule itself lives in the code of the DAG, and that means all you need to do to update your schedules is pull your latest code. So here it is:

[Screenshot: the DAG definition – a BashOperator calling kitchen.sh on a simple schedule]

You'll notice the schedule is very familiar – it's basically a cron string (or a simple timedelta, as in the PS below). And that's it, our job is now running. Airflow then captures duration, failures, logs and so on.

Another approach would be to build a new operator, I guess, which would do little more than launch Kettle anyway. Perhaps pyJava could be used, but I suspect that's overkill. The advantage of the bash operator would be around separation of the processes, resources etc.
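
For what it's worth, a dedicated operator really would be little more than a thin wrapper around the same command line – something like this minimal sketch (class name, default path and parameters are all made up, and it's not a polished Airflow plugin):

import subprocess

from airflow.models import BaseOperator
from airflow.utils.decorators import apply_defaults

class KitchenOperator(BaseOperator):
    # Runs a Kettle job via kitchen.sh and fails the task if kitchen exits non-zero.

    @apply_defaults
    def __init__(self, kjb_file, kitchen="/opt/data-integration/kitchen.sh", *args, **kwargs):
        super(KitchenOperator, self).__init__(*args, **kwargs)
        self.kjb_file = kjb_file
        self.kitchen = kitchen

    def execute(self, context):
        # check=True turns kitchen's non-zero exit codes into a task failure.
        subprocess.run([self.kitchen, "-file={0}".format(self.kjb_file)], check=True)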

But hang on, there's an HTTP operator – why don't we just call Carte? I'm not quite sure how that would work in scale-out terms. Perhaps each Airflow worker has a single Carte running on it, and when you call Carte you just always call the localhost one? Or perhaps you can devise a way to teach Airflow about the Carte cluster.
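
Just to sketch the thought, something along these lines might work with the HTTP operator – but be warned, the Carte details here (the executeJob endpoint, its parameters and the repository credentials) are assumptions from memory to verify against your Carte version, and carte_local is a made-up Airflow connection id pointing at the local Carte instance:

from datetime import datetime

from airflow import DAG
from airflow.operators.http_operator import SimpleHttpOperator

dag = DAG("carte_beer_tags", start_date=datetime(2019, 1, 1), schedule_interval="@daily")

run_job = SimpleHttpOperator(
    task_id="run_beer_tags_on_carte",
    http_conn_id="carte_local",    # e.g. http://localhost:8081
    method="GET",                  # for GET requests, "data" is sent as URL parameters
    endpoint="kettle/executeJob/",
    data={"rep": "my_repo", "user": "cluster", "pass": "cluster",
          "job": "jb-extract-beer-tags", "level": "Basic"},
    dag=dag)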

So what else is there? Well, they're very dynamic – DAGs can of course instantiate other DAGs. And the original product came from Airbnb – I guess that's a good thing too? 🙂 It's easy to pass parameters as well, and there are lots of examples of how to do that.

For anyone putting up with Windows Scheduler, or cron jobs, and the sometimes obtuse places you have to look for logs/problems, I do recommend taking a closer look at Airflow. Let me know how you get on!

PS: The DAG itself is as simple as:

from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from datetime import datetime, timedelta


default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2015, 6, 1),
    'email': ['airflow@example.com'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=5),
}

# Run the job once a day.
dag = DAG('KettleGetBeerTags', default_args=default_args, schedule_interval=timedelta(days=1))

# A single task that shells out to kitchen.sh to run the Kettle job.
t1 = BashOperator(
    task_id='RunKettle',
    bash_command='/Users/codek/apps/kettle-remix-20190130/data-integration/kitchen.sh -file=/Users/codek/projects/airflow/jb-extract-beer-tags.kjb',
    dag=dag)

Dunnhumby and their use of GCP

So, a few days ago I came across this video, and as it happens Jamie is actually talking at the GCloud meetup in Manchester next week – it's quite tempting to go, in fact!

It’s a really interesting watch. Especially after I spent quite some time at Aimia wrestling with exactly the sort of problems that are described here!

Key points of interest (to me!) are really:

Transient (ephemeral) clusters, enabled via a shared Hive metastore backed by Cloud SQL. This is clever stuff. You run production jobs on their own clusters, which also means it's one client per cluster – and being single-tenant removes so much complexity! And you're not going to get a flexible architecture where you can do that without using the cloud.

The usage stats are amazing! Just look at these numbers:

  • 17k compute hours/day
  • 175 data scientists
  • about $7,600 a day (that's pretty cheap! And remember it's all traceable down to the job)
  • 295 TB
  • 35 clients
  • ~3k nodes

Terraform is used to handle updates, and can build / tear down the entire cluster.

Oh, another really interesting thing – each solution is spread across two GCP projects, per client: one is the store, and the other is called the runtime (all the ephemeral clusters etc.).

They don't use BigQuery (much), partly because some of their clients prescribe different cloud providers.

A particularly amazing graph at the start shows how their costs reduced drastically once they introduced ephemeral clusters. Now this is particularly interesting – Cloud gives you the horsepower to throw resources at the problem and prove your product. THEN as the tech improves you’re able to achieve the same with less – which really must have profound implications for your business.


So some questions I thought of..


  • What about analytics across multiple clients?  Is that a strict no-no?  Not even aggregate comparisons etc?
  • Is there any standard data model? Or is each client different?
  • Cloud portable, but not cloud agnostic. Why’s that then? Is it because any agnostic layer will only ever support the lowest common denominator tech wise?
  • Do we see an end to needing to deploy network layers (firewalls etc) as things move more towards serverless?


RequireJS again!

Remember this?

https://dankeeley.wordpress.com/2018/05/03/requirejs-jquery-plugins-and-pentaho-cde/

Well, I came across a more complex example – a library with various dependencies. After reading all the docs MANY times I built a shim, but it just didn't work, hence in a moment of frustration I went for the last-resort option of a post on the community site:

https://dankeeley.wordpress.com/2018/05/03/requirejs-jquery-plugins-and-pentaho-cde/

Note the deathly silence!

However, later on I was editing my shim inline rather than as an external file and, hang on, I spotted a warning – there was a missing comma! Arrgh. How annoying. So after all that the shim WAS correct, and my understanding of RequireJS was indeed valid.

So here’s how to do it:

var requireConfig = requireCfg.config;

if(!requireConfig['amd']) {
  requireConfig['amd'] = {};
}

if(!requireConfig['amd']['shim']) {
  requireConfig['amd']['shim'] = {};
}

// Shim for the alpaca plugin - load it after jQuery and expose it via jQuery.
requireConfig['amd']['shim']["cde/resources/public/dashboard/plugins/alpaca"] = {
  exports: "jQuery",
  deps: {
    "cdf/lib/jquery": "jQuery"
  }
};

// Same again for handlebars.
requireConfig['amd']['shim']["cde/resources/public/dashboard/plugins/handlebars"] = {
  exports: "jQuery",
  deps: {
    "cdf/lib/jquery": "jQuery"
  }
};

requirejs.config(requireCfg);

define([
  'cdf/lib/jquery',
  'amd!cde/resources/public/dashboard/plugins/handlebars', // note the comma - missing it was the cause of all the pain!
  'amd!cde/resources/public/dashboard/plugins/alpaca'
], function($) {

});

Note: this is a simplified example – there are more dependencies, but as you can see you can simply copy and paste those as needed.

There's another point too: do NOT call your resource alpaca, because that clashes with the alpaca call itself. Call it alpacaShim or something instead.

I hope you all continue to enjoy the horrors of requireJS!

Until next time!

Dan

Security – Custom handling of login failures in #Pentaho

Not so long ago I blogged about using salted authentication:

https://dankeeley.wordpress.com/2018/09/13/pentaho-security-full-jdbc-passwords-with-salts/

Well, it turns out the Pentaho authentication layer, despite being built on Spring Security, is not really “up to date” in a lot of ways.

A classic example is preventing brute-force authentication attempts.

Now, I'm currently integrating with a home-grown security system, hence this is an issue. If we were going with LDAP, like most would, it would be a non-issue, because your LDAP would handle it.

So, what are the requirements? Well, simply:

  • Call a specific procedure once logged on
  • Call a different procedure if login fails

The former is easily achieved using a session startup action – see sessionStartupActions.xml in pentaho-solutions/system. In an ideal world you could use the Startup Rules cTool, which lets you code these rules using Kettle, but unfortunately it's broken in 8.1 (I believe it's fixed in 8.2, but not proven).

The latter is somewhat harder: you must build a custom Spring authentication failure handler. Thankfully that's as easy as:

  1. following this ultra-simple blog example: https://www.baeldung.com/spring-security-custom-authentication-failure-handler
  2. enabling the new handler in the applicationContext-spring-security.xml file.

By the way, the username is in the request, so access it like so:

request.getParameter("j_username");

So there you go – we now have a relatively easy hook into authentication failures in the BA Server.