Friday, June 29, 2018

Healthcheck for CloverETL server

In the modern, cloud-oriented world there is a need for periodic health checks of servers and services. Let's imagine you have a CloverETL server deployed on AWS and you need to know that your host machine, application container, and CloverETL server are all running.

AWS has you covered for the first two points with health checks on the load balancer.

This might not cover the edge case when your host machine and application container (it might be Tomcat or any other supported application container) are running, but the CloverETL server is not.

There are multiple ways to check health or "liveness" (if that's even a word).

Use HTTP API call

CloverETL server supports an HTTP API for multiple operations; the one you are interested in is 'cluster_status'.
Please ignore the name; the same operation works on a cluster or even on a single installation.

You can use the health check functionality of a load balancer to ping that endpoint and get the status.

The disadvantage of this approach is that, by default, the HTTP API is protected by HTTP Basic authentication, which might be a problem for some health check services.
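As a sketch of what such a check could look like outside of a load balancer, here is a minimal Python example that builds an authenticated request for the cluster_status operation. The API path, server URL, and credentials below are my assumptions; adjust them to your installation.

```python
import base64
import urllib.request

def build_health_request(base_url, user, password):
    """Build a GET request for the cluster_status HTTP API operation,
    with an HTTP Basic authentication header attached."""
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"{base_url}/simpleHttpApi/cluster_status",
        headers={"Authorization": f"Basic {credentials}"},
    )

# Hypothetical server URL and default credentials -- adjust to your install.
req = build_health_request("http://localhost:8080/clover", "clover", "clover")
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) and checking for a 200 response would complete the health check.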

Calling accessibility page

CloverETL server has a page that can be used without any authentication.

Calling this page will result in one of three states:
  1. OK/200
  2. 500 
  3. 503
This way you can use any health check application or service to learn whether your CloverETL is alive.
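A minimal sketch of how a custom health check could interpret those status codes; the mapping of non-200 codes to verdicts below is my own reading of the three states:

```python
def interpret_accessibility_status(code):
    """Map the accessibility page's HTTP status code to a health verdict."""
    if code == 200:
        return "healthy"      # server is up and responding
    if code in (500, 503):
        return "unhealthy"    # application error / service unavailable
    return "unknown"          # anything else deserves a closer look
```

Any monitoring tool that can issue an HTTP GET and branch on the status code can apply the same logic.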

Thursday, June 21, 2018

Flat files and newlines for different OS

CloverETL can read, as one of its many sources, flat files, i.e. files without hierarchical structure, with data stored in a human-readable format. A simple example is the still-popular CSV.

CSV means comma-separated values, i.e. columns of data are separated by a ',' (comma) delimiter.

For such files, CloverETL has the FlatFileReader component, which can read CSV with different delimiters ('|' and ';' are other popular ones). It can read flat files not only from the local filesystem but also from remote ones (FTP, SFTP, S3).
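As a quick illustration of delimiter variety (outside of CloverETL), Python's standard csv module can guess the delimiter from a sample when you restrict it to a few candidates:

```python
import csv
import io

# A small semicolon-delimited sample (';' is one of the popular alternatives).
sample = "id;name;city\n1;Alice;Prague\n2;Bob;Brno\n"

# Sniffer guesses the dialect; restricting candidates makes detection reliable.
dialect = csv.Sniffer().sniff(sample, delimiters=",;|")
rows = list(csv.reader(io.StringIO(sample), dialect))
```

CloverETL's Extract metadata plays a similar guessing role, but it also infers field names and data types, not just the delimiter.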

For each file you want to read with a FlatFileReader, you will need metadata. The reader provides an easy way to create metadata for an existing file via the Extract metadata functionality.

This option will parse the file and produce metadata (a description) of the file, i.e. the list of fields, their data types, etc.

One of the issues you might encounter in the real world is that you created your metadata from one version of a flat file, but in reality files can come from various sources, created on various operating systems. Each operating system implements its own newline delimiter.

Extract metadata will pick up the newline from the one file you triggered it on. But don't worry, there is a way to be prepared for files from different operating systems.

You just need to:

  1. edit the created metadata (double-click the edge carrying the metadata)
  2. click the first row with the name of the metadata to get its properties in the right-hand column
  3. select the last option in the Record delimiter field

This option allows you to read files from different operating systems without issues. (You can even type into that field and use delimiters which are not in the dropdown; just give it a try!)
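To see why this matters, here is a small Python sketch (independent of CloverETL) showing how one "accept newlines from any OS" mode, universal newlines, normalizes all three conventions into the same records:

```python
import io

# The same three records with Unix (\n), Windows (\r\n) and old Mac (\r) newlines.
data = b"a,1\nb,2\r\nc,3\r"

# Opening in text mode with newline=None enables universal-newline translation,
# so all three conventions are read back as plain '\n'.
lines = io.TextIOWrapper(io.BytesIO(data), encoding="utf-8", newline=None).read().split("\n")
```

The Record delimiter option described above gives FlatFileReader the same kind of tolerance.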

Friday, June 1, 2018

Connecting to Amazon Neptune with CloverETL


Just at the end of May 2018, AWS pushed their new service, Neptune, to general availability (as promised at re:Invent 2017). I decided to play with it right away and try to connect to Neptune with my “weapon of choice”, CloverETL.

What is AWS Neptune? Let me steal directly from AWS as they surely have better copywriters than I ever will be: “Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets.”

So Neptune is a managed (like other RDS databases) graph database (like Neo4j). Graph databases use structures like “nodes” (or "vertices") and “edges” to store and represent data and the relationships between objects.

Graph databases can be helpful whenever relationships between objects get “a bit complicated” or just plainly massive. Common use cases for graph databases are social networks (you want to traverse all followers, and their followers, of a specific person), fraud detection (you want to uncover fraud rings), etc.

For this exercise I spun up a Neptune cluster and a CloverETL server on EC2.
(Both Neptune and the EC2 instance live in the same VPC and security group.)

Amazon Neptune supports two different graph query languages, Gremlin and SPARQL. I will be using Gremlin as I am more familiar with it.

Check for more information and great tutorials for Gremlin.

There are multiple ways to use Gremlin to connect to Neptune: you can write some code in Java or Groovy (for that we could use a CustomJavaComponent), in Python (you can call Python scripts from CloverETL with an ExecuteScript component), or you might want to use .NET or Node.js.

I will use another feature of Amazon Neptune: its HTTP endpoints and REST interface.

The documentation tells me that I can call Amazon Neptune with an HTTP request; they use ‘curl’ in the documentation. In CloverETL, the same job is done by the HTTPConnector.

So let's make a quick POST call.
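As a sketch of what the documentation's curl call looks like in code, here is the same POST assembled with Python's stdlib; the endpoint URL is a placeholder, just like the parameter used in the graph:

```python
import json
import urllib.request

# Placeholder endpoint -- in the CloverETL graph this is a parameter,
# since it changes whenever the Neptune cluster is recreated.
NEPTUNE_ENDPOINT = "https://your-neptune-endpoint:8182/gremlin"

payload = json.dumps({"gremlin": "g.V().limit(1)"}).encode()
req = urllib.request.Request(
    NEPTUNE_ENDPOINT,
    data=payload,
    headers={"Content-Type": "application/json"},
)
# urllib infers the POST method whenever a request body is supplied.
```

In the CloverETL graph, the URL, Request method, and Request content properties of the HTTPConnector carry exactly these three pieces of information.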

I created a graph on my CloverETL server in my CloverETL Designer and pasted an HTTPConnector from the Palette on the right.
(Tip: use Shift + Space to open the Add component dialog; this could make you a bit more efficient.)

I connect the first output port of the HTTPConnector to a Trash component for now.

In the HTTPConnector I configure the URL property with the value of the HTTP endpoint of Amazon Neptune, the Request method, and the payload in the Request content property.

You might notice that I created a parameter for the endpoint, as it might change whenever I recreate the Neptune cluster.

I created a simple graph (a "real graph in a graph database", not a CloverETL graph) in my Neptune cluster following .
I might be doing all these steps in CloverETL, but using Gremlin's REPL console is a bit faster. CloverETL will help us later to enhance and automate data ingestion, and it can produce reports with the data in different formats, which is not really possible via the Gremlin console.

I am using the JSON document {"gremlin" : "g.V().limit(1)"} to run a query which will return the first vertex from the graph.

After I run this graph I can check the response on the edge coming from the HTTPConnector directly in CloverETL Designer.

The REST API communicates via the JSON format; the record produced by the HTTPConnector will contain the response content and headers.

I will use functionality of CloverETL which allows me to parse JSON documents returned from the REST API with other components directly in the same process.

There are two: JSONReader (uses XPath) and JSONExtract (uses a SAX parser and something like a "JSON schema").

JSONExtract can parse a JSON document, build a schema for it, and show the user a visual tree that will help him or her map out the JSON elements of interest.
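Outside of CloverETL, the same extraction can be sketched in a few lines of Python. Note that the response document here is only a representative shape; the exact fields of a Gremlin-over-HTTP response can vary between server versions:

```python
import json

# A trimmed, representative Gremlin HTTP response -- the real payload
# (GraphSON) is more deeply nested and version-dependent.
response = """{
  "requestId": "abc-123",
  "status": {"message": "", "code": 200},
  "result": {"data": [{"id": 1, "label": "person"}]}
}"""

doc = json.loads(response)
status_code = doc["status"]["code"]          # for error handling
first_vertex = doc["result"]["data"][0]      # the vertex returned by g.V().limit(1)
```

JSONExtract does the equivalent mapping visually: each element you tick in the tree becomes a field on an output port.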

For this I will copy the value from the content field of the record (the value will be available after a successful run of the graph) and store it as a file, NeptuneResponse.json, in the data-in subfolder.

(Tip: whenever I work with JSON I like to add a JSON plugin that makes JSON manipulation easier. I take advantage of the fact that CloverETL is built on top of the Eclipse platform, so there are a bunch of plugins available. Currently I am using Json Tools.)

I will prepare for parsing the response from the HTTPConnector by placing a JSONExtract component directly on the edge of the CloverETL graph, just before the Trash component.

Now I need to point JSONExtract at the saved file by configuring its File URL property.
(Tip: Just drag the file from data-in folder onto JSONExtract component, that's the easiest way.)

Then I open the Mapping dialog by clicking the ellipsis on the Mapping property and let the component produce the visual tree by clicking the Generate tree structure button.

In the mapping document I will map all fields which are of interest to me.

This will produce two connected output ports with the appropriate metadata.

The last thing for JSONExtract is switching the value of the File URL property from the physical file to a different approach, where the JSON document is provided dynamically on the CloverETL edge (by running the graph) and parsed automatically at runtime.

The syntax for it is: port:$0.content:discrete (that's the value that should be in File URL now).

I mapped out two ports: the first one contains the parsed values returned from Neptune, the second contains the response type (200 when everything went OK, a different code and an error message otherwise) for possible error handling.

This combination of components could easily be reused by wrapping it into a Subgraph, which would allow greater reusability, but that's a topic for a future article.