HPCC Internal Entity Linking through SALT – A Quick Start Guide – Pt. 3

This post is the third part of a series that aims to provide simple steps for a novice user to “get going” with HPCC’s SALT Internal Linking. Please see here for further information on the series. 

We should by now have a file (people_draft.mod) containing all the ECL code needed to calculate the Field Specificities for our dataset. We can now proceed with:

  1. Importing the ECL Code into the IDE and calculating the Field Specificities
  2. Performing our first Internal Linking Iteration
  3. Performing subsequent Iterations

Continue reading

HPCC Internal Entity Linking through SALT – A Quick Start Guide – Pt. 2

This post is the second part of a series that aims to provide simple steps for a novice user to “get going” with HPCC’s SALT Internal Linking. Please see here for further information on the series. 

Having installed SALT, configured the IDE and sprayed the dataset into your cluster (as described in Part 1 of this series), you should now be in a position to

  1. Start preparing the solution in the ECL IDE
  2. Create a draft SALT Specification File
  3. Generate ECL Code based on the draft SALT spec file

Continue reading

HPCC Internal Entity Linking through SALT – A Quick Start Guide – Pt. 1


The purpose of this series of blog posts is to map out some very simple steps that will allow an HPCC SALT beginner to start Internal Linking Iterations, based on a basic set of Linking Rules. It does not aim to cover the full breadth of features in SALT, nor does it claim that everything stated here constitutes “good practice” – all that information can be found in the SALT documentation. It should, however, enable a novice to generate ECL code through SALT that compiles, as well as to examine the output of the various internal linking iterations.

The guide is split over multiple posts for the sake of readability, starting from the prerequisites and continuing with the sample dataset preparation, configuration of SALT specification files and finally, the execution of the linking iteration(s):

Continue reading

Get your staff right and the process will follow.

I was recently asked by a former colleague of mine to have a look at what appeared to be a process for Product Design. It quickly became apparent that the process was indeed interesting, and that its overview had been written by someone who knew what he was talking about. However, it wasn’t long before I realised that the process in question was implemented by Google Ventures… which made me ask myself:

Do great processes make great products?

Is there any value in trying to follow the processes implemented by traditionally strong and product-oriented brands such as Google, Spotify, Amazon and the rest?

Continue reading

What do the Professional Networking Sites Teach Us?

In Greece, we have a saying…

…roughly translating as “The dog that barks a lot does not bite”; the implication being that the louder you are, the less likely you are to deliver on your promise, whatever that might be. Having jumped on the bandwagon of social networking sites that are either strictly professional (e.g. LinkedIn, Yammer etc.) or can be used in a professional capacity (e.g. Twitter), I can’t help but replay the aforementioned Greek proverb in my head.

And more specifically, I’m referring to the notorious “Share Post/Link” feature, available on any self-respecting social network. You see, on those sites people share a lot of content – sometimes their own, sometimes somebody else’s. But surely, I can’t be the only one who thinks the following:

What have we learnt from the professional networking sites?

Credits: The XKCD-inspired graph was realised with the help of the D3 work performed by Dan Foreman-Mackey.

Some Practical Advice on Agile Project Contingency

In a perfect world…

…the life of a Scrum Master is bliss. We don’t pretend we know everything in advance, we don’t pretend we are in the head of the Product Owner and we are always honest about the real state of the product/development lifecycle through open demos, retrospectives and openly-publicized metrics.

We constantly dedicate our time to improving performance and delivery of value through best practices, continuous improvement, and collaboration with all the relevant stakeholders.

Until one magical day, after many iterations, we reach Agile Nirvana and can accurately predict the amount of work we can deliver in the… next sprint! (That’d be 2 to 4 weeks – hardly something to write home about.)

But organisations don’t operate in sprints – they operate in quarters or years and more often than not, senior management is interested in knowing what you can deliver in the next X number of months.


That’s the point when all Scrum Masters start pulling their hair out, screaming that “this is not how Agile works” and cursing about the travesty that is “agile waterfall-isation”. We keep kicking and screaming until we realise that the managerial demand won’t go away, and we start Googling advice from Agile Gurus on how to perform Project Planning in an Agile Environment.

Now, the purpose of this post is not to explain how to estimate and plan an Agile Project – there are plenty of people out there who can explain this far better than I can, and they have done so already. The purpose of this post is to share my personal, straight-to-the-point advice on Agile Project Contingency and its role and importance in such an environment.

So let’s get some facts straight…

Continue reading


Reporting End-of-Sprint Stats on an A4

Let me kick-off this post by quickly stating the following facts:

Compiling Reports Sucks…

…and the closer you are to the ground, the more it sucks. Having to look back while your “To Do” list is getting longer and longer seems to be a counter-productive concept (it’s not). After all, reports are just another way for “The Man” to keep an eye on you and constantly evaluate who the next flogging victim is going to be (not true either – I hope).

On top of that, reports usually require supporting stats and digging out that data can be tricky and/or time-consuming. Finally, in most cases, the objective of those reports is to explain why things have gone wrong/the production environment has exploded/the live database has disappeared (delete as required). 

As a result, I have yet to meet a report compilation enthusiast – I, for sure, am not one!

Lengthy Reports Cost Money

The longer it takes you to compile a report, the higher the direct cost (“time is money blah blah…”). Indirectly, the cost of having a Scrum Master compiling reports as opposed to running the Scrum or removing impediments can be even higher.

And the story does not end here. The longer the report, the longer it’ll take for the reader to find the information he/she needs and time is money… you get the idea.

Reports are actually needed…

…for many different reasons: either as a means of convincing your team sponsor to keep paying your wages or (more importantly) as a way to enable you as a team to “inspect and adapt” and become a better unit.

When it comes to Scrum and end-of-sprint reporting, I have seen and written many different types of reports, trying to explain how the sprint went, what we delivered, what the problems were, etc. Some of those reports were long or pointless or full of unnecessary clutter (and in some cases, all of the above). But ultimately, what I have found is that the traditional Scrum artifacts are enough to give you the whole picture:

Continue reading


Visualizing Strategic Objectives across Multiple Clients

For the past year or so, I have found myself in a situation that many Product Managers can probably relate to: our team’s main responsibility has been to look after a Web CMS platform – a platform used by a number of different clients under the same roof (in this case, the company I work for).

Even though the overall Technology Strategy is set at a company-wide level, the fact that most of our clients come from various market sectors means that all of them have their own unique requirements and roadmaps that need to be handled by a single team looking after a single product. On top of this, the platform should remain as “global” and re-usable as possible, i.e. we cannot start introducing client-specific features without evaluating their impact on the other clients.

As with most development teams around the globe, we also face the challenge of delivering maximum value at a minimal cost – in our case this means that we need to identify the features that are common across the majority of the clients and deliver top value. By addressing those items as a matter of priority, we manage to “kill two birds (or ten) with one stone”.

Continue reading


Data-driven testing with SoapUI Free Edition

Having spent more than 3 years in a “Web Service”-heavy development team, I can safely say that SmartBear’s SoapUI is a great tool; so great, in fact, that I have found myself becoming something of a SoapUI evangelist, much to the despair of some of my colleagues.

While the Pro Edition of SoapUI is geared towards data-driven testing, its price tag can be a hard pill to swallow, especially for smaller companies. The fact remains, however, that you can do a lot of the data-driven work in the free edition, as long as you have a very basic understanding of Groovy and perhaps a bit of imagination.

Prior to the day that my Development Manager had enough of me preaching about SoapUI and bought us a few licences for the Pro Edition, I managed to put together a couple of scripts that allowed me to simulate data-driven testing using only the capabilities of the free edition. Having struggled myself to find complete and relevant examples online, I thought it might be useful to share some of those scripts through an example scenario.
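To give a flavour of the approach before diving into the full post: a single Groovy Script test step in the free edition can loop over the rows of an external data file, push each row’s values into test case properties, and re-run the request step that references them. The sketch below only illustrates the pattern – the file path, property names and test step name are made up for the example, and the script runs inside SoapUI’s Groovy context (which provides the `testRunner` variable), not standalone.

```groovy
// Illustrative Groovy Script test step (names and paths are hypothetical).
// Each CSV row becomes one execution of the "GetUserDetails" request step.
def rows = new File("C:/data/testcases.csv").readLines()
rows.drop(1).each { line ->                       // skip the header row
    def (userId, expectedStatus) = line.split(",")

    // Expose the row's values as test case properties, so the request
    // can reference them via property expansion: ${#TestCase#userId}
    testRunner.testCase.setPropertyValue("userId", userId.trim())
    testRunner.testCase.setPropertyValue("expectedStatus", expectedStatus.trim())

    // Re-run the request step for this row of data
    testRunner.runTestStepByName("GetUserDetails")
}
```

An assertion on the request step (or a follow-up Groovy step) can then compare each response against `${#TestCase#expectedStatus}` for the current row.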

Continue reading
