WIRED reports that
ALL UNITED AIRLINES flights in the US were grounded this morning for nearly an hour over “dispatching information.”
There’s much speculation around the possibility of erroneous flight plans being generated by the flight planning system and even a tweet about a possible hack of the system.
Whatever the cause, it points to a silent but growing problem that receives little attention. Ours is a world of increasing reliance on electronic systems that orchestrate more and more of our lives. These systems govern many safety-critical functions and are driven to automatic and autonomic operation by data. While much is made of Data Science and its disciplined application to drive greater revenue and safer interactions between man and machine, we are forgetting the much bigger and deeper field of Information Science.
The thing is, data is only part of the picture. With our ever-increasing dependence on automatic and autonomic systems, we are going to have to take a holistic look at how we and our machines compose the ever-growing amount of data into usable and trustworthy information. Data Science is the discipline of extracting knowledge from data. But there is a difference between having knowledge and knowing it. Knowledge is the accepted inferences made from data. Knowing is the ability to understand the full context and veracity of that knowledge in relation to the wider system within which it is embedded. It is the ability to trust the knowledge that you or your machines have inferred.
This knowing begins with being able to trust your data: being able to unequivocally assure every component of the system that the data it is being given is what it says it is and comes from where it says it comes from – at the very least. Each use-case will add more to the definition of trusted data, but every use-case needs to be able to verify these two basic facts. Note that I said verify. I didn’t say trust. Trust is almost always assumed.
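Those two basic facts – the data is what it says it is, and it is from where it says it is from – can be checked mechanically rather than assumed. Here is a minimal sketch in Python; the sender registry, key handling, and message fields are hypothetical, purely for illustration:

```python
# A minimal sketch of "verify, don't trust": before a component consumes a
# message, it checks (1) that the payload is what it claims to be (content
# digest) and (2) that it comes from where it claims to come from (an HMAC
# keyed per known sender). The registry and keys below are hypothetical;
# in practice keys would live in an HSM or key store.
import hashlib
import hmac

SENDER_KEYS = {"flight-planner": b"shared-secret-for-illustration"}

def verify_message(sender: str, payload: bytes, claimed_digest: str, tag: str) -> bool:
    """Accept the payload only if both basic facts can be verified."""
    # Fact 1: the data is what it says it is.
    if hashlib.sha256(payload).hexdigest() != claimed_digest:
        return False
    # Fact 2: the data is from where it says it is from.
    key = SENDER_KEYS.get(sender)
    if key is None:
        return False
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload = b'{"flight": "UA123", "route": "SFO-ORD"}'
digest = hashlib.sha256(payload).hexdigest()
tag = hmac.new(SENDER_KEYS["flight-planner"], payload, hashlib.sha256).hexdigest()
print(verify_message("flight-planner", payload, digest, tag))        # True
print(verify_message("flight-planner", payload + b"x", digest, tag)) # False
```

Note that nothing here is assumed: an unknown sender, a tampered payload, or a forged tag all fail the check explicitly.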
It comes from within its own system, so surely it can be trusted – right?
There are firewalls, ACLs, tripwires, log parsing, alerting and a plethora of other defences – right?
Those defences are all well managed, well understood and well maintained – right?
You understand the provenance of your information – right?
Provenance:
1. the place of origin or earliest known history of something.
2. a record of ownership of a work of art or an antique, used as a guide to authenticity or quality.
A much abused term in the modern parlance of Engineering. Often used by companies and individuals who wish to bolster the credibility of their ‘inventions’ which are often no more than the regurgitation of broken technology patterns and modes of man-machine behaviours.
The truth of knowing is the purity found in the absence of assumption.
Within our discipline of Engineering it’s a tough order to fill. Engineers are so used to patterns of assumed trust and behaviour.
We trust our key stores.
We trust our CAs.
We trust those whose veracity we are unable to validate, because it makes our lives easier. Then we are surprised when that trust proves false. We are surprised when our implied modes of trust are circumvented, from within and without, with ease.
The problem is knowing the provenance of your information. As a discipline, we have to become more aware of the challenges of knowing our information. It is no longer enough to be able to trust our data.
I work for a company called Guardtime. We believe we have a robust and provable solution to this challenge of information provenance. We believe we can equip your data and information handling with a leading-edge technology that will allow you to know your information, and allow your machines to know your information. It’s called KSI. There’s a great paper on what this technology is – go read it.
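The details of KSI are in that paper, but the underlying principle it builds on is widely published: events are hashed into a Merkle tree, and each record keeps a short hash chain to the root, so anyone holding the root can later verify a record’s membership without trusting keys, custodians, or the record’s handler. The following is a generic illustration of that principle in Python – it is not the KSI API:

```python
# Generic Merkle-tree membership proof: each leaf gets an "audit path" of
# sibling hashes; recomputing up the path must reproduce the published root.
# This illustrates hash-linked provenance in general, not Guardtime's KSI.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return the root hash and, per leaf, its audit path of (sibling, side)."""
    level = [h(leaf) for leaf in leaves]
    paths = [[] for _ in leaves]
    pos = list(range(len(leaves)))  # each leaf's current position in the tree
    while len(level) > 1:
        if len(level) % 2:          # duplicate the last node on odd levels
            level.append(level[-1])
        for i, p in enumerate(pos):
            sibling = p ^ 1
            side = "L" if sibling < p else "R"
            paths[i].append((level[sibling], side))
            pos[i] = p // 2
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
    return level[0], paths

def verify(leaf, path, root):
    """Recompute the root from a leaf and its audit path."""
    node = h(leaf)
    for sibling, side in path:
        node = h(sibling + node) if side == "L" else h(node + sibling)
    return node == root

records = [b"log-entry-1", b"log-entry-2", b"log-entry-3", b"log-entry-4"]
root, paths = build_tree(records)
print(verify(records[2], paths[2], root))   # True
print(verify(b"tampered", paths[2], root))  # False
```

The key property: verification needs only the record, its short audit path, and the widely witnessed root – no secret, and no trust in whoever stored the record.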
Over the coming weeks I’ll be expanding on some use-cases within my own disciplines of cloud solutions and devops – so stay tuned, but in the meantime try some light reading: