Government organisations understand the importance of good data hygiene, good reference data sets, and data standards for interoperability. But data alone doesn’t tell the whole story. Content engenders context, which, in turn, completes the story. For content to operate at its full potential, it needs its own standards, its own semantics, and ultimately the ability to interoperate.
Content has been largely ignored in the data arena. It often gets treated like data: confined to cells in a database and moved around in static chunks like so much boxed cargo. The mechanisms meant for processing data limit content in many ways. Its complexity and nuance are dampened; its contexts are limited; its potential is hobbled.
The need for automatically discoverable, reusable, reconfigurable, and adaptive content has been spurred on by the fourth industrial revolution. This has taken the study and application of “intelligent content” to new levels of automation on the delivery side, a demand shaped by the rise of Content as a Service. It has opened the door for both content standards and applied semantics as ways of automatically delivering content alongside data for more targeted contexts.