Hi, this seemed the most relevant group to post this link in. Have you heard of the NZ Data Commons Project? http://datacommons.org.nz/ Their blueprint talks about the need for data contributors to be factored into plans for reusing data. Let me know if you’d like an intro to the authors. Apologies if you’ve already seen this.
I think this links to the concept of a data management plan, which should connect the data generators with the aggregators and the analysts. In this manifesto the focus seems to be on building “trust” in data by applying metadata, which in some ways feels like an intuitive approach. The question I have with this approach, though, is how you manage areas where the data is very wide but thin (unlike the pipe dataset example in WCC). Presumably what this means in practice for health is a high number of associated definitions and metadata for very small gains? Perhaps, as an emerging technology, this is where ML could be used to generate the metadata?
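To make the automated-metadata idea concrete, here is a minimal sketch of deriving column-level metadata (type guess, null rate, distinct count) from the data itself rather than hand-writing definitions for every field. The column names and records are hypothetical health-style examples, not from the Data Commons project, and a real system would use richer inference than this:

```python
# Sketch: auto-generating basic column metadata for a "wide but thin"
# table (many fields, few rows). All names here are illustrative.
from collections import Counter

def infer_column_metadata(rows, columns):
    """Derive a type guess, null rate, and distinct count per column."""
    meta = {}
    n = len(rows)
    for col in columns:
        values = [r.get(col) for r in rows]
        non_null = [v for v in values if v is not None]
        type_counts = Counter(type(v).__name__ for v in non_null)
        meta[col] = {
            "inferred_type": type_counts.most_common(1)[0][0] if type_counts else "unknown",
            "null_rate": round(1 - len(non_null) / n, 2) if n else 1.0,
            "distinct": len(set(non_null)),
        }
    return meta

# Wide but thin: five fields, only two records (hypothetical data).
columns = ["patient_id", "age", "smoker", "hba1c", "bmi"]
rows = [
    {"patient_id": "p1", "age": 54, "smoker": True, "hba1c": 6.1, "bmi": None},
    {"patient_id": "p2", "age": 61, "smoker": None, "hba1c": 7.0, "bmi": 24.3},
]
metadata = infer_column_metadata(rows, columns)
print(metadata["age"])  # {'inferred_type': 'int', 'null_rate': 0.0, 'distinct': 2}
```

Even this crude version shows the trade-off: the generated descriptions are cheap, but with so few rows each statistic carries little weight, which is the "many definitions for small gains" worry above.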