Formal ontologies mainly aim at characterising how, according to some experts, a given domain is structured, i.e., at identifying and formally representing a set of concepts and relations, together with the constraints that hold for the domain, while making sure that the resulting conceptual framework is logically and philosophically sound and coherent. In particular, foundational or top-level ontologies focus on very general notions that make sense across a wide range of domains.
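As a purely illustrative sketch (not drawn from any specific foundational ontology), such a theory might contain an axiom of the following kind, stating that every event is located at some time interval:

    ∀x (Event(x) → ∃t (TimeInterval(t) ∧ locatedAt(x, t)))

Here Event, TimeInterval, and locatedAt are hypothetical concept and relation names; a real foundational ontology would fix such vocabulary and constrain it with many interacting axioms.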
A first subjective dimension concerns the conceptualisation of these domains and its representation: different models of the ‘same (general) domain’ are possible, and they can be represented using different formal languages. Furthermore, data are usually reduced to factual instantiations of (part of) the model (factual propositions). Both model and data sharing can then be addressed by standardising, integrating, or partially aligning the ontologies involved.
There is then a need for a "scientific" comparison and evaluation of alternative (foundational) ontologies:
- evaluate the differences between alternative ontologies;
- evaluate the representational adequacy with respect to a given domain/task, in particular with respect to scientific theories;
- evaluate the inferential and predictive power of ontologies and their practical usefulness (formal language and supporting tools).
In modern science, scientific datasets are usually generated by complex elaborations of raw data, combining scientific software and human input. A further level of ‘subjectivity’ is present here, impacting both (i) the acquisition of raw data and (ii) their (often complex) aggregation or transformation into (macro) indexes or scores. In the context of E-science, where data are shared among a network of heterogeneous organisations, sharing the model of the domain does not seem enough: an explicit model of how data have been collected and elaborated seems necessary to guarantee the quality, reliability, and trustworthiness of data analyses. Likewise, modelling constraints seem necessary to prepare hybrid data, which will arguably become pivotal as statistical methods revolving around AI become more prominent.
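As a minimal sketch of what such an explicit model of data elaboration could look like (all class and field names below are hypothetical, not a proposed standard), a derived score could carry a machine-readable record of its provenance alongside its value:

from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessingStep:
    """One elaboration step applied to the data (hypothetical schema)."""
    tool: str          # software or manual procedure used
    version: str       # tool version, for reproducibility
    description: str   # what the step did (cleaning, aggregation, ...)

@dataclass
class DerivedScore:
    """A score together with an explicit model of how it was obtained."""
    value: float
    source: str                                   # origin of the raw data
    steps: List[ProcessingStep] = field(default_factory=list)

# Example: a macro index built from survey data in two elaboration steps.
score = DerivedScore(
    value=0.73,
    source="survey_2023_raw.csv",
    steps=[
        ProcessingStep("manual_coding", "n/a", "Human annotation of free-text answers"),
        ProcessingStep("aggregate.py", "1.2", "Weighted average of item scores"),
    ],
)

Sharing records of this shape, rather than bare values, is one way the quality and trustworthiness of downstream analyses could be assessed by the receiving organisations.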
The scenario is made more complex by the fact that economics, medicine, biology, psychology, and sociology (but also physics and cognitive science) are deeply founded on testing, which is often based on statistical analysis at the ground level. Scores have a relative or comparative nature: they position a subject with respect to (the distribution of the scores of) the sample of subjects chosen at the beginning, a critical step that is quite difficult to share.
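To make the relative nature of scores concrete, here is a minimal sketch (with made-up numbers) of a standard way to position a raw score against a reference sample; the same raw score yields different standardised scores under different samples, which is exactly what makes such scores hard to share without also sharing the sample:

from statistics import mean, stdev

# Hypothetical reference sample of raw scores, fixed at study design time.
sample = [12.0, 15.0, 14.0, 10.0, 18.0, 13.0, 16.0]

def z_score(raw: float, reference: list) -> float:
    """Position a raw score relative to the reference sample's distribution."""
    return (raw - mean(reference)) / stdev(reference)

print(z_score(14.0, sample))                    # 0.0: equal to this sample's mean
print(z_score(14.0, [20.0, 22.0, 25.0, 21.0]))  # about -3.7: far below this one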
To explore these scenarios, the OntoCommons consortium is organising the workshop "Formal Ontologies, Applied Sciences and Data" from 2 to 4 October 2023 in Bologna (Italy), where 15 experts from different disciplines (philosophy, applied science, computer science, etc.) will come together to investigate the connections between formal ontologies (understood here as logical theories) and the sciences.
The workshop is closed and by invitation only.