Post by Florian Krautli (26 October 2021)
Dear all,
I have a data modelling challenge I could use some advice on.
We work with a collection of geographic depictions of Switzerland. This
includes photographs, paintings, prints, sketches, etc. We collaborate with
Smapshot <http://smapshot.heig-vd.ch/>, who developed a method for aligning
landscape photographs with a 3D model of the physical landscape. An example
from our own collection can be seen here:
https://smapshot.heig-vd.ch/visit/204037
Using this method we can determine the possible viewpoint of a photographer
when taking a picture, or the viewpoint from which an artist may have produced
sketches of a landscape. In terms of data, we obtain the simulated position and
view of the photographer/artist as coordinates (lat/long), altitude, azimuth,
tilt, roll and focal length.
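For concreteness, such a record could be held roughly as follows (a minimal sketch; the field names and units are placeholder assumptions, not Smapshot's actual schema):

```python
from dataclasses import dataclass

@dataclass
class SimulatedViewpoint:
    """One georeferencing result from the alignment tool.
    Field names and units are placeholders, not Smapshot's actual schema."""
    latitude: float      # degrees (e.g. WGS 84)
    longitude: float     # degrees
    altitude: float      # metres above sea level
    azimuth: float       # degrees clockwise from north
    tilt: float          # degrees; positive = camera pointing upwards
    roll: float          # degrees of rotation around the viewing axis
    focal_length: float  # millimetres (35 mm equivalent, for instance)

# A made-up example value, for illustration only.
example = SimulatedViewpoint(46.6, 7.9, 1200.0, 135.0, -2.5, 0.4, 50.0)
```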
I'm debating now how to model this obtained data in CIDOC-CRM. I would suggest
a S7 Simulation or Prediction for the process of using the Smapshot app to
determine the viewpoint of an image. This process P140 assigns an attribute to an
E36 Visual Item, namely that the E36 Visual Item (the image) P138 represents a
view. What is this view? Can we say it is an E53 Place? Or is there a more
suitable entity for describing such a (simulated) view?
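To make that tentative reading concrete, here is a minimal sketch in rdflib (the CRMsci namespace URI and all instance identifiers are placeholder assumptions, not an agreed mapping):

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
CRMSCI = Namespace("http://example.org/crmsci/")  # placeholder namespace
EX = Namespace("http://example.org/")             # placeholder instances

g = Graph()
simulation = EX.smapshot_run_204037   # the Smapshot georeferencing session
image = EX.visual_item_204037         # the photograph as E36 Visual Item
view = EX.view_204037                 # the (simulated) view

g.add((simulation, RDF.type, CRMSCI.S7_Simulation_or_Prediction))
g.add((image, RDF.type, CRM.E36_Visual_Item))
# The process assigns to the image the attribute of representing the view.
g.add((simulation, CRM.P140_assigned_attribute_to, image))
g.add((image, CRM.P138_represents, view))
g.add((view, RDF.type, CRM.E53_Place))  # or a more suitable class?

print(g.serialize(format="turtle"))
```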
One could also say that the data defines an E53 Place from which an image has
been created. However, while we can say this with some degree of certainty for
photographs, a painting of a landscape might have been created using a
combination of several viewpoints as well as, of course, use of imagination on
and off-site, so I would be hesitant to make a statement about the physical
location of an artist when creating a painting.
I would be grateful for your input!
All best,
Florian
Post by Oyvind Eide (26 October 2021)
Dear Florian,
thank you for this interesting puzzle!
Before I venture into concrete suggestions, allow me to ask some questions in
the form of assumptions you can confirm, reject, or discuss:
The hypothetical viewpoint is used to establish a location for the canvas. That
means the following:
1. There is a 3D model of a landscape where each point (including those making up
lines and polygons) is a normal (x, y, z) coordinate in some coordinate system.
2. The hypothetical/assumed viewpoint of the photographer or the painter is a
point in the same coordinate system.
3. Each point of the canvas (representing a painting or a photograph) being
put into the landscape is a point in the same coordinate system. Thus the
canvas as a whole is an area in that coordinate system.
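Under these assumptions, the alignment can be pictured as a simple pinhole construction: the canvas is a rectangle placed some distance in front of the hypothetical viewpoint, oriented by azimuth and tilt, so that every canvas point receives coordinates in the same system as the terrain model. A rough numerical sketch (the orientation handling is simplified and all names are illustrative, not Smapshot's actual algorithm):

```python
import numpy as np

def canvas_corners(viewpoint, azimuth_deg, tilt_deg, width, height, distance):
    """Place the four corners of a width x height canvas at the given distance
    in front of the viewpoint, oriented by azimuth (clockwise from north) and
    tilt (positive = looking up). Roll is ignored here for brevity."""
    az = np.radians(azimuth_deg)
    ti = np.radians(tilt_deg)
    # Unit view direction in a local east-north-up frame.
    forward = np.array([np.sin(az) * np.cos(ti), np.cos(az) * np.cos(ti), np.sin(ti)])
    # "Right" is horizontal and perpendicular to the view direction.
    right = np.array([np.cos(az), -np.sin(az), 0.0])
    up = np.cross(right, forward)
    centre = np.asarray(viewpoint, dtype=float) + distance * forward
    offsets = [(-1, 1), (1, 1), (1, -1), (-1, -1)]
    return [centre + sx * (width / 2) * right + sy * (height / 2) * up
            for sx, sy in offsets]  # points in the same system as the 3D model

# Made-up example: a viewpoint 500 m up, looking north-east and slightly down.
for corner in canvas_corners((0.0, 0.0, 500.0), azimuth_deg=45.0, tilt_deg=-5.0,
                             width=4.0, height=3.0, distance=10.0):
    print(corner)
```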
If this is so, we might very well talk about something added to a pre-existing
3D model. If not, I would be happy to be enlightened and hopefully manage to
dig further.
All the best,
Øyvind
Post by Martin Doerr (27 October 2021)
Dear Florian,
Nice problem! Actually, I do not regard it as a Simulation, because it does not introduce theories to extrapolate into the future or to fit them to observational data for theory testing. I think it is simply a data Evaluation, which results in an estimate for a place. I'd regard the calculated viewpoint as a declarative E53 Place, /P168 defined by/ the calculated value, which /P189 approximates/ the phenomenal place of the image-taking activity. The direction of taking an image is not yet modeled in the CRM, nor are some other parameters of observations.
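Sketched in rdflib, this reading might look roughly as follows (instance URIs and the geometry literal are illustrative assumptions only, not a normative mapping):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
EX = Namespace("http://example.org/")  # placeholder instances

g = Graph()
declared_place = EX.calculated_viewpoint_204037    # declarative E53 Place
phenomenal_place = EX.place_of_image_taking_204037
taking = EX.image_taking_204037                    # the photographing/painting activity

g.add((declared_place, RDF.type, CRM.E53_Place))
# The declarative place is defined by the calculated coordinate value ...
g.add((declared_place, CRM.P168_place_is_defined_by,
       Literal("POINT Z (7.9 46.6 1200)")))        # illustrative geometry only
# ... and approximates the phenomenal place of the image-taking activity.
g.add((declared_place, CRM.P189_approximates, phenomenal_place))
g.add((phenomenal_place, RDF.type, CRM.E53_Place))
g.add((taking, RDF.type, CRM.E7_Activity))
g.add((taking, CRM.P7_took_place_at, phenomenal_place))
```

The direction parameters (azimuth, tilt and roll) would, for now, have to be carried outside this core pattern, given that view direction is not yet modeled in the CRM.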
In my opinion, the image itself can be seen as a measurement of optical qualities of a section of a physical feature, the surface of the earth, or, in the case of a painting, as an observation. Both would "P138 represent" this section. In CRMdig, we took digital photos as a kind of composite Dimension, because they provide quantitative light emission data, but we did not consolidate this with taking the image as a Visual Item, a kind of Information Object. This is not a contradiction. Both the painting and the photo can represent identifiable details and landmarks that allow for matching them with reality, or an assumed common reality.
So far, a quick thought. I think this would make a nice *issue*: to elaborate the Dimension - Visual Item question, view directions and sections of physical features defined by view focus.
Best,
Martin
Post by Martin Doerr (27 October 2021)
I'd like to add, since we discuss the question of observable situations and the results of observation in continuation, the painting as a result of an observation should be modeled adequately.
Post by Martin Doerr (27 October 2021)
I forgot: the image I sent in the previous message shows Mount Youchtas south of Heraklion, in a recent photo by me and a drawing by Buondelmonti, 1415, who stresses in his report about his travels around Crete that the mountain resembles the sleeping head of Zeus. This resemblance to a bearded human head can only be seen at an angle similar to that of my photo. The tripartite church on the tip of the "nose" exists and is unique; it was extended later. Obviously, he saw the church from close by, from a viewpoint that can easily be narrowed down. The stuff on the "forehead" refers to the remains of a well-known Minoan sanctuary, where he found "infinitas imagines", obviously Minoan idols...
Best,
Martin
Post by Franco Niccolucci (27 October 2021)
Dear Martin, Florian,
a nice problem indeed. Just to add complication, it seems to me that this is
just a special case of determining the viewpoint from which an artist or a
photographer produced an image of a landscape, a monument or other landmark.
Florian's reverse-engineering process for determining the place from which the
image was taken, either with a camera or by a human activity such as painting,
sketching, and so on, is indeed a mathematical process making use of the software
Florian indicated, or of other computational methods. This process may still
involve some approximation, depending on how well the image corresponds to reality
and on the quality of the other data used, e.g. the 3D model.
Coming to the complication, here are two examples where the same process is not
based on maths.
1) Pianta della Catena of Florence, 1470. See it here:
https://commons.wikimedia.org/wiki/Category:Pianta_della_Catena#/media/File:Francesco_Rosselli_(attribution)._Pianta_della_Catena,_1470.jpg
As you can see from the image, the person who produced this perspective map is
also included in the picture (it is the small person dressed in red in the
bottom right corner), drawing it from an easily identifiable hill. The
mathematical approach may not work in this case because in 550 years the city
skyline has changed and the drawing was probably imperfect. Nevertheless, the
place can be identified with near certainty, provided we accept that the drawer’s
representation is faithful as regards the place where he did the job, and is not
just a symbolic indication that the map was made “at bird’s-eye view” from a high
place.
A later map (1594) by Stefano Buonsignori mentions the hill from which it was
taken - the same as the above-mentioned one - but gives no clue about the
*exact* place.
Unfortunately I have no reference to good images for this map except this one:
https://upload.wikimedia.org/wikipedia/commons/0/02/Pianta_del_buonsign…
which is notable only because the gate is next to my house :)
In sum: an old map depicts the point of view; a later one, but still ancient,
mentions the place in a generic way; direct observation confirms the place,
which appears obvious from the drawing's perspective, if you know the town and
the surrounding hills (I do). No computer image processing could identify the
place, because interpreting the human figure (case 1) or understanding the
map title (case 2) is required.
There are, moreover, studies identifying the place with somewhat greater accuracy
as being an open space close to a monastery on the said hill.
Is this still a data Evaluation?
2) The second case concerns the interpretation of art historians. For example I
found that for a painting (1494) by Albrecht Dürer titled “The mill”
https://it.wikipedia.org/wiki/Albrecht_D%C3%BCrer#/media/File:Durer,_il…
an art historian stated that “the painter was standing on the high northern
bank of the Pegnitz river, looking towards south”. I have no idea why this art
historian says so.
Dürer also painted many watercolours in Trentino, and there is now a
cultural route titled “Albrecht Dürer Path” where visitors may stop at
designated points and look at the landscape from the (reconstructed) viewpoints
of the painter
https://www.trentino.com/en/leisure-activities/mountains-and-hiking/hik…
I don’t think they used computers to create it.
In conclusion, maybe the viewpoint reconstruction is a more general process: an
Evaluation if it can be done/supported mathematically, something else (?) if
computation is not feasible but other means can be used with reasonable
trustworthiness. Is there a superclass fit for all cases?
Best
Franco
Post by Oyvind Eide (31 October 2021)
Dear Martin,
I agree that elaborating the Dimension - Visual Item question, view directions
and sections of physical features defined by view focus would be an interesting
issue.
It is a question of perspective.
https://www.youtube.com/watch?v=keW4QqRGVN4
Regards,
Øyvind
Post by Florian Krautli (27 October 2021)
Dear Øyvind,
Thank you very much for your input! To answer your questions:
1. Yes, the tool uses a 3D model of a landscape (based on Cesium:
https://cesium.com/)
2. Yes
3. Yes. The tool positions the image in a 3D landscape so that from the
calculated viewpoint, the 2D image aligns with the 3D landscape. The tool also
outputs a glTF of what I assume is the canvas position in the coordinate
system: https://smapshot.heig-vd.ch/api/v1/data/collections/36/gltf/204037.gltf…
(though I'm not familiar with this file format)
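For what it's worth, the glTF container is plain JSON, so the node transforms (which I assume carry the canvas placement) can be inspected without special tooling. A rough sketch, treating the exact structure of Smapshot's export as an assumption:

```python
import json

# Assumes the .gltf file from the Smapshot API has been saved locally;
# the exact structure of Smapshot's export is an assumption here.
with open("204037.gltf") as f:          # hypothetical local filename
    gltf = json.load(f)

# glTF 2.0 stores scene objects as "nodes"; a node's placement is given either
# by "translation"/"rotation"/"scale" or by a 4x4 "matrix".
for node in gltf.get("nodes", []):
    print(node.get("name"), node.get("translation"),
          node.get("rotation"), node.get("matrix"))
```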
I should mention that I also discussed this issue via Slack with Matteo
Lorenzini. Nicola Carboni already prepared a model to document the perspective
over a place by a person, documented as the point of observation by an actor.
We concluded that we could apply that model also in this case. However, I would
be very interested in your thoughts on how to treat it on the level of the 3D
model. That might help me to model the data closer to the actual process by
which it was obtained.
Best wishes,
Florian
Post by Oyvind Eide (31 October 2021)
Dear Florian,
in addition to the comments made by others, which make a lot of sense too, I
would offer the additional perspective that the resulting 3D model (with the
added canvas) can be seen as a collage of the source 3D model and a digital
reproduction of the photograph / painting — thus, as a work, it is the bringing
together, based on certain rules and principles, of two works.
I think this adds a different perspective than some of the others mentioned.
Which perspectives to focus on when modelling such processes is a pragmatic
choice.
All the best,
Øyvind
Post by Florian Krautli (1 November 2021)
Dear colleagues,
thank you so much for all your thoughts and input!
The examples by Franco indeed illustrate the problem very well. We also do have
many images that are composites of different views. The engineers at Smapshot
may come up with ways to accommodate this, for example by distorting the image
or by enabling a segmented georeferencing of an image.
We also have landscape images where we are quite certain that their creator never
actually saw the view themselves, but copied the landscape from another
artist. In such cases we can't make any statement about the location of the artist when
creating an image. In the case of photographs we can make such statements and
would be able to use the measurement approach that Martin described.
In general, I would tend to represent in data what we know and only in a second
step what we conclude from that knowledge. Øyvind's suggestion therefore sounds
plausible, too. We create a 3D model, and from this model we conclude
something, namely a hypothetical viewpoint of an observer. I have to give some
thought to how to represent this in a data model without creating too many
statements that are not grounded in available data, but I think it might be a
good direction for this particular problem.
Thank you all again, best wishes,
Florian
Post by Martin (1 November 2021)
Dear Florian,
Thank you too for the subject! We would be very pleased to hear in due time how you have proceeded.
In the meantime, I'd propose a new issue: how to model the focus/view of an observation (to elaborate the Dimension - Visual Item question, view directions and sections of physical features defined by view focus, as well as paintings and graphics as results of observation).
Best,
Martin
In the 53rd CIDOC CRM & 46th FRBRoo SIG meeting, MD gave some background for the issue:
it refers to the event of taking a viewpoint, which should be used to define the position of someone taking a picture or photograph. It adds to the scope of the observation theory discussed in issue 583, on which it depends.
May 2022
Post by Martin Doerr (11 September 2023)
Dear Thanasi, all,
I think modeling a view direction is an overspecialization. I now believe that, for purposes of information integration, the question is how to describe the area covered by an observation. Specific directions etc. should be placed in an extended description. The more general question of how to specify an observed area, as in archaeological surveys, does indeed need to be modeled. This is neither an observed situation nor a measurement of dimensions. Also, we need to understand which kinds of observation pertain to areas (typically optical, radar, ...).
This should be discussed!
Best,
Martin
In the 57th CIDOC CRM & 50th FRBR/LRMoo SIG Meeting, the SIG accepted the proposal by MD to continue working towards modelling the area of an observation and consider whether it is generally broader than the measurement.
The mappings to CRM of Aioli (a service created by MAP which generates 3D models of objects/buildings from photographs and propagates the photographs' metadata – area, orientation, distance, angle, etc. – as well as technical metadata to the overall 3D representation) can be referenced for the HW. They need to be updated.
HW:
- AG to review the Aioli mapping (which is also relevant for the CRMdig cleanup)
- SdS will share sample data with AG and will also contact Martijn van Leusen in order to get access to some survey data.
- AK will look at the Semantic Sensor Network Ontology (SSN)
Marseille, October 2023