Orthophotos in real time?
I know the subject is a sensitive one, but hey, let's open our minds for a moment and weigh the truths and the lies in what is being said out there.
At the recent Where 2.0 conference, Jeffrey Johnson and David Riallant, both of Pict'Earth (the first a web application developer, the second a professional photogrammetrist), presented the work they do and had already spoken about at the AGU Fall Meeting. For many of us it surely produces a sensation similar to when we had to abandon analog instruments for hybrid ones, and then for digital ones.
Well, let's take some time on it and see whether we end up even more confused:
1. The procedure: simplification
Basically, the process seeks to do the same as always, while solving the limitations of the previous "technologies" (because they were technologies too)... cutting time and equipment by means of information technologies:
- A small remote-controlled plane that replaces the piloted aircraft... with no fuel, travel expenses, pilot or flight permits to worry about, and with the possibility of flying a previously drawn route.
- A GPS that captures latitude, longitude and height... presumably with a ground base station against which to correct the positions taken "literally on the fly".
- A digital camera of many megapixels, nicknamed "high resolution" where others used to speak of microns. Clearly, this eliminates the problem of developing negatives, scanning them at the micron and all that business...
- A lightweight on-board system that can, in a simple KML, associate each coordinate with its capture and send it via SMS to a ground operator, who semi-automatically stretches the images onto control points taken from the terrain or from a digital elevation model (a sketch of that KML association step follows this list).
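Purely as an illustration of that last step: assuming the on-board log simply pairs each frame with a GPS fix (the field names and log format here are invented, not Pict'Earth's), the coordinate-to-capture association could be written as a minimal KML like this:

```python
# Hypothetical sketch: associate GPS fixes with camera frames in a KML file.
from dataclasses import dataclass

@dataclass
class Capture:
    image: str   # file name of the frame
    lat: float   # degrees, WGS84
    lon: float   # degrees, WGS84
    alt: float   # metres, straight from the GPS

def to_kml(captures: list[Capture]) -> str:
    """Build a minimal KML document with one Placemark per capture."""
    placemarks = "\n".join(
        f"""  <Placemark>
    <name>{c.image}</name>
    <description><![CDATA[<img src="{c.image}"/>]]></description>
    <Point><coordinates>{c.lon},{c.lat},{c.alt}</coordinates></Point>
  </Placemark>"""
        for c in captures
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
        f"<Document>\n{placemarks}\n</Document>\n</kml>"
    )

# Example: one frame tagged with its fix (coordinates are invented).
print(to_kml([Capture("frame_0001.jpg", 14.0723, -87.1921, 1150.0)]))
```

Note that KML expects longitude first; getting that order wrong is exactly the kind of detail a semi-automated pipeline has to systematize.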
We do have our doubts about whether they have a way to obtain the instantaneous attitude of the camera, a consequence of the tilt of the plane at the moment of capture, known as roll, pitch and yaw... but hey, let's move on to the next point.
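For the curious, this is why those three angles matter: in classical photogrammetry the exterior orientation of each frame includes a rotation matrix built from roll, pitch and yaw, and even a small tilt translates into metres on the ground. A sketch, assuming a standard Z-Y-X rotation convention (my choice for illustration, not necessarily theirs):

```python
import numpy as np

def rotation_matrix(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Camera attitude as a rotation matrix, angles in radians.
    Convention assumed here: roll about X, pitch about Y, yaw about Z."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx  # applied right to left: roll, then pitch, then yaw

# A mere 3-degree roll on a frame taken from 300 m of flight height already
# shifts its footprint by roughly 300 * tan(3 deg) ~ 16 m on the ground.
print(300 * np.tan(np.radians(3)))  # ~15.7 metres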
2. The good: saving time and costs
It is clear that the first gain is time, which we know is a major problem of the conventional methodology; especially when the work is contracted to a private company, and depending on the extent of territory to cover or its geographic location, it is sometimes necessary to wait for summer, and for a day without too much smoke from the fires... whenever that may be!
Another gain: under the conventional procedure it is practically impossible to cover a region of just 5 square kilometers without risking money, plus the danger of making a fool of yourself. For this reason such tasks have only been within reach of government institutions, temporary projects or large companies dedicated to the business.
In terms of costs, we know what this costs (a lot of money): the smaller the coverage, the higher the price per square kilometer. Additionally, in some countries the National Geographic Institute or the security agencies must authorize the flight, so you pay an additional fee whether you take 10 photographs or 100,000, and of course this adds to the cost.
In many cases the contract also includes the obligation to hand over the negatives, so that later they can be sold under the table to the competition, or so that, in the end, the expensive negatives wind up in a warehouse full of cockroaches.
If we consider that with these new methods you can fly specific areas, with irregular shapes and above all small coverages... without having to plan a flight under aeronautical procedures, or request permits for snapshots of the kind Google shows for free... it will surely be cheaper... at least the flight, because the office processes are already almost automated.
3. The bad: precision is not systematized
What smells bad in all this is that everyone focuses on the photos and the digital orthorectification process, but we see little mention of densifying the existing, and in many cases inconsistent, geodetic triangulation network. It seems they only talk about stretching the image mosaic onto recognized points... but recognized where?
This is delicate, because the premises do not change with the adoption of new technologies: "the lower the density of the geodetic network, the lower the accuracy of the orthorectified products". And it is not that formally patented proposals for a process like this do not exist, albeit complicated to the extreme; we just do not see the results of their improvement plans.
In the case of the Pict'Earth people, they stretch the images to fit the Google Earth data!!! We understand this is so the data does not look broken, because if they placed the images where they truly belong, they could end up as much as 30 meters away. The problem, then, is that all the material these people generate, and that has been uploaded to Google Earth, carries the same imprecision as the beloved virtual globe (2.50 meters relative, 30 meters absolute, not stated and without published metadata). And it is not that everything is wrong; it is that any technical process you want to defend must be systematized.
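To make the point concrete: whether a mosaic is "good" is not a matter of looks but of residuals against surveyed control points, published as metadata. A minimal sketch of that check, with invented coordinates:

```python
import math

# Hypothetical check: compare coordinates read off the orthophoto against the
# surveyed positions of a few ground control points (GCPs). Numbers invented.
gcps = [
    # (easting_survey, northing_survey, easting_ortho, northing_ortho) in metres
    (478210.0, 1557320.0, 478212.1, 1557318.4),
    (478655.0, 1557101.0, 478657.8, 1557098.9),
    (478930.0, 1557544.0, 478932.5, 1557541.6),
]

residuals = [math.hypot(xs - xo, ys - yo) for xs, ys, xo, yo in gcps]
rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))
print("per-point residuals (m):", [round(r, 2) for r in residuals])
print(f"RMSE (m): {rmse:.2f}")  # this figure, not how the mosaic looks, is the precision
```

A systematized process would publish this RMSE alongside every delivered orthophoto; stretching imagery to fit Google Earth skips the step entirely.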
4. The ugly: the change meets resistance from the connoisseurs and madness from the neogeographers
Let's be honest: when they told us we were no longer going to use those mirrors with the photographic negatives that we projected onto the plate to burn the orthophoto, we did not like it, because we believed that a computer program with its mathematical methods lacked the criteria to distinguish shadows from stains on the mirror. The story is the same; what is being semi-automated now is the capture process... and just like the previous change, it will trade quality for time.
Back then, we got tangled up in the "precision" of the final product, knowing all along that these are still models of reality. So we have the "neogeographers" on one side, PDA in hand, and ourselves at the other end with our total stations; we need to keep an open mind, because inevitably our hybrid processes will have to be replaced by the simplified ones, just as sooner or later their equipment will achieve greater precision and do it for less money... the third, fifth and sixth premises of Catastro 2014.
The best thing will be for our surveying schools not to fall behind in the use of the new technologies, while never ceasing to teach the principles that underpin their use. In the end, the cup of coffee will taste the same... as the curtain falls.
5. The conclusion: relevance defines the details, and the details require method
We return to what we said before: the relevance of the data means there are no good or bad maps, only facts. The job of the data provider is to deliver facts, with stated conditions of precision, tolerance and relevance. The one who surveys a boundary says "I went, I saw, I measured, and this is what I obtained... with this method", while the one who delivers the orthophoto says "I flew (or did not fly), I took photos, I took control points, and this is what I obtained... with this method".
Orthophotos in real time? It is possible; in the end, the method defines the precision... and if the relevance is clear, it does not matter that while the plane was flying we were fooling around on Twitter.