How can we explain perspective calculations simply?

Rory

Perspective is difficult, both to understand and to explain. This was recently brought home to me listening to flat earthers Nathan Oakley and Anthony Riley attempt to explain this photo of mountains in Washington and Oregon:

south sister sunrise labelled.jpg

All decent analyses show that it aligns perfectly well with a globe earth model and is completely incompatible with being on a flat plane, yet the gentlemen in question demonstrated that they are unaware of how viewing angles are calculated, and how perspective works.

The question, then, is how do we explain how perspective works mathematically; show that calculators do include perspective; and help people who don't understand what perspective is, who struggle with maths, and who are mistakenly convinced that they already have the answers?

Theory

The theory and methodology are fairly straightforward: viewing angles (angle of elevation) between two points can be calculated using trigonometry. These angles will show where something will appear in a photograph, or in our actual field of vision. Larger angles will appear higher and smaller angles will appear lower. To calculate the angle, all we require is distance and elevation.

Here is a picture demonstrating this (angles, distances, and elevations are not to scale):

perspective.jpg

This shows how the line of sight from the observer in the bottom left corner to each of the peaks forms the hypotenuse of a right-angled triangle; the angle can then be calculated from tan(x) = opposite/adjacent (that is, elevation over distance).
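On a flat plane this is a one-line calculation. Here is a minimal sketch in Python (the elevations and distances are invented for illustration):

```python
import math

def viewing_angle_deg(elevation_ft, distance_ft, observer_ft=0):
    """Angle of elevation from the observer to a point, flat plane assumed."""
    return math.degrees(math.atan((elevation_ft - observer_ft) / distance_ft))

# Invented figures: a tall distant peak vs a lower, closer one
tall_far = viewing_angle_deg(10000, 500000)   # about 1.15 degrees
low_near = viewing_angle_deg(6000, 150000)    # about 2.29 degrees
print(tall_far, low_near)
```

Note that the lower peak gets the larger angle because it is closer: the combination of distance and elevation, not height alone, decides what appears highest.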

The angle to Mt Rainier is not the largest because it's the tallest mountain, but because of a combination of distance and elevation: if we bring Mt Adams closer to the observer, for example, to the position where Mt Hood is, the angle to its peak will be larger than the angle to Rainier, and it will be predicted to appear highest in a photograph (assuming a flat plane):

perspective 2.jpg

Verifying the theory using distant mountains, however, is difficult, so we need a method that is available to anyone.

Example

To demonstrate that calculators include perspective, all we need are a list of distances and elevations of some known landmarks and a photograph of these landmarks. Any photograph of a flat-ish street will do, but it would be best if it included buildings of different heights, with some taller buildings in the background.

This one of 5th Avenue in New York, taken at the intersection of E/W17th Street, should be a good candidate (building numbers added):

5th avenue labelled.jpg

Here are some of the landmarks seen in this picture:

Empire State Building - distance, 4270 feet; height, 1454 feet to tip, 1250 feet to roof
HSBC, 145 5th Avenue - distance, 867 feet to nearest corner, 959 feet to turret; height, 165 feet (roof), 200 feet (turret)
119 5th Avenue - 250 feet to near corner, 455 to far corner; estimated height 130 feet
Flatiron Building, 175 5th Avenue - d. 1240, 1462; h. 285 feet
245 5th Avenue - d. 2697; h. 308 feet
Langham Place, 400 5th Avenue - d. 4940; h. 632 feet
425 5th Avenue - d. 5460; h. 618 feet

Camera height I believe to be very close to 6 feet - certainly within a foot or so - based on the parallel lines in the images, the vehicles, and the people's heads.

Now let's put those figures into a calculator and find the predicted viewing angles by using tan(x)=elevation/distance. The largest angle indicates which building will appear highest in the photo, and so on:

upload_2019-3-18_12-10-0.png
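As a cross-check, the same calculation can be run with a few lines of Python, using the distances and heights listed above and the estimated 6 ft camera height (any small differences from the spreadsheet will just be rounding):

```python
import math

CAMERA_FT = 6  # estimated camera height

# (landmark, distance in ft, height in ft), from the list above
landmarks = [
    ("119 5th Avenue",               250,  130),
    ("HSBC roof",                    867,  165),
    ("HSBC turret",                  959,  200),
    ("Flatiron Building",           1240,  285),
    ("245 5th Avenue",              2697,  308),
    ("Empire State Building (tip)", 4270, 1454),
    ("Langham Place",               4940,  632),
    ("425 5th Avenue",              5460,  618),
]

def angle(distance_ft, height_ft):
    """Viewing angle to the top of a landmark, flat plane assumed."""
    return math.degrees(math.atan((height_ft - CAMERA_FT) / distance_ft))

# Largest angle first: predicted to appear highest in the photo
for name, d, h in sorted(landmarks, key=lambda t: -angle(t[1], t[2])):
    print(f"{name:30s} {angle(d, h):6.2f} deg")
```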

Looking at the photo, we see that the predicted apparent height order and the actual apparent height order are the same:

5th avenue height order crop.jpg
 
Last edited:
The above post represents the easy version, but we can take it to the next level and look at the angles in more detail. If we place a line at the camera height to represent zero degrees and one at the tip of the Empire State Building to represent 18.7°, we can create a scale, such as might be seen in a theodolite:

5th avenue with scale.jpg

This is based on the pixel height of eye level (417) minus the pixel height of the tip of the Empire State Building (8), divided by the 18.7° angle. This gives a count of 21.83 pixels per degree, which allows us to calculate the approximate apparent height at which each point is predicted to appear:
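That conversion can be written out in a few lines (a sketch; the pixel values are the ones given above, and the 21.83 figure drops out when the unrounded Empire State angle is used):

```python
import math

EYE_PX, TIP_PX = 417, 8  # pixel rows of eye level and the ESB tip
CAMERA_FT = 6

# Unrounded viewing angle to the Empire State Building tip (about 18.73 deg)
tip_angle = math.degrees(math.atan((1454 - CAMERA_FT) / 4270))

px_per_deg = (EYE_PX - TIP_PX) / tip_angle  # about 21.83 pixels per degree

def predicted_pixel(angle_deg):
    """Predicted pixel row for a point at a given viewing angle."""
    return EYE_PX - angle_deg * px_per_deg
```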

upload_2019-3-18_13-22-42.png

I would add each of the predicted points to the image, to compare with the actual points, but it seems a bit redundant, given how close they are: most of them are pretty much bang on, with only three of the ten points varying by more than four pixels, and none by anything approaching significance.

I think this well and truly proves the point that perspective can be calculated; that apparent height order can be determined; that both curve and plane calculators do already account for perspective; and that very accurate positions in photos can be predicted merely from the distances to and elevations of landmarks.

Spreadsheet containing the calculations used above is attached.
 

Attachments

  • Perspective calculator.xls (26.5 KB)
For completion, here is the above method applied to the South Sister photo, flat first:

south sister flat with lines.jpg

While the above photo shows the complete incompatibility of the flat earth 'model' with reality, the results for the spherical earth are almost perfectly aligned:

south sister sphere with lines.jpg

Calculator attached.
 

Attachments

  • South Sister to Rainier.xls (23 KB)
This shows how the line of sight from the observer in the bottom left corner to each of the peaks forms the hypotenuse of a right-angled triangle; the angle can then be calculated from tan(x) = opposite/adjacent (that is, elevation over distance).

This is what I consider a "simple explanation" of "perspective calculations". :)



To calculate Perspective "size" from an observer (point A) to a mountain peak (point B), you divide the height by the distance.
Height ÷ Distance

Perspective means objects get smaller the further from you they are. The greater the distance from the observer, the smaller the object will appear.

Example:
Both mountains are 1000 feet high.

Mountain 1 peak is 1000 feet high.
Mountain 1 is 5000 feet from the observer.
1000 ÷ 5000 = 0.2

Mountain 2 peak is 1000 feet high.
Mountain 2 is 2000 feet from the observer.
1000 ÷ 2000 = 0.5

0.5 is bigger than 0.2, so Mountain 2 looks bigger than Mountain 1 because it is closer to the observer.
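For anyone who wants the angle in degrees rather than the raw ratio, atan converts one to the other (a quick sketch of the example above; a scientific calculator's atan key does the same job):

```python
import math

ratio1 = 1000 / 5000  # Mountain 1: 0.2
ratio2 = 1000 / 2000  # Mountain 2: 0.5

angle1 = math.degrees(math.atan(ratio1))  # about 11.31 degrees
angle2 = math.degrees(math.atan(ratio2))  # about 26.57 degrees
print(angle1, angle2)
```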


ab.JPG




If you want to determine the elevation angle in degrees from the observer to the mountain peak, you use a scientific calculator. https://www.desmos.com/scientific
1.JPG

2.JPG
 
This is what I consider a "simple explanation" of "perspective calculations". :)

To calculate perspective "size" from an observer (point A) to a mountain peak (point B), you divide the height by the distance.

That's quite useful, and does work for giving the predicted apparent height order on a flat plane. For example, doing "elevation above/below observer" over "distance" gives:

upload_2019-3-18_16-8-45.png

All in the same order, either way one does it.
 
While that is indeed the simplest math, and the way I normally do it (divide by distance!!!), that diagram is just gobbledygook to most people.
So if I see something like "P=(x,f)", does that comma mean divide? That's why I showed a real calculator, because a lot of people (i.e. me) only know 'divide' by the division sign or if the numbers are on top of each other.
 
So if I see something like "P=(x,f)", does that comma mean divide? That's why I showed a real calculator, because a lot of people (i.e. me) only know 'divide' by the division sign or if the numbers are on top of each other.

Those are coordinates of a point. P isn't a value, it's a point, defined by two values (x and f, or X and Z).

P=(x,f) means "the point P is a distance f horizontally from O, and a height x above O"

It's a bad diagram in many ways. The "image plane" is essentially the back of the pinhole camera, where the image is projected. But here it's in front, which is perfectly normal when you are doing the math, but it's not really clear what is going on.
 
a lot of people (i.e. me) only know 'divide' by the division sign or if the numbers are on top of each other.
The division is here:
Metabunk 2019-03-18 13-44-28.jpg
It's using a key (and very simple) thing called "similar triangles". You can see there are two triangles in the diagram: this one, made by the actual point.
Metabunk 2019-03-18 13-43-25.jpg

And the one where the line to the O point (the pinhole) intersects the image plane
Metabunk 2019-03-18 13-44-11.jpg
They are "similar", meaning they have all the same angles. That means the ratio of the sides is the same (it's just the same triangle scaled up). Hence you get x/f = X/Z, or you could say x/X = f/Z, same thing. Then you just rearrange to get a solution for x.

x gives you the size of the object in the image plane.

f is like the focal length in a camera.

x is like the size of the object projected onto the sensor.
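Putting the similar-triangles relation into code makes the rearrangement concrete (a sketch; f, X, and Z as in the diagram, with made-up values):

```python
def project(X, Z, f):
    """Project a point at height X and depth Z onto an image plane at distance f.

    Similar triangles: x/f = X/Z, so x = f * X / Z.
    """
    return f * X / Z

f = 1.0  # image-plane distance (like a focal length)

# The same object (X = 100) at two depths: farther means smaller on the plane
near = project(100, 500, f)    # 0.2
far = project(100, 2000, f)    # 0.05
print(near, far)
```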
 
Perspective comes from projection.

Projection in the sense of a camera is kind of like a slide projector in reverse. Instead of the light from the slide being projected on a wall, the light from a scene is projected (via the lens) onto the film.

Projection is all about straight lines and similar triangles. There's a view cone called a "view frustrum", which is a good word to search for explanatory images.

Here I'm projecting a view of NYC from across the river. The green is the frustrum. The red lines go through the tops of some buildings.
Metabunk 2019-03-18 14-22-27.jpg

You could stick a view plane anywhere across the frustrum, and you'd get exactly the same image, because the red lines go through the same point (just scaled). Have a look at it in GE with the attached.
 

Attachments

  • MB_5294320A.kml.kmz (3.9 KB)
Divide by the depth along the line of sight, not by the Euclidean distance.
At large distances, the result is essentially the same. Just saying "distance" should cover the basics. Here we are concerned mostly with longer distances (like, to the horizon and beyond).
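A quick check of how much the two divisors differ (made-up numbers: one tall-and-close case, one mountain-at-long-range case):

```python
import math

def by_depth(height, depth):
    """Ratio using depth along the line of sight."""
    return height / depth

def by_slant(height, depth):
    """Ratio using the straight-line (Euclidean) distance to the top."""
    return height / math.hypot(height, depth)

# Tall and close: the two ratios differ by a few percent
print(by_depth(1448, 4270), by_slant(1448, 4270))

# Mountain at long range: essentially identical
print(by_depth(14000, 950000), by_slant(14000, 950000))
```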

It's frustum, not frustrum. I made that frustrating mistake too. Almost as bad as restaurateur.
For a while today I was typing fustrum, then changed to frustrum. I've been getting this word wrong in a variety of ways for decades.
 
The calculator I've used above on the 5th Avenue photo is based purely on flat plane trig without refraction. But I thought I'd see how the sphere earth calculator fared, with refraction. Predicted apparent height order is the same (i.e., correct) and the pixel height predictions were very similar.

What I immediately notice, though, is that the sphere earth calculator predicts the objects as being higher than the flat plane calculator, whereas they're usually predicted lower.

What I found was that something like the tip of the Empire State Building is predicted to be higher with the sphere earth calculator until it's 3.1 miles away, where they're the same. More distant than that, and it will be predicted to be lower.

Not sure whether that reflects reality, or a need to fine tune the equations.

The second thing I notice is that they're all still predicted very accurately - I'd call within ten pixels "very accurate" - except the same one that the flat plane calculator has as an outlier, the turret of 145. This leads me to believe that the turret is probably a little bit higher, or a little bit closer, than the figures I first used.

Changing the refraction coefficient makes no difference to the angles.

Also, if I change my marker pixel to the roof of the Empire State, rather than the tip, the results are even more accurate. So perhaps a clearer landmark for the marker is best.

5th Avenue, by the way, slopes up very, very slightly. Not much for most of this length, but around 20 feet higher up by the Empire State. It doesn't make any significant difference to these calculations though.

https://caltopo.com/map.html#ll=40.74087,-73.99031&z=15&b=t
 
@Rory says:

The calculator I've used above on the 5th Avenue photo is based purely on flat plane trig without refraction. But I thought I'd see how the sphere earth calculator fared, with refraction. Predicted apparent height order is the same (i.e., correct) and these were the pixel height predictions:

…..

What I immediately notice is that the sphere earth calculator predicts the objects as being higher than the flat plane calculator, whereas they're usually predicted lower.
Content from External Source


What I found was that something like the tip of the Empire State Building is predicted to be higher with the sphere earth calculator until it's 3.1 miles away, where they're the same. More distant than that, and it will be predicted to be lower.

Not sure whether that reflects reality, or a need to fine tune the equations....
Content from External Source
I think it has been mentioned before that a simple ratio of pixels to visual angular size is a very good approximation when the angles are relatively small, but may be inaccurate (as compared with a photograph) when the angles are large. Very tall buildings, close to the observer, may be a case in point. In your New York photo some of the angles are over 15 degrees. I haven't worked out the maths, but this might be why some of your predictions are a bit off.
 
Not sure whether that reflects reality, or a need to fine tune the equations.

Hang five: I'm pretty sure it's the latter. I had a bit of an issue with the obstruction calculator that I did, where it worked differently for points above or below eye level, and the same may be happening here (mountains below, buildings above).

Creases being ironed as we speak. :)
 
With Mick's help, the equations got fine-tuned. The problem was that I'd simplified the...meh, it's not interesting what the problem was. ;)

In a nutshell, the sphere earth (refracted) predictions are exactly the same as the flat earth (unrefracted) predictions for the 5th Avenue photograph.

Well, to about 5/1000th of a degree.

This is good: now there are two calculators, one for flat and one for sphere. The sphere earth calculator works perfectly on both long range (South Sister to Rainier) and short range (5th Avenue). The flat one only works on short range - and if it's made to work on long range, it won't work on short.

(The problem was to do with distance.)

I'm just gonna streamline it a bit more, and then I'll post it in the tools section.
 
@Rory I watched your latest video and read the chat with Sleeping Warrior.

When they look at the photo of Rainier, they think they are looking at a flat earth.
He says perspective doesn't really matter.
He says part of the mountain is missing because of refraction, not just the mountain shrinking due to perspective.
We know that what is missing is the Hidden due to curve plus (or minus) refraction.
They don't believe in the curve, so they think what is missing is ALL refraction.

And since refraction can bend different ways, especially over 180 miles... basically that is their answer to everything. It's a good answer, actually. Until I wonder what the atmosphere would have to be doing to make almost all of Rainier disappear on a FE. Maybe that could be a debunking angle? I may have this backwards... e.g. if light 'goes up' when it's colder below and warmer up top. And Rainier went down... so it would have to be, what, like 3000 degrees Fahrenheit near the earth and like -1000 degrees where the camera man is, to get the light to bend Rainier down that much.

Does that make sense?
 
I think so. Inventing something like "upward refraction curving at a rate of 8 inches per mile squared" might help them.

The temperature thing would be more Mick's department, though. I'm not sure how that would work.

I think the crux is them not really understanding angles or perspective. Things in the distance are smaller, and closer to the horizon (than when they're closer). That pretty much does it for them, as far as an explanation for Rainier goes.

But if they were able to take in the whole scene, and think about it a little more deeply, then they'd see why it doesn't work.
 
But if they were able to take in the whole scene, and think about it a little more deeply, then they'd see why it doesn't work.

I don't think so. I don't think they are ever going to see.

He thinks we can't count the hidden part of Rainier in the math, meaning Rainier is not 14,400 feet. It is like 4,000 feet high, and St. Helens is like 1,000 feet high.

He is saying my FE model doesn't work (ergo your math doesn't work) because on a FE it wouldn't look like my FE model. On their FE it would look like the real Rainier pic. So the math has to be wrong.

I think what they are doing (I could be wrong) is thinking the whole bottom of Rainier is just missing because of refraction compression.

Sorry this is so small; a 180-mile side view model is BIG and hard to get in one shot. If you take my yellow lines and compress them together... I think that is what they think is happening.
hhlabeled.png

It's like the Mandela effect: in our reality we know your math and my model are right, but in their reality it is wrong because they are living in an alternate reality.

They're saying 'angular size' and 'perspective', but I think what they really mean is 'compression'. The math and my model don't take [their imaginary] refraction compression into account.
top.png
 
@deirdre no, it's not compression. Compression happens when the gradient of optical density is uneven.
And yes, they are working backwards from the assumption that Earth is flat, and making their physics fit that, which is what they're accusing us of doing with the globe.

John D is leaving pressure out of his refraction considerations, and that leaves him with a temperature gradient with cold air on top, which bends light upward as long as you forget that buoyancy prevents denser air from sitting on top of less dense air. If we follow the notion that a constant gradient bends light in a circular arc, then assuming the necessary k-value (like -0.83) will yield globe Earth appearances on a flat Earth. This works fine as long as you never attempt to measure it, and nobody questions why you illustrate your flat Earth refraction tutorials with upside-down pictures of downward refraction.

Upward bending light means that for light from a distant object to reach your eye, it has to dip down and up again, and when that path collides with the ground, it's obstructed. (I should make a spreadsheet for that and collect @Rory 's prize money.)

And this whole discussion about "how can we explain perspective calculations" is about the merits of an argument and not about evidence.

You can show that air pressure affects refraction in the lab by measuring the refraction of light passing through a transparent vacuum chamber, but it's difficult because the optical density of air is already very close to vacuum, and the easiest experiment I have seen described uses interference patterns, which are not very intuitive.
 
I actually don't think they're thinking about it at all. Somebody made a video proposing that curve calculators didn't take perspective into account - Taboo Conspiracy, which was looked at here - and people like Riley and Oakley simply took the headline and conclusion; they repeat it whenever presented with anything that appears to challenge their viewpoint, and believe that's enough.

This seems to be pretty standard practice among flat earthers: one person says "but that's atmospheric lensing" - or "angle of attack" - or "compression" - or "looming" - or "how our eyes work" - or "angular resolution" - or, gulp, "perspective" - as though simply saying the words counts as an explanation, and others then thoughtlessly repeat it.

Obviously that happens on the debunking side too - but generally it's not too much of a problem, since the explanations are usually right, if not always understood; and if someone among the debunkers proposed a faulty explanation for something - certainly here, and even on YouTube - it would be immediately challenged and the correct explanation arrived at.

At least, that's been my experience so far. :)
 
Somebody made a video proposing that curve calculators didn't take perspective into account - Taboo Conspiracy, which was looked at here
from your link there
upload_2019-3-23_13-46-55.png

Seems we all agree: the mountain LOOKS smaller due to distance, and distance is included in the curvature calculators. End of story.
 
I received a comment (in email) from a flat earther regarding this thread:
I found this comment interesting:
In a nutshell, the sphere earth (refracted) predictions are exactly the same as the flat earth (unrefracted) predictions for the 5th Avenue photograph.

Well, to about 5/1000th of a degree.

From my perspective this sounds like you need a phenomenon called refraction to account for the difference (which co-incidentally occurs on all images where you can see too far). [It] needs more faith to believe in the power of refraction phenomenon than actually just witness what is, in reality, what my eyes are showing me.
Content from External Source
Now, while I might have some ideas as to why they understood my words in this way, it does also seem worth clarifying what I meant by them, just in case someone else comes to the same conclusion.

Basically, the point was not that refraction causes the difference between the two models in the 5th Avenue photo, but that, over such a short distance, there's essentially no difference here between flat and sphere. The curve is minute. The street is, for all intents and purposes, flat - and, indeed, any changes in local terrain will be vastly larger than the minuscule amount of curve over 4000 feet (mere inches).
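The "mere inches" figure can be checked with the familiar rule-of-thumb approximation of 8 inches per mile squared (an approximation only, but fine at this range):

```python
def curve_drop_inches(distance_miles):
    """Approximate drop of the surface below a tangent line (8 in/mi^2)."""
    return 8 * distance_miles ** 2

d_miles = 4270 / 5280  # distance to the Empire State Building, in miles
print(round(curve_drop_inches(d_miles), 1))  # about 5.2 inches
```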

Actually, even if I remove refraction from the equation, over this distance it makes no difference to the results:



As shown, though it makes a big difference to the figures in the right-hand columns (based on the radius of the earth in feet), only one angle shows a change (and then by only 0.00019°).

Out of interest, I thought I'd have a look to see what difference removing refraction makes to the South Sister-Rainier shot:



Angles change a little more this time - obviously - but the predicted order still stays more or less the same - the more distant mountains are most affected - and the results are still within about 1/10th of a degree of what we measure in reality.

Just in case any flat earthers are reading this, I hope that clears it up for you: it's curve that makes the difference here, and which makes the mathematics match reality, not refraction. :)
 
I saw a clip of Mark Sargent speaking on Australian television recently, where he was sitting in front of a photograph of Seattle taken from Kerry Park:

Screenshot (571).png

In the distance is Mount Rainier, and I wondered, with the smattering of tall buildings in the foreground, whether this image couldn't be used to demonstrate the curve of the earth, using the perspective calculator.

I'll give it a try.
 
Funnily enough, here's Darryl Marble also sitting in front of the same vista, with a few extra buildings:

marble kerry park.jpg
(From 1-8: Space Needle; Two Union Square; Columbia Center; 1201 3rd Avenue; Amazon Day 1; 1420 5th Avenue; Mount Rainier; Safeco Plaza)

In terms of predictions, the only real difference - by more than 0.01° - is the position of Mount Rainier: the sphere earth calculator has it below 1420 5th Avenue, while the flat earth calculator has it above it - a 0.2° swing either side.

Where's the curve, Darryl and Mark?

It's behind you. :)
 
It's a fun little exercise, but because the margins are quite small - it's not such a dramatic demonstration as the South Sister image - I don't think I'll be offering it as a challenge to flat earthers. Something a bit more conclusive and less finicky is better.
 
Hmm I guess Kerry Park isn't really that big:
Metabunk 2019-04-02 11-33-06.jpg

Marble's image wasn't from there though. Actually, it probably was, just from the other end of the park.
 
Do you have a precise location for the camera?

I know it was taken from Kerry Park in Seattle - famous overlook - but the precise spot I'm not sure of. It looks like Darryl's is a little to the west of Mark's, judging by the alignment of the Space Needle and the Columbia Center.
 
Here's a Kerry Park image with a good landmark on it:



The coordinates for the image above, right by the sign, are 47.629505, -122.360542, and the elevation for that point is given as 341 feet in WGS84.

Popping those figures in the calc I get the following:

upload_2019-4-2_21-36-13.png

So, again, Rainier is the big mover.

The only thing I'm wondering about here is the elevation of the skyscrapers. For the 5th Avenue, Manhattan picture it wasn't an issue, as it was all taken from street level. But for these ones I had to find ground level above sea level and add it to the building height, which may not be as accurate. I guess it all depends on how foundations and levelling are factored in.
 
For the 5th Avenue picture it wasn't an issue, as it was all taken from street level.
I assume you mean 5th Avenue in Manhattan, as opposed to the 5th Avenue address in your chart above. BTW, you have #6 as Mt Rainier in your first pic, so I still don't know what "1420 5th Avenue" in Seattle is.
 
I made a panorama of some of JTolan's footage and ran the perspective calculator on it. First of all, here's his image, labelled and straightened (his original shot wasn't level):



Now I can input the distances and elevations and receive the predicted viewing angles for both models:

upload_2019-4-3_13-10-42.png

Peaks - and one building - are ordered here by their actual height in the panorama (pixel in photo). A quick glance down the predicted angle columns shows just how closely the sphere calculator comes to getting the positions right, with just a couple of those very close together (in both the image and their predicted angles) switching places. The flat results, on the other hand, bear no resemblance to reality: perhaps most notably in the case of San Jacinto, which is level with the US Bank Tower, but should be far above it; Kitching Peak, which should have 8 landmarks below it, but isn't visible at all; and Tahquitz Peak, which ought to be a good half-degree above Workman Hill, but is at the same apparent height.

For the sphere earth, the pixel position can also be predicted, and this is shown in the far-right column. All 23 visible landmarks are within 5 pixels of the calculator prediction, and 17 of them within 2 pixels.

Doing the pixel prediction for the flat earth is difficult, since there are no two points that can be said to be accurate, but the best I could get gave an average discrepancy of about 40 pixels, plus or minus.

I was thinking how best to present this, but, though it's a really nice example of how well relative positions can be predicted, it's probably a bit complex, and the South Sister shot is still the best and clearest for the purpose.
 

Attachments

  • Curve calculators.xls (175 KB)
This may help as part of the whole thing: I took a shot from the balcony and measured distances and elevations to a number of objects in the picture, as well as a hill about five miles away:

balcony perspective dotted.jpg

This is what the calculator shows:

upload_2019-4-11_12-4-37.png

It's not so much about any difference between the two models - as far as the mountain goes, there's barely any - but providing an example and a way that people can demonstrate for themselves that there's no great mystery as far as curve calculators go, and that perspective is very obviously included in the equation.

Note: if anybody wants to do such a small scale test, they have to be extremely precise with the measurements. Unlike long range shots, where camera height can be within 10 or 20 feet and still give the right results, something like the above needs to be pretty much accurate to the centimetre, as small changes make big differences to the resulting angles.
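To put numbers on that sensitivity, here's a sketch comparing how much a 1 cm height error shifts the angle at balcony range versus long range (the distances are illustrative):

```python
import math

def angle_deg(height_m, distance_m):
    return math.degrees(math.atan(height_m / distance_m))

# 1 cm height error on an object 3 m away (balcony scale)
close_shift = angle_deg(1.01, 3) - angle_deg(1.00, 3)

# 1 cm height error on a hill 8 km away (long-range scale)
far_shift = angle_deg(1000.01, 8000) - angle_deg(1000.00, 8000)

print(f"{close_shift:.3f} deg vs {far_shift:.7f} deg")
```

The close-range error is thousands of times larger, which is why the balcony test needs centimetre accuracy while the mountain shots are forgiving.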
 