Claim: "UAP researcher" released clear smoking-gun photo of Orb captured by photographer

One thing you can say about the government, if they disclose something, they disclose it for everybody. Not like this.
(Off topic, but I don't want your statement to go without checking.)
I wish that were true, but "the government" is composed of individuals. Regarding the Capitol videos of the riots of Jan. 6, 2021, they were first released to a single right-wing television commentator, with the predictable result that "spin" won the news race before facts.
On Monday, Axios reported that McCarthy “has given Fox News’ Tucker Carlson exclusive access to 41,000 hours of Capitol surveillance footage from the Jan. 6 riot,” according to sources familiar with the matter.
Content from External Source
https://www.vanityfair.com/news/2023/02/kevin-mccarthy-tucker-carlson-january-6
 
Nope. Bright colours (at any level of saturation) move closer to each other as you make them brighter - the cone narrows as you go up. At the very top, the concept of saturation (and hue) loses all meaning. H=0, S=0 and H=180, S=100% are indistinguishable at L=1. And I don't mean "to perception", I mean "as points in 3D space".
If you do a level adjustment as demonstrated by @Miss VocalCord, in RGB space, colors like #ff0000 or #00ffff are unaffected. If they were brightened, #ff0000 would become e.g. #ff7777, which would desaturate it, but there's no indication on the manipulated picture that this happened: the darkest areas remain dark.

The operation has the potential to shift hues, as #ff3f00 (reddish orange) could become #ffbf00 (yellowish orange).

We do not know exactly which operations Qvist performed, and which color space he performed them in.
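A quick way to sanity-check those hex examples is Python's standard colorsys module (a minimal sketch; the hex values are the ones discussed above):

```python
import colorsys

def hex_to_hsv(hex_colour):
    """Convert '#rrggbb' to (hue, saturation, value), each in 0..1."""
    r, g, b = (int(hex_colour[i:i + 2], 16) / 255 for i in (1, 3, 5))
    return colorsys.rgb_to_hsv(r, g, b)

print(hex_to_hsv("#ff0000"))  # (0.000, 1.00, 1.0) - saturated red
print(hex_to_hsv("#ff7777"))  # (0.000, 0.53, 1.0) - same hue, desaturated
print(hex_to_hsv("#ff3f00"))  # (0.041, 1.00, 1.0) - reddish orange
print(hex_to_hsv("#ffbf00"))  # (0.125, 1.00, 1.0) - yellowish orange: the hue moves
```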
 
If you do a level adjustment as demonstrated by @Miss VocalCord, in RGB space, colors like #ff0000 or #00ffff are unaffected. If they were brightened, #ff0000 would become e.g. #ff7777, which would desaturate it, but there's no indication on the manipulated picture that this happened: the darkest areas remain dark.

The operation has the potential to shift hues, as #ff3f00 (reddish orange) could become #ffbf00 (yellowish orange).

We do not know exactly which operations Qvist performed, and which color space he performed them in.

I agree that not knowing what manipulation of the image he performed makes discussing them somewhat academic. We shouldn't even be having this discussion, as he shouldn't have performed this amateurish information-destroying manipulation of the image in the first place. However, you do appear to be mangling your arguments:
- you're jumping from L to V without warning, making statements that do not apply to both, and not making it clear which of the two you are referring to or why you've flipped case;
- you're jumping between assuming black->black and black->non-black, making statements that do not apply to both, and not making it clear which of the two cases you are referring to or why you've flipped case;
- it's not clear whether you consider Miss VocalCord's level adjustment to be "brightening" or not, yet you jump between the two without warning, making statements that do not apply to both, and not making it clear which of the two cases you are referring to or why you've flipped case.
Because of that, I can't pin down your argument into anything precise enough to argue against.

However, one point I can identify and disagree with is that one of the darkest areas on the image is the butterfly's wings, and many of those pixels have *not* remained dark - that's the whole point of his manipulation of the image.

Edit: A second objection - your hue shift absolutely isn't a hue shift, those colours are the same hue.
EditEdit: I retract that second objection. The mapping from cube to bicone isn't linear, straight lines get bent.

@Miss VocalCord - is your image manipulation tool really modifying just V? Was L an option? Any reason you chose V rather than L? It's the far less intuitive one to choose.
 
I made two errors. One in arithmetic (!) and one in sensor size. I shouldn't have stayed up late; I should have waited until the weekend!

Sensor size 1 x 1.9 inches is way too big for a cell phone camera, as I should have well known. A 1/1.9" wide (main) sensor apparently means 1 over 1.9", which equals 0.52 inches, for the wide-angle-lens camera. That way of measuring frame size is a new one on me. 0.52" = 13.208 mm for the wide-angle camera sensor. Is that the diagonal measurement?

Sensor size really = 13.208 mm diagonal(?). Is that right?

And my estimate for perfect point of focus should have been about(!) 50 cm.
 
I made two errors. One in arithmetic (!) and one in sensor size. I shouldn't have stayed up late; I should have waited until the weekend!

Sensor size 1 x 1.9 inches is way too big for a cell phone camera, as I should have well known. A 1/1.9" wide (main) sensor apparently means 1 over 1.9", which equals 0.52 inches, for the wide-angle-lens camera. That's a new one on me. It equals 13.208 mm for the wide-angle camera sensor. Is that the diagonal measurement?

Sensor size really = 13.208 mm diagonal (?). Is that right?
See: https://www.metabunk.org/threads/cl...rb-captured-by-photographer.13182/post-303661 on previous page
 
So there's another factor? You have to divide by 1.5? Krazy. Why not just give the actual size in mm?

I should take my own advice and slow down, and wait until Saturday to figure this out. Problem with that is I have a ton of work to do over the weekend.

And now I'm late for work.
 
@Miss VocalCord - is your image manipulation tool really modifying just V? Was L an option? Any reason you chose V rather than L? It's the far less intuitive one to choose.
I forgot to mention I used GIMP as the tool (so people can check and try it for themselves). I only wanted to show how easy it was to get 'magic' colors from something which looks black to the human eye. I didn't think too hard about which manipulation I was using, and the Curves tool gave the easiest effect to see.

So I used the "Curves" with the 'Value' setting: (from the help file)
The Curves tool is the most sophisticated tool for changing the color, brightness, contrast or transparency of the active layer or a selection. While the Levels tool allows you to work on Shadows and Highlights, the Curves tool allows you to work on any tonal range. It works on RGB images.
....
Value
The curve represents the Value, i.e. the brightness of pixels as you can see them in the composite image.
.....
Main Editing Area
The horizontal gradient: it represents the input tonal scale. It, too, ranges from 0 (black) to 255 (white), from Shadows to Highlights. When you adjust the curve, it splits into two parts; the upper part then represents the tonal balance of the layer or selection.

The vertical gradient: it represents the destination, the output tonal scale. It ranges from 0 (black) to 255 (white), from Shadows to Highlights.

The chart: the curve is drawn on a grid and goes from the bottom left corner to the top right corner. The pointer x/y position is permanently displayed in the top left part of the grid. By default, this curve is straight, because every input level corresponds to the same output tone. GIMP automatically places a point at both ends of the curve, for black (0) and white (255).
Content from External Source
If you do some color picking over the black area the RGB values are about this:
Red = +/- 1%
Green = 2-5%
Blue = 6-10%

So there isn't an awful lot of data in that area. The maximum width I found for this black part of the image is 40 pixels; the height of the main black area is around 30 pixels.

Also if you look at the bottom of the object there seems to be something sticking out:
metabunkbutterfly3.png
So it isn't that 'orbish' in the end, I would say.
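For anyone who wants to reproduce the effect outside GIMP, here's a minimal sketch in Python (standard-library colorsys only). The ~1%/4%/8% pixel comes from the colour picking above; the 8x gain is an arbitrary stand-in for the actual curve, and whether GIMP applies its 'Value' curve to HSV V or to each RGB channel separately is exactly the ambiguity discussed below - for a plain linear gain below clipping the two give the same result:

```python
import colorsys

def boost_value(r, g, b, gain=8.0):
    """Crudely mimic a steep Curves adjustment on the HSV Value channel."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb(h, s, min(1.0, v * gain))

# A near-black pixel from the butterfly area: ~1% red, ~4% green, ~8% blue.
print(boost_value(0.01, 0.04, 0.08))
# -> about (0.08, 0.32, 0.64): a clearly visible blue. Hue and saturation
#    are untouched - the 'magic' colours were in the data all along.
```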
 
I forgot to mention I used GIMP as the tool (so people can check and try it for themselves). I only wanted to show how easy it was to get 'magic' colors from something which looks black to the human eye. I didn't think too hard about which manipulation I was using, and the Curves tool gave the easiest effect to see.

So I used the "Curves" with the 'Value' setting: (from the help file)
The Curves tool is the most sophisticated tool for changing the color, brightness, contrast or transparency of the active layer or a selection. While the Levels tool allows you to work on Shadows and Highlights, the Curves tool allows you to work on any tonal range. It works on RGB images.
....
Value
The curve represents the Value, i.e. the brightness of pixels as you can see them in the composite image.
[...]
GIMP automatically places a point at both ends of the curve, for black (0) and white (255).
Content from External Source

OK, GIMP too is being sloppy with terminology. There's no unique value=255 point in HSV space, yet it's identified one, white. Also, its explanation - "brightness" - sounds more like lightness, which indeed does have a unique L=255 point, so it looks like it thinks it's working on lightness in HSL space. (Or faking it in RGB space, and possibly making the same cube/bicone error that I made above while so doing!) So one would expect the saturated primaries to become less saturated and converge towards white in this operation.
[edit: double negative removal]
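The asymmetry is easy to demonstrate with Python's colorsys (a minimal check; note colorsys returns HLS as hue, lightness, saturation):

```python
import colorsys

# Saturated red: already at maximal Value in HSV, but only mid Lightness in HSL.
print(colorsys.rgb_to_hsv(1.0, 0.0, 0.0))  # (0.0, 1.0, 1.0) - V is already 1
print(colorsys.rgb_to_hls(1.0, 0.0, 0.0))  # (0.0, 0.5, 1.0) - L is only 0.5

# White is the one and only point with L = 1 in HSL...
print(colorsys.rgb_to_hls(1.0, 1.0, 1.0))  # (0.0, 1.0, 0.0)
# ...whereas V = 1 in HSV is a whole plane of colours, not a single point.
print(colorsys.rgb_to_hsv(1.0, 1.0, 0.5))  # (~0.167, 0.5, 1.0) - V = 1 too
```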
 
Then why the roadblocks?

What a joke.

There is nothing left to analyse other than people's characters at this point.

We're only really helping him promote it now. The old "must be something if the other side of the fence is fighting so hard against it".
I think they're making up a private-sector version of "classified data."

UFO people love classified data. It's sexy and exciting and lets your imagination run wild — especially if there's unclassified teaser imagery, with the prospect of much more definitive information that we just can't get our hands on, e.g., Gimbal. Your basic ad hoc hypothesis.

I'd actually be surprised if they give out the high-res version to anyone. I think it's all a ruse to keep people excited, thinking it's being studied by scientists. When actually the high-res version, with its appendages and whatnot, shows a damn butterfly.
 
I think it's a waste of time. It looks like a butterfly. It looks nothing at all like a sphere. But if they keep the photo inaccessible, then they can make vague claims about analysis that cannot be verified.

I'm not interested in it unless the original photo is released.
 
About the sensor size...

Sensor sizes are expressed in inches notation because at the time of the popularization of digital image sensors they were used to replace video camera tubes. The common 1" outside diameter circular video camera tubes have a rectangular photo sensitive area about 16 mm on the diagonal, so a digital sensor with a 16 mm diagonal size is a 1" video tube equivalent. The name of a 1" digital sensor should more accurately be read as "one inch video camera tube equivalent" sensor. Current digital image sensor size descriptors are the video camera tube equivalency size, not the actual size of the sensor. For example, a 1" sensor has a diagonal measurement of 16 mm.[26][27]

Sizes are often expressed as a fraction of an inch, with a one in the numerator, and a decimal number in the denominator. For example, 1/2.5 converts to 2/5 as a simple fraction, or 0.4 as a decimal number. This "inch" system gives a result approximately 1.5 times the length of the diagonal of the sensor. This "optical format" measure goes back to the way image sizes of video cameras used until the late 1980s were expressed, referring to the outside diameter of the glass envelope of the video camera tube. David Pogue of The New York Times states that "the actual sensor size is much smaller than what the camera companies publish – about one-third smaller." For example, a camera advertising a 1/2.7" sensor does not have a sensor with a diagonal of 0.37 in (9.4 mm); instead, the diagonal is closer to 0.26 in (6.6 mm).[28][29][30] Instead of "formats", these sensor sizes are often called types, as in "1/2-inch-type CCD."

So, sensor sizes are described in terms of an obsolete technology, and the word "equivalent" is left out.

In the example, a 1" sensor actually has a 16 mm diagonal; you get the actual size by dividing the nominal 25.4 mm by 1.59.

But in the example given above, the way to convert the advertised size to the actual size is to divide by 1.423. Sheesh.

So: The sensor is advertised as 1/1.9" (diagonal)
To find the true width and height of the sensor in mm:

Step One: 1 divided by 1.9 = 0.526"
Step Two: 0.526" divided by 1.423 = 0.370"
Step Three: Convert to mm = 9.398mm diagonal (Which is maybe the true size.)
Step Four: Determine the shape of the sensor. Is it 4:3?
Step Five: Use geometry to determine the width and height of the sensor in mm.

Do we know the shape of the sensor? What is the ratio of width to height?

The aspect ratio of the photo looks like 16:9 to me. But that doesn't tell us about the sensor. According to the Internet, cell phone camera sensors are 4:3. But...
Are you sure about your numbers? The iPhone 13 has a chip (image plane) that is 5 x 4 mm in size. The diagonal is about 6.4 mm.
My figure for the diagonal is 9.398mm. I'm willing to believe I made a mistake. Where did you get your figure?
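The whole conversion chain fits in a few lines; a sketch based on the quoted description above (the ~1.4-1.6 "fudge factor" is only a rule of thumb, and the 4:3 aspect is an assumption):

```python
import math

def type_to_diagonal_mm(type_string, fudge=1.5):
    """Convert an 'optical format' like 1/1.9" to an approximate real diagonal."""
    num, den = type_string.rstrip('"').split("/")
    nominal_in = float(num) / float(den)   # 1 / 1.9 = 0.526 in
    return nominal_in * 25.4 / fudge       # nominal mm, shrunk by the fudge factor

def diagonal_to_wh_mm(diag_mm, aspect=(4, 3)):
    """Width and height in mm for a given diagonal and aspect ratio."""
    ax, ay = aspect
    unit = diag_mm / math.hypot(ax, ay)
    return ax * unit, ay * unit

d = type_to_diagonal_mm('1/1.9"')          # ~8.9 mm (~9.4 mm with fudge=1.423)
print(d, diagonal_to_wh_mm(d))             # 4:3 -> roughly 7.1 x 5.3 mm
```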
 
A discussion we should have:

It seems to me after a few minutes of research that there are three different cameras. Each lens has a different sensor. The telephoto lens uses folded optics to fit into the case. Is this right?
 
A discussion we should have:

It seems to me after a few minutes of research that there are three different cameras. Each lens has a different sensor. The telephoto lens uses folded optics to fit into the case. Is this right?
With iPhones, only the iPhone 15 Pro Max has periscope optics. The Exif here says iPhone 13, which has two normal cameras.
 
So this is why the zoom lens maxes out at only 3x? There's no room in the case for more elements/a longer focal length, I'm guessing.
 
The iPhone 13 Pro has three different lenses - zoom telephoto, wide, ultra-wide, but only two sensors? I don't get it.
 
The iPhone 13 Pro has three different lenses - zoom telephoto, wide, ultra-wide, but only two sensors? I don't get it.
Two? I think the 13 Pro has 3 sensors, one for each lens. The non-Pro lacks the telephoto lens + sensor.
 
Great. You can test the accuracy of the Exif data. Specifically, the:

focus_distance_range
hyperfocal_distance
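For anyone pulling these fields themselves, here is a sketch using Phil Harvey's exiftool (the snake_case names above are one viewer's spelling; exiftool prints them as "Focus Distance Range" and "Hyperfocal Distance", the latter a computed Composite tag; the filename is a placeholder):

```python
import subprocess

# Assumes exiftool is installed and on PATH; "photo.heic" is a placeholder.
result = subprocess.run(
    ["exiftool", "-FocusDistanceRange", "-HyperfocalDistance", "photo.heic"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```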
 
The Exif data is:

hyperfocal_distance - 3.29 m
I'm taking this to mean the hyperfocal distance for this lens while set to f/1.5.


focus_distance_range - 0.28 – 0.72 m

If this can be trusted, the lens was focused pretty close. About(!) 50 cm (19.7 inches). Depth of field (the zone which appears to us humans to be in good focus) would be from 11 to 28 inches.


If the perfect point of focus is at about(!) 50 cm and the hyperfocal distance for the lens at f/1.5 is 329 cm... that means that distant objects will not be in good focus.

Now, a naive person taking a casual look at a snapshot like this won't notice anything wrong with the focus. But if we have access to the RAW format file of this Fernando Cornejo-Wurfl photo, I think we'd be able to see a definite difference in the quality of the focus on near and distant objects.

I think I already can see a difference in the quality of the focus on the Orb/butterfly versus the distant forest in the best resolution cropped version we have of the Orb/butterfly.

F6_ScnPWYAAy0YX.jpg


Much depends on the validity of this data. Which is questionable.

The way to test the validity is to take some test shots with the wide angle camera set to f/1.5 and truly focused on a target 50cm from the sensor. Let's see what the Exif data shows.

And let's look at the RAW file format test photo and see what the focus on distant objects looks like compared to the 50cm target; as well as objects within or nearly within the depth of field.

I know I'm volunteering someone else to do the work. But I'm selfish.
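For reference, here are the textbook thin-lens depth-of-field formulas behind numbers like these, as a sketch. The 5.7 mm focal length is the commonly quoted figure for the iPhone 13 wide camera, and the circle of confusion is picked so the computed hyperfocal distance matches the 3.29 m in the Exif; both are assumptions, and the near/far limits move around a lot with the circle of confusion you choose:

```python
def depth_of_field(s_mm, f_mm=5.7, n=1.5, coc_mm=0.0066):
    """Hyperfocal distance and near/far limits of acceptable focus, all in mm."""
    h = f_mm * f_mm / (n * coc_mm) + f_mm              # hyperfocal distance
    near = s_mm * (h - f_mm) / (h + s_mm - 2 * f_mm)
    far = s_mm * (h - f_mm) / (h - s_mm) if s_mm < h else float("inf")
    return h, near, far

h, near, far = depth_of_field(500)                     # focused at ~50 cm
print(h / 1000, near / 1000, far / 1000)               # ~3.29 m, ~0.43 m, ~0.59 m
```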
 
The way to test the validity is to take some test shots with the wide angle camera set to f/1.5 and truly focused on a target 50cm from the sensor. Let's see what the Exif data shows.

And let's look at the RAW file format test photo and see what the focus on distant objects looks like compared to the 50cm target; as well as objects within or nearly within the depth of field.

I know I'm volunteering someone else to do the work. But I'm selfish.

I'll see what I can do.

You want me to:
  • take a photo of an object set 50cm from the phone using the wide angle lens.
  • use the RAW format, which is Apple ProRaw resulting in a DNG file.
  • set the stop to f/1.5, assuming I can figure that out.
  • upload the photo here?
Do we want a typical background, like just out in the yard or a controlled background like a blank wall or some other objects at set distances?

I'll head out to the shop and get going.
 
You want me to:
  • take a photo of an object set 50cm from the phone using the wide angle lens.
  • use the RAW format, which is Apple ProRaw resulting in a DNG file.
  • set the stop to f/1.5, assuming I can figure that out.
  • upload the photo here?
More clues to the camera settings:

0 mif1
1 MiHE
2 miaf
3 MiHB
4 heic
f_number 1.5
exposure_program Program AE
iso 50
metering_mode Spot
subject_area 2570 1506 747 752
scene_type Directly photographed
exposure_mode Auto
white_balance Auto
Content from External Source
And a question to those in the know: does the "subject_area" data give us the picture element that the camera was auto-focusing on?
 
Objects at the same distance as in the original would be good. But I suppose anything beyond the hyperfocal distance would be okay.

Hmmm, I'm not sure about that. How far would be enough????

I'll see if I can find a manual on how to select f-stops. There are only two, apparently.

A way around that would be to set the shutter speed as high as possible and/or the ISO as low as possible. The camera would be forced to open up.
 
More clues to the camera settings:


subject_area 2570 1506 747 752

Content from External Source
And a question to those in the know: does the "subject_area" data give us the picture element that the camera was auto-focusing on?
It's a new one on me. If I had any sense, I'd already be retired and would have time to learn digital photography from the ground up; and photo analysis to boot. But I haven't any sense.

My supervisor and me.
 
There is no "raw format file" as far as I can tell from the Exif, just a HEIC from the phone. I think the original image was just taken with the standard iPhone camera app.
It's a new one on me. If I had any sense, I'd already be retired and would have time to learn digital photography from the ground up; and photo analysis to boot. But I don't have any sense.
As far as I know, this refers to the area the user touched on the screen to set the focus; on phones, this usually also sets the area the phone uses to set exposure.
 
-Interesting. So why are they going on about the RAW format version? They just don't know?

-So there is a manual focus mode. And that's a built-in spot meter. My poor old handheld spot meter is long gone.

-subject_area 2570 1506 747 752

How do we interpret this? It would be a very interesting thing to know.
 
I did a couple of tries, but the forum says the file type is incompatible and I can't load it, at least as a RAW image. All the online instructions for converting a RAW to jpeg on an iPhone say to scroll down to the "Duplicate 1 Photo" option under the Share button. There is no Duplicate 1 Photo option. I'll keep checking, maybe on my Mac.

The f stop can only be adjusted in portrait mode it seems, which doesn’t seem to work with the ultra-wide lens.
 
How else to share? Imgur accepts TIFF up to 20MB. It says PNG files over 5MB will be converted to JPEGs.

But a carefully cropped version will do.

Can you set the ISO down or the shutter speed up? And we're interested in the wide angle lens.
 
-Interesting. So why are they going on about the RAW format version? They just don't know?

-So there is a manual focus mode. And that's a built-in spot meter. My poor old handheld spot meter is long gone.

-subject_area 2570 1506 747 752

How do we interpret this? It would be a very interesting thing to know.
People sometimes mistakenly call the original file from the camera a raw file even though it isn't.
 
How else to share? Imgur accepts TIFF up to 20MB.

Can you set the ISO down or the shutter speed up? And we're interested in the wide angle lens.
Yeah, even converted to a PNG, it's too big for metabunk to load. I don't have any Imgur or other photo sharing accounts.

Shutter speed and ISO don't seem to be adjustable with the ultra-wide-angle lens. There is an "exposure" slider, but it's not an actual f-stop setting, just under- or over-exposure from standard. There are also a number of filters like "vivid" or "high contrast", but none of the normal settings found on a camera.

EDIT: I thought @jarlrmai had some sort of Google drive account for this type of stuff? I'll upload it wherever it works.
 
I'm assuming it's autofocus only. No MF mode?
You can lock the focus on a part of the scene by tapping on it, then holding your finger on it.

Can you set the ISO down or the shutter speed up? And we're interested in the wide angle lens.

Not in the default camera app, which is what like 99.99% of all iPhone photos are taken with. There are lots of alternative apps where you can manually adjust everything.

I did a couple of tries, but the forum says the file type is incompatible and I can’t load it, at least as a RAW image.
To upload image files so they are not changed, you need to zip them.

I've attached a raw iPhone 13 Pro image I took.

Many people (including, sometimes, me) will refer to any in-camera original as a "raw" image. So "raw" might just mean the original .JPG or .HEIC file.
 

Attachments

  • IMG_6080 2.DNG.zip (20.3 MB)
-Interesting. So why are they going on about the RAW format version? They just don't know?

-So there is a manual focus mode. And that's a built-in spot meter. My poor old handheld spot meter is long gone.

-subject_area 2570 1506 747 752

How do we interpret this? It would be a very interesting thing to know.
People sometimes mistakenly call the original file from the camera a raw file even though it isn't.
This appears, from some quick experiments, to be the X, Y, W, H of the focus area.
Take the uncropped image (if any of them are uncropped), rescale to match the iPhone resolution if needed, and then you have the place where the photographer placed their finger.
 
I took the same image with five different focus points: Top Left, Top Right, Bottom Right, Bottom Left, and Center.

All five photos share the same metadata: Orientation top, left (0°); Image resolution 72 x 72 dpi; Image size 4032 x 3024.

Subject area for each:

407 687 747 752
3431 705 747 752
800 2547 747 752
3440 2571 747 752
2063 1585 747 752

So these appear to be the pixel coordinates of the center of the subject area (measured from the top left), along with its width and height.

2023-10-15_12-34-18.jpg
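Taking that reading, turning a subject_area string into a rectangle is straightforward (a small sketch; the 4032 x 3024 dimensions are from the figures above):

```python
def subject_rect(subject_area, img_w=4032, img_h=3024):
    """Interpret 'cx cy w h' as a centre point plus width and height."""
    cx, cy, w, h = map(int, subject_area.split())
    left, top = cx - w // 2, cy - h // 2
    return {
        "rect": (left, top, left + w, top + h),   # pixel corners
        "centre_frac": (cx / img_w, cy / img_h),  # tap point as a fraction of the frame
    }

# The value from the Cornejo-Wurfl photo's Exif:
print(subject_rect("2570 1506 747 752"))          # tap at roughly (0.64, 0.50)
```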
 
To upload image files so they are not changed, you need to zip them.

I've attached a raw iPhone 13 Pro image I took.
The Shard! Very nice, I just took one too:

IMG_7369.jpeg

And I just learned how to zip files. The zip files can be opened and played with as needed. The UFO is almost exactly 50cm from the camera. The f-stop was at 1.8.

IMG_5777.PNG
IMG_5776.PNG
 

Attachments

  • IMG_5773.DNG.zip (24.7 MB)
  • IMG_5775.jpeg.zip (27.4 MB)