Alleged Flight MH370 UFO Teleportation Videos [Hoax]

An observation from the webarchive source video: from the 'teleportation zap' frame onwards, the entire scene is significantly sharper than before the zap. No frame-to-frame change in sharpness occurs for the entirety of the video until the zap. Make of that what you will.

before-and-after-zap.jpg
After the plane disappears there's no movement anymore, so compression algorithms would have an easier task and give a sharper image.
 
When analysing frame by frame, note that the plane/orb/flash only update every 4th frame, so that footage is effectively 6 FPS, whereas the other elements (numbers/cursor/screen drag) update on every frame.
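This cadence claim is easy to check programmatically. A sketch with synthetic stand-in frames (I'm inventing the frame contents here; the real test would load the actual video frames): sum the absolute per-frame differences and look for the every-4th-frame pattern.

```python
import numpy as np

# Synthetic stand-in: 6 unique "plane" frames, each held for 4 frames,
# emulating 6 FPS footage embedded in a 24 FPS video.
rng = np.random.default_rng(1)
source_6fps = rng.random((6, 8, 8))
frames_24fps = np.repeat(source_6fps, 4, axis=0)

# Per-frame difference energy: zero while a frame is held,
# nonzero only at each 4-frame boundary.
diffs = np.abs(np.diff(frames_24fps, axis=0)).sum(axis=(1, 2))
changed = (diffs > 0).astype(int)
print(changed)  # -> pattern like [0 0 0 1 0 0 0 1 ...]
```

Running the same difference sum on a crop around the plane vs. a crop around the cursor should show the two different update cadences directly.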
 
The stereo RegicideAnon video is cropped. There is also the monocular Jose Matos video that is less cropped. This implies that the original source video was less cropped than the RegicideAnon video. It is possible RegicideAnon created a stereo pair from a single video, or that the Jose Matos video was one of two videos available at the time. RegicideAnon may have combined the two videos while Jose Matos only uploaded one of them.

Regarding the second satellite view, I was wondering if it is possible that interferometric synthetic aperture radar would be on the same satellite as a visible imaging system, essentially producing a depth map. Does anyone have satellite imaging experience and can confirm whether this sort of thing is common practice?

The uncalibrated trackball theory is a good explanation for why the cursor drifts. But it doesn't explain the smooth appearance of the cursor drift. I attached a video showing what happens with normal cursors vs what happens with a VFX animation. If this were real, we would need another explanation for how the cursor moves so smoothly. A non-OS rendered cursor is one possibility, or if the original is on a 2560x1440 screen, then downsampled 2x to 1280x720 and compressed once for distribution, then compressed again for YouTube, that may explain it but we'd need to test.

The chaotic movement of the cursor would discount keyframing. Here is a minimum-blend composite image showing the cursor trail across frames 850-1107: before, during and after the cursor drift. If this were lazy VFX we would expect the cursor to move in exactly equal-sized steps; otherwise the animator would have to place each keyframe manually. Alternatively, we may be able to find evidence that certain remote desktop software exhibits this kind of cursor movement.

1692132976928.png
 

Attachments

  • cursor-explanation.mp4
    93.4 KB
Looks like an uncalibrated trackball. You can see the same thing with Xbox controllers and video game joysticks. They require dead-zone parameters in-game to prevent drift.
That’s not what you’re looking for, though — the drifting cursor is moving with subpixel precision, not snapping from one position in the pixel grid to another. There’s no device or operating system which does this.
 
When analysing frame by frame, note that the plane/orb/flash only update every 4th frame, so that footage is effectively 6 FPS, whereas the other elements (numbers/cursor/screen drag) update on every frame.
Does this mean that the plane footage is one video and the cursor a legit screen recording?
 
That’s not what you’re looking for, though — the drifting cursor is moving with subpixel precision, not snapping from one position in the pixel grid to another. There’s no device or operating system which does this.
Does this mean that the Reddit claims of it being a virtual machine or custom OS are meaningless?
 
Can anyone please explain how video from a satellite of a plane flying within the frame shows ZERO parallax? Is it even possible?
A satellite orbiting at 200-1000 km travels at 7.5-10 km/s. Here it observes an airplane in flight for 60 seconds while panning across an area on the ground covering roughly 3.2 km, according to the satellite coordinates from the beginning to the end of the video (assuming the coordinates mark the center of the frame on Earth).
MH370 Plot.png
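For a rough sense of scale, here's the back-of-envelope parallax you'd expect. All numbers below are my assumptions for illustration (a 500 km orbit, typical LEO orbital speed, an airliner at cruise altitude); none come from the video.

```python
# Assumed values for a back-of-envelope estimate; none come from the video.
SAT_ALT_KM = 500.0      # assumed LEO imaging altitude
SAT_SPEED_KMS = 7.6     # typical LEO orbital speed
PLANE_ALT_KM = 10.0     # airliner cruise altitude
DURATION_S = 60.0

# Over the clip the satellite itself moves this far (the effective baseline):
baseline_km = SAT_SPEED_KMS * DURATION_S

# For a near-nadir view, an object at height h appears to shift against
# the ground by roughly baseline * h / (sat_alt - h).
parallax_km = baseline_km * PLANE_ALT_KM / (SAT_ALT_KM - PLANE_ALT_KM)
print(f"baseline ~{baseline_km:.0f} km, "
      f"expected plane-vs-ground shift ~{parallax_km:.1f} km")
```

Even with generous error bars on the assumed orbit, an apparent shift on the order of kilometres over the minute dwarfs the 3.2 km pan implied by the coordinates, so zero visible parallax is hard to square with a real LEO satellite.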
Aside from the parallax issue, do the deltas in the coordinates imply a distance travelled by the aircraft of only 3.2km in sixty seconds? Because that is only 192km/h, which is extremely slow and possibly below the stall speed for a fully laden 777.

Normal cruising speed is Mach 0.85 or ~900km/h according to several sources scraped by Google.

FD8DAD09-5A54-4E4F-840F-98D46183A3EB.jpeg
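The arithmetic behind those two figures (the cruise number assumes a speed of sound of roughly 295 m/s at cruise altitude, which is my assumption, not a sourced value):

```python
# Implied speed from the on-screen coordinate deltas: 3.2 km in 60 s.
distance_km = 3.2
speed_kmh = distance_km / 60.0 * 3600
print(f"implied: {speed_kmh:.0f} km/h")   # -> 192 km/h

# Normal cruise for comparison, assuming ~295 m/s speed of sound at altitude.
cruise_kmh = 0.85 * 295 * 3.6
print(f"cruise:  {cruise_kmh:.0f} km/h")  # ~900 km/h
```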
 
Does this mean that the Reddit claims of it being a virtual machine or custom OS are meaningless?
Reddit has got to the point of constructing an elaborate system of coincidences and unfalsifiable hypotheses to explain away every flaw in the videos.

Last I saw they were talking about some specific version of Citrix that renders the cursor on the server side, and maybe with the right combination of ultra hi-res remote workstation and latency correction you might get sub-pixel cursor drift. But nobody has actually demonstrated it. (And then with an impressive logical backflip, someone will say “how could a hoaxer have known to replicate such obscure details about this particular implementation quirk of Citrix? It must be a real video!”).
 
Aside from the parallax issue, do the deltas in the coordinates imply a distance travelled by the aircraft of only 3.2km in sixty seconds? Because that is only 192km/h, which is extremely slow and possibly below the stall speed for a fully laden 777.

Normal cruising speed is Mach 0.85 or ~900km/h according to several sources scraped by Google.

FD8DAD09-5A54-4E4F-840F-98D46183A3EB.jpeg
In principle it could be a slowed-down video, but the aircraft's movement looks rather natural speed-wise against the clouds, judging purely visually.
 
Aside from the parallax issue, do the deltas in the coordinates imply a distance travelled by the aircraft of only 3.2km in sixty seconds? Because that is only 192km/h, which is extremely slow and possibly below the stall speed for a fully laden 777.

Normal cruising speed is Mach 0.85 or ~900km/h according to several sources scraped by Google.

FD8DAD09-5A54-4E4F-840F-98D46183A3EB.jpeg
I am making the assumption that the coordinates are a point on Earth at the center of the image. I am simply curious about the lack of parallax. Until this is explained, nothing makes me believe this is actual video of a jet. Even if it were filmed from another plane close by, we should see parallax.
 
Are there many examples of confirmed satellite filming of an aircraft in flight, from identifiable and reliable sources, that we can compare to the "MH-370" video?

From this thread I assumed that such footage was quite common, but after a (quick) search I haven't found anything that isn't connected with esoteric claims about MH-370, or Russian "evidence" about MAS-17.
 
Here is a screenshot of the video. It supposedly shows the fuselage of the drone:

img1.jpg

The fuselage isn't quite round. The mesh-edges are clearly visible to me here. Here is the same image with the mesh-edges marked:

img2.jpg

If we look at a low-poly 3D model of a Predator MQ-1 Drone, the exact same mesh-edges can be seen.

Low-Poly MQ-1 Predator Drone 3D View: https://sketchfab.com/3d-models/low-poly-mq-1-predator-drone-7468e7257fea4a6f8944d15d83c00de3

img5.jpg
img3.jpg
img4.jpg


However, the vertices are not in exactly the same positions as in the video, so it could be a different 3D model of a drone with a different wireframe. But I'm pretty sure it's a 3D model.

For comparison, here is a picture of a drone. The fuselage is absolutely round:

img6.jpg
 
I am less certain than I was that the subpixel cursor movement is indicative of a hoax.

And I don't find the matching noise debunk convincing, because it just means that the second image in the pair was created with a depth map.

I'm going to take a break from this, but I think if someone wants to really debunk this they could show:

1. In practice, stereo satellite video is not generated from a single video like this was (e.g. using in-track stereo or SAR interferometry).
2. The subpixel movement can not be accounted for by downsampling a larger screen and compressing the video.

To demonstrate #2, I would propose a theory and test it by recreating an imagined pipeline.

I'm going to lay out a theory based on eyeballing the video. Let’s say the original resolution of the satellite viewing screen is 2560x1440. Then someone used remote viewing software to control the system. They are watching a noisy satellite video at 6fps. The telemetry text is yellow Courier, 24px tall, with a black shadow 1px below and 1px to the right. Then a 90% crop of the screen, like 2304x1296, was captured by Citrix and downsampled to 1280x720. The cursor motion was pixel-accurate on the original screen and drifting at 4px per second due to some kind of input controller bug or network glitch, but Citrix downsampled the coordinates to match a 1280x720 window, and rendered a high-resolution subpixel-accurate cursor server-side on top of the video at 12x18px. Finally, the video was compressed once for storage by Citrix. Then it was downloaded and uploaded to YouTube and Vimeo where it went through an additional compression step (though it was a different algorithm in 2014).
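To illustrate why the downsampling step of this hypothetical pipeline could produce sub-pixel cursor positions at all, here's a toy NumPy sketch (the screen strip size and cursor shape are made up for the demonstration): a cursor stepping in whole pixels on the hi-res screen lands on half-pixel centroids after a 2x box downsample.

```python
import numpy as np

def downsampled_centroid(x_hires, size=64):
    """Place a crude 4px-wide 'cursor' at integer column x_hires on a
    hi-res strip, 2x box-downsample it, and return the centroid of the
    result in low-res pixel coordinates."""
    hi = np.zeros((8, size))
    hi[2:6, x_hires:x_hires + 4] = 1.0
    lo = hi.reshape(4, 2, size // 2, 2).mean(axis=(1, 3))  # 2x2 box filter
    cols = lo.sum(axis=0)
    return (cols * np.arange(size // 2)).sum() / cols.sum()

positions = [downsampled_centroid(x) for x in range(10, 16)]
steps = np.diff(positions)
print(steps)  # whole-pixel hi-res steps become 0.5-pixel low-res steps
```

This only shows that sub-pixel positions can fall out of downsampling; whether they survive the cursor rendering and two rounds of compression is exactly what the proposed recreation would have to test.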

After simulating this process with video editing/VFX software, if the text and cursor look nothing like the subpixel smoothness of the best video we have from Jose Matos, then we should be able to say that this is not how it was made. It doesn't rule out the possibility of some other glitch or pipeline, but it would be helpful to have real evidence instead of just speculation. Also, showing that this does recreate the effect doesn't prove the video is real: the best hoaxer would have set up Citrix precisely to make their hoax look like it was secretly exfiltrated.

Unfortunately I don't see any other clear avenues for debunking this right now.
 
Are there many examples of confirmed satellite filming of an aircraft in flight, from identifiable and reliable sources, that we can compare to the "MH-370" video?

From this thread I assumed that such footage was quite common, but after a (quick) search I haven't found anything that isn't connected with esoteric claims about MH-370, or Russian "evidence" about MAS-17.
This is the only satellite footage including an airplane I can find.
 
This is the only satellite footage including an airplane I can find.

Great find. This SkyBox footage is a goldmine of useful data that can be applied even to classified NROL satellites; the physics is the same across the board, after all. The middle of the video has a shot of Kuala Lumpur with clouds at different heights that shows just how jarring the absence of parallax in the alleged UFO video's cloud layers really is.
 
The chaotic movement of the cursor would discount keyframing. Here is a minimum-blend composite image showing the cursor trail across frames 850-1107: before, during and after the cursor drift. If this were lazy VFX we would expect the cursor to move in exactly equal-sized steps; otherwise the animator would have to place each keyframe manually. Alternatively, we may be able to find evidence that certain remote desktop software exhibits this kind of cursor movement.

The VFX artists from Corridor Crew (6.17M subscribers) could be the right people to answer this; one member also had a go at hoaxing a UFO video, at 8:58, for his colleagues to spot.
They delve into debunking UFO videos as part of their content.
Just thinking a top VFX team could give us a concise answer, or at least an educated second opinion.


Source: https://www.youtube.com/watch?v=39SJAcNXCzM
 
I am making the assumption that the coordinates are a point on Earth at the center of the image. I am simply curious about the lack of parallax. Until this is explained, nothing makes me believe this is actual video of a jet. Even if it were filmed from another plane close by, we should see parallax.


Satellites have software that corrects parallax error, but whether we see it in this video is unclear. We need a real military satellite expert, lol.

RemapGOES14_ParallaxShift.gif
https://cimss.ssec.wisc.edu/satellite-blog/archives/217
 
Are there many examples of confirmed satellite filming of an aircraft in flight, from identifiable and reliable sources, that we can compare to the "MH-370" video?

There is, imo, not much available; I suspect none, as satellites are not designed to shoot video. Recent commercial satellites are starting to offer it, though.
But I doubt this whole "video from satellites" narrative. It sounds like nonsense to me that footage from the NRO is made public (the NRO does not make videos, btw).
 
How would this satellite software know what is and isn't viewable due to parallax? How would it know how far away objects / received light are, and in what "layer"?
 
Differences between the YouTube (yt) and Vimeo (vim) versions of the satellite video:
  1. yt is longer than vim. There are 36 additional frames at the start and 39 additional frames at the end; vim is "temporally cropped".
  2. There are approx. 50 more pixels on the left and right of vim, but you can see the black bars on each side, and the "NOL-22" text is not visible because of them; yt is "horizontally cropped".
  3. There are 2 more pixels at the top and bottom of yt; vim is "vertically cropped".
  4. There are brightness, contrast and saturation differences. To me it looks like there is more detail in the clouds in yt, indicating that vim has been "enhanced" (for looks, but losing data), but I can't rule out yt also being enhanced from another source.
    lenght_comparision.png
This means that a video which is either the best proof yet of aliens or the best fake of one was posted somewhere, and the only two people who reuploaded it both decided to modify it before doing so.
 
Normal cruising speed is Mach 0.85 or ~900km/h according to several sources scraped by Google.
You can't fly a standard-rate 2-minute turn (like the aircraft in the video) at that speed; they'd fly a half-rate 4-minute turn instead, or simply choose a convenient bank angle if not under ATC supervision.
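The aerodynamics behind that: in a coordinated turn the required bank angle is atan(v·ω/g), so at cruise speed a standard-rate (3°/s) turn demands an implausibly steep bank. The two speeds below are illustrative picks, not values from the video.

```python
import math

G = 9.81                    # m/s^2
omega = math.radians(3.0)   # standard rate: 360 degrees in 2 minutes

for v_kmh in (450, 900):    # illustrative: slow cruise vs ~Mach 0.85 cruise
    v = v_kmh / 3.6
    # Coordinated-turn bank angle: tan(bank) = v * omega / g
    bank = math.degrees(math.atan(v * omega / G))
    print(f"{v_kmh} km/h -> bank ~{bank:.0f} deg")
```

Airliners normally limit bank to roughly 25-30°, and the standard-rate turn at 900 km/h comes out far steeper than that, which is why a half-rate turn would be flown instead.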
 
But I doubt this whole "video from satellites" narrative. It sounds like nonsense to me that footage from the NRO is made public (the NRO does not make videos, btw).
It doesn't?
Article:
SBIRS is to use more sophisticated infrared technologies than the DSP to enhance the detection of strategic and theater ballistic missile launches and the performance of the missile-tracking function.

SBIRS High (also now simply referred to as "SBIRS") is to consist of four dedicated satellites operating in geosynchronous Earth orbit, and sensors on two host satellites operating in a highly elliptical orbit. SBIRS High will replace the Defense Support Program (DSP) satellites and is intended primarily to provide enhanced strategic and theater ballistic missile warning capabilities. SBIRS High GEO 1 was launched on 7 May 2011.[12] Two SBIRS sensors hosted on two classified satellites in highly elliptical orbit have already been launched,[13] probably as part of the NROL-22 (USA 184) and NROL-28 (USA 200) launches in 2006 and 2008.[14][15] USA 184 and USA 200 are believed by analysts to be ELINT satellites in the family of JUMPSEAT and TRUMPET; TRUMPET has been reported to have carried an infrared sensor called HERITAGE.

I struggle to see how NROL-22 could detect launches and track missiles without IR video capabilities.

Also obviously, a satellite designed to cover the Ex-USSR and China can't well see the Indian Ocean. See https://www.metabunk.org/threads/alleged-flight-mh370-ufo-teleportation-videos.13104/post-298392 by @Narfi.
 
The VFX artists from Corridor Crew (6.17M subscribers) could be the right people to answer this
I'd love to hear their take on it. The satellite video passes the tests that they usually pick up on (motion blur, consistent lighting, black levels). That said, their expertise is not in satellite imagery, and there may be things unique to satellite imagery that they would either misinterpret as fake or fail to pick up on as fake.

Watching their videos is a good reminder that if this is a hoax, these are unique videos when it comes to hoaxes. As far as I know, there are no other satellite videos of UFOs, or of UFOs this close to any other object, or in high resolution thermal.

How would this satellite software know what is and isn't viewable due to parallax? How would it know how far away objects / received light are, and in what "layer"?
The blog post mentions that when GOES-14 imagery is corrected, it uses "infrared imagery to estimate the height of the cloud". There are a few other ways to estimate depth, which I mentioned above (in-track stereo or SAR interferometry). This correction doesn't happen on the satellite but in software on the ground: the visible imagery is combined with a depth estimate to create a displaced image.

It sounds like nonsense to me that footage from the NRO is made public (the NRO does not make videos, btw)
The NRO does make videos, and is always seeking proposals for improving these capabilities.

Differences between the YouTube (yt) and Vimeo (vim) versions of the satellite video
I've looked more closely at a few versions of this video, and I think the only useful ones are the RegicideAnon video and the Jose Matos video. The Vimeo one is converted from 24fps to 29.97fps, and is missing frames as you mention. The Jose Matos video is exactly the same length as the RegicideAnon video. I suspect that the "original" or "source" video may have been two separate videos that were combined by RegicideAnon into a single side-by-side video (with some vertical black bars/cropping, for some unknown reason). And that Jose Matos just uploaded one of the two videos.
 
Regarding the second satellite view, I was wondering if it is possible that interferometric synthetic aperture radar would be on the same satellite as a visible imaging system, essentially producing a depth map. Does anyone have satellite imaging experience and can confirm whether this sort of thing is common practice?
During the initial test of STSS program in 2010, the Missile Defense Agency used two satellites, STSS-1 and STSS-2 to test stereoscopic missile tracking.
STSSDemoMissile2.jpg
Meaning that they used two infrared tracking satellites in very different positions in the sky to track a missile on all 3 axes (using "missile detecting sensors", not visible light sensors). Of course, looking at this map, you would immediately realize that you would get far more parallax if you combined two visible-light photos from these satellites (separated by "over 36 degrees of orbital angle"), but I'm pretty sure that if people on Reddit saw this, they would immediately start rambling about how it could have been used to generate a depth map.

This video is not actually stereoscopic SBS 3D; it's just shifted using cheap effects. The military would get no strategic benefit from converting a video to SBS 3D, nor would it make their job easier in any way - especially not when it comes to missile tracking and intercepting, the main purpose of these satellites (although maybe for mapping, as has been mentioned in the past). This fake video was created with a child's understanding of what the military would want from a spy satellite.
 
And I don't find the matching noise debunk convincing, because it just means that the second image in the pair was created with a depth map.

I'm going to take a break from this, but I think if someone wants to really debunk this they could show:

1. In practice, stereo satellite video is not generated from a single video like this was (e.g. using in-track stereo or SAR interferometry).
Your theory is plausible, but the fact that you can perfectly align the left and right views after a small un-shear of the image means that there is no depth data.
The only depth in the image pair is in the coordinate text and the cursor. The video is a 2D video viewed with some kind of VR viewing software that fakes the other image of the pair and adds a 3D text and cursor overlay.
Faking the 3D overlay is not a quick-and-dirty task, so I don't think it was done by the reuploader; it's more probable that it was already in the source videos.
 
The video is a 2D video viewed with some kind of VR viewing software that fakes the other image of the pair and adds a 3D text and cursor overlay
I think we agree on this point, but I'm not sure. The cursor and text are also affected by the depth map. Here is a video I made showing the text being affected. The cursor disparity is not fixed throughout the video but modulates slightly. This needs more thorough investigation, though; it could just be compression artifacts.

the fact that you can perfectly align the left and right views after a small un-shear of the image means that there is no depth data.
Yes it is? Just paste the overlay onto both videos with a small offset?
The video on the right is not an exact copy of the one on the left, nor simply warped by a slight shear; it's distorted by a depth map. This could have been handcrafted, to some extent, using a mix of a gradient plus the brightness of the clouds. Or it could have come from some kind of satellite imaging device that estimates depth. Here is a reconstruction of the depth map using StereoSGBM.
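To make the "second view from one image plus a depth map" idea concrete, here's a minimal NumPy sketch (the image and depth values are invented; real depth-image-based rendering tools also fill the holes this naive forward warp leaves):

```python
import numpy as np

def synthesize_right(left, depth, max_disp=4):
    """Naive depth-image-based rendering: shift each pixel left by a
    disparity proportional to its depth value (nearer -> larger shift)."""
    h, w = left.shape
    right = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            d = int(round(depth[y, x] * max_disp))
            if 0 <= x - d < w:
                right[y, x - d] = left[y, x]
    return right

rng = np.random.default_rng(0)
left = rng.random((8, 16))
# A vertical gradient depth map, like the one recovered from the video:
depth = np.tile(np.linspace(0, 1, 8)[:, None], (1, 16))
right = synthesize_right(left, depth)
# Top row (depth 0) is unshifted; bottom row is displaced by max_disp.
```

If the pair really was produced this way, the right view is just the left view plus per-pixel horizontal displacement, which is consistent with it aligning back onto the left after a simple warp.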

The military would get no strategic benefit from converting a video to SBS 3D, nor would it make their job easier in any way
Definitely not for missile tracking, but in the context of mapping and visible light imagery this definitely happens. Here's a demo video showing a product that supports this application. From a UX perspective, I can see how a slight 3D effect would help a user quickly distinguish whitecaps on the water from clouds and planes.
 
So would this depth mapping have to be able to make a depth map for each pixel for each object in the scene? Plane/clouds/contrail of the plane/alien orbs etc?
 
The video on the right is not an exact copy of the one on the left, nor simply warped by a slight shear; it's distorted by a depth map. This could have been handcrafted, to some extent, using a mix of a gradient plus the brightness of the clouds. Or it could have come from some kind of satellite imaging device that estimates depth. Here is a reconstruction of the depth map using StereoSGBM.

diff.png
This is the difference between the two sides for the frame with the flash. The right side was sheared with a value of 3 in GIMP. You could get an even closer match with subpixel precision for the offset and shear, but GIMP doesn't allow it; I may code something myself later.
You would get something completely different if there were depth data.

The depth map you get looks very much like a noisy vertical gradient, not at all like 3D clouds. There is no real depth data; the algorithm you used tried to find depth data that could explain the differences between the images, so its results are not significant.
 
depth_mismatch.png
The StereoSGBM depth map doesn't seem to match the general shape of the clouds.
Sorry, I don't really get the whole depth-map discussion - could anyone quickly explain what it means and what we would or wouldn't expect, if anyone is kind enough and has time on their hands?
 
So my biggest issues with this one (these have all probably been discussed here already; I have skimmed the thread) are:

- 'NROL-22' was the launch number; the satellite is 'USA-184'. Why would any footage be watermarked with the launch number rather than the actual asset ID?
- MH370 disappeared between ~01:00-02:30 and, as Sid has already pointed out, the moon was not high in the sky, so the illumination we see looks wrong. It doesn't appear to be IR in any way and in fact looks more like standard visible-band daylight to me.

Both of these points would require detailed knowledge of USA-184's technical capabilities, which are obviously not publicly available. I've looked at a lot of NRO-sat footage and have seen no watermarking, launch or satellite number.

Aside from all this, the idea that some group covered this alleged event up by replicating pieces of fuselage debris and scattering them throughout the Indian Ocean to wash up on beaches is completely absurd and would require a monumental effort to carry out. Not to mention that every group that identified the debris as MH370's would have to be part of the conspiracy, including (based on the Aviation Safety Report entries for debris analysis):

• the French Judicial Authority - Item No. 1 was found on 29 July 2015 in Saint-Denis, Réunion Island ... The item was retrieved by the local French authorities and shipped to the General Delegate of Armament Aeronautical Technique (DGA/TA) facility in Toulouse for detailed examination ... Although the name plate was missing, which could have provided immediate traceability to the aircraft (9M-MRO), the part was confirmed to be a right flaperon of the aircraft 9M-MRO by tracing the identification numbers of the internal parts of the flaperon to their manufacturing records at EADS CASA, Spain.

• the ATSB Laboratory in Canberra - many of the pieces were examined by this group

• "the Team" (in Malaysia) - most of the pieces were examined by this group

• "the Team" in collaboration with Science & Technology Research Institute for Defence (STRIDE) - "Item 16 - Cabin Interior Panel - The part has been determined to be almost certain from MH370."

• The South African Civil Aviation Authority - 1.12.4 Process for Recovery of Debris: At the time of writing of this report, the possibility exists that more debris might be found washed ashore, especially at the coasts of south east Africa. Arrangements have been made with the Civil Aviation Authorities there to retrieve and secure the debris and to be delivered to the Team for examination.
 
Sorry, I don't really get the whole depth-map discussion - could anyone quickly explain what it means and what we would or wouldn't expect, if anyone is kind enough and has time on their hands?
If you have 2 images of a scene with only the camera location changing between them, you can compute a depth map of the scene: an image where the intensity of a pixel corresponds to the distance of the object from the camera. That's called stereoscopic 3D.
For the scene we have, we should get something like this:
1692184785747.png
  • the ground is far so it's black
  • low altitude cumulus clouds are dark grey because they are closer to the satellite
  • cirrus clouds are high altitude clouds so they are even closer and a lighter grey
  • the plane might be somewhere between the clouds layers so it's an intermediate shade
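A toy one-scanline version of how such a depth map is actually computed (the scene values here are invented; real pipelines like StereoSGBM do this in 2D with smoothness constraints): for each pixel, find the horizontal shift that best aligns a small patch between the two views.

```python
import numpy as np

def disparity_1d(left, right, patch=3, max_d=4):
    """Brute-force block matching along one scanline: for each pixel,
    find the horizontal shift that best aligns a small patch."""
    n = len(left)
    disp = np.zeros(n, dtype=int)
    for x in range(patch + max_d, n - patch):
        ref = left[x - patch:x + patch + 1]
        errs = [np.abs(ref - right[x - d - patch:x - d + patch + 1]).sum()
                for d in range(max_d + 1)]
        disp[x] = int(np.argmin(errs))
    return disp

rng = np.random.default_rng(0)
scene = rng.random(40)
shifted = np.roll(scene, -2)   # the same "cloud layer" seen 2 px over
disp_line = disparity_1d(scene, shifted)
# Interior pixels recover the 2 px shift, i.e. one constant "altitude".
```

A real cloud scene would produce different disparities per layer, which is exactly the layered grey-level picture described above.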
 
If you have 2 images of a scene with only the camera location changing between them, you can compute a depth map of the scene: an image where the intensity of a pixel corresponds to the distance of the object from the camera. That's called stereoscopic 3D.
For the scene we have, we should get something like this:
1692184785747.png
  • the ground is far so it's black
  • low altitude cumulus clouds are dark grey because they are closer to the satellite
  • cirrus clouds are high altitude clouds so they are even closer and a lighter grey
  • the plane might be somewhere between the clouds layers so it's an intermediate shade
Thanks very much, got it.

So it's not a stereoscopic video at all.
 