• With the release of FS2020 we have seen an explosion of activity on the forum, and of course we are very happy to see this. But having all questions about FS2020 in one forum becomes a bit messy, so we would like to ask you all to use the following guidelines when posting your questions:

    • Tag FS2020 specific questions with the MSFS2020 tag.
    • Questions about making 3D assets can be posted in the 3D asset design forum. Either post them in the subforum of the modelling tool you use, or in the general forum if they are not tool-specific.
    • Questions about aircraft design can be posted in the Aircraft design forum.
    • Questions about airport design can be posted in the FS2020 airport design forum. Once airport development tools have been updated for FS2020 you can of course post tool-specific questions in the subforums of those tools as well.
    • Questions about terrain design can be posted in the FS2020 terrain design forum.
    • Questions about SimConnect can be posted in the SimConnect forum.

    Any other question that is not specific to an aspect of development or tool can be posted in the General chat forum.

    By following these guidelines we make sure that the forums remain easy to read for everybody and also that the right people can find your post to answer it.

Shrinking files for photoreal scenery

The USGS orthoimagery I have used so far comes in two sizes, and in California those files are about three times as large (60 MB per file instead of 20 MB) as they are elsewhere. I can combine eight JP2 files into one TIF everywhere except California, because NConvert apparently can't process a file over 2 GB, so California scenery means I can only do one 7.5-minute grid at a time instead of two -- twice as much work, and I don't really notice any improved visuals in return.

Is there a better path forward, like quickly shrinking the files without losing the coordinates, or maybe tweaking NConvert (a file with eight merged tiles just barely goes over NConvert's limits apparently)? Or should I just suck it up and deal with it?

Thanks,
 

arno

Hi Chris,

Can you explain what NConvert does in your workflow? Because that's not a tool that I typically use.

Since your input files are JP2, their size in MB does not say that much; a different compression setting already gives you a different file size, while GeoTIFF is typically uncompressed. So it would be more interesting to compare the number of pixels instead.
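For example, here is a minimal sketch (run in the OSGeo4W shell used later in this thread; the filename is only a placeholder) that prints just the pixel dimensions and the reported pixel size:

Code:
gdalinfo "some_tile.jp2" | findstr /C:"Size is" /C:"Pixel Size"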
 
After reprojecting and combining into a GeoTIFF, extracting the geo data, and stripping out the IR layer (saved for when I finally have time to learn to place trees effectively), I use NConvert to quickly adjust the TIF file's brightness, color, and contrast from the command line, before re-inserting the geo data and converting to BGL.

Here is an example, without the IR layer: a California file (one 7.5-minute grid) is 26,787 x 26,785 pixels at 2.1 GB, versus a Missouri file (spanning two 7.5-minute grids) at 31,694 x 15,849 pixels and 1.4 GB.
 
And the problem is even more complicated: the 2.1 GB file I used in the above example is apparently also too large for NConvert, so I'd need to go from four tiles at once to two, meaning California imagery would now require eight times more work than elsewhere.
 
Hi Chris:

There are utilities that convert JPEG2000 ("JP2") and MrSID imagery to GeoTIFF.

AFAIK, any TIFF flavor can be losslessly LZW-compressed, shrinking it by 20 to 40 percent; most graphics applications can do this, and so can GIS applications.

The trade-off will be a 2x to 3x longer compilation time when SDK Resample works from LZW-compressed source imagery.
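As a minimal sketch of such a lossless pass with GDAL (placeholder filenames; COMPRESS=LZW and PREDICTOR=2 are standard GTiff creation options, but check them against your GDAL build):

Code:
gdal_translate -of GTiff -co COMPRESS=LZW -co PREDICTOR=2 "input.tif" "input_lzw.tif"

The georeferencing is carried over unchanged; only the storage of the pixel data is compressed.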

GaryGB
 
Gary, I use these steps in the OSGeo4W shell to combine the JP2 files into one large TIF file:

Code:
gdalwarp -of GTiff -co "INTERLEAVE=PIXEL" -t_srs "+proj=latlong +datum=WGS84" -r cubic "*.jp2" "*.jp2" "*.jp2" "*.jp2" "*.tif"
listgeo -d "*.tif" > "*.gtf"
gdal_translate -b 1 -b 2 -b 3 -of GTiff -co INTERLEAVE=PIXEL "*.tif" "*_a.tif"
"F:\crush\Prepar3D v4 Files\NConvert\nconvert.exe" -overwrite -balance 6 5 -10 -hls 0 0 -1 -brightness -5 -contrast 40 -sharpen 20 -replace 0 0 0 2 2 2 "*_a.tif"
geotifcp -g "*.gtf" "*_a.tif" "*.tif"
 
The California tiles must be higher resolution, but if that isn't apparent in the finished .bgl they may merely have been upsampled to a higher resolution for some reason. What are the filenames like? For example, comparing "m_3710742_sw_13_060_20190909.jp2" with "m_3610661_nw_13_1_20140615_20141030.jp2", the "_060_" in the first one tells you it's 60cm imagery, while the "_1_" in the second filename tells you it's 1m imagery. From my experience the 60cm .jp2s run about 75 MB each, where the 1m .jp2s run about 23 MB apiece. If you're not seeing any benefit in the sim from the larger files, you might as well reproject them all to 1m, which you can do by adding -tr 0.00001 -0.00001 to your gdalwarp command (see the sketch below). That way you'll be able to warp 8 of them together without exceeding the 2 GB limit. I've been warping 6 together with -tr 0.00001 -0.00001 in all of my commands; the .tif comes out just under 900 MB with the NIR. I haven't tried 8.
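As a sketch, adding that constraint to the merge command posted earlier in the thread would look something like this (the tile names are placeholders for the eight .jp2 files):

Code:
gdalwarp -of GTiff -co "INTERLEAVE=PIXEL" -t_srs "+proj=latlong +datum=WGS84" -r cubic -tr 0.00001 -0.00001 "tile_1.jp2" "tile_2.jp2" "tile_3.jp2" "tile_4.jp2" "tile_5.jp2" "tile_6.jp2" "tile_7.jp2" "tile_8.jp2" "merged_1m.tif"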
 
One of the files I am working with right now is "...\m_3912154_ne_10_h_20160710_20161004.jp2" -- not sure what the "_10_h_" represents, but I'll try reprojecting to 1m, since I'm not seeing a result that justifies four times the work.

Jim, you may notice that the above commands came from you. Thanks for the suggestion; I was about to give up on scenery design for a while, at least in California.
 
No idea what the "_h_" means. I saw a breakdown of the file naming convention at one time that explained what everything meant, sorta like they sometimes do with METAR decoders, but I can't find it now. That particular tile warps unconstrained to about 47cm, yet the metadata (from EarthExplorer) says it's 60cm. TBH it didn't look very good warped to 60cm either, but 1m didn't look too bad. I find I have to use a different gdalwarp command for the .jp2s because their projection is usually "WGS_1984_Web_Mercator_Auxiliary_Sphere", as opposed to the NAD83 that the uncompressed .tifs generally use. I'm not sure whether it's the difference in the command or the projection, but I find the output from the .jp2s is usually blown up quite a bit if I warp them unconstrained. I downloaded a random 60cm .jp2 tile (72.9 MB) of the 2019 NAIP imagery in Colorado (where both "compressed" and "uncompressed" versions are available) and it warped unconstrained to about 47.5cm, while the same tile uncompressed (428 MB zipped .tif) warped unconstrained to 60.2cm. The .jp2 sample came out to 13159 x 13158 px and 660 MB after reprojecting; the .tif sample reprojected to 11890 x 11468 px and 520 MB. Looking at both of them in Photoshop at 100%, the .jp2 output looks obviously upsampled. Resample will sort it all out, but it just unnecessarily uses up disk space and makes Photoshop sluggish, so if it says it's 60cm I define that in the gdalwarp command with -tr 0.000006 -0.000006 so it comes out to the resolution it's supposed to be. Likewise for 1m imagery. A sketch of that constrained command follows below.
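A sketch of that constrained reprojection, assuming the .jp2 really is tagged as Web Mercator (EPSG:3857) and really is 60cm imagery (placeholder filenames; use -tr 0.00001 -0.00001 for 1m sources instead):

Code:
gdalwarp -s_srs EPSG:3857 -t_srs EPSG:4326 -of GTiff -tr 0.000006 -0.000006 "tile_60cm.jp2" "tile_60cm_WGS84.tif"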
 
Using the 1m gdalwarp command I saved so much space that I was able to process an entire 15-minute grid -- 16 files at once. I haven't had time to try it out since compiling it yesterday, but considering all the associated steps, like adding the blend/watermask, that would be a huge time saver. I'd just have to see how much detail I am sacrificing.

Given all the help I've received -- nearly all of which came from you three in this post -- my photo-scenery textures and water vectors look better than what is in the new Microsoft flight sim. And from what I have seen, my 10-m terrain has it beat too. Arno's scenProc has given me gorgeous trees, accurately placed autogen buildings, wind turbines, and electric substations, and will soon give me many other types of objects. It's all breathtaking for me, especially since everything I see in P3D is something I have created.

I can't thank you all enough, but I can pay it forward: as I have time, I'll upload the files to my website to share with others.
 
if it says it's 60cm I define that in the gdalwarp command with -tr 0.000006 -0.000006 so it comes out to the resolution it's supposed to be. Likewise for 1m imagery.
Revisiting some older issues I had pushed to the side now that I have a better understanding of things... Would running GDALINFO help me find the optimal settings for merging JP2 files?

Running GDALINFO on one of the "_10_h_" JP2 files gives me
Pixel Size = (0.600000000000032,-0.599999999999950)

So does that indicate -tr 0.000006 -0.000006 is the way to go? Or do I need to find an answer as to what the "10_h" means in the filename like we mentioned previously?
 
IMO that pixel size is embedded in the .jp2s rather than derived from a calculation of pixel count versus the lat/lon degrees covered. If the person uploading them decides to embed 0.60000000 when he meant 0.000006, or maybe 0.6 means 60cm in whatever program he uses to reproject, then that is what gdalinfo will report rather than doing the actual calculation. I know that if you try to use -tr 0.600000000000032 -0.599999999999950 with gdalwarp it will error out, telling you the output dimensions must be greater than zero, or at least that's been my observation. If you run gdalwarp on that .jp2 without any -tr constraint, the output will have a completely different pixel size than the one gdalinfo reported on the .jp2 itself. I don't know what goes on here; I'm sure Gary will enlighten us. :)
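A quick way to see both numbers, as a sketch with placeholder filenames: read the embedded value off the .jp2, warp it unconstrained, and read the computed value off the output.

Code:
gdalinfo "tile.jp2" | findstr /C:"Pixel Size"
gdalwarp -s_srs EPSG:3857 -t_srs EPSG:4326 -of GTiff "tile.jp2" "tile_check.tif"
gdalinfo "tile_check.tif" | findstr /C:"Pixel Size"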
 

arno

Hi,

The -tr option normally uses degrees as its unit (depending on the projection that the images use, of course). So I don't think 0.000006 corresponds to 60 cm directly.
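As a rough cross-check, assuming about 111.3 km per degree of latitude: 0.000006 degrees x 111,320 m is roughly 0.67 m north-south, and east-west at around 39 N it is roughly 0.67 m x cos(39) = about 0.52 m, so 0.000006 degrees only approximates 60 cm rather than equalling it.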
 
The 7.5-minute tile at N38W120 (#8 of 64) comes in at 1.99 GB, and the tile to the northwest that I am trying to work on at N39W120 (#63 of 64) is just a hair over 2.00 GB.
After merging and reprojecting #8's I got
Pixel Size = (0.000004678691164,-0.000004678691164)

My thought is to try -tr 0.0000047 -0.0000047 and see if that shaves off enough bytes to slide under the 2 GB threshold. But would there be a better way?
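As a rough estimate, assuming the uncompressed file size scales with pixel count: going from a pixel size of 0.000004678691164 to 0.0000047 shrinks each dimension by a factor of about 0.9955, which is roughly 0.991 of the original pixel count, so only about 0.9 percent gets shaved off -- probably just enough for a file that is only "a hair" over 2 GB, but with very little margin.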

I found it fascinating that you get four different sizes from Windows Explorer: hovering over the file after clicking it gives you 2.00 GB; the "Size" column in the list view gives 2,097,604 KB (which is what confused me initially, thinking I couldn't have reached 2 gigabytes on #63 since #8 to the south had processed fine -- despite showing a larger size in the list view); the Properties dialog gives a "Size" of 2,147,945,856 bytes; and "Size on disk" is 2,147,946,496 bytes.
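For reference, if NConvert's 2 GB limit is the usual signed 32-bit boundary (an assumption, not confirmed in this thread), that is 2,147,483,648 bytes, so the Properties figure of 2,147,945,856 bytes is about 451 KB over it; in the list view's units, 2 GB is 2,097,152 KB, just under the 2,097,604 KB shown.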
 
I downloaded a random 60cm .jp2 tile (72.9 MB) of the 2019 NAIP imagery in Colorado (where both "compressed" and "uncompressed" versions are available) and it warped unconstrained to about 47.5cm, while the same tile uncompressed (428 MB zipped .tif) warped unconstrained to 60.2cm. The .jp2 sample came out to 13159 x 13158 px and 660 MB after reprojecting; the .tif sample reprojected to 11890 x 11468 px and 520 MB. Looking at both of them in Photoshop at 100%, the .jp2 output looks obviously upsampled.

I still have these sources on hand, so I did this again, this time without declaring them to be 47 cm or 60 cm myself -- just the unadulterated results of running gdalinfo on the two sources before and after reprojecting them.

Source 1
the input:
m_3710750_se_13_060_20190909.tif, 489 MB, 10260 x 12510px, Pixel Size = (0.600000000000000,-0.600000000000000)

the command:
gdalwarp -of GTiff -co "INTERLEAVE=PIXEL" -t_srs "+proj=latlong +datum=WGS84" -r cubic "m_3710750_se_13_060_20190909.tif" "m_3710750_se_13_060_20190909_WGS84.tif"

the output:
m_3710750_se_13_060_20190909_WGS84.tif, 522 MB, 11910 x 11491px, Pixel Size = (0.000006022492310,-0.000006022492310)

------------------------------------------------------------------------------------------------------------------------------------------------------

Source 2
the input:
m_3710750_se_13_060_20190909.jp2, 70.9 MB, 11597 x 14553px, Pixel Size = (0.600000000000096,-0.599999999999987)

the command:
gdalwarp -s_srs EPSG:3857 -t_srs EPSG:4326 -of gtiff "%remote%\m_3710750_se_13_060_20190909.jp2" "%remote%\jp2_m_3710750_se_13_060_20190909_WGS84.tif"

the output:
jp2_m_3710750_se_13_060_20190909_WGS84.tif, 660 MB, 13157 x 13159px, Pixel Size = (0.000004750694124,-0.000004750694124)


Here are some clips from the two output .tifs at 100%. This is why I personally use -tr 0.000006 -0.000006 when reprojecting 60cm .jp2s.

[Attached image: wasted_resolution.jpg]
 
Following your example, I tried two adjacent "_10_h" .JP2 files -- one from a tile that comes in under 2 GB and one that is bigger. Gdalinfo shows 0.000004673588393 for the over-2 GB piece and 0.000004678740884 for the "just right" piece. Not sure why the more northern 7.5-minute tile (when four are stitched together) is larger than the southern one, since it makes sense to me that as you go further north the grids get smaller, and so should the data. I'm guessing this is 47cm resolution, so would the command below be the best path forward?

Code:
gdalwarp -of GTiff -co "INTERLEAVE=PIXEL" -t_srs "+proj=latlong +datum=WGS84" -r cubic -tr 0.0000047 -0.0000047 "JP2 files" "TIF output"
 
Note that the example I showed above used the exact same tile: same location, same date, same resolution, sourced from the same camera in the same airplane on the same flyover mission. The difference was that one was downloaded compressed as a .jp2 (70.9 MB) and the other was downloaded as a zipped .tif (489 MB extracted); the .jp2 is "WGS_1984_Web_Mercator_Auxiliary_Sphere" and the uncompressed .tif is "NAD83 / UTM zone 13N". The point I'm trying to make is that the .jp2s for some reason blow themselves up when you reproject them -- to a pixel size of 0.000004750694124 (or 0.000004678740884, whatever) when you warp them unconstrained. I'm sure it's 60cm imagery, but go back to wherever you downloaded it from and read the metadata; it will tell you the resolution (most likely it will say 0.6m rather than 60cm, but...). You're splitting hairs when you're messing with digits beyond seven places right of the decimal; that's insignificant. The difference between 0.0000047 and 0.000006 is huge, however: an output of 0.0000047 on 60cm imagery is just wasted pixels and disk space, and it unnecessarily causes you problems with the 2 GB limitation. The finished .bgl doesn't come out any different. It's like taking a 1920 x 1080 screenshot, blowing it up to 2554 x 1436, and then resizing it back down to 1920 x 1080, pretending you're getting an improved result. It's pointless; you can't manufacture data that isn't there to begin with, and the blown-up version just takes more disk space to store in the interim.
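To put a rough number on the waste: letting 60cm imagery warp out to 0.0000047 instead of constraining it to 0.000006 inflates each dimension by about 0.000006 / 0.0000047 = 1.28, which is roughly 1.6 times as many pixels covering the same ground -- extra bulk that helps trip the 2 GB limit without adding any detail.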
 
That makes total sense; thanks for making that clear, Jim. I've been all over the metadata since I began using ortho data, trying to wrap my mind around it and answer my own questions. I'm sure you're right about it being 60cm, but I looked through the metadata again anyway, and here is the closest thing I can find (https://www.sciencebase.gov/catalog/item/59eb3e64e4b0026a55ffcc31):

<planci>
<plance>row and column</plance>
<coordrep>
<absres>.6</absres>
<ordres>.6</ordres>
</coordrep>
<plandu>meters</plandu>
</planci>

0.0000047 got me under the 2 GB threshold just fine, and the finished GeoTIFF doesn't look any different from the one I did without modifying the gdalwarp command I've used on everything else.

I'll try the process again on 60cm and see how the two stack up. Thanks!
 
That's it, and if you click the little "View" button next to the "attached files" XML link it shows you this:

Coordinate Representation:
Abscissa Resolution: .6
Ordinate Resolution: .6
Planar Distance Units: meters

And from Wikipedia, since I'd never seen X and Y called "abscissa" or "ordinate" before:
...the abscissa refers to the horizontal (x) axis and the ordinate refers to the vertical (y) axis of a standard two-dimensional graph.
 