• With the release of FS2020 we see an explosion of activity on the forum and of course we are very happy to see this. But having all questions about FS2020 in one forum becomes a bit messy, so we would like to ask you all to use the following guidelines when posting your questions:

    • Tag FS2020 specific questions with the MSFS2020 tag.
    • Questions about making 3D assets can be posted in the 3D asset design forum. Either post them in the subforum of the modelling tool you use or in the general forum if they are general.
    • Questions about aircraft design can be posted in the Aircraft design forum.
    • Questions about airport design can be posted in the FS2020 airport design forum. Once airport development tools have been updated for FS2020 you can post tool-specific questions in the subforums of those tools as well of course.
    • Questions about terrain design can be posted in the FS2020 terrain design forum.
    • Questions about SimConnect can be posted in the SimConnect forum.

    Any other question that is not specific to an aspect of development or tool can be posted in the General chat forum.

    By following these guidelines we make sure that the forums remain easy to read for everybody and also that the right people can find your post to answer it.

P3D v5 Water Detection idea (work-around)


Sorry, it slipped my attention again. I just had a look and it was a bug in the DetectFeatures step: it did not allow 1-band input. It will be fixed in the next development release.

By the way, as I just found out: if you plan to select areas with low slope, you need to use the inverse binary threshold step in the texture filter.
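To make the inverse binary threshold concrete: it maps every pixel at or below the threshold to the maximum value and everything above it to zero, which is why it selects the low-slope areas. A minimal NumPy sketch of that behaviour (the array values and the threshold of 2 are made up for illustration; it mirrors the semantics of OpenCV's THRESH_BINARY_INV, but scenProc's actual filter step may differ in details):

```python
import numpy as np

# Toy 8-bit "slope raster" (values = slope in percent).
slope = np.array([[0, 1, 3],
                  [2, 5, 10],
                  [1, 4, 50]], dtype=np.uint8)

# Inverse binary threshold: pixels <= threshold become 255 (selected),
# pixels above it become 0. A plain binary threshold would do the opposite
# and select the steep ground instead.
threshold = 2
mask = np.where(slope <= threshold, 255, 0).astype(np.uint8)
# Flat ground (slope <= 2%) is now white in the mask; steep ground is black.
```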
Ok, I will try that when you send out the next release. Funny, on my test area my script was working fine (this is using the first method), then I tried it for a few other areas, exact same script just a different area, and I am seeing this below. Is this related to the same issue?

UE12 screenshot water slope crash.jpg
I think that is the same issue indeed.

The method where you filter the features afterwards will not give this error, only the method where you generate a mask from the slope raster.
Any updates to try? Eagerly waiting. And any chance of taking a look again at water detection in general, or are you still working on MCX issues?
The error about the raster type is fixed if you take the latest development release.

The MCX changes are nearing their end, so I hope to return to the scenProc features I was working on soon. But I'm not sure yet if water detection is the first thing on my list then.
Tried method 2 again. This time I got an overflow exception. Here's my latest script:



DetectFeatures|type="slope"|B:\CA\Section_64\_Control Data\CA_Section_64_TFE_Water-Meth2-SlopeThres.tf2|String;type|mask|NONE
DetectFeatures|type="image"|B:\CA\Section_64\_Control Data\CA_Section_64_TFE_Water.tf2|String;type|water|type="mask"

ExportOGR|WatType="water"|ESRI Shapefile|B:\CA\Section_64\Masks Water\CA_BW49_WM.shp|water

Using two tf2 scripts as required. The first DetectFeatures uses the simple/straightforward steps of:

Input Image --> Threshold Binary Inverse --> Output Image

and the 2nd DetectFeatures uses my standard water detection script using some 20-30 sample images. After running about 20 minutes (water detection usually takes around 8 minutes), I got the following error:

1:34 PM SceneryProcessor Error System.OverflowException: Value was either too large or too small for an unsigned byte.
at System.Convert.ToByte(Int32 value)
at ASToFra.scenProc.DataModel.TextureFilter.Mean.GetImage(Int32 outputConnectorIndex, GridCell cell, BoundingBox bbox, Mat overwriteInput, Mat mask)
at ASToFra.scenProc.DataModel.TextureFilter.Merge3.GetImage(Int32 outputConnectorIndex, GridCell cell, BoundingBox bbox, Mat overwriteInput, Mat mask)
at ASToFra.scenProc.DataModel.TextureFilter.Svm.GetImage(Int32 outputConnectorIndex, GridCell cell, BoundingBox bbox, Mat overwriteInput, Mat mask)
at ASToFra.scenProc.DataModel.TextureFilter.Erode.GetImage(Int32 outputConnectorIndex, GridCell cell, BoundingBox bbox, Mat overwriteInput, Mat mask)
at ASToFra.scenProc.DataModel.TextureFilter.Dilate.GetImage(Int32 outputConnectorIndex, GridCell cell, BoundingBox bbox, Mat overwriteInput, Mat mask)
at ASToFra.scenProc.DataModel.TextureFilter.OutputImage.GetImage(Int32 outputConnectorIndex, GridCell cell, BoundingBox bbox, Mat overwriteInput, Mat mask)
at ASToFra.scenProc.Steps.DetectFeatures.Process(List`1 cells, String[] arguments)
at ASToFra.scenProc.Processor.SceneryProcessor.Process()
at ASToFra.scenProc.Processor.SceneryProcessor.ProcessConfig(String filename, List`1 commands)
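For what it's worth, the exception message itself comes from a strict integer-to-byte conversion: .NET's Convert.ToByte throws as soon as a value falls outside 0..255, whereas a saturating (clamped) conversion would not. Why the Mean filter produces an out-of-range value is not visible from the trace; this Python sketch (the function names are made up) only contrasts the two conversion styles:

```python
def to_byte(value: int) -> int:
    """Strict conversion, like .NET's Convert.ToByte: raises outside 0..255."""
    if not 0 <= value <= 255:
        raise OverflowError(
            "Value was either too large or too small for an unsigned byte.")
    return value

def to_byte_clamped(value: int) -> int:
    """Saturating conversion: clamps to the 0..255 byte range, never raises."""
    return min(255, max(0, value))
```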

My question is... should I be using the same sample images for both DetectFeatures steps?

The TF2 you use for the water mask generation does not need any sample images. And you are feeding it the slope raster now, so that is fine.

Does the error come from the first or second detect features step? It seems some data is not valid as a byte.
I have made a fix for that crash, it will be in the next development release.
Thanks. Willing to give Method 2 another try after that. I thought you might find it interesting to show what the "slope filter method 1" results look like.
Here is an area I have been testing with lots of small mountains/hills and a river running through the center. The blue shows my typical TFE water detection script with no additional filters applied. Lots of water on the sides of hills.

CA 64 BW49 - TFE results using Method 1.jpg

Here is same shot with all the unseen water polys highlighted:

CA 64 BW49 - TFE results using Method 1 - 002.jpg

Quite a dramatic result. And if we zoom in we can see all those water bodies are quite small and the result of shadows:

CA 64 BW49 - TFE results using Method 1 - 003.jpg

CA 64 BW49 - TFE results using Method 1 - 004.jpg

Here is the same area when I apply a Slope filter of 2%:

CA 64 BW49 - TFE results using Method 1 - Slope filter greater than 2 percent.jpg

As you can see the filter really does clean up the image. Sadly, sometimes a bit too much: while it "cleans up" unwanted polys it also deletes water in areas where I do want it, such as in the river. So it is a matter of balance as to what is the best slope % to use and how much time I wish to spend further deleting polys or adding water polys back into the image. But that is the purpose of a work-around: not perfect but workable. The goal of course is to have the results good enough that I do not need to go back into each image and delete or add water. Not there yet.
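The slope-filter idea can also be sketched outside scenProc: derive slope (as a percentage) from a DEM and keep only the ground flatter than the cutoff as water candidates. A hypothetical NumPy example; the 2% cutoff matches the post, but the DEM values and cell size are made up for illustration:

```python
import numpy as np

def slope_percent(dem: np.ndarray, cell_size: float) -> np.ndarray:
    """Slope magnitude as a percentage (rise over run * 100) per cell."""
    dz_dy, dz_dx = np.gradient(dem.astype(float), cell_size)
    return np.hypot(dz_dx, dz_dy) * 100.0

# Toy elevation grid in meters, 30 m cells: flat in the top-left corner,
# rising toward the bottom-right.
dem = np.array([[10.0, 10.0, 10.0],
                [10.0, 10.5, 12.0],
                [10.0, 11.0, 14.0]])

# Water candidates: cells with slope of 2% or less.
flat_mask = slope_percent(dem, cell_size=30.0) <= 2.0
```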

Hopefully, you are about ready to get back into the core of the TFE water detection and see if we can improve on those tree shadows when it comes to water. My schedule will be opening up in a couple of weeks and I would like to give it one last shot if you are game.

Yes, I have returned to some scenProc work by now. I have started again on the improved 3D buildings that I was working on last year before the FS2004 aircraft export grabbed my attention. So water detection does not have my highest priority at the moment.

Last year I tried to train an AI model for the water detection, but that required quite some training data that was hard to find. So I am not sure if that is the best approach. Maybe using the samples in scenProc to train for a certain area is more flexible for the user. What's your thought on that?
Improved 3D buildings? Just guessing, you mean the ability to create buildings based on the building footprint rather than the standard "rectangle"?

I would think more AI involvement would be perfect for making decisions on what is water and what is not. That said, the results are only as good as the training data it has to work from. My hope on this path is/was that as you add more image samples, scenProc learns what is water and what is not. But that database needs to grow, building upon previous samples to make more intelligent decisions.

Here's my process:
I process a geographic area that uses 25 LOD10 segments (roughly about 50 square miles). I start with the first 1 of 25 and create 1000 image samples to hopefully catch each important area where there's water. Many times I capture a slice of water, but there are times where I miss a body of water completely. That's where the results can remain "blank" because there was no sample to make a calculation. I tend to select 15 up to 40 of these image samples to use in the TFE script. This can easily take a good hour to create points, let TFE process after each sample image, and try to correct results if they appear incorrect. Many times TFE may crash or scenProc completely shuts down, meaning I have lost all those points and have to restart. So to work around this, I save the TF2 files several times, changing the name of the file to save the latest point selections, and work from that point forward. When I feel that is the best result I will get, I make a final TF2 file to use in scenProc. I then repeat the process for the 2nd of 25 LOD10 areas, and so on until I finish all 25 areas. So you are seeing up to 25 hours of manual work just to select sample points. I then run all 25 areas as 25 scenProc scripts at one time to complete that section. That takes around 1.5 hours, which is not bad as it is computer time, not me interacting with it at this point.

What I would hope for is that instead of starting fresh with each LOD10 and selecting points, a database would build upon all my past point selections, making the AI smarter and smarter. So instead of my 15-40 image samples, it would build upon the 1000's of past samples I have created. Hopefully, it could come to the point where I don't need to add new samples, or would only have to add a few, because I have created a huge knowledge base to work from. The database is built from my images, which is the most accurate for my use.

Yes, it will take weeks/months to build up this database, and what about processing time? Somehow it will need to make decisions better/faster if I am working from such a large sample base. Unless you, Arno, download the entire US or Europe image data (or data from another source) and have that data ready to use in scenProc, provided with each update, I do not see a better way for water detection. Plus, would it even be feasible to supply a database to users? That could be too big a file to provide with each scenProc update.

And the other issue is processing time. Updating the TF2 file as I add points can take at first just a second or two. Very fast. However, once I have 100s of points it can take many seconds (30 seconds for example?) to save the files. If we can have that knowledge base not only build upon itself but also find a way to improve the time to save newly added points (so it does not take hours as I fear), I think we could have a winning process. Big challenges!

In general my approach is to only add samples when the content is really different. So when you process another LOD10 you would only have to add samples when the image content is really different. For a small US state I did vegetation this way with maybe around 30 samples in the texture filter.

As for the saving time, since the image data is stored in the TF2 file that takes more and more time the more image samples you have. So with a lot that can take many seconds indeed.

I'll check the stability of the texture filter editor; crashes are always annoying when you lose work.

The AI approach I was investigating last year is about training on a dataset and then running the already trained model. The training took me about 4 days (full GPU load all that time). So that's not something you want to repeat very often by adding just some samples. And the biggest problem I had was to find reference data of water to provide the truth for the desired outcome.
I have an idea on a sort of middle way between the two approaches we tried until now. Let me do a test implementation to see if that would work or not.
I just got back from FSExpo in Las Vegas (really good show and I only lost $40 USD at the casino), so I've been away for a bit. Time is opening up for me to try out your new ideas. Looking forward to it. 30 samples to run an entire US state!!?!? WHAAAAT? I must be doing it wrong, ha! I don't see how one could capture all the different types of water I come across in 30 samples. Well, maybe if it was Nevada, ha!
Like I said it was for vegetation and I have to be honest that the state I tested with was Delaware :D. But yes, by adding areas where the detection did not work well I came to the total of about 30 sample images.

Not sure if the same holds for water, it might be water has more variation than trees.

I have started coding the new idea, but it's not working yet. I'll let you know when I have something to test.
Oh vegetation, yes I can see that. One can be more forgiving with trees. I can create trees in shadow areas even when there aren't trees in real life and the overall effect still looks good... very good. I would say, for me, the detection works fine and no additional work is needed for vegetation.

The difference with water is you need to be as precise as possible because of shorelines, riverbanks, etc. They need to match up with the imagery. Also, the shading of trees causes water bodies to be created. And there are so many shades of water... blue, green, white rapids, brown muddy water; all are a challenge. And the color of tree shadows can be so close to the color of water. Clustering does help a lot. Just need to take it to the next level.