PF_Masked

Historically, PF_Masked has two meanings. For a palettized texture it means: set the first palette color to (0,0,0,0). For rendering it means: discard the fragment if the alpha value is below 0.5.
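A minimal sketch of the texture side, assuming the engine's BYTE/INT/UBOOL/FColor types; the helper name is hypothetical:

// Unpack a palettized texture, honoring the masked flag by forcing palette
// index 0 to transparent black (0,0,0,0).
void UnpackPalettized( const BYTE* Indices, const FColor* Palette, FColor* Out, INT Count, UBOOL Masked )
{
	for ( INT i=0; i<Count; i++ )
	{
		Out[i] = Palette[Indices[i]];
		if ( Masked && Indices[i]==0 )
			Out[i].R = Out[i].G = Out[i].B = Out[i].A = 0;
	}
}
// The rendering time meaning then lives in the fragment shader:
//   if ( Color.a<0.5 ) discard;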
Apart from that, the overall PF_Masked handling is a total clustercrappity smack. If precaching is enabled and a surface has PF_Masked set, the flag is applied to the texture. Canvas drawing always adds PF_Masked. And in Deus Ex, if a texture has PF_Masked set, it is also applied to the mesh face.
Currently the best way to deal with masked in the RenDev is to always honor whether the masked flag is set on the surface and treat the first palette index accordingly. This seems to always work right in Unreal, Nerf, Deus Ex, Botpack and probably all other games. However, this way you can end up loading the same texture twice (once masked, once not), and it gives you that nasty dependency between the texture data and the surface flags.
As I move in my codebase towards handling the texture with all its subtextures (bump, macro, detail) and properties at once, this special handling gets more and more nasty.
So my idea is to get all the masked handling straight. PF_Masked at rendering time just denotes whether the fragment should be discarded based on its alpha value. For textures, either just the masked setting on the texture itself is honored when unpacking its data, or, even better, those textures get their own TextureFormat id which explicitly says that the first palette entry should be set to (0,0,0,0). There should be just some very rare cases where the texture data is actually used one time as masked and another time as non-masked, and there it would be perfectly fine to have two distinct textures, but then just for those few cases and not for the large remainder of these textures.
Another thing one should keep in mind about masked textures is that they disable the early fragment test. While this is currently not a performance concern, it can become one when rendering a lot of shadow maps. Using another shader set for non-masked textures is also an issue, as switching between shaders is expensive too. But then again, if one can't use early fragment tests, one could make the best of it and use a logarithmic z-buffer, which would disable early fragment tests anyway.
One particular issue with masked textures is the usually rather odd outline, and for many textures the darkening towards black on the edges. The darkening issue can be solved by dividing the fragment's color value by its alpha value in the shader, but that's not the best way to go. A better way is to use a texture format with an independent alpha channel, so the color will not be influenced by the masked part of the texture being set to invisible black.
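When generating such textures offline, one common trick (a rough sketch of that technique, not anything taken from the engine) is to flood the color of opaque texels into the fully transparent ones, so bilinear filtering never blends towards black:

// One pass of bleeding opaque colors into transparent texels. Alpha stays
// zero; propagating further inward would need a separate mask to mark
// already bled texels as valid sources.
void BleedColors( FColor* Texels, INT USize, INT VSize )
{
	for ( INT V=0; V<VSize; V++ )
		for ( INT U=0; U<USize; U++ )
		{
			FColor& T = Texels[V*USize+U];
			if ( T.A!=0 )
				continue;
			INT R=0, G=0, B=0, N=0;
			for ( INT dV=-1; dV<=1; dV++ )
				for ( INT dU=-1; dU<=1; dU++ )
				{
					INT NU=U+dU, NV=V+dV;
					if ( NU<0 || NV<0 || NU>=USize || NV>=VSize )
						continue;
					const FColor& S = Texels[NV*USize+NU];
					if ( S.A==0 )
						continue;
					R += S.R; G += S.G; B += S.B; N++;
				}
			if ( N>0 )
			{
				T.R = (BYTE)(R/N); T.G = (BYTE)(G/N); T.B = (BYTE)(B/N);
			}
		}
}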
And if one has a non-binary alpha channel, one can start to use an antialiased alpha channel to smooth out the shape of a texture. One test I ran sometime last year can be seen here.
It's also worth mentioning that you can get a perfect 45° angle when choosing a ((0,0.5),(0.5,1)) pattern.
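To see why (a quick worked check, assuming the pattern is a 2x2 block of alpha values): bilinear interpolation of those four values gives a(u,v) = 0.5*(u+v), so the a=0.5 iso-line, i.e. the masked cutoff, is exactly the diagonal u+v=1.

float BilinearAlpha( float U, float V )
{
	const float A00=0.0f, A10=0.5f, A01=0.5f, A11=1.0f; // The ((0,0.5),(0.5,1)) pattern.
	return (1.0f-U)*(1.0f-V)*A00 + U*(1.0f-V)*A10 + (1.0f-U)*V*A01 + U*V*A11; // = 0.5*(U+V).
}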
I'm pretty sure using these techniques one can do a lot about the awful borders of the mountains used in the skyboxes in Unreal.
Another problem with the binary alpha channel is its heavy influence on the lower mipmaps. Currently some biasing towards opaque is used, as can be seen here, here and here.
Even worse, these mipmaps increase the already existing tendency of masked textures to moiré and aliasing, so a non-binary alpha channel can help here too. For further reducing the moiré/aliasing issue one should probably not use a box filter for the alpha channel, but a filter with some more reasonable spectral properties. Two other things to experiment with are whether one gets better results by factoring in a slight bias towards opacity, or by using a pseudo-random pattern for selecting the samples for the alpha channel.
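A sketch of one downsampling step along these lines (engine types assumed; the bias factor is just a knob to experiment with, and the box filter for the alpha is kept here only for brevity):

// Halve an RGBA mipmap level: plain box filter for color, biased box filter
// for alpha. AlphaBias>1 favors opacity, 1 is a plain average.
void DownsampleWithAlphaBias( const FColor* In, INT InU, INT InV, FColor* Out, FLOAT AlphaBias )
{
	INT OutU=InU/2, OutV=InV/2;
	for ( INT V=0; V<OutV; V++ )
		for ( INT U=0; U<OutU; U++ )
		{
			const FColor& A = In[(2*V  )*InU+(2*U  )];
			const FColor& B = In[(2*V  )*InU+(2*U+1)];
			const FColor& C = In[(2*V+1)*InU+(2*U  )];
			const FColor& D = In[(2*V+1)*InU+(2*U+1)];
			FColor& O = Out[V*OutU+U];
			O.R = (BYTE)((A.R+B.R+C.R+D.R)/4);
			O.G = (BYTE)((A.G+B.G+C.G+D.G)/4);
			O.B = (BYTE)((A.B+B.B+C.B+D.B)/4);
			FLOAT Alpha = (A.A+B.A+C.A+D.A)/4.0f*AlphaBias;
			O.A = (BYTE)(Alpha>255.0f ? 255.0f : Alpha);
		}
}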
PF_NoSmooth

NoSmooth chooses a nearest texture minification/magnification filter over a linear reconstruction filter. Same as for PF_Masked, during precache and if set on a surface, this gets applied to the texture the surface uses. While it would be less of an issue to handle on a per draw call basis, it is plainly not worth it. My stance is that for PF_NoSmooth always the setting on the texture itself should be used. It is used so rarely that the extra complexity of handling anything else is plainly not worth it. As for masked, if you really need both behaviours, copy the texture. That is still faster than having to care about switching the filtering state.
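Honoring the texture's own setting then comes down to the filter state at texture creation time (a sketch; NoSmooth stands for the texture's flag):

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, NoSmooth ? GL_NEAREST_MIPMAP_NEAREST : GL_LINEAR_MIPMAP_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, NoSmooth ? GL_NEAREST : GL_LINEAR );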
In Deus Ex, all UI drawing except for the background snapshot and the drug effect seems to be unsmoothed. That is, it looks like all other UI elements are drawn with PF_NoSmooth, though it isn't in general set on the textures themselves. So PF_NoSmooth could simply be set on the UI textures, except for those which should be drawn smoothed.
PF_Unlit / bUnlit

Unlit surfaces yield some of the most unpleasant visuals in these games. They usually either stick out, or are done on a we-keep-most-of-the-texture-black basis, which contrasts with the other textures being used. Glowmaps are an easy way to achieve the same results while at the same time integrating into the lighting system, so the surfaces won't stick out anymore. They can also be created fairly easily while preserving the visual style of the original content.
PF_Highlighted / Additional STY_Highlighted

PF_Highlighted is premultiplied alpha blending. Using premultiplied alpha blending has some inherent advantages over non-premultiplied alpha blending, as pointed out here.
It is also worth pointing out that this blending mode is very versatile to use, as you can effectively put out an alpha value for the background and the color components to add. For non-premultiplied alpha blending, Src is multiplied with alpha during blending. While you can easily multiply with alpha inside the shader without any issues, you can't divide by alpha inside the shader without special precautions, as you easily end up dividing by zero and getting black faces.
It is also worth mentioning that this blend mode offers the ability to reasonably handle fog as part of single pass rendering per primitive.
Due to the ability to handle fog and its versatility, I'll translate all transparency rendering except modulated canvas drawing to this blend mode. Details are illustrated below.
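The blend state itself is plain premultiplied alpha, and fog can then be folded into the same pass in the shader (a sketch; the one-pass fog formula is a common technique, FogFactor/FogColor are placeholder names):

// Premultiplied alpha blending: Dst' = Src.rgb + (1-Src.a)*Dst.
glEnable( GL_BLEND );
glBlendFunc( GL_ONE, GL_ONE_MINUS_SRC_ALPHA );

// In the fragment shader, fog folds into the same single pass:
//   Out.rgb = Src.rgb*(1.0-FogFactor) + FogColor*FogFactor*Src.a;
//   Out.a   = Src.a;
// For Src.a=1 this is ordinary fog, for Src.a=0 the additive contribution
// simply fades out while the already fogged background stays untouched.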
Additional PF_AlphaTexture / STY_AlphaTexture

Just some ordinary alpha blended format.
PF_Translucent / STY_Translucent

Historically, Translucent together with Modulated have been the only ways to get transparency in Unreal Engine 1. The reason why these were chosen is their ability to work on the color components only and not require an additional alpha channel, which wouldn't have been possible to use with decent quality in a palettized texture format, or would otherwise have taken up too much space.
[KEN] gives an overview of the two classic ways of rendering translucents at the bottom of the page, so I won't recap them here. The fishy thing about both approaches is that the light which hits the surface influences the transparency of the surface itself and thus results in a different visibility of the background. Especially note that if no light falls on a translucent surface, it completely removes the background. The best example for this are some Unreal maps where this hid or darkened the stars.
I see them both as approximations to the following equation, which Epic plainly chose due to limitations of the hardware of that time.
Result = Diffuse.rgb*Light.rgb + (1-Diffuse.rgb)*Dest.rgb
So Light just interacts with the surface and not with the background anymore.
However, implementing this equation would come at a very high cost. Either you implement it as multipass and need to render each mesh triangle on its own twice, or you use dual source blending and can't write to multiple color buffers anymore. So either way you are crappity smacked.
So back to the equation. The only difference to premultiplied alpha blending is that we have Diffuse.rgb instead of some alpha channel for the background blending, so why not derive one?
Deriving the alpha channel basically boils down to using some norm. The most natural choices are the relative luminance or the maximum norm. The first one works well in Unreal, while the second one works great for the effects in Nerf.
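A sketch of the two candidates and how the derived alpha slots into the premultiplied output (names are illustrative; the luminance weights are the Rec. 709 ones):

#include <math.h>

FLOAT RelativeLuminance( FLOAT R, FLOAT G, FLOAT B ) { return 0.2126f*R + 0.7152f*G + 0.0722f*B; }
FLOAT MaxNorm( FLOAT R, FLOAT G, FLOAT B ) { return fmaxf( R, fmaxf( G, B ) ); }

// Per fragment this approximates the equation above:
//   Alpha   = RelativeLuminance( Diffuse.r, Diffuse.g, Diffuse.b ); // Or MaxNorm(...).
//   Out.rgb = Diffuse.rgb*Light.rgb;
//   Out.a   = Alpha;
// Blended: Diffuse.rgb*Light.rgb + (1-Alpha)*Dest.rgb.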
Translucent mirrored surfaces are a bit more of a special case. The biggest difference is that one deals with reflection and not transmission when it comes to blending in the background. Using 0.7 - 0.2*RelativeLuminance(Diffuse) yielded good results for me. One could consider introducing an angular dependency using some Fresnel approximation.
Another thing to keep in mind is that it makes a huge difference whether you derive the alpha channel from the linear data or the gamma compressed data. In fact, for a lot of textures it even works quite well to treat them completely as linear data.
My impression is that it will probably work fairly well to always use the same norm for deriving the alpha channel, and to just have the ability to take the raw texture data as either sRGB or linear depending on the texture format of the texture.
For the best results, especially on meshes where it doesn't depend on surface flags in a map, one should head for selecting the norm to derive the alpha channel as an offline step and just convert that texture to some premultiplied alpha format. For textures used in a map, one can simply derive the alpha channel offline and plainly ignore it on translucent surfaces, while using it on premultiplied alpha surfaces if one intends to change a map to make full use of it. I can see this happening as part of my build commandlet, so it's just a line in the *.upkg file to define which norm to use to derive an alpha channel.
For Meshes and Sprites one now also needs to factor ScaleGlow into the alpha term, as ScaleGlow was applied to the light term before. Otherwise ScaleGlow=0 would not result in invisibility; instead it would darken the background.
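In terms of the sketch above, that is just one extra factor on both outputs (otherwise the (1-Alpha) background attenuation would survive a ScaleGlow of zero):

//   Out.rgb = Diffuse.rgb*Light.rgb*ScaleGlow;
//   Out.a   = Alpha*ScaleGlow;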
In the end, by using this approach, the overbrightening effect is gone while everything stays single pass, and I get an ordinary premultiplied alpha blending where I can deal with fog far more easily and can employ order independent transparency approaches. Neat!
Additional STY_Add

While my change to remove the unaesthetic overbrightening effect on translucents is a noticeable improvement in general, it also exposed certain cases where the overbrightening effect was the most important part of the visual appearance. Thus a dedicated rendering style exposing this feature was needed.
STY_Add is as simple as just adding the color to the background. It can easily be implemented by using PF_Highlighted with a zero alpha value.
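This drops straight out of the premultiplied blend equation, so no extra blend state is needed:

//   Dst' = Src.rgb + (1-Src.a)*Dst = Src.rgb + Dst   for Src.a = 0.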
One noticeable example are coronas. Being translucent, they suffered from the change of basing the background alpha value solely on the diffuse map: it introduced noticeable darkening when coronas faded out. Even with just a quick, not yet fine tuned change to account for switching to gamma correct rendering when deriving the corona color, and using STY_Add, I get good results for corona rendering. In fact, just adding the color reflects the nature of a corona far better than the classic translucent approaches.
Other examples are the dispersion pistol's effect and energy weapon projectiles in general.
PF_Modulated / STY_Modulated

Modulation is basically Src*Dst*2. The best way to think about this is to 'light' what's behind the modulated primitive with the light values taken out of the modulated texture. So modulated primitives are no standalone objects like an alpha blended primitive would be, but merely require interaction with the (viewport dependent) background.
Note that a Src value of 0.5 yields Dst again (e.g. no effect), which is also the reason why the outside of blood decal textures, etc. fades to 50% grey, something most of you guys are aware of. Also note that modulated textures are usually stored as linear data.
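Src*Dst*2 also maps directly onto fixed function blending:

// GL_DST_COLOR*Src + GL_SRC_COLOR*Dst = Src*Dst + Dst*Src = 2*Src*Dst.
glEnable( GL_BLEND );
glBlendFunc( GL_DST_COLOR, GL_SRC_COLOR );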
Modulated textures have one advantage when used for detail/macro textures: as the data actually gets multiplied, black areas of the source texture actually stay black. When used for detail textures, one should always render the detail texture regardless of the distance, as especially in Deus Ex the distance cutoff resulted in some rather drastic brightness changes when coming close to a surface. However, detail textures should always have mipmaps, and even more importantly, the lowest mipmap level should be 50% grey. In fact, Deus Ex looks way better after one normalizes its detail textures this way.
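A sketch of such a normalization step (my own scheme, nothing the engine does): blend each mip level towards 50% grey, with a weight that reaches one at the lowest level.

// Weight=0 leaves the level untouched, Weight=1 turns it into plain 50% grey.
void NormalizeDetailMip( FColor* Texels, INT Count, FLOAT Weight )
{
	for ( INT i=0; i<Count; i++ )
	{
		Texels[i].R = (BYTE)(Texels[i].R + (128-Texels[i].R)*Weight);
		Texels[i].G = (BYTE)(Texels[i].G + (128-Texels[i].G)*Weight);
		Texels[i].B = (BYTE)(Texels[i].B + (128-Texels[i].B)*Weight);
	}
}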
For all other cases modulated textures plain suck. You can't put them into any draw order independent handling, you can't handle fog gracefully with them, and in almost all cases they won't be needed anymore.
Decals can always be done using some alpha blended format. While the average black bullet hole or explosion leftovers won't profit much, other textures like blood spurts or bio rifle splats will profit, as their texture data can properly be blended in instead of being heavily affected by the background texture. Probably a good share of them could be converted using some automated means, like clamping the texture data to [0.5,1], mapping this to [0,1], and deriving some alpha value like it's done for translucents.
Especially on the Meshes in Deus Ex, modulated is used to achieve some light-in-the-dark effect on Robots, AlarmPanels, etc. This should by all means be replaced with glowmaps, as they offer the same functionality.
While this removes nearly all modulated cases, there are still some rare ones which would be left, so there should be at least a somewhat working fallback approach. A rough idea I have, which I call for myself "the diner sign approach", would be to derive a diffuse material, a lighting term and an alpha term, and to treat it as an ordinary alpha blended texture. For the diffuse term my idea is to clamp the texture data to [0.5,1] and stretch that range to [0,1]. For the lighting term one should probably consider that only values above 0.5 added light, so using that range again probably makes the most sense. For the alpha term, one should probably apply some norm like for translucents, but on (2*|red-0.5|,2*|green-0.5|,2*|blue-0.5|). Note that the darkening effect is caused by reducing the background blending.
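A sketch of how such a conversion could look per texel (this is just my reading of the idea above; especially the scalar light term is a guess):

#include <math.h>

void DinerSign( FLOAT R, FLOAT G, FLOAT B, FLOAT Diffuse[3], FLOAT* LightTerm, FLOAT* Alpha )
{
	// Diffuse: clamp to [0.5,1] and stretch to [0,1].
	Diffuse[0] = 2.0f*fmaxf( R-0.5f, 0.0f );
	Diffuse[1] = 2.0f*fmaxf( G-0.5f, 0.0f );
	Diffuse[2] = 2.0f*fmaxf( B-0.5f, 0.0f );
	// Lighting: only values above 0.5 added light, so reuse that range.
	*LightTerm = 2.0f*fmaxf( fmaxf(R,fmaxf(G,B))-0.5f, 0.0f );
	// Alpha: maximum norm over the deviation from the neutral 0.5.
	*Alpha = fmaxf( 2.0f*fabsf(R-0.5f), fmaxf( 2.0f*fabsf(G-0.5f), 2.0f*fabsf(B-0.5f) ) );
}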
PF_TwoSided

This flag indicates that the primitive is not backface culled. Usually render devices in Unreal Engine 1 perform no backface culling on their own; instead it is culled beforehand in software, inside the scope of Render. Probably the best approach is to ditch all CPU side backface culling and triangle flipping, and instead enable backface culling in OpenGL and reverse the vertex order if needed in a geometry shader.
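The vertex order reversal could look like this (a GLSL sketch of the idea, nothing final):

const char* ReverseWindingGS =
	"#version 150\n"
	"layout(triangles) in;\n"
	"layout(triangle_strip, max_vertices=3) out;\n"
	"uniform bool ReverseWinding;\n"
	"void main()\n"
	"{\n"
	"	for ( int i=0; i<3; i++ )\n"
	"	{\n"
	"		gl_Position = gl_in[ReverseWinding ? 2-i : i].gl_Position;\n"
	"		EmitVertex();\n"
	"	}\n"
	"	EndPrimitive();\n"
	"}\n";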
This flag also has implications for the vertex normals on meshes, see below on requirements for Meshes.
LINE_DepthCued

The following implementation of the LINE_DepthCued flag has proven reasonable in the past. Movers and the Active Brush are drawn on top in the 3D views, and the path display gets occluded by bsp geometry, gaining some reasonable display instead of a wild entanglement drawn on top.
// Depth cued lines test against and write to the z-buffer, all others are drawn on top.
glDepthFunc( (LineFlags & LINE_DepthCued) ? GL_LEQUAL : GL_ALWAYS );
glDepthMask( (LineFlags & LINE_DepthCued) ? GL_TRUE : GL_FALSE );
Note that some equivalent functionality in the PF_ set would be appreciated, like a PF_NonDepthCued flag, so the LINE_DepthCued flag could be mapped to PolyFlags, thus avoiding conflicts with PF_Occlude or a potentially missing (or needless) reset of the depth function. Space inside the PF_ set is sparse, but this would be a hot candidate for it. It would also have some advantage for the 2D UI drawing phase: you wouldn't need to do z-buffer clearing just to draw a Tile over something.
Additional LINE_NoPointConvulse

I added this line flag to disable the expensive check in ortho views which draws a larger point instead of a line when the line (within some limit) would appear as a one pixel point in that view. For rendering meshes as wireframes one does not desire that they produce points, so this is just wasted time. Even with using this flag for mesh rendering and additionally cutting the cost of that check in half, it still takes up to ~2ms in total for the ortho top view of the Sunspire just to perform these checks (though my CPU is horribly slow), which is still an awful lot. One can probably get rid of a lot of those checks by overriding the DrawWireBackground() function in UEditorEngine and also supplying LINE_NoPointConvulse there. In addition, one can pass all the wire background lines of the same color and the same LineFlags in one batch.