[227j_35] FrameRateLimit cvar for XOpenGL
Smirftsch (Forum Administrator)
Reply #15 - Apr 22nd, 2016 at 4:50pm
But... did you ever try adaptive mode?
  

Sometimes you have to lose a fight to win the war.
[]KAOS[]Casey (Oldunreal MasterPoster)
Reply #16 - Apr 22nd, 2016 at 5:39pm
shoober420 wrote on Apr 22nd, 2016 at 3:06pm:
To clarify, I have input lag in all games when I enable VSync. I'm very sensitive to it and notice it immediately. I play Counter-Strike 1.6 at 640x480 and a whopping 170Hz refresh rate, so it's super smooth. It's my practice to keep VSync disabled in all my games.



Funny, because these days you can play at 2560x1440 @ 144Hz with new monitors, and you will almost never cap 144 fps because of the framebuffer size (about 12x larger pixel-wise, interestingly). Though I am certain you play with a CRT if you go that far.
  
shoober420 (Betatester)
Reply #17 - Apr 22nd, 2016 at 6:07pm
[]KAOS[]Casey wrote on Apr 22nd, 2016 at 5:39pm:
though I am certain you play with a CRT if you go that far


Yes, I do use a CRT. I notice input lag on LCD very easily. This is why I use a CRT. I'm sure that the gaming grade LCDs are decent, but I just love CRTs.

Smirftsch wrote on Apr 22nd, 2016 at 4:50pm:
But... did you ever try adaptive mode?


I will tomorrow. I've been working 12 days in a row and have been really busy lately; I normally have time. I promise I will.
  
[]KAOS[]Casey (Oldunreal MasterPoster)
Reply #18 - Apr 22nd, 2016 at 6:20pm
shoober420 wrote on Apr 22nd, 2016 at 6:07pm:
[]KAOS[]Casey wrote on Apr 22nd, 2016 at 5:39pm:
though I am certain you play with a CRT if you go that far


Yes, I do use a CRT. I notice input lag on LCD very easily. This is why I use a CRT. I'm sure that the gaming grade LCDs are decent, but I just love CRTs.

Smirftsch wrote on Apr 22nd, 2016 at 4:50pm:
But... did you ever try adaptive mode?


I will tomorrow. I've been working 12 days in a row and have been really busy lately; I normally have time. I promise I will.


One day there will be OLED monitors with insane refresh rates and CRT-like blacks. Right now they're exorbitantly expensive.
  
shoober420 (Betatester)
Reply #19 - Apr 22nd, 2016 at 8:33pm
Not only that, but to me colors look more vivid on a CRT. Plus, when you display a resolution that isn't native to the LCD, it looks like poop lol
  
han (Global Moderator, Developer Team)
Reply #20 - Apr 22nd, 2016 at 8:37pm
Adaptive vertical synchronisation only helps if the framerate drops below the refresh rate. Otherwise it behaves the same as regular vertical synchronisation.

Also, there doesn't seem to be any consistency in when and how input lag is perceived.

From my experience with DX, I often experienced it randomly in the past. However, from some point on while working on HX I used the shipped D3DDrv, and later my own OpenGLDrv based on the ut436 opengldrvsrc, and never experienced it. A couple of months ago I noticed that while I got no input lag in Revision with my OpenGLDrv, I immediately got a really noticeable and annoying input lag when using utd3d. So when I experienced it in the past, it might have been after I installed utglr/utd3d. A couple of weeks ago I was browsing the utd3d source and noticed that it calls timeBeginPeriod(1) every frame. In any case that should not happen every frame, and there is a chance that it might cause the issue.
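
To illustrate the point about timeBeginPeriod: it is meant to be paired with timeEndPeriod and requested once for the lifetime of the process, not every frame. A minimal sketch (the wrapper class is made up for illustration; this is not utd3d or 227 code):

Code:
#include <windows.h>
#include <mmsystem.h>   // timeBeginPeriod/timeEndPeriod, link against winmm.lib

// Hypothetical RAII wrapper: request 1 ms timer granularity once at startup
// and release it exactly once at shutdown, instead of calling it per frame.
struct FScopedTimerResolution
{
    FScopedTimerResolution()  { timeBeginPeriod(1); }
    ~FScopedTimerResolution() { timeEndPeriod(1); }
};

int main()
{
    FScopedTimerResolution TimerResolution; // held for the whole process lifetime
    // ... game / render loop ...
    return 0;
}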

Another thing I noticed recently is that when using 227i without raw input (as raw input is kind of annoying in windowed mode there), I got a much larger input lag than in 226b, which I'm currently using a lot for rendering development. This might be caused by 227's use of timeGetTime, which only has a maximum precision of 1 ms; for my taste that is too low when a frame takes about 17 ms with vsync enabled. However, it might also be caused by other changes on the (UnrealScript) side of the input system, but I have no overview of what changed there. In any case it is probably a good idea to check for a constant TSC (there is a CPUID flag advertising it) and whether the OS is >= Vista (as you can be certain there that RDTSC is synchronized across cores), and to bring back the old TSC-based low-cost high-resolution timer. Currently almost all but really ancient systems have a CPU with a constant TSC.
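
For reference, the CPUID flag mentioned above is the invariant TSC bit, CPUID.80000007H:EDX[8]. A minimal detection sketch (generic code, not taken from the 227 source):

Code:
#include <cstdint>
#include <cstdio>
#ifdef _MSC_VER
#include <intrin.h>
#else
#include <cpuid.h>
#include <x86intrin.h>
#endif

// Returns true if the CPU advertises an invariant (constant-rate) TSC.
static bool HasInvariantTSC()
{
#ifdef _MSC_VER
    int Regs[4] = {0};
    __cpuid(Regs, 0x80000000);
    if ((unsigned)Regs[0] < 0x80000007u)
        return false;
    __cpuid(Regs, 0x80000007);
    return (Regs[3] & (1 << 8)) != 0;          // EDX bit 8: invariant TSC
#else
    unsigned A, B, C, D;
    if (__get_cpuid_max(0x80000000, nullptr) < 0x80000007u)
        return false;
    __get_cpuid(0x80000007, &A, &B, &C, &D);
    return (D & (1u << 8)) != 0;               // EDX bit 8: invariant TSC
#endif
}

// Raw TSC read; divide by a calibrated frequency to convert to seconds.
static uint64_t ReadTSC()
{
    return __rdtsc();
}

int main()
{
    std::printf("invariant TSC: %d, tsc=%llu\n",
                (int)HasInvariantTSC(), (unsigned long long)ReadTSC());
    return 0;
}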

Another thing which might be adverse for input is forcing the game onto a single CPU core. Galaxy, ALAudio/OpenAL and minor parts of the network code use threads, so forcing everything to run on one core can cause harm.

Another aspect is that, in at least some cases, users have rendering of multiple frames ahead activated in their video driver's control panel, which obviously is bad.

Apart from my experience above and possible other problematic code, the problem of perceived input lag probably breaks down into two categories: first, the time between input and its display on the screen, and second, the variance of that delay. Imho the second issue is probably the more significant one, as you can adapt to a constant input delay (when it is sufficiently small), but you can't adapt to a delay that varies significantly.

To reduce the time between input and display, you would ideally want to know how much time it will take to render the frame, wait that long, grab input, render, and output just shortly before the rendered frame is transferred to the monitor. In practice the best you can do is to average the time the previous frames took to render, reduce that by some error margin (either constant or, say, based on the variance), and wait that long before grabbing input. The issue with this approach is that if you wait too long, you miss the frame and randomly get 1/RefreshRate of extra delay, which would feel really odd. So things like Nvidia G-Sync are actually quite a good approach to that problem.
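
A minimal sketch of that pacing idea (the class name, window size and 2 ms margin are made-up values for illustration, not anything an Unreal renderer actually does):

Code:
#include <chrono>
#include <deque>
#include <numeric>
#include <thread>

using Clock = std::chrono::steady_clock;

// Hypothetical pacer: delay input sampling so it happens as late as possible
// while (hopefully) still finishing the frame before the next vblank.
class FInputPacer
{
public:
    void BeginFrame(double VBlankIntervalSec)
    {
        const double Margin = 0.002;                              // safety margin (assumption)
        const double Wait   = VBlankIntervalSec - Average() - Margin;
        if (Wait > 0.0)
            std::this_thread::sleep_for(std::chrono::duration<double>(Wait));
        FrameStart = Clock::now();
        // ...grab input and render here...
    }

    void EndFrame()
    {
        const double Elapsed =
            std::chrono::duration<double>(Clock::now() - FrameStart).count();
        History.push_back(Elapsed);
        if (History.size() > 30)                                  // short rolling window
            History.pop_front();
    }

private:
    double Average() const
    {
        return History.empty()
            ? 0.0
            : std::accumulate(History.begin(), History.end(), 0.0) / History.size();
    }

    std::deque<double> History;
    Clock::time_point  FrameStart;
};

The downside named above is visible in the sketch: if the margin is too small or the next frame is slower than the average, the frame misses the vblank and costs a full extra refresh interval.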
To reduce the variance of the time spent rendering, one would ideally aim for a rendering design which is less prone to (local) changes in rendering time. utglr, for instance, uses rbtrees for texture cache lookup, which have a really expensive insert behaviour; that might explain why I perceived some micro stuttering in the past when using utglr in Revision. Also, the span-based rendering architecture is probably prone to a large variance in the time spent rendering a frame, as at least the time spent on visibility set determination depends heavily on the view angle and position. So getting away from that rendering architecture would probably also improve things.
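
For comparison, a hash-table-based texture cache has amortized O(1) lookup and insert and avoids the rebalancing cost of rbtree inserts. A sketch (the structures are invented for illustration; only the 64-bit cache id key resembles the render interface):

Code:
#include <cstdint>
#include <unordered_map>

// Invented per-texture record; the real utglr structures look different.
struct FCachedTexture
{
    unsigned int GLTextureId = 0;
    int          BaseMip     = 0;
};

class FTextureCache
{
public:
    // Amortized O(1) find-or-insert keyed by the 64-bit cache id,
    // instead of an O(log n) red-black tree insert with rebalancing.
    FCachedTexture& FindOrAdd(uint64_t CacheID, bool& bExisted)
    {
        auto Result = Cache.try_emplace(CacheID);
        bExisted = !Result.second;
        return Result.first->second;
    }

private:
    std::unordered_map<uint64_t, FCachedTexture> Cache;
};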

Another thing which would be interesting to know is why raw input actually helps. By design it should really just remove the mouse acceleration, which shouldn't have such a huge impact. But maybe the key advantage is that the mouse position is not an integer pixel coordinate on screen; you basically get sub-pixel readings as well, so it matches more closely what the actual mouse position was at that time.
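
For reference, the usual Win32 way to get those unaccelerated readings is the Raw Input API; a minimal sketch (plain API usage, not the 227 input code):

Code:
#include <windows.h>

// Register the mouse for raw input; movement then arrives as WM_INPUT
// device deltas instead of accelerated WM_MOUSEMOVE screen coordinates.
bool RegisterRawMouse(HWND hWnd)
{
    RAWINPUTDEVICE Rid;
    Rid.usUsagePage = 0x01;   // HID generic desktop controls
    Rid.usUsage     = 0x02;   // mouse
    Rid.dwFlags     = 0;
    Rid.hwndTarget  = hWnd;
    return RegisterRawInputDevices(&Rid, 1, sizeof(Rid)) == TRUE;
}

// Call from the window procedure when handling WM_INPUT.
void HandleRawInput(LPARAM lParam, long& OutDX, long& OutDY)
{
    RAWINPUT Raw;
    UINT Size = sizeof(Raw);
    if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &Raw, &Size,
                        sizeof(RAWINPUTHEADER)) != (UINT)-1
        && Raw.header.dwType == RIM_TYPEMOUSE)
    {
        OutDX = Raw.data.mouse.lLastX;   // raw device delta, no acceleration
        OutDY = Raw.data.mouse.lLastY;
    }
}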

To sum up: the above is just what I have gathered so far about the issue and about the things that influence or might cause it. I neither feel like I have a complete picture of it, nor do I have an idea what the best way to deal with it would be, because the biggest issue is the lack of widespread availability of G-Sync or high refresh rate monitors, which are basically the requirement for a good solution to the problem.
  

HX on Mod DB. Revision on Steam.
shoober420 (Betatester)
Reply #21 - Apr 22nd, 2016 at 8:50pm
I for sure notice huge input lag when you set the pre-rendered frames higher than 1. I have it set to 0 via a setting in the xorg.conf file on Linux. The Nvidia Windows drivers removed the option for some reason; I think you need a tweak tool to change it now.

I remember I was playing around in CS 1.6 one day and enabled VSync at 170Hz, and I didn't really notice any input lag. Then again, 170Hz is pretty high; it's the max my monitor supports. I disable it to be on the safe side. I recall my mouse feeling sluggish when I enabled VSync at a generic 60/75Hz refresh rate.
« Last Edit: Apr 23rd, 2016 at 6:53am by shoober420 »  
Skywolf (Betatester)
Reply #22 - Apr 22nd, 2016 at 9:28pm
I usually just limit my FPS to one frame below the refresh rate of my monitor using an external program. For some reason this gets rid of any (noticeable) lag when using VSync while still keeping all the benefits. No idea how, but it works on all games and all setups I have tried it on.
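
A minimal sketch of that kind of limiter (a standalone loop with an assumed 60Hz refresh rate, not the 227 FrameRateLimit code):

Code:
#include <chrono>
#include <thread>

int main()
{
    using Clock = std::chrono::steady_clock;

    const double RefreshRate = 60.0;                   // monitor refresh rate (assumption)
    const double TargetFPS   = RefreshRate - 1.0;      // cap one frame below refresh
    const auto   FrameBudget = std::chrono::duration_cast<Clock::duration>(
        std::chrono::duration<double>(1.0 / TargetFPS));

    auto Next = Clock::now() + FrameBudget;
    for (;;)
    {
        // ...poll input, tick, render, swap buffers...

        std::this_thread::sleep_until(Next);           // burn the rest of the frame interval
        Next += FrameBudget;
    }
}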
  

I hate it when people ask me what my favorite game is. Just try to explain you're not talking about Unreal Tournament.
Smirftsch (Forum Administrator)
Reply #23 - Apr 23rd, 2016 at 5:29am
As a small side note, the Linux build offers nanosecond timer resolution anyway.
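
Presumably that refers to something along the lines of clock_gettime(CLOCK_MONOTONIC), which reports in nanoseconds; a generic sketch (not the actual 227 Linux timer code):

Code:
#include <stdio.h>
#include <time.h>

// Monotonic time in seconds, with nanosecond resolution on Linux.
static double AppSeconds()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + (double)ts.tv_nsec * 1e-9;
}

int main()
{
    const double T0 = AppSeconds();
    // ...do some work...
    printf("elapsed: %.9f s\n", AppSeconds() - T0);
    return 0;
}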
  

Sometimes you have to lose a fight to win the war.
shoober420 (Betatester)
Reply #24 - Apr 23rd, 2016 at 6:58am
Smirftsch wrote on Apr 23rd, 2016 at 5:29am:
As a small side note, the Linux build offers nanosecond timer resolution anyway.


Lolol

About the animated textures: will there be an implementation to control the FPS of animated textures, like in that thread linked earlier? It would be cool to control the FPS of the textures, so the water would look more vivid. Controlling the torches would be cool too. The water and fire look a lot better at 60 FPS; when you play at 120 FPS it looks way too crazy lol. I imagine the default FPS for animated textures is either 30 or 60.
  
Masterkent (Developer Team)
Reply #25 - Apr 23rd, 2016 at 5:59pm
Masterkent wrote on Apr 22nd, 2016 at 10:33am:
If the input system is implemented properly, max input lag should not exceed 1/FPS seconds

I forgot an important condition, "assuming that VSync is implemented in a theoretically perfect way", which, as far as I can see, is not the case on my system :=)

A test program that renders an additional mouse cursor using OpenGL with VSync revealed an interesting thing: the additional cursor (rendered with OpenGL) appears at a given location on the display with a delay of 2 or 5 frames (for the Intel or Nvidia card, respectively) after the native Windows cursor would appear in the same place due to calling SetCursorPos. A similar test with D3D11 rendering showed a delay of 2 frames for both Intel and Nvidia. So it seems the problem lies in the QoI of VSync rather than the QoI of the input system. I didn't expect that VSync itself could work so badly.
« Last Edit: Apr 23rd, 2016 at 9:01pm by Masterkent »  
han (Global Moderator, Developer Team)
Reply #26 - Apr 23rd, 2016 at 8:38pm
Masterkent wrote on Apr 23rd, 2016 at 5:59pm:
A test program that renders an additional mouse cursor using OpenGL with VSync revealed an interesting thing: the additional cursor (rendered with OpenGL) appears at a given location on the display with a delay of 2 or 5 frames (for the Intel or Nvidia card, respectively) after the native Windows cursor would appear in the same place due to calling SetCursorPos. So it seems the problem lies in the QoI of VSync rather than the QoI of the input system. I didn't expect that VSync itself could work so badly. I'll try to measure the lag for D3D later.


I use some extensive clocking in my OpenGLDrv and noticed that when I have vsync enabled, certain parts (like DrawSurface) afterwards take up to 16 ms. So it seems that the wait for the vsync actually happens on the next draw call, which means the input is in this case grabbed even before the last frame has appeared on screen. So this does indeed cause another input delay. utglr/utd3d afaik has its frame rate limiting code after the buffer swap call in URenderDevice::Unlock(), which would make sense as the place to put it (in RenDev-only scope). The frame rate limiting code inside the launcher also runs shortly afterwards, before handling the next messages, so that spot would be equivalent. Maybe there is a way to explicitly wait in this place until the frame is displayed, so putting in an option for this might be worth a try.
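
One common (if blunt) way to get such an explicit wait, shown here only as an assumption about what that option could do, is to call glFinish() right after the buffer swap so the call does not return until the queued work has drained:

Code:
#include <windows.h>
#include <GL/gl.h>

// Hypothetical end-of-frame helper for a render device's Unlock():
// with vsync on, SwapBuffers usually just queues the swap, and the real wait
// otherwise surfaces on the next GL call (e.g. the next DrawSurface).
void EndFrame(HDC hDC, bool bWaitForPresent)
{
    SwapBuffers(hDC);        // queue the buffer swap
    if (bWaitForPresent)
        glFinish();          // block until all prior GL commands (and, on many
                             // drivers, the swap itself) have completed
}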
  

HX on Mod DB. Revision on Steam.
Masterkent (Developer Team)
Reply #27 - May 2nd, 2016 at 10:53am
shoober420 wrote on Apr 23rd, 2016 at 6:58am:
About the animated textures, will there be an implementation to control the FPS of animated textures, like in that thread linked earlier?

I implemented a small client-side mod that modifies the MinFrameRate and MaxFrameRate of textures on level startup. If a texture is loaded after level startup, it won't be affected. The mod should work in SP and multiplayer modes. I can't guarantee that it will work reliably though, because the implementation is based on some hacks whose reliability is questionable. (The extension of the 7z archive was renamed from 7z to rar, because the forum does not accept 7z when modifying a message.)
  

AnimTexMod_v1_0.rar ( 3 KB | 16 Downloads )
Masterkent (Developer Team)
Reply #28 - May 2nd, 2016 at 10:53am
han wrote on Apr 23rd, 2016 at 8:38pm:
I use some extensive clocking in my OpenGLDrv and noticed that when I have vsync enabled, certain parts (like DrawSurface) afterwards take up to 16 ms. So it seems that the wait for the vsync actually happens on the next draw call.

No idea how that OpenGLDrv is implemented, but in my case I measured a relative delay in number of frames.

I called SetCursorPos in order to move the mouse cursor to a new location on each frame, drew a triangle by means of OpenGL functions and then called SwapBuffers in the same thread of execution. In order to make the mouse cursor and the triangle appear in the same visible location, I had to draw the triangle with a prediction of 2 or 5 frames (depending on the chosen video card) relative to the mouse cursor movements. That is, if the mouse cursor at frame n is set at location L(n), then the triangle at frame n must be drawn at L(n + 2) or L(n + 5) for them to overlap in the same location of the physical display. Note that there is no input handling in this scheme, so the extra lag of 2 or 5 frames is purely an output lag here.

Interestingly, there is no such huge 5-frame lag when using D3D. I also tried to build a simple program using the Vulkan API, but I couldn't synchronize the program's refresh rate with the display refresh rate yet. According to the specification, using VK_PRESENT_MODE_FIFO_KHR should be similar to using wglSwapIntervalEXT(1), but I couldn't get 60 FPS with VK_PRESENT_MODE_FIFO_KHR (or any other present mode). Did anyone here experiment with Vulkan?
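
For anyone wanting to try the same, a fragment showing where the present mode is chosen (this assumes an already created VkPhysicalDevice and VkSurfaceKHR and is not a complete Vulkan program):

Code:
#include <vulkan/vulkan.h>
#include <vector>

// FIFO is the vsync-like mode; per the spec it is the only present mode
// guaranteed to be supported.
VkPresentModeKHR ChoosePresentMode(VkPhysicalDevice Gpu, VkSurfaceKHR Surface)
{
    uint32_t Count = 0;
    vkGetPhysicalDeviceSurfacePresentModesKHR(Gpu, Surface, &Count, nullptr);
    std::vector<VkPresentModeKHR> Modes(Count);
    vkGetPhysicalDeviceSurfacePresentModesKHR(Gpu, Surface, &Count, Modes.data());

    for (VkPresentModeKHR Mode : Modes)
        if (Mode == VK_PRESENT_MODE_FIFO_KHR)
            return Mode;                               // should always be reached
    return Modes.empty() ? VK_PRESENT_MODE_FIFO_KHR : Modes[0];
}

// The chosen mode then goes into VkSwapchainCreateInfoKHR::presentMode
// before vkCreateSwapchainKHR.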
  
han (Global Moderator, Developer Team)
Reply #29 - May 4th, 2016 at 1:48pm
@MK:
This way, one additional frame of delay is already caused by the mouse cursor being rendered by the game using the prior position, while your custom code already renders it at the new position.

So this would sum up to two frames (or more, if rendering multiple frames ahead is activated).
  

HX on Mod DB. Revision on Steam.