This is the first test patch branch for the v4.5 kernel. It is based entirely on the v4.5_0469_vrq0 code and includes only one change.
The change is a continued improvement of the "sticky task" mechanism. For background on the "sticky task", please read CK's blog, or search for "sticky" there for further information.
In VRQ, the sticky task mechanism has already been modified as part of the policy caching timeout changes; please refer to that blog post for details. In this change, I'm trying *not* to put the sticky task into the grq. Instead, the sticky task is now set as the preempt task of the rq, and it will be selected to run immediately when the next reschedule comes.
Pros:
This change reduces grq lock access overhead for all workloads, especially heavy ones: under a 300% workload, NORMAL policy tasks completed in a recorded 2m32.xxxs compared to the original 2m36.xxxs.
Cons:
Theoretically, in the current implementation, two tasks A and B running on the same CPU could alternate as the running task and the sticky task, so that other tasks in the run queue fail to be selected.
The test branch is now on bitbucket and github. Have fun with this first test branch; your feedback is welcome.
BR Alfred
At first: Am I right that the test0 branch only differs from the vrq0 in the top 7 patches I see @bitbucket...linux-4.5.y-test? If so...
I'm not convinced of this approach. Although it's booting and running o.k., I've observed severe issues regarding interactivity:
* scrolling in firefox is generally(!) more sluggish vs. vrq0
* when my two wcgrid tasks are running as IDLEPRIO at nice 19 in the background, FF scrolling is even more "sticky" and general mouse movement on the desktop seems to be affected; they also introduce frame drops in flash video playback within FF vs. vrq0
I hope this info helps you in fine-tuning your test0 version! :-)
BR Manuel Krause
@Manuel
Thanks for the quick feedback. It seems that the low-priority tasks preempt too much. An improvement is incoming.
@Alfred:
I hope you're fine! Any news about the "incoming" improvement or the progress on it?
Don't misunderstand me: your current "standard" VRQ0 is working _very well_ on my machine, also with an updated BFQ (NO code change vs. 4.4.x) and my forward-ported 4.3.3 TuxOnIce, and now even with post-factum's most recent writeback patches on kernel 4.5.3. That's nice :-)))
@post-factum:
I really appreciate your work! Over the past years I've found it to be a trusted source of solutions for our shared goals. But please understand that I'm quite disappointed to read that you're going to fade out patches like TuxOnIce. Can't we find a way to convince Nigel or someone else to continue the development?
Best regards to both of you,
Manuel Krause
AFAIK, Nigel is busy with other projects, so it is not up to us to convince him to continue the development.
Also, unfortunately, TOI does not work for me anymore, so I see no reason to merge it into -pf. Nevertheless, you may always pull Nigel's git tree if it is updated in a timely manner.
@post-factum:
Thank you very much for the quick clarifications. They're so reasonable that I don't feel the need to comment on them in any way.
Have you, by coincidence ;-), given a try to my forward-ported TOI patch (from kernel 4.3.3) that I mentioned here:
http://cchalpha.blogspot.de/2016/04/45-vrq-patch-v450469vrq0-released.html?showComment=1461442160771#c5608643865812305468 ?
I don't want to pat myself on the back, but, except for the still-remaining warning mentioned there, it's working safely here.
Best regards,
Manuel Krause
Neither TOI for 4.3 nor for 4.2 worked for me. It became broken when I switched to complex disk layout involving RAID, LUKS, LVM and btrfs.
Alfred, I'm getting periodic spontaneous lockups (once in several days). I've managed to get kernel logs via netconsole, and here they are: https://gist.github.com/4e45afffb863e2522c0e0fd1ae282cd0
Could you please take a look at those logs? For now I've switched back to the stock kernel to check whether it is a mainline issue or BFS-related.
My guess is that you enabled nmi_watchdog when you noticed the periodic lockups; is that right, or do you have it enabled all the time by default? Also, I guess you are on the 4.5-vrq branch?
Maybe you could also switch back to 4.4-vrq and compare between releases.
BR Alfred
Correct, NMI watchdog is enabled on my system (by default). Should I disable it?
I'm on the 4.5-vrq branch.
Haven't faced such an issue on 4.4-vrq with NMI watchdog enabled.
OK. Just give me some time to test the mainline kernel.