Comments on Alfred Chen's Blog: "v4.7_0472_vrq2 patch released" (10 comments)

Alfred Chen (2016-09-08 02:23):
@kernelOfTruth
Would you try the all-in-one patch file of vrq1? It should fix a suspend/resume issue on vrq0, and there are only 4 commits of difference between vrq1 and vrq2.

kernelOfTruth (2016-09-07 16:42):
I should have written it more clearly, sorry.
The issue didn't occur with a kernel running vrq0, but it now occurs on a kernel using vrq2.
Thanks

kernelOfTruth (2016-09-07 08:28):
I don't know, Alfred, whether it's VRQ2 compared to VRQ0, but there's a lot of stuttering during mpv playback when e.g. HQ (1080p) videos are played back at 60 or 48 fps. I've meanwhile also updated vapoursynth and mpv, so I'm not really sure whether it's BFS or mpv and vapoursynth that are causing the trouble. The nvidia drivers were also updated: are there any known issues with 370.23? In the past this was really smooth. It occurs even with compositing disabled for kwin.

> is the introduction of preempt stick task to replace the original stick timeout in previous release, which helps with high workload performance and the result can be proved in the sanity test report.

It sounds like VRQ3 could help...
Thanks

kernelOfTruth (2016-08-21 13:26):
I compiled a new kernel with 4.7.2 (merging 4.7.2 into 4.7.1), and the error message didn't occur on this boot. I'll watch whether it shows up again on the next boot-ups.
So far, so good.
Thank you :)

Alfred Chen (2016-08-16 17:47):
7a1262a bfs/vrq: task_cpu_hotplug() update
in VRQ2 should have fixed this, by allowing only online cpus in the tsk_allowed_cpumask. Would you double-check it again?
BR Alfred

kernelOfTruth (2016-08-16 15:07):
FYI: the following occurs with VRQ0 and VRQ2:
[ 0.112323] Renew affinity for 14 processes to cpu 1
[ 0.112410] #2
[ 0.175369] Renew affinity for 14 processes to cpu 2
[ 0.175450] #3
[ 0.238417] Renew affinity for 14 processes to cpu 3
[ 0.238498] #4
[ 0.300494] Renew affinity for 14 processes to cpu 4
[ 0.300572] #5
[ 0.362535] Renew affinity for 14 processes to cpu 5
[ 0.362616] #6
[ 0.423524] ------------[ cut here ]------------
[ 0.423546] WARNING: CPU: 2 PID: 24 at arch/x86/kernel/smp.c:125 native_smp_send_reschedule+0x25/0x3b
[ 0.423555] Modules linked in:
[ 0.423565] CPU: 2 PID: 24 Comm: ksoftirqd/2 Not tainted 4.7.1_dtop-I.4 #1
[ 0.423574] Hardware name: [snip]
[ 0.423583] 0000000000000086 00000000a4a20f87 ffff8807fa93bd98 ffffffff81551c6d
[ 0.423593] 0000000000000000 0000000000000000 ffff8807fa93bdd8 ffffffff81124ccc
[ 0.423604] 0000007dffffffff 0000000000000002 0000000000000000 000000000000a138
[ 0.423614] Call Trace:
[ 0.423625] [] dump_stack+0x4d/0x63
[ 0.423636] [] __warn+0xc5/0xe0
[ 0.423645] [] warn_slowpath_null+0x18/0x1a
[ 0.423654] [] native_smp_send_reschedule+0x25/0x3b
[ 0.423664] [] __schedule+0x1e8/0x7bb
[ 0.423674] [] schedule+0x79/0xc1
[ 0.423684] [] smpboot_thread_fn+0x14d/0x1a9
[ 0.423693] [] ? sort_range+0x1d/0x1d
[ 0.423702] [] kthread+0xdc/0xe4
[ 0.423712] [] ret_from_fork+0x1f/0x40
[ 0.423721] [] ? kthread_create_on_node+0x1ac/0x1ac
[ 0.423733] ---[ end trace dd6bfdceedb0dc5f ]---
[ 0.424524] ------------[ cut here ]------------
[ 0.424533] WARNING: CPU: 2 PID: 24 at arch/x86/kernel/smp.c:125 native_smp_send_reschedule+0x25/0x3b
[ 0.424542] Modules linked in:
[ 0.424551] CPU: 2 PID: 24 Comm: ksoftirqd/2 Tainted: G W 4.7.1_dtop-I.4 #1
[ 0.424562] Hardware name: [snip]
[ 0.424572] 0000000000000086 00000000a4a20f87 ffff8807fa93bd98 ffffffff81551c6d
[ 0.424584] 0000000000000000 0000000000000000 ffff8807fa93bdd8 ffffffff81124ccc
[ 0.424592] Renew affinity for 14 processes to cpu 6
[ 0.424603] 0000007dffffffff 0000000000000002 0000000000000000 000000000000a138
[ 0.424610] Call Trace:
[ 0.424617] [] dump_stack+0x4d/0x63
[ 0.424624] [] __warn+0xc5/0xe0
[ 0.424631] [] warn_slowpath_null+0x18/0x1a
[ 0.424638] [] native_smp_send_reschedule+0x25/0x3b
[ 0.424645] [] __schedule+0x1e8/0x7bb
[ 0.424653] #7
[ 0.424653] [] schedule+0x79/0xc1
[ 0.424665] [] smpboot_thread_fn+0x14d/0x1a9
[ 0.424672] [] ? sort_range+0x1d/0x1d
[ 0.424679] [] kthread+0xdc/0xe4
[ 0.424686] [] ret_from_fork+0x1f/0x40
[ 0.424692] [] ? kthread_create_on_node+0x1ac/0x1ac
[ 0.424699] ---[ end trace dd6bfdceedb0dc60 ]---
[ 0.486636] Renew affinity for 14 processes to cpu 7
[ 0.486648] x86: Booted up 1 node, 8 CPUs
[ 0.486655] smpboot: Total of 8 processors activated (54306.10 BogoMIPS)

Manuel Krause (Anonymous, 2016-08-13 06:42):
Mmmh. I'm using the performance governor all the time. Probably a positive side effect of something I haven't noticed in detail.
It may even be because I closed some tabs in Firefox.
Sorry for bothering you. BR,
Manuel Krause

Alfred Chen (2016-08-13 00:44):
The commit
cbfc46d bfs/vrq: Deploy cpufreq_trigger() in task_preempt_rq()
should help with the cpufreq governor and improve throughput in sanity tests.

Manuel Krause (Anonymous, 2016-08-12 08:33):
@Alfred:
With all the previous patches I've noticed increasing CPU utilization in my usage scenario, depending mainly on Firefox's uptime. With this current patch that increase seems to be capped, or at least its rate is significantly lower, which is very nice. Also, the distribution of system- and normal-priority load between my two CPU cores seems to be better equalized.
From your developer's view: can this be explained by the changes in this VRQ2, or is it more likely "fallout" from other possible changes (e.g. Mesa + Intel gfx updates)? I'd just like to read your opinion on this.
Now moving on to the new -test0 version... :-)))
BR, and thank you for all your good work,
Manuel Krause

Manuel Krause (Anonymous, 2016-08-10 09:05):
@Alfred:
It's up and running fine for some hours now. :-)
@Eduardo & kernelOfTruth:
It would be nice to read your testing results as well.
BR, Manuel Krause