tag:blogger.com,1999:blog-2963790426029213933.post5105471602592285060..comments2024-02-29T00:33:07.382-08:00Comments on Alfred Chen's Blog: PDS 0.99m releaseAlfred Chenhttp://www.blogger.com/profile/03164306846702841944noreply@blogger.comBlogger51125tag:blogger.com,1999:blog-2963790426029213933.post-15094965223771856172019-03-01T04:02:49.930-08:002019-03-01T04:02:49.930-08:00Hi Alfred,
Thanks for the code change. Looks bette...Hi Alfred,<br /><br />Thanks for the code change. Looks better and runs fine. (As I wrote, I used the formula from your old commit as a quick hack.)<br />But ignoring the old p-&gt;deadline and recalculating it from scratch leads, for me, to the bug I mentioned. Had you asked me two weeks ago, I would have said there is no problem for me with PDS, because this hang is really difficult to identify. But this hang has already led to a bug fix that no one else had reported.<br /><br />Anyway, thanks for your help. It&#39;s fine for me to do a &quot;quilt import&quot; after each &quot;git pull&quot; ;). Don&#39;t worry about it. I now know the cause and the solution.<br /><br />But maybe you could tell me: will the new p-&gt;deadline always be larger than the old one, or does it depend on the situation (load etc.)?<br /><br />Many thanks and regards<br />sysitos<br /><br />Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-51244185571458641482019-03-01T01:16:44.650-08:002019-03-01T01:16:44.650-08:00It prefers higher clocking cores for tasks.
https:...It prefers higher clocking cores for tasks.<br />https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-max-technology.htmlAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-37226502323497393502019-02-28T14:29:35.964-08:002019-02-28T14:29:35.964-08:00What does this clearlinux patch actually do?What does this clearlinux patch actually do?Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-1926936677199901412019-02-28T07:57:19.232-08:002019-02-28T07:57:19.232-08:00I did a pre-study of ITMT last year on my I...I did a pre-study of ITMT last year on my Intel gen8 CPU in a notebook, but it turned out that it doesn&#39;t support ITMT. I will check it again on the new scheduler once the principal features are done this year.Alfred Chenhttps://www.blogger.com/profile/03164306846702841944noreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-16086915879181640622019-02-28T07:32:07.409-08:002019-02-28T07:32:07.409-08:00Any way to implement this in PDS or the new schedul...Any way to implement this in PDS or the new scheduler?<br />https://github.com/clearlinux-pkgs/linux/blob/master/0123-add-scheduler-turbo3-patch.patchAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-82660123599940685552019-02-27T19:00:14.143-08:002019-02-27T19:00:14.143-08:00@sysitos
I have to say that none of your changes...@sysitos<br />I have to say that none of your changes met the design intention. 1) Reverting the commit is not a good idea, as there was a bug in the previous deadline calculation; that&#39;s why the time slice expiration was reworked. 2) It bypasses the deadline update for NORMAL tasks entirely.<br /><br />Your last code change looks ok. I&#39;d suggest changing the deadline calculation to<br />p-&gt;deadline /= 2;<br />p-&gt;deadline += rq-&gt;clock / 2 + task_deadline_diff(p);<br />If it works for you and your issue, you can keep the code change for yourself. I am not going to make changes to PDS so far, because I believe this only fixes particular cases and may fail in other cases. I hope you can understand.<br /><br />In the long term, there will be no deadline concept in the new scheduler, so there will be less trouble to worry about. But there will still be a yield problem. I will see how to handle it later, as it will be a low-priority item.Alfred Chenhttps://www.blogger.com/profile/03164306846702841944noreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-26337305587716891422019-02-27T08:28:27.693-08:002019-02-27T08:28:27.693-08:00Hi (@svainar),
so I reverted my earlier mistake and m...Hi (@svainar),<br /><br />so I reverted my earlier mistake and modified pds.c (in the spirit of the old commit):<br /><br /><br /> if (p-&gt;prio &gt;= NORMAL_PRIO) {<br /> if (p-&gt;prio == NORMAL_PRIO) {<br /> p-&gt;deadline /= 2;<br /> p-&gt;deadline += (rq-&gt;clock + task_deadline_diff(p)) / 2;<br /> } else<br /> p-&gt;deadline = rq-&gt;clock + task_deadline_diff(p);<br /><br /> update_task_priodl(p);<br /> }<br /><br />This works here, and should be elegant for the rest too ;).<br /><br />PS: It could all be useless, because Alfred is working on a new scheduler ;)<br /><br />Regards sysitosAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-42038751842821702182019-02-27T07:35:50.891-08:002019-02-27T07:35:50.891-08:00@sveinar
thanks for the clarification; then the ...@sveinar<br /><br />thanks for the clarification; then the only clean solution (for me) is solution 1, the ugly one :/<br />Alfred handled the normal prio tasks in the old commit in another way, which had no drawback here.<br /><br />Sorry for the stupid question, but maybe you could clarify it a little bit: does that mean that a &quot;normal prio task&quot; would never be refreshed, and that the process time assigned within a tick (and only for this tick) would stay the same?<br /><br />Do you have a workload example where I could check the wrong behavior?<br />Thanks and regards<br />Sysitos<br />Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-35373153430938078412019-02-27T04:12:04.845-08:002019-02-27T04:12:04.845-08:00@sysitos
I kind of expect that commit to disable t...@sysitos<br />I kind of expect that commit to disable timeslice expiration for &quot;normal tasks&quot; by doing that. Not that I am a programmer or an expert in any way, though :)<br /><br />I.e., you would never call &quot;update_task_priodl(p);&quot; if a task is running at &quot;normal prio&quot; (most tasks are).<br /><br />This would in turn probably work for you, but I am not entirely sure it is elegant for the rest of us? :)Sveinar Søplerhttps://www.blogger.com/profile/18401720133659243541noreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-9516210975064245722019-02-26T14:57:28.169-08:002019-02-26T14:57:28.169-08:00Hi Alfred,
here I am again. I know, I&#39;m a lit...Hi Alfred,<br /><br />here I am again. I know, I&#39;m a little bit insistent ;)<br /><br />Because there is no workaround for my problem and I don&#39;t want to go back to CFS, but I also need my mail working, I have checked the problem again and found two solutions:<br /><br />1. solution (the ugly one):<br />I completely reverted your commit 51d8f8b8 on top of the current pf-kernel. It compiles fine and, even better, works without the mentioned errors. But I think you wouldn&#39;t like it, because of your rework within this commit.<br /><br />2. solution, the elegant one, so I hope ;)<br />I checked your commit again and changed only a single character:<br /><br />line 463 old: if (p-&gt;prio &gt;= NORMAL_PRIO) { <br />line 463 new: if (p-&gt;prio &gt; NORMAL_PRIO) {<br /><br />It compiles and runs fine. I even tested it with the different yields (0,1,2). I couldn&#39;t trigger the error yet and checked different situations. I haven&#39;t seen any drawbacks.<br /><br />Maybe you or someone else could double-check it.<br /><br />Thanks for your help.<br />Regards sysitosAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-73459990831028106182019-02-26T09:19:41.260-08:002019-02-26T09:19:41.260-08:00@Manuel and Eduardo,
thanks, I had some udev rule in mind....@Manuel and Eduardo,<br /><br />thanks, I had some udev rule in mind. There are endless ways in Linux to do so ;). But it wouldn&#39;t help, see below.<br /><br />@Alfred<br /><br />bad news: the error is now triggered even with the yield_type=2 setting, so no workaround is possible anymore. I checked it with the newest pf-kernel.<br /><br />Regards sysitosAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-26949919286694419762019-02-26T08:33:51.532-08:002019-02-26T08:33:51.532-08:00@Manuel,
I use PDS exclusively on all machines I o...@Manuel,<br />I use PDS exclusively on all machines I own (and not).<br />I have placed some tweaks in /etc/rc.local to be executed every time the computer starts; that includes yield_type as well.<br />BR, EduardoAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-60058867742259125132019-02-26T07:38:55.619-08:002019-02-26T07:38:55.619-08:00@sysitos:
I always call a script to change openSUS...@sysitos:<br />I always call a script to change openSUSE&#39;s defaults once my desktop is up. I know, that&#39;s way too old-fashioned. But it leaves me in charge.<br />Maybe you can place an appropriate script into the systemd folders and have it called during bootup?<br />Unfortunately, I&#39;m too inexperienced with this.<br /><br />BR, ManuelAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-58210438608109650162019-02-26T07:14:34.029-08:002019-02-26T07:14:34.029-08:00Hi Alfred,
many thanks for the detailed explanati...Hi Alfred,<br /><br />many thanks for the detailed explanation; it was time for me to google it (I&#39;m not a programmer).<br />But what does it mean if all yield_types lead to an error? With the new pf-kernel (your bug fix already included), the error now triggers with yield_type=0 too. So I only have yield_type=2 as a workaround; I just need to check how to set it during boot or in the source code. But if your new-concept scheduler works differently, maybe the error doesn&#39;t trigger there, so don&#39;t invest too much time in it. We will (hopefully) have Linux 5.0 next week ;)<br /><br />Regards sysitosAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-40467394224071865132019-02-26T07:12:34.939-08:002019-02-26T07:12:34.939-08:00@Eduardo:
Thank you for adding this info !
Severa...@Eduardo:<br />Thank you for adding this info!<br />Several years ago I used an nvidia gfx card, where sched_yield = 2 was the only way to operate it properly over a longer time. But the code has changed so much in between that I won&#39;t pinpoint any former scheduler/driver.<br /><br />I assume that you use the normal kernel &amp; X11 drivers for your Intel gfx at the moment, right?<br /><br />BR, ManuelAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-27073586699769981212019-02-26T06:53:21.414-08:002019-02-26T06:53:21.414-08:00I have avoided weird behavior by setting the value...I have avoided weird behavior by setting the value to 2, mostly related to the Intel graphics driver, if I remember correctly.<br />BR, EduardoAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-26892238810428873212019-02-26T02:43:56.407-08:002019-02-26T02:43:56.407-08:00@Alfred:
If one were to use a non-default sched_y...@Alfred:<br />If one were to use a non-default sched_yield value as a workaround, like sysitos, which one would you suggest/recommend?<br />Do they have different impacts that you know of?<br /><br />Best regards,<br />ManuelAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-13890471365852444042019-02-26T00:16:45.442-08:002019-02-26T00:16:45.442-08:00@sysitos
Thanks for this further testing. Let me ...@sysitos<br />Thanks for this further testing. Let me explain it this way: sched_yield() is an &quot;evil&quot; system call, which gives up the current task&#39;s run time to let other tasks in the system run and make progress on the job. Nowadays there are many ways to do IPC, so the current task can wait on something until other tasks get CPU time, finish the job, and notify it. But sched_yield() is legacy and is still used.<br />It&#39;s &quot;evil&quot; because it is not reliable; it depends on the scheduler how the yielded task is handled and which other tasks get to run. CFS uses a skip flag in the task structure and BFS/MuQSS/VRQ/PDS use yield types; all are different in implementation, but none is guaranteed (IMO). So an application using sched_yield() may behave differently under different schedulers/yield types.<br /><br />Back to PDS: I have checked 51d8f8b86d81 (HEAD, refs/bisect/bad) pds: Rework time_slice_expired(), and the code change is correct and as expected. But it fails under some yield types for your application&#39;s sched_yield() usage. It still sounds acceptable to me, as another yield type can work around it.<br />Maybe we should introduce a more reliable way to handle yield in the scheduler, but I believe it&#39;s too late for PDS; thinking about it for the new incoming scheduler, it will still be a low-priority item. To be honest, if I could control user-land usage, I&#39;d eliminate the sched_yield() system call :)Alfred Chenhttps://www.blogger.com/profile/03164306846702841944noreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-57382342243381502122019-02-25T11:52:13.968-08:002019-02-25T11:52:13.968-08:00Hi Alfred,
so I bisected the whole 4.19.y-pds tr...Hi Alfred,<br /><br />so I bisected the whole 4.19.y-pds tree (always with your fix patched in) and here are my (shortened) results:<br /><br />51d8f8b86d81 (HEAD, refs/bisect/bad) pds: Rework time_slice_expired()<br />a473f87a3bd1 (refs/bisect/good-a473f87a3bd13ca95b3838108aa8f3a2f7e0f8e6) pds: Fix cpu hot-plug Oops.<br />55fdf19c03c1 (refs/bisect/good-55fdf19c03c121144717c95e9b0b177cf1cb883b) pds: [Sync]<br />c377a2a8bf25 (refs/bisect/good-c377a2a8bf25e30707083156befda486b0e202b8) pds: Remove cpumask_weight() in best_mask_cpu().<br />770c3b622528 (refs/bisect/good-770c3b6225288fb308631c3a1ede419bbe2d735a) Tag PDS 0.99b<br /><br />I hope this helps.<br /><br />Regards sysitosAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-23325011559498493592019-02-25T05:20:43.165-08:002019-02-25T05:20:43.165-08:00Hi Alfred,
I can&#39;t agree with you in this ca...Hi Alfred,<br /><br />I can&#39;t agree with you in this case. Yes, there are times when the scheduler triggers errors produced within other applications. But that doesn&#39;t seem to be the case here. That&#39;s why I tested your patch not only on the newest 4.20 git but also on older ones; here are the results:<br />I couldn&#39;t trigger the error with kernel 4.18 and PDS 0.99a (or rather, your last commit for the branch linux-4.18.y-pds). Everything runs fine. The same is true for branch linux-4.19.y-pds and PDS 0.99b, commit 770c3b622528. No problems at all. But there are problems with your last commit on this branch; the error triggers instantly. I have not tested other commits yet.<br /><br />Btw, no error with CFS and MuQSS with all yields.<br /><br />Regards sysitosAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-51598662167839709642019-02-24T18:32:30.025-08:002019-02-24T18:32:30.025-08:00@sysitos
Sorry for the late reply during the weeke...@sysitos<br />Sorry for the late reply during the weekend. Based on your testing, I believe the issue is caused by the sched_yield() usage in user-land code together with a bug in the PDS code, and https://gitlab.com/alfredchen/linux-pds/commit/2fab3ad028e396a9b0de760425052a2ab1444936 is the proper code fix in PDS. Adjusting the yield type would be the workaround for users of affected applications.<br /><br />As for rr_interval, it was changed to 4ms some time ago, and changing this value is not encouraged.Alfred Chenhttps://www.blogger.com/profile/03164306846702841944noreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-56638732226745042092019-02-22T04:40:14.568-08:002019-02-22T04:40:14.568-08:00Hi Alfred,
short: your patch helped a lot, but th...Hi Alfred,<br /><br />short: your patch helped a lot, but the problem still persists.<br /><br />long: I applied your patch on top of the pf-kernel and then additionally on top of your PDS git tree for 4.20 to exclude other sources of trouble. The results are the same. It&#39;s way better than without the patch; now there is usually only 1 (or 0) hung imap sync process, whereas before it was 2-3. But now yield has an influence, which imho you already had in mind. So far I have triggered the error only with yield_type=1. 0 and 2 run fine, without the error (with no influence of rr_interval at all; I tested different values here). Btw, is the new default rr_interval=4? Wasn&#39;t it 6 some time ago?<br /><br />Regards sysitosAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-2963790426029213933.post-67241486606512028642019-02-21T16:18:29.987-08:002019-02-21T16:18:29.987-08:00@sysitos
Would you please send me an email? I&#39;d...@sysitos<br />Would you please send me an email? I&#39;d like to prepare a patch for your debugging.<br />The new scheduler is based on the PDS code base, so most likely it will have the same issue.Alfred Chenhttps://www.blogger.com/profile/03164306846702841944noreply@blogger.com