Saturday, September 30, 2017
PDS 0.98a release
PDS 0.98a is released with the following changes:
1. Fix a calculation mistake in task_deadline_level() introduced in the previous release (a rough sketch of this kind of calculation follows after this list).
2. Reduce policy fairness balance overhead now that the task_deadline_level() calculation is corrected.
3. Refine the policy fairness balance.
4. For 32-bit kernels, remove a global lock access by only preempting run queues at a lower scheduling level. (32-bit Raspberry Pi should get some love.)
5. Extend NORMAL policy by one more deadline level.
6. Fix the reverted task policy value.
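As a purely illustrative aside, here is a minimal C sketch of how a task's virtual deadline might be bucketed into a small number of scheduling levels. The function name, constants and formula below are assumptions for illustration only and do not reproduce the actual task_deadline_level() code.

    #define DEADLINE_LEVELS 8   /* assumption: number of deadline levels */

    /* Hypothetical: bucket the time remaining until a task's virtual
     * deadline into DEADLINE_LEVELS slots (window > 0); an already
     * expired deadline maps to the most urgent level 0. */
    static inline int sketch_deadline_level(unsigned long long deadline,
                                            unsigned long long now,
                                            unsigned long long window)
    {
            unsigned long long delta, level;

            if (deadline <= now)
                    return 0;
            delta = deadline - now;
            level = delta * DEADLINE_LEVELS / window;
            return level >= DEADLINE_LEVELS ? DEADLINE_LEVELS - 1 : (int)level;
    }

A subtle mistake in this kind of bucketing quietly puts tasks at the wrong level, which is the sort of issue item 1 above refers to.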
This is a bug-fix release plus some enhancements. Compared to the previous release, there is some performance regression in exchange for improved interactivity. Now, the whole design should work as expected.
Enjoy PDS 0.98a for the v4.13 kernel. :)
Code is available at
https://bitbucket.org/alfredchen/linux-gc/commits/branch/linux-4.13.y-vrq
and also
https://github.com/cchalpha/linux-gc/commits/linux-4.13.y-vrq
An all-in-one patch is available too.
Wednesday, September 20, 2017
PDS 0.98 release
PDS 0.98 is released with the following changes:
1. Renamed. As planned, the scheduler is now properly described as the Priority and Deadline based Skiplist multiple queue Scheduler, PDS-mq or just PDS for short. Documentation/scheduler/sched-PDS-mq.txt has been added to document PDS, but it is not yet finished.
2. Fix the UP compilation issue in the previous release, reported by jwh7.
3. Re-queue a task when its priority/deadline changes, which fixes tasks being out of order in the run queue and triggering a WARNING in the kernel log.
4. Minor code improvements.
5. Skiplist randomization for task burst forking; this helps keep skiplist levels randomized when tasks are forked within a very short time frame (a generic sketch follows this list).
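For readers unfamiliar with skiplists: node levels are normally drawn from a geometric distribution, and if the randomness degenerates across a burst of back-to-back forks the list loses its balance. The following is a generic, hypothetical C sketch of per-insert level randomization, not the PDS implementation; the constant and the use of rand_r() are assumptions for illustration.

    #include <stdlib.h>

    #define SKIPLIST_MAX_LEVEL 16   /* assumption for illustration */

    /* Pick a node level with probability 1/2 per extra level, using
     * per-caller random state so a burst of inserts in a short time
     * frame still spreads nodes across levels. */
    static int random_skiplist_level(unsigned int *seed)
    {
            int level = 1;

            while (level < SKIPLIST_MAX_LEVEL && (rand_r(seed) & 1))
                    level++;
            return level;
    }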
This release is mainly for bug fixes and the rename. There are now over 200 commits for PDS and the scheduler is considered stable in this kernel release, so in the next kernel release the plan is to squash the commits into one or just a few for easier maintenance.
Enjoy the brand new (newly named) PDS 0.98 for the v4.13 kernel. :)
Code is available at
https://bitbucket.org/alfredchen/linux-gc/commits/branch/linux-4.13.y-vrq
and also
https://github.com/cchalpha/linux-gc/commits/linux-4.13.y-vrq
(Yes, the old branch name is still in use; it will be renamed for 4.14.)
An all-in-one patch is available too.
PS:
A task deadline level calculation mistake was found under a debug load today; please consider picking up the fix at https://bitbucket.org/alfredchen/linux-gc/commits/543de0b70aed7785c226ad65a39366c80f15711b
Friday, September 8, 2017
VRQ 0.97b release
VRQ 0.97b is released with the following changes:
1. Select a random rq when no preemptible rq is available. This helps remove a bottleneck as the number of CPUs increases.
2. Add an rr_interval kernel parameter. An "rr_interval=" kernel parameter is now available, but changing the default rr_interval setting (6ms) is not recommended.
3. Introduce sched_prio_to_deadline[NICE_WIDTH] to simplify deadline calculation (see the sketch after this list).
4. Extend NORMAL policy to 7 levels. This change helps reduce rq lookup cost as the number of CPUs increases, and it also brings deadline fairness to NORMAL policy tasks at different nice levels. For example, when two nice-19 background tasks run alongside two nice-0 foreground tasks on a two-CPU system, the two background tasks may both end up on the same CPU while the foreground tasks occupy the other. In the previous release, as a workaround for this use case, the background tasks had to use the IDLE policy to trigger the policy fairness balance functionality. With this change, there are 7 NORMAL policy levels based on a task's deadline, so the nice-19 background tasks (which likely have large virtual deadlines) can also trigger the policy fairness balance functionality.
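To make item 3 concrete, here is a hypothetical sketch of a nice-to-deadline-offset lookup table in C; the table contents, the scaling rule and all names below are assumptions for illustration and do not reproduce the actual sched_prio_to_deadline values.

    #define NICE_WIDTH 40   /* nice levels -20..19 */

    /* Hypothetical table: each nice step gets a proportionally larger
     * deadline offset, so higher-nice tasks receive later virtual
     * deadlines. The real PDS values differ. */
    static unsigned long long sketch_prio_to_deadline[NICE_WIDTH];

    static void sketch_init_deadline_table(unsigned long long base_ns)
    {
            int i;

            for (i = 0; i < NICE_WIDTH; i++)
                    sketch_prio_to_deadline[i] = base_ns * (i + 1);
    }

    /* Deadline = now + offset indexed by the task's nice level (0..39). */
    static unsigned long long sketch_task_deadline(unsigned long long now,
                                                   int nice_idx)
    {
            return now + sketch_prio_to_deadline[nice_idx];
    }

As for item 2, the parameter would presumably be given on the kernel command line, e.g. rr_interval=6 for the 6ms default, though as noted above changing it is not suggested.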
The main features for the 0.97 series are now all in place for the 4.13 kernel; this feature code has been running on my machines for 2+ weeks, so it is considered stable. In the next release, the major changes will be renaming and documentation.
Enjoy VRQ 0.97b for the v4.13 kernel. :)
Code is available at
https://bitbucket.org/alfredchen/linux-gc/commits/branch/linux-4.13.y-vrq
and also
https://github.com/cchalpha/linux-gc/commits/linux-4.13.y-vrq
An all-in-one patch is available too.
Monday, September 4, 2017
VRQ 0.97a released
VRQ 0.97a is released with the following changes:
1. Sync up with 4.13 mainline scheduler code changes.
2. Fix a CPU preempt race in task_preemptiable_rq(), reported by Eduardo while playing D3 under Wine.
This is a sync-up and minor bug-fix release for the 4.13 kernel. If all goes well, a new feature will land next week.
Enjoy VRQ 0.97a for the v4.13 kernel. :)
Code is available at
https://bitbucket.org/alfredchen/linux-gc/commits/branch/linux-4.13.y-vrq
and also
https://github.com/cchalpha/linux-gc/commits/linux-4.13.y-vrq
An all-in-one patch is available too.