openFATE - openSUSE feature tracking > #311177


Re-Evaluate the Kernel's Cache Buffers and All Associated Caching Processes

Feature state

openSUSE Distribution: Rejected

Description

I am going to try to suggest the unthinkable - so please bear with me. Over a three-year period I have watched the utilisation of physical RAM on two x64 PCs. One has 8 GiB of RAM, the other 4 GiB. I'll just copy and paste the information to make it easier.

*************************************************

Total memory (RAM): 7.8 GiB Free memory: 4.5 GiB (+ 2.8 GiB Caches)

Free swap: 42.6 GiB

*************************************************

Total memory (RAM): 4.9 GiB Free memory: 1.6 GiB (+ 2.3 GiB Caches)

Free swap: 66.5 GiB

**************************************************
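For reference, figures like those above come straight from the Linux kernel's /proc/meminfo. A minimal sketch of pulling out the same three totals yourself, assuming a standard Linux /proc (values in meminfo are reported in kB):

```shell
# Print total, free, and page-cache memory from /proc/meminfo,
# converted from kB to GiB. "Cached" is the page cache this
# feature request is discussing.
awk '/^(MemTotal|MemFree|Cached):/ { printf "%-9s %6.1f GiB\n", $1, $2 / 1048576 }' /proc/meminfo
```

The "Caches" figure shown by desktop tools is derived from the same source; exact labels vary by tool.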

Firstly, I am never going to suggest that any change be made to ANY caching algorithms, or to other O/S dependencies, that would require the utilisation of VM. Virtual memory comes into play with its own cost of slowing everything down.

The notion of needing more disk I/Os to stop a hungry application from issuing an "Insufficient Memory" warning is out of the question, I believe. Yes, the additional disk I/Os do allow any application to run, but they come at the expense of competing with the disk I/Os used for retrieving and storing application executable code, and in essence slow everything down! VM is never a desirable situation, and competing for disk I/O slows everything down - it's not rocket science.

What I see over time on the 4 GiB machine is that physical RAM is better utilised by the caching algorithms, but they nonetheless leave an enormous pool of physical RAM available, even under the highest application load. What I see over time on the 8 GiB machine is a huge pool of available RAM that sits there and never gets utilised.

The 8 GiB example may as well only have 4 GiB, as the extra physical RAM goes to waste; it is never used. This pool of available physical RAM is totally wasted, I believe, and offers no performance-enhancing characteristics, as the abundant pool remains untouched by any caching buffers and no dynamic allocation of it takes place.

Rather than reinventing the wheel, I have the following suggestion, which may very well kill this request completely, or make it far more difficult to reinvent the wheel. I have had a very long period of exposure to NetWare file servers' dynamic allocation of physical RAM, and its complexities have been well honed over many years.

If we can just use the dynamic allocation of caching, Turbo FAT, and TTS services, we are almost 90% of the way there. I do agree that the Linux equivalents of the TTS services are already very strong and possibly need no attention.

The dynamic utilisation of ALL the physical RAM by any Linux kernel could well do with a look at how we dynamically respond when abundant physical RAM is present.

Your comments are most welcome, as I am starkly aware that this feature is a very large and difficult task from an analyst's point of view, let alone from the programmers' point of view regarding acceptance of changes to huge amounts of code.

However, I am prepared to run it up the flagpole and see who salutes... :-) Lastly, I hope my CRs stay fast so as not to ruin the layout.

User benefit:

The user benefit is as big as the Linux kernel itself - the better use we can get out of the pool of available physical RAM, the better. My testing over the past three years shows that there is NO tangible benefit above 4 GiB of RAM!

Without any such gain, the current caching algorithms (I use the term "cache" to encompass the dynamic use of ALL caching processes: Turbo FAT, TTS, memory management, more beneficial use of excess pool resources, etc.) fail to be used to benefit the speed of any PC that has upwards of 4 GiB of physical RAM.

I am sorry I cannot express this feature in simpler terms - by its very nature, this feature is very complex!

Discussion


S. C. wrote: (7 years ago)

Please reject and cancel this request - YaST will never change at a user's request

S. C. wrote: (7 years ago)

OOOPS! Please ignore my comment above - I was cutting and pasting into OTHER feature requests that were no longer valid - sorry

S. C. wrote: (7 years ago)

QA - Please close/cancel this request on 1 May 2011 if no other action is taken and no comments are made by anyone other than myself

S. C. wrote: (7 years ago)

2011-02-04, 09:32:01

S. C. wrote: (7 years ago)

Yes, I am aware that I am dancing with the Devil on this feature request - but as in the above -

I am prepared to run it up the flagpole and see who salutes, despite the enormity of the feature request... :-)

(a local expression, but I know everyone can understand it)

R. D. wrote: (7 years ago)

I find the explanation of this feature unclear. Scott, are you saying that you have more physical RAM than is used by the kernel and applications, and that you believe that virtual memory is slow?
Test disabling the swap space, so that only the 8 GB of physical memory is available, and see whether you find any performance improvement, or low-memory conditions in use.
Considering the installed size of the system, it is plausible to me that every demand-paged memory page ever read on that fat but probably otherwise typical desktop system is indeed retained in memory. 2.8 GiB is a lot of cached data to have read off disk (I often install a test default desktop into a 10 GB system partition with room to spare).
On some distros, like Ubuntu, a tmpfs filesystem is created and populated with the most heavily used system libraries on boot. It would also be simple for an end user to configure 2 GB of RAM to be used for /tmp, for instance.
Personally, I suspect that attempts to do speculative reads from disk will be doomed to the same problems seen in MS's Vista OS, where excessive disk I/O reduced perceived performance. Desktop applications similarly suffer from bloat and poorer performance if they increase memory consumption wastefully, because of the relatively slow access times of RAM compared to the instruction cycle time of a modern super-scalar multi-GHz CPU.

If a system is over-resourced for the workload it is under, artificially making work will not improve the situation; in fact it will do the opposite.
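A sketch of the /tmp-in-RAM idea mentioned above, as a single /etc/fstab entry (the 2 GB cap is illustrative; tmpfs only consumes pages as files are actually written, and those pages remain swappable):

```
# /etc/fstab - mount /tmp as a RAM-backed tmpfs, capped at 2 GB
tmpfs  /tmp  tmpfs  size=2G,mode=1777  0  0
```

The sticky world-writable mode (1777) matches the conventional permissions for /tmp.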

S. C. wrote: (6 years ago)

NO! NO! Not at all - at all costs we need to restrict the use of VM to the absolute last resort - see the following, which I wrote for a tech magazine many years ago, about Microsoft's reliance on VM. We never want to use VM until it is the last resort:
http://www.techrepublic.com/forum/discussions/102-203004-2112103
My suggestion is that we re-examine the use of physical RAM for every permutation of caching - from the time period for flushing dirty cache buffers to dynamically allocating more cache buffers as I/O requests increase.
The example I gave shows that there is 8 GiB of physical RAM, yet only a fraction is used for ancillary performance after the kernel has loaded.
Under testing you can issue ever-increasing I/O requests, but there seems to be no dynamic allocation of physical RAM to meet the increased I/O numbers.
In fact, the only way I can get my x64 PC with 8 GiB of RAM to use anything above the example is to have multiple high-end graphics application windows open, converting images to another format.

In a nutshell, after we load the kernel, the amount of physical RAM we use is nominal and only grows when loading more hungry applications. Despite ever-increasing I/O requests there appears to be NO dynamic allocation of cache or turbo file allocation tables. It seems that we are wasting abundantly available physical RAM that could enhance the speed of application performance! From testing, the only way I can use up more of the available physical RAM is to load a RAM-hungry graphics application and use the processes within that application!
I cannot see that the generally available physical RAM pool is ever used to support and speed up the pool of processes not specific to any particular application per se. Your serve... :-)
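For what it is worth, knobs of roughly the kind described above do already exist on Linux: the kernel exposes its writeback and reclaim behaviour as sysctls under /proc/sys/vm/. A read-only look at the relevant ones (the names are standard Linux sysctls; defaults vary by kernel version):

```shell
# Inspect the kernel's writeback and reclaim tunables (read-only).
# dirty_writeback_centisecs: how often the flusher threads wake to write
#                            dirty page-cache pages back to disk
# dirty_ratio:               % of RAM that may be dirty before writers block
# swappiness:                the kernel's eagerness to swap vs. drop caches
# vfs_cache_pressure:        reclaim pressure on the dentry/inode caches
for knob in dirty_writeback_centisecs dirty_ratio swappiness vfs_cache_pressure; do
    printf '%-25s %s\n' "$knob" "$(cat /proc/sys/vm/$knob)"
done
```

Whether their dynamic behaviour is aggressive enough on large-memory machines is exactly the question this feature raises.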

S. C. wrote: (6 years ago)

Robert, I am known a bit to you over there in .DE - yes, I am passionate about software quality, the sheer number of software bugs that a year-5 student should not make, and global social ignorance - but if anything, I DO know what I am talking about.
I am just amazed that this has been rejected, as NO reference was ever made to the use of VM. In 2006 I wrote a scathing technical article about the use of VM, speculative or otherwise.
http://www.techrepublic.com/forum/discussions/102-203004-2112103

Please re-evaluate the status, as I have NEVER, in the above, suggested that we use VM except as the very last resort, or for ANY speculative reason. This feature is very clear and comments only on physical RAM - I think the above needs to be read carefully. To reject a feature without understanding what has been written seems premature and without foundation.
This feature was initiated to start a dialogue, hence the expression:
Yes, I am aware that I am dancing with the Devil on this feature request - but as in the above -
"I am prepared to run it up the flagpole and see who salutes, despite the enormity of the feature request... :-)"

Can we leave it open for another 3 months, now that I have clarified the first entry, which I thought was self-explanatory... TA :-) It's all good.

S. C. wrote: (6 years ago)

I have corrected the feature date... sorry for this error.
...Just a snippet of history...
When I was at uni, the PC had not even been thought of, nor had Intel's silicon chip, for that matter... The concept of using an area of the hard disk to simulate physical RAM was first thought of back then.
Physical RAM was the second most expensive single item, above the HD drive units that looked like small suitcases and had removable multi-disk packs holding a whopping 20 MB, and just below the components that made up the CPU.
Its initial concept was intended to have a functional, usable life only until RAM units as we know them, SIMMs or SIPPs, became a reality and cost-effective.
Sadly, M.S. still uses it today, although it was designed solely as an alternative to an application sending out an "Out of Memory" error.
VM was only ever intended to prevent that error, but sadly M.S. needed to retain VM, as the early costs of physical RAM were still outrageously expensive and the O/S was, and still is, hopeless at directly addressing RAM above 640K.
M.S. still has outrageously poor memory management, but this will remain until they have the courage to release a totally new file system, new applications, and an O/S that is not backwardly compatible.
When I was lecturing in 1995, I made the hugely controversial statement that
"Nothing on this earth will ever make M.S. Windows fast"... guess what, I am still correct!
I also know that every single person I lectured to on the use of RAM, the design of its registers, and how different O/Ses used it in modern-day terms will clearly remember this.
OS/2 and the i386 CPU were supposed to be the answer! OS/2 died long ago, and today, as you know, Windows still needs the help of physical-RAM software jugglers to make use of any amount of RAM above 1 MB.
There are still reasons why M.S. needs to retain the principles of VM, which were supposed to have been eliminated when Intel created the i386 CPU and we could, for the first time, get the O/S to even see that physical RAM existed above 1 MB.

But we do have an answer to Windows... it's called Unix, or in PC terms, Linux... sorry about the passionate response above.

S. C. wrote: (6 years ago)

QA - Please close this feature request

Last change: 6 years ago
Voting
Score: -1
  • Negative: 2
  • Neutral: 0
  • Positive: 1