OSR will be undertaking some major network upgrades over the next several weeks.
We’re sorry to say that you should expect sporadic outages and interruptions in access to OSRONLINE and the List Server.
We apologize in advance for the annoyance, but when things settle back down, we should have about 7 times the bandwidth available for supporting the community.
>… when things settle back down, we should have about 7 times the bandwidth
available for supporting the community.
> Wish us luck,
Well, certainly I wish you all the luck in the world, but I am afraid you are going to be out of luck anyway until you decide to replace your entire software platform…
> Even nginx, being architecturally a remake of w2k pre-http-sys IIS, is slower than IIS.
Normally you would say something along the “Windows fanboy ABC/XYZ…/etc” lines under these circumstances, but, assuming that you are speaking about NGINX on Windows, there is a good chance that you are, indeed, right. As follows from the link that I provided in my previous post, this configuration has quite a few “known issues”…
> these circumstances, but, assuming that you are speaking about NGINX on Windows
No, I have seen lots of benchmarks (just Google for them) showing modern IIS to be the world’s fastest and most scalable webserver.
Yes, on static content. The benchmarks did not cover things like mod_php or ASP.NET.
Yes, nginx is the fastest UNIX webserver (according to the same benchmarks), at least among the popular ones. Probably some khttpd clone is even faster.
And yes, nginx is, by its architectural ideas, more or less a remake of IIS’s ISAPI architecture, which is described on MSDN: free threading, etc.
“Linux is faster than Windows” is a prejudice, and often a myth spread by fanboys from Slashdot. Everything depends on the task.
Yes, building my code with CMake+ninja+native gcc on Linux (ext4) is TIMES faster than building the same with CMake+ninja+CL from Visual Studio on Windows (NTFS). But sometimes Windows is much faster.
But anyway, this is a moot point. Surely Peter has some app server software, like Lyris, which he does not want to move away from, and in such a case the OS+webserver is chosen to make the app’s life better, not vice versa.
> “Linux is faster than Windows” is a prejudice and often a myth by fanboys from
> Slashdot. Everything depends on the task.
Well, code like “while(1){}” is, indeed, going to show the same level of “performance” on both OSes.
Concerning some more useful code… well, the Linux kernel is, indeed, MUCH faster than that of NT (and, as a matter of fact, than that of any other UNIX flavour). This is the direct consequence of Linus’s preoccupation with speed, which he would not sacrifice for anything, including an elegant OS design with clear separation of different layers. Therefore, the real question here is whether “faster” necessarily implies “better in absolutely all respects”. Concerning “Linux fanboys”, they, indeed, don’t seem to bother themselves with this question. If you want to see what such “argumentation” may look like, you can, for example, check the link below and read the comments made by some posters (like, for example, Pawlerson).
> Concerning some more useful code… well, the Linux kernel is, indeed, MUCH faster than that of NT
> (and, as a matter of fact, than that of any other UNIX flavour). This is the direct consequence of
> Linus’s preoccupation with speed, which he would not sacrifice for anything, including an elegant OS
> design with clear separation of different layers.
Rather useless words.
For instance, to make a fast VM subsystem, you need not only to be “preoccupied with speed” (the speed of the assembly code, I believe? That is what Linus is preoccupied with), but also to be able to design good algorithms and policies, like page replacement.
If you have moronic VM policies implemented in good, “preoccupied with speed”, locally optimized code, you will still lose.
So, elegant design is sometimes crucially important.
Now back to Linux. Since the Linux community, with the help of corporations like IBM (and RedHat is rather rich too), can afford to rewrite a whole OS subsystem from scratch if they consider it beneficial, I don’t think Linux is weak.
They simply have more manpower to do such rewrites: more than MS’s core OS team, and by far more than Oracle Solaris or FreeBSD has.
As we will see shortly, the above statement is just yet another formulation of the “Liar’s Paradox”…
First of all, algorithms and policies fall into the “implementation” class rather than the “design” one, which is mainly about interfaces, layers and communication between them. These two apply to totally different areas: the former is all about run-time efficiency and, hence, may be (in)efficient, smart/dumb, etc., while the latter is about code maintenance, portability and extensibility, and, hence, may be elegant, clumsy or downright ugly. Therefore, you are mixing apples and oranges.
It is sometimes possible to sacrifice good design for certain assembly-level performance improvements, resulting from, for example, avoidance of extra function calls or minimisation of CPU cache misses. Although such an improvement per se may seem rather marginal, it may result in a significant overall system performance enhancement if the code in question executes frequently (and quite a few kernel-mode code paths, indeed, do). This is what Linus (apart from other things) is so concerned about.
You are correct in your observation that a poor choice of algorithms and policies may negate any performance improvement resulting from code optimisation. However, this is not what I am speaking about.
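To make the hot-path argument concrete, here is a toy Python sketch (purely illustrative, nothing from any actual kernel): the two loops do identical arithmetic, but one pays an extra function call per iteration, and in a tight loop that seemingly marginal per-call cost becomes clearly measurable.

```python
import timeit

def add(a, b):
    # Trivial helper: the arithmetic is identical to the inlined loop,
    # but each use costs one extra function call.
    return a + b

def hot_loop_with_calls(n):
    s = 0
    for i in range(n):
        s = add(s, i)
    return s

def hot_loop_inlined(n):
    s = 0
    for i in range(n):
        s = s + i          # same work, no call overhead
    return s

n = 200_000
t_call = timeit.timeit(lambda: hot_loop_with_calls(n), number=20)
t_inline = timeit.timeit(lambda: hot_loop_inlined(n), number=20)
print(f"with calls: {t_call:.3f}s  inlined: {t_inline:.3f}s")
```

On CPython the call-heavy version is typically noticeably slower even though each individual call is cheap in isolation; for a kernel-mode path executed millions of times per second, the same arithmetic of "small cost times huge frequency" applies.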
Concerning your particular example, it is, probably, not a very good one, because there is no universally “good” or “bad” replacement policy or algorithm: everything depends on the particular access pattern. For example, the LRU approach may work quite well for random access patterns… until you do a long sequential scan that may trash your cache entirely. In the latter case MRU would be the best option, especially if you do such scans repeatedly. There are also replacement algorithms like ARC, CAR or LIRS that try to combine the ideas of different approaches. IMHO, ARC is probably the most interesting one (https://en.wikipedia.org/wiki/Adaptive_replacement_cache).
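The LRU-versus-sequential-scan behaviour is easy to demonstrate with a toy simulation. This is a hypothetical sketch (a Python OrderedDict standing in for a real page cache, not how any kernel implements it): repeated sequential scans over a working set slightly larger than the cache leave LRU with zero hits after the cold start, while MRU keeps a stable subset resident.

```python
from collections import OrderedDict

def simulate(policy, capacity, accesses):
    """Count cache hits under a simple replacement policy: 'lru' or 'mru'."""
    cache = OrderedDict()
    hits = 0
    for key in accesses:
        if key in cache:
            hits += 1
            cache.move_to_end(key)            # mark as most recently used
        else:
            if len(cache) >= capacity:
                if policy == "lru":
                    cache.popitem(last=False)  # evict least recently used
                else:
                    cache.popitem(last=True)   # evict most recently used
            cache[key] = True
    return hits

# 5 repeated sequential scans over 12 keys, with room for only 8:
# LRU evicts each entry just before it is needed again.
scan = list(range(12)) * 5
print("LRU hits:", simulate("lru", 8, scan))   # 0 hits
print("MRU hits:", simulate("mru", 8, scan))
```

The asymmetry is exactly the point made above: neither policy is universally better, since reversing the access pattern (true random access) reverses the verdict.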
> Now back to Linux. Since the Linux community, with the help of corporations like IBM
> (and RedHat is rather rich too), can afford to rewrite a whole OS subsystem from
> scratch if they consider it beneficial, I don’t think Linux is weak.
> They simply have more manpower to do such rewrites: more than MS’s core OS
> team, and by far more than Oracle Solaris or FreeBSD has.
Now consider rewriting some subsystem under, say, FreeBSD. Under which of the two do you think your task would require upsetting more existing code? Don’t forget that the more code you upset, the higher the probability of introducing new bugs. This is why a good (IMHO) choice of architecture is one that allows you to isolate your changes from the rest of the system as much as possible, and to confine them to as few files as possible. The best way of achieving this goal is to separate external interfaces from internal implementation details as much as possible, even if it requires some extra calls…
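That last point can be sketched in a few lines of Python. The names below (EvictionPolicy, PageCache, FIFOPolicy) are made up purely for illustration: callers depend only on the cache's public get(), so the replacement policy behind it can be rewritten from scratch, confined to one class, without upsetting any caller code.

```python
from abc import ABC, abstractmethod

class EvictionPolicy(ABC):
    """Narrow internal interface: the cache sees only these two calls."""
    @abstractmethod
    def touched(self, key): ...
    @abstractmethod
    def victim(self): ...

class FIFOPolicy(EvictionPolicy):
    """One possible implementation; replacing it touches only this class."""
    def __init__(self):
        self._order = []
    def touched(self, key):
        if key not in self._order:
            self._order.append(key)
    def victim(self):
        return self._order.pop(0)

class PageCache:
    """Public surface stays fixed while the policy evolves freely."""
    def __init__(self, capacity, policy):
        self._capacity, self._policy, self._pages = capacity, policy, {}
    def get(self, key, load):
        if key not in self._pages:
            if len(self._pages) >= self._capacity:
                del self._pages[self._policy.victim()]
            self._pages[key] = load(key)   # fault the page in
        self._policy.touched(key)
        return self._pages[key]

cache = PageCache(2, FIFOPolicy())
print(cache.get("a", str.upper))   # A (loaded on first access)
```

The extra indirection (one virtual call into the policy) is precisely the kind of cost the design-versus-speed trade-off above is about.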