An email about squid's performance bottleneck with small objects

    Tech · 2022-05-11

    Re: the performance of squid

    This morning I came across a mailing-list thread about squid performance that touched on many problems that have long puzzled me. After reading it, I felt much relieved.

    Part of the thread is excerpted below:

    Hi, I use squid as a web-accelerating cache proxy in front of a backend MS-IIS-6. When the requests per second reach 2000, squid eats up to 99% of the CPU. The files are small images, CSS, etc. I feel the regex computing or other operations may be responsible for the drop in squid's performance. Is there some other way to improve squid's efficiency?

    My hardware: 3.0 GHz Intel CPU, 4 GB memory, 100 GB SCSI disk.

    From the body of the email, we can see that this user's squid hit a performance bottleneck when serving small objects. When small objects are requested frequently, header processing inevitably consumes a large share of CPU time. In my view, improving performance means reducing header processing as much as possible.

    Reading Adrian Chadd's reply, he too suggests trying the reworked Squid-2 HEAD (the development branch of Squid-2). Clearly there is still plenty of room for performance improvement there. Part of Adrian Chadd's reply follows:

    I'm working on identifying and improving the performance of Squid-2 HEAD. You could give that a whirl and let me know how it goes. I'd appreciate any testing that you could give it.

    In the end, the original poster improved squid's performance by changing part of the configuration. The approach was:

    1) If requests/second is high and the files to be cached are small and numerous, the configuration items cache_mem and memory_pools should be enabled, and client_persistent_connections should be set to 'on' when you use squid-2.6.x (which uses the system's epoll);

    2) If requests/second is not too high (say 1000) and the cached data set is very large (say 100 GB), memory_pools should be turned off and cache_mem set to a lower value (say 256 MB). This lets the OS use more RAM for its own page cache and touch swap less, which greatly reduces squid's page faults, so squid will not block on disk I/O.
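    The two tunings above can be sketched as squid.conf fragments. The directive names are real squid configuration items; the values are illustrative only, not tuned for any particular host:

```conf
# Case 1: high request rate, many small objects (squid-2.6.x with epoll)
cache_mem 1024 MB                    # serve hot small objects from memory
memory_pools on                      # reuse freed memory chunks instead of returning them
client_persistent_connections on    # keep client TCP connections alive between requests

# Case 2: moderate request rate, very large cached data set
# memory_pools off                   # return freed memory to the OS
# cache_mem 256 MB                   # leave RAM to the OS page cache
```

    The two cases are mutually exclusive: uncomment one block or the other depending on the workload.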

    We can see why the first scheme succeeds: sensible use of cache_mem and memory_pools reduces the amount of per-request header processing, while persistent connections avoid overly frequent TCP connection setup and teardown. Opening and closing TCP connections both consume system resources.

    In the second case, using cache_mem and memory_pools would consume too much memory, which is clearly undesirable; the poster's approach avoids this problem. As for whether disk I/O really stops blocking, only testing can confirm that.
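    Whether the page faults actually drop can be checked directly. A minimal sketch using the procps `ps` output keywords `min_flt`/`maj_flt` (here inspecting the current shell's PID so the command runs anywhere; on a real host you would point it at squid's PID, e.g. via `pgrep squid`):

```shell
# Print minor and major page-fault counts for a process.
# maj_flt counts faults that required disk I/O -- the number the second
# tuning above tries to keep low.
ps -o min_flt=,maj_flt= -p $$
```

    Sampling this before and after the configuration change, under the same load, is a simple way to verify the claim.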

    Both solutions are well worth keeping as references.

     

     
