Traffic Smoothing (Throttle)

In 116 (Rate Limiting) we introduced a commonly used capability, rate limiting: once the rate of matching requests exceeds a threshold, additional requests are rejected. In some situations, however, we want requests above the threshold to be queued and processed rather than rejected. This capability is sometimes called throttling, or traffic smoothing, because besides protecting the service provider it can also be used to constrain the service consumer. In a mesh environment it is especially important: like a voltage regulator in a power grid, it turns an unstable input into a stable output, which is also where the name "traffic valve" comes from.

Demo Scenario

In this example we demonstrate how to use Flomesh's piped component to control the flow of HTTP requests. First we use ab to access a service served by nginx; without a piped proxy in the path we reach roughly 13,000 RPS. Then we insert a piped proxy between ab and nginx, and the RPS drops to roughly 11,000 (about 2,000 RPS of proxy overhead). Finally we add the "traffic valve" to piped, set to a steady 500 RPS; the ab results show requests stabilizing at about 500 RPS, with no failed requests.

Setting Up the Demo Environment

The demo environment consists of three parts:

  1. An nginx server with 2 CPUs and 2 GB of RAM, configured with 2 workers. It listens on port 80 and serves a static text file at /ok
  2. An ab test machine, running in a VM with 1 CPU and 1 GB of RAM. With these minimal hardware resources we verify that the results are stable and deterministic
  3. A VM running piped. Here piped runs on its own VM with 2 CPUs and 2 GB of RAM
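
Putting the three parts together, the traffic path for the proxied tests looks like this (the ab VM's own IP address is never used in the commands, so it is omitted here):

ab (1c1g VM)  -->  piped (192.168.122.10:8080, 2c2g VM)  -->  nginx (192.168.122.150:80, 2c2g VM)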

Deploying the Nginx Server

On a standard minimal install of CentOS 7, run the following commands to deploy the nginx test environment:

yum -y install epel-release
yum -y install nginx
systemctl start nginx
systemctl enable nginx
echo "1" > /usr/share/nginx/html/ok

The IP address of this VM is 192.168.122.150 (I am using KVM/libvirt/virt-manager virtualization on CentOS). Verify from inside the nginx VM:

[root@nginx nginxs]# curl http://localhost/ok
1

Deploying the ab Test Machine

We use ab for testing here rather than wrk mainly to simulate an Internet-facing environment, where short-lived connections are more common than persistent ones.

A bit of background: ab (Apache Bench) is the basic HTTP benchmarking tool that ships with Apache httpd; the requests it sends are HTTP/1.0, which uses short-lived connections. wrk is another common HTTP benchmarking tool; the difference is that wrk sends HTTP/1.1 requests over persistent (long-lived) connections. In practice most HTTP servers support both HTTP/1.0 and HTTP/1.1, i.e. both short-lived and persistent connections. On the public Internet, however, most requests use short-lived connections, mainly to avoid tying up server-side resources while waiting; inside a LAN (for example between microservices), persistent connections are more common, because on a stable network they are much more efficient than short-lived ones.
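
Once ab is installed (next step), you can observe this difference yourself: ab's -k flag enables HTTP keep-alive, which makes its connection behavior closer to wrk's. A rough comparison against the same nginx target used below:

# default: one HTTP/1.0 request per TCP connection (short connections)
ab -c 100 -n 10000 http://192.168.122.150/ok
# -k enables keep-alive, so TCP connections are reused across requests
ab -k -c 100 -n 10000 http://192.168.122.150/ok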

On a fresh minimal-install CentOS 7 VM, install ab as follows:

yum -y install httpd-tools

As a check, access the nginx server we just deployed from the ab VM:

[root@localhost ~] ab -c 100 -n 10000 http://192.168.122.150/ok
This is ApacheBench, Version 2.3 <$Revision: 1757674 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.122.150 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        nginx/1.16.1
Server Hostname:        192.168.122.150
Server Port:            80

Document Path:          /ok
Document Length:        2 bytes

Concurrency Level:      100
Time taken for tests:   0.773 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      2460000 bytes
HTML transferred:       20000 bytes
Requests per second:    12941.32 [#/sec] (mean)
Time per request:       7.727 [ms] (mean)
Time per request:       0.077 [ms] (mean, across all concurrent requests)
Transfer rate:          3108.95 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       2
Processing:     1    8   1.0      7      10
Waiting:        1    8   1.0      7      10
Total:          3    8   0.9      7      10

Percentage of the requests served within a certain time (ms)
  50%      7
  66%      8
  75%      8
  80%      9
  90%      9
  95%      9
  98%     10
  99%     10
 100%     10 (longest request)

As we can see, with ab accessing nginx directly, the RPS is 12941.32, roughly 13,000.

Deploying the piped Service

Piped is a proxy developed by the Flomesh team. It is built as a framework and can currently proxy TCP, HTTP, Dubbo, SOCKS, fixed-length TCP messages, and other common protocols. Piped is written in C++, processes traffic as a stream through a chained architecture (similar to ip chains), and is small, light on memory, light on CPU, and efficient. Its main design targets and use cases are protocol conversion and sidecar proxying. The Flomesh team currently provides two installation media: rpm packages and Docker images.

On a minimal-install CentOS 7 VM with IP address 192.168.122.10, install piped as follows:

wget http://54.92.105.113/piped/piped-0.1.0-55.el7_pl.x86_64.rpm
yum -y localinstall piped-0.1.0-55.el7_pl.x86_64.rpm
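
To confirm the package installed cleanly, you can query rpm and check that the binary is on the PATH (assuming the installed package is simply named piped):

rpm -qi piped
which piped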

Copyright and disclaimer: the piped program above was developed by, and is the copyright of, the Flomesh team (it is not open source software or freeware); it is provided here for testing. This document and the installation media may be downloaded free of charge for testing, learning, and other non-profit use. If you republish this content, please credit the source (http://flomesh.cn). The piped program may not be used for profit. The Flomesh team maintains this document for accuracy and maintains piped for stability and reliability, but the Flomesh team accepts no liability for the use of this document or of the program.

piped's configuration file uses the INI format. It defines a processing chain that operates on the byte stream, and this chain is what implements the various proxy functions. Edit /etc/piped/proxy.ini with the following content:

1 [pipeline.proxy]
2 listen = 0.0.0.0:8080
3
4 [module.upstream]
5 name = proxy
6 upstream = 192.168.122.150:80

This configuration means: listen on 0.0.0.0 port 8080, and forward TCP connections arriving on 8080 to port 80 on 192.168.122.150, i.e. the nginx service configured above.

Start piped:

piped /etc/piped/proxy.ini
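
Before running the full benchmark, a quick sanity check confirms the proxy path works end to end (the expected body is the same "1" that nginx returns directly):

# on the piped VM: confirm piped is listening on 8080
ss -lntp | grep 8080
# from the ab VM: fetch the test page through the proxy; expected output: 1
curl http://192.168.122.10:8080/ok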

Now run the full test from the ab VM:

[root@localhost:~]# ab -c 100 -n 10000 http://192.168.122.10:8080/ok
This is ApacheBench, Version 2.3 <$Revision: 1757674 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.122.10 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        nginx/1.16.1
Server Hostname:        192.168.122.10
Server Port:            8080

Document Path:          /ok
Document Length:        2 bytes

Concurrency Level:      100
Time taken for tests:   0.891 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      2460000 bytes
HTML transferred:       20000 bytes
Requests per second:    11223.11 [#/sec] (mean)
Time per request:       8.910 [ms] (mean)
Time per request:       0.089 [ms] (mean, across all concurrent requests)
Transfer rate:          2696.18 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       2
Processing:     5    9   2.2      9      22
Waiting:        4    8   1.9      8      20
Total:          5    9   2.2      9      22

Percentage of the requests served within a certain time (ms)
  50%      9
  66%     10
  75%     10
  80%     11
  90%     11
  95%     12
  98%     13
  99%     14
 100%     22 (longest request)

Compared with ab accessing nginx directly above, adding piped as a (TCP) proxy costs roughly 2,000 RPS.

In practice, the number of worker processes, CPU clock speed, and network conditions (bandwidth, latency, and so on) all affect the results, so the numbers you measure will vary.

Testing piped as an HTTP Proxy

Building on the previous test, we add HTTP protocol handling to the piped configuration. piped thus changes from a TCP proxy into an HTTP proxy. Although it still does nothing meaningful with the HTTP traffic, the newly added modules do parse the HTTP request (partially) and the HTTP response; these are the http-encode and http-decode module entries newly added to the configuration file.

Modify /etc/piped/proxy.ini so that it now looks like this (note the content added on lines 4-8):

1 [pipeline.proxy]
2 listen = 0.0.0.0:8080
3 
4 [module.http-decode]
5 name = http-request-decoder
6 
7 [module.http-encode]
8 name = http-request-encoder
9 
10 [module.7]
11 name = proxy
12 upstream = 192.168.122.150:80

Restart piped:

[root@localhost piped]# piped proxy.ini 
Loaded config file /etc/piped/proxy.ini
Mon Jan 20 22:51:58 2020 [info] Listening on 0.0.0.0:8080

Run the test from the ab VM:

[root@localhost ~] ab -c 100 -n 10000 http://192.168.122.10:8080/ok
This is ApacheBench, Version 2.3 <$Revision: 1757674 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.122.10 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        nginx/1.16.1
Server Hostname:        192.168.122.10
Server Port:            8080

Document Path:          /ok
Document Length:        2 bytes

Concurrency Level:      100
Time taken for tests:   0.996 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      2460000 bytes
HTML transferred:       20000 bytes
Requests per second:    10035.76 [#/sec] (mean)
Time per request:       9.964 [ms] (mean)
Time per request:       0.100 [ms] (mean, across all concurrent requests)
Transfer rate:          2410.93 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       1
Processing:     3   10   2.1      9      17
Waiting:        3    9   1.9      9      16
Total:          4   10   2.1     10      17

Percentage of the requests served within a certain time (ms)
  50%     10
  66%     11
  75%     11
  80%     12
  90%     13
  95%     14
  98%     15
  99%     15
 100%     17 (longest request)

In this step we can see that after adding HTTP encode/decode, piped's throughput drops from about 11,000 RPS to about 10,000 RPS.

Testing piped's Throttle Feature

On top of the HTTP proxy above, we now add the throttle feature. Modify proxy.ini so that it looks like this (note the module.tap entry newly introduced on lines 7-9; this module implements the throttle):

1 [pipeline.proxy]
2 listen = 0.0.0.0:8080
3 
4 [module.http-decode]
5 name = http-request-decoder
6 
7 [module.tap]
8 name = tap
9 limit = 500
10 
11 [module.http-encode]
12 name = http-request-encoder
13 
14 [module.7]
15 name = proxy
16 upstream = 192.168.122.150:80

Restart piped:

[root@localhost piped]# piped proxy.ini 
Loaded config file /etc/piped/proxy.ini
Mon Jan 20 23:02:33 2020 [info] Listening on 0.0.0.0:8080

Run the test on the ab VM:

[root@localhost ~]# ab -c 100 -n 10000 http://192.168.122.10:8080/ok
This is ApacheBench, Version 2.3 <$Revision: 1757674 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.122.10 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        nginx/1.16.1
Server Hostname:        192.168.122.10
Server Port:            8080

Document Path:          /ok
Document Length:        2 bytes

Concurrency Level:      100
Time taken for tests:   19.073 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      2460000 bytes
HTML transferred:       20000 bytes
Requests per second:    524.30 [#/sec] (mean)
Time per request:       190.729 [ms] (mean)
Time per request:       1.907 [ms] (mean, across all concurrent requests)
Transfer rate:          125.96 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       2
Processing:     4  191 372.5     11     975
Waiting:        4  190 372.5     10     974
Total:          4  191 372.5     11     975

Percentage of the requests served within a certain time (ms)
  50%     11
  66%     13
  75%     14
  80%     16
  90%    959
  95%    965
  98%    970
  99%    971
 100%    975 (longest request)

Here we can see that:

  1. Requests per second has dropped to 524.30, i.e. roughly 500. Note that line 9 of proxy.ini is exactly limit = 500
  2. Failed requests is 0, i.e. there are no failed requests
  3. Time per request has risen to about 190 ms, because requests beyond 500 RPS are queued rather than rejected, which raises the average time per request (with 100 concurrent clients and ~524 RPS of throughput, Little's law gives roughly 100 / 524 ≈ 0.19 s per request)

Further Verifying the Throttle Feature

Building on the previous test case, we change limit to 200, i.e. 200 requests per second; anything beyond that is queued.

proxy.ini now looks like this:

1 [pipeline.proxy]
2 listen = 0.0.0.0:8080
3 
4 [module.http-decode]
5 name = http-request-decoder
6 
7 [module.tap]
8 name = tap
9 limit = 200
10 
11 [module.http-encode]
12 name = http-request-encoder
13 
14 [module.7]
15 name = proxy
16 upstream = 192.168.122.150:80

The only difference is that on line 9, 500 has become 200.

Remember to restart the piped process so it picks up the new limit.
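
Concretely, if piped is still running in the foreground from the previous step, stop it with Ctrl+C and start it again with the updated configuration (same invocation as before):

piped /etc/piped/proxy.ini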

Run the test on the ab VM (as expected, 10,000 requests at limit = 200 should take about 10000 / 200 = 50 seconds):

[root@localhost ~]# ab -c 100 -n 10000 http://192.168.122.10:8080/ok
This is ApacheBench, Version 2.3 <$Revision: 1757674 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.122.10 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        nginx/1.16.1
Server Hostname:        192.168.122.10
Server Port:            8080

Document Path:          /ok
Document Length:        2 bytes

Concurrency Level:      100
Time taken for tests:   49.050 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      2460000 bytes
HTML transferred:       20000 bytes
Requests per second:    203.87 [#/sec] (mean)
Time per request:       490.502 [ms] (mean)
Time per request:       4.905 [ms] (mean, across all concurrent requests)
Transfer rate:          48.98 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       1
Processing:     6  490 488.6     17     996
Waiting:        5  489 488.6     15     995
Total:          6  490 488.6     18     996

Percentage of the requests served within a certain time (ms)
  50%     18
  66%    987
  75%    989
  80%    990
  90%    992
  95%    993
  98%    994
  99%    994
 100%    996 (longest request)

From the test results we can see that:

  1. RPS has dropped to 203.87
  2. Failed requests is 0, i.e. there are no failed requests
  3. Time per request has risen to about 490 ms, for the same reason as before (100 concurrent clients / 203.87 RPS ≈ 0.49 s)

Test Conclusions

From the tests above we can draw the following conclusions (some of the numbers depend on the test environment):

  1. As a TCP proxy, piped's overhead takes us from about 13K RPS down to about 11K RPS
  2. As an HTTP proxy, piped's overhead takes us from about 11K RPS down to about 10K RPS
  3. With piped's throttle (traffic smoothing) enabled, the output traffic is held steady and no requests fail; correspondingly, the average time per request increases
  4. piped's memory usage depends on the number of concurrent connections; at 100 concurrent connections it uses less than 5 MB of memory and under 70% of a CPU, as shown below:
[root@localhost piped]# top | grep piped
 1247 root      20   0  108272   4704   1628 R  29.7  0.2   0:00.89 piped
 1247 root      20   0  108272   4704   1628 R  66.8  0.2   0:02.90 piped
 1247 root      20   0  108272   4704   1628 R  66.7  0.2   0:04.90 piped
 1247 root      20   0  108272   4704   1628 R  64.5  0.2   0:06.84 piped
 1247 root      20   0  108272   4704   1628 R  66.3  0.2   0:08.83 piped
 1247 root      20   0  108272   4704   1628 R  69.1  0.2   0:10.91 piped
 1247 root      20   0  108372   4916   1628 R  64.0  0.2   0:12.83 piped
 1247 root      20   0  108372   4916   1628 S  16.7  0.2   0:13.33 piped

Note: the sixth column is top's RES (resident memory) value in KB; the ninth column is CPU usage in %.
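
For a cleaner capture than piping interactive top output, batch mode can sample just the piped process at a fixed interval (top -b and pidof are both standard on CentOS 7):

# sample piped's RES memory and %CPU every 2 seconds in batch mode
top -b -d 2 -p $(pidof piped) | grep piped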

Note: all of the tests above use HTTP/1.0 short-lived connections.