Category Archives: Performance and Scalability


Optimizing Nginx for serving files bigger than 1GB

Yesterday I ran into a strange issue: I realized that nginx was not serving files larger than 1GB. After some investigation I found that it was due to the proxy_max_temp_file_size directive, which is limited to 1024 MB by default.

This directive sets the maximum size of the temporary file that nginx writes to disk when a proxied response does not fit into the proxy buffers. Once that limit is reached, the rest of the response is passed to the client synchronously, as it arrives from the upstream server, without further disk buffering.
If you set proxy_max_temp_file_size to 0, temporary files are disabled entirely.

For this fix it was enough to set the directive inside the location block, although it can also be used in the server and http blocks. With this configuration nginx can serve responses larger than 1GB.

location / {
    ...
    # raise the on-disk buffering limit above the 1024 MB default
    proxy_max_temp_file_size 1924m;
    ...
}

Restart nginx to apply the changes:

service nginx restart

Smart and Efficient Byte-Range Caching with NGINX & NGINX Plus

When correctly deployed, caching is one of the quickest ways to accelerate web content. Not only does caching place content closer to the end user (thus reducing latency), it also reduces the number of requests to the upstream origin server, resulting in greater capacity and lower bandwidth costs.

The availability of globally distributed cloud platforms like AWS and DNS‑based global load balancing systems such as Route 53 make it possible to create your own global content delivery network (CDN).

In this article, we’ll look at how NGINX and NGINX Plus can cache and deliver traffic that is accessed using byte‑range requests. A common use case is HTML5 MP4 video, where requests use byte ranges to implement trick‑play (skip and seek) video playback. Our goal is to implement a caching solution for video delivery that supports byte ranges, and minimizes user latency and upstream network traffic.

Editor – The cache‑slice method discussed in Filling the Cache Slice‑by‑Slice was introduced in NGINX Plus R8. For an overview of all the new features in that release, see Announcing NGINX Plus R8 on our blog. Continue reading
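As a rough sketch of the cache‑slice approach described in that post, the configuration below uses the slice module available in open-source NGINX 1.9.8 and later; the cache path, zone name, slice size, and upstream address are illustrative placeholders rather than values taken from the article:

proxy_cache_path /tmp/mycache keys_zone=mycache:10m;

server {
    listen 80;

    location / {
        # split each request into fixed-size byte-range subrequests
        slice              1m;
        proxy_cache        mycache;
        # cache each slice separately, keyed by its byte range
        proxy_cache_key    $uri$is_args$args$slice_range;
        proxy_set_header   Range $slice_range;
        # 206 (Partial Content) responses must be cacheable for slicing to work
        proxy_cache_valid  200 206 1h;
        proxy_pass         http://origin:8080;
    }
}

With this layout, a seek into the middle of a large MP4 file only has to fetch and cache the slices covering the requested byte range, rather than the whole file.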

Configuring Nginx as a Proxy for WebService, MySQL, SQL Server, ORACLE, and More

nginx configuration for proxying a web service

#user  nobody;
worker_processes  4;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    upstream esbServer {   
        server 127.0.0.1:8083 weight=1 max_fails=2 fail_timeout=30s;   
    }

    #gzip  on;

    server {
        listen       8081;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location /ladder_web {
            proxy_set_header X-real-ip $remote_addr;
            proxy_pass http://esbServer;
        }

       
    }

}

Configuring an nginx MySQL proxy, based on the stream module in nginx 1.9 and later. Continue reading
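A minimal sketch of such a MySQL proxy using the stream module might look like the following; the back-end addresses, listen port, and timeouts are placeholders, not values from the original article:

stream {
    upstream mysqlServer {
        server 192.168.1.10:3306 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.1.11:3306 weight=1 max_fails=2 fail_timeout=30s;
    }

    server {
        # clients connect to nginx on 3307 and are proxied to the MySQL back ends
        listen 3307;
        proxy_connect_timeout 10s;
        proxy_timeout 300s;
        proxy_pass mysqlServer;
    }
}

Note that the stream block sits at the top level of nginx.conf, alongside (not inside) the http block, and requires nginx built with the stream module (1.9.0 or later).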

How LVS, Nginx, and HAProxy Work

Most Internet systems today use server clustering: the same service is deployed on multiple servers, which together form a cluster that provides the service to the outside world. These may be clusters of web application servers, database servers, distributed cache servers, and so on.

In practice, a load-balancing server always sits in front of the web server cluster. The load balancer's job is to act as the entry point for web traffic: it picks the most suitable web server and forwards the client's request to it, providing transparent forwarding from the client to the real back-end server.

The "cloud computing" and distributed architectures that have become so popular in recent years are essentially the same idea: back-end servers are treated as computing and storage resources that a management server wraps into a single externally exposed service. The client does not need to care which machine actually serves it; from its point of view it is talking to one server with almost unlimited capacity, while in reality the service is provided by the back-end cluster. Continue reading

haproxy: restrict specific URLs to specific IP addresses

This snippet shows you how to use haproxy to restrict certain URLs to certain IP addresses. For example, to make sure your admin interface can only be accessed from your company IP address.

This snippet was tested on haproxy 1.5.

This snippet is tested on a Digital Ocean VPS. If you like this snippet and want to support me, plus get free credit @ DO, use this link to order a Digital Ocean VPS: https://www.digitalocean.com/?refcode=7435ae6b8212

This example restricts access to the /admin/ and /helpdesk URLs. It only allows access from the IP addresses 20.30.40.50 and 20.30.40.40; any other IP address will get the standard haproxy 403 Forbidden error. Continue reading
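A minimal sketch of that restriction, assuming an existing frontend that forwards to a backend named webservers (the ACL names and backend name are illustrative):

frontend www
    bind *:80
    # only these source addresses may reach the restricted URLs
    acl network_allowed src 20.30.40.50 20.30.40.40
    # match requests whose path starts with /admin/ or /helpdesk
    acl restricted_page path_beg /admin/ /helpdesk
    # everyone else gets haproxy's standard 403 response
    http-request deny if restricted_page !network_allowed
    default_backend webservers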

Install package network:ha-clustering:Stable / crmsh

For RedHat RHEL-6 run the following as root:

cd /etc/yum.repos.d/
wget http://download.opensuse.org/repositories/network:ha-clustering:Stable/RedHat_RHEL-6/network:ha-clustering:Stable.repo
yum install crmsh

For Fedora 25 run the following as root:

dnf config-manager --add-repo http://download.opensuse.org/repositories/network:ha-clustering:Stable/Fedora_25/network:ha-clustering:Stable.repo
dnf install crmsh

For CentOS CentOS-7 run the following as root:

cd /etc/yum.repos.d/
wget http://download.opensuse.org/repositories/network:ha-clustering:Stable/CentOS_CentOS-7/network:ha-clustering:Stable.repo
yum install crmsh

For CentOS CentOS-6 run the following as root:

cd /etc/yum.repos.d/
wget http://download.opensuse.org/repositories/network:ha-clustering:Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
yum install crmsh

Upgrading a Web System with Over 100 Million Daily Requests to PHP7 in Practice

The QQ Membership campaign operations platform (AMS) is one of the main vehicles for QQ Membership's value-added operations, a web system that runs a massive number of campaigns. AMS is implemented mainly in PHP and handles roughly 300 million CGI requests per day, peaking at 800 million. For quite a long time, however, we had been running on a rather old software stack: PHP 5.2 + Apache 2.0 (technology from 2008). Since last year in particular, as AMS has grown along with the rapid growth of the QQ Membership value-added business, performance pressure has kept increasing.

So, starting in May 2015, we began planning an upgrade of the underlying PHP stack, with the ultimate goal of moving to PHP7. At that time PHP7 was still under development, but our discussions and preliminary research had already begun. Continue reading

Zookeeper, a Distributed Service Framework: Managing Data in a Distributed Environment

The Zookeeper distributed service framework is a sub-project of Apache Hadoop. It is mainly used to solve data management problems commonly encountered in distributed applications, such as unified naming, state synchronization, cluster management, and management of distributed application configuration. This article describes, from a user's perspective, how to install Zookeeper and what each item in its configuration file means, then analyzes typical Zookeeper application scenarios (configuration management, cluster management, synchronization locks, leader election, queue management, and so on), implementing them in Java with sample code.

Installation and Configuration in Detail

This article is based on the stable 3.2.2 release of Zookeeper; the latest version can be obtained from the official site at http://hadoop.apache.org/zookeeper/. Installing Zookeeper is very simple; below, installation and configuration are covered for both standalone mode and cluster mode. Continue reading
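As a rough illustration, a standalone-mode zoo.cfg can be as small as the following; the data directory is a placeholder, and the commented-out entries show the extra settings a cluster-mode deployment would add:

# conf/zoo.cfg -- standalone mode
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181

# cluster mode additionally lists every member, for example:
# initLimit=5
# syncLimit=2
# server.1=zoo1:2888:3888
# server.2=zoo2:2888:3888
# server.3=zoo3:2888:3888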

Testing resource agents

10.1. Testing with ocf-tester

The resource agents repository (and hence, any installed resource agents package) contains a utility named ocf-tester. This shell script allows you to conveniently and easily test the functionality of your resource agent.

ocf-tester is commonly invoked, as root, like this:

ocf-tester -n <name> [-o <param>=<value> ... ] <resource agent>
  • <name> is an arbitrary resource name.
  • You may set any number of <param>=<value> with the -o option, corresponding to any resource parameters you wish to set for testing.
  • <resource agent> is the full path to your resource agent.

When invoked, ocf-tester executes all mandatory actions and enforces action behavior as explained in Section 5, “Resource agent actions”. Continue reading
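For example, a hypothetical test run of the IPaddr2 agent with a single ip parameter might look like this (the resource name, address, and agent path are illustrative, not taken from the text above):

ocf-tester -n test-ip -o ip=192.0.2.10 /usr/lib/ocf/resource.d/heartbeat/IPaddr2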

HAProxy Configuration Examples

HAProxy Features

HAProxy provides high availability, load balancing, and proxying for TCP- and HTTP-based applications, with support for virtual hosts; it is a free, fast, and reliable solution. According to official figures, its upper limit is on the order of 10G-level concurrency.

HAProxy is particularly well suited to heavily loaded web sites, which usually also require session persistence or layer-7 processing. Running on current hardware, HAProxy can easily support tens of thousands of concurrent connections, and its operating model makes it simple and safe to integrate into your existing architecture while keeping your web servers from being exposed directly on the network.

It supports network switching from layer 4 up to layer 7, covering all TCP-based protocols. In other words, HAProxy can even load-balance MySQL (reads).

In terms of functionality alone, there are many products that can do web load balancing as a reverse proxy, including Nginx, Apache (as a proxy), lighttpd, Cherokee, and others.

One thing must be made clear, though: HAProxy is not an HTTP server. All of the reverse-proxy load-balancing products mentioned above are, without exception, web servers; simply put, they can themselves serve and process static content (html, jpg, gif, ...) or dynamic content (php, cgi, ...). HAProxy, by contrast, is purely and specifically a load-balancing application proxy; it cannot serve HTTP content by itself.

Its configuration is simple, however, and it has very good server health checks and a dedicated status-monitoring page. When a proxied back-end server fails, HAProxy automatically removes it, and automatically adds it back once it recovers. Since version 1.3 it has also introduced frontends and backends: a frontend matches rules against any HTTP request header content and then directs the request to the appropriate backend. Continue reading
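A minimal sketch of this frontend/backend split, with health checks enabled on the back-end servers (all names, addresses, and paths below are placeholders, not values from the article):

frontend http-in
    bind *:80
    # route requests for static content to a dedicated backend
    acl is_static path_beg /static/ /images/
    use_backend static-servers if is_static
    default_backend app-servers

backend app-servers
    balance roundrobin
    # "check" enables periodic health checks; failed servers are removed automatically
    server app1 192.168.1.21:8080 check
    server app2 192.168.1.22:8080 check

backend static-servers
    balance roundrobin
    server web1 192.168.1.31:80 check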