Monthly archive: June 2015

Building a Log Analysis and Monitoring Platform on CentOS with the ELK Stack

1 Overview

The ELK stack refers to the combination of ElasticSearch, Logstash and Kibana. Together, these three pieces of software form a log analysis and monitoring toolkit.

Since each of the three projects has many releases of its own, it is best to use the version combination recommended on the official ElasticSearch download page: http://www.elasticsearch.org/overview/elkdownloads/ Continue reading
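As a rough sketch of how the three pieces fit together, the Logstash configuration below reads a syslog file and ships the parsed events to a local ElasticSearch node, which Kibana then queries. The file path, the hosts value and the option names are assumptions and vary between Logstash versions:

    input {
      file {
        path => "/var/log/messages"   # assumed syslog location on CentOS
        type => "syslog"
      }
    }
    output {
      # send events to a local ElasticSearch node; Kibana reads them from there
      elasticsearch { hosts => ["localhost:9200"] }
      stdout { codec => rubydebug }   # also echo events to the console for debugging
    }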

Ganglia

Ganglia is a scalable, cross-platform distributed monitoring system for high-performance computing systems such as clusters and grids. It is based on a hierarchical design and leverages widely used technologies such as XML for data representation, XDR for portable data transport, and RRDtool for data storage and visualization. It uses carefully engineered data structures and algorithms to achieve very low per-node overhead and high concurrency. It has been ported to a broad range of operating systems and processor architectures and is currently in use on thousands of clusters around the world. It has been used to link university campuses across the globe and can scale to clusters of 2000 nodes. Continue reading
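As a small illustration of that hierarchical design, a minimal setup only needs a cluster name shared by the gmond agents and a data_source line telling gmetad which node to poll; the cluster name and hostname below are hypothetical:

    # /etc/ganglia/gmond.conf (on every monitored node)
    cluster {
      name = "web-cluster"
    }

    # /etc/ganglia/gmetad.conf (on the collector node)
    data_source "web-cluster" node1.example.com:8649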

HAProxy’s load-balancing algorithm for static content delivery with Varnish

HAProxy’s load-balancing algorithms

HAProxy supports many load-balancing algorithms which may be used in many different types of use cases.
That said, cache servers, which most of the time deliver the static content of your web applications, may require specific load-balancing algorithms.

HAProxy stands in front of your cache servers for some good reasons:

  • SSL offloading (read PHK’s feeling about SSL, Varnish and HAProxy)
  • HTTP content switching capabilities
  • advanced load-balancing algorithms

The main purpose of this article is to show how HAProxy can be used to aggregate the memory storage of several Varnish servers in a kind of “JBOD” mode (as in “Just a Bunch Of Disks”).
The examples provided here aim to optimize the resources on the caches, mainly their memory, in order to improve the HIT rate. This will also improve your application response time and make your site top ranked on Google :) Continue reading
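A minimal HAProxy sketch of that “JBOD” idea, assuming two Varnish nodes listening on their default port 6081: hashing on the URI sends a given object to the same cache every time, so the memories of the two caches add up instead of duplicating each other. The addresses and the health-check URL are assumptions:

    frontend ft_web
        bind :80
        mode http
        default_backend bk_varnish

    backend bk_varnish
        mode http
        # hash the URI so a given object is always fetched from the same cache
        balance uri
        hash-type consistent
        option httpchk HEAD /varnishcheck
        server varnish1 192.168.0.11:6081 check
        server varnish2 192.168.0.12:6081 check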

Use a load-balancer as a first row of defense against DDOS

Recently we have seen more and more DOS and DDOS attacks. Some of them were very big, requiring thousands of computers…
But in most cases, this kind of attack is carried out by just a few computers aiming to make a service or website unavailable, either by sending it too many requests or by consuming all of its available resources, preventing regular users from using the service.
Some attacks target known vulnerabilities of widely used applications.

In the present article, we’ll explain how to take advantage of an application delivery controller to protect your website and application against DOS, DDOS and vulnerability scans.

Why use a load-balancer for this kind of protection when a firewall and a Web Application Firewall (aka WAF) could already do the job?
Well, the firewall is not aware of the application layer, although it is useful for protecting against SYN flood attacks. That is why application-layer firewalls appeared: Web Application Firewalls, also known as WAFs.
Well, since the load balancer sits in front of the platform, it can be a good partner for the WAF, filtering out the 99% of attacks that are run by script kiddies. The WAF can then happily clean up the remaining attacks.
Well, maybe you don’t need a WAF and you want to take advantage of your Aloha and save some money ;).

Note that you need an application layer load-balancer, like the Aloha or the open-source HAProxy, for this to be efficient. Continue reading
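As a taste of what such a first row of defense can look like, here is a minimal HAProxy sketch using a stick-table to count connections and HTTP requests per source IP; the thresholds and the backend address are assumptions, not recommendations:

    frontend ft_web
        bind :80
        mode http
        # keep per-source-IP counters for 30 seconds
        stick-table type ip size 200k expire 30s store conn_rate(10s),http_req_rate(10s)
        tcp-request connection track-sc0 src
        # reject clients opening connections faster than 20 per 10s
        tcp-request connection reject if { src_conn_rate gt 20 }
        # deny clients sending more than 100 HTTP requests per 10s
        http-request deny if { sc0_http_req_rate gt 100 }
        default_backend bk_app

    backend bk_app
        mode http
        server app1 192.168.0.21:8080 check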

load balancing, affinity, persistence, sticky sessions: what you need to know

Synopsis

To ensure high availability and performance of Web applications, it is now common to use a load-balancer.
While some people use layer 4 load-balancers, it is sometimes recommended to use layer 7 load-balancers, which are more efficient with the HTTP protocol.

NOTE: To better understand the difference between these types of load-balancers, please read the Load-Balancing FAQ.

A load-balancer in an infrastructure

The picture below shows how we usually install a load-balancer in an infrastructure:

This is a logical diagram. When working at layer 7 (aka Application layer), the load-balancer acts as a reverse proxy.
So, from a physical point of view, it can be plugged anywhere in the architecture:

  • in a DMZ
  • in the server LAN
  • in front of the servers, acting as the default gateway
  • far away, in another separate datacenter

Continue reading
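To make the affinity/persistence idea of this article concrete, here is a minimal HAProxy layer 7 sketch using an inserted cookie, so that a given client keeps hitting the server that answered its first request; the server names and addresses are hypothetical:

    backend bk_app
        mode http
        balance roundrobin
        # insert a cookie on the first response and route on it afterwards
        cookie SERVERID insert indirect nocache
        server s1 192.168.0.21:80 check cookie s1
        server s2 192.168.0.22:80 check cookie s2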

How to run Docker containers on CentOS or Fedora

Lately Docker has emerged as a key technology for deploying applications in the cloud. Compared to traditional hardware virtualization, Docker-based container sandboxes provide a number of advantages for application deployment, such as lightweight isolation, deployment portability and ease of maintenance. Red Hat is now steering community efforts to streamline the management and deployment of Docker containers. Continue reading
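On a stock CentOS 7 box the steps boil down to something like the commands below (on Fedora releases of that era the package was called docker-io); treat this as a sketch rather than a full walkthrough:

    sudo yum install docker            # docker-io on older Fedora
    sudo systemctl start docker
    sudo systemctl enable docker

    # pull a base image and start an interactive container
    sudo docker pull centos
    sudo docker run -it centos /bin/bash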

Tomcat: Clustering and Load Balancing with HAProxy

Introduction


In this article we will explore how to set up a simple Tomcat cluster with load balancing using HAProxy. Our environment will consist of two Tomcat (latest version) instances running under Ubuntu Lucid (10.04 LTS). We will use sample applications from the built-in Tomcat package to demonstrate various scenarios. Later in the tutorial, we will look in depth at how to configure HAProxy and how to set up logging.

What is HAProxy?

HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer7 processing – http://haproxy.1wt.eu/

What is Tomcat?

Apache Tomcat is an open source software implementation of the Java Servlet and JavaServer Pages technologies… Apache Tomcat powers numerous large-scale, mission-critical web applications across a diverse range of industries and organizations – http://tomcat.apache.org/

 

Table of Contents


  1. Setting-up the Environment
    • Download Tomcat
    • Configure Tomcat
    • Run Tomcat
    • Download HAProxy
    • Configure HAProxy
  2. Load Balancing
    • Default Setup
    • Sharing Sessions
    • Configure Tomcat to Share Sessions
    • Retest Session Sharing
    • Session Sharing Caveat
  3. HAProxy Configuration
    • Configuration File
    • Logging

Continue reading
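For reference, the session-sharing part of this tutorial usually comes down to enabling Tomcat's default SimpleTcpCluster in server.xml on each node and marking the web application as distributable in its web.xml; the jvmRoute value below is a hypothetical node name:

    <!-- conf/server.xml on node 1: unique jvmRoute plus default session replication -->
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
      <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"/>
    </Engine>

    <!-- WEB-INF/web.xml of the application -->
    <distributable/>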

Tomcat clustering + Haproxy

1. High Availability of Tomcat

Continue reading

Load balancing and HA for multiple applications with Apache, HAProxy and keepalived

Let’s imagine a situation where, for whatever reason, we have a number of web applications available for users, and we want users to access them using, for example,

https://example.com/appname/

Each application is served by a number of backend servers, so we want some sort of load balancing. The backend servers are not aware of the clustering and expect requests relative to /, not /appname. Also, SSL connections and caching are needed. The following picture illustrates the situation:

Here is an example of how to implement such a setup using Apache, HAProxy and keepalived. The configuration samples provided refer to the load balancer machine(s). Here is the logical structure of the load balancer described in this article:

Apache

Continue reading
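A minimal sketch of the Apache part, assuming mod_ssl and mod_proxy_http are enabled and that Apache terminates SSL and forwards /appname/ to a local HAProxy listener on port 8001 (the port, hostname and certificate paths are assumptions); ProxyPass strips the prefix so the backends see requests relative to /:

    <VirtualHost *:443>
        ServerName example.com
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/example.com.crt
        SSLCertificateKeyFile /etc/ssl/private/example.com.key

        # strip the /appname/ prefix before handing requests to HAProxy
        ProxyPass        /appname/ http://127.0.0.1:8001/
        ProxyPassReverse /appname/ http://127.0.0.1:8001/
    </VirtualHost>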