Monthly Archives: May 2015

Configuring MySQL Master-Master Replication

MySQL Master-Master replication adds speed and redundancy for active websites. With replication, two separate MySQL servers act as a cluster. Database clustering is particularly useful for high availability website configurations. This guide uses two separate Linodes, each with a private IPv4 address, to configure database replication.

This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you’re not familiar with the sudo command, you can check our Users and Groups guide.

This guide is written for Debian 7 or Ubuntu 14.04.
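The core of master-master replication is each node's my.cnf: distinct server IDs, binary logging, and staggered auto-increment values so the two masters never generate colliding keys. A minimal sketch for node 1 (the server IDs, paths, database name, and private IP here are assumptions; node 2 would use server-id = 2 and auto_increment_offset = 2):

```
# /etc/mysql/my.cnf on node 1 (example values)
server-id                = 1
log_bin                  = /var/log/mysql/mysql-bin.log
binlog_do_db             = example_db
auto_increment_increment = 2
auto_increment_offset    = 1
bind-address             = 192.168.0.1
```

With auto_increment_increment = 2, node 1 generates odd keys and node 2 even keys, so inserts on both masters never collide.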

Install MySQL

Continue reading

Install HAProxy and Keepalived (Virtual IP)

To avoid a single point of failure in HAProxy itself, set up two identical HAProxy instances (one active, one standby) and use Keepalived to run VRRP between them. VRRP provides a virtual IP address to the active HAProxy and transfers the virtual IP to the standby HAProxy in case of failure. Failover is seamless because the two HAProxy instances share no state.

In this example, we use two nodes as load balancers with IP failover in front of our database cluster. The VIP floats between LB1 (master) and LB2 (backup). When LB1 goes down, LB2 takes over the VIP; once LB1 comes back up, the VIP fails back to LB1, since LB1 holds the higher priority number.  Continue reading
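The VRRP side of this can be sketched in /etc/keepalived/keepalived.conf on LB1 as follows (the interface name, router ID, priorities, and VIP are assumptions; LB2 would use state BACKUP and a lower priority, which is what makes the VIP fail back to LB1):

```
vrrp_instance VI_1 {
    state MASTER              # LB2 uses BACKUP
    interface eth0
    virtual_router_id 51
    priority 101              # LB2 uses a lower value, e.g. 100
    virtual_ipaddress {
        192.168.0.100         # the floating VIP
    }
}
```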

haproxy + keepalived – the free HA load balancer

Load balancers are cool, especially free ones. haproxy and keepalived together can give you a simple HA load balancer at the cost of the hardware you run it on. Here's how to set up a basic active/passive load balancer with haproxy and keepalived. First the environment:

haproxy-keepalived Continue reading

A Brief Overview of How SSL Works, and Related Applications

SSL (Secure Socket Layer) is a secure transport protocol. Originally developed by Netscape, it has become the global standard for authenticating websites to browser users and for encrypting communication between browsers and web servers. Because SSL support is built into all major browsers and web server software, installing a digital certificate (server certificate) is enough to activate SSL on the server. Continue reading
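The "server certificate" mentioned above is an ordinary X.509 certificate. As a quick illustration, OpenSSL can generate a throwaway self-signed certificate and print its subject (the CN and file paths here are made-up examples; a production certificate would be signed by a CA):

```shell
# Generate a throwaway self-signed certificate and key (example paths/CN),
# then print the certificate's subject to verify it.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 1 -subj "/CN=demo.example"
openssl x509 -in /tmp/demo.crt -noout -subject
```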

Server Certificate Installation and Configuration Guide (Apache for Linux)

I.  Preparation

1.    Install OpenSSL
To give Apache SSL support, install OpenSSL first. Downloading openssl-0.9.8k.tar.gz is recommended.
Download OpenSSL: http://www.openssl.org/source/
tar -zxf openssl-0.9.8k.tar.gz    # unpack the archive
cd openssl-0.9.8k                 # enter the unpacked directory
./config                          # configure; the defaults are recommended
make && make install              # compile and install
By default, OpenSSL is installed to /usr/local/ssl

2.    Install Apache
./configure --prefix=/usr/local/apache --enable-so --enable-ssl --with-ssl=/usr/local/ssl --enable-mods-shared=all    # configure; compiling modules as shared objects is recommended
make && make install
Compiling Apache modules dynamically makes them easier to load and manage. Apache will be installed to /usr/local/apache   Continue reading
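Once Apache is built with SSL support, the server certificate is wired up in an SSL virtual host. A minimal sketch (the hostname, certificate paths, and document root are assumptions) looks like:

```
Listen 443
<VirtualHost *:443>
    ServerName www.example.com
    SSLEngine on
    SSLCertificateFile    /usr/local/apache/conf/server.crt
    SSLCertificateKeyFile /usr/local/apache/conf/server.key
    DocumentRoot /usr/local/apache/htdocs
</VirtualHost>
```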

An Introduction to Using Burp Suite

Getting Started

Burp Suite is an integrated platform for attacking web applications. It contains numerous tools, with interfaces between them designed to speed up the process of attacking an application. All of the tools share a powerful, extensible framework for handling and displaying HTTP messages, persistence, authentication, proxying, logging, and alerting. This article covers the following features:

1. Target — displays the target application's directory structure
2. Proxy — an intercepting HTTP/S proxy that sits as a man-in-the-middle between the browser and the target application, allowing you to intercept, inspect, and modify the raw traffic in both directions
3. Spider — an intelligent web crawler that can fully enumerate an application's content and functionality
4. Scanner — an advanced tool that, when run, automatically discovers security vulnerabilities in web applications
5. Intruder — a highly configurable tool for automating customized attacks against web applications, such as enumerating identifiers, harvesting useful data, and probing for common vulnerabilities with fuzzing techniques
6. Repeater — a tool for manually issuing individual HTTP requests and analyzing the application's responses
7. Sequencer — a tool for analyzing the randomness of session tokens and other important data items that are meant to be unpredictable
8. Decoder — a tool for manually or intelligently decoding and encoding application data
9. Comparer — visualizes the "diff" between two pieces of data, typically related requests and responses
10. Extender — lets you load Burp Suite extensions, extending Burp Suite's functionality with your own or third-party code
11. Options — various Burp Suite settings
Continue reading
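As a point of reference for the Decoder tool above: the transforms it applies interactively (URL, Base64, hex, and so on) are the same ones available from ordinary command-line tools. A small sketch with a made-up parameter value:

```shell
# Base64-encode a sample parameter value, then decode it again --
# the same kind of transform Burp's Decoder performs interactively.
printf 'admin=true' | base64
printf 'YWRtaW49dHJ1ZQ==' | base64 -d
```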

Foreman: An Enterprise-Grade Configuration Management Solution

Building an Operations Framework

This article covers one key piece of building an operations framework.

What is Foreman

Foreman is an integrated data center lifecycle management tool that provides provisioning, configuration management, and reporting. Like Puppet Dashboard, Foreman is a Ruby on Rails application. Foreman differs from Dashboard in its stronger focus on provisioning and data center management, for example through integration with bootstrapping tools, PXE boot servers, DHCP servers, and server provisioning tools.

Foreman as a unified machine management platform:

  • Foreman integrates with Puppet and acts as its front end
  • Foreman can serve as an external node classifier
  • Foreman can display system information via the facter component and collect Puppet reports
  • Foreman can manage large numbers of nodes and roll configurations back to earlier versions

Continue reading
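For the external-node-classifier bullet above: Puppet invokes an ENC as an executable that receives a node name and prints YAML describing that node's classes and parameters, which is the interface Foreman implements. A minimal stand-in ENC (the default node name and class names are made up) could look like:

```shell
#!/bin/sh
# Minimal external node classifier sketch: Puppet passes the node name as
# the first argument and reads YAML (classes, parameters, environment)
# from stdout. Node and class names here are made-up examples.
node="${1:-web01.example.com}"
cat <<EOF
---
classes:
  - ntp
  - ssh
parameters:
  managed_node: $node
environment: production
EOF
```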

30 Things to Do After Minimal RHEL/CentOS 7 Installation

Continue reading

High availability load balancing using HAProxy on Ubuntu

In this post we will show you how to easily set up load balancing for your web application. Imagine your application currently runs on one webserver called web01:

+---------+
| uplink  |
+---------+
     |
+---------+
|  web01  |
+---------+

But traffic has grown and you'd like to increase your site's capacity by adding more webservers (web02 and web03), as well as eliminate the single point of failure in your current setup (if web01 has an outage, the site goes offline).

              +---------+
              | uplink  |
              +---------+
                   |
     +-------------+-------------+
     |             |             |
+---------+   +---------+   +---------+
|  web01  |   |  web02  |   |  web03  |
+---------+   +---------+   +---------+

In order to spread traffic evenly over your three webservers, we can install an extra server to proxy all the traffic and balance it across the webservers. In this post we will use HAProxy, an open source TCP/HTTP load balancer (see: http://haproxy.1wt.eu/), to do that:

              +---------+
              |  uplink |
              +---------+
                   |
                   +
                   |
              +---------+
              | loadb01 |
              +---------+
                   |
     +-------------+-------------+
     |             |             |
+---------+   +---------+   +---------+
|  web01  |   |  web02  |   |  web03  |
+---------+   +---------+   +---------+

So our setup now is:
– Three webservers, web01 (192.168.0.1), web02 (192.168.0.2), and web03 (192.168.0.3), each serving the application
– A new server (loadb01, IP: 192.168.0.100) with Ubuntu installed. Continue reading
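On loadb01, the balancing itself can be sketched in a single haproxy.cfg listen section using the addresses above (the balance algorithm and health-check method here are just reasonable defaults, not prescribed by the post):

```
listen webfarm
    bind 192.168.0.100:80
    mode http
    balance roundrobin
    option httpchk HEAD / HTTP/1.0
    server web01 192.168.0.1:80 check
    server web02 192.168.0.2:80 check
    server web03 192.168.0.3:80 check
```

The check keyword makes HAProxy probe each backend with the httpchk request and stop routing traffic to a webserver that fails it.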

Failover and loadbalancer using keepalived (LVS) on two machines

In this scenario, we have two machines and try to make the most of the available resources. Each node plays the role of realserver: it provides a service such as a web or mail server. At the same time, one of the machines load-balances requests to itself and to its neighbor. The node responsible for load balancing owns the VIP, and every client connects to it transparently thanks to that VIP. The other node can take over the VIP if it detects that the current master has failed, but in the nominal case it only processes requests forwarded by the load balancer. Continue reading
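The load-balancing half of this setup is keepalived's LVS configuration: a virtual_server block for the VIP that lists both machines as real servers. A sketch (the VIP, real-server IPs, scheduler, and check timeout are assumptions for illustration):

```
virtual_server 192.168.0.100 80 {
    delay_loop 6
    lb_algo rr            # round-robin between the two realservers
    lb_kind DR            # direct routing: realservers reply to clients directly
    protocol TCP

    real_server 192.168.0.1 80 {
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.0.2 80 {
        TCP_CHECK {
            connect_timeout 3
        }
    }
}
```

Both nodes carry the same configuration; whichever one holds the VIP at a given moment performs the balancing.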