Monthly Archives: June 2016

How To Map User Location with GeoIP and ELK (Elasticsearch, Logstash, and Kibana)

Introduction

IP geolocation, the process of determining the physical location of an IP address, can be leveraged for a variety of purposes, such as content personalization and traffic analysis. Traffic analysis by geolocation can provide invaluable insight into your user base, as it allows you to easily see where your users are coming from, which can help you understand who your current audience is and make informed decisions about the ideal geographical location(s) of your application servers. In this tutorial, we will show you how to create a visual geo-mapping of the IP addresses of your application’s users by using a GeoIP database with Elasticsearch, Logstash, and Kibana.

Here’s a short explanation of how it all works. Logstash uses a GeoIP database to convert IP addresses into latitude and longitude coordinate pairs, i.e. the approximate physical location of an IP address. The coordinate data is stored in Elasticsearch in geo_point fields and also converted into geohash strings. Kibana can then read the geohash strings and draw them as points on a map of the Earth, known in Kibana 4 as a Tile Map visualization.
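
As a quick sketch of that first stage, a Logstash geoip filter along these lines performs the lookup; the clientip source field is an assumption about how your logs are parsed, and in Logstash 2.x the results land under a geoip field whose location is mapped as geo_point by the default index template.

filter {
  geoip {
    # "clientip" is an assumed field holding the visitor's IP address
    source => "clientip"
    # store lookup results (including geoip.location) under "geoip"
    target => "geoip"
  }
}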

Let’s take a look at the prerequisites now. Continue reading

How To Use Kibana Dashboards and Visualizations

Introduction

Kibana 4 is an analytics and visualization platform that builds on Elasticsearch to give you a better understanding of your data. In this tutorial, we will get you started with Kibana by showing you how to use its interface to filter and visualize log messages gathered by an Elasticsearch ELK stack. We will cover the main interface components and demonstrate how to create searches, visualizations, and dashboards. Continue reading

Adding Logstash Filters To Improve Centralized Logging

Introduction

Logstash is a powerful tool for centralizing and analyzing logs, which can help to provide an overview of your environment and to identify issues with your servers. One way to increase the effectiveness of your ELK Stack (Elasticsearch, Logstash, and Kibana) setup is to collect important application logs and structure the log data by employing filters, so that the data can be readily analyzed and queried. We will build our filters around “grok” patterns, which parse the data in the logs into useful bits of information.
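
For example, a minimal sketch of such a filter might parse Nginx or Apache access logs with the stock COMBINEDAPACHELOG grok pattern; the "nginx-access" type name is an assumption about how the events were tagged by the shipper.

filter {
  if [type] == "nginx-access" {
    grok {
      # COMBINEDAPACHELOG ships with Logstash and matches the combined
      # access-log format, producing fields such as clientip, verb,
      # request, and response
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}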

This guide is a sequel to the How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04 tutorial, and focuses primarily on adding Logstash filters for various common application logs. Continue reading

How To Gather Infrastructure Metrics with Topbeat and ELK on Ubuntu 14.04

Introduction

Topbeat is one of several “Beats” data shippers that send various types of server data to an Elasticsearch instance; it gathers information about the CPU, memory, and process activity on your servers. When used with the ELK stack (Elasticsearch, Logstash, and Kibana), Topbeat can serve as an alternative to other system metrics visualization tools such as Prometheus or Statsd.
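
A minimal topbeat.yml sketch along these lines configures what Topbeat collects and where it sends it; this assumes Topbeat 1.x and ships directly to an Elasticsearch instance on localhost, whereas the tutorial's setup points the output at Logstash instead.

input:
  # how often to read system-wide statistics, in seconds
  period: 10
  # regex list of processes to monitor; ".*" matches all of them
  procs: [".*"]
output:
  elasticsearch:
    # assumed address of your Elasticsearch instance
    hosts: ["localhost:9200"]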

In this tutorial, we will show you how to use an ELK stack to gather and visualize infrastructure metrics by using Topbeat on an Ubuntu 14.04 server. Continue reading

How To Install Elasticsearch, Logstash, and Kibana (ELK Stack) on Ubuntu 14.04

Introduction

In this tutorial, we will go over the installation of the Elasticsearch ELK Stack on Ubuntu 14.04—that is, Elasticsearch 2.2.x, Logstash 2.2.x, and Kibana 4.4.x. We will also show you how to configure it to gather and visualize the syslogs of your systems in a centralized location, using Filebeat 1.1.x. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools are based on Elasticsearch, which is used for storing logs.
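
As a hedged sketch of the shipping side, a Filebeat 1.1.x prospector along these lines tails the syslog files on a client machine and forwards them to Logstash; the ELK server hostname and port are placeholders for your own setup.

filebeat:
  prospectors:
    -
      # syslog files to tail on this client machine
      paths:
        - /var/log/syslog
        - /var/log/auth.log
      document_type: syslog
output:
  logstash:
    # placeholder address of the Logstash Beats input on your ELK server
    hosts: ["elk-server.example.com:5044"]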

Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.

It is possible to use Logstash to gather logs of all types, but we will limit the scope of this tutorial to syslog gathering. Continue reading

open-falcon

Open-Falcon is an enterprise-grade, internet-scale monitoring system solution open-sourced by Xiaomi's operations team.

GitHub

Highlights and features

  • Configuration-free data collection: agent self-discovery, plugin support, and an active push mode (see the push example after this list).
  • Horizontally scalable capacity: 500,000 data points per second collected, alerted on, stored, and graphed in production, with room for continuous horizontal scaling.
  • Self-discovering alert policies: web-based configuration, policy templates with inheritance and overrides, multiple alert channels, and callback actions.
  • User-friendly alert settings: maximum alert counts, alert severity levels, recovery notifications, alert muting, different thresholds for different time periods, maintenance windows, and alert aggregation.
  • Efficient historical queries: a year of history for hundreds of metrics returned in seconds.
  • User-friendly dashboards: multi-dimensional data display, user-defined dashboards, and more.
  • Highly available architecture: no single point of failure anywhere in the system, easy to operate and deploy.
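
As a hedged illustration of the push mode, the command below posts one metric to the agent's local HTTP push endpoint; the path, port, and field names follow Open-Falcon's documented push API, while the endpoint, metric, and tag values are hypothetical.

curl -X POST http://127.0.0.1:1988/v1/push -d '[
  {
    "endpoint": "host-01",
    "metric": "app.request.count",
    "timestamp": 1466000000,
    "step": 60,
    "value": 42,
    "counterType": "GAUGE",
    "tags": "module=web"
  }
]'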

Continue reading

A Guide to Common HBA Operations on Linux Hosts

Introduction

This article covers common basic operations for HBA cards on Linux, including how to locate an HBA through commands or logs, how to find WWNs and set up persistent binding, and how to recognize storage devices once the HBA is installed.

More Information

Preparing a host for external storage:

The order in which the HBA and the operating system are installed determines the steps for attaching external storage. If the HBA is installed before the operating system, connecting disks is relatively simple: the installer detects the hardware and prepares the required modules. If the adapter is installed after the operating system, or is changed after the operating system is installed, the user must install it manually. This article uses an Emulex 1000 as the example HBA.
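
As a hedged sketch of locating the adapter from the shell, the commands below work on typical modern kernels; the lpfc module name applies to Emulex adapters, and the exact sysfs paths can vary by distribution and kernel version.

# find Fibre Channel HBAs on the PCI bus
lspci | grep -i "fibre channel"

# check whether the Emulex driver module is loaded
lsmod | grep lpfc

# list FC hosts and read each port's WWPN from sysfs
ls /sys/class/fc_host/
cat /sys/class/fc_host/host*/port_name

# rescan a SCSI host so newly presented LUNs are recognized
echo "- - -" > /sys/class/scsi_host/host0/scan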

HBA installed before the operating system: the installer detects the hardware and prepares the modules

HBA installed or changed after the operating system: the user installs it manually. Continue reading

TIP: How to reset root@localhost password on OTRS 5

To reset the root@localhost password on OTRS 5, you need access to the server’s shell, where you can execute the following command:

su -c "/opt/otrs/bin/otrs.Console.pl Admin::User::SetPassword root@localhost 123456" -s /bin/bash otrs

The root password is now 123456. The same command can also reset the password of any other Agent; just replace “root@localhost” with the Agent’s login:

su -c "/opt/otrs/bin/otrs.Console.pl Admin::User::SetPassword AgentLogin 123456" -s /bin/bash otrs

What is OTRS?

OTRS is open-source software for managing a wide range of business processes, from help desks and support centers to IT service management. Its entire feature set is built around the creation of “tickets”: OTRS can be used for support, sales, pre-sales, billing, internal IT, help-desk, and many other scenarios, allowing requests raised by any department to be received and answered quickly. If you want a system in which requests arrive by email or through the web and are handled by a support team, you will love OTRS!

OTRS is licensed under the GNU Affero General Public License (AGPL) and has been tested on Linux, Solaris, AIX, FreeBSD, OpenBSD, Mac OS 10.x, and Windows.

The name OTRS is an acronym for Open-source Ticket Request System. It is an open project that provides service-request submission, problem management, and related features, integrating many functions for managing requests that users raise by email or by phone.


Official website: https://www.otrs.com/

Sharding-JDBC

Introduction (https://github.com/dangdangdotcom/sharding-jdbc)

Sharding-JDBC directly wraps the JDBC API and can be thought of as an enhanced JDBC driver, so the cost of migrating existing code is close to zero:

  • Works with any Java ORM framework, such as JPA, Hibernate, MyBatis, or Spring JDBC Template, or with JDBC used directly.
  • Works on top of any third-party database connection pool, such as DBCP, C3P0, BoneCP, or Druid.
  • In theory, supports any database that implements the JDBC specification. Only MySQL is supported at present, but support for Oracle, SQL Server, DB2, and other databases is planned.

Sharding-JDBC is positioned as a lightweight Java framework: the client connects to the database directly and the service is delivered as a jar, with no middle tier, no extra deployment, and no other dependencies, so DBAs do not need to change how they operate the database. SQL parsing uses the Druid parser, currently the highest-performance SQL parser available. Continue reading
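
Because Sharding-JDBC implements the standard JDBC interfaces, application code can stay as plain JDBC once its DataSource is in place. The sketch below assumes a shardingDataSource already built from the library's rule configuration (not shown), and the t_order table is hypothetical; the sharding layer routes the SQL to the correct physical shard.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class OrderQuery {
    // assumed: a DataSource produced by Sharding-JDBC from your sharding rules
    private final DataSource shardingDataSource;

    public OrderQuery(DataSource shardingDataSource) {
        this.shardingDataSource = shardingDataSource;
    }

    public void printOrder(long orderId) throws Exception {
        // ordinary JDBC calls; Sharding-JDBC decides which shard to hit
        try (Connection conn = shardingDataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT order_id, status FROM t_order WHERE order_id = ?")) {
            ps.setLong(1, orderId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1) + " " + rs.getString(2));
                }
            }
        }
    }
}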

HowTo: The Ultimate Logrotate Command Tutorial with 10 Examples

Managing log files effectively is an essential task for a Linux sysadmin.

In this article, let us discuss how to perform the following log file operations using the UNIX logrotate utility (a sample configuration follows the list).

  • Rotate the log file when it reaches a specific size
  • Continue to write the log information to the newly created file after rotating the old log file
  • Compress the rotated log files
  • Specify compression option for the rotated log files
  • Rotate the old log files with the date in the filename
  • Execute custom shell scripts immediately after log rotation
  • Remove older rotated log files
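
As a hedged sketch combining several of these operations, the configuration below rotates a hypothetical /var/log/myapp.log by size, compresses and dates the rotated copies, runs a custom script afterward, and removes old copies; all directives are standard logrotate options, and the script path is a placeholder.

/var/log/myapp.log {
    # rotate when the file reaches 50 MB
    size 50M
    # keep at most 4 rotated files; older ones are removed
    rotate 4
    # compress rotated logs, using bzip2 instead of the default gzip
    compress
    compresscmd /bin/bzip2
    # append the date to rotated filenames
    dateext
    # truncate the original file in place so the app keeps writing to it
    copytruncate
    # run a custom script after each rotation
    postrotate
        /usr/local/bin/notify-rotation.sh
    endscript
}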

Continue reading