April 2018

Fixing MySQL "ERROR 1418" when creating a function or stored procedure

When creating a function or stored procedure, MySQL reports error 1418:

This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)

This is caused by the log_bin_trust_function_creators parameter:
It applies when binary logging is enabled. It controls whether stored function creators can be trusted not to create stored functions that cause unsafe events to be written to the binary log.
If set to 0 (the default), users are not permitted to create or alter stored functions unless they have the SUPER privilege in addition to CREATE ROUTINE or ALTER ROUTINE. A setting of 0 also enforces the restriction that a function must be declared with the DETERMINISTIC characteristic, or with the READS SQL DATA or NO SQL characteristic. If the variable is set to 1, MySQL does not enforce these restrictions on stored function creation. This variable also applies to trigger creation.

mysql> show variables like 'log_bin';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | ON    |
+---------------+-------+
1 row in set (0.00 sec)
 
mysql>  show variables like '%log_bin_trust_function_creators%';
+---------------------------------+-------+
| Variable_name                   | Value |
+---------------------------------+-------+
| log_bin_trust_function_creators | OFF   |
+---------------------------------+-------+
1 row in set (0.00 sec)

If the database does not use master-slave replication, the log_bin_trust_function_creators parameter can simply be set to 1.

mysql> set global log_bin_trust_function_creators=1;

This dynamic setting is lost when the service restarts, so it must also be set in my.cnf by adding

log_bin_trust_function_creators=1

to make it permanent.
Note: if master-slave replication is in use and log_bin_trust_function_creators is enabled at the same time, functions and stored procedures can be created, but this may break replication.
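If changing the global setting is undesirable (for example, on a replicated server), declaring the routine with one of the characteristics named in the error message also avoids it. A minimal sketch, assuming a hypothetical function f1:

mysql> DELIMITER //
mysql> CREATE FUNCTION f1(x INT) RETURNS INT
    -> DETERMINISTIC
    -> BEGIN
    ->   RETURN x * 2;
    -> END //
mysql> DELIMITER ;

DETERMINISTIC (or READS SQL DATA / NO SQL, whichever truthfully describes the function) tells the server the function is safe to log.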

Fixing the virt-manager error "unsupported format character"

virt-manager fails with the following error:

Error launching the manager: unsupported format character (0xffffffef) at index 30

(screenshot: virt.png)

System version: CentOS release 6.9 (Final)
The fix is as follows:
First remove the 0.9.0-34 build:

yum remove virt-manager

Then find the CentOS build of virt-manager-0.9.0-31 and install it:

wget http://vault.centos.org/6.7/cr/x86_64/Packages/virt-manager-0.9.0-31.el6.x86_64.rpm
rpm -ivh virt-manager-0.9.0-31.el6.x86_64.rpm

That resolves the issue.
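To confirm the downgraded package is the one in place:

rpm -q virt-manager          # should report virt-manager-0.9.0-31.el6.x86_64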

Recovering failed Segment nodes with gprecoverseg

While testing in a Greenplum environment, segment node sdw2 went down after running out of disk space; when the host was restarted, the segments reported errors and would not start.
gpstate -m shows the sdw2 segments as failed:

[gpadmin@dw01 gpmaster]$ gpstate -m

gpstate:dw01:gpadmin-[INFO]:-Starting gpstate with args: -m
gpstate:dw01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.12.0 build 1'
gpstate:dw01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.12.0 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Feb 27 2017 20:45:12'
gpstate:dw01:gpadmin-[INFO]:-Obtaining Segment details from master...
gpstate:dw01:gpadmin-[INFO]:--------------------------------------------------------------
gpstate:dw01:gpadmin-[INFO]:--Current GPDB mirror list and status
gpstate:dw01:gpadmin-[INFO]:--Type = Group
gpstate:dw01:gpadmin-[INFO]:--------------------------------------------------------------
gpstate:dw01:gpadmin-[INFO]:-   Mirror   Datadir                        Port    Status    Data Status    
gpstate:dw01:gpadmin-[WARNING]:-sdw2     /data/gpdata/gpdatam1/gpseg0   50000   Failed                   <<<<<<<<
gpstate:dw01:gpadmin-[WARNING]:-sdw2     /data/gpdata/gpdatam1/gpseg1   50001   Failed                   <<<<<<<<
gpstate:dw01:gpadmin-[INFO]:-   sdw1     /data/gpdata/gpdatam1/gpseg2   50000   Passive   Synchronized
gpstate:dw01:gpadmin-[INFO]:-   sdw1     /data/gpdata/gpdatam1/gpseg3   50001   Passive   Synchronized
gpstate:dw01:gpadmin-[INFO]:--------------------------------------------------------------
gpstate:dw01:gpadmin-[WARNING]:-2 segment(s) configured as mirror(s) have failed

gprecoverseg options

-a (do not prompt)
Do not prompt the user for confirmation.
-B parallel_processes
The number of segments to recover in parallel. If not specified, the utility starts up to four parallel processes, depending on how many segment instances need recovery.
-d master_data_directory
Optional. The master host's data directory. If not specified, the value set for $MASTER_DATA_DIRECTORY is used.
-F (full recovery)
Optional. Perform a full copy of the active segment instance to recover the failed segment. By default, only the incremental changes that occurred while the segment was down are copied.
-i recover_config_file
Specifies the name of a file containing the details of the failed segments to recover. Each line of the file has the following format. The SPACE keyword marks where a required space goes; do not add extra spaces.
filespaceOrder=[filespace1_fsname[, filespace2_fsname[, ...]]
<failed_host_address>:<port>:<data_directory>SPACE 
<recovery_host_address>:<port>:<replication_port>:<data_directory>
[:<fselocation>:...]

Recover all failed segment instances:

gprecoverseg
After recovery completes, rebalance the Greenplum system by returning all segments to their preferred roles; first check that all segments are up and synchronized, as sketched below.
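A sketch of that sequence (gprecoverseg -r performs the rebalance):

gpstate -m          # wait until every mirror shows Synchronized
gprecoverseg -r     # rebalance all segments back to their preferred roles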

Recover any failed segment instances to newly configured spare segment hosts:

$ gprecoverseg -i recover_config_file

This example uses gprecoverseg for the repair:

[gpadmin@dw01 pg_log]$ gprecoverseg
20180420:21:50:37:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Starting gprecoverseg with args: 
20180420:21:50:37:002098 gprecoverseg:dw01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.12.0 build 1'
20180420:21:50:37:002098 gprecoverseg:dw01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.12.0 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Feb 27 2017 20:45:12'
20180420:21:50:37:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Checking if segments are ready to connect
20180420:21:50:37:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Obtaining Segment details from master...
20180420:21:50:37:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Obtaining Segment details from master...
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Greenplum instance recovery parameters
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:----------------------------------------------------------
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Recovery type              = Standard
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:----------------------------------------------------------
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Recovery 1 of 2
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:----------------------------------------------------------
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Synchronization mode                        = Incremental
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance host                        = dw04
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance address                     = sdw2
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance directory                   = /data/gpdata/gpdatam1/gpseg0
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance port                        = 50000
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance replication port            = 51000
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance host               = dw03
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance address            = sdw1
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance directory          = /data/gpdata/gpdatap1/gpseg0
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance port               = 40000
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance replication port   = 41000
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Target                             = in-place
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:----------------------------------------------------------
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Recovery 2 of 2
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:----------------------------------------------------------
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Synchronization mode                        = Incremental
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance host                        = dw04
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance address                     = sdw2
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance directory                   = /data/gpdata/gpdatam1/gpseg1
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance port                        = 50001
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Failed instance replication port            = 51001
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance host               = dw03
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance address            = sdw1
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance directory          = /data/gpdata/gpdatap1/gpseg1
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance port               = 40001
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Source instance replication port   = 41001
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:-   Recovery Target                             = in-place
20180420:21:50:38:002098 gprecoverseg:dw01:gpadmin-[INFO]:----------------------------------------------------------

Continue with segment recovery procedure Yy|Nn (default=N):
> y
20180420:21:50:40:002098 gprecoverseg:dw01:gpadmin-[INFO]:-2 segment(s) to recover
20180420:21:50:40:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Ensuring 2 failed segment(s) are stopped

20180420:21:50:41:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Ensuring that shared memory is cleaned up for stopped segments
updating flat files
20180420:21:50:46:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Updating configuration with new mirrors
20180420:21:50:46:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Updating mirrors
. 
20180420:21:50:47:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Starting mirrors
20180420:21:50:48:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait...
.... 
20180420:21:50:52:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Process results...
20180420:21:50:52:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Updating configuration to mark mirrors up
20180420:21:50:52:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Updating primaries
20180420:21:50:52:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Commencing parallel primary conversion of 2 segments, please wait...
.. 
20180420:21:50:54:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Process results...
20180420:21:50:54:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Done updating primaries
20180420:21:50:54:002098 gprecoverseg:dw01:gpadmin-[INFO]:-******************************************************************
20180420:21:50:54:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Updating segments for resynchronization is completed.
20180420:21:50:54:002098 gprecoverseg:dw01:gpadmin-[INFO]:-For segments updated successfully, resynchronization will continue in the background.
20180420:21:50:54:002098 gprecoverseg:dw01:gpadmin-[INFO]:-
20180420:21:50:54:002098 gprecoverseg:dw01:gpadmin-[INFO]:-Use  gpstate -s  to check the resynchronization progress.
20180420:21:50:54:002098 gprecoverseg:dw01:gpadmin-[INFO]:-******************************************************************

After the repair completes, check node status again:

[gpadmin@dw01 pg_log]$ gpstate -m
20180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-Starting gpstate with args: -m
20180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.12.0 build 1'
20180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.12.0 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Feb 27 2017 20:45:12'
20180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-Obtaining Segment details from master...
20180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:--------------------------------------------------------------
20180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:--Current GPDB mirror list and status
20180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:--Type = Group
20180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:--------------------------------------------------------------
20180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-   Mirror   Datadir                        Port    Status    Data Status       
20180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-   sdw2     /data/gpdata/gpdatam1/gpseg0   50000   Passive   Resynchronizing
20180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-   sdw2     /data/gpdata/gpdatam1/gpseg1   50001   Passive   Resynchronizing
20180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-   sdw1     /data/gpdata/gpdatam1/gpseg2   50000   Passive   Synchronized
20180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:-   sdw1     /data/gpdata/gpdatam1/gpseg3   50001   Passive   Synchronized
20180420:21:51:10:002350 gpstate:dw01:gpadmin-[INFO]:--------------------------------------------------------------

All segments are up and the sdw2 mirrors are resynchronizing; this typically takes a few minutes, depending on the data volume.
References:
https://gp-docs-cn.github.io/docs/utility_guide/admin_utilities/gprecoverseg.html
http://mysql.taobao.org/monthly/2016/04/03/

Fixing garbled Chinese in jira 7.x pie charts

After installing jira 7.x, Chinese characters are not displayed when exporting a pie chart, as shown below:
(screenshot: jira-font.png)
This is because the system is missing the required fonts; install them and then restart jira:

yum -y install fonts-chinese fonts-ISO8859*

If yum cannot find these packages, install fonts-chinese-3.02-12.el5.noarch.rpm and fonts-ISO8859-2-75dpi-1.0-17.1.noarch.rpm directly, then restart jira:

rpm -ivh --force --nodeps fonts*.rpm
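To verify the fonts are now visible to the system, fc-list from fontconfig can be used (assuming fontconfig is installed):

fc-list :lang=zh             # lists installed fonts that declare Chinese coverage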

Making MySQL ignore case

In MySQL, a database maps to a directory inside the data directory, and every table in a database maps to at least one file in that directory (possibly more, depending on the storage engine). Consequently, the case sensitivity of the underlying operating system determines the case sensitivity of database and table names.

On most Unix systems database and table names are case sensitive; on Windows they are not. One notable exception is Mac OS X, which is Unix-based but whose default file system type (HFS+) is case insensitive. Mac OS X also supports UFS volumes, which are case sensitive, as on Unix.

The lower_case_file_system variable reports whether the file system hosting the data directory is case sensitive for file names.

ON means file names are case insensitive; OFF means they are case sensitive.

Ignoring case is generally not recommended in production; it only suits a few special scenarios.

Case-sensitivity rules

    On Linux:
    database names and table names are strictly case sensitive;
    table aliases are strictly case sensitive;
    column names and column aliases are case insensitive in all cases;
    variable names are also strictly case sensitive;
    On Windows:
    nothing is case sensitive
    On Mac OS (non-UFS volumes):
    nothing is case sensitive

MySQL is case sensitive by default:

mysql> show variables like 'lower%';
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| lower_case_file_system | OFF   |
| lower_case_table_names | 0     |
+------------------------+-------+
2 rows in set (0.01 sec)
With lower_case_table_names = 0, MySQL uses table names exactly as given, so operations are case sensitive. 
With lower_case_table_names = 1, MySQL converts table names to lowercase before operating on them. 

Converting from case sensitive to case insensitive:

    If the existing databases and tables were created case sensitive and you want to switch to case insensitive, three steps are needed:
    1. Export the data with mysqldump.
    2. Set lower_case_table_names = 1 in my.cnf and restart the MySQL server.
    3. Import the exported data back into MySQL.

Dump all databases with mysqldump, then edit my.cnf and add one line under [mysqld]:

lower_case_table_names=1
/etc/init.d/mysql restart

Query again:

mysql> show variables like 'lower%';
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| lower_case_file_system | OFF   |
| lower_case_table_names | 1     |
+------------------------+-------+
2 rows in set (0.01 sec)

Then source the dump taken earlier back in.
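Put together, the whole conversion might look like this (a sketch; credentials and the dump path are placeholders):

mysqldump -uroot -p --all-databases > /tmp/all_dbs.sql    # 1. export everything
# 2. add lower_case_table_names=1 under [mysqld] in my.cnf, then:
/etc/init.d/mysql restart
mysql -uroot -p < /tmp/all_dbs.sql                        # 3. import the dump back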

Fixing MySQL error ERROR 1786 (HY000)

The application needs to create tables with create table XXX as select * from XXX; but in MySQL 5.7.x with GTID enabled this statement fails:

ERROR 1786 (HY000):Statement violates GTID consistency: CREATE TABLE ... SELECT.

Official explanation: https://dev.mysql.com/doc/refman/5.7/en/replication-gtids-restrictions.html

CREATE TABLE ... SELECT statements.  CREATE TABLE ... SELECT is not safe for statement-based replication. When using row-based replication, this statement is actually logged as two separate events—one for the creation of the table, and another for the insertion of rows from the source table into the new table just created. When this statement is executed within a transaction, it is possible in some cases for these two events to receive the same transaction identifier, which means that the transaction containing the inserts is skipped by the slave. Therefore, CREATE TABLE ... SELECT is not supported when using GTID-based replication.
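If GTID has to stay enabled (for example, with GTID-based replication in production), the usual workaround is to split the statement into a GTID-safe pair, sketched here with the table names from the example at the end of this section:

mysql> CREATE TABLE t1 LIKE BS_CONT;          -- copies only the table definition
mysql> INSERT INTO t1 SELECT * FROM BS_CONT;  -- copies the rows in a separate, loggable statement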

The fix here is to turn GTID mode off.
Change the parameters in my.cnf:

gtid_mode = OFF
enforce_gtid_consistency = OFF

Restart MySQL; creating the table now succeeds:

mysql> show variables like '%gtid_mode%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| gtid_mode     | OFF   |
+---------------+-------+
1 row in set (0.01 sec)

mysql> show variables like '%enforce_gtid_consistency%';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| enforce_gtid_consistency | OFF   |
+--------------------------+-------+
1 row in set (0.01 sec)

mysql> create table t1 as select * from BS_CONT;
Query OK, 0 rows affected (0.12 sec)

FastDFS configuration parameters explained: tracker.conf and storage.conf

Startup commands:

/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf
/usr/bin/fdfs_storaged /etc/fdfs/storage.conf

File location: /etc/fdfs/storage.conf

Basic configuration (without considering performance tuning):

group_name=group1                        # group (volume) this storage server belongs to
port=23000                               # storage service port
base_path=/data/fastdfs-storage          # storage data and log directory; create it in advance
                                         # two subdirectories are generated automatically:
                                         #   data - stored data
                                         #   logs - log files
store_path_count=1                       # number of store paths; must match the store_path entries
store_path0=/data/fastdfs-storage        # store path
#store_path1=/data/fastdfs-storage2
tracker_server=192.168.116.145:22122     # tracker server IP address and port (the storage server
                                         # actively connects to the tracker); on a single-machine
                                         # setup do not use 127.0.0.1, or startup fails
http.server_port=8888                    # http port
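With both daemons running, a quick smoke test can be done using the client tools shipped with FastDFS (a sketch: it assumes /etc/fdfs/client.conf points at the same tracker, and the uploaded file is a placeholder):

fdfs_monitor /etc/fdfs/client.conf                     # show tracker and storage status
fdfs_upload_file /etc/fdfs/client.conf /tmp/test.jpg   # prints the file ID on success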


storage.conf
# is this config file disabled
# false for enabled
# true for disabled
disabled=false

# the name of the group (volume) this storage server belongs to
#
# comment or remove this item for fetching from tracker server,
# in this case, use_storage_id must set to true in tracker.conf,
# and storage_ids.conf must be configed correctly.
group_name=group1

# bind an address of this host
# empty for bind all addresses of this host
bind_addr=

# if bind an address of this host when connect to other servers 
# (this storage server as a client)
# true for binding the address configed by above parameter: "bind_addr"
# false for binding any address of this host
client_bind=true

# the storage server port
port=23000

# connect timeout in seconds (applies to the socket connect() call)
# default value is 30s
connect_timeout=30

# network timeout in seconds
# default value is 30s
# if data cannot be sent or received within this time, the network call fails
network_timeout=60

# heart beat interval in seconds
# (the storage server actively sends heartbeats to the tracker server)
heart_beat_interval=30

# disk usage report interval in seconds
# (how often the storage server reports free disk space to the tracker server)
stat_report_interval=60

# the base path to store data and log files
# the base path must already exist; subdirectories are created automatically
# (note: this is no longer where uploaded files themselves are stored;
#  it used to be, but that changed in a later version)
base_path=/home/yuqing/fastdfs

# max concurrent connections the server supported
# default value is 256
# more max_connections means more memory will be used
max_connections=256

# the buff size to recv / send data
# this parameter must more than 8KB
# default value is 64KB
# since V2.00
# this is the buffer size of a queue node; memory consumed by the work queue
# = buff_size * max_connections. A larger value improves overall performance,
# but keep the total below physical memory (and below 3GB on 32-bit systems)
buff_size = 256KB

# accept thread count
# default value is 1
# since V4.07
accept_threads=1

# work thread count, should <= max_connections
# work thread deal network io
# default value is 4
# since V2.00
# introduced in V2.0; these threads handle network I/O; usually set to the number of CPUs
work_threads=4

# if disk read / write separated
##  false for mixed read and write
##  true for separated read and write
# default value is true
# since V2.00
disk_rw_separated = true

# disk reader thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
# with separated read/write, total reader threads = disk_reader_threads * store_path_count;
# with mixed read/write, total r/w threads
#   = (disk_reader_threads + disk_writer_threads) * store_path_count
disk_reader_threads = 1

# disk writer thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_writer_threads = 1

# when no entry to sync, try read binlog again after X milliseconds
# must > 0, default value is 200ms
# 0 would mean no sleep and an immediate retry, which is not recommended for CPU
# reasons; to make syncing faster, use a small value such as 10ms
sync_wait_msec=50

# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
# i.e. the pause between syncing one file and the next
sync_interval=0

# storage sync start time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# typically used to keep synchronization out of peak hours
sync_start_time=00:00

# storage sync end time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_end_time=23:59

# write to the mark file after sync N files
# default value is 500
# note: the mark file is not written if its content has not changed
write_mark_file_freq=500

# path(disk or mount point) count, default value is 1
# a storage server can store files under multiple base paths (e.g. one per disk);
# usually only one is configured
store_path_count=1

# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
store_path0=/home/yuqing/fastdfs
#store_path1=/home/yuqing/fastdfs2

# subdir_count  * subdir_count directories will be auto created under each 
# store_path (disk), value can be 1 to 256, default value is 256
# FastDFS stores files under a two-level directory scheme; on first run the
# storage server automatically creates N * N file subdirectories per store path
subdir_count_per_path=256

# tracker_server can ocur more than once, and tracker_server format is
#  "host:port", host can be hostname or ip address
# (again: the storage server actively connects to the tracker servers;
#  with multiple tracker servers, use one line per server)
tracker_server=192.168.209.121:22122

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

#unix group name to run this program, 
#not set (empty) means run by the group of current user
run_by_group=

#unix username to run this program,
#not set (empty) means run by current user
run_by_user=

# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or
# host[01-08,20-25].domain.com, for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# (does not cover connections to the built-in HTTP server;
#  multiple lines may be configured, and all of them take effect)
allow_hosts=*

# the mode of the files distributed to the data path
# 0: round robin(default) - after storing file_distribute_rotate_count files in one
#    directory, move on to the next directory
# 1: random, distributted by hash code of the file name
file_distribute_path_mode=0

# valid when file_distribute_to_path is set to 0 (round robin), 
# when the written file count reaches this number, then rotate to next path
# default value is 100
file_distribute_rotate_count=100

# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
fsync_after_written_bytes=0

# sync log buff to disk every interval seconds
# must > 0, default value is 10 seconds
# note: the storage server does not write log entries to disk immediately,
# but buffers them in memory first
sync_log_buff_interval=10

# sync binlog buff / cache to disk every interval seconds
# default value is 60 seconds
# this affects how quickly newly uploaded files are synchronized
sync_binlog_buff_interval=10

# sync storage stat info to disk every interval seconds
# default value is 300 seconds
# note: the stat file is not written if its content has not changed
sync_stat_file_interval=300

# thread stack size, should >= 512KB
# default value is 512KB
# for V1.x the storage server thread stack should be at least 512KB; for V2.0,
# 128KB or more is enough. A larger stack means each thread uses more system
# resources; on V1.x, lower this value if you need more threads (max_connections)
thread_stack_size=512KB

# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
# the value may be negative; this corresponds to store_server=2 in tracker.conf
upload_priority=10

# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# default values is empty
if_alias_prefix=



FastDHT deduplication settings:
# if check file duplicate, when set to true, use FastDHT to store file indexes
# 1 or yes: need check
# 0 or no: do not check
# default value is 0
# when enabled, an already-uploaded file is not stored again; FastDHT creates a
# symbolic link instead, saving disk space. This requires FastDHT to be installed first
check_file_duplicate=0

# file signature method for check file duplicate
## hash: four 32 bits hash code
## md5: MD5 signature
# default value is hash
# since V4.01
file_signature_method=hash

# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
key_namespace=FastDFS

# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection)
# persistent connections are worth considering if the FastDHT servers have
# enough connection capacity
keep_alive=0

# you can use "#include filename" (not include double quotes) directive to 
# load FastDHT server list, when the filename is a relative path such as 
# pure filename, the base path is the base path of current/this config file.
# must set FastDHT server list when check_file_duplicate is true / on
# please see INSTALL of FastDHT for detail
##include /home/yuqing/fastdht/conf/fdht_servers.conf


Logging settings:
# if log to access log
# default value is false
# since V4.00
use_access_log = false

# if rotate the access log every day
# default value is false
# since V4.00
# currently the access log can only be rotated once per day
rotate_access_log = false

# rotate access log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.00
# only effective when rotate_access_log is true
access_log_rotate_time=00:00

# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
# only effective when rotate_error_log is true
error_log_rotate_time=00:00

# rotate access log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_access_log_size = 0

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0

# if skip the invalid record when sync file
# default value is false
# since V4.02
file_sync_skip_invalid_record=false

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# use the ip address of this storage server if domain_name is empty,
# else this domain name will ocur in the url redirected by the tracker server
# (web server domain of this storage server, usually only for a separately
#  deployed web server, so that files on the storage server can be reached
#  via a domain name in the URL)
http.domain_name=

# the port of the web server on this storage server
http.server_port=8888

File location: /etc/fdfs/tracker.conf

# is this config file disabled
# false for enabled
# true for disabled
disabled=false

# bind an address of this host
# empty for bind all addresses of this host
# (commonly used when a server has several IPs but should serve on only one;
#  leaving it empty, i.e. all addresses, is usually fine)
bind_addr=

# the tracker server port
port=22122

# connect timeout in seconds
# default value is 30s
connect_timeout=30

# network timeout in seconds
# default value is 30s
network_timeout=60

# the base path to store the tracker data and log files
# ${base_path}
#     |__data
#     |     |__storage_groups.dat: storage group information
#     |     |__storage_servers.dat: storage server list
#     |__logs
#           |__trackerd.log: tracker server log file
base_path=/usr/local/src/fastdfs/tracker

# In the data files storage_groups.dat and storage_servers.dat, records are
# separated by newlines (\n) and fields by ASCII commas (,).
# Fields in storage_groups.dat, in order:
#   1. group_name: group name
#   2. storage_port: storage server port
# 
# storage_servers.dat records storage server details; fields in order:
#   1. group_name: group the server belongs to
#   2. ip_addr: ip address
#   3. status: status
#   4. sync_src_ip_addr: source server that syncs existing data files to this server
#   5. sync_until_timestamp: cut-off time for syncing existing data files (UNIX timestamp)
#   6. stat.total_upload_count: number of uploads
#   7. stat.success_upload_count: number of successful uploads
#   8. stat.total_set_meta_count: number of meta data changes
#   9. stat.success_set_meta_count: number of successful meta data changes
#   10. stat.total_delete_count: number of deletes
#   11. stat.success_delete_count: number of successful deletes
#   12. stat.total_download_count: number of downloads
#   13. stat.success_download_count: number of successful downloads
#   14. stat.total_get_meta_count: number of meta data fetches
#   15. stat.success_get_meta_count: number of successful meta data fetches
#   16. stat.last_source_update: time of the last source update (from a client)
#   17. stat.last_sync_update: time of the last sync update (synced from another storage server)

# max concurrent connections this server supported
max_connections=100

# accept thread count
# default value is 1
# since V4.07
accept_threads=1

# work thread count, should <= max_connections
# default value is 4
# since V2.00
# usually set to the number of CPUs
work_threads=8

# the method of selecting group to upload files
# (bypassed if the application layer specifies a fixed group to upload to)
# 0: round robin
# 1: specify group
# 2: load balance, select the max free space group to upload file
store_lookup=0

# which group to upload file
# when store_lookup set to 1, must set store_group to the group name
# (has no effect if the application layer specifies a group,
#  or if store_lookup is 0 or 2)
store_group=group2

# which storage server to upload file
# once a file is uploaded, that storage server becomes the file's source and
# pushes the file to the other storage servers in the same group for synchronization
# 0: round robin (default)
# 1: the first server order by ip address (the smallest IP)
# 2: the first server order by priority (the minimal; the priority is set on the
#    storage server via upload_priority, and a smaller value means higher priority)
store_server=0

# which path(means disk or mount point) of the storage server to upload file
# (a storage server can have multiple base paths, e.g. one per disk)
# 0: round robin - the directories take turns storing files
# 2: load balance, select the max free space path to upload file
#    (note: free space is dynamic, so the chosen directory or disk may change over time)
store_path=0

# which storage server to download file
# 0: round robin (default) - any storage server holding the file may serve the download
# 1: the source storage server which the current file uploaded to
download_server=0

# reserved storage space for system or other applications.
# if the free(available) space of any stoarge server in 
# a group <= reserved_storage_space, 
# no file can be uploaded to this group.
# bytes unit can be one of follows:
### G or g for gigabyte(GB)
### M or m for megabyte(MB)
### K or k for kilobyte(KB)
### no unit for byte(B)
### XX.XX% as ratio such as reserved_storage_space = 10%
# (absolute values or percentages may be used; percentages are supported since V4.
#  if servers in a group have different disk sizes, the smallest one is the reference)
reserved_storage_space = 10%

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

#unix group name to run this program, 
#not set (empty) means run by the group of current user
run_by_group=root

#unix username to run this program,
#not set (empty) means run by current user
run_by_user=root

# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or
# host[01-08,20-25].domain.com, for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# (the IP range allowed to connect to this tracker server; affects all connection
#  types, including clients and storage servers)
allow_hosts=*

# sync log buff to disk every interval seconds
# default value is 10 seconds
# note: the tracker server does not write log entries to disk immediately,
# but buffers them in memory first
sync_log_buff_interval = 10

# check storage server alive interval seconds
# storage servers periodically send heartbeats to the tracker server; if the
# tracker receives no heartbeat from a storage server within one
# check_active_interval, it considers that storage server offline. This value
# must therefore be larger than the storage server's heartbeat interval,
# usually 2-3 times as large.
check_active_interval = 120

# thread stack size, should >= 64KB
# default value is 64KB
# a larger stack means each thread uses more system resources; lower this value
# if more threads need to be started
thread_stack_size = 64KB

# auto adjust when the ip address of the storage server changed
# default value is true
# note: the automatic adjustment only completes when the storage server process restarts
storage_ip_changed_auto_adjust = true

# =========================== synchronization ======================================
# storage sync file max delay seconds
# default value is 86400 seconds (one day)
# since V2.00
# introduced in V2.0: the maximum delay for file synchronization between storage
# servers, default one day; adjust to your situation.
# note: this does not affect the sync process itself; it is only a threshold
# (rule of thumb) used at download time to judge whether a file has been synced
storage_sync_file_max_delay = 86400

# the max time of storage sync a file
# default value is 300 seconds
# since V2.00
# introduced in V2.0: the maximum time a storage server may take to sync one file,
# default 300s (5 minutes). As above, this is only a download-time threshold
storage_sync_file_max_time = 300

# =========================== trunk and slot ============================
# if use a trunk file to store several small files
# default value is false
# since V3.00
# introduced in V3.0: whether to merge small files into trunk files; off by default
use_trunk_file = false 

# the min slot size, should <= 4KB
# default value is 256 bytes
# since V3.00
# minimum allocation inside a trunk file: even a 16-byte file is
# allocated slot_min_size bytes
slot_min_size = 256

# the max slot size, should > slot_min_size
# store the upload file to trunk file when it's size <=  this value
# default value is 16MB
# since V3.00
# only files no larger than this are merged; bigger files are stored as
# standalone files (i.e. not in trunk files)
slot_max_size = 16MB

# the trunk file size, should >= 4MB
# default value is 64MB
# since V3.00
trunk_file_size = 64MB

# if create trunk file advancely
# default value is false
# since V3.06
# only when this is true do the three trunk_create_file_* parameters below take effect
trunk_create_file_advance = false

# the time base to create trunk file
# the time format: HH:MM
# default value is 02:00
# since V3.06
# base time for pre-creating trunk files; 02:00 means the first run is at 2 a.m.
trunk_create_file_time_base = 02:00

# the interval of create trunk file, unit: second
# default value is 38400 (one day)
# since V3.06
# set to 86400 to pre-create once a day
trunk_create_file_interval = 86400

# the threshold to create trunk file
# when the free trunk file size less than the threshold, will create 
# the trunk files
# default value is 0
# since V3.06
# target amount of free trunk space when pre-creating; e.g. with 20G configured
# and 4GB currently free, only 16GB of trunk files need to be created
trunk_create_file_space_threshold = 20G

# if check trunk space occupying when loading trunk free spaces
# the occupied spaces will be ignored
# default value is false
# since V3.09
# NOTICE: set this parameter to true will slow the loading of trunk spaces 
# when startup. you should set this parameter to true when neccessary.
trunk_init_check_occupying = false

# if ignore storage_trunk.dat, reload from trunk binlog
# default value is false
# since V3.10
# set to true once for version upgrade when your version less than V3.10
# by default FastDFS loads free trunk space from the snapshot file
# storage_trunk.dat, whose first line records the trunk binlog offset
# from which loading resumes
trunk_init_reload_from_binlog = false

# the min interval for compressing the trunk binlog file
# unit: second
# default value is 0, 0 means never compress
# FastDFS compress the trunk binlog when trunk init and trunk destroy
# recommand to set this parameter to 86400 (one day)
# since V5.01
trunk_compress_binlog_min_interval = 0

# if use storage ID instead of IP address
# default value is false
# since V4.00
use_storage_id = false

# specify storage ids filename, can use relative or absolute path
# since V4.00
# only needed when use_storage_id is true; the file maps group name, server ID
# and IP address - see the example conf/storage_ids.conf in the source tree
storage_ids_filename = storage_ids.conf

# id type of the storage server in the filename, values are:
## ip: the ip address of the storage server
## id: the server id of the storage server
# this paramter is valid only when use_storage_id set to true
# default value is ip
# since V4.03
id_type_in_filename = ip

# if store slave file use symbol link
# default value is false
# since V4.01
# if true, a slave file occupies two entries: the original file and a symlink to it
store_slave_file_use_link = false

# if rotate the error log every day
# default value is false
# since V4.02
# currently the error log can only be rotated once per day
rotate_error_log = true

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
# only effective when rotate_error_log is true
error_log_rotate_time=00:00

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# =========================== HTTP =================================
# HTTP port on this tracker server
http.server_port=8080

# check storage HTTP server alive interval seconds
# <= 0 for never check
# default value is 30
http.check_alive_interval=30

# check storage HTTP server alive type, values are:
#   tcp : connect to the storge server with HTTP port only, 
#        do not request and get response
#   http: storage check alive url must return http status 200
# default value is tcp
http.check_alive_type=tcp

# check storage HTTP server alive uri/url
# NOTE: storage embed HTTP server support uri: /status.html
http.check_alive_uri=/status.html

Configuring the SSH service to listen on multiple ports

To make sshd listen on multiple ports, edit sshd_config and add ListenAddress entries; ListenAddress specifies the network addresses sshd listens on (all local addresses by default).
(screenshot: opensshimg.jpg)
The following formats can be used:

ListenAddress host|IPv4_addr|IPv6_addr
ListenAddress host|IPv4_addr:port
ListenAddress [host|IPv6_addr]:port

If port is not specified, the value of the Port directive is used. Multiple ListenAddress directives may be given to listen on several addresses.

vi /etc/ssh/sshd_config
and add:
ListenAddress 0.0.0.0:22
ListenAddress 0.0.0.0:18929
ListenAddress 0.0.0.0:10761

sshd now listens on ports 22, 18929 and 10761 (the port from the Port directive must be listed as well).

/etc/init.d/sshd restart

Restart the service for the change to take effect.
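The change can be validated before and after the restart (a sketch):

/usr/sbin/sshd -t               # no output means the configuration parses cleanly
netstat -nltp | grep sshd       # should show listeners on 22, 18929 and 10761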

Installing and configuring a VNC server and desktop environment on CentOS 7.4

The steps to install and configure a VNC server and desktop on CentOS 7.4 are as follows:
Install VNC and the GNOME desktop:

yum groupinstall "GNOME Desktop" "Graphical Administration Tools" -y
yum groupinstall "X Window System" "Desktop" -y
yum install tigervnc tigervnc-server -y

Configure VNC
Copy the /lib/systemd/system/vncserver@.service file:

cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service

In vncserver@:1.service, change <USER> to the account the VNC client will log in as (root here); PIDFile needs adjusting too. The file content is as follows:

[Unit]
Description=Remote desktop service (VNC)
After=syslog.target network.target

[Service]
Type=forking
User=root

ExecStartPre=-/usr/bin/vncserver -kill %i
ExecStart=/usr/bin/vncserver %i
PIDFile=/root/.vnc/%H%i.pid
ExecStop=-/usr/bin/vncserver -kill %i

[Install]
WantedBy=multi-user.target

Set the VNC server password:

vncpasswd

Start the VNC server and enable it at boot:

systemctl start vncserver@:1.service
systemctl enable vncserver@:1.service

If startup fails with:

Job for vncserver@:1.service failed because the control process exited with error code. See 
"systemctl status vncserver@:1.service" and "journalctl -xe" for details.

simply delete the /tmp/.X11-unix/ directory and start the service again:

rm /tmp/.X11-unix/ -rf

CentOS 7 firewall rules:

firewall-cmd --permanent --add-service="vnc-server" --zone="public"
firewall-cmd --reload
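Display :1 maps to TCP port 5901 (5900 + display number), so the desktop can then be reached from any VNC client, for example (the server IP is a placeholder):

vncviewer 192.168.1.100:1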

Qingming: high wind and driving rain

The years pass without a sound; time flies;

Another Qingming Festival: a cold snap, rain, a biting wind, temperatures plunging, a sleepless night;

So many memories came back......
(photo: qingming.jpg)

Life: being born is easy, living is not;

What a lousy world;

Who doesn't cry that they can't go on, and then goes on living anyway......

For whom? To break out of the social stratum we are stuck in;

Aren't we all like that?

Installing the greenplum-cc-web 3.2.0 monitoring tool

The greenplum-db database has already been deployed on every node, and the database and all nodes start and are reachable; see the installation guide: https://www.unixso.com/PostgreSQL/GreenPlum.html
1. Unpack and install

unzip greenplum-cc-web-3.2.0-LINUX-x86_64.zip
./greenplum-cc-web-3.2.0-LINUX-x86_64.bin
Answer yes to every console prompt during installation.

greenplum-cc-web installs under /usr/local/ by default.
Add the following to /home/gpadmin/.bash_profile:

source /usr/local/greenplum-cc-web/gpcc_path.sh

Give the gpadmin user ownership:

chown -R gpadmin:gpadmin /usr/local/greenplum-cc-web-3.2.0
chown -R gpadmin:gpadmin /usr/local/greenplum-cc-web

Install greenplum-cc-web on the other three machines (dw234 is the host list file created during the database install; cat shows its contents):

su - gpadmin
cat /usr/local/greenplum-db/gpconfig/dw234
smdw
sdw1
sdw2
gpccinstall -f /usr/local/greenplum-db/gpconfig/dw234

Add a login entry to /data/gpmaster/gpseg-1/pg_hba.conf (without it, creating the gpcc instance may fail):

host  all   all   ::1/128  trust

Restart GP:

gpstop -r

Set up the Command Center Console:

[gpadmin@dw01 ~]$ gpcmdr --setup

The instance name identifies the GPDB cluster this Greenplum Command Center web UI monitors and controls.
Instance names can contain letters, digits, and underscores and are not case sensitive.

Please enter the instance name
gpmon_ys

The display name is shown as the "server" in the web interface and does not need to be
a hostname.Display names can contain letters, digits, and underscores and ARE case sensitive.

Please enter the display name for this instance:(Press ENTER to use instance name)
gpmon_db

A GPCC instance can be set to manage and monitor a remote Greenplum Database.
Notice: remote mode will disable these functionalities:
1. Standby host for GPCC.
2. Workload Manager UI.

Is the master host for the Greenplum Database remote? Yy/Nn (default=N)
n

What port does the Greenplum Database use? (default=5432)


Enable kerberos login for this instance? Yy/Nn (default=N)
n

Creating instance schema in GPDB.  Please wait ...

The Greenplum Command Center runs a small web server for the UI and web API.
This web server by default runs on port 28080, but you may specify any available port.

What port would you like the new web server to use for this instance? (default=28080)


Users logging in to the Command Center must provide database user
credentials. In order to protect user names and passwords, it is recommended
that SSL be enabled.

Enable SSL for the Web API Yy/Nn (default=N)
n

Copy the instance to a standby master host Yy/Nn (default=Y)
y

What is the hostname of the standby master host?
smdw
standby is smdw
Done writing webserver configuration to  /usr/local/greenplum-cc-web/instances/gpmon_ys/webserver/conf/app.conf
Copying instance gpmon_ys to host smdw ...
=>Cleanup standby host's instance gpmon_ys if any ...
=>Copying the instance folder to standby host ...

Creating instance at /usr/local/greenplum-cc-web/instances/gpmon_ys

Greenplum Command Center UI configuration is now complete.

To change parameters of this instance, edit the configuration file
at /usr/local/greenplum-cc-web/instances/gpmon_ys/webserver/conf/app.conf

To configure multi-cluster view, edit the configuration file at /usr/local/greenplum-cc-web/instances/gpmon_ys/conf/clusters.conf

The web UI for this instance is located at http://dw01:28080

You can now start the web UI for this instance by running: gpcmdr --start gpmon_ys

Start the instance:

gpcmdr --start gpmon_ys
Starting instance gpmon_ys ...
Greenplum Command Center UI for instance 'gpmon_ys' - [RUNNING on PORT: 28080, pid 11984]
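Day-to-day control of the instance follows the same pattern; to the best of my knowledge gpcmdr in this version also accepts these subcommands (a sketch):

gpcmdr --status gpmon_ys     # show whether the web UI is running
gpcmdr --restart gpmon_ys    # restart the web UI
gpcmdr --stop gpmon_ys       # stop the web UI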

To access the console, open a browser and go to http://master[or your host's IP address]:28080; if the following page appears, congratulations, your greenplum-cc-web installation is complete!
(screenshot: web1.png)
The user name is the default gpmon;
the password is the one set when installing the Performance Monitor. After logging in, the UI looks like this:
(screenshots: web2.png, web3.png)
Official installation guide:
https://gpcc.docs.pivotal.io/320/gpcc/topics/setup-software.html

Deploying a GreenPlum 4.3.12 cluster on UCloud

1. Environment and versions:
OS: CentOS 6.5
Database versions:
greenplum-db-4.3.12.0-rhel5-x86_64
greenplum-cc-web-3.2.0-LINUX-x86_64

Node information:

Hostname     IP address      Role
mdw          172.28.1.11     master
smdw         172.28.1.12     standby master
sdw1         172.28.1.13     segment node 1
sdw2         172.28.1.14     segment node 2

2. System initialization (on all four machines)
2.1 Kernel parameters in /etc/sysctl.conf

kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 250 512000 100 2048
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.arp_filter = 1
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 1025 65535
net.core.netdev_max_backlog = 10000
vm.overcommit_memory = 2
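Apply the new kernel parameters without a reboot:

sysctl -p    # reloads /etc/sysctl.conf and prints the values it applied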

2.2 File handle limits in /etc/security/limits.conf

* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
* soft core unlimited

2.3 Disable selinux and iptables

Set SELINUX=disabled in /etc/selinux/config
Disable iptables:
service iptables stop
chkconfig iptables off

2.4 Tune the disk I/O scheduler
The Linux disk I/O scheduler supports several policies; the default is cfq, but GreenPlum recommends deadline.
Check the I/O scheduling policy of the disks (on an unchanged system the default shows as [cfq]):

Check the partition layout:
[root@dw01 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        20G  3.3G   16G  18% /
tmpfs            12G     0   12G   0% /dev/shm
/dev/vdb        493G  311M  467G   1% /data
Check the I/O scheduler of the / partition:
[root@dw01 ~]# cat /sys/block/vda/queue/scheduler  
noop anticipatory [deadline] cfq 
Check the I/O scheduler of the /data partition:
[root@dw01 ~]# cat /sys/block/vdb/queue/scheduler 
noop anticipatory [deadline] cfq 
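The scheduler can also be switched at runtime, though such a change does not survive a reboot (hence the grub change below):

echo deadline > /sys/block/vdb/queue/scheduler
cat /sys/block/vdb/queue/scheduler    # deadline should now appear in brackets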

Check the running kernel:

[root@dw01 ~]# uname -r
2.6.32-696.18.7.1.el6.ucloud.x86_64

Edit the boot configuration: in /boot/grub/menu.lst, append elevator=deadline to the end of the kernel line.

 /boot/grub/menu.lst 
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You do not have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /, eg.
#          root (hd0,0)
#          kernel /boot/vmlinuz-version ro root=/dev/vda1
#          initrd /boot/initrd-[generic-]version.img
#boot=/dev/vda
default=0
timeout=1
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.32-696.18.7.1.el6.ucloud.x86_64)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.32-696.18.7.1.el6.ucloud.x86_64 ro root=/dev/vda1 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=512M-2G:64M,2G-4G:128M,4G-:192M  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM quiet console=tty1 console=ttyS0,115200n8 elevator=deadline
        initrd /boot/initramfs-2.6.32-696.18.7.1.el6.ucloud.x86_64.img
title CentOS (2.6.32-696.18.7.el6.x86_64.debug)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.32-696.18.7.el6.x86_64.debug ro root=/dev/vda1 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=512M-2G:64M,2G-4G:128M,4G-:192M  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM quiet console=tty1 console=ttyS0,115200n8
        initrd /boot/initramfs-2.6.32-696.18.7.el6.x86_64.debug.img
title CentOS (2.6.32-431.11.29.el6.ucloud.x86_64)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.32-431.11.29.el6.ucloud.x86_64 ro root=/dev/vda1 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=512M-2G:64M,2G-4G:128M,4G-:192M  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM quiet console=tty1 console=ttyS0,115200n8
        initrd /boot/initramfs-2.6.32-431.11.29.el6.ucloud.x86_64.img
title CentOS (2.6.32-431.11.25.el6.ucloud.x86_64)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.32-431.11.25.el6.ucloud.x86_64 ro root=/dev/vda1 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=512M-2G:64M,2G-4G:128M,4G-:192M  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM quiet console=tty1 console=ttyS0,115200n8
        initrd /boot/initramfs-2.6.32-431.11.25.el6.ucloud.x86_64.img

Check the disks' read-ahead value (the default is 256 sectors):

[root@dw01 ~]# blockdev --getra /dev/vda1 
256
[root@dw01 ~]# blockdev --getra /dev/vdb
256

Change it to 65536:

blockdev --setra 65536 /dev/vda1
blockdev --setra 65536 /dev/vdb

This must also be written into the startup file /etc/rc.d/rc.local, or the setting is lost on reboot:

echo "blockdev --setra 65536 /dev/vda1" >> /etc/rc.d/rc.local 
echo "blockdev --setra 65536 /dev/vdb" >> /etc/rc.d/rc.local 

2.5 Set the hostname

[root@dw01 ~]# vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=dw01
NOZEROCONF=yes

2.6 Edit the hosts file

 /etc/hosts 
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

172.28.1.11   dw01 mdw
172.28.1.12   dw02 smdw
172.28.1.13   dw03 sdw1
172.28.1.14   dw04 sdw2

The settings above must be applied on all four machines; reboot for them to take effect.

2.7 Create the host list files
a. Create a file containing the hostnames of all Greenplum hosts:

[gpadmin@dw01 gpconfig]$ cat host_file 
mdw
smdw
sdw1
sdw2

b. Create a file containing the hostnames of the standby master and segment nodes 1 and 2:

[gpadmin@dw01 gpconfig]$ cat dw234 
smdw
sdw1
sdw2

c. Create a file containing the hostnames of segment nodes 1 and 2:

[gpadmin@dw01 gpconfig]$ cat seg_hosts 
sdw1
sdw2

3. Download the GreenPlum packages
Download the build matching your OS version:
https://network.pivotal.io/products/pivotal-gpdb#/releases/4540/file_groups/560
Chosen here:
Greenplum Database 4.3.12.0 for RedHat Enterprise Linux 5, 6 and 7
122 MB, 4.3.12.0
Note: a Pivotal Network account login is required to download.

md5sum *zip
ee90c7a35c706404840be38ba1be557b  greenplum-cc-web-3.2.0-LINUX-x86_64.zip
edaa67d561653fbf81431c968e5f297f  greenplum-db-4.3.12.0-rhel5-x86_64.zip

4. Unpack and install

unzip greenplum-db-4.3.12.0-rhel5-x86_64.zip
./greenplum-db-4.3.12.0-rhel5-x86_64.bin

Answer yes and press Enter at the prompts; by default the software installs into /usr/local/greenplum-db-4.3.12.0 and the symlink greenplum-db -> greenplum-db-4.3.12.0 is created.
Add the following to /home/gpadmin/.bash_profile:

source /usr/local/greenplum-db/greenplum_path.sh
export GPHOME=/usr/local/greenplum-db
export MASTER_DATA_DIRECTORY=/data/gpmaster/gpseg-1

5. Passwordless SSH
Set up passwordless login between all four machines; see https://www.unixso.com/Linux/ssh-key.html

source  /usr/local/greenplum-db/greenplum_path.sh
gpssh-exkeys -f /usr/local/greenplum-db/gpconfig/host_file     # exchange SSH keys across all servers

6. Create the gpinitsystem_config file
On the master:

mkdir -p /usr/local/greenplum-db/gpconfig
Create /usr/local/greenplum-db/gpconfig/gpinitsystem_config with the following content:
ARRAY_NAME="EMC Greenplum DW"
SEG_PREFIX=gpseg
PORT_BASE=40000
declare -a DATA_DIRECTORY=(/data/gpdata/gpdatap1 /data/gpdata/gpdatap1)
MASTER_HOSTNAME=dw01
MASTER_DIRECTORY=/data/gpmaster
MASTER_PORT=5432
TRUSTED_SHELL=ssh
CHECK_POINT_SEGMENTS=8
ENCODING=UNICODE
MIRROR_PORT_BASE=50000
REPLICATION_PORT_BASE=41000
MIRROR_REPLICATION_PORT_BASE=51000
declare -a MIRROR_DATA_DIRECTORY=(/data/gpdata/gpdatam1 /data/gpdata/gpdatam1)
MACHINE_LIST_FILE=/usr/local/greenplum-db/gpconfig/seg_hosts

7. Distribute the install; create users, directories and permissions
Distribute the package (run on the master):

cd /usr/local
scp -r greenplum-db-4.3.12.0/ dw02:/usr/local/
scp -r greenplum-db-4.3.12.0/ dw03:/usr/local/
scp -r greenplum-db-4.3.12.0/ dw04:/usr/local/
gpssh -f /usr/local/greenplum-db/gpconfig/host_file
Note: command history unsupported on this machine ...
=>
Run the following commands in turn (the groups must exist before useradd -g can use them):
groupadd gpadmin
groupadd gpmon
useradd -g gpadmin gpadmin
useradd -g gpmon gpmon
echo 123456 | passwd gpadmin --stdin
echo 123456 | passwd gpmon --stdin
mkdir -p /data/gpdata/gpdatap1
mkdir -p /data/gpdata/gpdatam1
mkdir -p /data/gpmaster
chown -R gpadmin.gpadmin /data/gpdata/
chown -R gpadmin.gpadmin /data/gpmaster/
chown -R gpadmin.gpadmin /usr/local/greenplum-db-4.3.12.0/

8. System checks

source /usr/local/greenplum-db/greenplum_path.sh
gpcheck -f /usr/local/greenplum-db/gpconfig/host_file -m mdw -s smdw          
20180406:14:39:12:026168 gpcheck:dw01:root-[INFO]:-dedupe hostnames
20180406:14:39:13:026168 gpcheck:dw01:root-[INFO]:-Detected platform: Generic Linux Cluster
20180406:14:39:13:026168 gpcheck:dw01:root-[INFO]:-generate data on servers
20180406:14:39:13:026168 gpcheck:dw01:root-[INFO]:-copy data files from servers
20180406:14:39:13:026168 gpcheck:dw01:root-[INFO]:-delete remote tmp files
20180406:14:39:13:026168 gpcheck:dw01:root-[INFO]:-Using gpcheck config file: /usr/local/greenplum-db/./etc/gpcheck.cnf
20180406:14:39:13:026168 gpcheck:dw01:root-[INFO]:-GPCHECK_NORMAL
20180406:14:39:13:026168 gpcheck:dw01:root-[INFO]:-gpcheck completing...

If gpcheck reports any errors, adjust the system parameters as instructed and run the check again.
Check network performance:

gpcheckperf -f host_file -r N -d /tmp/ > checknet.out
cat checknet.out
-------------------
--  NETPERF TEST
-------------------

====================
==  RESULT
====================
Netperf bisection bandwidth test
mdw -> smdw = 287.060000
sdw1 -> sdw2 = 279.900000
smdw -> mdw = 299.840000
sdw2 -> sdw1 = 302.450000

Summary:
sum = 1169.25 MB/sec
min = 279.90 MB/sec
max = 302.45 MB/sec
avg = 292.31 MB/sec
median = 299.84 MB/sec

9. Initialize the database
seg_hosts is the list of segment servers, one hostname per line; smdw is the standby master's hostname. Run under the gpadmin account:

su - gpadmin
cd /usr/local/greenplum-db/gpconfig/
gpinitsystem -c /usr/local/greenplum-db/gpconfig/gpinitsystem_config  -h seg_hosts -s smdw

If anything fails, consult the official docs: http://gpdb.docs.pivotal.io/43120/install_guide/init_gpdb.html
After installation, log in:

[gpadmin@dw01]$ psql -d postgres
psql (8.2.15)
Type "help" for help.

postgres=# help
You are using psql, the command-line interface to PostgreSQL.
Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit
postgres=#
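
Once connected, the segment layout can also be verified from the gp_segment_configuration catalog table, for example:

psql -d postgres -c "select dbid, content, role, preferred_role, mode, status, hostname, port from gp_segment_configuration order by content;"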

Check the processes on each host:

gpssh -f /usr/local/greenplum-db/gpconfig/host_file 
Note: command history unsupported on this machine ...
=> netstat -nltp | grep postgres
[sdw2] (Not all processes could be identified, non-owned process info
[sdw2]  will not be shown, you would have to be root to see it all.)
[sdw2] tcp        0      0 0.0.0.0:40000               0.0.0.0:*                   LISTEN      10601/postgres      
[sdw2] tcp        0      0 0.0.0.0:40001               0.0.0.0:*                   LISTEN      10602/postgres      
[sdw2] tcp        0      0 172.28.64.190:41000         0.0.0.0:*                   LISTEN      10636/postgres      
[sdw2] tcp        0      0 172.28.64.190:41001         0.0.0.0:*                   LISTEN      10641/postgres      
[sdw2] tcp        0      0 0.0.0.0:50000               0.0.0.0:*                   LISTEN      10603/postgres      
[sdw2] tcp        0      0 0.0.0.0:50001               0.0.0.0:*                   LISTEN      10600/postgres      
[sdw2] tcp        0      0 172.28.64.190:51000         0.0.0.0:*                   LISTEN      10621/postgres      
[sdw2] tcp        0      0 172.28.64.190:51001         0.0.0.0:*                   LISTEN      10620/postgres      
[sdw2] tcp        0      0 :::40000                    :::*                        LISTEN      10601/postgres      
[sdw2] tcp        0      0 :::40001                    :::*                        LISTEN      10602/postgres      
[sdw2] tcp        0      0 :::50000                    :::*                        LISTEN      10603/postgres      
[sdw2] tcp        0      0 :::50001                    :::*                        LISTEN      10600/postgres      
[smdw] (Not all processes could be identified, non-owned process info
[smdw]  will not be shown, you would have to be root to see it all.)
[smdw] tcp        0      0 0.0.0.0:5432                0.0.0.0:*                   LISTEN      8501/postgres       
[smdw] tcp        0      0 :::5432                     :::*                        LISTEN      8501/postgres       
[ mdw] (Not all processes could be identified, non-owned process info
[ mdw]  will not be shown, you would have to be root to see it all.)
[ mdw] tcp        0      0 0.0.0.0:5432                0.0.0.0:*                   LISTEN      17489/postgres      
[ mdw] tcp        0      0 :::5285                     :::*                        LISTEN      17496/postgres      
[ mdw] tcp        0      0 :::5432                     :::*                        LISTEN      17489/postgres      
[sdw1] (Not all processes could be identified, non-owned process info
[sdw1]  will not be shown, you would have to be root to see it all.)
[sdw1] tcp        0      0 0.0.0.0:40000               0.0.0.0:*                   LISTEN      10662/postgres      
[sdw1] tcp        0      0 0.0.0.0:40001               0.0.0.0:*                   LISTEN      10660/postgres      
[sdw1] tcp        0      0 172.28.56.68:41000          0.0.0.0:*                   LISTEN      10685/postgres      
[sdw1] tcp        0      0 172.28.56.68:41001          0.0.0.0:*                   LISTEN      10684/postgres      
[sdw1] tcp        0      0 0.0.0.0:50000               0.0.0.0:*                   LISTEN      10661/postgres      
[sdw1] tcp        0      0 0.0.0.0:50001               0.0.0.0:*                   LISTEN      10663/postgres      
[sdw1] tcp        0      0 172.28.56.68:51000          0.0.0.0:*                   LISTEN      10693/postgres      
[sdw1] tcp        0      0 172.28.56.68:51001          0.0.0.0:*                   LISTEN      10694/postgres      
[sdw1] tcp        0      0 :::40000                    :::*                        LISTEN      10662/postgres      
[sdw1] tcp        0      0 :::40001                    :::*                        LISTEN      10660/postgres      
[sdw1] tcp        0      0 :::50000                    :::*                        LISTEN      10661/postgres      
[sdw1] tcp        0      0 :::50001                    :::*                        LISTEN      10663/postgres      
=> 

10. Install the Performance Monitor data collection agent

source /usr/local/greenplum-db/greenplum_path.sh
gpperfmon_install --enable --password 123456 --port 5432
gpstop -r   # restart GP for the change to take effect
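
After the restart, you can confirm the agents are enabled by checking the relevant server parameter (gpconfig -s prints a parameter's current value):

gpconfig -s gp_enable_gpperfmon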

Check the process:

ps -ef |grep gpmmon
gpadmin  17498 17489  0 Apr04 ?        00:01:04 /usr/local/greenplum-db-4.3.12.0/bin/gpmmon -D /data/gpmaster/gpseg-1/gpperfmon/conf/gpperfmon.conf -p 5432

Check the ports:

netstat -nltp | grep gp
tcp        0      0 0.0.0.0:28080               0.0.0.0:*                   LISTEN      11984/gpmonws             
tcp        0      0 :::8888                     :::*                        LISTEN      17611/gpsmon 

Verify that monitoring data is being written to the database:

psql -d gpperfmon -c 'select * from system_now;'
        ctime        | hostname |  mem_total  |  mem_used  | mem_actual_used | mem_actual_free | swap_total | swap_used | swap_page_in | swap_page_out | c
pu_user | cpu_sys | cpu_idle | load0 | load1 | load2 | quantum | disk_ro_rate | disk_wo_rate | disk_rb_rate | disk_wb_rate | net_rp_rate | net_wp_rate | n
et_rb_rate | net_wb_rate 
---------------------+----------+-------------+------------+-----------------+-----------------+------------+-----------+--------------+---------------+--
--------+---------+----------+-------+-------+-------+---------+--------------+--------------+--------------+--------------+-------------+-------------+--
-----------+-------------
 2018-04-06 14:59:15 | dw01     | 25132879872 | 1774948352 |       391598080 |     24741281792 |  536866816 |         0 |            0 |             0 |  
   0.22 |    0.34 |    99.44 |  0.02 |  0.04 |     0 |      15 |            0 |            3 |            0 |        47850 |          40 |          43 |  
      9267 |       21458
 2018-04-06 14:59:15 | dw02     |  8188444672 |  963731456 |       157970432 |      8030474240 |  536866816 |         0 |            0 |             0 |  
   0.13 |    0.32 |    99.55 |  0.05 |  0.03 |     0 |      15 |            0 |            2 |            0 |         9788 |           3 |           3 |  
       331 |         572
 2018-04-06 14:59:15 | dw03     |  8188444672 | 2338099200 |       239792128 |      7948652544 |  536866816 |         0 |            0 |             0 |  
   0.28 |    0.42 |    99.27 |     0 |     0 |     0 |      15 |            0 |            9 |            0 |       178355 |         169 |          66 |  
    174387 |      154813
 2018-04-06 14:59:15 | dw04     |  8188444672 | 2338926592 |       242188288 |      7946256384 |  536866816 |         0 |            0 |             0 |  
   0.28 |    0.61 |    99.07 |     0 |     0 |     0 |      15 |            0 |            8 |            0 |       175088 |         165 |          65 |  
    167569 |      162326
(4 rows)
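
Because system_now is very wide and wraps badly in the terminal, a narrower projection of the same table is easier to read, for example:

psql -d gpperfmon -c "select ctime, hostname, cpu_user, cpu_sys, cpu_idle, load0 from system_now;"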

This completes the GreenPlum 4.3.12 cluster installation.

GreenPlum start, stop, and status commands explained

Starting GreenPlum:
Run gpstart on the master host to start the Greenplum database:

gpstart

Commonly used startup parameters:
-a: do not prompt for Y confirmation; start the database directly.
-m: start only the master, not the segments; typically used during maintenance.
-y: start only the primary master; do not start the standby master.
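
A typical maintenance workflow built on these options, as an illustrative sketch: start only the master, connect in utility mode, then stop the master again.

gpstart -m
PGOPTIONS="-c gp_session_role=utility" psql -d postgres    # utility-mode connection to the master only
gpstop -m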

Stopping or restarting GreenPlum:
Never use kill to terminate any Postgres process; kill -9 or kill -11 may corrupt the database and hinder root-cause analysis.
Run gpstop on the master host to stop the Greenplum database:

gpstop
Commonly used parameters:
-a: shut down directly without the Y confirmation prompt.

-m: stop only the master; generally used for maintenance mode.

-r: restart the database.

-u: reload configuration files so parameter changes take effect. Changes to pg_hba.conf and to runtime parameters in the master's postgresql.conf are picked up by active sessions when they reconnect; many server configuration parameters still require a full restart (gpstop -r) to become active.

-M: set the shutdown level; there are three: smart, fast, and immediate.
gpstop -M smart: the default level; connected sessions receive a shutdown warning and new connections are not allowed.
gpstop -M fast: fast mode; all in-progress transactions are interrupted and rolled back.
gpstop -M immediate: forces the database down immediately; this is an inconsistent shutdown mode and is not recommended.
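
Typical invocations combining the options above (illustrative):

gpstop -u            # reload pg_hba.conf and runtime parameters without a restart
gpstop -a -M fast    # fast shutdown without the confirmation prompt
gpstop -r            # full restart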

Checking GreenPlum status: gpstate

Commonly used parameters:

-s: detailed status information.

-m: mirror instance details.

-f: standby master details.

-e: segments with mirror status problems.

-i: version information.
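
When diagnosing segment problems these options are typically combined, for example:

gpstate -e    # show segments with mirror status issues
gpstate -m    # list mirror instances and their sync state
gpstate -f    # check the standby master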

Reference: https://gp-docs-cn.github.io/docs/admin_guide/managing/startstop.html
