A handy tool for testing and managing APIs in mobile development, built in my spare time and now shared for everyone to use.
http://api.so-cools.com
Met a lot of friends there.
Slides (PPT) download: http://www.so-cools.com/down/pyconppt.pdf
1. Rancher on an RKE-based cluster
This is not the re-import-the-cluster method found online; it has very little impact on the running system. Tested successfully on Rancher 2.4.x and 2.5.x.
1.1 Log in to the Rancher UI and create an API token; copy it and keep it as a backup (this step is purely a precaution and has no practical use here).
1.2 Back up Rancher's etcd data. Log in to the host running the Rancher UI management node:

apt-get install etcd-client  # or install etcdctl manually
ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/ssl/kube-ca.pem --cert=/etc/kubernetes/ssl/kube-node.pem --key=/etc/kubernetes/ssl/kube-node-key.pem --endpoints=https://127.0.0.1:2379/ get / --prefix --keys-only | sort | uniq | xargs -I{} sh -c 'ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/ssl/kube-ca.pem --cert=/etc/kubernetes/ssl/kube-node.pem --key=/etc/kubernetes/ssl/kube-node-key.pem --endpoints=https://127.0.0.1:2379 get {} >> output.data && echo "" >> output.data'
1.3 Back up tls-rancher-ingress from the secrets of the Rancher cluster:

ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/ssl/kube-ca.pem --cert=/etc/kubernetes/ssl/kube-node.pem --key=/etc/kubernetes/ssl/kube-node-key.pem --endpoints=https://127.0.0.1:2379/ get /registry/secrets/cattle-system/tls-rancher-ingress

Back up the certificate and private key it contains, e.g.:

-----BEGIN CERTIFICATE-----
xxxxxxx
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
xxxxxxx
-----END RSA PRIVATE KEY-----
1.4 Back up the kubeconfig files, including those of the Rancher cluster and of the application clusters. For the Rancher cluster, preferably keep the kubeconfig that rke generated right after installing the Kubernetes cluster, so that even if the Rancher UI fails to start you can still operate and manage the cluster with kubectl.

# If a certificate error is reported, try letting kubectl skip TLS verification:
kubectl --insecure-skip-tls-verify get pods -A
1.5 Retrieve the old CA certificate and the CA private key (this is the key step):

ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/ssl/kube-ca.pem --cert=/etc/kubernetes/ssl/kube-node.pem --key=/etc/kubernetes/ssl/kube-node-key.pem --endpoints=https://127.0.0.1:2379/ get /registry/secrets/cattle-system/tls-rancher

-----BEGIN CERTIFICATE-----
xxxxxx
-----END CERTIFICATE-----
-----BEGIN EC PRIVATE KEY-----
xxxxxxxx
-----END EC PRIVATE KEY-----
1.6 Generate the new certificate files.
Rename the CA certificate obtained in the previous step to cacerts1.pem and the CA private key to cakey1.pem, put them in a directory, and create a script named create_self-signed-cert.sh with the following content:
#!/bin/bash -e
help ()
{
    echo ' ================================================================ '
    echo ' --ssl-domain: the primary domain for the SSL certificate; defaults to www.rancher.local if not specified; can be ignored when the server is accessed by IP;'
    echo ' --ssl-trusted-ip: SSL certificates normally trust only domain-based requests; to access the server by IP, add the IPs as certificate extensions, separated by commas;'
    echo ' --ssl-trusted-domain: to allow access via additional domains, add them as extension domains (SSL_TRUSTED_DOMAIN), separated by commas;'
    echo ' --ssl-size: SSL key size in bits, default 2048;'
    echo ' --ssl-cn: country code (2-letter code), default CN;'
    echo ' usage example:'
    echo ' ./create_self-signed-cert.sh --ssl-domain=www.test.com --ssl-trusted-domain=www.test2.com \ '
    echo ' --ssl-trusted-ip=1.1.1.1,2.2.2.2,3.3.3.3 --ssl-size=2048 --ssl-date=3650'
    echo ' ================================================================'
}

case "$1" in
    -h|--help) help; exit;;
esac

if [[ $1 == '' ]];then
    help;
    exit;
fi

CMDOPTS="$*"
for OPTS in $CMDOPTS;
do
    key=$(echo ${OPTS} | awk -F"=" '{print $1}' )
    value=$(echo ${OPTS} | awk -F"=" '{print $2}' )
    case "$key" in
        --ssl-domain) SSL_DOMAIN=$value ;;
        --ssl-trusted-ip) SSL_TRUSTED_IP=$value ;;
        --ssl-trusted-domain) SSL_TRUSTED_DOMAIN=$value ;;
        --ssl-size) SSL_SIZE=$value ;;
        --ssl-date) SSL_DATE=$value ;;
        --ca-date) CA_DATE=$value ;;
        --ssl-cn) CN=$value ;;
    esac
done

# CA settings (the old CA extracted in step 1.5)
CA_DATE=${CA_DATE:-3650}
CA_KEY=${CA_KEY:-cakey1.pem}
CA_CERT=${CA_CERT:-cacerts1.pem}
CA_DOMAIN=dynamiclistener-ca
CA_ORG=dynamiclistener-org

# SSL settings
SSL_CONFIG=${SSL_CONFIG:-$PWD/openssl.cnf}
SSL_DOMAIN=${SSL_DOMAIN:-'www.rancher.local'}
SSL_DATE=${SSL_DATE:-3650}
SSL_SIZE=${SSL_SIZE:-2048}

## country code (2-letter code), default CN
CN=${CN:-CN}

SSL_KEY=$SSL_DOMAIN.key
SSL_CSR=$SSL_DOMAIN.csr
SSL_CERT=$SSL_DOMAIN.crt

echo -e "\033[32m ---------------------------- \033[0m"
echo -e "\033[32m |    Generate SSL Cert     | \033[0m"
echo -e "\033[32m ---------------------------- \033[0m"

echo -e "\033[32m ====> 3. Generate openssl config file ${SSL_CONFIG} \033[0m"
cat > ${SSL_CONFIG} <<EOM
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth
EOM

if [[ -n ${SSL_TRUSTED_IP} || -n ${SSL_TRUSTED_DOMAIN} ]]; then
    cat >> ${SSL_CONFIG} <<EOM
subjectAltName = @alt_names
[alt_names]
EOM
    IFS=","
    dns=(${SSL_TRUSTED_DOMAIN})
    dns+=(${SSL_DOMAIN})
    for i in "${!dns[@]}"; do
        echo DNS.$((i+1)) = ${dns[$i]} >> ${SSL_CONFIG}
    done

    if [[ -n ${SSL_TRUSTED_IP} ]]; then
        ip=(${SSL_TRUSTED_IP})
        for i in "${!ip[@]}"; do
            echo IP.$((i+1)) = ${ip[$i]} >> ${SSL_CONFIG}
        done
    fi
fi

echo -e "\033[32m ====> 4. Generate server SSL KEY ${SSL_KEY} \033[0m"
openssl genrsa -out ${SSL_KEY} ${SSL_SIZE}

echo -e "\033[32m ====> 5. Generate server SSL CSR ${SSL_CSR} \033[0m"
openssl req -sha256 -new -key ${SSL_KEY} -out ${SSL_CSR} -subj "/C=${CN}/CN=${SSL_DOMAIN}" -config ${SSL_CONFIG}

echo -e "\033[32m ====> 6. Generate server SSL CERT ${SSL_CERT} \033[0m"
openssl x509 -sha256 -req -in ${SSL_CSR} -CA ${CA_CERT} \
    -CAkey ${CA_KEY} -CAcreateserial -out ${SSL_CERT} \
    -days ${SSL_DATE} -extensions v3_req \
    -extfile ${SSL_CONFIG}

echo -e "\033[32m ====> 7. Certificates generated \033[0m"
echo
echo -e "\033[32m ====> 8. Output the results in YAML format \033[0m"
echo "----------------------------------------------------------"
echo "ca_key: |"
cat $CA_KEY | sed 's/^/    /'
echo
echo "ca_cert: |"
cat $CA_CERT | sed 's/^/    /'
echo
echo "ssl_key: |"
cat $SSL_KEY | sed 's/^/    /'
echo
echo "ssl_csr: |"
cat $SSL_CSR | sed 's/^/    /'
echo
echo "ssl_cert: |"
cat $SSL_CERT | sed 's/^/    /'
echo

echo -e "\033[32m ====> 9. Append the CA cert to the cert file \033[0m"
cat ${CA_CERT} >> ${SSL_CERT}
echo "ssl_cert: |"
cat $SSL_CERT | sed 's/^/    /'
echo

echo -e "\033[32m ====> 10. Rename the server certificates \033[0m"
echo "cp ${SSL_DOMAIN}.key tls.key"
cp ${SSL_DOMAIN}.key tls.key
echo "cp ${SSL_DOMAIN}.crt tls.crt"
cp ${SSL_DOMAIN}.crt tls.crt
# Generate a 10-year certificate; note that the domain must match the original Rancher UI domain
./create_self-signed-cert.sh --ssl-domain=rancher.xxx.com --ssl-trusted-domain=rancher1.xxx.com --ssl-size=2048 --ssl-date=3650
1.7 Replace the contents of the tls-rancher-ingress secret with the new certificate:

# Note: kubectl must be run in a session against the Rancher cluster
kubectl -n cattle-system create secret tls tls-rancher-ingress --cert=tls.crt --key=tls.key --dry-run --save-config -o yaml | kubectl apply -f -
1.8 Restart nginx-ingress.
Restart the nginx-ingress workload under Rancher, either through the Rancher UI or with kubectl on the command line.
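On the command line, the restart can be sketched roughly as follows. This is a sketch under assumptions: in RKE-provisioned clusters the controller usually runs as a DaemonSet named nginx-ingress-controller in the ingress-nginx namespace with the label app=ingress-nginx, but verify the actual names in your cluster first.

```shell
# Locate the ingress controller workload (namespace/name are assumptions; check first)
kubectl -n ingress-nginx get ds

# kubectl >= 1.15: rolling restart of the DaemonSet
kubectl -n ingress-nginx rollout restart daemonset nginx-ingress-controller

# Older clusters: delete the pods and let the DaemonSet recreate them
kubectl -n ingress-nginx delete pod -l app=ingress-nginx
```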
Done.
A few years ago this site cracked ntopng v3. Today an old friend suddenly messaged me needing the enterprise edition of the latest ntopng, so I took some time to look into it.
1. Quick install

# debian
wget https://packages.ntop.org/apt-stable/buster/all/apt-ntop-stable.deb
apt install ./apt-ntop-stable.deb
apt-get clean all
apt-get update
apt-get install pfring-dkms nprobe ntopng n2disk cento
2. This version differs considerably from the earlier ones; analyzing it took about an hour.
3. The cracking process is described in detail at https://www.so-cools.com/?p=1271
4. If you need this build or want to see the cracking process, contact me on WeChat; after all, once it is public, cracking the next release gets harder.
# Python float precision and the Decimal class
from decimal import Decimal

a1 = 0.00000999
a2 = 13400
b1 = a1 * a2
print(b1)  # 0.13386599999999999

a1 = Decimal(0.00000999)
a2 = Decimal(13400)
b1 = a1 * a2
print(b1)  # 0.1338659999999999895440921591

a1 = Decimal(str(0.00000999))
a2 = Decimal(str(13400))
b1 = a1 * a2
print(b1)  # 0.13386600  correct

# Note: when using Decimal, convert the operands to strings with str() first
# classb.py
class classb:
    def foo(self):
        print("this is classb")

    def bar(self, i):
        print("classb:%s" % i)

# classa.py
class classa:
    def foo(self):
        print("this is foo")

    def bar(self, i):
        print("sssss:%s" % i)

# main.py
class Main:
    def __init__(self, module_name):
        self.module_name = module_name
        self.module = None

    def __getattr__(self, funcname):
        # Import the module lazily on first attribute access
        if self.module is None:
            self.module = __import__(self.module_name)
        class_tmp = getattr(self.module, self.module_name)
        class_obj = class_tmp()
        func_tmp = getattr(class_obj, funcname)
        return func_tmp

abc = Main('classa')
abc.bar("aaaaa")
abc = Main('classb')
abc.bar("aaaaa")
I keep forgetting these, so noting them down (iOS Settings):
General ---> About ---> Certificate Trust Settings
General ---> Profiles
#nginx failed (13: Permission denied) while reading upstream, how to fix
## 1. Set the error level in nginx.conf; the info level is recommended:
## error_log /var/log/nginx/error.log info;
## 2. Set a reasonable proxy_temp_file_write_size: when the proxied response exceeds the configured value, nginx writes the data to the temp directory (proxy_temp by default).
## If nginx has no permission on proxy_temp, the write fails, and the error "nginx failed (13: Permission denied) while reading upstream" appears.
## Therefore, whenever nginx has proxying enabled, check that the nginx user has read/write permission on the proxy_temp directory.
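If the temp path or its ownership is in doubt, the path can also be set explicitly in nginx.conf. A minimal sketch; the /var/cache/nginx path here is an assumption, point it at any directory the worker user (see the `user` directive) can write:

```nginx
# Sketch: explicit, writable temp path for proxy buffering
# (path is an assumption; it must be writable by the nginx worker user)
proxy_temp_path /var/cache/nginx/proxy_temp 1 2;
proxy_temp_file_write_size 64k;
```

After changing it, run nginx -t and compare the directory's ownership against the worker user configured in nginx.conf.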
#Commonly missing packages on Alpine
apk add procps
apk add docker-cli
apk add busybox-extras
#Fixing a Rancher service created via YAML that cannot be pinged
1. Create the service and set up the corresponding host DNS:
rancher kubectl create -f appserver-extend.yaml
Example YAML:
---
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      field.cattle.io/creatorId: user-w9lgp
      field.cattle.io/ipAddresses: "null"
      field.cattle.io/targetDnsRecordIds: "null"
      field.cattle.io/targetWorkloadIds: '["deployment:application:appserver-extend-job"]'
    labels:
      cattle.io/creator: norman
    name: service-appserver-extend-extend-job
    namespace: application
    selfLink: /api/v1/namespaces/application/services/service-appserver-extend-extend-job
  spec:
    clusterIP: None
    ports:
    - name: default
      port: 42
      protocol: TCP
      targetPort: 42
    selector:
      workloadID_service-appserver-extend-extend-job: "true"
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    generation: 1
    labels:
      cattle.io/creator: norman
      workload.user.cattle.io/workloadselector: deployment-application-appserver-extend-job
    name: appserver-extend-job
    namespace: application
    selfLink: /apis/apps/v1/namespaces/application/deployment/appserver-extend-job
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        workload.user.cattle.io/workloadselector: deployment-application-appserver-extend-job
    strategy:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          workload.user.cattle.io/workloadselector: deployment-application-appserver-extend-job
      spec:
        containers:
        - env:
          - name: RUNPRO
            value: pro
          - name: aliyun_logs_catalina
            value: "stdout"
          - name: aliyun_logs_access
            value: "/opt/logs/*.log"
          - name: aliyun_logs_catalina_tags
            value: "type=appserver-extend-xxx-catalina,topic=appserver-extend-xxx-extend-job-catalina"
          - name: aliyun_logs_access_tags
            value: "type=appserver-extend-xxx-access,topic=appserver-extend-xxx-extend-job-access"
          image: alpine
          imagePullPolicy: Always
          name: appserver-extend-job
          resources: {}
          securityContext:
            allowPrivilegeEscalation: false
            capabilities: {}
            privileged: false
            readOnlyRootFilesystem: false
            runAsNonRoot: false
          stdin: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          tty: true
        dnsPolicy: ClusterFirst
        imagePullSecrets:
        - name: registry-harbor
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
kind: List
Symptom:
Within the same namespace, pinging service-appserver-extend-extend-job reports that the host cannot be found.
Troubleshooting:

rancher kubectl describe services service-appserver-extend-extend-job -n application
Name:              service-appserver-extend-extend-job
Namespace:         application
Labels:            cattle.io/creator=norman
Annotations:       field.cattle.io/creatorId: user-w9lgp
                   field.cattle.io/ipAddresses: null
                   field.cattle.io/targetDnsRecordIds: null
                   field.cattle.io/targetWorkloadIds: ["deployment:application:appserver-extend-job"]
Selector:          workloadID_service-appserver-extend-extend-job=true
Type:              ClusterIP
IP:                None
Port:              default  42/TCP
TargetPort:        42/TCP
Endpoints:         <none>        # the problem: Endpoints is empty
Session Affinity:  None
Events:            <none>
Fix: the YAML defined the Service first and the Deployment after it, so the Service could not find any backing machines. Once the cause was known the fix was simple: in the YAML, create the Deployment first and then the Service. That solved it.
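After the fix, the empty Endpoints field should be populated. A quick way to re-check, using the namespace and service name from the example above:

```shell
# Endpoints should now list pod IPs instead of <none>
kubectl -n application get endpoints service-appserver-extend-extend-job
```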
#Use docker to quickly set up the major vulnerability-study platforms; currently 12 platforms can be brought up with one click
https://github.com/c0ny1/vulstudy
https://github.com/vulhub/vulhub
https://github.com/vulnspy
https://www.vsplate.com/labs.php
#Error "at sun.awt.FontConfiguration.getVersion(FontConfiguration.java" in docker with openjdk:8-jdk-alpine
The cause is missing fonts.
Fix: install the ttf-dejavu font package.

RUN apk add --no-cache ttf-dejavu
# or together with the other packages:
RUN apk add --no-cache bash tini ttf-dejavu libc6-compat linux-pam krb5 krb5-libs
#awvs docker
docker run --name wvs13 -p 3443:3443 -itd registry.cn-shanghai.aliyuncs.com/t3st0r/acunetix_13:20200220 admin@admin.cn Admin@admin.cn
#Could not initialize class org.xerial.snappy.Snappy
The project uses the org.xerial.snappy.Snappy class. On a regular CentOS system it works fine, but when testing in the microservice container (openjdk:8-jdk-alpine) one feature failed with the exception: Could not initialize class org.xerial.snappy.Snappy
Fix:
Since the openjdk:8-jdk-alpine image is based on Alpine Linux,
create a symlink:
ln -s /lib /lib64
The corresponding Dockerfile:

FROM openjdk:8-jdk-alpine
ARG RUNPRO
ENV TZ=Asia/Shanghai
RUN apk add -U tzdata
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN ln -s /lib /lib64                                                     # added
RUN apk add --no-cache bash tini libc6-compat linux-pam krb5 krb5-libs    # added
VOLUME /tmp
VOLUME /opt/logs
WORKDIR /opt/
COPY server-xx*.jar server-xx.jar
ENTRYPOINT ["java","-jar","server-xx.jar","--spring.profiles.active=${RUNPRO}"]
In the end I decided to just switch to the Oracle JDK; OpenJDK still feels a bit unstable to me.
Reference: https://www.cnblogs.com/hellxz/p/11936994.html
Error reported:

fatal: unable to set up default path; use --file
fatal: could not read Username for 'http://10.100.11.5': No such device or address
Demo code:

$giturl = 'http://10.100.11.5/appserver/appserver-api/';
$output = shell_exec("git clone {$giturl} 2>&1");
var_dump($output);
The www-data user was given passwordless sudo,
and git credentials were stored for passwordless access (git config --global credential.helper store).
The code runs fine from the CLI,
but under Apache, from the web, it fails with the error above. I tried many things along the way; none worked.
Reasoning:
1. The CLI works when run under su - www-data, so permissions should be fine.
2. Print the environment in both the CLI and the web context:
system("env");
The environment variables turned out to differ greatly between the two cases.
After adding the missing HOME variable to the web environment, the problem was solved.
putenv("HOME=/home/www-data");
putenv("USER=www-data");
$giturl = 'http://10.100.11.5/appserver/appserver-api/';
$output = shell_exec("git clone {$giturl} 2>&1");
Another fix:
By default, apache2.4 unsets the HOME environment variable; see line 4 of /etc/apache2/envvars:

# envvars - default environment variables for apache2ctl
# this won't be correct after changing uid
unset HOME    # right here
# for supporting multiple apache2 instances

The Apache-side fix is also simple: edit the envvars file and export a HOME variable:

#unset HOME    # comment this out
# for supporting multiple apache2 instances
if [ "${APACHE_CONFDIR##/etc/apache2-}" != "${APACHE_CONFDIR}" ] ; then
    SUFFIX="-${APACHE_CONFDIR##/etc/apache2-}"
else
    SUFFIX=
fi
# add this:
export HOME=/home/www-data
apachectl stop && apachectl start  (restart does not seem to refresh the environment variables)
Problem solved. Fixing this little issue took several hours; enough to make me start doubting my own skills. ^_^