k8s kubernetes mark

# With Calico, if machines outside the cluster need to reach pod IPs directly via static routes (without enabling Calico BGP peering), add these kernel parameters

net.ipv4.conf.tunl0.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0

# Reference: https://imroc.cc/kubernetes/tencent/faq/modify-rp-filter-causing-exception.html
# Consider whether to disable rp_filter on other interfaces as well
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.lo.rp_filter = 0
net.ipv4.conf.docker0.rp_filter = 0
# Related parameters found through repeated comparison and experimentation, listed for reference and searching
net.ipv4.conf.default.accept_source_route = 1
net.ipv4.conf.default.promote_secondaries = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.docker0.accept_source_route = 1
net.ipv4.conf.docker0.promote_secondaries = 0
net.ipv4.conf.docker0.rp_filter = 0
net.ipv4.conf.enp1s0.accept_source_route = 1
net.ipv4.conf.enp1s0.promote_secondaries = 0
net.ipv4.conf.enp1s0.rp_filter = 0
net.ipv4.conf.lo.accept_source_route = 1
net.ipv4.conf.lo.promote_secondaries = 0
net.ipv4.conf.lo.rp_filter = 0
net.ipv4.conf.tunl0.accept_source_route = 1
net.ipv4.conf.tunl0.promote_secondaries = 0
net.ipv4.conf.tunl0.rp_filter = 0
net.ipv4.conf.all.promote_secondaries = 0
net.ipv4.conf.all.rp_filter = 0
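
On the external machine itself, a static route toward a cluster node is still needed to reach pod IPs. A minimal sketch, assuming a hypothetical pod CIDR of 10.42.0.0/16 and a node at 192.168.1.10 (substitute your real values):

# hypothetical pod CIDR and node IP; adjust to your cluster
ip route add 10.42.0.0/16 via 192.168.1.10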

Renewing rancher certificates

1. For rancher deployed on an RKE cluster

Unlike the re-import-the-cluster method commonly found online, this approach has very little impact on the system itself. Tested successfully on rancher 2.4.x and 2.5.x.

.1 Log in to the rancher UI and create an API token; copy it and keep it (this step is only a precaution and has no actual use later)

.2 Back up rancher's etcd data: log in to a rancher management-node host

apt-get install etcd-client  # or install etcdctl manually
ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/ssl/kube-ca.pem \
    --cert=/etc/kubernetes/ssl/kube-node.pem --key=/etc/kubernetes/ssl/kube-node-key.pem \
    --endpoints=https://127.0.0.1:2379/ get / --prefix --keys-only | sort | uniq | \
    xargs -I{} sh -c 'ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/ssl/kube-ca.pem --cert=/etc/kubernetes/ssl/kube-node.pem --key=/etc/kubernetes/ssl/kube-node-key.pem --endpoints=https://127.0.0.1:2379 get {} >> output.data && echo "" >> output.data'
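
If a full snapshot is preferred over the key-by-key dump, etcdctl can save one directly; a sketch using the same certificates (the output filename is arbitrary):

ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/ssl/kube-ca.pem \
    --cert=/etc/kubernetes/ssl/kube-node.pem --key=/etc/kubernetes/ssl/kube-node-key.pem \
    --endpoints=https://127.0.0.1:2379 snapshot save rancher-etcd-backup.db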

.3 Back up the tls-rancher-ingress entry from the rancher cluster's secrets

ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/ssl/kube-ca.pem --cert=/etc/kubernetes/ssl/kube-node.pem --key=/etc/kubernetes/ssl/kube-node-key.pem --endpoints=https://127.0.0.1:2379/ get /registry/secrets/cattle-system/tls-rancher-ingress 

Back up the certificate and private key inside it, e.g.:

-----BEGIN CERTIFICATE-----
xxxxxxx
-----END CERTIFICATE-----

-----BEGIN RSA PRIVATE KEY-----
xxxxxxx
-----END RSA PRIVATE KEY-----

.4 Back up the kubeconfig files, covering both the rancher cluster and the application clusters. For the rancher cluster, the kubeconfig generated when RKE installed the k8s cluster is best, so that even if the rancher UI fails to start you can still operate and manage the cluster with kubectl.

#If a certificate error is reported, try having kubectl skip TLS verification
kubectl --insecure-skip-tls-verify get pods -A

.5 Retrieve the old CA certificate and CA private key (this is the key step)

ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/ssl/kube-ca.pem --cert=/etc/kubernetes/ssl/kube-node.pem --key=/etc/kubernetes/ssl/kube-node-key.pem --endpoints=https://127.0.0.1:2379/ get /registry/secrets/cattle-system/tls-rancher
-----BEGIN CERTIFICATE-----
xxxxxx
-----END CERTIFICATE-----

-----BEGIN EC PRIVATE KEY-----
xxxxxxxx
-----END EC PRIVATE KEY-----
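
If kubectl access still works, the same material can be fetched without querying etcd directly; a sketch, assuming the secret stores it under the standard tls.crt/tls.key keys:

kubectl -n cattle-system get secret tls-rancher -o jsonpath='{.data.tls\.crt}' | base64 -d
kubectl -n cattle-system get secret tls-rancher -o jsonpath='{.data.tls\.key}' | base64 -d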

.6 Generate the new certificate files

Rename the CA certificate obtained in the previous step to cacerts1.pem and the CA private key to cakey1.pem, place them in a directory, and create the cert-generation script

create_self-signed-cert.sh, with the following contents:

#!/bin/bash -e

help ()
{
    echo  ' ================================================================ '
    echo  ' --ssl-domain: primary domain for the SSL certificate; defaults to www.rancher.local if unset; can be omitted when the server is accessed by IP;'
    echo  ' --ssl-trusted-ip: SSL certificates normally only trust requests by domain name; to access the server by IP, add extension IPs to the certificate, comma-separated;'
    echo  ' --ssl-trusted-domain: to allow access via additional domains, add extension domains (SSL_TRUSTED_DOMAIN), comma-separated;'
    echo  ' --ssl-size: SSL key size in bits, default 2048;'
    echo  ' --ssl-date: certificate validity in days, default 3650;'
    echo  ' --ssl-cn: country code (2-letter code), default CN;'
    echo  ' Usage example:'
    echo  ' ./create_self-signed-cert.sh --ssl-domain=www.test.com --ssl-trusted-domain=www.test2.com \ '
    echo  ' --ssl-trusted-ip=1.1.1.1,2.2.2.2,3.3.3.3 --ssl-size=2048 --ssl-date=3650'
    echo  ' ================================================================'
}

case "$1" in
    -h|--help) help; exit;;
esac

if [[ $1 == '' ]];then
    help;
    exit;
fi

CMDOPTS="$*"
for OPTS in $CMDOPTS;
do
    key=$(echo ${OPTS} | awk -F"=" '{print $1}' )
    value=$(echo ${OPTS} | awk -F"=" '{print $2}' )
    case "$key" in
        --ssl-domain) SSL_DOMAIN=$value ;;
        --ssl-trusted-ip) SSL_TRUSTED_IP=$value ;;
        --ssl-trusted-domain) SSL_TRUSTED_DOMAIN=$value ;;
        --ssl-size) SSL_SIZE=$value ;;
        --ssl-date) SSL_DATE=$value ;;
        --ca-date) CA_DATE=$value ;;
        --ssl-cn) CN=$value ;;
    esac
done

# CA-related settings
CA_DATE=${CA_DATE:-3650}
CA_KEY=${CA_KEY:-cakey1.pem}
CA_CERT=${CA_CERT:-cacerts1.pem}
CA_DOMAIN=dynamiclistener-ca
CA_ORG=dynamiclistener-org

# SSL-related settings
SSL_CONFIG=${SSL_CONFIG:-$PWD/openssl.cnf}
SSL_DOMAIN=${SSL_DOMAIN:-'www.rancher.local'}
SSL_DATE=${SSL_DATE:-3650}
SSL_SIZE=${SSL_SIZE:-2048}

## Country code (2-letter code), default CN;
CN=${CN:-CN}

SSL_KEY=$SSL_DOMAIN.key
SSL_CSR=$SSL_DOMAIN.csr
SSL_CERT=$SSL_DOMAIN.crt

echo -e "\033[32m ---------------------------- \033[0m"
echo -e "\033[32m       | 生成 SSL Cert |       \033[0m"
echo -e "\033[32m ---------------------------- \033[0m"

echo -e "\033[32m ====> 3. 生成Openssl配置文件 ${SSL_CONFIG} \033[0m"
cat > ${SSL_CONFIG} <<EOM
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth
EOM

if [[ -n ${SSL_TRUSTED_IP} || -n ${SSL_TRUSTED_DOMAIN} ]]; then
    cat >> ${SSL_CONFIG} <<EOM
subjectAltName = @alt_names
[alt_names]
EOM
    IFS=","
    dns=(${SSL_TRUSTED_DOMAIN})
    dns+=(${SSL_DOMAIN})
    for i in "${!dns[@]}"; do
      echo DNS.$((i+1)) = ${dns[$i]} >> ${SSL_CONFIG}
    done

    if [[ -n ${SSL_TRUSTED_IP} ]]; then
        ip=(${SSL_TRUSTED_IP})
        for i in "${!ip[@]}"; do
          echo IP.$((i+1)) = ${ip[$i]} >> ${SSL_CONFIG}
        done
    fi
fi

echo -e "\033[32m ====> 4. 生成服务SSL KEY ${SSL_KEY} \033[0m"
openssl genrsa -out ${SSL_KEY} ${SSL_SIZE}

echo -e "\033[32m ====> 5. 生成服务SSL CSR ${SSL_CSR} \033[0m"
openssl req -sha256 -new -key ${SSL_KEY} -out ${SSL_CSR} -subj "/C=${CN}/CN=${SSL_DOMAIN}" -config ${SSL_CONFIG}

echo -e "\033[32m ====> 6. 生成服务SSL CERT ${SSL_CERT} \033[0m"
openssl x509 -sha256 -req -in ${SSL_CSR} -CA ${CA_CERT} \
    -CAkey ${CA_KEY} -CAcreateserial -out ${SSL_CERT} \
    -days ${SSL_DATE} -extensions v3_req \
    -extfile ${SSL_CONFIG}

echo -e "\033[32m ====> 7. 证书制作完成 \033[0m"
echo
echo -e "\033[32m ====> 8. 以YAML格式输出结果 \033[0m"
echo "----------------------------------------------------------"
echo "ca_key: |"
cat $CA_KEY | sed 's/^/  /'
echo
echo "ca_cert: |"
cat $CA_CERT | sed 's/^/  /'
echo
echo "ssl_key: |"
cat $SSL_KEY | sed 's/^/  /'
echo
echo "ssl_csr: |"
cat $SSL_CSR | sed 's/^/  /'
echo
echo "ssl_cert: |"
cat $SSL_CERT | sed 's/^/  /'
echo

echo -e "\033[32m ====> 9. 附加CA证书到Cert文件 \033[0m"
cat ${CA_CERT} >> ${SSL_CERT}
echo "ssl_cert: |"
cat $SSL_CERT | sed 's/^/  /'
echo

echo -e "\033[32m ====> 10. 重命名服务证书 \033[0m"
echo "cp ${SSL_DOMAIN}.key tls.key"
cp ${SSL_DOMAIN}.key tls.key
echo "cp ${SSL_DOMAIN}.crt tls.crt"
cp ${SSL_DOMAIN}.crt tls.crt

Generate a 10-year certificate. Note that the domain must match the original rancher UI domain exactly.

./create_self-signed-cert.sh --ssl-domain=rancher.xxx.com --ssl-trusted-domain=rancher1.xxx.com --ssl-size=2048 --ssl-date=3650

.7 Replace the contents of the tls-rancher-ingress secret with the new certificate

#Note: make sure kubectl is pointed at the rancher cluster's context
kubectl -n cattle-system create secret tls tls-rancher-ingress --cert=tls.crt --key=tls.key --dry-run --save-config -o yaml | kubectl apply -f -
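
On newer kubectl (v1.18+) the bare --dry-run flag is deprecated; the same command with the client-side spelling:

kubectl -n cattle-system create secret tls tls-rancher-ingress --cert=tls.crt --key=tls.key --dry-run=client --save-config -o yaml | kubectl apply -f -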

.8 Restart nginx-ingress

Restart the nginx-ingress workload under the rancher cluster, via the rancher UI or kubectl; a kubectl sketch follows.
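
A sketch, assuming the RKE defaults of namespace ingress-nginx and DaemonSet name nginx-ingress-controller (verify with kubectl -n ingress-nginx get ds):

kubectl -n ingress-nginx rollout restart daemonset nginx-ingress-controller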

over

—————————————————————

# Fix for rancher-server running as a standalone container

docker exec -it rancherserver bash
kubectl --insecure-skip-tls-verify -n kube-system delete secrets k3s-serving
kubectl --insecure-skip-tls-verify delete secret serving-cert -n cattle-system
rm -f /var/lib/rancher/k3s/server/tls/dynamic-cert.json
exit
docker restart rancherserver
docker exec -it rancherserver bash
curl --insecure -sfL https://127.0.0.1/v3
exit
docker restart rancherserver
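
To verify the regenerated certificate afterwards, a sketch assuming the UI is published on the host's port 443:

echo | openssl s_client -connect 127.0.0.1:443 2>/dev/null | openssl x509 -noout -dates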

  

ntopng Enterprise L v.4.2.201222

A few years ago this site cracked ntopng v3. Today an old friend messaged me needing the latest ntopng Enterprise edition, so I spent some time looking at it.

1. Quick install

#debian
wget https://packages.ntop.org/apt-stable/buster/all/apt-ntop-stable.deb
apt install ./apt-ntop-stable.deb
apt-get clean all
apt-get update
apt-get install pfring-dkms nprobe ntopng n2disk cento

2. This version differs substantially from the earlier ones; I spent about an hour analyzing it.

3. For the cracking walkthrough, see https://www.so-cools.com/?p=1271

4. If you need this build or want to see the cracking process, contact me on WeChat; after all, once it goes public, cracking the next release gets harder.

Floating-point arithmetic issues in python

from decimal import Decimal

a1 = 0.00000999
a2 = 13400
b1 = a1 * a2
print(b1)  # 0.13386599999999999 -- binary floating-point rounding error

a1 = Decimal(0.00000999)  # constructed from a float, so it inherits the float's inexact binary value
a2 = Decimal(13400)
b1 = a1 * a2
print(b1)  # 0.1338659999999999895440921591

a1 = Decimal(str(0.00000999))  # constructed from a string, so the decimal value is exact
a2 = Decimal(str(13400))
b1 = a1 * a2
print(b1)  # 0.13386600   correct

# Note: when using Decimal, convert values with str() first (or pass decimal string literals directly)

python dynamic loading

#classb.py

class classb:

    def foo(self):
        print("this is classb")
    def bar(self,i):
        print("classb:%s" %i)

#classa.py

class classa:

    def foo(self):
        print("this is foo")
    def bar(self,i):
        print("sssss:%s" %i)

#main.py

class Main:
    def __init__(self, module_name):
        self.module_name = module_name
        self.instance = None

    def __getattr__(self, funcname):
        # Only called for attributes not found on the Main instance itself.
        if self.instance is None:
            module = __import__(self.module_name)          # import the module by name
            class_tmp = getattr(module, self.module_name)  # the class shares the module's name
            self.instance = class_tmp()
        # Caching the instance avoids the unbound-variable error the original code
        # hit on a second lookup, when the import branch was skipped.
        return getattr(self.instance, funcname)

abc = Main('classa')
abc.bar("aaaaa")

abc = Main('classb')
abc.bar("aaaaa")


nginx-mark

#nginx: fixing the "failed (13: Permission denied) while reading upstream" error

## 1. Set the error_log level in nginx.conf; info level is recommended

## error_log /var/log/nginx/error.log info;

## 2. Set a reasonable proxy_temp_file_write_size: when a proxied response exceeds the configured buffering limits, nginx spools it to its temporary directory (proxy_temp by default).
## If nginx lacks permission on proxy_temp, the write fails, producing the "failed (13: Permission denied) while reading upstream" error.

## Therefore, whenever proxying is enabled in nginx, check that the nginx worker user has read/write permission on the proxy_temp directory, e.g. as sketched below.
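
A check-and-fix sketch, assuming a hypothetical temp path of /usr/local/nginx/proxy_temp and worker user nginx (confirm the real path via nginx -V or the proxy_temp_path directive):

# hypothetical path and user; adjust to your build
ls -ld /usr/local/nginx/proxy_temp
chown -R nginx:nginx /usr/local/nginx/proxy_temp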

docker mark

#docker: overriding the default 10G container rootfs size limit under the devicemapper storage driver

docker run -tid --name xxx --storage-opt size=600G busybox
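
To change the default for all new containers rather than per docker run, the devicemapper dm.basesize storage option can go in daemon.json; a sketch (assumes the devicemapper driver; takes effect for newly created containers after the daemon restarts):

cat /etc/docker/daemon.json
{
  "storage-driver": "devicemapper",
  "storage-opts": ["dm.basesize=600G"]
}
systemctl restart docker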

#k8s: list all resources

#List all resources in one namespace
kubectl get all -o wide -n xxx
#or (covers resource types that "get all" misses)
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n xxx

#List all resources across all namespaces
kubectl get all -o wide -A
#or
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -A

#docker: install a JDK and make /etc/profile.d take effect

#1 Put java8.sh into the /etc/profile.d directory
cat /etc/profile.d/java8.sh 
JAVA_HOME=/usr/local/jdk8/jdk1.8.0_341
JRE_HOME=$JAVA_HOME/jre
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export JAVA_HOME JRE_HOME PATH CLASSPATH

#Enter the container with docker exec; note the -l after bash (login shell, so /etc/profile.d gets sourced)
docker exec -it test1 bash -l

#The corresponding Dockerfile
cat Dockerfile 
FROM tinazh/debian
ADD jdk-8u341-linux-x64.tar.gz /usr/local/jdk8
COPY java8.sh /etc/profile.d/

#Installing zabbix

#Default credentials: Admin / zabbix

docker network create c_net --subnet=192.168.250.0/24
docker run -d  --hostname mysql  --name zabbix-mysql -t --network c_net  -e MYSQL_USER="zabbix"  -e MYSQL_DATABASE="zabbix"  -e MYSQL_PASSWORD="123456"  -e MYSQL_ROOT_PASSWORD="123456"  -e TZ=CST-8  -v /data/zabbix/mysql/data:/var/lib/mysql:rw  -v /data/zabbix/mysql/my.cnf:/etc/mysql/my.cnf  -v /data/zabbix/mysql/conf.d:/etc/mysql/conf.d  -v /data/zabbix/mysql/mysql.conf.d:/etc/mysql/mysql.conf.d  daocloud.io/library/mysql:5.7  --character-set-server=utf8 --collation-server=utf8_bin

docker run -td  --name zabbix-web --network c_net -p 8081:8080  --hostname zabbix-web  -e PHP_TZ="Asia/Shanghai"  -e DB_SERVER_HOST="zabbix-mysql"  -e MYSQL_DATABASE="zabbix"  -e MYSQL_USER="zabbix"  -e MYSQL_PASSWORD="123456"  -e MYSQL_ROOT_PASSWORD="123456"  -e TZ=CST-8 zabbix/zabbix-web-nginx-mysql:centos-5.4-latest

docker run -td  --name zabbix-server -p 10051:10051 --network c_net --hostname zabbix-server  -e DB_SERVER_HOST="zabbix-mysql"  -e MYSQL_DATABASE="zabbix"  -e MYSQL_USER="zabbix"  -e MYSQL_PASSWORD="123456"  -e MYSQL_ROOT_PASSWORD="123456"  -e ZBX_JAVAGATEWAY="zabbix-java-gateway"  -v /var/run/docker.sock:/var/run/docker.sock  -v /etc/localtime:/etc/localtime:ro  -v /data/zabbix/server/alertscripts:/usr/lib/zabbix/alertscripts  -v /data/zabbix/server/externalscripts:/usr/lib/zabbix/externalscripts  zabbix/zabbix-server-mysql:centos-5.4-latest
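
Note that the server's ZBX_JAVAGATEWAY env points at a zabbix-java-gateway host that must also exist on the network; a sketch, with the image tag assumed to match the other zabbix images above:

docker run -td --name zabbix-java-gateway --hostname zabbix-java-gateway --network c_net zabbix/zabbix-java-gateway:centos-5.4-latest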

#Run a container with init as PID 1 so that systemctl works

docker run -itd --name debian1 --network jms_net --privileged=true tinazh/debian /sbin/init
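
With /sbin/init as PID 1, systemd units can then be managed from the host, for example:

docker exec -it debian1 systemctl status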

#Bulk-remove images (prints the commands for manual execution; an xargs-automated variant is sketched below)

docker images | grep "months ago" | grep 49 | awk '{print "docker rmi "$1":"$2}'
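
The automated variant, a sketch assuming the same grep filters select exactly what you want removed:

docker images | grep "months ago" | grep 49 | awk '{print $1":"$2}' | xargs -r docker rmi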

#docker desktop: editing the /etc/kubernetes/manifests/kube-apiserver.yaml config file

#Editing the pod directly with kubectl does not work: kubectl edit pods -n kube-system kube-apiserver-docker-desktop

docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n sh

#then
vi /etc/kubernetes/manifests/kube-apiserver.yaml

# Just save the file; the apiserver restarts automatically
# Other config files under manifests work the same way

#Commonly missing packages

apt install iputils-ping
apt install net-tools

apk add procps
apk add docker-cli
apk add busybox-extras 

#rancher: fixing a service created via yaml that cannot be pinged

1. Create the service and set up the corresponding host DNS

rancher kubectl create -f appserver-extend.yaml

Sample yaml:

---
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      field.cattle.io/creatorId: user-w9lgp
      field.cattle.io/ipAddresses: "null"
      field.cattle.io/targetDnsRecordIds: "null"
      field.cattle.io/targetWorkloadIds: '["deployment:application:appserver-extend-job"]'
    labels:
      cattle.io/creator: norman
    name: service-appserver-extend-extend-job
    namespace: application
    selfLink: /api/v1/namespaces/application/services/service-appserver-extend-extend-job
  spec:
    clusterIP: None
    ports:
    - name: default
      port: 42
      protocol: TCP
      targetPort: 42
    selector:
      workloadID_service-appserver-extend-extend-job: "true"
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    generation: 1
    labels:
      cattle.io/creator: norman
      workload.user.cattle.io/workloadselector: deployment-application-appserver-extend-job
    name: appserver-extend-job
    namespace: application
    selfLink: /apis/apps/v1/namespaces/application/deployment/appserver-extend-job
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        workload.user.cattle.io/workloadselector: deployment-application-appserver-extend-job
    strategy:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          workload.user.cattle.io/workloadselector: deployment-application-appserver-extend-job
      spec:
        containers:
        - env:
          - name: RUNPRO
            value: pro
          - name: aliyun_logs_catalina
            value: "stdout" 
          - name: aliyun_logs_access
            value: "/opt/logs/*.log"
          - name: aliyun_logs_catalina_tags
            value: "type=appserver-extend-xxx-catalina,topic=appserver-extend-xxx-extend-job-catalina"
          - name: aliyun_logs_access_tags
            value: "type=appserver-extend-xxx-access,topic=appserver-extend-xxx-extend-job-access"
          image: alpine
          imagePullPolicy: Always
          name: appserver-extend-job
          resources: {}
          securityContext:
            allowPrivilegeEscalation: false
            capabilities: {}
            privileged: false
            readOnlyRootFilesystem: false
            runAsNonRoot: false
          stdin: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          tty: true
        dnsPolicy: ClusterFirst
        imagePullSecrets:
        - name: registry-harbor
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
kind: List

Symptom:

Within the same namespace, pinging service-appserver-extend-extend-job reports that the host cannot be found.

Troubleshooting:

rancher kubectl describe services service-appserver-extend-extend-job -n application
Name:              service-appserver-extend-extend-job
Namespace:         application
Labels:            cattle.io/creator=norman
Annotations:       field.cattle.io/creatorId: user-w9lgp
                   field.cattle.io/ipAddresses: null
                   field.cattle.io/targetDnsRecordIds: null
                   field.cattle.io/targetWorkloadIds: ["deployment:application:appserver-extend-job"]
Selector:          workloadID_service-appserver-extend-extend-job=true
Type:              ClusterIP
IP:                None
Port:              default  42/TCP
TargetPort:        42/TCP
Endpoints:         <none>      #the problem: Endpoints is empty
Session Affinity:  None
Events:            <none>

Fix: the yaml defined the service first and the deployment afterwards, which left the service without any pods to select. Once the cause was known, the fix was simple: in the yaml, create the deployment first and the service after it, and the problem was resolved. The endpoint check below confirms the fix.
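
To confirm the service is now backed by pods:

kubectl get endpoints service-appserver-extend-extend-job -n application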

#Use docker to quickly stand up the major vulnerability-practice platforms; currently 12 platforms can be set up with one click

https://github.com/c0ny1/vulstudy

https://github.com/vulhub/vulhub

https://github.com/vulnspy

https://www.vsplate.com/labs.php

#Error at sun.awt.FontConfiguration.getVersion(FontConfiguration.java) in docker with openjdk:8-jdk-alpine

Cause: fonts are missing from the image

Fix: add the ttf-dejavu font package

RUN apk add --no-cache ttf-dejavu 

//plus the other common extras
RUN apk add --no-cache bash tini ttf-dejavu libc6-compat linux-pam krb5 krb5-libs

#awvs  docker

docker run --name wvs13 -p 3443:3443 -itd registry.cn-shanghai.aliyuncs.com/t3st0r/acunetix_13:20200220
admin@admin.cn
Admin@admin.cn

#Could not initialize class org.xerial.snappy.Snappy

The project uses the org.xerial.snappy.Snappy class. In a normal centos environment it works fine, but when testing in the microservice container (openjdk:8-jdk-alpine) one feature failed with the exception Could not initialize class org.xerial.snappy.Snappy.
Fix:
Since the openjdk:8-jdk-alpine image is based on Alpine Linux,
create a symlink:
ln -s /lib /lib64

The corresponding dockerfile:

FROM openjdk:8-jdk-alpine
ARG RUNPRO
ENV RUNPRO=${RUNPRO}
ENV TZ=Asia/Shanghai
RUN apk add -U tzdata
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN ln -s /lib /lib64   # added
RUN apk add --no-cache bash tini libc6-compat linux-pam krb5 krb5-libs # added
VOLUME /tmp
VOLUME /opt/logs
WORKDIR /opt/
COPY server-xx*.jar server-xx.jar
# shell form so ${RUNPRO} is expanded at runtime; the exec form does not expand variables
ENTRYPOINT java -jar server-xx.jar --spring.profiles.active=${RUNPRO}

In the end I decided to just use the oracle jdk; openjdk on alpine still feels a bit unstable.

Reference: https://www.cnblogs.com/hellxz/p/11936994.html