All posts by admin

ntopng Enterprise L v.4.2.201222

This site cracked ntopng V3 a few years ago. Today an old friend suddenly messaged me needing the latest enterprise version of ntopng, so I spent some time taking a look.

1. Quick install

# Debian
wget https://packages.ntop.org/apt-stable/buster/all/apt-ntop-stable.deb
apt install ./apt-ntop-stable.deb
apt-get clean all
apt-get update
apt-get install pfring-dkms nprobe ntopng n2disk cento

2. This version differs considerably from the previous ones; I spent about an hour analyzing it.

3. The cracking process is detailed at https://www.so-cools.com/?p=1271

4. If you need this version or want to see the cracking process, contact me via WeChat; after all, once the process is public, cracking the next release gets harder.

Floating-point arithmetic in Python

from decimal import Decimal
a1 = 0.00000999
a2 = 13400
b1 = a1 * a2
print(b1)  # 0.13386599999999999
a1 = Decimal(0.00000999)
a2 = Decimal(13400)
b1 = a1 * a2
print(b1)  # 0.1338659999999999895440921591
a1 = Decimal(str(0.00000999))
a2 = Decimal(str(13400))
b1 = a1 * a2
print(b1)  # 0.13386600   correct

# Note: when using Decimal, convert the values to strings with str() first
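The rule generalizes: constructing a Decimal from a float captures the float's binary representation error, digits and all, while constructing from a string gives exactly the decimal you wrote. A small sketch:

```python
from decimal import Decimal

# From a float: Decimal stores the exact binary value, error included.
from_float = Decimal(0.1)
# From a string: Decimal stores exactly the digits you wrote.
from_str = Decimal("0.1")

print(from_float == from_str)  # False
print(from_str * 3)            # 0.3
print(Decimal(str(0.00000999)) * Decimal(str(13400)))  # 0.13386600
```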

Dynamic loading in Python

#classb.py

class classb:

    def foo(self):
        print("this is classb")
    def bar(self,i):
        print("classb:%s" %i)

#classa.py

class classa:

    def foo(self):
        print("this is foo")
    def bar(self,i):
        print("sssss:%s" %i)

#main.py

class Main:
    def __init__(self, module_name):
        self.module_name = module_name
        self.module = None

    def __getattr__(self, funcname):
        # import the module lazily on first access, then cache it
        if self.module is None:
            self.module = __import__(self.module_name)
        # the class shares its name with the module: instantiate it
        # and return the requested bound method
        class_tmp = getattr(self.module, self.module_name)
        class_obj = class_tmp()
        return getattr(class_obj, funcname)

abc = Main('classa')
abc.bar("aaaaa")

abc = Main('classb')
abc.bar("aaaaa")
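The standard library's documented entry point for dynamic imports is importlib; a minimal sketch of the same pattern (assuming, as above, that each module defines a class of the same name):

```python
import importlib

def call_dynamic(module_name, funcname, *args):
    # Import the module by name, instantiate the class that shares
    # its name, and call the requested method.
    module = importlib.import_module(module_name)
    cls = getattr(module, module_name)
    return getattr(cls(), funcname)(*args)
```

`call_dynamic('classa', 'bar', 'aaaaa')` then behaves like the `Main` wrapper above, without the attribute-caching subtleties of `__getattr__`.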


nginx-mark

#Fixing the nginx error: failed (13: Permission denied) while reading upstream

## 1. Set the error_log level in nginx.conf; the info level is recommended:

## error_log /var/log/nginx/error.log info;

## 2. Set a reasonable proxy_temp_file_write_size. When a proxied response exceeds proxy_temp_file_write_size, nginx buffers it to the temp directory (proxy_temp by default). If the nginx user has no write permission on that directory, the write fails and the "failed (13: Permission denied) while reading upstream" error appears.

## So whenever nginx proxying is enabled, check that the nginx user has read/write permission on the proxy_temp directory.
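As a sketch, the relevant directives look like this (the temp path varies by distro, e.g. /var/lib/nginx/proxy on Debian; adjust it to your install and make sure the worker user owns it):

```nginx
error_log /var/log/nginx/error.log info;

http {
    # Responses larger than this are buffered to the temp directory,
    # which must be writable by the nginx worker user.
    proxy_temp_path /var/lib/nginx/proxy;
    proxy_temp_file_write_size 64k;
    # ... rest of the http block ...
}
```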

docker mark

#Commonly missing packages

apk add procps
apk add docker-cli
apk add busybox-extras 

#Rancher: fixing a service created via YAML that cannot be pinged

1. Create the service and set up the corresponding host DNS

rancher kubectl create -f appserver-extend.yaml

Sample YAML:

---
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      field.cattle.io/creatorId: user-w9lgp
      field.cattle.io/ipAddresses: "null"
      field.cattle.io/targetDnsRecordIds: "null"
      field.cattle.io/targetWorkloadIds: '["deployment:application:appserver-extend-job"]'
    labels:
      cattle.io/creator: norman
    name: service-appserver-extend-extend-job
    namespace: application
    selfLink: /api/v1/namespaces/application/services/service-appserver-extend-extend-job
  spec:
    clusterIP: None
    ports:
    - name: default
      port: 42
      protocol: TCP
      targetPort: 42
    selector:
      workloadID_service-appserver-extend-extend-job: "true"
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    generation: 1
    labels:
      cattle.io/creator: norman
      workload.user.cattle.io/workloadselector: deployment-application-appserver-extend-job
    name: appserver-extend-job
    namespace: application
    selfLink: /apis/apps/v1/namespaces/application/deployment/appserver-extend-job
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        workload.user.cattle.io/workloadselector: deployment-application-appserver-extend-job
    strategy:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          workload.user.cattle.io/workloadselector: deployment-application-appserver-extend-job
      spec:
        containers:
        - env:
          - name: RUNPRO
            value: pro
          - name: aliyun_logs_catalina
            value: "stdout" 
          - name: aliyun_logs_access
            value: "/opt/logs/*.log"
          - name: aliyun_logs_catalina_tags
            value: "type=appserver-extend-xxx-catalina,topic=appserver-extend-xxx-extend-job-catalina"
          - name: aliyun_logs_access_tags
            value: "type=appserver-extend-xxx-access,topic=appserver-extend-xxx-extend-job-access"
          image: alpine
          imagePullPolicy: Always
          name: appserver-extend-job
          resources: {}
          securityContext:
            allowPrivilegeEscalation: false
            capabilities: {}
            privileged: false
            readOnlyRootFilesystem: false
            runAsNonRoot: false
          stdin: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          tty: true
        dnsPolicy: ClusterFirst
        imagePullSecrets:
        - name: registry-harbor
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
kind: List

Symptom:

From within the same namespace, pinging service-appserver-extend-extend-job reports that the host cannot be found.

Troubleshooting:

rancher kubectl describe services service-appserver-extend-extend-job -n application
Name:              service-appserver-extend-extend-job
Namespace:         application
Labels:            cattle.io/creator=norman
Annotations:       field.cattle.io/creatorId: user-w9lgp
                   field.cattle.io/ipAddresses: null
                   field.cattle.io/targetDnsRecordIds: null
                   field.cattle.io/targetWorkloadIds: ["deployment:application:appserver-extend-job"]
Selector:          workloadID_service-appserver-extend-extend-job=true
Type:              ClusterIP
IP:                None
Port:              default  42/TCP
TargetPort:        42/TCP
Endpoints:         <none>      # the fault: Endpoints is empty
Session Affinity:  None
Events:            <none>

Fix: the YAML defined the Service before the Deployment, so the Service could not find any backing pods. Once the cause was known, the fix was simple: in the YAML create the Deployment first, then the Service, and the problem went away.
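The fix can be expressed directly in the List ordering; a trimmed fragment (specs elided, same names as above):

```yaml
apiVersion: v1
kind: List
items:
- apiVersion: apps/v1
  kind: Deployment        # create the workload first ...
  metadata:
    name: appserver-extend-job
    namespace: application
  # ... (spec as above)
- apiVersion: v1
  kind: Service           # ... then the headless service that targets it
  metadata:
    name: service-appserver-extend-extend-job
    namespace: application
  # ... (spec as above)
```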

#Use docker to quickly set up the major vulnerability-learning platforms; currently 12 platforms can be brought up with one command

https://github.com/c0ny1/vulstudy

https://github.com/vulhub/vulhub

https://github.com/vulnspy

https://www.vsplate.com/labs.php

#Fixing "at sun.awt.FontConfiguration.getVersion(FontConfiguration.java" errors with docker openjdk (openjdk:8-jdk-alpine)

The cause is missing fonts.

Fix: install the ttf-dejavu font package:

RUN apk add --no-cache ttf-dejavu 

# plus the other commonly needed packages
RUN apk add --no-cache bash tini ttf-dejavu libc6-compat linux-pam krb5 krb5-libs

#awvs  docker

docker run --name wvs13 -p 3443:3443 -itd registry.cn-shanghai.aliyuncs.com/t3st0r/acunetix_13:20200220
admin@admin.cn
Admin@admin.cn

#Could not initialize class org.xerial.snappy.Snappy

The project uses the org.xerial.snappy.Snappy class. On a normal CentOS system everything works, but when testing in the microservice container (openjdk:8-jdk-alpine) one feature failed with the exception Could not initialize class org.xerial.snappy.Snappy.
Fix:
The openjdk:8-jdk-alpine image is based on Alpine Linux, so create a symlink:
ln -s /lib /lib64

The corresponding Dockerfile:

FROM openjdk:8-jdk-alpine
ARG RUNPRO
ENV TZ=Asia/Shanghai
RUN apk add -U tzdata
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN ln -s /lib /lib64   # added
RUN apk add --no-cache bash tini libc6-compat linux-pam krb5 krb5-libs # added
VOLUME /tmp
VOLUME /opt/logs
WORKDIR /opt/
COPY server-xx*.jar server-xx.jar
ENTRYPOINT ["java","-jar","server-xx.jar","--spring.profiles.active=${RUNPRO}"]

In the end I decided to just switch to the Oracle JDK; OpenJDK still feels somewhat unstable.

Reference: https://www.cnblogs.com/hellxz/p/11936994.html

Fixing "fatal: unable to set up default path; use --file" when running git clone via shell_exec in PHP

The error:

fatal: unable to set up default path; use --file
fatal: could not read Username for 'http://10.100.11.5': No such device or address

Demo code:

$giturl='http://10.100.11.5/appserver/appserver-api/';
$output=shell_exec("git clone {$giturl} 2>&1");
var_dump($output);

The www-data user had passwordless sudo configured.

Git credentials were stored for passwordless access (git config --global credential.helper store).

The code runs fine from the CLI.

But in the web context under Apache it fails with the error above; I tried many things along the way, none of which worked.

Reasoning:

1. The CLI works fine under su - www-data, so permissions should not be the problem.

2. Print the environment in both the CLI and the web:

system("env");

The environment variables turned out to differ greatly between the two.

Adding the missing HOME variable to the web environment solved the problem:

putenv("HOME=/home/www-data");
putenv("USER=www-data");
$giturl='http://10.100.11.5/appserver/appserver-api/';
$output=shell_exec("git clone {$giturl} 2>&1");

Another fix:

By default Apache 2.4 unsets the HOME environment variable; see /etc/apache2/envvars, line 4:

# envvars - default environment variables for apache2ctl

# this won't be correct after changing uid
unset HOME   # this is the line

# for supporting multiple apache2 instances

The Apache-side fix is also simple: edit envvars to comment out the unset and export a HOME variable:

#unset HOME   # comment this line out

# for supporting multiple apache2 instances
if [ "${APACHE_CONFDIR##/etc/apache2-}" != "${APACHE_CONFDIR}" ] ; then
        SUFFIX="-${APACHE_CONFDIR##/etc/apache2-}"
else
        SUFFIX=
fi

# add this
export HOME=/home/www-data

apachectl stop && apachectl start   (restart does not seem to reload the environment variables)

Problem solved. This small issue cost me several hours and had me doubting my own skills. ^_^

Fixing passive-mode errors when old devices connect via FTP: "Response: 227 Entering Passive Mode (10,1,0,9,85,148)"

Cause analysis:

1. The FTP client code was written in-house and is quite dated.

2. The FTP server sits on a cloud host's private network; its NIC is configured with the private IP. A public IP is bound, but through NAT.

3. The FTP server version is too new.
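Why the old client fails is visible in the 227 reply itself: it embeds the data-connection IP and port. Decoding the reply from the error message (a sketch, using the reply quoted above) shows the server handing out its private NAT address, which the client cannot reach:

```python
import re

reply = "227 Entering Passive Mode (10,1,0,9,85,148)"
# The six numbers are: four IP octets, then the port split as high,low bytes.
nums = [int(n) for n in re.search(r"\((.*?)\)", reply).group(1).split(",")]
ip = ".".join(map(str, nums[:4]))   # data-connection IP
port = nums[4] * 256 + nums[5]      # data-connection port
print(ip, port)  # 10.1.0.9 21908 -> a private address, unreachable from outside
```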

Fix:

1. Install an older FTP server, e.g. vsftpd-2.2.2-24.el6.x86_64

2. /etc/vsftpd/vsftpd.conf:

anonymous_enable=YES
local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=YES
xferlog_std_format=YES
listen=YES
pam_service_name=vsftpd
userlist_enable=YES
tcp_wrappers=YES
chroot_local_user=YES
chroot_list_enable=YES
chroot_list_file=/etc/vsftpd.chroot_list
pasv_address=118.3.2.1