Kimen
2021-05-21

【NAS In Docker】Back Up Important Data and Automatically Upload It to Aliyun Drive


Tip

Adapted from 秋水逸冰 (Teddysun)'s backup.sh script. Based on WeiCN's tutorial: "One-click backup script backup.sh (with added COS / Aliyun Drive support)". This post only covers the Aliyun Drive upload.

Tip

Backing up data takes just three steps:

  • Compress the files or directories
  • Encrypt the compressed archive
  • Upload the encrypted file to the cloud drive

# Compression

# Directory to store the compressed backup files
LOCALDIR="/root/backup/file/"
# Temporary directory used while creating the backup
TEMPDIR="/root/backup/temp/"
# Log file produced by the script during backup (note: this is a file, not a directory)
LOGFILE="/root/backup/backup.log"
# Directories to be compressed
# BACKUP is an array variable and can hold multiple paths
BACKUP[0]="/data/www/default/test1"
BACKUP[1]="/data/www/default/test2"
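For reference, this is roughly how the full script (see the appendix below) uses these variables: the archive is named after the host plus a timestamp, placed in LOCALDIR, and built from every path in the BACKUP array.

# Condensed from the full script below
BACKUPDATE=$(date +%Y%m%d%H%M%S)
TARFILE="${LOCALDIR}$(hostname)_${BACKUPDATE}.tgz"
# -z gzip compression, -c create archive, -P keep absolute paths, -f write to the given file
tar -zcPf "${TARFILE}" "${BACKUP[@]}"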

# Encryption

# Whether to encrypt the backup
ENCRYPTFLG=true
# Encryption password
BACKUPPASS=""

Encryption command:

openssl enc -aes256 -in "source file path" -out "output file path" -pass pass:"encryption password" -md sha256
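For example, to encrypt a compressed archive (the file names and password here are placeholders):

openssl enc -aes256 -in "/root/backup/file/nas_20210521.tgz" -out "/root/backup/file/nas_20210521.tgz.enc" -pass pass:"MyStrongPassword" -md sha256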

# Decryption

openssl enc -aes256 -in "encrypted file path" -out decrypted_backup.tgz -pass pass:"encryption password" -d -md sha256

# Aliyun Drive Configuration

Aliyun Drive CLI: aliyunpan

# Remote path on Aliyun Drive, counted from the root directory; the leading `/` can be omitted
ALI_FOLDER="backup"
# Whether to enable uploading
ALI_FLG=true
# Path to aliyunpan's main.py
ALI_PY_FILE="main.py"
# Aliyun Drive login-free token (REFRESH_TOKEN)
ALI_REFRESH_TOKEN=""
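With these values set, you can test the upload manually before wiring it into the backup script. This is the same call the script makes in its ali_upload function (upload subcommand `u` and the `-t` token flag); the token and file name below are placeholders:

python3 main.py -t "your_refresh_token" u /root/backup/file/nas_20210521.tgz.enc /backup/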

# Obtaining the Aliyun Drive REFRESH_TOKEN

  • Log in to the Aliyun Drive web version
  • Press F12 to open the developer console
  • Find the entry in Local Storage whose key is token
  • Inside that value, locate refresh_token, and you're done
  • And of course a screenshot, because it just wouldn't feel right without one

Note

Once you have the REFRESH_TOKEN, logging in again will invalidate the previous token, and you will need to fetch a new one.

# Other Configuration

# Number of days after which old backups are deleted; default 7, i.e. only the last 7 days of backups are kept
LOCALAGEDAILIES="7"
# Whether to delete remote files; default true
DELETE_REMOTE_FILE_FLG=true
# Root password for MySQL/MariaDB/Percona
MYSQL_ROOT_PASSWORD=""
# MySQL/MariaDB/Percona database names to back up; leave empty to back up all databases
# MYSQL_DATABASE_NAME is an array variable and can hold multiple names
MYSQL_DATABASE_NAME[0]="admin"
MYSQL_DATABASE_NAME[1]="test"
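For reference, these settings drive the MySQL step roughly as follows (condensed from the mysql_backup function in the appendix): with MYSQL_DATABASE_NAME left empty, all databases are dumped into one file; otherwise each listed database gets its own dump.

# All databases in one dump
mysqldump -u root -p"${MYSQL_ROOT_PASSWORD}" --all-databases > "${TEMPDIR}mysql_${BACKUPDATE}.sql"
# One dump per listed database, e.g. admin
mysqldump -u root -p"${MYSQL_ROOT_PASSWORD}" admin > "${TEMPDIR}admin_${BACKUPDATE}.sql"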

# Running the Script on a Schedule

The script must be run as root. Create a Linux cron job so the backup script runs every day at 4:00 AM.

# Common crond Commands

systemctl start   crond         # Start the service
systemctl stop    crond         # Stop the service
systemctl restart crond         # Restart the service
systemctl reload  crond         # Reload the configuration file
systemctl status  crond         # Check the service status

# Common crontab Commands

crontab -u 		# Operate on another user's crontab
crontab -l		# List the crontab file (show configured jobs)
crontab -e		# Edit the crontab file (edit jobs)
crontab -r		# Remove the crontab file (delete all jobs)
crontab -i		# Same as -r, but prompt for confirmation before deleting
# Example of job definition:
# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# |  |  |  |  |
# *  *  *  *  * user-name  command to be executed
  0  4  *  *  * /root/backup.sh
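If you prefer not to open an editor, the job can also be registered non-interactively. This is a sketch assuming the script lives at /root/backup.sh (as in the crontab entry above) and a systemd distribution where the cron service is named crond, as in the commands earlier:

chmod +x /root/backup.sh
# Make sure the cron daemon is enabled and running
systemctl enable --now crond
# Append the 4:00 AM entry to root's crontab
( crontab -l 2>/dev/null; echo "0 4 * * * /root/backup.sh" ) | crontab -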

# Wrapping Up

Finally, you should see the uploaded backup files (ending in .enc when encryption is enabled) in your Aliyun Drive.

# Appendix: Script Files

# The Original Script

https://github.com/teddysun/across/raw/master/backup.sh

# The Script Used in This Post

#!/usr/bin/env bash
# Copyright (C) 2013 - 2020 Teddysun <i@teddysun.com>
#
# This file is part of the LAMP script.
#
# LAMP is a powerful bash script for the installation of
# Apache + PHP + MySQL/MariaDB and so on.
# You can install Apache + PHP + MySQL/MariaDB in a very easy way.
# Just need to input numbers to choose what you want to install before installation.
# And all things will be done in a few minutes.
#
# Description:      Auto backup shell script
# Description URL:  https://teddysun.com/469.html
#
# Website:  https://lamp.sh
# Github:   https://github.com/teddysun/lamp
#
# You must modify the config before running it!!!
# Backup MySQL/MariaDB databases, files and directories
# Backup file is encrypted with AES256-CBC with a SHA256 message digest (optional)
# Auto transfer backup file to Google Drive (requires the rclone command) (optional)
# Auto transfer backup file to FTP server (optional)
# Auto delete Google Drive's or FTP server's remote file (optional)

[[ $EUID -ne 0 ]] && echo "Error: This script must be run as root!" && exit 1

########## START OF CONFIG ##########

# Encrypt flag (true: encrypt, false: do not encrypt)
ENCRYPTFLG=true

# WARNING: KEEP THE PASSWORD SAFE!!!
# The password used to encrypt the backup
# To decrypt backups made by this script, run the following command:
# openssl enc -aes256 -in [ENCRYPTED BACKUP] -out decrypted_backup.tgz -pass pass:[BACKUPPASS] -d -md sha256
BACKUPPASS=""

# Directory to store backups
LOCALDIR="/root/backup/file/"

# Temporary directory used during backup creation
TEMPDIR="/root/backup/temp/"

# File to log the outcome of backups
LOGFILE="/root/backup/backup.log"

# OPTIONAL:
# If you want to backup the MySQL database, enter the MySQL root password below, otherwise leave it blank
MYSQL_ROOT_PASSWORD=""

# Below is a list of MySQL database name that will be backed up
# If you want backup ALL databases, leave it blank.
MYSQL_DATABASE_NAME[0]=""

# Below is a list of files and directories that will be backed up in the tar backup
# For example:
# File: /data/www/default/test.tgz
# Directory: /data/www/default/test
BACKUP[0]="/data/www/default/test"

# Number of days to store daily local backups (default 7 days)
LOCALAGEDAILIES="7"

# Delete remote file from Google Drive or FTP server flag (true: delete, false: do not delete)
DELETE_REMOTE_FILE_FLG=true

# Rclone remote name
RCLONE_NAME=""

# Rclone remote folder name (default "")
RCLONE_FOLDER=""

# Cos remote folder name (default "")
COS_FOLDER=""

# AliyunDrive remote folder name (default "")
ALI_FOLDER="backup"

# Upload local file to FTP server flag (true: upload, false: not upload)
FTP_FLG=false

# Upload local file to Google Drive flag (true: upload, false: not upload)
RCLONE_FLG=false

# Upload local file to Cos flag (true: upload, false: not upload)
COS_FLG=false

# Upload local file to AliyunDrive flag (true: upload, false: not upload)
ALI_FLG=true
# AliyunDrive main.py
ALI_PY_FILE="main.py"
ALI_REFRESH_TOKEN=""

# FTP server
# OPTIONAL: If you want to upload to FTP server, enter the Hostname or IP address below
FTP_HOST=""

# FTP username
# OPTIONAL: If you want to upload to FTP server, enter the FTP username below
FTP_USER=""

# FTP password
# OPTIONAL: If you want to upload to FTP server, enter the username's password below
FTP_PASS=""

# FTP server remote folder
# OPTIONAL: If you want to upload to FTP server, enter the FTP remote folder below
# For example: public_html
FTP_DIR=""

########## END OF CONFIG ##########

# Date & Time
DAY=$(date +%d)
MONTH=$(date +%m)
YEAR=$(date +%C%y)
BACKUPDATE=$(date +%Y%m%d%H%M%S)
# Backup file name
TARFILE="${LOCALDIR}""$(hostname)"_"${BACKUPDATE}".tgz
# Encrypted backup file name
ENC_TARFILE="${TARFILE}.enc"
# Backup MySQL dump file name
SQLFILE="${TEMPDIR}mysql_${BACKUPDATE}.sql"

log() {
    #echo "$(date "+%Y-%m-%d %H:%M:%S")" "$1"
    echo -e "$(date "+%Y-%m-%d %H:%M:%S")" "$1" >> ${LOGFILE}
}

# Check for list of mandatory binaries
check_commands() {
    # This section checks for all of the binaries used in the backup
    # Do not check mysql command if you do not want to backup the MySQL database
    if [ -z "${MYSQL_ROOT_PASSWORD}" ]; then
        BINARIES=( cat cd du date dirname echo openssl pwd rm tar )
    else
        BINARIES=( cat cd du date dirname echo openssl mysql mysqldump pwd rm tar )
    fi

    # Iterate over the list of binaries, and if one isn't found, abort
    for BINARY in "${BINARIES[@]}"; do
        if [ ! "$(command -v "$BINARY")" ]; then
            log "$BINARY is not installed. Install it and try again"
            exit 1
        fi
    done

    # check rclone command
    RCLONE_COMMAND=false
    if [ "$(command -v "rclone")" ]; then
        RCLONE_COMMAND=true
    fi

    # check COS command
    COS_COMMAND=false
    if [ "$(command -v "coscmd")" ]; then
        COS_COMMAND=true
    fi

    # check AliyunDrive command
    ALI_COMMAND=false
    if [ -f "${ALI_PY_FILE}" ]; then
        ALI_COMMAND=true
    fi

    # check ftp command
    if ${FTP_FLG}; then
        if [ ! "$(command -v "ftp")" ]; then
            log "ftp is not installed. Install it and try again"
            exit 1
        fi
    fi
}

calculate_size() {
    local file_name=$1
    local file_size=$(du -h $file_name 2>/dev/null | awk '{print $1}')
    if [ "x${file_size}" = "x" ]; then
        echo "unknown"
    else
        echo "${file_size}"
    fi
}

# Backup MySQL databases
mysql_backup() {
    if [ -z "${MYSQL_ROOT_PASSWORD}" ]; then
        log "MySQL root password not set, MySQL backup skipped"
    else
        log "MySQL dump start"
        mysql -u root -p"${MYSQL_ROOT_PASSWORD}" 2>/dev/null <<EOF
exit
EOF
        if [ $? -ne 0 ]; then
            log "MySQL root password is incorrect. Please check it and try again"
            exit 1
        fi
        if [ "${MYSQL_DATABASE_NAME[@]}" == "" ]; then
            mysqldump -u root -p"${MYSQL_ROOT_PASSWORD}" --all-databases > "${SQLFILE}" 2>/dev/null
            if [ $? -ne 0 ]; then
                log "MySQL all databases backup failed"
                exit 1
            fi
            log "MySQL all databases dump file name: ${SQLFILE}"
            #Add MySQL backup dump file to BACKUP list
            BACKUP=(${BACKUP[@]} ${SQLFILE})
        else
            for db in ${MYSQL_DATABASE_NAME[@]}; do
                unset DBFILE
                DBFILE="${TEMPDIR}${db}_${BACKUPDATE}.sql"
                mysqldump -u root -p"${MYSQL_ROOT_PASSWORD}" ${db} > "${DBFILE}" 2>/dev/null
                if [ $? -ne 0 ]; then
                    log "MySQL database name [${db}] backup failed, please check database name is correct and try again"
                    exit 1
                fi
                log "MySQL database name [${db}] dump file name: ${DBFILE}"
                #Add MySQL backup dump file to BACKUP list
                BACKUP=(${BACKUP[@]} ${DBFILE})
            done
        fi
        log "MySQL dump completed"
    fi
}

start_backup() {
    [ "${#BACKUP[@]}" -eq 0 ] && echo "Error: You must to modify the [$(basename $0)] config before run it!" && exit 1

    log "Tar backup file start"
    tar -zcPf ${TARFILE} ${BACKUP[@]}
    if [ $? -gt 1 ]; then
        log "Tar backup file failed"
        exit 1
    fi
    log "Tar backup file completed"

    # Encrypt tar file
    if ${ENCRYPTFLG}; then
        log "Encrypt backup file start"
        openssl enc -aes256 -in "${TARFILE}" -out "${ENC_TARFILE}" -pass pass:"${BACKUPPASS}" -md sha256
        log "Encrypt backup file completed"

        # Delete unencrypted tar
        log "Delete unencrypted tar file: ${TARFILE}"
        rm -f ${TARFILE}
    fi

    # Delete MySQL temporary dump file
    for sql in $(ls ${TEMPDIR}*.sql); do
        log "Delete MySQL temporary dump file: ${sql}"
        rm -f ${sql}
    done

    if ${ENCRYPTFLG}; then
        OUT_FILE="${ENC_TARFILE}"
    else
        OUT_FILE="${TARFILE}"
    fi
    log "File name: ${OUT_FILE}, File size: $(calculate_size ${OUT_FILE})"
}

# Transfer backup file to Google Drive
# If you want to install rclone command, please visit website:
# https://rclone.org/downloads/
rclone_upload() {
    if ${RCLONE_FLG} && ${RCLONE_COMMAND}; then
        [ -z "${RCLONE_NAME}" ] && log "Error: RCLONE_NAME can not be empty!" && return 1
        if [ -n "${RCLONE_FOLDER}" ]; then
            rclone ls ${RCLONE_NAME}:${RCLONE_FOLDER} 2>&1 > /dev/null
            if [ $? -ne 0 ]; then
                log "Create the path ${RCLONE_NAME}:${RCLONE_FOLDER}"
                rclone mkdir ${RCLONE_NAME}:${RCLONE_FOLDER}
            fi
        fi
        log "Tranferring backup file: ${OUT_FILE} to Google Drive"
        rclone copy ${OUT_FILE} ${RCLONE_NAME}:${RCLONE_FOLDER} >> ${LOGFILE}
        if [ $? -ne 0 ]; then
            log "Error: Tranferring backup file: ${OUT_FILE} to Google Drive failed"
            return 1
        fi
        log "Tranferring backup file: ${OUT_FILE} to Google Drive completed"
    fi
}


# Transferring backup file to COS
cos_upload() {
    if ${COS_FLG} && ${COS_COMMAND}; then
        [ -z "${COS_FOLDER}" ] && log "Error: COS_FOLDER can not be empty!" && return 1
        log "Tranferring backup file: ${OUT_FILE} to COS"
        coscmd upload ${OUT_FILE} ${COS_FOLDER}/ >> ${LOGFILE}
        if [ $? -ne 0 ]; then
            log "Error: Tranferring backup file: ${OUT_FILE} to COS"
            return 1
        fi
        log "Tranferring backup file: ${OUT_FILE} to COS completed"
    fi
}

# Transferring backup file to AliyunDrive
ali_upload() {
    if ${ALI_FLG}; then
        [ -z "${ALI_FOLDER}" ] && log "Error: ALI_FOLDER can not be empty!" && return 1
        log "Tranferring backup file: ${OUT_FILE} to AliyunDrive"
        python3 ${ALI_PY_FILE} -t ${ALI_REFRESH_TOKEN} u ${OUT_FILE} /${ALI_FOLDER}/ >> ${LOGFILE}
        if [ $? -ne 0 ]; then
            log "Error: Tranferring backup file: ${OUT_FILE} to AliyunDrive"
            return 1
        fi
        log "Tranferring backup file: ${OUT_FILE} to AliyunDrive completed"
    fi
}

# Transferring backup file to FTP server
ftp_upload() {
    if ${FTP_FLG}; then
        [ -z "${FTP_HOST}" ] && log "Error: FTP_HOST can not be empty!" && return 1
        [ -z "${FTP_USER}" ] && log "Error: FTP_USER can not be empty!" && return 1
        [ -z "${FTP_PASS}" ] && log "Error: FTP_PASS can not be empty!" && return 1
        [ -z "${FTP_DIR}" ] && log "Error: FTP_DIR can not be empty!" && return 1
        local FTP_OUT_FILE=$(basename ${OUT_FILE})
        log "Tranferring backup file: ${FTP_OUT_FILE} to FTP server"
        ftp -in ${FTP_HOST} 2>&1 >> ${LOGFILE} <<EOF
user $FTP_USER $FTP_PASS
binary
lcd $LOCALDIR
cd $FTP_DIR
put $FTP_OUT_FILE
quit
EOF
        if [ $? -ne 0 ]; then
            log "Error: Tranferring backup file: ${FTP_OUT_FILE} to FTP server failed"
            return 1
        fi
        log "Tranferring backup file: ${FTP_OUT_FILE} to FTP server completed"
    fi
}

# Get file date
get_file_date() {
    #Approximate a 30-day month and 365-day year
    DAYS=$(( $((10#${YEAR}*365)) + $((10#${MONTH}*30)) + $((10#${DAY})) ))
    unset FILEYEAR FILEMONTH FILEDAY FILEDAYS FILEAGE
    FILEYEAR=$(echo "$1" | cut -d_ -f2 | cut -c 1-4)
    FILEMONTH=$(echo "$1" | cut -d_ -f2 | cut -c 5-6)
    FILEDAY=$(echo "$1" | cut -d_ -f2 | cut -c 7-8)
    if [[ "${FILEYEAR}" && "${FILEMONTH}" && "${FILEDAY}" ]]; then
        #Approximate a 30-day month and 365-day year
        FILEDAYS=$(( $((10#${FILEYEAR}*365)) + $((10#${FILEMONTH}*30)) + $((10#${FILEDAY})) ))
        FILEAGE=$(( 10#${DAYS} - 10#${FILEDAYS} ))
        return 0
    fi
    return 1
}

# Delete Google Drive's old backup file
delete_gdrive_file() {
    local FILENAME=$1
    if ${DELETE_REMOTE_FILE_FLG} && ${RCLONE_COMMAND}; then
        rclone ls ${RCLONE_NAME}:${RCLONE_FOLDER}/${FILENAME} 2>&1 > /dev/null
        if [ $? -eq 0 ]; then
            rclone delete ${RCLONE_NAME}:${RCLONE_FOLDER}/${FILENAME} >> ${LOGFILE}
            if [ $? -eq 0 ]; then
                log "Google Drive's old backup file: ${FILENAME} has been deleted"
            else
                log "Failed to delete Google Drive's old backup file: ${FILENAME}"
            fi
        else
            log "Google Drive's old backup file: ${FILENAME} is not exist"
        fi
    fi
}

# Delete COS's old backup file
delete_cos_file() {
    local FILENAME=$1
    if ${DELETE_REMOTE_FILE_FLG} && ${COS_COMMAND}; then
        coscmd delete ${COS_FOLDER}/${FILENAME} >> ${LOGFILE}
        if [ $? -eq 0 ]; then
            log "COS's old backup file: ${FILENAME} has been deleted"
        else
            log "Failed to delete COS's old backup file: ${FILENAME}"
        fi
    fi
}

# Delete AliyunDrive's old backup file
delete_ali_file() {
    local FILENAME=$1
    if ${DELETE_REMOTE_FILE_FLG} && ${ALI_COMMAND}; then
        python3 ${ALI_PY_FILE} -t ${ALI_REFRESH_TOKEN} del /${ALI_FOLDER}/${FILENAME} >> ${LOGFILE}
        if [ $? -eq 0 ]; then
            log "AliyunDrive's old backup file: ${FILENAME} has been deleted"
        else
            log "Failed to delete AliyunDrive's old backup file: ${FILENAME}"
        fi
    fi
}

# Delete FTP server's old backup file
delete_ftp_file() {
    local FILENAME=$1
    if ${DELETE_REMOTE_FILE_FLG} && ${FTP_FLG}; then
        ftp -in ${FTP_HOST} 2>&1 >> ${LOGFILE} <<EOF
user $FTP_USER $FTP_PASS
cd $FTP_DIR
del $FILENAME
quit
EOF
        if [ $? -eq 0 ]; then
            log "FTP server's old backup file: ${FILENAME} has been deleted"
        else
            log "Failed to delete FTP server's old backup file: ${FILENAME}"
        fi
    fi
}

# Clean up old file
clean_up_files() {
    cd ${LOCALDIR} || exit
    if ${ENCRYPTFLG}; then
        LS=($(ls *.enc))
    else
        LS=($(ls *.tgz))
    fi
    for f in ${LS[@]}; do
        get_file_date ${f}
        if [ $? -eq 0 ]; then
            if [[ ${FILEAGE} -gt ${LOCALAGEDAILIES} ]]; then
                #rm -f ${f}
                log "Old backup file name: ${f} has been deleted"
                delete_gdrive_file ${f}
                delete_ftp_file ${f}
                delete_cos_file ${f}
                delete_ali_file ${f}
            fi
        fi
    done
}

# Delete local files immediately
rm_local_file() {
    cd ${LOCALDIR} || exit
    rm -rf *.enc
}

# Main progress
STARTTIME=$(date +%s)

# Check if the backup folders exist and are writeable
[ ! -d "${LOCALDIR}" ] && mkdir -p ${LOCALDIR}
[ ! -d "${TEMPDIR}" ] && mkdir -p ${TEMPDIR}

log "Backup progress start"
check_commands
mysql_backup
start_backup
log "Backup progress complete"

log "Upload progress start"
rclone_upload
ftp_upload
cos_upload
ali_upload
log "Upload progress complete"

log "Cleaning up"
#rm_local_file
clean_up_files
ENDTIME=$(date +%s)
DURATION=$((ENDTIME - STARTTIME))
log "All done"
log "Backup and transfer completed in ${DURATION} seconds"