2020-10-03

Gdrive

References:
https://stackoverflow.com/questions/58589734/pydrive-trying-to-upload-files-to-google-drive-from-a-remote-server
https://medium.com/analytics-vidhya/how-to-connect-google-drive-to-python-using-pydrive-9681b2a14f20
https://medium.com/@fsflyingsoar/%E7%
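
The linked posts cover uploading files to Google Drive with PyDrive from a headless server. A minimal sketch, assuming PyDrive is installed and a client_secrets.json sits in the working directory; CommandLineAuth() avoids the browser pop-up that LocalWebserverAuth() needs, and the file name is a placeholder:

from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

# On a remote server there is no local browser, so authenticate by
# pasting the verification code into the terminal.
gauth = GoogleAuth()
gauth.CommandLineAuth()
drive = GoogleDrive(gauth)

# Create a Drive file object, attach local content, and upload.
f = drive.CreateFile({'title': 'backup.tar.gz'})  # placeholder name
f.SetContentFile('backup.tar.gz')
f.Upload()
print('Uploaded file with id', f['id'])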
2020-10-01

Server Admin - Ubuntu 16.04 LTS

#!/usr/bin/env python3
import time
import os

while True:
    try:
        a = int(input("\nCheck Status press '1'\n\nStart All Services press '2'\n\nStart v2ray VPN press '
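
The preview cuts off mid-menu. A sketch of how such an interactive service menu might continue, assuming systemd services; the service names, the option numbering beyond '2', and the quit option are assumptions:

#!/usr/bin/env python3
import os

SERVICES = ["v2ray", "nginx"]  # assumed service list

while True:
    try:
        a = int(input("\nCheck Status press '1'"
                      "\n\nStart All Services press '2'"
                      "\n\nStart v2ray VPN press '3'"
                      "\n\nQuit press '0'\n"))
    except ValueError:
        print("Please enter a number.")
        continue
    if a == 1:
        for s in SERVICES:
            os.system("systemctl status " + s)
    elif a == 2:
        for s in SERVICES:
            os.system("systemctl start " + s)
    elif a == 3:
        os.system("systemctl start v2ray")
    elif a == 0:
        break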
2020-10-01

Baidu Cloud

Install the tools:
apt-get install python-pip
pip install requests
pip install bypy
or pip install bypy==1.6.10

Authorize and log in: run bypy info, which prints the information below. Following the prompt, open the grey https link in a browser; if a Baidu Netdisk account is logged in at that point, a long authorization code appears. Copy it.
visit: https://openapi.baidu.com/oauth/2.0/authorize?scope=basic+netdisk&
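
Once authorized, bypy can also be driven from Python rather than the shell. A minimal sketch, assuming the one-time authorization above has completed (the local and remote paths are placeholders):

from bypy import ByPy

bp = ByPy()          # reuses the token saved by the one-time authorization
bp.info()            # show quota, confirming the login works
bp.upload('data.csv', 'backup/data.csv')  # placeholder paths
bp.list('backup')    # list the remote directory to verify the upload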
2020-09-20

IP Pool

import urllib
from bs4 import BeautifulSoup
import requests
import os
import time
import lxml
import json
import csv
import telnetlib

rows = []
for i in range(1, 3598):  # the free proxy pages are numbered from 1
    url = 'https://www.kuaidaili.com/free/inha/' + str(i)
    html = r
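
The preview stops mid-fetch. A sketch of the usual pattern for this kind of proxy pool, scraping one list page and then verifying each proxy with telnetlib; the table selectors, the User-Agent header, and the timeout are assumptions about the page's markup:

import requests
import telnetlib
from bs4 import BeautifulSoup

def check_proxy(ip, port, timeout=3):
    """Return True if the proxy accepts a TCP connection."""
    try:
        telnetlib.Telnet(ip, port=int(port), timeout=timeout)
        return True
    except Exception:
        return False

url = 'https://www.kuaidaili.com/free/inha/1'
html = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}).text
soup = BeautifulSoup(html, 'lxml')

rows = []
# Assumed markup: each proxy row exposes data-title="IP" / data-title="PORT" cells.
for tr in soup.select('table tbody tr'):
    ip = tr.select_one('td[data-title="IP"]')
    port = tr.select_one('td[data-title="PORT"]')
    if ip and port and check_proxy(ip.text.strip(), port.text.strip()):
        rows.append([ip.text.strip(), port.text.strip()])

print(rows)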
2020-09-20

Auto-add to cart

Download ChromeDriver first, matching the Chrome version you are using:
https://chromedriver.chromium.org/downloads

# coding=utf-8
import os
from selenium import webdriver
import datetime
import time
from os import path

# Change this to the path of your downloaded chromedriver
driver = webdriver.Chrom
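
A sketch of the timed add-to-cart flow the imports suggest: wait until a target time, then click the button. The chromedriver path, product URL, sale time, and button selector are all placeholders; the post's real selectors are not visible in the preview. find_element_by_css_selector matches the Selenium 3 API current in 2020:

# coding=utf-8
import datetime
import time
from selenium import webdriver

driver = webdriver.Chrome('/path/to/chromedriver')  # placeholder path
driver.get('https://example.com/product/123')       # placeholder product page

# Poll the clock until the sale opens, as flash-sale bots typically do.
target = datetime.datetime(2020, 9, 20, 10, 0, 0)   # placeholder sale time
while datetime.datetime.now() < target:
    time.sleep(0.1)

# Placeholder selector for the add-to-cart button.
driver.find_element_by_css_selector('#add-to-cart').click()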
2020-09-20

Transforming to JSON data format

import lxml
from bs4 import BeautifulSoup
import time
import random
import csv
import codecs
import unicodecsv as csv  # note: this rebinds the csv name to unicodecsv
import json
import urllib.request as req

url = 'https://hk.appledaily.com/pf/api/v3/content/fetch/query-feed?query=%7B%22feedQuery%22%3A%
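
The query string is truncated, but the pattern is fetching a JSON feed and decoding it. A minimal sketch with an empty placeholder query, since the real feedQuery parameters are cut off in the preview:

import json
import urllib.request as req

url = 'https://hk.appledaily.com/pf/api/v3/content/fetch/query-feed?query=%7B%7D'  # placeholder query
request = req.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
with req.urlopen(request) as resp:
    data = json.loads(resp.read().decode('utf-8'))

# Pretty-print to inspect the structure before transforming it further.
print(json.dumps(data, ensure_ascii=False, indent=2))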
2020-09-20

Crawling data and saving to CSV/Excel

# -*- coding: UTF-8 -*-
import requests
import pandas as pd
import lxml
from bs4 import BeautifulSoup
import time
import random
import csv
#import codecs
#import unicodecsv as csv

name, score, comment = [], [], []
URL = 'https://ithelp.ithome.com.tw/article
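
A sketch of the save step the title promises, assuming the three lists were filled by the scraping loop; the dummy data and output file names are placeholders:

import pandas as pd

name, score, comment = ['a', 'b'], [5, 4], ['good', 'ok']  # dummy data standing in for the crawl results

df = pd.DataFrame({'name': name, 'score': score, 'comment': comment})
df.to_csv('articles.csv', index=False, encoding='utf-8-sig')  # utf-8-sig keeps Excel happy with CJK text
df.to_excel('articles.xlsx', index=False)                     # needs openpyxl installed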
2020-09-20

Google photos crawler

import urllib
import threading
from bs4 import BeautifulSoup
import requests
import os
import time
import lxml

# Initial list of page links
page_links_list = ['https://www.google.com.hk/search?q=emoji&hl=zh-HK&gbv=2&biw=1263&bih=625&tbm=isch&ei=sLn
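
A sketch of the download side the imports (threading, requests, os) point to: parse img tags out of a results page and fetch each one on a worker thread. The trimmed query URL, the output directory, and the markup assumptions are placeholders; gbv=2 requests the basic-HTML results page, whose img tags carry direct src URLs:

import os
import threading
import requests
from bs4 import BeautifulSoup

page_url = 'https://www.google.com.hk/search?q=emoji&tbm=isch&gbv=2'  # trimmed query, an assumption
os.makedirs('images', exist_ok=True)  # assumed output directory

def download(img_url, index):
    """Fetch one image and write it to disk."""
    r = requests.get(img_url, timeout=10)
    with open(os.path.join('images', '%d.jpg' % index), 'wb') as f:
        f.write(r.content)

html = requests.get(page_url, headers={'User-Agent': 'Mozilla/5.0'}).text
soup = BeautifulSoup(html, 'lxml')
threads = []
for i, img in enumerate(soup.find_all('img')):
    src = img.get('src', '')
    if src.startswith('http'):
        t = threading.Thread(target=download, args=(src, i))
        t.start()
        threads.append(t)
for t in threads:
    t.join()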
2020-04-19

import requests and BeautifulSoup

import requests
from bs4 import BeautifulSoup

url = 'https://xxxxxxxxx'
page = requests.get(url)
page.text

soup = BeautifulSoup(page.text, 'html.parser')
print(soup.prettify())
soup.find_all('p')
soup.find_all('p')[2].get_text()
2020-04-18

python crawler

from selenium import webdriver
driver = webdriver.Chrome("xxx location")
driver.get("http://xxxxxxxx")
driver.page_source

from bs4 import BeautifulSoup
soup = BeautifulSoup(driver.page_source, 'lxml')
soup.select_one('#
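
The selector is cut off in the preview. A complete version of the same pattern, with the driver path, URL, and CSS selector as placeholders:

from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Chrome('/path/to/chromedriver')  # placeholder path
driver.get('https://example.com')                   # placeholder URL

# Selenium renders any JavaScript; hand the final HTML to BeautifulSoup for parsing.
soup = BeautifulSoup(driver.page_source, 'lxml')
element = soup.select_one('#content')               # placeholder selector
if element is not None:
    print(element.get_text(strip=True))

driver.quit()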