2020-11-08

Teaching materials

Selenium

https://www.techbeamers.com/locate-elements-selenium-python/#locate-element-by-name
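The linked tutorial walks through Selenium's locator strategies, including locating by name. A minimal locate-by-name sketch in the selenium 3 style of that era; the chromedriver path, URL, and field name are placeholder assumptions:

from selenium import webdriver

driver = webdriver.Chrome('/path/to/chromedriver')  # placeholder path
driver.get('https://example.com/login')

# Locate the input whose name attribute is "username", as in the tutorial.
field = driver.find_element_by_name('username')  # selenium 3 locator API
field.send_keys('alice')

driver.quit()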

2020-10-26

Google sheet crawler

# -*- coding: UTF-8 -*-
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import requests
import lxml
from bs4 import BeautifulSoup

scopes = ["https://spreadsheets.google.com/feeds"]
credentials = ServiceAccountCred
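The excerpt cuts off at the credential setup. A runnable sketch of the same idea, assuming a service-account key file named creds.json and a spreadsheet called my-crawl-results (both placeholders):

import gspread
from oauth2client.service_account import ServiceAccountCredentials
import requests
from bs4 import BeautifulSoup

# The drive scope is commonly added alongside the feeds scope so the
# service account can open spreadsheets shared with it.
scopes = ["https://spreadsheets.google.com/feeds",
          "https://www.googleapis.com/auth/drive"]
credentials = ServiceAccountCredentials.from_json_keyfile_name("creds.json", scopes)
client = gspread.authorize(credentials)
sheet = client.open("my-crawl-results").sheet1

# Fetch a page and push its <h1> text into the first cell.
resp = requests.get("https://example.com")
soup = BeautifulSoup(resp.text, "lxml")
sheet.update_cell(1, 1, soup.h1.get_text(strip=True))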
2020-10-13

Cron scheduled jobs

https://blog.csdn.net/aaronjny/article/details/80551696
https://my.oschina.net/xiaomijiejie/blog/1537522
https://tendcode.com/article/hello-crontab/
crontab can be used to set up scheduled tasks on Linux. This post records how to configure and use crontab on Ubuntu, plus a simple test.
1. Enable rsyslog logging support for crontab. Open a terminal and enter: cd /etc/rsysl
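For reference, the scheduled job itself is a single crontab line (edit with crontab -e); the script and log paths below are placeholders:

# minute hour day-of-month month day-of-week  command
30 2 * * * /usr/bin/python3 /home/user/crawler.py >> /home/user/crawler.log 2>&1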
2020-10-03

Gdrive

https://stackoverflow.com/questions/58589734/pydrive-trying-to-upload-files-to-google-drive-from-a-remote-server
https://medium.com/analytics-vidhya/how-to-connect-google-drive-to-python-using-pydrive-9681b2a14f20
https://medium.com/@fsflyingsoar/%E7%
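A minimal PyDrive upload sketch, following the linked guides; the file name is a placeholder:

from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

# LocalWebserverAuth opens a browser for OAuth; on a headless server the
# linked Stack Overflow thread points at CommandLineAuth() instead.
gauth = GoogleAuth()
gauth.LocalWebserverAuth()
drive = GoogleDrive(gauth)

# "report.csv" is a placeholder file name.
f = drive.CreateFile({"title": "report.csv"})
f.SetContentFile("report.csv")
f.Upload()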
2020-10-01

Server Admin - Ubuntu 16.04 LTS

#!/usr/bin/env python3
import time
import os

while True:
    try:
        a = int(input("\nCheck Status press '1'\n\nStart All Services press '2'\n\nStart v2ray VPN press '
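A self-contained sketch of that menu loop; the systemctl calls on a placeholder nginx unit stand in for the real service and v2ray commands, which the excerpt truncates:

#!/usr/bin/env python3
import os

while True:
    try:
        choice = int(input(
            "\nCheck Status press '1'\n"
            "Start All Services press '2'\n"
            "Quit press '0'\n> "))
    except ValueError:
        print("Please enter a number.")
        continue
    if choice == 1:
        os.system("systemctl status nginx")   # placeholder service
    elif choice == 2:
        os.system("systemctl start nginx")    # placeholder service
    elif choice == 0:
        break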
2020-10-01

Baidu Cloud

Install the tools:
apt-get install python-pip
pip install requests
pip install bypy
or pip install bypy==1.6.10
Authorize and log in: run bypy info, which prints the information below. Follow the prompt and open the grey https link below in a browser. If your Baidu Netdisk account is already logged in, a long authorization code will appear; copy it.
visit: https://openapi.baidu.com/oauth/2.0/authorize?scope=basic+netdisk&
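After pasting the authorization code back into the terminal, bypy can transfer files. Typical commands (the file name is a placeholder):

bypy info               # show quota; triggers the authorization flow on first run
bypy upload report.csv  # upload a file into the /apps/bypy folder on the Netdisk
bypy list               # list the remote app folder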
2020-09-20

IP Pool

import urllib
from bs4 import BeautifulSoup
import requests
import os
import time
import lxml
import json
import csv
import telnetlib

rows = []
i = 1
for i in range(3597):
    url = 'https://www.kuaidaili.com/free/inha/' + str(i)
    html = r
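A minimal sketch of the idea: scrape one free-proxy listing page and keep the proxies that accept a TCP connection. The table selector is an assumption (kuaidaili changes its markup and rate-limits aggressively), and telnetlib, still in the stdlib in 2020, has since been removed in Python 3.13:

import requests
from bs4 import BeautifulSoup
import telnetlib
import time

url = 'https://www.kuaidaili.com/free/inha/1'
html = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}).text
soup = BeautifulSoup(html, 'lxml')

alive = []
for row in soup.select('table tbody tr'):
    cells = [td.get_text(strip=True) for td in row.find_all('td')]
    if len(cells) < 2:
        continue
    ip, port = cells[0], cells[1]
    try:
        # Cheap liveness check: can we open the proxy's port at all?
        telnetlib.Telnet(ip, port=int(port), timeout=2)
        alive.append((ip, port))
    except Exception:
        pass
    time.sleep(1)  # be polite to the listing site

print(alive)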
2020-09-20

Auto-add to cart

Download ChromeDriver first, matching the Chrome version you are using:
https://chromedriver.chromium.org/downloads

# coding=utf-8
import os
from selenium import webdriver
import datetime
import time
from os import path

# Change chromedriver here to the path where you downloaded it
driver = webdriver.Chrom
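A compact sketch of the add-to-cart flow in the selenium 3 style used above; the chromedriver path, product URL, and button id are all placeholder assumptions:

# coding=utf-8
from selenium import webdriver
import time

driver = webdriver.Chrome('/path/to/chromedriver')  # placeholder path
driver.get('https://example.com/product/123')       # placeholder product page

time.sleep(2)  # crude wait for the page to render
button = driver.find_element_by_id('add-to-cart')   # placeholder element id
button.click()

driver.quit()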
2020-09-20

Transforming to JSON data format

import lxml
from bs4 import BeautifulSoup
import time
import random
import csv
import codecs
import unicodecsv as csv
import json
import urllib.request as req

url = 'https://hk.appledaily.com/pf/api/v3/content/fetch/query-feed?query=%7B%22feedQuery%22%3A%
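A minimal sketch of the technique: fetch a JSON API endpoint with urllib and decode it into Python objects. The URL is a placeholder, since the real Apple Daily feed URL above is truncated in the excerpt:

import json
import urllib.request as req

request = req.Request('https://example.com/api/feed.json',
                      headers={'User-Agent': 'Mozilla/5.0'})
with req.urlopen(request) as resp:
    data = json.loads(resp.read().decode('utf-8'))

# Write the parsed structure back out as pretty-printed JSON.
with open('feed.json', 'w', encoding='utf-8') as f:
    json.dump(data, f, ensure_ascii=False, indent=2)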
2020-09-20

Crawling data and saving to CSV/Excel

# -*- coding: UTF-8 -*-
import requests
import pandas as pd
import lxml
from bs4 import BeautifulSoup
import time
import random
import csv
#import codecs
#import unicodecsv as csv

name, score, comment = [], [], []
URL = 'https://ithelp.ithome.com.tw/article
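A sketch of the save step the excerpt is heading toward: collect fields into lists, build a pandas DataFrame, and write both CSV and Excel. The page URL and CSS selector are placeholder assumptions:

# -*- coding: UTF-8 -*-
import requests
import pandas as pd
from bs4 import BeautifulSoup

URL = 'https://example.com/articles'
soup = BeautifulSoup(requests.get(URL).text, 'lxml')

names = [a.get_text(strip=True) for a in soup.select('h3 a')]
df = pd.DataFrame({'name': names})

df.to_csv('articles.csv', index=False, encoding='utf-8-sig')  # utf-8-sig so Excel displays Chinese text correctly
df.to_excel('articles.xlsx', index=False)                     # requires the openpyxl package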