Good afternoon. Two years have passed since the last article about parsing Habr was written, and some things have changed.
When I decided I wanted my own copy of Habr, I set out to write a parser that would save all of the authors' content to a database. How it went and which mistakes I ran into, you can read under the cut.
The first version of the parser. One thread, many problems
To start with, I decided to build a prototype script in which each article would be parsed and put into the database immediately upon download. Without thinking twice, I went with sqlite3, because it meant less work: no need to run a local server, or to create, inspect and delete anything.
one_thread.py
from bs4 import BeautifulSoup
import sqlite3
import requests
from datetime import datetime

def main(min, max):
    conn = sqlite3.connect('habr.db')
    c = conn.cursor()
    c.execute('PRAGMA encoding = "UTF-8"')
    c.execute("CREATE TABLE IF NOT EXISTS habr(id INT, author VARCHAR(255), title VARCHAR(255), content TEXT, tags TEXT)")

    start_time = datetime.now()
    c.execute("begin")
    for i in range(min, max):
        url = "https://m.habr.com/post/{}".format(i)
        try:
            r = requests.get(url)
        except Exception:
            # Log ids whose request failed outright
            with open("req_errors.txt", "a") as file:
                file.write(str(i) + "\n")
            continue
        if r.status_code != 200:
            print("{} - {}".format(i, r.status_code))
            continue

        html_doc = r.text
        soup = BeautifulSoup(html_doc, 'html.parser')
        try:
            author = soup.find(class_="tm-user-info__username").get_text()
            content = soup.find(id="post-content-body")
            content = str(content)
            title = soup.find(class_="tm-article-title__text").get_text()
            tags = soup.find(class_="tm-article__tags").get_text()
            tags = tags[5:]  # strip the leading label from the tags block
        except Exception:
            author, title, tags = "Error", "Error {}".format(r.status_code), "Error"
            content = "An error occurred while parsing this page."
        c.execute('INSERT INTO habr VALUES (?, ?, ?, ?, ?)', (i, author, title, content, tags))
        print(i)
    # A single commit, only after the whole range has been parsed
    c.execute("commit")
    print(datetime.now() - start_time)

main(1, 490406)
Everything according to the classics: Beautiful Soup plus requests, and a quick prototype is ready. Except that...
Pages are downloaded in a single thread.
If you interrupt the script, the whole database goes nowhere: after all, the commit only happens after all the parsing is done.
Of course, you can commit to the database after every insert, but then the script's run time grows significantly.
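A common middle ground, which the script above does not use, is to commit in batches: wrap every N inserts in one transaction, so that an interrupted run loses at most N rows instead of everything. A minimal sketch, assuming the same habr table as above (insert_batched and batch_size are illustrative names):

import sqlite3

def insert_batched(rows, batch_size=1000):
    # rows: an iterable of (id, author, title, content, tags) tuples
    conn = sqlite3.connect('habr.db')
    c = conn.cursor()
    for n, row in enumerate(rows, 1):
        c.execute('INSERT INTO habr VALUES (?, ?, ?, ?, ?)', row)
        if n % batch_size == 0:
            conn.commit()  # an interrupted run loses at most batch_size rows
    conn.commit()          # flush the final partial batch
    conn.close()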
Parsing the first 100 000 articles alone took me hours.
Then I came across an article by the user cointegrated, where I read about a few life hacks for speeding the process up:
Using multithreading speeds up downloading considerably.
You can fetch not the full version of Habr, but its mobile version.
For example, if a cointegrated article weighs 378 KB in the desktop version, the mobile version is only 126 KB.
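The size difference is easy to check with a couple of requests; the post id here is arbitrary, and the exact numbers will vary from article to article:

import requests

post_id = 282552  # arbitrary id, purely for illustration
desktop = requests.get("https://habr.com/post/{}".format(post_id))
mobile = requests.get("https://m.habr.com/post/{}".format(post_id))
print("desktop: {:.0f} KB".format(len(desktop.content) / 1024))
print("mobile:  {:.0f} KB".format(len(mobile.content) / 1024))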
The second version. Many threads, a temporary ban from Habr
When I scoured the Internet on the subject of multithreading in Python and settled on the simplest option, multiprocessing.dummy, I noticed that new problems arrived along with the multithreading.
SQLite3 refuses to work with more than one thread.
This is fixed with check_same_thread=False, but it is not the only error: when trying to insert into the database, errors sometimes came up that I could not resolve.
So I decided to abandon inserting articles straight into the database and, remembering the cointegrated solution, decided to use files instead, since there are no problems with multi-threaded writes to a file.
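For the record, the standard way to keep SQLite in a multi-threaded downloader is to funnel all writes through one dedicated writer thread fed by a queue, so only a single thread ever touches the connection. A sketch of that pattern (not the solution chosen here), assuming the habr table from the first script already exists:

import sqlite3
import threading
import queue

write_q = queue.Queue()

def db_writer():
    # The only thread that touches the connection, so even
    # check_same_thread=False is unnecessary.
    conn = sqlite3.connect('habr.db')
    while True:
        row = write_q.get()
        if row is None:  # sentinel value: shut down
            break
        conn.execute('INSERT INTO habr VALUES (?, ?, ?, ?, ?)', row)
        conn.commit()
    conn.close()

writer = threading.Thread(target=db_writer)
writer.start()
# ...downloader threads simply call write_q.put(row)...
write_q.put(None)  # signal shutdown once all downloads are done
writer.join()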
Habr starts banning you for using more than three threads.
Particularly zealous attempts to reach Habr can end in an IP ban for a couple of hours. So you have to limit yourself to 3 threads, but even that is already good: the time to fetch 100 articles drops from 26 to 12 seconds.
It is worth noting that this version is rather unstable, and on large batches of articles the download periodically fails.
async_v1.py
from bs4 import BeautifulSoup
import requests
import os, sys
import json
from multiprocessing.dummy import Pool as ThreadPool
from datetime import datetime
import logging

def worker(i):
    currentFile = os.path.join("files", "{}.json".format(i))
    if os.path.isfile(currentFile):
        logging.info("{} - File exists".format(i))
        return 1

    url = "https://m.habr.com/post/{}".format(i)
    try:
        r = requests.get(url)
    except Exception:
        # Log ids whose request failed outright
        with open("req_errors.txt", "a") as file:
            file.write(str(i) + "\n")
        return 2

    # Log requests blocked by the server
    if r.status_code == 503:
        with open("Error503.txt", "a") as write_file:
            write_file.write(str(i) + "\n")
        logging.warning('{} / 503 Error'.format(i))

    # The post does not exist or has been hidden
    if r.status_code != 200:
        logging.info("{} / {} Code".format(i, r.status_code))
        return r.status_code

    html_doc = r.text
    soup = BeautifulSoup(html_doc, 'html5lib')
    try:
        author = soup.find(class_="tm-user-info__username").get_text()
        timestamp = soup.find(class_='tm-user-meta__date')
        timestamp = timestamp['title']
        content = soup.find(id="post-content-body")
        content = str(content)
        title = soup.find(class_="tm-article-title__text").get_text()
        tags = soup.find(class_="tm-article__tags").get_text()
        tags = tags[5:]  # strip the leading label from the tags block
        # A label marking the post as a translation or a tutorial
        tm_tag = soup.find(class_="tm-tags tm-tags_post").get_text()
        rating = soup.find(class_="tm-votes-score").get_text()
    except Exception:
        author = title = tags = timestamp = tm_tag = rating = "Error"
        content = "An error occurred while parsing this page."
        logging.warning("Error parsing - {}".format(i))
        with open("Errors.txt", "a") as write_file:
            write_file.write(str(i) + "\n")

    # Save the article as JSON
    try:
        article = [i, timestamp, author, title, content, tm_tag, rating, tags]
        with open(currentFile, "w") as write_file:
            json.dump(article, write_file)
    except Exception:
        print(i)
        raise

if __name__ == '__main__':
    if len(sys.argv) < 3:
        print("min and max parameters are required. Usage: async_v1.py 1 100")
        sys.exit(1)
    min = int(sys.argv[1])
    max = int(sys.argv[2])

    # Make logging.info calls visible
    logging.basicConfig(level=logging.INFO)

    # With more than 3 threads Habr temporarily bans the IP
    pool = ThreadPool(3)

    # Start the timer and launch the threads
    start_time = datetime.now()
    results = pool.map(worker, range(min, max))

    # Print the elapsed time once all threads have finished
    pool.close()
    pool.join()
    print(datetime.now() - start_time)
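Given how unstable this version was, one stock mitigation (not something the original script does) is to let requests retry transient failures via urllib3's Retry; the worker would then call session.get(url) instead of requests.get(url):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry up to 3 times on 503 responses, with exponential backoff
session = requests.Session()
retries = Retry(total=3, backoff_factor=1, status_forcelist=[503])
session.mount("https://", HTTPAdapter(max_retries=retries))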
The third version. Final
While debugging the second version, I discovered that Habr, all of a sudden, has an API that the mobile version of the site talks to. It loads faster than the mobile version, since it is just JSON, which does not even need to be parsed. In the end, I decided to rewrite my script once more.
So, having found this API endpoint, we can start examining it.
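A couple of lines in a REPL are enough to see what it returns; the article id here is arbitrary, and the success / data / article structure is the one the final script relies on:

import requests

r = requests.get("https://m.habr.com/kek/v1/articles/282552/?fl=ru%2Cen&hl=ru")
data = r.json()
print(data['success'])
print(sorted(data['data']['article'].keys()))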
async_v2.py
import requests
import os, sys
import json
from multiprocessing.dummy import Pool as ThreadPool
from datetime import datetime
import logging

def worker(i):
    currentFile = os.path.join("files", "{}.json".format(i))
    if os.path.isfile(currentFile):
        logging.info("{} - File exists".format(i))
        return 1

    url = "https://m.habr.com/kek/v1/articles/{}/?fl=ru%2Cen&hl=ru".format(i)
    try:
        r = requests.get(url)
        if r.status_code == 503:
            logging.critical("503 Error")
            return 503
    except Exception:
        # Log ids whose request failed outright
        with open("req_errors.txt", "a") as file:
            file.write(str(i) + "\n")
        return 2

    data = json.loads(r.text)
    if data['success']:
        article = data['data']['article']

        id = article['id']
        is_tutorial = article['is_tutorial']
        time_published = article['time_published']
        comments_count = article['comments_count']
        lang = article['lang']
        tags_string = article['tags_string']
        title = article['title']
        content = article['text_html']
        reading_count = article['reading_count']
        author = article['author']['login']
        score = article['voting']['score']

        # Keep only the fields we need and dump them to a per-article JSON file
        data = (id, is_tutorial, time_published, title, content, comments_count, lang, tags_string, reading_count, author, score)
        with open(currentFile, "w") as write_file:
            json.dump(data, write_file)

if __name__ == '__main__':
    if len(sys.argv) < 3:
        print("min and max parameters are required. Usage: async_v2.py 1 100")
        sys.exit(1)
    min = int(sys.argv[1])
    max = int(sys.argv[2])

    # Make logging.info calls visible
    logging.basicConfig(level=logging.INFO)

    # With more than 3 threads Habr temporarily bans the IP
    pool = ThreadPool(3)

    # Start the timer and launch the threads
    start_time = datetime.now()
    results = pool.map(worker, range(min, max))

    # Print the elapsed time once all threads have finished
    pool.close()
    pool.join()
    print(datetime.now() - start_time)
The response contains fields relating both to the article itself and to the author who wrote it.
I did not keep the full JSON of every article, but saved only the fields I needed:
id
is_tutorial
time_published
title
content
comments_count
lang — the language the article is written in. So far it only contains en and ru.
tags_string — all the tags on the post
reading_count
author
score — the article's rating.
Thus, by using the API, I cut the script's run time down to 8 seconds per 100 URLs.
Once we have downloaded the data we need, it has to be processed and inserted into the database. There were no problems with this either:
parser.py
import json
import sqlite3
import os
import logging
from datetime import datetime

def parser(min, max):
    conn = sqlite3.connect('habr.db')
    c = conn.cursor()
    c.execute('PRAGMA encoding = "UTF-8"')
    # Disable write confirmation (fsync); this speeds things up several times over
    c.execute('PRAGMA synchronous = 0')
    c.execute("CREATE TABLE IF NOT EXISTS articles(id INTEGER, time_published TEXT, author TEXT, title TEXT, content TEXT, "
              "lang TEXT, comments_count INTEGER, reading_count INTEGER, score INTEGER, is_tutorial INTEGER, tags_string TEXT)")
    try:
        for i in range(min, max):
            try:
                filename = os.path.join("files", "{}.json".format(i))
                f = open(filename)
                data = json.load(f)
                (id, is_tutorial, time_published, title, content, comments_count, lang,
                 tags_string, reading_count, author, score) = data
                # For the sake of database readability you can sacrifice code readability. Or not?
                # If you think so, you can simply replace the tuple with the data argument. Up to you.
                c.execute('INSERT INTO articles VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)',
                          (id, time_published, author, title, content, lang,
                           comments_count, reading_count, score, is_tutorial, tags_string))
                f.close()
            except IOError:
                logging.info('FileNotExists')
                continue
    finally:
        conn.commit()

start_time = datetime.now()
parser(490000, 490918)
print(datetime.now() - start_time)
Statistics
Well, traditionally, to finish off, we can extract some statistics from the data:
Of the expected 490 406 articles, only around 228 thousand were actually downloaded. It turns out that more than half of the articles on Habr had been hidden or deleted.
The entire database, covering almost half a million articles, weighs 2.95 GB. In compressed form, 495 MB.
In total, there are 37804 authors on Habr. Keep in mind that these statistics are drawn only from live posts.
The most productive author on Habr is alizar, with 8774 articles.
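Numbers like these can be pulled straight out of the finished habr.db; a sketch, assuming the articles schema from parser.py:

import sqlite3

conn = sqlite3.connect('habr.db')
c = conn.cursor()

# total number of downloaded articles
print(c.execute("SELECT COUNT(*) FROM articles").fetchone()[0])

# number of distinct authors among live posts
print(c.execute("SELECT COUNT(DISTINCT author) FROM articles").fetchone()[0])

# the most productive author
print(c.execute("SELECT author, COUNT(*) AS n FROM articles "
                "GROUP BY author ORDER BY n DESC LIMIT 1").fetchone())

conn.close()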