Or a bit of applied Tetris-ology.
Everything new is the well-forgotten old.
Epigraph.
Problem statement
The current PostgreSQL log file needs to be downloaded periodically from the AWS cloud onto a local Linux host. Not in real time, but, let's say, with a small delay.
The log file update is downloaded every 5 minutes.
In AWS, the log file is rotated every hour.
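Launching the download every 5 minutes is a job for plain cron. A minimal sketch, assuming the script shown below is installed as /usr/local/bin/download_aws_piece.sh, that the monitored database has id 1, that the result goes to /var/tmp, and that the RDS log names use UTC hours (all of these values are assumptions, not part of the original setup):
# crontab entry: every 5 minutes pull the current hour's log (note the escaped % signs, required in crontab)
*/5 * * * * /usr/local/bin/download_aws_piece.sh "$(date -u +\%Y-\%m-\%d-\%H)" 1 /var/tmp/postgresql.log.current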
Tools used
The log file is downloaded to the host by a bash script that calls the AWS API «download-db-log-file-portion».
Parameters:
- --db-instance-identifier: the instance name in AWS;
- --log-file-name: the name of the currently generated log file;
- --max-items: the total number of items returned in the command output, i.e. the size of the downloaded portion;
- --starting-token: the token marking where the portion starts.
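Put together, a single portion request looks roughly like this (the region, instance name, hour label and portion size are placeholders; text output is assumed, which is what the script's later grep for NEXTTOKEN relies on):
aws rds download-db-log-file-portion \
    --region eu-west-1 \
    --db-instance-identifier my-instance \
    --log-file-name error/postgresql.log.2019-01-29-15 \
    --max-items 1024 \
    --output text > portion_1.log
# the next call adds --starting-token with the token printed in the previous output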
And it is simply an interesting task, good for practice and a bit of variety during working hours.
I would assume that, the problem being so mundane, it has already been solved. But a quick googling turned up no solutions, and I had no particular desire to dig deeper. Either way, it makes for decent practice.
Formalizing the task
The final log file is a set of lines of variable length. Graphically, a log file can be pictured roughly like this:
Ring a bell already? What does Tetris have to do with it? Here is what.
If we picture the possible cases that arise when the next file is downloaded (for simplicity, let all lines here have the same length), we get the standard Tetris shapes:
1) The file is downloaded in one piece and is final. The portion size is larger than the final file:
2) The file has a continuation. The portion size is smaller than the final file:
3) The file is a continuation of the previous file and has a continuation itself. The portion size is smaller than the remainder of the final file:
4) The file is a continuation of the previous file and is final. The portion size is larger than the remainder of the final file:
The task is to assemble the rectangle, that is, to play Tetris at a new level.
Problems that come up along the way
1) Gluing a line together from two portions
No particular problems here, really. A standard exercise from an introductory programming course.
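A minimal sketch of the gluing, in the spirit of what the full script below does with first_str and last_str (file names are illustrative, and the second portion is assumed to be the final one):
first_str=`tail -1 portion_1.log`            # possibly incomplete last line of the previous portion
last_str=`head -1 portion_2.log`             # the remainder of that line at the start of the next portion
echo "$first_str$last_str" >> result.log     # the two halves glued back into one line
tail -n +2 portion_2.log >> result.log       # the rest of the next portion is appended as-is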
The optimal portion size
Now this is a bit more interesting.
Unfortunately, there is no way to use an offset after the starting-portion token:
As you already know the option --starting-token is used to specify where to start paginating. This option takes String values which would mean that if you try to add an offset value in front of the Next Token string, the option will not be taken into consideration as an offset.
So the file has to be read piece by piece, portion by portion.
If you read in large portions, the number of reads is minimal, but the amount of data transferred is maximal.
If you read in small portions, it is the other way around: the number of reads is maximal, but the amount of data is minimal.
Therefore, to cut down on traffic and for the overall elegance of the solution, I had to come up with something that, unfortunately, smells a bit like a crutch.
To illustrate, consider the process of downloading a log file in two heavily simplified scenarios. The number of reads in both cases depends on the portion size.
1) Downloading in small portions:
2) Downloading in large portions:
As usual, the optimal solution is somewhere in the middle.
The portion size starts out minimal, but in the course of reading it can be increased to cut the number of reads.
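A stripped-down sketch of that rule, with the same constants as in the script further down (start at 1024 items, triple the portion after every three reads, never exceed 1048576):
let item_size=1024            # starting portion size
let max_item_size=1048576     # hard cap
let growth_factor=3
let growth_counter=0
let growth_counter_max=3
for read_no in 1 2 3 4 5 6 7 8 9 10
do
    echo "read #$read_no: --max-items $item_size"
    let growth_counter=$growth_counter+1
    if [[ $growth_counter -ge $growth_counter_max ]]; then
        let item_size=$item_size*$growth_factor    # grow after every few reads
        let growth_counter=0
    fi
    if [[ $item_size -gt $max_item_size ]]; then
        let item_size=$max_item_size               # but never beyond the cap
    fi
done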
It should be noted that the problem of choosing the optimal size of the portion being read has not been fully solved yet and needs deeper study and analysis. Maybe a little later.
General description of the implementation
Service tables used
CREATE TABLE endpoint
(
id SERIAL ,
host text
);
TABLE database
(
id SERIAL ,
β¦
last_aws_log_time text ,
last_aws_nexttoken text ,
aws_max_item_size integer
);
last_aws_log_time - the timestamp of the last downloaded log file, in YYYY-MM-DD-HH24 format.
last_aws_nexttoken - the text token of the last downloaded portion.
aws_max_item_size - the empirically chosen initial portion size.
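A seeding sketch for these tables (the host, the ids and the starting portion size are made-up examples; endpoint_id is one of the columns elided above and is what the script's join relies on; the other elided columns are assumed to be nullable or defaulted):
psql -h MONITOR_ENDPOINT.rds.amazonaws.com -U USER -d MONITOR_DATABASE <<'SQL'
INSERT INTO endpoint (host) VALUES ('my-instance.abcdefgh.eu-west-1.rds.amazonaws.com');
INSERT INTO database (endpoint_id, aws_max_item_size) VALUES (1, 1024);
SQL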
Full text of the script
download_aws_piece.sh
#!/bin/bash
#########################################################
# download_aws_piece.sh
# download a piece of the log from AWS
# version HABR
let min_item_size=1024
let max_item_size=1048576
let growth_factor=3
let growth_counter=1
let growth_counter_max=3
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh:''STARTED'
AWS_LOG_TIME=$1
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh:AWS_LOG_TIME='$AWS_LOG_TIME
database_id=$2
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh:database_id='$database_id
RESULT_FILE=$3
endpoint=`psql -h MONITOR_ENDPOINT.rds.amazonaws.com -U USER -d MONITOR_DATABASE -A -t -c "select e.host from endpoint e join database d on e.id = d.endpoint_id where d.id = $database_id "`
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh:endpoint='$endpoint
db_instance=`echo $endpoint | awk -F"." '{print toupper($1)}'`
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh:db_instance='$db_instance
LOG_FILE=$RESULT_FILE'.tmp_log'
TMP_FILE=$LOG_FILE'.tmp'
TMP_MIDDLE=$LOG_FILE'.tmp_mid'
TMP_MIDDLE2=$LOG_FILE'.tmp_mid2'
current_aws_log_time=`psql -h MONITOR_ENDPOINT.rds.amazonaws.com -U USER -d MONITOR_DATABASE -A -t -c "select last_aws_log_time from database where id = $database_id "`
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh:current_aws_log_time='$current_aws_log_time
if [[ $current_aws_log_time != $AWS_LOG_TIME ]];
then
is_new_log='1'
if ! psql -h MONITOR_ENDPOINT.rds.amazonaws.com -U USER -d MONITOR_DATABASE -v ON_ERROR_STOP=1 -A -t -q -c "update database set last_aws_log_time = '$AWS_LOG_TIME' where id = $database_id "
then
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh: FATAL_ERROR - update database set last_aws_log_time .'
exit 1
fi
else
is_new_log='0'
fi
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh:is_new_log='$is_new_log
let last_aws_max_item_size=`psql -h MONITOR_ENDPOINT.rds.amazonaws.com -U USER -d MONITOR_DATABASE -A -t -c "select aws_max_item_size from database where id = $database_id "`
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh: last_aws_max_item_size='$last_aws_max_item_size
let count=1
if [[ $is_new_log == '1' ]];
then
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh: START DOWNLOADING OF NEW AWS LOG'
if ! aws rds download-db-log-file-portion \
--max-items $last_aws_max_item_size \
--region REGION \
--db-instance-identifier $db_instance \
--log-file-name error/postgresql.log.$AWS_LOG_TIME > $LOG_FILE
then
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh: FATAL_ERROR - Could not get log from AWS .'
exit 2
fi
else
next_token=`psql -h MONITOR_ENDPOINT.rds.amazonaws.com -U USER -d MONITOR_DATABASE -v ON_ERROR_STOP=1 -A -t -c "select last_aws_nexttoken from database where id = $database_id "`
if [[ $next_token == '' ]];
then
next_token='0'
fi
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh: CONTINUE DOWNLOADING OF AWS LOG'
if ! aws rds download-db-log-file-portion \
--max-items $last_aws_max_item_size \
--starting-token $next_token \
--region REGION \
--db-instance-identifier $db_instance \
--log-file-name error/postgresql.log.$AWS_LOG_TIME > $LOG_FILE
then
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh: FATAL_ERROR - Could not get log from AWS .'
exit 3
fi
line_count=`cat $LOG_FILE | wc -l`
let lines=$line_count-1
tail -$lines $LOG_FILE > $TMP_MIDDLE
mv -f $TMP_MIDDLE $LOG_FILE
fi
next_token_str=`cat $LOG_FILE | grep NEXTTOKEN`
next_token=`echo $next_token_str | awk -F" " '{ print $2}' `
grep -v NEXTTOKEN $LOG_FILE > $TMP_FILE
if [[ $next_token == '' ]];
then
cp $TMP_FILE $RESULT_FILE
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh: NEXTTOKEN NOT FOUND - FINISH '
rm $LOG_FILE
rm $TMP_FILE
rm $TMP_MIDDLE
rm $TMP_MIDDLE2
exit 0
else
psql -h MONITOR_ENDPOINT.rds.amazonaws.com -U USER -d MONITOR_DATABASE -v ON_ERROR_STOP=1 -A -t -q -c "update database set last_aws_nexttoken = '$next_token' where id = $database_id "
fi
first_str=`tail -1 $TMP_FILE`
line_count=`cat $TMP_FILE | wc -l`
let lines=$line_count-1
head -$lines $TMP_FILE > $RESULT_FILE
###############################################
# MAIN LOOP
let count=2
while [[ $next_token != '' ]];
do
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh: count='$count
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh: START DOWNLOADING OF AWS LOG'
if ! aws rds download-db-log-file-portion \
--max-items $last_aws_max_item_size \
--starting-token $next_token \
--region REGION \
--db-instance-identifier $db_instance \
--log-file-name error/postgresql.log.$AWS_LOG_TIME > $LOG_FILE
then
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh: FATAL_ERROR - Could not get log from AWS .'
exit 4
fi
next_token_str=`cat $LOG_FILE | grep NEXTTOKEN`
next_token=`echo $next_token_str | awk -F" " '{ print $2}' `
TMP_FILE=$LOG_FILE'.tmp'
grep -v NEXTTOKEN $LOG_FILE > $TMP_FILE
last_str=`head -1 $TMP_FILE`
if [[ $next_token == '' ]];
then
concat_str=$first_str$last_str
echo $concat_str >> $RESULT_FILE
line_count=`cat $TMP_FILE | wc -l`
let lines=$line_count-1
tail -$lines $TMP_FILE >> $RESULT_FILE
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh: NEXTTOKEN NOT FOUND - FINISH '
rm $LOG_FILE
rm $TMP_FILE
rm $TMP_MIDDLE
rm $TMP_MIDDLE2
exit 0
fi
if [[ $next_token != '' ]];
then
let growth_counter=$growth_counter+1
if [[ $growth_counter -gt $growth_counter_max ]];
then
let last_aws_max_item_size=$last_aws_max_item_size*$growth_factor
let growth_counter=1
fi
if [[ $last_aws_max_item_size -gt $max_item_size ]];
then
let last_aws_max_item_size=$max_item_size
fi
psql -h MONITOR_ENDPOINT.rds.amazonaws.com -U USER -d MONITOR_DATABASE -A -t -q -c "update database set last_aws_nexttoken = '$next_token' where id = $database_id "
concat_str=$first_str$last_str
echo $concat_str >> $RESULT_FILE
line_count=`cat $TMP_FILE | wc -l`
let lines=$line_count-1
#############################
#Get middle of file
head -$lines $TMP_FILE > $TMP_MIDDLE
line_count=`cat $TMP_MIDDLE | wc -l`
let lines=$line_count-1
tail -$lines $TMP_MIDDLE > $TMP_MIDDLE2
cat $TMP_MIDDLE2 >> $RESULT_FILE
first_str=`tail -1 $TMP_FILE`
fi
let count=$count+1
done
#
#################################################################
exit 0
Fragments of the script, with some explanations:
The script's input parameters:
- The timestamp from the log file name, in YYYY-MM-DD-HH24 format: AWS_LOG_TIME=$1
- The database ID: database_id=$2
- The name of the assembled log file: RESULT_FILE=$3
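A typical invocation therefore looks like this (the hour label, database id and result path are examples):
./download_aws_piece.sh 2019-01-29-15 1 /var/tmp/postgresql.log.2019-01-29-15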
Get the timestamp of the last downloaded log file:
current_aws_log_time=`psql -h MONITOR_ENDPOINT.rds.amazonaws.com -U USER -d MONITOR_DATABASE -A -t -c "select last_aws_log_time from database where id = $database_id "`
If the timestamp of the last downloaded log file does not match the input parameter, a new log file is downloaded:
if [[ $current_aws_log_time != $AWS_LOG_TIME ]];
then
is_new_log='1'
if ! psql -h ENDPOINT.rds.amazonaws.com -U USER -d MONITOR_DATABASE -v ON_ERROR_STOP=1 -A -t -c "update database set last_aws_log_time = '$AWS_LOG_TIME' where id = $database_id "
then
echo '***download_aws_piece.sh -FATAL_ERROR - update database set last_aws_log_time .'
exit 1
fi
else
is_new_log='0'
fi
Get the value of the nexttoken marker from the downloaded file:
next_token_str=`cat $LOG_FILE | grep NEXTTOKEN`
next_token=`echo $next_token_str | awk -F" " '{ print $2}' `
An empty nexttoken value marks the end of the download.
In a loop we read the file portion by portion, gluing lines together along the way and increasing the portion size:
Main loop
# MAIN LOOP
let count=2
while [[ $next_token != '' ]];
do
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh: count='$count
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh: START DOWNLOADING OF AWS LOG'
if ! aws rds download-db-log-file-portion \
--max-items $last_aws_max_item_size \
--starting-token $next_token \
--region REGION \
--db-instance-identifier $db_instance \
--log-file-name error/postgresql.log.$AWS_LOG_TIME > $LOG_FILE
then
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh: FATAL_ERROR - Could not get log from AWS .'
exit 4
fi
next_token_str=`cat $LOG_FILE | grep NEXTTOKEN`
next_token=`echo $next_token_str | awk -F" " '{ print $2}' `
TMP_FILE=$LOG_FILE'.tmp'
grep -v NEXTTOKEN $LOG_FILE > $TMP_FILE
last_str=`head -1 $TMP_FILE`
if [[ $next_token == '' ]];
then
concat_str=$first_str$last_str
echo $concat_str >> $RESULT_FILE
line_count=`cat $TMP_FILE | wc -l`
let lines=$line_count-1
tail -$lines $TMP_FILE >> $RESULT_FILE
echo $(date +%Y%m%d%H%M)': download_aws_piece.sh: NEXTTOKEN NOT FOUND - FINISH '
rm $LOG_FILE
rm $TMP_FILE
rm $TMP_MIDDLE
rm $TMP_MIDDLE2
exit 0
fi
if [[ $next_token != '' ]];
then
let growth_counter=$growth_counter+1
if [[ $growth_counter -gt $growth_counter_max ]];
then
let last_aws_max_item_size=$last_aws_max_item_size*$growth_factor
let growth_counter=1
fi
if [[ $last_aws_max_item_size -gt $max_item_size ]];
then
let last_aws_max_item_size=$max_item_size
fi
psql -h MONITOR_ENDPOINT.rds.amazonaws.com -U USER -d MONITOR_DATABASE -A -t -q -c "update database set last_aws_nexttoken = '$next_token' where id = $database_id "
concat_str=$first_str$last_str
echo $concat_str >> $RESULT_FILE
line_count=`cat $TMP_FILE | wc -l`
let lines=$line_count-1
#############################
#Get middle of file
head -$lines $TMP_FILE > $TMP_MIDDLE
line_count=`cat $TMP_MIDDLE | wc -l`
let lines=$line_count-1
tail -$lines $TMP_MIDDLE > $TMP_MIDDLE2
cat $TMP_MIDDLE2 >> $RESULT_FILE
first_str=`tail -1 $TMP_FILE`
fi
let count=$count+1
done
What next?
So the first intermediate task, downloading the log file from the cloud, is solved. What do we do with the downloaded log?
First of all, the log file has to be parsed and the actual queries extracted from it.
The task is not particularly hard: the simplest bash script copes with it just fine.
upload_log_query.sh
#!/bin/bash
#########################################################
# upload_log_query.sh
# Upload the log_query table from the downloaded aws file
# version HABR
###########################################################
echo 'TIMESTAMP:'$(date +%c)' Upload log_query table '
source_file=$1
echo 'source_file='$source_file
database_id=$2
echo 'database_id='$database_id
beginer=' '
first_line='1'
let "line_count=0"
sql_line=' '
sql_flag=' '
space=' '
cat $source_file | while read line
do
line="$space$line"
if [[ $first_line == "1" ]]; then
beginer=`echo $line | awk -F" " '{ print $1}' `
first_line='0'
fi
current_beginer=`echo $line | awk -F" " '{ print $1}' `
if [[ $current_beginer == $beginer ]]; then
if [[ $sql_flag == '1' ]]; then
sql_flag='0'
log_date=`echo $sql_line | awk -F" " '{ print $1}' `
log_time=`echo $sql_line | awk -F" " '{ print $2}' `
duration=`echo $sql_line | awk -F" " '{ print $5}' `
#replace ' with ''
sql_modline=`echo "$sql_line" | sed "s/'/''/g"`
sql_line=' '
################
#PROCESSING OF THE SQL-SELECT IS HERE
if ! psql -h ENDPOINT.rds.amazonaws.com -U USER -d DATABASE -v ON_ERROR_STOP=1 -A -t -c "select log_query('$ip_port',$database_id , '$log_date' , '$log_time' , '$duration' , '$sql_modline' )"
then
echo 'FATAL_ERROR - log_query '
exit 1
fi
################
fi #if [[ $sql_flag == '1' ]]; then
let "line_count=line_count+1"
check=`echo $line | awk -F" " '{ print $8}' `
check_sql=${check^^}
#echo 'check_sql='$check_sql
if [[ $check_sql == 'SELECT' ]]; then
sql_flag='1'
sql_line="$sql_line$line"
ip_port=`echo $sql_line | awk -F":" '{ print $4}' `
fi
else
if [[ $sql_flag == '1' ]]; then
sql_line="$sql_line$line"
fi
fi #if [[ $current_beginer == $beginer ]]; then
done
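The parser above relies on a log line layout like the one below (the exact prefix depends on log_line_prefix; the line itself is made up): the first two whitespace-separated fields are the date and time, the fifth is the duration in milliseconds, the eighth is the statement's first keyword, and the fourth colon-separated field is the client ip(port):
line='2019-01-29 15:00:01 UTC:10.0.0.15(35462):app_user@billing:[21505]:LOG:  duration: 1001.765 ms  statement: SELECT * FROM orders WHERE id = 42'
echo "$line" | awk -F" " '{ print $1, $2, $5, $8 }'   # 2019-01-29 15:00:01 1001.765 SELECT
echo "$line" | awk -F":" '{ print $4 }'               # 10.0.0.15(35462)  -> ip_port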
Now we can work with the query extracted from the log file.
And several useful possibilities open up.
The parsed queries need to be stored somewhere. The service table log_query is used for that:
CREATE TABLE log_query
(
id SERIAL ,
queryid bigint ,
query_md5hash text not null ,
database_id integer not null ,
timepoint timestamp without time zone not null,
duration double precision not null ,
query text not null ,
explained_plan text[],
plan_md5hash text ,
explained_plan_wo_costs text[],
plan_hash_value text ,
baseline_id integer ,
ip text ,
port text
);
ALTER TABLE log_query ADD PRIMARY KEY (id);
ALTER TABLE log_query ADD CONSTRAINT queryid_timepoint_unique_key UNIQUE (queryid, timepoint );
ALTER TABLE log_query ADD CONSTRAINT query_md5hash_timepoint_unique_key UNIQUE (query_md5hash, timepoint );
CREATE INDEX log_query_timepoint_idx ON log_query (timepoint);
CREATE INDEX log_query_queryid_idx ON log_query (queryid);
ALTER TABLE log_query ADD CONSTRAINT database_id_fk FOREIGN KEY (database_id) REFERENCES database (id) ON DELETE CASCADE ;
The parsed query is processed by the plpgsql function log_query.
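From the parsing script it is called through psql, roughly like this (all literal values are illustrative):
psql -h ENDPOINT.rds.amazonaws.com -U USER -d DATABASE -A -t -c "select log_query( '10.0.0.15(35462)' , 1 , '2019-01-29' , '15:00:01' , '1001.765' , 'SELECT * FROM orders WHERE id = 42' )"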
log_query.sql
--log_query.sql
--version HABR
CREATE OR REPLACE FUNCTION log_query( ip_port text ,log_database_id integer , log_date text , log_time text , duration text , sql_line text ) RETURNS boolean AS $$
DECLARE
result boolean ;
log_timepoint timestamp without time zone ;
log_duration double precision ;
pos integer ;
log_query text ;
activity_string text ;
log_md5hash text ;
log_explain_plan text[] ;
log_planhash text ;
log_plan_wo_costs text[] ;
database_rec record ;
pg_stat_query text ;
test_log_query text ;
log_query_rec record;
found_flag boolean;
pg_stat_history_rec record ;
port_start integer ;
port_end integer ;
client_ip text ;
client_port text ;
log_queryid bigint ;
log_query_text text ;
pg_stat_query_text text ;
BEGIN
result = TRUE ;
RAISE NOTICE '***log_query';
port_start = position('(' in ip_port);
port_end = position(')' in ip_port);
client_ip = substring( ip_port from 1 for port_start-1 );
client_port = substring( ip_port from port_start+1 for port_end-port_start-1 );
SELECT e.host , d.name , d.owner_pwd
INTO database_rec
FROM database d JOIN endpoint e ON e.id = d.endpoint_id
WHERE d.id = log_database_id ;
log_timepoint = to_timestamp(log_date||' '||log_time,'YYYY-MM-DD HH24-MI-SS');
log_duration = duration:: double precision;
pos = position ('SELECT' in UPPER(sql_line) );
log_query = substring( sql_line from pos for LENGTH(sql_line));
log_query = regexp_replace(log_query,' +',' ','g');
log_query = regexp_replace(log_query,';+','','g');
log_query = trim(trailing ' ' from log_query);
log_md5hash = md5( log_query::text );
--Explain execution plan--
EXECUTE 'SELECT dblink_connect(''LINK1'',''host='||database_rec.host||' dbname='||database_rec.name||' user=DATABASE password='||database_rec.owner_pwd||' '')';
log_explain_plan = ARRAY ( SELECT * FROM dblink('LINK1', 'EXPLAIN '||log_query ) AS t (plan text) );
log_plan_wo_costs = ARRAY ( SELECT * FROM dblink('LINK1', 'EXPLAIN ( COSTS FALSE ) '||log_query ) AS t (plan text) );
PERFORM dblink_disconnect('LINK1');
--------------------------
BEGIN
INSERT INTO log_query
(
query_md5hash ,
database_id ,
timepoint ,
duration ,
query ,
explained_plan ,
plan_md5hash ,
explained_plan_wo_costs ,
plan_hash_value ,
ip ,
port
)
VALUES
(
log_md5hash ,
log_database_id ,
log_timepoint ,
log_duration ,
log_query ,
log_explain_plan ,
md5(log_explain_plan::text) ,
log_plan_wo_costs ,
md5(log_plan_wo_costs::text),
client_ip ,
client_port
);
activity_string = 'New query has logged '||
' database_id = '|| log_database_id ||
' query_md5hash='||log_md5hash||
' , timepoint = '||to_char(log_timepoint,'YYYYMMDD HH24:MI:SS');
RAISE NOTICE '%',activity_string;
PERFORM pg_log( log_database_id , 'log_query' , activity_string);
EXCEPTION
WHEN unique_violation THEN
RAISE NOTICE '*** unique_violation *** query already has logged';
END;
SELECT queryid
INTO log_queryid
FROM log_query
WHERE query_md5hash = log_md5hash AND
timepoint = log_timepoint;
IF log_queryid IS NOT NULL
THEN
RAISE NOTICE 'log_query with query_md5hash = % and timepoint = % already has a QUERYID = %',log_md5hash,log_timepoint , log_queryid ;
RETURN result;
END IF;
------------------------------------------------
RAISE NOTICE 'Update queryid';
SELECT *
INTO log_query_rec
FROM log_query
WHERE query_md5hash = log_md5hash AND timepoint = log_timepoint ;
log_query_rec.query=regexp_replace(log_query_rec.query,';+','','g');
FOR pg_stat_history_rec IN
SELECT
queryid ,
query
FROM
pg_stat_db_queries
WHERE
database_id = log_database_id AND
queryid is not null
LOOP
pg_stat_query = pg_stat_history_rec.query ;
pg_stat_query=regexp_replace(pg_stat_query,'\n+',' ','g');
pg_stat_query=regexp_replace(pg_stat_query,'\t+',' ','g');
pg_stat_query=regexp_replace(pg_stat_query,' +',' ','g');
pg_stat_query=regexp_replace(pg_stat_query,'\$.','%','g');
log_query_text = trim(trailing ' ' from log_query_rec.query);
pg_stat_query_text = pg_stat_query;
--SELECT log_query_rec.query like pg_stat_query INTO found_flag ;
IF (log_query_text LIKE pg_stat_query_text) THEN
found_flag = TRUE ;
ELSE
found_flag = FALSE ;
END IF;
IF found_flag THEN
UPDATE log_query SET queryid = pg_stat_history_rec.queryid WHERE query_md5hash = log_md5hash AND timepoint = log_timepoint ;
activity_string = ' updated queryid = '||pg_stat_history_rec.queryid||
' for log_query with id = '||log_query_rec.id
;
RAISE NOTICE '%',activity_string;
EXIT ;
END IF ;
END LOOP ;
RETURN result ;
END
$$ LANGUAGE plpgsql;
During processing, the service table pg_stat_db_queries is used; it contains a snapshot of the current queries from the pg_stat_history table (the use of that table is described separately):
TABLE pg_stat_db_queries
(
database_id integer,
queryid bigint ,
query text ,
max_time double precision
);
TABLE pg_stat_history
(
β¦
database_id integer ,
β¦
queryid bigint ,
β¦
max_time double precision ,
β¦
);
The function enables a number of useful ways of working with the queries from the log file, namely:
Capability #1: a history of query executions
Very useful when you start working on a performance incident: look at the history first, and find out when the slowdown began.
Then, classically, look for external causes. Maybe the database load simply spiked, and the particular query has nothing to do with it.
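With log_query populated, the execution history of a single query is one select away; a sketch (the host names and the queryid value are placeholders):
psql -h MONITOR_ENDPOINT.rds.amazonaws.com -U USER -d MONITOR_DATABASE -c "
    SELECT timepoint , duration , ip , port
    FROM log_query
    WHERE queryid = 1237430309438971376
    ORDER BY timepoint ; "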
Adding a new record to the log_query table
port_start = position('(' in ip_port);
port_end = position(')' in ip_port);
client_ip = substring( ip_port from 1 for port_start-1 );
client_port = substring( ip_port from port_start+1 for port_end-port_start-1 );
SELECT e.host , d.name , d.owner_pwd
INTO database_rec
FROM database d JOIN endpoint e ON e.id = d.endpoint_id
WHERE d.id = log_database_id ;
log_timepoint = to_timestamp(log_date||' '||log_time,'YYYY-MM-DD HH24-MI-SS');
log_duration = to_number(duration,'99999999999999999999D9999999999');
pos = position ('SELECT' in UPPER(sql_line) );
log_query = substring( sql_line from pos for LENGTH(sql_line));
log_query = regexp_replace(log_query,' +',' ','g');
log_query = regexp_replace(log_query,';+','','g');
log_query = trim(trailing ' ' from log_query);
RAISE NOTICE 'log_query=%',log_query ;
log_md5hash = md5( log_query::text );
--Explain execution plan--
EXECUTE 'SELECT dblink_connect(''LINK1'',''host='||database_rec.host||' dbname='||database_rec.name||' user=DATABASE password='||database_rec.owner_pwd||' '')';
log_explain_plan = ARRAY ( SELECT * FROM dblink('LINK1', 'EXPLAIN '||log_query ) AS t (plan text) );
log_plan_wo_costs = ARRAY ( SELECT * FROM dblink('LINK1', 'EXPLAIN ( COSTS FALSE ) '||log_query ) AS t (plan text) );
PERFORM dblink_disconnect('LINK1');
--------------------------
BEGIN
INSERT INTO log_query
(
query_md5hash ,
database_id ,
timepoint ,
duration ,
query ,
explained_plan ,
plan_md5hash ,
explained_plan_wo_costs ,
plan_hash_value ,
ip ,
port
)
VALUES
(
log_md5hash ,
log_database_id ,
log_timepoint ,
log_duration ,
log_query ,
log_explain_plan ,
md5(log_explain_plan::text) ,
log_plan_wo_costs ,
md5(log_plan_wo_costs::text),
client_ip ,
client_port
);
Capability #2: storing query execution plans
At this point an objection, or clarifying comment, may arise: "But there is already auto_explain." Indeed there is, but what good is it if the execution plan is stored in the same log file and, in order to keep it for later analysis, you would have to parse that log file?
What I needed, though, was:
first, to store the execution plan in a service table of the monitoring database;
second, to be able to compare execution plans with one another, so that a change in a query's execution plan is immediately visible.
We have the query with its concrete execution parameters. Getting and saving its execution plan with EXPLAIN is an elementary task.
Moreover, using EXPLAIN (COSTS FALSE) one can obtain the skeleton of the plan, which is then used to compute the plan's hash value; that helps later on, when analyzing the history of execution plan changes.
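And since the hash of the COSTS FALSE skeleton is stored in plan_hash_value, spotting a plan change for a given query is just a grouping over that column; a sketch (host names and the md5 value are placeholders):
psql -h MONITOR_ENDPOINT.rds.amazonaws.com -U USER -d MONITOR_DATABASE -c "
    SELECT plan_hash_value ,
           min(timepoint) AS first_seen ,
           max(timepoint) AS last_seen ,
           count(*) AS executions
    FROM log_query
    WHERE query_md5hash = '0123456789abcdef0123456789abcdef'
    GROUP BY plan_hash_value
    ORDER BY first_seen ; "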
Getting the execution plan template
--Explain execution plan--
EXECUTE 'SELECT dblink_connect(''LINK1'',''host='||database_rec.host||' dbname='||database_rec.name||' user=DATABASE password='||database_rec.owner_pwd||' '')';
log_explain_plan = ARRAY ( SELECT * FROM dblink('LINK1', 'EXPLAIN '||log_query ) AS t (plan text) );
log_plan_wo_costs = ARRAY ( SELECT * FROM dblink('LINK1', 'EXPLAIN ( COSTS FALSE ) '||log_query ) AS t (plan text) );
PERFORM dblink_disconnect('LINK1');
Capability #3: using the query log for monitoring
Since the performance metrics are set up not for the query text but for its ID, the queries from the log file have to be linked to the queries for which performance metrics are configured.
If only in order to know the exact time at which a performance incident occurred.
Thus, when a performance incident occurs for a query ID, there will be a reference to the specific query, with its specific parameter values and the exact execution time and duration. Getting this information from the pg_stat_statements view alone is not possible.
Finding the query's queryid and updating the record in the log_query table
SELECT *
INTO log_query_rec
FROM log_query
WHERE query_md5hash = log_md5hash AND timepoint = log_timepoint ;
log_query_rec.query=regexp_replace(log_query_rec.query,';+','','g');
FOR pg_stat_history_rec IN
SELECT
queryid ,
query
FROM
pg_stat_db_queries
WHERE
database_id = log_database_id AND
queryid is not null
LOOP
pg_stat_query = pg_stat_history_rec.query ;
pg_stat_query=regexp_replace(pg_stat_query,'\n+',' ','g');
pg_stat_query=regexp_replace(pg_stat_query,'\t+',' ','g');
pg_stat_query=regexp_replace(pg_stat_query,' +',' ','g');
pg_stat_query=regexp_replace(pg_stat_query,'\$.','%','g');
log_query_text = trim(trailing ' ' from log_query_rec.query);
pg_stat_query_text = pg_stat_query;
--SELECT log_query_rec.query like pg_stat_query INTO found_flag ;
IF (log_query_text LIKE pg_stat_query_text) THEN
found_flag = TRUE ;
ELSE
found_flag = FALSE ;
END IF;
IF found_flag THEN
UPDATE log_query SET queryid = pg_stat_history_rec.queryid WHERE query_md5hash = log_md5hash AND timepoint = log_timepoint ;
activity_string = ' updated queryid = '||pg_stat_history_rec.queryid||
' for log_query with id = '||log_query_rec.id
;
RAISE NOTICE '%',activity_string;
EXIT ;
END IF ;
END LOOP ;
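The matching above works because the normalized text from pg_stat_statements, with its $N placeholders turned into %, becomes a LIKE pattern for the literal query captured in the log. A self-contained illustration (table and values are made up):
psql -h MONITOR_ENDPOINT.rds.amazonaws.com -U USER -d MONITOR_DATABASE <<'SQL'
SELECT 'SELECT * FROM t WHERE id = 42 AND name = ''abc'''
       LIKE regexp_replace('SELECT * FROM t WHERE id = $1 AND name = $2',
                           '\$.', '%', 'g') AS matched;   -- returns t
SQL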
Afterword
In the end, the described technique found its application in …
Although, of course, in my personal view as the author, the algorithm for choosing and adjusting the size of the downloaded portion still needs more work. The problem has not yet been solved in the general case. It will probably be interesting.
But that is a completely different story…
Source: habr.com