Fast CRUD creation with Nest, @nestjsx/crud and TestMace

REST APIs have by now become the standard for web application development, allowing development to be split into independent parts. For the UI, various popular frameworks such as Angular, React, Vue and others are currently in use. Backend developers can choose from a wide variety of languages and frameworks. Today I would like to talk about a framework called NestJS. We at TestMace use it actively for internal projects. Using Nest and the @nestjsx/crud package, we will build a simple CRUD application.

Why NestJS

Recently, quite a few backend frameworks have appeared in the JavaScript community. And while in terms of functionality they offer capabilities similar to Nest, in one thing it definitely wins: architecture. The following NestJS features make it possible to build industrial-grade applications and scale development across large teams:

  • TypeScript as the main development language. Although NestJS supports JavaScript, some functionality may not work, especially when it comes to third-party packages;
  • a DI container, which allows you to create loosely coupled components (a minimal sketch follows this list);
  • the framework's own functionality is split into independent, interchangeable components. For example, under the hood either express or fastify can be used as the HTTP framework, and for working with the database Nest provides out-of-the-box bindings for typeorm, mongoose and sequelize;
  • NestJS is platform agnostic and supports REST, GraphQL, Websockets, gRPC, etc.
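
To illustrate the DI point, here is a minimal sketch of how a provider is declared and injected in Nest. The GreetingService and GreetingController names are hypothetical; the @Injectable/@Module wiring is standard NestJS:

import { Controller, Get, Injectable, Module } from '@nestjs/common';

// A provider: the DI container instantiates it and manages its lifetime.
@Injectable()
export class GreetingService {
  greet(): string {
    return 'Hello from the DI container!';
  }
}

// The controller declares its dependency in the constructor;
// the container resolves and injects it automatically.
@Controller('greeting')
export class GreetingController {
  constructor(private readonly greetingService: GreetingService) {}

  @Get()
  getGreeting(): string {
    return this.greetingService.greet();
  }
}

// The module wires providers and controllers together.
@Module({
  controllers: [GreetingController],
  providers: [GreetingService],
})
export class GreetingModule {}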

The framework itself is inspired by the Angular frontend framework and conceptually has a lot in common with it.

Installing NestJS and scaffolding the project

Nest ships with the @nestjs/cli package, which lets you quickly scaffold a basic application skeleton. Let's install this package globally:

npm install --global @nestjs/cli

After installation, we generate the basic skeleton of our application, named nest-rest. This is done with the command nest new nest-rest.

nest new nest-rest

dmitrii@dmitrii-HP-ZBook-17-G3:~/projects $ nest new nest-rest
  We will scaffold your app in a few seconds..

CREATE /nest-rest/.prettierrc (51 bytes)
CREATE /nest-rest/README.md (3370 bytes)
CREATE /nest-rest/nest-cli.json (84 bytes)
CREATE /nest-rest/nodemon-debug.json (163 bytes)
CREATE /nest-rest/nodemon.json (67 bytes)
CREATE /nest-rest/package.json (1805 bytes)
CREATE /nest-rest/tsconfig.build.json (97 bytes)
CREATE /nest-rest/tsconfig.json (325 bytes)
CREATE /nest-rest/tslint.json (426 bytes)
CREATE /nest-rest/src/app.controller.spec.ts (617 bytes)
CREATE /nest-rest/src/app.controller.ts (274 bytes)
CREATE /nest-rest/src/app.module.ts (249 bytes)
CREATE /nest-rest/src/app.service.ts (142 bytes)
CREATE /nest-rest/src/main.ts (208 bytes)
CREATE /nest-rest/test/app.e2e-spec.ts (561 bytes)
CREATE /nest-rest/test/jest-e2e.json (183 bytes)

? Which package manager would you ❤️  to use? yarn
 Installation in progress... 

  Successfully created project nest-rest
  Get started with the following commands:

$ cd nest-rest
$ yarn run start

                          Thanks for installing Nest 
                 Please consider donating to our open collective
                        to help us maintain this package.

                 Donate: https://opencollective.com/nest

We will choose yarn as our package manager.
At this point you can start the server with the command npm start, go to http://localhost:3000, and see the welcome page. But that is not why we have gathered here, so let's move on.

Setting up the database

I chose PostgreSQL as the DBMS for this article. There is no accounting for taste, but in my opinion it is the most mature DBMS, with all the necessary capabilities. As already mentioned, Nest provides integration with various packages for working with databases. Since my choice fell on PostgreSQL, it is logical to choose TypeORM as the ORM. Let's install the packages needed for the database integration:

yarn add typeorm @nestjs/typeorm pg

What each package is needed for:

  1. typeorm - the package of the ORM itself;
  2. @nestjs/typeorm - the TypeORM package for NestJS. It adds modules to import into project modules, as well as a set of helper decorators;
  3. pg - the driver for working with PostgreSQL.

OK, the packages are installed; now the database itself needs to be running. To deploy the database I will use docker-compose.yml with the following content:

docker-compose.yml

version: '3.1'

services:
  db:
    image: postgres:11.2
    restart: always
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - ../db:/var/lib/postgresql/data
      - ./postgresql.conf:/etc/postgresql/postgresql.conf
    ports:
      - 5432:5432
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

As you can see, this file configures the launch of 2 containers:

  1. db is the container holding the database itself. In our case, postgresql version 11.2 is used;
  2. adminer - a database manager. It provides a web interface for viewing and managing the database.

To work with tcp connections, I added the following configuration.

postgresql.conf

# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
#   name = value
#
# (The "=" is optional.)  Whitespace may be used.  Comments are introduced with
# "#" anywhere on a line.  The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal.  If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, run "pg_ctl reload", or execute
# "SELECT pg_reload_conf()".  Some parameters, which are marked below,
# require a server shutdown and restart to take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on".  Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units:  kB = kilobytes        Time units:  ms  = milliseconds
#                MB = megabytes                     s   = seconds
#                GB = gigabytes                     min = minutes
#                TB = terabytes                     h   = hours
#                                                   d   = days
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
#data_directory = 'ConfigDir'       # use data in another directory
# (change requires restart)
#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
# (change requires restart)
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file
# (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
#external_pid_file = ''         # write an extra PID file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '*'
#listen_addresses = 'localhost'     # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
#port = 5432                # (change requires restart)
#max_connections = 100          # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
#unix_socket_directories = '/tmp'   # comma-separated list of directories
# (change requires restart)
#unix_socket_group = ''         # (change requires restart)
#unix_socket_permissions = 0777     # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off              # advertise server via Bonjour
# (change requires restart)
#bonjour_name = ''          # defaults to the computer name
# (change requires restart)
# - TCP Keepalives -
# see "man 7 tcp" for details
#tcp_keepalives_idle = 0        # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0        # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0       # TCP_KEEPCNT;
# 0 selects the system default
# - Authentication -
#authentication_timeout = 1min      # 1s-600s
#password_encryption = md5      # md5 or scram-sha-256
#db_user_namespace = off
# GSSAPI using Kerberos
#krb_server_keyfile = ''
#krb_caseins_users = off
# - SSL -
#ssl = off
#ssl_ca_file = ''
#ssl_cert_file = 'server.crt'
#ssl_crl_file = ''
#ssl_key_file = 'server.key'
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
#ssl_prefer_server_ciphers = on
#ssl_ecdh_curve = 'prime256v1'
#ssl_min_protocol_version = 'TLSv1'
#ssl_max_protocol_version = ''
#ssl_dh_params_file = ''
#ssl_passphrase_command = ''
#ssl_passphrase_command_supports_reload = off
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
#shared_buffers = 32MB          # min 128kB
# (change requires restart)
#huge_pages = try           # on, off, or try
# (change requires restart)
#temp_buffers = 8MB         # min 800kB
#max_prepared_transactions = 0      # zero disables the feature
# (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
#work_mem = 4MB             # min 64kB
#maintenance_work_mem = 64MB        # min 1MB
#autovacuum_work_mem = -1       # min 1MB, or -1 to use maintenance_work_mem
#max_stack_depth = 2MB          # min 100kB
#shared_memory_type = mmap      # the default is the first option
# supported by the operating system:
#   mmap
#   sysv
#   windows
# (change requires restart)
#dynamic_shared_memory_type = posix # the default is the first option
# supported by the operating system:
#   posix
#   sysv
#   windows
#   mmap
# (change requires restart)
# - Disk -
#temp_file_limit = -1           # limits per-process temp file space
# in kB, or -1 for no limit
# - Kernel Resources -
#max_files_per_process = 1000       # min 25
# (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0          # 0-100 milliseconds (0 disables)
#vacuum_cost_page_hit = 1       # 0-10000 credits
#vacuum_cost_page_miss = 10     # 0-10000 credits
#vacuum_cost_page_dirty = 20        # 0-10000 credits
#vacuum_cost_limit = 200        # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms         # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100        # max buffers written/round, 0 disables
#bgwriter_lru_multiplier = 2.0      # 0-10.0 multiplier on buffers scanned/round
#bgwriter_flush_after = 0       # measured in pages, 0 disables
# - Asynchronous Behavior -
#effective_io_concurrency = 1       # 1-1000; 0 disables prefetching
#max_worker_processes = 8       # (change requires restart)
#max_parallel_maintenance_workers = 2   # taken from max_parallel_workers
#max_parallel_workers_per_gather = 2    # taken from max_parallel_workers
#parallel_leader_participation = on
#max_parallel_workers = 8       # maximum number of max_worker_processes that
# can be used in parallel operations
#old_snapshot_threshold = -1        # 1min-60d; -1 disables; 0 is immediate
# (change requires restart)
#backend_flush_after = 0        # measured in pages, 0 disables
#------------------------------------------------------------------------------
# WRITE-AHEAD LOG
#------------------------------------------------------------------------------
# - Settings -
#wal_level = replica            # minimal, replica, or logical
# (change requires restart)
#fsync = on             # flush data to disk for crash safety
# (turning this off can cause
# unrecoverable data corruption)
#synchronous_commit = on        # synchronization level;
# off, local, remote_write, remote_apply, or on
#wal_sync_method = fsync        # the default is the first option
# supported by the operating system:
#   open_datasync
#   fdatasync (default on Linux)
#   fsync
#   fsync_writethrough
#   open_sync
#full_page_writes = on          # recover from partial page writes
#wal_compression = off          # enable compression of full-page writes
#wal_log_hints = off            # also do full page writes of non-critical updates
# (change requires restart)
#wal_buffers = -1           # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms       # 1-10000 milliseconds
#wal_writer_flush_after = 1MB       # measured in pages, 0 disables
#commit_delay = 0           # range 0-100000, in microseconds
#commit_siblings = 5            # range 1-1000
# - Checkpoints -
#checkpoint_timeout = 5min      # range 30s-1d
#max_wal_size = 1GB
#min_wal_size = 80MB
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_flush_after = 0     # measured in pages, 0 disables
#checkpoint_warning = 30s       # 0 disables
# - Archiving -
#archive_mode = off     # enables archiving; off, on, or always
# (change requires restart)
#archive_command = ''       # command to use to archive a logfile segment
# placeholders: %p = path of file to archive
#               %f = file name only
# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
#archive_timeout = 0        # force a logfile segment switch after this
# number of seconds; 0 disables
# - Archive Recovery -
# These are only used in recovery mode.
#restore_command = ''       # command to use to restore an archived logfile segment
# placeholders: %p = path of file to restore
#               %f = file name only
# e.g. 'cp /mnt/server/archivedir/%f %p'
# (change requires restart)
#archive_cleanup_command = ''   # command to execute at every restartpoint
#recovery_end_command = ''  # command to execute at completion of recovery
# - Recovery Target -
# Set these only when performing a targeted recovery.
#recovery_target = ''       # 'immediate' to end recovery as soon as a
# consistent state is reached
# (change requires restart)
#recovery_target_name = ''  # the named restore point to which recovery will proceed
# (change requires restart)
#recovery_target_time = ''  # the time stamp up to which recovery will proceed
# (change requires restart)
#recovery_target_xid = ''   # the transaction ID up to which recovery will proceed
# (change requires restart)
#recovery_target_lsn = ''   # the WAL LSN up to which recovery will proceed
# (change requires restart)
#recovery_target_inclusive = on # Specifies whether to stop:
# just after the specified recovery target (on)
# just before the recovery target (off)
# (change requires restart)
#recovery_target_timeline = 'latest'    # 'current', 'latest', or timeline ID
# (change requires restart)
#recovery_target_action = 'pause'   # 'pause', 'promote', 'shutdown'
# (change requires restart)
#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------
# - Sending Servers -
# Set these on the master and on any standby that will send replication data.
#max_wal_senders = 10       # max number of walsender processes
# (change requires restart)
#wal_keep_segments = 0      # in logfile segments; 0 disables
#wal_sender_timeout = 60s   # in milliseconds; 0 disables
#max_replication_slots = 10 # max number of replication slots
# (change requires restart)
#track_commit_timestamp = off   # collect timestamp of transaction commit
# (change requires restart)
# - Master Server -
# These settings are ignored on a standby server.
#synchronous_standby_names = '' # standby servers that provide sync rep
# method to choose sync standbys, number of sync standbys,
# and comma-separated list of application_name
# from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0   # number of xacts by which cleanup is delayed
# - Standby Servers -
# These settings are ignored on a master server.
#primary_conninfo = ''          # connection string to sending server
# (change requires restart)
#primary_slot_name = ''         # replication slot on sending server
# (change requires restart)
#promote_trigger_file = ''      # file name whose presence ends recovery
#hot_standby = on           # "off" disallows queries during recovery
# (change requires restart)
#max_standby_archive_delay = 30s    # max delay before canceling queries
# when reading WAL from archive;
# -1 allows indefinite delay
#max_standby_streaming_delay = 30s  # max delay before canceling queries
# when reading streaming WAL;
# -1 allows indefinite delay
#wal_receiver_status_interval = 10s # send replies at least this often
# 0 disables
#hot_standby_feedback = off     # send info from standby to prevent
# query conflicts
#wal_receiver_timeout = 60s     # time that receiver waits for
# communication from master
# in milliseconds; 0 disables
#wal_retrieve_retry_interval = 5s   # time to wait before retrying to
# retrieve WAL after a failed attempt
#recovery_min_apply_delay = 0       # minimum delay for applying changes during recovery
# - Subscribers -
# These settings are ignored on a publisher.
#max_logical_replication_workers = 4    # taken from max_worker_processes
# (change requires restart)
#max_sync_workers_per_subscription = 2  # taken from max_logical_replication_workers
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_parallel_append = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
#enable_partitionwise_join = off
#enable_partitionwise_aggregate = off
#enable_parallel_hash = on
#enable_partition_pruning = on
# - Planner Cost Constants -
#seq_page_cost = 1.0            # measured on an arbitrary scale
#random_page_cost = 4.0         # same scale as above
#cpu_tuple_cost = 0.01          # same scale as above
#cpu_index_tuple_cost = 0.005       # same scale as above
#cpu_operator_cost = 0.0025     # same scale as above
#parallel_tuple_cost = 0.1      # same scale as above
#parallel_setup_cost = 1000.0   # same scale as above
#jit_above_cost = 100000        # perform JIT compilation if available
# and query more expensive than this;
# -1 disables
#jit_inline_above_cost = 500000     # inline small functions if query is
# more expensive than this; -1 disables
#jit_optimize_above_cost = 500000   # use expensive JIT optimizations if
# query is more expensive than this;
# -1 disables
#min_parallel_table_scan_size = 8MB
#min_parallel_index_scan_size = 512kB
#effective_cache_size = 4GB
# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5            # range 1-10
#geqo_pool_size = 0         # selects default based on effort
#geqo_generations = 0           # selects default based on effort
#geqo_selection_bias = 2.0      # range 1.5-2.0
#geqo_seed = 0.0            # range 0.0-1.0
# - Other Planner Options -
#default_statistics_target = 100    # range 1-10000
#constraint_exclusion = partition   # on, off, or partition
#cursor_tuple_fraction = 0.1        # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8        # 1 disables collapsing of explicit
# JOIN clauses
#force_parallel_mode = off
#jit = on               # allow JIT compilation
#plan_cache_mode = auto         # auto, force_generic_plan or
# force_custom_plan
#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
#log_destination = 'stderr'     # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform.  csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
#logging_collector = off        # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
#log_directory = 'log'          # directory where log files are written,
# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'    # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600           # creation mode for log files,
# begin with 0 to use octal notation
#log_truncate_on_rotation = off     # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation.  Default is
# off, meaning append to existing files
# in all cases.
#log_rotation_age = 1d          # Automatic rotation of logfiles will
# happen after that time.  0 disables.
#log_rotation_size = 10MB       # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on
# This is only relevant when logging to eventlog (win32):
# (change requires restart)
#event_source = 'PostgreSQL'
# - When to Log -
#log_min_messages = warning     # values in order of decreasing detail:
#   debug5
#   debug4
#   debug3
#   debug2
#   debug1
#   info
#   notice
#   warning
#   error
#   log
#   fatal
#   panic
#log_min_error_statement = error    # values in order of decreasing detail:
#   debug5
#   debug4
#   debug3
#   debug2
#   debug1
#   info
#   notice
#   warning
#   error
#   log
#   fatal
#   panic (effectively off)
#log_min_duration_statement = -1    # logs statements and their durations
# according to log_statement_sample_rate. -1 is disabled,
# 0 logs all statement, > 0 logs only statements running at
# least this number of milliseconds.
#log_statement_sample_rate = 1  # Fraction of logged statements over
# log_min_duration_statement. 1.0 logs all statements,
# 0 never logs.
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default      # terse, default, or verbose messages
#log_hostname = off
#log_line_prefix = '%m [%p] '       # special values:
#   %a = application name
#   %u = user name
#   %d = database name
#   %r = remote host and port
#   %h = remote host
#   %p = process ID
#   %t = timestamp without milliseconds
#   %m = timestamp with milliseconds
#   %n = timestamp with milliseconds (as a Unix epoch)
#   %i = command tag
#   %e = SQL state
#   %c = session ID
#   %l = session line number
#   %s = session start timestamp
#   %v = virtual transaction ID
#   %x = transaction ID (0 if none)
#   %q = stop here in non-session
#        processes
#   %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off           # log lock waits >= deadlock_timeout
#log_statement = 'none'         # none, ddl, mod, all
#log_replication_commands = off
#log_temp_files = -1            # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
#log_timezone = 'GMT'
#------------------------------------------------------------------------------
# PROCESS TITLE
#------------------------------------------------------------------------------
#cluster_name = ''          # added to process titles if nonempty
# (change requires restart)
#update_process_title = on
#------------------------------------------------------------------------------
# STATISTICS
#------------------------------------------------------------------------------
# - Query and Index Statistics Collector -
#track_activities = on
#track_counts = on
#track_io_timing = off
#track_functions = none         # none, pl, all
#track_activity_query_size = 1024   # (change requires restart)
#stats_temp_directory = 'pg_stat_tmp'
# - Monitoring -
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off
#------------------------------------------------------------------------------
# AUTOVACUUM
#------------------------------------------------------------------------------
#autovacuum = on            # Enable autovacuum subprocess?  'on'
# requires track_counts to also be on.
#log_autovacuum_min_duration = -1   # -1 disables, 0 logs all actions and
# their durations, > 0 logs only
# actions running at least this number
# of milliseconds.
#autovacuum_max_workers = 3     # max number of autovacuum subprocesses
# (change requires restart)
#autovacuum_naptime = 1min      # time between autovacuum runs
#autovacuum_vacuum_threshold = 50   # min number of row updates before
# vacuum
#autovacuum_analyze_threshold = 50  # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum
# (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000    # maximum multixact age
# before forced vacuum
# (change requires restart)
#autovacuum_vacuum_cost_delay = 2ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1  # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------
# - Statement Behavior -
#client_min_messages = notice       # values in order of decreasing detail:
#   debug5
#   debug4
#   debug3
#   debug2
#   debug1
#   log
#   notice
#   warning
#   error
#search_path = '"$user", public'    # schema names
#row_security = on
#default_tablespace = ''        # a tablespace name, '' uses the default
#temp_tablespaces = ''          # a list of tablespace names, '' uses
# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0          # in milliseconds, 0 is disabled
#lock_timeout = 0           # in milliseconds, 0 is disabled
#idle_in_transaction_session_timeout = 0    # in milliseconds, 0 is disabled
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
#vacuum_multixact_freeze_min_age = 5000000
#vacuum_multixact_freeze_table_age = 150000000
#vacuum_cleanup_index_scale_factor = 0.1    # fraction of total number of tuples
# before index cleanup, 0 always performs
# index cleanup
#bytea_output = 'hex'           # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
#gin_fuzzy_search_limit = 0
#gin_pending_list_limit = 4MB
# - Locale and Formatting -
#datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
#timezone = 'GMT'
#timezone_abbreviations = 'Default'     # Select the set of available time zone
# abbreviations.  Currently, there are
#   Default
#   Australia (historical usage)
#   India
# You can create your own file in
# share/timezonesets/.
#extra_float_digits = 1         # min -15, max 3; any value >0 actually
# selects precise output mode
#client_encoding = sql_ascii        # actually, defaults to database
# encoding
# These settings are initialized by initdb, but they can be changed.
#lc_messages = 'C'          # locale for system error message
# strings
#lc_monetary = 'C'          # locale for monetary formatting
#lc_numeric = 'C'           # locale for number formatting
#lc_time = 'C'              # locale for time formatting
# default configuration for text search
#default_text_search_config = 'pg_catalog.simple'
# - Shared Library Preloading -
#shared_preload_libraries = ''  # (change requires restart)
#local_preload_libraries = ''
#session_preload_libraries = ''
#jit_provider = 'llvmjit'       # JIT library to use
# - Other Defaults -
#dynamic_library_path = '$libdir'
#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64     # min 10
# (change requires restart)
#max_pred_locks_per_transaction = 64    # min 10
# (change requires restart)
#max_pred_locks_per_relation = -2   # negative values mean
# (max_pred_locks_per_transaction
#  / -max_pred_locks_per_relation) - 1
#max_pred_locks_per_page = 2            # min 0
#------------------------------------------------------------------------------
# VERSION AND PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------
# - Previous PostgreSQL Versions -
#array_nulls = on
#backslash_quote = safe_encoding    # on, off, or safe_encoding
#escape_string_warning = on
#lo_compat_privileges = off
#operator_precedence_warning = off
#quote_all_identifiers = off
#standard_conforming_strings = on
#synchronize_seqscans = on
# - Other Platforms and Clients -
#transform_null_equals = off
#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------
#exit_on_error = off            # terminate session on any error?
#restart_after_crash = on       # reinitialize after backend crash?
#data_sync_retry = off          # retry or panic on failure to fsync
# data?
# (change requires restart)
#------------------------------------------------------------------------------
# CONFIG FILE INCLUDES
#------------------------------------------------------------------------------
# These options allow settings to be loaded from files other than the
# default postgresql.conf.
#include_dir = 'conf.d'         # include files ending in '.conf' from
# directory 'conf.d'
#include_if_exists = 'exists.conf'  # include file only if it exists
#include = 'special.conf'       # include file
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
# Add settings for extensions here

And that's it: you can start the containers with the command docker-compose up -d, or with docker-compose up in a separate console.

So, the packages are installed and the database is running; all that remains is to introduce them to each other. To do this, add the following file, ormconfig.js, to the project root:

ormconfig.js

const process = require('process');
const username = process.env.POSTGRES_USER || "postgres";
const password = process.env.POSTGRES_PASSWORD || "example";

module.exports = {
  "type": "postgres",
  "host": "localhost",
  "port": 5432,
  username,
  password,
  "database": "postgres",
  "synchronize": true,
  "dropSchema": false,
  "logging": true,
  "entities": [__dirname + "/src/**/*.entity.ts", __dirname + "/dist/**/*.entity.js"],
  "migrations": ["migrations/**/*.ts"],
  "subscribers": ["subscriber/**/*.ts", "dist/subscriber/**/*.js"],
  "cli": {
    "entitiesDir": "src",
    "migrationsDir": "migrations",
    "subscribersDir": "subscriber"
  }
}

Šī konfigurācija tiks izmantota cli tipa formā.

Let's look at this configuration in more detail. At the top of the file we take the username and password from environment variables. This is convenient when you have several environments (dev, stage, prod, and so on). By default the username is postgres and the password is example. The rest of the configuration is trivial, so we will focus only on the most interesting parameters:

  • synchronize - whether the database schema should be created automatically when the application starts. Be careful with this option and do not use it in production, otherwise you will lose data. It is convenient while developing and debugging an application. As an alternative, you can use the schema:sync command of the TypeORM CLI.
  • dropSchema - reset the schema every time a connection is established. Like the previous one, this option should be used only while developing and debugging the application.
  • entities - which paths to search for model definitions. Note that search by mask is supported.
  • cli.entitiesDir is the directory where models created from the TypeORM CLI are stored by default.

For us to be able to use all the TypeORM features in our Nest application, we need to import the TypeOrmModule into AppModule. That is, your AppModule will look like this:

app.module.ts

import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { AppService } from './app.service';
import { TypeOrmModule } from '@nestjs/typeorm';
import * as process from 'process';

const username = process.env.POSTGRES_USER || 'postgres';
const password = process.env.POSTGRES_PASSWORD || 'example';

@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: 'postgres',
      host: 'localhost',
      port: 5432,
      username,
      password,
      database: 'postgres',
      entities: [__dirname + '/**/*.entity{.ts,.js}'],
      synchronize: true,
    }),
  ],
  controllers: [AppController],
  providers: [AppService],
})
export class AppModule {}

As you may have noticed, the forRoot method is passed the same database configuration as in the ormconfig.js file.
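
Incidentally, to avoid maintaining the same settings in two places, @nestjs/typeorm also allows calling forRoot() with no arguments, in which case TypeORM falls back to the ormconfig file from the project root. A minimal sketch of that variant (the AppDbModule name here is ours):

import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';

@Module({
  // With no arguments, the connection options are read from ormconfig.js.
  imports: [TypeOrmModule.forRoot()],
})
export class AppDbModule {}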

One final touch remains: add a few tasks for working with TypeORM to package.json. The thing is, the CLI is written in JavaScript and runs in the nodejs environment, while all our models and migrations will be written in TypeScript. Therefore our migrations and models have to be transpiled before the CLI can use them. For this we need the ts-node package:

yarn add -D ts-node

After that, add the necessary commands to the scripts section of package.json:

"typeorm": "ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js",
"migration:generate": "yarn run typeorm migration:generate -n",
"migration:create": "yarn run typeorm migration:create -n",
"migration:run": "yarn run typeorm migration:run"

The first command, typeorm, adds a ts-node wrapper for running the TypeORM CLI. The remaining commands are convenient shortcuts that you, as a developer, will use almost every day:
migration:generate - create a migration based on the changes in your models.
migration:create - create an empty migration.
migration:run - run pending migrations.
Well, that's it for now: we have added the necessary packages, configured the application to work with the database both from the CLI and from the application itself, and started the DBMS. It is time to add logic to our application.

PakeŔu instalēŔana CRUD izveidei

Using Nest alone, you can already build an API for creating, reading, updating, and deleting an entity. Such a solution is as flexible as it gets, but in some cases it is overkill. For example, if you need to build a prototype quickly, you can often sacrifice flexibility for development speed. Many frameworks provide functionality for generating CRUD from a description of a certain entity's data model, and Nest is no exception! This functionality is provided by the @nestjsx/crud package. Its capabilities are very interesting:

  • easy installation and configuration;
  • DBMS independence;
  • a powerful query language with the ability to filter, paginate, sort, load relations and nested entities, cache, etc.;
  • a package for building requests on the frontend;
  • easy overriding of controller methods;
  • small configuration;
  • swagger documentation support.

The functionality is split across several packages:

  • @nestjsx/crud - the base package, providing the Crud() decorator for route generation, configuration, and validation;
  • @nestjsx/crud-request - a package providing a query builder/parser for use on the frontend side (see the sketch right after this list);
  • @nestjsx/crud-typeorm - a package for integration with TypeORM, providing the base service TypeOrmCrudService with CRUD methods for working with entities in the database.
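
To give a feel for the @nestjsx/crud-request builder just mentioned, here is a small sketch of composing a query string on the client. It follows the RequestQueryBuilder API of that package, though the exact set of operators can vary between versions:

import { RequestQueryBuilder } from '@nestjsx/crud-request';

// Builds something like: filter=username||eq||user&limit=10&sort=id,DESC
const queryString = RequestQueryBuilder.create()
  .setFilter({ field: 'username', operator: 'eq', value: 'user' })
  .setLimit(10)
  .sortBy({ field: 'id', order: 'DESC' })
  .query();

// The result can be appended to a generated endpoint,
// e.g. GET localhost:3000/users?<queryString>
console.log(queryString);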

Šajā apmācībā mums būs nepiecieŔami iepakojumi ligzdajsx/crud un ligzdajsx/crud-typeorm. Pirmkārt, ievietosim tos

yarn add @nestjsx/crud @nestjsx/crud-typeorm class-transformer class-validator

The class-transformer and class-validator packages are needed in this application for the declarative description of the rules for transforming model instances and validating incoming requests. These packages come from the same author, so their interfaces are similar.
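
As a hint of what that declarative style looks like, here is a minimal, hypothetical DTO built with class-validator decorators (the CreateUserDto name and its fields are ours, mirroring the user model defined below):

import { IsEmail, IsOptional, IsString } from 'class-validator';

// Incoming POST bodies can be checked against these rules,
// for example by Nest's built-in ValidationPipe.
export class CreateUserDto {
  @IsEmail()
  email: string;

  @IsString()
  username: string;

  @IsOptional()
  @IsString()
  displayName?: string;
}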

TieŔa CRUD ievieŔana

As an example, let's take a list of users. Users will have the following fields: id, username, displayName, email. id is an auto-increment field, email and username are unique fields. Simple! All that remains is to implement our idea in the form of a Nest application.
First we need to create the users module, which will be responsible for working with users. Let's use the cli from NestJS and run the command nest g module users in the root directory of our project.

nest g module users

dmitrii@dmitrii-HP-ZBook-17-G3:~/projects/nest-rest git:(master*)$ nest g module users
CREATE /src/users/users.module.ts (82 bytes)
UPDATE /src/app.module.ts (312 bytes)

In this module we will add an entities folder, which will hold this module's models. In particular, let's add here the file user.entity.ts with the description of the user model:

user.entity.ts

import { Column, Entity, PrimaryGeneratedColumn } from 'typeorm';

@Entity()
export class User {
  @PrimaryGeneratedColumn()
  id: number;

  @Column({ unique: true })
  email: string;

  @Column({ unique: true })
  username: string;

  @Column({ nullable: true })
  displayName: string;
}

For this model to be "seen" by our application, we need to import TypeOrmModule in UsersModule as follows:

users.module.ts

import { Module } from '@nestjs/common';
import { UsersController } from './controllers/users/users.controller';
import { UsersService } from './services/users/users.service';
import { TypeOrmModule } from '@nestjs/typeorm';
import { User } from './entities/user.entity';

@Module({
  controllers: [UsersController],
  providers: [UsersService],
  imports: [
    TypeOrmModule.forFeature([User]),
  ],
})
export class UsersModule {}

That is, here we import TypeOrmModule, where as the parameter of the forFeature method we specify the list of models related to this module.

All that remains is to create the corresponding entity in the database. The migration mechanism serves this purpose. To create a migration based on the changes in the models, run the command npm run migration:generate -- CreateUserTable:

npm run migration:generate -- CreateUserTable

$ npm run migration:generate -- CreateUserTable
Migration /home/dmitrii/projects/nest-rest/migrations/1563346135367-CreateUserTable.ts has been generated successfully.
Done in 1.96s.

We didn't have to write the migration by hand; everything happened magically. Isn't that a miracle! However, that's not all. Let's take a look at the generated migration file:

1563346135367-CreateUserTable.ts

import { MigrationInterface, QueryRunner } from 'typeorm';

export class CreateUserTable1563346816726 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<any> {
    await queryRunner.query(`CREATE TABLE "user" ("id" SERIAL NOT NULL, "email" character varying NOT NULL, "username" character varying NOT NULL, "displayName" character varying, CONSTRAINT "UQ_e12875dfb3b1d92d7d7c5377e22" UNIQUE ("email"), CONSTRAINT "UQ_78a916df40e02a9deb1c4b75edb" UNIQUE ("username"), CONSTRAINT "PK_cace4a159ff9f2512dd42373760" PRIMARY KEY ("id"))`);
  }

  public async down(queryRunner: QueryRunner): Promise<any> {
    await queryRunner.query(`DROP TABLE "user"`);
  }
}

As you can see, not only was the method for applying the migration generated automatically, but also the method for rolling it back. Fantastic!
All that remains is to apply this migration. This is done with the following command:

npm run migration:run

That's it: the schema changes have now been migrated to the database.
Next we will create a service responsible for working with users and inherit it from TypeOrmCrudService. The repository of the entity of interest, in our case the User repository, must be passed to the parent constructor.

users.service.ts

import { Injectable } from '@nestjs/common';
import { TypeOrmCrudService } from '@nestjsx/crud-typeorm';
import { User } from '../../entities/user.entity';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';

@Injectable()
export class UsersService extends TypeOrmCrudService<User> {
  constructor(@InjectRepository(User) usersRepository: Repository<User>) {
    super(usersRepository);
  }
}

We will need this service in the users controller. To create the controller, type nest g controller users/controllers/users in the console.

nest g controller users/controllers/users

dmitrii@dmitrii-HP-ZBook-17-G3:~/projects/nest-rest git:(master*)$ nest g controller users/controllers/users
CREATE /src/users/controllers/users/users.controller.spec.ts (486 bytes)
CREATE /src/users/controllers/users/users.controller.ts (99 bytes)
UPDATE /src/users/users.module.ts (188 bytes)

Let's open this controller and edit it to add a bit of @nestjsx/crud magic. To the UsersController class we add the following decorator:

@Crud({
  model: {
    type: User,
  },
})

Crud is a decorator that adds the methods needed for working with the model to the controller. The model type is specified in the model.type field of the decorator configuration.
The second step is to implement the CrudController<User> interface. The "assembled" controller code looks like this:

import { Controller } from '@nestjs/common';
import { Crud, CrudController } from '@nestjsx/crud';
import { User } from '../../entities/user.entity';
import { UsersService } from '../../services/users/users.service';

@Crud({
  model: {
    type: User,
  },
})
@Controller('users')
export class UsersController implements CrudController<User> {
  constructor(public service: UsersService) {}
}

And that's all! Now the controller supports the whole set of operations on the model! Don't believe me? Let's try our application in action!
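
Before moving on, one note: @nestjsx/crud also lets you override any of the generated handlers while keeping the rest. A sketch of the pattern, following the package's Override/ParsedRequest decorators (the logging line is our own illustrative addition):

import { Controller } from '@nestjs/common';
import { Crud, CrudController, CrudRequest, Override, ParsedRequest } from '@nestjsx/crud';
import { User } from '../../entities/user.entity';
import { UsersService } from '../../services/users/users.service';

@Crud({ model: { type: User } })
@Controller('users')
export class UsersController implements CrudController<User> {
  constructor(public service: UsersService) {}

  // Typed access to the base CRUD methods generated by the decorator.
  get base(): CrudController<User> {
    return this;
  }

  // Replaces the generated GET /users handler but delegates to the base logic.
  @Override()
  getMany(@ParsedRequest() req: CrudRequest) {
    console.log('custom getMany was called'); // illustrative addition
    return this.base.getManyBase(req);
  }
}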

Creating a request script in TestMace

To test our service we will use TestMace, an IDE for working with APIs. Why TestMace? Compared to similar products, it has the following advantages:

  • powerful work with variables. At the moment there are several kinds of variables, each playing its own role: built-in variables, dynamic variables, environment variables. Each variable belongs to a node, with support for an inheritance mechanism;
  • easy creation of scripts without programming. This is discussed below;
  • a human-readable format that lets you store projects in version control systems;
  • autocompletion, syntax highlighting, highlighting of variable values;
  • API description support, with the ability to import from Swagger.

Let's start our server with the command npm start and try to access the list of users. Judging by our controller configuration, the list of users can be obtained from the url localhost:3000/users. Let's make a request to this URL.
After launching TestMace, you can see the following interface:

[Screenshot: the TestMace interface after launch]

In the upper left corner is the project tree with the root node Project. Let's try to create the first request, to fetch the list of users. To do this we will create a RequestStep node, via the Project node's context menu: Add node -> RequestStep.

[Screenshot: creating a RequestStep node from the Project node's context menu]

In the URL field, paste localhost:3000/users and run the request. We get a 200 code with an empty array in the response body. That is understandable: we haven't added anyone yet.
Let's create a script with the following steps:

  1. create a user;
  2. request the newly created user by id;
  3. delete by the user id created in step 1.

So, off we go. For convenience, let's create a Folder-type node. Essentially this is just a folder in which we will keep the whole script. To create a Folder node, select Add node -> Folder in the Project node's context menu. Let's call the node check-create. Inside the check-create node we will create our first request, for creating a user. Let's call the newly created node create-user. At this point, the node hierarchy looks like this:

[Screenshot: the node hierarchy with the check-create folder and the create-user request]

Let's switch to the open tab of the create-user node and enter the following request parameters:

  • Request type - POST
  • URL - localhost:3000/users
  • Body - JSON with the value {"email": "[email protected]", "displayName": "New user", "username": "user"}

Let's run this request. Our application responds that the record has been created.

[Screenshot: the response to the create-user request]

Well, let's verify that. To work with the id of the created user in subsequent steps, this parameter has to be saved, and the dynamic variables mechanism is perfect for this. Let's use our example to look at how to work with them. In the parsed tab of the response, in the context menu of the id node, select Assign to variable. In the dialog, set the following parameters:

  • Node - in which of the ancestors to create the dynamic variable. Let's choose check-create;
  • Variable name - the name of this variable. Let's call it userId.

Here is what the process of creating the dynamic variable looks like:

[Screenshot: assigning the response id to the userId dynamic variable]

Now, every time this request is executed, the value of the dynamic variable is updated. And because dynamic variables support a hierarchical inheritance mechanism, the userId variable will be available in descendants of the check-create node at any nesting level.
This variable will come in handy in the next request. Namely, we will request the newly created user. As a child of the check-create node, we will create a check-if-exists request with the url parameter equal to localhost:3000/users/${$dynamicVar.userId}. The construction ${variable_name} retrieves the value of a variable. Since we have a dynamic variable, to get it you need to access the $dynamicVar object; that is, fully accessing the userId dynamic variable looks like ${$dynamicVar.userId}. Let's run the request and make sure the data is fetched correctly.
The last step left is to request deletion. We need it not only to check that deletion works but also, so to speak, to clean up after ourselves in the database, since the email and username fields are unique. So, in the check-create node we will create a delete-user request with the following parameters:

  • Request type - DELETE
  • url - localhost:3000/users/${$dynamicVar.userId}

Let's run it. We wait. We enjoy the result :)

Now we can run this whole script at any time. To run the script, select the Run item in the check-create node's context menu.

[Screenshot: running the check-create node from its context menu]

The script's nodes will be executed one after another.
You can save this script in your project via File -> Save project.

Conclusion

All the features of the tools used simply could not fit into the format of this article. As for the main protagonist, the @nestjsx/crud package, the following topics remain uncovered:

  • custom validation and transformation of models;
  • the powerful query language and its convenient use on the frontend;
  • redefining and adding new methods to crud controllers;
  • swagger support;
  • cache management.

However, what is described in this article is enough to understand that even an enterprise framework like NestJS has tools for rapid application prototyping. And such a cool IDE as TestMace lets you keep up the pace.

The source code for this article, together with the TestMace project, is available in the repository https://github.com/TestMace/nest-rest. To open the project in TestMace, just do File -> Open project in the app.

Source: www.habr.com
