Rapid CRUD development with nest, @nestjsx/crud and TestMace

These days the REST API has become a standard for web application development, allowing development to be split into independent parts. Various popular frameworks such as Angular, React, Vue and others are currently used for the UI, while backend developers can choose from a wide variety of languages and frameworks. Today I would like to talk about one such framework: NestJS. At TestMace we use it actively for internal projects. Using Nest and the @nestjsx/crud package, we will build a simple CRUD application.

Why NestJS

Recently, quite a few backend frameworks have appeared in the JavaScript community. And while in terms of functionality they provide capabilities similar to Nest, in one respect it definitely wins: architecture. The following NestJS features let you build industrial-grade applications and scale development across large teams:

  • TypeScript as the primary development language. Although NestJS also supports JavaScript, some functionality may not work, especially when it comes to third-party packages;
  • a DI container, which lets you build loosely coupled components;
  • the framework's own functionality is split into interchangeable components. For example, under the hood either express or fastify can be used as the HTTP framework; for working with databases, Nest provides bindings for typeorm, mongoose and sequelize out of the box;
  • NestJS is platform agnostic and supports REST, GraphQL, Websockets, gRPC, etc.

The framework itself is inspired by the Angular front-end framework and conceptually has a lot in common with it.
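To make the DI point concrete, here is a small framework-free TypeScript sketch of constructor injection. It only illustrates the idea behind Nest's container; the class names echo the generated app.controller.ts and app.service.ts, but the manual wiring at the bottom is what Nest's container does for you automatically:

```typescript
// Illustrative sketch of constructor-based DI, NOT Nest's actual container.
class AppService {
  getHello(): string {
    return 'Hello World!';
  }
}

class AppController {
  // The dependency is declared in the constructor rather than created
  // inside the class, so it can be swapped (e.g. with a mock in tests)
  // without touching AppController itself.
  constructor(private readonly appService: AppService) {}

  root(): string {
    return this.appService.getHello();
  }
}

// A trivial "container": resolve the dependency, then build the consumer.
const controller = new AppController(new AppService());
```

In Nest the same wiring is expressed declaratively: the service is marked @Injectable(), both classes are listed in a module, and the container resolves the constructor parameters for you.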

Installing NestJS and scaffolding the project

Nest ships the @nestjs/cli package, which lets you quickly scaffold a basic application skeleton. Let's install this package globally:

npm install --global @nestjs/cli

After installation, we generate the basic skeleton of our application with the name nest-rest. This is done with the command nest new nest-rest.

nest new nest-rest

dmitrii@dmitrii-HP-ZBook-17-G3:~/projects $ nest new nest-rest
  We will scaffold your app in a few seconds..

CREATE /nest-rest/.prettierrc (51 bytes)
CREATE /nest-rest/README.md (3370 bytes)
CREATE /nest-rest/nest-cli.json (84 bytes)
CREATE /nest-rest/nodemon-debug.json (163 bytes)
CREATE /nest-rest/nodemon.json (67 bytes)
CREATE /nest-rest/package.json (1805 bytes)
CREATE /nest-rest/tsconfig.build.json (97 bytes)
CREATE /nest-rest/tsconfig.json (325 bytes)
CREATE /nest-rest/tslint.json (426 bytes)
CREATE /nest-rest/src/app.controller.spec.ts (617 bytes)
CREATE /nest-rest/src/app.controller.ts (274 bytes)
CREATE /nest-rest/src/app.module.ts (249 bytes)
CREATE /nest-rest/src/app.service.ts (142 bytes)
CREATE /nest-rest/src/main.ts (208 bytes)
CREATE /nest-rest/test/app.e2e-spec.ts (561 bytes)
CREATE /nest-rest/test/jest-e2e.json (183 bytes)

? Which package manager would you ❤️ to use? yarn
 Installation in progress... 

  Successfully created project nest-rest
  Get started with the following commands:

$ cd nest-rest
$ yarn run start

                          Thanks for installing Nest 
                 Please consider donating to our open collective
                        to help us maintain this package.

                 Donate: https://opencollective.com/nest

We will choose yarn as our package manager.
At this point you can start the server with npm start and, by navigating to http://localhost:3000, see the welcome page. However, that is not why we are here, so let's move on.

Setting up the database

I chose PostgreSQL as the DBMS for this article. There is no arguing about taste; in my opinion it is the most mature DBMS, with all the necessary capabilities. As already mentioned, Nest provides integrations with various packages for working with databases. Since my choice fell on PostgreSQL, it makes sense to pick TypeORM as the ORM. Let's install the packages needed to connect to the database:

yarn add typeorm @nestjs/typeorm pg

In order, each package is needed for:

  1. typeorm - the package for the ORM itself;
  2. @nestjs/typeorm - the TypeORM package for NestJS. It adds modules for importing into project modules, as well as a set of helper decorators;
  3. pg - the driver for working with PostgreSQL.

OK, the packages are installed; now we need to start the database itself. To deploy the database I will use docker-compose.yml with the following contents:

docker-compose.yml

version: '3.1'

services:
  db:
    image: postgres:11.2
    restart: always
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - ../db:/var/lib/postgresql/data
      - ./postgresql.conf:/etc/postgresql/postgresql.conf
    ports:
      - 5432:5432
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

As you can see, this file configures the launch of 2 containers:

  1. db - the container with the database itself. In our case, postgresql version 11.2 is used;
  2. adminer - a database manager. It provides a web interface for viewing and managing the database.

To work over TCP connections, I added the following configuration.

postgresql.conf

# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
#   name = value
#
# (The "=" is optional.)  Whitespace may be used.  Comments are introduced with
# "#" anywhere on a line.  The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal.  If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, run "pg_ctl reload", or execute
# "SELECT pg_reload_conf()".  Some parameters, which are marked below,
# require a server shutdown and restart to take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on".  Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units:  kB = kilobytes        Time units:  ms  = milliseconds
#                MB = megabytes                     s   = seconds
#                GB = gigabytes                     min = minutes
#                TB = terabytes                     h   = hours
#                                                   d   = days
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
#data_directory = 'ConfigDir'       # use data in another directory
# (change requires restart)
#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
# (change requires restart)
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file
# (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
#external_pid_file = ''         # write an extra PID file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '*'
#listen_addresses = 'localhost'     # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
#port = 5432                # (change requires restart)
#max_connections = 100          # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
#unix_socket_directories = '/tmp'   # comma-separated list of directories
# (change requires restart)
#unix_socket_group = ''         # (change requires restart)
#unix_socket_permissions = 0777     # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off              # advertise server via Bonjour
# (change requires restart)
#bonjour_name = ''          # defaults to the computer name
# (change requires restart)
# - TCP Keepalives -
# see "man 7 tcp" for details
#tcp_keepalives_idle = 0        # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0        # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0       # TCP_KEEPCNT;
# 0 selects the system default
# - Authentication -
#authentication_timeout = 1min      # 1s-600s
#password_encryption = md5      # md5 or scram-sha-256
#db_user_namespace = off
# GSSAPI using Kerberos
#krb_server_keyfile = ''
#krb_caseins_users = off
# - SSL -
#ssl = off
#ssl_ca_file = ''
#ssl_cert_file = 'server.crt'
#ssl_crl_file = ''
#ssl_key_file = 'server.key'
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
#ssl_prefer_server_ciphers = on
#ssl_ecdh_curve = 'prime256v1'
#ssl_min_protocol_version = 'TLSv1'
#ssl_max_protocol_version = ''
#ssl_dh_params_file = ''
#ssl_passphrase_command = ''
#ssl_passphrase_command_supports_reload = off
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
#shared_buffers = 32MB          # min 128kB
# (change requires restart)
#huge_pages = try           # on, off, or try
# (change requires restart)
#temp_buffers = 8MB         # min 800kB
#max_prepared_transactions = 0      # zero disables the feature
# (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
#work_mem = 4MB             # min 64kB
#maintenance_work_mem = 64MB        # min 1MB
#autovacuum_work_mem = -1       # min 1MB, or -1 to use maintenance_work_mem
#max_stack_depth = 2MB          # min 100kB
#shared_memory_type = mmap      # the default is the first option
# supported by the operating system:
#   mmap
#   sysv
#   windows
# (change requires restart)
#dynamic_shared_memory_type = posix # the default is the first option
# supported by the operating system:
#   posix
#   sysv
#   windows
#   mmap
# (change requires restart)
# - Disk -
#temp_file_limit = -1           # limits per-process temp file space
# in kB, or -1 for no limit
# - Kernel Resources -
#max_files_per_process = 1000       # min 25
# (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0          # 0-100 milliseconds (0 disables)
#vacuum_cost_page_hit = 1       # 0-10000 credits
#vacuum_cost_page_miss = 10     # 0-10000 credits
#vacuum_cost_page_dirty = 20        # 0-10000 credits
#vacuum_cost_limit = 200        # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms         # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100        # max buffers written/round, 0 disables
#bgwriter_lru_multiplier = 2.0      # 0-10.0 multiplier on buffers scanned/round
#bgwriter_flush_after = 0       # measured in pages, 0 disables
# - Asynchronous Behavior -
#effective_io_concurrency = 1       # 1-1000; 0 disables prefetching
#max_worker_processes = 8       # (change requires restart)
#max_parallel_maintenance_workers = 2   # taken from max_parallel_workers
#max_parallel_workers_per_gather = 2    # taken from max_parallel_workers
#parallel_leader_participation = on
#max_parallel_workers = 8       # maximum number of max_worker_processes that
# can be used in parallel operations
#old_snapshot_threshold = -1        # 1min-60d; -1 disables; 0 is immediate
# (change requires restart)
#backend_flush_after = 0        # measured in pages, 0 disables
#------------------------------------------------------------------------------
# WRITE-AHEAD LOG
#------------------------------------------------------------------------------
# - Settings -
#wal_level = replica            # minimal, replica, or logical
# (change requires restart)
#fsync = on             # flush data to disk for crash safety
# (turning this off can cause
# unrecoverable data corruption)
#synchronous_commit = on        # synchronization level;
# off, local, remote_write, remote_apply, or on
#wal_sync_method = fsync        # the default is the first option
# supported by the operating system:
#   open_datasync
#   fdatasync (default on Linux)
#   fsync
#   fsync_writethrough
#   open_sync
#full_page_writes = on          # recover from partial page writes
#wal_compression = off          # enable compression of full-page writes
#wal_log_hints = off            # also do full page writes of non-critical updates
# (change requires restart)
#wal_buffers = -1           # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms       # 1-10000 milliseconds
#wal_writer_flush_after = 1MB       # measured in pages, 0 disables
#commit_delay = 0           # range 0-100000, in microseconds
#commit_siblings = 5            # range 1-1000
# - Checkpoints -
#checkpoint_timeout = 5min      # range 30s-1d
#max_wal_size = 1GB
#min_wal_size = 80MB
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_flush_after = 0     # measured in pages, 0 disables
#checkpoint_warning = 30s       # 0 disables
# - Archiving -
#archive_mode = off     # enables archiving; off, on, or always
# (change requires restart)
#archive_command = ''       # command to use to archive a logfile segment
# placeholders: %p = path of file to archive
#               %f = file name only
# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
#archive_timeout = 0        # force a logfile segment switch after this
# number of seconds; 0 disables
# - Archive Recovery -
# These are only used in recovery mode.
#restore_command = ''       # command to use to restore an archived logfile segment
# placeholders: %p = path of file to restore
#               %f = file name only
# e.g. 'cp /mnt/server/archivedir/%f %p'
# (change requires restart)
#archive_cleanup_command = ''   # command to execute at every restartpoint
#recovery_end_command = ''  # command to execute at completion of recovery
# - Recovery Target -
# Set these only when performing a targeted recovery.
#recovery_target = ''       # 'immediate' to end recovery as soon as a
# consistent state is reached
# (change requires restart)
#recovery_target_name = ''  # the named restore point to which recovery will proceed
# (change requires restart)
#recovery_target_time = ''  # the time stamp up to which recovery will proceed
# (change requires restart)
#recovery_target_xid = ''   # the transaction ID up to which recovery will proceed
# (change requires restart)
#recovery_target_lsn = ''   # the WAL LSN up to which recovery will proceed
# (change requires restart)
#recovery_target_inclusive = on # Specifies whether to stop:
# just after the specified recovery target (on)
# just before the recovery target (off)
# (change requires restart)
#recovery_target_timeline = 'latest'    # 'current', 'latest', or timeline ID
# (change requires restart)
#recovery_target_action = 'pause'   # 'pause', 'promote', 'shutdown'
# (change requires restart)
#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------
# - Sending Servers -
# Set these on the master and on any standby that will send replication data.
#max_wal_senders = 10       # max number of walsender processes
# (change requires restart)
#wal_keep_segments = 0      # in logfile segments; 0 disables
#wal_sender_timeout = 60s   # in milliseconds; 0 disables
#max_replication_slots = 10 # max number of replication slots
# (change requires restart)
#track_commit_timestamp = off   # collect timestamp of transaction commit
# (change requires restart)
# - Master Server -
# These settings are ignored on a standby server.
#synchronous_standby_names = '' # standby servers that provide sync rep
# method to choose sync standbys, number of sync standbys,
# and comma-separated list of application_name
# from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0   # number of xacts by which cleanup is delayed
# - Standby Servers -
# These settings are ignored on a master server.
#primary_conninfo = ''          # connection string to sending server
# (change requires restart)
#primary_slot_name = ''         # replication slot on sending server
# (change requires restart)
#promote_trigger_file = ''      # file name whose presence ends recovery
#hot_standby = on           # "off" disallows queries during recovery
# (change requires restart)
#max_standby_archive_delay = 30s    # max delay before canceling queries
# when reading WAL from archive;
# -1 allows indefinite delay
#max_standby_streaming_delay = 30s  # max delay before canceling queries
# when reading streaming WAL;
# -1 allows indefinite delay
#wal_receiver_status_interval = 10s # send replies at least this often
# 0 disables
#hot_standby_feedback = off     # send info from standby to prevent
# query conflicts
#wal_receiver_timeout = 60s     # time that receiver waits for
# communication from master
# in milliseconds; 0 disables
#wal_retrieve_retry_interval = 5s   # time to wait before retrying to
# retrieve WAL after a failed attempt
#recovery_min_apply_delay = 0       # minimum delay for applying changes during recovery
# - Subscribers -
# These settings are ignored on a publisher.
#max_logical_replication_workers = 4    # taken from max_worker_processes
# (change requires restart)
#max_sync_workers_per_subscription = 2  # taken from max_logical_replication_workers
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_parallel_append = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
#enable_partitionwise_join = off
#enable_partitionwise_aggregate = off
#enable_parallel_hash = on
#enable_partition_pruning = on
# - Planner Cost Constants -
#seq_page_cost = 1.0            # measured on an arbitrary scale
#random_page_cost = 4.0         # same scale as above
#cpu_tuple_cost = 0.01          # same scale as above
#cpu_index_tuple_cost = 0.005       # same scale as above
#cpu_operator_cost = 0.0025     # same scale as above
#parallel_tuple_cost = 0.1      # same scale as above
#parallel_setup_cost = 1000.0   # same scale as above
#jit_above_cost = 100000        # perform JIT compilation if available
# and query more expensive than this;
# -1 disables
#jit_inline_above_cost = 500000     # inline small functions if query is
# more expensive than this; -1 disables
#jit_optimize_above_cost = 500000   # use expensive JIT optimizations if
# query is more expensive than this;
# -1 disables
#min_parallel_table_scan_size = 8MB
#min_parallel_index_scan_size = 512kB
#effective_cache_size = 4GB
# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5            # range 1-10
#geqo_pool_size = 0         # selects default based on effort
#geqo_generations = 0           # selects default based on effort
#geqo_selection_bias = 2.0      # range 1.5-2.0
#geqo_seed = 0.0            # range 0.0-1.0
# - Other Planner Options -
#default_statistics_target = 100    # range 1-10000
#constraint_exclusion = partition   # on, off, or partition
#cursor_tuple_fraction = 0.1        # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8        # 1 disables collapsing of explicit
# JOIN clauses
#force_parallel_mode = off
#jit = on               # allow JIT compilation
#plan_cache_mode = auto         # auto, force_generic_plan or
# force_custom_plan
#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
#log_destination = 'stderr'     # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform.  csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
#logging_collector = off        # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
#log_directory = 'log'          # directory where log files are written,
# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'    # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600           # creation mode for log files,
# begin with 0 to use octal notation
#log_truncate_on_rotation = off     # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation.  Default is
# off, meaning append to existing files
# in all cases.
#log_rotation_age = 1d          # Automatic rotation of logfiles will
# happen after that time.  0 disables.
#log_rotation_size = 10MB       # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on
# This is only relevant when logging to eventlog (win32):
# (change requires restart)
#event_source = 'PostgreSQL'
# - When to Log -
#log_min_messages = warning     # values in order of decreasing detail:
#   debug5
#   debug4
#   debug3
#   debug2
#   debug1
#   info
#   notice
#   warning
#   error
#   log
#   fatal
#   panic
#log_min_error_statement = error    # values in order of decreasing detail:
#   debug5
#   debug4
#   debug3
#   debug2
#   debug1
#   info
#   notice
#   warning
#   error
#   log
#   fatal
#   panic (effectively off)
#log_min_duration_statement = -1    # logs statements and their durations
# according to log_statement_sample_rate. -1 is disabled,
# 0 logs all statement, > 0 logs only statements running at
# least this number of milliseconds.
#log_statement_sample_rate = 1  # Fraction of logged statements over
# log_min_duration_statement. 1.0 logs all statements,
# 0 never logs.
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default      # terse, default, or verbose messages
#log_hostname = off
#log_line_prefix = '%m [%p] '       # special values:
#   %a = application name
#   %u = user name
#   %d = database name
#   %r = remote host and port
#   %h = remote host
#   %p = process ID
#   %t = timestamp without milliseconds
#   %m = timestamp with milliseconds
#   %n = timestamp with milliseconds (as a Unix epoch)
#   %i = command tag
#   %e = SQL state
#   %c = session ID
#   %l = session line number
#   %s = session start timestamp
#   %v = virtual transaction ID
#   %x = transaction ID (0 if none)
#   %q = stop here in non-session
#        processes
#   %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off           # log lock waits >= deadlock_timeout
#log_statement = 'none'         # none, ddl, mod, all
#log_replication_commands = off
#log_temp_files = -1            # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
#log_timezone = 'GMT'
#------------------------------------------------------------------------------
# PROCESS TITLE
#------------------------------------------------------------------------------
#cluster_name = ''          # added to process titles if nonempty
# (change requires restart)
#update_process_title = on
#------------------------------------------------------------------------------
# STATISTICS
#------------------------------------------------------------------------------
# - Query and Index Statistics Collector -
#track_activities = on
#track_counts = on
#track_io_timing = off
#track_functions = none         # none, pl, all
#track_activity_query_size = 1024   # (change requires restart)
#stats_temp_directory = 'pg_stat_tmp'
# - Monitoring -
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off
#------------------------------------------------------------------------------
# AUTOVACUUM
#------------------------------------------------------------------------------
#autovacuum = on            # Enable autovacuum subprocess?  'on'
# requires track_counts to also be on.
#log_autovacuum_min_duration = -1   # -1 disables, 0 logs all actions and
# their durations, > 0 logs only
# actions running at least this number
# of milliseconds.
#autovacuum_max_workers = 3     # max number of autovacuum subprocesses
# (change requires restart)
#autovacuum_naptime = 1min      # time between autovacuum runs
#autovacuum_vacuum_threshold = 50   # min number of row updates before
# vacuum
#autovacuum_analyze_threshold = 50  # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum
# (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000    # maximum multixact age
# before forced vacuum
# (change requires restart)
#autovacuum_vacuum_cost_delay = 2ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1  # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------
# - Statement Behavior -
#client_min_messages = notice       # values in order of decreasing detail:
#   debug5
#   debug4
#   debug3
#   debug2
#   debug1
#   log
#   notice
#   warning
#   error
#search_path = '"$user", public'    # schema names
#row_security = on
#default_tablespace = ''        # a tablespace name, '' uses the default
#temp_tablespaces = ''          # a list of tablespace names, '' uses
# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0          # in milliseconds, 0 is disabled
#lock_timeout = 0           # in milliseconds, 0 is disabled
#idle_in_transaction_session_timeout = 0    # in milliseconds, 0 is disabled
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
#vacuum_multixact_freeze_min_age = 5000000
#vacuum_multixact_freeze_table_age = 150000000
#vacuum_cleanup_index_scale_factor = 0.1    # fraction of total number of tuples
# before index cleanup, 0 always performs
# index cleanup
#bytea_output = 'hex'           # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
#gin_fuzzy_search_limit = 0
#gin_pending_list_limit = 4MB
# - Locale and Formatting -
#datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
#timezone = 'GMT'
#timezone_abbreviations = 'Default'     # Select the set of available time zone
# abbreviations.  Currently, there are
#   Default
#   Australia (historical usage)
#   India
# You can create your own file in
# share/timezonesets/.
#extra_float_digits = 1         # min -15, max 3; any value >0 actually
# selects precise output mode
#client_encoding = sql_ascii        # actually, defaults to database
# encoding
# These settings are initialized by initdb, but they can be changed.
#lc_messages = 'C'          # locale for system error message
# strings
#lc_monetary = 'C'          # locale for monetary formatting
#lc_numeric = 'C'           # locale for number formatting
#lc_time = 'C'              # locale for time formatting
# default configuration for text search
#default_text_search_config = 'pg_catalog.simple'
# - Shared Library Preloading -
#shared_preload_libraries = ''  # (change requires restart)
#local_preload_libraries = ''
#session_preload_libraries = ''
#jit_provider = 'llvmjit'       # JIT library to use
# - Other Defaults -
#dynamic_library_path = '$libdir'
#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64     # min 10
# (change requires restart)
#max_pred_locks_per_transaction = 64    # min 10
# (change requires restart)
#max_pred_locks_per_relation = -2   # negative values mean
# (max_pred_locks_per_transaction
#  / -max_pred_locks_per_relation) - 1
#max_pred_locks_per_page = 2            # min 0
#------------------------------------------------------------------------------
# VERSION AND PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------
# - Previous PostgreSQL Versions -
#array_nulls = on
#backslash_quote = safe_encoding    # on, off, or safe_encoding
#escape_string_warning = on
#lo_compat_privileges = off
#operator_precedence_warning = off
#quote_all_identifiers = off
#standard_conforming_strings = on
#synchronize_seqscans = on
# - Other Platforms and Clients -
#transform_null_equals = off
#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------
#exit_on_error = off            # terminate session on any error?
#restart_after_crash = on       # reinitialize after backend crash?
#data_sync_retry = off          # retry or panic on failure to fsync
# data?
# (change requires restart)
#------------------------------------------------------------------------------
# CONFIG FILE INCLUDES
#------------------------------------------------------------------------------
# These options allow settings to be loaded from files other than the
# default postgresql.conf.
#include_dir = 'conf.d'         # include files ending in '.conf' from
# directory 'conf.d'
#include_if_exists = 'exists.conf'  # include file only if it exists
#include = 'special.conf'       # include file
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
# Add settings for extensions here

That's it; you can now start the containers with docker-compose up -d, or, in a separate console, with docker-compose up.

So, the packages are installed and the database is running; all that remains is to make them talk to each other. To do this, add the following file to the project root: ormconfig.js:

ormconfig.js

const process = require('process');
const username = process.env.POSTGRES_USER || 'postgres';
const password = process.env.POSTGRES_PASSWORD || 'example';
module.exports = {
  type: 'postgres',
  host: 'localhost',
  port: 5432,
  username,
  password,
  database: 'postgres',
  synchronize: true,
  dropSchema: false,
  logging: true,
  entities: [__dirname + '/src/**/*.entity.ts', __dirname + '/dist/**/*.entity.js'],
  migrations: ['migrations/**/*.ts'],
  subscribers: ['subscriber/**/*.ts', 'dist/subscriber/**/*.js'],
  cli: {
    entitiesDir: 'src',
    migrationsDir: 'migrations',
    subscribersDir: 'subscriber',
  },
};

This configuration will be used by the TypeORM CLI.

Let's look at this configuration in more detail. The username and password are read from environment variables, which is convenient when you have several environments (dev, stage, prod, and so on). By default, the username is postgres and the password is example. The rest of the configuration is trivial, so we will focus only on the most interesting parameters:

  • synchronize - indicates whether the database schema should be created automatically when the application starts. Be careful with this option and do not use it in production, otherwise you will lose data. It is convenient while developing and debugging an application. As an alternative, you can use the schema:sync command from the TypeORM CLI.
  • dropSchema - drop the schema every time a connection is established. Like the previous one, this option should only be used during development and debugging.
  • entities - the paths where model definitions are looked up. Note that glob patterns are supported.
  • cli.entitiesDir - the directory where models created by the TypeORM CLI are placed by default.
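The environment-variable fallback used at the top of ormconfig.js is the plain JavaScript `||` default pattern. A minimal standalone sketch (the POSTGRES_* names match the docker-compose convention used in this article):

```javascript
// Read DB credentials from an environment object, falling back to the
// development defaults when the variables are not set.
function resolveCredentials(env) {
  return {
    username: env.POSTGRES_USER || 'postgres',
    password: env.POSTGRES_PASSWORD || 'example',
  };
}

// No overrides: the dev defaults are used.
console.log(resolveCredentials({}));
// { username: 'postgres', password: 'example' }

// A prod environment can inject its own secrets.
console.log(resolveCredentials({ POSTGRES_USER: 'app', POSTGRES_PASSWORD: 's3cret' }));
// { username: 'app', password: 's3cret' }
```

In the real file the env object is simply process.env, so the same config works unchanged across environments.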

To be able to use all the TypeORM features in our Nest application, we need to import TypeOrmModule into AppModule. That is, your AppModule will look like this:

app.module.ts

import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { AppService } from './app.service';
import { TypeOrmModule } from '@nestjs/typeorm';
import * as process from 'process';

const username = process.env.POSTGRES_USER || 'postgres';
const password = process.env.POSTGRES_PASSWORD || 'example';

@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: 'postgres',
      host: 'localhost',
      port: 5432,
      username,
      password,
      database: 'postgres',
      entities: [__dirname + '/**/*.entity{.ts,.js}'],
      synchronize: true,
    }),
  ],
  controllers: [AppController],
  providers: [AppService],
})
export class AppModule {}

As you may have noticed, the forRoot method receives the same database configuration as the one in the ormconfig.js file.

One final touch remains: add a few tasks for working with TypeORM to package.json. The point is that the CLI is written in JavaScript and runs in the Node.js environment, while all of our models and migrations will be written in TypeScript. Therefore the migrations and models have to be transpiled before the CLI can be used. For that we need the ts-node package:

yarn add -D ts-node

After that, add the required commands to package.json:

"scripts": {
  "typeorm": "ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js",
  "migration:generate": "yarn run typeorm migration:generate -n",
  "migration:create": "yarn run typeorm migration:create -n",
  "migration:run": "yarn run typeorm migration:run"
}

The first command, typeorm, adds the ts-node wrapper for running the TypeORM CLI. The remaining commands are shortcuts that you, as a developer, will use almost every day:
migration:generate - creates a migration based on the changes in your models.
migration:create - creates an empty migration.
migration:run - runs pending migrations.
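To see how these shortcuts compose, here is a small standalone sketch. The expand() helper is hypothetical (yarn does this resolution internally); it shows why an argument passed after `--`, as in `yarn run migration:generate -- CreateUserTable`, lands right after the `-n` flag of the underlying typeorm script:

```javascript
// A registry mimicking the package.json "scripts" section above.
const scripts = {
  'typeorm': 'ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js',
  'migration:generate': 'yarn run typeorm migration:generate -n',
};

// Hypothetical helper: resolve a "yarn run <script>" prefix recursively,
// then append any extra arguments to the final command line.
function expand(name, extraArgs, registry) {
  let cmd = registry[name];
  const m = cmd.match(/^yarn run (\S+)\s*(.*)$/);
  if (m) cmd = expand(m[1], m[2] ? [m[2]] : [], registry);
  return [cmd, ...extraArgs].join(' ');
}

console.log(expand('migration:generate', ['CreateUserTable'], scripts));
// ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js migration:generate -n CreateUserTable
```

So the migration name you type on the command line becomes the value of the TypeORM CLI's -n option.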
Well, that's it for now: we have added the required packages, configured the application to work with the database both from the CLI and from the application itself, and launched the DBMS. It is time to add some logic to our application.

Installing the packages for CRUD generation

Using Nest alone, you can build an API that lets you create, read, update and delete an entity. Such a solution will be as flexible as possible, but in some cases it is overkill. For example, if you need to build a prototype quickly, you can often sacrifice flexibility for development speed. Many frameworks provide functionality for generating CRUD from a description of a given entity's data model, and Nest is no exception! This functionality is provided by the @nestjsx/crud package. Its capabilities are very interesting:

  • easy installation and configuration;
  • DBMS independence;
  • a powerful query language with the ability to filter, paginate, sort, load relations and nested entities, cache, etc.;
  • a package for building requests on the frontend;
  • easy overriding of controller methods;
  • minimal configuration;
  • Swagger documentation support.

The functionality is split across several packages:

  • @nestjsx/crud - the base package, which provides the Crud() decorator for route generation, configuration and validation;
  • @nestjsx/crud-request - a package providing a query builder/parser for use on the frontend side;
  • @nestjsx/crud-typeorm - a package for integration with TypeORM, providing the base TypeOrmCrudService service with CRUD methods for working with entities in the database.

In this tutorial we will need the @nestjsx/crud and @nestjsx/crud-typeorm packages. First, let's install them:

yarn add @nestjsx/crud @nestjsx/crud-typeorm class-transformer class-validator

The class-transformer and class-validator packages are needed in this application for declaratively describing the rules for transforming model instances and for validating incoming requests, respectively. These packages come from the same author, so their interfaces are similar.

Implementing the CRUD

We will take a list of users as our example model. Users will have the following fields: id, username, displayName, email. id is an auto-increment field; email and username are unique fields. Simple! All that remains is to implement our idea in the form of a Nest application.
First we need to create a users module, which will be responsible for working with users. Let's use the CLI from NestJS and run the command nest g module users in the root directory of our project.

nest g module users

dmitrii@dmitrii-HP-ZBook-17-G3:~/projects/nest-rest git:(master*)$ nest g module users
CREATE /src/users/users.module.ts (82 bytes)
UPDATE /src/app.module.ts (312 bytes)

In this module we will add an entities folder, which will contain this module's models. In particular, let's add the user.entity.ts file here with the description of the user model:

user.entity.ts

import { Column, Entity, PrimaryGeneratedColumn } from 'typeorm';

@Entity()
export class User {
  @PrimaryGeneratedColumn()
  id: string;

  @Column({ unique: true })
  email: string;

  @Column({ unique: true })
  username: string;

  @Column({ nullable: true })
  displayName: string;
}

For this model to be "visible" to our application, we need to import TypeOrmModule in the UsersModule, with the following content:

users.module.ts

import { Module } from '@nestjs/common';
import { UsersController } from './controllers/users/users.controller';
import { UsersService } from './services/users/users.service';
import { TypeOrmModule } from '@nestjs/typeorm';
import { User } from './entities/user.entity';

@Module({
  controllers: [UsersController],
  providers: [UsersService],
  imports: [
    TypeOrmModule.forFeature([User])
  ]
})
export class UsersModule {}

That is, here we import TypeOrmModule, where as the argument of the forFeature method we specify the list of models related to this module.

All that remains is to create the corresponding entity in the database. The migration mechanism serves this purpose. To create a migration based on the changes in the models, run the command npm run migration:generate -- CreateUserTable:


$ npm run migration:generate -- CreateUserTable
Migration /home/dmitrii/projects/nest-rest/migrations/1563346135367-CreateUserTable.ts has been generated successfully.
Done in 1.96s.

We didn't have to write the migration by hand; everything happened magically. Isn't that a miracle? But that's not all. Let's take a look at the generated migration file:

1563346135367-CreateUserTable.ts

import { MigrationInterface, QueryRunner } from 'typeorm';

export class CreateUserTable1563346816726 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<any> {
    await queryRunner.query(`CREATE TABLE "user" ("id" SERIAL NOT NULL, "email" character varying NOT NULL, "username" character varying NOT NULL, "displayName" character varying, CONSTRAINT "UQ_e12875dfb3b1d92d7d7c5377e22" UNIQUE ("email"), CONSTRAINT "UQ_78a916df40e02a9deb1c4b75edb" UNIQUE ("username"), CONSTRAINT "PK_cace4a159ff9f2512dd42373760" PRIMARY KEY ("id"))`);
  }

  public async down(queryRunner: QueryRunner): Promise<any> {
    await queryRunner.query(`DROP TABLE "user"`);
  }
}

As you can see, not only the method for applying the migration was generated automatically, but also the method for rolling it back. Fantastic!
All that remains is to apply this migration. This is done with the following command:

npm run migration:run

That's it: the schema changes have now made it into the database.
Next, let's create a service that will be responsible for working with users and inherits from TypeOrmCrudService. The repository of the entity of interest must be passed to the parent constructor's parameter, in our case the User repository.

users.service.ts

import { Injectable } from '@nestjs/common';
import { TypeOrmCrudService } from '@nestjsx/crud-typeorm';
import { User } from '../../entities/user.entity';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';

@Injectable()
export class UsersService extends TypeOrmCrudService<User> {
  constructor(@InjectRepository(User) usersRepository: Repository<User>) {
    super(usersRepository);
  }
}

We will need this service in the users controller. To create the controller, type nest g controller users/controllers/users in the console:

nest g controller users/controllers/users

dmitrii@dmitrii-HP-ZBook-17-G3:~/projects/nest-rest git:(master*)$ nest g controller users/controllers/users
CREATE /src/users/controllers/users/users.controller.spec.ts (486 bytes)
CREATE /src/users/controllers/users/users.controller.ts (99 bytes)
UPDATE /src/users/users.module.ts (188 bytes)

Let's open this controller and edit it to add a bit of @nestjsx/crud magic. To the UsersController class, let's add a decorator like this:

@Crud({
  model: {
    type: User
  }
})

Crud is a decorator that adds to the controller the methods needed to work with the model. The model type is specified in the model.type field of the decorator configuration.
The second step is to implement the CrudController<User> interface. The "assembled" controller code looks like this:

import { Controller } from '@nestjs/common';
import { Crud, CrudController } from '@nestjsx/crud';
import { User } from '../../entities/user.entity';
import { UsersService } from '../../services/users/users.service';

@Crud({
  model: {
    type: User
  }
})
@Controller('users')
export class UsersController implements CrudController<User> {
  constructor(public service: UsersService) {}
}

And that's it! Now the controller supports the whole set of operations on the model! Don't believe me? Let's try our application in action!
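For reference, the route set that the Crud() decorator wires onto @Controller('users') can be sketched in plain JavaScript. The list follows the base routes described in the @nestjsx/crud documentation (getManyBase, getOneBase, createOneBase, createManyBase, updateOneBase, replaceOneBase, deleteOneBase); treat it as illustrative rather than authoritative:

```javascript
// Build the method/path pairs that @Crud() generates for a resource.
function crudRoutes(basePath) {
  return [
    { method: 'GET',    path: `/${basePath}` },      // get many
    { method: 'GET',    path: `/${basePath}/:id` },  // get one
    { method: 'POST',   path: `/${basePath}` },      // create one
    { method: 'POST',   path: `/${basePath}/bulk` }, // create many
    { method: 'PATCH',  path: `/${basePath}/:id` },  // update one
    { method: 'PUT',    path: `/${basePath}/:id` },  // replace one
    { method: 'DELETE', path: `/${basePath}/:id` },  // delete one
  ];
}

for (const r of crudRoutes('users')) {
  console.log(`${r.method} ${r.path}`);
}
```

The script we build below exercises three of these endpoints: POST /users, GET /users/:id and DELETE /users/:id.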

Creating a query script in TestMace

To test our service we will use TestMace, an IDE for working with APIs. Why TestMace? Compared to similar products, it has the following advantages:

  • powerful work with variables. At the moment there are several types of variables, each playing its own role: built-in variables, dynamic variables, environment variables. Each variable belongs to a node, with support for an inheritance mechanism;
  • easy creation of scripts without programming. This will be discussed below;
  • a human-readable format that lets you store a project in version control systems;
  • autocompletion, syntax highlighting, highlighting of variable values;
  • API description support, with the ability to import from Swagger.

Let's start our server with the command npm start and try to access the list of users. Given our controller configuration, the list of users can be obtained from the url localhost:3000/users. Let's make a request to this url.
After launching TestMace you will see an interface like this:

[screenshot]

At the top left is the project tree with the root Project node. Let's try to create the first request to get the list of users. For this we will create a RequestStep node. This is done from the Project node's context menu: Add node -> RequestStep.

[screenshot]

In the URL field, paste localhost:3000/users and execute the request. We will get code 200 and an empty array in the response body. That makes sense; we haven't added anyone yet.
Let's create a script that will include the following steps:

  1. creating a user;
  2. requesting the newly created user by its id;
  3. deleting the user created in step 1 by its id.

So, let's go. For convenience, let's create a Folder node. Essentially, it is just a folder in which we will store the whole script. To create a Folder node, select Add node -> Folder from the Project node's context menu. Let's call the node check-create. Inside the check-create node we will create our first request, the one that creates a user. Let's name the newly created node create-user. That is, for now the node hierarchy will look like this:

[screenshot]

Let's go to the open tab of the create-user node and enter the following request parameters:

  • Request type - POST
  • URL - localhost:3000/users
  • Body - JSON with the value {"email": "[email protected]", "displayName": "New user", "username": "user"}

Let's execute this request. Our application responds that the record has been created.

[screenshot]

Well, let's verify this fact. To work with the id of the created user in the subsequent steps, this parameter has to be saved. The dynamic variables mechanism is perfect for this. Let's use our example to see how to work with them. In the parsed tab of the response, open the context menu next to the id node and select the Assign to variable item. In the dialog box you have to set the following parameters:

  • Node - in which of the ancestors to create the dynamic variable. Let's choose check-create;
  • Variable name - the name of this variable. Let's call it userId.

This is what the process of creating the dynamic variable looks like:

[screenshot]

Now, every time this request is executed, the value of the dynamic variable will be updated. And since dynamic variables support a hierarchical inheritance mechanism, the userId variable will be available in descendants of the check-create node at any nesting level.
This variable will come in handy in the next request. Namely, we will request the newly created user. As a child of the check-create node, we will create a check-if-exists request with the url parameter localhost:3000/users/${$dynamicVar.userId}. The construction ${variable_name} retrieves the value of a variable. Since we are dealing with a dynamic variable, to get it you need to access the $dynamicVar object, so the full reference to the userId dynamic variable looks like ${$dynamicVar.userId}. Let's execute the request and make sure the data is returned correctly.
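Conceptually, the ${$dynamicVar.userId} substitution can be modeled in a few lines of JavaScript. The resolveUrl helper below is hypothetical, a sketch of the idea rather than TestMace internals:

```javascript
// Replace every ${...} expression in a template with the value found by
// walking the dot-separated path through the variable scope.
function resolveUrl(template, scope) {
  return template.replace(/\$\{([^}]+)\}/g, (_, expr) =>
    // "$dynamicVar.userId" -> scope["$dynamicVar"]["userId"]
    expr.split('.').reduce((obj, key) => obj[key], scope)
  );
}

const scope = { $dynamicVar: { userId: 42 } };
console.log(resolveUrl('localhost:3000/users/${$dynamicVar.userId}', scope));
// localhost:3000/users/42
```

The same substitution is what makes the delete request below hit the exact user created in step 1.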
The last remaining step is the delete request. We need it not only to check that deletion works, but also, so to speak, to clean up after ourselves in the database, since the email and username fields are unique. So, in the check-create node we will create a delete-user request with the following parameters:

  • Request type - DELETE
  • URL - localhost:3000/users/${$dynamicVar.userId}

Let's launch it. We wait. We enjoy the result)

Well, now we can run this whole script at any time. To run the script, select the Run item from the context menu of the check-create node.

[screenshot]

The nodes in the script will be executed one after another.
You can save this script to your project by running File -> Save project.

Conclusion

All the features of the tools used simply could not fit into the format of this article. As for the main culprit, the @nestjsx/crud package, the following topics remain uncovered:

  • custom validation and transformation of models;
  • the powerful query language and its convenient use on the frontend;
  • overriding and adding new methods to CRUD controllers;
  • Swagger support;
  • cache management.

However, even what is described in this article is enough to understand that even an enterprise framework like NestJS has tools for rapid application prototyping. And such a fine IDE as TestMace helps you keep up the pace.

The source code for this article, together with the TestMace project, is available in the repository https://github.com/TestMace/nest-rest. To open the TestMace project, just run File -> Open project in the application.

Source: habr.com
