Fast CRUD creation with Nest, @nestjsx/crud and TestMace


These days, the REST API has become the standard for web application development, allowing development to be split into independent parts. Popular frameworks such as Angular, React, Vue and others are currently used for the UI. Backend developers can choose from a wide variety of languages and frameworks. Today I'd like to talk about one such framework: NestJS. At TestMace we use it actively for internal projects. Using Nest and the @nestjsx/crud package, we will create a simple CRUD application.

Why NestJS?

Recently, quite a few backend frameworks have appeared in the JavaScript community. And while in terms of features they offer capabilities similar to Nest's, in one respect it definitely wins: its architecture. The following NestJS features let you build industrial-grade applications and scale development across large teams:

  • TypeScript as the primary development language. Although NestJS also supports JavaScript, some functionality may not work, especially when it comes to third-party packages;
  • a DI container, which lets you create loosely coupled components;
  • the framework itself is split into independent, interchangeable parts. For example, either Express or Fastify can be used as the underlying HTTP framework; for working with databases, Nest provides bindings to TypeORM, Mongoose and Sequelize out of the box;
  • NestJS is platform agnostic and supports REST, GraphQL, Websockets, gRPC, and so on.

The framework itself is inspired by the Angular frontend framework and conceptually has a lot in common with it.
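
To give a feel for the module/DI approach, here is a minimal sketch (the names are purely illustrative and not part of the project built below): a provider is registered once in a module and injected wherever it is needed.

import { Controller, Get, Injectable, Module } from '@nestjs/common';

// A provider: registered in the module and injected via constructors.
@Injectable()
export class GreetingService {
  greet(name: string): string {
    return `Hello, ${name}!`;
  }
}

// A controller that receives the service from the DI container.
@Controller('greetings')
export class GreetingController {
  constructor(private readonly greetingService: GreetingService) {}

  @Get()
  greet(): string {
    return this.greetingService.greet('world');
  }
}

// The module wires both together.
@Module({
  controllers: [GreetingController],
  providers: [GreetingService],
})
export class GreetingModule {}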

Installing NestJS and scaffolding the project

Nest ships with the @nestjs/cli package, which lets you quickly scaffold an application skeleton. Let's install this package globally:

npm install --global @nestjs/cli

After installation, we'll generate the basic skeleton of our application under the name nest-rest. This is done with the command nest new nest-rest.

nest new nest-rest

dmitrii@dmitrii-HP-ZBook-17-G3:~/projects $ nest new nest-rest
  We will scaffold your app in a few seconds..

CREATE /nest-rest/.prettierrc (51 bytes)
CREATE /nest-rest/README.md (3370 bytes)
CREATE /nest-rest/nest-cli.json (84 bytes)
CREATE /nest-rest/nodemon-debug.json (163 bytes)
CREATE /nest-rest/nodemon.json (67 bytes)
CREATE /nest-rest/package.json (1805 bytes)
CREATE /nest-rest/tsconfig.build.json (97 bytes)
CREATE /nest-rest/tsconfig.json (325 bytes)
CREATE /nest-rest/tslint.json (426 bytes)
CREATE /nest-rest/src/app.controller.spec.ts (617 bytes)
CREATE /nest-rest/src/app.controller.ts (274 bytes)
CREATE /nest-rest/src/app.module.ts (249 bytes)
CREATE /nest-rest/src/app.service.ts (142 bytes)
CREATE /nest-rest/src/main.ts (208 bytes)
CREATE /nest-rest/test/app.e2e-spec.ts (561 bytes)
CREATE /nest-rest/test/jest-e2e.json (183 bytes)

? Which package manager would you ❤️ to use? yarn
 Installation in progress... 

  Successfully created project nest-rest
  Get started with the following commands:

$ cd nest-rest
$ yarn run start

                          Thanks for installing Nest 
                 Please consider donating to our open collective
                        to help us maintain this package.

                 Donate: https://opencollective.com/nest

We'll pick yarn as our package manager.
At this point you can already start the server with the command npm start and open http://localhost:3000 to see the welcome page. However, that's not why we're here, so let's move on.
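
For reference, the entry point generated by the CLI (src/main.ts) looks roughly like this in a fresh scaffold; the exact contents may differ slightly between Nest versions:

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  // Create the Nest application from the root module and start the HTTP server.
  const app = await NestFactory.create(AppModule);
  await app.listen(3000);
}
bootstrap();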

Setting up the database

I chose PostgreSQL as the DBMS for this article. There's no accounting for taste; in my opinion it's the most mature DBMS and has all the capabilities one might need. As already mentioned, Nest provides integration with various packages for working with databases. Since my choice fell on PostgreSQL, it makes sense to pick TypeORM as the ORM. Let's install the packages needed to connect to the database:

yarn add typeorm @nestjs/typeorm pg

Here's what each package is needed for:

  1. typeorm - the package for the ORM itself;
  2. @nestjs/typeorm - the TypeORM package for NestJS. It adds modules to import into application modules, as well as a set of helper decorators;
  3. pg - the driver for working with PostgreSQL.

Good, the packages are installed; now we need to start the database itself. To deploy it, I'll use a docker-compose.yml with the following content:

docker-compose.yml

version: '3.1'

services:
  db:
    image: postgres:11.2
    restart: always
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - ../db:/var/lib/postgresql/data
      - ./postgresql.conf:/etc/postgresql/postgresql.conf
    ports:
      - 5432:5432
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

As you can see, this file configures the launch of 2 containers:

  1. db - the container with the database itself. In our case, postgres version 11.2 is used;
  2. adminer - a database manager. It provides a web interface for viewing and managing the database.

To allow TCP connections, I added the following configuration.

postgresql.conf

# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
#   name = value
#
# (The "=" is optional.)  Whitespace may be used.  Comments are introduced with
# "#" anywhere on a line.  The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal.  If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, run "pg_ctl reload", or execute
# "SELECT pg_reload_conf()".  Some parameters, which are marked below,
# require a server shutdown and restart to take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on".  Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units:  kB = kilobytes        Time units:  ms  = milliseconds
#                MB = megabytes                     s   = seconds
#                GB = gigabytes                     min = minutes
#                TB = terabytes                     h   = hours
#                                                   d   = days
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
#data_directory = 'ConfigDir'       # use data in another directory
# (change requires restart)
#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
# (change requires restart)
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file
# (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
#external_pid_file = ''         # write an extra PID file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '*'
#listen_addresses = 'localhost'     # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
#port = 5432                # (change requires restart)
#max_connections = 100          # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
#unix_socket_directories = '/tmp'   # comma-separated list of directories
# (change requires restart)
#unix_socket_group = ''         # (change requires restart)
#unix_socket_permissions = 0777     # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off              # advertise server via Bonjour
# (change requires restart)
#bonjour_name = ''          # defaults to the computer name
# (change requires restart)
# - TCP Keepalives -
# see "man 7 tcp" for details
#tcp_keepalives_idle = 0        # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0        # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0       # TCP_KEEPCNT;
# 0 selects the system default
# - Authentication -
#authentication_timeout = 1min      # 1s-600s
#password_encryption = md5      # md5 or scram-sha-256
#db_user_namespace = off
# GSSAPI using Kerberos
#krb_server_keyfile = ''
#krb_caseins_users = off
# - SSL -
#ssl = off
#ssl_ca_file = ''
#ssl_cert_file = 'server.crt'
#ssl_crl_file = ''
#ssl_key_file = 'server.key'
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
#ssl_prefer_server_ciphers = on
#ssl_ecdh_curve = 'prime256v1'
#ssl_min_protocol_version = 'TLSv1'
#ssl_max_protocol_version = ''
#ssl_dh_params_file = ''
#ssl_passphrase_command = ''
#ssl_passphrase_command_supports_reload = off
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
#shared_buffers = 32MB          # min 128kB
# (change requires restart)
#huge_pages = try           # on, off, or try
# (change requires restart)
#temp_buffers = 8MB         # min 800kB
#max_prepared_transactions = 0      # zero disables the feature
# (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
#work_mem = 4MB             # min 64kB
#maintenance_work_mem = 64MB        # min 1MB
#autovacuum_work_mem = -1       # min 1MB, or -1 to use maintenance_work_mem
#max_stack_depth = 2MB          # min 100kB
#shared_memory_type = mmap      # the default is the first option
# supported by the operating system:
#   mmap
#   sysv
#   windows
# (change requires restart)
#dynamic_shared_memory_type = posix # the default is the first option
# supported by the operating system:
#   posix
#   sysv
#   windows
#   mmap
# (change requires restart)
# - Disk -
#temp_file_limit = -1           # limits per-process temp file space
# in kB, or -1 for no limit
# - Kernel Resources -
#max_files_per_process = 1000       # min 25
# (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0          # 0-100 milliseconds (0 disables)
#vacuum_cost_page_hit = 1       # 0-10000 credits
#vacuum_cost_page_miss = 10     # 0-10000 credits
#vacuum_cost_page_dirty = 20        # 0-10000 credits
#vacuum_cost_limit = 200        # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms         # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100        # max buffers written/round, 0 disables
#bgwriter_lru_multiplier = 2.0      # 0-10.0 multiplier on buffers scanned/round
#bgwriter_flush_after = 0       # measured in pages, 0 disables
# - Asynchronous Behavior -
#effective_io_concurrency = 1       # 1-1000; 0 disables prefetching
#max_worker_processes = 8       # (change requires restart)
#max_parallel_maintenance_workers = 2   # taken from max_parallel_workers
#max_parallel_workers_per_gather = 2    # taken from max_parallel_workers
#parallel_leader_participation = on
#max_parallel_workers = 8       # maximum number of max_worker_processes that
# can be used in parallel operations
#old_snapshot_threshold = -1        # 1min-60d; -1 disables; 0 is immediate
# (change requires restart)
#backend_flush_after = 0        # measured in pages, 0 disables
#------------------------------------------------------------------------------
# WRITE-AHEAD LOG
#------------------------------------------------------------------------------
# - Settings -
#wal_level = replica            # minimal, replica, or logical
# (change requires restart)
#fsync = on             # flush data to disk for crash safety
# (turning this off can cause
# unrecoverable data corruption)
#synchronous_commit = on        # synchronization level;
# off, local, remote_write, remote_apply, or on
#wal_sync_method = fsync        # the default is the first option
# supported by the operating system:
#   open_datasync
#   fdatasync (default on Linux)
#   fsync
#   fsync_writethrough
#   open_sync
#full_page_writes = on          # recover from partial page writes
#wal_compression = off          # enable compression of full-page writes
#wal_log_hints = off            # also do full page writes of non-critical updates
# (change requires restart)
#wal_buffers = -1           # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms       # 1-10000 milliseconds
#wal_writer_flush_after = 1MB       # measured in pages, 0 disables
#commit_delay = 0           # range 0-100000, in microseconds
#commit_siblings = 5            # range 1-1000
# - Checkpoints -
#checkpoint_timeout = 5min      # range 30s-1d
#max_wal_size = 1GB
#min_wal_size = 80MB
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_flush_after = 0     # measured in pages, 0 disables
#checkpoint_warning = 30s       # 0 disables
# - Archiving -
#archive_mode = off     # enables archiving; off, on, or always
# (change requires restart)
#archive_command = ''       # command to use to archive a logfile segment
# placeholders: %p = path of file to archive
#               %f = file name only
# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
#archive_timeout = 0        # force a logfile segment switch after this
# number of seconds; 0 disables
# - Archive Recovery -
# These are only used in recovery mode.
#restore_command = ''       # command to use to restore an archived logfile segment
# placeholders: %p = path of file to restore
#               %f = file name only
# e.g. 'cp /mnt/server/archivedir/%f %p'
# (change requires restart)
#archive_cleanup_command = ''   # command to execute at every restartpoint
#recovery_end_command = ''  # command to execute at completion of recovery
# - Recovery Target -
# Set these only when performing a targeted recovery.
#recovery_target = ''       # 'immediate' to end recovery as soon as a
# consistent state is reached
# (change requires restart)
#recovery_target_name = ''  # the named restore point to which recovery will proceed
# (change requires restart)
#recovery_target_time = ''  # the time stamp up to which recovery will proceed
# (change requires restart)
#recovery_target_xid = ''   # the transaction ID up to which recovery will proceed
# (change requires restart)
#recovery_target_lsn = ''   # the WAL LSN up to which recovery will proceed
# (change requires restart)
#recovery_target_inclusive = on # Specifies whether to stop:
# just after the specified recovery target (on)
# just before the recovery target (off)
# (change requires restart)
#recovery_target_timeline = 'latest'    # 'current', 'latest', or timeline ID
# (change requires restart)
#recovery_target_action = 'pause'   # 'pause', 'promote', 'shutdown'
# (change requires restart)
#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------
# - Sending Servers -
# Set these on the master and on any standby that will send replication data.
#max_wal_senders = 10       # max number of walsender processes
# (change requires restart)
#wal_keep_segments = 0      # in logfile segments; 0 disables
#wal_sender_timeout = 60s   # in milliseconds; 0 disables
#max_replication_slots = 10 # max number of replication slots
# (change requires restart)
#track_commit_timestamp = off   # collect timestamp of transaction commit
# (change requires restart)
# - Master Server -
# These settings are ignored on a standby server.
#synchronous_standby_names = '' # standby servers that provide sync rep
# method to choose sync standbys, number of sync standbys,
# and comma-separated list of application_name
# from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0   # number of xacts by which cleanup is delayed
# - Standby Servers -
# These settings are ignored on a master server.
#primary_conninfo = ''          # connection string to sending server
# (change requires restart)
#primary_slot_name = ''         # replication slot on sending server
# (change requires restart)
#promote_trigger_file = ''      # file name whose presence ends recovery
#hot_standby = on           # "off" disallows queries during recovery
# (change requires restart)
#max_standby_archive_delay = 30s    # max delay before canceling queries
# when reading WAL from archive;
# -1 allows indefinite delay
#max_standby_streaming_delay = 30s  # max delay before canceling queries
# when reading streaming WAL;
# -1 allows indefinite delay
#wal_receiver_status_interval = 10s # send replies at least this often
# 0 disables
#hot_standby_feedback = off     # send info from standby to prevent
# query conflicts
#wal_receiver_timeout = 60s     # time that receiver waits for
# communication from master
# in milliseconds; 0 disables
#wal_retrieve_retry_interval = 5s   # time to wait before retrying to
# retrieve WAL after a failed attempt
#recovery_min_apply_delay = 0       # minimum delay for applying changes during recovery
# - Subscribers -
# These settings are ignored on a publisher.
#max_logical_replication_workers = 4    # taken from max_worker_processes
# (change requires restart)
#max_sync_workers_per_subscription = 2  # taken from max_logical_replication_workers
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_parallel_append = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
#enable_partitionwise_join = off
#enable_partitionwise_aggregate = off
#enable_parallel_hash = on
#enable_partition_pruning = on
# - Planner Cost Constants -
#seq_page_cost = 1.0            # measured on an arbitrary scale
#random_page_cost = 4.0         # same scale as above
#cpu_tuple_cost = 0.01          # same scale as above
#cpu_index_tuple_cost = 0.005       # same scale as above
#cpu_operator_cost = 0.0025     # same scale as above
#parallel_tuple_cost = 0.1      # same scale as above
#parallel_setup_cost = 1000.0   # same scale as above
#jit_above_cost = 100000        # perform JIT compilation if available
# and query more expensive than this;
# -1 disables
#jit_inline_above_cost = 500000     # inline small functions if query is
# more expensive than this; -1 disables
#jit_optimize_above_cost = 500000   # use expensive JIT optimizations if
# query is more expensive than this;
# -1 disables
#min_parallel_table_scan_size = 8MB
#min_parallel_index_scan_size = 512kB
#effective_cache_size = 4GB
# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5            # range 1-10
#geqo_pool_size = 0         # selects default based on effort
#geqo_generations = 0           # selects default based on effort
#geqo_selection_bias = 2.0      # range 1.5-2.0
#geqo_seed = 0.0            # range 0.0-1.0
# - Other Planner Options -
#default_statistics_target = 100    # range 1-10000
#constraint_exclusion = partition   # on, off, or partition
#cursor_tuple_fraction = 0.1        # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8        # 1 disables collapsing of explicit
# JOIN clauses
#force_parallel_mode = off
#jit = on               # allow JIT compilation
#plan_cache_mode = auto         # auto, force_generic_plan or
# force_custom_plan
#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
#log_destination = 'stderr'     # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform.  csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
#logging_collector = off        # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
#log_directory = 'log'          # directory where log files are written,
# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'    # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600           # creation mode for log files,
# begin with 0 to use octal notation
#log_truncate_on_rotation = off     # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation.  Default is
# off, meaning append to existing files
# in all cases.
#log_rotation_age = 1d          # Automatic rotation of logfiles will
# happen after that time.  0 disables.
#log_rotation_size = 10MB       # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on
# This is only relevant when logging to eventlog (win32):
# (change requires restart)
#event_source = 'PostgreSQL'
# - When to Log -
#log_min_messages = warning     # values in order of decreasing detail:
#   debug5
#   debug4
#   debug3
#   debug2
#   debug1
#   info
#   notice
#   warning
#   error
#   log
#   fatal
#   panic
#log_min_error_statement = error    # values in order of decreasing detail:
#   debug5
#   debug4
#   debug3
#   debug2
#   debug1
#   info
#   notice
#   warning
#   error
#   log
#   fatal
#   panic (effectively off)
#log_min_duration_statement = -1    # logs statements and their durations
# according to log_statement_sample_rate. -1 is disabled,
# 0 logs all statement, > 0 logs only statements running at
# least this number of milliseconds.
#log_statement_sample_rate = 1  # Fraction of logged statements over
# log_min_duration_statement. 1.0 logs all statements,
# 0 never logs.
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default      # terse, default, or verbose messages
#log_hostname = off
#log_line_prefix = '%m [%p] '       # special values:
#   %a = application name
#   %u = user name
#   %d = database name
#   %r = remote host and port
#   %h = remote host
#   %p = process ID
#   %t = timestamp without milliseconds
#   %m = timestamp with milliseconds
#   %n = timestamp with milliseconds (as a Unix epoch)
#   %i = command tag
#   %e = SQL state
#   %c = session ID
#   %l = session line number
#   %s = session start timestamp
#   %v = virtual transaction ID
#   %x = transaction ID (0 if none)
#   %q = stop here in non-session
#        processes
#   %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off           # log lock waits >= deadlock_timeout
#log_statement = 'none'         # none, ddl, mod, all
#log_replication_commands = off
#log_temp_files = -1            # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
#log_timezone = 'GMT'
#------------------------------------------------------------------------------
# PROCESS TITLE
#------------------------------------------------------------------------------
#cluster_name = ''          # added to process titles if nonempty
# (change requires restart)
#update_process_title = on
#------------------------------------------------------------------------------
# STATISTICS
#------------------------------------------------------------------------------
# - Query and Index Statistics Collector -
#track_activities = on
#track_counts = on
#track_io_timing = off
#track_functions = none         # none, pl, all
#track_activity_query_size = 1024   # (change requires restart)
#stats_temp_directory = 'pg_stat_tmp'
# - Monitoring -
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off
#------------------------------------------------------------------------------
# AUTOVACUUM
#------------------------------------------------------------------------------
#autovacuum = on            # Enable autovacuum subprocess?  'on'
# requires track_counts to also be on.
#log_autovacuum_min_duration = -1   # -1 disables, 0 logs all actions and
# their durations, > 0 logs only
# actions running at least this number
# of milliseconds.
#autovacuum_max_workers = 3     # max number of autovacuum subprocesses
# (change requires restart)
#autovacuum_naptime = 1min      # time between autovacuum runs
#autovacuum_vacuum_threshold = 50   # min number of row updates before
# vacuum
#autovacuum_analyze_threshold = 50  # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum
# (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000    # maximum multixact age
# before forced vacuum
# (change requires restart)
#autovacuum_vacuum_cost_delay = 2ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1  # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------
# - Statement Behavior -
#client_min_messages = notice       # values in order of decreasing detail:
#   debug5
#   debug4
#   debug3
#   debug2
#   debug1
#   log
#   notice
#   warning
#   error
#search_path = '"$user", public'    # schema names
#row_security = on
#default_tablespace = ''        # a tablespace name, '' uses the default
#temp_tablespaces = ''          # a list of tablespace names, '' uses
# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0          # in milliseconds, 0 is disabled
#lock_timeout = 0           # in milliseconds, 0 is disabled
#idle_in_transaction_session_timeout = 0    # in milliseconds, 0 is disabled
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
#vacuum_multixact_freeze_min_age = 5000000
#vacuum_multixact_freeze_table_age = 150000000
#vacuum_cleanup_index_scale_factor = 0.1    # fraction of total number of tuples
# before index cleanup, 0 always performs
# index cleanup
#bytea_output = 'hex'           # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
#gin_fuzzy_search_limit = 0
#gin_pending_list_limit = 4MB
# - Locale and Formatting -
#datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
#timezone = 'GMT'
#timezone_abbreviations = 'Default'     # Select the set of available time zone
# abbreviations.  Currently, there are
#   Default
#   Australia (historical usage)
#   India
# You can create your own file in
# share/timezonesets/.
#extra_float_digits = 1         # min -15, max 3; any value >0 actually
# selects precise output mode
#client_encoding = sql_ascii        # actually, defaults to database
# encoding
# These settings are initialized by initdb, but they can be changed.
#lc_messages = 'C'          # locale for system error message
# strings
#lc_monetary = 'C'          # locale for monetary formatting
#lc_numeric = 'C'           # locale for number formatting
#lc_time = 'C'              # locale for time formatting
# default configuration for text search
#default_text_search_config = 'pg_catalog.simple'
# - Shared Library Preloading -
#shared_preload_libraries = ''  # (change requires restart)
#local_preload_libraries = ''
#session_preload_libraries = ''
#jit_provider = 'llvmjit'       # JIT library to use
# - Other Defaults -
#dynamic_library_path = '$libdir'
#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64     # min 10
# (change requires restart)
#max_pred_locks_per_transaction = 64    # min 10
# (change requires restart)
#max_pred_locks_per_relation = -2   # negative values mean
# (max_pred_locks_per_transaction
#  / -max_pred_locks_per_relation) - 1
#max_pred_locks_per_page = 2            # min 0
#------------------------------------------------------------------------------
# VERSION AND PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------
# - Previous PostgreSQL Versions -
#array_nulls = on
#backslash_quote = safe_encoding    # on, off, or safe_encoding
#escape_string_warning = on
#lo_compat_privileges = off
#operator_precedence_warning = off
#quote_all_identifiers = off
#standard_conforming_strings = on
#synchronize_seqscans = on
# - Other Platforms and Clients -
#transform_null_equals = off
#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------
#exit_on_error = off            # terminate session on any error?
#restart_after_crash = on       # reinitialize after backend crash?
#data_sync_retry = off          # retry or panic on failure to fsync
# data?
# (change requires restart)
#------------------------------------------------------------------------------
# CONFIG FILE INCLUDES
#------------------------------------------------------------------------------
# These options allow settings to be loaded from files other than the
# default postgresql.conf.
#include_dir = 'conf.d'         # include files ending in '.conf' from
# directory 'conf.d'
#include_if_exists = 'exists.conf'  # include file only if it exists
#include = 'special.conf'       # include file
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
# Add settings for extensions here

That's it; you can start the containers with the command docker-compose up -d, or in a separate console with docker-compose up.
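
Before moving on, it's worth making sure the database is actually reachable. A couple of optional checks (assuming the compose file above and the default postgres user):

# list the containers defined in docker-compose.yml and their state
docker-compose ps

# run a query inside the db container to confirm the server answers
docker-compose exec db psql -U postgres -c 'SELECT version();'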

So, the packages are installed and the database is running; all that remains is to introduce them to each other. To do this, add the following file to the project root: ormconfig.js:

ormconfig.js

const process = require('process');
const username = process.env.POSTGRES_USER || "postgres";
const password = process.env.POSTGRES_PASSWORD || "example";
module.exports = {
  "type": "postgres",
  "host": "localhost",
  "port": 5432,
  username,
  password,
  "database": "postgres",
  "synchronize": true,
  "dropSchema": false,
  "logging": true,
  "entities": [__dirname + "/src/**/*.entity.ts", __dirname + "/dist/**/*.entity.js"],
  "migrations": ["migrations/**/*.ts"],
  "subscribers": ["subscriber/**/*.ts", "dist/subscriber/**/*.js"],
  "cli": {
    "entitiesDir": "src",
    "migrationsDir": "migrations",
    "subscribersDir": "subscriber"
  }
}

This configuration will be used by the TypeORM CLI.

Let's look at this configuration in more detail. At the top of the file we read the username and password from environment variables. This is convenient when you have several environments (dev, stage, prod, and so on). By default the username is postgres and the password is example. The rest of the config is fairly trivial, so we'll focus only on the most interesting parameters:

  • synchronize - indicates whether the database schema should be created automatically when the application starts. Be careful with this option and do not use it in production, otherwise you will lose data. It is convenient while developing and debugging an application. As an alternative, you can use the schema:sync command from the TypeORM CLI (see the sketch after this list for an environment-based toggle);
  • dropSchema - drops the schema every time a connection is established. Like the previous one, this option should only be used during development and debugging;
  • entities - the paths where model definitions are searched for. Note that glob patterns are supported;
  • cli.entitiesDir - the directory where models created by the TypeORM CLI are placed by default.
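
One common way to keep synchronize convenient in development while avoiding it in production is to derive it from an environment variable. This is only a sketch of a possible tweak, not part of the article's configuration:

// ormconfig.js (fragment) - hypothetical tweak, not used in the repository
const isProduction = process.env.NODE_ENV === 'production';

module.exports = {
  // ...the rest of the configuration from above...
  "synchronize": !isProduction, // never auto-sync the schema in production
  "dropSchema": false,
};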

For us to be able to use all the features of TypeORM in the Nest application, we need to import the TypeOrmModule into the AppModule. That is, your AppModule will look like this:

app.module.ts

import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { AppService } from './app.service';
import { TypeOrmModule } from '@nestjs/typeorm';
import * as process from "process";

const username = process.env.POSTGRES_USER || 'postgres';
const password = process.env.POSTGRES_PASSWORD || 'example';

@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: 'postgres',
      host: 'localhost',
      port: 5432,
      username,
      password,
      database: 'postgres',
      entities: [__dirname + '/**/*.entity{.ts,.js}'],
      synchronize: true,
    }),
  ],
  controllers: [AppController],
  providers: [AppService],
})
export class AppModule {}

As you can see, the forRoot method receives the same database configuration as in the ormconfig.js file.
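
To avoid maintaining the connection settings in two places, one option (not used in this article, shown here only as a sketch) is to build the options in a factory, for example from the same environment variables, using TypeOrmModule.forRootAsync instead of forRoot inside the imports array:

// Hypothetical alternative: construct the options in a factory instead of inlining them.
TypeOrmModule.forRootAsync({
  useFactory: () => ({
    type: 'postgres' as const,
    host: 'localhost',
    port: 5432,
    username: process.env.POSTGRES_USER || 'postgres',
    password: process.env.POSTGRES_PASSWORD || 'example',
    database: 'postgres',
    entities: [__dirname + '/**/*.entity{.ts,.js}'],
    synchronize: true,
  }),
}),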

One finishing touch remains: adding a few scripts for working with TypeORM to package.json. The thing is, the CLI is written in JavaScript and runs in the Node.js environment, while all our models and migrations will be written in TypeScript. Therefore, the migrations and models need to be transpiled before the CLI can use them. For this we need the ts-node package:

yarn add -D ts-node

After that, add the required commands to package.json:

"typeorm": "ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js",
"migration:generate": "yarn run typeorm migration:generate -n",
"migration:create": "yarn run typeorm migration:create -n",
"migration:run": "yarn run typeorm migration:run"

The first command, typeorm, adds a ts-node wrapper for running the TypeORM CLI. The remaining commands are convenient shortcuts that you, as a developer, will use almost every day:
migration:generate - generate a migration based on changes in your models.
migration:create - create an empty migration.
migration:run - run the migrations.
Well, that's it for now: we've added the necessary packages, configured the application to work with the database both from the CLI and from the application itself, and started the DBMS. It's time to add some logic to our application.

Installing a package for generating CRUD

Using Nest alone, you can already build an API that lets you create, read, update and delete an entity. Such a solution will be as flexible as possible, but in some cases it's overkill. For example, if you need to build a prototype quickly, you can often trade flexibility for development speed. Many frameworks provide functionality for generating CRUD by describing the data model of an entity. And Nest is no exception! This functionality is provided by the @nestjsx/crud package. Its capabilities are very impressive:

  • easy installation and configuration;
  • DBMS independence;
  • a powerful query language with filtering, pagination, sorting, loading of relations and nested entities, caching, and more;
  • a package for building requests on the frontend;
  • easy overriding of controller methods;
  • minimal configuration;
  • Swagger documentation support.

The functionality is split across several packages:

  • @nestjsx/crud - the core package, which provides the Crud() decorator for route generation, configuration and validation;
  • @nestjsx/crud-request - a package that provides a query builder/parser for use on the frontend side;
  • @nestjsx/crud-typeorm - a package for integration with TypeORM, providing the base TypeOrmCrudService with CRUD methods for working with entities in the database.

In this tutorial we will need the @nestjsx/crud and @nestjsx/crud-typeorm packages. First, let's install them

yarn add @nestjsx/crud @nestjsx/crud-typeorm class-transformer class-validator

The class-transformer and class-validator packages are needed in this application for declaratively describing the rules for transforming model instances and validating incoming requests, respectively. These packages are by the same author, so their interfaces look similar.
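
As a quick illustration of what class-validator gives us, here is a hypothetical DTO (not part of the application built below) whose fields are validated declaratively:

import { IsEmail, IsOptional, IsString } from 'class-validator';

// Describes and validates the payload of an incoming "create user" request.
export class CreateUserDto {
  @IsEmail()
  email: string;

  @IsString()
  username: string;

  @IsOptional()
  @IsString()
  displayName?: string;
}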

Implementing CRUD

We'll take a list of users as our example. Users will have the following fields: id, username, displayName, email. id is an auto-increment field, email and username are unique fields. Simple! All that's left is to implement this idea as a Nest application.
First we need to create a users module, which will be responsible for working with users. Let's use the NestJS CLI and run the command nest g module users in the root of our project.

nest g module users

dmitrii@dmitrii-HP-ZBook-17-G3:~/projects/nest-rest git:(master*)$ nest g module users
CREATE /src/users/users.module.ts (82 bytes)
UPDATE /src/app.module.ts (312 bytes)

In this module we'll add an entities folder, where the models of this module will live. In particular, let's add here the user.entity.ts file with the description of the user model:

user.entity.ts

import { Column, Entity, PrimaryGeneratedColumn } from 'typeorm';

@Entity()
export class User {
  @PrimaryGeneratedColumn()
  id: number;

  @Column({ unique: true })
  email: string;

  @Column({ unique: true })
  username: string;

  @Column({ nullable: true })
  displayName: string;
}

For our application to "see" this model, we need to import the TypeOrmModule in the UsersModule with the following content:

users.module.ts

import { Module } from '@nestjs/common';
import { UsersController } from './controllers/users/users.controller';
import { UsersService } from './services/users/users.service';
import { TypeOrmModule } from '@nestjs/typeorm';
import { User } from './entities/user.entity';

@Module({
  controllers: [UsersController],
  providers: [UsersService],
  imports: [
    TypeOrmModule.forFeature([User])
  ]
})
export class UsersModule {}

That is, here we import the TypeOrmModule, and as the forFeature method parameter we specify the list of models related to this module.

All that remains is to create the corresponding entity in the database. The migration mechanism serves this purpose. To create a migration based on the model changes, run the command npm run migration:generate -- CreateUserTable:

npm run migration:generate -- CreateUserTable

$ npm run migration:generate -- CreateUserTable
Migration /home/dmitrii/projects/nest-rest/migrations/1563346135367-CreateUserTable.ts has been generated successfully.
Done in 1.96s.

We didn't have to write the migration by hand; everything happened magically. Isn't that a miracle! However, that's not all. Let's take a look at the created migration file:

1563346135367-CreateUserTable.ts

import {MigrationInterface, QueryRunner} from "typeorm";

export class CreateUserTable1563346816726 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<any> {
    await queryRunner.query(`CREATE TABLE "user" ("id" SERIAL NOT NULL, "email" character varying NOT NULL, "username" character varying NOT NULL, "displayName" character varying, CONSTRAINT "UQ_e12875dfb3b1d92d7d7c5377e22" UNIQUE ("email"), CONSTRAINT "UQ_78a916df40e02a9deb1c4b75edb" UNIQUE ("username"), CONSTRAINT "PK_cace4a159ff9f2512dd42373760" PRIMARY KEY ("id"))`);
  }

  public async down(queryRunner: QueryRunner): Promise<any> {
    await queryRunner.query(`DROP TABLE "user"`);
  }
}

As you can see, not only was the method for applying the migration generated automatically, but also the method for rolling it back. Fantastic!
All that remains is to apply this migration. This is done with the following command:

npm run migration:run

That's it, the schema changes have now been migrated to the database.
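
If you want to double-check the result (optional; this assumes the docker-compose setup from above), you can inspect the freshly created table right inside the db container:

# show the structure of the "user" table created by the migration
docker-compose exec db psql -U postgres -c '\d "user"'
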
Next, we'll create a service that will be responsible for working with users and inherit it from TypeOrmCrudService. The repository of the entity of interest must be passed to the parent constructor, in our case the User repository.

users.service.ts

import { Injectable } from '@nestjs/common';
import { TypeOrmCrudService } from '@nestjsx/crud-typeorm';
import { User } from '../../entities/user.entity';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';

@Injectable()
export class UsersService extends TypeOrmCrudService<User> {
  constructor(@InjectRepository(User) usersRepository: Repository<User>) {
    super(usersRepository);
  }
}

We'll need this service in the users controller. To create the controller, type in the console nest g controller users/controllers/users

nest g controller users/controllers/users

dmitrii@dmitrii-HP-ZBook-17-G3:~/projects/nest-rest git:(master*)$ nest g controller users/controllers/users
CREATE /src/users/controllers/users/users.controller.spec.ts (486 bytes)
CREATE /src/users/controllers/users/users.controller.ts (99 bytes)
UPDATE /src/users/users.module.ts (188 bytes)

Let's open this controller and edit it to add a bit of @nestjsx/crud magic. To the UsersController class, add the following decorator:

@Crud({
  model: {
    type: User
  }
})

Crud is a decorator that adds to the controller the methods needed to work with the model. The model type is specified in the model.type field of the decorator configuration.
The second step is to implement the CrudController<User> interface. The "assembled" controller code looks like this:

import { Controller } from '@nestjs/common';
import { Crud, CrudController } from '@nestjsx/crud';
import { User } from '../../entities/user.entity';
import { UsersService } from '../../services/users/users.service';

@Crud({
  model: {
    type: User
  }
})
@Controller('users')
export class UsersController implements CrudController<User> {
  constructor(public service: UsersService) {}
}

And that's it! Now the controller supports the whole set of operations on the model! Don't believe me? Let's try our application out in practice!
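
For orientation, the @Crud decorator with the configuration above wires a standard set of REST endpoints onto the controller. Roughly, they should look like this (check the @nestjsx/crud documentation for the authoritative list):

  • GET /users - get a list of users (with filtering, sorting and pagination via the query language);
  • GET /users/:id - get a single user;
  • POST /users - create a user;
  • POST /users/bulk - create several users at once;
  • PATCH /users/:id - partially update a user;
  • PUT /users/:id - replace a user;
  • DELETE /users/:id - delete a user.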

Creating a request script in TestMace

To test our service we'll use TestMace, an IDE for working with APIs. Why TestMace? Compared to similar products, it has the following advantages:

  • powerful work with variables. At the moment there are several kinds of variables, each playing its own role: built-in variables, dynamic variables, environment variables. Each variable belongs to a node, with support for an inheritance mechanism;
  • easy creation of scripts without programming. This is discussed below;
  • a human-readable format that lets you store the project in version control systems;
  • autocompletion, syntax highlighting, variable value highlighting;
  • API description support with the ability to import from Swagger.

Let's start our server with the command npm start and try to access the list of users. The list of users, judging by our controller configuration, is available at the URL localhost:3000/users. Let's make a request to this URL.
After launching TestMace you will see an interface like this:

[screenshot]

At the top left is the project tree with the root node Project. Let's try to create the first request to get the list of users. To do this we'll create a RequestStep node. This is done from the Project node's context menu: Add node -> RequestStep.

[screenshot]

In the URL field, paste localhost:3000/users and run the request. We'll get back code 200 with an empty array in the response body. That's understandable; we haven't added anyone yet.
Let's create a script that will include the following steps:

  1. creating a user;
  2. requesting the newly created user by id;
  3. deleting the user created in step 1 by id.

So, let's go. For convenience, we'll create a Folder node. Essentially, it's just a folder in which we'll store the whole script. To create a Folder node, choose Add node -> Folder from the Project node's context menu. Let's call this node check-create. Inside the check-create node we'll create our first request for creating a user. Let's call the newly created node create-user. That is, at the moment the node hierarchy will look like this:

[screenshot]

Let's switch to the opened tab of the create-user node and enter the following request parameters:

  • Request method - POST
  • URL - localhost:3000/users
  • Body - JSON with the value {"email": "[email protected]", "displayName": "New user", "username": "user"}

Let's execute this request. Our application replies that the record has been created.

[screenshot]

Well then, let's verify this fact. In order to work with the created user's id in the following steps, this parameter must be saved. The dynamic variables mechanism is perfect for this. Using our example, let's look at how to work with them. In the parsed tab of the response, next to the id node, choose Assign to variable from the context menu. In the dialog box you need to set the following parameters:

  • Node - in which of the ancestor nodes to create the dynamic variable. Let's pick check-create;
  • Variable name - the name of this variable. Let's call it userId.

Here's what creating a dynamic variable looks like:

[screenshot]

Now, every time this request is executed, the value of the dynamic variable will be updated. And since dynamic variables support a hierarchical inheritance mechanism, the userId variable will be available in descendants of the check-create node at any nesting level.
This variable will be useful to us in the next request. Namely, we'll request the newly created user. As a child of the check-create node we'll create a check-if-exists request with the url parameter equal to localhost:3000/users/${$dynamicVar.userId}. The construct ${variable_name} retrieves the value of a variable. Since we have a dynamic variable, to get it we need to access the $dynamicVar object, i.e. the full reference to the userId dynamic variable looks like ${$dynamicVar.userId}. Let's execute the request and make sure the correct data is returned.
The last step left is to request deletion. We need it not only to check that deletion works, but also, so to speak, to clean up after ourselves in the database, since the email and username fields are unique. So, in the check-create node we'll create a delete-user request with the following parameters

  • Request method - DELETE
  • URL - localhost:3000/users/${$dynamicVar.userId}

Let's run it. We wait. We enjoy the result)

Well, now we can run this whole script at any time. To run the script, choose Run from the check-create node's context menu.

[screenshot]

The nodes in the script will be executed one after another.
You can save this script to your project by running File -> Save project.

Conclusion

All the features of the tools used simply couldn't fit into the format of this article. As for the main culprit, the @nestjsx/crud package, the following topics remain uncovered:

  • custom validation and transformation of models;
  • the powerful query language and its convenient use on the frontend;
  • overriding and adding new methods to CRUD controllers;
  • Swagger support;
  • cache management.

However, even what is described in this article is enough to understand that even such an enterprise framework as NestJS has tools for rapid application prototyping. And such a cool IDE as TestMace lets you keep up the pace.

The source code for this article, together with the TestMace project, is available in the repository https://github.com/TestMace/nest-rest. To open the TestMace project, just run File -> Open project in the app.

source: www.habr.com
