Study on the implementation of business logic at the level of PostgreSQL stored functions

The motive for writing this sketch was the article "During quarantine, the load increased 5 times, but we were ready." How Lingualeo moved to PostgreSQL with 23 million users. Another article, published 4 years ago, also seemed interesting: Implementing Business Logic in MySQL.

So it turned out that the same thought, "implement business logic in the database", came to mind not only to me.

I also wanted to preserve for the future, first of all for myself, the interesting techniques that emerged during the implementation. Especially since a strategic decision was made relatively recently to change the architecture and move the business logic to the backend, so everything that has been developed will soon be of no use to anyone and of no interest.

The described methods are not some kind of discovery or exclusive know-how; everything is classic and has been implemented more than once (for example, I applied a similar approach about 20 years ago on Oracle). I simply decided to collect everything in one place, in case it comes in handy for someone. As practice shows, the same idea quite often occurs independently to different people. And keeping a record for yourself is useful too.

Of course, nothing in this world is perfect, and errors and typos are unfortunately possible. Criticism and comments are welcome and expected. One more small detail: specific implementation details are omitted, since everything is still used in a real working project. So the article is just an etude and a description of the general concept, nothing more. I hope the details given are enough to convey the big picture.

The general idea is "divide and conquer, hide and own"

The idea is classic: a separate schema for the tables and a separate schema for the stored functions.
The client has no direct access to the data; all it can do is call a stored function and process the response it receives.

Roles

CREATE ROLE store;
CREATE ROLE sys_functions;
CREATE ROLE loc_audit_functions;
CREATE ROLE service_functions;
CREATE ROLE business_functions;

Schemas

Table storage schema

Target tables that implement subject entities.

CREATE SCHEMA store AUTHORIZATION store ;

System functions schema

System functions, in particular for logging table changes.

CREATE SCHEMA sys_functions AUTHORIZATION sys_functions ;

Local audit schema

Functions and tables to implement local audit of the execution of stored functions and changes to target tables.

CREATE SCHEMA loc_audit_functions AUTHORIZATION loc_audit_functions;

Service functions schema

Functions for service operations and DML on the target tables.

CREATE SCHEMA service_functions AUTHORIZATION service_functions;

Business functions schema

Final business functions called by the client.

CREATE SCHEMA business_functions AUTHORIZATION business_functions;

Access rights

The DBA role has full access to all schemas (it is separated from the database owner role).

CREATE ROLE dba_role;
GRANT store TO dba_role;
GRANT sys_functions TO dba_role;
GRANT loc_audit_functions TO dba_role;
GRANT service_functions TO dba_role;
GRANT business_functions TO dba_role;

The USER role has the EXECUTE privilege in the business_functions schema.

CREATE ROLE user_role;

Privileges Between Schemas

Since all functions are created with the SECURITY DEFINER attribute, the statement REVOKE EXECUTE ON ALL FUNCTIONS ... FROM public; is required.

REVOKE EXECUTE ON ALL FUNCTIONS IN SCHEMA sys_functions FROM public ;
REVOKE EXECUTE ON ALL FUNCTIONS IN SCHEMA loc_audit_functions FROM public ;
REVOKE EXECUTE ON ALL FUNCTIONS IN SCHEMA service_functions FROM public ;
REVOKE EXECUTE ON ALL FUNCTIONS IN SCHEMA business_functions FROM public ;

GRANT USAGE ON SCHEMA sys_functions TO dba_role ;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA sys_functions TO dba_role ;
GRANT USAGE ON SCHEMA loc_audit_functions TO dba_role ;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA loc_audit_functions TO dba_role ;
GRANT USAGE ON SCHEMA service_functions TO dba_role ;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA service_functions TO dba_role ;
GRANT USAGE ON SCHEMA business_functions TO dba_role ;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA business_functions TO dba_role ;
-- user_role also needs USAGE on the schema in order to call its functions
GRANT USAGE ON SCHEMA business_functions TO user_role ;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA business_functions TO user_role ;

GRANT ALL PRIVILEGES ON SCHEMA store TO GROUP business_functions ;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA store TO business_functions ;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA store TO business_functions ;

So the database framework is ready. Now we can start filling it.

Target tables

Creating tables is trivial. Nothing special, except that it was decided not to use SERIAL and to take key values from explicitly created sequences. Plus, of course, maximum use of the statement

COMMENT ON ...

Comments for all objects, no exceptions.
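For illustration only, a minimal sketch of such a table (store.table_example and its sequence are made-up names, not real project objects):

-- Hypothetical target table: an explicitly created sequence instead of SERIAL, comments on every object
CREATE SEQUENCE store.table_example_id_seq;

CREATE TABLE store.table_example
(
  id    integer PRIMARY KEY ,
  title text NOT NULL
);

COMMENT ON SEQUENCE store.table_example_id_seq IS 'Id generator for store.table_example';
COMMENT ON TABLE    store.table_example        IS 'Example of a target table for a subject entity';
COMMENT ON COLUMN   store.table_example.id     IS 'Surrogate key, values are taken explicitly from store.table_example_id_seq';
COMMENT ON COLUMN   store.table_example.title  IS 'Example attribute';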

Local audit

To log the execution of stored functions and changes to the target tables, a local audit table is used. Among other things, it stores the details of the client connection, a label of the called module, and the actual values of the input and output parameters as JSON.
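The real structure is omitted, but very roughly the audit table and the make_log function used throughout the templates below can be imagined like this (all names, the set of columns and the md5-based module hash are assumptions made for illustration only):

CREATE SEQUENCE loc_audit_functions.log_id_seq;

CREATE TABLE loc_audit_functions.log
(
  id              integer   PRIMARY KEY DEFAULT nextval('loc_audit_functions.log_id_seq') ,
  log_date        timestamp DEFAULT now() ,
  usename         name      DEFAULT session_user ,
  client_addr     inet      DEFAULT inet_client_addr() ,
  log_module      text ,
  log_module_hash text ,
  status          text ,
  log_params      jsonb ,
  is_error        boolean   DEFAULT FALSE
);

CREATE OR REPLACE FUNCTION loc_audit_functions.make_log
(
  current_module text ,
  current_status text ,
  current_params json ,
  current_error  boolean DEFAULT FALSE
)
RETURNS void AS $$
BEGIN
  -- one audit row per call; connection details are filled in by the column defaults
  INSERT INTO loc_audit_functions.log ( log_module , log_module_hash , status , log_params , is_error )
  VALUES ( current_module , md5(current_module) , current_status , current_params::jsonb , current_error );
END
$$ LANGUAGE plpgsql SECURITY DEFINER;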

System functions

These are trigger functions designed for logging changes in the target tables.

Template - system function

---------------------------------------------------------
-- INSERT
CREATE OR REPLACE FUNCTION sys_functions.table_insert_log ()
RETURNS TRIGGER AS $$
BEGIN
  PERFORM loc_audit_functions.make_log( ' '||'table' , 'insert' , json_build_object('id', NEW.id)  );
  RETURN NULL ;
END
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE TRIGGER table_after_insert AFTER INSERT ON store.table FOR EACH ROW EXECUTE PROCEDURE sys_functions.table_insert_log();

---------------------------------------------------------
-- UPDATE
CREATE OR REPLACE FUNCTION sys_functions.table_update_log ()
RETURNS TRIGGER AS $$
BEGIN
  IF OLD.column IS DISTINCT FROM NEW.column -- IS DISTINCT FROM also catches changes to and from NULL
  THEN
    PERFORM loc_audit_functions.make_log( ' '||'table' , 'update' , json_build_object('OLD.column', OLD.column , 'NEW.column' , NEW.column )  );
  END IF ;
  RETURN NULL ;
END
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE TRIGGER table_after_update AFTER UPDATE ON store.table FOR EACH ROW EXECUTE PROCEDURE sys_functions.table_update_log ();

---------------------------------------------------------
-- DELETE
CREATE OR REPLACE FUNCTION sys_functions.table_delete_log ()
RETURNS TRIGGER AS $$
BEGIN
  PERFORM loc_audit_functions.make_log( ' '||'table' , 'delete' , json_build_object('id', OLD.id )  );
  RETURN NULL ;
END
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE TRIGGER table_after_delete AFTER DELETE ON store.table FOR EACH ROW EXECUTE PROCEDURE sys_functions.table_delete_log ();

Service functions

Designed to implement service and DML operations on target tables.

Template - service function

--INSERT
--RETURN id OF NEW ROW
CREATE OR REPLACE FUNCTION service_functions.table_insert ( new_column store.table.column%TYPE )
RETURNS integer AS $$
DECLARE
  new_id integer ;
BEGIN
  -- Generate new id from the sequence created explicitly for store.table
  new_id = nextval('store.table_id_seq');

  -- Insert into table
  INSERT INTO store.table
  ( 
    id ,
    column
   )
  VALUES
  (
   new_id ,
   new_column
   );

RETURN new_id ;
END
$$ LANGUAGE plpgsql SECURITY DEFINER;

--DELETE
--RETURN ROW NUMBERS DELETED
CREATE OR REPLACE FUNCTION service_functions.table_delete ( current_id integer ) 
RETURNS integer AS $$
DECLARE
  rows_count integer  ;    
BEGIN
  DELETE FROM store.table WHERE id = current_id; 

  GET DIAGNOSTICS rows_count = ROW_COUNT;                                                                           

  RETURN rows_count ;
END
$$ LANGUAGE plpgsql SECURITY DEFINER;
 
-- UPDATE DETAILS
-- RETURN ROW NUMBERS UPDATED
CREATE OR REPLACE FUNCTION service_functions.table_update_column 
(
  current_id integer 
  ,new_column store.table.column%TYPE
) 
RETURNS integer AS $$
DECLARE
  rows_count integer  ; 
BEGIN
  UPDATE  store.table
  SET
    column = new_column
  WHERE id = current_id;

  GET DIAGNOSTICS rows_count = ROW_COUNT;                                                                           

  RETURN rows_count ;
END
$$ LANGUAGE plpgsql SECURITY DEFINER;

Business functions

These are the final business functions called by the client. They always return JSON. The EXCEPTION block is used to intercept and log execution errors.

Template - business function

CREATE OR REPLACE FUNCTION business_functions.business_function_template(
--Input parameters        
 )
RETURNS JSON AS $$
DECLARE
  ------------------------
  --for exception catching
  error_message text ;
  error_json json ;
  result json ;
  ------------------------ 
BEGIN
--LOGGING
  PERFORM loc_audit_functions.make_log
  (
    'business_function_template',
    'STARTED',
    json_build_object
    (
	--IN Parameters
    ) 
   );

  PERFORM business_functions.notice('business_function_template');            

  --START BUSINESS PART
  --END BUSINESS PART

  -- SUCCESSFUL RESULT
  PERFORM business_functions.notice('result');
  PERFORM business_functions.notice(result);

  PERFORM loc_audit_functions.make_log
  (
    'business_function_template',
    'FINISHED', 
    json_build_object( 'result',result )
  );

  RETURN result ;
----------------------------------------------------------------------------------------------------------
-- EXCEPTION CATCHING
EXCEPTION                        
  WHEN OTHERS THEN    
    PERFORM loc_audit_functions.make_log
    (
      'business_function_template',
      'STARTED',
      json_build_object
      (
	--IN Parameters	
      ) , TRUE );

     PERFORM loc_audit_functions.make_log
     (
       'business_function_template',
       ' ERROR',
       json_build_object('SQLSTATE',SQLSTATE ), TRUE 
     );

     PERFORM loc_audit_functions.make_log
     (
       'business_function_template',
       ' ERROR',
       json_build_object('SQLERRM',SQLERRM  ), TRUE 
      );

     GET STACKED DIAGNOSTICS error_message = RETURNED_SQLSTATE ;
     PERFORM loc_audit_functions.make_log
     (
      'business_function_template',
      ' ERROR-RETURNED_SQLSTATE',json_build_object('RETURNED_SQLSTATE',error_message  ), TRUE );

     GET STACKED DIAGNOSTICS error_message = COLUMN_NAME ;
     PERFORM loc_audit_functions.make_log
     (
       'business_function_template',
       ' ERROR-COLUMN_NAME',
       json_build_object('COLUMN_NAME',error_message  ), TRUE );

     GET STACKED DIAGNOSTICS error_message = CONSTRAINT_NAME ;
     PERFORM loc_audit_functions.make_log
     (
      'business_function_template',
      ' ERROR-CONSTRAINT_NAME',
      json_build_object('CONSTRAINT_NAME',error_message  ), TRUE );

     GET STACKED DIAGNOSTICS error_message = PG_DATATYPE_NAME ;
     PERFORM loc_audit_functions.make_log
     (
       'business_function_template',
       ' ERROR-PG_DATATYPE_NAME',
       json_build_object('PG_DATATYPE_NAME',error_message  ), TRUE );

     GET STACKED DIAGNOSTICS error_message = MESSAGE_TEXT ;
     PERFORM loc_audit_functions.make_log
     (
       'business_function_template',
       ' ERROR-MESSAGE_TEXT',json_build_object('MESSAGE_TEXT',error_message  ), TRUE );

     GET STACKED DIAGNOSTICS error_message = SCHEMA_NAME ;
     PERFORM loc_audit_functions.make_log
     (
       'business_function_template',
       ' ERROR-SCHEMA_NAME',json_build_object('SCHEMA_NAME',error_message  ), TRUE );

     GET STACKED DIAGNOSTICS error_message = PG_EXCEPTION_DETAIL ;
     PERFORM loc_audit_functions.make_log
     (
      'business_function_template',
      ' ERROR-PG_EXCEPTION_DETAIL',
      json_build_object('PG_EXCEPTION_DETAIL',error_message  ), TRUE );

     GET STACKED DIAGNOSTICS error_message = PG_EXCEPTION_HINT ;
     PERFORM loc_audit_functions.make_log
     (
       'business_function_template',
       ' ERROR-PG_EXCEPTION_HINT',json_build_object('PG_EXCEPTION_HINT',error_message  ), TRUE );

     GET STACKED DIAGNOSTICS error_message = PG_EXCEPTION_CONTEXT ;
     PERFORM loc_audit_functions.make_log
     (
      'business_function_template',
      ' ERROR-PG_EXCEPTION_CONTEXT',json_build_object('PG_EXCEPTION_CONTEXT',error_message  ), TRUE );                                      

    RAISE WARNING 'ALARM: %' , SQLERRM ;

    SELECT json_build_object
    (
      'isError' , TRUE ,
      'errorMsg' , SQLERRM
     ) INTO error_json ;

  RETURN  error_json ;
END
$$ LANGUAGE plpgsql SECURITY DEFINER;
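From the client's point of view everything comes down to a single call and parsing the returned JSON. Roughly like this (the call and the error shape follow the template above; everything else about the client is an assumption):

-- Client connected under user_role
SELECT business_functions.business_function_template();

-- on success : the JSON built in the business part of the function
-- on error   : {"isError" : true, "errorMsg" : "<SQLERRM text>"}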

Π‘onclusion

I think this is enough to describe the overall picture. If anyone is interested in the details and results, write in the comments and I will gladly add more strokes to the picture.

PS

A simple example of error logging - an invalid value for an input parameter type
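The listing below was obtained with a query against the local audit table, roughly of this kind (the table and column names are the illustrative ones from the sketch above; the real ones are omitted):

SELECT
  date_trunc('second', log_date) ,
  id ,
  usename ,
  log_module ,
  log_module_hash ,
  status ,
  jsonb_pretty(log_params)
FROM loc_audit_functions.log
WHERE log_module = 'addKD'
ORDER BY id ;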

-[ RECORD 1 ]-
date_trunc      | 2020-08-19 13:15:46
id              | 1072
usename         | emp1
log_module      | addKD
log_module_hash | 0b4c1529a89af3ddf6af3821dc790e8a
status          | STARTED
jsonb_pretty    | {
                |     "dko": {
                |         "id": 4,
                |         "type": "Type1",                                                                                                                                                                                            
                |         "title": "CREATED BY addKD",
                |         "Weight": 10,
                |         "Tr": "300",
                |         "reduction": 10,
                |         "isTrud": "TRUE",
                |         "description": "decription",
                |         "lowerTr": "100",
                |         "measurement": "measurement1",
                |         "methodology": "m1",                                                                                                                                                                                           
                |         "passportUrl": "files",
                |         "upperTr": "200",
                |         "weightingFactor": 100.123,
                |         "actualTrValue": null,
                |         "upperTrCalcNumber": "120"
                |     },
                |     "CardId": 3
                | }
-[ RECORD 2 ]-
date_trunc      | 2020-08-19 13:15:46
id              | 1073
usename         | emp1
log_module      | addKD
log_module_hash | 0b4c1529a89af3ddf6af3821dc790e8a
status          |  ERROR
jsonb_pretty    | {
                |     "SQLSTATE": "22P02"
                | }
-[ RECORD 3 ]-
date_trunc      | 2020-08-19 13:15:46
id              | 1074
usename         | emp1
log_module      | addKD
log_module_hash | 0b4c1529a89af3ddf6af3821dc790e8a
status          |  ERROR
jsonb_pretty    | {
                |     "SQLERRM": "invalid input syntax for type numeric: "null""
                | }
-[ RECORD 4 ]-
date_trunc      | 2020-08-19 13:15:46
id              | 1075
usename         | emp1
log_module      | addKD
log_module_hash | 0b4c1529a89af3ddf6af3821dc790e8a
status          |  ERROR-RETURNED_SQLSTATE
jsonb_pretty    | {
                |     "RETURNED_SQLSTATE": "22P02"
                | }
-[ RECORD 5 ]-
date_trunc      | 2020-08-19 13:15:46
id              | 1076
usename         | emp1
log_module      | addKD
log_module_hash | 0b4c1529a89af3ddf6af3821dc790e8a
status          |  ERROR-COLUMN_NAME
jsonb_pretty    | {
                |     "COLUMN_NAME": ""
                | }

-[ RECORD 6 ]-
date_trunc      | 2020-08-19 13:15:46
id              | 1077
usename         | emp1
log_module      | addKD
log_module_hash | 0b4c1529a89af3ddf6af3821dc790e8a
status          |  ERROR-CONSTRAINT_NAME
jsonb_pretty    | {
                |     "CONSTRAINT_NAME": ""
                | }
-[ RECORD 7 ]-
date_trunc      | 2020-08-19 13:15:46
id              | 1078
usename         | emp1
log_module      | addKD
log_module_hash | 0b4c1529a89af3ddf6af3821dc790e8a
status          |  ERROR-PG_DATATYPE_NAME
jsonb_pretty    | {
                |     "PG_DATATYPE_NAME": ""
                | }
-[ RECORD 8 ]-
date_trunc      | 2020-08-19 13:15:46
id              | 1079
usename         | emp1
log_module      | addKD
log_module_hash | 0b4c1529a89af3ddf6af3821dc790e8a
status          |  ERROR-MESSAGE_TEXT
jsonb_pretty    | {
                |     "MESSAGE_TEXT": "invalid input syntax for type numeric: "null""
                | }
-[ RECORD 9 ]-
date_trunc      | 2020-08-19 13:15:46
id              | 1080
usename         | emp1
log_module      | addKD
log_module_hash | 0b4c1529a89af3ddf6af3821dc790e8a
status          |  ERROR-SCHEMA_NAME
jsonb_pretty    | {
                |     "SCHEMA_NAME": ""
                | }
-[ RECORD 10 ]-
date_trunc      | 2020-08-19 13:15:46
id              | 1081
usename         | emp1
log_module      | addKD
log_module_hash | 0b4c1529a89af3ddf6af3821dc790e8a
status          |  ERROR-PG_EXCEPTION_DETAIL
jsonb_pretty    | {
                |     "PG_EXCEPTION_DETAIL": ""
                | }
-[ RECORD 11 ]-
date_trunc      | 2020-08-19 13:15:46
id              | 1082
usename         | emp1
log_module      | addKD
log_module_hash | 0b4c1529a89af3ddf6af3821dc790e8a
status          |  ERROR-PG_EXCEPTION_HINT
jsonb_pretty    | {
                |     "PG_EXCEPTION_HINT": ""
                | }
-[ RECORD 12 ]-
date_trunc      | 2020-08-19 13:15:46
id              | 1083
usename         | emp1
log_module      | addKD
log_module_hash | 0b4c1529a89af3ddf6af3821dc790e8a
status          |  ERROR-PG_EXCEPTION_CONTEXT
jsonb_pretty    | {

Source: habr.com
