Blockchain development for industry in Go. Part 1

For the past four months, I have been working on a project called "Development of data protection and management tools in government and industrial sectors based on the blockchain."
Now I would like to tell you how I started this project, and then describe the program code in detail.


This is the first article in the series. Here I describe the server and the protocol; with this description, readers can even write their own versions of these blockchain elements.

And here is the second part - about the data structures of the blockchain and transactions, as well as about the package that implements the interaction with the database.

Last year, at the Digital Breakthrough hackathon, the idea came up to build a useful system for industry and the digital economy using distributed ledger technology, and the Innovation Promotion Foundation issued a grant for the development (I should write a separate article about that grant for those who are just starting out with startups). Now, let's take it in order.

The development is done in Go, and the blocks are stored in LevelDB.
The main parts are the protocol and the server, which serves both TCP and WebSocket connections: the first for blockchain synchronization, the second for connecting clients and sending transactions and commands (from JavaScript, for example).

As mentioned, this blockchain is needed primarily to automate and protect the exchange of products between suppliers and customers (or parties acting as both) who are in no hurry to trust each other. The task, however, is not just to build a "checkbook" with a built-in calculator, but a system that automates most of the routine tasks arising over the product life cycle. The bytecode responsible for this, as is customary with blockchains, is stored in the inputs and outputs of transactions (the transactions themselves sit in blocks, and blocks are encoded in the GOB format before being written to LevelDB). First, let's talk about the protocol and the server (also known as a node).

The protocol is not complicated: its whole point is to switch to a mode of loading some data, usually a block or a transaction, in response to a special command line. It is also needed for exchanging inventory, so that a node knows whom it is connected to and how those nodes are doing (the nodes connected for a synchronization session are also called "neighbors" because their IPs are known and their state data is kept in memory).

Folders (or directories, as they are called in Linux) correspond to packages for Go programmers, so every Go source file in such a directory must begin with the clause package name_of_the_directory_the_file_lives_in; otherwise the compiler will not accept the package. This is no secret to those who know the language. Here are the packages (a small sketch of the file-header convention follows the list):

  • Networking (server, client, protocol)
  • Structures of stored and transmitted data (block, transaction)
  • Database (blockchain)
  • Consensus
  • Stack virtual machine (xvm)
  • Auxiliary (crypto, types). That is all for now.
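
To make the directory-to-package rule concrete, here is a minimal sketch of a file header in one of these packages; the module path "node" is an assumption used only for illustration, not the actual repository name:

// server/tcp_server.go: every .go file in the server directory starts with the same package clause.
package server

import (
	// other project packages are imported as module path + directory;
	// blank imports are used here only to show the path convention
	_ "node/protocol"
	_ "node/types"
)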

Here is the link to GitHub.

This is an educational version: it lacks inter-process communication and several experimental components, but its structure matches the one under development. If you have something to suggest in the comments, I will gladly take it into account in further development. And now, on to the explanations of the server and the protocol.

Let's look at the server first.

The server routine acts as a data server running over TCP and uses data structures from the protocol package.

The routine uses the following packages: server, protocol, types. In the server package itself, tcp_server.go contains the Serve data structure:

type Serve struct {
	Port    string          // network port the server listens on
	BufSize int             // size of the read buffer for incoming data
	ST      *types.Settings // pointer to the node settings
}

The server routine can take the following parameters:

  • Network port through which data will be exchanged
  • JSON server configuration file
  • A flag for launching in debug mode (private blockchain)

The flow is as follows (see the sketch after this list):

  • The configuration is read from the JSON file
  • The debug mode flag is checked: if it is set, the network synchronization scheduler is not launched and the blockchain is not loaded
  • The settings data structure is initialized and the server is started
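
A minimal sketch of that startup flow, assuming it lives in server.go of the main package (flag names, default values, and the "node" module path are illustrative; the scheduler and blockchain calls are only marked as comments):

// server.go (main package): illustrative startup flow, not the actual implementation
package main

import (
	"flag"

	"node/server" // assumed module path
	"node/types"
)

func main() {
	port := flag.String("port", "9999", "network port for data exchange")
	config := flag.String("config", "config.json", "JSON server configuration file")
	debug := flag.Bool("debug", false, "launch in debug mode (private blockchain)")
	flag.Parse()

	st := types.ParseConfig(*config) // read the configuration from the JSON file

	if !*debug {
		// in normal mode the network synchronization scheduler would be started
		// and the blockchain would be loaded here (covered in later parts)
	}

	srv := &server.Serve{
		Port:    *port,
		BufSize: st.BufSize, // buffer size taken from the settings
		ST:      st,
	}
	srv.Run() // start listening for incoming TCP connections
}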

Server and connection handling

  • Starts the TCP server and network communication according to the protocol.
  • It has the Serve data structure, consisting of a port number, a buffer size, and a pointer to the types.Settings structure.
  • The Run method starts network communication: it listens for incoming connections on the given port and, when a new connection arrives, hands its processing to the private handle method in a new goroutine.
  • In handle, the data from the connection is read into a buffer, converted to a string representation, and passed to protocol.Choice.
  • protocol.Choice returns a result or an error; the result is then passed to protocol.Interprete, which returns intrpr, an object of type InterpreteData, or raises an error while processing the result of the selection.
  • Then a switch is executed on intrpr.Commands[0], which checks for one of result, inv, or error; there is also a default section.
  • In the result section there is a switch on intrpr.Commands[1], which checks the values bufferlength and version (the corresponding function is called in each case; see the sketch below).
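
Based on this flow, Run and handle might look roughly like the sketch below (error handling is trimmed; the exact signatures, the "node" module path, and the version value passed to GetVersion are assumptions):

package server

import (
	"log"
	"net"

	"node/protocol" // assumed module path
)

// Run listens on the configured port and hands every accepted connection to handle.
func (s *Serve) Run() {
	ln, err := net.Listen("tcp", ":"+s.Port)
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go s.handle(conn) // each connection is processed in its own goroutine
	}
}

// handle reads a command from the connection and dispatches it according to the protocol.
func (s *Serve) handle(conn net.Conn) {
	defer conn.Close()

	buf := make([]byte, s.BufSize)
	n, err := conn.Read(buf) // read the incoming data into the buffer
	if err != nil {
		return
	}

	result, err := protocol.Choice(string(buf[:n])) // primary processing
	if err != nil {
		return
	}

	intrpr, err := protocol.Interprete(result) // secondary processing into InterpreteData
	if err != nil {
		return
	}

	switch intrpr.Commands[0] {
	case "result":
		if len(intrpr.Commands) < 2 {
			return
		}
		switch intrpr.Commands[1] {
		case "bufferlength":
			BufferLength(conn, intrpr)
		case "version":
			GetVersion(conn, intrpr.Body) // passing the body as the version is an assumption
		}
	case "inv":
		// inventory exchange with a neighbor node (not covered in this part)
	case "error":
		// error reporting (not covered in this part)
	default:
	}
}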

The functions GetVersion and BufferLength live in the file srvlib.go of the server package.

GetVersion(conn net.Conn, version string)

simply prints the version to the console and sends it to the client:

conn.Write([]byte("result:" + version))

Function

BufferLength(conn net.Conn, intrpr *protocol.InterpreteData)

loads a block, transaction, or other specific data as follows (a sketch of the whole function follows the list):

  • Prints to the console the type of the data to be received:
    fmt.Println("DataType:", intrpr.Commands[2])
  • Reads the value of intrpr.Body into the numeric variable buf_len
  • Creates a buffer newbuf of the specified size:
    make([]byte, buf_len)
  • Sends an ok response:
    conn.Write([]byte("result:ok"))
  • Performs a full fill of the buffer from the read stream:
    n, err := io.ReadFull(conn, newbuf)

  • Prints the contents of the buffer to the console
    fmt.Println(string(newbuf))

    and the number of bytes read

    fmt.Println("Bytes length:", n)
  • Sends an ok response:
    conn.Write([]byte("result:ok"))
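
Putting these steps together, srvlib.go could look roughly like the following sketch (not the exact source; error handling is minimal and the "node" module path is an assumption):

package server

import (
	"fmt"
	"io"
	"net"
	"strconv"

	"node/protocol" // assumed module path
)

// GetVersion prints the version to the console and sends it back to the client.
func GetVersion(conn net.Conn, version string) {
	fmt.Println("Version:", version)
	conn.Write([]byte("result:" + version))
}

// BufferLength receives a block, transaction, or other payload whose size was announced by the client.
func BufferLength(conn net.Conn, intrpr *protocol.InterpreteData) {
	fmt.Println("DataType:", intrpr.Commands[2]) // type of the data to be received

	buf_len, err := strconv.Atoi(intrpr.Body) // the body carries the payload size
	if err != nil {
		return
	}

	newbuf := make([]byte, buf_len) // buffer of the announced size
	conn.Write([]byte("result:ok")) // tell the client it may start sending

	n, err := io.ReadFull(conn, newbuf) // fill the whole buffer from the read stream
	if err != nil {
		return
	}

	fmt.Println(string(newbuf))
	fmt.Println("Bytes length:", n)
	conn.Write([]byte("result:ok"))
}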

The methods of the server package are arranged so that the data they receive is processed by functions from the protocol package.

Protocol

The protocol is the means of representing data in the network exchange.

Choice(str string) (string, error) performs primary processing of the data received by the server: it takes a string representation of the data as input and returns a string prepared for Interprete (a sketch of the function follows the list):

  • The input string is split into head and body with ReqParseN2(str)
  • head is split into elements and placed in the commands slice with ReqParseHead(head)
  • In switch(commands[0]) the received command is selected (cmd, key, or address; otherwise the default section is triggered)
  • For cmd, two commands are checked in switch(commands[1]): length and get version.
  • length checks the data type in commands[2] and saves it in datatype
  • Checks that body actually contains a string value, i.e. that the following does not hold:
    len(body) < 1
  • Returns a response string:
    "result:bufferlength:" + datatype + "/" + body
  • get version returns a string
    return "result:version/auto"

Interprete

It contains the InterpreteData structure and performs secondary processing of the strings returned by Choice, forming an InterpreteData object.

type InterpreteData struct {
	Head       string   // head part of the request
	Commands   []string // head split into its elements
	Body       string   // body part of the request
	IsErr      bool     // whether an error occurred
	ErrCode    int      // error code
	ErrMessage string   // error message
}

Function

Interprete(str string) (*InterpreteData, error)

takes the result string and creates and returns a pointer to an InterpreteData object.

The flow is as follows:

  • Just like Choice, it extracts head and body with ReqParseN2(str)
  • head is split into elements with ReqParseHead(head)
  • An InterpreteData object is initialized and a pointer to it is returned:

res := &InterpreteData{
	Head: head,
	Commands: commands,
	Body: body,
}
return res, nil

This object is used in server.go of the main package.
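
ReqParseN2 and ReqParseHead themselves are not shown in the article; under the same assumption about the separators ('/' between head and body, ':' between head elements), and with assumed signatures, a minimal version could be:

package protocol

import "strings"

// ReqParseN2 splits a request of the form "head/body" into its two parts.
func ReqParseN2(str string) (string, string) {
	parts := strings.SplitN(str, "/", 2)
	if len(parts) < 2 {
		return parts[0], ""
	}
	return parts[0], parts[1]
}

// ReqParseHead splits a head such as "cmd:length:0" into its elements.
func ReqParseHead(head string) []string {
	return strings.Split(head, ":")
}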

Client

The client package contains the functions TCPConnect and TCPResponseData.

Function

TCPConnect(s *types.Settings, data []byte, payload []byte)

works as follows:

  • A connection is made to the address specified in the passed settings object:
    net.Dial("tcp", s.Host + ":" + s.Port)
  • The data passed in the data parameter is transmitted:
    conn.Write(data)
  • The answer is read
    resp, n, _ := TCPResponseData(conn, s.BufSize)

    and printed to the console

    fmt.Println(string(resp[:n]))
  • If a payload is passed, it is transmitted as well:
    conn.Write(payload)

    and the server's response is read again and printed to the console

Function

 TCPResponseData(conn net.Conn, bufsiz int) ([]byte, int, error)

creates a buffer of the specified size, reads the server's response into it, and returns this buffer and the number of bytes read, as well as an error object.
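
A sketch of the client package based on this description (error handling is abbreviated and the "node" module path is an assumption):

package client

import (
	"fmt"
	"net"

	"node/types" // assumed module path
)

// TCPConnect dials the node from the settings, sends data, prints the reply,
// and optionally streams a payload afterwards.
func TCPConnect(s *types.Settings, data []byte, payload []byte) {
	conn, err := net.Dial("tcp", s.Host+":"+s.Port)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer conn.Close()

	conn.Write(data) // send the command or data string

	resp, n, _ := TCPResponseData(conn, s.BufSize)
	fmt.Println(string(resp[:n]))

	if payload != nil {
		conn.Write(payload) // send the payload after the server's "result:ok"
		resp, n, _ = TCPResponseData(conn, s.BufSize)
		fmt.Println(string(resp[:n]))
	}
}

// TCPResponseData reads the server's answer into a buffer of the given size
// and returns the buffer, the number of bytes read, and an error.
func TCPResponseData(conn net.Conn, bufsiz int) ([]byte, int, error) {
	buf := make([]byte, bufsiz)
	n, err := conn.Read(buf)
	return buf, n, err
}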

The client subroutine

It is used for sending commands to node servers, as well as for getting brief statistics and for testing.

It can take the following parameters: a configuration file in JSON format, data to send to the server as a string, the path to a file whose contents should be sent as the payload, a node scheduler emulation flag, and the type of the data to send as a numeric value (a sketch of the resulting main function follows the list below).

  • The configuration is read:
    st := types.ParseConfig(*config)
  • If the emu flag is passed, the scheduler is started
  • If the f flag is given with a path to a file, its data is loaded into fdb and the contents are sent to the server:
    client.TCPConnect(st, []byte(CMD_BUFFER_LENGTH + ":" + strconv.Itoa(*t) + "/" + strconv.Itoa(fdblen)), fdb)
  • If no file is specified, the data from the -d flag is simply sent:
    client.TCPConnect(st, []byte(*data), nil)
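
A sketch of the client's main function that ties these flags together (the default values, the value of the CMD_BUFFER_LENGTH constant, and the "node" module path are assumptions based on the description above):

package main

import (
	"flag"
	"fmt"
	"os"
	"strconv"

	"node/client" // assumed module path
	"node/types"
)

// CMD_BUFFER_LENGTH opens a payload-upload session on the node;
// the exact string value is an assumption here.
const CMD_BUFFER_LENGTH = "cmd:length"

func main() {
	config := flag.String("config", "config.json", "JSON configuration file")
	data := flag.String("d", "", "data to send to the server as a string")
	file := flag.String("f", "", "path to a file to send as the payload")
	emu := flag.Bool("emu", false, "emulate the node scheduler")
	t := flag.Int("t", 0, "type of the data to send")
	flag.Parse()

	st := types.ParseConfig(*config) // read the settings from the JSON file

	if *emu {
		// scheduler emulation would start here (covered in a later part)
	}

	if *file != "" {
		fdb, err := os.ReadFile(*file) // load the payload file
		if err != nil {
			fmt.Println(err)
			return
		}
		fdblen := len(fdb)
		client.TCPConnect(st, []byte(CMD_BUFFER_LENGTH+":"+strconv.Itoa(*t)+"/"+strconv.Itoa(fdblen)), fdb)
		return
	}

	// no file given: simply send the -d string
	client.TCPConnect(st, []byte(*data), nil)
}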

All of this is a simplified representation that shows the structure of the protocol; the necessary functionality is added to it during development.

In the second part I will talk about the data structures for blocks and transactions, in the third about the WebSocket server for connecting from JavaScript, and in the fourth about the synchronization scheduler; after that come the stack machine that processes the bytecode from inputs and outputs, cryptography, and pools for outputs.

Source: habr.com
