Huffman compression algorithm

Ahead of the start of the "Algorithms for Developers" course, we have prepared a translation of another useful piece of material for you.

Huffman coding is a data compression algorithm that forms the basic idea behind file compression. In this article we will talk about fixed- and variable-length encoding, uniquely decodable codes, the prefix rule, and building a Huffman tree.

We know that each character is stored as a sequence of 0s and 1s and takes 8 bits. This is called fixed-length encoding, because each character uses the same fixed number of bits to be stored.
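For illustration only, here is a minimal C++ sketch (the helper name printFixedLength is ours and is not part of the implementation shown later) that prints the fixed 8-bit representation of every character in a string:

#include <bitset>
#include <iostream>
#include <string>
using namespace std;

// Print each character of the text as a fixed-length 8-bit code.
void printFixedLength(const string& text)
{
	for (char ch: text) {
		cout << ch << " -> " << bitset<8>(static_cast<unsigned char>(ch)) << '\n';
	}
	// with fixed-length encoding every character costs exactly 8 bits
	cout << "total: " << text.size() * 8 << " bits" << '\n';
}

int main()
{
	printFixedLength("abc");	// 3 characters * 8 bits = 24 bits
	return 0;
}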

Suppose we are given a text. How can we reduce the amount of space required to store a single character?

The main idea is variable-length encoding. We can use the fact that some characters occur in the text more frequently than others (see here) to design an algorithm that represents the same sequence of characters with fewer bits. With variable-length encoding, we assign characters a variable number of bits, depending on how often they appear in a given text. As a result, some characters may take only 1 bit, while others may take 2, 3, or more bits. The problem with variable-length encoding lies only in the subsequent decoding of the sequence.

How, given a sequence of bits, do we decode it unambiguously?

Consider the string "aabacdab". It has 8 characters and, with fixed-length encoding, requires 64 bits to store. Note that the frequencies of the characters "a", "b", "c", and "d" are 4, 2, 1, and 1, respectively. Let's try to represent "aabacdab" with fewer bits, using the fact that "a" occurs more frequently than "b", and "b" occurs more frequently than "c" and "d". Let's start by encoding "a" with a single bit equal to 0; "b" will get the two-bit code 11; and "c" and "d" will be encoded with three bits each, 100 and 011.

As a result, we get:

a    0
b    11
c    100
d    011

So, using the codes above, the string "aabacdab" will be encoded as 00110100011011 (0|0|11|0|100|011|0|11), which takes only 14 bits instead of 64. But the real problem lies in decoding. If we try to decode the string 00110100011011, the result is ambiguous, because it can be represented as:

0|011|0|100|011|0|11    adacdab
0|0|11|0|100|0|11|011   aabacabd
0|011|0|100|0|11|0|11   adacabab 

...and so on.

To avoid this ambiguity, we must make sure that our encoding satisfies a notion called the prefix rule, which in turn implies that the codes can be decoded in only one way. The prefix rule ensures that no code is a prefix of another code. By a code we mean the bits that represent a particular character. In the example above, 0 is a prefix of 011, which violates the prefix rule. So, if our codes satisfy the prefix rule, we can decode uniquely (and vice versa).
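As a quick illustrative sketch (the function isPrefixFree is our own helper, not part of the implementation later in the article), the prefix rule can be checked by comparing every pair of codes:

#include <iostream>
#include <string>
#include <vector>
using namespace std;

// Returns true if no code in the set is a prefix of another code.
bool isPrefixFree(const vector<string>& codes)
{
	for (size_t i = 0; i < codes.size(); i++) {
		for (size_t j = 0; j < codes.size(); j++) {
			// does codes[j] start with codes[i]?
			if (i != j && codes[j].compare(0, codes[i].size(), codes[i]) == 0) {
				return false;
			}
		}
	}
	return true;
}

int main()
{
	cout << isPrefixFree({"0", "11", "100", "011"}) << '\n';	// 0: "0" is a prefix of "011"
	cout << isPrefixFree({"0", "10", "110", "111"}) << '\n';	// 1: prefix-free
	return 0;
}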

Let's revisit the example above. This time we will assign the characters "a", "b", "c", and "d" codes that satisfy the prefix rule.

a    0
b    10
c    110
d    111

With this encoding, the string "aabacdab" is encoded as 00100110111010 (0|0|10|0|110|111|0|10). And 00100110111010 can now be decoded unambiguously back to the original string "aabacdab".
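To see why the prefix rule makes decoding unambiguous, here is a minimal sketch (the table and the helper name decodeWithTable are ours, for illustration only): because no code is a prefix of another, we can read bits from left to right and emit a character as soon as the accumulated bits match a code.

#include <iostream>
#include <string>
#include <unordered_map>
using namespace std;

// Decode a bit string using a prefix-free code table.
// The first match is always the right one, since no code is a prefix of another.
string decodeWithTable(const string& bits, const unordered_map<string, char>& table)
{
	string result, current;
	for (char bit: bits) {
		current += bit;
		auto it = table.find(current);
		if (it != table.end()) {	// a complete code has been read
			result += it->second;
			current.clear();
		}
	}
	return result;
}

int main()
{
	unordered_map<string, char> table = {
		{"0", 'a'}, {"10", 'b'}, {"110", 'c'}, {"111", 'd'}
	};
	cout << decodeWithTable("00100110111010", table) << '\n';	// prints "aabacdab"
	return 0;
}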

Huffman coding

Now that we have covered variable-length encoding and the prefix rule, let's talk about Huffman coding.

The method is based on creating a binary tree. In it, a node can be either a leaf or an internal node. Initially, all nodes are considered leaves (terminal nodes), each representing a character itself and its weight (that is, its frequency of occurrence). Internal nodes store a weight and refer to two descendant nodes. By common convention, the bit "0" means following the left child, and "1" the right child. A full tree has N leaves and N-1 internal nodes. It is recommended that unused symbols be discarded when building a Huffman tree, in order to obtain codes of optimal length.

We will use a priority queue to build the Huffman tree, where the node with the lowest frequency has the highest priority. The construction steps are described below:

  1. Create a leaf node for each character and add them to the priority queue.
  2. While there is more than one node in the queue, do the following:
    • Remove the two nodes with the highest priority (lowest frequency) from the queue;
    • Create a new internal node, with these two nodes as children and with a frequency equal to the sum of the frequencies of the two nodes;
    • Add the new node to the priority queue.
  3. The remaining node will be the root, and this completes the construction of the tree.

Imagine we have some text that consists only of the characters "a", "b", "c", "d", and "e", and their frequencies are 15, 7, 6, 6, and 5, respectively. Below are illustrations that reflect the steps of the algorithm.

[Illustrations: step-by-step construction of the Huffman tree]
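For reference, one possible sequence of merges for these frequencies is (the order among nodes with equal frequency may differ between implementations):

  5 (e) + 6 (d)   -> 11
  6 (c) + 7 (b)   -> 13
  11 + 13         -> 24
  15 (a) + 24     -> 39 (root)

As a result, "a" gets a 1-bit code while "b", "c", "d", and "e" get 3-bit codes (for example a = 0, e = 100, d = 101, c = 110, b = 111, depending on the left/right orientation chosen at each step), so the whole text takes 15*1 + (7+6+6+5)*3 = 87 bits instead of 39*8 = 312 bits with fixed-length encoding.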

The path from the root to any leaf node gives the optimal prefix code (also known as the Huffman code) corresponding to the character associated with that leaf.

[Figure: the resulting Huffman tree]

Below you will find an implementation of the Huffman compression algorithm in C++ and Java:

#include <iostream>
#include <string>
#include <queue>
#include <unordered_map>
using namespace std;

// A Tree node
struct Node
{
	char ch;
	int freq;
	Node *left, *right;
};

// Function to allocate a new tree node
Node* getNode(char ch, int freq, Node* left, Node* right)
{
	Node* node = new Node();

	node->ch = ch;
	node->freq = freq;
	node->left = left;
	node->right = right;

	return node;
}

// Comparison object to be used to order the heap
struct comp
{
	bool operator()(Node* l, Node* r)
	{
		// highest priority item has lowest frequency
		return l->freq > r->freq;
	}
};

// traverse the Huffman Tree and store Huffman Codes
// in a map.
void encode(Node* root, string str,
			unordered_map<char, string> &huffmanCode)
{
	if (root == nullptr)
		return;

	// found a leaf node
	if (!root->left && !root->right) {
		huffmanCode[root->ch] = str;
	}

	encode(root->left, str + "0", huffmanCode);
	encode(root->right, str + "1", huffmanCode);
}

// traverse the Huffman Tree and decode the encoded string
void decode(Node* root, int &index, string str)
{
	if (root == nullptr) {
		return;
	}

	// found a leaf node
	if (!root->left && !root->right)
	{
		cout << root->ch;
		return;
	}

	index++;

	if (str[index] == '0')
		decode(root->left, index, str);
	else
		decode(root->right, index, str);
}

// Builds Huffman Tree and decode given input text
void buildHuffmanTree(string text)
{
	// count frequency of appearance of each character
	// and store it in a map
	unordered_map<char, int> freq;
	for (char ch: text) {
		freq[ch]++;
	}

	// Create a priority queue to store live nodes of
	// Huffman tree;
	priority_queue<Node*, vector<Node*>, comp> pq;

	// Create a leaf node for each character and add it
	// to the priority queue.
	for (auto pair: freq) {
		pq.push(getNode(pair.first, pair.second, nullptr, nullptr));
	}

	// do till there is more than one node in the queue
	while (pq.size() != 1)
	{
		// Remove the two nodes of highest priority
		// (lowest frequency) from the queue
		Node *left = pq.top(); pq.pop();
		Node *right = pq.top();	pq.pop();

		// Create a new internal node with these two nodes
		// as children and with frequency equal to the sum
		// of the two nodes' frequencies. Add the new node
		// to the priority queue.
		int sum = left->freq + right->freq;
		pq.push(getNode('\0', sum, left, right));
	}

	// root stores pointer to root of Huffman Tree
	Node* root = pq.top();

	// traverse the Huffman Tree and store Huffman Codes
	// in a map. Also prints them
	unordered_map<char, string> huffmanCode;
	encode(root, "", huffmanCode);

	cout << "Huffman Codes are :n" << 'n';
	for (auto pair: huffmanCode) {
		cout << pair.first << " " << pair.second << '\n';
	}

	cout << "nOriginal string was :n" << text << 'n';

	// print encoded string
	string str = "";
	for (char ch: text) {
		str += huffmanCode[ch];
	}

	cout << "nEncoded string is :n" << str << 'n';

	// traverse the Huffman Tree again and this time
	// decode the encoded string
	int index = -1;
	cout << "nDecoded string is: n";
	while (index < (int)str.size() - 1) {
		decode(root, index, str);
	}
}

// Huffman coding algorithm
int main()
{
	string text = "Huffman coding is a data compression algorithm.";

	buildHuffmanTree(text);

	return 0;
}

import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// A Tree node
class Node
{
	char ch;
	int freq;
	Node left = null, right = null;

	Node(char ch, int freq)
	{
		this.ch = ch;
		this.freq = freq;
	}

	public Node(char ch, int freq, Node left, Node right) {
		this.ch = ch;
		this.freq = freq;
		this.left = left;
		this.right = right;
	}
};

class Huffman
{
	// traverse the Huffman Tree and store Huffman Codes
	// in a map.
	public static void encode(Node root, String str,
							  Map<Character, String> huffmanCode)
	{
		if (root == null)
			return;

		// found a leaf node
		if (root.left == null && root.right == null) {
			huffmanCode.put(root.ch, str);
		}


		encode(root.left, str + "0", huffmanCode);
		encode(root.right, str + "1", huffmanCode);
	}

	// traverse the Huffman Tree and decode the encoded string
	public static int decode(Node root, int index, StringBuilder sb)
	{
		if (root == null)
			return index;

		// found a leaf node
		if (root.left == null && root.right == null)
		{
			System.out.print(root.ch);
			return index;
		}

		index++;

		if (sb.charAt(index) == '0')
			index = decode(root.left, index, sb);
		else
			index = decode(root.right, index, sb);

		return index;
	}

	// Builds Huffman Tree and huffmanCode and decode given input text
	public static void buildHuffmanTree(String text)
	{
		// count frequency of appearance of each character
		// and store it in a map
		Map<Character, Integer> freq = new HashMap<>();
		for (int i = 0 ; i < text.length(); i++) {
			if (!freq.containsKey(text.charAt(i))) {
				freq.put(text.charAt(i), 0);
			}
			freq.put(text.charAt(i), freq.get(text.charAt(i)) + 1);
		}

		// Create a priority queue to store live nodes of Huffman tree
		// Notice that highest priority item has lowest frequency
		PriorityQueue<Node> pq = new PriorityQueue<>(
										(l, r) -> l.freq - r.freq);

		// Create a leaf node for each character and add it
		// to the priority queue.
		for (Map.Entry<Character, Integer> entry : freq.entrySet()) {
			pq.add(new Node(entry.getKey(), entry.getValue()));
		}

		// do till there is more than one node in the queue
		while (pq.size() != 1)
		{
			// Remove the two nodes of highest priority
			// (lowest frequency) from the queue
			Node left = pq.poll();
			Node right = pq.poll();

			// Create a new internal node with these two nodes as children 
			// and with frequency equal to the sum of the two nodes
			// frequencies. Add the new node to the priority queue.
			int sum = left.freq + right.freq;
			pq.add(new Node('\0', sum, left, right));
		}

		// root stores pointer to root of Huffman Tree
		Node root = pq.peek();

		// traverse the Huffman tree and store the Huffman codes in a map
		Map<Character, String> huffmanCode = new HashMap<>();
		encode(root, "", huffmanCode);

		// print the Huffman codes
		System.out.println("Huffman Codes are :n");
		for (Map.Entry<Character, String> entry : huffmanCode.entrySet()) {
			System.out.println(entry.getKey() + " " + entry.getValue());
		}

		System.out.println("nOriginal string was :n" + text);

		// print encoded string
		StringBuilder sb = new StringBuilder();
		for (int i = 0 ; i < text.length(); i++) {
			sb.append(huffmanCode.get(text.charAt(i)));
		}

		System.out.println("nEncoded string is :n" + sb);

		// traverse the Huffman Tree again and this time
		// decode the encoded string
		int index = -1;
		System.out.println("nDecoded string is: n");
		while (index < sb.length() - 1) {
			index = decode(root, index, sb);
		}
	}

	public static void main(String[] args)
	{
		String text = "Huffman coding is a data compression algorithm.";

		buildHuffmanTree(text);
	}
}

Note: the memory used by the input string is 47 * 8 = 376 bits, while the encoded string takes only 194 bits, i.e. the data is compressed by about 48% (1 - 194/376 ≈ 0.48). In the C++ program above, we use the string class to store the encoded string to make the program easier to read.

Since an efficient priority queue data structure requires O(log(N)) time per insertion, and a full binary tree with N leaves has 2N-1 nodes (and the Huffman tree is a full binary tree), the algorithm runs in O(N log(N)) time, where N is the number of characters.

Sources:

en.wikipedia.org/wiki/Huffman_coding
en.wikipedia.org/wiki/Variable-length_code
www.youtube.com/watch?v=5wRPin4oxCo

Learn more about the course.

Source: www.habr.com