FPGA Implementation of High Throughput Lossless Canonical Huffman Machine Decoder

Abstract

As the volume of data and the number of memory operations in modern digital networks grow, data transmission and reception become more complex, leading to greater data loss and lower throughput. The work proposed in this study therefore applies the Canonical Huffman compression technique to deliver lossless data compression with a minimal memory architecture. The Huffman machine presents a memory-efficient, lossless design that supports multi-bit data compression [1]. Using variable-length codes and the Canonical Huffman encoding method, the methodology compresses a 640-bit input into a 90-bit output, and the decompressor reconstructs the original 640 bits from the 90-bit compressed stream using Canonical Huffman decoding. Finally, the design is described in Verilog HDL and synthesized on a Virtex FPGA, with results reported for area, delay, and power.
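
For readers unfamiliar with the technique, a canonical Huffman code is fully determined by its code lengths: the decoder can rebuild every codeword from a small length table instead of storing the whole Huffman tree, which is what enables the low-memory decoder architecture described above. The Python sketch below is a generic, minimal illustration of this code-assignment and decoding step; the function names and the example length table are hypothetical and are not taken from the paper, whose actual implementation is in Verilog HDL.

    def canonical_codes(lengths):
        """Assign canonical Huffman codewords from code lengths alone."""
        # Canonical ordering: shorter codes first, ties broken by symbol value.
        order = sorted(lengths, key=lambda s: (lengths[s], s))
        codes, code, prev_len = {}, 0, 0
        for sym in order:
            code <<= lengths[sym] - prev_len      # append zeros when length grows
            codes[sym] = format(code, f"0{lengths[sym]}b")
            code += 1                             # next codeword at this length
            prev_len = lengths[sym]
        return codes

    def encode(data, codes):
        return "".join(codes[s] for s in data)

    def decode(bits, codes):
        # The code is prefix-free, so the first table match is the symbol.
        rev = {v: k for k, v in codes.items()}
        out, cur = [], ""
        for b in bits:
            cur += b
            if cur in rev:
                out.append(rev[cur])
                cur = ""
        return "".join(out)

    if __name__ == "__main__":
        lengths = {"a": 1, "b": 2, "c": 3, "d": 3}    # hypothetical length table
        codes = canonical_codes(lengths)              # {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
        msg = "abacad"
        assert decode(encode(msg, codes), codes) == msg

Because codewords are assigned in order of increasing length, the resulting code is prefix-free, and a hardware decoder only needs the per-length base codes and offsets rather than a full lookup tree.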

Authors and Affiliations

P. Uday Kumar, K. Vineela, J. Venkatavamsi, N. Rajesh, R. V. Lokesh Kumar, and P. Hyndavi

  • EP ID EP745068
  • DOI 10.55524/ijircst.2023.11.4.14

How To Cite

P. Uday Kumar, K. Vineela, J. Venkatavamsi, N. Rajesh, R. V. Lokesh Kumar, and P. Hyndavi (2023). FPGA Implementation of High Throughput Lossless Canonical Huffman Machine Decoder. International Journal of Innovative Research in Computer Science and Technology, 11(4), -. https://www.europub.co.uk/articles/-A-745068