
Yokoo, Hidetoshi – Information Processing & Management, 1994
Discusses the adaptive compression of computer files of numerical data whose statistical properties are not given in advance. A new lossless coding method for this purpose, which utilizes Adelson-Velskii and Landis (AVL) trees, is proposed. The method is effective for any word length. Its application to the lossless compression of gray-scale images…
Descriptors: Coding, Information Storage, Information Theory, Mathematical Models
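The abstract gives no algorithmic details, so the following is only a hypothetical sketch of the general idea of adaptive rank coding over a dynamically maintained ordered set; `bisect` on a Python list stands in for the AVL tree an efficient implementation would use, and the Elias gamma code is an assumed choice of integer code, not necessarily the paper's.

```python
import bisect

def elias_gamma(n):
    """Elias gamma code for a positive integer n: unary length prefix + binary."""
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def adaptive_rank_encode(values):
    """Encode each value by its rank among the distinct values seen so far.

    A real implementation would use an AVL tree for O(log n) lookup and
    insertion; bisect on a sorted Python list is an illustrative stand-in.
    """
    seen, out = [], []
    for v in values:
        r = bisect.bisect_left(seen, v)      # rank of v in the sorted history
        out.append(elias_gamma(r + 1))       # gamma codes need n >= 1
        if r == len(seen) or seen[r] != v:
            bisect.insort(seen, v)           # adaptively grow the model
    return out
```

Because the model is built from the data already seen, encoder and decoder stay synchronized without transmitting statistics in advance, which is the point the abstract emphasizes.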

Bookstein, A.; And Others – Information Processing & Management, 1994
Examines two models to study the consequences for coding efficiency of sample fluctuations, a specific type of error in the statistics on which data compression codes are based. The possibility that random fluctuation can be exploited for data compression is discussed. (six references) (KRN)
Descriptors: Coding, Comparative Analysis, Information Storage, Information Theory
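The cost of sample fluctuations described above can be made concrete: when a code is optimized for an estimated distribution q but the source actually follows p, the expected excess code length per symbol is the Kullback-Leibler divergence D(p‖q). A minimal sketch, with a hypothetical 4-symbol source:

```python
import random
from math import log2

def kl_divergence(p, q):
    """Expected extra bits per symbol when coding a source with true
    distribution p using a code optimized for an estimate q."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# True symbol probabilities (hypothetical 4-symbol source).
p = [0.5, 0.25, 0.15, 0.1]

# Estimate probabilities from a small sample; fluctuations make q != p.
random.seed(0)
sample = random.choices(range(4), weights=p, k=200)
counts = [sample.count(s) + 1 for s in range(4)]   # +1: Laplace smoothing
q = [c / sum(counts) for c in counts]

redundancy = kl_divergence(p, q)   # bits/symbol lost to sample fluctuation
```

The divergence is zero only when the estimate matches the true distribution exactly, so any fluctuation costs some redundancy.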

Villasenor, John D. – Information Processing & Management, 1994
Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression across all available dimensions. This takes maximum advantage of the redundancy in the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)
Descriptors: Coding, Comparative Analysis, Illustrations, Information Storage

Kossentini, Faouzi; And Others – Information Processing & Management, 1994
Discusses a flexible, high performance subband coding system. Residual vector quantization is discussed as a basis for coding subbands, and subband decomposition and bit allocation issues are covered. Experimental results showing the quality achievable at low bit rates are presented. (13 references) (KRN)
Descriptors: Coding, Evaluation, Experiments, Illustrations

Wu, Xiaolin; Fang, Yonggang – Information Processing & Management, 1994
Proposes a scheme of hierarchical piecewise linear approximation as an adaptive image pyramid. A progressive image coder comes naturally from the proposed image pyramid. The new pyramid is semantically more powerful than regular tessellation but syntactically simpler than free segmentation. This compromise between adaptability and complexity…
Descriptors: Coding, Comparative Analysis, Evaluation, Illustrations

Howard, Paul G.; Vitter, Jeffrey Scott – Information Processing & Management, 1994
Describes a detailed algorithm for fast text compression. Related to the PPM (prediction by partial matching) method, it simplifies the modeling phase by eliminating the escape mechanism and speeds up coding by using a combination of quasi-arithmetic coding and Rice coding. Details of the use of quasi-arithmetic code tables are given, and their…
Descriptors: Algorithms, Coding, Electronic Text, Information Storage
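Of the techniques named above, Rice coding is simple enough to sketch; this illustrates the general Rice code (quotient in unary, remainder in k fixed bits), not the paper's quasi-arithmetic tables:

```python
def rice_encode(n, k):
    """Rice code of a nonnegative integer n with parameter k:
    the quotient n >> k in unary (terminated by 0), then a k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    rem = format(r, "b").zfill(k) if k > 0 else ""
    return "1" * q + "0" + rem

def rice_decode(bits, k):
    """Inverse of rice_encode: count unary 1s, then read k remainder bits."""
    q = 0
    i = 0
    while bits[i] == "1":
        q += 1
        i += 1
    i += 1                                   # skip the terminating 0
    r = int(bits[i:i + k], 2) if k > 0 else 0
    return (q << k) | r
```

Rice codes are fast precisely because encoding and decoding reduce to shifts, masks, and bit counting, which fits the speed goal the abstract describes.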

Lin, Jianhua; Storer, James A. – Information Processing & Management, 1994
Describes the design of optimal tree-structured vector quantizers that minimize the expected distortion subject to cost functions related to storage cost, encoding rate, or quantization time. Since the optimal design problem is intractable in most cases, the performance of a general design heuristic based on successive partitioning is analyzed.…
Descriptors: Algorithms, Coding, Comparative Analysis, Costs

Constantinescu, Cornel; Storer, James A. – Information Processing & Management, 1994
Presents a new image compression algorithm that employs some of the most successful approaches to adaptive lossless compression to perform adaptive online (single pass) vector quantization with variable size codebook entries. Results of tests of the algorithm's effectiveness on standard test images are given. (12 references) (KRN)
Descriptors: Algorithms, Coding, Data Processing, Evaluation

Moffat, Alistair; And Others – Information Processing & Management, 1994
Evaluates the performance of different methods of data compression coding in several situations. Huffman's code, arithmetic coding, fixed codes, fast approximations to arithmetic coding, and splay coding are discussed in terms of their speed, memory requirements, and proximity to optimal performance. Recommendations for the best methods of…
Descriptors: Coding, Data Processing, Evaluation, Experiments
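The first method in that comparison, Huffman's code, can be sketched briefly; this is a generic textbook construction, not the specific implementations the study benchmarks:

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman code from a dict of symbol -> frequency.

    Returns a dict symbol -> bit string; more frequent symbols receive
    codes no longer than less frequent ones, and no code is a prefix
    of another.
    """
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)                       # tie-breaker for equal weights
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (left, right)))
        counter += 1
    codes = {}

    def walk(node, prefix):
        if isinstance(node, tuple):           # internal node: recurse
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                 # leaf: record the code word
            codes[node] = prefix or "0"       # single-symbol edge case
    walk(heap[0][2], "")
    return codes
```

The trade-offs the study evaluates show up even here: the table lookup per symbol is fast, but the code must be rebuilt (or adapted) when the statistics change, which is where arithmetic and splay coding differ.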

Feygin, Gennady; And Others – Information Processing & Management, 1994
Presents two new algorithms for performing arithmetic coding without employing multiplication and discusses their implementation requirements. The first algorithm, suitable for an alphabet of arbitrary size, reduces the worst-case excess code length to under 0.8%. The second algorithm, suitable only for alphabets of fewer than 12 symbols, allows even…
Descriptors: Algorithms, Coding, Comparative Analysis, Evaluation

Culik, Karel II; Kari, Jarkko – Information Processing & Management, 1994
Presents an inference algorithm that produces a weighted finite automaton (WFA) representing, in particular, the grayness functions of gray-tone images. The new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results, alone and in combination with wavelets, are discussed.…
Descriptors: Algorithms, Coding, Comparative Analysis, Data Processing
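To make the WFA representation concrete, here is a minimal hand-built example (not the inference algorithm): a 2-state WFA computing the linear ramp f(x) = x on [0,1). A dyadic address string a1…ak names a subinterval, and the WFA's value there is the average of f over that subinterval, obtained by multiplying an initial vector through one transition matrix per digit and dotting with a final vector.

```python
def wfa_value(address, initial, final, transitions):
    """Value a weighted finite automaton assigns to a dyadic address string."""
    v = list(initial)
    for digit in address:
        A = transitions[digit]
        v = [sum(v[i] * A[i][j] for i in range(len(v)))
             for j in range(len(A[0]))]
    return sum(vi * fi for vi, fi in zip(v, final))

# 2-state WFA for f(x) = x on [0,1): state 0 computes x, state 1 computes 1.
A0 = [[0.5, 0.0], [0.0, 1.0]]   # reading digit 0: x -> x/2
A1 = [[0.5, 0.5], [0.0, 1.0]]   # reading digit 1: x -> (1 + x)/2
I = [1.0, 0.0]                  # start in the "x" state
F = [0.5, 1.0]                  # average of each state's function over [0,1)

wfa_value("1", I, F, {"0": A0, "1": A1})   # avg of x over [1/2, 1) = 0.75
```

Images use the same idea with four digits (one per quadrant of a quadtree address); smooth intensity functions need only a handful of states and edges, which is why a small-edge-count WFA compresses well.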

Grumbach, Stephane; Tahi, Fariza – Information Processing & Management, 1994
Analyzes the properties of genetic sequences that cause the failure of classical algorithms used for data compression. A lossless algorithm, which compresses the information contained in DNA and RNA sequences by detecting regularities such as palindromes, is presented. This algorithm combines substitutional and statistical methods and appears to…
Descriptors: Algorithms, Coding, Comparative Analysis, Databases
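The regularity the abstract mentions is the biological sense of "palindrome": a DNA segment that equals its own reverse complement. A minimal detector (illustrative only, not the paper's compressor):

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq):
    """Reverse complement of a DNA sequence."""
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def is_biological_palindrome(seq):
    """A DNA 'palindrome' equals its own reverse complement,
    e.g. the EcoRI recognition site GAATTC."""
    return seq == reverse_complement(seq)

def find_palindromes(seq, length):
    """Start positions of all reverse-complement palindromes of a given length."""
    return [i for i in range(len(seq) - length + 1)
            if is_biological_palindrome(seq[i:i + length])]
```

A compressor can replace the second half of each detected palindrome with a back-reference, which is where the substitutional part of the combined substitutional/statistical approach comes in.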