A New Fractal Image Compression Technique using Genetic Algorithm

S. Mohamed Azath Baik, Dr. A. Nagarajan; JECET; March 2018 - May 2018; Sec. B; Vol. 7, No. 2, 138-144; [DOI: 10.24214/jecet.B.7.2.13844]

Abstract

Fractal compression is a lossy compression method for digital images based on fractals. The method is best suited to textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image. Fractal algorithms convert these parts into mathematical data called "fractal codes", which are used to recreate the encoded image. With fractal compression, encoding is extremely computationally expensive because of the search used to find the self-similarities; decoding, however, is quite fast. The main idea of fractal compression is to exploit local self-similarity in images. This permits a self-referential description of the image data, which shows a typical fractal structure when decoded. This type of redundancy is not exploited by traditional image coders, and this is one of the motivations for fractal image coding. The main objective of this paper is to improve the encoding time and provide a higher compression ratio with the help of a thresholding function.
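To make the cost of the search concrete, the sketch below shows the brute-force baseline that fractal encoders start from, not the authors' GA variant: every range block is compared against every downsampled domain block, and a thresholding test ends the search early once a match is good enough. The block size `rsize` and the tolerance `tol` are illustrative assumptions, and the image sides are assumed to be multiples of 2*rsize.

```python
import numpy as np

def encode(img, rsize=4, tol=25.0):
    """Brute-force fractal encoding of a grayscale image; returns one
    (range_pos, domain_pos, s, o) code per range block."""
    h, w = img.shape
    img = img.astype(float)
    # Domain blocks are twice the range size, averaged down to range size.
    domains = []
    for y in range(0, h - 2 * rsize + 1, rsize):
        for x in range(0, w - 2 * rsize + 1, rsize):
            d = img[y:y + 2 * rsize, x:x + 2 * rsize]
            domains.append(((y, x),
                            d.reshape(rsize, 2, rsize, 2).mean(axis=(1, 3))))
    codes = []
    for y in range(0, h, rsize):
        for x in range(0, w, rsize):
            r = img[y:y + rsize, x:x + rsize]
            best = None
            for pos, d in domains:
                # Least-squares contrast s and brightness o for r ~ s*d + o.
                dc, rc = d - d.mean(), r - r.mean()
                s = (dc * rc).sum() / ((dc * dc).sum() + 1e-9)
                o = r.mean() - s * d.mean()
                err = ((s * d + o - r) ** 2).mean()
                if best is None or err < best[0]:
                    best = (err, pos, s, o)
                if err < tol:   # thresholding: accept early, stop searching
                    break
            codes.append(((y, x), best[1], best[2], best[3]))
    return codes
```

The inner loop over all domain blocks is what makes encoding expensive; a GA such as the one the paper proposes replaces this exhaustive scan with an evolutionary search over candidate domain blocks.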

A Review on Analyzing Various Log Files Using Hadoop

Prof. L. K. Vishwamitra, Vijay Singh Pawar; JECET; March 2018 - May 2018; Sec. B; Vol. 7, No. 2, 133-137; [DOI: 10.24214/jecet.B.7.2.13337]

Abstract

Web usage mining is the application of data mining techniques to discover usage patterns from web data, in order to better serve the needs of web-based applications. User access log files carry very significant information about a web server. This paper is concerned with the in-depth analysis of the web log data of a website to find information about the site, its top errors, potential visitors, etc., which helps system administrators and web designers improve their systems by detecting system errors and corrupted or broken links through web usage mining. Nowadays, as the number of internet users grows, log files also grow rapidly, and processing this explosive growth of log files with relational database technology has become a bottleneck. Analyzing such large datasets requires a parallel processing system and a reliable data storage mechanism. In this paper we present the Hadoop framework for storing and processing large log files.
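As a minimal sketch of this kind of log analysis on Hadoop, the pair of Hadoop Streaming scripts below counts HTTP status codes in an access log, which is one way to surface the "top errors" the abstract mentions. The field position of the status code assumes the Common Log Format; the file names are illustrative.

```python
# --- mapper.py ---
import sys

for line in sys.stdin:
    fields = line.split()
    if len(fields) > 8:          # Common Log Format: status code is field 9
        status = fields[8]
        if status.isdigit():
            print(f"{status}\t1")

# --- reducer.py ---
import sys

current, count = None, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    if key != current:           # keys arrive sorted, so a change ends a group
        if current is not None:
            print(f"{current}\t{count}")
        current, count = key, 0
    count += int(value)
if current is not None:
    print(f"{current}\t{count}")
```

With Hadoop Streaming the two scripts are passed as -mapper and -reducer; the framework sorts the mapper output by key before the reducer runs, which is what the key-change logic above relies on.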

Multi-Objective Bi-Level Programming for Environmental Constrained Electric Power Generation and Dispatch via Genetic Algorithm

Papun Biswas, Tuhin Subhra Das; JECET; March 2018 - May 2018; Sec. B; Vol. 7, No. 2, 117-132; [DOI: 10.24214/jecet.B.7.2.11732]

Abstract

This article presents how multi-objective bi-level programming (MOBLP) in a hierarchical structure can be efficiently used for modeling and solving environmental-economic power generation and dispatch (EEPGDD) problems through Fuzzy Goal Programming (FGP) based on a genetic algorithm (GA) in a thermal power system operation and planning horizon. In the MOBLP formulation, the objectives associated with environmental and economic power generation are first treated as two optimization problems at two individual hierarchical levels (top level and bottom level), each level controlling more than one of the objectives inherent to the problem. The optimization problems at both levels are then described fuzzily to accommodate the imprecision that arises in optimizing them simultaneously in the decision situation. In the model formulation, membership functions in fuzzy sets are used to measure the achievement of the highest membership value (unity) of the defined fuzzy goals, to the extent possible, by minimizing the under-deviational variables associated with the membership goals on the basis of their weights of importance. The modeling aspects of FGP are thereby used to incorporate the various uncertainties that arise in generating power and dispatching it to various locations. In the solution process, a GA scheme is used within the FGP model in an iterative manner to reach a satisfactory decision based on the needs of society in an uncertain environment. The GA scheme is employed at two different stages. At the first stage, the individual optimal decisions of the objectives are determined for their fuzzy goal description. At the second stage, the goal achievement function is evaluated to arrive at the highest membership value of the fuzzy goals, in the hierarchical order of optimizing them in the decision situation. The effectiveness of the approach is tested on the standard IEEE 6-generator 30-bus system.
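The sketch below illustrates the second-stage idea in miniature: a chromosome encodes generator outputs, each fuzzy goal gets a linear membership function, and the GA minimizes the weighted under-deviations of those memberships from unity. All numbers (demand, cost and emission coefficients, goal bounds, weights) are illustrative placeholders, not the paper's IEEE 30-bus data, and the goal bounds stand in for the stage-1 individual optima the paper describes.

```python
import random

DEMAND = 2.8                                  # total load (p.u.), assumed
PMIN, PMAX = [0.5, 0.2, 0.15], [2.0, 0.8, 0.5]
COST = [(10, 200, 100), (10, 150, 120), (20, 180, 40)]   # quadratic cost coeffs
EMIS = [(0.04, -0.5, 6), (0.03, -0.4, 5), (0.05, -0.6, 7)]  # emission coeffs
COST_LO, COST_HI = 600.0, 900.0               # fuzzy goal bounds (assumed)
EMIS_LO, EMIS_HI = 10.0, 20.0
W = (0.6, 0.4)                                # weights of importance

def quad(coeffs, p):
    return sum(a * x * x + b * x + c for (a, b, c), x in zip(coeffs, p))

def membership(val, lo, hi):                  # linear: 1 at lo, 0 at hi
    return max(0.0, min(1.0, (hi - val) / (hi - lo)))

def fitness(p):                               # weighted under-deviations d = 1 - mu
    penalty = 100 * abs(sum(p) - DEMAND)      # power-balance constraint
    d_cost = 1 - membership(quad(COST, p), COST_LO, COST_HI)
    d_emis = 1 - membership(quad(EMIS, p), EMIS_LO, EMIS_HI)
    return W[0] * d_cost + W[1] * d_emis + penalty

def ga(pop=40, gens=200):
    pool = [[random.uniform(lo, hi) for lo, hi in zip(PMIN, PMAX)]
            for _ in range(pop)]
    for _ in range(gens):
        pool.sort(key=fitness)
        elite = pool[:pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]          # crossover
            i = random.randrange(len(child))                     # mutation
            child[i] = min(PMAX[i], max(PMIN[i],
                                        child[i] + random.gauss(0, 0.05)))
            children.append(child)
        pool = elite + children
    return min(pool, key=fitness)

print(ga())   # generation levels with the smallest weighted under-deviation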

Population in India: Suitability of Logistic, Modified Exponential and Second Degree Parabolic Curves

Biswajit Das, Dhritikesh Chakrabarty and Manash Pratim Kashyap; JECET; March 2018 - May 2018; Sec. B; Vol. 7, No. 2, 107-116; [DOI: 10.24214/jecet.B.7.2.10716]

Abstract

There exist innumerable situations/problems where a set of numerical data on a pair of variables must be represented by a suitable mathematical curve. In some of these situations/problems the data must be represented by a mathematical curve containing three parameters. The second degree parabolic curve, the modified exponential curve and the logistic curve are such three-parameter curves. For a given set of numerical data, one of these curves is more suitable than the others to represent the data. The suitability of these three curves has been studied, in the current work, in the case of the total population of India. This paper is based on the findings of that study.
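A minimal sketch of such a comparison, fitting the three three-parameter curves by least squares and ranking them by residual error; the population figures below are placeholders, not the paper's India census data, and the parameterizations are one common choice for each curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def parabola(t, a, b, c):        # second degree parabolic curve
    return a + b * t + c * t ** 2

def mod_exponential(t, a, b, c): # modified exponential curve
    return a + b * c ** t

def logistic(t, k, a, b):        # logistic curve with carrying capacity k
    return k / (1 + a * np.exp(-b * t))

t = np.arange(6, dtype=float)    # decades since the first census (assumed)
y = np.array([36.1, 43.9, 54.8, 68.3, 84.6, 102.9])  # illustrative values

for f, p0 in [(parabola, (30, 5, 1)),
              (mod_exponential, (30, 5, 1.2)),
              (logistic, (200, 5, 0.3))]:
    params, _ = curve_fit(f, t, y, p0=p0, maxfev=10000)
    sse = ((f(t, *params) - y) ** 2).sum()   # suitability: smaller is better
    print(f.__name__, params.round(3), round(sse, 3))
```

Whichever curve yields the smallest residual sum of squares would be judged the most suitable representation of the series, which is the kind of comparison the paper carries out.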

A Review on Optimizing Hadoop Mapreduce Performance

Amit Dubey, Shilpa Tripathi; JECET; March 2018 - May 2018; Sec. B; Vol. 7, No. 2, 101-106; [DOI: 10.24214/jecet.B.7.2.10106]

Abstract

Hadoop has become a key component of big data and has gained more and more support. As companies and individuals have recognized the potential of the Hadoop framework, many companies are working on Hadoop to enhance the performance of the MapReduce framework. Hadoop MapReduce is a popular framework for the distributed storage and processing of large datasets and is used for big data analytics. MapReduce comes with various configuration parameters which play an important role in optimizing Hadoop MapReduce performance. The default values of these parameters do not yield good performance, and it is therefore important to tune them. In this paper we observe that there are more than 100 configuration parameters in the MapReduce framework which play an important role in enhancing MapReduce performance.
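As a minimal sketch of such tuning, the snippet below overrides a handful of standard MapReduce parameters via generic -D options when submitting a Hadoop Streaming job. The values, jar path, input/output paths and script names are illustrative assumptions, not recommended settings.

```python
import subprocess

# A few of the many tunable MapReduce parameters (Hadoop 2.x names).
tuning = {
    "mapreduce.task.io.sort.mb": "256",    # map-side sort buffer size
    "mapreduce.map.memory.mb": "2048",     # container memory per map task
    "mapreduce.reduce.memory.mb": "4096",  # container memory per reduce task
    "mapreduce.job.reduces": "8",          # number of reduce tasks
}

cmd = ["hadoop", "jar", "hadoop-streaming.jar"]
for key, value in tuning.items():
    cmd += ["-D", f"{key}={value}"]        # -D options must precede the rest
cmd += ["-input", "/logs/in", "-output", "/logs/out",
        "-mapper", "mapper.py", "-reducer", "reducer.py"]
subprocess.run(cmd, check=True)            # requires a working hadoop CLI
```

Good values depend on cluster memory, data volume and job shape, which is why the literature this review surveys treats parameter tuning as a search problem rather than a fixed recipe.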