Shape Classification Using Shape Context and Dynamic Programming
The algorithm for shape classification described in this paper proceeds in several steps. It analyzes the contours of pairs of shapes; each contour is recovered and represented by N sample points. Given two points pi and qj from the two shapes, the cost of matching them is evaluated using the shape context descriptor, and dynamic programming then finds the best matching between the two point sets. Dynamic programming not only recovers the best matching but also identifies occlusions, i.e. points in the two shapes that cannot be properly matched, and it yields the minimum cost of matching a pair of shapes. After computing the pairwise minimum cost between the input shape and all reference shapes in the given database, we sort the costs in ascending order and check whether the first two shapes belong to the input class. If so, the shape is classified as a perfect match; otherwise it is a mismatch. The algorithm has been tested on the Kimia-25, Kimia-99, Kimia-216, and MPEG-7 shape databases, providing good shape classification performance.
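The matching step described above can be sketched as a standard dynamic-programming alignment. This is a minimal illustration, not the paper's implementation: plain Euclidean distance stands in for the shape-context matching cost, and the fixed `occlusion_cost` penalty for skipped points is an assumed parameter.

```python
import math

def match_contours(p, q, occlusion_cost=1.0):
    """Align two contour point sequences with dynamic programming.

    The per-pair cost here is plain Euclidean distance, a stand-in for
    the shape-context cost in the abstract; points left unmatched
    (occlusions) incur a fixed penalty. Returns the minimum total cost.
    """
    n, m = len(p), len(q)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i < n and j < m:  # match p[i] with q[j]
                c = math.dist(p[i], q[j])
                dp[i + 1][j + 1] = min(dp[i + 1][j + 1], dp[i][j] + c)
            if i < n:  # p[i] is occluded
                dp[i + 1][j] = min(dp[i + 1][j], dp[i][j] + occlusion_cost)
            if j < m:  # q[j] is occluded
                dp[i][j + 1] = min(dp[i][j + 1], dp[i][j] + occlusion_cost)
    return dp[n][m]
```

Classification then amounts to computing this cost against every reference shape and sorting in ascending order, as the abstract describes.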
Spectral Clustering in Data mining with the case study of Customer Relationship Management
In data mining, lead generation is a data-searching technique used to collect relevant customer information (leads); contextual advertising is one example of this technique. You may have noticed that as soon as you open Google to search for something, it displays advertisements or sponsored links alongside the search results, typically based on the search text, the logged-in user (e.g. a Google account), location, and browser, among other factors. Preparing such customized advertisements and sponsored links is called contextual advertising, and the technique is an example of lead generation: an easy, painless way of attracting users and cultivating prospective customers from them. The key idea of this paper is to bring out the importance of data mining in the field of CRM and to explain the benefits of the M-Clustering algorithm we propose, which proves more efficient than the k-means algorithm. We also compare it with Newman's algorithm, highlighting the significance of M-Clustering in terms of training-set and historical-data handling.
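The abstract does not specify the M-Clustering algorithm itself, so as context here is a minimal sketch of the k-means baseline it is compared against, for clustering customer records represented as numeric feature tuples; the seed and iteration count are illustrative choices.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on tuples of numbers: the baseline against which
    the proposed M-Clustering is compared. Returns a cluster label
    per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: attach each point to its nearest center
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
        # update step: move each center to the mean of its members
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return labels
```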
Design and implementation of testing tool for code smell rectification using C-Mean algorithm
A code smell is a hint, or the description of a symptom, that something has gone wrong somewhere in your code. Code smells are commonly occurring patterns in source code that indicate poor programming practice or code decay. Their presence can severely affect the quality of a program, making the system more complex and less understandable and causing maintainability problems. Herein, an automated tool has been developed that can rectify code smells present in source code written in Java, C#, and C++ to support software quality assurance. It also computes the complexity, total memory utilized/wasted, and maintainability index of the software. This paper discusses the approach used for the design and implementation of the testing tool for code smell rectification and validates it on three different projects.
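To illustrate what detecting a code smell looks like in practice, here is a minimal sketch of a "Long Method" detector. The paper's tool targets Java, C#, and C++; this sketch shows the same idea on Python source via the standard `ast` module, and the 30-line threshold is an assumed value, not one from the paper.

```python
import ast

def long_method_smells(source, max_lines=30):
    """Flag functions whose span exceeds max_lines: the classic
    'Long Method' smell. Returns (name, line_count) pairs."""
    smells = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno/lineno give the function's full source span
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                smells.append((node.name, length))
    return smells
```

A full rectification tool would pair each detector with a refactoring (e.g. Extract Method), which is beyond this sketch.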
Human action recognition to understand hand signals for traffic surveillance
Gesture recognition plays a vital role in computer vision. The purpose of this survey is to provide a detailed overview and categorization of current issues and trends. The recognition of human hand-gesture movement can be performed at various levels of abstraction. Many applications and algorithms are discussed here, along with an explanation of the recognition-system framework, and a general overview of actions and their various applications is given. Most recognition systems use datasets such as KTH and Weizmann, while some use other datasets. Various approaches to image representation, feature extraction, activity detection, and action recognition are also discussed in this paper.
Intelligent Mobile Agents for Heterogeneous Devices in Cloud Computing
Cloud computing enables highly scalable services to be easily consumed over the Internet on an as-needed basis. A software agent offers a new computing paradigm in which a program can suspend its execution on a host computer and transfer to another agent-enabled computer on the network to continue running there. Such mobile agents are used when computers have limited capacity, but at times a system cannot run even the mobile agents. This paper proposes an approach to executing mobile agents on any sort of system, even with limited cloud capacity. A platform will be developed that supports various mobile agents depending on the capacity of the system. Mobile clients are differentiated into two types according to their capacity, and the corresponding mobile agent is chosen for the traversal.
Iterative Software Process Based Collaboration Model for Software Stakeholders
Software engineering is well known for its significance in minimizing software development complexity, and research has been carried out to improve software engineering practice. However, some identified problems still lead to development complexity, such as the lack of understandable collaboration or communication between software stakeholders during software development. To address this problem, earlier research proposed a collaboration model for software stakeholders during software development, but that model was restricted to the waterfall software process model. This study uses part of the framework behind the waterfall-based collaboration model to develop an iterative software process based collaboration model for software stakeholders during software development. The proposed model will help minimize the lack of understandable collaboration between software stakeholders during software development.
More Accurate Value Prediction using Neural methods
Data dependencies between instructions have traditionally limited the ability of processors to execute instructions in parallel. Data value predictors are used to overcome these dependencies by guessing the outcomes of instructions in a program. Because mispredictions can cause a significant performance decrease, most data value predictors include a confidence estimator that indicates whether a prediction should be used. Much research has recently been done in the area of data value prediction as a means of overcoming these data dependencies [7,8,9,10,11,17,18,20,21]. The goal of data value prediction is to guess the outcome of an instruction before it is actually executed, allowing future instructions that depend on its outcome to be executed sooner. Data value predictors are usually designed to look for patterns among the data produced in repeated iterations of static instructions; accurate prediction can be attained when the repeated outcomes of a particular instruction follow easily discernible patterns. This paper presents a global approach to confidence estimation in which the prediction accuracy of previous instructions is used to estimate the confidence of the current prediction. Data value prediction is done using perceptrons, and Support Vector Machines are used to identify which past instructions affect the accuracy of a prediction and to decide, based on their results, whether the prediction is likely to be correct.
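The perceptron-based confidence idea can be sketched as follows. This is an assumption-laden illustration, not the paper's design: the history is a +1/-1 vector recording whether recent predictions were correct, the learning rate is illustrative, and the SVM component is omitted entirely.

```python
def perceptron_confidence(history, weights, bias=0.0):
    """Confidence estimate: dot product of a +/-1 global correctness
    history with learned weights; 'confident' if non-negative."""
    s = bias + sum(w * h for w, h in zip(weights, history))
    return s >= 0.0

def train_step(history, weights, was_correct, lr=1.0):
    """Standard perceptron update: on a wrong confidence decision,
    nudge each weight toward the observed outcome."""
    target = 1 if was_correct else -1
    if perceptron_confidence(history, weights) != (target == 1):
        for i, h in enumerate(history):
            weights[i] += lr * target * h
    return weights
```

In hardware proposals of this style, the history register and weight table are small fixed-point structures; the sketch keeps everything in floating point for clarity.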
Reducing transfer latency of peer to peer system using unstructured model
This paper presents a queuing model to evaluate the latency associated with file transfers or replications in peer-to-peer (P2P) computer systems. The main contribution of this paper is a modelling framework for the peers that accounts for the file size distribution, the search time, load distribution at peers, and number of concurrent downloads allowed by a peer. We propose a queuing model that models the nodes or peers in such systems as M/G/1/K processor sharing queues. The model is extended to account for peers which alternate between online and offline states. The proposed queuing model for the peers is combined with a single class open queuing network for the routers interconnecting the peers to obtain the overall file transfer latency. We also show that in scenarios with multipart downloads from different peers, a rate proportional allocation strategy minimizes the download times.
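The rate-proportional allocation result mentioned at the end of the abstract has a simple intuition: if each peer's part size is proportional to its service rate, all parts finish simultaneously, so no download time is wasted waiting on a slow straggler. A minimal sketch, with the file size and rates as illustrative inputs:

```python
def rate_proportional_parts(file_size, peer_rates):
    """Split a file across peers in proportion to their service rates,
    so every part completes at the same time file_size / sum(rates)."""
    total = sum(peer_rates)
    return [file_size * r / total for r in peer_rates]
```

With rates 1 and 3 and a 100-unit file, the parts are 25 and 75, and both peers finish after 25 time units.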
A low-cost built-in redundancy-analysis scheme for word-oriented RAMs with 2-D redundancy
Built-in self-repair (BISR) techniques are widely used for repairing embedded random access memories (RAMs). One key component of a BISR module is the built-in redundancy-analysis (BIRA) design. This paper presents an effective BIRA scheme that executes 2-D redundancy allocation based on a 1-D local bitmap. Two BIRA algorithms supporting two different redundancy organizations are also proposed. Simulation results show that the proposed BIRA scheme can provide a high repair rate (i.e., the ratio of the number of repaired memories to the number of defective memories) for RAMs with different fault distributions. Experimental results show that the hardware overhead of the BIRA design is only about 2.9% for an 8192 × 64-bit RAM with two spare rows and two spare columns. Also, the ratio of the BIRA analysis time to the test time is only about 0.02% when the March-CW test is performed. Furthermore, a simulation flow is proposed to determine the size of the 1-D local bitmap such that the BIRA algorithm can provide the best repair rate using the smallest-size 1-D local bitmap.
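The 2-D redundancy-allocation problem the scheme solves can be illustrated with a simple software sketch: given fault coordinates and a budget of spare rows and columns, repeatedly repair the line covering the most remaining faults. This greedy heuristic is a stand-in for the paper's bitmap-based algorithms (and, unlike an exact analyzer, is not always optimal).

```python
from collections import Counter

def greedy_allocate(faults, spare_rows, spare_cols):
    """Greedy 2-D spare allocation over (row, col) fault coordinates:
    at each step, spend a spare row or column on the line that covers
    the most remaining faults. Returns True if all faults are repaired."""
    faults = set(faults)
    while faults and (spare_rows or spare_cols):
        rows = Counter(r for r, _ in faults)
        cols = Counter(c for _, c in faults)
        best_row = rows.most_common(1)[0] if spare_rows else (None, -1)
        best_col = cols.most_common(1)[0] if spare_cols else (None, -1)
        if best_row[1] >= best_col[1]:
            faults = {(r, c) for r, c in faults if r != best_row[0]}
            spare_rows -= 1
        else:
            faults = {(r, c) for r, c in faults if c != best_col[0]}
            spare_cols -= 1
    return not faults
```

A hardware BIRA design performs this analysis on-chip over a compact fault bitmap rather than a full fault list, which is why the paper's 1-D local bitmap size matters.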
Analysis of Test Case Prioritization in Regression Testing Using Genetic Algorithm
Testing is an accepted technique for improving the quality of developed software. With the increase in size and complexity of modern software products, the importance of testing is growing rapidly. Regression testing plays a vital role in software maintenance when software is modified: its main purpose is to ensure that bugs are fixed and that the new functionality incorporated in a new version does not unfavorably affect the correct functionality of the previous version. Regression testing is therefore the right process for revalidating modified software. Though it is an expensive process that must be executed frequently during maintenance, it is necessary for subsequent versions of test suites. Evaluating the quality of the test cases used to test a program requires executing the program. In this paper we propose a new test case prioritization technique using a genetic algorithm. The proposed technique separates the test cases reported as severe by the customer and, among the rest, prioritizes subsequences of the original test suite so that the new suite, run within a time-constrained execution environment, has a superior rate of fault detection compared to randomly prioritized test suites. The experiment analyzes the genetic algorithm with regard to effectiveness and time overhead, using structurally based criteria to prioritize test cases.
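Prioritization techniques of this kind are typically scored with APFD (Average Percentage of Faults Detected), which a search can then maximize. Below is a minimal sketch under stated assumptions: every fault is eventually detected by some test, and the search is a mutation-only evolutionary loop rather than the paper's full genetic algorithm (no crossover, illustrative population and generation counts).

```python
import random

def apfd(order, faults_detected, n_faults):
    """APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n), where TF_f
    is the position of the first test in `order` that detects fault f.
    faults_detected maps test id -> set of fault ids it catches."""
    n = len(order)
    first_pos = {}
    for pos, t in enumerate(order, start=1):
        for f in faults_detected.get(t, ()):
            first_pos.setdefault(f, pos)
    total = sum(first_pos.get(f, n + 1) for f in range(n_faults))
    return 1 - total / (n * n_faults) + 1 / (2 * n)

def evolve(tests, faults_detected, n_faults, generations=50, pop=20, seed=1):
    """Mutation-only evolutionary search over test orderings: each
    individual random-walks via swap mutations; track the best APFD."""
    rng = random.Random(seed)
    best = list(tests)
    best_fit = apfd(best, faults_detected, n_faults)
    population = [rng.sample(tests, len(tests)) for _ in range(pop)]
    for _ in range(generations):
        for ind in population:
            i, j = rng.randrange(len(ind)), rng.randrange(len(ind))
            ind[i], ind[j] = ind[j], ind[i]  # swap mutation
            fit = apfd(ind, faults_detected, n_faults)
            if fit > best_fit:
                best, best_fit = list(ind), fit
    return best, best_fit
```

In practice the fault matrix is unknown before execution, so structurally based surrogates (e.g. coverage) stand in for `faults_detected`, as the abstract's final sentence suggests.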