Note: Papers are listed in no particular order.
Alfonso Alba, Ruth Aguilar, Javier Flavio Vigueras-Gómez and Edgar Arce-Santana.
Phase correlation based image alignment with subpixel accuracy
Abstract:
The phase correlation method is a well-known image alignment technique with broad applications in medical image processing, image stitching, and computer vision. This method relies on estimating the maximum of the phase-only correlation (POC) function, which is defined as the inverse Fourier transform of the normalized cross-spectrum between two images. The coordinates of the maximum correspond to the translation between the two images. One of the main drawbacks of this method, in its basic form, is that the location of the maximum can only be obtained with integer accuracy. In this paper, we propose a new technique to estimate the location with subpixel accuracy, by minimizing the magnitude of the gradient of the POC function around a point near the maximum. We also present experimental results in which the proposed method improves accuracy by at least one order of magnitude. Finally, we illustrate the application of the proposed algorithm to the rigid registration of digital images.
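The integer-accuracy baseline that the paper refines can be sketched in a few lines of NumPy. This is a generic illustration of the POC function described above, not the authors' subpixel method; the image size and shift values are arbitrary test data:

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer translation between two equally sized images
    via the peak of the phase-only correlation (POC) function."""
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    cross /= np.maximum(np.abs(cross), 1e-12)   # normalized cross-spectrum
    poc = np.real(np.fft.ifft2(cross))          # phase-only correlation
    peak = np.unravel_index(np.argmax(poc), poc.shape)
    # map indices in [0, N) to signed shifts in [-N/2, N/2)
    shifts = []
    for p, s in zip(peak, poc.shape):
        p = int(p)
        shifts.append(p if p < s // 2 else p - s)
    return tuple(shifts)

rng = np.random.default_rng(0)
base = rng.random((64, 64))
moved = np.roll(base, shift=(5, -3), axis=(0, 1))
print(phase_correlation_shift(moved, base))  # prints (5, -3)
```

The paper's contribution starts where this sketch stops: refining the integer peak location to subpixel precision by minimizing the gradient magnitude of the POC surface near the peak.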
Abstract:
This article presents a Language Independent Textual Entailment (LITE) system. The work explores textual entailment as a relation between two texts in the same language as well as in different languages, and proposes different measures for the entailment decision in two-way classification tasks (yes and no). We set up different heuristics and measures for evaluating the entailment between two texts based on lexical relations. Experiments have been carried out with both the text and the hypothesis converted to the same language using the Microsoft Bing translation system. The entailment system considers several text similarity measures of the text pair to decide the entailment judgments. Rules have been developed to handle the two-way entailment issue. Our system decides on the entailment judgment after comparing the entailment scores for the text pairs. For this system we used Italian textual entailment datasets: we trained on the Italian development datasets using the Weka machine learning tool and tested on the Italian test datasets. The accuracy of our system is 0.66.
Guillermo Gonzalez-Campos, Edith Luévano-Hipólito, Luis Martin Torres-Treviño and Azael Martinez-De La Cruz.
Artificial neural network for optimization of a synthesis process of Bi2MoO6 using surface response methodology
Abstract:
In this work an Artificial Neural Network was used to optimize the synthesis of Bi2MoO6 oxide by co-precipitation assisted by ultrasonic radiation. This molybdate is recognized as an efficient photocatalyst for the degradation of organic pollutants in aqueous media. For the synthesis of Bi2MoO6, three variables were considered: the exposure time to ultrasonic radiation, temperature and calcination time. The efficiency of the synthesized photocatalysts was evaluated in the photodegradation of rhodamine B (rhB) under sun-like irradiation. A set of experimental data was introduced into the neural network to simulate the results, modifying one of the three input variables at a time and observing the efficiency of the photocatalysts using the response surface methodology.
Renato Cordeiro de Amorim and Mario Chirinos Colunga.
An Empirical Evaluation of Different Initialization on the Number of K-Means Iterations
Abstract:
This paper presents an analysis of the number of iterations K-Means takes to converge under different initializations. We have experimented with seven initialization algorithms in a total of 37 real and synthetic datasets. We have found that hierarchical-based initializations tend to be most effective at reducing the number of iterations, especially a divisive algorithm using the Ward criterion when applied to real datasets.
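As a rough illustration of the kind of experiment described (not the paper's seven initializations or 37 datasets), the sketch below counts Lloyd iterations for a random initialization versus a simple farthest-point initialization on synthetic Gaussian clusters; all data and parameters are invented for the example:

```python
import numpy as np

def kmeans(X, centers, tol=1e-9, max_iter=100):
    """Lloyd's algorithm; returns the final centers and the iteration count."""
    for it in range(1, max_iter + 1):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == k].mean(0) for k in range(len(centers))])
        if np.allclose(new, centers, atol=tol):
            return new, it
        centers = new
    return centers, max_iter

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in ((0, 0), (3, 0), (0, 3))])

random_init = X[rng.choice(len(X), 3, replace=False)]
# farthest-point init: greedily pick points far from the already chosen centers
fp = [X[0]]
for _ in range(2):
    d = np.min([((X - c) ** 2).sum(1) for c in fp], axis=0)
    fp.append(X[np.argmax(d)])

_, it_rand = kmeans(X, random_init)
_, it_fp = kmeans(X, np.array(fp))
print(it_rand, it_fp)  # iteration counts under the two initializations
```

Repeating such runs over many datasets, as the paper does, yields a distribution of iteration counts per initialization scheme.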
Pawel Wozniak, Andrzej Romanowski, Tomasz Jaworski, Pawel Fiderek and Jacek Kucharski.
Modelling social relationships in surgical teams with fuzzy decision-making techniques
Abstract:
This work covers the design of a decision-making system in a particular hospital environment, where clinical activities are shared between units in two distant buildings. A novel context-aware interactive system for managing this cooperation was prototyped. A user study of the system concluded that human preconceptions are of high importance in this work environment and that personal relations between the physicians play a crucial role in assembling surgical teams. Therefore, one of the most important modules of the system, the selection of optimal surgical team rosters, was modeled using fuzzy inference methods in order to include existing personal preferences and to achieve greater efficiency and optimal use of the hospital's resources. In this interdisciplinary endeavour, the research team combined knowledge from the fields of interaction design, ubiquitous computing, operations research and fuzzy modelling to tackle a real-life problem in a safety-critical setting.
Carlos-Francisco Méndez-Cruz, Juan-Manuel Torres-Moreno, Alfonso Medina-Urrea and Gerardo Sierra.
Extrinsic Evaluation on Automatic Summarization Tasks: testing Affixality Measurements for Statistical Word Stemming
Abstract:
This paper presents evaluation experiments for a statistical stemming algorithm based on morphological segmentation. The method estimates the affixality of word fragments, combining three probabilistic indexes associated with possible segmentations. This unsupervised, language-independent method has been easily adapted to produce an effective morphological stemmer. The stemmer has been coupled with Cortex, an automatic summarization system, to generate summaries in English, Spanish and French. The summaries have been evaluated using ROUGE. The results of this extrinsic evaluation show that our stemming algorithm outperforms several classical systems.
Maria Jose Fresnadillo Martinez, Enrique Garcia Merino, Enrique Garcia Sanchez, Jose Elias Garcia Sanchez, Angel Martin Del Rey and Gerardo Rodriguez Sanchez.
A graph cellular automata model to study the spreading of an infectious disease
Abstract:
A mathematical model based on cellular automata on graphs to simulate general epidemic spreading is presented in this paper. Specifically, it is a SIR-type model where the population is divided into susceptible, infected and recovered individuals.
Federico Felipe, Angel Martínez, Elena Acevedo and Marco Antonio Acevedo.
A Novel Encryption Method with Associative Approach for Gray-Scale Images
Abstract:
Encryption is used to protect confidential data from unauthorized access. This paper introduces a novel method for encrypting images with an associative approach. The encryption method divides the gray-scale image into n squares, from which a Morphological Associative Memory is built. The information from the image is then stored in the array representing the memory. The meaning of the array is very difficult to decipher without prior knowledge of how the associative memory operates. The original and recovered images were correlated, and the correlation coefficient is 1 in all cases.
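The perfect-recall property the abstract reports (correlation coefficient 1) is consistent with the standard max-memory construction of Morphological Associative Memories, which recalls uncorrupted stored patterns exactly. The sketch below shows that construction in a generic autoassociative form, not the paper's full encryption scheme; the pattern values are made up:

```python
import numpy as np

def mam_store(patterns):
    """Autoassociative morphological max-memory: W_ij = max_k (x_i^k - x_j^k)."""
    P = np.asarray(patterns, dtype=float)               # shape (k, n)
    return np.max(P[:, :, None] - P[:, None, :], axis=0)

def mam_recall(W, x):
    """Morphological recall: y_i = min_j (W_ij + x_j)."""
    return np.min(W + np.asarray(x, dtype=float)[None, :], axis=1)

x1 = [10, 200, 31, 7]        # illustrative gray-level vectors
x2 = [50, 3, 90, 120]
W = mam_store([x1, x2])
print(mam_recall(W, x1))     # perfect recall of a stored pattern
```

At j = i the memory contributes exactly x_i, and every other j contributes at least x_i, so the min recovers each stored pattern without error.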
Carlos Román Mariaca, Julio César Tovar and Floriberto Ortiz.
Recurrent Neural Control of a Continuous Bioprocess Using First and Second Order Learning
Abstract:
The aim of this paper is to propose a new Kalman Filter Recurrent Neural Network (KFRNN) topology and a recursive Levenberg-Marquardt (L-M) learning algorithm capable of estimating the states and parameters of a highly nonlinear continuous fermentation bioprocess in a noisy environment. The direct adaptive control scheme also contains feedback and feedforward recurrent neural controllers. The proposed control scheme is applied for real-time identification and control of a continuous stirred tank bioreactor model taken from the literature, where fast convergence, noise filtering and a low mean squared error of reference tracking were achieved.
Gonzalo Nápoles, Isel Grau and Ricardo Grau.
Modelling, aggregation and simulation of a dynamic biological system through Fuzzy Cognitive Maps
Abstract:
The complex dynamics of the Human Immunodeficiency Virus leads to serious problems in predicting drug resistance. Several machine learning techniques have been proposed for modelling this classification problem, but most of them are difficult to aggregate and interpret. In fact, in recent years the protein modelling of this virus has become, from diverse points of view, an open problem for researchers. This paper models the protease protein as a dynamic system through Fuzzy Cognitive Maps, using amino acid contact energies for the sequence description. In addition, an evolutionary learning scheme called PSO-RSVN is used to estimate the causal weight matrix that characterizes these structures. Finally, an aggregation procedure over previously adjusted maps is applied to obtain a prototype map, in order to discover knowledge in the causal influences and to simulate the system behaviour when single or multiple mutations take place.
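The Fuzzy Cognitive Map dynamics the abstract relies on can be sketched with the usual synchronous update rule; the weight matrix and the sigmoid threshold function below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fcm_step(A, W, lam=1.0):
    """One synchronous FCM update: A_i(t+1) = sigmoid(sum_j W[j, i] * A_j(t))."""
    return 1.0 / (1.0 + np.exp(-lam * (W.T @ A)))

def simulate(A0, W, steps=50):
    """Iterate the map from activation vector A0 for a fixed number of steps."""
    A = np.asarray(A0, dtype=float)
    for _ in range(steps):
        A = fcm_step(A, W)
    return A

# toy 2-concept map: W[j, i] is the causal influence of concept j on concept i
W = np.array([[0.0, 0.8],
              [-0.5, 0.0]])
A = simulate([1.0, 0.5], W)
print(A)  # activations settle inside (0, 1)
```

In the paper's setting the concepts would describe protease positions, the weights would be learned by PSO-RSVN, and mutations would be simulated by perturbing the initial activations.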
Abstract:
The aim of Hyper-Heuristics is to solve a wide range of problem instances with a set of heuristics, each chosen according to the specific characteristics of the problem. In this paper, we explore two different heuristics to segment the Course Timetabling problem (CTT) into subproblems with the objective of solving the problem efficiently. Each subproblem is solved as a Constraint Satisfaction Problem (CSP). Once the CTT is partitioned and each part solved separately, we also propose two different strategies to integrate the solutions into a complete assignment. Both integration strategies use the Min-Conflicts algorithm to reduce the inconsistencies that might arise during this integration. Each problem instance was solved with and without segmentation. The results show that simple problems do not benefit from the use of segmentation heuristics, whilst harder problems behave better when these heuristics are used. This suggests a potential benefit of using hyper-heuristics in hard CTT problems.
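The Min-Conflicts algorithm used in the integration step can be illustrated on the classic n-queens toy problem; this sketch shows the algorithm itself, not the paper's CTT formulation:

```python
import random

def min_conflicts(n=8, max_steps=10000, seed=0):
    """Min-Conflicts local search for n-queens: queens[c] = row of column c."""
    rnd = random.Random(seed)
    queens = [rnd.randrange(n) for _ in range(n)]

    def conflicts(col, row):
        # number of other queens attacking square (col, row)
        return sum(1 for c in range(n) if c != col and
                   (queens[c] == row or abs(queens[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, queens[c]) > 0]
        if not conflicted:
            return queens                       # consistent assignment found
        col = rnd.choice(conflicted)
        # move the chosen queen to a row minimizing its conflicts (random ties)
        queens[col] = min(range(n),
                          key=lambda r: (conflicts(col, r), rnd.random()))
    return None

print(min_conflicts(8))
```

In the paper the same repair loop runs over timetable variables instead of queens, reducing the inconsistencies introduced when partial CSP solutions are merged.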
Carlos Fco Alvarez Salgado, Luis E. Palafox Maestre, Leocundo Aguilar Noriega and Juan R. Castro.
Distance Approximator Using IEEE 802.11 Received Signal Strength and Fuzzy Logic
Abstract:
Wireless localization systems based on IEEE 802.11 have become increasingly common in recent years, due in part to low hardware costs and the ease of deployment with off-the-shelf Access Points (APs). Such localization systems rely on Received Signal Strength (RSS), obtained from a periodic beacon containing information about the source. Indoors, shadow attenuation and multipath fading make RSS a random variable that depends on the location of the antennas, following a characteristic Rayleigh statistical distribution. This article addresses the measurement of the distance from an AP to a device; a position could then be resolved by triangulation or trilateration using additional APs. This paper proposes a method that uses fuzzy logic for modeling and dealing with noisy and uncertain measurements.
B. Lorena Villarreal, Christian Hassard and J.L. Gordillo.
Integration of directional smell sense on an UGV
Abstract:
The olfaction sense in animals is very important and has been exploited in applications such as search and rescue and chemical detection. Thanks to these advantages, there has been considerable interest in bringing this capability to mobile robots. A very important task for a sniffing robot is odor source localization. The objective of this research is to use a robot with an odor sensor that, inspired by nature, could identify the direction from which an odor is coming in an indoor or outdoor environment. The sensor design consists of two bio-inspired nostrils, each with three sensors, separated by a septum, integrating a full nose system. Furthermore, again inspired by nature, each nostril has the ability to inhale and exhale, which helps to desaturate the sensors. The vehicle used has a bus architecture based on a Controller Area Network (CAN) with distributed control, to increase the robustness of the data communication and to make the integration of the sensor relatively fast, without the need to interfere with the existing systems of the vehicle. After many experiments, we concluded that the designed sensor is capable of discriminating the direction from which an odor is coming with respect to the direction of the vehicle, even in outdoor environments.
José Martín Castro-Manzano.
Towards Computational Political Philosophy
Abstract:
We present the first results of a series of studies around the concept of computational political philosophy. In this paper we show a computational interpretation of three classical models of political philosophy that justify three different forms of government, implementing experiments in order to provide evidence to answer the question of which political philosophy proposes a more welfare-like form of government. We focus on the relation between commitment and earnings, and we observe that although some political philosophies would justify highly united societies or highly competitive communities, they would not necessarily imply societies with a reasonable level of welfare.
José Martín Castro-Manzano.
A Defeasible Logic of Intention
Abstract:
We follow the hypothesis that intentional reasoning is a form of logical reasoning sui generis by virtue of its double nature: temporal and defeasible. We then briefly describe a formal framework that deals with these topics and study the metalogical properties of its notion of inference. The idea is that intentional reasoning can be represented in a well-behaved defeasible logic and has the right to be called logical reasoning, since it behaves, mutatis mutandis, as a logic; strictly speaking, as a non-monotonic logic.
Omar Nuñez and Antonio Camarena-Ibarrola.
Monitoring the content of audio broadcasted by Internet Radio Stations
Abstract:
Auditing the content of audio transmitted by radio stations is of great interest to those who pay for publicity, to radio station managers, and to governments as well. Our approach makes use of a robust audio fingerprint to characterize the monitored audio, and of a proximity index for fast retrieval of the most similar piece of audio among the collection of audio clips (mainly ads) known to the system. Since the audio signal broadcast via Internet suffers very little degradation, an inverted index proved to be a great solution. This combination of index and fingerprint performed very well, achieving 100% recall and allowing real-time monitoring of several radio stations simultaneously with a single desktop computer.
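An inverted index over fingerprint tokens of the kind the abstract mentions can be sketched as follows; the frame hashes here are invented placeholders, not actual audio-fingerprint values:

```python
from collections import Counter, defaultdict

def build_index(clips):
    """Map each fingerprint frame (any hashable token) to the clips containing it."""
    index = defaultdict(set)
    for name, frames in clips.items():
        for f in frames:
            index[f].add(name)
    return index

def best_match(index, query_frames):
    """Vote for the known clip sharing the most fingerprint frames with the query."""
    votes = Counter()
    for f in query_frames:
        for name in index.get(f, ()):
            votes[name] += 1
    return votes.most_common(1)[0][0] if votes else None

# toy fingerprint frames for two known ads
clips = {"ad1": [0b1011, 0b0110, 0b1110],
         "ad2": [0b0001, 0b0100, 0b1000]}
index = build_index(clips)
print(best_match(index, [0b1011, 0b1110, 0b0100]))  # prints ad1
```

Because the broadcast signal is barely degraded, most query frames match their stored counterparts exactly, which is what makes this exact-lookup index viable.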
Marco Cruz-Ramos, Christian Hassard and Jl Gordillo.
Electric Vehicle Automation through a Distributed Control System for Search and Rescue Operations
Abstract:
In search and rescue operations, time plays an important role. Therefore, an automatic or teleoperated deployment service to bring rescue robots near to or onto a target location is desired. One proposal for this issue has been to employ larger robots to function as carriers of smaller rescue robots. This work proposes a distributed control architecture to automate an electric vehicle which, after instrumentation and automation, serves as a carrier robot for other robots. Distributed control architectures rely heavily on communication networks for information exchange. The control architecture discussed in this paper is based on a CAN protocol network, to which different nodes have been attached. Three main control loops are closed through the network: speed, steering and the ramp for deploying rescue robots. Tests were carried out to prove the reliability and effectiveness of the distributed control architecture. These tests indicated that a distributed control network based on the CAN protocol is suitable for controlling the speed, steering and ramp of an electric vehicle in real time. In addition, the proposed network provides robustness in terms of communication and opens the possibility of expansion towards a complete control architecture for a fully autonomous vehicle that can serve as a carrier robot.
Marilyn Bello, Maria M. Garcia and Rafael Bello.
A method for building prototypes in the nearest prototype approach based on similarity relations for problems of approximation of functions
Abstract:
In this article, the problem of the approximation of functions is studied using the nearest-prototype paradigm. A method is proposed to construct prototypes using similarity relations; the relations are built from the quality-of-similarity measure and the UMDA metaheuristic. For every similarity class, a prototype is constructed. The experimental results show that the proposed method achieves a significant reduction in the number of instances to consider, while there are no significant differences with regard to the performance reached with all the instances.
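A minimal sketch of nearest-prototype function approximation follows, assuming each prototype is simply the mean of its class (the paper instead builds the classes from similarity relations and UMDA); the data points are invented:

```python
import numpy as np

def build_prototypes(X, y, labels):
    """One prototype per similarity class: mean input vector and mean output."""
    classes = np.unique(labels)
    P = np.array([X[labels == c].mean(0) for c in classes])   # prototype inputs
    v = np.array([y[labels == c].mean() for c in classes])    # prototype outputs
    return P, v

def predict(P, v, x):
    """Approximate f(x) by the output of the nearest prototype."""
    return v[np.argmin(((P - x) ** 2).sum(1))]

# toy 1-D function sampled at four points, grouped into two similarity classes
X = np.array([[0.0], [0.1], [5.0], [5.1]])
y = np.array([1.0, 1.0, 9.0, 9.0])
labels = np.array([0, 0, 1, 1])
P, v = build_prototypes(X, y, labels)
print(predict(P, v, np.array([0.2])))  # prints 1.0
```

The instance reduction the paper reports comes from replacing all training instances with the (much smaller) set of prototypes at prediction time.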
Yenny Villuendas-Rey, Yailé Caballero-Mota and María Matilde García-Lorenzo.
Intelligent Feature and Instance Selection to improve Nearest Neighbor classifiers
Abstract:
Feature and instance selection before classification is a very important task, which can lead to big improvements in both classifier accuracy and speed. However, few papers consider simultaneous or combined instance and feature selection for Nearest Neighbor classifiers in a deterministic way. In this paper, we propose a metadata-guided algorithm, which uses the recently introduced Minimum Neighborhood Rough Sets as the basis for the metadata-guided selection. It also uses Compact Sets and Reducts computation for the combined instance and feature selection. The proposed algorithm deals with mixed and incomplete data and arbitrary dissimilarity functions. Numerical experiments on repository databases show the high-quality performance of the proposed method, compared to previous methods and to the classifier using the original sample, in terms of classifier accuracy as well as instance and feature reduction.
Assaf Toledo, Sophia Katrenko, Stavroula Alexandropoulou, Heidi Klockmann, Asher Stern, Ido Dagan and Yoad Winter.
Semantic Annotation for Textual Entailment Recognition
Abstract:
We introduce a new semantic annotation scheme for the Recognizing Textual Entailment (RTE) dataset, as well as a manually annotated dataset that uses this scheme. The scheme addresses three types of modification that license entailment patterns: restrictive, appositive and conjunctive, with a formal semantic specification of these patterns' contribution to establishing entailment. These inferential constructions were found to occur in 77.68% of the entailments in the RTE 1-3 corpora. They were annotated with a cross-annotator agreement of 70.73% on average. A central aim of our annotations is to examine components that address these phenomena in RTE systems. Specifically, the new annotated dataset is used for examining a syntactic rule base within the BIUTEE recognizer, a publicly available entailment system. According to our tests, the rule base is rarely used to process the phenomena annotated in our corpus, and most of the recognition work is done by other components in the system.
César Navarro, Chidentree Treesatayapun and Arturo Baltazar.
Control of the instantaneous initial contact on a parallel gripper using a fuzzy-rules emulated network with feedback from ultrasonic and force sensors
Abstract:
Many applications of robotic manipulators require precise applied-force control, especially during the transient from free to restricted motion. To achieve this, the instantaneous initial contact (force approaching zero) needs to be determined as accurately as possible. In this work, a multi-input fuzzy rules emulated network (MiFREN) control scheme with adaptation is developed to find the first contact position between the fingers of a parallel gripper and a soft object. We propose the use of an ultrasonic sensor (with Hertzian contact) working simultaneously with a high-sensitivity load cell. The IF-THEN rules for the MiFREN controller and a new cost function using both the ultrasonic signal and the contact force are proposed. The results show that the proposed controller is capable of finding the instantaneous initial contact without any knowledge of the object, its material properties and/or its location in the workspace.
Irving Barragán, Juan Carlos Seck Tuoh and Joselito Medina.
Relationship between Petri Nets and Cellular Automata for the Analysis of Flexible Manufacturing Systems
Abstract:
In this paper an association between Petri nets (PN) and cellular automata (CA) is proposed in order to analyze the global dynamics of flexible manufacturing systems (FMS). This relation is established by taking into account the discreteness of the dynamics of both PN and CA; in particular, finite-capacity PN and one-dimensional CA are used. The work consists of modeling with PN both a single process with a shared resource and two parallel processes with several shared resources. The PN models are simplified by reduction rules and the corresponding one-dimensional CA is then obtained. Finally, the global dynamics of the modeled FMS is described by means of the analysis methods of CA.
Juan Pablo Nieto González and Pedro Pérez Villanueva.
Vehicle Lateral Dynamics Fault Diagnosis Using an Autoassociative Neural Network and a Fuzzy System
Abstract:
The main goal of a fault diagnosis system in a vehicle is to prevent dangerous situations for the occupants. This domain is a complex system, which makes the monitoring task very challenging. On one hand, there is an inherent uncertainty caused by noisy sensor measurements and unmodeled dynamics; on the other hand, false alarms appear naturally due to the high correlation between several variables. This paper presents a new approach, based on historical process data, that can manage the variable correlation and carry out a complete fault diagnosis. In the first phase, the approach learns the normal operating behavior of the system using an autoassociative neural network. In a second phase, a fuzzy system is applied to diminish the false alarms that could originate from the presence of noise, and a competitive neural network is then used to give the final diagnosis. Results are shown for a six-variable vehicle model.
Abstract:
Particle swarm optimization (PSO) is a bio-inspired, population-based optimization method. This paper describes a parallel particle swarm optimization with fuzzy parameter adaptation. We use parallel processing to demonstrate the performance that can be achieved with this technique. To validate the proposed method, we optimize a set of mathematical functions in both sequential and parallel form and perform a comparative study.
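A minimal sequential global-best PSO is sketched below, without the fuzzy parameter adaptation or the parallelism studied in the paper; the inertia and acceleration constants are conventional textbook values, not the paper's, and the sphere function is a standard benchmark:

```python
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO minimizing f over the box [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))       # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)]                    # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

best, val = pso(lambda p: (p ** 2).sum())   # sphere benchmark function
print(val)
```

The paper's fuzzy adaptation would adjust w, c1 and c2 during the run, and its parallel version would evaluate f for the particles concurrently; both are orthogonal to this core loop.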
Leticia Cervantes, Oscar Castillo, Patricia Melin and Fevrier Valdez.
Comparative study of type-1 and type-2 fuzzy systems for the Three-Tank water control problem
Abstract:
In this paper we present simulation results with type-1 fuzzy systems and a type-2 fuzzy granular approach for the intelligent control of non-linear dynamical plants. First, we present the proposed method for intelligent control. Then, the proposed method is illustrated with the benchmark case of three-tank water level control. Finally, a comparison between type-1 fuzzy control and type-2 fuzzy granular control of the water level is presented.
Daniela Sanchez, Patricia Melin, Oscar Castillo and Fevrier Valdez.
Modular Neural Networks Optimization with Hierarchical Genetic Algorithms with Fuzzy Response Integration for Pattern Recognition
Abstract:
In this paper a new model of a Modular Neural Network (MNN) with fuzzy integration based on granular computing is proposed. The topology and parameters of the MNN are optimized with a Hierarchical Genetic Algorithm (HGA). The proposed method divides the data automatically into sub-modules or granules, chooses the percentage of images to use, and selects which images will be used for training. The responses are combined using a fuzzy integrator, whose number of inputs depends on the number of sub-modules or granules the MNN has at a particular moment. The method was applied to human recognition to illustrate its applicability, with good results.
Luis G. Martínez, Juan R. Castro, Guillermo Licea, Antonio R. Diaz and Reynaldo Salas.
Towards a Personality Fuzzy Model based on Big Five Patterns for Engineers Using an ANFIS Learning Approach
Abstract:
This paper proposes an ANFIS (Adaptive Network Based Fuzzy Inference System) learning approach with which we have found patterns of personality types, using Big Five personality tests taken by students in engineering programs. An ANFIS model is applied to the personality traits of the Big Five personality model, obtaining a Takagi-Sugeno-Kang (TSK) Fuzzy Inference System (FIS) model with rules that help us identify profiles of engineering students based on the Big Five test.
Houssam Salmane, Yassine Ruichek and Louahdi Khoudour.
Using hidden Markov model and Dempster–Shafer theory for evaluating and detecting dangerous situations in level crossing environments
Abstract:
In this paper we present a video surveillance system for evaluating and detecting dangerous situations in level crossing environments. The system is composed of the following main parts: a robust algorithm able to detect and separate moving objects in the perceived environment, a Gaussian-propagation-model-based dense optical flow for object tracking, a Hidden Markov model to recognize the trajectories of detected objects, and an uncertainty model using the theory of evidence to calculate the level of danger, allowing the detection of dangerous situations at level crossings. The method is tested on real image sequences and the results are discussed. This work is developed within the framework of the PANsafer project, supported by the ANR VTT program.
Airel Pérez Suárez, José Fco. Martínez Trinidad, Jesús A. Carrasco Ochoa and José E. Medina Pagola.
A new Overlapping Clustering Algorithm based on Graph Theory
Abstract:
Most of the clustering algorithms reported in the literature build disjoint clusters; however, there are several applications where overlapping clustering is useful and important. Although several overlapping clustering algorithms have been proposed, most of them have a high computational complexity or they have some limitations which reduce their usefulness in real problems. In this paper, we introduce a new overlapping clustering algorithm, which solves the limitations of previous algorithms, while it has an acceptable computational complexity. The experimentation, conducted over several standard collections, demonstrates the good performance of the proposed algorithm.
Abstract:
In order to cope with the free-riding problem in file-sharing P2P systems, two kinds of incentive mechanisms have been proposed: reciprocity based and currency based. The main goal of this work was to study the impact of those incentive mechanisms on the emergence of cooperation in file-sharing P2P systems. For each kind of incentive mechanism we designed a game, and the outcome of this game was used as a fitness function to carry out an evolutionary process. We observed that the Currency Game obtains a sufficiently cooperative population slightly faster than the Reciprocity Game but, in the long run, the Reciprocity Game outperforms the Currency Game, because the final populations under the former are consistently more cooperative than those produced by the latter.
Abstract:
In Evolutionary Robotics (ER), bio-inspired algorithms are used to generate robotic behavior. Several researchers have used classic Genetic Algorithms (GA), or adaptations of GAs, to develop experiments in ER. Here, we use Differential Evolution (DE) to obtain a wall-following behavior for an e-puck robot. We detail the results, conclusions and advantages of using the DE approach in our application.
Ramón Zatarain Cabada, María Lucía Barrón Estrada, Yasmín Hernández Pérez and Carlos Alberto Reyes García.
Designing and Implementing Affective and Intelligent Tutoring Systems in a Learning Social Network
Abstract:
In this paper we present step by step the design and implementation of affective tutoring systems inside a learning social network using soft computing technologies. We have designed a new architecture for an entire system that includes a new social network with an educational approach, and a set of intelligent tutoring systems for mathematics learning which analyze and evaluate cognitive and affective aspects of the learners. Moreover, our intelligent tutoring systems were developed based on different theories, concepts and technologies such as Knowledge Space Theory for the domain module, an overlay model for the student module, ACT-R Theory of Cognition and fuzzy logic for the tutoring module, Kohonen neural networks for emotion recognition and decision theory to help students achieve positive affective states. We present preliminary results with different groups of students using the software system.
Dolores Torres, Aurora Torres, Felipe Cuellar, Luz Torres, Eunice. Ponce de León, Francisco Pinales and Jorge Cardona.
Identification of risk factors for TRALI using a hybrid algorithm
Abstract:
This paper presents a hybrid evolutionary algorithm to identify risk factors associated with transfusion-related acute lung injury (TRALI). This medical condition occurs mainly in intensive care units and operating rooms, and the main strategy for its treatment is prevention. The proposed algorithm works with information from the model known as "two hits", in which the first hit is the original disease and the second corresponds to the blood transfusion. The algorithm is a genetic algorithm hybridized with testor analysis. This research used information from 87 patients treated at the Centenary Hospital Miguel Hidalgo in the city of Aguascalientes, Mexico. The analysis found that most variables are related to the first hit, while only a few belong to the second. It also revealed, among other discoveries, that some variables physicians believed significant a priori were not very important.
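The genetic part of such a hybrid can be sketched as a binary GA over candidate risk-factor subsets; the fitness function below is a toy stand-in for the testor-based evaluation used in the paper, and the "true" factor indices are invented:

```python
import random

def genetic_search(n_bits, fitness, pop_size=40, gens=60, p_mut=0.05, seed=0):
    """Binary GA: tournament selection, one-point crossover, bit-flip mutation."""
    rnd = random.Random(seed)
    pop = [[rnd.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                                   # elitism: keep best two
        while len(nxt) < pop_size:
            a, b = (max(rnd.sample(pop, 3), key=fitness) for _ in range(2))
            cut = rnd.randrange(1, n_bits)
            child = a[:cut] + b[cut:]                      # one-point crossover
            nxt.append([g ^ (rnd.random() < p_mut) for g in child])
        pop = nxt
    return max(pop, key=fitness)

# toy fitness: variables 0, 2 and 5 stand in for the "true" risk factors
target = {0, 2, 5}
fit = lambda m: (sum(1 for i, g in enumerate(m) if g and i in target)
                 - 0.5 * sum(1 for i, g in enumerate(m) if g and i not in target))
best = genetic_search(10, fit)
print(best)
```

In the actual algorithm the fitness would reward chromosomes that form typical testors of the patient data, so the surviving bit patterns point at the discriminating clinical variables.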
Santiago Omar Caballero Morales and Felipe Trujillo-Romero.
Dynamic Estimation of Phoneme Confusion Patterns with a Genetic Algorithm to Improve the Performance of Metamodels for Recognition of Disordered Speech
Abstract:
A field of research in Automatic Speech Recognition (ASR) is the development of assistive technology, particularly for people with speech disabilities. Diverse techniques have been proposed to accomplish this task accurately, among them the use of Metamodels. In this paper we present an approach to improve the performance of Metamodels which consists in using a speaker's phoneme confusion matrix to model the pronunciation patterns of that speaker. In contrast with previous confusion-matrix approaches, where the confusion matrix is estimated only with fixed settings for the language model, here we explore the response of the ASR system under different language model restrictions. A Genetic Algorithm (GA) was applied to balance the contribution of each confusion-matrix estimate and thus provide more reliable patterns. When these estimates were incorporated into the ASR process with the Metamodels, consistent improvement in accuracy was accomplished in tests with speakers with mild to severe dysarthria, a common speech disorder.
Angel-Ivan Garcia-Moreno, José-Joel Gonzalez-Barbosa, Juan-Bautista Hurtado-Ramos, Francisco-Javier Ornelas-Rodríguez and Alfonso Ramirez-Pedraza.
Automatic 3D City Reconstruction Platform using a LIDAR and DGPS
Abstract:
This paper introduces an approach for geo-registered 3D reconstruction of an outdoor scene using LIDAR (Light Detection And Ranging) technology and high-precision DGPS. We develop a computationally efficient method for 3D reconstruction of city-sized environments using both sensors, providing high-detail street views. In the proposed method, the translation between local maps is obtained from the GPS data, and the rotation is obtained by extracting planes from the two point clouds and matching them; after extracting these parameters, we merge many local scenes to obtain a global map. We validate the accuracy of the proposed method by comparing the reconstruction against real measurements and plans of the scanned scene. The results show that the proposed system is a solution for 3D reconstruction of large-scale city models.
Abstract:
In recent years, a technique known as thermography has again been seriously considered as a complementary tool for the pre-diagnosis of breast cancer. In this paper, we evaluate the potential of such a technique using Bayesian networks, on a database containing 99 cases of patients with suspicion of breast cancer. Each patient has corresponding results for different diagnostic tests: mammography, thermography and biopsy. Our results suggest that thermography has a performance comparable to that of mammography in the pre-diagnosis of breast cancer (78.57% accuracy for both). Moreover, the Bayesian network resulting from this database shows unexpected interactions among the thermographic attributes, especially those directly related to the class variable.
Boris Kriheli and Eugene Levner.
Search and Detection of Failed Components in Repairable Complex Systems under Imperfect Inspections
Abstract:
We study a problem of scheduling search-and-detection activities in complex technological and organizational systems. Prior probabilities of failure are known for each element, and the decision maker has to sequentially inspect the components so as to find a failed component within a given level of confidence. The inspections are imperfect: there is a probability of overlooking the failed element and a probability of a "false alarm". An index-based algorithm for finding the optimal search strategy is developed. An example for robotic search systems is discussed.
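The imperfect-inspection setting can be made concrete with a small sketch. The greedy index rule and the names (`p` for prior failure probabilities, `q` for detection probabilities, `f` for false-alarm probabilities, `c` for inspection costs) are illustrative assumptions, not the authors' exact algorithm:

```python
def next_to_inspect(p, q, c):
    """Greedy index rule: inspect the component with the largest
    detection-rate-per-cost ratio p[i]*q[i]/c[i]."""
    return max(range(len(p)), key=lambda i: p[i] * q[i] / c[i])

def update_no_alarm(p, q, f, i):
    """Bayes update of the failure posteriors after inspecting
    component i and observing no alarm (failures assumed independent,
    so only component i's probability changes).
    Likelihood of 'no alarm at i' under each hypothesis:
      component i failed      -> 1 - q[i]  (miss)
      component i not failed  -> 1 - f[i]  (no false alarm)"""
    no_alarm = p[i] * (1 - q[i]) + (1 - p[i]) * (1 - f[i])
    p = list(p)
    p[i] = p[i] * (1 - q[i]) / no_alarm
    return p
```

After a "no alarm" outcome the posterior failure probability of the inspected component drops, and the index rule moves on to the next most promising component; the search stops once the residual failure probability falls below the required confidence level.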
Abstract:
We present a hybrid method to produce a velocity model of the Earth's crust using evolutionary and seismic tomography algorithms. This method takes advantage of the global search ability of an evolution strategy and the quick convergence of an iterative three-dimensional seismic tomography technique to generate a model of the Earth's crustal structure from recorded arrival times of wave fronts produced by controlled sources. The evolution strategy finds a three-dimensional velocity model with constant lateral velocity layers that minimizes the root mean square residuals computed by the tomographic algorithm. The model found is provided as the initial search point to a first-arrival traveltime seismic tomography algorithm, which then computes the final three-dimensional velocity model. The method was tested with a real-world data set from an active source experiment performed in the Potrillo Volcanic Field, in Southern New Mexico. Results show that our hybrid method obtains faster convergence and more accurate results than the conventional methods, and does not require an expert.
Juan Pablo Nieto González.
Multiple Fault Diagnosis in Electrical Power Systems with Dynamic Load Changes Using Soft Computing
Abstract:
Power system monitoring is particularly challenging due to the presence of dynamic load changes in the normal operation mode of network nodes, as well as the presence of both continuous and discrete variables, noisy information, and lack or excess of data. In this domain, the need to develop more powerful approaches has been recognized, and hybrid techniques that combine several reasoning methods are starting to be used. This paper proposes a fault diagnosis framework that is able to locate the set of nodes involved in multiple fault events. The proposal is a methodology based on system history data. It detects the faulty nodes, the type of fault in those nodes, and the time at which it occurs. The framework is composed of two phases: in the first phase, a probabilistic neural network is trained with the eigenvalues of voltage data collected during normal operation and during symmetrical and asymmetrical fault disturbances. The second phase uses an Adaptive Neuro-Fuzzy Inference System (ANFIS) to give the final diagnosis. A set of simulations is carried out over an electrical power system proposed by the IEEE to show the performance of the approach. A comparison is made against a diagnostic system based on probabilistic logic.
Fernando Gaxiola, Patricia Melin, Fevrier Valdez and Oscar Castillo.
Neural Network with Type-2 Fuzzy Weight Adjustment for Human Recognition based on the Human Iris Biometrics
Abstract:
In this paper a neural network architecture with type-2 fuzzy weight adjustment based on the backpropagation method is proposed. The structure of the neural network for the recognition problem is analyzed in this paper. The method for obtaining the new weights in each hidden layer and output layer follows the functionality of the backpropagation method; the difference between the traditional method and the proposed one is the use of a type-2 fuzzy inference system to obtain the new weights from the current weights and the weight change computed in that process. The proposed approach is applied to a case study of pattern recognition using human iris biometrics.
Diego Uribe.
Measuring Feature Distributions in Sentiment Classification
Abstract:
We address in this paper the adaptation problem in sentiment classification. The labeled data required by sentiment classifiers is not always available. Given a set of labeled data from different domains and a small amount of labeled data from the target domain, it would be interesting to determine which subset of those domains has a feature distribution most similar to the target domain. In this way, in the absence of labeled data for a particular target domain, it would be plausible to make use of the labeled data corresponding to the most similar domains.
Enrique Guzman, Ignacio Arroyo, Carlos Gonzales and Oleksiy Pogrebnyak.
FPGA-based architecture for Extended Associative Memories and its Application in Image Recognition
Abstract:
In this paper, an efficient FPGA-based architecture for Extended Associative Memories (EAM), focused on the classification stage of an image recognition system for real-time applications, is presented. Conventional processors use an algorithmic approach, executing a program of instructions sequentially, whereas in FPGA devices many hypotheses can be evaluated concurrently; this feature makes FPGAs an ideal device to implement models consisting of processing units that work concurrently, such as the EAM. The EAM training phase is only used during the generation of the associative memory; once this task is completed, this module is disconnected from the system, so its hardware architecture was designed to optimize FPGA resource usage. On the other hand, the EAM can be part of a system that must work in real time, such as a perception system for a mobile robot or a personal identification system; for this reason, the hardware architecture of the EAM classification phase was designed to obtain high processing speeds. Experimental results show the high performance of our proposal when altered versions of the images used to train the memory are presented.
Loreto Gonzalez-Hernandez, Jose Torres-Jimenez, Nelson Rangel-Valdez and Josue Bracho-Rios.
A post-optimization strategy for combinatorial testing: test suite reduction through the identification of wild cards and merge of rows
Abstract:
The development of a new software system involves extensive tests of the software functionality in order to identify possible failures. It would be ideal to test all possible input cases (configurations), but an exhaustive approach usually demands too much cost and time. The test suite reduction problem can be defined as the task of generating a small set of test cases under certain requirements. One way to design test suites is through interaction testing using a matrix called a Covering Array, CA(N;t,k,v), which guarantees that all configurations among every $t$ parameters are covered. This paper presents a simple strategy that reduces the number of rows of a CA. The algorithms represent a post-optimization process that detects wild cards (values that can be changed arbitrarily without the CA losing its degree of coverage) and uses them to merge rows. In the experiment, 667 CAs created by a state-of-the-art algorithm were subjected to the reduction process. The results report a reduction in the size of 347 CAs (52% of the cases). As part of these results, we report the matrix for CA(42;2,8,6), constructed from CA(57;2,8,6) with an impressive reduction of 15 rows, which is the best upper bound so far.
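For intuition, the wild-card idea for strength t=2 can be sketched as follows: a cell is a wild card when every pair it participates in is also covered by some other row, and two rows can be merged when, column by column, they agree or one side is a wild card. This is a simplified illustration under our own naming, not the authors' algorithm (which must also recheck coverage after each merge):

```python
from itertools import combinations

def covered_pairs(rows, skip_row=None):
    """All 2-way interactions ((col_i, col_j), (v_i, v_j)) covered
    by the array, optionally ignoring one row."""
    cov = set()
    for r, row in enumerate(rows):
        if r == skip_row:
            continue
        for i, j in combinations(range(len(row)), 2):
            cov.add(((i, j), (row[i], row[j])))
    return cov

def wildcards(rows):
    """Cells (row, col) whose value can change arbitrarily: every
    pair the cell participates in is covered by some other row."""
    wc, k = set(), len(rows[0])
    for r, row in enumerate(rows):
        rest = covered_pairs(rows, skip_row=r)
        for c in range(k):
            pairs = set()
            for j in range(k):
                if j == c:
                    continue
                i, jj = (c, j) if c < j else (j, c)
                pairs.add(((i, jj), (row[i], row[jj])))
            if pairs <= rest:
                wc.add((r, c))
    return wc

def can_merge(rows, r1, r2, wc):
    """Simplified merge test: in every column the two rows agree,
    or at least one of the two cells is a wild card."""
    return all(rows[r1][c] == rows[r2][c]
               or (r1, c) in wc or (r2, c) in wc
               for c in range(len(rows[r1])))
```

On a CA with no redundancy (e.g. an orthogonal array) no wild cards exist; adding a duplicate row makes all of its cells wild cards, so it can be merged away.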
Angel Kuri-Morales.
Application of a Method Based on Computational Intelligence for the Optimization of Resources Determined from Multivariate Phenomena
Abstract:
The optimization of complex systems in which one of the variables is time has been attempted in the past, but its inherent mathematical complexity makes it hard to tackle with standard methods. In this paper we solve this problem by appealing to two tools of computational intelligence: a) Genetic algorithms and b) Artificial Neural Networks (NN). We assume that there is a large enough set of data whose intrinsic information is enough to reflect the behavior of the system. We solved the problem by, first, designing a system capable of predicting selected variables from a multivariate environment. For each one of the variables we trained a NN such that the variable at time t+k is expressed as a non-linear combination of a subset of the variables at time t. Having found the forecasted variables, we proceeded to optimize their combination such that its cost function is minimized. In our case, the function to minimize expresses the cost of operation of an economic system related to the physical distribution of coins and bills. The cost of transporting, insuring, storing and distributing such currency is large enough to justify the time invested in this study. We discuss the methods, the algorithms used and the results obtained in experiments as of today.
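The forecasting setup described here, expressing a variable at time t+k as a function of the variables at time t, amounts to building supervised training pairs from the historical series. A minimal sketch (the function name and data layout are our own, not the paper's):

```python
def forecast_pairs(series, k, target_index):
    """Build (input, target) pairs for training a forecaster:
    the input is the full feature vector at time t, the target is
    the value of the chosen variable at time t+k."""
    X = series[:len(series) - k]                 # vectors at time t
    y = [row[target_index] for row in series[k:]]  # target at t+k
    return X, y
```

A neural network regressor is then fit on (X, y), with one such network trained per forecasted variable.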
Ernesto Cortés Pérez, Airel Núñez Rodríguez, Rosa E. Moreno de La Torre, Orlando Lastres Danguillecourt and J. Rafael Dorrego Portela.
Performance Analysis of ANFIS in Short-Term Wind Speed Prediction
Abstract:
Results are presented on the performance of an Adaptive Neuro-Fuzzy Inference System (ANFIS) for wind velocity forecasts in the Isthmus of Tehuantepec region in the state of Oaxaca, Mexico. The data bank was provided by the meteorological station located at the University of Isthmus, Tehuantepec campus, and covers the period 2008-2011. Three data models were constructed to carry out 16, 24 and 48-hour forecasts using the following variables: wind velocity, temperature, barometric pressure, and date. The performance measure for the three models is the mean square error (MSE).
Abstract:
In structural engineering, the analysis of civil buildings through seismic tests has been generalized by the use of shaking tables. This method requires advanced control systems. In our research we show the implementation of a model reference adaptive control scheme for the position of a shaking table, modified by the introduction of a Smith predictor to compensate for the error produced by the system delay. The mechanism is based on a slider-crank device. The control system is implemented on a 32-bit platform by Microchip, and the control is performed via a remote server using the RENATA network. The results of our adaptive control system were experimentally verified using the shaking table with a 24 kg mass as a load.
Abstract:
A single chaotic neuron can be developed using a single neuron with a Gaussian activation function and feeding the output back to one of the inputs. The Gaussian activation function has two parameters: the center of mass and the width of the bell, called the sensibility factor. The choice of these parameters determines the behavior of the single neuron, which can be stationary but is most of the time dynamic, even chaotic. This single neuron is implemented in an embedded system, generating a set of bifurcation plots that illustrate the dynamic complexity this simple system can exhibit.
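The feedback loop described above can be sketched as a one-dimensional map. The exact Gaussian form and the parameter values below are illustrative assumptions about the neuron model, not the paper's embedded implementation:

```python
import math

def gaussian_neuron_orbit(center, sigma, x0=0.5, steps=200):
    """Iterate x_{t+1} = exp(-(x_t - center)**2 / (2 * sigma**2)):
    a single neuron with a Gaussian activation whose output is fed
    back to one of its inputs.  center is the center of mass, sigma
    the sensibility (width) factor."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(math.exp(-((x - center) ** 2) / (2 * sigma ** 2)))
    return xs

def bifurcation_slice(centers, sigma, transient=300, keep=30):
    """For each center value, keep the orbit tail after a transient;
    plotting the tails against the swept parameter gives a
    bifurcation diagram like those mentioned in the abstract."""
    return {c: gaussian_neuron_orbit(c, sigma, steps=transient + keep)[-keep:]
            for c in centers}
```

Sweeping either parameter while holding the other fixed reveals the transitions from fixed points to periodic and chaotic regimes.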
Nahun Loya, David Pinto, Ivan Olmos Pineda, Helena Gómez-Adorno and Yuridiana Alemán.
Forecast of air quality based on Ozone by decision trees and neural networks
Abstract:
The present paper aims to find decision tree and neural network models for forecasting ozone levels based on air quality predictors, working with the data sets of the Atmospheric Monitoring System of Mexico City (SIMAT). The data used for this study were obtained from SIMAT and correspond to hourly measurements from the period 2010 to 2011 at the meteorological stations known as Pedregal, Tlalnepantla and Xalostoc. The data set is composed of four chemical variables and four meteorological variables: ozone, carbon monoxide, nitrogen dioxide, sulfur dioxide, temperature, relative humidity, wind speed and wind direction. The developed models will be applied to forecast air quality based on hourly (daily) measurements.
Abstract:
A method for constructing an emotion lexicon from a text corpus is proposed. For this, fuzzy C-means clustering followed by a Support Vector Machine (SVM) based classification is used. The objective function of the fuzzy C-means clustering is modified by including additional functions, such as point-wise mutual information (PMI) and a Law of Gravitation (LGr) measure between word pairs, together with similarity scores. Similar functions are used to modify the kernel functions of the SVM. With this, we compute a modified probability function of the system instead of the intractable actual probability function. A criterion for parameter selection is proposed on the basis of the Law of Gravitation between two word pairs. The proposed method extracts emotion orientations with high accuracy on the generalized lexical network and on the additional modified networks. The system is evaluated using several other psychological features provided with the ISEAR dataset. Satisfactory results are obtained.
Jesús Emeterio Navarro-Barrientos, Dieter Armbruster, Hongmin Li, Morgan Dempsey and Karl G. Kempf.
Towards Automated Extraction of Expert System Rules from Sales Data for the Semiconductor Market
Abstract:
Chip purchasing policies of the Original Equipment Manufacturers (OEMs) of laptop computers are characterized by probabilistic rules. The rules are extracted from data on products bought by the OEMs in the semiconductor market over twenty quarters.
We present the data collected and a qualitative data mining approach to extract probabilistic rules from the data that best characterize the purchasing behavior of the OEMs. We validate and simulate the extracted probabilistic rules as a first step towards building an expert system for predicting purchasing behavior in the semiconductor market. Our results show a prediction score of approximately 95% over a one-year window for quarterly data.
Luis Mateos.
DeWaLoP In-pipe Robot Position from Visual Patterns
Abstract:
This article presents a methodology to position an in-pipe robot in the center of a pipe using line matching applied to the unwrapped image of an omni-directional camera located at the robot's front end. The advantage of using an omni-directional camera inside the pipes is the relation between the cylindrical image obtained from the camera and the position of the camera on the robot inside the pipe, where by direct relation the circular features become linear. The objective of the DeWaLoP in-pipe robot is to redevelop the cast-iron pipe joints of the over 100-year-old fresh water supply systems of Vienna and Bratislava. In order to redevelop the pipes, the robot uses a rotating mechanism to clean and apply a sealing material to the pipe joints. This mechanism must be set perfectly in the center of the pipe to work properly. Therefore, it is crucial to set the in-pipe robot in the center of the pipe's horizontal x and y axes.
Juan Carlos Gomez and Marie-Francine Moens.
Document Categorization Based on Minimum Loss of Reconstruction Information
Abstract:
In this paper we present and validate a novel approach for single-label multi-class document categorization. The proposed categorization approach relies on the statistical property of Principal Component Analysis (PCA), which minimizes the reconstruction error of the training documents used to compute a low-rank category transformation matrix. This matrix allows projecting the original training documents from a given category to a new low-rank space and then optimally reconstructs them to the original space with a minimum loss of information. The proposed method, called Minimum Loss of Reconstruction Information (mLRI) classifier, uses this property, extends and applies it to unseen documents. Several experiments on three well-known multi-class datasets for text categorization are conducted in order to highlight the stable and generally better performance of the proposed approach in comparison with other popular categorization methods.
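The minimum-reconstruction-error idea can be sketched with per-class PCA: fit one low-rank basis per category and assign an unseen document to the category whose basis reconstructs it with the smallest error. This is a simplified illustration in the spirit of mLRI, not the authors' exact formulation (class and parameter names are our own):

```python
import numpy as np

class PCAReconstructionClassifier:
    """One low-rank PCA model per class; predict by minimum
    reconstruction error."""
    def __init__(self, rank=2):
        self.rank = rank
        self.models = {}                # class -> (mean, components)

    def fit(self, X, y):
        y = np.asarray(y)
        for c in set(y.tolist()):
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            # Right singular vectors are the principal directions.
            _, _, vt = np.linalg.svd(Xc - mu, full_matrices=False)
            self.models[c] = (mu, vt[:self.rank])
        return self

    def predict(self, X):
        labels = []
        for x in X:
            errs = {}
            for c, (mu, comps) in self.models.items():
                z = (x - mu) @ comps.T      # project to low-rank space
                rec = mu + z @ comps        # optimal reconstruction
                errs[c] = np.linalg.norm(x - rec)
            labels.append(min(errs, key=errs.get))
        return labels
```

In a text setting, `X` would hold tf-idf (or similar) document vectors; a document far from a category's principal subspace reconstructs poorly and is rejected by that category.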
Alan I. Torres-Nogales, Santiago E. Conant-Pablos and Hugo Terashima-Marín.
Local Features Classification For Adaptive Tracking
Abstract:
In this paper we propose using invariant local features and a global appearance validation for building a robust object detector that can be learned via semi-supervised learning. Although local features have been used before for several object detection and tracking applications, these approaches often model an object as a collection of key-points and descriptors, which then involves constructing a set of correspondences between object and image key-points via descriptor matching or key-point classification. However, these algorithms cannot properly adapt to long video sequences due to their limited capacity for incremental update. We differ from these approaches in that we obtain key-point-to-object correspondences instead of key-point-to-key-point correspondences, converting the problem into an easier binary classification problem, which allows us to use a state-of-the-art algorithm to incrementally update our classifier. Our approach is embedded into the Tracking-Learning-Detection (TLD) framework [1] by performing a set of changes in the detection stage. We show that measuring the density of positive local features given by a binary classifier trained on-line is a good signal of the object's presence, and in combination with a global appearance validation it yields a strong object detector for assisting a tracking algorithm. In order to validate our approach we compare the tracking results against the original TLD approach on a set of 10 videos.
[1] Z. Kalal, J. Matas, and K. Mikolajczyk. P-N Learning: Bootstrapping Binary Classifiers by Structural Constraints. Conference on Computer Vision and Pattern Recognition, 2010.
Antonio Neme, Sergio Hernández and Vicente Carrión.
Identification of the minimal set of attributes that maximizes the information towards the author of a political discourse: The case of the three principal candidates in the Mexican presidential elections.
Abstract:
Authorship attribution is a task that has attracted the attention of the natural language processing and machine learning communities in the past few years. In this contribution, we are interested in finding a general measure of the style followed in the texts of the three main candidates in the Mexican presidential elections of 2012. We analyzed dozens of texts (discourses) from these three authors. We applied tools from the time series processing field and the machine learning community in order to identify the overall attributes that define the writing style of the three authors. Several attributes and time series were extracted from each text. A novel methodology, based on mutual information, was applied to those time series and attributes to explore the relevance of each attribute for linearly separating the texts according to their authorship. We show that fewer than 20 easily obtained variables are enough to identify, by means of a linear recognizer, the authorship of a text from among the three considered authors.
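Ranking attributes by their mutual information with the author label can be sketched directly from the definition. This is a generic illustration of the idea (the attribute values are assumed to be already discretized), not the paper's exact methodology:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits, estimated from two paired discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def rank_attributes(attribute_table, authors):
    """attribute_table: dict name -> discretized values, one per text.
    Returns attribute names sorted by decreasing MI with the author."""
    return sorted(attribute_table,
                  key=lambda a: mutual_information(attribute_table[a], authors),
                  reverse=True)
```

An attribute perfectly aligned with authorship carries the full label entropy (1 bit for two balanced authors), while an uninformative attribute scores near zero, so the top of the ranking is the minimal attribute set the abstract refers to.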
Ieroham Baruch, Sergio Hernandez and Jacob Moreno.
Recurrent Neural Identification and I-Term Sliding Mode Control of a Vehicle System Using Levenberg-Marquardt Learning
Abstract:
A new Modular Recurrent Trainable Neural Network (MRTNN) has been used for system identification of a vehicle motor system. The first MRTNN module identified the exponential part of the unknown vehicle motor plant, and the second one the oscillatory part of that plant. The vehicle motor plant has been controlled by an indirect sliding mode adaptive control system with an integral term. The sliding mode controller used the estimated parameters and states to suppress the vehicle plant oscillations, and the static plant output control error is reduced by an I-term added to the control.
Abstract:
The growth in interest in RGB-D devices (e.g. Microsoft Kinect or ASUS Xtion Pro) is based on their low price, as well as the wide range of possible applications. These devices can provide skeletal data consisting of 3D position, as well as orientation data, which can be further used for pose or action recognition. Data for 15 or 20 joints can be retrieved, depending on the libraries used. Recently, many datasets have been made available which allow the comparison of different action recognition approaches for diverse applications (e.g. gaming, Ambient-Assisted Living, etc.). In this work, a genetic algorithm is used to determine the contribution of each of the skeleton's joints to the accuracy of an action recognition algorithm, thus using or ignoring the data from each joint depending on its relevance. The proposed method has been validated using a k-means-based action recognition approach and the MSR-Action3D dataset for testing. Results show the presented algorithm is able to improve the recognition rates while reducing the feature size.
Roman Barták and Vladimír Rovenský.
Verifying Nested Workflows with Extra Constraints
Abstract:
Nested workflows are used to formally describe processes with a hierarchical structure similar to hierarchical task networks in planning. The nested structure guarantees that the workflow is valid in the sense that, for each involved activity, it is possible to select a process that contains the activity. However, if extra synchronization, precedence, or causal constraints are added to the nested structure, the problem of selecting a process containing a given activity becomes NP-complete. This paper presents techniques for verifying such workflows, in particular for ensuring that a process exists for each activity.
Karina Bogdan and Valdinei Silva.
Different Approaches to Feature Selection in MDPs
Abstract:
In problems modeled as Markov Decision Processes (MDP), knowledge transfer is related to the notion of generalization and state abstraction. Abstraction can be obtained through factored representation by describing states with a set of features. Thus, the definition of the best action to be taken in a state can be easily transferred to similar states, i.e., states with similar features. In this paper we present two approaches to find an appropriate compact set of features for such abstraction, thus facilitating the transfer of knowledge to new problems. We also present heuristic versions of both approaches and compare all of the approaches within a discrete simulated navigation problem.
Renato Minami and Valdinei Silva.
Shortest stochastic path with risk sensitive evaluation
Abstract:
In an environment of uncertainty where decisions must be taken, how does one make a decision that takes risk into account? The shortest stochastic path (SSP) problem models the problem of reaching a goal with the least cost. Under uncertainty, however, the best decision may minimize expected cost, minimize variance, minimize the worst case, maximize the best case, etc. Markov Decision Processes (MDPs) define the optimal decision in the shortest stochastic path problem as the decision that minimizes expected cost, but MDPs do not account for risk. An extension of MDPs with few works in the Artificial Intelligence literature is the Risk-Sensitive MDP (RSMDP). RSMDPs consider risk and integrate expected cost, variance, worst case and best case in a simple way. We show theoretically the differences between MDPs and RSMDPs for modeling the SSP problem and show the results of each model in an artificial scenario.
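The risk-neutral baseline that the abstract contrasts with RSMDPs can be sketched as a short value-iteration routine over an SSP. This is a toy formulation under our own data layout, not the paper's model (an RSMDP would additionally weigh variance and best/worst-case outcomes):

```python
def ssp_value_iteration(states, actions, cost, trans, goal, iters=2000):
    """Expected-cost value iteration for a shortest stochastic path
    problem.  trans[s][a] is a list of (next_state, prob) pairs and
    cost[s][a] the immediate cost; the goal state absorbs at cost 0."""
    V = {s: 0.0 for s in states}

    def q(s, a):
        return cost[s][a] + sum(p * V[t] for t, p in trans[s][a])

    for _ in range(iters):
        for s in states:
            if s != goal:
                V[s] = min(q(s, a) for a in actions[s])
    policy = {s: min(actions[s], key=lambda a: q(s, a))
              for s in states if s != goal}
    return V, policy
```

For example, an action that reaches the goal with probability 0.5 at unit cost has expected cost V = 1 + 0.5 V, i.e. V = 2; a risk-sensitive criterion could prefer a dearer but less variable alternative.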
María De Lourdes Guadalupe Martínez-Villaseñor and Miguel González-Mendoza.
Process of concept alignment for interoperability between heterogeneous sources
Abstract:
Some researchers in the user modeling community envision the need to share and reuse information scattered over different user models from heterogeneous sources. In a multi-application environment, each application and service must repeat the effort of building a user model to obtain just a narrow understanding of the user. Sharing and reusing information between models can save the user from repeated configurations, help deal with the “cold start” problem of applications and services, and enrich user models to obtain a better understanding of the user. But gathering distributed user information from heterogeneous sources to achieve user model interoperability implies handling syntactic and semantic heterogeneity. In order to integrate a ubiquitous user model gathering information from heterogeneous sources, and to be able to reuse this information, an alignment process between similar concepts based on the Simple Knowledge Organization System (SKOS) data model is proposed. We show that this process of concept alignment for interoperability, based on element-level matching together with structure-level matching, can enable interoperability between social networking applications, FOAF, Personal Health Records (PHR) and personal devices. We present an example of how to reuse profile information for Web service personalization.
Ricardo Parra and Leonardo Garrido.
Bayesian Networks for Micromanagement Decision Imitation in the RTS Game Starcraft
Abstract:
Real-time strategy (RTS) games provide various research areas for Artificial Intelligence. One of these areas involves the management of individual units or small groups of units, called micromanagement. This research provides an approach that implements imitation of the player's decisions as a means for micromanagement combat in the RTS game Starcraft. A Bayesian network is generated to fit the decisions taken by the player and then trained with information gathered from the player's combat micromanagement games. This network is then incorporated into the game in order to enhance the performance of the game's built-in Artificial Intelligence module. Moreover, as the increase in performance is directly related to the player's own game, it enriches the player's gaming experience. The results obtained prove that imitation through the implementation of Bayesian networks can be achieved. Consequently, this provided an increase in performance compared to that of the game's built-in AI module.
Abstract:
This paper proposes a Multilayer Perceptron (MLP) with a new higher-order neuron whose decision region is generated by a conic section (circle, ellipse, parabola, hyperbola). We call it the hyper-conic neuron. The conic neuron is defined in the conformal space, where it can work freely and take advantage of all the rules of Geometric (Clifford) Algebra. The proposed neuron is a non-linear associator that estimates distances from vectors (points) to decision regions. The computational model of the conic neuron is based on the geometric product (an outer product plus an inner product) of geometric algebra in conformal space. The Particle Swarm Optimization (PSO) algorithm is used to find the values of the weights that properly define an MLP for a given classification problem. The performance is presented on a classical benchmark used in neural computing.
Mauricio J. Garcia Vazquez, Jorge Francisco Madrigal, Oscar Mar, Claudia Esteves and Jean-Bernard Hayet.
Robust visual localization of a humanoid robot in a symmetric space
Abstract:
Solving the global localization problem for a humanoid robot in a fully symmetric environment, such as the soccer field of the RoboCup games under the most recent rules, may be a difficult task. It requires maintaining the multi-modality of the robot position distribution whenever necessary, to ensure that the correct position is among the distribution modes. We describe a three-level approach for handling this problem, where (1) a particle filter is run to implement the Bayes filter and elegantly integrate our probabilistic knowledge about the visual observations and the robot motion, (2) a mode management system explicitly maintains the distribution modes and allows guaranteeing the satisfaction of constraints such as unresolved symmetry, and (3) a discrete state machine over the modes is used to determine the most pertinent observation models. We present very promising results of our strategy both in simulated environments and in real configurations.
Hamid Parvin, Sara Ansari and Sajad Parvin.
A Diversity Production in Ensemble of Classifiers
Abstract:
Generally, in the design of combinational classifier systems, the more diverse the results of the classifiers, the more appropriate the final result. In this paper, a new method for combining classifiers is proposed whose main idea is heuristic retraining of classifiers. Specifically, in the new method, which proposes a new approach for generating diversity during the creation of an ensemble, a base classifier is first run; then, focusing on the drawbacks of this base classifier, other classifiers are retrained heuristically. Each of these classifiers looks at the data with its own attitude. The main goal of the retrained classifiers is to leverage the error-prone data. Thus, the retrained classifiers usually have different votes about the sample points which are close to boundaries and may likely be erroneous. Experiments show significant improvements in terms of accuracy of consensus classification. This study also investigates which crucial data points should be focused on to obtain better performance in the base classifiers. It also shows that adding all “difficult” data points, as in boosting, does not always yield better performance. The experimental results show that the proposed algorithm outperforms some of the best methods in the literature. Empirically, the authors claim that both forcing crucial data points into the training set and eliminating them from the training set can, under the right conditions, yield more accurate results.
Mireya García-Vázquez and Alejandro Alvaro Ramírez-Acosta.
Two Adaptive Methods Based on Edge Analysis for Improved Concealing Damaged Coded Images in Critical Error Situations
Abstract:
The original coded image signal can be affected when it is transmitted over error-prone networks. Error concealment techniques for compressed images or video attempt to exploit correctly received information to recover corrupted regions. If these regions contain edges, most conventional approaches cause noticeable visual degradation, because they do not consider the edge characteristics of images. Spatial error concealment methods cannot work well, especially under high-burst error conditions, since a great deal of neighboring information has been corrupted or lost (so-called ‘critical error situations’). This paper proposes two adaptive and effective methods to select the required support area, based on edge analysis using local geometric information, suitable basis functions and optimal expansion coefficients, in order to conceal the damaged macroblocks in critical error situations. Experimental results show that the two proposed approaches outperform existing methods by up to 7.9 dB on average.
José-Lázaro Martínez-Rodríguez, Víctor-Jesús Sosa-Sosa and Ivan Lopez-Arevalo.
Automatic Discovery of Web content related to IT in the Mexican Internet based on supervised classifiers
Abstract:
General web search engines such as Google, Yahoo, and Bing have been very successful information retrieval tools. However, many users with domain-specific interests are still disappointed with the responses obtained from these generic tools. This situation has motivated the creation of domain-specific search engines, which can offer increased accuracy at a lower maintenance and infrastructure cost. This paper introduces a method to discover domain-specific web content delimited by a country context. The method allows a search engine to improve its accuracy for users interested in domain-specific web content from a particular country. Our method is based on supervised classifiers and defines country bounds for the search. To delimit the country context, our web content extraction process takes information from different sources, such as Uniform Resource Locators (URLs), official government web pages, the Network Information Center (NIC), and the IP numbers reserved for the country of interest. Details of the system architecture are presented. A proof of concept was carried out using the Information and Communication Technologies (ICT) domain in the Mexican context. The testing prototype obtained encouraging results.
Grigori Sidorov, Taras Ivchenko and Irina Kobozeva.
Implementation of spatial dialog with a mobile robot
Abstract:
We present models and strategies for dialog with a mobile robot. The dialog is centered on robot positions and goals that the robot can achieve. Our experiments use speech recognition together with morphological and syntactic representations for the Spanish language.
Karina Ruiz-Mireles, Ivan Lopez-Arevalo and Victor Sosa-Sosa.
Semantic Classification of Posts in Social Networks by Means of Concept Hierarchies
Abstract:
Social networks are in constant growth, and their users share all kinds of information, such as news, pictures, and personal opinions. To search for a topic of interest in a social network, the user must provide the terms to search for, but in terms of semantics this tends to leave out relevant results. This paper proposes an approach for semantic search in social networks through the use of concept hierarchies (CH); it also includes a method to obtain the CH of a particular subject by extracting information from DBpedia. With the implementation of this approach, the results show the good behavior of the proposed method, obtaining more than 64% in the F-measure.
Manju Sardana, Baljeet Kaur and R. K. Agrawal.
Performance Evaluation of Ranking Methods for Relevant Gene Selection in Cancer Microarray Datasets
Abstract:
Microarray data is often characterized by high dimensionality and small sample size. Gene ranking is one of the most widely explored techniques to reduce the dimensionality because of its simplicity and computational efficiency. Many ranking methods have been suggested, and their efficiency depends on the problem at hand. We have investigated the performance of six ranking methods on eleven cancer microarray datasets. The performance is evaluated in terms of classification accuracy and number of genes. Experimental results on all datasets show significant variation in classification accuracy depending on the choice of ranking method and classifier. Empirical results show that there is no clear winner among the six ranking methods. However, the Brown-Forsythe test statistic and Mutual Information achieve high accuracy with few genes, whereas the Gini index and the Pearson coefficient perform poorly in most cases.
Ajay Jaiswal, Nitin Kumar and R. K. Agrawal.
Statistical Framework for Facial Pose Classification
Abstract:
Pose classification is one of the important steps in some pose-invariant face recognition methods. In this paper, we propose to use (i) Partial Least Squares (PLS) and (ii) linear regression for facial pose classification. The performance of these two approaches is compared with two edge-based approaches and a pose-eigenspace approach in terms of classification accuracy. Experimental results on two publicly available face databases (PIE and FERET) show that the regression-based approach significantly outperforms the other approaches on both databases.
Bharti Rana and R. K. Agrawal.
Salient Features Selection for Multiclass Texture Classification
Abstract:
Texture classification is one of the important components of texture analysis and has drawn the attention of the research community during the past few decades. Various texture feature extraction techniques have been proposed in the literature, and combining texture methods from different families has been shown to produce better classification, at the cost of a more complex learning model. In this paper, we investigate three parametric test statistics (the ANOVA F statistic, the Welch test statistic, and the adjusted Welch test statistic) to determine salient features for multiclass texture classification. The salient features are obtained from a pool of features produced by five textural feature extraction methods. Experiments are performed on the widely used, publicly available Brodatz dataset. Experimental results show that the classification error decreases significantly with all three feature selection methods and all classifiers. The reduced feature set also leads to a significant decrease in the computation time of the learning model.
Fernando Figueroa and Ruben Jaramillo.
Application of Probabilistic Neural Networks in the Diagnosis of insulation in electrical equipment.
Abstract:
Many of the major faults in the electrical industry occur in auxiliary equipment such as underground cables at the distribution level. This equipment receives little attention because it has a low cost compared with the main equipment, while diagnostic, operation, and maintenance procedures are very expensive. The most sensitive technique for detecting insulation problems is partial discharge measurement. However, its application is limited by the electromagnetic interference problems that occur on site, and the measurements are difficult to interpret: only experienced, trained staff can draw a conclusion from them. This paper presents on-site and laboratory partial discharge measurements for distribution cables, obtaining the relevant formats, such as Q-Ф and N-Q-Ф, and the partial discharge patterns (internal, external, and corona), which were then classified using probabilistic neural networks.
Hillel Romero-Monsivais, Eduardo Rodriguez-Tello and Gabriel Ramírez.
A New Branch and Bound Algorithm for the Cyclic Bandwidth Problem
Abstract:
The \emph{Cyclic Bandwidth} problem (CB) for graphs consists in labeling the vertices of a guest graph $G$ by distinct vertices of a host cycle $C$ (both of order $n$) in such a way that the maximum distance in the cycle between adjacent vertices in $G$ is minimized. The CB problem arises in application areas like VLSI designs, data structure representations and interconnection networks for parallel computer systems.
In this paper a new \emph{Branch and Bound} (B\&B) algorithm for the CB problem is introduced. Its key components were carefully devised after an in-depth analysis of the problem. The practical effectiveness of the algorithm is shown through extensive experimentation over 20 standard graphs. The results show that the proposed exact algorithm attains the best known bounds for these graphs (of order $n\leq 40$) within reasonable computational time.
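As an illustrative aside (not the paper's B\&B algorithm), the objective being minimized can be stated in a few lines of Python. The exhaustive search below is only feasible for tiny graphs, which is precisely why branch-and-bound pruning matters; the example graphs are made up for illustration.

```python
from itertools import permutations

def cyclic_bandwidth(edges, labeling, n):
    """Max cyclic distance between the labels of adjacent vertices."""
    width = 0
    for u, v in edges:
        d = abs(labeling[u] - labeling[v])
        width = max(width, min(d, n - d))
    return width

def exact_cb(edges, n):
    """Exhaustive search over all labelings (feasible only for tiny n)."""
    return min(cyclic_bandwidth(edges, perm, n)
               for perm in permutations(range(n)))

# 4-cycle 0-1-2-3-0: the identity labeling already achieves width 1
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(exact_cb(c4, 4))  # 1
```

A B\&B solver would explore the same labeling space but discard partial labelings whose width already exceeds the best bound found so far.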
Elizabeth Santiago, Manuel Romero-Salcedo and Jorge X. Velasco-Hernández.
An Integrated Strategy for Analyzing Flow Conductivity of Fractures in a Naturally Fractured Reservoir using Complex Network Metrics
Abstract:
In this paper a new strategy for analyzing the flow conductivity of hydrocarbons through the fractures of a reservoir under study is presented. The strategy is an integrated methodology whose input data are the intersection points extracted from hand-sample fracture images obtained from cores of a Naturally Fractured Reservoir. The methodology consists of two main stages. The first stage carries out the image analysis and processing, whose goal is the extraction of the topological structure. The second stage focuses on finding the node, or vertex, that represents the most important node of the graph by applying an improved betweenness centrality measure. Once the representative node is obtained, the intensity of the intersection points of the fractures is quantified. In this stage a sandbox technique based on different radii is used to obtain the behavior of the node intensity in the reservoir. The results obtained from the integrated strategy allow us to characterize the topology of possible flow conductivity in fractures viewed as complex networks. Moreover, these results are also of interest for the formulation of models in the overall characterization of the reservoir.
Abstract:
A novel approach for Textual Entailment (TE) is described in this paper. The TE system is trained with a Support Vector Machine (SVM) classifier that uses twenty-three lexical similarity features. Our TE system depends on the syntactic formulation of the words of a sentence: we generate all combinations of words by their position in a sentence, so we may obtain different sentences from the root sentence. We apply this method to both the hypothesis and the text sentences, obtaining several new text sentences from the root text sentence as well as several new hypothesis sentences from the root hypothesis sentence. We assume that if any of these newly generated hypothesis-text pairs is entailed as YES, then the root hypothesis-text pair is also entailed as YES; otherwise it is entailed as NO. The important lexical features used in the present system are: WordNet-based unigram match, bigram match, longest common subsequence, skip-gram, stemming, named entity matching, and lexical distance.
Recognizing Inference in Text@NTCIR9 has four subtasks: the Binary-class (BC) subtask, the Multi-class (MC) subtask, Entrance Exam, and NTCIR-9 RITE4QA. We participated in all of them. Our BC subtask system was based on machine translation using the web-based Bing translator followed by lexical matching, and achieved a highest accuracy of 0.508. The system for the MC subtask was a learned system using different lexical similarity features: WordNet-based unigram matching, bigram matching, trigram matching, skip-gram matching, LCS matching, and Named Entity (NE) matching. For the MC subtask the accuracy was 0.175. For the BC task we present two Textual Entailment (TE) recognition systems; both use lexical features (N-gram and lexical distance) and are trained with an SVM classifier.
Abstract:
Nowadays, one of the biggest problems many manufacturing companies face is the loss of knowledge from the information they possess. Whether the company is doing business or improving the exchange of information among its different areas, valuable knowledge does not reach all stakeholders because of abstraction and ambiguity. A clear example in which both problems cause knowledge loss occurs during the interpretation of Computer Aided Designs (CAD). Without experience in this task, the only data extracted will be limited to the elements contained in the drawing. By creating a semantic model we are able to know the specific details of the contents of a CAD file without using a graphical tool; ambiguity problems also disappear, since the terms used in the semantic model are based on a controlled vocabulary.
Israel Cruz, Wen Yu and Luis Omar Moreno.
Indirect Adaptive Control with Neuro-Fuzzy Networks via Kernel Regression
Abstract:
In this article, an adaptive neuro-fuzzy control structure for discrete-time systems based on kernel regression is developed. Kernel regression is a nonparametric statistical regression technique used to determine the regression model when no assumption about the model has been made. Owing to the similarity between fuzzy systems and kernel regression, this technique is used to obtain knowledge of the structure of the fuzzy system, and this information is used as the initial condition for the indirect adaptive neuro-fuzzy controller. Simulation results show the effectiveness of the technique.
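For readers unfamiliar with nonparametric kernel regression, here is a minimal sketch (not the authors' controller) of the Nadaraya-Watson estimator, the classical form of the technique: a prediction is a kernel-weighted average of observed outputs, with no parametric model assumed. The data and bandwidth are illustrative.

```python
import math

def nadaraya_watson(x_query, xs, ys, bandwidth=0.5):
    """Nonparametric kernel regression: weighted average of observed
    outputs, with Gaussian weights centered at the query point."""
    weights = [math.exp(-((x_query - x) ** 2) / (2 * bandwidth ** 2))
               for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 4.0, 9.0]        # samples from the nonlinear target y = x^2
estimate = nadaraya_watson(1.0, xs, ys, bandwidth=0.3)
```

With a small bandwidth the estimate at a sample point stays close to the observed output there, which is the kind of structural knowledge a fuzzy system can be initialized from.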
Ivan Shamshurin.
Extracting Domain-Specific Opinion Words for Sentiment Analysis
Abstract:
In this paper, we consider opinion word extraction, one of the key problems in sentiment analysis. Sentiment analysis (or opinion mining) is an important research area within computational linguistics. Opinion words, which form an opinion lexicon, describe the attitude of the author towards certain opinion targets, i.e., entities and their attributes on which opinions have been expressed. Hence, the availability of a representative opinion lexicon can facilitate the extraction of opinions from texts. For this reason, opinion word mining is one of the key issues in sentiment analysis. We designed and implemented several methods for extracting opinion words. We evaluated these approaches by testing how well the resulting opinion lexicons help improve the accuracy of methods for determining the polarity of reviews when the extracted opinion words are used as features. We used several machine learning methods: SVM, Logistic Regression, Naïve Bayes, and KNN. By using the extracted opinion words as features we were able to improve the baselines in some cases. Our experiments showed that, although opinion words are useful for polarity detection, they are not sufficient on their own and should be used only in combination with other features.
Abstract:
In this paper, a method to answer queries over an epidemiological database using a semantic approach is detailed. We describe how the relational database is converted to semantic information, and we propose a mechanism to query this information easily.
Ameni Azzouz, Mariem Ennigrou, Boutheina Jlifi and Khaled Ghedira.
Combining Tabu Search and Genetic Algorithm in a Multi-Agent System for solving Flexible Job Shop Problem
Abstract:
The Flexible Job Shop Problem (FJSP) is an important extension of the classical job shop scheduling problem in which each operation can be processed by a set of resources and has a processing time that depends on the resource used. The objective is to minimize the makespan, i.e., the time needed to complete all the jobs. This work proposes a new promising approach that uses a multi-agent system to solve the FJSP. Our model combines a local optimization approach based on the Tabu Search (TS) meta-heuristic with a global optimization approach based on a genetic algorithm (GA).
José Carlos Ortiz-Bayliss, Hugo Terashima-Marín and Santiago E. Conant-Pablos.
Using Learning Classifier Systems to Design Selective Hyper-Heuristics for Constraint Satisfaction Problems
Abstract:
Constraint Satisfaction Problems (CSP) are defined by a set of variables, where each variable has a set of values it can be instantiated with, and a set of constraints among the variables that restricts the values they can take simultaneously. The task is to find an assignment to all the variables that breaks no constraint. To solve a CSP instance, a search tree is created in which each node represents a variable of the instance. The order in which the variables are selected for instantiation changes the form of the search tree and affects the cost of finding a solution. Many heuristics have been used to decide the next variable to instantiate, and they have proved to provide good advice for some instances. In this paper we explore the use of learning classifier systems to construct selective hyper-heuristics that dynamically select, from a set of variable ordering heuristics for CSP, the one that best matches the current problem state, in order to achieve acceptable performance over a wide range of instances. The approach is tested on random instances, providing promising results with respect to the average performance of the variable ordering heuristics.
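To make the notion of a variable ordering heuristic concrete, here is a minimal sketch (not taken from the paper) of one well-known example, minimum remaining values (MRV), which instantiates next the variable with the fewest values left in its domain; the toy domains are made up.

```python
def min_remaining_values(domains, assigned):
    """MRV variable-ordering heuristic: among unassigned variables,
    pick the one whose current domain is smallest."""
    candidates = [v for v in domains if v not in assigned]
    return min(candidates, key=lambda v: len(domains[v]))

# toy CSP state: 'y' is the most constrained variable, so it goes first
domains = {"x": [1, 2, 3], "y": [1], "z": [1, 2]}
print(min_remaining_values(domains, assigned=set()))  # y
```

A selective hyper-heuristic, as studied in the paper, would choose dynamically among several such heuristics depending on the current problem state rather than committing to one.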
Edistio Verdecia, Lizandra Arza Pérez, Joanner Hung Martínez and Adrian De Jesús Sanchez Nievares.
A new fuzzy TOPSIS approach for personnel selection with veto threshold and majority voting rule.
Abstract:
Personnel selection, one of the fundamental activities of human resource management, has as its objective selecting the most appropriate candidate for the organization. It is a comparison and decision-making process in which human experts participate actively, and there is a tendency to treat it as a problem of multi-criteria decision making under uncertainty. The present work proposes a new variant of TOPSIS, one of the most widely used multi-criteria decision-making methods. The variant results from a study of fuzzy TOPSIS and uses a veto threshold to substitute for the positive and negative ideal solutions. The other element used is the majority voting rule, employed fundamentally in the construction of multi-classifiers to combine the predictions of individual classifiers into a consensus classification. To select the most appropriate alternative, several distance measures are used. A computer application was developed that receives the configuration of the personnel selection problem, evaluates the candidates, and ranks them using the votes.
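For context, here is a minimal sketch of classical TOPSIS (the baseline the paper modifies, not its veto-threshold variant): alternatives are ranked by their relative closeness to the ideal solution. The candidate matrix, weights, and criteria below are invented for illustration.

```python
import math

def topsis(matrix, weights, benefit):
    """Classical TOPSIS. matrix[i][j] scores alternative i on criterion j;
    benefit[j] is True when criterion j is to be maximized."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    r = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(r[i][j] for i in range(m)) if benefit[j]
             else min(r[i][j] for i in range(m)) for j in range(n)]
    anti = [min(r[i][j] for i in range(m)) if benefit[j]
            else max(r[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((r[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((r[i][j] - anti[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))  # relative closeness in [0, 1]
    return scores

# two criteria: skill score (benefit) and expected salary (cost)
candidates = [[90, 50], [70, 30], [80, 40]]
scores = topsis(candidates, weights=[0.6, 0.4], benefit=[True, False])
```

The paper's variant replaces the ideal and anti-ideal solutions with a veto threshold and aggregates rankings with a majority voting rule instead of the closeness coefficient alone.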
Sergio Ruiz and Benjamin Hernandez.
Adaptable Markov Decision Process for Real Time Crowd Behavior Simulation
Abstract:
Real time crowd simulation has shown its importance in applications such as urban and emergency planning, evacuations, police crowd control training, and entertainment. Fundamental problems such as collision avoidance and agent navigation must be solved efficiently to achieve interactive rates. In this paper we propose a novel method that allows the simulation of crowds based on a modified version of Markov Decision Processes, which we call Adaptive Markov Decision Processes (A-MDP). These are first run as a preprocess to calculate multiple collision-free trajectories that the characters in a crowd will follow. Then, at run time, the MDPs are adapted within a given radius, so characters are able to avoid collisions with other agents or moving objects. Finally, we present a parallel GPU implementation of this algorithm using CUDA.
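For readers unfamiliar with the MDP preprocess, here is a minimal sketch of plain value iteration on a deterministic toy MDP (a standard building block; the paper's A-MDP adapts such a solution at run time, which is not modeled here). The corridor world and rewards are invented.

```python
def value_iteration(n_states, actions, step, reward, gamma=0.9, tol=1e-6):
    """Plain value iteration for a deterministic MDP: repeatedly apply
    the Bellman optimality update until the values stop changing."""
    values = [0.0] * n_states
    while True:
        new = [max(reward(s, a) + gamma * values[step(s, a)] for a in actions)
               for s in range(n_states)]
        if max(abs(x - y) for x, y in zip(new, values)) < tol:
            return new
        values = new

# toy world: a 1-D corridor of 5 cells; entering goal cell 4 pays reward 1
def step(s, a):
    return s if s == 4 else min(max(s + a, 0), 4)   # cell 4 is absorbing

def reward(s, a):
    return 1.0 if s != 4 and step(s, a) == 4 else 0.0

v = value_iteration(5, (-1, 1), step, reward)
```

The resulting values induce a policy (move toward the goal), analogous to the precomputed trajectories characters follow before run-time adaptation.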
David Ortega-Pacheco, Natalia Arias-Trejo and Julia B. Barrón Martínez.
Latent Semantic Analysis Model as a representation of free-association word norms
Abstract:
In the adults' lexicon, words are organized according to their overlap in semantic features, co-occurrence patterns, and phonological overlap, among other properties. The current work aims to validate, by means of a computational model, an empirical database of free association norms collected from 150 speakers of Mexican Spanish.
Specifically, this work has two main goals: (1) to detect the associated weight of word-word pairs, and (2) to provide an understanding of a lexical network formed beyond an input-output word pair, similar to the mediated priming effect reported experimentally.
We used the Term Frequency-Inverse Document Frequency Weighting (tf•idf) to obtain the associated weight between an input-output word pair and to calculate the tf•idf-matrix which is used as an input in the Latent Semantic Analysis (LSA) Model. The LSA model, in the word-word context, is a semantic representation at the lexical level that allows us to discover semantic relationships beyond input-output word pairs. Our computational model replicates and further explains previous experimental work on lexical networks.
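As an illustration of the first stage described above, here is a minimal stdlib sketch of tf-idf weighting (not the authors' pipeline; the LSA step, a low-rank SVD of this matrix, needs linear algebra and is omitted). The toy documents are invented.

```python
import math

def tf_idf(docs):
    """Per-document tf-idf weights; docs are lists of tokens.
    tf = term frequency in the document, idf = log(N / document frequency)."""
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for term in doc:
            tf = doc.count(term) / len(doc)
            w[term] = tf * math.log(n / df[term])
        weights.append(w)
    return weights

docs = [["dog", "barks"], ["cat", "meows"], ["dog", "sleeps"]]
w = tf_idf(docs)
# "barks" occurs in one document, "dog" in two, so "barks" weighs more
```

In the word-word setting of the paper, the same weighting is applied to input-output word pairs before the LSA decomposition uncovers relationships beyond directly associated pairs.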
Miguel Murguía-Romero, Rafael Jiménez-Flores, René Méndez-Cruz and Rafael Villalobos-Molina.
Improving the body mass index (BMI) formula with heuristic search
Abstract:
The body mass index (BMI) is nowadays the most widely used tool to evaluate obesity, involving only two easily obtained anthropometric measures, weight and height (BMI = weight/height^2). The BMI is valuable because it evaluates obesity by classifying people into 'underweight', 'normal weight', and 'overweight' classes. A previous study with a young population showed that obesity, evaluated through BMI, could predict metabolic alterations related to metabolic syndrome (including triglycerides, HDL cholesterol, glucose, and blood pressure) with a specificity and sensitivity lower than 25%, i.e., it fails to detect positive and negative cases in a quarter of the population. Our aim was to evaluate variations of the BMI formula, searching for one that increases the specificity and sensitivity with respect to metabolic alterations. We applied a heuristic search over algebraic and constant variations of the original BMI formula; for example, one rule for generating new variations is to increase the exponent of the denominator by 0.1. The heuristic function used was the intersection of the specificity and sensitivity of the formula being evaluated, i.e., the maximum common value of the two statistics. To evaluate specificity and sensitivity we used a database of 4,310 young Mexicans (17-24 years old) including weight, height, and the parameters of the metabolic alterations evaluated. The heuristic search can also be applied to adjust formulas that evaluate other clinical alterations, such as the atherogenic index. We also propose using the variations of the BMI formula found in this study, with their high sensitivity and specificity, to evaluate obesity in young Mexicans as a risk factor for metabolic alterations.
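The exponent-variation rule mentioned above can be sketched in a few lines. This is a toy illustration only: the four-person sample, the threshold of 25, and the restriction to exponent variations are all invented here, whereas the paper searches broader algebraic variations over 4,310 real subjects.

```python
def confusion_stats(preds, truths):
    """Sensitivity and specificity of binary predictions."""
    tp = sum(1 for p, t in zip(preds, truths) if p and t)
    tn = sum(1 for p, t in zip(preds, truths) if not p and not t)
    fp = sum(1 for p, t in zip(preds, truths) if p and not t)
    fn = sum(1 for p, t in zip(preds, truths) if not p and t)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

def search_exponent(people, threshold=25.0):
    """Scan BMI-like indices weight / height**e for e in 1.0..3.0 (step 0.1),
    keeping the exponent whose min(sensitivity, specificity) is largest."""
    best_e, best_score = None, -1.0
    truths = [alt for _, _, alt in people]
    for i in range(10, 31):
        e = i / 10
        preds = [w / h ** e >= threshold for w, h, _ in people]
        sens, spec = confusion_stats(preds, truths)
        if min(sens, spec) > best_score:
            best_e, best_score = e, min(sens, spec)
    return best_e, best_score

# tiny made-up sample: (weight in kg, height in m, has metabolic alteration)
people = [(85, 1.75, True), (95, 1.80, True), (60, 1.70, False), (55, 1.60, False)]
best_e, best_score = search_exponent(people)
```

Maximizing the minimum of the two statistics corresponds to the "intersection of specificity and sensitivity" heuristic function described in the abstract.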
Abstract:
The field of modal logic programming has been developed to extend the expressiveness of logic programming. By introducing the modal operators of necessity and possibility into the language of Horn clauses, modal logic programming languages retain their declarative nature without resorting to non-logical features. In this work, we propose a novel approach in this field: introducing dynamic logic modalities into pure Prolog to embed efficient imperative programs while retaining a declarative reading. Furthermore, the proposed modal extensions to pure Prolog provide the means to isolate non-logical features of metapredicates (like cut and is) into semantically equivalent dynamic logic modalities. The contributions of this paper are twofold: first, introducing a dynamic logic-based modal Prolog that may extend the range of applications of the logic programming paradigm, and second, showing the soundness of this modal Prolog through a logical system with inference rules written in Gentzen sequent style.
Juan Carlos Gonzalez Ibarra, Carlos Soubervielle Montalvo and Omar Vital Ochoa.
EMG Pattern Recognition System Based on Neural Networks
Abstract:
This paper presents a methodology for recognizing movement patterns from arm-forearm myoelectric signals. It begins with the design and implementation of an EMG instrumentation system that follows the SENIAM rules for surface electromyography. Signal processing and characterization techniques were applied using a band-pass Butterworth filter and the Fast Fourier Transform. Artificial Neural Networks (ANN), namely backpropagation and Radial Basis Function (RBF) networks, were used for the pattern recognition (classification) of the EMG signals. The best results were obtained with the RBF ANN, achieving an average accuracy of 98%.
Aaron Rocha-Rocha, Enrique Munoz De Cote, L. Enrique Sucar and Saul E. Pomares Hernandez.
Balancing optimality and learning speed in multiagent learning
Abstract:
Many real world applications demand solutions that are difficult to implement. In these cases, it is common practice for system designers to turn to multiagent theory, where the problem at hand is broken into sub-problems and each is handled by an autonomous agent.
Notwithstanding, new questions emerge: how should a problem be broken up? What should the task of each agent be? And what information do the agents need to carry out their tasks? Furthermore, conflicts between agents' partial solutions (actions) may arise as a consequence of their autonomy.
In this paper we conduct a study to answer those questions under a multiagent learning framework. The proposed framework guarantees an optimal solution to the original problem at the cost of a low learning speed, but can be tuned to balance learning speed against optimality. We then present an experimental analysis (inspired by a robotics application) that shows learning curves until convergence to optimality and the trade-off between better learning speeds and sub-optimality.
Abstract:
A maximum sensibility neural network was implemented in an embedded system to perform on-line learning. This neural network has advantages such as easy implementation and quick learning based on neighbors instead of an iterative or gradient algorithm. The embedded maximum sensibility neural network was used to learn nonlinear functions on-line, using potentiometers and push buttons for activation and learning. The results give us a platform for applying on-line learning with neural networks.
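To illustrate neighbor-based on-line learning without gradient iterations, here is a minimal sketch of an instance-memorizing nearest-neighbor learner. This is the generic idea only; the specific maximum sensibility architecture of the abstract is not reproduced here, and the sampled function is invented.

```python
class OnlineNearestNeighbor:
    """Minimal instance-based on-line learner: store (input, output)
    pairs as they arrive and predict with the nearest stored neighbor."""

    def __init__(self):
        self.memory = []

    def learn(self, x, y):
        self.memory.append((x, y))          # one-shot update, no iterations

    def predict(self, x):
        nearest = min(self.memory, key=lambda pair: abs(pair[0] - x))
        return nearest[1]

net = OnlineNearestNeighbor()
for x in [0.0, 0.5, 1.0, 1.5]:
    net.learn(x, x * x)                     # sample a nonlinear target on-line
print(net.predict(0.6))  # 0.25 (nearest stored input is 0.5)
```

Each training step is a single memory append, which is what makes this style of learning attractive on resource-limited embedded hardware.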
Abstract:
In this paper, we present a cooperative coevolutionary approach to the problem of developing automatically defined functions (ADFs). We implemented a Gene Expression Programming (GEP) module for the virtual gene Genetic Algorithm (vgGA) and tested the coevolution of ADFs on two symbolic regression problems, comparing it with a conventional genetic algorithm. Our results show that on a simple function a conventional genetic algorithm can find acceptable solutions, but on a more complex function it is outperformed by our coevolutionary approach.
Santiago Omar Caballero Morales, Yara Pérez Maldonado and Felipe Trujillo-Romero.
Improvement on Automatic Speech Recognition Using Micro-Genetic Algorithm
Abstract:
In this paper we extend previous work on the application of Genetic Algorithms (GAs) to optimize the transition structure of phoneme Hidden Markov Models (HMMs) for Automatic Speech Recognition (ASR). We focus on the development of a micro-GA in which, in contrast to other GA approaches, each individual in the initial population consists of an element of the transition matrix of an HMM. Each individual's fitness is measured at the phoneme recognition level, which makes the execution of the algorithm faster. Performance was evaluated with test speech data from the Wall Street Journal (WSJ) database. When measuring the performance of the optimized HMMs at the word recognition level, statistically significant improvements were obtained compared with a standard speaker adaptation technique.
Bahamida Bachir and Dalila Boughaci.
Intrusion Detection Using Fuzzy Stochastic Local Search Classifier
Abstract:
This paper proposes a stochastic local search classifier combined with fuzzy logic concepts for intrusion detection. The proposed classifier works on a knowledge base modeled as fuzzy "if-then" rules and improved by using stochastic local search. The method is tested on the benchmark KDD'99 intrusion dataset and compared with other existing intrusion detection techniques.
Abstract:
This paper presents path tracking control and multi-robot system formation using Matlab Simulink 3D Animation. A non-linear control based on input-output linearisation techniques was used to control the path tracking and the formation of the robots.
The results show three different paths (a straight line, a hyperbola curve, and a semicircle) while maintaining a simple triangular formation of multiple robots. In this case we control three mobile robots, but this type of control law can be applied to an arbitrarily large number of robots moving in general types of formations.
Abstract:
In this work, we apply a joint encryption and compression procedure to image information. The encryption scheme is based on the synchronization of cellular automaton rule 90; statistical tests show that this system has good performance despite its high latency. In addition, an energy compression scheme based on the Haar transform is considered. This compression scheme has become a useful and flexible procedure to compress different kinds of information. The proposal could be an appealing option in real-time applications such as communications and bank transactions, among others.
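For readers unfamiliar with rule 90, here is a minimal sketch of one update step of that elementary cellular automaton (the synchronization-based encryption scheme itself is not reproduced here): each new cell is simply the XOR of its two neighbors.

```python
def rule90_step(cells):
    """One step of elementary cellular automaton rule 90: each new cell
    is the XOR of its two neighbors, with periodic boundary conditions."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

state = [0, 0, 0, 1, 0, 0, 0]     # a single seed cell
state = rule90_step(state)
print(state)  # [0, 0, 1, 0, 1, 0, 0]
```

Iterating this XOR rule from a single seed produces the well-known Sierpinski-triangle pattern; its linearity over GF(2) is what makes synchronizing two copies of the automaton tractable.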
Abstract:
In this paper, we propose an approach for clustering articles based on the Wikipedia graph structure, to examine the possible relations between them. The aim is to analyze the degree of similarity between articles on the same topic using tf-idf weighting as the strength of the links between nodes.
Hamid Parvin, Behzad Maleki and Sajad Parvin.
Using LDA Method to build a Robust Classifier
Abstract:
Recognition systems have found applications in almost all fields. Generally, in the design of multiple classifier systems, the more diverse the results of the individual classifiers, the more appropriate the aggregated result. While most classification algorithms obtain good performance on specific problems, they lack the robustness to generalize well to other problems. The combination of multiple classifiers can be considered a general solution method for pattern recognition problems: it has been shown that such a combination usually performs better than a single classifier system, provided its components are independent or produce diverse outputs. It has also been shown that the diversity an ensemble needs can be achieved by manipulating the dataset features, manipulating the data points in the dataset, using different sub-samplings of the dataset, or using different classification algorithms. We propose a new method of creating this diversity that uses Linear Discriminant Analysis to manipulate the data points in the dataset. Although the ensemble created by the proposed method may not always outperform all of its members, it always possesses the diversity needed for an ensemble and consequently always outperforms simple classifier systems.
Abstract:
Knowledge representation is an important topic in common-sense reasoning and Artificial Intelligence, and one of the earliest techniques to represent it is by means of knowledge bases encoded into logic clauses.
Encoding knowledge, however, is prone to typos and other kinds of mistakes, which may yield incorrect results or even internal contradictions with conflicting information from other parts of the same code.
In order to overcome such situations, we propose a logic-programming system to debug knowledge bases.
The system has a strong theoretical framework on knowledge representation and reasoning, and an on-line prototype where one can test logic programs.
Such logic programs may have, of course, conflicting information and the system shall prompt the user where the possible source of conflict is.
As a result, the system can be employed both to identify conflicts of the knowledge base with upcoming new information, and to locate the source of conflict from a given inherent inconsistent static knowledge base.
This paper describes an implementation of a declarative version of the system, characterised to debug knowledge bases in a semantic fashion.
Some of the key components of the implementation are existing solvers, so this paper focuses on how to use them and why they work, as a step towards a fully fledged implemented system.
In particular, the paper outlines the basic structure of the proposed system, describes the technology employed, discusses the major process of computing the models, and illustrates the system through examples.
Hamid Parvin and Sara Ansari.
Diagnosis of Breast Cancer
Abstract:
Most standard learning algorithms presume, or at least expect, that the data points of the dataset at hand are distributed evenly among its classes, i.e., that each class contains roughly the same number of data points. It is also implicitly presumed that the misclassification cost of each data point is a fixed value regardless of its class. Standard algorithms fail to learn when the dataset at hand is imbalanced. An imbalanced dataset is one in which the distribution of data points among the classes is not even, i.e., the numbers of data points in different classes differ considerably. A well-known domain in which the available datasets are very likely to be imbalanced is patient detection: such systems serve many clients, of whom only a few are patients while all the others are healthy, so a system that must identify patients among many clients very commonly faces an imbalanced dataset. Breast cancer detection, a special case of such systems, attempts to discriminate patient clients from healthy ones. It should be noted that the imbalance of a dataset can be either relative or non-relative. The imbalance is relative when the number of samples in the minority class is high in absolute terms yet much smaller than the number of samples in the majority class; it is non-relative when the number of samples in the minority class is low in absolute terms. This paper presents an algorithm that is well suited to, and applicable in, the field of non-relatively imbalanced datasets. It is efficient in terms of both the speed and the efficacy of learning. The experimental results show that the proposed algorithm outperforms some of the best methods in the literature.
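The abstract does not spell out the proposed algorithm, so as context only, a common baseline for imbalanced data is random oversampling of the minority class until the class counts match; the data below is made up:

```python
import random

def oversample(X, y, minority_label, seed=0):
    """Duplicate randomly chosen minority samples until the classes are balanced."""
    rng = random.Random(seed)
    minority = [(x, lab) for x, lab in zip(X, y) if lab == minority_label]
    majority = [(x, lab) for x, lab in zip(X, y) if lab != minority_label]
    # Draw extra copies of minority samples to close the gap.
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    balanced = minority + majority + extra
    rng.shuffle(balanced)
    return [x for x, _ in balanced], [lab for _, lab in balanced]

# 2 "patient" samples among 8 "healthy" ones (toy feature vectors).
X = [[i, i + 1] for i in range(10)]
y = ["patient"] * 2 + ["healthy"] * 8
Xb, yb = oversample(X, y, "patient")
```

Note that mere duplication adds no new information about the minority class, which is precisely why it can struggle in the non-relative case the paper targets, where minority samples are few in absolute terms.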
Yanet Rodríguez and Dayrelis Mena Torres.
An Instance-based Learning Model for Classification in Data Streams with Concept Change
Abstract:
Mining data streams has attracted the attention of the scientific community in recent years, with the development of new algorithms for processing and classifying data in this area. Incremental learning techniques have been used extensively on these problems. A major challenge posed by data streams is that their underlying concepts can change over time. This research studies the application of different classification techniques to data streams and makes a similarity-based proposal that includes a new methodology for detecting and treating concept change. Extensive experimentation and a comparative statistical analysis are presented, showing the good performance of the proposed algorithm.
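The abstract does not detail its change-detection methodology; a common baseline, shown here as a stdlib-only sketch with arbitrary window size and margin, is to flag drift when the recent error rate clearly exceeds the long-run rate:

```python
from collections import deque

class DriftDetector:
    """Flag concept change when the recent error rate exceeds the long-run one."""

    def __init__(self, window=30, margin=0.2):
        self.recent = deque(maxlen=window)  # sliding window of recent misses
        self.errors = 0                     # misses since the start of the stream
        self.total = 0
        self.margin = margin

    def update(self, correct):
        miss = 0 if correct else 1
        self.recent.append(miss)
        self.errors += miss
        self.total += 1
        if len(self.recent) < self.recent.maxlen:
            return False                    # not enough evidence yet
        recent_rate = sum(self.recent) / len(self.recent)
        return recent_rate > self.errors / self.total + self.margin

det = DriftDetector()
# 100 correct predictions, then the concept changes and the model starts failing.
flags = [det.update(i < 100) for i in range(130)]
drift_at = flags.index(True)
```

On this synthetic stream the detector stays silent during the stable phase and raises the flag a few steps after the change, once enough misses have accumulated in the window.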
Abstract:
In this paper we present the design and framework of a shopping assistant system to be used in supermarkets, mainly by elderly or disabled people. The whole system is based on the interaction of three different kinds of electronic devices: a) mobile devices that users carry with them (smart phones or electronic tablets); b) autonomous mobile robots that assist users, displaying information and carrying groceries; and finally, c) the supermarket's technological infrastructure (database servers, Wi-Fi and Bluetooth access points, etc.). These three components interact, so users can carry and/or produce their shopping list on their mobile devices, from which it is transferred to the supermarket system. The user is then assigned a mobile robot to assist him, providing the system with the user's shopping list as well as some other useful information (e.g. credit card number, user preferences, etc.).
Jesús Carlos Carmona Frausto, Víctor Jesús Sosa Sosa and Iván López Arévalo.
Middleware for Information Exchange in Heterogeneous Social Networks
Abstract:
The purpose of this paper is to introduce the design and implementation of a middleware that will facilitate the development of new social networks with a new information exchange component. This component will allow new social networks to send and receive information from heterogeneous existing social networks (Facebook, Twitter and LinkedIn) in a transparent way. The component's functions include the most common actions in a social network, plus a component that builds and organizes the complete users' friendship graph by integrating all the information coming from the heterogeneous social networks. It provides homogeneous methods that behave alike across the different networks, saving software developers from implementing a wrapper for every social network. Middleware details and results obtained with a prototype implementation are presented.
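The homogeneous-methods idea can be sketched as an adapter interface; none of the networks' real APIs are used here, and `FakeTwitterAdapter` with its 140-character cap is entirely hypothetical:

```python
from abc import ABC, abstractmethod

class SocialNetworkAdapter(ABC):
    """Homogeneous interface the middleware exposes; one concrete adapter per network."""

    @abstractmethod
    def post(self, message: str) -> bool: ...

    @abstractmethod
    def friends(self, user: str) -> list: ...

class FakeTwitterAdapter(SocialNetworkAdapter):
    """Stand-in adapter; a real one would call the network's native API."""

    def __init__(self):
        self.timeline = []

    def post(self, message: str) -> bool:
        self.timeline.append(message[:140])   # hypothetical per-network length limit
        return True

    def friends(self, user: str) -> list:
        return [user + "_follower"]           # canned data instead of a real API call

def broadcast(adapters, message):
    """Middleware-style call: the developer never touches network-specific code."""
    return [adapter.post(message) for adapter in adapters]
```

Each adapter hides a network's quirks (here, the length cap) behind the same two methods, which is what spares developers from writing one wrapper per network.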
Abstract:
In this paper we compare the behavior of the classical PID controller under different configurations for the same plant. The nonlinear system to which we apply the controllers is an AC motor. The numerical simulation results show that the best controller is the wavenet fuzzy PID, which tunes the proportional, integral and derivative gains online, while the learning rates of the neural network are computed by a Mamdani fuzzy logic system.
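For reference, the classical fixed-gain PID that such adaptive variants are compared against can be written in a few lines; the first-order plant and the gain values below are made-up stand-ins, not the paper's AC motor model:

```python
class PID:
    """Textbook discrete PID with fixed gains (the paper's variants tune these online)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                      # accumulate I term
        derivative = (error - self.prev_error) / self.dt      # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order plant x' = -x + u, simulated with Euler steps.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):
    u = pid.step(1.0, x)
    x += (-x + u) * 0.01
```

With these (hand-picked) gains the output settles at the setpoint; the integral term is what removes the steady-state error, and online gain tuning as in the paper replaces exactly this hand-picking.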
Abstract:
Electronic Learning (eLearning) is a very powerful tool and technique used to educate people these days. A number of world-ranking universities have started offering courses through distance learning, from high-school level up to degree and even postgraduate level. This paper describes the best-known machine learning (ML) techniques for raising the standard of eLearning education and its model. The paper comprehensively reviews the supervised and unsupervised ML techniques that help the eLearning paradigm to reply automatically to student questions. A main drawback of the eLearning environment is that student queries are not answered frequently; the key demand is therefore to address them, and ML classification can support such learning techniques, although the paradigm is not yet fully automated. A training dataset is used to train the machine, and a test dataset is then used to validate the approach. This paper analyzes the different ML techniques and proposes a solution based on them; in our proposed solution, a number of machine learning techniques are discussed with their pros and cons.
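As a minimal illustration of auto-replying to student queries (no particular classifier from the survey is implied), a new question can be matched against a stored FAQ by word overlap; the FAQ entries below are made up:

```python
def tokenize(text):
    """Lowercased bag of words as a set (duplicates collapse)."""
    return set(text.lower().split())

def auto_reply(question, faq):
    """Return the stored answer whose question shares the most words (Jaccard score)."""
    q = tokenize(question)
    def score(item):
        k = tokenize(item[0])
        return len(q & k) / len(q | k)
    best_question, best_answer = max(faq.items(), key=score)
    return best_answer

faq = {
    "when is the assignment deadline": "The deadline is Friday.",
    "how do i reset my password": "Use the account settings page.",
}
reply = auto_reply("what is the deadline for the assignment", faq)
```

This keyword matcher stands in for the supervised classifiers the paper surveys: a trained model would replace `score` with a learned decision over the training dataset.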