Neural networks and fuzzy systems Books
Manning Publications Grokking Machine Learning
Book Synopsis
It's time to dispel the myth that machine learning is difficult. Grokking Machine Learning teaches you how to apply ML to your projects using only standard Python code and high school-level math. No specialist knowledge is required to tackle the hands-on exercises using readily available machine learning tools! In Grokking Machine Learning, expert machine learning engineer Luis Serrano introduces the most valuable ML techniques and teaches you how to make them work for you. Practical examples illustrate each new concept to ensure you're grokking as you go. You'll build models for spam detection, language analysis, and image recognition as you lock in each carefully-selected skill. Packed with easy-to-follow Python-based exercises and mini-projects, this book sets you on the path to becoming a machine learning expert.
Key Features
· Different types of machine learning, including supervised and unsupervised learning
· Algorithms for simplifying, classifying, and splitting data
· Machine learning packages and tools
· Hands-on exercises with fully-explained Python code samples
For readers with intermediate programming knowledge in Python or a similar language.
About the technology
Machine learning is a collection of mathematically-based techniques and algorithms that enable computers to identify patterns and generate predictions from data. This revolutionary data analysis approach is behind everything from recommendation systems to self-driving cars, and is transforming industries from finance to art.
Luis G. Serrano has worked as the Head of Content for Artificial Intelligence at Udacity and as a Machine Learning Engineer at Google, where he worked on the YouTube recommendations system. He holds a PhD in mathematics from the University of Michigan, a Bachelor's and Master's from the University of Waterloo, and worked as a postdoctoral researcher at the University of Quebec at Montreal. He shares his machine learning expertise on a YouTube channel with over 2 million views and 35 thousand subscribers, and is a frequent speaker at artificial intelligence and data science conferences.
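To give a flavour of the "standard Python and high-school math" approach the synopsis describes, here is a minimal, illustrative spam-detection sketch using a simple perceptron-style rule; the toy emails, keyword list, and learning rate are invented for this example and are not taken from the book.

# Minimal spam-detection sketch in plain Python (illustrative only; not the book's code).
# Each email is reduced to counts of a few hand-picked words, and a perceptron-style
# rule nudges one weight per word whenever the current rule misclassifies an email.

TRAIN = [
    ("win money now claim prize", 1),      # 1 = spam (toy examples, invented)
    ("meeting moved to friday", 0),        # 0 = not spam
    ("claim your free prize today", 1),
    ("lunch on friday with the team", 0),
]
VOCAB = ["win", "money", "prize", "claim", "free", "meeting", "friday", "lunch", "team"]

def features(text):
    words = text.split()
    return [words.count(w) for w in VOCAB]

def train(data, epochs=20, lr=0.1):
    w = [0.0] * len(VOCAB)
    b = 0.0
    for _ in range(epochs):
        for text, label in data:
            x = features(text)
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if score > 0 else 0
            err = label - pred              # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

w, b = train(TRAIN)
test = "free money prize"
print("spam" if sum(wi * xi for wi, xi in zip(w, features(test))) + b > 0 else "ham")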
£43.19
Manning Publications Grokking Deep Reinforcement Learning
Book Synopsis
Written for developers with some understanding of deep learning algorithms. Experience with reinforcement learning is not required. Grokking Deep Reinforcement Learning introduces this powerful machine learning approach, using examples, illustrations, exercises, and crystal-clear teaching. You'll love the perfectly paced teaching and the clever, engaging writing style as you dig into this awesome exploration of reinforcement learning fundamentals, effective deep learning techniques, and practical applications in this emerging field. We all learn through trial and error. We avoid the things that cause us to experience pain and failure. We embrace and build on the things that give us reward and success. This common pattern is the foundation of deep reinforcement learning: building machine learning systems that explore and learn based on the responses of the environment.
• Foundational reinforcement learning concepts and methods
• The most popular deep reinforcement learning agents solving high-dimensional environments
• Cutting-edge agents that emulate human-like behavior and techniques for artificial general intelligence
Deep reinforcement learning is a form of machine learning in which AI agents learn optimal behavior on their own from raw sensory input. The system perceives the environment, interprets the results of its past decisions and uses this information to optimize its behavior for maximum long-term return.
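The trial-and-error loop described above (act, observe the environment's response, and adjust toward maximum long-term return) can be illustrated with a minimal tabular Q-learning sketch; the toy corridor environment and hyperparameters below are invented for illustration and are not the book's agents.

# Minimal tabular Q-learning sketch (illustrative; not from the book).
# A 5-cell corridor: the agent starts at cell 0 and earns +1 for reaching cell 4.
# It learns, by trial and error, that moving right maximizes long-term return.
import random

N_STATES, ACTIONS = 5, [0, 1]          # action 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

Q = [[0.0, 0.0] for _ in range(N_STATES)]
for episode in range(200):
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the current estimate, sometimes explore
        action = random.choice(ACTIONS) if random.random() < EPSILON else Q[state].index(max(Q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update: move the estimate toward reward + discounted best future value
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

print([round(max(q), 2) for q in Q])   # values grow as cells get closer to the goal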
£35.99
Manning Publications Deep Learning with R, Second Edition
Book Synopsis
Deep learning from the ground up using R and the powerful Keras library! In Deep Learning with R, Second Edition you will learn:
Deep learning from first principles
Image classification and image segmentation
Time series forecasting
Text classification and machine translation
Text generation, neural style transfer, and image generation
Deep Learning with R, Second Edition shows you how to put deep learning into action. It's based on the revised new edition of François Chollet's bestselling Deep Learning with Python. All code and examples have been expertly translated to the R language by Tomasz Kalinowski, who maintains the Keras and TensorFlow R packages at RStudio. Novices and experienced ML practitioners will love the expert insights, practical techniques, and important theory for building neural networks.
About the technology
Deep learning has become essential knowledge for data scientists, researchers, and software developers. The R language APIs for Keras and TensorFlow put deep learning within reach for all R users, even if they have no experience with advanced machine learning or neural networks. This book shows you how to get started on core DL tasks like computer vision, natural language processing, and more using R.
What's inside
Image classification and image segmentation
Time series forecasting
Text classification and machine translation
Text generation, neural style transfer, and image generation
About the reader
For readers with intermediate R skills. No previous experience with Keras, TensorFlow, or deep learning is required.
£41.39
Elsevier Science Artificial Neural Networks for Engineering
Book Synopsis
Table of Contents
1. Hierarchical Dynamic Neural Networks for Cascade System Modeling with Application to Wastewater Treatment
2. Hyperellipsoidal Neural Network trained with Extended Kalman Filter for forecasting of time series
3. Neural networks: a methodology for modeling and control design of dynamical systems
4. Continuous–Time Decentralized Neural Control of a Quadrotor UAV
5. Support Vector Regression for digital video processing
6. Artificial Neural Networks Based on Nonlinear Bioprocess Models for Predicting Wastewater Organic Compounds and Biofuels Production
7. Neural Identification for Within-Host Infectious Disease Progression
8. Attack Detection and Estimation for Cyber-physical Systems by using Learning Methodology
9. Adaptive PID Controller using a Multilayer Perceptron Trained with the Extended Kalman Filter for an Unmanned Aerial Vehicle
10. Sensitivity Analysis with Artificial Neural Networks for Operation of Photovoltaic Systems
11. Pattern Classification and its Applications to Control of Biomechatronic Systems
£94.95
Clarendon Press Statistical Physics of Spin Glasses and Information Processing
Book Synopsis
Spin glasses are magnetic materials. Statistical mechanics, a subfield of physics, has been a powerful tool to theoretically analyse various unique properties of spin glasses. A number of new analytical techniques have been developed to establish a theory of spin glasses. Surprisingly, these techniques have turned out to offer new tools and viewpoints for the understanding of information processing problems, including neural networks, error-correcting codes, image restoration, and optimization problems. This book is one of the first publications of the past ten years that provide a broad overview of this interdisciplinary field. Most of the book is written in a self-contained manner, assuming only a general knowledge of statistical mechanics and basic probability theory. It provides the reader with a sound introduction to the field and to the analytical techniques necessary to follow its most recent developments.
Trade Review
"... very enjoyable to read and often opening the reader's eye to new possibilities. This is a perfect introduction to the field for students and researchers who want to study problems in information science, including the use of physics in information processing" * Butsuri *
Table of Contents
1. Mean-field theory of phase transitions; 2. Mean-field theory of spin glasses; 3. Replica symmetry breaking; 4. Gauge theory of spin glasses; 5. Error-correcting codes; 6. Image restoration; 7. Associative memory; 8. Learning in perceptron; 9. Optimization problems; A. Eigenvalues of the Hessian; B. Parisi equation; C. Channel coding theorem; D. Distribution and free energy of K-Sat; References; Index
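The bridge the synopsis describes between spin glasses and neural networks is usually summarized by the Ising-type energy function and its Hopfield associative-memory specialization; the following is a sketch of the standard formulas from the general literature, not an excerpt from the book.

% Standard spin-glass energy (a sketch of the usual formulation, not quoted from
% the book). Spins S_i = +/-1 interact through quenched random couplings J_ij:
\[
  H = -\sum_{i<j} J_{ij}\, S_i S_j .
\]
% In the Hopfield model of associative memory, the couplings store p patterns
% \xi^\mu via the Hebb rule, and stored memories sit at local minima of H
% under the asynchronous update rule:
\[
  J_{ij} = \frac{1}{N}\sum_{\mu=1}^{p} \xi_i^{\mu}\,\xi_j^{\mu},
  \qquad
  S_i \leftarrow \operatorname{sgn}\!\Big(\sum_{j\neq i} J_{ij} S_j\Big).
\]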
£92.25
MIT Press Ltd Elements of Causal Inference
Book Synopsis
£38.70
Elsevier Science Deep Learning for Robot Perception and Cognition
Book Synopsis
Table of Contents
1. Introduction
2. Neural Networks and Backpropagation
3. Convolutional Neural Networks
4. Graph Convolutional Networks
5. Recurrent Neural Networks
6. Deep Reinforcement Learning
7. Lightweight Deep Learning
8. Knowledge Distillation
9. Progressive and Compressive Deep Learning
10. Representation Learning and Retrieval
11. Object Detection and Tracking
12. Semantic Scene Segmentation for Robotics
13. 3D Object Detection and Tracking
14. Human Activity Recognition
15. Deep Learning for Vision-based Navigation in Autonomous Drone Racing
16. Robotic Grasping in Agile Production
17. Deep learning in Multiagent Systems
18. Simulation Environments
19. Biosignal time-series analysis
20. Medical Image Analysis
21. Deep learning for robotics examples using OpenDR
£89.96
HarperCollins Publishers Inc A Guide to Effective Collaboration and Learning
Book Synopsis
£31.34
Taylor & Francis Ltd (Sales) AI and Deep Learning in Biometric Security Trends
Book Synopsis
This book provides an in-depth overview of artificial intelligence and deep learning approaches with case studies to solve problems associated with biometric security such as authentication, indexing, template protection, spoofing attack detection, ROI detection, gender classification, etc. This text highlights a showcase of cutting-edge research on the use of convolutional neural networks, autoencoders, and recurrent convolutional neural networks in face, hand, iris, gait, fingerprint, vein, and medical biometric traits. It also provides a step-by-step guide to understanding deep learning concepts for biometrics authentication approaches and presents an analysis of biometric images under various environmental conditions. This book is sure to catch the attention of scholars, researchers, practitioners, and technology aspirants who wish to conduct research in the field of AI and biometric security.
Table of Contents
1. Deep Learning-Based Hyperspectral Multimodal Biometric Authentication System Using Palmprint and Dorsal Hand Vein. 2. Cancelable Biometrics for Template Protection: Future Directives with Deep Learning. 3. On Training Generative Adversarial Network for Enhancement of Latent Fingerprints. 4. DeepFake Face Video Detection Using Hybrid Deep Residual Networks and LSTM Architecture. 5. Multi-spectral Short-Wave Infrared Sensors and Convolutional Neural Networks for Biometric Presentation Attack Detection. 6. AI-Based Approach for Person Identification Using ECG Biometric. 7. Cancelable Biometric Systems from Research to Reality: The Road Less Travelled. 8. Gender Classification under Eyeglass Occluded Ocular Region: An Extensive Study Using Multi-spectral Imaging. 9. Investigation of the Fingernail Plate for Biometric Authentication using Deep Neural Networks. 10. Fraud Attack Detection in Remote Verification Systems for Non-enrolled Users. 11. Indexing on Biometric Databases. 12. Iris Segmentation in the Wild Using Encoder-Decoder-Based Deep Learning Techniques. 13. PPG-Based Biometric Recognition: Opportunities with Machine and Deep Learning. 14. Current Trends of Machine Learning Techniques in Biometrics and its Applications.
£142.50
Taylor & Francis Ltd AI for Cars
Book Synopsis
Artificial Intelligence (AI) is undoubtedly playing an increasingly significant role in automobile technology. In fact, cars inhabit one of just a few domains where you will find many AI innovations packed into a single product. AI for Cars provides a brief guided tour through many different AI landscapes, including robotics, image and speech processing, recommender systems, and on to deep learning, all within the automobile world. From pedestrian detection to driver monitoring to recommendation engines, the book discusses the background, research and progress thousands of talented engineers and researchers have achieved thus far, and their plans to deploy this life-saving technology all over the world.
Table of Contents
Foreword. Preface. AI for Advanced Driver Assistance Systems: Automatic Parking; Traffic Sign Recognition; Driver Monitoring System; Summary. AI for Autonomous Driving: Perception; Planning; Motion Control; Summary. AI for In-Vehicle Infotainment Systems: Gesture Control; Voice Assistant; User Action Prediction; Summary. AI for Research & Development: Automated Rules Generation; Virtual Testing Platform; Synthetic Scenario Generation; Summary. AI for Services: Predictive Diagnostics; Predictive Maintenance; Driver Behavior Analysis; Summary. The Future of AI in Cars: A Tale Of Two Paradigms; AI & Car Safety; AI & Car Security; Summary. Further Reading. References.
£21.84
John Wiley & Sons Inc Neural and Adaptive Systems
Book Synopsis
Like no other text in this field, authors Jose C. Principe, Neil R. Euliano, and W. Curt Lefebvre have written a unique and innovative text unifying the concepts of neural networks and adaptive filters into a common framework. The text is suitable for senior/graduate courses in neural networks and adaptive filters. It offers over 200 fully functional simulations (with instructions) to demonstrate and reinforce key concepts and help the reader develop an intuition about the behavior of adaptive systems with real data. This creates a powerful self-learning environment highly suitable for the professional audience.
Table of Contents
Chapter 1 Data Fitting with Linear Models 1
Chapter 2 Pattern Recognition 68
Chapter 3 Multilayer Perceptrons 100
Chapter 4 Designing and Training MLPs 173
Chapter 5 Function Approximation with MLPs, Radial Basis Functions, and Support Vector Machines 223
Chapter 6 Hebbian Learning and Principal Component Analysis 279
Chapter 7 Competitive and Kohonen Networks 333
Chapter 8 Principles of Digital Signal Processing 364
Chapter 9 Adaptive Filters 429
Chapter 10 Temporal Processing with Neural Networks 473
Chapter 11 Training and Using Recurrent Networks 525
Appendix A Elements of Linear Algebra and Pattern Recognition 589
Appendix B NeuroSolutions Tutorial 613
Appendix C Data Directory 637
Glossary 639
Index 647
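The common framework the authors build on (neural networks and adaptive filters sharing the same error-driven update) is captured by the LMS, or delta, rule; here is a minimal system-identification sketch assuming an invented toy 3-tap filter, unrelated to the book's NeuroSolutions simulations.

# Minimal LMS adaptive filter sketch (illustrative; unrelated to the book's
# NeuroSolutions simulations). The filter learns to imitate an unknown 3-tap
# FIR system from input/output samples using the delta (LMS) rule.
import random

true_taps = [0.5, -0.3, 0.2]           # "unknown" system (invented for the demo)
w = [0.0, 0.0, 0.0]                    # adaptive filter weights
mu = 0.05                              # step size (learning rate)

x_hist = [0.0, 0.0, 0.0]               # most recent inputs, newest first
for n in range(2000):
    x = random.uniform(-1, 1)
    x_hist = [x] + x_hist[:2]
    d = sum(h * xi for h, xi in zip(true_taps, x_hist))      # desired output
    y = sum(wi * xi for wi, xi in zip(w, x_hist))            # filter output
    e = d - y                                                # error signal
    w = [wi + mu * e * xi for wi, xi in zip(w, x_hist)]      # LMS (delta rule) update

print([round(wi, 3) for wi in w])      # should approach [0.5, -0.3, 0.2]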
£122.35
John Wiley and Sons Ltd Connectionism and the Mind
Book SynopsisConnectionism and the Mind provides a clear and balanced introduction to connectionist networks and explores theoretical and philosophical implications. Much of this discussion from the first edition has been updated, and three new chapters have been added on the relation of connectionism to recent work on dynamical systems theory, artificial life, and cognitive neuroscience. Read two of the sample chapters on line: Connectionism and the Dynamical Approach to Cognition: http://www.blackwellpublishing.com/pdf/bechtel.pdf Networks, Robots, and Artificial Life: http://www.blackwellpublishing.com/pdf/bechtel2.pdfTrade Review"Much more than just an update, this is a thorough and exciting re-build of the classic text. Excellent new treatments of modularity, dynamics, artificial life, and cognitive neuroscience locate connectionism at the very heart of contemporary debates. A superb combination of detail, clarity, scope, and enthusiasm." Andy Clark, University of Sussex "Connectionism and the Mind is an extraordinarily comprehensive and thoughtful review of connectionism, with particular emphasis on recent developments. This new edition will be a valuable primer to those new to the field. But there is more: Bechtel and Abrahamsen's trenchant and even-handed analysis of the conceptual issues that are addressed by connectionist models constitute an important original theoretical contribution to cognitive science." Jeff Elman, University of California at San DiegoTable of ContentsPreface xiii 1 Networks Versus Symbol Systems: Two Approaches To Modeling Cognition 1 1.1 A Revolution in the Making? 1 1.2 Forerunners of Connectionism: Pandemonium and Perceptrons 2 1.3 The Allure of Symbol Manipulation 7 1.3.1 From logic to artificial intelligence 7 1.3.2 From linguistics to information processing 10 1.3.3 Using artificial intelligence to simulate human information processing 11 1.4 The Decline and Re-emergence of Network Models 12 1.4.1 Problems with perceptrons 12 1.4.2 Re-emergence: The new connectionism 13 1.5 New Alliances and Unfinished Business 15 Notes 17 Sources and Suggested Readings 17 2 Connectionist Architectures 19 2.1 The Flavor of Connectionist Processing: A Simulation of Memory Retrieval 19 2.1.1 Components of the model 20 2.1.2 Dynamics of the model 22 2.1.2.1 Memory retrieval in the Jets and Sharks network 22 2.1.2.2 The equations 23 2.1.3 Illustrations of the dynamics of the model 24 2.1.3.1 Retrieving properties from a name 24 2.1.3.2 Retrieving a name from other properties 26 2.1.3.3 Categorization and prototype formation 26 2.1.3.4 Utilizing regularities 28 2.2 The Design Features of a Connectionist Architecture 29 2.2.1 Patterns of connectivity 29 2.2.1.1 Feedforward networks 29 2.2.1.2 Interactive networks 31 2.2.2 Activation rules for units 32 2.2.2.1 Feedforward networks 32 2.2.2.2 Interactive networks: Hopfield networks and Boltzmann machines 34 2.2.2.3 Spreading activation vs. 
interactive connectionist models 37 2.2.3 Learning principles 38 2.2.4 Semantic interpretation of connectionist systems 40 2.2.4.1 Localist networks 41 2.2.4.2 Distributed networks 41 2.3 The Allure of the Connectionist Approach 45 2.3.1 Neural plausibility 45 2.3.2 Satisfaction of soft constraints 46 2.3.3 Graceful degradation 48 2.3.4 Content-addressable memory 49 2.3.5 Capacity to learn from experience and generalize 51 2.4 Challenges Facing Connectionist Networks 51 2.5 Summary 52 Notes 52 Sources and Recommended Readings 53 3 Learning 54 3.1 Traditional and Contemporary Approaches to Learning 54 3.1.1 Empiricism 54 3.1.2 Rationalism 55 3.1.3 Contemporary cognitive science 56 3.2 Connectionist Models of Learning 57 3.2.1 Learning procedures for two-layer feedforward networks 58 3.2.1.1 Training and testing a network 58 3.2.1.2 The Hebbian rule 58 3.2.1.3 The delta rule 60 3.2.1.4 Comparing the Hebbian and delta rules 67 3.2.1.5 Limitations of the delta rule: The XOR problem 67 3.2.2 The backpropagation learning procedure for multi-layered networks 69 3.2.2.1 Introducing hidden units and backpropagation learning 69 3.2.2.2 Using backpropagation to solve the XOR problem 74 3.2.2.3 Using backpropagation to train a network to pronounce words 77 3.2.2.4 Some drawbacks of using backpropagation 78 3.2.3 Boltzmann learning procedures for non-layered networks 79 3.2.4 Competitive learning 80 3.2.5 Reinforcement learning 81 3.3 Some Issues Regarding Learning 82 3.3.1 Are connectionist systems associationist? 82 3.3.2 Possible roles for innate knowledge 84 3.3.2.1 Networks and the rationalist–empiricist continuum 84 3.3.2.2 Rethinking innateness: Connectionism and emergence 85 Notes 87 Sources and Suggested Readings 88 4 Pattern Recognition and Cognition 89 4.1 Networks as Pattern Recognition Devices 90 4.1.1 Pattern recognition in two-layer networks 90 4.1.2 Pattern recognition in multi-layered networks 93 4.1.2.1 McClelland and Rumelhart’s interactive activation model of word recognition 93 4.1.2.2 Evaluating the interactive activation model of word recognition 100 4.1.3 Generalization and similarity 101 4.2 Extending Pattern Recognition to Higher Cognition 102 4.2.1 Smolensky’s proposal: Reasoning in harmony networks 103 4.2.2 Margolis’s proposal: Cognition as sequential pattern recognition 103 4.3 Logical Inference as Pattern Recognition 106 4.3.1 What is it to learn logic? 106 4.3.2 A network for evaluating validity of arguments 109 4.3.3 Analyzing how a network evaluates arguments 112 4.3.4 A network for constructing derivations 115 4.4 Beyond Pattern Recognition 117 Notes 118 Sources and Suggested Readings 119 5 Are Rules Required to Process Representations? 120 5.1 Is Language Use Governed by Rules? 120 5.2 Rumelhart and McClelland’s Model of Past-tense Acquisition 122 5.2.1 A pattern associator with Wickelfeature encodings 122 5.2.2 Activation function and learning procedure 126 5.2.3 Overregularization in a simpler network: The rule of 78 127 5.2.4 Modeling U-shaped learning 130 5.2.5 Modeling differences between different verb classes 133 5.3Pinker and Prince’s Arguments for Rules 135 5.3.1 Overview of the critique of Rumelhart and McClelland’s model 135 5.3.2 Putative linguistic inadequacies 136 5.3.3 Putative behavioral inadequacies 139 5.3.4 Do the inadequacies reflect inherent limitations of PDP networks? 
140 5.4 Accounting for the U-shaped Learning Function 141 5.4.1 The role of input for children 142 5.4.2 The role of input for networks: The rule of 78 revisited 146 5.4.3 Plunkett and Marchman’s simulations of past-tense acquisition 148 5.5 Conclusion 152 Notes 153 Sources and Suggested Readings 155 6 Are Syntactically Structured Representations Needed? 156 6.1 Fodor and Pylyshyn’s Critique: The Need for Symbolic Representations with Constituent Structure 156 6.1.1 The need for compositional syntax and semantics 156 6.1.2 Connectionist representations lack compositionality 158 6.1.3 Connectionism as providing mere implementation 160 6.2 First Connectionist Response: Explicitly Implementing Rules and Representations 163 6.2.1 Implementing a production system in a network 163 6.2.2 The variable binding problem 165 6.2.3 Shastri and Ajjanagadde’s connectionist model of variable binding 166 6.3Second Connectionist Response: Implementing Functionally Compositional Representations 170 6.3.1 Functional vs. concatenative compositionality 170 6.3.2 Developing compressed representations using Pollack’s RAAM networks 171 6.3.3 Functional compositionality of compressed representations 175 6.3.4 Performing operations on compressed representations 177 6.4 Third Connectionist Response: Employing Procedural Knowledge with External Symbols 178 6.4.1 Temporal dependencies in processing language 179 6.4.2 Achieving short-term memory with simple recurrent networks 180 6.4.3 Elman’s first study: Learning grammatical categories 181 6.4.4 Elman’s second study: Respecting dependency relations 184 6.4.5 Christiansen’s extension: Pushing the limits of SRNs 187 6.5 Using External Symbols to Provide Exact Symbol Processing 190 6.6 Clarifying the Standard: Systematicity and Degree of Generalizability 194 6.7 Conclusion 197 Notes 198 Sources and Suggested Readings 199 7 Simulating Higher Cognition: a Modular Architecture For Processing Scripts 200 7.1 Overview of Scripts 200 7.2 Overview of Miikkulainen’s DISCERN System 201 7.3Modular Connectionist Architectures 203 7.4 FGREP: An Architecture that Allows the System to Devise Its Own Representations 206 7.4.1 Why FGREP? 
206 7.4.2 Exploring FGREP in a simple sentence parser 208 7.4.3 Exploring representations for words in categories 210 7.4.4 Moving to multiple modules: The DISCERN system 212 7.5 A Self-organizing Lexicon Using Kohonen Feature Maps 212 7.5.1 Innovations in lexical design 212 7.5.2 Using Kohonen feature maps in DISCERN’s lexicon 213 7.5.2.1 Orthography: From high-dimensional vector representations to map units 213 7.5.2.2 Associative connections: From the orthographic map to the semantic map 216 7.5.2.3 Semantics: From map unit to high-dimensional vector representations 216 7.5.2.4 Reversing direction: From semantic to orthographic representations 216 7.5.3 Advantages of Kohonen feature maps 216 7.6 Encoding and Decoding Stories as Scripts 217 7.6.1 Using recurrent FGREP modules in DISCERN 217 7.6.2 Using the Sentence Parser and Story Parser to encode stories 218 7.6.3 Using the Story Generator and Sentence Generator to paraphrase stories 221 7.6.4 Using the Cue Former and Answer Producer to answer questions 223 7.7 A Connectionist Episodic Memory 223 7.7.1 Making Kohonen feature maps hierarchical 223 7.7.2 How role-binding maps become self-organized 225 7.7.3 How role-binding maps become trace feature maps 225 7.8 Performance: Paraphrasing Stories and Answering Questions 228 7.8.1 Training and testing DISCERN 228 7.8.2 Watching DISCERN paraphrase a story 229 7.8.3 Watching DISCERN answer questions 229 7.9 Evaluating DISCERN 231 7.10 Paths Beyond the First Decade of Connectionism 233 Notes 234 Sources and Suggested Readings 234 8 Connectionism and the Dynamical Approach to Cognition 235 8.1 Are We on the Road to a Dynamical Revolution? 235 8.2 Basic Concepts of DST: The Geometry of Change 237 8.2.1 Trajectories in state space: Predators and prey 237 8.2.2 Bifurcation diagrams and chaos 240 8.2.3 Embodied networks as coupled dynamical systems 242 8.3Using Dynamical Systems Tools to Analyze Networks 243 8.3.1 Discovering limit cycles in network controllers for robotic insects 244 8.3.2 Discovering multiple attractors in network models of reading 246 8.3.2.1 Modeling the semantic pathway 248 8.3.2.2 Modeling the phonological pathway 249 8.3.3 Discovering trajectories in SRNs for sentence processing 253 8.3.4 Dynamical analyses of learning in networks 256 8.4 Putting Chaos to Work in Networks 257 8.4.1 Skarda and Freeman’s model of the olfactory bulb 257 8.4.2 Shifting interpretations of ambiguous displays 260 8.5 Is Dynamicism a Competitor to Connectionism? 264 8.5.1 Van Gelder and Port’s critique of classic connectionism 264 8.5.2 Two styles of modeling 265 8.5.3 Mechanistic versus covering-law explanations 266 8.5.4 Representations: Who needs them? 270 8.6 Is Dynamicism Complementary to Connectionism? 
276 8.7 Conclusion 280 Notes 280 Sources and Suggested Readings 281 9 Networks, Robots, and Artificial Life 282 9.1 Robots and the Genetic Algorithm 282 9.1.1 The robot as an artificial lifeform 282 9.1.2 The genetic algorithm for simulated evolution 283 9.2 Cellular Automata and the Synthetic Strategy 284 9.2.1 Langton’s vision: The synthetic strategy 284 9.2.2 Emergent structures from simple beings: Cellular automata 286 9.2.3 Wolfram’s four classes of cellular automata 288 9.2.4 Langton and l at the edge of chaos 289 9.3Evolution and Learning in Food-seekers 291 9.3.1 Overview and study 1: Evolution without learning 291 9.3.2 The Baldwin effect and study 2: Evolution with learning 293 9.4 Evolution and Development in Khepera 295 9.4.1 Introducing Khepera 295 9.4.2 The development of phenotypes from genotypes 296 9.4.3 The evolution of genotypes 298 9.4.4 Embodied networks: Controlling real robots 298 9.5 The Computational Neuroethology of Robots 300 9.6 When Philosophers Encounter Robots 301 9.6.1 No Cartesian split in embodied agents? 301 9.6.2 No representations in subsumption architectures? 302 9.6.3 No intentionality in robots and Chinese rooms? 303 9.6.4 No armchair when Dennett does philosophy? 304 9.7 Conclusion 305 Sources and Suggested Readings 305 10 Connectionism and the Brain 306 10.1 Connectionism Meets Cognitive Neuroscience 306 10.2 Four Connectionist Models of Brain Processes 309 10.2.1 What/Where streams in visual processing 309 10.2.2 The role of the hippocampus in memory 313 10.2.2.1 The basic design and functions of the hippocampal system 313 10.2.2.2 Spatial navigation in rats 315 10.2.2.3 Spatial versus declarative memory accounts 316 10.2.2.4 Declarative memory in humans and monkeys 318 10.2.3 Simulating dyslexia in network models of reading 323 10.2.3.1 Double dissociations in dyslexia 323 10.2.3.2 Modeling deep dyslexia 327 10.2.3.3 Modeling surface dyslexia 331 10.2.3.4 Two pathways versus dual routes 335 10.2.4 The computational power of modular structure in neocortex 338 10.3The Neural Implausibility of Many Connectionist Models 341 10.3.1 Biologically implausible aspects of connectionist networks 342 10.3.2 How important is neurophysiological plausibility? 343 10.4 Whither Connectionism? 346 Notes 347 Sources and Suggested Readings 348 Appendix A: Notation 349 Appendix B: Glossary 350 Bibliography 363 Name Index 384 Subject Index 395
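The learning chapters listed above cover the Hebbian and delta rules, the XOR limitation of two-layer networks, and backpropagation with hidden units; as a rough illustration (not the book's code), here is a tiny one-hidden-layer network trained on XOR with backpropagation, using an invented random seed, hidden-layer size, and learning rate.

# Minimal backpropagation sketch on XOR (illustrative; not the book's code).
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 3)), np.zeros((1, 3))   # input -> hidden (3 hidden units)
W2, b2 = rng.normal(size=(3, 1)), np.zeros((1, 1))   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for epoch in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the output error through the hidden layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out.ravel(), 2))   # typically approaches [0, 1, 1, 0]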
£32.36
Taylor & Francis Ltd Stochastic Optimization for Large-scale Machine Learning
Book SynopsisAdvancements in the technology and availability of data sources have led to the `Big Data'' era. Working with large data offers the potential to uncover more fine-grained patterns and take timely and accurate decisions, but it also creates a lot of challenges such as slow training and scalability of machine learning models. One of the major challenges in machine learning is to develop efficient and scalable learning algorithms, i.e., optimization techniques to solve large scale learning problems.Stochastic Optimization for Large-scale Machine Learning identifies different areas of improvement and recent research directions to tackle the challenge. Developed optimisation techniques are also explored to improve machine learning algorithms based on data access and on first and second order optimisation methods.Key Features: Bridges machine learning and Optimisation. Bridges theory and practice in machine learning. Identifies key reTable of ContentsList of FiguresList of TablesPreface Section I BACKGROUND Introduction1.1 LARGE-SCALE MACHINE LEARNING 1.2 OPTIMIZATION PROBLEMS 1.3 LINEAR CLASSIFICATION1.3.1 Support Vector Machine (SVM) 1.3.2 Logistic Regression 1.3.3 First and Second Order Methods1.3.3.1 First Order Methods 1.3.3.2 Second Order Methods 1.4 STOCHASTIC APPROXIMATION APPROACH 1.5 COORDINATE DESCENT APPROACH 1.6 DATASETS 1.7 ORGANIZATION OF BOOK Optimisation Problem, Solvers, Challenges and Research Directions2.1 INTRODUCTION 2.1.1 Contributions 2.2 LITERATURE 2.3 PROBLEM FORMULATIONS 2.3.1 Hard Margin SVM (1992) 2.3.2 Soft Margin SVM (1995) 2.3.3 One-versus-Rest (1998) 2.3.4 One-versus-One (1999) 2.3.5 Least Squares SVM (1999) 2.3.6 v-SVM (2000) 2.3.7 Smooth SVM (2001) 2.3.8 Proximal SVM (2001) 2.3.9 Crammer Singer SVM (2002) 2.3.10 Ev-SVM (2003) 2.3.11 Twin SVM (2007) 2.3.12 Capped lp-norm SVM (2017) 2.4 PROBLEM SOLVERS 2.4.1 Exact Line Search Method 2.4.2 Backtracking Line Search 2.4.3 Constant Step Size 2.4.4 Lipschitz & Strong Convexity Constants 2.4.5 Trust Region Method 2.4.6 Gradient Descent Method 2.4.7 Newton Method 2.4.8 Gauss-Newton Method 2.4.9 Levenberg-Marquardt Method 2.4.10 Quasi-Newton Method 2.4.11 Subgradient Method 2.4.12 Conjugate Gradient Method 2.4.13 Truncated Newton Method 2.4.14 Proximal Gradient Method 2.4.15 Recent Algorithms 2.5 COMPARATIVE STUDY 2.5.1 Results from Literature 2.5.2 Results from Experimental Study 2.5.2.1 Experimental Setup and Implementation Details 2.5.2.2 Results and Discussions 2.6 CURRENT CHALLENGES AND RESEARCH DIRECTIONS 2.6.1 Big Data Challenge 2.6.2 Areas of Improvement 2.6.2.1 Problem Formulations 2.6.2.2 Problem Solvers 2.6.2.3 Problem Solving Strategies/Approaches 2.6.2.4 Platforms/Frameworks 2.6.3 Research Directions 2.6.3.1 Stochastic Approximation Algorithms 2.6.3.2 Coordinate Descent Algorithms 2.6.3.3 Proximal Algorithms 2.6.3.4 Parallel/Distributed Algorithms 2.6.3.5 Hybrid Algorithms 2.7 CONCLUSION Section II FIRST ORDER METHODSMini-batch and Block-coordinate Approach 3.1 INTRODUCTION 3.1.1 Motivation 3.1.2 Batch Block Optimization Framework (BBOF) 3.1.3 Brief Literature Review 3.1.4 Contributions 3.2 STOCHASTIC AVERAGE ADJUSTED GRADIENT (SAAG) METHODS3.3 ANALYSIS 3.4 NUMERICAL EXPERIMENTS 3.4.1 Experimental setup 3.4.2 Convergence against epochs 3.4.3 Convergence against Time 3.5 CONCLUSION AND FUTURE SCOPE Variance Reduction Methods 4.1 INTRODUCTION 4.1.1 Optimization Problem 4.1.2 Solution Techniques for Optimization Problem 4.1.3 Contributions 4.2 NOTATIONS AND RELATED WORK 4.2.1 Notations 4.2.2 
Related Work 4.3 SAAG-I, II AND PROXIMAL EXTENSIONS 4.4 SAAG-III AND IV ALGORITHMS 4.5 ANALYSIS 4.6 EXPERIMENTAL RESULTS 4.6.1 Experimental Setup 4.6.2 Results with Smooth Problem 4.6.3 Results with non-smooth Problem 4.6.4 Mini-batch Block-coordinate versus mini-batch setting 4.6.5 Results with SVM 4.7 CONCLUSION Learning and Data Access 5.1 INTRODUCTION 5.1.1 Optimization Problem 5.1.2 Literature Review 5.1.3 Contributions 5.2 SYSTEMATIC SAMPLING 5.2.1 Definitions 5.2.2 Learning using Systematic Sampling 5.3 ANALYSIS 5.4 EXPERIMENTS 5.4.1 Experimental Setup 5.4.2 Implementation Details 5.4.3 Results 5.5 CONCLUSION Section III SECOND ORDER METHODS Mini-batch Block-coordinate Newton Method 6.1 INTRODUCTION 6.1.1 Contributions 6.2 MBN 6.3 EXPERIMENTS 6.3.1 Experimental Setup 6.3.2 Comparative Study 6.4 CONCLUSION Stochastic Trust Region Inexact Newton Method 7.1 INTRODUCTION 7.1.1 Optimization Problem 7.1.2 Solution Techniques 7.1.3 Contributions 7.2 LITERATURE REVIEW 7.3 TRUST REGION INEXACT NEWTON METHOD 7.3.1 Inexact Newton Method 7.3.2 Trust Region Inexact Newton Method 7.4 STRON 7.4.1 Complexity 7.4.2 Analysis 7.5 EXPERIMENTAL RESULTS 7.5.1 Experimental Setup 7.5.2 Comparative Study 7.5.3 Results with SVM 7.6 EXTENSIONS 7.6.1 PCG Subproblem Solver 17.6.2 Stochastic Variance Reduced Trust Region Inexact Newton Method 7.7 CONCLUSION Section IV CONCLUSIONConclusion and Future Scope 8.1 FUTURE SCOPE 142 Bibliography Index
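As a point of reference for the first-order methods surveyed above, here is a generic mini-batch stochastic gradient descent sketch for L2-regularized logistic regression on synthetic data; it illustrates plain SGD only and is not the SAAG, MBN, or STRON algorithms developed in the book. The data sizes, step size, and batch size are invented.

# Generic mini-batch SGD for L2-regularized logistic regression (illustrative;
# plain SGD, not the SAAG or STRON methods developed in the book).
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)   # labels in {0, 1}

def batch_grad(w, Xb, yb, lam=1e-3):
    # gradient of the average logistic loss plus L2 penalty on one mini-batch
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    return Xb.T @ (p - yb) / len(yb) + lam * w

w, lr, batch = np.zeros(d), 0.5, 64
for epoch in range(20):
    idx = rng.permutation(n)                      # reshuffle each epoch
    for start in range(0, n, batch):
        b = idx[start:start + batch]
        w -= lr * batch_grad(w, X[b], y[b])       # one stochastic step per mini-batch

acc = np.mean((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == y.astype(bool))
print(f"training accuracy: {acc:.3f}")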
£142.50
Taylor & Francis Ltd Cognitive and Neural Modelling for Visual
Book SynopsisFocusing on how visual information is represented, stored and extracted in the human brain, this book uses cognitive neural modeling in order to show how visual information is represented and memorized in the brain. Breaking through traditional visual information processing methods, the author combines our understanding of perception and memory from the human brain with computer vision technology, and provides a new approach for image recognition and classification. While biological visual cognition models and human brain memory models are established, applications such as pest recognition and carrot detection are also involved in this book.Given the range of topics covered, this book is a valuable resource for students, researchers and practitioners interested in the rapidly evolving field of neurocomputing, computer vision and machine learning.Table of Contents1. Introduction 2. Methods of visual perception and memory modeling 3. Bio-inspired model for object recognition based on histogram of oriented gradients 4. Modeling object recognition in visual cortex using multiple firing K-means and non-negative sparse coding 5. Biological modeling of human visual system using GLoP filters and sparse coding on multi-manifolds 6. Increment learning and rapid retrieval of visual information based on pattern association memory 7. Memory modeling based on free energy theory and restricted Boltzmann machine 8. Research on insect pest image detection and recognition based on bio-inspired methods 9. Carrot defect detection and grading based on computer vision and deep learning
£74.09
Taylor & Francis Ltd AI for Finance
Book SynopsisFinance students and practitioners may ask: can machines learn everything? Could AI help me? Computing students or practitioners may ask: which of my skills could contribute to finance? Where in finance should I pay attention? This book aims to answer these questions. No prior knowledge is expected in AI or finance.Including original research, the book explains the impact of ignoring computation in classical economics; examines the relationship between computing and finance and points out potential misunderstandings between economists and computer scientists; and introduces Directional Change and explains how this can be used.To finance students and practitioners, this book will explain the promise of AI, as well as its limitations. It will cover knowledge representation, modelling, simulation and machine learning, explaining the principles of how they work. To computing students and practitioners, this book will introduce the financial applications in which AI has madTrade Review“This important book is an unusually topical attempt to introduce readers to the relationship between the technical analysis of financial market prices and the automated implementation of its findings. The book will be of considerable interest to those who wish to know about this relationship in an eminently readable form: both professional financial market analysts and those considering future employment in the field.” --Michael Dempster, Professor Emeritus in the Statistical Laboratory at the University of Cambridge“AI is an important part of finance today. Students who want to join the finance industry should read this book. The trained eyes will also find a lot of insights in the book. I cannot think of any other book that teaches computational finance at a beginner's level but at the same time is useful to practitioners.” --Amadeo Alentorn, PhD, Head of Systematic Equities at Jupiter Asset Management"AI for Finance is an excellent primer for experts and newcomers seeking to unlock the potential of AI. The book combines deep thinking with a bird’s eye view of the whole field - the ideal text to get inspired and apply AI. A big thank you to Edward Tsang, a pioneer of AI and quantitative finance, for making the concepts and usage of AI easily accessible to academics and practitioners." --Richard Olsen, Founder and CEO of Lykke, co-founder of OANDA, and pioneer in high frequency finance and fintech“Without a doubt, AI symbolizes the future of finance and, in this important book, Professor Tsang provides an excellent account of its mechanics, concepts and strategies. Books featuring AI in finance are rare so practitioners and students would do well to read it to gain focus and valuable insights into this fast-evolving technology. Congratulations to Professor Tsang for providing a readable and engaging work in a complex technology that will appeal to all levels of readers!” --Dr David Norman, Founder of the TTC Institute"The use of AI/ML in the financial industry is now more than a hype. In financial institutions there are numerous active transformation programs to introduce AI/ML enabled products in areas such as risk, trading and advanced analytics. In this book, Edward, one of the early adopters of AI in finance, has provided an insightful guide for both finance practitioners and academics. I can see this book becoming a major reference in real-world applied AI in finance. 
Directional Change (Chapter 6) should be of particular interest to data scientists in finance, as how one collects data determines what one can reason about." -- Dr Ali Rais Shaghaghi, Lead Data Scientist at NatWest Group
Table of Contents
1. AI-Finance Synergy, 2. Machine Learning Knows No Boundaries?, 3. Machine Learning in Finance, 4. Modelling, Simulation and Machine Learning, 5. Portfolio Optimization, 6. Financial Data: Beyond Time Series, 7. Over the Horizon
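Directional Change, highlighted above, samples a price series by events rather than by clock time. The sketch below follows one common formulation from the general literature (an event is confirmed when price reverses by a threshold theta from the last extreme) and is not copied from Chapter 6; the function name, toy prices, and threshold are invented.

# Sketch of Directional Change event detection (one common formulation from the
# general literature; parameter names and the toy price series are invented).
def directional_changes(prices, theta=0.02):
    """Return (index, 'up'/'down') events where price reverses by at least theta."""
    events = []
    trend = None                 # current trend: 'up', 'down', or unknown
    extreme = prices[0]          # last extreme price seen in the current trend
    for i, p in enumerate(prices[1:], start=1):
        if trend in (None, "down") and p >= extreme * (1 + theta):
            events.append((i, "up"))       # upward directional change confirmed
            trend, extreme = "up", p
        elif trend in (None, "up") and p <= extreme * (1 - theta):
            events.append((i, "down"))     # downward directional change confirmed
            trend, extreme = "down", p
        else:
            # extend the current run: track the new extreme in the trend direction
            if trend == "up":
                extreme = max(extreme, p)
            elif trend == "down":
                extreme = min(extreme, p)
    return events

prices = [100, 100.5, 101, 103, 102, 100.7, 100.2, 102.5, 104]
print(directional_changes(prices, theta=0.02))   # e.g. [(3, 'up'), (5, 'down'), (7, 'up')]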
£114.00
CRC Press Deep Learning Approach for Natural Language Processing, Speech, and Computer Vision
Book Synopsis
Deep Learning Approach for Natural Language Processing, Speech, and Computer Vision provides an overview of general deep learning methodology and its applications in natural language processing (NLP), speech, and computer vision tasks. It simplifies and presents the concepts of deep learning in a comprehensive manner, with suitable, full-fledged examples of deep learning models, with an aim to bridge the gap between the theoretical and the applications using case studies with code, experiments, and supporting analysis.
Features:
Covers latest developments in deep learning techniques as applied to audio analysis, computer vision, and natural language processing.
Introduces contemporary applications of deep learning techniques as applied to audio, textual, and visual processing.
Discovers deep learning frameworks and libraries for NLP, speech, and computer vision in Python.
Gives insights into us
Table of Contents
1 Introduction
2 Natural Language Processing
3 State-of-the-Art Natural Language
4 Applications of Natural Language Processing
5 Fundamentals of Speech Recognition
6 Deep Learning Models for Speech Recognition
7 End-to-End Speech Recognition Models
8 Computer Vision Basics
9 Deep Learning Models for Computer Vision
10 Applications of Computer Vision
£118.75
Cambridge University Press Deep Learning on Graphs
Book Synopsis
Deep learning on graphs has become one of the hottest topics in machine learning. The book consists of four parts to best accommodate our readers with diverse backgrounds and purposes of reading. Part 1 introduces basic concepts of graphs and deep learning; Part 2 discusses the most established methods from the basic to advanced settings; Part 3 presents the most typical applications including natural language processing, computer vision, data mining, biochemistry and healthcare; and Part 4 describes advances of methods and applications that tend to be important and promising for future research. The book is self-contained, making it accessible to a broader range of readers including (1) senior undergraduate and graduate students; (2) practitioners and project managers who want to adopt graph neural networks into their products and platforms; and (3) researchers without a computer science background who want to use graph neural networks to advance their disciplines.
Trade Review
'This timely book covers a combination of two active research areas in AI: deep learning and graphs. It serves the pressing need for researchers, practitioners, and students to learn these concepts and algorithms, and apply them in solving real-world problems. Both authors are world-leading experts in this emerging area.' Huan Liu, Arizona State University
'Deep learning on graphs is an emerging and important area of research. This book by Yao Ma and Jiliang Tang covers not only the foundations, but also the frontiers and applications of graph deep learning. This is a must-read for anyone considering diving into this fascinating area.' Shuiwang Ji, Texas A&M University
'The first textbook of Deep Learning on Graphs, with systematic, comprehensive and up-to-date coverage of graph neural networks, autoencoder on graphs, and their applications in natural language processing, computer vision, data mining, biochemistry and healthcare. A valuable book for anyone to learn this hot theme!' Jiawei Han, University of Illinois at Urbana-Champaign
'This book systematically covers the foundations, methodologies, and applications of deep learning on graphs. Especially, it comprehensively introduces graph neural networks and their recent advances. This book is self-contained and nicely structured and thus suitable for readers with different purposes. I highly recommend those who want to conduct research in this area or deploy graph deep learning techniques in practice to read this book.' Charu Aggarwal, Distinguished Research Staff Member at IBM and recipient of the W. Wallace McDowell Award
Table of Contents
1. Deep Learning on Graphs: An Introduction; 2. Foundation of Graphs; 3. Foundation of Deep Learning; 4. Graph Embedding; 5. Graph Neural Networks; 6. Robust Graph Neural Networks; 7. Scalable Graph Neural Networks; 8. Graph Neural Networks for Complex Graphs; 9. Beyond GNNs: More Deep Models for Graphs; 10. Graph Neural Networks in Natural Language Processing; 11. Graph Neural Networks in Computer Vision; 12. Graph Neural Networks in Data Mining; 13. Graph Neural Networks in Biochemistry and Healthcare; 14. Advanced Topics in Graph Neural Networks; 15. Advanced Applications in Graph Neural Networks.
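Since the book centres on graph neural networks, a minimal sketch of the widely used graph-convolution propagation rule may help anchor the idea; the code below implements the generic symmetrically normalized GCN layer from the literature on an invented toy graph and is not taken from the book.

# Minimal single GCN layer (generic propagation rule from the literature,
# not code from the book). The toy 4-node graph and feature sizes are invented.
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)          # adjacency matrix
X = np.eye(4)                                      # one-hot node features
W = np.random.default_rng(0).normal(size=(4, 2))   # trainable weights (4 -> 2 dims)

A_hat = A + np.eye(4)                              # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt           # symmetric normalization

H = np.maximum(0, A_norm @ X @ W)                  # aggregate neighbours, transform, ReLU
print(H.round(3))                                  # each row is a 2-d embedding of a node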
£44.64
John Wiley & Sons Inc Fuzzy Computing in Data Science
Book SynopsisFUZZY COMPUTING IN DATA SCIENCE This book comprehensively explains how to use various fuzzy-based models to solve real-time industrial challenges. The book provides information about fundamental aspects of the field and explores the myriad applications of fuzzy logic techniques and methods. It presents basic conceptual considerations and case studies of applications of fuzzy computation. It covers the fundamental concepts and techniques for system modeling, information processing, intelligent system design, decision analysis, statistical analysis, pattern recognition, automated learning, system control, and identification. The book also discusses the combination of fuzzy computation techniques with other computational intelligence approaches such as neural and evolutionary computation. Audience Researchers and students in computer science, artificial intelligence, machine learning, big data analytics, and information and communication technology.Table of ContentsPreface xvii Acknowledgement xxi 1 Band Reduction of HSI Segmentation Using FCM 1 V. Saravana Kumar, S. Anantha Sivaprakasam, E.R. Naganathan, Sunil Bhutada and M. Kavitha 1.1 Introduction 2 1.2 Existing Method 3 1.2.1 K-Means Clustering Method 3 1.2.2 Fuzzy C-Means 3 1.2.3 Davies Bouldin Index 4 1.2.4 Data Set Description of HSI 4 1.3 Proposed Method 5 1.3.1 Hyperspectral Image Segmentation Using Enhanced Estimation of Centroid 5 1.3.2 Band Reduction Using K-Means Algorithm 6 1.3.3 Band Reduction Using Fuzzy C-Means 7 1.4 Experimental Results 8 1.4.1 DB Index Graph 8 1.4.2 K-Means–Based PSC (EEOC) 9 1.4.3 Fuzzy C-Means–Based PSC (EEOC) 10 1.5 Analysis of Results 12 1.6 Conclusions 16 References 17 2 A Fuzzy Approach to Face Mask Detection 21 Vatsal Mishra, Tavish Awasthi, Subham Kashyap, Minerva Brahma, Monideepa Roy and Sujoy Datta 2.1 Introduction 22 2.2 Existing Work 23 2.3 The Proposed Framework 26 2.4 Set-Up and Libraries Used 26 2.5 Implementation 27 2.6 Results and Analysis 29 2.7 Conclusion and Future Work 33 References 34 3 Application of Fuzzy Logic to the Healthcare Industry 37 Biswajeet Sahu, Lokanath Sarangi, Abhinadita Ghosh and Hemanta Kumar Palo 3.1 Introduction 38 3.2 Background 41 3.3 Fuzzy Logic 42 3.4 Fuzzy Logic in Healthcare 45 3.5 Conclusions 49 References 50 4 A Bibliometric Approach and Systematic Exploration of Global Research Activity on Fuzzy Logic in Scopus Database 55 Sugyanta Priyadarshini and Nisrutha Dulla 4.1 Introduction 56 4.2 Data Extraction and Interpretation 58 4.3 Results and Discussion 59 4.3.1 Per Year Publication and Citation Count 59 4.3.2 Prominent Affiliations Contributing Toward Fuzzy Logic 60 4.3.3 Top Journals Emerging in Fuzzy Logic in Major Subject Areas 61 4.3.4 Major Contributing Countries Toward Fuzzy Research Articles 63 4.3.5 Prominent Authors Contribution Toward the Fuzzy Logic Analysis 66 4.3.6 Coauthorship of Authors 67 4.3.7 Cocitation Analysis of Cited Authors 68 4.3.8 Cooccurrence of Author Keywords 68 4.4 Bibliographic Coupling of Documents, Sources, Authors, and Countries 70 4.4.1 Bibliographic Coupling of Documents 70 4.4.2 Bibliographic Coupling of Sources 71 4.4.3 Bibliographic Coupling of Authors 72 4.4.4 Bibliographic Coupling of Countries 73 4.5 Conclusion 74 References 76 5 Fuzzy Decision Making in Predictive Analytics and Resource Scheduling 79 Rekha A. Kulkarni, Suhas H. 
Patil and Bithika Bishesh 5.1 Introduction 80 5.2 History of Fuzzy Logic and Its Applications 81 5.3 Approximate Reasoning 82 5.4 Fuzzy Sets vs Classical Sets 83 5.5 Fuzzy Inference System 84 5.5.1 Characteristics of FIS 85 5.5.2 Working of FIS 85 5.5.3 Methods of FIS 86 5.6 Fuzzy Decision Trees 86 5.6.1 Characteristics of Decision Trees 87 5.6.2 Construction of Fuzzy Decision Trees 87 5.7 Fuzzy Logic as Applied to Resource Scheduling in a Cloud Environment 88 5.8 Conclusion 90 References 91 6 Application of Fuzzy Logic and Machine Learning Concept in Sales Data Forecasting Decision Analytics Using ARIMA Model 93 S. Mala and V. Umadevi 6.1 Introduction 94 6.1.1 Aim and Scope 94 6.1.2 R-Tool 94 6.1.3 Application of Fuzzy Logic 94 6.1.4 Dataset 95 6.2 Model Study 96 6.2.1 Introduction to Machine Learning Method 96 6.2.2 Time Series Analysis 96 6.2.3 Components of a Time Series 97 6.2.4 Concepts of Stationary 99 6.2.5 Model Parsimony 100 6.3 Methodology 100 6.3.1 Exploratory Data Analysis 100 6.3.1.1 Seed Types—Analysis 101 6.3.1.2 Comparison of Location and Seeds 101 6.3.1.3 Comparison of Season (Month) and Seeds 103 6.3.2 Forecasting 103 6.3.2.1 Auto Regressive Integrated Moving Average (ARIMA) 103 6.3.2.2 Data Visualization 106 6.3.2.3 Implementation Model 108 6.4 Result Analysis 108 6.5 Conclusion 110 References 110 7 Modified m-Polar Fuzzy Set ELECTRE-I Approach 113 Madan Jagtap, Prasad Karande and Pravin Patil 7.1 Introduction 114 7.1.1 Objectives 114 7.2 Implementation of m-Polar Fuzzy ELECTRE-I Integrated Shannon’s Entropy Weight Calculations 115 7.2.1 The m-Polar Fuzzy ELECTRE-I Integrated Shannon’s Entropy Weight Calculation Method 115 7.3 Application to Industrial Problems 118 7.3.1 Cutting Fluid Selection Problem 118 7.3.2 Results Obtained From m-Polar Fuzzy ELECTRE-I for Cutting Fluid Selection Problem 122 7.3.3 FMS Selection Problem 125 7.3.4 Results Obtained From m-Polar Fuzzy ELECTRE-I for FMS Selection 130 7.4 Conclusions 143 References 143 8 Fuzzy Decision Making: Concept and Models 147 Bithika Bishesh 8.1 Introduction 148 8.2 Classical Set 149 8.3 Fuzzy Set 150 8.4 Properties of Fuzzy Set 151 8.5 Types of Decision Making 153 8.5.1 Individual Decision Making 153 8.5.2 Multiperson Decision Making 157 8.5.3 Multistage Decision Making 158 8.5.4 Multicriteria Decision Making 160 8.6 Methods of Multiattribute Decision Making (MADM) 162 8.6.1 Weighted Sum Method (WSM) 162 8.6.2 Weighted Product Method (WPM) 162 8.6.3 Weighted Aggregates Sum Product Assessment (WASPAS) 163 8.6.4 Technique for Order Preference by Similarity to Ideal Solutions (TOPSIS) 166 8.7 Applications of Fuzzy Logic 167 8.8 Conclusion 169 References 169 9 Use of Fuzzy Logic for Psychological Support to Migrant Workers of Southern Odisha (India) 173 Sanjaya Kumar Sahoo and Sukanta Chandra Swain 9.1 Introduction 174 9.2 Objectives and Methodology 175 9.2.1 Objectives 175 9.2.2 Methodology 176 9.3 Effect of COVID-19 on the Psychology and Emotion of Repatriated Migrants 176 9.3.1 Psychological Variables Identified 176 9.3.2 Fuzzy Logic for Solace to Migrants 176 9.4 Findings 178 9.5 Way Out for Strengthening the Psychological Strength of the Migrant Workers through Technological Aid 178 9.6 Conclusion 179 References 180 10 Fuzzy-Based Edge AI Approach: Smart Transformation of Healthcare for a Better Tomorrow 181 B. RaviKrishna, Sirisha Potluri, J. 
Rethna Virgil Jeny, Guna Sekhar Sajja and Katta Subba Rao 10.1 Significance of Machine Learning in Healthcare 182 10.2 Cloud-Based Artificial Intelligent Secure Models 183 10.3 Applications and Usage of Machine Learning in Healthcare 183 10.3.1 Detecting Diseases and Diagnosis 183 10.3.2 Drug Detection and Manufacturing 183 10.3.3 Medical Imaging Analysis and Diagnosis 184 10.3.4 Personalized/Adapted Medicine 185 10.3.5 Behavioral Modification 185 10.3.6 Maintenance of Smart Health Data 185 10.3.7 Clinical Trial and Study 185 10.3.8 Crowdsourced Information Discovery 185 10.3.9 Enhanced Radiotherapy 186 10.3.10 Outbreak/Epidemic Prediction 186 10.4 Edge AI: For Smart Transformation of Healthcare 186 10.4.1 Role of Edge in Reshaping Healthcare 186 10.4.2 How AI Powers the Edge 187 10.5 Edge AI-Modernizing Human Machine Interface 188 10.5.1 Rural Medicine 188 10.5.2 Autonomous Monitoring of Hospital Rooms—A Case Study 188 10.6 Significance of Fuzzy in Healthcare 189 10.6.1 Fuzzy Logic—Outline 189 10.6.2 Fuzzy Logic-Based Smart Healthcare 190 10.6.3 Medical Diagnosis Using Fuzzy Logic for Decision Support Systems 191 10.6.4 Applications of Fuzzy Logic in Healthcare 193 10.7 Conclusion and Discussions 193 References 194 11 Video Conferencing (VC) Software Selection Using Fuzzy TOPSIS 197 Rekha Gupta 11.1 Introduction 197 11.2 Video Conferencing Software and Its Major Features 199 11.2.1 Video Conferencing/Meeting Software (VC/MS) for Higher Education Institutes 199 11.3 Fuzzy TOPSIS 203 11.3.1 Extension of TOPSIS Algorithm: Fuzzy TOPSIS 203 11.4 Sample Numerical Illustration 207 11.5 Conclusions 213 References 213 12 Estimation of Nonperforming Assets of Indian Commercial Banks Using Fuzzy AHP and Goal Programming 215 Kandarp Vidyasagar and Rajiv Kr. Dwivedi 12.1 Introduction 216 12.1.1 Basic Concepts of Fuzzy AHP and Goal Programming 217 12.2 Research Model 221 12.2.1 Average Growth Rate Calculation 227 12.3 Result and Discussion 233 12.4 Conclusion 234 References 234 13 Evaluation of Ergonomic Design for the Visual Display Terminal Operator at Static Work Under FMCDM Environment 237 Bipradas Bairagi 13.1 Introduction 238 13.2 Proposed Algorithm 240 13.3 An Illustrative Example on Ergonomic Design Evaluation 245 13.4 Conclusions 249 References 249 14 Optimization of Energy Generated from Ocean Wave Energy Using Fuzzy Logic 253 S. B. 
Goyal, Pradeep Bedi, Jugnesh Kumar and Prasenjit Chatterjee 14.1 Introduction 254 14.2 Control Approach in Wave Energy Systems 255 14.3 Related Work 257 14.4 Mathematical Modeling for Energy Conversion from Ocean Waves 259 14.5 Proposed Methodology 260 14.5.1 Wave Parameters 261 14.5.2 Fuzzy-Optimizer 262 14.6 Conclusion 264 References 264 15 The m-Polar Fuzzy TOPSIS Method for NTM Selection 267 Madan Jagtap and Prasad Karande 15.1 Introduction 268 15.2 Literature Review 268 15.3 Methodology 270 15.3.1 Steps of the mFS TOPSIS 270 15.4 Case Study 272 15.4.1 Effect of Analytical Hierarchy Process (AHP) Weight Calculation on the mFS TOPSIS Method 273 15.4.2 Effect of Shannon’s Entropy Weight Calculation on the m-Polar Fuzzy Set TOPSIS Method 277 15.5 Results and Discussions 281 15.5.1 Result Validation 281 15.6 Conclusions and Future Scope 283 References 284 16 Comparative Analysis on Material Handling Device Selection Using Hybrid FMCDM Methodology 287 Bipradas Bairagi 16.1 Introduction 288 16.2 MCDM Techniques 289 16.2.1 Fahp 289 16.2.2 Entropy Method as Weights (Influence) Evaluation Technique 290 16.3 The Proposed Hybrid and Super Hybrid FMCDM Approaches 291 16.3.1 Topsis 291 16.3.2 FMOORA Method 292 16.3.3 FVIKOR 292 16.3.4 Fuzzy Grey Theory (FGT) 293 16.3.5 COPRAS –G 293 16.3.6 Super Hybrid Algorithm 294 16.4 Illustrative Example 295 16.5 Results and Discussions 298 16.5.1 FTOPSIS 298 16.5.2 FMOORA 298 16.5.3 FVIKRA 298 16.5.4 Fuzzy Grey Theory (FGT) 299 16.5.5 COPRAS-G 299 16.5.6 Super Hybrid Approach (SHA) 299 16.6 Conclusions 302 References 302 17 Fuzzy MCDM on CCPM for Decision Making: A Case Study 305 Bimal K. Jena, Biswajit Das, Amarendra Baral and Sushanta Tripathy 17.1 Introduction 306 17.2 Literature Review 307 17.3 Objective of Research 308 17.4 Cluster Analysis 308 17.4.1 Hierarchical Clustering 309 17.4.2 Partitional Clustering 309 17.5 Clustering 310 17.6 Methodology 314 17.7 TOPSIS Method 316 17.8 Fuzzy TOPSIS Method 318 17.9 Conclusion 325 17.10 Scope of Future Study 326 References 326 Index 329
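Several of the chapters above rely on fuzzy c-means (FCM) clustering; the following is a minimal sketch of the standard FCM membership and centroid updates on invented 1-D toy data, following the usual textbook formulation rather than any chapter's specific algorithm.

# Minimal fuzzy c-means sketch (standard textbook updates; the toy data,
# cluster count, and fuzzifier m are invented, not taken from the book).
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 0.5, 50), rng.normal(5, 0.5, 50)])  # 1-D toy data
c, m = 2, 2.0                                    # number of clusters, fuzzifier
U = rng.random((c, len(x)))
U /= U.sum(axis=0)                               # memberships of each point sum to 1

for _ in range(50):
    centers = (U**m @ x) / (U**m).sum(axis=1)                  # membership-weighted centroids
    dist = np.abs(x[None, :] - centers[:, None]) + 1e-9        # point-to-center distances
    inv = dist ** (-2.0 / (m - 1.0))
    U = inv / inv.sum(axis=0)                                  # standard membership update

print(np.round(centers, 2))      # should land near 0 and 5
print(np.round(U[:, :3], 2))     # soft memberships of the first few points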
£133.20
Taylor & Francis Ltd A Primer on Machine Learning Applications in Civil Engineering
Book Synopsis
Machine learning has undergone rapid growth in diversification and practicality, and the repertoire of techniques has evolved and expanded. The aim of this book is to provide a broad overview of the available machine-learning techniques that can be utilized for solving civil engineering problems. The fundamentals of both theoretical and practical aspects are discussed in the domains of water resources/hydrological modeling, geotechnical engineering, construction engineering and management, and coastal/marine engineering. Complex civil engineering problems such as drought forecasting, river flow forecasting, modeling evaporation, estimation of dew point temperature, modeling compressive strength of concrete, ground water level forecasting, and significant wave height forecasting are also included.
Features
Exclusive information on machine learning and data analytics applications with respect to civil engineering
Includes many machi
Table of Contents
1. Introduction
2. Artificial Neural Networks
3. Fuzzy Logic
4. Support Vector Machine
5. Genetic Algorithm (GA)
6. Hybrid Systems
7. Data Statistics and Analytics
8. Applications in the Civil Engineering Domain
9. Conclusion and Future Scope of Work
£87.39
Taylor & Francis Ltd A First Course in Fuzzy Logic
Book Synopsis
A First Course in Fuzzy Logic, Fourth Edition is an expanded version of the successful third edition. It provides a comprehensive introduction to the theory and applications of fuzzy logic. This popular text offers a firm mathematical basis for the calculus of fuzzy concepts necessary for designing intelligent systems and a solid background for readers to pursue further studies and real-world applications.
New in the Fourth Edition:
Features new results on fuzzy sets of type-2
Provides more information on copulas for modeling dependence structures
Includes quantum probability for uncertainty modeling in social sciences, especially in economics
With its comprehensive updates, this new edition presents all the background necessary for students, instructors and professionals to begin using fuzzy logic in its many applications in computer science, mathematics
Table of Contents
The Concept of Fuzziness: Examples. Mathematical modeling. Some operations on fuzzy sets. Fuzziness as uncertainty.
Some Algebra of Fuzzy Sets: Boolean algebras and lattices. Equivalence relations and partitions. Composing mappings. Isomorphisms and homomorphisms. Alpha-cuts. Images of alpha-level sets.
Fuzzy Quantities: Fuzzy quantities. Fuzzy numbers. Fuzzy intervals.
Logical Aspects of Fuzzy Sets: Classical two-valued logic. A three-valued logic. Fuzzy logic. Fuzzy and Lukasiewicz logics. Interval-valued fuzzy logic.
Basic Connectives: t-norms. Generators of t-norms. Isomorphisms of t-norms. Negations. Nilpotent t-norms and negations. t-conorms. De Morgan systems. Groups and t-norms. Interval-valued fuzzy sets. Type-2 fuzzy sets.
Additional Topics on Connectives: Fuzzy implications. Averaging operators. Powers of t-norms. Sensitivity of connectives. Copulas and t-norms.
Fuzzy Relations: Definitions and examples. Binary fuzzy relations. Operations on fuzzy relations. Fuzzy partitions. Fuzzy relations as Chu spaces. Approximate reasoning. Approximate reasoning in expert systems. A simple form of generalized modus ponens. The compositional rule of inference.
Universal Approximation: Fuzzy rule bases. Design methodologies. Some mathematical background. Approximation capability.
Possibility Theory: Probability and uncertainty. Random sets. Possibility measures.
Partial Knowledge: Motivations. Belief functions and incidence algebras. Monotonicity. Beliefs, densities, and allocations. Belief functions on infinite sets. Mobius transforms of set-functions. Reasoning with belief functions. Decision making using belief functions. Rough sets. Conditional events.
Fuzzy Measures: Motivation and definitions. Fuzzy measures and lower probabilities. Fuzzy measures in other areas. Conditional fuzzy measures.
The Choquet Integral: The Lebesgue integral. The Sugeno integral. The Choquet integral.
Fuzzy Modeling and Control: Motivation for fuzzy control. The methodology of fuzzy control. Optimal fuzzy control. An analysis of fuzzy control techniques.
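The early chapters cover operations on fuzzy sets and the basic connectives (t-norms and t-conorms); the sketch below applies the standard minimum and product t-norms, the maximum t-conorm, the standard negation, and an alpha-cut to invented toy membership functions, using common definitions rather than the book's notation.

# Standard fuzzy-set operations on toy membership functions (common definitions,
# not the book's notation; the example sets are invented).
A = {"cold": 0.9, "warm": 0.4, "hot": 0.1}   # fuzzy set: element -> membership degree
B = {"cold": 0.3, "warm": 0.8, "hot": 0.6}

t_min  = {k: min(A[k], B[k]) for k in A}             # minimum t-norm: intersection
t_prod = {k: A[k] * B[k] for k in A}                 # product t-norm
s_max  = {k: max(A[k], B[k]) for k in A}             # maximum t-conorm: union
neg_A  = {k: 1 - A[k] for k in A}                    # standard negation

alpha_cut = {k for k, mu in A.items() if mu >= 0.4}  # crisp set of elements with membership >= alpha

print(t_min, t_prod, s_max, neg_A, alpha_cut, sep="\n")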
£114.00
John Wiley & Sons Inc Principles of Soft Computing Using Python
Book SynopsisPrinciples of Soft Computing Using Python Programming An accessible guide to the revolutionary techniques of soft computing Soft computing is a computing approach designed to replicate the human mind's unique capacity to integrate uncertainty and imprecision into its reasoning. It is uniquely suited to computing operations where rigid analytical models will fail to account for the variety and ambiguity of possible solutions. As machine learning and artificial intelligence become more and more prominent in the computing landscape, the potential for soft computing techniques to revolutionize computing has never been greater. Principles of Soft Computing Using Python Programming provides readers with the knowledge required to apply soft computing models and techniques to real computational problems. Beginning with a foundational discussion of soft or fuzzy computing and its differences from hard computing, it describes different models for soft computing and
£85.46
John Wiley and Sons Ltd Minds and Machines
Book SynopsisExamines different kinds of models and investigates some of the basic properties of connectionism in the context of synthetic psychology, including accounts of how the internal structure of connectionist networks can be interpreted. This title investigates basic properties of connectionism in the context of synthetic psychology. Trade Review: "In this remarkable book, Dawson refines and develops synthetic psychology – an approach to explaining mental capacities that takes as its inspiration the investigation of simple systems exhibiting emergent behavior. Rich with examples, the book shows with extraordinary clarity how ideas from embodied cognitive science, robotics, artificial life, and connectionism can be combined to shed new light on the workings of the mind. It's hard to imagine a better book for anyone wishing to understand the latest advances in cognitive science." Larry Shapiro, University of Wisconsin "Minds and Machines provides an easily understood introduction to synthetic psychology – start with simple processes, see what emerges, and analyze the resulting system. Dawson lays a solid foundation describing the strengths and weaknesses of various modeling approaches in psychology, and then builds on this by giving concrete examples of how connectionism – using the synthetic approach – can be used to provide simple explanations of seemingly complex cognitive phenomena." David A. Medler, The Medical College of Wisconsin "This is a wonderful book, both in terms of the thought-provoking technical content and the delightfully conversational style that readers have come to expect from the author of Understanding Cognitive Science. Dawson has a real gift for presenting complex ideas in an accessible and engaging way that does not dilute the scientific or philosophical intricacies involved." Stefan C. Kremer, University of Guelph, Canada "An important virtue of this book is that the content and order of presentation has clearly been tested at length in the classroom of a dedicated and creative teacher. The book has many illustrations from teaching practice, and would be an excellent basis for a senior undergraduate or introductory graduate course on cognitive modelling, and I'd be delighted to use it for that purpose myself ... This is a fine book, and I suspect it would be a valuable resource for those who don't know much about synthetic psychology but would like to get a clear sense of the lie of the land." David Spurrett, University of KwaZulu-Natal, Psychology in Society, 30, 2004, 77-79. Table of Contents: List of Figures. List of Tables. 1. The Kids in the Hall. Synthetic Versus Analytic Traditions. 2. Advantages and Disadvantages of Modeling. What Is A Model? Advantages and Disadvantages of Models. 3. Models of Data. An Example of a Model of Data. Properties of Models of Data. 4. Mathematical Models. An Example Mathematical Model. Mathematical Models vs. Models of Data. 5. Computer Simulations. A Sample Computer Simulation. Connectionist Models. Properties of Computer Simulations. 6. First Steps Toward Synthetic Psychology. Introduction. Building a Thoughtless Walker. Step 1: Synthesis. Step 2: Emergence. Step 3: Analysis. Issues Concerning Synthetic Psychology. 7. Uphill Analysis, Downhill Synthesis. Introduction. From Homeostats to Tortoises. Ashby’s Homeostat. Vehicles. Synthesis and Emergence: Some Modern Examples. The Law of Uphill Analysis and Downhill Synthesis. 8. Connectionism As Synthetic Psychology. Introduction. Beyond Sensory Reflexes.
Connectionism, Synthesis, and Representation. Summary and Conclusions. 9. Building Associations. From Associationism To Connectionism. Building An Associative Memory. Beyond the Limitations of Hebb Learning. Associative Memory and Synthetic Psychology. 10. Making Decisions. The Limits of Linearity. A Fundamental Nonlinearity. Building a Perceptron: A Nonlinear Associative Memory. The Psychology of Perceptrons. The Need for Layers. 11. Sequences of Decisions. The Logic of Layers. Training Multilayered Networks. A Simple Case Study: Exclusive Or. A Second Case Study: Classifying Musical Chords. A Third Case Study: From Connectionism to Selectionism. 12. From Synthesis To Analysis. Representing Musical Chords in a PDP Network. Interpreting the Internal Structure of Value Unit Networks. Network Interpretation and Synthetic Psychology. 13. From Here To Synthetic Psychology. References. Index
£99.86
Pearson Education Artificial Intelligence
Book SynopsisDr Michael Negnevitsky is a Professor in Electrical Engineering and Computer Science at the University of Tasmania, Australia. The book has developed from his lectures to undergraduates. Educated as an electrical engineer, Dr Negnevitsky's many interests include artificial intelligence and soft computing. His research involves the development and application of intelligent systems in electrical engineering, process control and environmental engineering. He has authored and co-authored over 300 research publications including numerous journal articles, four patents for inventions and two books. Trade Review: “This book covers many areas related to my module. I would be happy to recommend this book to my students. I believe my students would be able to follow this book without any difficulty. Book chapters are very well organised and this will help me to pick and choose the subjects related to this module.” Dr Ahmad Lotfi, Nottingham Trent University, UK. Table of Contents: Preface xii, New to this edition xiii, Overview of the book xiv, Acknowledgements xvii, 1 Introduction to knowledge-based intelligent systems 1, 1.1 Intelligent machines, or what machines can do 1, 1.2 The history of artificial intelligence, or from the ‘Dark Ages’ to knowledge-based systems 4, 1.3 Summary 17, Questions for review 21
£72.99
APress PyTorch Recipes
Book SynopsisLearn how to use PyTorch to build neural network models using code snippets updated for this second edition. This book includes new chapters covering topics such as distributed PyTorch modeling, deploying PyTorch models in production, and developments around PyTorch with updated code. You'll start by learning how to use tensors to develop and fine-tune neural network models and implement deep learning models such as LSTMs and RNNs. Next, you'll explore probability distribution concepts using PyTorch, as well as supervised and unsupervised algorithms with PyTorch. This is followed by a deep dive on building models with convolutional neural networks, deep neural networks, and recurrent neural networks using PyTorch. This new edition also covers topics such as Skorch, a compatible module equivalent to the scikit-learn machine learning library; model quantization to reduce parameter size; and preparing a model for deployment within a production system. Distributed parallel processing for bala… Trade Review: “The book covers all important facets of neural network implementation and modeling, and could definitely be useful to students and developers keen for an in-depth look at how to build models using PyTorch, or how to engineer particular neural network features using this platform.” (Mariana Damova, Computing Reviews, July 24, 2023) Table of Contents: Chapter 1: Introduction to PyTorch, Tensors, and Tensor Operations. Chapter Goal: Understand what PyTorch is and its basic building blocks. Chapter 2: Probability Distributions Using PyTorch. Chapter Goal: This chapter aims at covering different distributions compatible with PyTorch for data analysis. Chapter 3: Neural Networks Using PyTorch. Chapter Goal: This chapter explains the use of PyTorch to develop a neural network model and optimize the model. Chapter 4: Deep Learning (CNN and RNN) Using PyTorch. Chapter Goal: This chapter explains the use of PyTorch to train deep neural networks for complex datasets. Chapter 5: Language Modeling Using PyTorch. Chapter Goal: In this chapter, we are going to use torchtext for natural language processing, pre-processing, and feature engineering. Chapter 6: Supervised Learning Using PyTorch. Chapter Goal: This chapter explains how to implement supervised learning algorithms with PyTorch. Chapter 7: Fine-Tuning Deep Learning Models Using PyTorch. Chapter Goal: This chapter explains how to fine-tune deep learning models using the PyTorch framework. Chapter 8: Distributed PyTorch Modeling. Chapter Goal: This chapter explains the use of parallel processing using the PyTorch framework. Chapter 9: Model Optimization Using Quantization Methods. Chapter Goal: This chapter explains the use of quantization methods to optimize PyTorch models and hyperparameter tuning with Ray Tune. Chapter 10: Deploying PyTorch Models in Production. Chapter Goal: In this chapter we are going to use TorchServe to deploy PyTorch models into production. Chapter 11: PyTorch for Audio. Chapter Goal: In this chapter torchaudio will be used for audio resampling, data augmentation, feature extraction, model training, and pipeline development. Chapter 12: PyTorch for Image. Chapter Goal: This chapter aims at using torchvision for image transformations, pre-processing, feature engineering, and model training. Chapter 13: Model Explainability Using Captum. Chapter Goal: In this chapter, we are going to use the Captum library for model interpretability to explain the model as if you are explaining it to a 5-year-old.
Chapter 14: Scikit-Learn Model Compatibility Using Skorch. Chapter Goal: In this chapter, we are going to use Skorch, a high-level library for PyTorch that provides full scikit-learn compatibility.
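For readers wondering what these recipes look like in code, here is a minimal, hedged sketch of the tensor-and-module workflow described in the early chapters; the layer sizes and the random data are invented for illustration, while the API calls themselves are standard PyTorch.

    # A tiny feed-forward classifier trained on random placeholder data.
    import torch
    from torch import nn

    X = torch.randn(64, 10)                 # 64 samples, 10 features (placeholder data)
    y = torch.randint(0, 2, (64,))          # binary labels

    model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(20):                 # short illustrative training loop
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()
    print(float(loss))                      # training loss after the final step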
£33.74
IGI Global Computational Intelligence for Movement Sciences: Neural Networks and Other Emerging Techniques
Book SynopsisRecent years have seen many new developments in computational intelligence (CI) techniques and, consequently, this has led to an exponential increase in the number of applications in a variety of areas, including engineering, finance, and the social and biomedical fields. In particular, CI techniques are increasingly being used in biomedical and human movement areas because of the complexity of the biological systems, as well as the limitations of the existing quantitative techniques in modelling. "Computational Intelligence for Movement Sciences: Neural Networks and Other Emerging Techniques" contains information regarding state-of-the-art research outcomes and cutting-edge technology from leading scientists and researchers working on various aspects of the human movement. Readers of this book will gain an insight into this field, as well as access to pertinent information, which they will be able to use for continuing research in this area.
£66.75
Manning Publications Machine Learning with R, tidyverse, and mlr
Book SynopsisMachine Learning with R, tidyverse, and mlr teaches readers how to gain valuable insights from their data using the powerful R programming language. In his engaging and informal style, author and R expert Hefin Ioan Rhys lays a firm foundation of ML basics and introduces readers to the tidyverse, a powerful set of R tools designed specifically for practical data science. Key Features · Commonly used ML techniques · Using the tidyverse packages to organize and plot your data · Validating model performance · Choosing the best ML model for your task · A variety of hands-on coding exercises · ML best practices For readers with basic programming skills in R, Python, or another standard programming language. About the technology Machine learning techniques accurately and efficiently identify patterns and relationships in data and use those models to make predictions about new data. ML techniques can work on even relatively small datasets, making these skills a powerful ally for nearly any data analysis task. Hefin Ioan Rhys is a senior laboratory research scientist in the Flow Cytometry Shared Technology Platform at The Francis Crick Institute. He spent the final year of his PhD program teaching basic R skills at the university. A data science and machine learning enthusiast, he has his own YouTube channel featuring screencast tutorials in R and RStudio.
£46.23
Manning Publications Succeeding with AI
Book SynopsisThe big challenge for a successful AI project isn’t deciding which problems you can solve. It’s deciding which problems you should solve. In Succeeding with AI, author and AI consultant Veljko Krunic reveals secrets for succeeding in AI that he developed with Fortune 500 companies, early-stage start-ups, and other businesses across multiple industries. Key Features · Selecting the right AI project to meet specific business goals · Economizing resources to deliver the best value for money · How to measure the success of your AI efforts in business terms · Predicting whether you are on the right track to deliver your intended business results For executives, managers, team leaders, and business-focused data scientists. No specific technical knowledge or programming skills required. About the technology Companies small and large are initiating AI projects, investing vast sums of money in software, developers, and data scientists. Too often, these AI projects focus on technology at the expense of actionable or tangible business results, leading to scattershot outcomes and wasted investment. Succeeding with AI sets out a blueprint for AI projects to ensure they are predictable, successful, and profitable. It’s filled with practical techniques for running data science programs that ensure they’re cost effective and focused on the right business goals. Veljko Krunic is an independent data science consultant who has worked with companies that range from start-ups to Fortune 10 enterprises. He holds a PhD in Computer Science and an MS in Engineering Management, both from the University of Colorado at Boulder. He is also a Six Sigma Master Black Belt.
£35.99
Manning Publications Classic Computer Science Problems in Java
Book SynopsisSharpen your coding skills by exploring established computer science problems! Classic Computer Science Problems in Java challenges you with time-tested scenarios and algorithms. You’ll work through a series of exercises based on computer science fundamentals that are designed to improve your software development abilities, deepen your understanding of artificial intelligence, and even prepare you to ace an interview. Classic Computer Science Problems in Java will teach you techniques to solve common-but-tricky programming issues. You’ll explore foundational coding methods, fundamental algorithms, and artificial intelligence topics, all through code-centric Java tutorials and computer science exercises. As you work through examples in search, clustering, graphs, and more, you'll remember important things you've forgotten and discover classic solutions to your "new" problems! Key Features · Recursion, memoization, bit manipulation · Search algorithms · Constraint-satisfaction problems · Graph algorithms · K-means clustering For intermediate Java programmers. About the technology In any computer science classroom you’ll find a set of tried-and-true algorithms, techniques, and coding exercises. These techniques have stood the test of time as some of the best ways to solve problems when writing code, and expanding your Java skill set with these classic computer science methods will make you a better Java programmer. David Kopec is an assistant professor of computer science and innovation at Champlain College in Burlington, Vermont. He is the author of Dart for Absolute Beginners (Apress, 2014), Classic Computer Science Problems in Swift (Manning, 2018), and Classic Computer Science Problems in Python (Manning, 2019).
£35.99
Manning Publications Feature Engineering Bookcamp
Book SynopsisKubernetes is an essential tool for anyone deploying and managing cloud-native applications. It lays out a complete introduction to container technologies and containerized applications along with practical tips for efficient deployment and operation. This revised edition of the bestselling Kubernetes in Action contains new coverage of the Kubernetes architecture, including the Kubernetes API, and a deep dive into managing a Kubernetes cluster in production. Kubernetes in Action, Second Edition teaches you to use Kubernetes to deploy container-based distributed applications. You'll start with an overview of how Docker containers work with Kubernetes and move quickly to building your first cluster. You'll gradually expand your initial application, adding features and deepening your knowledge of Kubernetes architecture and operation. In this revised and expanded second edition, you'll take a deep dive into the structure of a Kubernetes-based application and discover how to manage a Kubernetes cluster in production. As you navigate this comprehensive guide, you'll also appreciate thorough coverage of high-value topics like monitoring, tuning, and scaling. Table of Contents: Part 1: First Time on a Boat: Introduction to Kubernetes. 1. Introducing Kubernetes. 2. Understanding Containers. 3. Deploying Your First Application. Part 2: Learning the Ropes: Kubernetes API Objects. 4. Introducing Kubernetes API Objects. 5. Running Workloads in Pods. 6. Managing the Pod Lifecycle. 7. Attaching Storage Volumes to Pods. 8. Persisting Data in PersistentVolumes. 9. Configuration via ConfigMaps, Secrets, and the Downward API. 10. Organizing Objects Using Namespaces and Labels. 11. Exposing Pods with Services. 12. Exposing Services with Ingress. 13. Replicating Pods with ReplicaSets. 14. Managing Pods with Deployments. 15. Deploying Stateful Workloads with StatefulSets. 16. Deploying Specialized Workloads with DaemonSets, Jobs, and CronJobs. Part 3: Going Below Deck: Kubernetes Internals. 17. Understanding the Kubernetes API in Detail. 18. Understanding the Control Plane Components. 19. Understanding the Cluster Node Components. 20. Understanding the Internal Operation of Kubernetes Controllers. Part 4: Sailing Out to High Seas: Managing Kubernetes. 21. Deploying Highly Available Clusters. 22. Managing the Computing Resources Available to Pods. 23. Advanced Scheduling Using Affinity and Anti-Affinity. 24. Automatic Scaling Using the HorizontalPodAutoscaler. 25. Securing the API Using Role-Based Access Control. 26. Protecting Cluster Nodes. 27. Securing Network Communication Using NetworkPolicies. 28. Upgrading, Backing Up, and Restoring Kubernetes Clusters. 29. Adding Centralized Logging, Metrics, Alerting, and Tracing. Part 5: Becoming a Seasoned Mariner: Making the Most of Kubernetes. 30. Kubernetes Development and Deployment Best Practices. 30. Extending Kubernetes with CustomResourceDefinitions and Operators.
£37.04
Bravex Publications Machine Learning: An Essential Guide to Machine
Book Synopsis
£14.37
Packt Publishing Limited Advanced Deep Learning with R: Become an expert
Book SynopsisDiscover best practices for choosing, building, training, and improving deep learning models using Keras-R, and TensorFlow-R librariesKey Features Implement deep learning algorithms to build AI models with the help of tips and tricks Understand how deep learning models operate using expert techniques Apply reinforcement learning, computer vision, GANs, and NLP using a range of datasets Book DescriptionDeep learning is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data. Advanced Deep Learning with R will help you understand popular deep learning architectures and their variants in R, along with providing real-life examples for them.This deep learning book starts by covering the essential deep learning techniques and concepts for prediction and classification. You will learn about neural networks, deep learning architectures, and the fundamentals for implementing deep learning with R. The book will also take you through using important deep learning libraries such as Keras-R and TensorFlow-R to implement deep learning algorithms within applications. You will get up to speed with artificial neural networks, recurrent neural networks, convolutional neural networks, long short-term memory networks, and more using advanced examples. Later, you'll discover how to apply generative adversarial networks (GANs) to generate new images; autoencoder neural networks for image dimension reduction, image de-noising and image correction and transfer learning to prepare, define, train, and model a deep neural network. By the end of this book, you will be ready to implement your knowledge and newly acquired skills for applying deep learning algorithms in R through real-world examples.What you will learn Learn how to create binary and multi-class deep neural network models Implement GANs for generating new images Create autoencoder neural networks for image dimension reduction, image de-noising and image correction Implement deep neural networks for performing efficient text classification Learn to define a recurrent convolutional network model for classification in Keras Explore best practices and tips for performance optimization of various deep learning models Who this book is forThis book is for data scientists, machine learning practitioners, deep learning researchers and AI enthusiasts who want to develop their skills and knowledge to implement deep learning techniques and algorithms using the power of R. A solid understanding of machine learning and working knowledge of the R programming language are required.Table of ContentsTable of Contents Revisiting Deep Learning architecture and techniques Deep Neural Networks for multiclass classification Deep Neural Networks for regression Image classification and recognition Image classification using convolutional neural networks Applying Autoencoder neural networks using Keras Image classification for small data using transfer learning Creating new images using generative adversarial networks Deep network for text classification Text classification using recurrent neural networks Text classification using Long Short-Term Memory Network Text classification using convolutional recurrent networks Tips, tricks and the road ahead
£34.19
Packt Publishing Limited Hands-On Neural Network Programming with C#: Add
Book SynopsisCreate and unleash the power of neural networks by implementing C# and .NET code. Key Features: Get a strong foundation of neural networks with access to various machine learning and deep learning libraries. Real-world case studies illustrating various neural network techniques and architectures used by practitioners. Cutting-edge coverage of deep networks, optimization algorithms, convolutional networks, autoencoders and many more. Book Description: Neural networks have made a surprise comeback in the last few years and have brought tremendous innovation in the world of artificial intelligence. The goal of this book is to provide C# programmers with practical guidance in solving complex computational challenges using neural networks and C# libraries such as CNTK and TensorFlowSharp. This book will take you on a step-by-step practical journey, covering everything from the mathematical and theoretical aspects of neural networks, to building your own deep neural networks into your applications with the C# and .NET frameworks. This book begins by giving you a quick refresher of neural networks. You will learn how to build a neural network from scratch using packages such as Encog, AForge, and Accord. You will learn about various concepts and techniques, such as deep networks, perceptrons, optimization algorithms, convolutional networks, and autoencoders. You will learn ways to add intelligent features to your .NET apps, such as facial and motion detection, object detection and labeling, language understanding, knowledge, and intelligent search. Throughout this book, you will be working on interesting demonstrations that will make it easier to implement complex neural networks in your enterprise applications. What you will learn: Understand perceptrons and how to implement them in C#. Learn how to train and visualize a neural network using cognitive services. Perform image recognition for detecting and labeling objects using C# and TensorFlowSharp. Detect specific image characteristics such as a face using Accord.NET. Demonstrate particle swarm optimization using a simple XOR problem and Encog. Train convolutional neural networks using ConvNetSharp. Find optimal parameters for your neural network functions using numeric and heuristic optimization techniques. Who this book is for: This book is for Machine Learning Engineers, Data Scientists, Deep Learning Aspirants and Data Analysts who are now looking to move into advanced machine learning and deep learning with C#. Prior knowledge of machine learning and working experience with C# programming is required to get the most out of this book. Table of Contents: A Quick Refresher. Building our first Neural Network Together. Decision Trees and Random Forests. Face and Motion Detection. Training CNNs using ConvNetSharp. Training Autoencoders Using RNNSharp. Replacing Back Propagation with PSO. Function Optimizations: How and Why. Finding Optimal Parameters. Object Detection with TensorFlowSharp. Time Series Prediction and LSTM Using CNTK. GRUs Compared to LSTMs, RNNs, and Feedforward Networks. Appendix A: Activation Function Timings. Appendix B: Function Optimization Reference
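The perceptron that the book begins with is language-agnostic; purely as an illustration (in Python rather than the book's C#, and with a toy AND dataset of my own choosing), the classic learning rule looks like this.

    # Perceptron learning rule on a toy AND problem (illustrative only; the book itself works in C#).
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1])              # logical AND
    w, b, lr = np.zeros(2), 0.0, 0.1

    for _ in range(20):                     # a few passes are enough for a linearly separable problem
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (target - pred) * xi  # weights change only when the prediction is wrong
            b += lr * (target - pred)

    print([1 if xi @ w + b > 0 else 0 for xi in X])   # expected output: [0, 0, 0, 1]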
£29.44
Packt Publishing Limited The Reinforcement Learning Workshop: Learn
Book SynopsisStart with the basics of reinforcement learning and explore deep learning concepts such as deep Q-learning, deep recurrent Q-networks, and policy-based methods with this practical guideKey Features Use TensorFlow to write reinforcement learning agents for performing challenging tasks Learn how to solve finite Markov decision problems Train models to understand popular video games like Breakout Book DescriptionVarious intelligent applications such as video games, inventory management software, warehouse robots, and translation tools use reinforcement learning (RL) to make decisions and perform actions that maximize the probability of the desired outcome. This book will help you to get to grips with the techniques and the algorithms for implementing RL in your machine learning models.Starting with an introduction to RL, you’ll be guided through different RL environments and frameworks. You’ll learn how to implement your own custom environments and use OpenAI baselines to run RL algorithms. Once you’ve explored classic RL techniques such as Dynamic Programming, Monte Carlo, and TD Learning, you’ll understand when to apply the different deep learning methods in RL and advance to deep Q-learning. The book will even help you understand the different stages of machine-based problem-solving by using DARQN on a popular video game Breakout. Finally, you’ll find out when to use a policy-based method to tackle an RL problem.By the end of The Reinforcement Learning Workshop, you’ll be equipped with the knowledge and skills needed to solve challenging problems using reinforcement learning.What you will learn Use OpenAI Gym as a framework to implement RL environments Find out how to define and implement reward function Explore Markov chain, Markov decision process, and the Bellman equation Distinguish between Dynamic Programming, Monte Carlo, and Temporal Difference Learning Understand the multi-armed bandit problem and explore various strategies to solve it Build a deep Q model network for playing the video game Breakout Who this book is forIf you are a data scientist, machine learning enthusiast, or a Python developer who wants to learn basic to advanced deep reinforcement learning algorithms, this workshop is for you. A basic understanding of the Python language is necessary.Table of ContentsTable of Contents Introduction to Reinforcement Learning Markov Decision Processes and Bellman Equations Deep Learning in Practice with TensorFlow 2 Getting Started with OpenAI and TensorFlow for Reinforcement Learning Dynamic Programming Monte Carlo Methods Temporal Difference Learning The Multi-Armed Bandit Problem What Is Deep Q Learning? Playing an Atari Game with Deep Recurrent Q Networks Policy-Based Methods for Reinforcement Learning Evolutionary Strategies for RL
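Since the workshop introduces the multi-armed bandit problem before moving on to deep Q-learning, here is a minimal, hedged epsilon-greedy sketch in plain Python/NumPy; the arm probabilities, epsilon, and step count are invented for illustration, and the book's own exercises rely on OpenAI Gym and TensorFlow instead.

    # Epsilon-greedy agent on a 3-armed Bernoulli bandit (illustrative parameters).
    import numpy as np

    rng = np.random.default_rng(42)
    true_probs = [0.2, 0.5, 0.8]            # hidden reward probability of each arm
    counts = np.zeros(3)
    values = np.zeros(3)                    # running estimate of each arm's value
    epsilon = 0.1

    for step in range(2000):
        if rng.random() < epsilon:
            arm = int(rng.integers(3))      # explore a random arm
        else:
            arm = int(np.argmax(values))    # exploit the current best estimate
        reward = float(rng.random() < true_probs[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean update

    print(values)                            # the estimate for the best arm should be close to 0.8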
£34.19
Emerald Publishing Limited Self-Learning and Adaptive Algorithms for
Book SynopsisIn today’s data-driven world, more sophisticated algorithms for data processing are in high demand, especially when the data cannot be handled with the help of traditional techniques. Self-learning and adaptive algorithms are now widely used by such leading giants as Google, Tesla, Microsoft, and Facebook in their projects and applications. In this guide, designed for researchers and students of computer science, readers will find a resource for applying methods that work on real-life problems to their own challenging applications, and a go-to work that makes fuzzy clustering issues and aspects clear. Including research relevant to those studying cybernetics, applied mathematics, statistics, engineering, and bioinformatics who are working in the areas of machine learning, artificial intelligence, complex system modeling and analysis, neural networks, and optimization, this is an ideal read for anyone interested in learning more about the fascinating new developments in machine learning. Trade Review: This guide explains how to apply methods using systems built by a combination of the neural network approach and fuzzy logic (neuro-fuzzy systems) to solve practical data classification problems in business. It describes methods aimed at handling the main types of uncertainties in data, using adaptive methods of fuzzy clustering; the use of Kohonen maps and their ensembles for fuzzy clustering tasks; and simulation results of these neuro-fuzzy architectures, their learning methods, self-organization, and clustering procedures. -- Annotation ©2019 (protoview.com) Table of Contents: Introduction. 1. Review of the Problem Area. 2. Adaptive Methods of Fuzzy Clustering. 3. Kohonen Maps and their Ensembles for Fuzzy Clustering Tasks. 4. Simulation Results and Solutions for Practical Tasks. Conclusion
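To make "adaptive methods of fuzzy clustering" tangible, the sketch below runs the standard fuzzy c-means updates in NumPy; the toy two-blob data, the fuzzifier m = 2, and the iteration count are illustrative assumptions and not the book's own algorithms.

    # Standard fuzzy c-means (FCM) on toy 2-D data; m is the fuzzifier.
    import numpy as np

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])  # two well-separated blobs
    c, m = 2, 2.0
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)       # random initial membership matrix, rows sum to 1

    for _ in range(30):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]           # membership-weighted cluster centres
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / dist ** (2 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)   # renormalise memberships

    print(np.round(centers, 2))             # roughly the two blob centres near (0, 0) and (3, 3)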
£43.69
Edward Elgar Publishing Ltd Nonlinear Economic Models: Cross-sectional, Time
Book SynopsisNonlinear modelling has become increasingly important and widely used in economics. This valuable book brings together recent advances in the area including contributions covering cross-sectional studies of income distribution and discrete choice models, time series models of exchange rate dynamics and jump processes, and artificial neural network and genetic algorithm models of financial markets. Attention is given to the development of theoretical models as well as estimation and testing methods with a wide range of applications in micro and macroeconomics, labour and finance.The book provides valuable introductory material that is accessible to students and scholars interested in this exciting research area, as well as presenting the results of new and original research. Nonlinear Economic Models provides a sequel to Chaos and Nonlinear Models in Economics by the same editors.Trade Review'This collection provides valuable introductory material that is accessible to students and scholars interested in this research area.' -- Business HorizonsTable of ContentsContents: Part I: Introduction 1. Nonlinear Modelling: An Introduction Part II: Cross-sectional Applications 2. A Model of Income Distribution 3. Truncated Distribution Families 4. Betit: A Flexible Binary Choice Model 5. Estimation of Generalised Distributions 6. Age and the Distribution of Earnings 7. Count Data and Discrete Distributions Part III: Time Series Applications 8. A Model of the Real Exchange Rate 9. Jump Models and Higher Moments 10. A Topological Test of Chaos 11. Genetic Algorithms and Trading Rules Part IV: Neural Network Applications 12. Artificial Neural Networks 13. An ANN Model of the Stock Market 14. Exchange Rate Forecasting Models Index
£111.00
Springer Nature Switzerland AG Proceedings of the 22nd Engineering Applications
Book SynopsisThis book contains the proceedings of the 22nd EANN “Engineering Applications of Neural Networks” 2021 that comprise of research papers on both theoretical foundations and cutting-edge applications of artificial intelligence. Based on the discussed research areas, emphasis is given in advances of machine learning (ML) focusing on the following algorithms-approaches: Augmented ML, autoencoders, adversarial neural networks, blockchain-adaptive methods, convolutional neural networks, deep learning, ensemble methods, learning-federated learning, neural networks, recurrent – long short-term memory. The application domains are related to: Anomaly detection, bio-medical AI, cyber-security, data fusion, e-learning, emotion recognition, environment, hyperspectral imaging, fraud detection, image analysis, inverse kinematics, machine vision, natural language, recommendation systems, robotics, sentiment analysis, simulation, stock market prediction.Table of ContentsAutomatic Facial Expression Neutralisation Using Generative Adversarial Network.- Creating Ensembles of Generative Adversarial Network Discriminators for One-class Classification.- A Hybrid Deep Learning Ensemble for Cyber Intrusion Detection.- Anomaly Detection by Robust Feature Reconstruction.- Deep Learning of Brain Asymmetry Images and Transfer Learning for Early Diagnosis of Dementia.- Deep learning topology-preserving EEG-based images for autism detection in infants.- Improving the Diagnosis of Breast Cancer by Combining Visual and Semantic Feature Descriptors.- Liver cancer trait detection and classification through Machine Learning on smart mobile devices.
£224.99
Springer Nature Switzerland AG Fuzzy Information Processing 2020: Proceedings of
Book SynopsisThis book describes how to use expert knowledge, which is often formulated by using imprecise (fuzzy) words from a natural language. In the 1960s, Zadeh designed special "fuzzy" techniques for such use. In the 1980s, fuzzy techniques started controlling trains, elevators, video cameras, rice cookers, car transmissions, etc. Now, combining fuzzy with neural, genetic, and other intelligent methods leads to new state-of-the-art results: in the aerospace industry (from drones to space flights), in mobile robotics, in finance (predicting the value of crypto-currencies), and even in law enforcement (detecting counterfeit banknotes, detecting online child predators, and creating explainable AI systems). The book describes these (and other) applications, as well as the foundations and logistics of fuzzy techniques. This book can be recommended to specialists, both in fuzzy and in various application areas, who will learn the latest techniques and their applications, and to students interested in innovative ideas. Table of Contents: Powerset operators in categories with fuzzy relations defined by monads.- Improved Fuzzy Q-Learning with Replay Memory.- A Dynamic Hierarchical Genetic-Fuzzy Sugeno Network.- Fuzzy Mathematical Morphology and Applications in Image Processing.
£179.99
Springer International Publishing AG Artificial Neural Networks in Pattern Recognition: 10th IAPR TC3 Workshop, ANNPR 2022, Dubai, United Arab Emirates, November 24–26, 2022, Proceedings
Book SynopsisThis book constitutes the refereed proceedings of the 10th IAPR TC3 International Workshop on Artificial Neural Networks in Pattern Recognition, ANNPR 2022, held in Dubai, UAE, in November 2022. The 16 revised full papers presented were carefully reviewed and selected from 24 submissions. The conference presents papers on subjects such as pattern recognition and machine learning based on artificial neural networks. Table of Contents: Transformer-Encoder generated context-aware embeddings for spell correction.- Graph Augmentation for Neural Networks Using Matching-Graphs.- Wavelet Scattering Transform Depth Benefit, An Application for Speaker Identification.- Assessment of Pharmaceutical Patent Novelty using Siamese Neural Network.- A Review of Capsule Networks in Medical Image Analysis.- Multi-stage Bias Mitigation for Individual Fairness in Algorithmic Decisions.- Introducing an Atypical Loss: A Perceptual Metric Learning for Image Pairing.- A Study on the Autonomous Detection of Impact Craters.- Minimizing Cross Intersections in Graph Drawing via Linear Splines.- Sequence-to-Sequence CNN-BiLSTM Based Glottal Closure Instant Detection from Raw Speech.- Do Minimal Complexity Least Squares Support Vector Machines Work?.- A Novel Representation of Graphical Patterns for Graph Convolution Networks.- Mono vs Multilingual BERT for Hate Speech Detection and Text Classification: A Case Study in Marathi.- Utilization of Vision Transformer for Classification and Ranking of Video Distortions.- White Blood Cell Classification of Porcine Blood Smear Images.- Medical Deepfake Detection using 3-Dimensional Neural Learning.
£47.49
Springer International Publishing AG Neural Networks and Deep Learning: A Textbook
Book SynopsisThis book covers both classical and modern models in deep learning. The primary focus is on the theory and algorithms of deep learning. The theory and algorithms of neural networks are particularly important for understanding important concepts, so that one can understand the important design concepts of neural architectures in different applications. Why do neural networks work? When do they work better than off-the-shelf machine-learning models? When is depth useful? Why is training neural networks so hard? What are the pitfalls? The book is also rich in discussing different applications in order to give the practitioner a flavor of how neural architectures are designed for different types of problems. Deep learning methods for various data domains, such as text, images, and graphs are presented in detail. The chapters of this book span three categories: The basics of neural networks: The backpropagation algorithm is discussed in Chapter 2.Many traditional machine learning models can be understood as special cases of neural networks. Chapter 3 explores the connections between traditional machine learning and neural networks. Support vector machines, linear/logistic regression, singular value decomposition, matrix factorization, and recommender systems are shown to be special cases of neural networks. Fundamentals of neural networks: A detailed discussion of training and regularization is provided in Chapters 4 and 5. Chapters 6 and 7 present radial-basis function (RBF) networks and restricted Boltzmann machines. Advanced topics in neural networks: Chapters 8, 9, and 10 discuss recurrent neural networks, convolutional neural networks, and graph neural networks. Several advanced topics like deep reinforcement learning, attention mechanisms, transformer networks, Kohonen self-organizing maps, and generative adversarial networks are introduced in Chapters 11 and 12. The textbook is written for graduate students and upper under graduate level students. Researchers and practitioners working within this related field will want to purchase this as well.Where possible, an application-centric view is highlighted in order to provide an understanding of the practical uses of each class of techniques.The second edition is substantially reorganized and expanded with separate chapters on backpropagation and graph neural networks. Many chapters have been significantly revised over the first edition.Greater focus is placed on modern deep learning ideas such as attention mechanisms, transformers, and pre-trained language models.Table of ContentsAn Introduction to Neural Networks.- The Backpropagation Algorithm.- Machine Learning with Shallow Neural Networks.- Deep Learning: Principles and Training Algorithms.- Teaching a Deep Neural Network to Generalize.- Radial Basis Function Networks.- Restricted Boltzmann Machines.- Recurrent Neural Networks.- Convolutional Neural Networks.- Graph Neural Networks.- Deep Reinforcement Learning.- Advanced Topics in Deep Learning.
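Backpropagation, the subject of Chapter 2, fits in a few lines for a single hidden layer; the hedged NumPy sketch below (toy XOR data, sigmoid units, and hyperparameters of my own choosing, not code from the textbook) shows the forward pass and the resulting gradient updates.

    # Backpropagation for a 2-4-1 sigmoid network on XOR (illustrative hyperparameters).
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
    lr = 1.0

    for _ in range(5000):
        h = sigmoid(X @ W1 + b1)                 # forward pass, hidden layer
        out = sigmoid(h @ W2 + b2)               # forward pass, output layer
        d_out = (out - y) * out * (1 - out)      # gradient of squared error at the output
        d_h = (d_out @ W2.T) * h * (1 - h)       # gradient propagated back to the hidden layer
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(np.round(out.ravel(), 2))              # should approach [0, 1, 1, 0] for most random seeds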
£53.99
Springer International Publishing AG Neural Information Processing: 29th International
Book SynopsisThe three-volume set LNCS 13623, 13624, and 13625 constitutes the refereed proceedings of the 29th International Conference on Neural Information Processing, ICONIP 2022, held as a virtual event, November 22–26, 2022.The 146 papers presented in the proceedings set were carefully reviewed and selected from 810 submissions. They were organized in topical sections as follows: Theory and Algorithms; Cognitive Neurosciences; Human Centered Computing; and Applications.The ICONIP conference aims to provide a leading international forum for researchers, scientists, and industry professionals who are working in neuroscience, neural networks, deep learning, and related fields to share their new ideas, progress, and achievements.Table of ContentsTheory and Algorithms.- Solving Partial Differential Equations using Point-based Neural Networks.- Patch Mix Augmentation with Dual Encoders for Meta-Learning.- Tacit Commitments Emergence in Multi-agent Reinforcement Learning.- Saccade Direction Information Channel.- Shared-Attribute Multi-Graph Clustering with Global Self-Attention.- Mutual Diverse-Label Adversarial Training.- Multi-Agent Hyper-Attention Policy Optimization.- Filter Pruning via Similarity Clustering for Deep Convolutional Neural Networks.- FPD: Feature Pyramid Knowledge Distillation.- An effective ensemble model related to incremental learning in neural machine translation.- Local-Global Semantic Fusion Single-shot Classification Method.- Self-Reinforcing Feedback Domain Adaptation Channel.- General Algorithm for Learning from Grouped Uncoupled Data and Pairwise Comparison Data.- Additional Learning for Joint Probability Distribution Matching in BiGAN.- Multi-View Self-Attention for Regression Domain Adaptation with Feature Selection.- EigenGRF: Layer-Wise Eigen-Learning for Controllable Generative Radiance Fields.- Partial Label learning with Gradually Induced Error-Correction Output Codes.- HMC-PSO: A Hamiltonian Monte Carlo and Particle Swarm Optimization-based optimizer.- Heterogeneous Graph Representation for Knowledge Tracing.- Intuitionistic fuzzy universum support vector machine.- Support vector machine based models with sparse auto-encoder based features for classification problem.- Selectively increasing the diversity of GAN-generated samples.- Cooperation and Competition: Flocking with Evolutionary Multi-Agent Reinforcement Learning.- Differentiable Causal Discovery Under Heteroscedastic Noise.- IDPL: Intra-subdomain adaptation adversarial learning segmentation method based on Dynamic Pseudo Labels.- Adaptive Scaling for U-Net in Time Series Classification.- Permutation Elementary Cellular Automata: Analysis and Application of Simple Examples.- SSPR: A Skyline-Based Semantic Place Retrieval Method.- Double Regularization-based RVFL and edRVFL Networks for Sparse-Dataset Classification.- Adaptive Tabu Dropout for Regularization of Deep Neural Networks.- Class-Incremental Learning with Multiscale Distillation for Weakly Supervised Temporal Action Localization.- Nearest Neighbor Classifier with Margin Penalty for Active Learning.- Factual Error Correction in Summarization with Retriever-Reader Pipeline.- Context-adapted Multi-policy Ensemble Method for Generalization in Reinforcement Learning.- Self-attention based multi-scale graph convolutional networks.- Synesthesia Transformer with Contrastive Multimodal Learning.- Context-based Point Generation Network for Point Cloud Completion.- Temporal Neighborhood Change Centrality for Important Node Identification in Temporal 
Networks.- DOM2R-Graph: A Web Attribute Extraction Architecture with Relation-aware Heterogeneous Graph Transformer.- Sparse Linear Capsules for Matrix Factorization-based Collaborative Filtering.- PromptFusion: a Low-cost Prompt-based Task Composition for Multi-task Learning.- A fast and efficient algorithm for filtering the training dataset.- Entropy-minimization Mean Teacher for Source-Free Domain Adaptive Object Detection.- IA-CL: A Deep Bidirectional Competitive Learning Method for Traveling Salesman Problem.- Boosting Graph Convolutional Networks With Semi-Supervised Training.- Auxiliary Network: Scalable and agile online learning for dynamic system with inconsistently available inputs.- VAAC: V-value Attention Actor-Critic for Cooperative Multi-agent Reinforcement Learning.- An Analytical Estimation of Spiking Neural Networks Energy Efficiency.- Correlation Based Semantic Transfer with Application to Domain Adaptation.- Minimum Variance Embedded Intuitionistic Fuzzy Weighted Random Vector Functional Link Network.- Neural Network Compression by Joint Sparsity Promotion and Redundancy Reduction.
£75.99
£75.05
De Gruyter Meta-heuristic Optimization Techniques:
Book SynopsisThis book offers a thorough overview of the most popular and researched meta-heuristic optimization techniques and nature-inspired algorithms. Their wide applicability makes them a hot research topic and an efficient tool for the solution of complex optimization problems in various fields of science, engineering, and in numerous industries.
£88.50
Springer Verlag, Singapore Hesitant Fuzzy Set: Theory and Extension
Book SynopsisCovering a wide range of notions concerning hesitant fuzzy set and its extensions, this book provides a comprehensive reference to the topic. In the case where different sources of vagueness appear simultaneously, the concept of fuzzy set is not able to properly model the uncertainty, imprecise and vague information. In order to overcome such a limitation, different types of fuzzy extension have been introduced so far. Among them, hesitant fuzzy set was first introduced in 2010, and the existing extensions of hesitant fuzzy set have been encountering an increasing interest and attracting more and more attentions up to now. It is not an exaggeration to say that the recent decade has seen the blossoming of a larger set of techniques and theoretical outcomes for hesitant fuzzy set together with its extensions as well as applications.As the research has moved beyond its infancy, and now it is entering a maturing phase with increased numbers and types of extensions, this book aims to give a comprehensive review of such researches. Presenting the review of many and important types of hesitant fuzzy extensions, and including references to a large number of related publications, this book will serve as a useful reference book for researchers in this field.Table of ContentsChapter 1: Hesitant Fuzzy Set.- Chapter 2: Hesitant Fuzzy Linguistic Term Set.- Chapter 3: Neutrosophic Hesitant Fuzzy Set.- Chapter 4: Pythagorean Hesitant Fuzzy Set.- Chapter 5: q-Rung Orthopair Hesitant Fuzzy Set.- Chapter 6: Probabilistic Hesitant Fuzzy Set.- Chapter 7: Type 2 Hesitant Fuzzy Set.- Chapter 8: Hesitant Bipolar Fuzzy Set.- Chapter 9: Cubic Hesitant Fuzzy Set.- Chapter 10: Complex Hesitant Fuzzy Set.- Chapter 11: Picture Hesitant Fuzzy Set.- Chapter 12: Spherical Hesitant Fuzzy Set.
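For readers new to the notion: a hesitant fuzzy element simply records several possible membership degrees at once, and a commonly used score function averages them to compare alternatives. The sketch below is a minimal, hedged Python illustration of that definition; the example degrees are invented.

    # A hesitant fuzzy element (HFE) is a finite set of possible membership degrees.
    def score(hfe):
        # Commonly used score: the mean of the degrees; a higher score ranks higher.
        return sum(hfe) / len(hfe)

    opinions_a = {0.3, 0.5, 0.6}            # hesitation about alternative A (illustrative values)
    opinions_b = {0.4, 0.7}                 # hesitation about alternative B

    ranked = sorted([("A", score(opinions_a)), ("B", score(opinions_b))],
                    key=lambda item: item[1], reverse=True)
    print(ranked)                            # B (score 0.55) ranks above A (score ~0.47)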
£98.99
Springer Verlag, Singapore Neural Information Processing: 29th International
Book SynopsisThe four-volume set CCIS 1791, 1792, 1793 and 1794 constitutes the refereed proceedings of the 29th International Conference on Neural Information Processing, ICONIP 2022, held as a virtual event, November 22–26, 2022. The 213 papers presented in the proceedings set were carefully reviewed and selected from 810 submissions. They were organized in topical sections as follows: Theory and Algorithms; Cognitive Neurosciences; Human Centered Computing; and Applications.The ICONIP conference aims to provide a leading international forum for researchers, scientists, and industry professionals who are working in neuroscience, neural networks, deep learning, and related fields to share their new ideas, progress, and achievements.Table of ContentsTheory and Algorithms.- Knowledge Transfer from Situation Evaluation to Multi-agent Reinforcement Learning.- Sequential three-way rules class-overlap under-sampling based on fuzzy hierarchical subspace for imbalanced data.- Two-stage Multilayer Perceptron Hawkes Process.- The Context Hierarchical Contrastive Learning for Time Series in Frequency Domain.- Hawkes Process via Graph Contrastive Discriminant representation Learning and Transformer capturing long-term dependencies.- A Temporal Consistency Enhancement Algorithm Based On Pixel Flicker Correction.- Data representation and clustering with double low-rank constraints.- RoMA: a Method for Neural Network Robustness Measurement and Assessment.- Independent Relationship Detection for Real-Time Scene Graph Generation.- A multi-label feature selection method based on feature graph with ridge regression and eigenvector centrality.- O3GPT: A Guidance-Oriented Periodic Testing Framework with Online Learning, Online Testing, and Online Feedback.- AFFSRN: Attention-Based Feature Fusion Super-Resolution Network.- Temporal-Sequential Learning with Columnar-Structured Spiking Neural Networks.- Graph Attention Transformer Network for Robust Visual Tracking.- GCL-KGE:Graph Contrastive Learning for Knowledge Graph Embedding.- Towards a Unified Benchmark for Reinforcement Learning in Sparse Reward Environments.- Effect of Logistic Activation Function and Multiplicative Input Noise on DNN-kWTA model.- A High-Speed SSVEP-Based Speller Using Continuous Spelling Method.- AAT: Non-Local Networks for Sim-to-Real Adversarial Augmentation Transfer.- Aggregating Intra-class and Inter-class information for Multi-label Text Classification.- Fast estimation of multidimensional regression functions by the Parzen kernel-based method.- ReGAE: Graph autoencoder based on recursive neural networks.- Efficient Uncertainty Quantification for Under-constraint Prediction following Learning using MCMC.- SMART: A Robustness Evaluation Framework for Neural Networks.- Time-aware Quaternion Convolutional Network for Temporal Knowledge Graph Reasoning.- SumBART - An improved BART model for abstractive text summarization.- Saliency-Guided Learned Image Compression for Object Detection.- Multi-Label Learning with Data Self-Augmentation.- MnRec: A News Recommendation Fusion Model Combining Multi-granularity Information.- Infinite Label Selection Method for Mutil-label Classification.- Simultaneous Perturbation Method for Multi-Task Weight Optimization in One-Shot Meta-Learning.- Searching for Textual Adversarial Examples with Learned Strategy.- Multivariate Time Series Retrieval with Binary Coding from Transformer. 
-Learning TSP Combinatorial Search and Optimization with Heuristic Search.- A Joint Learning Model for Open Set Recognition with Post-processing.- Cross-Layer Fusion for Feature Distillation.- MCHPT: A Weakly Supervise Based Merchant Pre-trained Model.- Progressive Latent Replay for efficient Generative Rehearsal.- Generalization Bounds for Set-to-Set Matching with Negative Sampling.- ADA: An Attention-Based Data Augmentation Approach to Handle Imbalanced Textual Datasets.- Countering the Anti-detection Adversarial Attacks.- Evolving Temporal Knowledge Graphs by Iterative Spatio-Temporal Walks.- Improving Knowledge Graph Embedding Using Dynamic Aggregation of Neighbor Information.- Generative Generalized Zero-Shot Learning based on Auxiliary-Features.- Learning Stable Representations with Progressive Autoencoder (PAE).- Effect of Image Down-sampling on Detection of Adversarial Examples .- Boosting the Robustness of Neural Networks with M-PGD.- StatMix: Data augmentation method that relies on image statistics in federated learning.- Classification by Components Including Chow's Reject Option. -Community discovery algorithm based on improved deep sparse autoencoder.- Fairly Constricted Multi-Objective Particle Swarm Optimization.- Argument Classification with BERT plus Contextual, Structural and Syntactic Features as Text.- Variance Reduction for Deep Q-Learning using Stochastic Recursive Gradient.- Optimizing Knowledge Distillation Via Shallow Texture Knowledge Transfer.- Unsupervised Domain Adaptation Supplemented with Generated Images.- MAR2MIX: A Novel Model for Dynamic Problem in Multi-Agent Reinforcement Learning.- Adversarial Training with Knowledge Distillation Considering Intermediate Representations in CNNs.- Deep Contrastive Multi-view Subspace Clustering.
£85.49
Springer Verlag, Singapore Neural Information Processing: 30th International
Book Synopsis The six-volume set LNCS 14447 to 14452 constitutes the refereed proceedings of the 30th International Conference on Neural Information Processing, ICONIP 2023, held in Changsha, China, in November 2023. The 652 papers presented in the proceedings set were carefully reviewed and selected from 1274 submissions. They focus on theory and algorithms; cognitive neurosciences; human centred computing; and applications in neuroscience, neural networks, deep learning, and related fields. Table of Contents: Theory and Algorithms.- Efficient Lightweight Network with Transformer-based Distillation for Micro-crack Detection of Solar Cells.- MTLAN: Multi-Task Learning and Auxiliary Network for Enhanced Sentence Embedding.- Correlated Online k-Nearest Neighbors Regressor Chain for Online Multi-Output Regression.- Evolutionary Computation for Berth Allocation Problems: A Survey.- Cognitive Neurosciences.- Privacy-Preserving Travel Time Prediction for Internet of Vehicles: A Crowdsensing and Federated Learning Approach.- A Fine-Grained Domain Adaptation Method for Cross-Session Vigilance Estimation in SSVEP-Based BCI.- RMPE: Reducing Residual Membrane Potential Error for Enabling High-accuracy and Ultra-low-latency Spiking Neural Networks.- An improved target searching and imaging method for CSAR.- Block-Matching Multi-Pedestrian Tracking.- RPF3D: Range-Pillar Feature Deep Fusion 3D Detector for Autonomous Driving.- Traffic Signal Control Optimization Based on Deep Reinforcement Learning With Attention Mechanisms.- CMCI: A Robust Multimodal Fusion Method For Spiking Neural Networks.- A Weakly Supervised Deep Learning Model for Alzheimer's Disease Prognosis Using MRI and Incomplete Labels.- Two-Stream Spectral-Temporal Denoising Network for End-to-end Robust EEG-based Emotion Recognition.- Brain-inspired Binaural Sound Source Localization Method Based On Liquid State Machine.- A Causality-Based Interpretable Cognitive Diagnosis Model.- RoBrain: Towards Robust Brain-to-Image Reconstruction via Cross-Domain Contrastive Learning.- High-dimensional multi-objective PSO based on radial projection.- Link Prediction Based on the Sub-graphs Learning with Fused Features.- Naturalistic Emotion Recognition Using EEG and Eye Movements.- Task Scheduling With Improved Particle Swarm Optimization In Cloud Data Center.- Traffic Signal Optimization at T-shaped intersections Based on Deep Q Networks.- A Multi-task Framework for Solving Multimodal Multiobjective Optimization Problems.- Domain Generalized Object Detection with Triple Graph Reasoning Network.- RPUC: Semi-supervised 3D Biomedical Image Segmentation through Rectified Pyramid Unsupervised Consistency.- Cancellable iris recognition scheme based on inversion fusion and local ranking.- EWMIGCN: Emotional Weighting based Multimodal Interaction Graph Convolutional Networks for Personalized Prediction.- Neighborhood Learning for Artificial Bee Colony Algorithm: A Mini-survey.- Human Centred Computing.- Channel Attention Separable Convolution Network for Skin Lesion Segmentation.- A DNN-based Learning Framework for Continuous Movements Segmentation.- Neural-Symbolic Recommendation with Graph-Enhanced Information.- Contrastive Hierarchical Gating Networks for Rating Prediction.- Interactive Selection Recommendation Based on the Multi-Head Attention Graph Neural Network.- CM-TCN: Channel-aware Multi-scale Temporal Convolutional Networks For Speech Emotion Recognition.- FLDNet: A Foreground-Aware Network for Polyp Segmentation Leveraging Long-Distance Dependencies.- Domain-Invariant Task Optimization for Cross-domain Recommendation.- Ensemble of randomized neural network and boosted trees for eye tracking-based driver situation awareness recognition and interpretation.- Temporal Modeling Approach for Video Action Recognition Based on Vision-Language Models.- A Deep Learning Framework with Pruning RoI Proposal for Dental Caries Detection in Panoramic X-ray Images.- User stance aware network for rumor detection using semantic relation inference and temporal graph convolution.- IEEG-CT: A CNN and Transformer Based Method for Intracranial EEG Signal Classification.- Multi-Task Learning Network for Automatic Pancreatic Tumor Segmentation and Classification with Inter-Network Channel Feature Fusion.- Fast and Efficient Brain Extraction with Recursive MLP based 3D UNet.- A Hip-Knee Joint Coordination Evaluation System in Hemiplegic Individuals Based on Cyclogram Analysis.- Evaluation of football players' performance based on Multi-Criteria Decision Analysis approach and sensitivity analysis.
£75.99
Springer Verlag, Singapore Neural Information Processing: 30th International Conference, ICONIP 2023
Book Synopsis The six-volume set LNCS 14447 to 14452 constitutes the refereed proceedings of the 30th International Conference on Neural Information Processing, ICONIP 2023, held in Changsha, China, in November 2023. The 652 papers presented in the proceedings set were carefully reviewed and selected from 1274 submissions. They focus on theory and algorithms; cognitive neurosciences; human centred computing; and applications in neuroscience, neural networks, deep learning, and related fields. Table of Contents: Text to Image Generation with Conformer-GAN.- MGFNet: A Multi-Granularity Feature Fusion and Mining Network for Visible-Infrared Person Re-Identification.- Isomorphic Dual-Branch Network for Non-homogeneous Image Dehazing and Super-Resolution.- Hi-Stega: A Hierarchical Linguistic Steganography Framework Combining Retrieval and Generation.- Effi-Seg: Rethinking EfficientNet Architecture for Real-time Semantic Segmentation.- Quantum Autoencoder Frameworks for Network Anomaly Detection.- Spatially-Aware Human-Object Interaction Detection with Cross-Modal Enhancement.- Intelligent trajectory tracking control of unmanned parafoil system based on SAC optimized LADRC.- CATS: Connection-aware and Interaction-based Text Steganalysis in Social Networks.- Syntax Tree Constrained Graph Network for Visual Question Answering.- CKR-Calibrator: Convolution Kernel Robustness Evaluation and Calibration.- SGLP-Net: Sparse Graph Label Propagation Network for Weakly-Supervised Temporal Action Localization.- VFIQ: A Novel Model of ViT-FSIMc Hybrid Siamese Network for Image Quality Assessment.- Spiking Reinforcement Learning for Weakly-supervised Anomaly Detection.- Resource-aware DNN Partitioning for Privacy-sensitive Edge-Cloud Systems.- A frequency reconfigurable multi-mode printed antenna.- Multi-view Contrastive learning for Knowledge-aware Recommendation.- PYGC: a PinYin Language Model Guided Correction Model for Chinese Spell Checking.- Empirical Analysis of Multi-label Classification on GitterCom using BERT.- A lightweight safety helmet detection network based on bidirectional connection module and Polarized Self-Attention.- Direct Inter-Intra View Association for Light Field Super-Resolution.- Responsive CPG-Based Locomotion Control for Quadruped Robots.- Vessel Behavior Anomaly Detection using Graph Attention Network.- TASFormer: Task-aware Image Segmentation Transformer.- Unsupervised Joint-Semantics Autoencoder Hashing for Multimedia Retrieval.- TKGR-RHETNE: A New Temporal Knowledge Graph Reasoning Model via Jointly Modeling Relevant Historical Event and Temporal Neighborhood Event Context.- High-Resolution Self-Attention with Fair Loss for Point Cloud Segmentation.- Transformer-based Video Deinterlacing Method.- SCME: A Self-Contrastive Method for Data-free and Query-Limited Model Extraction Attack.- CSEC: A Chinese Semantic Error Correction Dataset for Written Correction.- Contrastive Kernel Subspace Clustering.- UATR: An Uncertainty Aware Two-stage Refinement Model for Targeted Sentiment Analysis.- AttIN: Paying More Attention to Neighborhood Information for Entity Typing in Knowledge Graphs.- Text-based Person Re-ID by Saliency Mask and Dynamic Label Smoothing.- Robust Multi-view Spectral Clustering with Auto-encoder for Preserving Information.- Learnable Color Image Zero-Watermarking Based on Feature Comparison.- P-IoU: Accurate Motion Prediction based Data Association for Multi-Object Tracking.- WCA-VFnet: a dedicated complex forest smoke fire detector.- Label Selection Algorithm Based on Ant Colony Optimization and Reinforcement Learning for Multi-label Classification.- Reversible Data Hiding Based on Adaptive Embedding with Local Complexity.- Generalized Category Discovery with Clustering Assignment Consistency.- CInvISP: Conditional Invertible Image Signal Processing Pipeline.- Ignored Details in Eyes: Exposing GAN-generated Faces by Sclera.- A Developer Recommendation Method Based on Disentangled Graph Convolutional Network.- Novel Method for Radar Echo Target Detection.
£66.49