Digital Signal Processing (DSP) Books

211 products


  • Synthetic Aperture Radar

    John Wiley & Sons Inc Synthetic Aperture Radar

    15 in stock

    Book Synopsis: The use of synthetic aperture radar (SAR) represents a new era in remote sensing technology. A complete handbook for anyone who must design an SAR system capable of reliably producing high quality image data products, free from image artifacts and calibrated in terms of the target backscatter coefficient. Combines fundamentals underlying the SAR imaging process and the practical system engineering required to produce quality images from a real SAR system. Beginning with a broad overview of SAR technology, it goes on to examine SAR system capabilities and components and detail the techniques required for design and development of the SAR ground data system with emphasis on the correlation processing. Intended for SAR system engineers and researchers, it is generously illustrated for maximum clarity.

    Table of Contents: The Radar Equation. The Matched Filter and Pulse Compression. Imaging and the Rectangular Algorithm. Ancillary Processes in Image Formation. SAR Flight System. Radiometric Calibration of SAR Data. Geometric Calibration of SAR Data. The SAR Ground System. Other Imaging Algorithms. Appendices. List of Acronyms. Index.

    (A short illustrative code sketch follows this listing.)

    £211.46
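
    The table of contents above lists "The Matched Filter and Pulse Compression". As a rough orientation only, here is a minimal NumPy sketch of matched-filter pulse compression of a linear-FM pulse; the sample rate, pulse length, bandwidth, and noise level are illustrative assumptions and are not taken from the book.

        import numpy as np

        # Illustrative chirp parameters (assumptions, not values from the book)
        fs = 1.0e6                     # sample rate, Hz
        T = 100e-6                     # pulse length, s
        B = 50e3                       # swept bandwidth, Hz
        t = np.arange(0, T, 1.0 / fs)

        # Linear-FM (chirp) pulse and a delayed, noisy echo
        pulse = np.exp(1j * np.pi * (B / T) * t ** 2)
        echo = np.concatenate([np.zeros(250, complex), pulse, np.zeros(250, complex)])
        rng = np.random.default_rng(0)
        echo += 0.1 * (rng.standard_normal(echo.size) + 1j * rng.standard_normal(echo.size))

        # Matched filter: time-reversed complex conjugate of the transmitted pulse
        mf = np.conj(pulse[::-1])
        compressed = np.convolve(echo, mf, mode="same")
        print("compressed peak at sample", int(np.argmax(np.abs(compressed))))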

  • Neural Networks for Optimization and Signal Processing

    Wiley Neural Networks for Optimization and Signal Processing

    15 in stock

    Book Synopsis: A topical introduction on the ability of artificial neural networks to not only solve on-line a wide range of optimization problems but also to create new techniques and architectures. Provides in-depth coverage of mathematical modeling along with illustrative computer simulation results.

    Table of Contents: Mathematical Preliminaries of Neurocomputing. Architectures and Electronic Implementation of Neural Network Models. Unconstrained Optimization and Learning Algorithms. Neural Networks for Linear, Quadratic Programming and Linear Complementarity Problems. A Neural Network Approach to the On-Line Solution of a System of Linear Algebraic Equations and Related Problems. Neural Networks for Matrix Algebra Problems. Neural Networks for Continuous, Nonlinear, Constrained Optimization Problems. Neural Networks for Estimation, Identification and Prediction. Neural Networks for Discrete and Combinatorial Optimization Problems. Appendices. Subject Index.

    £218.66

  • Signal Analysis

    John Wiley & Sons Inc Signal Analysis

    15 in stock

    Book Synopsis: Signal analysis gives insight into the properties of signals and stochastic processes through its methodology. Linear transforms are integral to the continuing growth of signal processing as they characterize and classify signals. In particular, those transforms that provide time-frequency signal analysis are attracting greater numbers of researchers and are becoming an area of considerable importance. The key characteristic of these transforms, among them the wavelet transform with its time-frequency localization and various types of multirate filter banks, is their high computational efficiency. It is this computational efficiency which accounts for their increased application. This book provides a complete overview and introduction to signal analysis. It presents classical and modern signal analysis methods in a sequential structure, starting with the background to signal theory. Progressing through the book, the author introduces more advanced topics in an easy-to-understand style.

    Trade Review: "...excellent and interesting reading for digital signal processing engineers and designers and for postgraduate students in electrical and computer faculties." (Mathematical Reviews, 2002d)

    Table of Contents: Signals and Signal Spaces. Integral Signal Representations. Discrete Signal Representations. Examples of Discrete Transforms. Transforms and Filters for Stochastic Processes. Filter Banks. Short-Time Fourier Analysis. Wavelet Transform. Non-Linear Time-Frequency Distributions. Bibliography. Index.

    (A short illustrative code sketch follows this listing.)

    £181.76
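
    Since the synopsis above emphasizes time-frequency analysis, the following minimal NumPy sketch illustrates plain short-time Fourier analysis of a signal whose frequency changes halfway through; the frame length, hop size, and test tones are illustrative assumptions, not material from the book.

        import numpy as np

        # Test signal whose frequency changes halfway through (illustrative values)
        fs = 8000
        t = np.arange(0, 1.0, 1.0 / fs)
        x = np.where(t < 0.5, np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 880 * t))

        # Plain short-time Fourier analysis: windowed frames, one FFT per frame
        frame, hop = 256, 128
        win = np.hanning(frame)
        frames = [x[i:i + frame] * win for i in range(0, x.size - frame, hop)]
        stft = np.array([np.fft.rfft(f) for f in frames])   # shape: (n_frames, frame // 2 + 1)

        # Dominant frequency per frame moves from about 440 Hz to about 880 Hz
        dominant_bin = np.abs(stft).argmax(axis=1)
        print(dominant_bin[:3] * fs / frame, dominant_bin[-3:] * fs / frame)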

  • Identification of Time-Varying Processes

    Wiley Identification of Time-Varying Processes

    15 in stock

    Book Synopsis: Time-varying process identification (TVPI) techniques facilitate adaptive noise reduction, echo cancellation and predictive coding of signals. This treatment addresses the identification of time-varying characteristics of dynamic processes.

    Trade Review: "...a comprehensive treatment...well-written and successful in combining mathematics and practical understanding of real world applications..." (Automatica, No.38, 2002)

    Table of Contents: Modeling Essentials. Models of Nonstationary Processes. Process Segmentation. Weighted Least Squares. Least Mean Squares. Basis Functions. Kalman Filtering. Practical Issues. Epilogue. References. Index.

    (A short illustrative code sketch follows this listing.)

    £181.76
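
    The contents above cover weighted least squares, least mean squares, and Kalman filtering for tracking time-varying characteristics. The sketch below shows the least-mean-squares idea on a single slowly drifting coefficient; the signal model, step size, and noise level are illustrative assumptions only, not the book's examples.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 2000
        u = rng.standard_normal(N)                                    # input signal
        a_true = 0.5 + 0.4 * np.sin(2 * np.pi * np.arange(N) / N)     # slowly drifting gain
        d = a_true * u + 0.05 * rng.standard_normal(N)                # observed output

        # LMS: track the single time-varying coefficient a[n] from (u, d)
        mu = 0.05
        a_hat = np.zeros(N)
        a = 0.0
        for n in range(N):
            e = d[n] - a * u[n]          # prediction error
            a = a + mu * e * u[n]        # stochastic-gradient update
            a_hat[n] = a
        print(float(a_hat[-1]), float(a_true[-1]))   # estimate tracks the drifting true value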

  • Introduction to Coding Theory

    Cambridge University Press Introduction to Coding Theory

    15 in stock

    Book Synopsis: This 2006 book introduces the reader to the theoretical foundations of error-correcting codes, with an emphasis on Reed-Solomon codes and their derivative codes. It is designed to be accessible to a broad readership, including students of computer science, electrical engineering, and mathematics, from senior-undergraduate to graduate level.

    Trade Review: '… a most welcome addition. … well tested as a course text. Features include the extensive collections of interesting and nontrivial problems at the end of chapters, the clear and insightful explanations of some of the deeper aspects of the subject and the extensive, interesting and useful historical notes on the development of the subject. This is an excellent volume that will reward the participants in any course that uses it with a deep understanding and appreciation for the subject.' Ian F. Blake, University of Toronto 'This book introduces the reader to the theoretical foundations of error-correcting codes ... While mathematical rigor is maintained, the text is designed to be accessible to a broad readership, including students of computer science, electrical engineering, and mathematics, from senior undergraduate to graduate level.' L'enseignement mathematique 'The mathematical style of this book is clear, concise and scholarly with a pleasing layout. There are numerous exercises, many with hints and many introducing further new concepts. Altogether this is an excellent book covering a wide range of topics in this area, and including an extensive bibliography.' Publication of the International Statistical Institute 'The reader will find many well-chosen examples throughout the book and will be challenged by over 300 exercises, many of which have hints. Some of the exercises develop concepts that are not contained within the main body of the text. For example, the very first problem of the book, filling up more than an entire page of the text, introduces the AWGN channel and requires the reader to check the crossover probability of a memoryless binary symmetric channel.' Zentralblatt MATH

    Table of Contents: Preface; 1. Introduction; 2. Linear codes; 3. Introduction to finite fields; 4. Bounds on the parameters of codes; 5. Reed-Solomon codes and related codes; 6. Decoding of Reed-Solomon codes; 7. Structure of finite fields; 8. Cyclic codes; 9. List decoding of Reed-Solomon codes; 10. Codes in the Lee metric; 11. MDS codes; 12. Concatenated codes; 13. Graph codes; 14. Trellis codes and convolutional codes; Appendix A. Basics in modern algebra; Bibliography; List of symbols; Index.

    (A short illustrative code sketch follows this listing.)

    £75.99
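
    One reviewer above mentions an exercise on the crossover probability of the memoryless binary symmetric channel induced by an AWGN channel. As a rough companion to that remark, here is a small NumPy sketch comparing a simulated hard-decision BPSK link against the standard crossover-probability formula; the SNR value and block length are arbitrary assumptions, not an exercise from the book.

        import numpy as np
        from math import erfc, sqrt

        # Hard-decision BPSK over AWGN behaves like a binary symmetric channel whose
        # crossover probability is Q(sqrt(2*Eb/N0)); the SNR below is an illustrative choice.
        rng = np.random.default_rng(2)
        ebn0_db = 4.0
        ebn0 = 10 ** (ebn0_db / 10)
        bits = rng.integers(0, 2, 200_000)
        symbols = 1 - 2 * bits                             # bit 0 -> +1, bit 1 -> -1
        noise = rng.standard_normal(bits.size) / sqrt(2 * ebn0)
        decisions = (symbols + noise) < 0                  # hard decision: negative means bit 1
        p_empirical = np.mean(decisions != bits)
        p_theory = 0.5 * erfc(sqrt(ebn0))                  # Q(sqrt(2*Eb/N0))
        print(p_empirical, p_theory)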

  • Quantum Processes, Systems, and Information

    Cambridge University Press Quantum Processes, Systems, and Information

    15 in stock

    Book Synopsis: A new and exciting approach to the basics of quantum theory, this undergraduate textbook contains extensive discussions of conceptual puzzles and over 800 exercises and problems. In addition to the standard topics covered in other textbooks, it covers communication and measurement, quantum entanglement, entropy and thermodynamics, and quantum information processing.

    Trade Review: 'This is a fantastic book, with one of the authors no less than the very inventor of the word and idea of a qubit. When I opened the book for the first time, I found I couldn't stop reading through it and working out some of the problems. … There's no book out there I would recommend more for learning the mechanics of this quantum world.' Chris Fuchs, Perimeter Institute for Theoretical Physics 'One of the most original and insightful introductions to quantum mechanics ever written, this book is also an excellent introduction to the emerging field of quantum information science.' Michael Nielsen, co-author of Quantum Computation and Quantum Information 'This superb new book by Ben Schumacher and Mike Westmoreland is perfectly suited for a modern undergraduate course on quantum mechanics that emphasizes fundamental notions from quantum information science, such as entanglement, Bell's theorem, quantum teleportation, quantum cryptography, and quantum error correction. The authors, who are themselves important contributors to the subject, have complete mastery of the material, and they write clearly and engagingly.' John Preskill, California Institute of Technology 'This is a wonderful book! It covers the usual topics of a first course in quantum mechanics and much more, and it does so with an unusual conceptual depth. The inclusion of information theoretic ideas not only enriches the presentation of the basic theory - for example in helping to articulate the conditions under which quantum coherence is lost - it also opens up the large area of physics in which both quantum mechanical and information theoretic concepts play central roles.' William K. Wootters, Williams College 'With its comprehensive presentation of both quantum mechanics and QIC, this book, written by two pioneers of this emerging new approach to computing, is really one of a kind … Most concepts that one would find in traditional nonrelativistic quantum mechanics physics books are presented in a clear and well thought-out manner (but be prepared for a bit more work when dealing with subtle notions such as quantum relative entropy or mutual information) … The brilliant pedagogical approach taken by the authors, who are able to present quite abstract notions using a clear and sprightly style, together with the quality of the editing … will provide both students and researchers interested in the growing field of QIC with a pleasant and informative read.' Computing Reviews '… a very impressive piece of work. It has clearly been refined over some time; the explanations and proofs that are scattered throughout the text are clearly written and elegant, and common themes are picked up repeatedly with increasing sophistication as the book goes along …' Mathematical Reviews 'A bright undergraduate would get a tremendous grounding in modern quantum theory from reading this book, and solving the problems therein. So would many postgraduate students and academics wanting to get into the heart of quantum information research.' Howard M. Wiseman, Quantum Information Processing

    Table of Contents: 1. Bits and quanta; 2. Qubits; 3. States and observables; 4. Distinguishability and information; 5. Quantum dynamics; 6. Entanglement; 7. Information and ebits; 8. Density operators; 9. Open systems; 10. A particle in space; 11. Dynamics of a free particle; 12. Spin and rotation; 13. Ladder systems; 14. Many particles; 15. Stationary states in 1-D; 16. Bound states in 3-D; 17. Perturbation theory; 18. Quantum information processing; 19. Classical and quantum entropy; 20. Error correction; Appendixes; Index.

    £62.99

  • Hilbert Transforms, Volume 1 (Encyclopedia of Mathematics and its Applications, Series Number 124)

    Cambridge University Press Hilbert Transforms, Volume 1 (Encyclopedia of Mathematics and its Applications, Series Number 124)

    15 in stock

    Book Synopsis: Written to suit a wide audience (including physical sciences), these two volumes will become the reference of choice on the Hilbert transform, whatever the subject background of the reader. The author explains all the common Hilbert transforms, mathematical techniques for evaluating them, and has detailed discussions of their application.

    Trade Review: "The author gives detailed and exhaustive information on almost all properties of the Hilbert transform... the selected topics are presented in an easy-to-use style." Lasha Ephremidze, Mathematical Reviews

    Table of Contents: Preface; List of symbols; List of abbreviations; Volume I: 1. Introduction; 2. Review of some background mathematics; 3. Derivation of the Hilbert transform relations; 4. Some basic properties of the Hilbert transform; 5. Relationship between the Hilbert transform and some common transforms; 6. The Hilbert transform of periodic functions; 7. Inequalities for the Hilbert transform; 8. Asymptotic behavior of the Hilbert transform; 9. Hilbert transforms of some special functions; 10. Hilbert transforms involving distributions; 11. The finite Hilbert transform; 12. Some singular integral equations; 13. Discrete Hilbert transforms; 14. Numerical evaluation of Hilbert transforms; References; Subject index; Author index.

    (A short illustrative code sketch follows this listing.)

    £127.30
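
    Chapter 13 in the listing above concerns discrete Hilbert transforms. The minimal SciPy sketch below shows the common analytic-signal use of the discrete Hilbert transform for envelope and instantaneous-frequency estimation; the test signal is an illustrative assumption, not an example drawn from the book.

        import numpy as np
        from scipy.signal import hilbert

        # Amplitude-modulated tone: slow 3 Hz envelope on a 50 Hz carrier (illustrative)
        fs = 1000
        t = np.arange(0, 1.0, 1.0 / fs)
        x = (1 + 0.5 * np.cos(2 * np.pi * 3 * t)) * np.cos(2 * np.pi * 50 * t)

        analytic = hilbert(x)                     # x + j * H{x}
        envelope = np.abs(analytic)               # recovers the slow AM envelope
        inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
        print(envelope[:3], inst_freq.mean())     # mean instantaneous frequency is close to 50 Hz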

  • Probability, Random Processes, and Statistical Analysis

    Cambridge University Press Probability, Random Processes, and Statistical Analysis

    15 in stock

    Book Synopsis: Together with the fundamental topics, this book covers advanced theories and engineering applications, including the EM algorithm, hidden Markov models, and queueing and loss systems. A solutions manual, lecture slides and MATLAB programs all available online make this ideal for classroom teaching as well as a valuable reference for professionals.

    Trade Review: 'This book provides a very comprehensive, well-written and modern approach to the fundamentals of probability and random processes, together with their applications in the statistical analysis of data and signals. … It provides a one-stop, unified treatment that gives the reader an understanding of the models, methodologies and underlying principles behind many of the most important statistical problems arising in engineering and the sciences today.' Dean H. Vincent Poor, Princeton University 'This is a well-written up-to-date graduate text on probability and random processes. It is unique in combining statistical analysis with the probabilistic material. As noted by the authors, the material, as presented, can be used in a variety of current application areas, ranging from communications to bioinformatics. I particularly liked the historical introduction, which should make the field exciting to the student, as well as the introductory chapter on probability, which clearly describes for the student the distinction between the relative frequency and axiomatic approaches to probability. I recommend it unhesitatingly. It deserves to become a leading text in the field.' Professor Emeritus Mischa Schwartz, Columbia University 'Hisashi Kobayashi, Brian L. Mark, and William Turin are highly experienced university teachers and scientists. Based on this background their book covers not only fundamentals but also a large range of applications. Some of them are treated in a textbook for the first time. … Without any doubt the book will be extremely valuable to graduate students and to scientists in universities and industry as well. Congratulations to the authors!' Prof. Dr.-Ing. Eberhard Hänsler, Technische Universität Darmstadt 'An up-to-date and comprehensive book with all the fundamentals in Probability, Random Processes, Stochastic Analysis, and their interplays and applications, which lays a solid foundation for the students in related areas. It is also an ideal textbook with five relatively independent but logically interconnected parts and the corresponding solution manuals and lecture slides. Furthermore, to my best knowledge, the similar editing in Part IV and Part V can't be found elsewhere.' Zhisheng Niu, Tsinghua University

    Table of Contents: 1. Introduction; Part I. Probability, Random Variables and Statistics: 2. Probability; 3. Discrete random variables; 4. Continuous random variables; 5. Functions of random variables and their distributions; 6. Fundamentals of statistical analysis; 7. Distributions derived from the normal distribution; Part II. Transform Methods, Bounds and Limits: 8. Moment generating function and characteristic function; 9. Generating function and Laplace transform; 10. Inequalities, bounds and large deviation approximation; 11. Convergence of a sequence of random variables, and the limit theorems; Part III. Random Processes: 12. Random process; 13. Spectral representation of random processes and time series; 14. Poisson process, birth-death process, and renewal process; 15. Discrete-time Markov chains; 16. Semi-Markov processes and continuous-time Markov chains; 17. Random walk, Brownian motion, diffusion and Itô processes; Part IV. Statistical Inference: 18. Estimation and decision theory; 19. Estimation algorithms; Part V. Applications and Advanced Topics: 20. Hidden Markov models and applications; 21. Probabilistic models in machine learning; 22. Filtering and prediction of random processes; 23. Queuing and loss models.

    (A short illustrative code sketch follows this listing.)

    £70.99
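
    Among the topics listed above are discrete-time Markov chains. The tiny NumPy sketch below computes the stationary distribution of an illustrative three-state chain by power iteration; the transition matrix is an arbitrary assumption, not an example from the book.

        import numpy as np

        # Row-stochastic transition matrix of a small, irreducible, aperiodic chain
        P = np.array([[0.9, 0.1, 0.0],
                      [0.2, 0.7, 0.1],
                      [0.1, 0.3, 0.6]])
        pi = np.array([1.0, 0.0, 0.0])   # start in state 0
        for _ in range(1000):
            pi = pi @ P                  # repeated one-step propagation converges to pi = pi P
        print(pi, pi @ P)                # pi is (numerically) unchanged by a further step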

  • Algorithms for Noise Reduction in Signals

    Institute of Physics Publishing Algorithms for Noise Reduction in Signals

    Out of stock

    Book Synopsis: This book is the result of an exhaustive review of the general algorithms used for noise reduction using two general application criteria: one-input, one-output systems, and two-input, one-output systems. The text describes theoretically and experimentally the processes related to high-order statistical analysis algorithms, as well as their practical combination with the convolutional analysis. In addition, the results of applications in real telecommunications signals, sensing of variables and biosignals such as human tremor are also shown. The reader will benefit from both the theoretical foundations described in the book, and the practical examples including generic codes of all the functions described and modifiable for use in different applications. This book is an ideal text for engineers, graduate students and scientists involved with noise reduction and signal processing.

    Key Features: Emphas

    (A short illustrative code sketch follows this listing.)

    £108.00
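
    The synopsis above describes two-input, one-output noise reduction, where a separate reference measurement of the noise is available. Below is a minimal NumPy sketch of that idea using a generic LMS-adapted FIR canceller; the filter length, step size, and noise path are illustrative assumptions and are not the specific algorithms or code from the book.

        import numpy as np

        rng = np.random.default_rng(3)
        N, taps, mu = 5000, 8, 0.01
        t = np.arange(N)
        clean = np.sin(2 * np.pi * t / 50)                     # signal of interest
        ref = rng.standard_normal(N)                           # reference noise measurement
        h = np.array([0.6, -0.3, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0])
        primary = clean + np.convolve(ref, h)[:N]              # primary input: signal + filtered noise

        # Two-input canceller: adapt an FIR filter on the reference so its output matches the
        # noise component of the primary input; the residual e is the clean-signal estimate.
        w = np.zeros(taps)
        out = np.zeros(N)
        for n in range(taps, N):
            x = ref[n - taps + 1:n + 1][::-1]                  # most recent reference samples
            e = primary[n] - w @ x
            w += 2 * mu * e * x                                # LMS weight update
            out[n] = e
        print(np.mean((out[1000:] - clean[1000:]) ** 2))       # residual error power, well below the added noise power (~0.5)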

  • IOP Publishing Algorithms for Noise Reduction in Signals

    Out of stock

    £23.75

  • Digital Signal Processing: A Practical Guide for Engineers and Scientists

    Elsevier Science Digital Signal Processing: A Practical Guide for Engineers and Scientists

    15 in stock

    Table of Contents: The Breadth and Depth of DSP; Statistics, Probability and Noise; ADC and DAC; DSP Software; Linear Systems; Convolution; Properties of Convolution; The Discrete Fourier Transform; Applications of the DFT; Fourier Transform Properties; Fourier Transform Pairs; The Fast Fourier Transform; Continuous Signal Processing; Introduction to Digital Filters; Moving Average Filters; Windowed-Sinc Filters; Custom Filters; FFT Convolution; Recursive Filters; Chebyshev Filters; Filter Comparison; Audio Processing; Image Formation and Display; Linear Image Processing; Special Imaging Techniques; Neural Networks (and more!); Data Compression; Digital Signal Processors; Getting Started with DSPs; Complex Numbers; The Complex Fourier Transform; The Laplace Transform; The z-Transform; Index

    (A short illustrative code sketch follows this listing.)

    £65.70
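
    The contents above include convolution and moving-average filters. Here is a minimal NumPy sketch showing a moving-average filter implemented as a convolution with a rectangular kernel; the test signal, noise level, and kernel length are illustrative assumptions, not examples from the book.

        import numpy as np

        rng = np.random.default_rng(4)
        clean = np.sin(np.linspace(0, 4 * np.pi, 400))     # slow sinusoid
        x = clean + 0.3 * rng.standard_normal(400)         # noisy observation

        M = 11                                             # number of points averaged
        kernel = np.ones(M) / M                            # rectangular (moving-average) kernel
        y = np.convolve(x, kernel, mode="same")            # smoothed output

        # Mean-squared error against the clean signal drops noticeably after smoothing
        print(np.mean((x - clean) ** 2), np.mean((y - clean) ** 2))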

  • Time Frequency and Wavelets in Biomedical Signal Processing

    John Wiley & Sons Inc Time Frequency and Wavelets in Biomedical Signal Processing

    15 in stock

    Book Synopsis: Brimming with top articles from experts in signal processing and biomedical engineering, Time Frequency and Wavelets in Biomedical Signal Processing introduces time-frequency, time-scale, wavelet transform methods, and their applications in biomedical signal processing.

    Table of Contents: List of Contributors. Preface. TIME-FREQUENCY ANALYSIS METHODS WITH BIOMEDICAL APPLICATIONS. Recent Advances in Time-Frequency Representations: Some Theoretical Foundation (W. Williams). Biological Applications and Interpretations of Time-Frequency Signal Analysis (W. Williams). The Application of Advanced Time-Frequency Analysis Techniques to Doppler Ultrasound (S. Marple, et al.). Analysis of ECG Late Potentials Using Time-Frequency Methods (H. Dickhaus & H. Heinrich). Time-Frequency Distributions Applied to Uterine EMG: Characterization and Assessment (J. Duchene & D. Devedeux). Time-Frequency Analyses of the Electrogastrogram (Z. Lin and J. Chen). Recent Advances in Time-Frequency and Time-Scale Methods (C. Mello & M. Akay). WAVELETS, WAVELET PACKETS, AND MATCHING PURSUITS WITH BIOMEDICAL APPLICATIONS. Fast Algorithms for Wavelet Transform Computation (O. Rioul & P. Duhamel). Analysis of Cellular Vibrations in the Living Cochlea Using the Continuous Wavelet Transform and the Short-Time Fourier Transform (M. Teich, et al.). Alternative Processing Method Using Gabor Wavelets and the Wavelet Transform for the Analysis of Phonocardiogram Signals (M. Matalgah, et al.). Wavelet Feature Extraction from Neurophysiological Signals (M. Sun & R. Sclabassi). Experiments with Adapted Wavelet De-Noising for Medical Signals and Images (R. Coifman & M. Wickerhauser). Speech Enhancement for Hearing Aids (J. Rutledge). From Continuous Wavelet Transform to Wavelet Packets: Application to the Estimation of Pulmonary Microvascular Pressure (M. Karrakchou & M. Kunt). In Pursuit of Time-Frequency Representation of Brain Signals (P. Durka & K. Blinowska). EEG Spike Detectors Based on Different Decompositions: A Comparative Study (L. Senhadji, et al.). WAVELETS AND MEDICAL IMAGING. A Discrete Dyadic Wavelet Transform for Multidimensional Feature Analysis (I. Koren & A. Laine). Hexagonal QMF Banks and Wavelets (S. Schuler & A. Laine). Inversion of the Radon Transform under Wavelet Constraints (B. Sahiner & A. Yagle). Wavelets Applied to Mammograms (W. Richardson). Hybrid Wavelet Transform for Image Enhancement for Computer-Assisted Diagnosis and Telemedicine Applications (L. Clarke, et al.). Medical Image Enhancement Using Wavelet Transform and Arithmetic Coding (P. Saipetch, et al.). Adapted Wavelet Encoding in Functional Magnetic Resonance Imaging (D. Healy, et al.). A Tutorial Overview of a Stabilization Algorithm for Limited-Angle Tomography (T. Olson). Wavelet Compression of Medical Images (A. Manduca). WAVELETS, NEURAL NETWORKS, AND FRACTALS. Single Side Scaling Wavelet Frame and Neural Network (Q. Zhang). Analysis of Evoked Potentials Using Wavelet Networks (H. Heinrich & H. Dickhaus). Self-Organizing Wavelet-Based Neural Networks (K. Kobayashi). On Wavelets and Fractal Processes (P. Flandrin). Fractal Analysis of Heart Rate Variability (R. Fischer & M. Akay). Index. Editor's Biography.

    £209.66

  • Engineering Networks for Synchronization, CCS 7, and ISDN

    John Wiley & Sons Inc Engineering Networks for Synchronization, CCS 7, and ISDN

    1 in stock

    Book Synopsis: In view of the extensive development of CCS 7 and fast-paced growth of ISDN in telecommunication networks throughout the world, this valuable resource serves as a timely reference and guide. Practical and up-to-date, Engineering Networks for Synchronization, CCS 7, and ISDN provides in-depth instruction on three important and closely related elements of the modern digital network: network synchronization, CCITT Common Channel Signaling System No. 7 (CCS 7), and Narrowband ISDN.

    Table of Contents: Series Editor's Note. Foreword. Preface. Introduction. Digital Network Synchronization: Basic Concepts. Planning, Testing, and Monitoring Network Synchronization. CCS 7: General Description. Introduction to ISDN. Functions of the CCS 7 Signaling Link Level. Signaling Network Functions in CCS 7. ISDN: Services and Protocols. CCS 7 ISDN User Part. CCS 7 Planning and Implementation. Testing in CCS 7. Packet and Frame Mode Services in the ISDN. Planning and Implementation of the ISDN. Testing in the ISDN. Timing in SONET and SDH. Appendix 1: Ordering Information. Appendix 2: List of ISUP Messages. Index. About the Author.

    £187.16

  • Nonlinear Biomedical Signal Processing Volume 2

    John Wiley & Sons Inc Nonlinear Biomedical Signal Processing Volume 2

    Out of stock

    Book Synopsis: Featuring current contributions by experts in signal processing and biomedical engineering, this book introduces the concepts, recent advances, and implementations of nonlinear dynamic analysis methods. Together with Volume I in this series, this book provides comprehensive coverage of nonlinear signal and image processing techniques. Nonlinear Biomedical Signal Processing: Volume II combines analytical and biological expertise in the original mathematical simulation and modeling of physiological systems. Detailed discussions of the analysis of steady-state and dynamic systems, discrete-time system theory, and discrete modeling of continuous-time systems are provided. Biomedical examples include the analysis of the respiratory control system, the dynamics of cardiac muscle and the cardiorespiratory function, and neural firing patterns in auditory and vision systems. Examples include relevant MATLAB and Pascal programs. Topics covered include: Nonlinear dynamics

    Table of Contents: Preface. List of Contributors. Nonlinear Dynamics Time Series Analysis (B. Henry, et al.). Searching for the Origin of Chaos (T. Yambe, et al.). Approximate Entropy and Its Applications to Biosignal Analysis (Y. Fusheng, et al.). Parsimonious Modeling of Biomedical Signals and Systems: Applications to the Cardiovascular System (P. Celka, et al.). Nonlinear Behavior of Heart Rate Variability as Registered After Heart Transplantation (C. Maier, et al.). Heart Rate Variability: Measures and Models (M. Teich, et al.). Ventriculo-Arterial Interaction After Acute Increase of the Aortic Input Impedance: Description Using Recurrence Plot Analysis (S. Schulz, et al.). Nonlinear Estimation of Respiratory-Induced Heart Movements and Its Application in ECG/VCG Processing (L. Sörnmo, et al.). Detecting Nonlinear Dynamics in Sympathetic Activity Directed to the Heart (A. Porta, et al.). Assessment of Nonlinear Dynamics in Heart Rate Variability Signal (M. Signorini, et al.). Nonlinear Deterministic Behavior in Blood Pressure Control (N. Lovell, et al.). Measurement and Quantification of Spatiotemporal Dynamics of Human Epileptic Seizures (L. Iasemidis, et al.). Rhythms and Chaos in the Stomach (Z. Wang, et al). Index. About the Editor.

    £170.96

  • 3D Audio Using Loudspeakers (The Springer International Series in Engineering and Computer Science, Volume 444)

    Springer US 3D Audio Using Loudspeakers (The Springer International Series in Engineering and Computer Science, Volume 444)

    15 in stock

    Book Synopsis: 3-D Audio Using Loudspeakers is concerned with 3-D audio systems implemented using a pair of conventional loudspeakers. It discusses the theory, implementation, and testing of a head-tracked loudspeaker 3-D audio system.

    Table of Contents: Preface. 1. Introduction. 2. Background. 3. Theory and Implementation. 4. Physical Validation. 5. Psychophysical Validation. 6. Discussion. A: Inverting FIR Filters. References. Index.

    £123.49

  • Digital Image Warping

    IEEE Computer Society Press, U.S. Digital Image Warping

    15 in stock

    £95.36

  • Unsupervised Signal Processing

    Taylor & Francis Inc Unsupervised Signal Processing

    1 in stock

    Book Synopsis: Unsupervised Signal Processing: Channel Equalization and Source Separation provides a unified, systematic, and synthetic presentation of the theory of unsupervised signal processing. Always maintaining the focus on a signal processing-oriented approach, this book describes how the subject has evolved and assumed a wider scope that covers several topics, from well-established blind equalization and source separation methods to novel approaches based on machine learning and bio-inspired algorithms. From the foundations of statistical and adaptive signal processing, the authors explore and elaborate on emerging tools, such as machine learning-based solutions and bio-inspired methods. With a fresh take on this exciting area of study, this book: Provides a solid background on the statistical characterization of signals and systems and on linear filtering theory Emphasizes the link between supervised and unsupervised processing from th

    Table of Contents: Introduction. Statistical Characterization of Signals and Systems. Linear Optimal and Adaptive Filtering. Unsupervised Channel Equalization. Unsupervised Multichannel Equalization. Blind Source Separation. Nonlinear Filtering and Machine Learning. Bio-Inspired Optimization Methods. Appendices.

    £161.50

  • Adaptive Control: Algorithms, Analysis and Applications

    Springer London Adaptive Control: Algorithms, Analysis and Applications

    1 in stock

    Book Synopsis: Thoroughly revised and updated, this second edition of Adaptive Control covers new developments in the field, including multi-model adaptive control with switching, direct and indirect adaptive regulation, and adaptive feedforward disturbance compensation.

    Trade Review: From the book reviews: “This book is intended as a textbook for graduate students, and a basic reference for control researchers, applied mathematicians and practicing engineers. It has a clear and coherent exposition, showing the themes addressed and providing solutions to these, highlighting its relevance and possible applications.” (Guillermo Fernández-Anaya, Mathematical Reviews, February, 2015) “The aim of this book is to provide a coherent and comprehensive treatment of the field of adaptive control. Throughout the book, the mathematical aspects of the synthesis and analysis of various algorithms are emphasized. The book contains various applications of control techniques. The book is intended as a textbook for graduate students as well as basic reference for practicing engineers facing the problem of designing adaptive control systems.” (Vjatscheslav Vasiliev, Zentralblatt MATH, Vol. 1234, 2012)

    Table of Contents: Introduction to Adaptive Control.- Discrete-time System Models for Control.- Parameter Adaptation Algorithms: Deterministic Environment.- Parameter Adaptation Algorithms: Stochastic Environment.- Recursive Plant Model Identification in Open Loop.- Adaptive Prediction.- Digital Control Strategies.- Robust Digital Control Design.- Recursive Plant Model Identification in Closed Loop.- Robust Parameter Estimation.- Direct Adaptive Control.- Indirect Adaptive Control.- Practical Aspects and Applications.- Multimodel Adaptive Control with Switching.- Adaptive Regulation: Rejection of Unknown Disturbances.- Adaptive Feedforward Compensation of Disturbances.- Appendices: Stochastic Processes; Stability; Passive (Hyperstable) Systems; Martingales.

    £134.99

  • Antenna-Based Signal Processing Techniques for Radar Systems (Antennas & Propagation Library)

    Artech House Publishers Antenna-Based Signal Processing Techniques for Radar Systems (Antennas & Propagation Library)

    15 in stock

    Book Synopsis: Brings the reader up-to-date on all aspects concerning ECCM at the antenna level. It is a reference tool for professionals seeking quick answers to on-the-job problems. This text delivers an accurate description of working principles, processing schemes and performance evaluation techniques.

    Table of Contents: Introduction to ECM and ECCM techniques for radar systems; low sidelobe antennas; sidelobe blanking system; sidelobe canceller (SLC) system; adaptive arrays.

    £116.00

  • Electronic Intelligence: The Analysis of Radar Signals, Second Edition (Radar Library)

    Artech House Publishers Electronic Intelligence: The Analysis of Radar Signals, Second Edition (Radar Library)

    15 in stock

    Book Synopsis: Provides information on electronic intelligence (ELINT) analysis techniques, with coverage of their applications, strengths and limitations. Now refined and updated, this second edition presents new concepts and techniques.

    Table of Contents: Signal-to-Noise-Ratio Considerations for Analog and Digital Systems. Signal Power. Polarization. Beam Analysis. Antenna Scan Analysis. Intrapulse Analysis. Pulse Repetition Interval (PRI) Analysis. Radio Frequency (RF) Analysis. Deinterleaving Pulse Trains. Determining ELINT Parameter Limits. ELINT Data Files.

    £100.70

  • Spotlight Synthetic Aperture Radar: Signal Processing Algorithms (Remote Sensing Library)

    Artech House Publishers Spotlight Synthetic Aperture Radar: Signal Processing Algorithms (Remote Sensing Library)

    15 in stock

    Book Synopsis: This is a practical solution sourcebook for real-world high-resolution and spotlight SAR image processing. Widely used algorithms are presented for both system errors and propagation phenomena, and a chapter is devoted to SAR system performance.

    Table of Contents: Part 1 Introduction: spotlight SAR; SAR modes; importance of spotlight SAR; early SAR chronology. Part 2 Synthetic aperture radar fundamentals: SAR system overview; imaging considerations; pulse compression and range resolution; synthetic aperture technique for Azimuth resolution; SAR coherence requirements; signal phase equation; inverse SAR (ISAR); SAR sensor parametric design. Part 3 Spotlight SAR and polar format algorithm: scope of processing task; polar format overview; polar data storage as a two-dimensional signal; correction for non-planar motion; polar format algorithm limitations; Taylor series expansion procedures; phase of image pixels; image geometric distortion; image focus error equations; displacements and absolute positioning. Part 4 Digital polar format processing: sampling rate conversion; polyphase filters; polar interpolation; image scale factors; image distortion correction; signal history projections; stabilized scene polar interpolation; subpatch processing and mosaicking. Part 5 Phase errors: classification of phase error; management of phase error; magnitude of phase error; requirements on a practical SAR motion sensor; moving target effects. Part 6 Autofocus techniques: mapdrift; multiple aperture mapdrift; phase difference; phase gradient; prominent point processing; considerations for space-variant refocus. Part 7 Processor design examples: the common UNIX SAR processor; the ground to air imaging radar processor. Part 8 SAR system performance: image quality metrics; system performance budgeting; requirements on system impulse response; requirements on system noise; geometric distortion; secondary image quality metrics; test arrays. Part 9 Spotlight processing applications: spotlight processing of scan and stripmap SAR data; interferometric SAR; forward look SAR; vibrating target detection. Part 10 Range migration algorithm: model; algorithm overview; analytical development; discussion; efficient algorithms for range migration processing. Part 11 Chirp scaling algorithm: non-dechirped signal model; algorithm overview; analytical development; discussion. Part 12 Comparison of image formation algorithms: image formation algorithm models; computational complexity; memory requirements; other considerations.

    £151.05

  • Pearls of Algorithm Engineering

    Cambridge University Press Pearls of Algorithm Engineering

    Out of stock

    Book Synopsis: There are many textbooks on algorithms focusing on big-O notation and basic design principles. This book offers a unique approach to taking the design and analyses to the level of predictable practical efficiency, discussing core and classic algorithmic problems that arise in the development of big data applications, and presenting elegant solutions of increasing sophistication and efficiency. Solutions are analyzed within the classic RAM model, and the more practically significant external-memory model that allows one to perform I/O-complexity evaluations. Chapters cover various data types, including integers, strings, trees, and graphs, algorithmic tools such as sampling, sorting, data compression, and searching in dictionaries and texts, and lastly, recent developments regarding compressed data structures. Algorithmic solutions are accompanied by detailed pseudocode and many running examples, thus enriching the toolboxes of students, researchers, and professionals interested in effe

    Trade Review: 'When I joined Google in 2000, algorithmic problems came up every day. Even strong engineers didn't have all the background they needed to design efficient algorithms. Paolo Ferragina's well-written and concise book helps fill that void. A strong software engineer who masters this material will be an asset.' Martin Farach-Colton, Rutgers University 'There are plenty of books on Algorithm Design, but few about Algorithm Engineering. This is one of those rare books on algorithms that pays the necessary attention to the more practical aspects of the process, which become crucial when actual performance matters, and which render some theoretically appealing algorithms useless in real life. The author is an authority on this challenging path between theory and practice of algorithms, which aims at both conceptually nontrivial and practically relevant solutions. I hope the readers will find the reading as pleasant and inspiring as I did.' Gonzalo Navarro, University of Chile 'Ferragina combines his skills as a coding engineer, an algorithmic mathematician, and a pedagogic innovator to engineer a string of pearls made up of beautiful algorithms. In this, beauty dovetails with computational efficiency. His data structures of Stringomics hold the promise for a better understanding of population of genomes and the history of humanity. It belongs in the library of anyone interested in the beauty of code and the code of beauty.' Bud Mishra, Courant Institute, New York University 'There are many textbooks on algorithms focusing on big-O notation and general design principles. This book offers a completely unique aspect of taking the design and analyses to the level of predictable practical efficiency. No sacrifices in generality are made, but rather a convenient formalism is developed around external memory efficiency and parallelism provided by modern computers. The benefits of randomization are elegantly used for obtaining simple algorithms, whose insightful analyses provide the reader with useful tools to be applied to other settings. This book will be invaluable in broadening the computer science curriculum with a course on algorithm engineering.' Veli Makinen, University of Helsinki

    Table of Contents: 1. Prologue; 2. A warm-up!; 3. Random sampling; 4. List ranking; 5. Sorting atomic items; 6. Set intersection; 7. Sorting strings; 8. The dictionary problem; 9. Searching strings by prefix; 10. Searching strings by substring; 11. Integer coding; 12. Statistical coding; 13. Dictionary-based compressors; 14. The Burrows-Wheeler transform; 15. Compressed data structures; 16. Conclusion.

    £999.99

  • Inference and Learning from Data Volume 1

    Cambridge University Press Inference and Learning from Data Volume 1

    1 in stock

    Book Synopsis: Written in an engaging and rigorous style by a world authority in the field, this is an accessible and comprehensive introduction to core topics in inference and learning. With downloadable Matlab code and solutions for instructors, this is the ideal introduction for students of data science, machine learning, and engineering.

    Trade Review: 'Inference and Learning from Data is a uniquely comprehensive introduction to the signal processing foundations of modern data science. Lucidly written, with a carefully balanced choice of topics, this textbook is an indispensable resource for both graduate students and data science practitioners, a piece of lasting value.' Helmut Bölcskei, ETH Zurich 'This textbook provides a lucid and magisterial treatment of methods for inference and learning from data, aided by hundreds of solved examples, computer simulations, and over 1000 problems. The material ranges from fundamentals to recent advances in statistical learning theory; variational inference; neural, convolutional, and Bayesian networks; and several other topics. It is aimed at students and practitioners, and can be used for several different introductory and advanced courses.' Thomas Kailath, Stanford University 'A tour de force comprehensive three-volume set for the fast-developing areas of data science, machine learning, and statistical signal processing. With masterful clarity and depth, Sayed covers, connects, and integrates background fundamentals and classical and emerging methods in inference and learning. The books are rich in worked-out examples, exercises, and links to data sets. Commentaries with historical background and contexts for the topics covered in each chapter are a special feature.' Mostafa Kaveh, University of Minnesota 'This is the first of a three-volume series covering from fundamentals to the many various methods in inference and learning from data. Professor Sayed is a prolific author of award-winning books and research papers who has himself contributed significantly to many of the topics included in the series. With his encyclopedic knowledge, his careful attention to detail, and in a very approachable style, this first volume covers the basics of matrix theory, probability and stochastic processes, convex and non-convex optimization, gradient-descent, convergence analysis, and several other advanced topics that will be needed for volume II (Inference) and volume III (Learning). This series, and in particular this volume, will be a must-have for educators, students, researchers, and technologists alike who are pursuing a systematic study, want a quick refresh, or may use it as a helpful reference to learn about these fundamentals.' Jose Moura, Carnegie Mellon University 'Volume I of Inference and Learning from Data provides a foundational treatment of one of the most topical aspects of contemporary signal and information processing, written by one of the most talented expositors in the field. It is a valuable resource both as a textbook for students wishing to enter the field and as a reference work for practicing engineers.' Vincent Poor, Princeton University 'Inference and Learning from Data, Vol. I: Foundations offers an insightful and well-integrated primer with just the right balance of everything that new graduate students need to put their research on a solid footing. It covers foundations in a modern way - emphasizing the most useful concepts, including proofs, and timely topics which are often missing from introductory graduate texts. All in one beautifully written textbook. An impressive feat! I highly recommend it.' Nikolaos Sidiropoulos, University of Virginia 'This exceptional encyclopedic work on learning from data will be the bible of the field for many years to come. Totaling more than 3000 pages, this three-volume book covers in an exhaustive and timely manner the topic of data science, which has become critically important to many areas and lies at the basis of modern signal processing, machine learning, artificial intelligence, and their numerous applications. Written by an authority in the field, the book is really unique in scale and breadth, and it will be an invaluable source of information for students, researchers, and practitioners alike.' Peter Stoica, Uppsala University 'Very meticulous, thorough, and timely. This volume is largely focused on optimization, which is so important in the modern-day world of data science, signal processing, and machine learning. The book is classical and modern at the same time - many classical topics are nicely linked to modern topics of current interest. All the necessary mathematical background is covered. Professor Sayed is one of the foremost researchers and educators in the field and the writing style is unhurried and clear with many examples, truly reflecting the towering scholar that he is. This volume is so complete that it can be used for self-study, as a classroom text, and as a timeless research reference.' P. P. Vaidyanathan, Caltech 'The book series is timely and indispensable. It is a unique companion for graduate students and early-career researchers. The three volumes provide an extraordinary breadth and depth of techniques and tools, and encapsulate the experience and expertise of a world-class expert in the field. The pedagogically crafted text is written lucidly, yet never compromises rigor. Theoretical concepts are enhanced with illustrative figures, well-thought problems, intuitive examples, datasets, and MATLAB codes that reinforce readers' learning.' Abdelhak Zoubir, TU Darmstadt

    Table of Contents: Contents; Preface; Notation; 1. Matrix theory; 2. Vector differentiation; 3. Random variables; 4. Gaussian distribution; 5. Exponential distributions; 6. Entropy and divergence; 7. Random processes; 8. Convex functions; 9. Convex optimization; 10. Lipschitz conditions; 11. Proximal operator; 12. Gradient descent method; 13. Conjugate gradient method; 14. Subgradient method; 15. Proximal and mirror descent methods; 16. Stochastic optimization; 17. Adaptive gradient methods; 18. Gradient noise; 19. Convergence analysis I: Stochastic gradient algorithms; 20. Convergence analysis II: Stochastic subgradient algorithms; 21. Convergence analysis III: Stochastic proximal algorithms; 22. Variance-reduced methods I: Uniform sampling; 23. Variance-reduced methods II: Random reshuffling; 24. Nonconvex optimization; 25. Decentralized optimization I: Primal methods; 26. Decentralized optimization II: Primal-dual methods; Author index; Subject index.

    £80.74

  • Inference and Learning from Data Volume 2

    Cambridge University Press Inference and Learning from Data Volume 2

    1 in stock

    Book Synopsis: This extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. This second volume, Inference, builds on the foundational topics established in volume I to introduce students to techniques for inferring unknown variables and quantities, including Bayesian inference, Monte Carlo Markov Chain methods, maximum-likelihood estimation, hidden Markov models, Bayesian networks, and reinforcement learning. A consistent structure and pedagogy is employed throughout this volume to reinforce student understanding, with over 350 end-of-chapter problems (including solutions for instructors), 180 solved examples, almost 200 figures, datasets and downloadable Matlab code. Supported by sister volumes Foundations and Learning, and unique in its scale and depth, this te

    Trade Review: 'Inference and Learning from Data is a uniquely comprehensive introduction to the signal processing foundations of modern data science. Lucidly written, with a carefully balanced choice of topics, this textbook is an indispensable resource for both graduate students and data science practitioners, a piece of lasting value.' Helmut Bölcskei, ETH Zurich 'This textbook provides a lucid and magisterial treatment of methods for inference and learning from data, aided by hundreds of solved examples, computer simulations, and over 1000 problems. The material ranges from fundamentals to recent advances in statistical learning theory; variational inference; neural, convolutional, and Bayesian networks; and several other topics. It is aimed at students and practitioners, and can be used for several different introductory and advanced courses.' Thomas Kailath, Stanford University 'A tour de force comprehensive three-volume set for the fast-developing areas of data science, machine learning, and statistical signal processing. With masterful clarity and depth, Sayed covers, connects, and integrates background fundamentals and classical and emerging methods in inference and learning. The books are rich in worked-out examples, exercises, and links to data sets. Commentaries with historical background and contexts for the topics covered in each chapter are a special feature.' Mostafa Kaveh, University of Minnesota 'This is the first of a three-volume series covering from fundamentals to the many various methods in inference and learning from data. Professor Sayed is a prolific author of award-winning books and research papers who has himself contributed significantly to many of the topics included in the series. With his encyclopedic knowledge, his careful attention to detail, and in a very approachable style, this first volume covers the basics of matrix theory, probability and stochastic processes, convex and non-convex optimization, gradient-descent, convergence analysis, and several other advanced topics that will be needed for volume II (Inference) and volume III (Learning). This series, and in particular this volume, will be a must-have for educators, students, researchers, and technologists alike who are pursuing a systematic study, want a quick refresh, or may use it as a helpful reference to learn about these fundamentals.' Jose Moura, Carnegie Mellon University 'Volume I of Inference and Learning from Data provides a foundational treatment of one of the most topical aspects of contemporary signal and information processing, written by one of the most talented expositors in the field. It is a valuable resource both as a textbook for students wishing to enter the field and as a reference work for practicing engineers.' Vincent Poor, Princeton University 'Inference and Learning from Data, Vol. I: Foundations offers an insightful and well-integrated primer with just the right balance of everything that new graduate students need to put their research on a solid footing. It covers foundations in a modern way - emphasizing the most useful concepts, including proofs, and timely topics which are often missing from introductory graduate texts. All in one beautifully written textbook. An impressive feat! I highly recommend it.' Nikolaos Sidiropoulos, University of Virginia 'This exceptional encyclopedic work on learning from data will be the bible of the field for many years to come. Totaling more than 3000 pages, this three-volume book covers in an exhaustive and timely manner the topic of data science, which has become critically important to many areas and lies at the basis of modern signal processing, machine learning, artificial intelligence, and their numerous applications. Written by an authority in the field, the book is really unique in scale and breadth, and it will be an invaluable source of information for students, researchers, and practitioners alike.' Peter Stoica, Uppsala University 'Very meticulous, thorough, and timely. This volume is largely focused on optimization, which is so important in the modern-day world of data science, signal processing, and machine learning. The book is classical and modern at the same time - many classical topics are nicely linked to modern topics of current interest. All the necessary mathematical background is covered. Professor Sayed is one of the foremost researchers and educators in the field and the writing style is unhurried and clear with many examples, truly reflecting the towering scholar that he is. This volume is so complete that it can be used for self-study, as a classroom text, and as a timeless research reference.' P. P. Vaidyanathan, Caltech 'The book series is timely and indispensable. It is a unique companion for graduate students and early-career researchers. The three volumes provide an extraordinary breadth and depth of techniques and tools, and encapsulate the experience and expertise of a world-class expert in the field. The pedagogically crafted text is written lucidly, yet never compromises rigor. Theoretical concepts are enhanced with illustrative figures, well-thought problems, intuitive examples, datasets, and MATLAB codes that reinforce readers' learning.' Abdelhak Zoubir, TU Darmstadt

    Table of Contents: Preface; Notation; 27. Mean-Square-Error inference; 28. Bayesian inference; 29. Linear regression; 30. Kalman filter; 31. Maximum likelihood; 32. Expectation maximization; 33. Predictive modeling; 34. Expectation propagation; 35. Particle filters; 36. Variational inference; 37. Latent Dirichlet allocation; 38. Hidden Markov models; 39. Decoding HMMs; 40. Independent component analysis; 41. Bayesian networks; 42. Inference over graphs; 43. Undirected graphs; 44. Markov decision processes; 45. Value and policy iterations; 46. Temporal difference learning; 47. Q-learning; 48. Value function approximation; 49. Policy gradient methods; Author index; Subject index.

    £71.24

  • Inference and Learning from Data Volume 3

    Cambridge University Press Inference and Learning from Data Volume 3

    1 in stock

    Book Synopsis: This extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. This final volume, Learning, builds on the foundational topics established in volume I to provide a thorough introduction to learning methods, addressing techniques such as least-squares methods, regularization, online learning, kernel methods, feedforward and recurrent neural networks, meta-learning, and adversarial attacks. A consistent structure and pedagogy is employed throughout this volume to reinforce student understanding, with over 350 end-of-chapter problems (including complete solutions for instructors), 280 figures, 100 solved examples, datasets and downloadable Matlab code. Supported by sister volumes Foundations and Inference, and unique in its scale and depth, this textbook

    Trade Review: 'Inference and Learning from Data is a uniquely comprehensive introduction to the signal processing foundations of modern data science. Lucidly written, with a carefully balanced choice of topics, this textbook is an indispensable resource for both graduate students and data science practitioners, a piece of lasting value.' Helmut Bölcskei, ETH Zurich 'This textbook provides a lucid and magisterial treatment of methods for inference and learning from data, aided by hundreds of solved examples, computer simulations, and over 1000 problems. The material ranges from fundamentals to recent advances in statistical learning theory; variational inference; neural, convolutional, and Bayesian networks; and several other topics. It is aimed at students and practitioners, and can be used for several different introductory and advanced courses.' Thomas Kailath, Stanford University 'A tour de force comprehensive three-volume set for the fast-developing areas of data science, machine learning, and statistical signal processing. With masterful clarity and depth, Sayed covers, connects, and integrates background fundamentals and classical and emerging methods in inference and learning. The books are rich in worked-out examples, exercises, and links to data sets. Commentaries with historical background and contexts for the topics covered in each chapter are a special feature.' Mostafa Kaveh, University of Minnesota 'This is the first of a three-volume series covering from fundamentals to the many various methods in inference and learning from data. Professor Sayed is a prolific author of award-winning books and research papers who has himself contributed significantly to many of the topics included in the series. With his encyclopedic knowledge, his careful attention to detail, and in a very approachable style, this first volume covers the basics of matrix theory, probability and stochastic processes, convex and non-convex optimization, gradient-descent, convergence analysis, and several other advanced topics that will be needed for volume II (Inference) and volume III (Learning). This series, and in particular this volume, will be a must-have for educators, students, researchers, and technologists alike who are pursuing a systematic study, want a quick refresh, or may use it as a helpful reference to learn about these fundamentals.' Jose Moura, Carnegie Mellon University 'Volume I of Inference and Learning from Data provides a foundational treatment of one of the most topical aspects of contemporary signal and information processing, written by one of the most talented expositors in the field. It is a valuable resource both as a textbook for students wishing to enter the field and as a reference work for practicing engineers.' Vincent Poor, Princeton University 'Inference and Learning from Data, Vol. I: Foundations offers an insightful and well-integrated primer with just the right balance of everything that new graduate students need to put their research on a solid footing. It covers foundations in a modern way - emphasizing the most useful concepts, including proofs, and timely topics which are often missing from introductory graduate texts. All in one beautifully written textbook. An impressive feat! I highly recommend it.' Nikolaos Sidiropoulos, University of Virginia 'This exceptional encyclopedic work on learning from data will be the bible of the field for many years to come. Totaling more than 3000 pages, this three-volume book covers in an exhaustive and timely manner the topic of data science, which has become critically important to many areas and lies at the basis of modern signal processing, machine learning, artificial intelligence, and their numerous applications. Written by an authority in the field, the book is really unique in scale and breadth, and it will be an invaluable source of information for students, researchers, and practitioners alike.' Peter Stoica, Uppsala University 'Very meticulous, thorough, and timely. This volume is largely focused on optimization, which is so important in the modern-day world of data science, signal processing, and machine learning. The book is classical and modern at the same time - many classical topics are nicely linked to modern topics of current interest. All the necessary mathematical background is covered. Professor Sayed is one of the foremost researchers and educators in the field and the writing style is unhurried and clear with many examples, truly reflecting the towering scholar that he is. This volume is so complete that it can be used for self-study, as a classroom text, and as a timeless research reference.' P. P. Vaidyanathan, Caltech 'The book series is timely and indispensable. It is a unique companion for graduate students and early-career researchers. The three volumes provide an extraordinary breadth and depth of techniques and tools, and encapsulate the experience and expertise of a world-class expert in the field. The pedagogically crafted text is written lucidly, yet never compromises rigor. Theoretical concepts are enhanced with illustrative figures, well-thought problems, intuitive examples, datasets, and MATLAB codes that reinforce readers' learning.' Abdelhak Zoubir, TU Darmstadt

    Table of Contents: Preface; Notation; 50. Least-squares problems; 51. Regularization; 52. Nearest-neighbor rule; 53. Self-organizing maps; 54. Decision trees; 55. Naive Bayes classifier; 56. Linear discriminant analysis; 57. Principal component analysis; 58. Dictionary learning; 59. Logistic regression; 60. Perceptron; 61. Support vector machines; 62. Bagging and boosting; 63. Kernel methods; 64. Generalization theory; 65. Feedforward neural networks; 66. Deep belief networks; 67. Convolutional networks; 68. Generative networks; 69. Recurrent networks; 70. Explainable learning; 71. Adversarial attacks; 72. Meta learning; Author index; Subject index.

    1 in stock

    £71.24
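
    As an informal companion to the listing above, here is a minimal NumPy sketch of the regularized least-squares idea covered in the volume: ridge regression adds a penalty lam*||w||^2 to the squared-error cost and shrinks the ordinary least-squares solution. The data and the value of lam are synthetic choices for illustration; this is not code from the book or its Matlab downloads.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 50, 10
    X = rng.standard_normal((n, d))
    w_true = rng.standard_normal(d)
    y = X @ w_true + 0.1 * rng.standard_normal(n)

    lam = 0.5  # regularization strength (arbitrary for this toy)
    w_ls = np.linalg.lstsq(X, y, rcond=None)[0]                     # ordinary least squares
    w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)   # ridge closed form

    print("LS error   :", np.linalg.norm(w_ls - w_true))
    print("ridge error:", np.linalg.norm(w_ridge - w_true))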

  • Introduction to Digital Communications

    Cambridge University Press Introduction to Digital Communications

    1 in stock

    Book SynopsisMaster the fundamentals of digital communications systems with this accessible and hands-on introductory textbook, carefully interweaving theory and practice. The just-in-time approach introduces essential background as needed, keeping academic theory firmly linked to practical applications. The example-led teaching frames key concepts in the context of real-world systems, such as 5G, WiFi, and GPS. Stark provides foundational material on the trade-offs between energy and bandwidth efficiency, giving students a solid grounding in the fundamental challenges of designing digital communications systems. Features include over 300 illustrative figures, 80 examples, and 130 end-of-chapter problems to reinforce student understanding, with solutions for instructors. Accompanied online by lecture slides, computational MATLAB and Python resources, and supporting data sets, this is the ideal introduction to digital communications for senior undergraduate and graduate students in electrical engineering.Trade Review'This book emphasizes the fundamentals of digital communication as well as its practice. It provides examples to enhance the understanding, and the many illustrations explain the basic concepts very well. Several concepts from actual engineering practice are discussed in detail.' Ender Ayanoglu, University of California, Irvine'Wayne Stark is a widely respected researcher in digital communications, as well as a dedicated and talented teacher. This book reflects his years of experience teaching a challenging and rapidly changing subject to senior undergraduate and first-year graduate students. His choice of topics and careful balance between theory and practice ensure that this book will be a valuable resource in electrical engineering curricula for years to come.' Tom Fuja, University of Notre Dame'This self-contained book is excellent for a first course in digital communications. It strikes a perfect balance in theory, practice, and insights, so that a beginner can get a good understanding without getting lost in advanced mathematical concepts.' Sudharman K. Jayaweera, University of New Mexico'This is an extraordinary textbook on digital communication theory and practices. Key results are derived step by step, and it provides many examples and figures that help students grasp key concepts. I wish it had been available when I was a student.' Sang Wu Kim, Iowa State University'Not only is this textbook comprehensive and well written, it is mathematically rigorous. The specific numerical examples and practical applications enhance the theoretical derivations. The author does an excellent job of communicating the importance of each result, making it an appropriate textbook for senior undergraduates taking a solid course in the theory of digital communications.' Laurence B. Milstein, University of California, San Diego'I enjoyed this book's clarity and logical presentation. It is easy to read, balancing mathematical fundamentals with practical applications, problem sets, and examples. I'd be delighted to use it when teaching my undergraduate course on Communication Systems and Principles. This concise resource provides a thorough foundation on digital communication concepts, systems, and techniques, explaining communication systems in general and digital communications specifically.' Lina Mohjazi, University of Glasgow'The real jewel of the book is the introduction chapter. 
It lays out the most important design considerations and trade-offs at a high (but not superficial) level straightaway, serving as a roadmap to the material in the rest of the book. It is the best and most useful introduction chapter that no one should skip!' Tan F. Wong, University of Florida'This is an excellent textbook for students, communications engineers, and researchers alike. Based on many years' teaching experience, it includes detailed and illustrative examples that help students understand the fundamentals of digital communications. Professor Stark explains the trade-offs of different key parameters in digital communications, and covers state-of-the-art technologies such as LDPC codes. Each chapter contains clear goals, summaries, and useful exercises.' Xiang-Gen Xia, University of DelawareTable of ContentsContents; Preface; Acknowledgement; List of abbreviations; 1. Fundamentals of digital communications; 2. Modulation and demodulation; 3. Probability, random variables, random processes, signal bandwidth; 4. Error probability for binary signals; 5. Optimal receivers for M-ary communication; 6. Modulation techniques; 7. Wireless channels and transmission techniques; 8. Block codes; 9. Convolutional codes; Appendix A. Pseudorandom sequences; Appendix B. Trigonometric and Fourier transform identities; Appendix C. Finite fields and BCH codes; Appendix D. Simulation of signals and noise; References; Index.

    1 in stock

    £71.24
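
    The chapter on error probability for binary signals in the listing above derives results of the kind sketched below: the simulated bit-error rate of BPSK over an AWGN channel compared against the theoretical value Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0)). The snippet is a generic illustration, not material from the textbook's MATLAB or Python resources.

    import numpy as np
    from scipy.special import erfc

    rng = np.random.default_rng(1)
    n_bits = 200_000
    for ebn0_db in (0, 2, 4, 6, 8):
        ebn0 = 10 ** (ebn0_db / 10)
        bits = rng.integers(0, 2, n_bits)
        symbols = 2 * bits - 1                          # BPSK mapping: 0 -> -1, 1 -> +1
        noise = rng.standard_normal(n_bits) / np.sqrt(2 * ebn0)
        decisions = (symbols + noise) > 0               # threshold detector
        ber_sim = np.mean(decisions != bits)
        ber_theory = 0.5 * erfc(np.sqrt(ebn0))
        print(f"Eb/N0 = {ebn0_db} dB: simulated {ber_sim:.4f}, theory {ber_theory:.4f}")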

  • Wireless Communications and Machine Learning

    Cambridge University Press Wireless Communications and Machine Learning

    1 in stock

    Book SynopsisThis concise single-semester textbook demonstrates cutting-edge concepts at the intersection of machine learning (ML) and wireless communications. Requiring no previous knowledge of ML, it includes over 20 examples addressing real-world challenges, and over 100 end-of-chapter exercises, including hands-on exercises using Python.

    1 in stock

    £66.49

  • Theory of Image Formation

    Cambridge University Press Theory of Image Formation

    2 in stock

    Book SynopsisFully revised and updated, the second edition of this classic text is the definitive guide to the mathematical models underlying imaging from sensed data. Building on fundamental principles derived from the two- and three-dimensional Fourier transform, and other key mathematical concepts, it introduces a broad range of imaging modalities within a unified framework, emphasising universal theoretical concepts over specific physical aspects. This expanded edition presents new coverage of optical-coherence microscopy, electron-beam microscopy, near-field microscopy, and medical imaging modalities including MRI, CAT, ultrasound, and the imaging of viruses, and introduces additional end-of-chapter problems to support reader understanding. Encapsulating the author's fifty years of experience in the field, this is the ideal introduction for senior undergraduate and graduate students, academic researchers, and professional engineers across engineering and the physical sciences.

    2 in stock

    £71.24

  • Compressed Sensing Theory and Applications

    Cambridge University Press Compressed Sensing Theory and Applications

    15 in stock

    Book SynopsisCompressed sensing has rapidly become a key concept in various areas of applied mathematics, computer science and electrical engineering. This book highlights theoretical advances and applications in this area. Ideal for both researchers and graduate students seeking an understanding of the potential of compressed sensing.Trade Review'… a charming encouragement to fascinating scientific adventure for talented students. Also … a solid reference platform for researchers in many fields.' Artur Przelaskowski, IEEE Communications MagazineTable of Contents1. Introduction to compressed sensing Mark A. Davenport, Marco F. Duarte, Yonina C. Eldar and Gitta Kutyniok; 2. Second generation sparse modeling: structured and collaborative signal analysis Alexey Castrodad, Ignacio Ramirez, Guillermo Sapiro, Pablo Sprechmann and Guoshen Yu; 3. Xampling: compressed sensing of analog signals Moshe Mishali and Yonina C. Eldar; 4. Sampling at the rate of innovation: theory and applications Jose Antonia Uriguen, Yonina C. Eldar, Pier Luigi Dragotta and Zvika Ben-Haim; 5. Introduction to the non-asymptotic analysis of random matrices Roman Vershynin; 6. Adaptive sensing for sparse recovery Jarvis Haupt and Robert Nowak; 7. Fundamental thresholds in compressed sensing: a high-dimensional geometry approach Weiyu Xu and Babak Hassibi; 8. Greedy algorithms for compressed sensing Thomas Blumensath, Michael E. Davies and Gabriel Rilling; 9. Graphical models concepts in compressed sensing Andrea Montanari; 10. Finding needles in compressed haystacks Robert Calderbank, Sina Jafarpour and Jeremy Kent; 11. Data separation by sparse representations Gitta Kutyniok; 12. Face recognition by sparse representation Arvind Ganesh, Andrew Wagner, Zihan Zhou, Allen Y. Yang, Yi Ma and John Wright.

    15 in stock

    £93.10
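
    In the spirit of the chapter on greedy algorithms for compressed sensing in the collection above, here is a toy orthogonal matching pursuit (OMP) routine that recovers a k-sparse vector from a small number of random measurements. It is an illustrative sketch only, not code associated with the book; the problem sizes are arbitrary.

    import numpy as np

    def omp(A, y, k):
        """Greedily recover a k-sparse x from y = A x."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))    # most correlated column
            support.append(j)
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s            # update residual
        x = np.zeros(A.shape[1])
        x[support] = x_s
        return x

    rng = np.random.default_rng(2)
    m, n, k = 40, 100, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)          # random sensing matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    x_hat = omp(A, A @ x_true, k)
    print("recovery error:", np.linalg.norm(x_hat - x_true))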

  • Random Matrix Methods for Wireless Communications

    Cambridge University Press Random Matrix Methods for Wireless Communications

    15 in stock

    Book SynopsisBlending theoretical results with practical applications, this book provides an introduction to random matrix theory and shows how it can be used to tackle a variety of real-world problems in wireless communications. Intuitive yet rigorous, it demonstrates how to choose the correct approach for obtaining mathematically accurate results.Table of Contents1. Introduction; Part I. Theoretical Aspects: 2. Random matrices; 3. The Stieltjes transform method; 4. Free probability theory; 5. Combinatoric approaches; 6. Deterministic equivalents; 7. Spectrum analysis; 8. Eigen-inference; 9. Extreme eigenvalues; 10. Summary and partial conclusions; Part II. Applications to Wireless Communications: 11. Introduction to applications in telecommunications; 12. System performance of CDMA technologies; 13. Performance of multiple antenna systems; 14. Rate performance in multiple access and broadcast channels; 15. Performance of multi-cellular and relay networks; 16. Detection; 17. Estimation; 18. System modeling; 19. Perspectives; 20. Conclusion.

    15 in stock

    £89.99
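
    A quick numerical illustration (not drawn from the book) of the kind of random matrix result the text above builds on: for a p x n matrix X with i.i.d. unit-variance entries and ratio c = p/n, the eigenvalues of the sample covariance (1/n)XX^T concentrate on the Marchenko-Pastur support [(1 - sqrt(c))^2, (1 + sqrt(c))^2]. The dimensions below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(3)
    p, n = 500, 2000
    c = p / n
    X = rng.standard_normal((p, n))
    eigs = np.linalg.eigvalsh(X @ X.T / n)        # sample covariance eigenvalues
    print("empirical support     :", eigs.min(), eigs.max())
    print("Marchenko-Pastur edges:", (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2)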

  • The Mathematics of Signal Processing Cambridge Texts in Applied Mathematics Series Number 48

    Cambridge University Press The Mathematics of Signal Processing Cambridge Texts in Applied Mathematics Series Number 48

    15 in stock

    Book SynopsisArising from courses taught by the authors, this largely self-contained treatment is ideal for mathematicians who are interested in applications or for students from applied fields who want to understand the mathematics behind their subject. Early chapters cover Fourier analysis, functional analysis, probability and linear algebra, all of which have been chosen to prepare the reader for the applications to come. The book includes rigorous proofs of core results in compressive sensing and wavelet convergence. Fundamental is the treatment of the linear system y = Φx in both finite and infinite dimensions. There are three possibilities: the system is determined, overdetermined or underdetermined, each with different aspects. The authors assume only basic familiarity with advanced calculus, linear algebra and matrix theory and modest familiarity with signal processing, so the book is accessible to students from the advanced undergraduate level. Many exercises are also included.Trade Review'Damelin and Miller provide a very detailed and thorough treatment of all the important mathematics related to signal processing. This includes the required background information found in elementary mathematics courses, so their book is really self-contained. The style of writing is suitable not only for mathematicians, but also for practitioners from other areas. Indeed, Damelin and Miller managed to write their text in a form that is accessible to nonspecialists, without giving up mathematical rigor.' Kai Diethelm, Computing Reviews'In the last 20 years or so, many books on wavelets have been published; most of them deal with wavelets from either the engineering or the mathematics perspective, but few try to connect the two viewpoints. The book under review falls under the last category … Overall, the book is a good addition to the literature on engineering mathematics.' Ahmed I. Zayed, Mathematical ReviewsTable of Contents1. Introduction; 2. Normed vector spaces; 3. Analytic tools; 4. Fourier series; 5. Fourier transforms; 6. Compressive sensing; 7. Discrete transforms; 8. Linear filters; 9. Windowed Fourier transforms, continuous wavelets, frames; 10. Multiresolution analysis; 11. Discrete wavelet theory; 12. Biorthogonal filters and wavelets; 13. Parsimonious representation of data; Bibliography; Index.

    15 in stock

    £105.45
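
    The synopsis above singles out the linear system y = Φx and its three regimes; the following NumPy sketch (illustrative only, not from the book) solves a small instance of each: a determined square system exactly, an overdetermined system in the least-squares sense, and an underdetermined system via the minimum-norm pseudoinverse solution (compressed sensing instead seeks a sparse solution). All data are random placeholders.

    import numpy as np

    rng = np.random.default_rng(4)

    # Determined: square, (almost surely) invertible Phi -> exact solution.
    Phi = rng.standard_normal((4, 4)); y = rng.standard_normal(4)
    x_det = np.linalg.solve(Phi, y)

    # Overdetermined: more equations than unknowns -> least-squares solution.
    Phi = rng.standard_normal((8, 4)); y = rng.standard_normal(8)
    x_over, *_ = np.linalg.lstsq(Phi, y, rcond=None)

    # Underdetermined: fewer equations than unknowns -> minimum-norm solution.
    Phi = rng.standard_normal((4, 8)); y = rng.standard_normal(4)
    x_under = np.linalg.pinv(Phi) @ y

    print(x_det.shape, x_over.shape, x_under.shape)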

  • Scalability Density and Decision Making in

    Cambridge University Press Scalability Density and Decision Making in

    Out of stock

    Book SynopsisThis cohesive treatment of cognitive radio and networking technology integrates information and decision theory to provide insight into relationships throughout all layers of networks and across all wireless applications. It encompasses conventional considerations of spectrum and waveform selection and covers topology determination, routing policies, content positioning and future hybrid architectures that fully integrate wireless and wired services. Emerging flexibility in spectrum regulation and the imminent adoption of spectrum-sharing policies make this topic of immediate relevance both to the research community and to the commercial wireless community. Features specific examples of decision-making structures and criteria required to extend network density and scaling to unprecedented levels Integrates sensing, control plane and content operations into a single cohesive structure Provides simpler and more powerful models of network operation Presents a unique approach to decisTrade Review'This is an extremely important text that comes at a critical time in the evolution of our understanding of both the characteristics of the spectrum need and the means by which this increasingly urgent need can be satisfied. [Marshall] has been one of the long term leaders in the development of the framework for dynamic spectrum access networks and cognitive radio technology, giving him a historic as well as current perspective on the challenges. The insights in this book should be of enormous value to students, active researchers, wireless systems developers, and regulatory and policy leaders.' Dennis A. Roberson, Illinois Institute of Technology'… highly original and brilliantly insightful. Preston Marshall has examined cognitive technologies under three essential key headings - scalability, density and decision-making. In doing this he unlocks the power of cognitive technologies and builds a realisable and compelling vision for communication networks of the future. Every section … is full of new ideas and insights that could only be written by someone who has been a leader in this field and has a handle on the bigger picture as well as a deep understanding of the technical details. This book is so far removed from the myriad of books that simple relate information to the reader. It is packed full of ideas, opinions and most crucially supporting evidence … a breath of fresh air … essential reading for someone who has any interest in how the challenges for communication systems of the future will be met.' Linda Doyle, University of Dublin'… a refreshing take … I strongly recommend this book to both scholars and experts in the field, and I am convinced that it will be frequently used as reference material for all interested in the future potential of cognitive wireless networks.' Shaunak Joshi, IEEE Communications MagazineTable of ContentsPreface; Part I. Overview: 1. Introduction; 2. Theoretical foundations; 3. Future wireless operation, environments and dynamic spectrum access; 4. Fundamental challenges in cognitive radio and wireless systems; Part II. Generalized Environmental Characterization: 5. The spectrum and channel environment; 6. Propagation modeling, characterization and control; 7. The connectivity environment; 8. The information and content environment; Part III. System Performance of Cognitive Wireless Systems: 9. Network scaling; 10. Network physical density limitations; 11. Network sensing and exchange information effectiveness; 12. 
Content access information effectiveness; 13. Minimizing nonlinear circuit effects; Part IV. Integrated Analysis and Decision-Making: 14. Awareness structure for cognitive wireless systems; 15. Instantiating and updating beliefs across wireless networks; 16. Decision-making structure for cognitive wireless systems; Part V. Summary: 17. Further research needs in cognitive wireless networks; Appendix A. Terms and acronyms; Appendix B. Symbols; Appendix C. Mathematica and MATLAB routines.

    Out of stock

    £112.10

  • Fundamentals of Stream Processing Application Design Systems and Analytics

    Cambridge University Press Fundamentals of Stream Processing Application Design Systems and Analytics

    15 in stock

    Book SynopsisStream processing is a novel distributed computing paradigm that supports the gathering, processing and analysis of high-volume, heterogeneous, continuous data streams, to extract insights and actionable results in real time. This comprehensive, hands-on guide combining the fundamental building blocks and emerging research in stream processing is ideal for application designers, system builders, analytic developers, as well as students and researchers in the field. This book introduces the key components of the stream computing paradigm, including the distributed system infrastructure, the programming model, design patterns and streaming analytics. The explanation of the underlying theoretical principles, illustrative examples and implementations using the IBM InfoSphere Streams SPL language and real-world case studies provide students and practitioners with a comprehensive understanding of such applications and the middleware that supports them.Table of ContentsPart I. Fundamentals: 1. What brought us here?; 2. Introduction to stream processing; Part II. Application Development: 3. Application development - the basics; 4. Application development - data flow programming; 5. Large-scale development - modularity, extensibility, and distribution; 6. Application engineering - debugging and visualization; Part III. System Architecture: 7. Architecture of a stream processing system; 8. InfoSphere streams architecture; Part IV. Application Design and Analytics: 9. Design principles and patterns for stream processing applications; 10. Stream processing and mining algorithms; Part V. Case Studies: 11. End-to-end application examples; Part VI. Closing Notes: 12. Conclusion.

    15 in stock

    £79.79
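
    The book's worked examples use IBM InfoSphere Streams and its SPL language; purely to illustrate the windowed-aggregation pattern that underlies much of streaming analytics, here is a small, language-neutral Python sketch that computes a sliding-window average over an unbounded stream. The stream source and window length are hypothetical.

    from collections import deque
    import random

    def sensor_stream():
        while True:                       # stands in for an unbounded data source
            yield random.gauss(20.0, 2.0)

    def sliding_average(stream, window=10):
        buf = deque(maxlen=window)
        for value in stream:
            buf.append(value)
            yield sum(buf) / len(buf)     # emit one aggregate per arriving tuple

    averages = sliding_average(sensor_stream())
    for _, avg in zip(range(5), averages):
        print(round(avg, 2))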

  • Information Theoretic Perspectives on 5G Systems

    Cambridge University Press Information Theoretic Perspectives on 5G Systems

    Out of stock

    Book SynopsisExperience a guided tour of the key information-theoretic principles that underpin the design of next-generation cellular systems with this invaluable reference. Written by experts in the field, the text encompasses principled theoretical guidelines for the design and performance analysis of network architectures, coding and modulation schemes, and communication protocols. Presenting an extensive overview of the most important ideas and topics necessary for the development of future wireless systems, as well as providing a detailed introduction to network information theory, this is the perfect tool for researchers and graduate students in the fields of information theory and wireless communications, as well as for practitioners in the telecommunications industry.Trade Review'Information Theoretic Perspectives on 5G Systems and Beyond deftly guides the reader through the key information-theoretic principles that lay the foundations for next-generation cellular network design. The book's expansive coverage by world-renowned experts includes PHY-layer modulation and coding as well as network architectures and protocols. This timely book will be an indispensable reference for researchers and practitioners seeking to advance the state-of-the-art in cellular technology.' Andrea Goldsmith, Princeton University'Anyone interested in the field of wireless systems, at any level of experience, will benefit from this book. With helpful introductions and a broad range of topics, it is a highly accessible guide to state-of-the-art wireless systems research.' Tom Richardson, Qualcomm Inc.'Information Theory is at the heart of major breakthroughs in wireless communications in the last decade. This book brings a refreshing look at the field with important new paradigms that will undoubtedly have an impact in the development of beyond 5G systems.' Mérouane Debbah, CentraleSupélecTable of Contents1. Introduction Shlomo Shamai (Shitz), Osvaldo Simeone and Ivana Marić; 2. Information theory for cellular wireless networks Gerhard Kramer and Young-Han Kim; Part I. Architecture: 3. Device-to-device communication Ratheesh K. Mungara, Geordie George and Angel Lozano; 4. Multihop wireless backhaul for 5G Song-Nam Hong and Ivana Marić; 5. Edge caching Navid Naderi Alizadeh, Mohammad Ali Maddah-Ali and Salman Avestimehr; 6. Cloud and fog radio access networks Osvaldo Simeone, Ravi Tandon, Seok-Hwan Park and Shlomo Shamai (Shitz); 7. Communication with energy harvesting and remotely powered radios Ayfer Ozgur and Dor Shaviv; Part II. Coding and Modulation: 8. Polarization and polar coding Erdal Arikan; 9. Massive MIMO and beyond Thomas L. Marzetta, Erik G. Larsson and Thorkild B. Hansen; 10. Short-packet transmission Giuseppe Durisi, Gianluigi Liva and Yury Polyanskiy; 11. Information theoretic perspectives on non-orthogonal multiple access (NOMA) Peng Xu, Zhiguo Ding and H. Vincent Poor; 12. Compute-forward strategies for next-generation wireless systems Bobak Nazer, Michael Gastpar and Sung Hoon Lim; 13. Waveform design Paolo Banelli, Giulio Colavolpe, Luca Rugini and Alessandro Ugolini; Part III. Protocols: 14. Information-theoretic aspects of 5G protocols Cedomir Stefanović, Kasper F. Trillingsgaard and Petar Popovski; 15. Interference management in wireless networks: an information theoretic perspective Ravi Tandon and Aydin Sezgin; 16. Cooperative cellular communications Benjamin M. Zaidel, Michèle Wigger and Shlomo Shamai (Shitz); 17. 
Service delivery in 5G Jaime Llorca, Antonia Tulino and Giuseppe Caire; 18. A broadcast approach to fading channels under secrecy constraints Shaofeng Zou, Yingbin Liang, Lifeng Lai, H. Vincent Poor and Shlomo Shamai (Shitz); 19. Cognitive cooperation and state management: an information theoretic perspective Anelia Baruch, Yingbin Liang, Haim Permuter and Shlomo Shamai (Shitz).

    Out of stock

    £71.24
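
    As a back-of-the-envelope companion to the listing above (not an example from the book), the snippet below evaluates the Shannon AWGN capacity C = B*log2(1 + SNR) that underlies many of the surveyed analyses; the 20 MHz bandwidth and the SNR values are arbitrary.

    import numpy as np

    B = 20e6  # bandwidth in Hz (hypothetical 20 MHz carrier)
    for snr_db in (0, 10, 20, 30):
        snr = 10 ** (snr_db / 10)
        C = B * np.log2(1 + snr)          # AWGN channel capacity
        print(f"SNR = {snr_db:2d} dB -> capacity = {C / 1e6:6.1f} Mbit/s")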

  • Compressive Imaging Structure Sampling Learning

    Cambridge University Press Compressive Imaging Structure Sampling Learning

    1 in stock

    Book SynopsisAccurate, robust and fast image reconstruction is a critical task in many scientific, industrial and medical applications. Over the last decade, image reconstruction has been revolutionized by the rise of compressive imaging. It has fundamentally changed the way modern image reconstruction is performed. This in-depth treatment of the subject commences with a practical introduction to compressive imaging, supplemented with examples and downloadable code, intended for readers without extensive background in the subject. Next, it introduces core topics in compressive imaging including compressed sensing, wavelets and optimization in a concise yet rigorous way, before providing a detailed treatment of the mathematics of compressive imaging. The final part is devoted to recent trends in compressive imaging: deep learning and neural networks. With an eye to the next decade of imaging research, and using both empirical and mathematical insights, it examines the potential benefits and the piTable of Contents1. Introduction; Part I. The Essentials of Compressive Imaging: 2. Images, transforms and sampling; 3. A short guide to compressive imaging; 4. Techniques for enhancing performance; Part II. Compressed Sensing, Optimization and Wavelets: 5. An introduction to conventional compressed sensing; 6. The LASSO and its cousins; 7. Optimization for compressed sensing; 8. Analysis of optimization algorithms; 9. Wavelets; 10. A taste of wavelet approximation theory; Part III. Compressed Sensing with Local Structure: 11. From global to local; 12. Local structure and nonuniform recovery; 13. Local structure and uniform recovery; 14. Infinite-dimensional compressed sensing; Part IV. Compressed Sensing for Imaging: 15. Sampling strategies for compressive imaging; 16. Recovery guarantees for wavelet-based compressive imaging; 17. Total variation minimization; Part V. From Compressed Sensing to Deep Learning: 18. Neural networks and deep learning; 19. Deep learning for compressive imaging; 20. Accuracy and stability of deep learning for compressive imaging; 21. Stable and accurate neural networks for compressive imaging; 22. Epilogue; Appendices: A. Linear Algebra; B. Functional analysis; C. Probability; D. Convex analysis and convex optimization; E. Fourier transforms and series; F. Properties of Walsh functions and the Walsh transform; Notation; Abbreviations; References; Index.

    1 in stock

    £59.84
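
    A generic iterative soft-thresholding (ISTA) sketch for the LASSO problem min_x 0.5*||Ax - y||^2 + lam*||x||_1, one of the core optimization tools treated in the book above. The toy sensing matrix, sparsity level, and lam are illustrative choices, and the code is not the book's downloadable material.

    import numpy as np

    rng = np.random.default_rng(5)
    m, n, k = 60, 200, 8
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = 1.0
    y = A @ x_true + 0.01 * rng.standard_normal(m)

    lam = 0.05
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(500):
        grad = A.T @ (A @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

    print("recovered support:", np.nonzero(np.abs(x) > 0.1)[0])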

  • Bayesian Optimization

    Cambridge University Press Bayesian Optimization

    1 in stock

    Book SynopsisBayesian optimization is a methodology for optimizing expensive objective functions that has proven success in the sciences, engineering, and beyond. This timely text provides a self-contained and comprehensive introduction to the subject, starting from scratch and carefully developing all the key ideas along the way. This bottom-up approach illuminates unifying themes in the design of Bayesian optimization algorithms and builds a solid theoretical foundation for approaching novel situations. The core of the book is divided into three main parts, covering theoretical and practical aspects of Gaussian process modeling, the Bayesian approach to sequential decision making, and the realization and computation of practical and effective optimization policies. Following this foundational material, the book provides an overview of theoretical convergence results, a survey of notable extensions, a comprehensive history of Bayesian optimization, and an extensive annotated bibliography of applications.Table of ContentsNotation; 1. Introduction; 2. Gaussian processes; 3. Modeling with Gaussian processes; 4. Model assessment, selection, and averaging; 5. Decision theory for optimization; 6. Utility functions for optimization; 7. Common Bayesian optimization policies; 8. Computing policies with Gaussian processes; 9. Implementation; 10. Theoretical analysis; 11. Extensions and related settings; 12. A brief history of Bayesian optimization; A. The Gaussian distribution; B. Methods for approximate Bayesian inference; C. Gradients; D. Annotated bibliography of applications; References; Index.

    1 in stock

    £42.74
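
    A minimal Bayesian-optimization loop in the spirit of the book above: a Gaussian-process surrogate plus an expected-improvement policy, applied to a toy one-dimensional objective. Using scikit-learn's GaussianProcessRegressor is a convenience of this sketch, not a choice made by the book, and the objective, kernel, and evaluation budget are all hypothetical.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def objective(x):                                 # expensive black box (toy stand-in)
        return np.sin(3 * x) + 0.1 * x ** 2

    rng = np.random.default_rng(6)
    X = rng.uniform(-3, 3, 4).reshape(-1, 1)          # initial design
    y = objective(X).ravel()
    grid = np.linspace(-3, 3, 400).reshape(-1, 1)

    for _ in range(15):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X, y)
        mu, sigma = gp.predict(grid, return_std=True)
        best = y.min()
        z = (best - mu) / np.maximum(sigma, 1e-12)
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
        x_next = grid[np.argmax(ei)]                           # next point to evaluate
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next)[0])

    print("best x found:", X[np.argmin(y)].item(), "value:", y.min())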

  • Machine Learning Refined

    Cambridge University Press Machine Learning Refined

    3 in stock

    Book SynopsisWith its intuitive yet rigorous approach to machine learning, this text provides students with the fundamental knowledge and practical tools needed to conduct research and build data-driven products. The authors prioritize geometric intuition and algorithmic thinking, and include detail on all the essential mathematical prerequisites, to offer a fresh and accessible way to learn. Practical applications are emphasized, with examples from disciplines including computer vision, natural language processing, economics, neuroscience, recommender systems, physics, and biology. Over 300 color illustrations are included and have been meticulously designed to enable an intuitive grasp of technical concepts, and over 100 in-depth coding exercises (in Python) provide a real understanding of crucial machine learning algorithms. A suite of online resources including sample code, data sets, interactive lecture slides, and a solutions manual are provided online, making this an ideal text both for graduate courses on machine learning and for individual reference and self-study.Trade Review'An excellent book that treats the fundamentals of machine learning from basic principles to practical implementation. The book is suitable as a text for senior-level and first-year graduate courses in engineering and computer science. It is well organized and covers basic concepts and algorithms in mathematical optimization methods, linear learning, and nonlinear learning techniques. The book is nicely illustrated in multiple colors and contains numerous examples and coding exercises using Python.' John G. Proakis, University of California, San Diego'Some machine learning books cover only programming aspects, often relying on outdated software tools; some focus exclusively on neural networks; others, solely on theoretical foundations; and yet more books detail advanced topics for the specialist. This fully revised and expanded text provides a broad and accessible introduction to machine learning for engineering and computer science students. The presentation builds on first principles and geometric intuition, while offering real-world examples, commented implementations in Python, and computational exercises. I expect this book to become a key resource for students and researchers.' Osvaldo Simeone, King's College London'This book is great for getting started in machine learning. It builds up the tools of the trade from first principles, provides lots of examples, and explains one thing at a time at a steady pace. The level of detail and runnable code show what's really going on when we run a learning algorithm.' David Duvenaud, University of Toronto'This book covers various essential machine learning methods (e.g., regression, classification, clustering, dimensionality reduction, and deep learning) from a unified mathematical perspective of seeking the optimal model parameters that minimize a cost function. Every method is explained in a comprehensive, intuitive way, and mathematical understanding is aided and enhanced with many geometric illustrations and elegant Python implementations.' Kimiaki Shirahama, Kindai University, Japan'Books featuring machine learning are many, but those which are simple, intuitive, and yet theoretical are extraordinary 'outliers'. This book is a fantastic and easy way to launch yourself into the exciting world of machine learning, grasp its core concepts, and code them up in Python or Matlab. It was my inspiring guide in preparing my 'Machine Learning Blinks' on my BASIRA YouTube channel for both undergraduate and graduate levels.'
Islem Rekik, Director of the Brain And SIgnal Research and Analysis (BASIRA) Laboratory'This is a comprehensive textbook on the fundamental concepts of machine learning. In the second edition, the authors provide a very accessible introduction to the main ideas behind machine learning models.' Helena Mihaljević, zbMATHTable of Contents1. Introduction to machine learning; Part I. Mathematical Optimization: 2. Zero order optimization techniques; 3. First order methods; 4. Second order optimization techniques; Part II. Linear Learning: 5. Linear regression; 6. Linear two-class classification; 7. Linear multi-class classification; 8. Linear unsupervised learning; 9. Feature engineering and selection; Part III. Nonlinear Learning: 10. Principles of nonlinear feature engineering; 11. Principles of feature learning; 12. Kernel methods; 13. Fully-connected neural networks; 14. Tree-based learners; Part IV. Appendices: Appendix A. Advanced first and second order optimization methods; Appendix B. Derivatives and automatic differentiation; Appendix C. Linear algebra.

    3 in stock

    £55.09
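
    In the unifying spirit described above (fit model parameters by minimizing a cost function with first-order methods), here is a tiny, generic gradient-descent sketch for a least-squares cost. It is not one of the book's Python exercises, and the data and learning rate are arbitrary.

    import numpy as np

    rng = np.random.default_rng(7)
    X = np.column_stack([np.ones(100), rng.uniform(-1, 1, 100)])   # bias + one feature
    w_true = np.array([0.5, -2.0])
    y = X @ w_true + 0.1 * rng.standard_normal(100)

    w = np.zeros(2)
    lr = 0.1
    for _ in range(200):
        grad = X.T @ (X @ w - y) / len(y)      # gradient of the mean squared error
        w -= lr * grad

    print("estimated parameters:", w, " true parameters:", w_true)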

  • Digital Signal Processing with Kernel Methods

    John Wiley & Sons Inc Digital Signal Processing with Kernel Methods

    15 in stock

    Book SynopsisA realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors: hTable of ContentsAbout the Authors xiii Preface xvii Acknowledgements xxi List of Abbreviations xxiii Part I Fundamentals and Basic Elements 1 1 From Signal Processing to Machine Learning 3 1.1 A New Science is Born: Signal Processing 3 1.1.1 Signal Processing Before Being Coined 3 1.1.2 1948: Birth of the Information Age 4 1.1.3 1950s: Audio Engineering Catalyzes Signal Processing 4 1.2 From Analog to Digital Signal Processing 5 1.2.1 1960s: Digital Signal Processing Begins 5 1.2.2 1970s: Digital Signal Processing Becomes Popular 6 1.2.3 1980s: Silicon Meets Digital Signal Processing 6 1.3 Digital Signal Processing Meets Machine Learning 7 1.3.1 1990s: New Application Areas 7 1.3.2 1990s: Neural Networks, Fuzzy Logic, and Genetic Optimization 7 1.4 Recent Machine Learning in Digital Signal Processing 8 1.4.1 Traditional Signal Assumptions Are No Longer Valid 8 1.4.2 Encoding Prior Knowledge 8 1.4.3 Learning and Knowledge from Data 9 1.4.4 From Machine Learning to Digital Signal Processing 9 1.4.5 From Digital Signal Processing to Machine Learning 10 2 Introduction to Digital Signal Processing 13 2.1 Outline of the Signal Processing Field 13 2.1.1 Fundamentals on Signals and Systems 14 2.1.2 Digital Filtering 21 2.1.3 Spectral Analysis 24 2.1.4 Deconvolution 28 2.1.5 Interpolation 30 2.1.6 System Identification 31 2.1.7 Blind Source Separation 36 2.2.3 Sparsity, Compressed Sensing, and Dictionary Learning 44 2.3 Multidimensional Signals and Systems 48 2.3.1 Multidimensional Signals 49 2.3.2 Multidimensional Systems 51 2.4 Spectral Analysis on Manifolds 52 2.4.1 Theoretical Fundamentals 52 2.4.2 Laplacian Matrices 54 2.5 Tutorials and Application Examples 57 2.5.1 Real and Complex Signal Processing and Representations 57 2.5.2 Convolution, Fourier Transform, and Spectrum 63 2.5.3 Continuous-Time Signals and Systems 67 2.5.4 Filtering Cardiac Signals 70 2.5.5 Nonparametric Spectrum Estimation 74 2.5.6 Parametric Spectrum Estimation 77 2.5.7 Source Separation 81 2.5.8 Time–Frequency Representations and Wavelets 84 2.5.9 Examples for Spectral Analysis on Manifolds 87 2.6 Questions and Problems 94 3 Signal Processing Models 97 3.1 Introduction 97 3.2 Vector Spaces, Basis, and Signal Models 98 3.2.1 Basic Operations for Vectors 98 3.2.2 Vector Spaces 100 3.2.3 Hilbert Spaces 101 3.2.4 Signal Models 102 3.2.5 Complex Signal Models 104 3.2.6 Standard Noise Models in Digital Signal Processing 105 3.2.7 The Role of the Cost Function 107 3.2.8 The Role of the Regularizer 109 3.3 Digital Signal 
Processing Models 111 3.3.1 Sinusoidal Signal Models 112 3.3.2 System Identification Signal Models 113 3.3.3 Sinc Interpolation Models 116 3.3.4 Sparse Deconvolution 120 3.3.5 Array Processing 121 3.4 Tutorials and Application Examples 122 3.4.1 Examples of Noise Models 123 3.4.2 Autoregressive Exogenous System Identification Models 132 3.4.3 Nonlinear System Identification Using Volterra Models 138 3.4.4 Sinusoidal Signal Models 140 3.4.5 Sinc-based Interpolation 144 3.4.6 Sparse Deconvolution 152 3.4.7 Array Processing 157 3.5 Questions and Problems 160 3.A MATLABsimpleInterp Toolbox Structure 161 4 Kernel Functions and Reproducing Kernel Hilbert Spaces 165 4.1 Introduction 165 4.2 Kernel Functions and Mappings 169 4.2.1 Measuring Similarity with Kernels 169 4.2.2 Positive-Definite Kernels 169 4.2.3 Reproducing Kernel in Hilbert Space and Reproducing Property 170 4.2.4 Mercer’s Theorem 173 4.3 Kernel Properties 174 4.3.1 Tikhonov’s Regularization 175 4.3.2 Representer Theorem and Regularization Properties 176 4.3.3 Basic Operations with Kernels 178 4.4 Constructing Kernel Functions 179 4.4.1 Standard Kernels 179 4.4.2 Properties of Kernels 180 4.4.3 Engineering Signal Processing Kernels 181 4.5 Complex Reproducing Kernel in Hilbert Spaces 184 4.6 Support Vector Machine Elements for Regression and Estimation 186 4.6.1 Support Vector Regression Signal Model and Cost Function 186 4.6.2 Minimizing Functional 187 4.7 Tutorials and Application Examples 191 4.7.1 Kernel Calculations and Kernel Matrices 191 4.7.2 Basic Operations with Kernels 194 4.7.3 Constructing Kernels 197 4.7.4 Complex Kernels 199 4.7.5 Application Example for Support Vector Regression Elements 202 4.8 Concluding Remarks 205 4.9 Questions and Problems 205 Part II Function Approximation and Adaptive Filtering 209 5 A Support Vector Machine Signal Estimation Framework 211 5.1 Introduction 211 5.2 A Framework for Support Vector Machine Signal Estimation 213 5.3 Primal Signal Models for Support Vector Machine Signal Processing 216 5.3.1 Nonparametric Spectrum and System Identification 218 5.3.2 Orthogonal Frequency Division Multiplexing Digital Communications 220 5.3.3 Convolutional Signal Models 222 5.3.4 Array Processing 225 5.4 Tutorials and Application Examples 227 5.4.1 Nonparametric Spectral Analysis with Primal Signal Models 227 5.4.2 System Identification with Primal Signal Model ;;-filter 228 5.4.3 Parametric Spectral Density Estimation with Primal Signal Models 230 5.4.4 Temporal Reference Array Processing with Primal Signal Models 231 5.4.5 Sinc Interpolation with Primal Signal Models 233 6 Reproducing Kernel Hilbert Space Models for Signal Processing 241 6.1 Introduction 241 6.2 Reproducing Kernel Hilbert Space Signal Models 242 6.2.1 Kernel Autoregressive Exogenous Identification 244 6.2.2 Kernel Finite Impulse Response and the ;;-Filter 247 6.2.3 Kernel Array Processing with Spatial Reference 248 6.2.4 Kernel Semiparametric Regression 249 6.3 Tutorials and Application Examples 258 6.3.1 Nonlinear System Identification with Support Vector Machine–Autoregressive and Moving Average 258 6.3.2 Nonlinear System Identification with the ;;-filter 260 6.3.3 Electric Network Modeling with Semiparametric Regression 264 6.3.4 Promotional Data 272 6.3.5 Spatial and Temporal Antenna Array Kernel Processing 275 6.4 Questions and Problems 279 7 Dual Signal Models for Signal Processing 281 7.1 Introduction 281 7.2 Dual Signal Model Elements 281 7.3 Dual Signal Model Instantiations 283 7.3.1 Dual Signal Model for Nonuniform Signal 
Interpolation 283 7.3.2 Dual Signal Model for Sparse Signal Deconvolution 284 7.3.3 Spectrally Adapted Mercer Kernels 285 7.4 Tutorials and Application Examples 289 7.4.1 Nonuniform Interpolation with the Dual Signal Model 290 7.4.2 Sparse Deconvolution with the Dual Signal Model 292 7.4.3 Doppler Ultrasound Processing for Fault Detection 294 7.4.4 Spectrally Adapted Mercer Kernels 296 7.4.5 Interpolation of Heart Rate Variability Signals 304 7.4.6 Denoising in Cardiac Motion-Mode Doppler Ultrasound Images 309?m 7.4.7 Indoor Location from Mobile Devices Measurements 316 7.4.8 Electroanatomical Maps in Cardiac Navigation Systems 322 7.5 Questions and Problems 331 8 Advances in Kernel Regression and Function Approximation 333 8.1 Introduction 333 8.2 Kernel-Based Regression Methods 333 8.2.1 Advances in Support Vector Regression 334 8.2.2 Multi-output Support Vector Regression 338 8.2.3 Kernel Ridge Regression 339 8.2.4 Kernel Signal-To-Noise Regression 341 8.2.5 Semisupervised Support Vector Regression 343 8.2.6 Model Selection in Kernel Regression Methods 345 8.4.1 Comparing Support Vector Regression, Relevance Vector Machines, and Gaussian Process Regression 360 8.4.2 Profile-Dependent Support Vector Regression 362 8.4.3 Multi-output Support Vector Regression 364 8.4.4 Kernel Signal-to-Noise Ratio Regression 366 8.4.5 Semisupervised Support Vector Regression 368 8.4.6 Bayesian Nonparametric Model 369 8.4.7 Gaussian Process Regression 370 8.4.8 Relevance Vector Machines 379 8.5 Concluding Remarks 382 8.6 Questions and Problems 383 9 Adaptive Kernel Learning for Signal Processing 387 9.1 Introduction 387 9.2 Linear Adaptive Filtering 387 9.2.1 Least Mean Squares Algorithm 388 9.2.2 Recursive Least-Squares Algorithm 389 9.3 Kernel Adaptive Filtering 392 9.4 Kernel Least Mean Squares 392 9.4.1 Derivation of Kernel Least Mean Squares 393 9.4.2 Implementation Challenges and Dual Formulation 394 9.5.3 Prediction of the Mackey–Glass Time Series with Kernel Recursive Least Squares 401 9.5.4 Beyond the Stationary Model 402 9.5.5 Example on Nonlinear Channel Identification and Reconvergence 405 9.6 Explicit Recursivity for Adaptive Kernel Models 406 9.6.1 Recursivity in Hilbert Spaces 406 9.6.2 Recursive Filters in Reproducing Kernel Hilbert Spaces 408 9.7 Online Sparsification with Kernels 411 9.7.1 Sparsity by Construction 411 9.7.2 Sparsity by Pruning 413 9.8 Probabilistic Approaches to Kernel Adaptive Filtering 414 9.8.1 Gaussian Processes and Kernel Ridge Regression 415 9.8.2 Online Recursive Solution for Gaussian Processes Regression 416 9.8.3 Kernel Recursive Least Squares Tracker 417 9.8.4 Probabilistic Kernel Least Mean Squares 418 9.9 Further Reading 418 9.9.1 Selection of Kernel Parameters 418 9.9.2 Multi-Kernel Adaptive Filtering 419 9.9.3 Recursive Filtering in Kernel Hilbert Spaces 419 9.10 Tutorials and Application Examples 419 9.10.1 Kernel Adaptive Filtering Toolbox 420 9.10.2 Prediction of a Respiratory Motion Time Series 421 9.10.3 Online Regression on the KIN?h?eK Dataset 423 9.10.4 The Mackey–Glass Time Series 425 9.10.5 Explicit Recursivity on Reproducing Kernel in Hilbert Space and Electroencephalogram Prediction 427 9.10.6 Adaptive Antenna Array Processing 428 9.11 Questions and Problems 430 Part III Classification, Detection, and Feature Extraction 433 10 Support Vector Machine and Kernel Classification Algorithms 435 10.1 Introduction 435 10.2 Support Vector Machine and Kernel Classifiers 435 10.2.1 Support Vector Machines 435 10.2.2 Multiclass and Multilabel Support Vector 
Machines 441 10.2.3 Least-Squares Support Vector Machine 447 10.2.4 Kernel Fisher’s Discriminant Analysis 448 10.3 Advances in Kernel-Based Classification 452 10.3.1 Large Margin Filtering 452 10.3.2 Semisupervised Learning 454 10.3.3 Multiple Kernel Learning 460 10.3.4 Structured-Output Learning 462 10.3.5 Active Learning 468 10.4 Large-Scale Support Vector Machines 477 10.4.1 Large-Scale Support Vector Machine Implementations 477 10.4.2 Random Fourier Features 478 10.4.3 Parallel Support Vector Machine 480 10.4.4 Outlook 483 10.5 Tutorials and Application Examples 485 10.5.1 Examples of Support Vector Machine Classification 485 10.5.2 Example of Least-Squares Support Vector Machine 492 10.5.3 Kernel-Filtering Support Vector Machine for Brain–Computer Interface Signal Classification 493 10.5.4 Example of Laplacian Support Vector Machine 494 10.5.5 Example of Graph-Based Label Propagation 498 10.5.6 Examples of Multiple Kernel Learning 498 10.6 Concluding Remarks 501 10.7 Questions and Problems 502 11 Clustering and Anomaly Detection with Kernels 503 11.1 Introduction 503 11.2 Kernel Clustering 506 11.2.1 Kernelization of the Metric 506 11.2.2 Clustering in Feature Spaces 508 11.3 Domain Description Via Support Vectors 514 11.3.1 Support Vector Domain Description 514 11.3.2 One-Class Support Vector Machine 515 11.3.3 Relationship Between Support Vector Domain Description and Density Estimation 516 11.3.4 Semisupervised One-Class Classification 517 11.4 Kernel Matched Subspace Detectors 518 11.4.1 Kernel Orthogonal Subspace Projection 518 11.4.2 Kernel Spectral Angle Mapper 520 11.5 Kernel Anomaly Change Detection 522 11.5.1 Linear Anomaly Change Detection Algorithms 522 11.5.2 Kernel Anomaly Change Detection Algorithms 523 11.6 Hypothesis Testing with Kernels 525 11.6.1 Distribution Embeddings 526 11.6.3 Maximum Mean Discrepancy 527 11.6.3 One-Class Support Measure Machine 528 11.7 Tutorials and Application Examples 529 11.7.1 Example on Kernelization of the Metric 529 11.7.2 Example on Kernel k-Means 530 11.7.3 Domain Description Examples 531 11.7.4 Kernel Spectral Angle Mapper and Kernel Orthogonal Subspace Projection Examples 534 11.7.5 Example of Kernel Anomaly Change Detection Algorithms 536 11.7.6 Example on Distribution Embeddings and Maximum Mean Discrepancy 540 11.8 Concluding Remarks 541 11.9 Questions and Problems 542 12 Kernel Feature Extraction in Signal Processing 543 12.1 Introduction 543 12.2 Multivariate Analysis in Reproducing Kernel Hilbert Spaces 545 12.2.1 Problem Statement and Notation 545 12.2.2 Linear Multivariate Analysis 546 12.2.3 Kernel Multivariate Analysis 549 12.2.4 Multivariate Analysis Experiments 551 12.3 Feature Extraction with Kernel Dependence Estimates 555 12.3.1 Feature Extraction Using Hilbert–Schmidt Independence Criterion 556 12.3.2 Blind Source Separation Using Kernels 563 12.4 Extensions for Large-Scale and Semisupervised Problems 570 12.4.2 Efficiency with the Incomplete Cholesky Decomposition 570 12.4.3 Efficiency with Random Fourier Features 570 12.4.3 Sparse Kernel Feature Extraction 571 12.4.4 Semisupervised Kernel Feature Extraction 573 12.5 Domain Adaptation with Kernels 575 12.5.1 Kernel Mean Matching 578 12.5.2 Transfer Component Analysis 579 12.5.3 Kernel Manifold Alignment 581 12.5.4 Relations between Domain Adaptation Methods 585 12.5.5 Experimental Comparison between Domain Adaptation Methods 12.6 Concluding Remarks 587 12.7 Questions and Problems 588 References 589Index 631

    15 in stock

    £100.76
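
    A self-contained kernel ridge regression sketch with a Gaussian kernel, applied to denoising a noisy sinusoid, to give a flavour of the kernel-based signal estimation developed in the book above. The kernel width and regularization values are hypothetical, and the code is not the authors' MATLAB material.

    import numpy as np

    rng = np.random.default_rng(8)
    t = np.linspace(0, 1, 80)
    clean = np.sin(2 * np.pi * 3 * t)
    y = clean + 0.2 * rng.standard_normal(t.size)

    sigma, lam = 0.05, 1e-2
    K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * sigma ** 2))   # Gram matrix
    alpha = np.linalg.solve(K + lam * np.eye(t.size), y)               # dual coefficients
    y_hat = K @ alpha                                                  # denoised signal

    print("noisy MSE   :", np.mean((y - clean) ** 2))
    print("denoised MSE:", np.mean((y_hat - clean) ** 2))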

  • Financial Signal Processing and Machine Learning

    John Wiley & Sons Inc Financial Signal Processing and Machine Learning

    15 in stock

    Book SynopsisThe modern financial industry has been required to deal with large and diverse portfolios in a variety of asset classes often with limited market data available.Table of ContentsList of Contributors xiii Preface xv 1 Overview 1 Ali N. Akansu, Sanjeev R. Kulkarni, and Dmitry Malioutov 1.1 Introduction 1 1.2 A Bird’s-Eye View of Finance 2 1.2.1 Trading and Exchanges 4 1.2.2 Technical Themes in the Book 5 1.3 Overview of the Chapters 6 1.3.1 Chapter 2: “Sparse Markowitz Portfolios” by Christine De Mol 6 1.3.2 Chapter 3: “Mean-Reverting Portfolios: Tradeoffs between Sparsity and Volatility” by Marco Cuturi and Alexandre d’Aspremont 7 1.3.3 Chapter 4: “Temporal Causal Modeling” by Prabhanjan Kambadur, Aurélie C. Lozano, and Ronny Luss 7 1.3.4 Chapter 5: “Explicit Kernel and Sparsity of Eigen Subspace for the AR(1) Process” by Mustafa U. Torun, Onur Yilmaz and Ali N. Akansu 7 1.3.5 Chapter 6: “Approaches to High-Dimensional Covariance and Precision Matrix Estimation” by Jianqing Fan, Yuan Liao, and Han Liu 7 1.3.6 Chapter 7: “Stochastic Volatility: Modeling and Asymptotic Approaches to Option Pricing and Portfolio Selection” by Matthew Lorig and Ronnie Sircar 7 1.3.7 Chapter 8: “Statistical Measures of Dependence for Financial Data” by David S. Matteson, Nicholas A. James, and William B. Nicholson 8 1.3.8 Chapter 9: “Correlated Poisson Processes and Their Applications in Financial Modeling” by Alexander Kreinin 8 1.3.9 Chapter 10: “CVaR Minimizations in Support Vector Machines” by Junya Gotoh and Akiko Takeda 8 1.3.10 Chapter 11: “Regression Models in Risk Management” by Stan Uryasev 8 1.4 Other Topics in Financial Signal Processing and Machine Learning 9 References 9 2 Sparse Markowitz Portfolios 11 ChristineDeMol 2.1 Markowitz Portfolios 11 2.2 Portfolio Optimization as an Inverse Problem: The Need for Regularization 13 2.3 Sparse Portfolios 15 2.4 Empirical Validation 17 2.5 Variations on the Theme 18 2.5.1 Portfolio Rebalancing 18 2.5.2 Portfolio Replication or Index Tracking 19 2.5.3 Other Penalties and Portfolio Norms 19 2.6 Optimal Forecast Combination 20 Acknowlegments 21 References 21 3 Mean-Reverting Portfolios 23 Marco Cuturi and Alexandre d’Aspremont 3.1 Introduction 23 3.1.1 Synthetic Mean-Reverting Baskets 24 3.1.2 Mean-Reverting Baskets with Sufficient Volatility and Sparsity 24 3.2 Proxies for Mean Reversion 25 3.2.1 Related Work and Problem Setting 25 3.2.2 Predictability 26 3.2.3 Portmanteau Criterion 27 3.2.4 Crossing Statistics 28 3.3 Optimal Baskets 28 3.3.1 Minimizing Predictability 29 3.3.2 Minimizing the Portmanteau Statistic 29 3.3.3 Minimizing the Crossing Statistic 29 3.4 Semidefinite Relaxations and Sparse Components 30 3.4.1 A Semidefinite Programming Approach to Basket Estimation 30 3.4.2 Predictability 30 3.4.3 Portmanteau 31 3.4.4 Crossing Stats 31 3.5 Numerical Experiments 32 3.5.1 Historical Data 32 3.5.2 Mean-reverting Basket Estimators 33 3.5.3 Jurek and Yang (2007) Trading Strategy 33 3.5.4 Transaction Costs 33 3.5.5 Experimental Setup 36 3.5.6 Results 36 3.6 Conclusion 39 References 39 4 Temporal Causal Modeling 41 Prabhanjan Kambadur, Aurélie C. 
Lozano, and Ronny Luss 4.1 Introduction 41 4.2 TCM 46 4.2.1 Granger Causality and Temporal Causal Modeling 46 4.2.2 Grouped Temporal Causal Modeling Method 47 4.2.3 Synthetic Experiments 49 4.3 Causal Strength Modeling 51 4.4 Quantile TCM (Q-TCM) 52 4.4.1 Modifying Group OMP for Quantile Loss 52 4.4.2 Experiments 53 4.5 TCM with Regime Change Identification 55 4.5.1 Model 56 4.5.2 Algorithm 58 4.5.3 Synthetic Experiments 60 4.5.4 Application: Analyzing Stock Returns 62 4.6 Conclusions 63 References 64 5 Explicit Kernel and Sparsity of Eigen Subspace for the AR(1) Process 67 Mustafa U. Torun, Onur Yilmaz, and Ali N. Akansu 5.1 Introduction 67 5.2 Mathematical Definitions 68 5.2.1 Discrete AR(1) Stochastic Signal Model 68 5.2.2 Orthogonal Subspace 69 5.3 Derivation of Explicit KLT Kernel for a Discrete AR(1) Process 72 5.3.1 A Simple Method for Explicit Solution of a Transcendental Equation 73 5.3.2 Continuous Process with Exponential Autocorrelation 74 5.3.3 Eigenanalysis of a Discrete AR(1) Process 76 5.3.4 Fast Derivation of KLT Kernel for an AR(1) Process 79 5.4 Sparsity of Eigen Subspace 82 5.4.1 Overview of Sparsity Methods 83 5.4.2 pdf-Optimized Midtread Quantizer 84 5.4.3 Quantization of Eigen Subspace 86 5.4.4 pdf of Eigenvector 87 5.4.5 Sparse KLT Method 89 5.4.6 Sparsity Performance 91 5.5 Conclusions 97 References 97 6 Approaches to High-Dimensional Covariance and Precision Matrix Estimations 100 Jianqing Fan, Yuan Liao, and Han Liu 6.1 Introduction 100 6.2 Covariance Estimation via Factor Analysis 101 6.2.1 Known Factors 103 6.2.2 Unknown Factors 104 6.2.3 Choosing the Threshold 105 6.2.4 Asymptotic Results 105 6.2.5 A Numerical Illustration 107 6.3 Precision Matrix Estimation and Graphical Models 109 6.3.1 Column-wise Precision Matrix Estimation 110 6.3.2 The Need for Tuning-insensitive Procedures 111 6.3.3 TIGER: A Tuning-insensitive Approach for Optimal Precision Matrix Estimation 112 6.3.4 Computation 114 6.3.5 Theoretical Properties of TIGER 114 6.3.6 Applications to Modeling Stock Returns 115 6.3.7 Applications to Genomic Network 118 6.4 Financial Applications 119 6.4.1 Estimating Risks of Large Portfolios 119 6.4.2 Large Panel Test of Factor Pricing Models 121 6.5 Statistical Inference in Panel Data Models 126 6.5.1 Efficient Estimation in Pure Factor Models 126 6.5.2 Panel Data Model with Interactive Effects 127 6.5.3 Numerical Illustrations 130 6.6 Conclusions 131 References 131 7 Stochastic Volatility 135 Matthew Lorig and Ronnie Sircar 7.1 Introduction 135 7.1.1 Options and Implied Volatility 136 7.1.2 Volatility Modeling 137 7.2 Asymptotic Regimes and Approximations 141 7.2.1 Contract Asymptotics 142 7.2.2 Model Asymptotics 142 7.2.3 Implied Volatility Asymptotics 143 7.2.4 Tractable Models 145 7.2.5 Model Coefficient Polynomial Expansions 146 7.2.6 Small “Vol of Vol” Expansion 152 7.2.7 Separation of Timescales Approach 152 7.2.8 Comparison of the Expansion Schemes 154 7.3 Merton Problem with Stochastic Volatility: Model Coefficient Polynomial Expansions 155 7.3.1 Models and Dynamic Programming Equation 155 7.3.2 Asymptotic Approximation 157 7.3.3 Power Utility 159 7.4 Conclusions 160 Acknowledgements 160 References 160 8 Statistical Measures of Dependence for Financial Data 162 David S. Matteson, Nicholas A. James, and William B. 
Nicholson 8.1 Introduction 162 8.2 Robust Measures of Correlation and Autocorrelation 164 8.2.1 Transformations and Rank-Based Methods 166 8.2.2 Inference 169 8.2.3 Misspecification Testing 171 8.3 Multivariate Extensions 174 8.3.1 Multivariate Volatility 175 8.3.2 Multivariate Misspecification Testing 176 8.3.3 Granger Causality 176 8.3.4 Nonlinear Granger Causality 177 8.4 Copulas 179 8.4.1 Fitting Copula Models 180 8.4.2 Parametric Copulas 181 8.4.3 Extending beyond Two Random Variables 183 8.4.4 Software 185 8.5 Types of Dependence 185 8.5.1 Positive and Negative Dependence 185 8.5.2 Tail Dependence 187 References 188 9 Correlated Poisson Processes and Their Applications in Financial Modeling 191 Alexander Kreinin 9.1 Introduction 191 9.2 Poisson Processes and Financial Scenarios 193 9.2.1 Integrated Market–Credit Risk Modeling 193 9.2.2 Market Risk and Derivatives Pricing 194 9.2.3 Operational Risk Modeling 194 9.2.4 Correlation of Operational Events 195 9.3 Common Shock Model and Randomization of Intensities 196 9.3.1 Common Shock Model 196 9.3.2 Randomization of Intensities 196 9.4 Simulation of Poisson Processes 197 9.4.1 Forward Simulation 197 9.4.2 Backward Simulation 200 9.5 Extreme Joint Distribution 207 9.5.1 Reduction to Optimization Problem 207 9.5.2 Monotone Distributions 208 9.5.3 Computation of the Joint Distribution 214 9.5.4 On the Frechet–Hoeffding Theorem 215 9.5.5 Approximation of the Extreme Distributions 217 9.6 Numerical Results 219 9.6.1 Examples of the Support 219 9.6.2 Correlation Boundaries 221 9.7 Backward Simulation of the Poisson-Wiener Process 222 9.8 Concluding Remarks 227 Acknowledgments 228 Appendix A 229 A. 1 Proof of Lemmas 9.2 and 9.3 229 A.1.1 Proof of Lemma 9.2 229 A.1.2 Proof of Lemma 9.3 230 References 231 10 CVaR Minimizations in Support Vector Machines 233 Jun-ya Gotoh and Akiko Takeda 10.1 What Is CVaR? 234 10.1.1 Definition and Interpretations 234 10.1.2 Basic Properties of CVaR 238 10.1.3 Minimization of CVaR 240 10.2 Support Vector Machines 242 10.2.1 Classification 242 10.2.2 Regression 246 10.3 ν-SVMs as CVaR Minimizations 247 10.3.1 ν-SVMs as CVaR Minimizations with Homogeneous Loss 247 10.3.2 ν-SVMs as CVaR Minimizations with Nonhomogeneous Loss 251 10.3.3 Refining the ν-Property 253 10.4 Duality 256 10.4.1 Binary Classification 256 10.4.2 Geometric Interpretation of ν-SVM 257 10.4.3 Geometric Interpretation of the Range of ν for ν-SVC 258 10.4.4 Regression 259 10.4.5 One-class Classification and SVDD 259 10.5 Extensions to Robust Optimization Modelings 259 10.5.1 Distributionally Robust Formulation 259 10.5.2 Measurement-wise Robust Formulation 261 10.6 Literature Review 262 10.6.1 CVaR as a Risk Measure 263 10.6.2 From CVaR Minimization to SVM 263 10.6.3 From SVM to CVaR Minimization 263 10.6.4 Beyond CVaR 263 References 264 11 Regression Models in Risk Management 266 Stan Uryasev 11.1 Introduction 267 11.2 Error and Deviation Measures 268 11.3 Risk Envelopes and Risk Identifiers 271 11.3.1 Examples of Deviation Measures D, Corresponding Risk Envelopes Q, and Sets of Risk Identifiers QD(X) 272 11.4 Error Decomposition in Regression 273 11.5 Least-Squares Linear Regression 275 11.6 Median Regression 277 11.7 Quantile Regression and Mixed Quantile Regression 281 11.8 Special Types of Linear Regression 283 11.9 Robust Regression 284 References, Further Reading, and Bibliography 287 Index 289

    15 in stock

    £79.16
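
    A quick numerical aside on Chapter 5 of the listing above (the explicit KLT kernel for the AR(1) process): the short Python sketch below simply eigen-decomposes the Toeplitz correlation matrix R[k, l] = rho**|k - l| numerically rather than using the book's closed-form kernel. The order N = 8 and correlation coefficient rho = 0.9 are illustrative choices, not values taken from the book.

```python
import numpy as np

# Numerical KLT of a discrete AR(1) process: eigen-decompose the Toeplitz
# correlation matrix R[k, l] = rho**|k - l| (illustrative N and rho only).
N, rho = 8, 0.9
R = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))

eigvals, eigvecs = np.linalg.eigh(R)               # columns of eigvecs = KLT basis
order = np.argsort(eigvals)[::-1]                  # sort by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

rng = np.random.default_rng(0)
x = rng.multivariate_normal(np.zeros(N), R, size=2000)
y = x @ eigvecs                                     # decorrelated KLT coefficients
print(np.round(eigvals, 3))                         # theoretical coefficient variances
print(np.round(y.var(axis=0), 3))                   # sample variances, close to the above
```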

  • Digital Signal Processing Using the ARM Cortex M4

    John Wiley & Sons Inc Digital Signal Processing Using the ARM Cortex M4

    15 in stock

    Book SynopsisFeatures inexpensive ARM Cortex-M4 microcontroller development systems available from Texas Instruments and STMicroelectronics. This book presents a hands-on approach to teaching Digital Signal Processing (DSP) with real-time examples using the ARM Cortex-M4 32-bit microprocessor. Real-time examples using analog input and output signals are provided, giving visible (using an oscilloscope) and audible (using a speaker or headphones) results. Signal generators and/or audio sources, e.g. iPods, can be used to provide experimental input signals. The text also covers the fundamental concepts of digital signal processing such as analog-to-digital and digital-to-analog conversion, FIR and IIR filtering, Fourier transforms, and adaptive filtering. Digital Signal Processing Using the ARM Cortex-M4: Uses a large number of simple example programs illustrating DSP concepts in real-time, in an electrical engineering laboratory setting Includes exTable of ContentsPreface xi 1 ARM® CORTEX® - M4 Development Systems 1 1.1 Introduction 1 1.1.1 Audio Interfaces 2 1.1.2 Texas Instruments TM4C123 LaunchPad and STM32F407 Discovery Development Kits 2 1.1.3 Hardware and Software Tools 6 Reference 7 2 Analog Input and Output 9 2.1 Introduction 9 2.1.1 Sampling, Reconstruction, and Aliasing 9 2.2 TLV320AIC3104 (AIC3104) Stereo Codec for Audio Input and Output 10 2.3 WM5102 Audio Hub Codec for Audio Input and Output 12 2.4 Programming Examples 12 2.5 Real-Time Input and Output Using Polling, Interrupts, and Direct Memory Access (DMA) 12 2.5.1 I2S Emulation on the TM4C123 15 2.5.2 Program Operation 15 2.5.3 Running the Program 16 2.5.4 Changing the Input Connection to LINE IN 16 2.5.5 Changing the Sampling Frequency 16 2.5.6 Using the Digital MEMS Microphone on the Wolfson Audio Card 20 2.5.7 Running the Program 21 2.5.8 Running the Program 23 2.5.9 DMA in the TM4C123 Processor 26 2.5.10 Running the Program 30 2.5.11 Monitoring Program Execution 30 2.5.12 Measuring the Delay Introduced by DMA-Based I/O 30 2.5.13 DMA in the STM32F407 Processor 34 2.5.14 Running the Program 35 2.5.15 Measuring the Delay Introduced by DMA-Based I/O 35 2.5.16 Running the Program 46 2.6 Real-Time Waveform Generation 46 2.6.1 Running the Program 49 2.6.2 Out-of-Band Noise in the Output of the AIC3104 Codec (tm4c123_sine48_intr.c). 
49 2.6.3 Running the Program 53 2.6.4 Running the Program 62 2.6.5 Running the Program 69 2.7 Identifying the Frequency Response of the DAC Using Pseudorandom Noise 70 2.7.1 Programmable De-Emphasis in the AIC3104 Codec 72 2.7.2 Programmable Digital Effects Filters in the AIC3104 Codec 72 2.8 Aliasing 78 2.8.1 Running the Program 83 2.9 Identifying the Frequency Response of the DAC Using An Adaptive Filter 83 2.9.1 Running the Program 84 2.10 Analog Output Using the STM32F407’S 12-BIT DAC 91 References 96 3 Finite Impulse Response Filters 97 3.1 Introduction to Digital Filters 97 3.1.1 The FIR Filter 97 3.1.2 Introduction to the z-Transform 99 3.1.3 Definition of the z-Transform 100 3.1.4 Properties of the z-Transform 108 3.1.5 z-Transfer Functions 111 3.1.6 Mapping from the s-Plane to the z-Plane 111 3.1.7 Difference Equations 112 3.1.8 Frequency Response and the z-Transform 113 3.1.9 The Inverse z-Transform 114 3.2 Ideal Filter Response Classifications: LP, HP, BP, BS 114 3.2.1 Window Method of FIR Filter Design 114 3.2.2 Window Functions 116 3.2.3 Design of Ideal High-Pass Band-Pass and Band-Stop FIR Filters Using the Window Method 120 3.3 Programming Examples 123 3.3.1 Altering the Coefficients of the Moving Average Filter 132 3.3.2 Generating FIR Filter Coefficient Header Files Using MATLAB 137 4 Infinite Impulse Response Filters 163 4.1 Introduction 163 4.2 IIR Filter Structures 164 4.2.1 Direct Form I Structure 164 4.2.2 Direct Form II Structure 165 4.2.3 Direct Form II Transpose 166 4.2.4 Cascade Structure 168 4.2.5 Parallel Form Structure 169 4.3 Impulse Invariance 171 4.4 Bilinear Transformation 171 4.4.1 Bilinear Transform Design Procedure 172 4.5 Programming Examples 173 4.5.1 Design of a Simple IIR Low-Pass Filter 173 Reference 216 5 Fast Fourier Transform 217 5.1 Introduction 217 5.2 Development of the FFT Algorithm with RADIX-2 218 5.3 Decimation-in-Frequency FFT Algorithm with RADIX-2 219 5.4 Decimation-in-Time FFT Algorithm with RADIX-2 222 5.4.1 Reordered Sequences in the Radix-2 FFT and Bit-Reversed Addressing 224 5.5 Decimation-in-Frequency FFT Algorithm with RADIX-4 226 5.6 Inverse Fast Fourier Transform 227 5.7 Programming Examples 228 5.7.1 Twiddle Factors 233 5.8 Frame- or Block-Based Programming 239 5.8.1 Running the Program 242 5.8.2 Spectral Leakage 244 5.9 Fast Convolution 252 5.9.1 Running the Program 256 5.9.2 Execution Time of Fast Convolution Method of FIR Filter Implementation 256 Reference 261 6 Adaptive Filters 263 6.1 Introduction 263 6.2 Adaptive Filter Configurations 264 6.2.1 Adaptive Prediction 264 6.2.2 System Identification or Direct Modeling 265 6.2.3 Noise Cancellation 265 6.2.4 Equalization 266 6.3 Performance Function 267 6.3.1 Visualizing the Performance Function 269 6.4 Searching for the Minimum 270 6.5 Least Mean Squares Algorithm 270 6.5.1 LMS Variants 272 6.5.2 Normalized LMS Algorithm 272 6.6 Programming Examples 273 6.6.1 Using CMSIS DSP Function arm_lms_f32() 280 Index 299

    15 in stock

    £68.36
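
    To give a flavor of the block-based, real-time FIR filtering this book teaches on the Cortex-M4 (in C, with CMSIS-DSP functions in places), here is a rough host-side Python sketch of the same idea: filter fixed-size sample blocks while carrying filter state between blocks. The 48 kHz rate, 8-tap moving-average filter, and 48-sample block size are illustrative assumptions, not the book's examples.

```python
import numpy as np

# Host-side sketch of block-based FIR filtering with state carried between
# blocks, mirroring what a real-time loop on a microcontroller does block by block.
def fir_block(x_block, coeffs, state):
    """Filter one block; 'state' holds the last len(coeffs)-1 input samples."""
    x_ext = np.concatenate([state, x_block])
    y = np.convolve(x_ext, coeffs, mode="valid")     # len(y) == len(x_block)
    return y, x_ext[-(len(coeffs) - 1):]

fs = 48000.0                                          # assumed codec sampling rate (Hz)
h = np.ones(8) / 8                                    # 8-tap moving-average low-pass
t = np.arange(480) / fs
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 15000 * t)

state = np.zeros(len(h) - 1)
out = []
for block in x.reshape(-1, 48):                       # process 48-sample blocks
    y, state = fir_block(block, h, state)
    out.append(y)
print(np.concatenate(out).shape)                      # (480,) filtered samples
```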

  • FPGA-based Implementation of Signal Processing

    John Wiley & Sons Inc FPGA-based Implementation of Signal Processing

    15 in stock

    Book Synopsis: An important working resource for engineers and researchers involved in the design, development, and implementation of signal processing systems. The last decade has seen a rapid expansion of the use of field programmable gate arrays (FPGAs) for a wide range of applications beyond traditional digital signal processing (DSP) systems. Written by a team of experts working at the leading edge of FPGA research and development, this second edition of FPGA-based Implementation of Signal Processing Systems has been extensively updated and revised to reflect the latest iterations of FPGA theory, applications, and technology. Written from a system-level perspective, it features expert discussions of contemporary methods and tools used in the design, optimization and implementation of DSP systems using programmable FPGA hardware. And it provides a wealth of practical insights, along with illustrative case studies and timely real-world examples, of critical concern to engineers. Table of Contents: Preface xv List of Abbreviations xxi 1 Introduction to Field Programmable Gate Arrays 1 1.1 Introduction 1 1.2 Field Programmable Gate Arrays 2 1.3 Influence of Programmability 6 1.4 Challenges of FPGAs 8 Bibliography 9 2 DSP Basics 11 2.1 Introduction 11 2.2 Definition of DSP Systems 12 2.3 DSP Transformations 16 2.4 Filters 20 2.5 Adaptive Filtering 29 2.6 Final Comments 38 Bibliography 38 3 Arithmetic Basics 41 3.1 Introduction 41 3.2 Number Representations 42 3.3 Arithmetic Operations 47 3.4 Alternative Number Representations 55 3.5 Division 59 3.6 Square Root 60 3.7 Fixed-Point versus Floating-Point 64 3.8 Conclusions 66 Bibliography 67 4 Technology Review 70 4.1 Introduction 70 4.2 Implications of Technology Scaling 71 4.3 Architecture and Programmability 72 4.4 DSP Functionality Characteristics 74 4.5 Microprocessors 76 4.6 DSP Processors 82 4.7 Graphical Processing Units 86 4.8 System-on-Chip Solutions 88 4.9 Heterogeneous Computing Platforms 91 4.10 Conclusions 92 Bibliography 92 5 Current FPGA Technologies 94 5.1 Introduction 94 5.2 Toward FPGAs 95 5.3 Altera Stratix® V and 10 FPGA Family 98 5.4 Xilinx UltraScale™/Virtex-7 FPGA Families 103 5.5 Xilinx Zynq FPGA Family 107 5.6 Lattice iCE40isp FPGA Family 108 5.7 MicroSemi RTG4 FPGA Family 111 5.8 Design Strategies for FPGA-based DSP Systems 112 5.9 Conclusions 114 Bibliography 114 6 Detailed FPGA Implementation Techniques 116 6.1 Introduction 116 6.2 FPGA Functionality 117 6.3 Mapping to LUT-Based FPGA Technology 123 6.4 Fixed-Coefficient DSP 125 6.5 Distributed Arithmetic 130 6.6 Reduced-Coefficient Multiplier 133 6.7 Conclusions 137 Bibliography 138 7 Synthesis Tools for FPGAs 140 7.1 Introduction 140 7.2 High-Level Synthesis 141 7.3 Xilinx Vivado 143 7.4 Control Logic Extraction Phase Example 144 7.5 Altera SDK for OpenCL 145 7.6 Other HLS Tools 147 7.7 Conclusions 150 Bibliography 150 8 Architecture Derivation for FPGA-based DSP Systems 152 8.1 Introduction 152 8.2 DSP Algorithm Characteristics 153 8.3 DSP Algorithm Representations 157 8.4 Pipelining DSP Systems 160 8.5 Parallel Operation 170 8.6 Conclusions 178 Bibliography 179 9 Complex DSP Core Design for FPGA 180 9.1 Introduction 180 9.2 Motivation for Design for Reuse 181 9.3 Intellectual Property Cores 182 9.4 Evolution of IP Cores 184 9.5 Parameterizable (Soft) IP Cores 187 9.6 IP Core Integration 195 9.7 Current FPGA-based IP Cores 197 9.8 Watermarking IP 198 9.9 Summary 198 Bibliography 199 10 Advanced Model-Based FPGA Accelerator Design 200 10.1 Introduction 200 10.2
Dataflow Modeling of DSP Systems 201 10.3 Architectural Synthesis of Custom Circuit Accelerators from DFGs 204 10.4 Model-Based Development of Multi-Channel Dataflow Accelerators 205 10.5 Model-Based Development for Memory-Intensive Accelerators 219 10.6 Summary 223 References 223 11 Adaptive Beamformer Example 225 11.1 Introduction to Adaptive Beamforming 226 11.2 Generic Design Process 226 11.3 Algorithm to Architecture 231 11.4 Efficient Architecture Design 235 11.5 Generic QR Architecture 240 11.6 Retiming the Generic Architecture 246 11.7 Parameterizable QR Architecture 253 11.8 Generic Control 266 11.9 Beamformer Design Example 269 11.10 Summary 271 References 271 12 FPGA Solutions for Big Data Applications 273 12.1 Introduction 273 12.2 Big Data 274 12.3 Big Data Analytics 275 12.4 Acceleration 280 12.5 k-Means Clustering FPGA Implementation 283 12.6 FPGA-Based Soft Processors 286 12.7 System Hardware 290 12.8 Conclusions 293 Bibliography 293 13 Low-Power FPGA Implementation 296 13.1 Introduction 296 13.2 Sources of Power Consumption 297 13.3 FPGA Power Consumption 300 13.4 Power Consumption Reduction Techniques 302 13.5 Dynamic Voltage Scaling in FPGAs 303 13.6 Reduction in Switched Capacitance 305 13.7 Final Comments 316 Bibliography 317 14 Conclusions 319 14.1 Introduction 319 14.2 Evolution in FPGA Design Approaches 320 14.3 Big Data and the Shift toward Computing 320 14.4 Programming Flow for FPGAs 321 14.5 Support for Floating-Point Arithmetic 322 14.6 Memory Architectures 322 Bibliography 323 Index 325

    15 in stock

    £78.26
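
    As a loose illustration of the word-length issues covered in the arithmetic and fixed-coefficient chapters above, the Python sketch below models a Q15 fixed-point multiply-accumulate and compares it with floating point. It is a small software model for intuition only; the book itself targets FPGA hardware, and the coefficients and samples here are arbitrary.

```python
import numpy as np

# Bit-true-style sketch of a fixed-coefficient Q15 multiply-accumulate datapath
# (illustrative Python model only, not FPGA code).
def to_q15(x):
    return np.clip(np.round(np.asarray(x) * (1 << 15)), -32768, 32767).astype(np.int64)

coeffs = [0.25, 0.5, 0.25]                # fixed filter coefficients
samples = [0.1, -0.3, 0.7]                # current input window

c_q, x_q = to_q15(coeffs), to_q15(samples)
acc = np.sum(c_q * x_q)                   # wide accumulator, Q30 products
y_fixed = acc / float(1 << 30)            # back to a real value for comparison
y_float = float(np.dot(coeffs, samples))
print(y_fixed, y_float, abs(y_fixed - y_float))   # quantization error around 1e-5
```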

  • Multidimensional Signal and Color Image

    John Wiley & Sons Inc Multidimensional Signal and Color Image

    1 in stock

    Book SynopsisAn Innovative Approach to Multidimensional Signals and Systems Theory for Image and Video Processing In this volume, Eric Dubois further develops the theory of multi-D signal processing wherein input and output are vector-value signals. With this framework, he introduces the reader to crucial concepts in signal processing such as continuous- and discrete-domain signals and systems, discrete-domain periodic signals, sampling and reconstruction, light and color, random field models, image representation and more. While most treatments use normalized representations for non-rectangular sampling, this approach obscures much of the geometrical and scale information of the signal. In contrast, Dr. Dubois uses actual units of space-time and frequency. Basis-independent representations appear as much as possible, and the basis is introduced where needed to perform calculations or implementations. Thus, lattice theory is developed from the beginning and rectangular samplTable of ContentsAbout the Companion Website xiii 1 Introduction 1 2 Continuous-Domain Signals and Systems 5 2.1 Introduction 5 2.2 Multidimensional Signals 7 2.2.1 Zero–One Functions 7 2.2.2 Sinusoidal Signals 7 2.2.3 Real Exponential Functions 10 2.2.4 Zone Plate 10 2.2.5 Singularities 12 2.2.6 Separable and Isotropic Functions 13 2.3 Visualization of Two-Dimensional Signals 13 2.4 Signal Spaces and Systems 14 2.5 Continuous-Domain Linear Systems 15 2.5.1 Linear Systems 15 2.5.2 Linear Shift-Invariant Systems 19 2.5.3 Response of a Linear System 20 2.5.4 Response of a Linear Shift-Invariant System 20 2.5.5 Frequency Response of an LSI System 22 2.6 The Multidimensional Fourier Transform 22 2.6.1 Fourier Transform Properties 23 2.6.2 Evaluation of Multidimensional Fourier Transforms 27 2.6.3 Two-Dimensional Fourier Transform of Polygonal Zero–One Functions 30 2.6.4 Fourier Transform of a Translating Still Image 33 2.7 Further Properties of Differentiation and Related Systems 33 2.7.1 Directional Derivative 34 2.7.2 Laplacian 34 2.7.3 Filtered Derivative Systems 35 Problems 37 3 Discrete-Domain Signals and Systems 41 3.1 Introduction 41 3.2 Lattices 42 3.2.1 Basic Definitions 42 3.2.2 Properties of Lattices 44 3.2.3 Examples of 2D and 3D Lattices 44 3.3 Sampling Structures 46 3.4 Signals Defined on Lattices 47 3.5 Special Multidimensional Signals on a Lattice 48 3.5.1 Unit Sample 48 3.5.2 Sinusoidal Signals 49 3.6 Linear Systems Over Lattices 51 3.6.1 Response of a Linear System 51 3.6.2 Frequency Response 52 3.7 Discrete-Domain Fourier Transforms Over a Lattice 52 3.7.1 Definition of the Discrete-Domain Fourier Transform 52 3.7.2 Properties of the Multidimensional Fourier Transform Over a Lattice Λ 53 3.7.3 Evaluation of Forward and Inverse Discrete-Domain Fourier Transforms 57 3.8 Finite Impulse Response (FIR) Filters 59 3.8.1 Separable Filters 66 Problems 67 4 Discrete-Domain Periodic Signals 69 4.1 Introduction 69 4.2 Periodic Signals 69 4.3 Linear Shift-Invariant Systems 72 4.4 Discrete-Domain Periodic Fourier Transform 73 4.5 Properties of the Discrete-Domain Periodic Fourier Transform 77 4.6 Computation of the Discrete-Domain Periodic Fourier Transform 81 4.6.1 Direct Computation 81 4.6.2 Selection of Coset Representatives 82 4.7 Vector Space Representation of Images Based on the Discrete-Domain Periodic Fourier Transform 87 4.7.1 Vector Space Representation of Signals with Finite Extent 87 4.7.2 Block-Based Vector-Space Representation 88 Problems 90 5 Continuous-Domain Periodic Signals 93 5.1 Introduction 93 
5.2 Continuous-Domain Periodic Signals 93 5.3 Linear Shift-Invariant Systems 94 5.4 Continuous-Domain Periodic Fourier Transform 96 5.5 Properties of the Continuous-Domain Periodic Fourier Transform 96 5.6 Evaluation of the Continuous-Domain Periodic Fourier Transform 100 Problems 105 6 Sampling, Reconstruction and Sampling Theorems for Multidimensional Signals 107 6.1 Introduction 107 6.2 Ideal Sampling and Reconstruction of Continuous-Domain Signals 107 6.3 Practical Sampling 110 6.4 Practical Reconstruction 112 6.5 Sampling and Periodization of Multidimensional Signals and Transforms 113 6.6 Inverse Fourier Transforms 116 6.6.1 Inverse Discrete-Domain Aperiodic Fourier Transform 117 6.6.2 Inverse Continuous-Domain Periodic Fourier Transform 118 6.6.3 Inverse Continuous-Domain Fourier Transform 119 6.7 Signals and Transforms with Finite Support 119 6.7.1 Continuous-Domain Signals with Finite Support 119 6.7.2 Discrete-Domain Aperiodic Signals with Finite Support 120 6.7.3 Band-Limited Continuous-Domain Γ-Periodic Signals 121 Problems 121 7 Light and Color Representation in Imaging Systems 125 7.1 Introduction 125 7.2 Light 125 7.3 The Space of Light Stimuli 128 7.4 The Color Vector Space 129 7.4.1 Properties of Metamerism 130 7.4.2 Algebraic Condition for Metameric Equivalence 132 7.4.3 Extension of Metameric Equivalence to A 135 7.4.4 Definition of the Color Vector Space 135 7.4.5 Bases for the Vector Space C 137 7.4.6 Transformation of Primaries 138 7.4.7 The CIE Standard Observer 140 7.4.8 Specification of Primaries 142 7.4.9 Physically Realizable Colors 144 7.5 Color Coordinate Systems 147 7.5.1 Introduction 147 7.5.2 Luminance and Chromaticity 147 7.5.3 Linear Color Representations 153 7.5.4 Perceptually Uniform Color Coordinates 155 7.5.5 Display Referred Coordinates 157 7.5.6 Luma-Color-Difference Representation 158 Problems 158 8 Processing of Color Signals 163 8.1 Introduction 163 8.2 Continuous-Domain Systems for Color Images 163 8.2.1 Continuous-Domain Color Signals 163 8.2.2 Continuous-Domain Systems for Color Signals 166 8.2.3 Frequency Response and Fourier Transform 168 8.3 Discrete-Domain Color Images 173 8.3.1 Color Signals With All Components on a Single Lattice 173 8.3.1.1 Sampling a Continuous-Domain Color Signal Using a Single Lattice 175 8.3.1.2 S-CIELAB Error Criterion 175 8.3.2 Color Signals With Different Components on Different Sampling Structures 180 8.4 Color Mosaic Displays 188 9 Random Field Models 193 9.1 Introduction 193 9.2 What is a Random Field? 194 9.3 Image Moments 195 9.3.1 Mean, Autocorrelation, Autocovariance 195 9.3.2 Properties of the Autocorrelation Function 198 9.3.3 Cross-Correlation 199 9.4 Power Density Spectrum 199 9.4.1 Properties of the Power Density Spectrum 200 9.4.2 Cross Spectrum 201 9.4.3 Spectral Density Matrix 201 9.5 Filtering and Sampling of WSS Random Fields 202 9.5.1 LSI Filtering of a Scalar WSS Random Field 202 9.5.2 Why is Sf(u) Called a Power Density Spectrum? 
204 9.5.3 LSI Filtering of a WSS Color Random Field 205 9.5.4 Sampling of a WSS Continuous-Domain Random Field 206 9.6 Estimation of the Spectral Density Matrix 207 Problems 214 10 Analysis and Design of Multidimensional FIR Filters 215 10.1 Introduction 215 10.2 Moving Average Filters 215 10.3 Gaussian Filters 217 10.4 Band-pass and Band-stop Filters 220 10.5 Frequency-Domain Design of Multidimensional FIR Filters 225 10.5.1 FIR Filter Design Using Windows 226 10.5.2 FIR Filter Design Using Least-pth Optimization 229 Problems 236 11 Changing the Sampling Structure of an Image 237 11.1 Introduction 237 11.2 Sublattices 237 11.3 Upsampling 239 11.4 Downsampling 245 11.5 Arbitrary Sampling Structure Conversion 248 11.5.1 Sampling Structure Conversion Using a Common Superlattice 248 11.5.2 Polynomial Interpolation 251 Problems 254 12 Symmetry Invariant Signals and Systems 255 12.1 LSI Systems Invariant to a Group of Symmetries 255 12.1.1 Symmetries of a Lattice 255 12.1.2 Symmetry-Group Invariant Systems 258 12.1.3 Spaces of Symmetric Signals 261 12.2 Symmetry-Invariant Discrete-Domain Periodic Signals and Systems 269 12.2.1 Symmetric Discrete-Domain Periodic Signals 270 12.2.2 Discrete-Domain Periodic Symmetry-Invariant Systems 271 12.2.3 Discrete-Domain Symmetry-Invariant Periodic Fourier Transform 273 12.3 Vector-Space Representation of Images Based on the Symmetry-Invariant Periodic Fourier Transform 282 13 Lattices 289 13.1 Introduction 289 13.2 Basic Definitions 289 13.3 Properties of Lattices 293 13.4 Reciprocal Lattice 294 13.5 Sublattices 295 13.6 Cosets and the Quotient Group 296 13.7 Basis Transformations 298 13.7.1 Elementary Column Operations 299 13.7.2 Hermite Normal Form 300 13.8 Smith Normal Form 302 13.9 Intersection and Sum of Lattices 304 Appendix A: Equivalence Relations 311 Appendix B: Groups 313 Appendix C: Vector Spaces 315 Appendix D: Multidimensional Fourier Transform Properties 319 References 323 Index 329

    1 in stock

    £98.96
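
    A small, hedged sketch of the lattice bookkeeping that underpins this text: sample sites as integer combinations of the columns of a sampling matrix V, and the reciprocal (frequency-domain) lattice basis U. The hexagonal-type V below and the 2*pi frequency convention are illustrative assumptions, not worked examples from the book.

```python
import numpy as np

# Sample points are integer combinations of the columns of V; the reciprocal
# lattice basis U satisfies U.T @ V = 2*pi*I (radian-frequency convention assumed).
V = np.array([[1.0, 0.5],
              [0.0, np.sqrt(3) / 2]])         # columns = spatial basis vectors

n = np.array([[i, j] for i in range(-2, 3) for j in range(-2, 3)]).T
points = V @ n                                 # a small patch of lattice sites

U = 2 * np.pi * np.linalg.inv(V).T             # reciprocal lattice basis
print(np.round(points[:, :3], 3))              # first few lattice sites
print(np.round(U.T @ V, 6))                    # 2*pi on the diagonal, 0 elsewhere
print("unit cell area:", abs(np.linalg.det(V)))
```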

  • Discrete Fourier Analysis and Wavelets

    John Wiley & Sons Inc Discrete Fourier Analysis and Wavelets

    2 in stock

    Book SynopsisDelivers an appropriate mix of theory and applications to help readers understand the process and problems of image and signal analysis Maintaining a comprehensive and accessible treatment of the concepts, methods, and applications of signal and image data transformation, this Second Edition of Discrete Fourier Analysis and Wavelets: Applications to Signal and Image Processing features updated and revised coverage throughout with an emphasis on key and recent developments in the field of signal and image processing. Topical coverage includes: vector spaces, signals, and images; the discrete Fourier transform; the discrete cosine transform; convolution and filtering; windowing and localization; spectrograms; frames; filter banks; lifting schemes; and wavelets. Discrete Fourier Analysis and Wavelets introduces a new chapter on framesa new technology in which signals, images, and other data are redundantly measured. This redundancy allows for more sopTable of ContentsPreface xvii Acknowledgments xxi 1 Vector Spaces, Signals, and Images 1 1.1 Overview 1 1.2 Some Common Image Processing Problems 1 1.2.1 Applications 2 1.2.1.1 Compression 2 1.2.1.2 Restoration 2 1.2.1.3 Edge Detection 3 1.2.1.4 Registration 3 1.2.2 Transform-Based Methods 3 1.3 Signals and Images 3 1.3.1 Signals 4 1.3.2 Sampling, Quantization Error, and Noise 5 1.3.3 Grayscale Images 6 1.3.4 Sampling Images 8 1.3.5 Color 9 1.3.6 Quantization and Noise for Images 9 1.4 Vector Space Models for Signals and Images 10 1.4.1 Examples—Discrete Spaces 11 1.4.2 Examples—Function Spaces 14 1.5 Basic Waveforms—The Analog Case 16 1.5.1 The One-Dimensional Waveforms 16 1.5.2 2D Basic Waveforms 19 1.6 Sampling and Aliasing 20 1.6.1 Introduction 20 1.6.2 Aliasing for Complex Exponential Waveforms 22 1.6.3 Aliasing for Sines and Cosines 23 1.6.4 The Nyquist Sampling Rate 24 1.6.5 Aliasing in Images 24 1.7 Basic Waveforms—The Discrete Case 25 1.7.1 Discrete Basic Waveforms for Finite Signals 25 1.7.2 Discrete Basic Waveforms for Images 27 1.8 Inner Product Spaces and Orthogonality 28 1.8.1 Inner Products and Norms 28 1.8.1.1 Inner Products 28 1.8.1.2 Norms 29 1.8.2 Examples 30 1.8.3 Orthogonality 33 1.8.4 The Cauchy–Schwarz Inequality 34 1.8.5 Bases and Orthogonal Decomposition 35 1.8.5.1 Bases 35 1.8.5.2 Orthogonal and Orthonormal Bases 37 1.8.5.3 Parseval’s Identity 39 1.9 Signal and Image Digitization 39 1.9.1 Quantization and Dequantization 40 1.9.1.1 The General Quantization Scheme 41 1.9.1.2 Dequantization 42 1.9.1.3 Measuring Error 42 1.9.2 Quantifying Signal and Image Distortion More Generally 43 1.10 Infinite-Dimensional Inner Product Spaces 45 1.10.1 Example: An Infinite-Dimensional Space 45 1.10.2 Orthogonal Bases in Inner Product Spaces 46 1.10.3 The Cauchy–Schwarz Inequality and Orthogonal Expansions 48 1.10.4 The Basic Waveforms and Fourier Series 49 1.10.4.1 Complex Exponential Fourier Series 49 1.10.4.2 Sines and Cosines 52 1.10.4.3 Fourier Series on Rectangles 53 1.10.5 Hilbert Spaces and L2(a, b ) 53 1.10.5.1 Expanding the Space of Functions 53 1.10.5.2 Complications 54 1.10.5.3 A Converse to Parseval 55 1.11 Matlab Project 55 Exercises 60 2 The Discrete Fourier Transform 71 2.1 Overview 71 2.2 The Time Domain and Frequency Domain 71 2.3 A Motivational Example 73 2.3.1 A Simple Signal 73 2.3.2 Decomposition into BasicWaveforms 74 2.3.3 Energy at Each Frequency 74 2.3.4 Graphing the Results 75 2.3.5 Removing Noise 77 2.4 The One-Dimensional DFT 78 2.4.1 Definition of the DFT 78 2.4.2 Sample Signal and DFT Pairs 
80 2.4.2.1 An Aliasing Example 80 2.4.2.2 Square Pulses 81 2.4.2.3 Noise 82 2.4.3 Suggestions on Plotting DFTs 84 2.4.4 An Audio Example 84 2.5 Properties of the DFT 85 2.5.1 Matrix Formulation and Linearity 85 2.5.1.1 The DFT as a Matrix 85 2.5.1.2 The Inverse DFT as a Matrix 87 2.5.2 Symmetries for Real Signals 88 2.6 The Fast Fourier transform 90 2.6.1 DFT Operation Count 90 2.6.2 The FFT 91 2.6.3 The Operation Count 92 2.7 The Two-Dimensional DFT 93 2.7.1 Interpretation and Examples of the 2-D DFT 96 2.8 Matlab Project 97 2.8.1 Audio Explorations 97 2.8.2 Images 99 Exercises 101 3 The Discrete Cosine Transform 105 3.1 Motivation for the DCT—Compression 105 3.2 Other Compression Issues 106 3.3 Initial Examples—Thresholding 107 3.3.1 Compression Example 1: A Smooth Function 108 3.3.2 Compression Example 2: A Discontinuity 109 3.3.3 Compression Example 3 110 3.3.4 Observations 112 3.4 The Discrete Cosine Transform 112 3.4.1 DFT Compression Drawbacks 112 3.4.2 The Discrete Cosine Transform 113 3.4.2.1 Symmetric Reflection 113 3.4.2.2 DFT of the Extension 113 3.4.2.3 DCT/IDCT Derivation 114 3.4.2.4 Definition of the DCT and IDCT 115 3.4.3 Matrix Formulation of the DCT 116 3.5 Properties of the DCT 116 3.5.1 BasicWaveforms for the DCT 116 3.5.2 The Frequency Domain for the DCT 117 3.5.3 DCT and Compression Examples 117 3.6 The Two-Dimensional DCT 120 3.7 Block Transforms 121 3.8 JPEG Compression 123 3.8.1 Overall Outline 123 3.8.2 DCT and Quantization Details 124 3.8.3 The JPEG Dog 128 3.8.4 Sequential versus Progressive Encoding 128 3.9 Matlab Project 131 Exercises 134 4 Convolution and Filtering 139 4.1 Overview 139 4.2 One-Dimensional Convolution 139 4.2.1 Example: Low-Pass Filtering and Noise Removal 139 4.2.2 Convolution 142 4.2.2.1 Convolution Definition 142 4.2.2.2 Convolution Properties 143 4.3 Convolution Theorem and Filtering 146 4.3.1 The Convolution Theorem 146 4.3.2 Filtering and Frequency Response 147 4.3.2.1 Filtering Effect on BasicWaveforms 147 4.3.3 Filter Design 150 4.4 2D Convolution—Filtering Images 152 4.4.1 Two-Dimensional Filtering and Frequency Response 152 4.4.2 Applications of 2D Convolution and Filtering 153 4.4.2.1 Noise Removal and Blurring 153 4.4.2.2 Edge Detection 154 4.5 Infinite and Bi-Infinite Signal Models 156 4.5.1 L2(ℕ) and L2(ℤ) 158 4.5.1.1 The Inner Product Space L2(ℕ) 158 4.5.1.2 The Inner Product Space L2(ℤ) 159 4.5.2 Fourier Analysis in L2(ℤ) and L2(ℕ) 160 4.5.2.1 The Discrete Time Fourier Transform in L2(ℤ) 160 4.5.2.2 Aliasing and the Nyquist Frequency in L2(ℤ) 161 4.5.2.3 The Fourier Transform on L2(ℕ)) 163 4.5.3 Convolution and Filtering in L2(ℤ) and L2(ℕ) 163 4.5.3.1 The Convolution Theorem 164 4.5.4 The z-Transform 166 4.5.4.1 Two Points of View 166 4.5.4.2 Algebra of z-Transforms; Convolution 167 4.5.5 Convolution in ℂN versus L2(ℤ) 168 4.5.5.1 Some Notation 168 4.5.5.2 Circular Convolution and z-Transforms 169 4.5.5.3 Convolution in ℂN from Convolution in L2(ℤ) 170 4.5.6 Some Filter Terminology 171 4.5.7 The Space L2(ℤ × ℤ) 172 4.6 Matlab Project 172 4.6.1 Basic Convolution and Filtering 172 4.6.2 Audio Signals and Noise Removal 174 4.6.3 Filtering Images 175 Exercises 176 5 Windowing and Localization 185 5.1 Overview: Nonlocality of the DFT 185 5.2 Localization via Windowing 187 5.2.1 Windowing 187 5.2.2 Analysis of Windowing 188 5.2.2.1 Step 1: Relation of X and Y 189 5.2.2.2 Step 2: Effect of Index Shift 190 5.2.2.3 Step 3: N-Point versus M-Point DFT 191 5.2.3 Spectrograms 192 5.2.4 Other Types of Windows 196 5.3 Matlab Project 198 5.3.1 
Windows 198 5.3.2 Spectrograms 199 Exercises 200 6 Frames 205 6.1 Introduction 205 6.2 Packet Loss 205 6.3 Frames—Using more Dot Products 208 6.4 Analysis and Synthesis with Frames 211 6.4.1 Analysis and Synthesis 211 6.4.2 Dual Frame and Perfect Reconstruction 213 6.4.3 Partial Reconstruction 214 6.4.4 Other Dual Frames 215 6.4.5 Numerical Concerns 216 6.4.5.1 Condition Number of a Matrix 217 6.5 Initial Examples of Frames 218 6.5.1 Circular Frames in ℝ2 218 6.5.2 Extended DFT Frames and Harmonic Frames 219 6.5.3 Canonical Tight Frame 221 6.5.4 Frames for Images 222 6.6 More on the Frame Operator 222 6.7 Group-Based Frames 225 6.7.1 Unitary Matrix Groups and Frames 225 6.7.2 Initial Examples of Group Frames 228 6.7.2.1 Platonic Frames 228 6.7.2.2 Symmetric Group Frames 230 6.7.2.3 Harmonic Frames 232 6.7.3 Gabor Frames 232 6.7.3.1 Flipped Gabor Frame 237 6.8 Frame Applications 237 6.8.1 Packet Loss 239 6.8.2 Redundancy and other duals 240 6.8.3 Spectrogram 241 6.9 Matlab Project 242 6.9.1 Frames and Frame Operator 243 6.9.2 Analysis and Synthesis 245 6.9.3 Condition Number 246 6.9.4 Packet Loss 246 6.9.5 Gabor Frames 246 Exercises 247 7 Filter Banks 251 7.1 Overview 251 7.2 The Haar Filter Bank 252 7.2.1 The One-Stage Two-Channel Filter Bank 252 7.2.2 Inverting the One-stage Transform 256 7.2.3 Summary of Filter Bank Operation 257 7.3 The General One-stage Two-channel Filter Bank 260 7.3.1 Formulation for Arbitrary FIR Filters 260 7.3.2 Perfect Reconstruction 261 7.3.3 Orthogonal Filter Banks 263 7.4 Multistage Filter Banks 264 7.5 Filter Banks for Finite Length Signals 267 7.5.1 Extension Strategy 267 7.5.2 Analysis of Periodic Extension 269 7.5.2.1 Adapting the Analysis Transform to Finite Length 270 7.5.2.2 Adapting the Synthesis Transform to Finite Length 272 7.5.2.3 Other Extensions 274 7.5.3 Matrix Formulation of the Periodic Case 274 7.5.4 Multistage Transforms 275 7.5.4.1 Iterating the One-stage Transform 275 7.5.4.2 Matrix Formulation of Multistage Transform 277 7.5.4.3 Reconstruction from Approximation Coefficients 278 7.5.5 Matlab Implementation of Discrete Wavelet Transforms 281 7.6 The 2D Discrete Wavelet Transform and JPEG 2000 281 7.6.1 Two-dimensional Transforms 281 7.6.2 Multistage Transforms for Two-dimensional Images 282 7.6.3 Approximations and Details for Images 286 7.6.4 JPEG 2000 288 7.7 Filter Design 289 7.7.1 Filter Banks in the z-domain 290 7.7.1.1 Downsampling and Upsampling in the z-domain 290 7.7.1.2 Filtering in the Frequency Domain 290 7.7.2 Perfect Reconstruction in the z-frequency Domain 290 7.7.3 Filter Design I: Synthesis from Analysis 292 7.7.4 Filter Design II: Product Filters 295 7.7.5 Filter Design III: More Product Filters 297 7.7.6 Orthogonal Filter Banks 299 7.7.6.1 Design Equations for an Orthogonal Bank 299 7.7.6.2 The Product Filter in the Orthogonal Case 300 7.7.6.3 Restrictions on P(z); Spectral Factorization 301 7.7.6.4 Daubechies Filters 301 7.8 Matlab Project 303 7.8.1 Basics 303 7.8.2 Audio Signals 304 7.8.3 Images 305 7.9 Alternate Matlab Project 306 7.9.1 Basics 306 7.9.2 Audio Signals 307 7.9.3 Images 307 Exercises 309 8 Lifting for Filter Banks and Wavelets 319 8.1 Overview 319 8.2 Lifting for the Haar Filter Bank 319 8.2.1 The Polyphase Analysis 320 8.2.2 Inverting the Polyphase Haar Transform 321 8.2.3 Lifting Decomposition for the Haar Transform 322 8.2.4 Inverting the Lifted Haar Transform 324 8.3 The Lifting Theorem 324 8.3.1 A Few Facts About Laurent Polynomials 325 8.3.1.1 The Width of a Laurent Polynomial 325 8.3.1.2 The 
Division Algorithm 325 8.3.2 The Lifting Theorem 326 8.4 Polyphase Analysis for Filter Banks 330 8.4.1 The Polyphase Decomposition and Convolution 331 8.4.2 The Polyphase Analysis Matrix 333 8.4.3 Inverting the Transform 334 8.4.4 Orthogonal Filters 338 8.5 Lifting 339 8.5.1 Relation Between the Polyphase Matrices 339 8.5.2 Factoring the Le Gall 5/3 Polyphase Matrix 341 8.5.3 Factoring the Haar Polyphase Matrix 343 8.5.4 Efficiency 345 8.5.5 Lifting to Design Transforms 346 8.6 Matlab Project 351 8.6.1 Laurent Polynomials 351 8.6.2 Lifting for CDF(2,2) 354 8.6.3 Lifting the D4 Filter Bank 356 Exercises 356 9 Wavelets 361 9.1 Overview 361 9.1.1 Chapter Outline 361 9.1.2 Continuous from Discrete 361 9.2 The Haar Basis 363 9.2.1 Haar Functions as a Basis for L2(0, 1) 364 9.2.1.1 Haar Function Definition and Graphs 364 9.2.1.2 Orthogonality 367 9.2.1.3 Completeness in L2(0, 1) 368 9.2.2 Haar Functions as an Orthonormal Basis for L2(ℝ) 372 9.2.3 Projections and Approximations 374 9.3 Haar Wavelets Versus the Haar Filter Bank 376 9.3.1 Single-stage Case 377 9.3.1.1 Functions from Sequences 377 9.3.1.2 Filter Bank Analysis/Synthesis 377 9.3.1.3 Haar Expansion and Filter Bank Parallels 378 9.3.2 Multistage Haar Filter Bank and Multiresolution 380 9.3.2.1 Some Subspaces and Bases 381 9.3.2.2 Multiresolution and Orthogonal Decomposition 381 9.3.2.3 Direct Sums 382 9.3.2.4 Connection to Multistage Haar Filter Banks 384 9.4 Orthogonal Wavelets 386 9.4.1 Essential Ingredients 386 9.4.2 Constructing a Multiresolution Analysis: The Dilation Equation 387 9.4.3 Connection to Orthogonal Filters 389 9.4.4 Computing the Scaling Function 390 9.4.5 Scaling Function Existence and Properties 394 9.4.5.1 Fixed Point Iteration and the Cascade Algorithm 394 9.4.5.2 Existence of the Scaling Function 395 9.4.5.3 The Support of the Scaling Function 397 9.4.5.4 Back to Multiresolution 399 9.4.6 Wavelets 399 9.4.7 Wavelets and the Multiresolution Analysis 404 9.4.7.1 Final Remarks on Orthogonal Wavelets 406 9.5 Biorthogonal Wavelets 407 9.5.1 Biorthogonal Scaling Functions 408 9.5.2 Biorthogonal Wavelets 409 9.5.3 Decomposition of L2(ℝ) 409 9.6 Matlab Project 411 9.6.1 Orthogonal Wavelets 411 9.6.2 Biorthogonal Wavelets 414 Exercises 414 Bibliography 421 Appendix: Solutions to Exercises 423 Index 439

    2 in stock

    £89.96
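
    The synopsis mentions frames, in which data are measured redundantly. As a toy illustration (not an example from the book), the Python sketch below uses three unit vectors in the plane as a tight frame, reconstructs a vector exactly through the frame operator, and shows that the redundancy still allows recovery after one coefficient is discarded.

```python
import numpy as np

# Three unit vectors at 120-degree spacing form a redundant, tight frame for R^2.
angles = 2 * np.pi * np.arange(3) / 3
F = np.stack([np.cos(angles), np.sin(angles)], axis=0)   # 2x3, columns = frame vectors

x = np.array([1.3, -0.4])
coeffs = F.T @ x                           # analysis: 3 redundant measurements
S = F @ F.T                                # frame operator, here (3/2) * identity
x_hat = np.linalg.solve(S, F @ coeffs)     # synthesis with the dual frame
print(np.allclose(x, x_hat))               # True: perfect reconstruction

# Redundancy in action: drop one coefficient and recover x from the other two.
keep = [0, 2]
x_lost, *_ = np.linalg.lstsq(F.T[keep], coeffs[keep], rcond=None)
print(np.allclose(x, x_lost))              # True: still recoverable here
```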

  • Electromagnetic Bandgap (EBG) Structures

    John Wiley and Sons Ltd Electromagnetic Bandgap (EBG) Structures

    10 in stock

    Book SynopsisAn essential guide to the background, design, and application of common-mode filtering structures in modern high-speed differential communication links Written by a team of experts in the field, Electromagnetic Bandgap (EBG) Structures explores the practical electromagnetic bandgap based common mode filters for power integrity applications and covers the theoretical and practical design approaches for common mode filtering in high-speed printed circuit boards, especially for boards in high data-rate systems. The authors describe the classic applications of electromagnetic bandgap (EBG) structures and the phenomena of common mode generation in high speed digital boards. The text also explores the fundamental electromagnetic mechanisms of the functioning of planar EBGs and considers the impact of planar EBGs on the digital signal propagation of single ended and differential interconnects routed on top or between EBGs. The authors examine the concept, design, and modeling of EBG common moTable of ContentsAbout the Authors vii Preface xi Acknowledgments xiii 1 Introduction 1 2 Planar EBGs: Fundamentals and Design 21 3 Impact of Planar EBGs on Signal Integrity in High-Speed Digital Boards 61 4 Planar Onboard EBG Filters for Common Mode Current Reduction 77 5 Special Topics for EBG Filters 159 6 Removable EBG Common Mode Filters 165 7 EBG Common Mode Filters: Modeling and Measurements 199 Index 219

    10 in stock

    £89.78

  • Fundamentals of Signal Enhancement and Array

    John Wiley & Sons Inc Fundamentals of Signal Enhancement and Array

    Out of stock

    Book Synopsis: A comprehensive guide to the theory and practice of signal enhancement and array signal processing, including MATLAB codes, exercises, and instructor and solution manuals. Systematically introduces the fundamental principles, theory and applications of signal enhancement and array signal processing in an accessible manner. Offers an updated and relevant treatment of array signal processing with rigor and concision. Features a companion website that includes presentation files with lecture notes, homework exercises, course projects, solution manuals, instructor manuals, and MATLAB codes for the examples in the book. Table of Contents: Preface vii 1 Introduction 1 1.1 Signal Enhancement 1 1.2 Approaches to Signal Enhancement 9 1.3 Array Signal Processing 11 1.4 Organization of the Book 14 1.5 How to Use the Book 16 2 Single-Channel Signal Enhancement in the Time Domain 21 2.1 Signal Model and Problem Formulation 21 2.2 Wiener Method 22 2.3 Spectral Method 40 2.4 Problems 55 3 Single-Channel Signal Enhancement in the Frequency Domain 61 3.1 Signal Model and Problem Formulation 61 3.2 Noise Reduction with Gains 62 3.3 Performance Measures 63 3.4 Optimal Gains 66 3.5 Constraint Wiener Gains 81 3.6 Implementation with the Short-Time Fourier Transform 87 3.7 Problems 95 4 Multichannel Signal Enhancement in the Time Domain 101 4.1 Signal Model and Problem Formulation 101 4.2 Conventional Method 102 4.3 Spectral Method 116 4.4 Case of a Rank Deficient Noise Correlation Matrix 131 4.5 Problems 136 5 Multichannel Signal Enhancement in the Frequency Domain 143 5.1 Signal Model and Problem Formulation 143 5.2 Linear Filtering 146 5.3 Performance Measures 147 5.4 Optimal Filters 152 5.5 Generalized Sidelobe Canceller Structure 171 5.6 A Signal Subspace Perspective 173 5.7 Implementation with the STFT 182 5.8 Problems 188 6 An Exhaustive Class of Linear Filters 197 6.1 Signal Model and Problem Formulation 197 6.2 Linear Filtering for Signal Enhancement 199 6.3 Performance Measures 200 6.4 Optimal Filters 202 6.5 Filling the Gap Between the Maximum SINR and Wiener Filters 214 6.6 Problems 221 7 Fixed Beamforming 227 7.1 Signal Model and Problem Formulation 227 7.2 Linear Array Model 228 7.3 Performance Measures 229 7.4 Spatial Aliasing 232 7.5 Fixed Beamformers 233 7.6 A Signal Subspace Perspective 253 7.7 Problems 261 8 Adaptive Beamforming 271 8.1 Signal Model, Problem Formulation, and Array Model 271 8.2 Performance Measures 272 8.3 Adaptive Beamformers 274 8.4 SNR Estimation 287 8.5 DOA Estimation 290 8.6 A Spectral Coherence Perspective 294 8.7 Problems 302 9 Differential Beamforming 309 9.1 Signal Model, Problem Formulation, and Array Model 309 9.2 Beampatterns 310 9.3 Front-to-Back Ratios 311 9.4 Array Gains 313 9.5 Examples of Theoretical Differential Beamformers 314 9.6 First-Order Design 317 9.7 Second-Order Design 320 9.8 Third-Order Design 328 9.9 Minimum-Norm Beamformers 335 9.10 Problems 341 10 Beampattern Design 349 10.1 Beampatterns Revisited 349 10.2 Nonrobust Approach 353 10.3 Robust Approach 355 10.4 Frequency-Invariant Beampattern Design 358 10.5 Least-Squares Method 361 10.6 Joint Optimization 367 10.7 Problems 378 11 Beamforming in the Time Domain 383 11.1 Signal Model and Problem Formulation 383 11.2 Broadband Beamforming 386 11.3 Performance Measures 387 11.4 Fixed Beamformers 391 11.5 Adaptive Beamformers 401 11.6 Differential Beamformers 413 11.7 Problems 423 Index 429

    Out of stock

    £110.15
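
    For a feel of the fixed beamforming material listed above, here is a minimal Python sketch of delay-and-sum beamforming on a uniform linear array. The 8-sensor array, half-wavelength spacing, and 20-degree look direction are illustrative assumptions rather than values from the book.

```python
import numpy as np

# Delay-and-sum (conventional) beamforming on a uniform linear array.
M, d_over_lambda = 8, 0.5                        # 8 sensors, half-wavelength spacing
m = np.arange(M)

def steering(theta_deg):
    """Narrowband steering vector for arrival angle theta (degrees)."""
    return np.exp(-2j * np.pi * d_over_lambda * m * np.sin(np.radians(theta_deg)))

look = 20.0
w = steering(look) / M                           # delay-and-sum weights

thetas = np.linspace(-90, 90, 361)
beampattern = np.array([abs(np.vdot(w, steering(t))) for t in thetas])
print("gain toward look direction:", round(beampattern[np.argmin(abs(thetas - look))], 3))
print("gain toward -60 degrees   :", round(beampattern[np.argmin(abs(thetas + 60))], 3))
```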

  • Statistical Signal Processing in Engineering

    John Wiley & Sons Inc Statistical Signal Processing in Engineering

    15 in stock

    Book SynopsisA problem-solving approach to statistical signal processing for practicing engineers, technicians, and graduate students This book takes a pragmatic approach in solving a set of common problems engineers and technicians encounter when processing signals.Table of ContentsList of Figures xvii List of Tables xxiii Preface xxv List of Abbreviations xxix How to Use the Book xxxi About the Companion Website xxxiii Prerequisites xxxv Why are there so many matrixes in this book? xxxvii 1 Manipulations on Matrixes 1 1.1 Matrix Properties 1 1.1.1 Elementary Operations 2 1.2 Eigen-Decomposition 6 1.3 Eigenvectors in Everyday Life 9 1.3.1 Conversations in a Noisy Restaurant 9 1.3.2 Power Control in a Cellular System 12 1.3.3 Price Equilibrium in the Economy 14 1.4 Derivative Rules 15 1.4.1 Derivative with respect to x 16 1.4.2 Derivative with respect to x 17 1.4.3 Derivative with respect to the Matrix X 18 1.5 Quadratic Forms 19 1.6 Diagonalization of a Quadratic Form 20 1.7 Rayleigh Quotient 21 1.8 Basics of Optimization 22 1.8.1 Quadratic Function with Simple Linear Constraint (M=1) 23 1.8.2 Quadratic Function with Multiple Linear Constraints 23 Appendix A: Arithmetic vs. Geometric Mean 24 2 Linear Algebraic Systems 27 2.1 Problem Definition and Vector Spaces 27 2.1.1 Vector Spaces in Tomographic Radiometric Inversion 29 2.2 Rotations 31 2.3 Projection Matrixes and Data-Filtering 33 2.3.1 Projections and Commercial FM Radio 34 2.4 Singular Value Decomposition (SVD) and Subspaces 34 2.4.1 How to Choose the Rank of Afor Gaussian Model? 35 2.5 QR and Cholesky Factorization 36 2.6 Power Method for Leading Eigenvectors 38 2.7 Least Squares Solution of Overdetermined Linear Equations 39 2.8 Efficient Implementation of the LS Solution 41 2.9 Iterative Methods 42 3 Random Variables in Brief 45 3.1 Probability Density Function (pdf), Moments, and Other Useful Properties 45 3.2 Convexity and Jensen Inequality 49 3.3 Uncorrelatedness and Statistical Independence 49 3.4 Real-Valued Gaussian Random Variables 51 3.5 Conditional pdf for Real-Valued Gaussian Random Variables 54 3.6 Conditional pdf in Additive Noise Model 56 3.7 Complex Gaussian Random Variables 56 3.7.1 Single Complex Gaussian Random Variable 56 3.7.2 Circular Complex Gaussian Random Variable 57 3.7.3 Multivariate Complex Gaussian Random Variables 58 3.8 Sum of Square of Gaussians: Chi-Square 59 3.9 Order Statistics for N rvs 60 4 Random Processes and Linear Systems 63 4.1 Moment Characterizations and Stationarity 64 4.2 Random Processes and Linear Systems 66 4.3 Complex-Valued Random Processes 68 4.4 Pole-Zero and Rational Spectra (Discrete-Time) 69 4.4.1 Stability of LTI Systems 70 4.4.2 Rational PSD 71 4.4.3 Paley–Wiener Theorem 72 4.5 Gaussian Random Process (Discrete-Time) 73 4.6 Measuring Moments in Stochastic Processes 75 Appendix A: Transforms for Continuous-Time Signals 76 Appendix B: Transforms for Discrete-Time Signals 79 5 Models and Applications 83 5.1 Linear Regression Model 84 5.2 Linear Filtering Model 86 5.2.1 Block-Wise Circular Convolution 88 5.2.2 Discrete Fourier Transform and Circular Convolution Matrixes 89 5.2.3 Identification and Deconvolution 90 5.3 MIMO systems and Interference Models 91 5.3.1 DSL System 92 5.3.2 MIMO in Wireless Communication 92 5.4 Sinusoidal Signal 97 5.5 Irregular Sampling and Interpolation 97 5.5.1 Sampling With Jitter 100 5.6 Wavefield Sensing System 101 6 Estimation Theory 105 6.1 Historical Notes 105 6.2 Non-Bayesian vs. 
Bayesian 106 6.3 Performance Metrics and Bounds 107 6.3.1 Bias 107 6.3.2 Mean Square Error (MSE) 108 6.3.3 Performance Bounds 109 6.4 Statistics and Sufficient Statistics 110 6.5 MVU and BLU Estimators 111 6.6 BLUE for Linear Models 112 6.7 Example: BLUE of the Mean Value of Gaussian rvs 114 7 Parameter Estimation 117 7.1 Maximum Likelihood Estimation (MLE) 117 7.2 MLE for Gaussian Model 119 7.2.1 Additive Noise Model with 119 7.2.2 Additive Noise Model with 120 7.2.3 Additive Noise Model with Multiple Observations with Known 121 7.2.3.1 Linear Model 121 7.2.3.2 Model 122 7.2.3.3 Model 123 7.2.4 Model 123 7.2.5 Additive Noise Model with Multiple Observations with Unknown 124 7.3 Other Noise Models 125 7.4 MLE and Nuisance Parameters 126 7.5 MLE for Continuous-Time Signals 128 7.5.1 Example: Amplitude Estimation 129 7.5.2 MLE for Correlated Noise 130 7.6 MLE for Circular Complex Gaussian 131 7.7 Estimation in Phase/Frequency Modulations 131 7.7.1 MLE Phase Estimation 132 7.7.2 Phase Locked Loops 133 7.8 Least Square (LS) Estimation 135 7.8.1 Weighted LS with 136 7.8.2 LS Estimation and Linear Models 137 7.8.3 Under or Over-Parameterizing? 138 7.8.4 Constrained LS Estimation 139 7.9 Robust Estimation 140 8 Cramér–Rao Bound 143 8.1 Cramér–Rao Bound and Fisher Information Matrix 143 8.1.1 CRB for Scalar Problem (P=1) 143 8.1.2 CRB and Local Curvature of Log-Likelihood 144 8.1.3 CRB for Multiple Parameters (p 1) 144 8.2 Interpretation of CRB and Remarks 146 8.2.1 Variance of Each Parameter 146 8.2.2 Compactness of the Estimates 146 8.2.3 FIM for Known Parameters 147 8.2.4 Approximation of the Inverse of FIM 148 8.2.5 Estimation Decoupled From FIM 148 8.2.6 CRB and Nuisance Parameters 149 8.2.7 CRB for Non-Gaussian rv and Gaussian Bound 149 8.3 CRB and Variable Transformations 150 8.4 FIM for Gaussian Parametric Model 151 8.4.1 FIM for with 151 8.4.2 FIM for Continuous-Time Signals in Additive White Gaussian Noise 152 8.4.3 FIM for Circular Complex Model 152 Appendix A: Proof of CRB 154 Appendix B: FIM for Gaussian Model 156 Appendix C: Some Derivatives for MLE and CRB Computations 157 9 MLE and CRB for Some Selected Cases 159 9.1 Linear Regressions 159 9.2 Frequency Estimation 162 9.3 Estimation of Complex Sinusoid 164 9.3.1 Proper, Improper, and Non-Circular Signals 165 9.4 Time of Delay Estimation 166 9.5 Estimation of Max for Uniform pdf 170 9.6 Estimation of Occurrence Probability for Binary pdf 172 9.7 How to Optimize Histograms? 173 9.8 Logistic Regression 176 10 Numerical Analysis and Montecarlo Simulations 179 10.1 System Identification and Channel Estimation 181 10.1.1 Matlab Code and Results 184 10.2 Frequency Estimation 184 10.2.1 Variable (Coarse/Fine) Sampling 187 10.2.2 Local Parabolic Regression 189 10.2.3 Matlab Code and Results 190 10.3 Time of Delay Estimation 192 10.3.1 Granularity of Sampling in ToD Estimation 193 10.3.2 Matlab Code and Results 194 10.4 Doppler-Radar System by Frequency Estimation 196 10.4.1 EM Method 197 10.4.2 Matlab Code and Results 199 11 Bayesian Estimation 201 11.1 Additive Linear Model with Gaussian Noise 203 11.1.1 Gaussian A-priori: 204 11.1.2 Non-Gaussian A-Priori 206 11.1.3 Binary Signals: MMSE vs. 
MAP Estimators 207 11.1.4 Example: Impulse Noise Mitigation 210 11.2 Bayesian Estimation in Gaussian Settings 212 11.2.1 MMSE Estimator 213 11.2.2 MMSE Estimator for Linear Models 213 11.3 LMMSE Estimation and Orthogonality 215 11.4 Bayesian CRB 218 11.5 Mixing Bayesian and Non-Bayesian 220 11.5.1 Linear Model with Mixed Random/Deterministic Parameters 220 11.5.2 Hybrid CRB 222 11.6 Expectation-Maximization (EM) 223 11.6.1 EM of the Sum of Signals in Gaussian Noise 224 11.6.2 EM Method for the Time of Delay Estimation of Multiple Waveforms 227 11.6.3 Remarks 228 Appendix A: Gaussian Mixture pdf 229 12 Optimal Filtering 231 12.1 Wiener Filter 231 12.2 MMSE Deconvolution (or Equalization) 233 12.3 Linear Prediction 234 12.3.1 Yule–Walker Equations 235 12.4 LS Linear Prediction 237 12.5 Linear Prediction and AR Processes 239 12.6 Levinson Recursion and Lattice Predictors 241 13 Bayesian Tracking and Kalman Filter 245 13.1 Bayesian Tracking of State in Dynamic Systems 246 13.1.1 Evolution of the A-posteriori pdf 247 13.2 Kalman Filter (KF) 249 13.2.1 KF Equations 251 13.2.2 Remarks 253 13.3 Identification of Time-Varying Filters in Wireless Communication 255 13.4 Extended Kalman Filter (EKF) for Non-Linear Dynamic Systems 257 13.5 Position Tracking by Multi-Lateration 258 13.5.1 Positioning and Noise 260 13.5.2 Example of Position Tracking 263 13.6 Non-Gaussian Pdf and Particle Filters264 14 Spectral Analysis 267 14.1 Periodogram 268 14.1.1 Bias of the Periodogram 268 14.1.2 Variance of the Periodogram 271 14.1.3 Filterbank Interpretation 273 14.1.4 Pdf of the Periodogram (White Gaussian Process) 274 14.1.5 Bias and Resolution 275 14.1.6 Variance Reduction and WOSA 278 14.1.7 Numerical Example: Bandlimited Process and (Small) Sinusoid 280 14.2 Parametric Spectral Analysis 282 14.2.1 MLE and CRB 284 14.2.2 General Model for AR, MA, ARMA Spectral Analysis 285 14.3 AR Spectral Analysis 286 14.3.1 MLE and CRB 286 14.3.2 A Good Reason to Avoid Over-Parametrization in AR 289 14.3.3 Cramér–Rao Bound of Poles in AR Spectral Analysis 291 14.3.4 Example: Frequency Estimation by AR Spectral Analysis 293 14.4 MA Spectral Analysis 296 14.5 ARMA Spectral Analysis 298 14.5.1 Cramér–Rao Bound for ARMA Spectral Analysis 300 Appendix A: Which Sample Estimate of the Autocorrelation to Use? 302 Appendix B: Eigenvectors and Eigenvalues of Correlation Matrix 303 Appendix C: Property of Monic Polynomial 306 Appendix D: Variance of Pole in AR(1) 307 15 Adaptive Filtering 309 15.1 Adaptive Interference Cancellation 311 15.2 Adaptive Equalization in Communication Systems 313 15.2.1 Wireless Communication Systems in Brief 313 15.2.2 Adaptive Equalization 315 15.3 Steepest Descent MSE Minimization 317 15.3.1 Convergence Analysis and Step-Size 318 15.3.2 An Intuitive View of Convergence Conditions 320 15.4 From Iterative to Adaptive Filters 323 15.5 LMS Algorithm and Stochastic Gradient 324 15.6 Convergence Analysis of LMS Algorithm 325 15.6.1 Convergence in the Mean 326 15.6.2 Convergence in the Mean Square 326 15.6.3 Excess MSE 329 15.7 Learning Curve of LMS 331 15.7.1 Optimization of the Step-Size 332 15.8 NLMS Updating and Non-Stationarity 333 15.9 Numerical Example: Adaptive Identification 334 15.10 RLS Algorithm 338 15.10.1 Convergence Analysis 339 15.10.2 Learning Curve of RLS 341 15.11 Exponentially-Weighted RLS 342 15.12 LMS vs. 
RLS 344 Appendix A: Convergence in Mean Square 344 16 Line Spectrum Analysis 347 16.1 Model Definition 349 16.1.1 Deterministic Signals 350 16.1.2 Random Signals 350 16.1.3 Properties of Structured Covariance 351 16.2 Maximum Likelihood and Cramér–Rao Bounds 352 16.2.1 Conditional ML 353 16.2.2 Cramér–Rao Bound for Conditional Model 354 16.2.3 Unconditional ML 356 16.2.4 Cramér–Rao Bound for Unconditional Model 356 16.2.5 Conditional vs. Unconditional Model & Bounds 357 16.3 High-Resolution Methods 357 16.3.1 Iterative Quadratic ML (IQML) 358 16.3.2 Prony Method 360 16.3.3 MUSIC 360 16.3.4 ESPRIT 363 16.3.5 Model Order 365 17 Equalization in Communication Engineering 367 17.1 Linear Equalization 369 17.1.1 Zero Forcing (ZF) Equalizer 370 17.1.2 Minimum Mean Square Error (MMSE) Equalizer 371 17.1.3 Finite-Length/Finite-Block Equalizer 371 17.2 Non-Linear Equalization 372 17.2.1 ZF-DFE 373 17.2.2 MMSE–DFE 374 17.2.3 Finite-Length MMSE–DFE 375 17.2.4 Asymptotic Performance for Infinite-Length Equalizers 376 17.3 MIMO Linear Equalization 377 17.3.1 ZF MIMO Equalization 377 17.3.2 MMSE MIMO Equalization 379 17.4 MIMO–DFE Equalization 379 17.4.1 Cholesky Factorization and Min/Max Phase Decomposition 379 17.4.2 MIMO–DFE 380 18 2D Signals and Physical Filters 383 18.1 2D Sinusoids 384 18.1.1 Moiré Pattern 386 18.2 2D Filtering 388 18.2.1 2D Random Fields 390 18.2.2 Wiener Filtering 391 18.2.3 Image Acquisition and Restoration 392 18.3 Diffusion Filtering 394 18.3.1 Evolution vs. Time: Fourier Method 394 18.3.2 Extrapolation of the Density 395 18.3.3 Effect of Phase-Shift 396 18.4 Laplace Equation and Exponential Filtering 398 18.5 Wavefield Propagation 400 18.5.1 Propagation/Backpropagation 400 18.5.2 Wavefield Extrapolation and Focusing 402 18.5.3 Exploding Reflector Model 402 18.5.4 Wavefield Extrapolation 404 18.5.5 Wavefield Focusing (or Migration) 406 Appendix A: Properties of 2D Signals 406 Appendix B: Properties of 2D Fourier Transform 410 Appendix C: Finite Difference Method for PDE-Diffusion 412 19 Array Processing 415 19.1 Narrowband Model 415 19.1.1 Multiple DoAs and Multiple Sources 419 19.1.2 Sensor Spacing Design 420 19.1.3 Spatial Resolution and Array Aperture 421 19.2 Beamforming and Signal Estimation 422 19.2.1 Conventional Beamforming 425 19.2.2 Capon Beamforming (MVDR) 426 19.2.3 Multiple-Constraint Beamforming 429 19.2.4 Max-SNR Beamforming 431 19.3 DoA Estimation 432 19.3.1 ML Estimation and CRB 433 19.3.2 Beamforming and Root-MVDR 434 20 Multichannel Time of Delay Estimation 435 20.1 Model Definition for ToD 440 20.2 High Resolution Method for ToD (L=1) 441 20.2.1 ToD in the Fourier Transformed Domain 441 20.2.2 CRB and Resolution 444 20.3 Difference of ToD (DToD) Estimation 445 20.3.1 Correlation Method for DToD 445 20.3.2 Generalized Correlation Method 448 20.4 Numerical Performance Analysis of DToD 452 20.5 Wavefront Estimation: Non-Parametric Method (L=1) 454 20.5.1 Wavefront Estimation in Remote Sensing and Geophysics 456 20.5.2 Narrowband Waveforms and 2D Phase Unwrapping 457 20.5.3 2D Phase Unwrapping in Regular Grid Spacing 458 20.6 Parametric ToD Estimation and Wideband Beamforming 460 20.6.1 Delay and Sum Beamforming 462 20.6.2 Wideband Beamforming After Fourier Transform 464 20.7 Appendix A: Properties of the Sample Correlations 465 20.8 Appendix B: How to Delay a Discrete-Time Signal? 
466 20.9 Appendix C: Wavefront Estimation for 2D Arrays 467 21 Tomography 467 21.1 X-ray Tomography 471 21.1.1 Discrete Model 471 21.1.2 Maximum Likelihood 473 21.1.3 Emission Tomography 473 21.2 Algebraic Reconstruction Tomography (ART) 475 21.3 Reconstruction From Projections: Fourier Method 475 21.3.1 Backprojection Algorithm 476 21.3.2 How Many Projections to Use? 479 21.4 Traveltime Tomography 480 21.5 Internet (Network) Tomography 483 21.5.1 Latency Tomography 484 21.5.2 Packet-Loss Tomography 484 22 Cooperative Estimation 487 22.1 Consensus and Cooperation 490 22.1.1 Vox Populi: The Wisdom of Crowds 490 22.1.2 Cooperative Estimation as Simple Information Consensus 490 22.1.3 Weighted Cooperative Estimation ( ) 493 22.1.4 Distributed MLE ( ) 495 22.2 Distributed Estimation for Arbitrary Linear Models (p>1) 496 22.2.1 Centralized MLE 497 22.2.2 Distributed Weighted LS 498 22.2.3 Distributed MLE 500 22.2.4 Distributed Estimation for Under-Determined Systems 501 22.2.5 Stochastic Regressor Model 503 22.2.6 Cooperative Estimation in the Internet of Things (IoT) 503 22.2.7 Example: Iterative Distributed Estimation 505 22.3 Distributed Synchronization 506 22.3.1 Synchrony-States for Analog and Discrete-Time Clocks 507 22.3.2 Coupled Clocks 510 22.3.3 Internet Synchronization and the Network Time Protocol (NTP) 512 Appendix A: Basics of Undirected Graphs 515 23 Classification and Clustering 521 23.1 Historical Notes 522 23.2 Classification 523 23.2.1 Binary Detection Theory 523 23.2.2 Binary Classification of Gaussian Distributions 528 23.3 Classification of Signals in Additive Gaussian Noise 529 23.3.1 Detection of Known Signal 531 23.3.2 Classification of Multiple Signals 532 23.3.3 Generalized Likelihood Ratio Test (GLRT) 533 23.3.4 Detection of Random Signals 535 23.4 Bayesian Classification 536 23.4.1 To Classify or Not to Classify? 537 23.4.2 Bayes Risk 537 23.5 Pattern Recognition and Machine Learning 538 23.5.1 Linear Discriminant 539 23.5.2 Least Squares Classification 540 23.5.3 Support Vectors Principle 541 23.6 Clustering 543 23.6.1 K-Means Clustering 544 23.6.2 EM Clustering 545 References 549 Index 557

    15 in stock

    £91.76
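
    As a small taste of the adaptive filtering chapter listed above, the Python sketch below runs a plain LMS adaptive identification of a short FIR system, in the spirit of the book's numerical example but with an arbitrary system, step size, and data length chosen here for illustration.

```python
import numpy as np

# LMS adaptive identification of an unknown 3-tap FIR system (illustrative values).
rng = np.random.default_rng(1)
h_true = np.array([0.8, -0.4, 0.2])              # unknown system to identify
N, mu = 5000, 0.02                                # samples and LMS step size

x = rng.standard_normal(N)
d = np.convolve(x, h_true)[:N] + 0.01 * rng.standard_normal(N)   # noisy system output

w = np.zeros_like(h_true)                         # adaptive filter weights
for n in range(len(h_true), N):
    u = x[n:n - len(h_true):-1]                   # most recent inputs, newest first
    e = d[n] - w @ u                              # a-priori error
    w = w + mu * e * u                            # stochastic-gradient (LMS) update

print(np.round(w, 3))                             # approximately [0.8, -0.4, 0.2]
```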

  • Model-Based Processing

    John Wiley & Sons Inc Model-Based Processing

    2 in stock

    Book SynopsisA bridge between the application of subspace-based methods for parameter estimation in signal processing and subspace-based system identification in control systems, Model-Based Processing: An Applied Subspace Identification Approach provides expert insight on developing models for designing model-based signal processors (MBSP) employing subspace identification techniques to achieve model-based identification (MBID), and enables readers to evaluate overall performance using validation and statistical analysis methods. Focusing on subspace approaches to system identification problems, this book teaches readers to identify models quickly and incorporate them into various processing problems, including state estimation, tracking, detection, classification, controls, communications, and other applications that require reliable models that can be adapted to dynamic environments. The extraction of a model from data is vital to numerous applications. (A brief illustrative sketch of an SVD-based realization step follows this listing.)Table of ContentsPreface xiii Acknowledgements xxi Glossary xxiii 1 Introduction 1 1.1 Background 1 1.2 Signal Estimation 2 1.3 Model-Based Processing 8 1.4 Model-Based Identification 16 1.5 Subspace Identification 20 1.6 Notation and Terminology 22 1.7 Summary 24 MATLAB Notes 25 References 25 Problems 26
2 Random Signals and Systems 29 2.1 Introduction 29 2.2 Discrete Random Signals 32 2.3 Spectral Representation of Random Signals 36 2.4 Discrete Systems with Random Inputs 40 2.4.1 Spectral Theorems 41 2.4.2 ARMAX Modeling 42 2.5 Spectral Estimation 44 2.5.1 Classical (Nonparametric) Spectral Estimation 44 2.5.1.1 Correlation Method (Blackman–Tukey) 45 2.5.1.2 Average Periodogram Method (Welch) 46 2.5.2 Modern (Parametric) Spectral Estimation 47 2.5.2.1 Autoregressive (All-Pole) Spectral Estimation 48 2.5.2.2 Autoregressive Moving Average Spectral Estimation 51 2.5.2.3 Minimum Variance Distortionless Response (MVDR) Spectral Estimation 52 2.5.2.4 Multiple Signal Classification (MUSIC) Spectral Estimation 55 2.6 Case Study: Spectral Estimation of Bandpass Sinusoids 59 2.7 Summary 61 MATLAB Notes 61 References 62 Problems 64
3 State-Space Models for Identification 69 3.1 Introduction 69 3.2 Continuous-Time State-Space Models 69 3.3 Sampled-Data State-Space Models 73 3.4 Discrete-Time State-Space Models 74 3.4.1 Linear Discrete Time-Invariant Systems 77 3.4.2 Discrete Systems Theory 78 3.4.3 Equivalent Linear Systems 82 3.4.4 Stable Linear Systems 83 3.5 Gauss–Markov State-Space Models 83 3.5.1 Discrete-Time Gauss–Markov Models 83 3.6 Innovations Model 89 3.7 State-Space Model Structures 90 3.7.1 Time-Series Models 91 3.7.2 State-Space and Time-Series Equivalence Models 91 3.8 Nonlinear (Approximate) Gauss–Markov State-Space Models 97 3.9 Summary 101 MATLAB Notes 102 References 102 Problems 103
4 Model-Based Processors 107 4.1 Introduction 107 4.2 Linear Model-Based Processor: Kalman Filter 108 4.2.1 Innovations Approach 110 4.2.2 Bayesian Approach 114 4.2.3 Innovations Sequence 116 4.2.4 Practical Linear Kalman Filter Design: Performance Analysis 117 4.2.5 Steady-State Kalman Filter 125 4.2.6 Kalman Filter/Wiener Filter Equivalence 128 4.3 Nonlinear State-Space Model-Based Processors 129 4.3.1 Nonlinear Model-Based Processor: Linearized Kalman Filter 130 4.3.2 Nonlinear Model-Based Processor: Extended Kalman Filter 133 4.3.3 Nonlinear Model-Based Processor: Iterated–Extended Kalman Filter 138 4.3.4 Nonlinear Model-Based Processor: Unscented Kalman Filter 141 4.3.5 Practical Nonlinear Model-Based Processor Design: Performance Analysis 148 4.3.6 Nonlinear Model-Based Processor: Particle Filter 151 4.3.7 Practical Bayesian Model-Based Design: Performance Analysis 160 4.4 Case Study: 2D-Tracking Problem 166 4.5 Summary 173 MATLAB Notes 173 References 174 Problems 177
5 Parametrically Adaptive Processors 185 5.1 Introduction 185 5.2 Parametrically Adaptive Processors: Bayesian Approach 186 5.3 Parametrically Adaptive Processors: Nonlinear Kalman Filters 187 5.3.1 Parametric Models 188 5.3.2 Classical Joint State/Parametric Processors: Augmented Extended Kalman Filter 190 5.3.3 Modern Joint State/Parametric Processor: Augmented Unscented Kalman Filter 198 5.4 Parametrically Adaptive Processors: Particle Filter 201 5.4.1 Joint State/Parameter Estimation: Particle Filter 201 5.5 Parametrically Adaptive Processors: Linear Kalman Filter 208 5.6 Case Study: Random Target Tracking 214 5.7 Summary 222 MATLAB Notes 223 References 223 Problems 226
6 Deterministic Subspace Identification 231 6.1 Introduction 231 6.2 Deterministic Realization Problem 232 6.2.1 Realization Theory 233 6.2.2 Balanced Realizations 238 6.2.3 Systems Theory Summary 239 6.3 Classical Realization 241 6.3.1 Ho–Kalman Realization Algorithm 241 6.3.2 SVD Realization Algorithm 243 6.3.2.1 Realization: Linear Time-Invariant Mechanical Systems 246 6.3.3 Canonical Realization 251 6.3.3.1 Invariant System Descriptions 251 6.3.3.2 Canonical Realization Algorithm 257 6.4 Deterministic Subspace Realization: Orthogonal Projections 264 6.4.1 Subspace Realization: Orthogonal Projections 266 6.4.2 Multivariable Output Error State-Space (MOESP) Algorithm 271 6.5 Deterministic Subspace Realization: Oblique Projections 274 6.5.1 Subspace Realization: Oblique Projections 278 6.5.2 Numerical Algorithms for Subspace State-Space System Identification (N4SID) Algorithm 280 6.6 Model Order Estimation and Validation 285 6.6.1 Order Estimation: SVD Approach 286 6.6.2 Model Validation 289 6.7 Case Study: Structural Vibration Response 295 6.8 Summary 299 MATLAB Notes 300 References 300 Problems 303
7 Stochastic Subspace Identification 309 7.1 Introduction 309 7.2 Stochastic Realization Problem 312 7.2.1 Correlated Gauss–Markov Model 312 7.2.2 Gauss–Markov Power Spectrum 313 7.2.3 Gauss–Markov Measurement Covariance 314 7.2.4 Stochastic Realization Theory 315 7.3 Classical Stochastic Realization via the Riccati Equation 317 7.4 Classical Stochastic Realization via Kalman Filter 321 7.4.1 Innovations Model 321 7.4.2 Innovations Power Spectrum 322 7.4.3 Innovations Measurement Covariance 323 7.4.4 Stochastic Realization: Innovations Model 325 7.5 Stochastic Subspace Realization: Orthogonal Projections 330 7.5.1 Multivariable Output Error State-Space (MOESP) Algorithm 334 7.6 Stochastic Subspace Realization: Oblique Projections 342 7.6.1 Numerical Algorithms for Subspace State-Space System Identification (N4SID) Algorithm 346 7.6.2 Relationship: Oblique (N4SID) and Orthogonal (MOESP) Algorithms 351 7.7 Model Order Estimation and Validation 353 7.7.1 Order Estimation: Stochastic Realization Problem 354 7.7.1.1 Order Estimation: Statistical Methods 356 7.7.2 Model Validation 362 7.7.2.1 Residual Testing 363 7.8 Case Study: Vibration Response of a Cylinder: Identification and Tracking 369 7.9 Summary 378 MATLAB Notes 378 References 379 Problems 382
8 Subspace Processors for Physics-Based Application 391 8.1 Subspace Identification of a Structural Device 391 8.1.1 State-Space Vibrational Systems 392 8.1.1.1 State-Space Realization 394 8.1.2 Deterministic State-Space Realizations 396
8.1.2.1 Subspace Approach 396 8.1.3 Vibrational System Processing 398 8.1.4 Application: Vibrating Structural Device 400 8.1.5 Summary 404 8.2 MBID for Scintillator System Characterization 405 8.2.1 Scintillation Pulse Shape Model 407 8.2.2 Scintillator State-Space Model 409 8.2.3 Scintillator Sampled-Data State-Space Model 410 8.2.4 Gauss–Markov State-Space Model 411 8.2.5 Identification of the Scintillator Pulse Shape Model 412 8.2.6 Kalman Filter Design: Scintillation/Photomultiplier System 414 8.2.6.1 Kalman Filter Design: Scintillation/Photomultiplier Data 416 8.2.7 Summary 417 8.3 Parametrically Adaptive Detection of Fission Processes 418 8.3.1 Fission-Based Processing Model 419 8.3.2 Interarrival Distribution 420 8.3.3 Sequential Detection 422 8.3.4 Sequential Processor 422 8.3.5 Sequential Detection for Fission Processes 424 8.3.6 Bayesian Parameter Estimation 426 8.3.7 Sequential Bayesian Processor 427 8.3.8 Particle Filter for Fission Processes 429 8.3.9 SNM Detection and Estimation: Synthesized Data 430 8.3.10 Summary 433 8.4 Parametrically Adaptive Processing for Shallow Ocean Application 435 8.4.1 State-Space Propagator 436 8.4.2 State-Space Model 436 8.4.2.1 Augmented State-Space Models 438 8.4.3 Processors 441 8.4.4 Model-Based Ocean Acoustic Processing 444 8.4.4.1 Adaptive PF Design: Modal Coefficients 445 8.4.4.2 Adaptive PF Design: Wavenumbers 447 8.4.5 Summary 450 8.5 MBID for Chirp Signal Extraction 452 8.5.1 Chirp-like Signals 453 8.5.1.1 Linear Chirp 453 8.5.1.2 Frequency-Shift Key (FSK) Signal 455 8.5.2 Model-Based Identification: Linear Chirp Signals 457 8.5.2.1 Gauss–Markov State-Space Model: Linear Chirp 457 8.5.3 Model-Based Identification: FSK Signals 459 8.5.3.1 Gauss–Markov State-Space Model: FSK Signals 460 8.5.4 Summary 462 References 462 Appendix A Probability and Statistics Overview 467 A.1 Probability Theory 467 A.2 Gaussian Random Vectors 473 A.3 Uncorrelated Transformation: Gaussian Random Vectors 473 A.4 Toeplitz Correlation Matrices 474 A.5 Important Processes 474 References 476 Appendix B Projection Theory 477 B.1 Projections: Deterministic Spaces 477 B.2 Projections: Random Spaces 478 B.3 Projection: Operators 479 B.3.1 Orthogonal (Perpendicular) Projections 479 B.3.2 Oblique (Parallel) Projections 481 References 483 Appendix C Matrix Decompositions 485 C.1 Singular-Value Decomposition 485 C.2 QR-Decomposition 487 C.3 LQ-Decomposition 487 References 488 Appendix D Output-Only Subspace Identification 489 References 492 Index 495

    2 in stock

    £108.86
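
    The sketch referenced in the listing above illustrates the flavor of the deterministic SVD realization step named in the table of contents (Section 6.3.2): build a Hankel matrix from impulse-response (Markov) parameters, factor it by SVD, and read off a state-space model. This is a minimal sketch under our own assumptions, not code from the book; the function name, its arguments, and the toy second-order system are invented for illustration.

```python
# Minimal SVD-based (Ho-Kalman style) state-space realization sketch.
# Illustrative only -- not the book's code. Requires only NumPy.
import numpy as np

def svd_realization(markov, order, rows=10, cols=10):
    """Recover (A, B, C) of a SISO LTI system from its Markov parameters.

    markov[k] is assumed to equal C @ A^(k-1) @ B for k >= 1 (markov[0] = D).
    """
    # Block Hankel matrix built from the impulse-response (Markov) sequence.
    H = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H)
    sqrt_S = np.diag(np.sqrt(s[:order]))
    Gamma = U[:, :order] @ sqrt_S        # extended observability matrix
    Omega = sqrt_S @ Vt[:order, :]       # extended controllability matrix
    C = Gamma[:1, :]                     # first (block) row of Gamma
    B = Omega[:, :1]                     # first (block) column of Omega
    # Shift invariance of Gamma yields A via a least-squares (pseudoinverse) fit.
    A = np.linalg.pinv(Gamma[:-1, :]) @ Gamma[1:, :]
    return A, B, C

# Quick self-check on a synthetic second-order system (all values invented).
A0 = np.array([[0.9, 0.2], [0.0, 0.7]])
B0 = np.array([[1.0], [0.5]])
C0 = np.array([[1.0, 0.0]])
h = [0.0] + [(C0 @ np.linalg.matrix_power(A0, k - 1) @ B0).item() for k in range(1, 25)]
A, B, C = svd_realization(h, order=2)
h_hat = [(C @ np.linalg.matrix_power(A, k - 1) @ B).item() for k in range(1, 25)]
print(np.allclose(h[1:], h_hat))  # True: the impulse responses match
```

    Note that the identified matrices agree with the true system only up to a similarity transform, which is why the check compares impulse responses rather than the matrices themselves.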

  • An Introduction to Audio Content Analysis

    John Wiley & Sons Inc An Introduction to Audio Content Analysis

    15 in stock

    Book SynopsisAn Introduction to Audio Content Analysis enables readers to understand the algorithmic analysis of musical audio signals with AI-driven approaches. An Introduction to Audio Content Analysis serves as a comprehensive guide on audio content analysis, explaining how signal processing and machine learning approaches can be utilized for the extraction of musical content from audio. It gives readers the algorithmic understanding to teach a computer to interpret music signals and thus allows for the design of tools for interacting with music. The work ties together topics from audio signal processing and machine learning, showing how to use audio content analysis to pick up musical characteristics automatically. A multitude of audio content analysis tasks related to the extraction of tonal, temporal, timbral, and intensity-related characteristics of the music signal are presented. Each task is introduced from both a musical and a technical perspective, detailing the algorithmic approach as well as providing practical guidance on implementation details and evaluation. To aid in reader comprehension, each task description begins with a short introduction to the most important musical and perceptual characteristics of the covered topic, followed by a detailed algorithmic model and its evaluation, and concludes with questions and exercises. For the interested reader, updated supplemental materials are provided via an accompanying website. Written by a well-known expert in the music industry, An Introduction to Audio Content Analysis covers sample topics including: digital audio signals and their representation, common time-frequency transforms, and audio features; pitch and fundamental frequency detection, key and chord; representation of dynamics in music and intensity-related features; onset and tempo detection, beat histograms, detection of structure in music, and sequence alignment; and audio fingerprinting, musical genre, mood, and instrument classification. An invaluable guide for newcomers to audio signal processing and industry experts alike, An Introduction to Audio Content Analysis covers a wide range of introductory topics pertaining to music information retrieval and machine listening, allowing students and researchers to quickly gain core holistic knowledge in audio analysis and dig deeper into specific aspects of the field with the help of a large number of references. (A brief illustrative sketch of fundamental-frequency estimation follows this listing.)Table of ContentsAuthor Biography xvii Preface xix Acronyms xxi List of Symbols xxv Source Code Repositories xxix 1 Introduction 1 Part I Fundamentals of Audio Content Analysis 9 2 Analysis of Audio Signals 11 3 Input Representation 17 4 Inference 91 5 Data 107 Part II Music Transcription 127 7 Tonal Analysis 129 8 Intensity 217 9 Temporal Analysis 229 10 Alignment 281 Part III Music Identification, Classification, and Assessment 303 11 Audio Fingerprinting 305 12 Music Similarity Detection and Music Genre Classification 317 13 Mood Recognition 337 14 Musical Instrument Recognition 347 15 Music Performance Assessment 355 Part IV Appendices 365 Appendix A Fundamentals 367 Appendix B Fourier Transform 385 Appendix C Principal Component Analysis 405 Appendix D Linear Regression 409 Appendix E Software for Audio Analysis 411 Appendix F Datasets 417 Index 425

    15 in stock

    £91.80
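
    As noted in the listing above, the sketch below illustrates one of the simplest tasks in that space, fundamental-frequency (pitch) estimation via an autocorrelation maximum. It is not the book's algorithm or code; the function name, the parameter choices, and the 440 Hz test tone are assumptions made purely for illustration.

```python
# Minimal autocorrelation-based fundamental-frequency estimator.
# Illustrative only -- not the book's algorithm or code. Requires only NumPy.
import numpy as np

def estimate_f0(frame, fs, f_min=50.0, f_max=1000.0):
    """Estimate the fundamental frequency (Hz) of a mono audio frame."""
    frame = frame - np.mean(frame)                                   # remove DC offset
    acf = np.correlate(frame, frame, mode="full")[len(frame) - 1:]   # lags >= 0
    lag_min = int(fs / f_max)                         # shortest lag = highest pitch
    lag_max = int(fs / f_min)                         # longest lag  = lowest pitch
    best_lag = lag_min + np.argmax(acf[lag_min:lag_max])  # strongest periodicity
    return fs / best_lag

# Quick check: a 440 Hz sine at 44.1 kHz lands within one lag step of 440 Hz.
fs = 44100
t = np.arange(2048) / fs
print(estimate_f0(np.sin(2 * np.pi * 440.0 * t), fs))  # about 441 Hz
```

    Restricting the lag search to a plausible pitch range keeps the zero-lag peak and overly long lags out of the maximum; resolution is limited to whole samples, hence the estimate of roughly 441 Hz for a 440 Hz tone at 44.1 kHz.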
