Mathematical and Statistical Software Books
Society for Industrial and Applied Mathematics Learning MATLAB
Book Synopsis
This engaging book is a concise introduction to the MATLAB programming language for students and professionals in mathematics, science, and engineering. It can be used as the primary text for a short course, as a companion textbook for a numerical computing course, or for self-study. The presentation is designed to guide a new MATLAB user through the basics of interacting with and programming in the MATLAB software, and into some of the more important advanced techniques, including the solution of common problem types in scientific computing. Rather than including exhaustive technical syntax material, this book aims to teach through readily understood examples and numerous exercises that range from straightforward to very challenging. Learning MATLAB is ideal for readers seeking a focused and brief approach to the software, rather than an encyclopedic one.
£30.67
Society for Industrial and Applied Mathematics Insight Through Computing: A MATLAB Introduction
Book Synopsis
This introduction to computer-based problem-solving using the MATLAB environment is highly recommended for students wishing to learn the concepts and develop the programming skills that are fundamental to computational science and engineering (CSE). Through a 'teaching by examples' approach, the authors pose strategically chosen problems to help first-time programmers learn these necessary concepts and skills. Each section formulates a problem and then introduces those new MATLAB language features that are necessary to solve it. This approach puts problem-solving and algorithmic thinking first and syntactical details second. Each solution is followed by a 'talking point' that concerns some related, larger issue associated with CSE. Collectively, the worked examples, talking points, and 300+ homework problems build intuition for the process of discretization and an appreciation for dimension, inexactitude, visualization, randomness, and complexity. This sets the stage for further cour…
£59.36
Saint Philip Street Press Control Theory Tutorial: Basic Concepts
Book Synopsis
£32.21
Taylor & Francis Ltd Hands-On Data Science for Librarians
Book Synopsis
Librarians understand the need to store, use and analyze data related to their collection, patrons and institution, and there has been consistent interest over the last 10 years in improving data management, analysis, and visualization skills within the profession. However, librarians find it difficult to move from out-of-the-box proprietary software applications to the skills necessary to perform the range of data science actions in code. This book focuses on teaching R through relevant examples and skills that librarians need in their day-to-day lives; it includes visualizations but goes much further to include web scraping, working with maps, creating interactive reports, machine learning, and others. While there's a place for theory, ethics, and statistical methods, librarians need a tool to help them acquire enough facility with R to utilize data science skills in their daily work, no matter what type of library they work at (academic, public or special). By walking through e…
Table of Contents
1. Introduction 2. Using RStudio’s IDE 3. Tidying data with dplyr 4. Visualizing your project with ggplot2 5. Webscraping with rvest 6. Mapping with tmap 7. Textual Analysis with tidytext 8. Creating Dynamic Documents with rmarkdown 9. Creating a flexdashboard 10. Creating an interactive dashboard with shiny 11. Using tidymodels to Understand Machine Learning 12. Conclusion Appendix A. Dependencies Appendix B. Additional Skills
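The table of contents above names dplyr and ggplot2; as a rough illustration of the kind of workflow the book teaches, here is a minimal R sketch (not taken from the book) that summarises a hypothetical circulation table and plots the totals. The circulation data frame and its columns are invented for the example.

```r
# Minimal sketch, not from the book: summarise an invented circulation table
# with dplyr and plot the result with ggplot2.
library(dplyr)
library(ggplot2)

circulation <- data.frame(
  branch    = rep(c("Main", "East", "West"), each = 4),
  checkouts = c(120, 95, 143, 110, 80, 77, 91, 88, 60, 72, 65, 70)
)

circulation %>%
  group_by(branch) %>%
  summarise(total = sum(checkouts)) %>%
  ggplot(aes(x = branch, y = total)) +
  geom_col() +
  labs(title = "Checkouts by branch", x = NULL, y = "Total checkouts")
```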
£52.24
Taylor & Francis Ltd Visualizing Surveys in R
Book Synopsis
For researchers who use surveys and are interested in harnessing the vast possibilities and flexibility of R for survey analysis and visualization. Also for psychologists, marketers, HR personnel, managers, and other professionals who wish to standardize or automate the process of visualizing survey data. Suitable as a course textbook.
Table of Contents
I Preparation. 1. Survey data. 2. Process. 3. Variables. 4. Categories. 5. Read data. 6. Parse values. 7. Validate data. 8. Pre-process data. 9. Build a dataset. 10. Basic statistics. 11. Create plots with ggplot2. 12. Save plots to files. 13. R Markdown. II Plotting. 14. Numeric plots. 15. Bar charts. 16. Percentage bars. 17. Diverging percentage bars. 18. Pie charts. 19. Lollipop plots. 20. Dot plots. 21. Heatmaps. 22. Geographic maps. 23. Missing value plots. 24. Validation plots.
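To give a flavour of the "Percentage bars" chapter listed above, here is a minimal ggplot2 sketch (not from the book) for an invented Likert item; the survey data are simulated, and the scales package, a ggplot2 dependency, supplies the percent labels.

```r
# Minimal sketch, not from the book: percentage bars for a simulated Likert item.
library(ggplot2)

set.seed(1)
survey <- data.frame(
  item     = "Q1: I find R useful",
  response = factor(sample(c("Disagree", "Neutral", "Agree"), 200, replace = TRUE,
                           prob = c(0.2, 0.3, 0.5)),
                    levels = c("Disagree", "Neutral", "Agree"))
)

ggplot(survey, aes(x = item, fill = response)) +
  geom_bar(position = "fill") +                    # stack responses to 100%
  scale_y_continuous(labels = scales::percent) +
  coord_flip() +
  labs(x = NULL, y = "Share of responses")
```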
£130.50
Taylor & Francis Ltd Modelling Survival Data in Medical Research
Book Synopsis
Modelling Survival Data in Medical Research, Fourth Edition, describes the analysis of survival data, illustrated using a wide range of examples from biomedical research. Written in a non-technical style, it concentrates on how the techniques are used in practice. Starting with standard methods for summarising survival data, Cox regression and parametric modelling, the book covers many more advanced techniques, including interval-censoring, frailty modelling, competing risks, analysis of multiple events, and dependent censoring. This new edition contains chapters on Bayesian survival analysis and use of the R software. Earlier chapters have been extensively revised and expanded to add new material on several topics. These include methods for assessing the predictive ability of a model, joint models for longitudinal and survival data, and modern methods for the analysis of interval-censored survival data.
Features: Presents an accessible account o…
Table of Contents
1. Survival analysis 2. Some non-parametric procedures 3. The Cox regression model 4. Model checking in the Cox regression model 5. Parametric regression models 6. Flexible parametric models 7. Model checking in parametric models 8. Time-dependent variables 9. Interval-censored survival data 10. Frailty models 11. Non-proportional hazards and institutional comparisons 12. Competing risks 13. Multiple events and event history modelling 14. Dependent censoring 15. Sample size requirements for a survival study 16. Bayesian survival analysis 17. Survival Analysis with R
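Chapter 17 above covers survival analysis with R; as a hedged illustration of the standard toolkit (not an excerpt from the book), the following sketch fits a Cox regression with the survival package, using its built-in lung data set.

```r
# Minimal sketch, not from the book: Kaplan-Meier curves and a Cox model
# with the survival package's bundled lung data.
library(survival)

km <- survfit(Surv(time, status) ~ sex, data = lung)
plot(km, xlab = "Days", ylab = "Survival probability")

fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
summary(fit)   # hazard ratios with confidence intervals and tests
```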
£73.14
Taylor & Francis Ltd Numerical Techniques in MATLAB
Book Synopsis
This book discusses various numerical methods in a comprehensive way. It delivers a mixture of theory, examples and MATLAB practice exercises to help students improve their skills. Worked examples teach MATLAB programming in a friendly style, and the corresponding MATLAB code is given at the end of each topic. Throughout the text, a balance between theory, examples and programming is maintained.
Key Features: methods are explained with examples and code; systems of equations are given full consideration; the use of MATLAB is demonstrated for every method. This book is suitable for graduate students in mathematics, computer science and engineering.
Table of Contents
1. Common Commands Used in MATLAB. 2. System of Linear Equations. 3. Polynomial Interpolation. 4. Root Finding Methods. 5. Numerical Integration. 6. Solution of Initial Value Problems. 7. Boundary Value Problems.
£87.39
Taylor & Francis Ltd Statistical Analysis of Questionnaires
Book Synopsis
Statistical Analysis of Questionnaires: A Unified Approach Based on R and Stata presents special statistical methods for analyzing data collected by questionnaires. The book takes an applied approach to testing and measurement tasks, mirroring the growing use of statistical methods and software in education, psychology, sociology, and other fields. It is suitable for graduate students in applied statistics and psychometrics and practitioners in education, health, and marketing. The book covers the foundations of classical test theory (CTT), test reliability, validity, and scaling as well as item response theory (IRT) fundamentals and IRT for dichotomous and polytomous items. The authors explore the latest IRT extensions, such as IRT models with covariates, multidimensional IRT models, IRT models for hierarchical and longitudinal data, and latent class IRT models. They also describe estimation methods and diagnostics, including graphi…
Trade Review
"This book follows a well established approach to the psychometric analysis of questionnaire data as found in educational, survey and medical research. The authors provide an in-depth discussion of the analysis of score reliability and item properties grounded in classical test theory (CTT), and of the probabilistic modeling of individual responses based on latent variable models. … Chapter 5 is a bit different and focuses on the estimation of item and person parameters and the diagnostics of IRT models. The first part is rather technical but it does a good job at describing the pros and cons of each technique–joint, conditional and marginal maximum likelihood–and how they could be implemented using custom software. … The authors conclude (…) by highlighting multidimensional IRT models which allow to relax the strong hypothesis of unidimensionality that is attached to all previous models, as well as the main strengths of structural equation models which can be viewed as providing the glue between factor analytic methods and IRT. Overall, the authors succeed at presenting a solid and reliable framework for psychometric analysis of questionnaire data."— Christophe Lalanne, Paris-Diderot University, in the Journal of Statistical Software, November 2017
Table of Contents
Preliminaries. Classical Test Theory. Item Response Theory Models for Dichotomous Items. Item Response Theory Models for Polytomous Items. Estimation Methods and Diagnostics. Some Extensions of Traditional Item Response Theory Models.
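As a small taste of the dichotomous IRT models covered above, here is a hedged R sketch (not from the book, which pairs R with Stata) that fits a Rasch model, assuming the ltm package and its bundled LSAT data.

```r
# Minimal sketch, not from the book: a Rasch model for five binary test items.
library(ltm)

fit <- rasch(LSAT)        # LSAT: 1000 respondents, 5 dichotomous items
summary(fit)              # item difficulty estimates
plot(fit, type = "ICC")   # item characteristic curves
```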
£41.79
CRC Press Data Analytics for Business Intelligence
Book Synopsis
This book studies data, analytics, and intelligence using Boolean structure. Chapters dive into the theories, foundations, technologies, and methods of data, analytics, and intelligence. The primary aim of this book is to convey the theories and technologies of data, analytics, and intelligence with applications to readers based on systematic generalization and specialization. Sun uses the Boolean structure to deconstruct all books and papers related to data, analytics, and intelligence and to reorganize them to reshape the world of big data, data analytics, analytics intelligence, data science, and artificial intelligence. Multi-industry applications in business, management, and decision-making are provided. Cutting-edge theories, technologies, and applications of data, analytics, and intelligence and their integration are also explored. Overall, this book provides original insights on sharing computing, insight computing, platform computing, a calculus of intelligent analyti…
£52.24
Taylor & Francis Ltd Forecasting and Analytics with the Augmented Dynamic Adaptive Model (ADAM)
Book Synopsis
Forecasting and Analytics with the Augmented Dynamic Adaptive Model (ADAM) focuses on a time series model in Single Source of Error state space form, called ADAM (Augmented Dynamic Adaptive Model). The book demonstrates a holistic view of forecasting and time series analysis using dynamic models, explaining how a variety of instruments can be used to solve real life problems. At the moment, there is no other tool in R or Python that can model both intermittent and regular demand, support both ETS and ARIMA, work with explanatory variables, deal with multiple seasonalities (e.g. for hourly demand data), support automatic selection of orders, components and variables, and provide tools for diagnostics and further improvement of the estimated model. ADAM can do all of that in one and the same framework. Given the rising interest in forecasting, ADAM, being able to do all those things, is a useful tool for data scientists, business analysts…
Table of Contents
1. Introduction 2. Forecasts evaluation 3. Time series components and simple forecasting methods 4. Introduction to ETS 5. Pure additive ADAM ETS 6. Pure multiplicative ADAM ETS 7. General ADAM ETS model 8. Introduction to ARIMA 9. ADAM ARIMA 10. Explanatory variables in ADAM 11. Estimation of ADAM 12. Multiple frequencies in ADAM 13. Intermittent State Space Model 14. Model diagnostics 15. Model selection and combinations in ADAM 16. Handling uncertainty in ADAM 17. Scale model for ADAM 18. Forecasting with ADAM 19. Forecasting functions of the smooth package 20. What’s next?
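The R implementation described above lives in the smooth package; the sketch below uses illustrative choices of model and horizon rather than an example from the book, and assumes a recent version of smooth that provides adam().

```r
# Minimal sketch, not from the book: an ADAM ETS fit with the smooth package.
library(smooth)

fit <- adam(AirPassengers, model = "MMM")   # multiplicative error/trend/season
summary(fit)
plot(forecast(fit, h = 12))                 # 12-step-ahead forecast with intervals
```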
£87.39
CRC Press Deep Learning Generalization
£46.54
Springer-Verlag New York Inc. An Introduction to Statistical Learning
Book Synopsis
An Introduction to Statistical Learning provides an accessible overview of the field of statistical learning, an essential toolset for making sense of the vast and complex data sets that have emerged in fields ranging from biology to finance to marketing to astrophysics in the past twenty years. This book presents some of the most important modeling and prediction techniques, along with relevant applications. Topics include linear regression, classification, resampling methods, shrinkage approaches, tree-based methods, support vector machines, clustering, deep learning, survival analysis, multiple testing, and more. Color graphics and real-world examples are used to illustrate the methods presented. Since the goal of this textbook is to facilitate the use of these statistical learning techniques by practitioners in science, industry, and other fields, each chapter contains a tutorial on implementing the analyses and methods presented in R, an extremely popular open source statistical software…
Table of Contents
Preface.- 1 Introduction.- 2 Statistical Learning.- 3 Linear Regression.- 4 Classification.- 5 Resampling Methods.- 6 Linear Model Selection and Regularization.- 7 Moving Beyond Linearity.- 8 Tree-Based Methods.- 9 Support Vector Machines.- 10 Deep Learning.- 11 Survival Analysis and Censored Data.- 12 Unsupervised Learning.- 13 Multiple Testing.- Index.
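In the spirit of the R tutorials mentioned above (though not copied from the book's labs), here is a minimal classification example assuming the ISLR2 companion package and its Default data set.

```r
# Minimal sketch, not from the book's labs: logistic regression for classification.
library(ISLR2)

fit <- glm(default ~ balance + income, data = Default, family = binomial)
summary(fit)

# Predicted probability of default for a hypothetical customer
predict(fit, newdata = data.frame(balance = 1500, income = 40000),
        type = "response")
```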
£67.49
Springer-Verlag New York Inc. An Introduction to Statistical Learning
Book Synopsis
Table of Contents
Preface.- 1 Introduction.- 2 Statistical Learning.- 3 Linear Regression.- 4 Classification.- 5 Resampling Methods.- 6 Linear Model Selection and Regularization.- 7 Moving Beyond Linearity.- 8 Tree-Based Methods.- 9 Support Vector Machines.- 10 Deep Learning.- 11 Survival Analysis and Censored Data.- 12 Unsupervised Learning.- 13 Multiple Testing.- Index.
£49.49
SAGE Publications Inc Student Study Guide With IBM® SPSS® Workbook for Research Methods, Statistics, and Applications
The third edition of the Student Study Guide With IBM® SPSS® Workbook for Research Methods, Statistics, and Applications by Kathrynn A. Adams and Eva K. McGuire gives students even more opportunities to practice and apply their knowledge in statistics and research methods. Written by the authors of Research Methods, Statistics, and Applications, the third edition of the study guide follows the third edition of the textbook for straightforward assigning and practice. New features include practice quizzes to give students both recognition and recall activities for better retention. Learning objectives and brief chapter summaries from the main text remind students of what they've learned and orient students toward the exercises. In-depth exercises encourage students to build on their knowledge, requiring students to think critically and actively engage with the material. These exercises have been condensed and focus on moving students through the learning objectives at a quick pace. At the end of most chapters, "Your Research" sections encourage students to apply concepts to their own projects. Now placed at the end of the book, the IBM® SPSS® workbook provides instructions for performing statistical calculations. Included in this workbook are additional exercises to practice data analysis and interpretation using the software. Answers to quizzes are listed immediately after each quiz in the book while answers to exercises are listed on the instructor resources website.
£55.10
SAGE Publications Inc Advanced Issues in Partial Least Squares Structural Equation Modeling
Book Synopsis
The Second Edition of Advanced Issues in Partial Least Squares Structural Equation Modeling offers a straightforward and practical guide to PLS-SEM for users ready to go further than the basics of A Primer on Partial Least Squares Structural Equation Modeling, Third Edition. Even in this advanced guide, the authors have limited the emphasis on equations, formulas, and Greek symbols, and instead rely on detailed explanations of the fundamentals of PLS-SEM and provide general guidelines for understanding and evaluating the results of applying the method. A single study on corporate reputation features as an example throughout the book, along with a single software package (SmartPLS 4.0) to provide a seamless learning experience. The approach of this book is based on the authors' many years of conducting research and teaching methodology courses, including developing the SmartPLS software. The pr…
Trade Review
"Excellent guide on how to use smart pls. Good starter product for understanding the underlying concepts." -- Saurabh Gupta
"Must have if you want to do PLS" -- Jason Xiong
Table of Contents
Chapter 1: An Overview of Recent and Emerging Developments in PLS-SEM Chapter 2: Higher-order Constructs Chapter 3: Advanced Modeling and Model Assessment Chapter 4: Advanced Results Illustration Chapter 5: Modeling Observed Heterogeneity Chapter 6: Modeling Unobserved Heterogeneity
£55.10
Cambridge University Press Statistics Using IBM SPSS Third Edition
Book Synopsis
Written in a clear and lively tone, Statistics Using IBM SPSS provides a data-centric approach to statistics with integrated SPSS (version 22) commands, ensuring that students gain both a deep conceptual understanding of statistics and practical facility with the leading statistical software package. With one hundred worked examples, the textbook guides students through statistical practice using real data and avoids complicated mathematics. Numerous end-of-chapter exercises allow students to apply and test their understanding of chapter topics, with detailed answers available online. The third edition has been updated throughout and includes a new chapter on research design, new topics (including weighted mean, resampling with the bootstrap, the role of the syntax file in workflow management, and regression to the mean) and new examples and exercises. Student learning is supported by a rich suite of online resources, including answers to end-of-chapter exercises, real data sets, PowerPoint…
Trade Review
'This is the third edition of a very popular and useful text. The focus is on using SPSS in the research process. The chapters have illustrative exercises and meaningful real data problem sets that not only make it convenient for teaching but also provide realistic experiences for students that will stay with them for many years. The book does a very good job presenting the challenge of data analysis and the experience of being a serious researcher looking at important problems; it illustrates how a variety of quantitative methods can be applied to real data to tease out and evaluate the inferences suggested by that data. I strongly recommend this book to instructors of a one- or two-semester introductory statistics course.' Robert W. Lissitz, University of Maryland
'This text by Weinberg and Abramowitz is an excellent choice for an undergraduate or introductory graduate course for non-majors. Stressing concepts over computation, it focuses on essential material for students in education and the social sciences. The book reads easily, like a set of well-constructed lectures that begin with simple fundamental concepts. Yet modern and relatively advanced topics, such as uses of the bootstrap, are also treated. Rather than focusing on hand calculations, the book integrates instruction on using SPSS directly into the text. This enables student exploration of actual research data sets, beginning in the first chapters.' James E. Corter, Columbia University
'This book covers a broad range of topics in introductory statistics, employing a hands-on, problem-based approach. The latest edition expands an already long list of topics to include bootstrap techniques and experimental design considerations. By providing detailed, worked-through examples based on real data and substantive research questions, the authors guide the student through the data analysis process from beginning to end. However, this is no 'cookbook' - each section builds on the concepts and techniques established previously, and the reader is encouraged to explore the nuances involved in effective statistical analysis. What is particularly unique about the authors' exposition is that it can be read on many levels; this book will serve well as a course textbook or as a handy reference for the applied researcher.' Marc A. Scott, New York University
Table of Contents
1. Introduction; 2. Examining univariate distributions; 3. Measures of location, spread, and skewness; 4. Re-expressing variables; 5. Exploring relationships between two variables; 6. Simple linear regression; 7. Probability fundamentals; 8. Theoretical probability models; 9. The role of sampling in inferential statistics; 10. Inferences involving the mean of a single population when σ is known; 11. Inferences involving the mean when σ is not known: one- and two-sample designs; 12. Research design: introduction and overview; 13. One-way analysis of variance; 14. Two-way analysis of variance; 15. Correlation and simple regression as inferential techniques; 16. An introduction to multiple regression; 17. Nonparametric methods.
£68.39
Cambridge University Press A Guide to MATLAB: For Beginners and Experienced Users
Book Synopsis
Now in its third edition, this outstanding textbook explains everything you need to get started using MATLAB. It contains concise explanations of essential MATLAB commands, as well as easily understood instructions for using MATLAB's programming features, graphical capabilities, simulation models, and rich desktop interface. MATLAB 8 and its new user interface is treated extensively in the book. New features in this edition include: a complete treatment of MATLAB's publish feature; new material on MATLAB graphics, enabling the user to master quickly the various symbolic and numerical plotting routines; and a robust presentation of MuPAD and how to use it as a stand-alone platform. The authors have also updated the text throughout, reworking examples and exploring new applications. The book is essential reading for beginners, occasional users and experienced users wishing to brush up their skills. Further resources are available from the authors' website at www-math.umd.edu/schol/a-gu…
Trade Review
Review of previous edition: 'Major highlights of the book are completely transparent examples of classical yet always intriguing mathematical, statistical, engineering, economics, and physics problems. In addition, the book explains a seamless use with Microsoft Word for integrating MATLAB® outputs with documents, reports, presentations, or other online processes. Advanced topics with examples include: Monte Carlo simulation, population dynamics, and linear programming. … an outstanding textbook, and, likewise, should be an integral part of the technical reference shelf for most IT professionals. It is a great resource for wherever MATLAB® is available!' ACM Ubiquity
Review of previous edition: 'This is a short, focused introduction to MATLAB®, a comprehensive software system for mathematical and technical computing. For the beginner it explains everything needed to start using MATLAB®, while experienced users ... will find much useful information here.' L'enseignement mathematique
Table of Contents
Preface; 1. Getting started; 2. MATLAB basics; 3. Interacting with MATLAB; Practice Set A. Algebra and arithmetic; 4. Beyond the basics; 5. MATLAB graphics; 6. MATLAB programming; 7. Publishing and M-books; Practice Set B. Math, graphics, and programming; 8. MuPAD; 9. Simulink; 10. GUIs; 11. Applications; Practice Set C. Developing your MATLAB skills; 12. Troubleshooting; Solutions to the practice sets; Glossary; Index.
£48.99
Cambridge University Press The Design and Statistical Analysis of Animal Experiments
Book Synopsis
This is the first book to provide life scientists with a practical guide to using experimental design and statistics when running animal experiments. The chapters cover a range of design types and analysis techniques employed by practitioners, using non-mathematical terms and drawing on real-life examples.
Trade Review
'At last, a readable statistics book focusing solely on preclinical experimental designs, data and its analysis that should form part of an in-vivo scientist's personal library. The author's unique insight into the statistical needs of preclinical scientists has allowed them to compile a non-technical guide that can facilitate sound experimental design, meaningful data analysis and appropriate scientific conclusions. I would also encourage all readers to download and explore 'InVivoStat', a powerful software package that both my group and I use on a daily basis.' Darrel J. Pemberton, Janssen Research and Development
'This book provides an indispensable reference for any in-vivo scientist. It addresses common pitfalls in animal experiments and provides tangible advice to address sources of bias, thus increasing the robustness of the data. … The text links experimental design and statistical analysis in a practical way, easily accessible without any prior statistical knowledge. The statistical concepts are described in plain English, avoiding overuse of mathematical formulas and illustrated with numerous examples relevant to biomedical scientists. … This book will help scientists improve the design of animal experiments and give them the confidence to use more complex designs, enabling more efficient use of animals and reducing the number of experimental animals needed overall.' Nathalie Percie du Sert, National Centre for the Replacement, Refinement and Reduction of Animals in Research
'This book will transform the way biomedical scientists plan their work and interpret their results. Although the subject matter covers complex points, it is easy to read and packed with relevant examples. There are two particularly striking features. First, at no point do the authors resort to mathematical equations as a substitute for explaining the concepts. Secondly, they explain why the choice of experimental design is so important, why the design affects the statistical analysis and how to ensure the choice of the most appropriate statistical test. The final section describes how to use InVivoStat (a software package, assembled by the authors), which enables researchers to put into practice all the points covered in this book. This is an invaluable combination of resources that should be within easy reach of anyone carrying out experiments in the biomedical sciences, especially if their work involves using live animals.' Clare Stanford, University College London
Table of Contents
Preface; Acknowledgements; 1. Introduction; 2. Statistical concepts; 3. Experimental design; 4. Randomisation; 5. Statistical analysis; 6. Analysis using InVivoStat; 7. Conclusion; Glossary; References; Index.
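Randomisation (Chapter 4 above) is one design step that is easy to show in code; the following R sketch is an illustration of simple random allocation, not a procedure taken from the book or from InVivoStat, and the animal and group labels are invented.

```r
# Minimal sketch, not from the book: randomly allocate 24 animals to three groups.
set.seed(42)                      # reproducible allocation
animals    <- paste0("animal_", 1:24)
treatments <- rep(c("control", "low_dose", "high_dose"), each = 8)

allocation <- data.frame(animal = animals, treatment = sample(treatments))
table(allocation$treatment)       # confirms balanced group sizes
```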
£47.49
John Wiley & Sons Inc TI-Nspire For Dummies
Book Synopsis
The updated guide to the newest graphing calculator from Texas Instruments. The TI-Nspire graphing calculator is popular among high school and college students as a valuable tool for calculus, AP calculus, and college-level algebra courses. Its use is allowed on the major college entrance exams.
Table of Contents
Introduction 1 Part I: Getting to Know Your TI-Nspire Handheld 9 Chapter 1: Using TI-Nspire for the First Time 11 Chapter 2: Understanding the Document Structure 25 Chapter 3: Creating and Editing Documents 37 Chapter 4: Linking Handhelds 47 Part II: The Calculator Application 51 Chapter 5: Entering and Evaluating Expressions 53 Chapter 6: Working with Variables 69 Chapter 7: Using the Calculator Application with Other Applications 77 Chapter 8: Using the Calculator Application with TI-Nspire CAS 85 Part III: The Graphs Application 99 Chapter 9: Working with Graphs 101 Chapter 10: Using the Graphs Application to Do Calculus 131 Part IV: The Geometry Application 135 Chapter 11: Working with Geometric Objects 137 Chapter 12: Using an Analytic Window in the Geometry Application 159 Part V: The Lists & Spreadsheet Application 165 Chapter 13: Applying What You Already Know about Spreadsheets 167 Chapter 14: Working with Data 177 Chapter 15: Constructing Scatter Plots and Performing Regressions 189 Chapter 16: Manual and Automatic Data Capture 201 Part VI: The Data & Statistics and Vernier DataQuest Applications 209 Chapter 17: Constructing Statistical Graphs 211 Chapter 18: Working with Single-Variable Data 215 Chapter 19: Working with Two-Variable Data 227 Chapter 20: Data Collection 237 Part VII: The Notes Application 249 Chapter 21: The Why and How of Using Notes 251 Chapter 22: Taking Notes to a Whole New Level 255 Part VIII: TI-Nspire Computer Software 261 Chapter 23: Getting Started with TI-Nspire Computer Software 263 Chapter 24: File Creation and Display in Documents Workspace 271 Chapter 25: File Management with Content Workspace 287 Part IX: The Part of Tens 295 Chapter 26: Ten Great Tips and Shortcuts 297 Chapter 27: Ten Common Problems Resolved 305 Appendix A: Safeguarding in Press-to-Test Mode 311 Appendix B: Basic Programming 315 Appendix C: Working with Libraries 331 Index 337
£16.14
John Wiley & Sons Inc Analysis of Biomarker Data
Book Synopsis
A how-to guide for applying statistical methods to biomarker data analysis. Presenting a solid foundation for the statistical methods that are used to analyze biomarker data, Analysis of Biomarker Data: A Practical Guide features preferred techniques for biomarker validation. The authors provide descriptions of select elementary statistical methods that are traditionally used to analyze biomarker data with a focus on the proper application of each method, including necessary assumptions, software recommendations, and proper interpretation of computer output. In addition, the book discusses frequently encountered challenges in analyzing biomarker data and how to deal with them, methods for the quality assessment of biomarkers, and biomarker study designs. Covering a broad range of statistical methods that have been used to analyze biomarker data in published research studies, Analysis of Biomarker Data: A Practical Guide also features: A…
Table of Contents
Preface xiii Acknowledgements xvii 1 Introduction 1 1.1 What is a Biomarker? 1 1.2 Biomarkers Versus Surrogate Endpoints 2 1.3 Organization of This Book 3 2 Designing Biomarker Studies 5 2.1 Introduction 5 2.2 Designing the Study 6 2.2.1 The Exposure–Disease Association 6 2.2.2 Cross-sectional Studies 7 2.2.3 Case–Control Studies 7 2.2.4 Retrospective Cohort Studies 9 2.2.5 Prospective Cohort Studies 9 2.2.6 Observational Studies 10 2.2.7 Randomized Controlled Trials 11 2.3 Designing the Analysis 13 2.3.1 Choosing the Appropriate Measure of Association 15 2.3.1.1 Odds Ratio versus Risk Ratio 15 2.3.1.2 Consequences of Not Choosing the Appropriate Measure of Association 16 2.3.2 Choosing the Appropriate Statistical Analysis 16 2.3.3 Choosing the Appropriate Sample Size 17 2.4 Presenting Statistical Results 18 Problems 20 3 Elementary Statistical Methods for Analyzing Biomarker Data 21 3.1 Introduction 21 3.2 Graphical and Tabular Summaries 21 3.3 Descriptive Statistics 26 3.4 Describing the Shape of Distributions 31 3.5 Sampling Distributions 33 3.6 Introduction to Statistical Inference 34 3.6.1 Point Estimation and Confidence Interval Estimation 34 3.6.2 Hypothesis Testing 38 3.7 Comparing Means Across Groups 43 3.7.1 Two Group Comparisons 44 3.7.2 Multiple-Group Comparisons 45 3.8 Correlation Analysis 50 3.9 Regression Analysis 52 3.9.1 Simple Linear Regression 52 3.9.2 Multiple Regression 55 3.9.3 Analysis of Covariance 58 3.10 Analyzing Cross-Classified Data 61 3.10.1 Testing for Independence 61 3.10.2 Comparison of Proportions 65 Problems 69 4 Frequently Encountered Challenges in Analyzing Biomarker Data and How to Deal with Them 72 4.1 Introduction 72 4.2 Non-Normally Distributed Data 73 4.2.1 The Effects of Non-Normality 73 4.2.2 Testing Distributional Assumptions 74 4.2.2.1 Graphical Methods for Assessing Normality 74 4.2.2.2 Measures of Skewness and Kurtosis 81 4.2.2.3 Formal Hypothesis Tests of the Normality Assumption 83 4.2.3 Remedial Measures for Violation of a Distributional Assumption 86 4.2.3.1 Choosing a Transformation 86 4.2.3.2 Using a Robust Statistical Procedure 92 4.2.3.3 Distribution-Free Alternatives 93 4.3 Heterogeneity of Variance 113 4.3.1 The Effects of Heterogeneity 113 4.3.2 The Importance of Heterogeneity in the Comparison of Means 113 4.3.2.1 Comparisons of Two Groups 113 4.3.2.2 Comparisons of More Than Two Groups 116 4.3.2.3 Multiple Comparisons 118 4.4 Dependent Groups 122 4.4.1 The Consequences of Ignoring Dependence Among Groups 122 4.4.2 Comparing Two Dependent Means 124 4.4.2.1 Paired t-test
124 4.4.2.2 Wilcoxon Signed Ranks Test 127 4.4.2.3 Sign Test 128 4.4.3 Tests of Dependent Proportions 134 4.4.3.1 McNemar’s Test 134 4.4.3.2 Cochran’s Q test 138 4.4.3.3 Sample Size and Power Considerations 142 4.5 Correlated Outcomes 144 4.5.1 Choosing the Appropriate Measure of Association 144 4.5.1.1 Spearman’s rho 144 4.5.1.2 Kendall’s tau-b 146 4.5.2 Recommended Methods of Statistical Analysis for Correlation Coefficients 148 4.5.3 Recommended Methods for Interpreting Correlation Coefficient Results 156 4.5.4 Sample Size Issues in Correlation Analysis 157 4.5.5 Comparison of Correlation Coefficients 171 4.5.5.1 Comparison of Independent Correlation Coefficients 172 4.5.5.2 Comparison of Dependent Correlation Coefficients 174 4.5.6 Sample Size Issues When Comparing Two Correlation Coefficients 181 4.5.6.1 Sample Size Issues When Comparing Independent Correlation Coefficients 181 4.5.6.2 Sample Size Issues When Comparing Dependent Correlation Coefficients 183 4.6 Clustered Data 184 4.7 Outliers 199 4.7.1 The Effects of Outliers 199 4.7.2 Detection of Outliers 199 4.7.3 Methods for Accommodating Outliers 207 4.8 Limits of Detection and Non-Detected Observations 208 4.8.1 Statistical Inference When NDs Are Present 210 4.8.2 Maximum Likelihood Estimation of a Correlation Coefficient When Both X and Y Are Subject to Non-Detects 210 4.8.3 Comparison of Confidence Interval Methods for Correlation Coefficients When Both Variables Are Subject to Limits of Detection 212 4.9 The Analysis of Cross-Classified Categorical Data 221 4.9.1 Choosing the Appropriate Measure of Association 221 4.9.1.1 The Odds Ratio 221 4.9.1.2 Risk Ratio 223 4.9.1.3 Risk Difference 224 4.9.1.4 Odds Ratio for Paired Data 225 4.9.2 Choosing the Appropriate Statistical Analysis 225 4.9.3 Choosing the Appropriate Sample Size 226 4.9.4 Choosing a Statistical Method When Both the Predictor and the Outcome Are Dichotomous 226 4.9.4.1 Comparing Two Independent Groups in Terms of a Binomial Proportion 226 4.9.4.2 Exact Test for Independence of Rows and Columns in a 2 × 2 Table 230 4.9.4.3 Exact Inference for Odds Ratios 232 4.9.4.4 Inference for the Odds Ratio for Paired Data 234 4.9.5 Choice of a Statistical Method When the Predictor is Ordinal and the Outcome is Dichotomous 237 4.9.5.1 Tests for a Significant Trend in Proportions 237 4.9.6 Choice of a Statistical Method When Both the Predictor and the Outcome are Ordinal 240 4.9.6.1 Test for Linear-by-Linear Association 240 4.9.7 Choice of a Statistical Method When Both the Predictor and the Outcome are Nominal 243 4.9.7.1 Fisher–Freeman–Halton Test 243 Problems 246 5 Validation of Biomarkers 255 5.1 Overview of Methods for Assessing Characteristics of Biomarkers 255 5.2 General Description of Measures of Agreement 257 5.2.1 Discrete Variables 257 5.2.1.1 Cohen’s Kappa 257 5.2.1.2 Extensions of Coefficient Kappa 265 5.2.1.3 Weighted Kappa 273 5.2.2 Continuous Variables 275 5.2.2.1 Pearson’s Correlation Coefficient 275 5.2.2.2 Alternatives to Pearson’s Correlation Coefficient 277 5.3 Assessing Reliability of a Biomarker 287 5.3.1 General Considerations 287 5.3.2 Assessing Reliability of a Dichotomous Biomarker 287 5.3.2.1 Dichotomous Biomarker, More Than Two Raters 289 5.3.3 Assessing Reliability of a Continuous Biomarker 291 5.3.4 Assessing Inter-Subject, Intra-Subject, and Analytical Measurement Variability 292 5.4 Assessing Validity 294 5.4.1 General Considerations 294 5.4.2 Assessing Validity When a Gold Standard is Available 295 5.4.2.1 Dichotomous Biomarkers 295 5.4.2.2 
Comparing Several Dichotomous Biomarkers 302 5.4.2.3 Continuous Biomarkers 304 5.4.3 Assessing Validity When a Gold Standard is Not Available 314 5.4.3.1 Dichotomous Biomarkers 315 5.4.3.2 Continuous Biomarkers 319 5.4.4 Assessing Criterion Validity in Method Comparison Studies 328 5.4.5 Assessing Construct Validity in Method Comparison Studies 329 Problems 329 References 332 Solutions to Problems 348 Index 391
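As a hedged illustration of the distribution-free alternatives listed in Chapter 4 (the book itself does not prescribe a particular software package here), this R sketch compares a simulated, skewed biomarker between two groups with a Wilcoxon rank-sum test; the data are invented.

```r
# Minimal sketch, not from the book: Wilcoxon rank-sum test on simulated data.
set.seed(1)
group  <- rep(c("case", "control"), each = 30)
marker <- c(rlnorm(30, meanlog = 1.2), rlnorm(30, meanlog = 1.0))  # skewed values

boxplot(marker ~ group, ylab = "Biomarker level")
wilcox.test(marker ~ group)   # distribution-free two-group comparison
```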
£99.86
John Wiley & Sons Inc Basic Data Analysis for Time Series with R
Book Synopsis
Written at a readily accessible level, Basic Data Analysis for Time Series with R emphasizes the mathematical importance of collaborative analysis of data used to collect increments of time or space.
Table of Contents
PREFACE xv ACKNOWLEDGMENTS xvii PART I BASIC CORRELATION STRUCTURES 1 R Basics 3 1.1 Getting Started, 3 1.2 Special R Conventions, 5 1.3 Common Structures, 5 1.4 Common Functions, 6 1.5 Time Series Functions, 6 1.6 Importing Data, 7 Exercises, 7 2 Review of Regression and More About R 8 2.1 Goals of this Chapter, 8 2.2 The Simple(ST) Regression Model, 8 2.2.1 Ordinary Least Squares, 8 2.2.2 Properties of OLS Estimates, 9 2.2.3 Matrix Representation of the Problem, 9 2.3 Simulating the Data from a Model and Estimating the Model Parameters in R, 9 2.3.1 Simulating Data, 9 2.3.2 Estimating the Model Parameters in R, 9 2.4 Basic Inference for the Model, 12 2.5 Residuals Analysis—What Can Go Wrong…, 13 2.6 Matrix Manipulation in R, 15 2.6.1 Introduction, 15 2.6.2 OLS the Hard Way, 15 2.6.3 Some Other Matrix Commands, 16 Exercises, 16 3 The Modeling Approach Taken in this Book and Some Examples of Typical Serially Correlated Data 18 3.1 Signal and Noise, 18 3.2 Time Series Data, 19 3.3 Simple Regression in the Framework, 20 3.4 Real Data and Simulated Data, 20 3.5 The Diversity of Time Series Data, 21 3.6 Getting Data Into R, 24 3.6.1 Overview, 24 3.6.2 The Diskette and the scan() and ts() Functions—New York City Temperatures, 25 3.6.3 The Diskette and the read.table() Function—The Semmelweis Data, 25 3.6.4 Cut and Paste Data to a Text Editor, 26 Exercises, 26 4 Some Comments on Assumptions 28 4.1 Introduction, 28 4.2 The Normality Assumption, 29 4.2.1 Right Skew, 30 4.2.2 Left Skew, 30 4.2.3 Heavy Tails, 30 4.3 Equal Variance, 31 4.3.1 Two-Sample t-Test, 31 4.3.2 Regression, 31 4.4 Independence, 31 4.5 Power of Logarithmic Transformations Illustrated, 32 4.6 Summary, 34 Exercises, 34 5 The Autocorrelation Function And AR(1), AR(2) Models 35 5.1 Standard Models—What are the Alternatives to White Noise?, 35 5.2 Autocovariance and Autocorrelation, 36 5.2.1 Stationarity, 36 5.2.2 A Note About Conditions, 36 5.2.3 Properties of Autocovariance, 36 5.2.4 White Noise, 37 5.2.5 Estimation of the Autocovariance and Autocorrelation, 37 5.3 The acf() Function in R, 37 5.3.1 Background, 37 5.3.2 The Basic Code for Estimating the Autocovariance, 38 5.4 The First Alternative to White Noise: Autoregressive Errors—AR(1), AR(2), 40 5.4.1 Definition of the AR(1) and AR(2) Models, 40 5.4.2 Some Preliminary Facts, 40 5.4.3 The AR(1) Model Autocorrelation and Autocovariance, 41 5.4.4 Using Correlation and Scatterplots to Illustrate the AR(1) Model, 41 5.4.5 The AR(2) Model Autocorrelation and Autocovariance, 41 5.4.6 Simulating Data for AR(m) Models, 42 5.4.7 Examples of Stable and Unstable AR(1) Models, 44 5.4.8 Examples of Stable and Unstable AR(2) Models, 46 Exercises, 49 6 The Moving Average Models MA(1) And MA(2) 51 6.1 The Moving Average Model, 51 6.2 The Autocorrelation for MA(1) Models, 51 6.3 A Duality Between MA(l) And AR(m) Models, 52 6.4 The Autocorrelation for MA(2) Models, 52 6.5 Simulated Examples of the MA(1) Model, 52 6.6 Simulated Examples of the MA(2) Model, 54 6.7 AR(m) and MA(l) model acf() Plots, 54 Exercises, 57 PART II ANALYSIS OF PERIODIC DATA AND MODEL SELECTION 7 Review of Transcendental Functions and Complex Numbers 61 7.1 Background, 61 7.2 Complex Arithmetic, 62 7.2.1 The Number i, 62 7.2.2 Complex Conjugates, 62 7.2.3 The Magnitude of a Complex Number, 62 7.3
Some Important Series, 63 7.3.1 The Geometric and Some Transcendental Series, 63 7.3.2 A Rationale for Euler’s Formula, 63 7.4 Useful Facts About Periodic Transcendental Functions, 64 Exercises, 64 8 The Power Spectrum and the Periodogram 65 8.1 Introduction, 65 8.2 A Definition and a Simplified Form for p(f ), 66 8.3 Inverting p(f ) to Recover the Ck Values, 66 8.4 The Power Spectrum for Some Familiar Models, 68 8.4.1 White Noise, 68 8.4.2 The Spectrum for AR(1) Models, 68 8.4.3 The Spectrum for AR(2) Models, 70 8.5 The Periodogram, a Closer Look, 72 8.5.1 Why is the Periodogram Useful?, 72 8.5.2 Some Na¨ýve Code for a Periodogram, 72 8.5.3 An Example—The Sunspot Data, 74 8.6 The Function spec.pgram() in R, 75 Exercises, 77 9 Smoothers, The Bias-Variance Tradeoff, and the Smoothed Periodogram 79 9.1 Why is Smoothing Required?, 79 9.2 Smoothing, Bias, and Variance, 79 9.3 Smoothers Used in R, 80 9.3.1 The R Function lowess(), 81 9.3.2 The R Function smooth.spline(), 82 9.3.3 Kernel Smoothers in spec.pgram(), 83 9.4 Smoothing the Periodogram for a Series With a Known and Unknown Period, 85 9.4.1 Period Known, 85 9.4.2 Period Unknown, 86 9.5 Summary, 87 Exercises, 87 10 A Regression Model for Periodic Data 89 10.1 The Model, 89 10.2 An Example: The NYC Temperature Data, 91 10.2.1 Fitting a Periodic Function, 91 10.2.2 An Outlier, 92 10.2.3 Refitting the Model with the Outlier Corrected, 92 10.3 Complications 1: CO2 Data, 93 10.4 Complications 2: Sunspot Numbers, 94 10.5 Complications 3: Accidental Deaths, 96 10.6 Summary, 96 Exercises, 96 11 Model Selection and Cross-Validation 98 11.1 Background, 98 11.2 Hypothesis Tests in Simple Regression, 99 11.3 A More General Setting for Likelihood Ratio Tests, 101 11.4 A Subtlety Different Situation, 104 11.5 Information Criteria, 106 11.6 Cross-validation (Data Splitting): NYC Temperatures, 108 11.6.1 Explained Variation, R2, 108 11.6.2 Data Splitting, 108 11.6.3 Leave-One-Out Cross-Validation, 110 11.6.4 AIC as Leave-One-Out Cross-Validation, 112 11.7 Summary, 112 Exercises, 113 12 Fitting Fourier series 115 12.1 Introduction: More Complex Periodic Models, 115 12.2 More Complex Periodic Behavior: Accidental Deaths, 116 12.2.1 Fourier Series Structure, 116 12.2.2 R Code for Fitting Large Fourier Series, 116 12.2.3 Model Selection with AIC, 117 12.2.4 Model Selection with Likelihood Ratio Tests, 118 12.2.5 Data Splitting, 119 12.2.6 Accidental Deaths—Some Comment on Periodic Data, 120 12.3 The Boise River Flow data, 121 12.3.1 The Data, 121 12.3.2 Model Selection with AIC, 122 12.3.3 Data Splitting, 123 12.3.4 The Residuals, 123 12.4 Where Do We Go from Here?, 124 Exercises, 124 13 Adjusting for AR(1) Correlation in Complex Models 125 13.1 Introduction, 125 13.2 The Two-Sample t-Test—UNCUT and Patch-Cut Forest, 125 13.2.1 The Sleuth Data and the Question of Interest, 125 13.2.2 A Simple Adjustment for t-Tests When the Residuals Are AR(1), 128 13.2.3 A Simulation Example, 129 13.2.4 Analysis of the Sleuth Data, 131 13.3 The Second Sleuth Case—Global Warming, A Simple Regression, 132 13.3.1 The Data and the Question, 132 13.3.2 Filtering to Produce (Quasi-)Independent Observations, 133 13.3.3 Simulated Example—Regression, 134 13.3.4 Analysis of the Regression Case, 135 13.3.5 The Filtering Approach for the Logging Case, 136 13.3.6 A Few Comments on Filtering, 137 13.4 The Semmelweis Intervention, 138 13.4.1 The Data, 138 13.4.2 Why Serial Correlation?, 139 13.4.3 How This Data Differs from the Patch/Uncut Case, 139 13.4.4 Filtered Analysis, 140 13.4.5 
Transformations and Inference, 142 13.5 The NYC Temperatures (Adjusted), 142 13.5.1 The Data and Prediction Intervals, 142 13.5.2 The AR(1) Prediction Model, 144 13.5.3 A Simulation to Evaluate These Formulas, 144 13.5.4 Application to NYC Data, 146 13.6 The Boise River Flow Data: Model Selection With Filtering, 147 13.6.1 The Revised Model Selection Problem, 147 13.6.2 Comments on R2 and R2 pred, 147 13.6.3 Model Selection After Filtering with a Matrix, 148 13.7 Implications of AR(1) Adjustments and the “Skip” Method, 151 13.7.1 Adjustments for AR(1) Autocorrelation, 151 13.7.2 Impact of Serial Correlation on p-Values, 152 13.7.3 The “skip” Method, 152 13.8 Summary, 152 Exercises, 153 PART III COMPLEX TEMPORAL STRUCTURES 14 The Backshift Operator, the Impulse Response Function, and General ARMA Models 159 14.1 The General ARMA Model, 159 14.1.1 The Mathematical Formulation, 159 14.1.2 The arima.sim() Function in R Revisited, 159 14.1.3 Examples of ARMA(m,l) Models, 160 14.2 The Backshift (Shift, Lag) Operator, 161 14.2.1 Definition of B, 161 14.2.2 The Stationary Conditions for a General AR(m) Model, 161 14.2.3 ARMA(m,l) Models and the Backshift Operator, 162 14.2.4 More Examples of ARMA(m,l) Models, 162 14.3 The Impulse Response Operator—Intuition, 164 14.4 Impulse Response Operator, g(B)—Computation, 165 14.4.1 Definition of g(B), 165 14.4.2 Computing the Coefficients, gj., 165 14.4.3 Plotting an Impulse Response Function, 166 14.5 Interpretation and Utility of the Impulse Response Function, 167 Exercises, 167 15 The Yule–Walker Equations and the Partial Autocorrelation Function 169 15.1 Background, 169 15.2 Autocovariance of an ARMA(m,l) Model, 169 15.2.1 A Preliminary Result, 169 15.2.2 The Autocovariance Function for ARMA(m,l) Models, 170 15.3 AR(m) and the Yule–Walker Equations, 170 15.3.1 The Equations, 170 15.3.2 The R Function ar.yw() with an AR(3) Example, 171 15.3.3 Information Criteria-Based Model Selection Using ar.yw(), 173 15.4 The Partial Autocorrelation Plot, 174 15.4.1 A Sequence of Hypothesis Tests, 174 15.4.2 The pacf() Function—Hypothesis Tests Presented in a Plot, 174 15.5 The Spectrum For Arma Processes, 175 15.6 Summary, 177 Exercises, 178 16 Modeling Philosophy and Complete Examples 180 16.1 Modeling Overview, 180 16.1.1 The Algorithm, 180 16.1.2 The Underlying Assumption, 180 16.1.3 An Example Using an AR(m) Filter to Model MA(3), 181 16.1.4 Generalizing the “Skip” Method, 184 16.2 A Complex Periodic Model—Monthly River Flows, Furnas 1931–1978, 185 16.2.1 The Data, 185 16.2.2 A Saturated Model, 186 16.2.3 Building an AR(m) Filtering Matrix, 187 16.2.4 Model Selection, 189 16.2.5 Predictions and Prediction Intervals for an AR(3) Model, 190 16.2.6 Data Splitting, 191 16.2.7 Model Selection Based on a Validation Set, 192 16.3 A Modeling Example—Trend and Periodicity: CO2 Levels at Mauna Lau, 193 16.3.1 The Saturated Model and Filter, 193 16.3.2 Model Selection, 194 16.3.3 How Well Does the Model Fit the Data?, 197 16.4 Modeling Periodicity with a Possible Intervention—Two Examples, 198 16.4.1 The General Structure, 198 16.4.2 Directory Assistance, 199 16.4.3 Ozone Levels in Los Angeles, 202 16.5 Periodic Models: Monthly, Weekly, and Daily Averages, 205 16.6 Summary, 207 Exercises, 207 PART IV SOME DETAILED AND COMPLETE EXAMPLES 17 Wolf’s Sunspot Number Data 213 17.1 Background, 213 17.2 Unknown Period ⇒ Nonlinear Model, 214 17.3 The Function nls() in R, 214 17.4 Determining the Period, 216 17.5 Instability in the Mean, Amplitude, and Period, 217 17.6 Data Splitting 
for Prediction, 220 17.6.1 The Approach, 220 17.6.2 Step 1—Fitting One Step Ahead, 222 17.6.3 The AR Correction, 222 17.6.4 Putting it All Together, 223 17.6.5 Model Selection, 223 17.6.6 Predictions Two Steps Ahead, 224 17.7 Summary, 226 Exercises, 226 18 An Analysis of Some Prostate and Breast Cancer Data 228 18.1 Background, 228 18.2 The First Data Set, 229 18.3 The Second Data Set, 232 18.3.1 Background and Questions, 232 18.3.2 Outline of the Statistical Analysis, 233 18.3.3 Looking at the Data, 233 18.3.4 Examining the Residuals for AR(m) Structure, 235 18.3.5 Regression Analysis with Filtered Data, 238 Exercises, 243 19 Christopher Tennant/Ben Crosby Watershed Data 245 19.1 Background and Question, 245 19.2 Looking at the Data and Fitting Fourier Series, 246 19.2.1 The Structure of the Data, 246 19.2.2 Fourier Series Fits to the Data, 246 19.2.3 Connecting Patterns in Data to Physical Processes, 246 19.3 Averaging Data, 248 19.4 Results, 250 Exercises, 250 20 Vostok Ice Core Data 251 20.1 Source of the Data, 251 20.2 Background, 252 20.3 Alignment, 253 20.3.1 Need for Alignment, and Possible Issues Resulting from Alignment, 253 20.3.2 Is the Pattern in the Temperature Data Maintained?, 254 20.3.3 Are the Dates Closely Matched?, 254 20.3.4 Are the Times Equally Spaced?, 255 20.4 A Na¨ýve Analysis, 256 20.4.1 A Saturated Model, 256 20.4.2 Model Selection, 258 20.4.3 The Association Between CO2 and Temperature Change, 258 20.5 A Related Simulation, 259 20.5.1 The Model and the Question of Interest, 259 20.5.2 Simulation Code in R, 260 20.5.3 A Model Using all of the Simulated Data, 261 20.5.4 A Model Using a Sample of 283 from the Simulated Data, 262 20.6 An AR(1) Model for Irregular Spacing, 265 20.6.1 Motivation, 265 20.6.2 Method, 266 20.6.3 Results, 266 20.6.4 Sensitivity Analysis, 267 20.6.5 A Final Analysis, Well Not Quite, 268 20.7 Summary, 269 Exercises, 270 Appendix A Using Datamarket 273 A.1 Overview, 273 A.2 Loading a Time Series in Datamarket, 277 A.3 Respecting Datamarket Licensing Agreements, 280 Appendix B AIC is PRESS! 281 B.1 Introduction, 281 B.2 PRESS, 281 B.3 Connection to Akaike’s Result, 282 B.4 Normalization and R2, 282 B.5 An example, 283 B.6 Conclusion and Further Comments, 283 Appendix C A 15-Minute Tutorial on Nonlinear Optimization 284 C.1 Introduction, 284 C.2 Newton’s Method for One-Dimensional Nonlinear Optimization, 284 C.3 A Sequence of Directions, Step Sizes, and a Stopping Rule, 285 C.4 What Could Go Wrong?, 285 C.5 Generalizing the Optimization Problem, 286 C.6 What Could Go Wrong—Revisited, 286 C.7 What Can be Done?, 287 REFERENCES 291 INDEX 293
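Chapters 5 and 15 above revolve around AR models, acf() and the Yule-Walker equations; the base-R sketch below uses simulated data rather than the book's diskette examples and simply shows the corresponding calls.

```r
# Minimal sketch, not from the book: simulate an AR(1) series and inspect it.
set.seed(123)
y <- arima.sim(model = list(ar = 0.7), n = 200)

plot(y, ylab = "Simulated AR(1) series")
acf(y)                     # sample autocorrelation function (Chapter 5)
ar.yw(y, order.max = 5)    # Yule-Walker estimation (Chapter 15)
```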
£92.66
John Wiley & Sons Inc Nonlinear Parameter Optimization Using R Tools
Book Synopsis
Nonlinear Parameter Optimization Using R, John C. Nash…
Trade Review
"The book chapters are enriched by little anecdotes, and the reader obviously benefits from John C. Nash's experience of more than 30 years in the field of nonlinear optimization. This experience translates into many practical recommendations and tweaks. The book provides plenty of code examples and useful code snippets." (Biometrical Journal, 2016)
Table of Contents
Preface xv 1 Optimization problem tasks and how they arise 1 1.1 The general optimization problem 1 1.2 Why the general problem is generally uninteresting 2 1.3 (Non-)Linearity 4 1.4 Objective function properties 4 1.4.1 Sums of squares 4 1.4.2 Minimax approximation 5 1.4.3 Problems with multiple minima 5 1.4.4 Objectives that can only be imprecisely computed 5 1.5 Constraint types 5 1.6 Solving sets of equations 6 1.7 Conditions for optimality 7 1.8 Other classifications 7 References 8 2 Optimization algorithms – an overview 9 2.1 Methods that use the gradient 9 2.2 Newton-like methods 12 2.3 The promise of Newton’s method 13 2.4 Caution: convergence versus termination 14 2.5 Difficulties with Newton’s method 14 2.6 Least squares: Gauss–Newton methods 15 2.7 Quasi-Newton or variable metric method 17 2.8 Conjugate gradient and related methods 18 2.9 Other gradient methods 19 2.10 Derivative-free methods 19 2.10.1 Numerical approximation of gradients 19 2.10.2 Approximate and descend 19 2.10.3 Heuristic search 20 2.11 Stochastic methods 20 2.12 Constraint-based methods – mathematical programming 21 References 22 3 Software structure and interfaces 25 3.1 Perspective 25 3.2 Issues of choice 26 3.3 Software issues 27 3.4 Specifying the objective and constraints to the optimizer 28 3.5 Communicating exogenous data to problem definition functions 28 3.5.1 Use of “global” data and variables 31 3.6 Masked (temporarily fixed) optimization parameters 32 3.7 Dealing with inadmissible results 33 3.8 Providing derivatives for functions 34 3.9 Derivative approximations when there are constraints 36 3.10 Scaling of parameters and function 36 3.11 Normal ending of computations 36 3.12 Termination tests – abnormal ending 37 3.13 Output to monitor progress of calculations 37 3.14 Output of the optimization results 38 3.15 Controls for the optimizer 38 3.16 Default control settings 39 3.17 Measuring performance 39 3.18 The optimization interface 39 References 40 4 One-parameter root-finding problems 41 4.1 Roots 41 4.2 Equations in one variable 42 4.3 Some examples 42 4.3.1 Exponentially speaking 42 4.3.2 A normal concern 44 4.3.3 Little Polly Nomial 46 4.3.4 A hypothequial question 49 4.4 Approaches to solving 1D root-finding problems 51 4.5 What can go wrong? 52 4.6 Being a smart user of root-finding programs 54 4.7 Conclusions and extensions 54 References 55 5 One-parameter minimization problems 56 5.1 The optimize() function 56 5.2 Using a root-finder 57 5.3 But where is the minimum?
58 5.4 Ideas for 1D minimizers 59 5.5 The line-search subproblem 61 References 62 6 Nonlinear least squares 63 6.1 nls() from package stats 63 6.1.1 A simple example 63 6.1.2 Regression versus least squares 65 6.2 A more difficult case 65 6.3 The structure of the nls() solution 72 6.4 Concerns with nls() 73 6.4.1 Small residuals 74 6.4.2 Robustness – “singular gradient” woes 75 6.4.3 Bounds with nls() 77 6.5 Some ancillary tools for nonlinear least squares 79 6.5.1 Starting values and self-starting problems 79 6.5.2 Converting model expressions to sum-of-squares functions 80 6.5.3 Help for nonlinear regression 80 6.6 Minimizing Rfunctions that compute sums of squares 81 6.7 Choosing an approach 82 6.8 Separable sums of squares problems 86 6.9 Strategies for nonlinear least squares 93 References 93 7 Nonlinear equations 95 7.1 Packages and methods for nonlinear equations 95 7.1.1 BB 96 7.1.2 nleqslv 96 7.1.3 Using nonlinear least squares 96 7.1.4 Using function minimization methods 96 7.2 A simple example to compare approaches 97 7.3 A statistical example 103 References 106 8 Function minimization tools in the base R system 108 8.1 optim() 108 8.2 nlm() 110 8.3 nlminb() 111 8.4 Using the base optimization tools 112 References 114 9 Add-in function minimization packages for R 115 9.1 Package optimx 115 9.1.1 Optimizers in optimx 116 9.1.2 Example use of optimx() 117 9.2 Some other function minimization packages 118 9.2.1 nloptr and nloptwrap 118 9.2.2 trust and trustOptim 119 9.3 Should we replace optim() routines? 121 References 122 10 Calculating and using derivatives 123 10.1 Why and how 123 10.2 Analytic derivatives – by hand 124 10.3 Analytic derivatives – tools 125 10.4 Examples of use of R tools for differentiation 125 10.5 Simple numerical derivatives 127 10.6 Improved numerical derivative approximations 128 10.6.1 The Richardson extrapolation 128 10.6.2 Complex-step derivative approximations 128 10.7 Strategy and tactics for derivatives 129 References 131 11 Bounds constraints 132 11.1 Single bound: use of a logarithmic transformation 132 11.2 Interval bounds: Use of a hyperbolic transformation 133 11.2.1 Example of the tanh transformation 134 11.2.2 A fly in the ointment 134 11.3 Setting the objective large when bounds are violated 135 11.4 An active set approach 136 11.5 Checking bounds 138 11.6 The importance of using bounds intelligently 138 11.6.1 Difficulties in applying bounds constraints 139 11.7 Post-solution information for bounded problems 139 Appendix 11.A Function transfinite 141 References 142 12 Using masks 143 12.1 An example 143 12.2 Specifying the objective 143 12.3 Masks for nonlinear least squares 147 12.4 Other approaches to masks 148 References 148 13 Handling general constraints 149 13.1 Equality constraints 149 13.1.1 Parameter elimination 151 13.1.2 Which parameter to eliminate? 153 13.1.3 Scaling and centering? 
154 13.1.4 Nonlinear programming packages 154 13.1.5 Sequential application of an increasing penalty 156 13.2 Sumscale problems 158 13.2.1 Using a projection 162 13.3 Inequality constraints 163 13.4 A perspective on penalty function ideas 167 13.5 Assessment 167 References 168 14 Applications of mathematical programming 169 14.1 Statistical applications of math programming 169 14.2 R packages for math programming 170 14.3 Example problem: L1 regression 171 14.4 Example problem: minimax regression 177 14.5 Nonlinear quantile regression 179 14.6 Polynomial approximation 180 References 183 15 Global optimization and stochastic methods 185 15.1 Panorama of methods 185 15.2 R packages for global and stochastic optimization 186 15.3 An example problem 187 15.3.1 Method SANN from optim() 187 15.3.2 Package GenSA 188 15.3.3 Packages DEoptim and RcppDE 189 15.3.4 Package smco 191 15.3.5 Package soma 192 15.3.6 Package Rmalschains 193 15.3.7 Package rgenoud 193 15.3.8 Package GA 194 15.3.9 Package gaoptim 195 15.4 Multiple starting values 196 References 202 16 Scaling and reparameterization 203 16.1 Why scale or reparameterize? 203 16.2 Formalities of scaling and reparameterization 204 16.3 Hobbs’ weed infestation example 205 16.4 The KKT conditions and scaling 210 16.5 Reparameterization of the weeds problem 214 16.6 Scale change across the parameter space 214 16.7 Robustness of methods to starting points 215 16.7.1 Robustness of optimization techniques 218 16.7.2 Robustness of nonlinear least squares methods 220 16.8 Strategies for scaling 222 References 223 17 Finding the right solution 224 17.1 Particular requirements 224 17.1.1 A few integer parameters 225 17.2 Starting values for iterative methods 225 17.3 KKT conditions 226 17.3.1 Unconstrained problems 226 17.3.2 Constrained problems 227 17.4 Search tests 228 References 229 18 Tuning and terminating methods 230 18.1 Timing and profiling 230 18.1.1 rbenchmark 231 18.1.2 microbenchmark 231 18.1.3 Calibrating our timings 232 18.2 Profiling 234 18.2.1 Trying possible improvements 235 18.3 More speedups of R computations 238 18.3.1 Byte-code compiled functions 238 18.3.2 Avoiding loops 238 18.3.3 Package upgrades - an example 239 18.3.4 Specializing codes 241 18.4 External language compiled functions 242 18.4.1 Building an R function using Fortran 244 18.4.2 Summary of Rayleigh quotient timings 246 18.5 Deciding when we are finished 247 18.5.1 Tests for things gone wrong 248 References 249 19 Linking R to external optimization tools 250 19.1 Mechanisms to link R to external software 251 19.1.1 R functions to call external (sub)programs 251 19.1.2 File and system call methods 251 19.1.3 Thin client methods 252 19.2 Prepackaged links to external optimization tools 252 19.2.1 NEOS 252 19.2.2 Automatic Differentiation Model Builder (ADMB) 252 19.2.3 NLopt 253 19.2.4 BUGS and related tools 253 19.3 Strategy for using external tools 253 References 254 20 Differential equation models 255 20.1 The model 255 20.2 Background 256 20.3 The likelihood function 258 20.4 A first try at minimization 258 20.5 Attempts with optimx 259 20.6 Using nonlinear least squares 260 20.7 Commentary 261 Reference 262 21 Miscellaneous nonlinear estimation tools for R 263 21.1 Maximum likelihood 263 21.2 Generalized nonlinear models 266 21.3 Systems of equations 268 21.4 Additional nonlinear least squares tools 268 21.5 Nonnegative least squares 270 21.6 Noisy objective functions 273 21.7 Moving forward 274 References 275 Appendix A R packages used in examples 276 Index 279
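Chapter 8 above covers the base-R minimisers; as a hedged illustration (a standard textbook test function, not an example from the book), the sketch below minimises the two-parameter Rosenbrock function with optim().

```r
# Minimal sketch, not from the book: minimise the Rosenbrock function with optim().
rosenbrock <- function(p) (1 - p[1])^2 + 100 * (p[2] - p[1]^2)^2

res <- optim(par = c(-1.2, 1), fn = rosenbrock, method = "BFGS")
res$par           # should be close to c(1, 1)
res$convergence   # 0 indicates normal termination
```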
£53.06
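As a flavour of the base-R tools this table of contents covers, here is a minimal sketch (not taken from the book) that fits a simple exponential-decay model both with nls() from package stats (Chapter 6) and by minimising the sum of squares directly with optim() (Chapter 8). The data are simulated and the parameter names are invented for illustration.
```r
# Simulated data for a toy exponential-decay fit; nothing here comes from the book.
set.seed(1)
x <- 1:25
y <- 10 * exp(-0.15 * x) + rnorm(25, sd = 0.3)

# Nonlinear least squares via nls() from package stats (cf. Chapter 6)
fit_nls <- nls(y ~ a * exp(-b * x), start = list(a = 5, b = 0.1))

# The same problem as direct minimisation of a sum-of-squares function (cf. Chapter 8)
ssq <- function(p) sum((y - p[1] * exp(-p[2] * x))^2)
fit_opt <- optim(c(a = 5, b = 0.1), ssq, method = "BFGS")

coef(fit_nls)   # estimates from nls()
fit_opt$par     # estimates from optim()
```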
John Wiley & Sons Inc Statistics
Book Synopsis...I know of no better book of its kind... (Journal of the Royal Statistical Society, Vol 169 (1), January 2006) A revised and updated edition of this bestselling introductory textbook to statistical analysis using the leading free software package R This new edition of a bestselling title offers a concise introduction to a broad array of statistical methods, at a level that is elementary enough to appeal to a wide range of disciplines. Step-by-step instructions help the non-statistician to fully understand the methodology. The book covers the full range of statistical techniques likely to be needed to analyse the data from research projects, including elementary material like t-tests and chi-squared tests, intermediate methods like regression and analysis of variance, and more advanced techniques like generalized linear modelling. Includes numerous worked examples and exercises within each chapter.Table of ContentsPreface xi Chapter 1 Fundamentals 1 Everything Varies 2 Significance 3 Good and Bad Hypotheses 3 Null Hypotheses 3 p Values 3 Interpretation 4 Model Choice 4 Statistical Modelling 5 Maximum Likelihood 6 Experimental Design 7 The Principle of Parsimony (Occam’s Razor) 8 Observation, Theory and Experiment 8 Controls 8 Replication: It’s the ns that Justify the Means 8 How Many Replicates? 9 Power 9 Randomization 10 Strong Inference 14 Weak Inference 14 How Long to Go On? 14 Pseudoreplication 15 Initial Conditions 16 Orthogonal Designs and Non-Orthogonal Observational Data 16 Aliasing 16 Multiple Comparisons 17 Summary of Statistical Models in R 18 Organizing Your Work 19 Housekeeping within R 20 References 22 Further Reading 22 Chapter 2 Dataframes 23 Selecting Parts of a Dataframe: Subscripts 26 Sorting 27 Summarizing the Content of Dataframes 29 Summarizing by Explanatory Variables 30 First Things First: Get to Know Your Data 31 Relationships 34 Looking for Interactions between Continuous Variables 36 Graphics to Help with Multiple Regression 39 Interactions Involving Categorical Variables 39 Further Reading 41 Chapter 3 Central Tendency 42 Further Reading 49 Chapter 4 Variance 50 Degrees of Freedom 53 Variance 53 Variance: A Worked Example 55 Variance and Sample Size 58 Using Variance 59 A Measure of Unreliability 60 Confidence Intervals 61 Bootstrap 62 Non-constant Variance: Heteroscedasticity 65 Further Reading 65 Chapter 5 Single Samples 66 Data Summary in the One-Sample Case 66 The Normal Distribution 70 Calculations Using z of the Normal Distribution 76 Plots for Testing Normality of Single Samples 79 Inference in the One-Sample Case 81 Bootstrap in Hypothesis Testing with Single Samples 81 Student’s t Distribution 82 Higher-Order Moments of a Distribution 83 Skew 84 Kurtosis 86 Reference 87 Further Reading 87 Chapter 6 Two Samples 88 Comparing Two Variances 88 Comparing Two Means 90 Student’s t Test 91 Wilcoxon Rank-Sum Test 95 Tests on Paired Samples 97 The Binomial Test 98 Binomial Tests to Compare Two Proportions 100 Chi-Squared Contingency Tables 100 Fisher’s Exact Test 105 Correlation and Covariance 108 Correlation and the Variance of Differences between Variables 110 Scale-Dependent Correlations 112 Reference 113 Further Reading 113 Chapter 7 Regression 114 Linear Regression 116 Linear Regression in R 117 Calculations Involved in Linear Regression 122 Partitioning Sums of Squares in Regression: SSY = SSR + SSE 125 Measuring the Degree of Fit, r² 133 Model Checking 134 Transformation 135 Polynomial Regression 140 Non-Linear Regression 142 Generalized
Additive Models 146 Influence 148 Further Reading 149 Chapter 8 Analysis of Variance 150 One-Way ANOVA 150 Shortcut Formulas 157 Effect Sizes 159 Plots for Interpreting One-Way ANOVA 162 Factorial Experiments 168 Pseudoreplication: Nested Designs and Split Plots 173 Split-Plot Experiments 174 Random Effects and Nested Designs 176 Fixed or Random Effects? 177 Removing the Pseudoreplication 178 Analysis of Longitudinal Data 178 Derived Variable Analysis 179 Dealing with Pseudoreplication 179 Variance Components Analysis (VCA) 183 References 184 Further Reading 184 Chapter 9 Analysis of Covariance 185 Further Reading 192 Chapter 10 Multiple Regression 193 The Steps Involved in Model Simplification 195 Caveats 196 Order of Deletion 196 Carrying Out a Multiple Regression 197 A Trickier Example 203 Further Reading 211 Chapter 11 Contrasts 212 Contrast Coefficients 213 An Example of Contrasts in R 214 A Priori Contrasts 215 Treatment Contrasts 216 Model Simplification by Stepwise Deletion 218 Contrast Sums of Squares by Hand 222 The Three Kinds of Contrasts Compared 224 Reference 225 Further Reading 225 Chapter 12 Other Response Variables 226 Introduction to Generalized Linear Models 228 The Error Structure 229 The Linear Predictor 229 Fitted Values 230 A General Measure of Variability 230 The Link Function 231 Canonical Link Functions 232 Akaike’s Information Criterion (AIC) as a Measure of the Fit of a Model 233 Further Reading 233 Chapter 13 Count Data 234 A Regression with Poisson Errors 234 Analysis of Deviance with Count Data 237 The Danger of Contingency Tables 244 Analysis of Covariance with Count Data 247 Frequency Distributions 250 Further Reading 255 Chapter 14 Proportion Data 256 Analyses of Data on One and Two Proportions 257 Averages of Proportions 257 Count Data on Proportions 257 Odds 259 Overdispersion and Hypothesis Testing 260 Applications 261 Logistic Regression with Binomial Errors 261 Proportion Data with Categorical Explanatory Variables 264 Analysis of Covariance with Binomial Data 269 Further Reading 272 Chapter 15 Binary Response Variable 273 Incidence Functions 275 ANCOVA with a Binary Response Variable 279 Further Reading 284 Chapter 16 Death and Failure Data 285 Survival Analysis with Censoring 287 Further Reading 290 Appendix Essentials of the R Language 291 R as a Calculator 291 Built-in Functions 292 Numbers with Exponents 294 Modulo and Integer Quotients 294 Assignment 295 Rounding 295 Infinity and Things that Are Not a Number (NaN) 296 Missing Values (NA) 297 Operators 298 Creating a Vector 298 Named Elements within Vectors 299 Vector Functions 299 Summary Information from Vectors by Groups 300 Subscripts and Indices 301 Working with Vectors and Logical Subscripts 301 Addresses within Vectors 304 Trimming Vectors Using Negative Subscripts 304 Logical Arithmetic 305 Repeats 305 Generate Factor Levels 306 Generating Regular Sequences of Numbers 306 Matrices 307 Character Strings 309 Writing Functions in R 310 Arithmetic Mean of a Single Sample 310 Median of a Single Sample 310 Loops and Repeats 311 The ifelse Function 312 Evaluating Functions with apply 312 Testing for Equality 313 Testing and Coercing in R 314 Dates and Times in R 315 Calculations with Dates and Times 319 Understanding the Structure of an R Object Using str 320 Reference 322 Further Reading 322 Index 323
£31.30
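By way of illustration only, the two workhorse techniques from Chapters 6 and 7 of this table of contents look like this in base R; the data below are randomly generated, not the book's examples.
```r
# Toy data only; not taken from the book.
set.seed(42)
sample_a <- rnorm(10, mean = 5)
sample_b <- rnorm(10, mean = 7)
t.test(sample_a, sample_b)      # two-sample (Welch) t test, Chapter 6

x <- runif(30, 0, 10)
y <- 2 + 0.8 * x + rnorm(30)
summary(lm(y ~ x))              # simple linear regression, Chapter 7
```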
John Wiley & Sons Inc Understanding and Applying Basic Statistical Methods Using R
Book SynopsisFeatures a straightforward and concise resource for introductory statistical concepts, methods, and techniques using R Understanding and Applying Basic Statistical Methods Using R uniquely bridges the gap between advances in the statistical literature and methods routinely used by non-statisticians.Table of ContentsList of Symbols xv Preface xvii About the Companion Website xix 1 Introduction 1 1.1 Samples Versus Populations 3 1.2 Comments on Software 4 1.3 R Basics 5 1.3.1 Entering Data 6 1.3.2 Arithmetic Operations 10 1.3.3 Storage Types and Modes 12 1.3.4 Identifying and Analyzing Special Cases 17 1.4 R Packages 20 1.5 Access to Data Used in this Book 22 1.6 Accessing More Detailed Answers to the Exercises 23 1.7 Exercises 23 2 Numerical Summaries of Data 25 2.1 Summation Notation 26 2.2 Measures of Location 29 2.2.1 The Sample Mean 29 2.2.2 The Median 30 2.2.3 Sample Mean versus Sample Median 33 2.2.4 Trimmed Mean 34 2.2.5 R function mean, tmean, and median 35 2.3 Quartiles 36 2.3.1 R function idealf and summary 37 2.4 Measures of Variation 37 2.4.1 The Range 38 2.4.2 R function Range 38 2.4.3 Deviation Scores, Variance, and Standard Deviation 38 2.4.4 R Functions var and sd 40 2.4.5 The Interquartile Range 41 2.4.6 MAD and the Winsorized Variance 41 2.4.7 R Functions winvar, winsd, idealfIQR, and mad 44 2.5 Detecting Outliers 44 2.5.1 A Classic Outlier Detection Method 45 2.5.2 The Boxplot Rule 46 2.5.3 The MAD–Median Rule 47 2.5.4 R Functions outms, outbox, and out 47 2.6 Skipped Measures of Location 48 2.6.1 R Function MOM 49 2.7 Summary 49 2.8 Exercises 50 3 Plots Plus More Basics on Summarizing Data 53 3.1 Plotting Relative Frequencies 53 3.1.1 R Functions table, plot, splot, barplot, and cumsum 54 3.1.2 Computing the Mean and Variance Based on the Relative Frequencies 56 3.1.3 Some Features of the Mean and Variance 57 3.2 Histograms and Kernel Density Estimators 57 3.2.1 R Function hist 58 3.2.2 What Do Histograms Tell Us? 59 3.2.3 Populations, Samples, and Potential Concerns about Histograms 61 3.2.4 Kernel Density Estimators 64 3.2.5 R Functions Density and Akerd 64 3.3 Boxplots and Stem-and-Leaf Displays 65 3.3.1 R Function stem 67 3.3.2 Boxplot 67 3.3.3 R Function boxplot 68 3.4 Summary 68 3.5 Exercises 69 4 Probability and Related Concepts 71 4.1 The Meaning of Probability 71 4.2 Probability Functions 72 4.3 Expected Values, Population Mean and Variance 74 4.3.1 Population Variance 76 4.4 Conditional Probability and Independence 77 4.4.1 Independence and Dependence 78 4.5 The Binomial Probability Function 80 4.5.1 R Functions dbinom and pbinom 85 4.6 The Normal Distribution 85 4.6.1 Some Remarks about the Normal Distribution 88 4.6.2 The Standard Normal Distribution 89 4.6.3 Computing Probabilities for Any Normal Distribution 92 4.6.4 R Functions pnorm and qnorm 94 4.7 Nonnormality and The Population Variance 94 4.7.1 Skewed Distributions 97 4.7.2 Comments on Transforming Data 98 4.8 Summary 100 4.9 Exercises 101 5 Sampling Distributions 107 5.1 Sampling Distribution of ̂p, the Proportion of Successes 108 5.2 Sampling Distribution of the Mean Under Normality 111 5.2.1 Determining Probabilities Associated with the Sample Mean 113 5.2.2 But Typically 𝜎 Is Not Known. Now What? 
116 5.3 Nonnormality and the Sampling Distribution of the Sample Mean 116 5.3.1 Approximating the Binomial Distribution 117 5.3.2 Approximating the Sampling Distribution of the Sample Mean: The General Case 119 5.4 Sampling Distribution of the Median and 20% Trimmed Mean 123 5.4.1 Estimating the Standard Error of the Median 126 5.4.2 R Function msmedse 127 5.4.3 Approximating the Sampling Distribution of the Sample Median 128 5.4.4 Estimating the Standard Error of a Trimmed Mean 129 5.4.5 R Function trimse 130 5.4.6 Estimating the Standard Error When Outliers Are Discarded: A Technically Unsound Approach 130 5.5 The Mean Versus the Median and 20% Trimmed Mean 131 5.6 Summary 135 5.7 Exercises 136 6 Confidence Intervals 139 6.1 Confidence Interval for the Mean 139 6.1.1 Computing a Confidence Interval Given 𝜎² 140 6.2 Confidence Intervals for the Mean Using s (𝜎 Not Known) 145 6.2.1 R Function t.test 148 6.3 A Confidence Interval for The Population Trimmed Mean 149 6.3.1 R Function trimci 150 6.4 Confidence Intervals for The Population Median 151 6.4.1 R Function msmedci 152 6.4.2 Underscoring a Basic Strategy 152 6.4.3 A Distribution-Free Confidence Interval for the Median Even When There Are Tied Values 153 6.4.4 R Function sint 154 6.5 The Impact of Nonnormality on Confidence Intervals 155 6.5.1 Student’s T and Nonnormality 155 6.5.2 Nonnormality and the 20% Trimmed Mean 161 6.5.3 Nonnormality and the Median 162 6.6 Some Basic Bootstrap Methods 163 6.6.1 The Percentile Bootstrap Method 163 6.6.2 R Functions trimpb 164 6.6.3 Bootstrap-t 164 6.6.4 R Function trimcibt 166 6.7 Confidence Interval for The Probability of Success 167 6.7.1 Agresti–Coull Method 169 6.7.2 Blyth’s Method 169 6.7.3 Schilling–Doi Method 170 6.7.4 R Functions acbinomci and binomLCO 170 6.8 Summary 172 6.9 Exercises 173 7 Hypothesis Testing 179 7.1 Testing Hypotheses about the Mean, 𝜎 Known 179 7.1.1 Details for Three Types of Hypotheses 180 7.1.2 Testing for Exact Equality and Tukey’s Three-Decision Rule 183 7.1.3 p-Values 184 7.1.4 Interpreting p-Values 186 7.1.5 Confidence Intervals versus Hypothesis Testing 187 7.2 Power and Type II Errors 187 7.2.1 Power and p-Values 191 7.3 Testing Hypotheses about the mean, 𝜎 Not Known 191 7.3.1 R Function t.test 193 7.4 Student’s T and Nonnormality 193 7.4.1 Bootstrap-t 195 7.4.2 Transforming Data 196 7.5 Testing Hypotheses about Medians 196 7.5.1 R Function msmedci and sintv2 197 7.6 Testing Hypotheses Based on a Trimmed Mean 198 7.6.1 R Functions trimci, trimcipb, and trimcibt 198 7.7 Skipped Estimators 200 7.7.1 R Function momci 200 7.8 Summary 201 7.9 Exercises 202 8 Correlation and Regression 207 8.1 Regression Basics 207 8.1.1 Residuals and a Method for Estimating the Median of Y Given X 209 8.1.2 R function qreg and Qreg 211 8.2 Least Squares Regression 212 8.2.1 R Functions lsfit, lm, ols, plot, and abline 214 8.3 Dealing with Outliers 215 8.3.1 Outliers among the Independent Variable 215 8.3.2 Dealing with Outliers among the Dependent Variable 216 8.3.3 R Functions tsreg and tshdreg 218 8.3.4 Extrapolation Can Be Dangerous 219 8.4 Hypothesis Testing 219 8.4.1 Inferences about the Least Squares Slope and Intercept 220 8.4.2 R Functions lm, summary, and ols 223 8.4.3 Heteroscedasticity: Some Practical Concerns and How to Address Them 225 8.4.4 R Function olshc4 226 8.4.5 Outliers among the Dependent Variable: A Cautionary Note 227 8.4.6 Inferences Based on the Theil–Sen Estimator 227 8.4.7 R Functions regci and regplot 227 8.5 Correlation 229 8.5.1 Pearson’s
Correlation 229 8.5.2 Inferences about the Population Correlation, 𝜌 232 8.5.3 R Functions pcor and pcorhc4 234 8.6 Detecting Outliers When Dealing with Two or More Variables 235 8.6.1 R Functions out and outpro 236 8.7 Measures of Association: Dealing with Outliers 236 8.7.1 Kendall’s Tau 236 8.7.2 R Functions tau and tauci 239 8.7.3 Spearman’s Rho 240 8.7.4 R Functions spear and spearci 241 8.7.5 Winsorized and Skipped Correlations 242 8.7.6 R Functions scor, scorci, scorciMC, wincor, and wincorci 243 8.8 Multiple Regression 245 8.8.1 Least Squares Regression 245 8.8.2 Hypothesis Testing 246 8.8.3 R Function olstest 248 8.8.4 Inferences Based on a Robust Estimator 248 8.8.5 R Function regtest 249 8.9 Dealing with Curvature 249 8.9.1 R Function lplot and rplot 251 8.10 Summary 256 8.11 Exercises 257 9 Comparing Two Independent Groups 263 9.1 Comparing Means 264 9.1.1 The Two-Sample Student’s T Test 264 9.1.2 Violating Assumptions When Using Student’s T 266 9.1.3 Why Testing Assumptions Can Be Unsatisfactory 269 9.1.4 Interpreting Student’s T When It Rejects 270 9.1.5 Dealing with Unequal Variances: Welch’s Test 271 9.1.6 R Function t.test 273 9.1.7 Student’s T versus Welch’s Test 274 9.1.8 The Impact of Outliers When Comparing Means 275 9.2 Comparing Medians 276 9.2.1 A Method Based on the McKean–Schrader Estimator 276 9.2.2 A Percentile Bootstrap Method 277 9.2.3 R Functions msmed, medpb2, split, and fac2list 278 9.2.4 An Important Issue: The Choice of Method can Matter 279 9.3 Comparing Trimmed Means 280 9.3.1 R Functions yuen, yuenbt, and trimpb2 282 9.3.2 Skipped Measures of Location and Deleting Outliers 283 9.3.3 R Function pb2gen 283 9.4 Tukey’s Three-Decision Rule 283 9.5 Comparing Variances 284 9.5.1 R Function comvar2 285 9.6 Rank-Based (Nonparametric) Methods 285 9.6.1 Wilcoxon–Mann–Whitney Test 286 9.6.2 R Function wmw 289 9.6.3 Handling Heteroscedasticity 289 9.6.4 R Functions cid and cidv2 290 9.7 Measuring Effect Size 291 9.7.1 Cohen’s d 292 9.7.2 Concerns about Cohen’s d and How They Might Be Addressed 293 9.7.3 R Functions akp.effect, yuenv2, and med.effect 295 9.8 Plotting Data 296 9.8.1 R Functions ebarplot, ebarplot.med, g2plot, and boxplot 298 9.9 Comparing Quantiles 299 9.9.1 R Function qcomhd 300 9.10 Comparing Two Binomial Distributions 301 9.10.1 Improved Methods 302 9.10.2 R Functions twobinom and twobicipv 302 9.11 A Method for Discrete or Categorical Data 303 9.11.1 R Functions disc2com, binband, and splotg2 304 9.12 Comparing Regression Lines 305 9.12.1 Classic ANCOVA 307 9.12.2 R Function CLASSanc 307 9.12.3 Heteroscedastic Methods for Comparing the Slopes and Intercepts 309 9.12.4 R Functions olsJ2 and ols2ci 309 9.12.5 Dealing with Outliers among the Dependent Variable 311 9.12.6 R Functions reg2ci, ancGpar, and reg2plot 311 9.12.7 A Closer Look at Comparing Nonparallel Regression Lines 313 9.12.8 R Function ancJN 313 9.13 Summary 315 9.14 Exercises 316 10 Comparing More than Two Independent Groups 321 10.1 The ANOVA F Test 321 10.1.1 R Functions anova, anova1, aov, split, and fac2list 327 10.1.2 When Does the ANOVA F Test Perform Well? 
329 10.2 Dealing with Unequal Variances: Welch’s Test 331 10.3 Comparing Groups Based on Medians 333 10.3.1 R Functions med1way and Qanova 333 10.4 Comparing Trimmed Means 334 10.4.1 R Functions t1way and t1waybt 335 10.5 Two-Way ANOVA 335 10.5.1 Interactions 338 10.5.2 R Functions anova and aov 341 10.5.3 Violating Assumptions 342 10.5.4 R Functions t2way and t2waybt 343 10.6 Rank-Based Methods 344 10.6.1 The Kruskal–Wallis Test 344 10.6.2 Method BDM 346 10.7 R Functions kruskal.test AND bdm 347 10.8 Summary 348 10.9 Exercises 349 11 Comparing Dependent Groups 353 11.1 The Paired T Test 354 11.1.1 When Does the Paired T Test Perform Well? 356 11.1.2 R Functions t.test and trimcibt 357 11.2 Comparing Trimmed Means and Medians 357 11.2.1 R Functions yuend, ydbt, and dmedpb 359 11.2.2 Measures of Effect Size 363 11.2.3 R Functions D.akp.effect and effectg 364 11.3 The SIGN Test 364 11.3.1 R Function signt 365 11.4 Wilcoxon Signed Rank Test 365 11.4.1 R Function wilcox.test 367 11.5 Comparing Variances 367 11.5.1 R Function comdvar 368 11.6 Dealing with More Than Two Dependent Groups 368 11.6.1 Comparing Means 369 11.6.2 R Function aov 369 11.6.3 Comparing Trimmed Means 370 11.6.4 R Function rmanova 371 11.6.5 Rank-Based Methods 371 11.6.6 R Functions friedman.test and bprm 373 11.7 Between-By-Within Designs 373 11.7.1 R Functions bwtrim and bw2list 373 11.8 Summary 375 11.9 Exercises 376 12 Multiple Comparisons 379 12.1 Classic Methods for Independent Groups 380 12.1.1 Fisher’s Least Significant Difference Method 380 12.1.2 R Function FisherLSD 382 12.2 The Tukey–Kramer Method 382 12.2.1 Some Important Properties of the Tukey–Kramer Method 384 12.2.2 R Functions TukeyHSD and T.HSD 385 12.3 Scheffé’s Method 386 12.3.1 R Function Scheffe 386 12.4 Methods That Allow Unequal Population Variances 387 12.4.1 Dunnett’s T3 Method and an Extension of Yuen’s Method for Comparing Trimmed Means 387 12.4.2 R Functions lincon, linconbt, and conCON 389 12.5 Anova Versus Multiple Comparison Procedures 391 12.6 Comparing Medians 391 12.6.1 R Functions msmed, medpb, and Qmcp 392 12.7 Two-Way Anova Designs 393 12.7.1 R Function mcp2atm 397 12.8 Methods For Dependent Groups 400 12.8.1 Bonferroni Method 400 12.8.2 Rom’s Method 401 12.8.3 Hochberg’s Method 403 12.8.4 R Functions rmmcp, dmedpb, and sintmcp 403 12.8.5 Controlling the False Discovery Rate 404 12.9 Summary 405 12.10 Exercises 406 13 Categorical Data 409 13.1 One-Way Contingency Tables 409 13.1.1 R Function chisq.test 413 13.1.2 Gaining Perspective: A Closer Look at the Chi-Squared Distribution 413 13.2 Two-Way Contingency Tables 414 13.2.1 McNemar’s Test 414 13.2.2 R Functions contab and mcnemar.test 417 13.2.3 Detecting Dependence 418 13.2.4 R Function chi.test.ind 422 13.2.5 Measures of Association 422 13.2.6 The Probability of Agreement 423 13.2.7 Odds and Odds Ratio 424 13.3 Logistic Regression 426 13.3.1 R Function logreg 428 13.3.2 A Confidence Interval for the Odds Ratio 429 13.3.3 R Function ODDSR.CI 429 13.3.4 Smoothers for Logistic Regression 429 13.3.5 R Functions rplot.bin and logSM 430 13.4 Summary 431 13.5 Exercises 432 AppendixA Solutions to Selected Exercises 435 Appendix B Tables 441 References 465 Index 473
£60.30
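For orientation, a few of the Chapter 2 summaries have base-R analogues; the snippet below uses invented data and deliberately avoids the author's own helper functions (trimci, outbox, and so on), which come with the book's accompanying materials.
```r
# Invented data with one obvious outlier; base-R analogues of Chapter 2 summaries.
x <- c(2, 3, 4, 5, 6, 7, 8, 50)
mean(x)               # ordinary mean, pulled up by the outlier
mean(x, trim = 0.2)   # 20% trimmed mean
median(x)             # median
mad(x)                # median absolute deviation (scaled)
boxplot.stats(x)$out  # values flagged by the boxplot outlier rule
```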
John Wiley & Sons Inc Engineering Applications
Book SynopsisENGINEERING APPLICATIONS A comprehensive text on the fundamental principles of mechanical engineering Engineering Applications presents the fundamental principles and applications of the statics and mechanics of materials in complex mechanical systems design. Using MATLAB to help solve problems with numerical and analytical calculations, authors and noted experts on the topic Mihai Dupac and Dan B. Marghitu offer an understanding of the static behaviour of engineering structures and components while considering the mechanics of materials knowledge as the most important part of their design. The authors explore the concepts, derivations, and interpretations of general principles and discuss the creation of mathematical models and the formulation of mathematical equations. This practical text also highlights the solutions of problems solved analytically and numerically using MATLAB. The figures generated with MATLAB reinforce visual learning for students andTable of Contents1 Forces 1 1.1 Terminology and Notation 1 1.2 Resolution of Forces 3 1.3 Angle Between Two Forces 3 1.4 Force Vector 4 1.5 Scalar (Dot) Product of Two Forces 5 1.6 Cross Product of Two Forces 5 1.7 Examples 6 2 Moments and Couples 15 2.1 Types of Moments 15 2.2 Moment of a Force About a Point 15 2.3 Moment of a Force About a Line 18 2.4 Couples 20 2.5 Examples 21 3 Equilibrium of Structures 55 3.1 Equilibrium Equations 55 3.2 Supports 57 3.3 Free-Body Diagrams 59 3.4 Two-Force and Three-Force Members 60 3.5 Plane Trusses 61 3.6 Analysis of Simple Trusses 62 3.6.1 Method of Joints 62 3.6.2 Method of Sections 65 3.7 Examples 67 4 Centroids and Moments of Inertia 129 4.1 Centre of the Mass and Centroid 129 4.2 Centroid and Centre of the Mass of a Solid Region, Surface or Curve 130 4.3 Method of Decomposition 134 4.4 First Moment of an Area 134 4.5 The Centre of Gravity 135 4.6 Examples 136 5 Stress, Strain and Deflection 185 5.1 Stress 185 5.2 Elastic Strain 185 5.3 Shear and Moment 186 5.4 Deflections of Beams 189 5.5 Examples 193 6 Friction 211 6.1 Coefficient of Static Friction 212 6.2 Coefficient of Kinetic Friction 213 6.3 Friction Models 213 6.3.1 Coulomb Friction Model 214 6.3.2 Coulomb Model with Viscous Friction 216 6.3.3 Coulomb Model with Stiction 217 6.4 Angle of Friction 218 6.5 Examples 219 7 Work, Energy and Power 255 7.1 Work 255 7.2 Kinetic Energy 256 7.3 Work and Power 258 7.4 Conservative Forces 259 7.5 Work Done by the Gravitational Force 259 7.6 Work Done by the Friction Force 260 7.7 Potential Energy and Conservation of Energy 261 7.8 Work Done and Potential Energy of an Elastic Force 261 7.9 Potential Energy Due to the Gravitational Force 262 7.9.1 Potential Energy Due to the Gravitational Force for a Particle 262 7.9.2 Potential Energy Due to the Gravitational Force for a Rigid Body 263 7.10 Examples 264 8 Simple Machines 295 8.1 Load and Effort, Mechanical Advantage, Velocity Ratio and Efficiency of a Simple Machine 295 8.1.1 Load and Effort 295 8.1.2 Mechanical Advantage 296 8.1.3 Velocity Ratio and Efficiency 296 8.2 Effort and Load of an Ideal Machine 297 8.3 The Lever 297 8.4 Inclined Plane (Wedge) 298 8.5 Screws 299 8.6 Simple Screwjack 299 8.6.1 Motion Impending Upwards 301 8.6.2 Motion Impending Downwards 302 8.6.3 Efficiency While Hoisting Load 303 8.7 Differential Screwjack 303 8.8 Pulleys 304 8.8.1 First-order Pulley System 304 8.8.2 Second-order Pulley System 306 8.8.3 Third-order Pulley System 307 8.9 Differential Pulley 308 8.10 Wheel and Axle 309 8.11 Wheel and Differential 
Axle 310 8.12 Examples 312 References 353 Index 357
£75.56
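Purely as an illustration of the Chapter 2 idea that the moment of a force about a point is M = r × F, a hypothetical sketch in R follows; the book's own worked examples use MATLAB, and the vectors below are invented.
```r
# Hypothetical illustration of M = r x F (Chapter 2); the book's own examples use MATLAB.
cross3 <- function(a, b) {        # 3-D cross product
  c(a[2] * b[3] - a[3] * b[2],
    a[3] * b[1] - a[1] * b[3],
    a[1] * b[2] - a[2] * b[1])
}
r_vec <- c(2, 0, 0)               # position of the point of application, in metres
f_vec <- c(0, 10, 0)              # applied force, in newtons
cross3(r_vec, f_vec)              # moment about the origin: 0 0 20 (N*m)
```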
John Wiley & Sons Inc Statistics with JMP Hypothesis Tests ANOVA and Regression
Book SynopsisStatistics with JMP: Hypothesis Tests, ANOVA and Regression Peter Goos, University of Leuven and University of Antwerp, Belgium David Meintrup, University of Applied Sciences Ingolstadt, Germany A first course on basic statistical methodology using JMP This book provides a first course on parameter estimation (point estimates and confidence interval estimates), hypothesis testing, ANOVA and simple linear regression. The authors’ approach combines mathematical depth with numerous examples and demonstrations using the JMP software. Key features: Provides a comprehensive and rigorous presentation of introductory statistics that has been extensively classroom tested. Pays attention to the usual parametric hypothesis tests as well as to non-parametric tests (including the calculation of exact p-values). Discusses the power of various statistical tests, along with examples in JMP to Trade Review“Masters and advanced students in applied statistics, industrial engineering, business engineering, civil engineering and bio-science engineering will find this book beneficial. It also provides a useful resource for teachers of statistics particularly in the area of engineering.” (Zentralblatt MATH 2016)Table of ContentsDedication iii Preface xiii Acknowledgements xvii Part One Estimators and tests 1 1 Estimating population parameters 3 2 Interval estimators 37 3 Hypothesis tests 71 Part Two One population 103 4 Hypothesis tests for a population mean, proportion or variance 105 5 Two hypothesis tests for the median of a population 149 6 Hypothesis tests for the distribution of a population 175 Part Three Two populations 7 Independent versus paired samples 213 8 Hypothesis tests for means, proportions and variances of two independent samples 219 9 A nonparametric hypothesis test for the medians of two independent samples 263 10 Hypothesis tests for the population mean of two paired samples 285 11 Two nonparametric hypothesis tests for paired samples 305 Part Four More than two populations 325 12 Hypothesis tests for more than two population means: one-way analysis of variance 327 13 Nonparametric alternatives to an analysis of variance 375 14 Hypothesis tests for more than two population variances 401 Part Five More useful tests and procedures 417 15 Design of experiments and data collection 419 16 Testing equivalence 427 17 Estimation and testing of correlation and association 445 18 An introduction to regression modeling 481 19 Simple linear regression 493 A Binomial distribution 589 B Standard normal distribution 593 C χ²-distribution 595 D Student’s t-distribution 597 E Wilcoxon signed-rank test 599 F Critical values for the Shapiro-Wilk test 605 G Fisher’s F-distribution 607 H Wilcoxon rank-sum test 615 I Studentized range or Q-distribution 625 J Two-sided Dunnett test 629 K One-sided Dunnett test 633 L Kruskal-Wallis-Test 637 M Rank correlation test 641 Index 643
£54.86
John Wiley & Sons Inc Financial Risk Modelling and Portfolio Optimization with R
Book SynopsisA must have text for risk modelling and portfolio optimization using R. This book introduces the latest techniques advocated for measuring financial market risk and portfolio optimization, and provides a plethora of R code examples that enable the reader to replicate the results featured throughout the book. This edition has been extensively revised to include new topics on risk surfaces and probabilistic utility optimization as well as an extended introduction to R language. Financial Risk Modelling and Portfolio Optimization with R: Demonstrates techniques in modelling financial risks and applying portfolio optimization techniques as well as recent advances in the field. Introduces stylized facts, loss function and risk measures, conditional and unconditional modelling of risk; extreme value theory, generalized hyperbolic distribution, volatility modelling and concepts for capturing dependencies. Explores portfolio risk coTable of ContentsPreface to the Second Edition xi Preface xiii Abbreviations xv About the Companion Website xix PART I MOTIVATION 1 1 Introduction 3 Reference 5 2 A brief course in R 6 2.1 Origin and development 6 2.2 Getting help 7 2.3 Working with R 10 2.4 Classes, methods, and functions 12 2.5 The accompanying package FRAPO 22 References 28 3 Financial market data 29 3.1 Stylized facts of financial market returns 29 3.1.1 Stylized facts for univariate series 29 3.1.2 Stylized facts for multivariate series 32 3.2 Implications for risk models 35 References 36 4 Measuring risks 37 4.1 Introduction 37 4.2 Synopsis of risk measures 37 4.3 Portfolio risk concepts 42 References 44 5 Modern portfolio theory 46 5.1 Introduction 46 5.2 Markowitz portfolios 47 5.3 Empirical mean-variance portfolios 50 References 52 PART II RISK MODELLING 55 6 Suitable distributions for returns 57 6.1 Preliminaries 57 6.2 The generalized hyperbolic distribution 57 6.3 The generalized lambda distribution 60 6.4 Synopsis of R packages for GHD 66 6.4.1 The package fBasics 66 6.4.2 The package GeneralizedHyperbolic 67 6.4.3 The package ghyp 69 6.4.4 The package QRM 70 6.4.5 The package SkewHyperbolic 70 6.4.6 The package VarianceGamma 71 6.5 Synopsis of R packages for GLD 71 6.5.1 The package Davies 71 6.5.2 The package fBasics 72 6.5.3 The package gld 73 6.5.4 The package lmomco 73 6.6 Applications of the GHD to risk modelling 74 6.6.1 Fitting stock returns to the GHD 74 6.6.2 Risk assessment with the GHD 77 6.6.3 Stylized facts revisited 80 6.7 Applications of the GLD to risk modelling and data analysis 82 6.7.1 VaR for a single stock 82 6.7.2 Shape triangle for FTSE 100 constituents 84 References 86 7 Extreme value theory 89 7.1 Preliminaries 89 7.2 Extreme value methods and models 90 7.2.1 The block maxima approach 90 7.2.2 The rth largest order models 91 7.2.3 The peaks-over-threshold approach 92 7.3 Synopsis of R packages 94 7.3.1 The package evd 94 7.3.2 The package evdbayes 95 7.3.3 The package evir 96 7.3.4 The packages extRemes and in2extRemes 98 7.3.5 The package fExtremes 99 7.3.6 The package ismev 101 7.3.7 The package QRM 101 7.3.8 The packages Renext and RenextGUI 102 7.4 Empirical applications of EVT 103 7.4.1 Section outline 103 7.4.2 Block maxima model for Siemens 103 7.4.3 r-block maxima for BMW 107 7.4.4 POT method for Boeing 110 References 115 8 Modelling volatility 116 8.1 Preliminaries 116 8.2 The class of ARCH models 116 8.3 Synopsis of R packages 120 8.3.1 The package bayesGARCH 120 8.3.2 The package ccgarch 121 8.3.3 The package fGarch 122 8.3.4 The package 
GEVStableGarch 122 8.3.5 The package gogarch 123 8.3.6 The package lgarch 123 8.3.7 The packages rugarch and rmgarch 125 8.3.8 The package tseries 127 8.4 Empirical application of volatility models 128 References 130 9 Modelling dependence 133 9.1 Overview 133 9.2 Correlation, dependence, and distributions 133 9.3 Copulae 136 9.3.1 Motivation 136 9.3.2 Correlations and dependence revisited 137 9.3.3 Classification of copulae 139 9.4 Synopsis of R packages 142 9.4.1 The package BLCOP 142 9.4.2 The package copula 144 9.4.3 The package fCopulae 146 9.4.4 The package gumbel 147 9.4.5 The package QRM 148 9.5 Empirical applications of copulae 148 9.5.1 GARCH–copula model 148 9.5.2 Mixed copula approaches 155 References 157 PART III PORTFOLIO OPTIMIZATION APPROACHES 161 10 Robust portfolio optimization 163 10.1 Overview 163 10.2 Robust statistics 164 10.2.1 Motivation 164 10.2.2 Selected robust estimators 165 10.3 Robust optimization 168 10.3.1 Motivation 168 10.3.2 Uncertainty sets and problem formulation 168 10.4 Synopsis of R packages 174 10.4.1 The package covRobust 174 10.4.2 The package fPortfolio 174 10.4.3 The package MASS 175 10.4.4 The package robustbase 176 10.4.5 The package robust 176 10.4.6 The package rrcov 178 10.4.7 Packages for solving SOCPs 179 10.5 Empirical applications 180 10.5.1 Portfolio simulation: robust versus classical statistics 180 10.5.2 Portfolio back test: robust versus classical statistics 186 10.5.3 Portfolio back-test: robust optimization 190 References 195 11 Diversification reconsidered 198 11.1 Introduction 198 11.2 Most-diversified portfolio 199 11.3 Risk contribution constrained portfolios 201 11.4 Optimal tail-dependent portfolios 204 11.5 Synopsis of R packages 207 11.5.1 The package cccp 207 11.5.2 The packages DEoptim, DEoptimR, and RcppDE 207 11.5.3 The package FRAPO 210 11.5.4 The package PortfolioAnalytics 211 11.6 Empirical applications 212 11.6.1 Comparison of approaches 212 11.6.2 Optimal tail-dependent portfolio against benchmark 216 11.6.3 Limiting contributions to expected shortfall 221 References 226 12 Risk-optimal portfolios 228 12.1 Overview 228 12.2 Mean-VaR portfolios 229 12.3 Optimal CVaR portfolios 234 12.4 Optimal draw-down portfolios 238 12.5 Synopsis of R packages 241 12.5.1 The package fPortfolio 241 12.5.2 The package FRAPO 243 12.5.3 Packages for linear programming 245 12.5.4 The package PerformanceAnalytics 249 12.6 Empirical applications 251 12.6.1 Minimum-CVaR versus minimum-variance portfolios 251 12.6.2 Draw-down constrained portfolios 254 12.6.3 Back-test comparison for stock portfolio 260 12.6.4 Risk surface plots 265 References 272 13 Tactical asset allocation 274 13.1 Overview 274 13.2 Survey of selected time series models 275 13.2.1 Univariate time series models 275 13.2.2 Multivariate time series models 281 13.3 The Black–Litterman approach 289 13.4 Copula opinion and entropy pooling 292 13.4.1 Introduction 292 13.4.2 The COP model 292 13.4.3 The EP model 293 13.5 Synopsis of R packages 295 13.5.1 The package BLCOP 295 13.5.2 The package dse 297 13.5.3 The package fArma 300 13.5.4 The package forecast 301 13.5.5 The package MSBVAR 302 13.5.6 The package PortfolioAnalytics 304 13.5.7 The packages urca and vars 304 13.6 Empirical applications 307 13.6.1 Black–Litterman portfolio optimization 307 13.6.2 Copula opinion pooling 313 13.6.3 Entropy pooling 318 13.6.4 Protection strategies 324 References 334 14 Probabilistic utility 339 14.1 Overview 339 14.2 The concept of probabilistic utility 340 14.3 Markov chain Monte 
Carlo 342 14.3.1 Introduction 342 14.3.2 Monte Carlo approaches 343 14.3.3 Markov chains 347 14.3.4 Metropolis–Hastings algorithm 349 14.4 Synopsis of R packages 354 14.4.1 Packages for conducting MCMC 354 14.4.2 Packages for analyzing MCMC 358 14.5 Empirical application 362 14.5.1 Exemplary utility function 362 14.5.2 Probabilistic versus maximized expected utility 366 14.5.3 Simulation of asset allocations 369 References 375 Appendix A Package overview 378 A.1 Packages in alphabetical order 378 A.2 Packages ordered by topic 382 References 386 Appendix B Time series data 391 B.1 Date/time classes 391 B.2 The ts class in the base package stats 395 B.3 Irregularly spaced time series 395 B.4 The package timeSeries 397 B.5 The package zoo 399 B.6 The packages tframe and xts 401 References 404 Appendix C Back-testing and reporting of portfolio strategies 406 C.1 R packages for back-testing 406 C.2 R facilities for reporting 407 C.3 Interfacing with databases 407 References 408 Appendix D Technicalities 411 Reference 411 Index 413
£63.86
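As a taste of Part II (measuring risks), the following base-R fragment computes a historical value-at-risk and expected shortfall for simulated returns; it does not use the book's accompanying FRAPO package, and the numbers are purely illustrative.
```r
# Simulated stand-in for daily log returns; nothing here reproduces the book's data.
set.seed(7)
ret <- rnorm(1000, mean = 0, sd = 0.01)
alpha <- 0.95
q <- unname(quantile(ret, probs = 1 - alpha))   # empirical 5% quantile of returns
var95 <- -q                                     # historical 95% VaR, reported as a loss
es95 <- -mean(ret[ret <= q])                    # expected shortfall beyond that quantile
c(VaR = var95, ES = es95)
```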
John Wiley & Sons Inc Sports Research with Analytical Solution using SPSS
Book SynopsisA step-by-step approach to problem-solving techniques using SPSS in the fields of sports science and physical education Featuring a clear and accessible approach to the methods, processes, and statistical techniques used in sports science and physical education, Sports Research with Analytical Solution using SPSS emphasizes how to conduct and interpret a range of statistical analysis using SPSS. The book also addresses issues faced by research scholars in these fields by providing analytical solutions to various research problems without reliance on mathematical rigor. Logically arranged to cover both fundamental and advanced concepts, the book presents standard univariate and complex multivariate statistical techniques used in sports research such as multiple regression analysis, discriminant analysis, cluster analysis, and factor analysis. The author focuses on the treatment of various parametric and nonparametric statistical tests, which are shown throTable of ContentsPreface xv About the Companion Website xviii Acknowledgments xix 1 Introduction to Data Types and SPSS Operations 1 1.1 Introduction 1 1.2 Types of data 2 1.2.1 Qualitative Data 2 1.2.2 Quantitative Data 3 1.3 Important definitions 4 1.3.1 Variable 4 1.4 Data Cleaning 4 1.5 Detection of Errors 5 1.5.1 Using Frequencies 5 1.5.2 Using Mean and Standard Deviation 5 1.5.3 Logic Checks 5 1.5.4 Outlier Detection 5 1.6 How to Start Spss? 6 1.6.1 Preparing Data File 7 1.7 Exercise 10 1.7.1 Short Answer Questions 10 1.7.2 Multiple Choice Questions 11 2 Descriptive Profile 14 2.1 Introduction 14 2.2 Explanation of Various Descriptive Statistics 16 2.2.1 Mean 16 2.2.2 Variance 16 2.2.3 Standard Error of Mean 17 2.2.4 Skewness 17 2.2.5 Kurtosis 18 2.2.6 Percentiles 19 2.3 Application of Descriptive Statistics 19 2.3.1 Testing Normality of Data and Identifying Outliers 20 2.4 Computation of Descriptive Statistics Using Spss 25 2.4.1 Preparation of Data File 25 2.4.2 Defining Variables 26 2.4.3 Entering Data 26 2.4.4 SPSS Commands 26 2.5 Interpretations of the Results 29 2.6 Developing Profile Chart 31 2.7 Summary of Spss Commands 33 2.8 Exercise 33 2.8.1 Short Answer Questions 33 2.8.2 Multiple Choice Questions 34 2.9 Case Study on Descriptive Analysis 36 3 Correlation Coefficient and Partial Correlation 41 3.1 Introduction 41 3.2 Correlation Matrix and Partial Correlation 43 3.2.1 Product Moment Correlation Coefficient 43 3.2.2 Partial Correlation 45 3.3 Application of Correlation Matrix and Partial Correlation 46 3.4 Correlation Matrix with Spss 46 3.4.1 Computation in Correlation Matrix 46 3.4.2 Interpretations of Findings 51 3.5 Partial Correlation with Spss 51 3.5.1 Computation of Partial Correlations 52 3.5.2 Interpretation of Partial Correlation 55 3.6 Summary of the Spss Commands 56 3.6.1 For Computing Correlation Matrix 56 3.6.2 For Computing Partial Correlations 57 3.7 Exercise 57 3.7.1 Short Answer Questions 57 3.7.2 Multiple Choice Questions 57 3.7.3 Assignment 60 3.8 Case Study on Correlation 60 4 Comparing Means 65 4.1 Introduction 65 4.2 One‐Sample t‐Test 66 4.2.1 Application of One‐Sample t‐Test 67 4.3 Two‐Sample t‐Test for Unrelated Groups 67 4.3.1 Assumptions While Using t‐Test 67 4.3.2 Case I: Two‐Tailed Test 68 4.3.3 Case II: Right Tailed Test 68 4.3.4 Case III: Left Tailed Test 69 4.3.5 Application of Two‐Sample t-Test 70 4.4 Paired t‐Test for Related Groups 70 4.4.1 Case I: Two‐Tailed Test 71 4.4.2 Case II: Right Tailed Test 71 4.4.3 Case III: Left Tailed Test 72 4.4.4 Application of Paired t‐Test 73 
4.5 One‐Sample t‐Test with Spss 73 4.5.1 Computation in t‐Test for Single Group 74 4.5.2 Interpretation of Findings 77 4.6 Two‐Sample t‐Test for Independent Groups with Spss 78 4.6.1 Computation in Two‐Sample t‐Test 79 4.6.2 Interpretation of Findings 83 4.7 Paired t‐Test for Related Groups with Spss 85 4.7.1 Computation in Paired t‐Test 86 4.7.2 Interpretation of Findings 89 4.8 Summary of Spss Commands for t‐Tests 90 4.8.1 One‐Sample t‐Test 90 4.8.2 Two‐Sample t‐Test for Independent Groups 90 4.8.3 Paired t‐Test 91 4.9 Exercise 91 4.9.1 Short Answer Questions 91 4.9.2 Multiple Choice Questions 91 4.9.3 Assignment 93 4.10 Case Study 94 5 Independent Measures Anova 100 5.1 Introduction 101 5.2 One‐Way Analysis of Variance 101 5.2.1 One‐Way ANOVA Model 102 5.2.2 Post Hoc Test 102 5.2.3 Application of One‐Way ANOVA 103 5.3 One‐Way Anova with Spss (Equal Sample Size) 103 5.3.1 Computation in One‐Way ANOVA (Equal Sample Size) 104 5.3.2 Interpretation of Findings 107 5.4 One‐Way Anova with Spss (Unequal Sample Size) 110 5.4.1 Computation in One‐Way ANOVA (Unequal Sample Size) 111 5.4.2 Interpretation of Findings 114 5.5 Two‐Way Analysis of Variance 115 5.5.1 Assumptions in Two‐Way Analysis of Variance 116 5.5.2 Hypotheses in Two‐Way ANOVA 116 5.5.3 Factors 117 5.5.4 Treatment Groups 117 5.5.5 Main Effect 117 5.5.6 Interaction Effect 117 5.5.7 Within‐Groups Variation 117 5.5.8 F‐Statistic 117 5.5.9 Two‐Way ANOVA Table 118 5.5.10 Interpretation 118 5.5.11 Application of Two‐Way Analysis of Variance 118 5.6 Two‐Way Anova Using Spss 119 5.6.1 Computation in Two‐Way ANOVA 121 5.6.2 Interpretation of Findings 126 5.7 Summary of the Spss Commands 137 5.7.1 One‐Way ANOVA 137 5.7.2 Two‐Way ANOVA 138 5.8 Exercise 138 5.8.1 Short Answer Questions 138 5.8.2 Multiple Choice Questions 139 5.8.3 Assignment 142 5.9 Case Study on One‐Way Anova Design 143 5.10 Case Study on Two‐Way Anova 147 6 Repeated Measures Anova 153 6.1 Introduction 153 6.2 One‐Way Repeated Measures Anova 154 6.2.1 Assumptions in One‐Way Repeated Measures ANOVA 155 6.2.2 Application in Sports Research 155 6.2.3 Steps in Solving One‐Way Repeated Measures ANOVA 156 6.3 One‐Way Repeated Measures Anova Using Spss 157 6.3.1 Computation in the One‐Way Repeated Measures ANOVA 157 6.3.2 Interpretation of Findings 161 6.3.3 Findings of the Study 165 6.3.4 Inference 166 6.4 Two‐Way Repeated Measures Anova 166 6.4.1 Assumptions in Two‐Way Repeated Measures ANOVA 166 6.4.2 Application in Sports Research 167 6.4.3 Steps in Solving Two‐Way Repeated Measures ANOVA 167 6.5 Two‐Way Repeated Measures Anova Using Spss 168 6.5.1 Computation in Two‐Way Repeated Measures ANOVA 170 6.5.2 Interpretation of Findings 173 6.5.3 Findings of the Study 181 6.5.4 Inference 181 6.6 Summary of the Spss Commands for One‐Way Repeated Measures Anova 182 6.7 Summary of the Spss Commands for Two‐Way Repeated Measures Anova 182 6.8 Exercise 183 6.8.1 Short Answer Questions 183 6.8.2 Multiple Choice Questions 183 6.8.3 Assignment 185 6.9 Case Study on Repeated Measures Design 186 7 Analysis of Covariance 190 7.1 Introduction 190 7.2 Conceptual Framework of Analysis of Covariance 191 7.3 Application of ANCOVA 192 7.4 ANCOVA with Spss 193 7.4.1 Computation in ANCOVA 194 7.5 Summary of the Spss Commands 201 7.6 Exercise 202 7.6.1 Short Answer Questions 202 7.6.2 Multiple Choice Questions 202 7.6.3 Assignment 203 7.7 Case Study on ANCOVA Design 204 8 Nonparametric Tests in Sports Research 209 8.1 Introduction 209 8.2 Chi‐Square Test 211 8.2.1 Testing Goodness of Fit 211 8.2.2 Yates’ 
Correction 212 8.2.3 Contingency Coefficient 212 8.3 Goodness of Fit with Spss 212 8.3.1 Computation in Goodness of Fit 213 8.3.2 Interpretation of Findings 216 8.4 Testing Independence of Two Attributes 216 8.4.1 Interpretation 218 8.5 Testing Association with Spss 219 8.5.1 Computation in Chi‐Square 219 8.5.2 Interpretation of Findings 223 8.6 Mann–Whitney U Test: Comparing Two Independent Samples 224 8.6.1 Computation in Mann–Whitney U Statistic Using SPSS 224 8.6.2 Interpretation of Findings 226 8.7 Wilcoxon Signed‐Rank Test: For Comparing Two Related Groups 227 8.7.1 Computation in Wilcoxon Signed‐Rank Test Using SPSS 228 8.7.2 Interpretation of Findings 230 8.8 Kruskal–Wallis Test 231 8.8.1 Computation in Kruskal–Wallis Test Using SPSS 232 8.8.2 Interpretation of Findings 234 8.9 Friedman Test 234 8.9.1 Computation in Friedman Test Using SPSS 235 8.9.2 Interpretation of Findings 237 8.10 Summary of the Spss Commands 237 8.10.1 Computing Chi‐Square Statistic (for Testing Goodness of Fit) 237 8.10.2 Computing Chi‐Square Statistic (for Testing Independence) 238 8.10.3 Computation in Mann–Whitney U Test 238 8.10.4 Computation in Wilcoxon Signed‐Rank Test 239 8.10.5 Computation in Kruskal–Wallis Test 239 8.10.6 Computation in Friedman Test 239 8.11 Exercise 240 8.11.1 Short Answer Questions 240 8.11.2 Multiple Choice Questions 241 8.11.3 Assignment 243 8.12 Case Study on Testing Independence of Attributes 243 9 Regression Analysis and Multiple Correlations 246 9.1 Introduction 246 9.2 Understanding Regression Equation 247 9.2.1 Methods of Regression Analysis 247 9.2.2 Multiple Correlation 248 9.3 Application of Regression Analysis 248 9.4 Multiple Regression Analysis with Spss 249 9.4.1 Computation in Regression Analysis 249 9.4.2 Interpretation of Findings 254 9.5 Summary of Spss Commands for Regression Analysis 259 9.6 Exercise 259 9.6.1 Short Answer Questions 259 9.6.2 Multiple Choice Questions 260 9.6.3 Assignment 261 9.7 Case Study on Regression Analysis 263 10 Application of Discriminant Function Analysis 267 10.1 Introduction 268 10.2 Basics of Discriminant Function Analysis 268 10.2.1 Discriminating Variables 268 10.2.2 Dependent Variable 268 10.2.3 Discriminant Function 268 10.2.4 Classification Matrix 269 10.2.5 Stepwise Method of Discriminant Analysis 269 10.2.6 Power of Discriminating Variable 269 10.2.7 Canonical Correlation 269 10.2.8 Wilks’ Lambda 270 10.3 Assumptions in Discriminant Analysis 270 10.4 Why to Use Discriminant Analysis 270 10.5 Steps in Discriminant Analysis 271 10.6 Application of Discriminant Function Analysis 272 10.7 Discriminant Analysis Using Spss 274 10.7.1 Computation in Discriminant Analysis 274 10.7.2 Interpretation of Findings 279 10.8 Summary of the Spss Commands for Discriminant Analysis 284 10.9 Exercise 284 10.9.1 Short Answer Questions 284 10.9.2 Multiple Choice Questions 285 10.9.3 Assignment 286 10.10 Case Study on Discriminant Analysis 288 11 Logistic Regression for Developing Logit Model in Sport 293 11.1 Introduction 293 11.2 Understanding Logistic Regression 294 11.3 Application of Logistic Regression in Sports Research 295 11.4 Assumptions in Logistic Regression 297 11.5 Steps in Developing Logistic Model 297 11.6 Logistic Analysis Using Spss 297 11.6.1 Block 0 299 11.6.2 Block 1 299 11.6.3 Computation in Logistic Regression with SPSS 299 11.7 Interpretation of Findings 304 11.7.1 Case Processing and Coding Summary 304 11.7.2 Analyzing Logistic Models 305 11.8 Summary of the Spss Commands for Logistic Regression 310 11.9 Exercise 310 
11.9.1 Short Answer Questions 310 11.9.2 Multiple Choice Questions 311 11.9.3 Assignment 312 11.10 Case Study on Logistic Regression 313 12 Application of Factor Analysis 319 12.1 Introduction 319 12.2 Terminologies Used in Factor Analysis 320 12.2.1 Principal Component Analysis 320 12.2.2 Eigenvalue 320 12.2.3 Kaiser Criterion 321 12.2.4 The Scree Test 321 12.2.5 Communality 321 12.2.6 Factor Loading 322 12.2.7 Varimax Rotation 322 12.3 Assumptions in Factor Analysis 322 12.4 Steps in Factor Analysis 323 12.5 Application of Factor Analysis 323 12.6 Factor Analysis with Spss 324 12.6.1 Computation in Factor Analysis Using SPSS 326 12.7 Summary of the Spss Commands for Factor Analysis 336 12.8 Exercise 336 12.8.1 Short Answer Questions 336 12.8.2 Multiple Choice Questions 337 12.8.3 Assignment 338 12.9 Case Study on Factor Analysis 339 Appendix 346 Bibliography 360 Index 368
£89.96
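The book works entirely through SPSS menus; for readers who prefer code, rough R analogues of two of the Chapter 8 nonparametric procedures (the chi-square test of independence and the Mann–Whitney U test) are sketched below with invented data.
```r
# Invented 2x2 table and group data; R analogues of the SPSS procedures in Chapter 8.
tab <- matrix(c(30, 20, 10, 40), nrow = 2,
              dimnames = list(Group = c("A", "B"),
                              Outcome = c("Yes", "No")))
chisq.test(tab)                       # test of independence (Yates' correction applied)

times_a <- c(12.1, 15.3, 14.2, 10.8, 13.5, 16.0)  # e.g. sprint times, training group A
times_b <- c(18.2, 17.4, 16.9, 19.1, 15.7, 20.3)  # training group B
wilcox.test(times_a, times_b)         # Mann-Whitney U / Wilcoxon rank-sum test
```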
John Wiley & Sons Inc Exploring Arduino Tools and Techniques for
Book SynopsisThe bestselling beginner Arduino guide, updated with new projects! Exploring Arduino makes electrical engineering and embedded software accessible. Learn step by step everything you need to know about electrical engineering, programming, and human-computer interaction through a series of increasingly complex projects.Table of ContentsIntroduction xxv Part I Arduino Engineering Basics 1 1 Getting Started and Understanding the Arduino Landscape 3 Exploring the Arduino Ecosystem 4 Arduino Functionality 5 The Microcontroller 7 Programming Interfaces 8 Input/Output: GPIO, ADCs, and Communication Busses 9 Power 9 Arduino Boards 11 Creating Your First Program 15 Downloading and Installing the Arduino IDE 16 Running the IDE and Connecting to the Arduino 17 Breaking Down Your First Program 18 Summary 21 2 Digital Inputs, Outputs, and Pulse-Width Modulation 23 Digital Outputs 24 Wiring Up an LED and Using Breadboards 24 Working with Breadboards 24 Wiring LEDs 25 Programming Digital Outputs 29 Using For Loops 30 Pulse-Width Modulation with analogWrite() 31 Reading Digital Inputs 35 Reading Digital Inputs with Pull-Down Resistors 35 Working with “Bouncy” Buttons 38 Building a Controllable RGB LED Nightlight 42 Summary 46 3 Interfacing with Analog Sensors 47 Understanding Analog and Digital Signals 48 Comparing Analog and Digital Signals 48 Converting an Analog Signal to Digital 49 Reading Analog Sensors with the Arduino: analogRead() 51 Reading a Potentiometer 51 Using Analog Sensors 56 Using Variable Resistors to Make Your Own Analog Sensors 60 Using Resistive Voltage Dividers 61 Using Analog Inputs to Control Analog Outputs 64 Summary 66 Part II Interfacing with Your Environment 67 4 Using Transistors and Driving DC Motors 69 Driving DC Motors 70 Handling High-Current Inductive Loads 71 Using Transistors as Switches 72 Using Protection Diodes73 Using a Secondary Power Source 74 Wiring the Motor 74 Controlling Motor Speed with PWM 76 Using an H-Bridge to Control DC Motor Direction 78 Building an H-Bridge Circuit 80 Operating an H-Bridge Circuit 82 Building a Roving Robot 86 Choosing the Robot Parts 87 Selecting a Motor and Gearbox 87 Powering Your Robot 87 Constructing the Robot 89 Writing the Robot Software 92 Bringing It Together 96 Summary 97 5 Driving Stepper and Servo Motors 99 Driving Servo Motors 100 Understanding the Difference between Continuous Rotation and Standard Servos 100 Understanding Servo Control 101 Controlling a Servo 104 Building a Sweeping Distance Sensor 105 Understanding and Driving Stepper Motors 109 How Bipolar Stepper Motors Work 111 Making Your Stepper Move 113 Building a “One-Minute Chronograph” 117 Wiring and Building the Chronograph 117 Programming the Chronograph 119 Summary 124 6 Making Sounds and Music 125 Understanding How Speakers Work 126 The Properties of Sound 126 How a Speaker Produces Sound 128 Using tone() to Make Sounds 129 Including a Definition File 129 Wiring the Speaker 130 Making Sound Sequences 133 Using Arrays 133 Making Note and Duration Arrays 134 Completing the Program 134 Understanding the Limitations of the tone() Function 136 Building a Micro Piano 136 Summary 139 7 USB Serial Communication 141 Understanding the Arduino’s Serial Communication Capabilities 142 Arduino Boards with an Internal or External FTDI or Silicon Labs USB-to-Serial Converter 143 Arduino Boards with a Secondary USB-Capable ATmega MCU Emulating a Serial Converter 146 Arduino Boards with a Single USB-Capable MCU 147 Arduino Boards with USB-Host Capabilities 147 
Listening to the Arduino 148 Using print Statements 148 Using Special Characters 150 Changing Data Type Representations 152 Talking to the Arduino 152 Configuring the Arduino IDE’s Serial Monitor to Send Command Strings 152 Reading Incoming Data from a Computer or Other Serial Device 153 Telling the Arduino to Echo Incoming Data 153 Understanding the Differences between Chars and Ints 154 Sending Single Characters to Control an LED 156 Sending Lists of Values to Control an RGB LED 158 Talking to a Desktop App 161 Installing Processing 162 Controlling a Processing Sketch from Your Arduino 163 Sending Data from Processing to Your Arduino 166 Summary 169 8 Emulating USB Devices 171 Emulating a Keyboard 173 Typing Data into the Computer 173 Commanding Your Computer to Do Your Bidding 177 Emulating a Mouse 178 Summary 182 9 Shift Registers 183 Understanding Shift Registers 184 Sending Parallel and Serial Data 185 Working with the 74HC595 Shift Register 186 Understanding the Shift Register pin Functions 186 Understanding How the Shift Register Works 187 Shifting Serial Data from the Arduino 189 Converting Between Binary and Decimal Formats 192 Controlling Light Animations with a Shift Register 192 Building a “Light Rider” 192 Responding to Inputs with an LED Bar Graph 194 Summary 197 Part III Communication Interfaces 199 10 The I2C Bus 201 History of the I2C Bus 202 I2C Hardware Design 203 Communication Scheme and ID Numbers 203 Hardware Requirements and Pull-Up Resistors 206 Communicating with an I2C Temperature Probe 208 Setting Up the Hardware208 Referencing the Datasheet 210 Writing the Software 212 Combining Shift Registers, Serial Communication, and I2C Communications 214 Building the Hardware for a Temperature Monitoring System 214 Modifying the Embedded Program 215 Writing the Processing Sketch 218 Summary 221 11 The SPI Bus and Third-Party Libraries 223 Overview of the SPI Bus 224 SPI Hardware and Communication Design 225 Hardware Configuration 225 Communication Scheme 227 Comparing SPI to I2C and UART 227 Communicating with an SPI Accelerometer 228 What is an Accelerometer? 
229 Gathering Information from the Datasheet 231 Setting Up the Hardware233 Writing the Software 235 Installing the Adafruit Sensor Libraries 236 Leveraging the Library 237 Creating an Audiovisual Instrument Using a 3-Axis Accelerometer 241 Setting Up the Hardware242 Modifying the Software 242 Summary 246 12 Interfacing with Liquid Crystal Displays 247 Setting Up the LCD 248 Using the LiquidCrystal Library to Write to the LCD 251 Adding Text to the Display 252 Creating Special Characters and Animations 254 Building a Personal Thermostat 258 Setting Up the Hardware 258 Displaying Data on the LCD 261 Adjusting the Set Point with a Button 264 Adding an Audible Warning and a Fan 265 Bringing It All Together: The Complete Program 266 Taking This Project to the Next Level 270 Summary 271 Part IV Digging Deeper and Combining Functions 273 13 Interrupts and Other Special Functions 275 Using Hardware Interrupts 276 Knowing the Tradeoffs Between Polling and Interrupting 277 Ease of Implementation (Software) 277 Ease of Implementation (Hardware) 277 Multitasking 278 Acquisition Accuracy 278 Understanding the Arduino Hardware Interrupt Capabilities 278 Building and Testing a Hardware-Debounced Button Interrupt Circuit 279 Creating a Hardware-Debouncing Circuit 280 Assembling the Complete Test Circuit 284 Writing the Software 285 Using Timer Interrupts 288 Understanding Timer Interrupts 288 Getting the Library 289 Executing Two Tasks Simultaneously(ish) 289 Building an Interrupt-Driven Sound Machine 290 Sound Machine Hardware 291 Sound Machine Software 291 Summary 294 14 Data Logging with SD Cards 295 Getting Ready for Data Logging 296 Formatting Data with CSV Files 297 Preparing an SD Card for Data Logging 297 Formatting Your SD Card Using a Windows PC 298 Formatting Your SD Card Using Mac OS 300 Formatting Your SD Card Using Linux 302 Interfacing the Arduino with an SD Card 304 SD Card Shields 304 SD Card SPI Interface 307 Writing to an SD Card 307 Reading from an SD Card 312 Real-Time Clocks 317 Understanding Real-Time Clocks 317 Communicating with a Real-Time Clock 317 Using the RTC Arduino Third-Party Library 318 Using a Real-Time Clock 319 Installing the RTC and SD Card Modules 319 Updating the Software 320 Building an Entrance Logger 327 Logger Hardware 328 Logger Software 329 Data Analysis 334 Summary 335 Part V Going Wireless 337 15 Wireless RF Communications 339 The Electromagnetic Spectrum 340 The Spectrum 342 How Your RF Link Will Send and Receive Data 343 Receiving Key Presses with the RF Link 346 Connecting Your Receiver 346 Programming Your Receiver 347 Making a Wireless Doorbell 351 Wiring the Receiver 351 Programming the Receiver 351 The Start of Your Smart Home—Controlling a Lamp 354 Your Home’s AC Power 356 How a Relay Works 356 Programming the Relay Control 358 Hooking up Your Lamp and Relay to the Arduino 360 Summary 361 16 Bluetooth Connectivity 363 Demystifying Bluetooth 364 Bluetooth Standards and Versions 364 Bluetooth Profiles and BTLE GATT Services 365 Communication between Your Arduino and Your Phone 366 Reading a Sensor over BTLE 366 Adding Support for Third-Party Boards to the Arduino IDE 367 Installing the BTLE Module Library 369 Programming the Feather Board 369 Connecting Your Smartphone to Your BTLE Transmitter 377 Sending Commands from Your Phone over BTLE 379 Parsing Command Strings 380 Commanding Your BTLE Device with Natural Language 384 Controlling an AC Lamp with Bluetooth 389 How Your Phone “Pairs” to BTLE Devices 389 Writing the Proximity Control Software 390 
Pairing Your Phone 394 Pairing an Android Phone 394 Pairing an iPhone 395 Make Your Lamp React to Your Presence 396 Summary 397 17 Wi-Fi and the Cloud 399 The Web, the Arduino, and You 400 Networking Lingo 401 The Internet vs. the World Wide Web vs. the Cloud 401 IP Address 401 Network Address Translation 402 MAC Address 402 HTML 402 HTTP and HTTPS 402 GET/POST 403 DHCP 403 DNS 403 Clients and Servers 403 Your Wi-Fi–Enabled Arduino 404 Controlling Your Arduino from the Web 404 Setting Up the I/O Control Hardware 404 Preparing the Arduino IDE for Use with the Feather Board.406 Ensuring the Wi-Fi Library is Matched to the Wi-Fi Module’s Firmware 407 Checking the WINC1500’s Firmware Version 408 Updating the WINC1500’s Firmware 408 Writing an Arduino Server Sketch 408 Connecting to the Network and Retrieving an IP Address via DHCP 409 Writing the Code for a Bare-Minimum Web Server 412 Controlling Your Arduino from Inside and Outside Your Local Network 423 Controlling Your Arduino over the Local Network 423 Using Port Forwarding to Control Your Arduino from Anywhere 425 Interfacing with Web APIs 427 Using a Weather API428 Creating an Account with the API Service Provider 429 Understanding How APIs are Structured 430 JSON-Formatted Data and Your Arduino 430 Fetching and Parsing Weather Data 431 Getting the Local Temperature from the Web on Your Arduino 433 Completing the Live Temperature Display 440 Wiring up the LED Readout Display 440 Driving the Display with Temperature Data 443 Summary 449 Appendix A: Deciphering Datasheets and Schematics 451 Index 461
£26.35
John Wiley & Sons Inc Using Excel for Business and Financial Modelling
Book SynopsisA hands-on guide to using Excel in the business context First published in 2012, Using Excel for Business and Financial Modelling contains step-by-step instructions of how to solve common business problems using financial models, including downloadable Excel templates, a list of shortcuts and tons of practical tips and techniques you can apply straight away. Whilst there are many hundreds of tools, features and functions in Excel, this book focuses on the topics most relevant to finance professionals. It covers these features in detail from a practical perspective, but also puts them in context by applying them to practical examples in the real world. Learn to create financial models to help make business decisions whilst applying modelling best practice methodology, tools and techniques. Provides the perfect mix of practice and theory Helps you become a DIY Excel modelling specialist Includes updates for Excel 2019/365 and Excel for Table of ContentsPreface xi Chapter 1 What is Financial Modelling? 1 What’s the Difference Between a Spreadsheet and a Financial Model? 3 Types and Purposes of Financial Models 5 Tool Selection 6 What Skills Do You Need to Be a Good Financial Modeller? 17 The “Ideal” Financial Modeller 23 Summary 27 Chapter 2 Building a Model 29 Model Design 29 The Golden Rules for Model Design 31 Design Issues 32 The Workbook Anatomy of a Model 33 Project Planning Your Model 36 Model Layout Flowcharting 37 Steps to Building a Model 39 Information Requests 47 Version-Control Documentation 49 Summary 50 Chapter 3 Best-Practice Principles of Modelling 51 Document Your Assumptions 51 Linking, Not Hardcoding 52 Enter Data Only Once 53 Avoid Bad Habits 53 Use Consistent Formulas 53 Format and Label Clearly 54 Methods and Tools of Assumptions Documentation 55 Linked Dynamic Text Assumptions Documentation 62 What Makes a Good Model? 65 Summary 67 Chapter 4 Financial Modelling Techniques 69 The Problem with Excel 69 Error Avoidance Strategies 71 How Long Should a Formula Be? 
76 Linking to External Files 78 Building Error Checks 81 Circular References 85 Summary 90 Chapter 5 Using Excel in Financial Modelling 91 Formulas and Functions in Excel 91 Excel Versions 94 Handy Excel Shortcuts 100 Cell Referencing Best Practices 104 Named Ranges 107 Basic Excel Functions 110 Logical Functions 114 Nesting Logical Functions 117 Summary 125 Chapter 6 Functions for Financial Modelling 127 Aggregation Functions 127 LOOKUP Functions 139 Nesting Index and Match 150 OFFSET Function 153 Regression Analysis 158 Choose Function 164 Working with Dates 165 Financial Project Evaluation Functions 171 Loan Calculations 177 Summary 183 Chapter 7 Tools for Model Display 185 Basic Formatting 185 Custom Formatting 186 Conditional Formatting 191 Sparklines 195 Bulletproofing Your Model 199 Customising the Display Settings 203 Form Controls 210 Summary 226 Chapter 8 Tools for Financial Modelling 227 Hiding Sections of a Model 227 Grouping 233 Array Formulas 234 Goal Seeking 240 Structured Reference Tables 242 PivotTables 245 Macros 254 Summary 263 Chapter 9 Common Uses of Tools in Financial Modelling 265 Escalation Methods for Modelling 265 Understanding Nominal and Effective (Real) Rates 270 Calculating a Cumulative Sum (Running Totals) 274 How to Calculate a Payback Period 275 Weighted Average Cost of Capital (WACC) 278 Building a Tiering Table 282 Modelling Depreciation Methods 286 Break-Even Analysis 295 Summary 300 Chapter 10 Model Review 301 Rebuilding an Inherited Model 301 Improving Model Performance 312 Auditing a Financial Model 317 Summary 323 Appendix: QA Log 323 Chapter 11 Stress Testing, Scenarios, and Sensitivity Analysis in Financial Modelling 325 What are the Differences Between Scenario, Sensitivity, and What-If Analyses? 326 Overview of Scenario Analysis Tools and Methods 328 Advanced Conditional Formatting 337 Comparing Scenario Methods 340 Adding Probability to a Data Table 350 Summary 351 Chapter 12 Presenting Model Output 353 Preparing an Oral Presentation for Model Results 353 Preparing a Graphic or Written Presentation for Model Results 355 Chart Types 358 Working with Charts 367 Handy Charting Hints 374 Dynamic Named Ranges 376 Charting with Two Different Axes and Chart Types 382 Bubble Charts 384 Creating a Dynamic Chart 387 Waterfall Charts 391 Summary 395 About the Author 397 About the Website 399 Index 403
£56.70
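The table of contents of the listing above includes financial project evaluation functions and payback-period calculations. The book itself works entirely in Excel; purely as an illustrative sketch of the same arithmetic, with made-up cash flows and a made-up discount rate, the calculation can be expressed in R as follows.

```r
# Illustrative only: NPV and simple payback period for a made-up project.
# The book performs these calculations with Excel formulas; this is just the
# same arithmetic written in R.
cash_flows <- c(-1000, 300, 400, 500, 300)    # year-0 outlay, then yearly inflows
rate <- 0.08                                  # assumed discount rate

periods <- seq_along(cash_flows) - 1          # 0, 1, 2, ...
npv <- sum(cash_flows / (1 + rate)^periods)   # discounted sum of all cash flows

cumulative <- cumsum(cash_flows)              # running total of undiscounted flows
payback <- periods[which(cumulative >= 0)[1]] # first year the project breaks even

npv      # about 238
payback  # 3
```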
John Wiley & Sons Inc Modern Computational Finance
Book SynopsisArguably the strongest addition to numerical finance of the past decade, Algorithmic Adjoint Differentiation (AAD) is the technology implemented in modern financial software to produce thousands of accurate risk sensitivities, within seconds, on light hardware.AAD recently became a centerpiece of modern financial systems and a key skill for all quantitative analysts, developers, risk professionals or anyone involved with derivatives. It is increasingly taught in Masters and PhD programs in finance.Danske Bank''s wide scale implementation of AAD in its production and regulatory systems won the In-House System of the Year 2015 Risk award. The Modern Computational Finance books, written by three of the very people who designed Danske Bank''s systems, offer a unique insight into the modern implementation of financial models. The volumes combine financial modelling, mathematics and programming to resolve real life financial problems and produce effective derivatives Table of ContentsModern Computational Finance xi Preface by Leif Andersen xv Acknowledgments xix Introduction xxi About the Companion C++ Code xxv PART I Modern Parallel Programming 1 Introduction 3 CHAPTER 1 Effective C++ 17 CHAPTER 2 Modern C++ 25 2.1 Lambda expressions 25 2.2 Functional programming in C++ 28 2.3 Move semantics 34 2.4 Smart pointers 41 CHAPTER 3 Parallel C++ 47 3.1 Multi-threaded Hello World 49 3.2 Thread management 50 3.3 Data sharing 55 3.4 Thread local storage 56 3.5 False sharing 57 3.6 Race conditions and data races 62 3.7 Locks 64 3.8 Spinlocks 66 3.9 Deadlocks 67 3.10 RAII locks 68 3.11 Lock-free concurrent design 70 3.12 Introduction to concurrent data structures 72 3.13 Condition variables 74 3.14 Advanced synchronization 80 3.15 Lazy initialization 83 3.16 Atomic types 86 3.17 Task management 89 3.18 Thread pools 96 3.19 Using the thread pool 108 3.20 Debugging and optimizing parallel programs 113 PART II Parallel Simulation 123 Introduction 125 CHAPTER 4 Asset Pricing 127 4.1 Financial products 127 4.2 The Arbitrage Pricing Theory 140 4.3 Financial models 151 CHAPTER 5 Monte-Carlo 185 5.1 The Monte-Carlo algorithm 185 5.2 Simulation of dynamic models 192 5.3 Random numbers 200 5.4 Better random numbers 202 CHAPTER 6Serial Implementation 213 6.1 The template simulation algorithm 213 6.2 Random number generators 223 6.3 Concrete products 230 6.4 Concrete models 245 6.5 User interface 263 6.6 Results 268 CHAPTER 7 Parallel Implementation 271 7.1 Parallel code and skip ahead 271 7.2 Skip ahead with mrg32k3a 276 7.3 Skip ahead with Sobol 282 7.4 Results 283 PART III Constant Time Differentiation 285 Introduction 287 CHAPTER 8 Manual Adjoint Differentiation 295 8.1 Introduction to Adjoint Differentiation 295 8.2 Adjoint Differentiation by hand 308 8.3 Applications in machine learning and finance 315 CHAPTER 9 Algorithmic Adjoint Differentiation 321 9.1 Calculation graphs 322 9.2 Building and applying DAGs 328 9.3 Adjoint mathematics 340 9.4 Adjoint accumulation and DAG traversal 344 9.5 Working with tapes 349 CHAPTER 10 Effective AAD and Memory Management 357 10.1 The Node class 359 10.2 Memory management and the Tape class 362 10.3 The Number class 379 10.4 Basic instrumentation 398 CHAPTER 11 Discussion and Limitations 401 11.1 Inputs and outputs 401 11.2 Higher-order derivatives 402 11.3 Control flow 402 11.4 Memory 403 CHAPTER 12 Differentiation of the Simulation Library 407 12.1 Active code 407 12.2 Serial code 409 12.3 User interface 417 12.4 Serial results 424 12.5 Parallel code 426 12.6 
Parallel results 433 CHAPTER 13 Check-Pointing and Calibration 439 13.1 Check-pointing 439 13.2 Explicit calibration 448 13.3 Implicit calibration 475 CHAPTER 14 Multiple Differentiation in Almost Constant Time 483 14.1 Multidimensional differentiation 483 14.2 Traditional Multidimensional AAD 484 14.3 Multidimensional adjoints 485 14.4 AAD library support 487 14.5 Instrumentation of simulation algorithms 494 14.6 Results 499 CHAPTER 15 Acceleration with Expression Templates 503 15.1 Expression nodes 504 15.2 Expression templates 507 15.3 Expression templated AAD code 524 Debugging AAD Instrumentation 541 Conclusion 547 References 549 Index 555
£67.50
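The synopsis above presents AAD as the technique behind fast risk sensitivities, and the table of contents includes a chapter on adjoint differentiation by hand. The book's implementation is in C++; as a hedged illustration of the reverse-sweep idea only, and not the authors' library, here is a hand-written adjoint pass in R for a toy function, checked against the analytic derivatives.

```r
# Toy example of adjoint (reverse-mode) differentiation done by hand.
# f(x, y) = exp(x * y) + sin(x): record the forward pass, then propagate
# adjoints backwards through each intermediate.
x <- 1.3; y <- 0.7

# Forward sweep: evaluate and store intermediates.
a <- x * y
b <- exp(a)
s <- sin(x)
f <- b + s

# Reverse sweep: seed df/df = 1 and apply the chain rule node by node.
f_bar <- 1
b_bar <- f_bar               # f = b + s
s_bar <- f_bar
a_bar <- b_bar * exp(a)      # b = exp(a)
x_bar <- s_bar * cos(x)      # s = sin(x)
x_bar <- x_bar + a_bar * y   # a = x * y
y_bar <- a_bar * x

# Analytic derivatives for comparison.
c(x_bar, y * exp(x * y) + cos(x))   # both give df/dx
c(y_bar, x * exp(x * y))            # both give df/dy
```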
John Wiley & Sons Inc The R Book
Book SynopsisA start-to-finish guide to one of the most useful programming languages for researchers in a variety of fields. In the newly revised Third Edition of The R Book, a team of distinguished teachers and researchers delivers a user-friendly and comprehensive discussion of foundational and advanced topics in the R software language, which is used widely in science, engineering, medicine, economics, and other fields. The book is designed to be used as both a complete text, readable from cover to cover, and as a reference manual for practitioners seeking authoritative guidance on particular topics. This latest edition offers instruction on the use of the RStudio GUI, an easy-to-use environment for those new to R. It provides readers with a complete walkthrough of the R language, beginning at a point that assumes no prior knowledge of R and very little previous knowledge of statistics. Readers will also find: A thorough introduction to fundamental conceptTable of ContentsPreface 1 Getting started 1 2 Technical background 17 3 Essentials of the R language 55 4 Data input and dataframes 195 5 Graphics 235 6 Graphics in more detail 289 7 Tables 357 8 Probability distributions in R 369 9 Testing 401 10 Regression 433 11 Generalised Linear Models 495 12 Generalised Additive Models 575 13 Mixed-effect models 599 14 Non-linear regression 627 15 Survival analysis 651 16 Designed experiments 669 17 Meta-analysis 701 18 Time Series 717 19 Multivariate Statistics 743 20 Classification and regression trees 765 21 Spatial Statistics 785 22 Bayesian Statistics 807 23 Simulation models 833
£67.50
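Since the synopsis above promises a walkthrough from first steps through regression, a few lines of the kind of base-R session the book builds toward may help; the snippet uses only a built-in data set and is not taken from the book.

```r
# A minimal base-R session of the kind the book works up to:
# inspect a built-in data frame, fit a linear regression, and plot it.
data(cars)                             # speed (mph) and stopping distance (ft)
summary(cars)

fit <- lm(dist ~ speed, data = cars)   # ordinary least-squares fit
summary(fit)                           # coefficients, R-squared, etc.
confint(fit)                           # 95% confidence intervals

plot(dist ~ speed, data = cars)
abline(fit, lwd = 2)                   # add the fitted line
```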
John Wiley & Sons Inc Micromechanics With Mathematica
Book SynopsisDemonstrates the simplicity and effectiveness of Mathematica as the solution to practical problems in composite materials. Designed for those who need to learn how micromechanical approaches can help understand the behaviour of bodies with voids, inclusions, defects, this book is perfect for readers without a programming background.Table of ContentsPreface ix About the Companion Website xi 1 Coordinate Transformation and Tensors 1 1.1 Index Notation 1 1.1.1 Some Examples of Index Notation in 3-D 3 1.1.2 Mathematica Implementation 3 1.1.3 Kronecker Delta 6 1.1.4 Permutation Symbols 9 1.1.5 Product of Matrices 10 1.2 Coordinate Transformations (Cartesian Tensors) 11 1.3 Definition of Tensors 13 1.3.1 Tensor of Rank 0 (Scalar) 13 1.3.2 Tensor of Rank 1 (Vector) 14 1.3.3 Tensor of Rank 2 15 1.3.4 Tensor of Rank 3 17 1.3.5 Tensor of Rank 4 17 1.3.6 Differentiation 19 1.3.7 Differentiation of Cartesian Tensors 20 1.4 Invariance of Tensor Equations 21 1.5 Quotient Rule 22 1.6 Exercises 23 References 24 2 Field Equations 25 2.1 Concept of Stress 25 2.1.1 Properties of Stress 29 2.1.2 (Stress) Boundary Conditions 30 2.1.3 Principal Stresses 31 2.1.4 Stress Deviator 35 2.1.5 Mohr’s Circle 38 2.2 Strain 40 2.2.1 Shear Deformation 47 2.3 Compatibility Condition 49 2.4 Constitutive Relation, Isotropy, Anisotropy 50 2.4.1 Isotropy 52 2.4.2 Elastic Modulus 54 2.4.3 Orthotropy 56 2.4.4 2-D Orthotropic Materials 57 2.4.5 Transverse Isotropy 57 2.5 Constitutive Relation for Fluids 58 2.5.1 Thermal Effect 58 2.6 Derivation of Field Equations 59 2.6.1 Divergence Theorem (Gauss Theorem) 59 2.6.2 Material Derivative 60 2.6.3 Equation of Continuity 62 2.6.4 Equation of Motion 62 2.6.5 Equation of Energy 63 2.6.6 Isotropic Solids 65 2.6.7 Isotropic Fluids 65 2.6.8 Thermal Effects 66 2.7 General Coordinate System 66 2.7.1 Introduction to Tensor Analysis 66 2.7.2 Definition of Tensors in Curvilinear Systems 68 2.7.3 Metric Tensor10, gij 69 2.7.4 Covariant Derivatives 70 2.7.5 Examples 73 2.7.6 Vector Analysis 75 2.8 Exercises 77 References 80 3 Inclusions in Infinite Media 81 3.1 Eshelby’s Solution for an Ellipsoidal Inclusion Problem 82 3.1.1 Eigenstrain Problem 85 3.1.2 Eshelby Tensors for an Ellipsoidal Inclusion 87 3.1.3 Inhomogeneity (Inclusion) Problem 95 3.2 Multilayered Inclusions 104 3.2.1 Background 104 3.2.2 Implementation of Index Manipulation in Mathematica 105 3.2.3 General Formulation 108 3.2.4 Exact Solution for Two-Phase Materials 116 3.2.5 Exact Solution for Three-Phase Materials 123 3.2.6 Exact Solution for Four-Phase Materials 132 3.2.7 Exact Solution for 2-D Multiphase Materials 137 3.3 Thermal Stress 137 3.3.1 Thermal Stress Due to Heat Source 138 3.3.2 Thermal Stress Due to Heat Flow 146 3.4 Airy’s Stress Function Approach 155 3.4.1 Airy’s Stress Function 156 3.4.2 Mathematica Programming of Complex Variables 161 3.4.3 Multiphase Inclusion Problems Using Airy’s Stress Function 163 3.5 Effective Properties 172 3.5.1 Upper and Lower Bounds of Effective Properties 173 3.5.2 Self-Consistent Approximation 175 3.5.3 Source Code for micromech.m 178 3.6 Exercises 188 References 189 4 Inclusions in Finite Matrix 191 4.1 General Approaches for Numerically Solving Boundary Value Problems 192 4.1.1 Method of Weighted Residuals 192 4.1.2 Rayleigh–Ritz Method 203 4.1.3 Sturm–Liouville System 205 4.2 Steady-State Heat Conduction Equations 213 4.2.1 Derivation of Permissible Functions 213 4.2.2 Finding Temperature Field Using Permissible Functions 227 4.3 Elastic Fields with Bounded Boundaries 
232 4.4 Numerical Examples 238 4.4.1 Homogeneous Medium 238 4.4.2 Single Inclusion 240 4.5 Exercises 251 References 252 Appendix A Introduction to Mathematica 253 A.1 Essential Commands/Statements 255 A.2 Equations 256 A.3 Differentiation/Integration 260 A.4 Matrices/Vectors/Tensors 260 A.5 Functions 262 A.6 Graphics 263 A.7 Other Useful Functions 265 A.8 Programming in Mathematica 267 A.8.1 Control Statements 268 A.8.2 Tensor Manipulations 270 References 272 Index 273
£79.16
John Wiley & Sons Inc Theory of Lift
Book SynopsisThis introductory text walks readers from the fundamental mechanics of lift to the stage of being able to make practical calculations and predictions of the coefficient of lift for realistic wing profile and platform geometries.Trade Review“This book is a very useful digest of key points from the literature, carefully structured and presented with helpful pointers as to how the successive aerodynamical models can be implemented in the ‘now so readily available interactive matrix computation systems.” (Aeronautical Journal, 1 August 2013)Table of ContentsPreface xvii Series Preface xxiii Part One Plane Ideal Aerodynamics 1 Preliminary Notions 3 1.1 Aerodynamic Force and Moment 3 1.1.1 Motion of the Frame of Reference 3 1.1.2 Orientation of the System of Coordinates 4 1.1.3 Components of the Aerodynamic Force 4 1.1.4 Formulation of the Aerodynamic Problem 4 1.2 Aircraft Geometry 5 1.2.1 Wing Section Geometry 6 1.2.2 Wing Geometry 7 1.3 Velocity 8 1.4 Properties of Air 8 1.4.1 Equation of State: Compressibility and the Speed of Sound 8 1.4.2 Rheology: Viscosity 10 1.4.3 The International Standard Atmosphere 12 1.4.4 Computing Air Properties 12 1.5 Dimensional Theory 13 1.5.1 Alternative methods 16 1.5.2 Example: Using Octave to Solve a Linear System 16 1.6 Example: NACA Report No. 502 18 1.7 Exercises 19 1.8 Further Reading 22 References 22 2 Plane Ideal Flow 25 2.1 Material Properties: The Perfect Fluid 25 2.2 Conservation of Mass 26 2.2.1 Governing Equations: Conservation Laws 26 2.3 The Continuity Equation 26 2.4 Mechanics: The Euler Equations 27 2.4.1 Rate of Change of Momentum 27 2.4.2 Forces Acting on a Fluid Particle 28 2.4.3 The Euler Equations 29 2.4.4 Accounting for Conservative External Forces 29 2.5 Consequences of the Governing Equations 30 2.5.1 The Aerodynamic Force 30 2.5.2 Bernoulli’s Equation 33 2.5.3 Circulation, Vorticity, and Irrotational Flow 33 2.5.4 Plane Ideal Flows 35 2.6 The Complex Velocity 35 2.6.1 Review of Complex Variables 35 2.6.2 Analytic Functions and Plane Ideal Flow 38 2.6.3 Example: the Polar Angle Is Nowhere Analytic 40 2.7 The Complex Potential 41 2.8 Exercises 42 2.9 Further Reading 44 References 45 3 Circulation and Lift 47 3.1 Powers of z 47 3.1.1 Divergence and Vorticity in Polar Coordinates 48 3.1.2 Complex Potentials 48 3.1.3 Drawing Complex Velocity Fields with Octave 49 3.1.4 Example: k = 1, Corner Flow 50 3.1.5 Example: k = 0, Uniform Stream 51 3.1.6 Example: k =−1, Source 51 3.1.7 Example: k =−2, Doublet 52 3.2 Multiplication by a Complex Constant 53 3.2.1 Example: w = const., Uniform Stream with Arbitrary Direction 53 3.2.2 Example: w = i/z, Vortex 54 3.2.3 Example: Polar Components 54 3.3 Linear Combinations of Complex Velocities 54 3.3.1 Example: Circular Obstacle in a Stream 54 3.4 Transforming the Whole Velocity Field 56 3.4.1 Translating the Whole Velocity Field 56 3.4.2 Example: Doublet as the Sum of a Source and Sink 56 3.4.3 Rotating the Whole Velocity Field 56 3.5 Circulation and Outflow 57 3.5.1 Curve-integrals in Plane Ideal Flow 57 3.5.2 Example: Numerical Line-integrals for Circulation and Outflow 58 3.5.3 Closed Circuits 59 3.5.4 Example: Powers of z and Circles around the Origin 60 3.6 More on the Scalar Potential and Stream Function 61 3.6.1 The Scalar Potential and Irrotational Flow 61 3.6.2 The Stream Function and Divergence-free Flow 62 3.7 Lift 62 3.7.1 Blasius’s Theorem 62 3.7.2 The Kutta–Joukowsky Theorem 63 3.8 Exercises 64 3.9 Further Reading 65 References 66 4 Conformal Mapping 67 4.1 Composition of 
Analytic Functions 67 4.2 Mapping with Powers of ζ 68 4.2.1 Example: Square Mapping 68 4.2.2 Conforming Mapping by Contouring the Stream Function 69 4.2.3 Example: Two-thirds Power Mapping 69 4.2.4 Branch Cuts 70 4.2.5 Other Powers 71 4.3 Joukowsky’s Transformation 71 4.3.1 Unit Circle from a Straight Line Segment 71 4.3.2 Uniform Flow and Flow over a Circle 72 4.3.3 Thin Flat Plate at Nonzero Incidence 73 4.3.4 Flow over the Thin Flat Plate with Circulation 74 4.3.5 Joukowsky Aerofoils 75 4.4 Exercises 75 4.5 Further Reading 78 References 78 5 Flat Plate Aerodynamics 79 5.1 Plane Ideal Flow over a Thin Flat Plate 79 5.1.1 Stagnation Points 80 5.1.2 The Kutta–Joukowsky Condition 80 5.1.3 Lift on a Thin Flat Plate 81 5.1.4 Surface Speed Distribution 82 5.1.5 Pressure Distribution 83 5.1.6 Distribution of Circulation 84 5.1.7 Thin Flat Plate as Vortex Sheet 85 5.2 Application of Thin Aerofoil Theory to the Flat Plate 87 5.2.1 Thin Aerofoil Theory 87 5.2.2 Vortex Sheet along the Chord 87 5.2.3 Changing the Variable of Integration 88 5.2.4 Glauert’s Integral 88 5.2.5 The Kutta–Joukowsky Condition 89 5.2.6 Circulation and Lift 89 5.3 Aerodynamic Moment 89 5.3.1 Centre of Pressure and Aerodynamic Centre 90 5.4 Exercises 90 5.5 Further Reading 91 References 91 6 Thin Wing Sections 93 6.1 Thin Aerofoil Analysis 93 6.1.1 Vortex Sheet along the Camber Line 93 6.1.2 The Boundary Condition 93 6.1.3 Linearization 94 6.1.4 Glauert’s Transformation 95 6.1.5 Glauert’s Expansion 95 6.1.6 Fourier Cosine Decomposition of the Camber Line Slope 97 6.2 Thin Aerofoil Aerodynamics 98 6.2.1 Circulation and Lift 98 6.2.2 Pitching Moment about the Leading Edge 99 6.2.3 Aerodynamic Centre 100 6.2.4 Summary 101 6.3 Analytical Evaluation of Thin Aerofoil Integrals 101 6.3.1 Example: the NACA Four-digit Wing Sections 104 6.4 Numerical Thin Aerofoil Theory 105 6.5 Exercises 109 6.6 Further Reading 109 References 109 7 Lumped Vortex Elements 111 7.1 The Thin Flat Plate at Arbitrary Incidence, Again 111 7.1.1 Single Vortex 111 7.1.2 The Collocation Point 111 7.1.3 Lumped Vortex Model of the Thin Flat Plate 112 7.2 Using Two Lumped Vortices along the Chord 114 7.2.1 Postprocessing 116 7.3 Generalization to Multiple Lumped Vortex Panels 117 7.3.1 Postprocessing 117 7.4 General Considerations on Discrete Singularity Methods 117 7.5 Lumped Vortex Elements for Thin Aerofoils 119 7.5.1 Panel Chains for Camber Lines 119 7.5.2 Implementation in Octave 121 7.5.3 Comparison with Thin Aerofoil Theory 122 7.6 Disconnected Aerofoils 123 7.6.1 Other Applications 124 7.7 Exercises 125 7.8 Further Reading 125 References 126 8 Panel Methods for Plane Flow 127 8.1 Development of the CUSSSP Program 127 8.1.1 The Singularity Elements 127 8.1.2 Discretizing the Geometry 129 8.1.3 The Influence Matrix 131 8.1.4 The Right-hand Side 132 8.1.5 Solving the Linear System 134 8.1.6 Postprocessing 135 8.2 Exercises 137 8.2.1 Projects 138 8.3 Further Reading 139 References 139 8.4 Conclusion to Part I: The Origin of Lift 139 Part Two Three-dimensional Ideal Aerodynamics 9 Finite Wings and Three-Dimensional Flow 143 9.1 Wings of Finite Span 143 9.1.1 Empirical Effect of Finite Span on Lift 143 9.1.2 Finite Wings and Three-dimensional Flow 143 9.2 Three-Dimensional Flow 145 9.2.1 Three-dimensional Cartesian Coordinate System 145 9.2.2 Three-dimensional Governing Equations 145 9.3 Vector Notation and Identities 145 9.3.1 Addition and Scalar Multiplication of Vectors 145 9.3.2 Products of Vectors 146 9.3.3 Vector Derivatives 147 9.3.4 Integral Theorems for 
Vector Derivatives 148 9.4 The Equations Governing Three-Dimensional Flow 149 9.4.1 Conservation of Mass and the Continuity Equation 149 9.4.2 Newton’s Law and Euler’s Equation 149 9.5 Circulation 150 9.5.1 Definition of Circulation in Three Dimensions 150 9.5.2 The Persistence of Circulation 151 9.5.3 Circulation and Vorticity 151 9.5.4 Rotational Form of Euler’s Equation 153 9.5.5 Steady Irrotational Motion 153 9.6 Exercises 154 9.7 Further Reading 155 References 155 10 Vorticity and Vortices 157 10.1 Streamlines, Stream Tubes, and Stream Filaments 157 10.1.1 Streamlines 157 10.1.2 Stream Tubes and Stream Filaments 158 10.2 Vortex Lines, Vortex Tubes, and Vortex Filaments 159 10.2.1 Strength of Vortex Tubes and Filaments 159 10.2.2 Kinematic Properties of Vortex Tubes 159 10.3 Helmholtz’s Theorems 159 10.3.1 ‘Vortex Tubes Move with the Flow’ 159 10.3.2 ‘The Strength of a Vortex Tube is Constant’ 160 10.4 Line Vortices 160 10.4.1 The Two-dimensional Vortex 160 10.4.2 Arbitrarily Oriented Rectilinear Vortex Filaments 160 10.5 Segmented Vortex Filaments 161 10.5.1 The Biot–Savart Law 161 10.5.2 Rectilinear Vortex Filaments 162 10.5.3 Finite Rectilinear Vortex Filaments 164 10.5.4 Infinite Straight Line Vortices 164 10.5.5 Semi-infinite Straight Line Vortex 164 10.5.6 Truncating Infinite Vortex Segments 165 10.5.7 Implementing Line Vortices in Octave 165 10.6 Exercises 166 10.7 Further Reading 167 References 167 11 Lifting Line Theory 169 11.1 Basic Assumptions of Lifting Line Theory 169 11.2 The Lifting Line, Horseshoe Vortices, and the Wake 169 11.2.1 Deductions from Vortex Theorems 169 11.2.2 Deductions from the Wing Pressure Distribution 170 11.2.3 The Lifting Line Model of Air Flow 170 11.2.4 Horseshoe Vortex 170 11.2.5 Continuous Trailing Vortex Sheet 171 11.2.6 The Form of the Wake 172 11.3 The Effect of Downwash 173 11.3.1 Effect on the Angle of Incidence: Induced Incidence 173 11.3.2 Effect on the Aerodynamic Force: Induced Drag 174 11.4 The Lifting Line Equation 174 11.4.1 Glauert’s Solution of the Lifting Line Equation 175 11.4.2 Wing Properties in Terms of Glauert’s Expansion 176 11.5 The Elliptic Lift Loading 178 11.5.1 Properties of the Elliptic Lift Loading 179 11.6 Lift–Incidence Relation 180 11.6.1 Linear Lift–Incidence Relation 181 11.7 Realizing the Elliptic Lift Loading 182 11.7.1 Corrections to the Elliptic Loading Approximation 182 11.8 Exercises 182 11.9 Further Reading 183 References 183 12 Nonelliptic Lift Loading 185 12.1 Solving the Lifting Line Equation 185 12.1.1 The Sectional Lift–Incidence Relation 185 12.1.2 Linear Sectional Lift–Incidence Relation 185 12.1.3 Finite Approximation: Truncation and Collocation 185 12.1.4 Computer Implementation 187 12.1.5 Example: a Rectangular Wing 187 12.2 Numerical Convergence 188 12.3 Symmetric Spanwise Loading 189 12.3.1 Example: Exploiting Symmetry 191 12.4 Exercises 192 References 192 13 Lumped Horseshoe Elements 193 13.1 A Single Horseshoe Vortex 193 13.1.1 Induced Incidence of the Lumped Horseshoe Element 195 13.2 Multiple Horseshoes along the Span 195 13.2.1 A Finite-step Lifting Line in Octave 197 13.3 An Improved Discrete Horseshoe Model 200 13.4 Implementing Horseshoe Vortices in Octave 203 13.4.1 Example: Yawed Horseshoe Vortex Coefficients 205 13.5 Exercises 206 13.6 Further Reading 207 References 207 14 The Vortex Lattice Method 209 14.1 Meshing the Mean Lifting Surface of a Wing 209 14.1.1 Plotting the Mesh of a Mean Lifting Surface 210 14.2 A Vortex Lattice Method 212 14.2.1 The Vortex Lattice Equations 213 
14.2.2 Unit Normals to the Vortex-lattice 215 14.2.3 Spanwise Symmetry 215 14.2.4 Postprocessing Vortex Lattice Methods 215 14.3 Examples of Vortex Lattice Calculations 216 14.3.1 Campbell’s Flat Swept Tapered Wing 216 14.3.2 Bertin’s Flat Swept Untapered Wing 218 14.3.3 Spanwise and Chordwise Refinement 219 14.4 Exercises 220 14.5 Further Reading 221 14.5.1 Three-dimensional Panel Methods 222 References 222 Part Three Nonideal Flow in Aerodynamics 15 Viscous Flow 225 15.1 Cauchy’s First Law of Continuum Mechanics 225 15.2 Rheological Constitutive Equations 227 15.2.1 Perfect Fluid 227 15.2.2 Linearly Viscous Fluid 227 15.3 The Navier–Stokes Equations 228 15.4 The No-Slip Condition and the Viscous Boundary Layer 228 15.5 Unidirectional Flows 229 15.5.1 Plane Couette and Poiseuille Flows 229 15.6 A Suddenly Sliding Plate 230 15.6.1 Solution by Similarity Variable 230 15.6.2 The Diffusion of Vorticity 233 15.7 Exercises 234 15.8 Further Reading 234 References 235 16 Boundary Layer Equations 237 16.1 The Boundary Layer over a Flat Plate 237 16.1.1 Scales in the Conservation of Mass 237 16.1.2 Scales in the Streamwise Momentum Equation 238 16.1.3 The Reynolds Number 239 16.1.4 Pressure in the Boundary Layer 239 16.1.5 The Transverse Momentum Balance 239 16.1.6 The Boundary Layer Momentum Equation 240 16.1.7 Pressure and External Tangential Velocity 241 16.1.8 Application to Curved Surfaces 241 16.2 Momentum Integral Equation 241 16.3 Local Boundary Layer Parameters 243 16.3.1 The Displacement and Momentum Thicknesses 243 16.3.2 The Skin Friction Coefficient 243 16.3.3 Example: Three Boundary Layer Profiles 244 16.4 Exercises 248 16.5 Further Reading 249 References 249 17 Laminar Boundary Layers 251 17.1 Boundary Layer Profile Curvature 251 17.1.1 Pressure Gradient and Boundary Layer Thickness 252 17.2 Pohlhausen’s Quartic Profiles 252 17.3 Thwaites’s Method for Laminar Boundary Layers 254 17.3.1 F(λ) ≈ 0.45 − 6λ 255 17.3.2 Correlations for Shape Factor and Skin Friction 256 17.3.3 Example: Zero Pressure Gradient 256 17.3.4 Example: Laminar Separation from a Circular Cylinder 257 17.4 Exercises 260 17.5 Further Reading 261 References 262 18 Compressibility 263 18.1 Steady-State Conservation of Mass 263 18.2 Longitudinal Variation of Stream Tube Section 265 18.2.1 The Design of Supersonic Nozzles 266 18.3 Perfect Gas Thermodynamics 266 18.3.1 Thermal and Caloric Equations of State 266 18.3.2 The First Law of Thermodynamics 267 18.3.3 The Isochoric and Isobaric Specific Heat Coefficients 267 18.3.4 Isothermal and Adiabatic Processes 267 18.3.5 Adiabatic Expansion 268 18.3.6 The Speed of Sound and Temperature 269 18.3.7 The Speed of Sound and the Speed 269 18.3.8 Thermodynamic Characteristics of Air 270 18.3.9 Example: Stagnation Temperature 270 18.4 Exercises 270 18.5 Further Reading 271 References 271 19 Linearized Compressible Flow 273 19.1 The Nonlinearity of the Equation for the Potential 273 19.2 Small Disturbances to the Free-Stream 274 19.3 The Uniform Free-Stream 275 19.4 The Disturbance Potential 275 19.5 Prandtl–Glauert Transformation 276 19.5.1 Fundamental Linearized Compressible Flows 277 19.5.2 The Speed of Sound 278 19.6 Application of the Prandtl–Glauert Rule 279 19.6.1 Transforming the Geometry 279 19.6.2 Computing Aerodynamical Forces 280 19.6.3 The Prandlt–Glauert Rule in Two Dimensions 282 19.6.4 The Critical Mach Number 284 19.7 Sweep 284 19.8 Exercises 285 19.9 Further Reading 285 References 286 Appendix A Notes on Octave Programming 287 A. 1 Introduction 287 A. 
2 Vectorization 287 A.2. 1 Iterating Explicitly 288 A.2. 2 Preallocating Memory 288 A.2. 3 Vectorizing Function Calls 288 A.2. 4 Many Functions Act Elementwise on Arrays 289 A.2. 5 Functions Primarily Defined for Arrays 289 A.2. 6 Elementwise Arithmetic with Single Numbers 289 A.2. 7 Elementwise Arithmetic between Arrays 290 A.2. 8 Vector and Matrix Multiplication 290 A. 3 Generating Arrays 290 A.3. 1 Creating Tables with bsxfun 290 A. 4 Indexing 291 A.4. 1 Indexing by Logical Masks 291 A.4. 2 Indexing Numerically 291 A. 5 Just-in-Time Compilation 291 A. 6 Further Reading 292 References 292 Glossary 293 Nomenclature 305 Index 309
£76.46
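The synopsis above says the book takes readers to the point of making practical lift-coefficient calculations, and its early chapters cover the Kutta–Joukowsky theorem and thin-aerofoil theory, with Octave used throughout. As a sketch only, in R rather than Octave and with assumed flight conditions, the standard thin flat-plate results look like this.

```r
# Thin flat plate at incidence alpha: Kutta-Joukowsky lift from the circulation
# fixed by the Kutta condition. Illustrative values only; the book develops
# these formulas and implements its examples in Octave.
rho   <- 1.225          # air density, kg/m^3 (sea-level ISA)
V     <- 50             # free-stream speed, m/s
chord <- 1.2            # chord length, m
alpha <- 4 * pi / 180   # incidence, radians

Gamma <- pi * chord * V * sin(alpha)    # flat-plate circulation (Kutta condition)
Lp    <- rho * V * Gamma                # Kutta-Joukowsky: lift per unit span, N/m
Cl    <- Lp / (0.5 * rho * V^2 * chord) # lift coefficient

Cl                       # equals 2*pi*sin(alpha), about 0.44
2 * pi * sin(alpha)
```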
John Wiley & Sons Inc A Workout in Computational Finance with Website
Book SynopsisA comprehensive introduction to various numerical methods used in computational finance today Quantitative skills are a prerequisite for anyone working in finance or beginning a career in the field, as well as risk managers. A thorough grounding in numerical methods is necessary, as is the ability to assess their quality, advantages, and limitations. This book offers a thorough introduction to each method, revealing the numerical traps that practitioners frequently fall into. Each method is referenced with practical, real-world examples in the areas of valuation, risk analysis, and calibration of specific financial instruments and models. It features a strong emphasis on robust schemes for the numerical treatment of problems within computational finance. Methods covered include PDE/PIDE using finite differences or finite elements, fast and stable solvers for sparse grid systems, stabilization and regularization techniques for inverse problems resulting from the calibration oTable of ContentsAcknowledgements xiii About the Authors xv 1 Introduction and Reading Guide 1 2 Binomial Trees 7 2.1 Equities and Basic Options 7 2.2 The One Period Model 8 2.3 The Multiperiod Binomial Model 9 2.4 Black-Scholes and Trees 10 2.5 Strengths and Weaknesses of Binomial Trees 12 2.5.1 Ease of Implementation 12 2.5.2 Oscillations 12 2.5.3 Non-recombining Trees 14 2.5.4 Exotic Options and Trees 14 2.5.5 Greeks and Binomial Trees 15 2.5.6 Grid Adaptivity and Trees 15 2.6 Conclusion 16 3 Finite Differences and the Black-Scholes PDE 17 3.1 A Continuous Time Model for Equity Prices 17 3.2 Black-Scholes Model: From the SDE to the PDE 19 3.3 Finite Differences 23 3.4 Time Discretization 27 3.5 Stability Considerations 30 3.6 Finite Differences and the Heat Equation 30 3.6.1 Numerical Results 34 3.7 Appendix: Error Analysis 36 4 Mean Reversion and Trinomial Trees 39 4.1 Some Fixed Income Terms 39 4.1.1 Interest Rates and Compounding 39 4.1.2 Libor Rates and Vanilla Interest Rate Swaps 40 4.2 Black76 for Caps and Swaptions 43 4.3 One-Factor Short Rate Models 45 4.3.1 Prominent Short Rate Models 45 4.4 The Hull-White Model in More Detail 46 4.5 Trinomial Trees 47 5 Upwinding Techniques for Short Rate Models 55 5.1 Derivation of a PDE for Short Rate Models 55 5.2 Upwind Schemes 56 5.2.1 Model Equation 57 5.3 A Puttable Fixed Rate Bond under the Hull-White One Factor Model 63 5.3.1 Bond Details 64 5.3.2 Model Details 64 5.3.3 Numerical Method 65 5.3.4 An Algorithm in Pseudocode 68 5.3.5 Results 69 6 Boundary, Terminal and Interface Conditions and their Influence 71 6.1 Terminal Conditions for Equity Options 71 6.2 Terminal Conditions for Fixed Income Instruments 72 6.3 Callability and Bermudan Options 74 6.4 Dividends 74 6.5 Snowballs and TARNs 75 6.6 Boundary Conditions 77 6.6.1 Double Barrier Options and Dirichlet Boundary Conditions 77 6.6.2 Artificial Boundary Conditions and the Neumann Case 78 7 Finite Element Methods 81 7.1 Introduction 81 7.1.1 Weighted Residual Methods 81 7.1.2 Basic Steps 82 7.2 Grid Generation 83 7.3 Elements 85 7.3.1 1D Elements 86 7.3.2 2D Elements 88 7.4 The Assembling Process 90 7.4.1 Element Matrices 93 7.4.2 Time Discretization 97 7.4.3 Global Matrices 98 7.4.4 Boundary Conditions 101 7.4.5 Application of the Finite Element Method to Convection-Diffusion-Reaction Problems 103 7.5 A Zero Coupon Bond Under the Two Factor Hull-White Model 105 7.6 Appendix: Higher Order Elements 107 7.6.1 3D Elements 109 7.6.2 Local and Natural Coordinates 111 8 Solving Systems of Linear 
Equations 117 8.1 Direct Methods 118 8.1.1 Gaussian Elimination 118 8.1.2 Thomas Algorithm 119 8.1.3 LU Decomposition 120 8.1.4 Cholesky Decomposition 121 8.2 Iterative Solvers 122 8.2.1 Matrix Decomposition 123 8.2.2 Krylov Methods 125 8.2.3 Multigrid Solvers 126 8.2.4 Preconditioning 129 9 Monte Carlo Simulation 133 9.1 The Principles of Monte Carlo Integration 133 9.2 Pricing Derivatives with Monte Carlo Methods 134 9.2.1 Discretizing the Stochastic Differential Equation 135 9.2.2 Pricing Formalism 137 9.2.3 Valuation of a Steepener under a Two Factor Hull-White Model 137 9.3 An Introduction to the Libor Market Model 139 9.4 Random Number Generation 146 9.4.1 Properties of a Random Number Generator 147 9.4.2 Uniform Variates 148 9.4.3 Random Vectors 150 9.4.4 Recent Developments in Random Number Generation 151 9.4.5 Transforming Variables 152 9.4.6 Random Number Generation for Commonly Used Distributions 155 10 Advanced Monte Carlo Techniques 161 10.1 Variance Reduction Techniques 161 10.1.1 Antithetic Variates 161 10.1.2 Control Variates 163 10.1.3 Conditioning 166 10.1.4 Additional Techniques for Variance Reduction 168 10.2 Quasi Monte Carlo Method 169 10.2.1 Low-Discrepancy Sequences 169 10.2.2 Randomizing QMC 174 10.3 Brownian Bridge Technique 175 10.3.1 A Steepener under a Libor Market Model 177 11 Valuation of Financial Instruments with Embedded American/Bermudan Options within Monte Carlo Frameworks 179 11.1 Pricing American options using the Longstaff and Schwartz algorithm 179 11.2 A Modified Least Squares Monte Carlo Algorithm for Bermudan Callable Interest Rate Instruments 181 11.2.1 Algorithm: Extended LSMC Method for Bermudan Options 182 11.2.2 Notes on Basis Functions and Regression 185 11.3 Examples 186 11.3.1 A Bermudan Callable Floater under Different Short-rate Models 186 11.3.2 A Bermudan Callable Steepener Swap under a Two Factor Hull-White Model 188 11.3.3 A Bermudan Callable Steepener Cross Currency Swap in a 3D IR/FX Model Framework 189 12 Characteristic Function Methods for Option Pricing 193 12.1 Equity Models 194 12.1.1 Heston Model 196 12.1.2 Jump Diffusion Models 198 12.1.3 Infinite Activity Models 199 12.1.4 Bates Model 200 12.2 Fourier Techniques 201 12.2.1 Fast Fourier Transform Methods 201 12.2.2 Fourier-Cosine Expansion Methods 203 13 Numerical Methods for the Solution of PIDEs 209 13.1 A PIDE for Jump Models 209 13.2 Numerical Solution of the PIDE 210 13.2.1 Discretization of the Spatial Domain 211 13.2.2 Discretization of the Time Domain 211 13.2.3 A European Option under the Kou Jump Diffusion Model 212 13.3 Appendix: Numerical Integration via Newton-Cotes Formulae 214 14 Copulas and the Pitfalls of Correlation 217 14.1 Correlation 218 14.1.1 Pearson’s ρ 218 14.1.2 Spearman’s ρ 218 14.1.3 Kendall’s τ 220 14.1.4 Other Measures 221 14.2 Copulas 221 14.2.1 Basic Concepts 222 14.2.2 Important Copula Functions 222 14.2.3 Parameter estimation and sampling 229 14.2.4 Default Probabilities for Credit Derivatives 234 15 Parameter Calibration and Inverse Problems 239 15.1 Implied Black-Scholes Volatilities 239 15.2 Calibration Problems for Yield Curves 240 15.3 Reversion Speed and Volatility 245 15.4 Local Volatility 245 15.4.1 Dupire’s Inversion Formula 246 15.4.2 Identifying Local Volatility 246 15.4.3 Results 247 15.5 Identifying Parameters in Volatility Models 248 15.5.1 Model Calibration for the FTSE- 100 249 16 Optimization Techniques 253 16.1 Model Calibration and Optimization 255 16.1.1 Gradient-Based Algorithms for Nonlinear Least Squares Problems 256 
16.2 Heuristically Inspired Algorithms 258 16.2.1 Simulated Annealing 259 16.2.2 Differential Evolution 260 16.3 A Hybrid Algorithm for Heston Model Calibration 261 16.4 Portfolio Optimization 265 17 Risk Management 269 17.1 Value at Risk and Expected Shortfall 269 17.1.1 Parametric VaR 270 17.1.2 Historical VaR 272 17.1.3 Monte Carlo VaR 273 17.1.4 Individual and Contribution VaR 274 17.2 Principal Component Analysis 276 17.2.1 Principal Component Analysis for Non-scalar Risk Factors 276 17.2.2 Principal Components for Fast Valuation 277 17.3 Extreme Value Theory 278 18 Quantitative Finance on Parallel Architectures 285 18.1 A Short Introduction to Parallel Computing 285 18.2 Different Levels of Parallelization 288 18.3 GPU Programming 288 18.3.1 CUDA and OpenCL 289 18.3.2 Memory 289 18.4 Parallelization of Single Instrument Valuations using (Q)MC 290 18.5 Parallelization of Hybrid Calibration Algorithms 291 18.5.1 Implementation Details 292 18.5.2 Results 295 19 Building Large Software Systems for the Financial Industry 297 Bibliography 301 Index 307
£45.00
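The synopsis above emphasises Monte Carlo methods and variance reduction, and antithetic variates appear in Chapter 10 of the listing's contents. A compact sketch with assumed Black-Scholes parameters, not code from the book's website, might look like this.

```r
# Monte Carlo pricing of a European call under Black-Scholes with antithetic
# variates, compared against the closed-form price. Parameters are illustrative.
set.seed(1)
S0 <- 100; K <- 100; r <- 0.03; sigma <- 0.2; Tm <- 1
n  <- 1e5

Z   <- rnorm(n)
ST1 <- S0 * exp((r - 0.5 * sigma^2) * Tm + sigma * sqrt(Tm) * Z)   # paths
ST2 <- S0 * exp((r - 0.5 * sigma^2) * Tm - sigma * sqrt(Tm) * Z)   # antithetic paths
pay <- 0.5 * (pmax(ST1 - K, 0) + pmax(ST2 - K, 0))                 # averaged payoff
mc_price <- exp(-r * Tm) * mean(pay)

# Closed-form Black-Scholes call for comparison.
d1 <- (log(S0 / K) + (r + 0.5 * sigma^2) * Tm) / (sigma * sqrt(Tm))
d2 <- d1 - sigma * sqrt(Tm)
bs_price <- S0 * pnorm(d1) - K * exp(-r * Tm) * pnorm(d2)

c(mc = mc_price, closed_form = bs_price)   # the two should agree closely
```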
CRC Press Robust Statistical Methods with R Second Edition
Book SynopsisThe second edition of Robust Statistical Methods with R provides a systematic treatment of robust procedures with an emphasis on new developments and on the computational aspects. There are many numerical examples and notes on the R environment, and the updated chapter on the multivariate model contains additional material on visualization of multivariate data in R. A new chapter on robust procedures in measurement error models concentrates mainly on the rank procedures, less sensitive to errors than other procedures. This book will be an invaluable resource for researchers and postgraduate students in statistics and mathematics.Features Provides a systematic, practical treatment of robust statistical methods Offers a rigorous treatment of the whole range of robust methods, including the sequential versions of estimators, their moment convergence, and compares their asymptotic and finite-sample behavior The extended account of multivaTable of ContentsIntroduction. Mathematical tools of robustness. Characteristics of robustness. Estimation of real parameter. Linear model. Multivariate model. Large sample and finite sample behavior of robust estimators. Robust and nonparametric procedures in measurement error models. Appendix A. Bibliography, Subject Index, Author Index.
£99.75
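Because the synopsis above stresses practical robust procedures in R, a small comparison of a classical and a robust fit may be useful; it uses the MASS package that ships with R rather than any code from the book.

```r
# Classical vs. robust regression on a data set with a few awkward observations.
# Not from the book; MASS::rlm (M-estimation with Huber psi by default) is used
# as a readily available robust fitter.
library(MASS)

ols    <- lm(stack.loss ~ ., data = stackloss)    # ordinary least squares
robust <- rlm(stack.loss ~ ., data = stackloss)   # robust M-estimator

coef(ols)
coef(robust)    # coefficients shift where outliers pull the OLS fit around

# A location example: the median and MAD resist contamination that
# drags the mean and standard deviation.
x <- c(rnorm(50), 25, 30)   # two gross outliers
c(mean = mean(x), sd = sd(x), median = median(x), mad = mad(x))
```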
Taylor & Francis Ltd Compositional Data Analysis in Practice
Book SynopsisCompositional data are quantitative descriptions of the parts of some whole, conveying exclusively relative information. Examples are found in various fields, including geology, medicine, chemistry, agriculture, economics, social science, etc. This concise book presents a very applied introduction to compositional data analysis, focussing on the use of R for analysis. It includes lots of real examples, code snippets, and colour figures, to illustrate the methods.Trade Review"(…This book) avoids cumbersome theoretical digressions and only presents to the reader the essential basic concepts for the application of CODA, using ratios and logratios that retain most of the original data structure and, subsequently, may lead to proper conclusions. … The simplification of the analysis and the straightforward interpretability of results is, clearly, one of the primary values of the publication. In addition, the emphasis on the general application of weights in the calculus of most of the operations and methodologies used throughout the book deserves a special mention.. … Altogether, the book and the easyCODA R package may represent a promising instrument for introducing CODA in the fat and oils field, where fatty acid compositions have been treated until now exclusively by classical multivariate techniques without considering their compositional structure. Predicting the future is risky, but the book may represent an essential instrument for CODA spreading since it represents just what many practitioners were expecting to initiate their experience in this promising new statistical field of compositional data analysis."—A. Garrido Fernández in Gracas y Aceites – International Journal of Fats and Oils, July-September 2019"…an interesting book, certainly controversial in some respects for scholars in the field. It has a strong data analytic focus and requires some background in multivariate analysis and biplot theory for a good understanding. It overemphasizes links to correspondence analysis at times, but is very well written and didactically nicely sliced into modules numbering exactly eight pages each. Most examples in the book are reproducible in the R environment. Finally, it will help the analyst to reflect on the use of weights, to the benefit of the analysis of compositional data."—Jan Graffelman in the Biometrical Journal, March 2019"This book provides a essential reference as a practical way to evaluate and interpret compositional data across a broad spectrum of disciplines in the life and natural sciences for both academia and industry. The book takes a prescribed approach starting with the definition of compositional data, the use of logratios for dimension reduction, clustering and variable selection issues along with several practical examples and a case study. The theory of compositional data analysis and computational aspects are included as Appendices.This book can be used at the undergraduate level as part of a course in data analysis. At the graduate level, for research studies, this book is essential in understanding how to collect and interpret compositional data. 
Using the methods described in this book will help to avoid costly mistakes made from misinterpreting compositional data."—Professor Eric Grunsky, Department of Earth and Environmental Sciences, University of Waterloo, Waterloo, Ontario, Canada"Clearly the best introduction to compositional data analysis"—Professor John Bacon-Shone"Compositional Data Analysis in Practice is a short book by Michael Greenacre that introduces the statistician to the analysis of data partitions adding to a constant total. These data appear frequently in biology, chemistry, sociology, and other areas. ...The book is organised into 10 chapters, each of eight pages, with a final summary, which makes it easy to read and very didactic. Easy to follow examples are used throughout the book, analyzed with R packages. This book is short, which I find appealing for a fast introduction to the topic. It covers the important practical analytical problems and provides easy solutions with example code. I recommend it for those who need to use compositional data analysis, or require a study guide for courses on the topic."- Victor Moreno in ISCB, June 2019Table of ContentsWhat are compositional data, and why are they special? Geometry and visualization of compositional data. Logratio transformations. Properties and distributions of logratios. Regression models involving compositional data. Dimension reduction using logratio analysis. Clustering of compositional data. The problem of zeros, with some solutions. Simplifying the task: variable selection. Case study: Fatty acids of marine amphipods. Appendix A: Theory of compositional data analysis. Appendix B: Commented Bibliography. Appendix C: Computational examples using the R package easyCODA. Appendix D: Epilogue.
£114.00
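The synopsis above explains that compositional data carry only relative information and that the analysis works with ratios and logratios. A minimal base-R illustration of that idea follows; it deliberately avoids assuming anything about the interface of the book's easyCODA package.

```r
# A tiny composition and its logratios, in base R only.
# Closing the data (dividing by the total) changes none of the ratios, which is
# why logratio methods work on purely relative information.
parts <- c(sand = 40, silt = 35, clay = 25)
comp  <- parts / sum(parts)                  # the "closed" composition

log(comp["sand"] / comp["silt"])             # a pairwise logratio
clr <- log(comp) - mean(log(comp))           # centred logratio (CLR) transform
clr
sum(clr)                                     # CLR values always sum to zero
```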
Taylor & Francis Ltd Omic Association Studies with R and Bioconductor
Book SynopsisAfter the great expansion of genome-wide association studies, their scientific methodology and, notably, their data analysis has matured in recent years, and they are a keystone in large epidemiological studies. Newcomers to the field are confronted with a wealth of data, resources and methods. This book presents current methods to perform informative analyses using real and illustrative data with established bioinformatics tools and guides the reader through the use of publicly available data. Includes clear, readable programming codes for readers to reproduce and adapt to their own data. Emphasises extracting biologically meaningful associations between traits of interest and genomic, transcriptomic and epigenomic data Uses up-to-date methods to exploit omic data Presents methods through specific examples and computing sessions Supplemented by a websTrade Review"This book is a good tool for self-learning analytical strategies for omics data. It requires previous knowledge of R and focuses on getting things done...I think the book would be a good reference for masters or PhD students that have to perform their analysis and need a starting point. Also, for the practicing statistician working with omics data."- Victor Moreno, ISCB News, July 2020 Table of Contents1 Introduction 2 Case examples 3 Dealing with omic data in Bioconductor 4 Genetic association studies 5 Genomic variant studies 6 Adressing batch effects 7 Transcriptomic studies 8 Epigenomic studies 9 Exposomic analysis 10 Enrichment analysis 11 Multiomic data analysis
£99.75
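The synopsis above is about extracting associations between traits and genomic data. As a generic illustration of the single-variant association test underlying such studies, on simulated data and in base R rather than the book's Bioconductor workflow:

```r
# Simulated single-SNP association test: additive genotype coding (0/1/2 copies
# of the minor allele) against a binary trait, via logistic regression.
# Purely illustrative; the book works with real omic data in Bioconductor.
set.seed(42)
n        <- 2000
genotype <- rbinom(n, size = 2, prob = 0.3)             # SNP, allele frequency 0.3
logit_p  <- -1 + 0.4 * genotype                         # assumed log-odds model
disease  <- rbinom(n, size = 1, prob = plogis(logit_p))

fit <- glm(disease ~ genotype, family = binomial)
summary(fit)$coefficients                               # estimate, SE, z, p-value
exp(coef(fit)["genotype"])                              # per-allele odds ratio
```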
CRC Press R Markdown
Book SynopsisR Markdown: The Definitive Guide is the first official book authored by the core R Markdown developers that provides a comprehensive and accurate reference to the R Markdown ecosystem. With R Markdown, you can easily create reproducible data analysis reports, presentations, dashboards, interactive applications, books, dissertations, websites, and journal articles, while enjoying the simplicity of Markdown and the great power of R and other languages. In this book, you will learn Basics: Syntax of Markdown and R code chunks, how to generate figures and tables, and how to use other computing languages Built-in output formats of R Markdown: PDF/HTML/Word/RTF/Markdown documents and ioslides/Slidy/Beamer/PowerPoint presentations Extensions and applications: Dashboards, Tufte handouts, xaringan/reveal.js presentations, websites, books, journal articles, and interactive tutorials Advanced topics: Parameterized reports, HTML widgets, document templates, custom output formats, and Shiny documents. Yihui Xie is a software engineer at RStudio. He has authored and co-authored several R packages, including knitr, rmarkdown, bookdown, blogdown, shiny, xaringan, and animation. He has published three other books, Dynamic Documents with R and knitr, bookdown: Authoring Books and Technical Documents with R Markdown, and blogdown: Creating Websites with R Markdown.J.J. Allaire is the founder of RStudio and the creator of the RStudio IDE. He is an author of several packages in the R Markdown ecosystem including rmarkdown, flexdashboard, learnr, and radix.Garrett Grolemund is the co-author of R for Data Science and author of Hands-On Programming with R. He wrote the lubridate R package and works for RStudio as an advocate who trains engineers to do data science with R and the Tidyverse.Trade Review"The manuscript offers a detailed documentation of the R Markdown document format and its related packages for R (e.g. knitr, rmarkdown, flexdashboard, shiny). These packages form an important ecosystem for reproducible research using R and are widely used across academia and the private sector.
All the authors have been key contributors to developing the core R Markdown packages and are knowledgeable about the inner workings of these functions and all the available options to customize published documents…The target audience for this manuscript would be experienced R users who frequently use R Markdown to generate publications for a variety of mediums (articles, books, information dashboards, interactive web applications, etc.)…While this book is strongly related to the author’s previous book (Dynamic Documents with R and knitr), a wider range of readers should find this new manuscript useful for its focus on the broad range of output formats generated by R Markdown and how to customize those outputs." ~Benjamin Soltoff, Department of Computational Social Science, University of Chicago"A main strength of the software described herein is that it facilitates reproducible documents incorporating analyses and figures. The first topics covered in chapters 6-13 include handout and presentation formats that could be used effectively for teaching or presenting statistical results. The other topics focus on larger scale documents such as complex websites, books, and academic journal articles. From academic teaching and research to industry and other settings, the material covered by this book allows statisticians and data scientists to disseminate results in a highly effective manner." ~David Whitney, Department of Biostatistics, University of Washington
Table of Contents I Get Started 1. Installation 2. Basics Example applications Airbnb’s knowledge repository Homework assignments on RPubs Personalized mails Employer Health Benefits Survey Journal articles Dashboards at eelloo Books Websites Compile an R Markdown document Cheat sheets Output formats Markdown syntax Inline formatting Block-level elements Math expressions R code chunks and inline R code Figures Tables Other language engines Python Shell scripts SQL Rcpp Stan JavaScript and CSS Julia C and Fortran Interactive documents HTML widgets Shiny documents II Output Formats 3. Documents HTML document Table of contents Section numbering Tabbed sections Appearance and style Figure options Data frame printing Code folding MathJax equations Document dependencies Advanced customization Shared options HTML fragments Notebook Using Notebooks Saving and sharing Notebook format PDF document Table of contents Figure options Data frame printing Syntax highlighting LaTeX options LaTeX packages for citations Advanced customization Other features Word document Other features OpenDocument Text document Other features Rich Text Format document Other features Markdown document Markdown variants Other features R package vignette 4. Presentations ioslides presentation Display modes Incremental bullets Visual appearance Code highlighting Adding a logo Tables Advanced layout Text color Presenter mode Printing and PDF output Custom templates Other features Slidy presentation Display modes Text size Footer elements Other features Beamer presentation Themes Slide level Other features PowerPoint presentation Custom templates Other features III Extensions 5. Dashboards Layout Row-based layouts Attributes on sections Multiple pages Story boards Components Value boxes Gauges Text annotations Navigation bar Shiny Getting started A Shiny dashboard example Input sidebar Learning more 6. Tufte Handouts Headings Figures Margin figures Arbitrary margin content Full-width figures Main column figures Sidenotes References Tables Block quotes Responsiveness Sans-serif fonts and epigraphs Customize CSS styles 7. xaringan Presentations Get started Keyboard shortcuts Slide formatting Slides and properties The title slide Content classes Incremental slides Presenter notes yolo: true Build and preview slides CSS and themes Some tips Autoplay slides Countdown timer Highlight code lines Working offline Macros Disadvantages 8. revealjs Presentations Display modes Appearance and style Smaller text Slide transitions Slide backgrounds 2-D presentations Custom CSS Slide IDs and classes Styling text spans revealjs options revealjs plugins Other features 9. Community Formats Lightweight Pretty HTML Documents Usage Package vignettes The rmdformats package Shower presentations 10. Websites Get started The directory structure Deployment Other site generators rmarkdown’s site generator A simple example Site authoring Common elements Site navigation HTML generation Site configuration Publishing websites Additional examples Custom site generators 11.
HTML Documentation for R Packages Get started Components Home page Function reference Articles News Navigation bar 12. Books Get started Project structure Index file Rmd files _bookdown.yml _output.yml Markdown extensions Number and reference equations Theorems and proofs Special headers Text references Cross referencing Output Formats HTML LaTeX/PDF E-books A single document Editing Build the book Preview a chapter Serve the book RStudio addins Publishing RStudio Connect Other services Publishers 13. Journals Get started Articles templates Using a template LaTeX content Linking with bookdown Contributing templates 14. Interactive Tutorials Get started Tutorial types Exercises Solutions Hints Quiz questions Videos Shiny components Navigation and progress tracking IV Advanced Topics 15. Parameterized reports Declaring parameters Using parameters Knitting with parameters The Knit button Knit with custom parameters The interactive user interface Publishing 16. HTML Widgets Overview A widget example (sigmajs) File layout Dependencies R binding JavaScript binding Demo Creating your own widgets Requirements Scaffolding Other packages Widget sizing Specifying a sizing policy JavaScript resize method Advanced topics Data transformation Passing JavaScript functions Custom widget HTML Create a widget without an R package 17. Document Templates Template structure Supporting files Custom Pandoc templates Sharing your templates 18. Creating New Formats Deriving from built-in formats Fully custom formats Using a new format 19. Shiny Documents Getting started Deployment ShinyApps.io Shiny Server / RStudio Connect Embedded Shiny apps Inline applications External applications Shiny widgets The shinyApp() function Example: k-Means clustering Widget size and layout Multiple pages Delayed rendering Output arguments for render functions A caveat
£31.34
Taylor & Francis Ltd HandsOn Machine Learning with R
Book SynopsisHands-On Machine Learning with R provides a practical and applied approach to learning and developing intuition into today’s most popular machine learning methods. This book serves as a practitioner’s guide to the machine learning process and is meant to help the reader learn to apply the machine learning stack within R, which includes using various R packages such as glmnet, h2o, ranger, xgboost, keras, and others to effectively model and gain insight from their data. The book favors a hands-on approach, providing an intuitive understanding of machine learning concepts through concrete examples and just a little bit of theory. Throughout this book, the reader will be exposed to the entire machine learning process including feature engineering, resampling, hyperparameter tuning, model evaluation, and interpretation. The reader will be exposed to powerful algoTrade Review"Hands-On Machine Learning with R is a great resource for understanding and applying models. Each section provides descriptions and instructions using a wide range of R packages." - Max Kuhn, Machine Learning Software Engineer, RStudio"You can't find a better overview of practical machine learning methods implemented with R." - JD Long, co-author of R Cookbook"Simultaneously approachable, accessible, and rigorous, Hands-On Machine Learning with R offers a balance of theory and implementation that can actually bring you from relative novice to competent practitioner." - Mara Averick, RStudio Dev Advocate"...The book describes in detail the various methods for solving classification and clustering problems. Functions from many R libraries are compared, which enables the reader to understand their respective advantages and disadvantages. The authors have developed a clear structure to the book that includes a brief description of each model, examples of using the model for specific real-life examples, and discussion of the advantages and disadvantages of the model. This structure is one of the book’s main advantages." - Igor Malyk, ISCB News, July 2020Table of ContentsI FUNDAMENTALS 1. Introduction to Machine Learning 1.1 Supervised learning 1.1.1 Regression problems 1.1.2 Classification problems 1.2 Unsupervised learning 1.3 Roadmap 1.4 The data sets 2. Modeling Process 2.1 Prerequisites 2.2 Data splitting 2.2.1 Simple random sampling 2.2.2 Stratified sampling 2.2.3 Class imbalances 2.3 Creating models in R 2.3.1 Many formula interfaces 2.3.2 Many engines 2.4 Resampling methods 2.4.1 k-fold cross validation 2.4.2 Bootstrapping 2.4.3 Alternatives 2.5 Bias-variance trade-off 2.5.1 Bias 2.5.2 Variance 2.5.3 Hyperparameter tuning 2.6 Model evaluation 2.6.1 Regression models 2.6.2 Classification models 2.7 Putting the processes together 3.
Feature & Target Engineering 3.1 Prerequisites 3.2 Target engineering 3.3 Dealing with missingness 3.3.1 Visualizing missing values 3.3.2 Imputation 3.4 Feature filtering 3.5 Numeric feature engineering 3.5.1 Skewness 3.5.2 Standardization 3.6 Categorical feature engineering 3.6.1 Lumping 3.6.2 One-hot & dummy encoding 3.6.3 Label encoding 3.6.4 Alternatives 3.7 Dimension reduction 3.8 Proper implementation 3.8.1 Sequential steps 3.8.2 Data leakage 3.8.3 Putting the process together II SUPERVISED LEARNING 4. Linear Regression 4.1 Prerequisites 4.2 Simple linear regression 4.2.1 Estimation 4.2.2 Inference 4.3 Multiple linear regression 4.4 Assessing model accuracy 4.5 Model concerns 4.6 Principal component regression 4.7 Partial least squares 4.8 Feature interpretation 4.9 Final thoughts 5. Logistic Regression 5.1 Prerequisites 5.2 Why logistic regression 5.3 Simple logistic regression 5.4 Multiple logistic regression 5.5 Assessing model accuracy 5.6 Model concerns 5.7 Feature interpretation 5.8 Final thoughts 6. Regularized Regression 6.1 Prerequisites 6.2 Why regularize? 6.2.1 Ridge penalty 6.2.2 Lasso penalty 6.2.3 Elastic nets 6.3 Implementation 6.4 Tuning 6.5 Feature interpretation 6.6 Attrition data 6.7 Final thoughts 7. Multivariate Adaptive Regression Splines 7.1 Prerequisites 7.2 The basic idea 7.2.1 Multivariate regression splines 7.3 Fitting a basic MARS model 7.4 Tuning 7.5 Feature interpretation 7.6 Attrition data 7.7 Final thoughts 8. K-Nearest Neighbors 8.1 Prerequisites 8.2 Measuring similarity 8.2.1 Distance measures 8.2.2 Pre-processing 8.3 Choosing k 8.4 MNIST example 8.5 Final thoughts 9 Decision Trees 9.1 Prerequisites 9.2 Structure 9.3 Partitioning 9.4 How deep? 9.4.1 Early stopping 9.4.2 Pruning 9.5 Ames housing example 9.6 Feature interpretation 9.7 Final thoughts 10. Bagging 10.1 Prerequisites 10.2 Why and when bagging works 10.3 Implementation 10.4 Easily parallelize 10.5 Feature interpretation 10.6 Final thoughts 11. Random Forests 11.1 Prerequisites 11.2 Extending bagging 11.3 Out-of-the-box performance 11.4 Hyperparameters 11.4.1 Number of trees 11.4.2 mtry 11.4.3 Tree complexity 11.4.4 Sampling scheme 11.4.5 Split rule 11.5 Tuning strategies 11.6 Feature interpretation 11.7 Final thoughts 12. Gradient Boosting 12.1 Prerequisites 12.2 How boosting works 12.2.1 A sequential ensemble approach 12.2.2 Gradient descent 12.3 Basic GBM 12.3.1 Hyperparameters 12.3.2 Implementation 12.3.3 General tuning strategy 12.4 Stochastic GBMs 12.4.1 Stochastic hyperparameters 12.4.2 Implementation 12.5 XGBoost 12.5.1 XGBoost hyperparameters 12.5.2 Tuning strategy 12.6 Feature interpretation 12.7 Final thoughts 13. Deep Learning 13.1 Prerequisites 13.2 Why deep learning 13.3 Feedforward DNNs 13.4 Network architecture 13.4.1 Layers and nodes 13.4.2 Activation 13.5 Backpropagation 13.6 Model training 13.7 Model tuning 13.7.1 Model capacity 13.7.2 Batch normalization 13.7.3 Regularization 13.7.4 Adjust learning rate 13.8 Grid Search 13.9 Final thoughts 14. Support Vector Machines 14.1 Prerequisites 14.2 Optimal separating hyperplanes 14.2.1 The hard margin classifier 14.2.2 The soft margin classifier 14.3 The support vector machine 14.3.1 More than two classes 14.3.2 Support vector regression 14.4 Job attrition example 14.4.1 Class weights 14.4.2 Class probabilities 14.5 Feature interpretation 14.6 Final thoughts 15. 
Stacked Models 15.1 Prerequisites 15.2 The Idea 15.2.1 Common ensemble methods 15.2.2 Super learner algorithm 15.2.3 Available packages 15.3 Stacking existing models 15.4 Stacking a grid search 15.5 Automated machine learning 15.6 Final thoughts 16. Interpretable Machine Learning 16.1 Prerequisites 16.2 The idea 16.2.1 Global interpretation 16.2.2 Local interpretation 16.2.3 Model-specific vs. model-agnostic 16.3 Permutation-based feature importance 16.3.1 Concept 16.3.2 Implementation 16.4 Partial dependence 16.4.1 Concept 16.4.2 Implementation 16.4.3 Alternative uses 16.5 Individual conditional expectation 16.5.1 Concept 16.5.2 Implementation 16.6 Feature interactions 16.6.1 Concept 16.6.2 Implementation 16.6.3 Alternatives 16.7 Local interpretable model-agnostic explanations 16.7.1 Concept 16.7.2 Implementation 16.7.3 Tuning 16.7.4 Alternative uses 16.8 Shapley values 16.8.1 Concept 16.8.2 Implementation 16.8.3 XGBoost and built-in Shapley values 16.9 Localized step-wise procedure 16.9.1 Concept 16.9.2 Implementation 16.10 Final thoughts III DIMENSION REDUCTION 17. Principal Components Analysis 17.1 Prerequisites 17.2 The idea 17.3 Finding principal components 17.4 Performing PCA in R 17.5 Selecting the number of principal components 17.5.1 Eigenvalue criterion 17.5.2 Proportion of variance explained criterion 17.5.3 Scree plot criterion 17.6 Final thoughts 18. Generalized Low Rank Models 18.1 Prerequisites 18.2 The idea 18.3 Finding the lower ranks 18.3.1 Alternating minimization 18.3.2 Loss functions 18.3.3 Regularization 18.3.4 Selecting k 18.4 Fitting GLRMs in R 18.4.1 Basic GLRM model 18.4.2 Tuning to optimize for unseen data 18.5 Final thoughts 19. Autoencoders 19.1 Prerequisites 19.2 Undercomplete autoencoders 19.2.1 Comparing PCA to an autoencoder 19.2.2 Stacked autoencoders 19.2.3 Visualizing the reconstruction 19.3 Sparse autoencoders 19.4 Denoising autoencoders 19.5 Anomaly detection 19.6 Final thoughts IV CLUSTERING 20. K-means Clustering 20.1 Prerequisites 20.2 Distance measures 20.3 Defining clusters 20.4 k-means algorithm 20.5 Clustering digits 20.6 How many clusters? 20.7 Clustering with mixed data 20.8 Alternative partitioning methods 20.9 Final thoughts 21. Hierarchical Clustering 21.1 Prerequisites 21.2 Hierarchical clustering algorithms 21.3 Hierarchical clustering in R 21.3.1 Agglomerative hierarchical clustering 21.3.2 Divisive hierarchical clustering 21.4 Determining optimal clusters 21.5 Working with dendrograms 21.6 Final thoughts 22. Model-based Clustering 22.1 Prerequisites 22.2 Measuring probability and uncertainty 22.3 Covariance types 22.4 Model selection 22.5 My basket example 22.6 Final thoughts Bibliography Index
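For readers unsure what the resampling step in the synopsis refers to, the sketch below walks through plain k-fold cross-validation on made-up data. It is written in Python with NumPy purely as a neutral illustration (the book itself works in R), and the dataset, fold count, and least-squares "model" are all invented for the example.

    # k-fold cross-validation, conceptual sketch (illustration only; not code from the book)
    import numpy as np

    rng = np.random.default_rng(42)
    X = rng.normal(size=(100, 3))                      # hypothetical features
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

    def kfold_indices(n, k):
        """Shuffle row indices 0..n-1 and split them into k roughly equal folds."""
        return np.array_split(rng.permutation(n), k)

    k = 5
    fold_mse = []
    for fold in kfold_indices(len(y), k):
        train = np.setdiff1d(np.arange(len(y)), fold)  # everything not in the held-out fold
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)  # fit on training folds
        resid = y[fold] - X[fold] @ beta               # predict on the held-out fold
        fold_mse.append(np.mean(resid ** 2))

    print("cross-validated MSE:", np.mean(fold_mse))

The same shuffle, split, fit, and score loop is what resampling helpers in any language automate.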
£78.84
Pearson Education Understanding Statistics in Psychology with SPSS
Book SynopsisDennis Howitt and Duncan Cramer are based at Loughborough University.Table of Contents Chapter 1 Why statistics? Chapter 2 Some basics: Variability and measurement Chapter 3 Describing variables: Tables and diagrams Chapter 4 Describing variables numerically: Averages, variation and spread Chapter 5 Shapes of distributions of scores Chapter 6 Standard deviation and z-scores: Standard unit of measurement in statistics Chapter 7 Relationships between two or more variables: Diagrams and tables Chapter 8 Correlation coefficients: Pearson’s correlation and Spearman’s rho Chapter 9 Regression: Prediction with precision Chapter 10 Samples from populations Chapter 11 Statistical significance for the correlation coefficient: A practical introduction to statistical inference Chapter 12 Standard error: Standard deviation of the means of samples Chapter 13 Related t-test: Comparing two samples of related/correlated/paired scores Chapter 14 Unrelated t-test: Comparing two samples of unrelated/uncorrelated/independent scores Chapter 15 What you need to write about your statistical analysis Chapter 16 Confidence intervals Chapter 17 Effect size in statistical analysis: Do my findings matter? Chapter 18 Chi-square: Differences between samples of frequency data Chapter 19 Probability Chapter 20 One-tailed versus two-tailed significance testing Chapter 21 Ranking tests: Nonparametric statistics Chapter 22 Variance ratio test: F-ratio to compare two variances Chapter 23 Analysis of variance (ANOVA): One-way unrelated or uncorrelated ANOVA Chapter 24 ANOVA for correlated scores or repeated measures Chapter 25 Two-way or factorial ANOVA for unrelated/uncorrelated scores: Two studies for the price of one? Chapter 26 Multiple comparisons within ANOVA: A priori and post hoc tests Chapter 27 Mixed-design ANOVA: Related and unrelated variables together Chapter 28 Analysis of covariance (ANCOVA): Controlling for additional variables Chapter 29 Multivariate analysis of variance (MANOVA) Chapter 30 Discriminant (function) analysis – especially in MANOVA Chapter 31 Statistics and analysis of experiments Chapter 32 Partial correlation: Spurious correlation, third or confounding variables, suppressor variables Chapter 33 Factor analysis: Simplifying complex data Chapter 34 Multiple regression and multiple correlation Chapter 35 Path analysis Chapter 36 Meta-analysis: Combining and exploring statistical findings from previous research Chapter 37 Reliability in scales and measurement: Consistency and agreement Chapter 38 Influence of moderator variables on relationships between two variables Chapter 39 Statistical power analysis: Getting the sample size right Chapter 40 Log-linear methods: Analysis of complex contingency tables Chapter 41 Multinomial logistic regression: Distinguishing between several different categories or groups Chapter 42 Binomial logistic regression Chapter 43 Data mining and big data
£65.55
Cambridge University Press Understanding Maple
Book SynopsisMaple is a powerful symbolic computation system that is widely used in universities around the world. This short introduction gives readers an insight into the rules that control how the system works, and how to understand, fix, and avoid common problems. Topics covered include algebra, calculus, linear algebra, graphics, programming, and procedures. Each chapter contains numerous illustrative examples, using mathematics that does not extend beyond first-year undergraduate material. Maple worksheets containing these examples are available for download from the author's personal website. The book is suitable for new users, but where advanced topics are central to understanding Maple they are tackled head-on. Many concepts which are absent from introductory books and manuals are described in detail. With this book, students, teachers and researchers will gain a solid understanding of Maple and how to use it to solve complex mathematical problems in a simple and efficient way.Trade Review'Thompson (Univ. of Liverpool, UK) clearly knows the 'gotchas' that most often plague beginners (and others!) and provides pointed guidance; extracting the same vital information buried in longer and more systematic treatises and manuals can certainly prove a challenge. Summing Up: Recommended. Lower-division undergraduates and above; faculty and professionals.' D. V. Feldman, CHOICETable of Contents1. Introduction; 2. Getting started; 3. Algebra and calculus; 4. Solving equations; 5. Linear algebra; 6. Graphics; 7. Programming; 8. Procedures; 9. Example programs; Appendix A. Other ways to run Maple; Appendix B. Terminating characters; Index of Maple notation.
£19.99
Bloomsbury Publishing PLC Jamovi for Psychologists
Book SynopsisThis textbook offers a refreshingly clear and digestible introduction to statistical analysis for psychology using the user-friendly jamovi software. The authors provide a concise, practical guide that takes students from the early stages of research design, with a jargon-free explanation of terminology, and walks them through key analyses such as the t-test, ANOVA, correlation, chi-square, and linear regression. The book features written interpretations to help learners identify relevant statistics along the way. With fascinating examples from psychological research, as well as screenshots and activities from jamovi, this text is sure to encourage even the most reluctant statistics student. The comprehensive companion website provides an extra helping hand, with practice datasets and a full suite of tutorial videos to help consolidate understanding. This is essential reading for psychology students using jamovi for their courses in Research Methods and Statistics or Data Analysis.Trade ReviewJamovi for Psychologists offers a complete overview of topics in introductory statistics in an easy, conversational tone. But what makes it especially valuable is its practical emphasis—how to use very accessible software, fully understand its output, and appropriately report the results. It’s the kind of book students will actually find useful! * Andy Luttrell, Ball State University, USA *Jamovi for Psychologists is an excellent resource for those learning to use jamovi as part of a statistics course or for those seeking to better understand the wide range of statistical tests available in the software. The straightforward step-by-step instructions and conceptual framing of statistical analyses will help faculty make statistics and jamovi more accessible to students. * Andrew Mienaltowski, Western Kentucky University, USA *Jamovi for Psychologists is a friendly introduction to the accessible statistics package jamovi. It is well-pitched for psychologists beginning to learn about statistics and includes concise but thorough guides throughout. * Piers Fleming, University of East Anglia, UK *Table of Contents1. Research Design 2. Data Preparation, Common Assumptions, and Descriptive Statistics 3. P-Values, Effect Sizes and 95% confidence intervals 4. Statistical Power 5. Reliability and Validity 6. Correlations 7. Chi Square 8. Independent T-Tests 9. Paired T-Tests 10. Comparing multiple means for Between-subjects designs (One-way ANOVA & Kruskal-Wallis) 11. Comparing multiple means for Repeated measures designs (one-way ANOVA and Friedman’s ANOVA) 12. Factorial ANOVA (assessing effects of multiple independent variables) 13. Simple, Multiple, and Hierarchical Linear Regression.
£28.99
Johns Hopkins University Press Visualizing Mathematics with 3D Printing
Book SynopsisWith the book in one hand and a 3D printed model in the other, readers can find deeper meaning while holding a hyperbolic honeycomb, touching the twists of a torus knot, or caressing the curves of a Klein quartic.Trade ReviewMy best advice is to go out and buy yourself a copy of the book. Chalkdust Magazine The breadth of Segerman's 3D printing explorations is impressive. Coupled with the clarity of his explanations of the mathematics behind those explorations, this book becomes an easy recommendation for any reader interested in learning some beautiful mathematical ideas. Journal of Mathematics and the Arts No previous mathematical maturity is required. The work is a good addition to any academic library. Highly recommended. Choice I have great difficulty thinking about Visualizing Mathematics with 3D Printing as "just a book." The careful choice, quality and effectiveness of the 140+ images in the book are outstanding. What Segerman has developed is much bigger than a book: he has developed a whole platform to complement the book and explore mathematical concepts. Visualizing Mathematics with 3D Printing allows the reader to manipulate with a computer or 3D print the objects discussed, making it possible to physically interact with the concepts. Mathematical Association of AmericaTable of Contents Preface Acknowledgments 1. Symmetry 2. Polyhedra 3. Four-Dimensional Space 4. Tilings and Curvature 5. Knots 6. Surfaces 7. Menagerie Appendix A Appendix B Index
£51.00
Taylor & Francis Inc Design and Analysis of Experiments with R
Book SynopsisDesign and Analysis of Experiments with R presents a unified treatment of experimental designs and design concepts commonly used in practice. It connects the objectives of research to the type of experimental design required, describes the process of creating the design and collecting the data, shows how to perform the proper analysis of the data, and illustrates the interpretation of results. Drawing on his many years of working in the pharmaceutical, agricultural, industrial chemicals, and machinery industries, the author teaches students how to: make an appropriate design choice based on the objectives of a research project; create a design and perform an experiment; and interpret the results of computer data analysis. The book emphasizes the connection among the experimental units, the way treatments are randomized to experimental units, and the proper error term for data analysis. R code is uTrade Review"This is an excellent but demanding text. … This book should be mandatory reading for anyone teaching a course in the statistical design of experiments. … reading this text is likely to influence their course for the better."—MAA Reviews, March 2015"Thank you for writing your phenomenal book 'Design and Analysis of Experiments with R'. I'm teaching a new course this spring on experimental design and reinforcement learning. The students are graduate bioengineers, so I was having difficulty finding a text that blends theory, practice, and computation. Your book excels at all three. The first chapter I read clarified several topics and improved both my teaching and research. After testing a dozen DOE and RSM books, yours is the clear winner. I understand the enormous time that goes into a well-constructed textbook. I hope this message conveys my deep appreciation for your effort."—Paul Jensen, Ph.D., Assistant Professor, Department of Bioengineering and Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana-Champaign"In my opinion, this is a very valuable book. It covers the topics that I judge should be in such a book including what might be called the standard designs and more … it has become my go-to text on experimental design."—David E. Booth, TechnometricsTable of ContentsIntroduction. Completely Randomized Designs with One Factor. Factorial Designs. Randomized Block Designs. Designs to Study Variances. Fractional Factorial Designs. Incomplete and Confounded Block Designs. Split-Plot Designs. Crossover and Repeated Measures Designs. Response Surface Designs. Mixture Experiments. Robust Parameter Design Experiments. Experimental Strategies for Increasing Knowledge. Bibliography. Index.
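To make concrete what "the way treatments are randomized to experimental units" means, here is a minimal completely randomized design sketched in Python. The treatment labels and replication count are invented for illustration, and this is not the author's own R code.

    # Completely randomized design: assign each of 3 treatments to 4 units at random
    import numpy as np

    rng = np.random.default_rng(2024)
    treatments = ["A", "B", "C"]                  # hypothetical treatment labels
    replicates = 4                                # each treatment applied to 4 units
    n_units = len(treatments) * replicates        # 12 experimental units in total

    plan = np.repeat(treatments, replicates)      # equal replication of every treatment
    rng.shuffle(plan)                             # random assignment to units 0..11

    for unit, trt in enumerate(plan):
        print(f"unit {unit:2d} -> treatment {trt}")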
£104.50
Taylor & Francis Inc Foundations of Statistical Algorithms
Book SynopsisA new and refreshingly different approach to presenting the foundations of statistical algorithms, Foundations of Statistical Algorithms: With References to R Packages reviews the historical development of basic algorithms to illuminate the evolution of today's more powerful statistical algorithms. It emphasizes recurring themes in all statistical algorithms, including computation, assessment and verification, iteration, intuition, randomness, repetition and parallelization, and scalability. Unique in scope, the book reviews the upcoming challenge of scaling many of the established techniques to very large data sets and delves into systematic verification by demonstrating how to derive general classes of worst case inputs and emphasizing the importance of testing over a large number of different inputs. Broadly accessible, the book offers examples, exercises, and selected solutions in each chapter as well as access to a supplTrade Review"My main take away is that these authors spend a lot of time thinking about issues that I never think about. They argue strongly that I, as a statistician, should think about them more, and I find their argument compelling. I certainly enjoyed the various flashes of insight into computation I had as I read their book. ... The book's case studies ... are incredibly detailed and deep, far beyond the case studies typically used to illustrate these methods. ... I greatly enjoyed the overall arc of the book and found it quite compelling. ... a nice book to have on the shelf in case you find yourself suspicious about something computational, or want to find a case study illustrating some topic in computation." -Luke W. Miratrix, Journal of the American Statistical Association, March 2015 "... it provides the necessary skills to construct statistical algorithms and hence to contribute to statistical computing. And I wish I had the luxury to teach from Foundations of Statistical Algorithms to my graduate students ... a rich book that should benefit a specific niche of statistical graduates and would-be-statisticians, namely those ready to engage into serious statistical programming. It should provide them with the necessary background, out of which they should develop their own tools." -Christian Robert on his blog, February 2014 "The book is suitable for readers who not only want to understand current statistical algorithms, but also gain a deeper understanding of how the algorithms are constructed and how they operate. It is addressed first and foremost to students and lecturers teaching the foundations of statistical algorithms." -Ivan Krivy, Zentralblatt MATH 1296 "... an invaluable resource on several levels. ... For a student who wants to become a competent professional in data science, this monograph is an absolute must, with hard-to-find alternatives. For those who are established in the profession, it is a reference to a broad range of issues encountered in everyday practice. Liberally dispensed advice and insights will be particularly appreciated by the practically oriented reader." -Mathematical Reviews, June 2015Table of ContentsIntroduction. Computation. Verification. Iteration. Deduction of Theoretical Properties. Randomization. Repetition. Scalability and Parallelization. Bibliography. Index.
£128.25
Springer-Verlag New York Inc. Monte Carlo Statistical Methods
Book SynopsisWe have sold 4300 copies worldwide of the first edition (1999). This new edition contains five completely new chapters covering new developments. Trade ReviewFrom the reviews: MATHEMATICAL REVIEWS "Although the book is written as a textbook, with many carefully worked out examples and exercises, it will be very useful for the researcher since the authors discuss their favorite research topics (Monte Carlo optimization and convergence diagnostics) going through many relevant references…This book is a comprehensive treatment of the subject and will be an essential reference for statisticians working with MCMC." From the reviews of the second edition: "Only 2 years after its first edition this carefully revised second edition accounts for the rapid development in this field...This book can be highly recommended for students and researchers interested in learning more about MCMC methods and their background." Biometrics, March 2005 "This is a comprehensive book for advanced graduate study by statisticians." Technometrics, May 2005 "This excellent text is highly recommended..." Short Book Reviews of the ISI, April 2005 "This book provides a thorough introduction to Monte Carlo methods in statistics with an emphasis on Markov chain Monte Carlo methods. … Each chapter is concluded by problems and notes. … The book is self-contained and does not assume prior knowledge of simulation or Markov chains. … On the whole it is a readable book with lots of useful information." (Søren Feodor Nielsen, Journal of Applied Statistics, Vol. 32 (6), August, 2005) "This revision of the influential 1999 text … includes changes to the presentation in the early chapters and much new material related to MCMC and Gibbs sampling. The result is a useful introduction to Monte Carlo methods and a convenient reference for much of current methodology. … The numerous problems include many with analytical components. The result is a very useful resource for anyone wanting to understand Monte Carlo procedures. This excellent text is highly recommended … ." (D.F. Andrews, Short Book Reviews, Vol. 25 (1), 2005) "You have to practice statistics on a desert island not to know that Markov chain Monte Carlo (MCMC) methods are hot. That situation has caused the authors not only to produce a new edition of their landmark book but also to completely revise and considerably expand it. … This is a comprehensive book for advanced graduate study by statisticians." (Technometrics, Vol. 47 (2), May, 2005) "This remarkable book presents a broad and deep coverage of the subject. … This second edition is a considerably enlarged version of the first. Some subjects that have matured more rapidly in the five years following the first edition, like reversible jump processes, sequential MC, two-stage Gibbs sampling and perfect sampling, now have chapters of their own. … the book is also very well suited for self-study and is also a valuable reference for any statistician who wants to study and apply these techniques." (Ricardo Maronna, Statistical Papers, Vol. 48, 2006) "This second edition of ‘Monte Carlo Statistical Methods’ has appeared only five years after the first … the new edition aims to incorporate recent developments. … Each chapter includes sections with problems and notes. … The style of the presentation and many carefully designed examples make the book very readable and easily accessible. 
It represents a comprehensive account of the topic containing valuable material for lecture courses as well as for research in this area." (Evelyn Buckwar, Zentralblatt MATH, Vol. 1096 (22), 2006) "This is a useful and utilitarian book. It provides a catalogue of modern Monte Carlo-based computational techniques with ultimate emphasis on Markov chain Monte Carlo (MCMC) … an excellent reference for anyone who is interested in algorithms for various modes of Markov chain (MC) methodology … a must for any researcher who believes in the importance of understanding what goes on inside of the MCMC ‘black box.’ … I recommend the book to all who wish to learn about statistical simulation." (Wesley O. Johnson, Journal of the American Statistical Association, Vol. 104 (485), March, 2009)Table of ContentsIntroduction * Random Variable Generation * Monte Carlo Integration * Controlling Monte Carlo Variance * Monte Carlo Optimization * Markov Chains * The Metropolis-Hastings Algorithm * The Slice Sampler * The Two-Stage Gibbs Sampler * The Multi-Stage Gibbs Sampler * Variable Dimension Models and Reversible Jump * Diagnosing Convergence * Perfect Sampling * Iterated and Sequential Importance Sampling
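To give a flavour of the subject matter, the sketch below shows the simplest kind of Monte Carlo estimate: an expectation approximated by an average of simulated draws, together with its Monte Carlo standard error. It is a conceptual illustration in Python, not an excerpt from the book; the integrand and sample size are arbitrary choices.

    # Plain Monte Carlo integration: estimate E[h(X)] for X ~ N(0, 1) by averaging
    import numpy as np

    rng = np.random.default_rng(0)

    def h(x):
        return np.exp(-x ** 2)                    # hypothetical integrand

    n = 100_000
    x = rng.standard_normal(n)                    # draws from the target distribution
    hx = h(x)
    estimate = hx.mean()                          # Monte Carlo estimate of E[h(X)]
    std_error = hx.std(ddof=1) / np.sqrt(n)       # Monte Carlo standard error

    print(f"estimate = {estimate:.4f} +/- {1.96 * std_error:.4f}")
    # For comparison, the exact value E[exp(-X^2)] is 1/sqrt(3), about 0.5774.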
£104.49
Springer New York The Grammar of Graphics (Statistics and Computing)
Trade ReviewFrom the reviews of the second edition: "This fascinating book deconstructs the process of producing graphics and in doing so raises many fascinating questions on the nature and representation of information...This second edition is almost twice the size of the original, with six new chapters and substantial revisions." Short Book Reviews of the International Statistical Institute, December 2005 "When the first edition of this book appeared in 2000 it was much praised. I called it a tour de force of the highest order (Wainer, 2001). Edward Wegman (2000) argued that it was destined to become a classic. Now, six years later, this very fine book has been much improved." Howard Wainer for Psychometrika "...The second edition is an impressive expansion beyond a quite remarkable first edition. The text remains dense and even more encyclopedic, but it is a pleasure to read, whether a novice or an expert in graphics...this book is a bargain...The second edition is a must-have volume for anyone interested in graphics." Thomas E. Bradstreet for the Journal of the American Statistical Association, December 2006 "I find myself still thinking about the book and its ideas, several weeks after I finished reading it. I love that kind of book." Mark Bailey for Technometrics, Vol. 49, No. 1, February 2007 "Warts and all, The Grammar of Graphics is a richly rewarding work, an outstanding achievement by one of the leaders of statistical graphics. Seek it out." Nicholas J. Cox for the Journal of Statistical Software, January 2007 "The second edition is a quite fascinating book as well, and it comes with many color graphics. Anyone working in this field can see how many hours the author (plus coworkers) has spent on such a volume. … Demands for good graphics are high and this book will help to whet the appetite to create future computer packages that will meet this demand. An occasional reader will get insights into a modern world of computing … ." (Wolfgang Polasek, Statistical Papers, Vol. 48, 2007)Table of ContentsSyntax.- How To Make a Pie.- Data.- Variables.- Algebra.- Scales.- Statistics.- Geometry.- Coordinates.- Aesthetics.- Facets.- Guides.- Semantics.- Space.- Time.- Uncertainty.- Analysis.- Control.- Automation.- Reader.- Coda.
£127.49