Showing 49–72 of 75 results
This book provides a self-contained comprehensive exposition of the theory of dynamical systems. The book begins with a discussion of several elementary but crucial examples. These are used to formulate a program for the general study of asymptotic properties and to introduce the principal theoretical concepts and methods. The main theme of the second part of the book is the interplay between local analysis near individual orbits and the global complexity of the orbit structure. The third and fourth parts develop the theories of low-dimensional dynamical systems and hyperbolic dynamical systems in depth.
Bayesian Theory. José M. Bernardo, Universidad de Valencia, Valencia, Spain; Adrian F. M. Smith, Imperial College of Science, Technology and Medicine, London, UK. Bayesian Theory is the first volume of a related series of three and will be followed by Bayesian Computation and Bayesian Methods. The series aims to provide an up-to-date overview of the why, how, and what of Bayesian statistics. This volume provides a thorough account of key basic concepts and theoretical results, with particular emphasis on viewing statistical inference as a special case of decision theory.
Many fundamentally important decisions about our social life are a function of how well we understand and analyze DATA. This sounds obvious, but it is widely misunderstood. Social statisticians struggle with this problem in their teaching constantly. This book and its approach are the ally and support of all instructors who want to accomplish this hugely important teaching goal. This innovative text for undergraduate social statistics courses is, as one satisfied instructor put it, a "breath of fresh air."
Emphasizes the strategy of experimentation, data analysis, and the interpretation of experimental results. Features numerous examples using actual engineering and scientific studies. Presents statistics as an integral component of experimentation from the planning stage to the presentation of the conclusions. Offers deep and concentrated experimental design coverage, with equivalent but separate emphasis on the analysis of data from the various designs. Topics can be implemented by practitioners and do not require a high level of training in statistics.
A Practical Guide to Implementing Nonparametric and Rank-Based Procedures
Nonparametric Statistical Methods Using R covers traditional nonparametric methods and rank-based analyses, including estimation and inference for models ranging from simple location models to general linear and nonlinear models for uncorrelated and correlated responses. The authors emphasize applications and statistical computation. They illustrate the methods with many real and simulated data examples using R, including the packages Rfit and npsm.
The book first gives an overview of the R language and basic statistical concepts before discussing nonparametrics. It presents rank-based methods for one- and two-sample problems, procedures for regression models, computation for general fixed-effects ANOVA and ANCOVA models, and time-to-event analyses. The last two chapters cover more advanced material, including high breakdown fits for general regression models and rank-based inference for cluster correlated data.
The book can be used as a primary text or supplement in a course on applied nonparametric or robust procedures and as a reference for researchers who need to implement nonparametric and rank-based methods in practice. Through numerous examples, it shows readers how to apply these methods using R.
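The book's examples are written in R with the Rfit and npsm packages; as a rough illustration of the kind of rank-based two-sample procedure it covers, here is a minimal pure-Python Wilcoxon rank-sum test using a normal approximation. The data are hypothetical and the function assumes no ties; this is a sketch of the general technique, not the book's implementation.

```python
import math

def rank_sum_test(x, y):
    """Wilcoxon rank-sum test with a normal approximation (assumes no ties).

    Returns the rank sum of sample x in the pooled data and a two-sided
    p-value. A minimal sketch of a classic rank-based procedure.
    """
    combined = sorted((v, label) for label, sample in (("x", x), ("y", y))
                      for v in sample)
    # Ranks are 1-based positions in the sorted pooled sample.
    w = sum(rank for rank, (v, label) in enumerate(combined, start=1)
            if label == "x")
    n, m = len(x), len(y)
    mean = n * (n + m + 1) / 2
    sd = math.sqrt(n * m * (n + m + 1) / 12)
    z = (w - mean) / sd
    # Two-sided p-value from the standard normal CDF via erf.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w, p

# Hypothetical samples, illustrative only.
control = [12.1, 11.4, 13.0, 12.7, 11.9]
treated = [14.2, 13.8, 15.1, 14.7, 13.5]
w, p = rank_sum_test(control, treated)
print(w, round(p, 4))
```

With the treated values uniformly larger than the controls, the rank sum of the control sample is as small as possible and the test rejects at conventional levels.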
The second edition of a bestselling textbook, Using R for Introductory Statistics guides students through the basics of R, helping them overcome the sometimes steep learning curve. The author does this by breaking the material down into small, task-oriented steps. The second edition maintains the features that made the first edition so popular, while updating data and examples in line with the current version of R. See What's New in the Second Edition: Increased emphasis on more idiomatic R provides a grounding in the functionality of base R.
Written to convey an intuitive feel for both theory and practice, its main objective is to illustrate what a powerful tool density estimation can be when used not only with univariate and bivariate data but also in the higher dimensions of trivariate and quadrivariate information.
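To make the technique concrete, here is a minimal univariate Gaussian kernel density estimator in Python, with the bandwidth chosen by Silverman's rule of thumb. The sample data are invented for illustration; this is a sketch of the general method, not code from the book, which also treats the bivariate and higher-dimensional cases.

```python
import math
import statistics

def gaussian_kde(data, x, bandwidth=None):
    """Evaluate a univariate Gaussian kernel density estimate at point x.

    Each data point contributes a Gaussian bump; the estimate is their
    average. Bandwidth defaults to Silverman's rule of thumb.
    """
    n = len(data)
    if bandwidth is None:
        bandwidth = 1.06 * statistics.stdev(data) * n ** (-1 / 5)
    return sum(
        math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
        for xi in data
    ) / (n * bandwidth * math.sqrt(2 * math.pi))

sample = [1.2, 1.9, 2.1, 2.4, 3.0, 3.3, 3.7, 4.1]
density = gaussian_kde(sample, 2.5)
print(round(density, 4))
```

Because each kernel integrates to one, the estimate itself integrates to one over the real line, which is what makes it a genuine density.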
Causality is central to the understanding and use of data. Without an understanding of cause-and-effect relationships, we cannot use data to answer questions as basic as, "Does this treatment harm or help patients?" But though hundreds of introductory texts are available on statistical methods of data analysis, until now, no beginner-level book has been written about the exploding arsenal of methods that can tease causal information from data. Causal Inference in Statistics: A Primer. Judea Pearl, Computer Science and Statistics, University of California Los Angeles, USA; Madelyn Glymour, Philosophy, Carnegie Mellon University, Pittsburgh, USA; and Nicholas P.
Instead of presenting the standard theoretical treatments that underlie the various numerical methods used by scientists and engineers, Using R for Numerical Analysis in Science and Engineering shows how to use R and its add-on packages to obtain numerical solutions to the complex mathematical problems commonly faced by scientists and engineers. This practical guide to the capabilities of R demonstrates Monte Carlo, stochastic, deterministic, and other numerical methods through an abundance of worked examples and code, covering the solution of systems of linear algebraic equations and nonlinear equations as well as ordinary differential equations and partial differential equations.
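The book's worked examples are in R; as a language-neutral illustration of one of the classic techniques it covers for ordinary differential equations, here is a minimal fourth-order Runge-Kutta integrator in Python. The test problem (y' = -y) is chosen only because its exact solution is known; this is a sketch, not code from the book.

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Solve y' = -y with y(0) = 1 on [0, 1]; the exact solution at t = 1
# is exp(-1) ≈ 0.367879.
f = lambda t, y: -y
t, y, h = 0.0, 1.0, 0.1
while t < 1.0 - 1e-12:
    y = rk4_step(f, t, y, h)
    t += h
print(y)  # close to exp(-1) ≈ 0.367879
```

With step size 0.1 the global error of the fourth-order method on this problem is well below 1e-5, which is why RK4 remains a workhorse default in numerical libraries.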
Historical records show that there was no real concept of probability in Europe before the mid-seventeenth century, although the use of dice and other randomizing objects was commonplace. Ian Hacking presents a philosophical critique of early ideas about probability, induction, and statistical inference and the growth of this new family of ideas in the fifteenth, sixteenth, and seventeenth centuries. Hacking invokes a wide intellectual framework involving the growth of science, economics, and the theology of the period.
The choice of examples used in this text clearly illustrates its suitability for a one-year graduate course. The material to be presented in the classroom constitutes a little more than half the text; the rest provides background, offers different routes that could be pursued in the classroom, and includes additional material appropriate for self-study.
Intended as the text for a sequence of advanced courses, this book covers major topics in theoretical statistics in a concise and rigorous fashion. The discussion assumes a background in advanced calculus, linear algebra, probability, and some analysis and topology. Measure theory is used, but the notation and basic results needed are presented in an initial chapter on probability, so prior knowledge of these topics is not essential.
The presentation is designed to expose students to as many of the central ideas and topics in the discipline as possible, balancing various approaches to inference as well as exact, numerical, and large sample methods.
See How to Use Statistics for New Testament Interpretation
The Synoptic Problem and Statistics lays the foundations for a new area of interdisciplinary research that uses statistical techniques to investigate the synoptic problem in New Testament studies, which concerns the relationships between the Gospels of Matthew, Mark, and Luke. There are potential applications of the techniques to study other sets of similar documents.
Explore Hidden Markov Models for Textual Data
The book provides an introductory account of the synoptic problem and relevant theories, literature, and research at a level suitable for academic and professional statisticians.
Risk Analysis in Finance and Insurance, Second Edition presents an accessible yet comprehensive introduction to the main concepts and methods that transform risk management into a quantitative science. Taking into account the interdisciplinary nature of risk analysis, the author discusses many important ideas from mathematics, finance, and actuarial science in a simplified manner. He explores the interconnections among these disciplines and encourages readers toward further study of the subject.
This unique resource provides simulation techniques for financial risk managers, ensuring you become well versed in many recent innovations, including Gibbs sampling, the use of heavy-tailed distributions in VaR calculations, construction of the volatility smile, and state-space modeling.
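To illustrate one of the techniques mentioned above, here is a minimal Monte Carlo Value-at-Risk sketch in Python that models profit-and-loss with a heavy-tailed Student-t distribution rather than a normal. The sampler, sample size, and degrees of freedom are illustrative assumptions, not the book's implementation.

```python
import random

def student_t_sample(df, rng):
    """Draw one Student-t variate as Z / sqrt(V/df), with V chi-square (df)."""
    z = rng.gauss(0, 1)
    v = sum(rng.gauss(0, 1) ** 2 for _ in range(df))
    return z / (v / df) ** 0.5

def monte_carlo_var(df, n, level, seed=0):
    """Estimate Value-at-Risk at the given confidence level by simulation.

    VaR is the loss threshold exceeded with probability 1 - level.
    A minimal sketch under Student-t (heavy-tailed) P&L.
    """
    rng = random.Random(seed)
    losses = sorted(-student_t_sample(df, rng) for _ in range(n))
    return losses[int(level * n)]

var_t = monte_carlo_var(df=3, n=20000, level=0.99)
normal_quantile_99 = 2.326  # 99% quantile of a standard normal, for contrast
print(round(var_t, 2))
```

The point of the heavy-tailed choice shows up immediately: the simulated 99% VaR under a t-distribution with 3 degrees of freedom is roughly twice the 2.326 a normal model would report, which is exactly the kind of model-risk gap the book's VaR discussion addresses.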
Based on the author’s own research, this book rigorously and systematically develops the theory of Gaussian white noise measures on Hilbert spaces to provide a comprehensive account of nonlinear filtering theory. It covers Markov processes, cylinder and quasi-cylinder probabilities, and conditional expectation, as well as prediction and smoothing and the varied processes used in filtering. It is especially useful for electronic engineers and mathematical statisticians, explaining the systematic use of finitely additive white noise theory, which leads to a simpler and more direct presentation.
Any practical introduction to statistics in the life sciences requires a focus on applications and computational statistics combined with a reasonable level of mathematical rigor. It must offer the right combination of data examples, statistical theory, and computing required for analysis today. And it should involve R software, the lingua franca of statistical computing. Introduction to Statistical Data Analysis for the Life Sciences covers all the usual material but goes further than other texts to emphasize: both data analysis and the mathematics underlying classical statistical analysis; modeling aspects of statistical analysis, with added focus on biological interpretations; and applications of statistical software in analyzing real-world problems and data sets. Developed from their courses at the University of Copenhagen, the authors imbue readers with the ability to model and analyze data early in the text and then gradually fill in the blanks with needed probability and statistics theory.
The most comprehensive, single-volume guide to conducting experiments with mixtures. "If one is involved, or heavily interested, in experiments on mixtures of ingredients, one must obtain this book. It is, as was the first edition, the definitive work." -Short Book Reviews (Publication of the International Statistical Institute). "The text contains many examples with worked solutions and, with its extensive coverage of the subject matter, will prove invaluable to those in the industrial and educational sectors whose work involves the design and analysis of mixture experiments."
The mathematical basis of signal processing and its many areas of application is the subject of this book. Based on a series of graduate-level lectures held at the Mathematical Sciences Research Institute, the volume emphasizes current challenges, new techniques adapted to new technologies, and certain recent advances in algorithms and theory.
A Gentle Introduction to Stata, Fourth Edition is for people who need to learn Stata but who may not have a strong background in statistics or prior experience with statistical software packages. After working through this book, you will be able to enter, build, and manage a dataset, and perform fundamental statistical analyses. This book is organized like the unfolding of a research project. You begin by learning how to enter and manage data and how to do basic descriptive statistics and graphical analysis.
Statistics for Engineers and Scientists stands out for its crystal-clear presentation of applied statistics. Suitable for a one- or two-semester course, the book takes a practical approach to the methods of statistical modeling and data analysis that are most often used in scientific work.
Statistics for Engineers and Scientists features a unique approach highlighted by an engaging writing style that explains difficult concepts clearly, along with the use of contemporary real world data sets to help motivate students and show direct connections to industry and research.
Discusses the analysis and construction of Markov processes in terms of the excursions of the path between visits to a subset of the state space. Its purpose is to attract graduate students and research mathematicians to the subject (and to probabilistic potential theory in general) and to acquaint them with the theory, techniques and applications of the excursion viewpoint. The book’s emphasis is on a notable aspect of excursion theory – its use in making specific computations and in providing concrete illustrations of many of the concepts, even some deeply theoretical ones, from probabilistic potential theory.
The Petersen graph occupies an important position in the development of several areas of modern graph theory, because it often appears as a counter-example to important conjectures. In this account, the authors examine those areas, using the prominent role of the Petersen graph as a unifying feature. Topics covered include: vertex and edge colorability (including snarks), factors, flows, projective geometry, cages, hypohamiltonian graphs, and "symmetry" properties such as distance transitivity.
This book is intended for use in a rigorous introductory PhD level course in econometrics, or in a field course in econometric theory. It covers the measure-theoretical foundation of probability theory, the multivariate normal distribution with its application to classical linear regression analysis, various laws of large numbers, central limit theorems and related results for independent random variables as well as for stationary time series, with applications to asymptotic inference of M-estimators, and maximum likelihood theory.