My last two posts have been about mixture models, with examples to illustrate what they are and how they can be useful. Further discussion and more examples can be found in Chapter 10 of Exploring Data in Engineering, the Sciences, and Medicine. One important topic I haven’t covered is how to fit mixture models to datasets like the Old Faithful geyser data that I have discussed previously: a nonparametric density plot gives fairly compelling evidence for a bimodal distribution, but how do you estimate the parameters of a mixture model that describes these two modes? For a finite Gaussian mixture distribution, one way is by trial and error, first estimating the centers of the peaks by eye in the density plot (these become the component means), and adjusting the standard deviations and mixing percentages to approximately match the peak widths and heights, respectively. This post considers the more systematic alternative of estimating the mixture distribution parameters using the **mixtools** package in *R*.

The **mixtools** package is one of several available in *R* to fit mixture distributions or to solve the closely related problem of model-based clustering. Further, **mixtools** includes a variety of procedures for fitting mixture models of different types. This post focuses on one of these – the **normalmixEM** procedure for fitting normal mixture densities – and applies it to two simple examples, starting with the Old Faithful dataset mentioned above. A much more complete and thorough discussion of the **mixtools** package – which also discusses its application to the Old Faithful dataset – is given in the *R* package vignette, mixtools: An R Package for Analyzing Finite Mixture Models.

The above plot shows the results obtained using the **normalmixEM** procedure with its default parameter values, applied to the Old Faithful waiting time data. Specifically, this plot was generated by the following sequence of *R* commands:

library(mixtools)
wait = faithful$waiting
mixmdl = normalmixEM(wait)
plot(mixmdl, which = 2)
lines(density(wait), lty = 2, lwd = 2)

Like many modeling tools in *R*, the **normalmixEM** procedure has associated plot and summary methods. In this case, the plot method displays either the log likelihood associated with each iteration of the EM fitting algorithm (more about that below), or the component densities shown above, or both. Specifying “which = 1” displays only the log likelihood plot (this is the default), specifying “which = 2” displays only the density components/histogram plot shown here, and specifying “density = TRUE” without specifying the “which” parameter gives both plots. Note that the two solid curves shown in the above plot correspond to the individual Gaussian density components in the mixture distribution, each scaled by the estimated probability of an observation being drawn from that component distribution. The final line of *R* code above overlays the nonparametric density estimate generated by the **density** function with its default parameters, shown here as the heavy dashed line (obtained by specifying “lty = 2”).

Most of the procedures in the **mixtools** package are based on the iterative *expectation maximization (EM) algorithm,* discussed in Section 2 of the **mixtools** vignette and also in Chapter 16 of *Exploring Data*. A detailed discussion of this algorithm is beyond the scope of this post – books have been devoted to the topic (see, for example, the book by McLachlan and Krishnan, The EM Algorithm and Extensions (Wiley Series in Probability and Statistics)) – but the following two points are important to note here. First, the EM algorithm is an iterative procedure, and the time required for it to reach convergence – if it converges at all – depends strongly on the problem to which it is applied. The second key point is that because it is an iterative procedure, the EM algorithm requires starting values for the parameters, and algorithm performance can depend strongly on these initial values. The **normalmixEM** procedure supports both user-supplied starting values and built-in estimation of starting values if none are supplied. These built-in estimates are the default and, in favorable cases, they work quite well. The Old Faithful waiting time data is a case in point – using the default starting values gives the following parameter estimates:

> mixmdl[c("lambda","mu","sigma")]
$lambda
[1] 0.3608868 0.6391132

$mu
[1] 54.61489 80.09109

$sigma
[1] 5.871241 5.867718

The mixture density described by these parameters is given by:

p(x) = lambda[1] n(x; mu[1], sigma[1]) + lambda[2] n(x; mu[2], sigma[2])

where

*n(x; mu, sigma)* represents the Gaussian probability density function with mean *mu* and standard deviation *sigma*. One reason the default starting values work well for the Old Faithful waiting time data is that if nothing is specified, the number of components (the parameter k) is set equal to 2. Thus, if you are attempting to fit a mixture model with more than two components, this number should be specified, either by setting k to some other value and not specifying any starting estimates for the parameters lambda, mu, and sigma, or by specifying a vector with k components as starting values for at least one of these parameters. (There are a number of useful options in calling the **normalmixEM** procedure: for example, specifying the initial sigma value as a scalar constant rather than a vector with k components forces the component variances to be equal. I won’t attempt to give a detailed discussion of these options here; for that, type “help(normalmixEM)”.)

Another important point about the default starting values is that, aside from the number of components k, any unspecified initial parameter estimates are selected randomly by the **normalmixEM** procedure. This means that, even in cases where the default starting values consistently work well – again, the Old Faithful waiting time dataset seems to be such a case – the number of iterations required to obtain the final result can vary significantly from one run to the next. (Specifically, the **normalmixEM** procedure does not fix the seed for the random number generators used to compute these starting values, so repeated runs of the procedure with the same data will start from different initial parameter values and require different numbers of iterations to achieve convergence. In the case of the Old Faithful waiting time data, I have seen anywhere between 16 and 59 iterations required, with the final results differing only very slightly, typically in the fifth or sixth decimal place. If you want to use the same starting value on successive runs, this can be done by setting the random number seed via the **set.seed** command before you invoke the **normalmixEM** procedure.)

It is important to note that the default starting values do not always work well, even if the correct number of components is specified. This point is illustrated nicely by the following example. The plot above shows two curves: the solid line is the exact density for the three-component Gaussian mixture distribution described by the following parameters:

mu = (2.00, 5.00, 7.00)
sigma = (1.000, 1.000, 1.000)
lambda = (0.200, 0.600, 0.200)

The dashed curve in the figure is the nonparametric density estimate generated from n = 500 observations drawn from this mixture distribution. Note that the first two components of this mixture distribution are evident in both of these plots, from the density peaks at approximately 2 and 5. The third component, however, is too close to the second to yield a clear peak in either density, giving rise instead to slightly asymmetric “shoulders” on the right side of the upper peaks. The key point is that the components in this mixture distribution are difficult to distinguish from either of these density estimates, and this hints at further difficulties to come.
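The post does not show the code used to generate these observations, but simulating a finite Gaussian mixture takes only a few lines of base *R*: draw a component label for each observation with probabilities lambda, then draw from the corresponding normal component. The sketch below does this for the three-component example (the seed is arbitrary, not necessarily the one behind the plots shown here):

```r
# Parameters of the three-component mixture described above
mu     <- c(2, 5, 7)
sigma  <- c(1, 1, 1)
lambda <- c(0.2, 0.6, 0.2)

# Draw n = 500 observations: first a component label, then a normal draw
set.seed(101)  # arbitrary seed, chosen only for reproducibility
n <- 500
comp <- sample(1:3, n, replace = TRUE, prob = lambda)
x <- rnorm(n, mean = mu[comp], sd = sigma[comp])

# Exact mixture density, for comparison with density(x)
pmix3 <- function(t) {
  lambda[1] * dnorm(t, mu[1], sigma[1]) +
  lambda[2] * dnorm(t, mu[2], sigma[2]) +
  lambda[3] * dnorm(t, mu[3], sigma[3])
}
```

Counting sign changes of diff(pmix3(grid)) over a fine grid confirms that the exact density has only two local maxima, which is why the third component shows up only as a shoulder in both curves.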

Applying the **normalmixEM** procedure to the 500 sample sequence used to generate the nonparametric density estimate shown above and specifying k = 3 gives results that are substantially more variable than the Old Faithful results discussed above. In fact, to compare these results, it is necessary to be explicit about the values of the random seeds used to initialize the parameter estimation procedure. Specifying this random seed as 101 and only specifying k = 3 in the **normalmixEM** call yields the following parameter estimates after 78 iterations:

mu = (1.77, 4.87, 5.44)
sigma = (0.766, 0.115, 1.463)
lambda = (0.168, 0.028, 0.803)

Comparing these results with the correct parameter values listed above, it is clear that some of these estimation errors are quite large. The figure shown below compares the mixture density constructed from these parameters (the heavy dashed curve) with the nonparametric density estimate computed from the data used to estimate them. The prominent “spike” in this mixture density plot corresponds to the very small standard deviation estimated for the second component, and it provides a dramatic illustration of the relatively poor results obtained for this particular example.
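For concreteness, the density implied by these estimates can be reconstructed directly in base *R* (the quoted values are rounded to the precision shown, so this only approximates the actual fitted object):

```r
# Mixture density implied by the seed-101 parameter estimates above
mu.hat     <- c(1.77, 4.87, 5.44)
sigma.hat  <- c(0.766, 0.115, 1.463)
lambda.hat <- c(0.168, 0.028, 0.803)

fit3 <- function(t) {
  lambda.hat[1] * dnorm(t, mu.hat[1], sigma.hat[1]) +
  lambda.hat[2] * dnorm(t, mu.hat[2], sigma.hat[2]) +
  lambda.hat[3] * dnorm(t, mu.hat[3], sigma.hat[3])
}

# The tiny second sigma concentrates essentially all of that component's
# probability mass within a fraction of a unit of x = 4.87, which is
# what produces the narrow spike seen in the plot
```

Plotting curve(fit3(x), from = -1, to = 10) reproduces the spike; evaluating fit3 even a quarter of a unit away from 4.87 already gives a visibly smaller value.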

Repeating this numerical experiment with different random seeds to obtain different random starting estimates, the **normalmixEM** procedure failed to converge in 1000 iterations for seed values of 102 and 103, but it converged after 393 iterations for the seed value 104, yielding the following parameter estimates:

mu = (1.79, 5.03, 5.46)
sigma = (0.775, 0.352, 1.493)
lambda = (0.169, 0.063, 0.768)

Arguably, the general behavior of these parameter estimates is quite similar to those obtained with the random seed value 101, but note that the second component of sigma differs by a factor of three, and the second component of lambda increases almost as much.

Increasing the sample size from n = 500 to n = 2000 and repeating these experiments, the **normalmixEM** procedure failed to converge after 1000 iterations for all four of the random seed values 101 through 104. If, however, we specify the correct standard deviations (i.e., specify “sigma = c(1,1,1)” when we invoke **normalmixEM**) and we increase the maximum number of iterations to 3000 (i.e., specify “maxit = 3000”), the procedure does converge after 2417 iterations for the seed value 101, yielding the following parameter estimates:

mu = (1.98, 4.98, 7.15)
sigma = (1.012, 1.055, 0.929)
lambda = (0.198, 0.641, 0.161)

While these parameters took a lot more effort to obtain, they are clearly much closer to the correct values, emphasizing the point that when we are fitting a model to data, our results generally improve as the amount of available data increases and as our starting estimates become more accurate. This point is further illustrated by the plot shown below, analogous to the previous one, but constructed from the model fit to the longer data sequence and incorporating better initial parameter estimates. Interestingly, re-running the same procedure but taking the correct means as starting parameter estimates instead of the correct standard deviations, the procedure failed to converge in 3000 iterations.
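To see why starting values matter so much, it helps to look at what one EM pass actually does: an E-step that computes each observation’s posterior probability of belonging to each component, followed by an M-step that re-estimates lambda, mu, and sigma from those weighted memberships. The following base-*R* sketch is my own illustration of this scheme, not the actual **mixtools** implementation, which is more careful about edge cases and stopping rules:

```r
# Minimal EM for a univariate Gaussian mixture (illustration only)
em_fit <- function(x, mu, sigma, lambda, maxit = 1000, tol = 1e-8) {
  for (it in 1:maxit) {
    # E-step: posterior probability that each point belongs to each component
    dens <- sapply(seq_along(mu),
                   function(j) lambda[j] * dnorm(x, mu[j], sigma[j]))
    post <- dens / rowSums(dens)
    # M-step: re-estimate the parameters from the weighted observations
    nk <- colSums(post)
    lambda <- nk / length(x)
    mu.new <- colSums(post * x) / nk
    sigma <- sqrt(colSums(post * outer(x, mu.new, "-")^2) / nk)
    converged <- max(abs(mu.new - mu)) < tol
    mu <- mu.new
    if (converged) break
  }
  list(lambda = lambda, mu = mu, sigma = sigma, iterations = it)
}
```

Started near the right answer (for example, mu = c(55, 80) for the Old Faithful waiting times), this little routine reproduces essentially the parameter estimates quoted earlier; its dependence on those starting values is exactly the sensitivity that the three-component examples illustrate.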

Overall, I like what I have seen so far of the **mixtools** package, and I look forward to exploring its capabilities further. It’s great to have a built-in procedure – i.e., one I didn’t have to write and debug myself – that does all of the things that this package does. However, the three-component mixture results presented here do illustrate an important point: the behavior of iterative procedures like **normalmixEM** and others in the **mixtools** package can depend strongly on the starting values chosen to initialize the iteration process, and the extent of this dependence can vary greatly from one application to another.
Interesting review! Have you compared its performance with mclust in identifying the true clusters (components)?

Really neat example. I'm interested in getting the X intercept of the two Gaussians, but I don't think any of the attributes of the model gives it. Is there another simple way of calculating it?

This was a very, very helpful post. I am trying to fit a mixture Gamma distribution - the gut retention time data I am analyzing is bimodal. However, I am unable to figure out the expansion for the mixture density (as you do here for the normal distribution). Mixtools lists lambda, alpha and beta coefficients for Gamma distributions - what do these correspond to? Are alpha and beta the fitted parameters for each of the Gamma distributions? It would be very helpful if you could suggest how we can expand this and fit the curves to histograms. Thanks!

Thanks for the post. I used this package on my data. When I use it, the plot produces two curves, but I see more populations when I increase the bin width. My question is: how can I increase the bin width using this function?

Thanks a lot

Hi! Thank you very much for this useful post. How can you get the median of each population?

Once you have the mixmdl, how can you generate new samples from it?