Conversation
Recent status of the gradients: the TensorFlow gradient seems to be wrong, since the gradients for the coefficients that are initialized with zeros (and should therefore be trained) are zero. The computation of log_probs w.r.t. model_loc and model_scale seems to be correct, and the only operations happening before that are eta_loc = tf.matmul(design_loc, a_var) and model_loc = 1 / (1 + tf.exp(-eta_loc)), and the same for eta_scale. I will look at it again during the next days.
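A minimal sketch (NumPy instead of TensorFlow; the shapes of design_loc and a_var are assumptions) of why a zero gradient at a zero init is suspicious for this link: the derivative of the sigmoid at eta = 0 is 0.25, so by the chain rule the gradient w.r.t. a_var can only vanish if the design matrix contributes zero, not because the coefficients start at zero.

```python
import numpy as np

def sigmoid(eta):
    return 1.0 / (1.0 + np.exp(-eta))

rng = np.random.default_rng(0)
design_loc = rng.normal(size=(5, 3))  # hypothetical: 5 observations, 3 coefficients
a_var = np.zeros((3, 1))              # coefficients initialized with zeros

eta_loc = design_loc @ a_var                   # eta_loc = tf.matmul(design_loc, a_var)
model_loc = sigmoid(eta_loc)                   # model_loc = 1 / (1 + tf.exp(-eta_loc))

# Chain rule: d model_loc / d eta = sigmoid(eta) * (1 - sigmoid(eta)).
# At a_var = 0 we have eta = 0, so this factor is exactly 0.25 everywhere.
s_prime = model_loc * (1.0 - model_loc)

# Gradient of sum(model_loc) w.r.t. a_var: nonzero for a generic design matrix.
grad = design_loc.T @ s_prime
```

So if the analytic TensorFlow gradient is exactly zero at the zero init while this chain-rule factor is 0.25, the problem is downstream of the link function, not in the initialization itself.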
Did you try initializing at non-zero? Or did you check whether zero is an additional extremum?
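One quick way to answer the second question numerically (a sketch; the design matrix X, the 0/1 data x, and the Bernoulli log-likelihood below are illustrative assumptions): take a central finite difference of the log-likelihood around a = 0 and see whether it vanishes. If it does not, zero is not an extremum, and a zero analytic gradient there is a bug.

```python
import numpy as np

def sigmoid(eta):
    return 1.0 / (1.0 + np.exp(-eta))

def log_lik(a, X, x):
    """Bernoulli log-likelihood with a sigmoid (logit) link."""
    mu = sigmoid(X @ a)
    return np.sum(x * np.log(mu) + (1.0 - x) * np.log(1.0 - mu))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))               # hypothetical design matrix
x = (rng.random(50) < 0.7).astype(float)   # hypothetical 0/1 observations

a0 = np.zeros(2)
eps = 1e-6

# Central finite difference of the log-likelihood at a = 0, per coordinate.
fd_grad = np.array([
    (log_lik(a0 + eps * e, X, x) - log_lik(a0 - eps * e, X, x)) / (2.0 * eps)
    for e in np.eye(2)
])
```

At a = 0 the model mean is 0.5 for every observation, so the analytic score is X.T @ (x - 0.5); comparing that against fd_grad is a cheap cross-check of any hand-derived gradient.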
…common parameters p and q
Status report: I think the hessian test didn't do what it was supposed to do, so I fixed it. Bernoulli should be more or less OK. I also compared the old beta version to nb, but I didn't find anything that could solve the gradient problem. The only thing I am not sure about is the bounds; maybe you can have a look at them? The new version for beta is not yet completely working. (Btw: beta2 is the old version with mean and samplesize, and beta is the new version with p and q.)
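For reference, the two beta parameterizations mentioned here are related by a simple reparameterization (naming follows the comment: beta2 uses mean/samplesize, beta uses p/q). A sketch of the conversion, which also makes the bounds explicit:

```python
def mean_samplesize_to_pq(mean, samplesize):
    """beta2-style (mean, samplesize) -> beta-style (p, q).

    Valid bounds: 0 < mean < 1 and samplesize > 0, which guarantees p > 0, q > 0.
    """
    p = mean * samplesize
    q = (1.0 - mean) * samplesize
    return p, q

def pq_to_mean_samplesize(p, q):
    """beta-style (p, q) -> beta2-style (mean, samplesize)."""
    samplesize = p + q
    mean = p / samplesize
    return mean, samplesize

p, q = mean_samplesize_to_pq(0.2, 10.0)   # -> p = 2.0, q = 8.0
mean, samplesize = pq_to_mean_samplesize(p, q)
```

The conversion is a bijection between {0 < mean < 1, samplesize > 0} and {p > 0, q > 0}, so bounds set in one parameterization can be translated directly into the other when comparing the two implementations.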
Ok, nice! I will add an option to use the full FIM as opposed to the block-diagonal blocks only. Until then, just set fim_a and fim_b to zero scalars and return a third object "fim".
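A sketch of how the full FIM relates to the block-diagonal version (fim_a and fim_b are named in the comment above; the cross block fim_ab and the concrete values are assumptions for illustration):

```python
import numpy as np

# Hypothetical per-block Fisher information matrices.
fim_a = np.array([[2.0, 0.5],
                  [0.5, 1.0]])        # location-model block
fim_b = np.array([[3.0]])             # scale-model block
fim_ab = np.array([[0.1],
                   [0.2]])            # cross block; zero iff the models decouple

# Block-diagonal approximation: drop the cross terms.
fim_block_diag = np.block([
    [fim_a, np.zeros_like(fim_ab)],
    [np.zeros_like(fim_ab.T), fim_b],
])

# Full FIM: keep the cross terms.
fim_full = np.block([
    [fim_a, fim_ab],
    [fim_ab.T, fim_b],
])
```

The block-diagonal form is exactly the full FIM with fim_ab set to zero, which is why returning a single "fim" object (and zeroing fim_a/fim_b) is a clean interim interface for the full version.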
Forgot the log in the log-likelihood, so no FIM at all. Sorry!
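As a sanity check that the FIM really comes from the *log*-likelihood: for a single Bernoulli observation with mean mu, the score is d/dmu log p(x|mu) = x/mu - (1-x)/(1-mu), and the Fisher information E[score^2] equals 1/(mu(1-mu)). A small sketch verifying this identity exactly (the value of mu is arbitrary):

```python
import numpy as np

mu = 0.3  # arbitrary Bernoulli mean in (0, 1)

def score(x, mu):
    """d/dmu of the Bernoulli log-likelihood log p(x | mu)."""
    return x / mu - (1.0 - x) / (1.0 - mu)

# Fisher information = E[score^2]; the expectation over x in {0, 1} is exact.
fim_numeric = mu * score(1.0, mu) ** 2 + (1.0 - mu) * score(0.0, mu) ** 2
fim_analytic = 1.0 / (mu * (1.0 - mu))
```

Differentiating the plain likelihood instead of its log gives a different quantity entirely, which is why the missing log made the FIM come out wrong.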