@@ -70,8 +70,9 @@ Werhli, A. V., & Husmeier, D. (2007). "Reconstructing gene regulatory networks w

Scutari, M. (2010). Learning Bayesian Networks with the bnlearn R Package. Journal of Statistical Software, 35(3), 1-22. doi:10.18637/jss.v035.i03.

}


\examples{

\dontrun{

## Example using the asia dataset of Lauritzen and Spiegelhalter (1988), provided by Scutari (2010)

\title{MCMC search on the synthetic asia dataset, for use in the mcmcabn library examples}

\description{10^5 MCMC runs with a burn-in of 1000 runs, on the synthetic asia dataset of Lauritzen and Spiegelhalter (1988) provided by Scutari (2010).

}


\usage{data(mcmc_run_asia)}

\format{

The data contains an object of class mcmcabn and a cache of scores computed with the abn R package.

\itemize{

\item \code{bsc.compute}: cache of scores with a maximum of two parents per node.

\item \code{dist}: a named list giving the distribution for each node in the network.

\item \code{mcmc.out}: an object of class mcmcabn.

}}

\examples{

...

...

@@ -23,7 +24,14 @@ library(bnlearn) #for the dataset

@@ -37,8 +37,9 @@ Claus O. Wilke (2019). cowplot: Streamlined Plot Theme and Plot Annotations for

Scutari, M. (2010). Learning Bayesian Networks with the bnlearn R Package. Journal of Statistical Software, 35(3), 1-22. doi:10.18637/jss.v035.i03.

}


\examples{

\dontrun{

## Example using the asia dataset of Lauritzen and Spiegelhalter (1988), provided by Scutari (2010)

@@ -210,8 +210,13 @@ Classically, structure MCMC is done using the algorithm described by Madigan and

Two structurally transparent workarounds have been proposed: the new edge reversal move (REV) and Markov blanket resampling (MBR). The former performs the reversal by resampling the parent set of the nodes involved. Indeed, the classical reversal move depends on the global configuration of the parents and children, and thus fails to propose MCMC jumps that produce valid but very different DAGs in a single move. REV is known to be unbiased, but the ergodicity assumption does not necessarily hold. MBR applies the same idea to the entire Markov blanket of a randomly chosen node.

We believe that combining these three algorithms in a single function with user-adjustable relative frequencies can only improve the results. Indeed, the three algorithms work very differently: MC^3 is very stable and samples the nearby region efficiently, whereas REV and MBR can produce large, and mutually different, MCMC jumps, so the moves are plausibly complementary.
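The scheduling of such a mixture of moves can be sketched as follows. This is a standalone illustration of drawing move types by relative frequency, not the mcmcabn internals; the names `prob.rev` and `prob.mbr` are assumptions echoing the package's argument style.

```r
# Illustrative only: draw one structural move per MCMC iteration,
# according to user-set relative frequencies for REV and MBR moves
# (the remaining probability mass goes to classical MC^3 moves).
set.seed(42)
prob.rev <- 0.05  # relative frequency of new edge reversal (REV) moves
prob.mbr <- 0.05  # relative frequency of Markov blanket resampling (MBR)
moves <- sample(c("MC3", "REV", "MBR"), size = 10000, replace = TRUE,
                prob = c(1 - prob.rev - prob.mbr, prob.rev, prob.mbr))
round(table(moves) / length(moves), 2)
```

In a real run each label would dispatch to the corresponding proposal kernel; here the table merely shows that the empirical move frequencies match the requested ones.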

### Priors

Three priors are implemented in the mcmcabn R package. The parameter `prior.choice` determines the prior used within each individual node for a given choice of parent combination. Koivisto and Sood (2004, p. 554) use a form of prior, called the Koivisto prior, which assumes that the prior probabilities of parent combinations comprising the same number of parents are all equal. Specifically, the prior probability of a parent set G with cardinality |G| is proportional to 1/[n-1 choose |G|], where there are n nodes in total. Note that this favours parent combinations with either very low or very high cardinality, which may not be appropriate. This prior is used when `prior.choice=2`. When `prior.choice=1`, an uninformative prior is used in which parent combinations of all cardinalities are equally likely.
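The Koivisto prior weights over parent-set cardinalities can be computed directly; the following is a minimal base-R sketch (the function name is ours, not part of mcmcabn):

```r
# Koivisto prior: P(parent set G) is proportional to 1 / choose(n - 1, |G|).
# Here the weights are normalised over the cardinalities 0..max.parents.
koivisto.weights <- function(n, max.parents) {
  w <- 1 / choose(n - 1, 0:max.parents)
  w / sum(w)
}
# With 8 nodes and at most 2 parents the weights decrease with cardinality,
# since choose(7, 0) = 1 < choose(7, 1) = 7 < choose(7, 2) = 21.
koivisto.weights(n = 8, max.parents = 2)
```

When the cardinality is allowed to run over the full range 0..n-1, the binomial coefficient peaks in the middle, so this prior puts the most mass on the smallest and largest parent sets, which is the behaviour the text above warns about.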

When `prior.choice=3`, a user-defined prior is used, specified through `prior.dag`. It is given as an adjacency matrix (square, with one row and one column per node) whose entries, ranging from zero to one, encode the user's prior belief in each arc. A hyperparameter `prior.lambda` sets the global strength of the user's belief in this prior. As