Abstract:
For clinical trials with survival data, the hazard ratio has been the most widely used measure for describing the treatment effect, and the partial
likelihood procedure (Cox, 1975) provides a convenient and robust means of estimating a constant hazard ratio. When the hazards may be non-proportional, alternative methods are available, such as the piecewise Cox model, the linear transformation models, and the accelerated failure time model. The short-term and long-term hazard ratio model of Yang and Prentice (2005) contains the proportional hazards model and the proportional odds model as sub-models, and does not impose many of the restrictions of other approaches, thus providing sufficient flexibility for a wide range of applications. We investigate various measures
of interest under this model. Point estimates, point-wise confidence intervals and simultaneous confidence bands of these measures are established. These results can be used to capture and to graphically present the treatment effect. We illustrate these visual tools in applications to clinical trials including the Women's Health Initiative.
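For reference, a commonly used parameterization of the Yang–Prentice model (assumed here; the abstract itself does not display the model) expresses the treatment-to-control hazard ratio in terms of the control survival function:

```latex
% Short-term and long-term hazard ratio model (Yang and Prentice, 2005),
% in a commonly used parameterization (assumed here). With control survival
% function S_C(t) and F_C(t) = 1 - S_C(t):
\[
  \frac{\lambda_T(t)}{\lambda_C(t)}
  = \frac{1}{e^{-\beta_1}\, S_C(t) + e^{-\beta_2}\, F_C(t)} .
\]
% The hazard ratio moves from the short-term value e^{\beta_1} (as t -> 0)
% to the long-term value e^{\beta_2} (as t -> infinity); beta_1 = beta_2
% recovers proportional hazards, and beta_2 = 0 recovers proportional odds.
```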

The Gaussian graphical model has a wide range of applications, and its study has attracted
considerable attention recently. In this talk we consider a basic question: when is it possible to obtain statistical inference
for estimation of the Gaussian graphical model? A regression approach is proposed that yields asymptotically efficient estimation
of each entry of the precision matrix when the matrix is sufficiently sparse. If the precision matrix is not sufficiently sparse, i.e.,
the sparseness condition fails, a lower bound, established via a construction of a subset of sparse precision matrices
and Le Cam's lemma, shows that it is no longer possible to estimate each entry at the parametric rate.
If time permits, we apply the asymptotic normality result to perform adaptive support recovery, to obtain adaptive rate-optimal estimation of
the precision matrix under various matrix
l_q norms, and to carry out inference and estimation for a class of latent variable graphical models, without the need for
the irrepresentable condition or the l_1 constraint on the precision matrix, which are commonly required in the literature.
This is a joint work with Zhao Ren, Tingni Sun and Cun-Hui Zhang.
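As a small illustration of the regression idea behind entry-wise precision-matrix estimation (an OLS stand-in for low dimensions; the talk's high-dimensional procedure would use a penalized, e.g. scaled-lasso, regression), recall that if X ~ N(0, Sigma) with Omega = Sigma^{-1}, regressing X_j on the other coordinates gives coefficients -Omega[k, j] / Omega[j, j] and residual variance 1 / Omega[j, j]:

```python
import numpy as np

# Hedged sketch (not the talk's estimator): the regression identity behind
# estimating one column of a precision matrix Omega = Sigma^{-1}.

def precision_column(X, j):
    """Estimate column j of the precision matrix by regressing X_j on X_{-j}."""
    n, p = X.shape
    idx = [k for k in range(p) if k != j]
    Z, y = X[:, idx], X[:, j]
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    omega_jj = 1.0 / np.mean((y - Z @ beta) ** 2)   # 1 / residual variance
    col = np.zeros(p)
    col[j] = omega_jj
    col[idx] = -beta * omega_jj                     # off-diagonal entries
    return col

rng = np.random.default_rng(0)
# Sparse (tridiagonal) precision matrix, as in the "sufficiently sparse" regime
Omega = np.eye(5) + 0.4 * (np.eye(5, k=1) + np.eye(5, k=-1))
X = rng.multivariate_normal(np.zeros(5), np.linalg.inv(Omega), size=20_000)
col0 = precision_column(X, 0)   # estimates Omega[:, 0] = (1, 0.4, 0, 0, 0)
```

In high dimensions the OLS step fails, which is exactly where the sparsity condition and the penalized regression in the talk come in.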

Abstract:
The topic of agreement among different raters, instruments, labs, assays, or measurement methods is of importance in many domains of science. For example, in Phase II cancer clinical trials, the effect of treatment is measured by imaging devices such as computed tomography (CT) or magnetic resonance imaging (MRI), and the success of the treatment is usually decided by a team of radiologists, oncologists and surgeons. How to assess agreement among different radiologists is extremely important in this type of clinical trial. The area of agreement assessment has developed rapidly over the last half-century. In this talk, many popular agreement coefficients, such as Cohen's Kappa, the Intraclass Correlation Coefficient (ICC), the Concordance Correlation Coefficient (CCC) and the Coefficient of Individual Agreement/Individual Equivalence (CIA/CIE), will be introduced. An example assessing agreement on adverse events after vaccination between in-clinic records and patients' diaries is presented. The data we used were from CDC's randomized, double-blind, placebo-controlled Phase 4 clinical trial to assess the safety and serological noninferiority of alternate schedules and routes of administration of anthrax vaccine adsorbed (AVA).
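As a concrete reference point for the simplest of these coefficients, the following minimal example (illustrative only, not from the talk) computes Cohen's kappa, kappa = (p_o - p_e) / (1 - p_e), from a 2x2 contingency table of two raters' ratings:

```python
import numpy as np

# Illustrative computation of Cohen's kappa for two raters.
def cohens_kappa(table):
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_o = np.trace(table) / n                      # observed agreement
    p_e = (table.sum(1) @ table.sum(0)) / n ** 2   # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Example: raters agree on 20 "yes" and 15 "no" cases out of 50
kappa = cohens_kappa([[20, 5], [10, 15]])          # -> 0.4
```

Here p_o = 0.7 and p_e = 0.5, so kappa = 0.4, i.e. moderate agreement beyond chance.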

Abstract: Innovation is one of the key motivations for mergers and acquisitions (M&A). Studies of the effect of M&A on innovation output do not control for endogenous matching between the acquirer and the target, and thus obscure the effect of integration on innovation performance. In this paper we adopt a two-stage model to deal with this issue: first, a matching model to explain the sorting of firms into pairs, and second, an innovation output function linked to the matching model through error correlation.
We apply this two-stage model to data on 1,979 mergers that occurred between 1992 and 2008 in four high-tech industries: computer, biotech, communication, and electronics. We find that unobserved matching synergy has a significant effect on the post-merger innovation abilities of the combined firms and that this effect peaks in the second year after the merger. In addition, we find that managers seem able to correctly foresee the difficulties of integration across different countries. However, several other factors, such as industrial culture, firm size ratio, and breadth and depth of knowledge, are incongruent between managers' merger criteria and their effects on innovation outcomes. These findings can be used by managers participating in M&A, regulators, and financial analysts.
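A generic simulation (not the paper's matching model; all names here are illustrative) shows why error correlation between the two stages matters: when the error in the first-stage sorting equation is correlated with the error in the outcome equation, a naive second-stage regression on the observed pairs is biased, while a control-function term restores consistency.

```python
import math
import numpy as np

# Stage 1: pairs form when gamma*x + u > 0; stage 2: y = beta*x + eps is
# observed only for formed pairs, with corr(u, eps) = rho.
def phi(v):  # standard normal pdf
    return np.exp(-0.5 * v ** 2) / math.sqrt(2 * math.pi)

def Phi(v):  # standard normal cdf
    return 0.5 * (1.0 + np.vectorize(math.erf)(v / math.sqrt(2)))

rng = np.random.default_rng(1)
n, beta, gamma, rho = 50_000, 1.0, 1.0, 0.8
x = rng.normal(size=n)
u = rng.normal(size=n)
eps = rho * u + math.sqrt(1 - rho ** 2) * rng.normal(size=n)
z = gamma * x + u > 0                        # stage 1: which pairs form
y = beta * x + eps                           # stage 2: outcome

xs, ys = x[z], y[z]                          # only formed pairs are observed
mills = phi(gamma * xs) / Phi(gamma * xs)    # E[eps | formed] = rho * mills

ones = np.ones(xs.size)
naive = np.linalg.lstsq(np.column_stack([ones, xs]), ys, rcond=None)[0][1]
cf = np.linalg.lstsq(np.column_stack([ones, xs, mills]), ys, rcond=None)[0][1]
# naive is biased away from beta = 1; cf recovers it
```

The same logic, with a richer first stage, underlies linking the matching model and the innovation output function through correlated errors.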

Abstract:
Modular structures are ubiquitous across various types of biological networks. The study of network modularity can help reveal regulatory mechanisms in systems biology, evolutionary biology and developmental biology. Identifying putative modular latent structures from high-throughput data using exploratory analysis can help better interpret the data and generate new hypotheses. Unsupervised learning methods designed for global dimension reduction or clustering fall short of identifying modules with factors acting in linear combinations. We developed an exploratory data analysis method named MLSA (Modular Latent Structure Analysis) to estimate modular latent structures, which can find co-regulative modules that involve non-coexpressed genes. Through simulations and real-data analyses, we show that the method can recover modular latent structures effectively. In addition, the method also performs very well on data generated from sparse global latent factor models. In some high-throughput datasets, a clinical outcome is available. We show that initiating the module search from the clinical outcome vector reveals latent structures pertaining to the clinical outcome that cannot be found directly by screening for genes/gene sets correlated with the outcome.
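The idea of a seeded module search can be sketched as follows (a hedged toy version, not the MLSA algorithm itself): starting from a seed vector such as a clinical outcome, alternate between scoring genes by correlation with the current latent factor, keeping the top-scoring genes, and re-estimating the factor as their first principal component.

```python
import numpy as np

def abs_corr(X, f):
    """Absolute correlation of each column of X with vector f."""
    Xc = X - X.mean(axis=0)
    fc = f - f.mean()
    den = np.sqrt((Xc ** 2).sum(axis=0) * (fc ** 2).sum()) + 1e-12
    return np.abs(Xc.T @ fc / den)

def module_search(X, seed, keep=20, iters=10):
    """X: samples x genes; returns (member gene indices, latent factor)."""
    f = seed.astype(float)
    for _ in range(iters):
        members = np.sort(np.argsort(abs_corr(X, f))[-keep:])
        M = X[:, members]
        U, _, _ = np.linalg.svd(M - M.mean(axis=0), full_matrices=False)
        f = U[:, 0]                 # first principal component score
    return members, f

# Simulated check: genes 0..19 share one latent factor, the rest are noise
rng = np.random.default_rng(3)
n, p = 200, 100
z = rng.normal(size=n)              # true module factor (stand-in for an outcome)
X = rng.normal(size=(n, p))
X[:, :20] += np.outer(z, rng.uniform(0.8, 1.5, size=20))
members, f = module_search(X, seed=z + 0.5 * rng.normal(size=n))
```

Starting from a noisy version of the outcome, the iteration should settle on the genes that share the outcome-related factor, even though any single gene's marginal correlation with the outcome may be modest.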

Abstract:
There is growing interest in neuroimaging meta-analysis, an important tool
for synthesizing the ever-expanding brain mapping literature that is largely
based on samples of 20 or fewer subjects. Neuroimaging meta-analysis identifies
consistent activation regions by using peak activation coordinates (foci) that
are collected from different independently performed studies. Kang et al. (2011)
proposed a fully parametric spatial Bayesian model that provides richer results
than other methods by, for example, modeling interstudy variation in activation
location. However, that method only models one population of studies with a
single-type point process and is sensitive to prior specifications that are based
on expert opinion. To address these limitations, in this work we adopt a
nonparametric Bayesian approach for meta-analysis data from multiple classes or
types of studies. In particular, foci from each type of study are modeled as
a cluster process driven by a random intensity function that is modeled as a
kernel convolution of a gamma random field. The type-specific gamma random
fields are linked and modeled as a realization of a common gamma random field
shared by all types, inducing correlation between study types and mimicking
the behavior of a univariate mixed effects model. We illustrate our model on
simulation studies and a meta-analysis of five emotions from 219 studies. In
addition, we show how to use the model to predict the study type for a newly
presented study. We evaluate the performance of our methods via leave-one-out
cross-validation, which is efficiently implemented using importance sampling
techniques.
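The intensity construction described above can be illustrated in one dimension (a toy sketch with assumed notation, not the paper's fitted model): a gamma random field on a coarse grid is smoothed by a Gaussian kernel to give the random intensity of a Cox cluster process, and foci are then drawn as an inhomogeneous Poisson process by thinning.

```python
import numpy as np

rng = np.random.default_rng(2)
grid = np.linspace(0, 10, 50)                 # kernel centers
weights = rng.gamma(shape=0.5, scale=2.0, size=grid.size)   # gamma random field

def intensity(s, bandwidth=0.5):
    """Kernel convolution: lambda(s) = sum_j k(s - c_j) * gamma_j."""
    k = np.exp(-0.5 * ((s[:, None] - grid[None, :]) / bandwidth) ** 2)
    return (k * weights).sum(axis=1)

# Sample foci on [0, 10] by thinning a dominating homogeneous Poisson process
s_grid = np.linspace(0, 10, 1000)
lam = intensity(s_grid)
lam_max = lam.max()
n_cand = rng.poisson(lam_max * 10)
cand = rng.uniform(0, 10, size=n_cand)
keep = rng.uniform(0, lam_max, size=n_cand) < intensity(cand)
foci = cand[keep]                              # simulated peak coordinates
```

In the paper's hierarchical version, each study type gets its own gamma field, tied together through a shared common field, which is what induces the correlation between types.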

Abstract:
This paper considers a continuous-review, single-product production-inventory system with constant replenishment, compound Poisson demands and lost-sales.
The cost function of the system is the sum of expected discounted inventory holding costs and expected discounted lost-sales penalties
over an infinite horizon, given an initial inventory level. The objective of the paper is to minimize the cost function with respect
to the replenishment rate. To this end, we employ a change of variable to a positive root of Lundberg's fundamental equation (to be referred
to as the LPR variable), by which the cost function has a closed form. We first solve the optimization problem in terms of the LPR variable,
and the optimal replenishment rate is then obtained from the optimal LPR variable through the one-to-one and onto mapping between them. For the special cases
of constant or proportional penalty and exponentially distributed demand sizes, we obtain simpler explicit formulas for the optimal replenishment rate.
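The LPR variable can be computed numerically; the sketch below (notation assumed, not taken from the paper) writes Lundberg's fundamental equation for a compound Poisson demand process with replenishment rate c, demand rate lam, demand-size Laplace transform b_hat, and discount rate delta as c*s - lam*(1 - b_hat(s)) = delta, and finds its unique positive root by bisection.

```python
import math

def lundberg_positive_root(c, lam, delta, b_hat, hi=1e6):
    """Positive root of c*s - lam*(1 - b_hat(s)) = delta (the 'LPR variable')."""
    g = lambda s: c * s - lam * (1.0 - b_hat(s)) - delta
    lo = 0.0                       # g(0) = -delta < 0
    while g(hi) < 0:               # expand until the root is bracketed
        hi *= 2
    for _ in range(200):           # bisection
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Exponentially distributed demand sizes with rate mu: b_hat(s) = mu / (mu + s)
mu, c, lam, delta = 2.0, 1.5, 1.0, 0.05
root = lundberg_positive_root(c, lam, delta, lambda s: mu / (mu + s))
```

For the exponential case the equation reduces to the quadratic c*s^2 + (c*mu - lam - delta)*s - delta*mu = 0, whose single positive root matches the bisection result, consistent with the closed forms the abstract mentions for that special case.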