psy6003: Logistic regression and discriminant analysis

suggest that you browse through the menu version of SPSS to learn the details. A simple example will illustrate the parallels. Imagine that we had carried out a study of voting and wished to know how best to predict whether people had voted Conservative or Labour. The commands would be:

LOGISTIC REGRESSION /VARIABLES voting WITH age sex class
    att1 att2 att3 att4 extro psycho neuro
    /METHOD FSTEP(LR)
    /CLASSPLOT.
<P>The dependent variable is separated from the independent variables by =
the=20
term WITH. The METHOD subcommand uses the keyword FSTEP to specify a=20
<B>forward</B> <B>stepwise</B> procedure; we could also use BSTEP which =
does a=20
<B>backward stepwise</B>, i.e. it starts by entering all the variables =
and then=20
takes them out one at a time; or ENTER is we were engaged in hypothesis =
testing=20
rather than exploratory analysis. If no METHOD subcommand is given, =
ENTER will=20
be assumed. The (LR) term after FSTEP specifies that likelihood ratio=20
considerations will be used in selecting variables to add to or delete =
from the=20
model; this is preferable but can slow computation, so it may be =
necessary to=20
omit it. The /CLASSPLOT line is not strictly necessary but aids =
interpretation -=20
<A =
href=3D"http://www.ex.ac.uk/~SEGLea/multvar2/disclogi.html#classplot">see=
=20
below</A>. </P>
<P>A useful property of the LOGISTIC REGRESSION command is that it can =
cope=20
automatically with categorical independent variables; we don't have to =
write a=20
loop as we do for linear regression. All we have to do is declare any=20
categorical variables on a /CATEGORICAL subcommand <I>as well as</I> on =
the=20
/VARIABLES subcommand. The /CONTRAST subcommand should be used to =
control which=20
category is dropped out when the dummy variables are formed; if the =
control or=20
modal category of, say, a variable DIAGNOST was its third value, we =
would use=20
the subcommand /CONTRAST(DIAGNOST)=3DINDICATOR(3) to tell the LOGISTIC =
REGRESSION=20
to drop level 3 of the variable in forming dummy variables. Although =
this is an=20
improvement over what we have to do when using SPSS to carry out linear=20
regression, there is a snag. /CONTRAST likes its category levels =
specified in=20
rather an odd way; in the example, 3 might not be the value used to code =
the=20
modal category in DIAGNOST: for example, if psychotic, neurotic and =
normal=20
people were coded 0, 1 and 2, the correct entry in /CONTRAST would be 3, =
not 2.=20
Look, I didn't write this idiot system, I'm just trying to tell you =
about it.=20
</P>
<P>As in linear regression, there is no need to declare dichotomous =
independent=20
variables as categorical. </P>
<P>We can also use SPSS to carry out discriminant analysis. For the =
example just=20
considered, the commands would be: </P><PRE>DISCRIMINANT =
GROUPS=3Dvoting(0,1)
    /VARIABLES =3D age sex class att1 to att4 extro psycho neuro
    /METHOD=3DminRESID
    /STATISTICS=3DTABLE.</PRE>
<P>Note that we have to specify the two possible levels of the dependent =

variable (voting). We can use the /METHOD subcommand to request a =
variety of=20
stepwise methods (RAO is another you might like to try), or to ENTER all =
or a=20
subset of variables. The subcommand /STATISTICS=3DTABLE is needed to get =
the=20
classification table which is needed for assessing goodness of fit (see =
below).=20
</P>
<P><I>back to <A=20
href=3D"http://www.ex.ac.uk/~SEGLea/multvar2/disclogi.html#top">top</A></=
I></P>
<H3><A name=3Dreport></A>Interpreting and reporting logistic regression=20
results</H3>
<UL>
  <LI><B>Log likelihoods</B>=20
  <P>A key concept for understanding the tests used in logistic =
regression (and=20
  many other procedures using maximum likelihood methods) is that of =
<B>log=20
  likelihood</B>. Likelihood just means probability, though it tends to =
be used=20
  by statisticians of a <B>Bayesian</B> orientation. It always means =
probability=20
  <I>under a specified hypothesis</I>. In thinking about logistic =
regression,=20
  two hypotheses are likely to be of interest: the null hypothesis, =
which is=20
  that all the coefficients in the regression equation take the value =
zero, and=20
  the hypothesis that the model currently under consideration is =
accurate. We=20
  then work out the likelihood of observing the exact data we actually =
did=20
  observe under each of these hypotheses. The result is nearly always a=20
  frighteningly small number, and to make it easier to handle, we take =
its=20
  natural logarithm (i.e. its log base <I>e</I>) , giving us a log =
likelihood.=20
  Probabilities are always less than one, so log likelihoods are always=20
  negative; often, we work with <B>negative log likelihoods</B> for =
convenience.=20
  </P>
  <LI><B>Goodness of fit</B>=20
  <P>Logistic regression does not give rise to an=20
  <I>R</I><SUP>2</SUP><SUB>adj</SUB> statistic. Darlington (1990, page =
449)=20
  recommends the following statistic as a measure of goodness of fit: =
</P>
  <CENTER><PRE>        exp[(LL<SUB>model</SUB>-LL<SUB>0</SUB>)/N] - 1
LRFC<SUB>1</SUB> =3D ------------------------
            exp(-LL<SUB>0</SUB>/N) - 1
</PRE></CENTER>
  <P>where exp refers to the exponential function (the inverse of the =
log=20
  function), <I>N</I> as usual is sample size, and =
<I>LL</I><SUB>model</SUB> and=20
  <I>LL</I><SUB>0</SUB> are the log likelihoods of the data under the =
model and=20
  the null hypothesis respectively. (Note that I have changed =
Darlington's=20
  notation a little to make it fit in with that used in the rest of =
these=20
  notes.) Darlington's statistic is useful because it takes values =
between 0 and=20
  1 (or 0% and 100%) which have much the same interpretation as values =
of=20
  <I>R</I><SUP>2</SUP><SUB>adj</SUB> or =
<I>R</I><SUP>2</SUP><SUB>adj</SUB> in an=20
  linear regression, although unfortunately it looks from the formula =
that, of=20
  the two, it is more closely analogous to <I>R</I><SUP>2</SUP> . =
Unfortunately=20
  SPSS does not report this statistic. However, it does report =
<I>negative</I>=20
  log likelihoods, multiplied by 2, so with a little adjustment these =
can be=20
  inserted in the equation for <I>LRFC</I><SUB>1</SUB>. </P>
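
The "little adjustment" is just dividing the reported -2LL values by -2 to recover the log likelihoods. A sketch of the whole calculation, with made-up -2LL values:

import math

def lrfc1(minus2ll_model, minus2ll_null, n):
    """Darlington's LRFC_1 computed from the -2LL values SPSS reports."""
    ll_model = -minus2ll_model / 2    # undo the -2 multiplier
    ll_null = -minus2ll_null / 2
    return ((math.exp((ll_model - ll_null) / n) - 1)
            / (math.exp(-ll_null / n) - 1))

# Hypothetical output: -2LL of 110 under the model, 140 under the null,
# from a sample of N = 100 cases.
print(lrfc1(110.0, 140.0, 100))   # about 0.16, on the 0-1 scale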
  <P>Rather than using a goodness of fit statistic, though, we often =
want to=20
  look at the proportion of cases we have managed to classify correctly. =
For=20
  this we need to look at the <B>classification table</B> printed out by =
SPSS,=20
  which tells us how many of the cases where the observed value of the =
dependent=20
  variable was 1 have been predicted with a value 1, and so on. An =
advantage of=20
  the classification table is that we can get one out of either logistic =

  regression or discriminant analysis, so we can use it to compare the =
two=20
  approaches. Statisticians claim that logistic regression tends to =
classify a=20
  higher proportion of cases correctly. </P>
  <P><A name=3Dclassplot></A>Another very useful piece of information =
for=20
  assessing goodness of fit can be gained by using the /CLASSPLOT =
subcommand.=20
  This causes SPSS to print distributions of predicted logit values,=20
  distinguishing the observed category values. The resulting plot is =
very useful=20
  for spotting possible outliers. It will also tell you whether it might =
be=20
  better to separate the two predicted categories by some rule other =
than the=20
  simple one SPSS uses, which is to predict value 1 if logit(<I>p</I>) =
is=20
  greater than 0 (i.e. if <I>p</I> is greater than 0.5). A better =
separation of=20
  categories might result from using a different criterion. We might =
also want=20
  to use a different criterion if the <I>a priori</I> probabilities of =
the two=20
  categories were very different (one might be a rare disease, for =
example), or=20
  if the costs of mistakenly predicting someone into the two categories =
differ=20
  (suppose the categories were "found guilty of murder" and "not =
guilty", for=20
  example). The following is an example of such a CLASSPLOT:</P><PRE>    =
  32 +                                                           f+
         |                                                           f|
         |                                                           f|
F        |                                                           f|
R     24 +                                                           f+
E        |                                                           f|
Q        |                                                           f|
U        |                                                           f|
E     16 +                                                           f+
N        |                                                           f|
C        |                                                           f|
Y        |                                                           f|
       8 +                                                           f+
         |                                                           f|
         |                  f f                  f           f  ffffff|
         |          n fnn nnnnnf nnfnn nnn  n fn nnffnff  f ff nfnffff|
Predicted --------------+--------------+--------------+---------------
  Prob:   0            .25            .5             .75             1
  Group:  nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnffffffffffffffffffffffffffffff

          Predicted Probability is of Membership for found guilty

          Symbols: n - not guilty
                   f - found guilty

          Each Symbol Represents 2 Cases.
  <P>If we were called as expert witnesses to advise the court about the =

  probability that the person accused had committed murder, using the =
variables=20
  in this particular logistic regression model, we might want to set a =
predicted=20
  probability criterion of .9 rather than .5</P>
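
A sketch of what changing the criterion does, with made-up predicted probabilities; raising the cutoff from .5 to .9 trades missed category-1 cases for fewer false accusations.

import numpy as np

p = np.array([.05, .40, .55, .70, .92, .97])  # made-up predicted probabilities

for cutoff in (0.5, 0.9):
    pred = (p > cutoff).astype(int)   # predict category 1 above the cutoff
    print(cutoff, pred)               # .5 -> four 1s; .9 -> only two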
  <LI><B>Overall significance</B>=20
  <P>SPSS will offer you a variety of statistical tests. Usually, =
though,=20
  overall significance is tested using what SPSS calls the <I>Model=20
  Chi</I>-<I>square</I>, which is derived from the likelihood of =
observing the=20
  actual data under the assumption that the model that has been fitted =
is=20
  accurate. It is convenient to use -2 times the log (base <I>e</I>) of =
this=20
  likelihood; we call this -2<I>LL</I>. The difference between =
-2<I>LL</I> for=20
  the best-fitting model and -2<I>LL</I> for the null hypothesis model =
(in which=20
  all the <I>b</I> values are set to zero) is distributed like =
chi-squared, with=20
  degrees of freedom equal to the number of predictors; this difference =
is the=20
  <I>Model chi</I>-<I>square</I> that SPSS refers to. Very conveniently, =
the=20
  difference between -2<I>LL</I> values for models with successive terms =
added=20
  also has a chi-squared distribution, so when we use a stepwise =
procedure, we=20
  can use chi-squared tests to find out if adding one or more extra =
predictors=20
  singificantly improves the fit of our model. <A name=3Dcoeffs></A></P>
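
A sketch of the Model chi-square test, using scipy for the chi-squared tail probability; the -2LL values are made up, and the degrees of freedom equal the number of predictors in the model.

from scipy.stats import chi2

minus2ll_null = 140.0    # -2LL with all b values set to zero (made up)
minus2ll_model = 110.0   # -2LL for the fitted model (made up)
df = 10                  # degrees of freedom = number of predictors

model_chisq = minus2ll_null - minus2ll_model     # the Model chi-square
p_value = chi2.sf(model_chisq, df)               # upper-tail probability
print(model_chisq, p_value)                      # 30.0, p < .001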
  <LI><B>The interpretation of coefficients</B>=20
  <P>How can we <I>describe</I> the effect of a single regressor in =
logistic=20
  regression? The fundamental equation for logistic regression tells us =
that=20
  with all other variables held constant, there is a constant increase =
of=20
  <I>b</I><SUB>1</SUB> in logit(<I>p</I>) for every 1-unit increase in=20
  <I>x</I><SUB>1</SUB>, and so on. But what does a constant increase in=20
  logit(<I>p</I>) mean? Because the logit transformation is non-linear, =
it does=20
  not mean a constant increase in <I>p</I>; so the increase in <I>p</I>=20
  associated with a 1-unit increase in <I>x</I><SUB>1</SUB> changes with =
the=20
  value of <I>x</I><SUB>1</SUB> you begin with. </P>
  <P>It turns out that a constant increase in logit(<I>p</I>) does have =
a=20
  reasonably straightforward interpretation. It corresponds to a =
constant=20
  <I>multiplication</I> (by exp(<I>b</I>)) of the <B>odds</B> that the =
dependent=20
  variable takes the value 1 rather than 0. So, suppose =
<I>b</I><SUB>1</SUB>=20
  takes the value 2.30 - we choose this value as an example because =
exp(2.30)=20
  equals 10, so the arithmetic will be easy. Then if =
<I>x</I><SUB>1</SUB>=20
  changes increases by 1, the odds that the dependent variable takes the =
value 1=20
  increase tenfold. So, with this value of <I>b</I><SUB>1</SUB>, let us =
suppose=20
  that with all other variables at their mean values, and =
<I>x</I><SUB>1</SUB>=20
  taking the value 0, we predict a logit(<I>p</I>) of 0; this means that =
there=20
  is an even chance of the dependent variable taking the value 1. Now =
suppose=20
  <I>x</I><SUB>1</SUB> increases to 1. The odds that the dependent =
variable=20
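
The arithmetic of this example, sketched numerically:

import math

b1 = 2.30
print(math.exp(b1))        # roughly 10: each 1-unit step in x1 multiplies the odds by 10

def p_from_logit(logit):
    """Invert the logit transformation to recover p."""
    return 1 / (1 + math.exp(-logit))

print(p_from_logit(0.0))   # x1 = 0: logit 0, even odds, p = .5
print(p_from_logit(b1))    # x1 = 1: odds about 10:1, p close to 10/11 (~.91)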
