

Hello,
For a standard, simultaneous binary logistic regression in SPSS 21:
- 7 covariates (all categorical, 4 are dichotomous).
- N~240.
- Covariate X1 has 4 levels.
- Covariate X1 Wald is not significant. However, one of the contrasts for X1 is significant. Theoretically I would expect the significant contrast, and it is potentially an important finding.
Should the Wald for the covariate be interpreted like an omnibus test in ANOVA? That is, X1 is not a significant predictor, thus the contrasts should not be interpreted?
I guess the real problem is that I am underpowered and/or that I made a wrong decision regarding the composition of the levels.
What do you think?
Thanks!
Hans
PS a related, bonus question... When only one categorical covariate with multiple levels is entered into logistic regression, what do you do if the overall model chi-square ("omnibus test") is significant (p<.05) but the Wald for the variable is not?


On Wednesday, December 30, 2015 5:32 PM, hanshananigan wrote:
> Hello,
>
> For a standard, simultaneous binary logistic regression in SPSS 21:
> - 7 covariates (all categorical, 4 are dichotomous).
> - N~240.
> - Covariate X1 has 4 levels.
> - Covariate X1 Wald is not significant. However, one of the contrasts
> for X1 is significant. Theoretically I would expect the significant
> contrast, and it is potentially an important finding.
>
> Should the Wald for the covariate be interpreted like an omnibus test
> in ANOVA? That is, X1 is not a significant predictor, thus the
> contrasts should not be interpreted?
The simple answer is yes. If you had done a planned comparison
using only the levels of the covariate that produced the significant
result, you could argue for its significance being meaningful and not
a Type I error. However, this result is part of a larger linear model,
and it is not clear that you would still have the significant result
outside of this model (i.e., this planned comparison depends on the
other factors used in the analysis). Count the number of significance
tests you are doing and then determine how many of the results could
be due to chance (5%, or 1 in 20). It sounds like you've done a lot of
tests and should expect at least one significant result on the basis
of chance.
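Mike's 1-in-20 arithmetic can be sketched in a few lines of Python (an editorial illustration, not from the original posts; it assumes the tests are independent, which real model tests are not):

```python
# Chance of at least one nominally "significant" result among k
# independent tests at a given alpha -- the familywise error rate.
def prob_at_least_one(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

# With ~20 tests, a single "hit" is close to a coin flip.
for k in (1, 7, 20):
    print(k, round(prob_at_least_one(k), 3))
```

Correlated tests make the exact number smaller, but the point stands: many tests practically guarantee some "significant" results.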
> I guess the real problem is that I am underpowered and/or that I made
> a wrong decision regarding the composition of the levels.
>
> What do you think?
>
> Thanks!
> Hans
>
> PS a related, bonus question...when only one categorical covariate
> with multiple levels is entered into logistic regression, what do you
> do if the overall model chi-square ("omnibus test") is significant
> (p<.05) but the Wald for the variable is not?
In ordinary ANOVA with multiple levels (e.g., one-way with 4 levels), it
is possible to have a significant overall F while none of the pairwise
differences between means is significant, a result that most
people find puzzling. In this case, if the significant F is real (not a
Type I error), all it guarantees is that one orthogonal comparison is
statistically significant. But orthogonal comparisons can be either
simple (pairs of means) or complex (e.g., the mean of the first two
levels versus the mean of the last two levels). Ultimately, it
depends upon what hypothesis or set of hypotheses one is interested
in. If one has very specific hypotheses to test, then it would be
better to do planned comparisons instead of the traditional two-stage
analysis (i.e., omnibus test followed by post hoc/multiple comparisons).
Others may have different views.
Others may have different views.
Mike Palij
New York University
[hidden email]
=====================
To manage your subscription to SPSSX-L, send a message to
[hidden email] (not to SPSSX-L), with no body text except the
command. To leave the list, send the command
SIGNOFF SPSSX-L
For a list of commands to manage subscriptions, send the command
INFO REFCARD


Thank you Mike! That helps immensely.
Regarding the "PS", I should clarify that the Wald to which I referred was the overall variable Wald, not the one for individual planned comparisons. I can't find a good description of the relationship between the chi-square and the Wald (Internet and Tabachnick & Fidell), so if anyone has a good link (or, if not too much trouble, an explanation) I would certainly appreciate it!


On Wednesday, December 30, 2015 6:59 PM, hanshananigan wrote:
>
> Thank you Mike! That helps immensely.
>
> Regarding the "PS", I should clarify that the Wald to which I referred
> was the overall variable Wald, not the one for individual planned
> comparisons. I can't find a good description of the relationship
> between the chi-square and the Wald (Internet and Tabachnick & Fidell),
> so if anyone has a good link (or, if not too much trouble, an
> explanation) I would certainly appreciate it!
I'm not entirely sure I understand what you are asking but let me make
a couple of points:
(1) Take a look at Hosmer & Lemeshow's "Applied Logistic Regression"
(the 2013 edition is Hosmer, Lemeshow & Sturdivant). You can
preview parts of it on books.google.com, and pages 38-42, I think,
address your issue; see:
https://books.google.com/books?id=64JYAwAAQBAJ&printsec=frontcover&dq=hosmer+and+lemeshow&hl=en&sa=X&ved=0ahUKEwjm9e6D8ITKAhVIox4KHSUgCLUQ6AEIOjAC#v=onepage&q=multivariate%20Wald&f=false
(2) If I am not mistaken, the chi-square test used in multiple logistic
regression comes about from the comparison of the likelihoods
for the "constant" (intercept) only model and the model with the fitted
"independent/predictor" variables. The null hypothesis is that in the
fitted model all of the slopes for the predictors are zero (i.e., the
Wald statistics are not significant). A significant chi-square indicates
that one or more of the coefficients is not equal to zero (i.e., at
least one Wald statistic is significant).
This may be somewhat of an oversimplification because it overlooks
issues of model misspecification -- leaving out variables that are
really correlated with the outcome but can only express themselves via
the predictors being used (suggesting the wrong predictor is used).
I'm sure someone will correct me on this if I am wrong.
(3) The overall model can be tested by a chi-square or another test
statistic, such as the multivariate Wald statistic (see page 42 in
Hosmer, Lemeshow & Sturdivant via the link above) or the Score test. I
assume that the multivariate Wald test is not what you are referring to,
since SPSS does not produce it (I'm not sure which software does; SAS
does provide the Score test).
HTH. As always, I'm open to correction.
Mike Palij
New York University
[hidden email]


see inserted below.

> Date: Wed, 30 Dec 2015 15:32:59 -0700
> From: [hidden email]
> Subject: Logistic regression: categorical covariate Wald is not significant but individual contrast is significant?
> To: [hidden email]
>
> Hello,
>
> For a standard, simultaneous binary logistic regression in SPSS 21:
> - 7 covariates (all categorical, 4 are dichotomous).
> - N~240.
> - Covariate X1 has 4 levels.
> - Covariate X1 Wald is not significant. However, one of the contrasts
> for X1 is significant. Theoretically I would expect the significant
> contrast, and it is potentially an important finding.
>
> Should the Wald for the covariate be interpreted like an omnibus test
> in ANOVA? That is, X1 is not a significant predictor, thus the
> contrasts should not be interpreted?

What were your hypotheses when you started? "I expect the significant
contrast" is intriguing, but would you say the same for every other
contrast in your analysis? You get POWER from laying out your hypotheses
and analyses in advance, usually with the test that justifies your data
collection being described by very few (1-3) degrees of freedom. If that
"expected significant" contrast was more important than the other 2 d.f.
of the variable, then perhaps you should not be looking at the test on
the whole variable at all. Else, it is proper to control for the
multiple testing.

If this is (merely) an exploratory analysis over all 7 variables, then
you have the minor comfort that not everything was n.s.; but if the
variables are not knocking each other out of contention -- which works
just as nastily for LR as it does for OLS -- then 240 is a fairly decent
N if your interesting groups are not tiny, and you may be testing for
effects that are not especially large.

> I guess the real problem is that I am underpowered and/or that I made
> a wrong decision regarding the composition of the levels.
>
> What do you think?
>
> Thanks!
> Hans
>
> PS a related, bonus question...when only one categorical covariate
> with multiple levels is entered into logistic regression, what do you
> do if the overall model chi-square ("omnibus test") is significant
> (p<.05) but the Wald for the variable is not?

My understanding is that the test by subtraction ("omnibus test") is
more reliable than the same d.f. tested by Wald (using the effect
divided by its estimated error).

--
Rich Ulrich


Many thanks, Mike and Rich! I will certainly check out the Hosmer & Lemeshow book.
It helped for me to understand that the model chi-square is a test of whether the model with predictors is a better fit than the constant-only model, whereas the Wald is a test of whether or not a logistic coefficient is different from zero: two related but not-quite-the-same things.
You got me to consider a more a priori approach to planned comparisons, so I will play around with that a bit.
Thanks!
Hans


Hans,
The model Chi-Square test, also known as the Likelihood Ratio (LR) Chi-Square test, compares nested models. The default model Chi-Square in the LOGISTIC REGRESSION procedure compares the intercept-only model to the full model, but it is possible to incorporate multiple ENTER statements to compare the full model to a model with fewer predictors.
The LR test is generally superior to the Wald test, and the Wald test yields results more similar to the LR test as N increases. The drawback to the LR test is having to fit the nested models using the ENTER statements. From what I recall, you can obtain LR confidence intervals for each parameter estimate using the GENLIN procedure.
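The Wald/LR distinction can be seen in a hand computation. Below is an editorial sketch (made-up 2x2 counts, standard library only): with a single binary predictor the logistic MLE has a closed form, so both chi-squares drop out directly.

```python
from math import log, sqrt, erfc

# Hypothetical counts (assumed for illustration):
# events / non-events in two groups of a binary predictor.
a, b = 30, 70   # group 1: events, non-events
c, d = 20, 80   # group 0: events, non-events

# Wald chi-square: (log odds ratio / its standard error)^2.
beta = log((a * d) / (b * c))
se = sqrt(1/a + 1/b + 1/c + 1/d)
wald = (beta / se) ** 2

# LR chi-square: 2 * (logLik of fitted model - logLik of intercept-only).
def loglik(events, n, p):
    return events * log(p) + (n - events) * log(1 - p)

n1, n0 = a + b, c + d
ll_null = loglik(a + c, n1 + n0, (a + c) / (n1 + n0))  # intercept only
ll_full = loglik(a, n1, a / n1) + loglik(c, n0, c / n0)
lr = 2 * (ll_full - ll_null)

def chisq_sf_df1(x):
    # P(chi-square with 1 d.f. > x), closed form via the normal tail.
    return erfc(sqrt(x / 2))

print("Wald:", round(wald, 3), "p =", round(chisq_sf_df1(wald), 4))
print("LR:  ", round(lr, 3), "p =", round(chisq_sf_df1(lr), 4))
```

Here the two statistics come out close but not identical; the gap shrinks as the cell counts grow, which is the large-N point above.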
Ryan
Sent from my iPhone
> On Dec 31, 2015, at 8:53 AM, hanshananigan < [hidden email]> wrote:
> -- snip --


Would like to ask a follow-up question on this thread.
Say I have the following results in a logistic regression:
Variable   Wald   P      OR     LCL    UCL
Age        5.11   0.07
Age (1)    0.56   0.45   1.30   0.66   2.59
Age (2)    4.06   0.04   0.22   0.51   0.96
Since the Wald says age does not contribute to the model, my interpretation
is that age is not contributing to the model and I am not reporting the ORs,
even though the OR for Age (2) is significant.
I can't find anything that specifically says that this is correct.
Can anyone clarify?
Thanks much
Carol



I see you do refer to the whole thread as preserved by Nabble.
That thread discusses the general case.
Adapting what I posted on 12/30/2015 -- if the AGE(2) contrast
represents your /a priori/ hypothesis, then the p-level of that
contrast is what should have your attention for reporting.
If I had 3 age groups (which is what these results suggest),
I would take the linear scoring as my main hypothesis, and
the quadratic transformation for disconfirmation of linearity.
So: you have an effect to report if AGE(2) is an expected linear
trend. Otherwise, the 2 d.f. effect for Age is not significant.

Rich Ulrich
From: SPSSX(r) Discussion <[hidden email]> on behalf of parisec <[hidden email]>
Sent: Friday, April 26, 2019 6:12 PM
To: [hidden email]
Subject: Re: Logistic regression: categorical covariate Wald is not significant but individual contrast is significant?
-- snip --


Are you suggesting that it really depends on the hypothesis? Suppose the
question is "Is increasing age associated with increased odds of wrinkles?"
and I had 3 age categories.
If the Wald for age was not significant, my first thought would be to not
reject H0 of no association of age with wrinkles and ignore the ORs for each
of the age categories.
Since my hypothesis has a linear trend, I should reject H0 of no association
of age with wrinkles because the youngest age category shows significantly
decreased odds of wrinkles.
Suppose this was a nominal variable, where instead of age, the variable was
hair color? Would I not reject H0 of no association of hair color with
wrinkles if the Wald was NS but red hair had ORs that were statistically
significantly lower than brown hair (the reference category)?
Is this accurate?
Thank you
Carol
Rich Ulrich wrote
> -- snip --



> Are you suggesting that it really depends on the hypothesis? Suppose
> the question is "Is increasing age associated with increased odds of
> wrinkles" and I had 3 age categories.
Yes, I am SAYING that the test that tests /your/ hypothesis is the one
that (in general) you ought to rely on, and not a test with extra d.f. that
tests something else. "Increasing age" surely sounds like a linear contrast
across three age categories that are properly in order.
> If the Wald for age was not significant,
Now, that question is a problem for me, from this start. A Wald
test is a test on a 1 d.f. contrast, not a test on 3 age categories
when they are taken as categories. I am saying that you want either
a Wald test or the LR test on age as a linear trend -- one d.f. If that
is the Wald test in the question, then that is the test you want.
So, I don't know what Wald test you may be pointing to. Looking
at separate age contrasts by category is not the same as looking
at the linear trend. For three groups, you can have complete coverage
of the 2 d.f. by using linear and quadratic trends. These will be
orthogonal contrasts if n1 = n3.
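The n1 = n3 condition is easy to check numerically. An editorial sketch (the weights are the standard equal-spacing linear and quadratic contrasts; the group sizes are made up):

```python
# Linear and quadratic contrast weights for three ordered,
# equally spaced groups.
linear    = [-1,  0, 1]
quadratic = [ 1, -2, 1]

def weighted_dot(c1, c2, n):
    # Two contrasts are orthogonal, given group sizes n,
    # exactly when this weighted sum is zero.
    return sum(w * x * y for w, x, y in zip(n, c1, c2))

print(weighted_dot(linear, quadratic, [40, 80, 40]))  # n1 = n3 -> 0
print(weighted_dot(linear, quadratic, [30, 50, 70]))  # n1 != n3 -> nonzero
```

The middle group's size does not matter here, because the linear weight there is zero; only the n1 = n3 balance does.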

Rich Ulrich


A Wald test is a Chi-square test, and can have df > 1. Here's an example
using the survey_sample.sav data set that comes with SPSS.
NEW FILE.
DATASET CLOSE ALL.
* Edit path on next line as needed.
GET FILE = "C:\SPSSdata\survey_sample.sav".
COMPUTE Male = sex EQ 1.
FORMATS Male(F1).
LOGISTIC REGRESSION VARIABLES Male
/METHOD=ENTER race
/CONTRAST (race)=Indicator(3)
/PRINT=CI(95)
/CRITERIA=PIN(0.05) POUT(0.10) ITERATE(20) CUT(0.5).
COMPUTE p = 1 - CDF.CHISQ(10.598192,2).
COMPUTE p1 = 1 - CDF.CHISQ(0.007019,1).
COMPUTE p2 = 1 - CDF.CHISQ(4.411820,1).
FORMATS p to p2 (F10.8).
LIST p p1 p2 /CASES FROM 1 to 1.
* Notice that these computed pvalues match the pvalues in the
* table of coefficients from LOGISTIC REGRESSION.
The table of coefficients from the logistic regression model shows 3 Wald
tests for Race:
Race Wald = 10.598 df=2 p=.005
Race(1) Wald = .007 df=1 p =.933
Race(2) Wald = 4.412 df=1 p=.036
The values I generate via COMPUTE with CDF.CHISQ are the same:
p p1 p2
.00499611 .93323177 .03569074
The table Carol showed earlier in the thread reported Wald Chi-square =
5.11, p = 0.07, and with 3 age groups, one would infer df = 2. However,
with Chi^2 = 5.11 and df = 2, I get p = .07769223 with both SPSS and Stata.
So something's not quite right there.
COMPUTE pCarol = 1 - CDF.CHISQ(5.11,2).
FORMATS pCarol (F10.8).
LIST pCarol /CASES FROM 1 to 1.
pCarol
.07769223
Using Stata:
. display chi2tail(2,5.11)
.07769223
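The same tail probabilities can be reproduced outside SPSS and Stata; e.g., in Python with only the standard library, using the closed-form chi-square survival functions for df = 1 and df = 2 (an editorial cross-check):

```python
from math import exp, erfc, sqrt

def chisq_sf(x, df):
    # Upper-tail probability of a chi-square variate.
    # Closed forms: df=1 via the normal tail, df=2 is exponential.
    if df == 1:
        return erfc(sqrt(x / 2))
    if df == 2:
        return exp(-x / 2)
    raise ValueError("only df 1 and 2 implemented in this sketch")

print(round(chisq_sf(10.598192, 2), 8))  # Race omnibus
print(round(chisq_sf(0.007019, 1), 8))   # Race(1)
print(round(chisq_sf(4.411820, 1), 8))   # Race(2)
print(round(chisq_sf(5.11, 2), 8))       # Carol's Age Wald at df=2
```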
HTH.
Rich Ulrich wrote
> -- snip --


Bruce Weaver
[hidden email]
http://sites.google.com/a/lakeheadu.ca/bweaver/
"When all else fails, RTFM."
NOTE: My Hotmail account is not monitored regularly.
To send me an email, please use the address shown above.



Bruce,
Oops! My bad. The only Wald tests I ever considered reporting were ones
with 1 d.f.
I did take the (insufficient) precaution of looking at the Wikipedia
article on Wald tests -- where I apparently quit reading at the first
semicolon. The first part of the sentence says "estimate over its
standard error"; the second part mentions that the estimated covariance
matrix can include several variables.
Still, I hope I made it clear enough that when you do have one a priori
hypothesis, the test on THAT is the test that you generally want to be
concerned with. Asking for a "significant" overall test before
interpreting all the single d.f. tests which it subsumes is one fairly
good way to control for multiple-testing error, when you do not have
priorities among the hypotheses of the single d.f. tests.

Rich Ulrich
From: SPSSX(r) Discussion <[hidden email]> on behalf of Bruce Weaver <[hidden email]>
Sent: Monday, April 29, 2019 6:05 PM
To: [hidden email]
Subject: Re: Logistic regression: categorical covariate Wald is not significant but individual contrast is significant?
-- snip --


Thanks much for the follow-up, Richard. This is an issue that has bugged me
for a long time and I have not found anything that is decisive.
My hypotheses are generally "Is variable X associated with increased odds of
Y when adjusted for z, aa, bb, etc.".
But now I think I may have been too conservative in my interpretations. I
have not been reporting ORs for variables that don't have a significant Wald
even if one of the levels has significant ORs. In many cases it's a linear
trend, e.g., stage of disease (1-4) or comorbidity index (0-2), where higher
generally means worse than lower. Sometimes the higher value is significant
(stage 4 vs. stage 1) whereas a lower value may not be. This is clinically
meaningful, so interpreting the ORs for these levels makes total sense.
But I'm thinking for variables like race, if the Wald for race is NS and
only one race is showing any significance, then stating that race is
associated with increased odds might be a stretch.



Carol, when you say it's often a linear trend, are you using the POLYNOMIAL
trend option that is available for LOGISTIC REGRESSION? Assuming CatVar is
a categorical variable with ordered categories:
/CONTRAST (CatVar)=POLYNOMIAL
In this case, I would not insist on significance of the omnibus Wald test.
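For reference, the weights behind a polynomial decomposition for four equally spaced, equal-n levels are the standard textbook set (what POLYNOMIAL produces, up to scaling); a quick editorial check of their orthogonality:

```python
# Standard orthogonal polynomial contrast weights for 4 equally
# spaced, equal-n levels.
contrasts = {
    "linear":    [-3, -1,  1,  3],
    "quadratic": [ 1, -1, -1,  1],
    "cubic":     [-1,  3, -3,  1],
}

# Each contrast sums to zero, and every pair has zero dot product,
# so the 3 d.f. for the factor split into three independent 1-d.f. tests.
names = list(contrasts)
for i, u in enumerate(names):
    assert sum(contrasts[u]) == 0
    for v in names[i + 1:]:
        dot = sum(x * y for x, y in zip(contrasts[u], contrasts[v]))
        print(u, "x", v, "->", dot)
```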
HTH.
parisec2 wrote
> -- snip --




The problem with extra d.f. is that they reduce the power of analyses.
Consider: contrasts across three or more groups can be broken into
linear/nonlinear, or linear/quadratic/cubic/etc., chi-squared tests or
F-tests. For "Pearson" evaluation of contingency tables, these add up
exactly to the total for all the d.f. of the table. For contingency tables
tested by maximum likelihood, they total up not-quite-exactly. If you
want a test of the linear hypothesis, don't do a test that gives equal
"weight" to several hypotheses that you are not interested in.
And the same principle applies elsewhere. I was first impressed by it
when I was doing ANOVAs with variables across 6 age groups -- the linear
trend (with one d.f.) was regularly accounting for 90% of the total
sum of squares for all 5 d.f. -- always "highly significant" whether my
6-group test met the 5% level or not. The one F was based on SS/5, the
other used ~0.9*SS/1.
Now, I imagine another d.f. problem with tests on "race". Someone's
standard instrument might have 5 or 10 "races" listed ... which might be
useful when switching between Miami and San Francisco, or when your N is
in the thousands. But it is not useful, for a test, to include a category
where the N is minuscule -- you increase the d.f. without having any fair
chance of a contribution from that cell. Tiny cells are more obvious when
staring at the 2x5 contingency table than when the information is buried
in tests for "contrasts for the parameter" after a program has done the
dummy coding for you. And that is why a lot of US studies report on
"white vs. nonwhite" or "white vs. black vs. other" even though data
collection probably used more choices: they want good tests, and they
don't want distractions from trivial Ns in what they present.
By similar logic -- a simple MANOVA tests its several dependent
variables across /all possible contrasts/ of those variables.
If you are interested in "good performance", don't test across all those
MANOVA d.f., which represent implicit contrasts. Construct your own,
arbitrary, weighted composite score; or, as your major test, use one fine
outcome all by itself.

Rich Ulrich
From: SPSSX(r) Discussion <[hidden email]> on behalf of parisec <[hidden email]>
Sent: Tuesday, April 30, 2019 1:06 PM
To: [hidden email]
Subject: Re: Logistic regression: categorical covariate Wald is not significant but individual contrast is significant?
-- snip --

