Smokescreen: 'passive smoking' and public policy - Part 1


Author: John Luik
Article Published: 1997


I am very much indebted to the work of Dr. J.R. Johnstone on the previous NHMRC Report (1986). I am also indebted to a variety of writings by Dr. G. Gori, which have substantially enhanced my understanding of epidemiology. Finally, I am most appreciative of the suggestions of the reviewers of this paper, several of which have improved its clarity and the force of its arguments.

The Health Effects of Passive Smoking, the Draft Report of the NHMRC Working Party, represents a careful and sophisticated development of the principles of two Canadians: Professor John Last, a distinguished epidemiologist, and Marc Lalonde, a distinguished former Minister of National Health and Welfare. Professor Last's principles are taken from his plenary address to the International Epidemiological Association, while Lalonde's are taken from a 1974 document, 'A New Perspective on the Health of Canadians'. Together, Last and Lalonde provide the unstated but nonetheless pervasive 'logic' that binds together the Report: namely, that the scientific 'yes, but' and the 'insensitive application of scientific rigour' must not be brought to bear in the public policy process about Environmental Tobacco Smoke (ETS), as it is 'not in the public interest' to do so. Put most directly, the Report appears to be an attempt, no doubt in the interests of what the authors take to be a good cause, to corrupt the public policy process with respect to ETS through the use of either bad science or corrupt science. This response to the Report is divided into two sections. In the first section, we outline what we mean by the claim that the Report is an instance of either bad science or corrupt science; in the second, we examine the consequences of the use of such bad or corrupt science on the public policy process.

1 - Bad or Corrupt Science

 

At the outset it is important to define what we mean by bad science and corrupt science. By bad science we mean incompetent science, science that has unconsciously failed to do its work properly in some essential sense. Corrupt science, on the other hand, is bogus science, science that knows that its data misrepresent reality and that its processes are deviant, but nonetheless attempts to pass itself off as genuine science. It is science that has an institutionalised motivation and justification for allowing ends extrinsic to science to determine the findings of science, for allowing science to be subject to an agenda not its own, for allowing science to tell lies with a clear conscience. It is essentially science that wishes to claim the public policy advantages of genuine science without conforming to the scientific process or doing the work of real science. There are at least four characteristics that mark corrupt science off from merely incompetent science.

First, corrupt science moves not from hypothesis and data to conclusion, but instead from mandated/acceptable conclusion to selected data and back to the mandated/acceptable conclusion. That is to say, it is science that fundamentally distorts the scientific process by using selected data to reach the 'right' conclusion, a conclusion that by the very nature of the data necessarily misrepresents reality.
 

Second, corrupt science necessarily misrepresents the state of knowledge about what it seeks to explain. Rather than acknowledging alternative evidence, or problems with its own evidence that would cast doubt on its conclusiveness, and rather than admitting the complexity of the issue under review and the limits of its evidence, corrupt science presents what is at best a carefully chosen partial truth as the whole truth necessary for public policy.

 

Third, corrupt science misrepresents not only reality but also its own processes in arriving at its conclusions. Instead of acknowledging the selectivity of its process and the official compulsion to demonstrate the correct predetermined conclusion, it invests both its process and its conclusions with a mantle of indubitability.

 

Fourth, and perhaps most importantly, whereas good science deals with dissent on the basis of the quality of its evidence and arguments and considers ad hominem argument inappropriate in science, corrupt science seeks to create formidable institutional barriers to dissent by excluding dissenters from the process of review and by contriving to silence dissent not by challenging its quality, but by questioning its character and motivation.

 

These four characteristics manifest themselves in a variety of ways, which include: claiming that a statistical association is a causal relationship; a highly selective use of data; a highly selective practice of citation and referencing; claiming that a risk exists regardless of exposure level; assuming that a large number of statistically non-significant studies constitutes a significant evidentiary trend; claiming that a series of inconclusive or weak studies justifies a strong conclusion; suggesting that weak evidence warrants decisive regulatory action; being unwilling to consider seriously non-conforming data; failing to contextualise risk in terms of its significance; implying that the status of an authority justifies its evidence and policy recommendations; suggesting that the mean between two Risk Ratios (RRs) is the reasonable real RR; 5 claiming that a finding based on one population is necessarily true of a different population; confusing the roles of public policy advocate and scientist; suggesting that certain risks need not conform to the normal public policy process; and advancing public policy measures as 'scientifically justified' without regard to their potential efficacy.

 

There are at least two major patterns of problems within the Report that suggest the label of incompetent or corrupt science: the Report's understanding of what counts as evidence, and indeed its use of evidence; and the Report's understanding of logic, particularly coherence and consistency.

 

Let us begin with the problems of evidence. It is remarkable that a document published by a body that aims to produce high quality evidence-based information and advice for use in a range of settings 6 would proceed without providing its readers with any clear understanding of what its authors take to be the status of the central scientific discipline, namely epidemiology, upon which the weight of its evidence is based. It is equally remarkable that the authors would produce a document that provides no clear position on the assumptions that are employed in its risk assessments; for example, does it use conservative assumptions and default options? These two issues, one about the status of epidemiology in general and the other about risk assessment assumptions in particular, are not academic trifles, inasmuch as they go to the heart of two quite central questions: whether ETS really does pose a significant public health problem, and whether the authors are prepared to be forthright about how certain risk assessment assumptions can conceal the uncertainties of the data. What the authors of the Report appear to believe is that ETS causes lung cancer, based on the weight of epidemiological evidence. 7 For instance, the Report argues that 'the evidence that passive smoking causes lung cancer is strong - it is biologically plausible, reasonably consistent, and demonstrates dose-response relationships'. 8

 

But apart from the specific question of whether the evidence adduced in the Report can justify this conclusion of causality, there is the prior question of whether epidemiology can support any causal conclusions. As McCormick and Skrabanek have noted in their Follies and Fallacies in Medicine: 'It is not uncommon for epidemiological data regarding associations to be abused by assuming that an association implies causation. This is particularly likely in the case of diseases of unknown cause. In modern epidemiology, the concept of "cause" has been replaced by statistical associations with so-called risk factors. As Stehbens points out, risk factors such as high levels of cholesterol in blood are not causes of coronary heart disease, but associated phenomena, like cough, shortness of breath, or fever in pneumonia.' 9 It is certainly arguable that the question the Report seeks to answer is what has been called a trans-scientific question, a question that goes beyond the powers of science. 10 As Ken Rothman has noted: 'Despite the philosophic injunctions concerning inductive inference, criteria have commonly been used to make such inferences. The justification offered has been that the exigencies of public health problems demand action and that, despite imperfect knowledge, causal inferences must be made.' 11

 

Indeed, this 'exigencies of public health' approach is precisely the one taken by the Report, 12 and in taking it the Report fails both to warn its readers of the inherent and perhaps insurmountable limitations of epidemiological evidence and to provide any convincing rationale as to why it should make a 'judgement about cause and effect'. 13 The Report's reasoning on this point seems utterly specious. For one thing, the Hill guidelines 14 might provide epidemiological 'causation', but this begs the question of whether this is genuine causation, for the reasoning is transparently circular: the canons of epidemiology guarantee causation, therefore causation obtains. Second, the Report's conclusions do not meet even the Hill criteria of causality, and thus there is no justified action that 'must be taken to protect health'. 15 The larger issue, surely, is whether it is proper to use the word 'cause' in this instance at all, particularly when the word, at least in the mind of the public, carries a meaning that appears to confound its epidemiological use.

 

For instance, when we say that Robert entered the room, pulled out a gun and shot Sam dead, and conclude that Robert caused Sam's death, we mean 'caused' in the sense that A caused B to happen. Causality in this sense is direct and certain. The causation of epidemiology is, however, not the causation of 'A caused B' but the 'causation' of statistical association, which can be something quite different. It is crucially important, therefore, for a Report of this sort, one which offers policy advice to both policy makers and the general public, to be precise about the limitations of its central evidentiary tool. To this end, the Report should make it clear that its use of the word 'cause' carries two caveats. First, the word is not used in the normal sense of cause: scientists have not watched ETS enter the lungs of any non-smoker and cause lung cancer. Second, epidemiological hypotheses about possible associations between multifactorial diseases and low-level environmental exposures are extremely difficult to establish, as opposed to epidemiological hypotheses about infectious diseases. As the late Petr Skrabanek has noted:

The main preoccupation of epidemiologists is now the association game. This consists of searching for associations between 'diseases of civilisation' and 'risk factors'. The 'diseases of civilisation' are heart disease and cancer. The 'risk factors' studied by epidemiologists are either personal characteristics (age, sex, race, weight, height, diet, habits, customs, vices) or situational characteristics (geography, occupation, environment, air, water, sun, gross domestic product, stress, density of doctors). Important associations, such as liver cirrhosis or Korsakoff's psychosis in alcoholism, retinopathy or foot gangrene in diabetes, aortic lesions or sabre tibias in syphilis, lung cancer in uranium ore miners, bladder cancer in workers with aniline dyes, are not discovered by epidemiologists but by clinicians, and they are not called 'associations' but the manifestations, signs, or complications of the diseases which are their causes. 16

This sort of scepticism about the ability to verify the hypotheses of epidemiology with respect to multifactorial diseases is shared by some of the discipline's most eminent practitioners. For instance, Sir Richard Doll himself noted that: 'Epidemiological observations, however, also have serious disadvantages that limit their value. First, they can seldom be made according to the strict requirements of experimental science, and consequently the available observations may be open to a variety of interpretations.' 17

 

The failure of epidemiology to meet the 'strict requirements of experimental science' lies in the fact that epidemiology does not always proceed on the basis of randomised trials, which are the unquestioned standard for drug studies and other medical research. As Gary Taubes, writing last year in an article in Science on the limits of epidemiology, describes the experimental ideal: 'Assign subjects to random test and control groups, alter the exposure of the test group to the suspected risk factor, and follow both groups to learn the outcome. Often, both the experimenters and the subjects are "blinded" - unaware who is in the test group and who is a control.' 18

 

But the 'strict requirements of experimental science' cannot be met for most risk factors, not simply because of the length of time that trials would take and the corresponding expense, but because of the significant ethical objection to exposing healthy subjects to suspected risks. This means that the epidemiology of multifactorial diseases must proceed on the basis of observational studies, which are either case-control studies or cohort studies. Case-control studies select a population that has a particular disease, select a population that does not have the disease, and then systematically compare the two groups for differences. Cohort studies take a large population, interview them about lifestyle and environment, and then follow the population for a period of time in order to determine who gets which diseases. (A minimal numerical sketch of these two designs follows the quotation below.) But these approaches are themselves open to a series of other problems, namely confounders (the unobserved variables in the populations being studied which are not controlled for) and biases (the problems inherent in the designs of the studies). For many epidemiologists, these create formidable if not insurmountable problems. Alvin Feinstein of Yale observes: 'In the laboratory you have all kinds of procedures for calibrating equipment and standardising measurement procedures. In epidemiology... it's all immensely prey to both the vicissitudes of human memory and the biases of the interview.' 19 Again, as Harvard's Alex Walker suggests: 'I have trouble imagining a system involving a human habit over a prolonged period of time that could give reliable estimates of [risk] increases that are of the order of tens of percent.' 20 These conclusions are supported by Gio Gori, President of the International Society of Regulatory Toxicology and Pharmacology, who notes that:

In studies of local populations, many factors trigger, facilitate, or impede particular outcomes, and introduce extreme difficulties in reaching specific causal attributions. In current practice, these difficulties are circumvented by ignoring the multiplicity of field variables, and by presuming that only the few of contingent interest matter. Adding uncertainty, most studies of multifactorial conditions are not randomised experiments, but observational surveys that are virtually impossible to replicate under equal conditions. Such observational studies are untestable by the standard rules of the scientific method and their reports could not be objectively validated... The ethics of science negates causal statements of fact if experimental predictivity is absent. 21
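To make the two observational designs just described concrete, here is a minimal sketch in Python. Every count is invented for illustration (none comes from any ETS study): a cohort design yields a relative risk directly, while a case-control design, which never observes incidence, yields an odds ratio instead.

```python
# Minimal sketch of the two observational designs described above.
# All counts are invented for illustration; none come from any ETS study.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Cohort design: compare disease incidence in the exposed and unexposed."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

def odds_ratio(cases_exposed, cases_unexposed, controls_exposed, controls_unexposed):
    """Case-control design: incidence cannot be observed, so compare the odds
    of past exposure among cases with the odds among disease-free controls."""
    return (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

# Hypothetical cohort: 10,000 exposed and 10,000 unexposed people followed up.
print(relative_risk(12, 10_000, 10, 10_000))   # RR = 1.2, a 'weak' association

# Hypothetical case-control study: 100 cases and 200 controls asked about exposure.
print(odds_ratio(55, 45, 100, 100))            # OR ~ 1.22, similarly weak
```

Both designs depend entirely on accurate exposure histories and comparable groups, which is why the memory and interview problems Feinstein and Walker describe bear so directly on weak associations of this size.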

I am not suggesting that epidemiology is not a science, only that the epidemiology of multifactorial diseases is plagued with substantial problems which rule out claims about causation, and about which the readers of the Report deserve to be informed. But it is not merely that the Report's authors do not forthrightly share their epidemiological assumptions, alert the reader to the methodological problems surrounding the epidemiology of multifactorial diseases, or indicate the special sense in which 'cause' functions in current epidemiology; it is also that they do not explicitly disclose their risk assessment assumptions, even though such assumptions do function to determine the Report's conclusions. For instance, the Report seems to proceed from conservative assessment assumptions; that is, it is intentionally biased towards finding more risk. Moreover, it is also driven by a common default option assumption: that in the absence of complete scientific knowledge it will fill the gap between scientific knowledge and public policy. 22 In this case the Report has assumed that, in the absence of convincing evidence to the contrary, it will proceed as if there is no threshold for ETS-caused cancer and as if the dose response is linear. 23
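The practical force of that default option can be seen in a few lines. The sketch below uses invented dose and risk figures, not anything from the Report; its only point is that whether low-dose risk is non-zero is settled by the assumption chosen, not by any observation.

```python
# Sketch of the linear no-threshold default described above.
# Both anchor numbers are invented for illustration only.
ACTIVE_DOSE = 20.0          # 'dose' of active smoking, cigarettes/day (hypothetical)
ACTIVE_EXCESS_RISK = 0.08   # hypothetical excess lifetime risk at that dose

def lnt_excess_risk(dose):
    """Default option: risk scales linearly all the way down to zero dose."""
    return ACTIVE_EXCESS_RISK * (dose / ACTIVE_DOSE)

def threshold_excess_risk(dose, threshold=1.0):
    """Rival assumption: below some threshold dose there is no excess risk."""
    return lnt_excess_risk(dose) if dose >= threshold else 0.0

ets_dose = 0.02  # ETS exposure as a cigarette-equivalent dose (hypothetical)
print(lnt_excess_risk(ets_dose))        # non-zero by assumption: 8e-05
print(threshold_excess_risk(ets_dose))  # zero under the threshold assumption
```

The two functions disagree at every low dose, yet no epidemiological study of the kind reviewed by the Report can distinguish between them; the choice between them is made before the data are consulted.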

 

Only such a default option could explain the Report's policy recommendations for dealing with ETS. Yet this default is an assumption: it is not scientific knowledge, because it cannot be confirmed or disproved by science. (The default option does appear to be inconsistent with what the majority of cancer specialists believe that science knows about carcinogenesis.) The difficulty with such a position is that it significantly lessens the scientific credibility of the Report, inasmuch as it builds into the Report a non-scientific assumption. Additionally, by avoiding the issue of conservative assumptions and default options, the Report avoids the most central issue of risk assessment. Taken together, the unsatisfactory and misleading discussion of the limits of epidemiology with respect to causation, and the incorporation of significant bias through the conservative risk assumptions and default options, tend to suggest not bad science but something closer to corrupt science. It is closer to corrupt science because it misrepresents the complexity of the issue and the nature of the evidence about it by failing to acknowledge both its biases and the inherent uncertainties of the risk assessment exercise itself.

The second evidentiary problem, which again suggests the Report's proximity to corrupted science, is its uncritical use of the US Environmental Protection Agency (EPA) 1992 Report. The Report refers to the EPA work as the 'most extensive recent review of the evidence of passive smoking and lung cancer'. 24 Quite surprisingly, the Report fails to mention the extensive criticism directed at the EPA's ETS analysis, or the fact that the EPA's findings are the subject of court action. 25 Given the uncritical attention devoted to the EPA's work, it is worth noting precisely how strong the evidence is which suggests that the EPA's science on ETS is corrupt science.



The evidence that the EPA science on ETS is corrupt science falls into two categories: evidence about the substance of the science, and evidence about the processes involved in creating and using the science.

A. The Substantive Issue

The EPA report, Respiratory Health Effects of Passive Smoking: Lung Cancer and Other Disorders, 26 claims that, 'based on the weight of the available scientific evidence, the US Environmental Protection Agency has concluded that the widespread exposure to environmental tobacco smoke in the United States presents a serious and substantial public health impact.' Is this, in fact, the case? In order to answer this question one must first know something about the data on which the EPA decision is based. The EPA report refers to the 30 epidemiological studies on spousal smoking and lung cancer published between 1982 and 1990. It is important to note that while EPA Administrator Reilly, in referring to the report, spoke about ETS and cancer in children and in the workplace, and though the report has been used as a basis for demanding smoking bans both in public places and in workplaces, the EPA did not examine those studies that look at workplace ETS exposure. The overwhelming majority of those workplace studies do not find a statistically significant association between ETS exposure and lung cancer in non-smokers: a fact that by itself destroys the legitimacy of any harm-based demand for public or workplace smoking bans.


Thus, to begin with, the EPA's case is based not on workplace or public ETS exposure, but on the risk of non-smoking spouses contracting lung cancer from their smoking spouses. But what of the 30 epidemiological studies? Those 30 studies come from different countries and vary substantially in size. Some have fewer than 20 subjects; others are based on larger populations, with the largest study involving 189 cancer cases. Of the 30 studies, 24 reported no statistically significant association, while six reported a statistically significant association, that is, a positive relative risk for non-smoking spouses. Now, relative risks are further classified as strong or weak, depending on their magnitude. Within the 30 studies on ETS and lung cancer, none reported a strong relative risk. Moreover, whenever the assessment of relative risk is weak, there is a substantial possibility that the finding is artificial rather than real. That is to say, there is a strong likelihood that even the weak relative risk is a reflection not of some real-world risk, but of problems with confounding variables or interpretive bias. There are, for instance, at least 20 confounding factors, ranging from nutrition to socioeconomic status, that have been identified as important to the development of lung cancer. Yet none of the 30 studies attempts to control for these factors. So, in assessing the global scientific evidence about ETS and lung cancer, the crucial conclusion is that none of the studies reports a strong relative risk for non-smokers married to smokers.
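How an uncontrolled confounder can manufacture a weak relative risk out of nothing is easy to show with a toy calculation. The stratum figures below are invented, and 'SES' (socioeconomic status) stands in for any of the twenty-odd confounders just mentioned.

```python
# Toy illustration of confounding: within each stratum the exposure has no
# effect on disease risk (true RR = 1.0), yet the crude, unstratified
# analysis reports a weak positive RR. All figures are invented.

strata = {
    # stratum: (n_exposed, n_unexposed, disease risk shared by BOTH groups)
    "low SES":  (600, 400, 0.002),   # exposure commoner, disease commoner
    "high SES": (200, 800, 0.001),   # exposure rarer, disease rarer
}

exp_cases = unexp_cases = exp_n = unexp_n = 0.0
for n_exp, n_unexp, risk in strata.values():
    exp_cases += n_exp * risk        # expected cases among the exposed
    unexp_cases += n_unexp * risk    # expected cases among the unexposed
    exp_n += n_exp
    unexp_n += n_unexp

crude_rr = (exp_cases / exp_n) / (unexp_cases / unexp_n)
print(f"crude RR = {crude_rr:.2f}")  # ~1.31, though the true effect is nil
```

A spurious RR of about 1.31 from a single unmeasured variable is larger than the weak associations reported in the ETS literature, which is the force of the point that none of the 30 studies controlled for such factors.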

 

Now, the EPA report discusses all of these 30 studies, but it limits its statistical analysis to only 11 US studies of spouses of smokers. What do the 11 studies show? Of the 11, 10 reported no statistically significant association between ETS exposure and lung cancer, while only one reported a statistically significant association. The EPA analysis of these 11 studies claims that they show a statistically significant difference in the number of lung cancers occurring in the non-smoking spouses of smokers, such that they suffer 119 such cancers for every 100 such cancers in non-smoking spouses of non-smokers. It is this finding of statistical significance, a finding based only on 11 US studies, 10 of which found no significant association, that is the basis for the EPA decision to classify ETS as a 'Group A' carcinogen. 27 In order to arrive at its 'conclusion', the EPA combined the data from the 11 studies into a more comprehensive data assessment called a 'meta-analysis'. Meta-analysis is governed by its own rules, as not every study is a candidate for such combined analysis. In general, meta-analysis is appropriate only when the studies being analysed together have the same structure. The difficulty with the EPA's use of meta-analysis of the 11 ETS studies is that it has failed to provide the requisite information about the structure of the 11 studies, information crucial for an independent assessment of whether the studies are indeed candidates for meta-analysis. Thus, the EPA conclusion is based on a meta-analysis that is difficult, if not impossible, to verify.
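For readers unfamiliar with the technique, here is a minimal sketch of the generic inverse-variance pooling that underlies a meta-analysis of this kind. The study figures are invented placeholders, not the EPA's data; notice that the arithmetic will happily pool any set of numbers, which is precisely why the comparability of the underlying studies must be established before pooling them.

```python
# Sketch of an inverse-variance (fixed-effect) meta-analysis, the generic
# pooling technique described above. The relative risks and 95% confidence
# limits below are invented placeholders, not the EPA's actual study data.
import math

# One tuple per study: (RR point estimate, lower 95% limit, upper 95% limit).
studies = [(1.2, 0.8, 1.8), (0.9, 0.5, 1.6), (1.3, 0.7, 2.4), (1.1, 0.9, 1.35)]

Z95 = 1.96
num = den = 0.0
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * Z95)  # recover SE from the CI
    w = 1.0 / se ** 2                               # inverse-variance weight
    num += w * math.log(rr)
    den += w

pooled, pooled_se = num / den, math.sqrt(1.0 / den)
print(f"pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - Z95 * pooled_se):.2f}"
      f"-{math.exp(pooled + Z95 * pooled_se):.2f})")
```

Nothing in this arithmetic checks whether the studies measured exposure the same way, controlled for the same confounders, or drew on comparable populations; that information must be supplied alongside the pooled estimate, and it is exactly what the author says the EPA failed to supply.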



But even more crucial to the question of assessing the quality of the EPA's ETS science is the issue of confidence intervals, for even by limiting its analysis to only 11 studies, and even by lumping these studies together through a meta-analysis, the EPA could not have achieved the 'right' result had it not engaged in a creative use of 'confidence intervals'. Essentially, confidence intervals express the likelihood that a reported association could have occurred by chance.


The generally accepted confidence interval is 95%, which means that there is 95% confidence that the association did not occur by chance. Inasmuch as most epidemiologists use the 95% confidence interval, the EPA itself, until the 1992 ETS report, always used this interval. Indeed, every one of the individual ETS studies reviewed by the EPA used a 95% confidence interval. Curiously, the EPA decided that in this instance it would use a 90% confidence interval, something that effectively doubles the chance of being wrong. Without using this 90% standard, the EPA could not have found that the 11 US studies were 'statistically significant'. Without employing a novel standard, without, in effect, changing the accepted rules of epidemiological reporting, the EPA result, already painfully coaxed into existence, would not have existed, and ETS could not have been labelled a 'Group A' carcinogen.
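The arithmetic of that switch is worth seeing. In the sketch below, the pooled relative risk of 1.19 echoes the 119-per-100 figure above, but the standard error is invented for illustration; the same point estimate comes out 'not significant' at the 95% level and 'significant' at the 90% level.

```python
# Sketch of how narrowing the confidence level can flip 'significance'.
# The pooled RR of 1.19 echoes the 119-per-100 figure above, but the
# standard error is invented for illustration.
import math
from statistics import NormalDist

log_rr = math.log(1.19)  # hypothetical pooled log relative risk
se = 0.103               # hypothetical standard error of the log RR

for level in (0.95, 0.90):
    z = NormalDist().inv_cdf(0.5 + level / 2)  # 1.96 for 95%, 1.645 for 90%
    lo, hi = math.exp(log_rr - z * se), math.exp(log_rr + z * se)
    verdict = "significant" if lo > 1.0 else "not significant"
    print(f"{level:.0%} CI: {lo:.2f}-{hi:.2f} -> {verdict}")
```

The point estimate never moves; only the width of the interval around it changes. A 90% interval is simply narrower, so its lower bound can clear 1.0 where the conventional 95% bound does not.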

 

Thus, despite all of its careful selection of the right data, its meta-analysis, and finally its relaxed confidence intervals, the conclusive point remains, as Huber, Brockie, and Mahajan had already noted in Consumers' Research in the United States, that 'No matter how the data from all of the epidemiological studies are manipulated, recalculated, "cooked", or "massaged", the risk from exposure to spousal smoking and lung cancer remains weak... No matter how these data are analyzed, no-one has reported a strong risk relationship for exposure to spousal smoking and lung cancer.' 28

B. The Process Issues


While a careful look at the EPA's ETS claims clearly shows why this science can be called nothing other than corrupt science, in that it uses highly selected data, data that are then further manipulated in breach of accepted scientific norms, all without cogent explanation, to reach the 'right' conclusion, an examination of the process underlying this 'science' demonstrates even more clearly its wholly corrupt character. There are at least nine specific process issues worth noting, each of which highlights a slightly different dimension of the corrupted character of the EPA's ETS science.


First, EPA science issues from a health promotion perspective that finds its conceptual home in the Lalonde Doctrine, as propounded by the former Canadian Minister of National Health and Welfare, Marc Lalonde. Lalonde argued that health messages must be vigorously promoted even if the scientific evidence was incomplete, ambiguous, and divided. Health messages must be 'loud, clear, and unequivocal' even if the evidence did not support such clarity and definition. What we have in the EPA is simply the Lalonde Doctrine as an institutionalised process. Clearly, the substance of the ETS data does not support its 'Group A' carcinogen status, nor does it support public and workplace smoking bans on the grounds that ETS threatens the health of non-smokers. But the substance of the ETS data is to be ignored, because the Lalonde Doctrine places the process of using such substance ahead of the substance itself; indeed, it requires that the substance be portrayed as something that it is not in order to further the health agenda. What this inevitably does is to build into the heart of the scientific enterprise an institutionalised motivation and justification for allowing ends extrinsic to science to determine the findings of science, for allowing science to be subject to an agenda not its own, for allowing science to tell lies with a clear conscience. Once one has come to see science as something which of necessity happens within the context of health promotion, the process corruptions of the EPA follow quite 'naturally'.

 

This explains why, at one level, those involved with the EPA decision on ETS are quite frank about their process. For instance, an EPA official responsible for the revised ETS risk assessment was quoted as admitting that 'she and her colleagues engaged in some fancy statistical footwork' to come up with an 'indictment' of ETS. 29 (The footwork to which she referred is the novel 90% confidence interval.) Or to take another process example, the Science Advisory Board which reviewed the initial draft risk assessment on ETS, and found the case against ETS based on its association with lung cancer to be unconvincing, actually urged the EPA staff to attempt to 'make the case' against ETS on the basis of similarities between ETS and mainstream smoke. 30 To be fair, the consequences of the Lalonde Doctrine are not confined to the EPA's anti-smoking agenda. For instance, an article in the Journal of the American Medical Association in July 1989 reported a study that claimed to show a link between ETS exposure and increased risk of cervical cancer. In response to critics who noted that such a link was biologically implausible and that the study had ignored confounding factors, the authors replied that the study was justified simply on the grounds that it might reinforce the dangers-of-smoking message: 'While we do not know of a biologic mechanism for either active...smoking or ETS to be related to cervical cancer, we do know that cigarette smoking is harmful to health. The message to the public, as a result of this study, is one that reinforces the message that smoking is detrimental to health.' 31 It would be difficult to find a more succinct example of the Lalonde Doctrine at work. There is no compelling evidence to support their claim, the authors all but admit, but it is important, in the interests of health promotion, that the public be made to think that there is scientific evidence of harm.

 

But second, while those involved in the EPA process are at one level open about the process, at another level they are profoundly dissembling. For instance, the EPA fails to mention that the 'Group A' carcinogen status for ETS was arrived at using a process that violates its own Guidelines for Carcinogenic Risk Assessment. Instead of accepting that this meant ETS could not legitimately be classified as a 'Group A' carcinogen, the EPA suggested that the Guidelines for Carcinogenic Risk Assessment be changed. Given that the 'right' conclusion must be reached and the data do not support that conclusion, one must manipulate the data and revise the guidelines governing the process and the conclusion.

 

Third, the ETS risk-assessment process has been corrupted from the outset by the fact that it has repeatedly violated the standards of objectivity required by legitimate science through utilising individuals with anti-smoking biases. One member of the group working on the ETS issue at the EPA is an active member of US anti-smoking organisations, while the Science Advisory Board that examined the EPA's ETS work included not only a leading anti-smoking activist but several others strongly opposed to tobacco use. Finally, the EPA contracted some of the work on certain documents related to the ETS risk assessment to one of the founders of a leading anti-smoking group.

 

Fourth, the EPA changed the accepted scientific standard with respect to confidence intervals, without offering any compelling justification, in order to make its substantive findings statistically significant.

 

Fifth, the EPA's Workplace Policy Guide which, as a policy document, would, in the course of normal scientific process, be developed only after the scientific evidence was in, was actually written before the scientific risk assessment was even completed, let alone reviewed and finalised. 32 Quite obviously, science was to be made to fit with policy, rather than policy with science.

 

Sixth, the EPA fails to note that had the two most recent US ETS studies been included along with the eleven others, the result would have been a risk assessment that was not statistically significant, even using the novel 90% confidence interval. With its entire 'conclusion' at risk, there were exceedingly compelling process reasons for the EPA to exclude these two studies from its analysis.

 

... continued ...



