Saturday, November 27, 2010

Health Reform in 19th Century America

Ronald Numbers, Prophetess of Health: Ellen G. White and the Origins of Seventh-day Adventist Health Reform, Knoxville: University of Tennessee Press, 1992

Ellen G. White is one of four nineteenth-century founders of major American religious sects (the others being Joseph Smith of the Mormons, Mary Baker Eddy of Christian Science, and Charles Taze Russell of the Jehovah’s Witnesses), yet she is not widely known outside her church.  When she died in 1915 she left behind a legacy that consisted not only of the Seventh-day Adventist Church but also of sanitariums and hospitals located throughout the world.  She inspired an educational system that is still highly regarded, traveled widely, lectured, and wrote dozens of books.  She was born Ellen Gould Harmon, along with her twin sister Elizabeth, on November 26, 1827.

Her influence sprang from the visions she began experiencing in 1844, when she was seventeen.  These trances lasted anywhere from a few minutes to several hours, and during them she received messages about events future and past, heavenly and earthly.  Her followers (with her encouragement) accepted these visions as genuine revelations from God and regarded her as a true prophetess on a par with the prophets of the Bible.

On June 5, 1863, in Otsego, Michigan, she received her vision regarding health, in which God revealed to her the hygienic laws that should be followed by Seventh-day Adventists.  They were to give up eating meat and other stimulating food, neither drink alcohol nor use tobacco, and avoid medical drugs.  When they were sick they were supposed to rely on the remedies of Nature, including fresh air, sunshine, rest, proper diet, exercise and water.  Women were to cease wearing the fashionable clothing of the time (including hoop skirts and corsets) and wear “short” skirts and pantaloons.  Followers were also supposed to curb their “animal passions” (masturbation was an especial evil leading to deformity of mind and body, not to mention spirit).

Health reform was not new.  In the early nineteenth century, America was not a healthy or hygienic place.  Americans ate too much meat and not enough vegetables and fruits.  Their food was heavy with grease and fats, and they drank too much Brazilian coffee.  Public sanitation was horribly inadequate, and personal hygiene wasn’t much better.  Most Americans seldom, if ever, bathed.


In the 1830s, Sylvester Graham launched a full-blown health crusade.  In the summer of 1830 the Pennsylvania Society for Discouraging the Use of Ardent Spirits invited him to lecture under its auspices.  He accepted and was soon giving lectures featuring his scientific and moral arguments against the consumption of alcohol.  The Reverend William Metcalfe was also preaching in Philadelphia at this time.  The author of the first American tract on vegetarianism, he had brought his English congregation over in 1817 and established the vegetarian Bible Christian Church.  Graham added vegetarianism to his lectures on temperance.  In 1831 he broke away from the Society and began lecturing at the Franklin Institute on a broad range of topics, including proper diet and the control of the passions.  The 1832 cholera epidemic thrust Graham and his health reforms into the spotlight.

Another reformer, important partly because he was associated with the Millerites (as was Ellen White) and partly because her reforms mirror many of his, was Larkin B. Coles.  His claim to health reform fame lay in two books: Philosophy of Health: Natural Principles of Health and Cure and The Beauties and Deformities of Tobacco-Using.  His moralistic view of health reform was not unique among health reformers, but both Coles and White saw obedience to these laws of health mainly as a requirement for entry into heaven rather than as a means for living a more enjoyable and healthy life on earth.

Saturday, November 13, 2010

Science as a Social Construct

Douglas, Mary.  “Environments at Risk,” in Science in Context, Barry Barnes & David Edge, eds., Cambridge, MA: MIT Press, 1982, pp. 260-75

Science in Context is a collection of essays focusing on the sociology of science.  The purpose of the collection, as stated in the General Introduction, is to “provide a tolerable indication of what is going on in the sociology of science, and, more importantly, of what kind of social activity science is, and what its significance is.”  The primary focus of the collection is on the relationship between the sub-culture of science and the wider culture that surrounds it, especially as it relates to science as a source of knowledge and competence and as a cognitive authority for evaluating knowledge claims.

Central to the ideas of sociology of science are the writings of Thomas Kuhn, especially his book The Structure of Scientific Revolutions.  From Kuhn, sociologists of science have concluded that science is a social construct, and that even statements of scientific fact have a conventional character.  Because it is constructed and not intrinsic to the natural world, they conclude that it cannot be self-sustaining, and if it cannot be self-sustaining in the sub-culture of science, then neither can it be self-sustaining in mainstream culture.  There is nothing in science that implicitly reveals its correctness, and so its standing in society depends upon the degree of trust and authority that society invests in scientists and scientific institutions.

In her essay, Mary Douglas examines the issue of credibility in the context of the ecology movement.  She is concerned with how beliefs arise and how they gain support.  The approach she takes is that of the anthropologist from Mars, a hypothetical being that is agnostic when it comes to beliefs about the Earth’s environment.  In her view this suspension of belief is what allows us to confront the fundamental question of credibility.  She asserts that civilizations throughout history have viewed their environments as being at risk.  The risks they identified were generally not the same, but she claims that all civilizations pin responsibility for the crisis in the same way: the environment is put at risk by human folly, hate and greed.

In the present, however, we have an added factor: self-knowledge.  Because we can compare our beliefs with those of others we lose the filtering mechanism that those earlier civilizations possessed.  We no longer have anything to restrict our perception of the sources of knowledge.  Credibility is easier in a limited belief system, but how do you determine credibility when opposing sides of an issue both make sense?  This is the question confronting environmentalists in our age.

Through various anthropological examples she endeavors to show that the credibility of a belief regarding how the environment will react to human action depends upon the moral commitment of the community to a particular set of institutions.  For example, bison do not like fratricide (murder within the tribe), so such an act endangers the well-being of the tribe and as a result has special sanctions.  So long as the institutions in question maintain the loyalty of the community, nothing can overthrow the beliefs that support those institutions.  If those institutions lose the support of the community, she claims that the beliefs are easily changed.  A particular view of the universe and the society holding that view are thus interdependent.  They form a single system and neither can exist without the other.  Any given environment that we know thus exists as a structure of meaningful distinctions.

In this credibility debate the role of laymen and social scientists is to examine the sources of our own bias.  Because we lack the moral consensus that gives credibility to ecological warnings, we do not listen to the scientists.  Similarly, because we lack a discriminating principle, we are easily overwhelmed by our pollution fears.  This discriminating principle comes from social structures, and it allows a culture to select which dangers it will fear and to set up a belief system that will address those dangers.  Without that structure we are prey to every dread, and right and wrong cease to exist.  This is the price of full self-consciousness, but it is a price that she feels we must pay.  When we do, the classifications of social life will be gone and we will recognize that every environment is simply a mask and support structure for a certain kind of society.  Understanding both the nature and value of that society is as important as understanding the sources and nature of the pollution that puts our environment at risk.

Mary Douglas deliberately picks an area of science where our understanding is incomplete and in which the debate over competing theories has become politically charged.  Consensus is not the final arbiter of a scientific theory or hypothesis.  Unfortunately, in the case of the environment, politicians and advocates have created a situation where that is the level at which the discussion of the various theories and hypotheses takes place.

Monday, November 1, 2010

Michael Kater, "Doctors under Hitler", Chapel Hill: University of North Carolina Press, 1989.

This monograph is a sociohistorical study of the medical profession under the Third Reich and rests on the author’s previous work analyzing doctors and medicine from Wilhelm II to Hitler.  It draws upon documents in the Federal Archive in Koblenz and the Berlin Document Center.  Primary material was also drawn from the student archive in Würzburg and other regional West German archives.  He also drew on the papers of the former panel physicians’ association, the KVD, as well as the leading professional journals and the memoirs of physicians who lived beyond 1945.

At the dawn of the Third Reich, in 1933, there was a surplus of physicians, inherited from the republican era.  These doctors were at first hopeful that the new regime would address issues left over from the health administration of the Weimar Republic, but their hopes were not fulfilled.  Under the republic, medical graduates had to spend three years as assistants in a hospital, where they were poorly paid and forbidden to seek other sources of income.  Establishing themselves as independent practitioners was almost impossible for doctors straight out of medical school.  One of the demands lodged by spokesmen for this group was that medical institutions stop advertising junior positions for bachelors only.  They also emphasized that, after public school teachers, high school teachers, and jurists, they represented the fourth largest group of academically trained professionals born after 1900.

But under the Third Reich, the medical profession became a microcosm of the larger Nazi sociopolitical system, governed by the Nazi leadership principle and redefined in National Socialist terms.  Physicians now had to present every private contractual arrangement to the Reich Physicians’ Chamber for approval, register with the Nazi medical agencies, and keep them informed of any changes in their family status or medical qualifications.  They also had to report on their patients.  All serious cases of alcoholism, of ‘incurable’ hereditary or congenital illness (such as imbecility), and of highly contagious diseases such as venereal disorders were recorded and reported to the appropriate authority.

The doctors themselves were required to undergo continued training.  Partly this was to break down the distinction between general practitioners and medical specialists, but it was also to teach them National Socialist concepts of health and medicine.  The unpopularity of these courses was perhaps offset by another change the Nazi legislators made to their profession: its redefinition.  By stating that the medical occupation was not a business, the Reich Physicians’ Chamber was able to exclude anyone who was not properly schooled or licensed.

This did not do away with medical quacks, however, for the Nazi conception of medicine favored the lay element over ‘school’ medicine.  Instead the regime created a new class titled “physician of natural healing,” open to anyone who could demonstrate the requisite ability.  Anyone in this group with extraordinary talent could enter a medical facility without the usual professional medical qualification, and could even receive a license as a doctor medici.  The Nazis further required that regular doctors assist registered lay healers at the latter’s request.

Under the Third Reich medicine became the preeminent academic discipline, with medical teachers making up approximately 30 percent of all university faculty by 1935.  Medical faculty also became dominant in university power politics: between 1933 and 1945 the share of rectors drawn from the medical faculty increased from 36 to 59 percent.  Along with this increase in power and significance came a new discipline that entered the medical curriculum after 1933, Rassenkunde or Rassenhygiene, race hygiene or eugenics.  This ‘science’ consisted of three parts: anthropological, sociological, and medical, and its goal was to improve the superior race while eliminating the inferior ones.

Kater thus links the professionalization of medicine in the Third Reich with its corruption.  West German doctors saw these events as a struggle of the forces of freedom and democracy against the totalitarianism of the Nazi regime, a battle which the latter eventually won.  East German doctors, on the other hand, saw these events as the result of a premeditated conspiracy between fascist-minded German doctors and Nazi political leaders.  Kater feels that the truth is somewhere in between, but that it lies closer to the East German perspective than to the West German one.

Sunday, October 10, 2010

Treating the Disease vs Treating the Patient

While I was pursuing my History of Science studies at Notre Dame I took a seminar course on Medicine and Society.  My last two posts are from that class.  I came to hate that class and it was a large factor in my decision to drop out of the program, but I did learn some important lessons during it.  The crux of the message that the professor was trying to get across to us was the way that the medical profession dehumanizes the patient and ends up treating the disease, and not the human being.  If you want to see this message in a very disturbing but highly distilled form just watch the film "Wit" with Emma Thompson.

This lesson was reinforced for me this past week when I had to rush home to Ohio because my father was in the hospital.  He went in for something relatively minor but ended up spending a week being treated for another condition, one that was due, at least in part, to actions taken by the hospital staff in their treatment of his original issue.  I am not saying that the staff was malicious, but they were aggressive and interventionist, so that rather than considering that the change in his condition might be due to the drugs they had given him, they kept chasing symptoms.  It quickly became apparent that the treatment was reactive: x happened, so they did y, without ever really trying to understand the whole picture, the patient.  In the end my father spent a week in the hospital and underwent a procedure that was probably not really necessary.


It is hard to challenge the medical profession when you are a patient; they are so authoritative, and when there is something wrong you get swept up into their treatment course and it takes over your life.  I saw this myself when I was undergoing treatment for breast cancer.  I tried hard to be an informed patient and question the treatment, but there was one week in which I had a CAT scan, a PET scan and two biopsies.  Everything checked out as fine, but that week was quite an ordeal, both physically and emotionally.  My oncologist's conclusion after all of that was that if you do tests and scans you will always find something that is odd, and if you let yourself, you will chase these oddities for quite some time before concluding that while odd, they are not dangerous or unhealthy.  My oncologist now uses me as a poster child for not doing more than is necessary.  He still feels bad about putting me through that ordeal.

There is a lot of debate going on right now about how to fix the health care system.  Well, one of the things they should do is treat the patient, not the disease.  One of the hardest things about being a doctor is the process of diagnosis (this is actually a place where expert systems could be useful) and rather than being thoughtful or logical about ordering tests they just order a whole suite of them.  It is as if they are throwing a whole bunch of darts at a dart board in the dark, hoping that one of them hits the target.  That is simply not a rational or cost effective approach to treatment.  It isn't good for society and it isn't good for the patient.

Demographics

Simon Szreter, Fertility, Class and Gender in Britain, 1860-1940, Cambridge University Press, 1996

In the early part of the 20th century there was a growing awareness of a declining birthrate in the industrialized nations.  In Austria-Hungary and France the birth rate in some rural areas had begun to decline substantially during the 18th century, with similar declines taking place among aristocratic and bourgeois groups as early as the 17th century.  In 1945 a theory of demographic transition was published.  It proposed three stages of demographic development: an initial pre-industrial stage of high birth rates and high death rates, an industrial phase of high birth rates and declining death rates (leading to substantial population growth), and a post-industrial phase of low birth rates and low death rates.

This theory was based upon a single case, that of Britain.  It utilized the findings of the 1911 census, which analyzed the fertility patterns of the British population from 1851 to 1911, and the newly released study conducted for the Royal Commission on Population, which covered the period 1901-1946.  The 1911 census used what has become known as the professional model of social classification, in which all male occupations are assigned to one of five grades (I professional upper and middle class, II intermediate, III skilled workers, IV intermediate, V unskilled workers).  The 1911 census analysis found that the higher the social class, the earlier and more rigorously it controlled its fertility.

This classification scheme was based upon three assumptions: 1) that the occupation of the male head of household was the best way to classify families; 2) that a primary division existed between the higher-status non-manual occupations (ranked by their degree of professionalism) and the lower-status manual occupations (ranked according to skill); and 3) that a single hierarchical social grading system was a valid classification scheme.  It should be noted that this scheme excludes women and their labor, both paid and unpaid.  It should also be noted that those living off private means, and thus listing no personal occupation, were classified alongside paupers in a residual category labeled the unproductive class.

In 1869 Francis Galton published Hereditary Genius, in which he examined the families of ‘eminent men’ in England in an effort to determine the heritability of both mental and physical qualities.  He went on to coin the term eugenics in 1883.  By the end of the 19th century there was widespread concern that modern society was reversing evolution, leading to the degeneration of the English people.  This was partly driven by an increase in the recorded rate of lunacy from 2.26 per 10,000 in 1807 to 29.26 per 10,000 in 1890 (Mathew Thomson, The Problem of Mental Deficiency: Eugenics, Democracy and Social Policy in Britain c. 1870-1959).  By the first decade of the 20th century mental defectives had become defined as the central eugenic threat facing the nation.  Greater social awareness plus universal education led to a growing realization of the presence of mentally deficient people in the population.  This heightened awareness coincided with growing fears about the fitness of the population.  In 1907 the Eugenics Education Society was formed.

During the period 1875-1883, the Anthropometric Committee of the British Association for the Advancement of Science provided a hereditary basis for the professional model.  The professional model thus acquired the status of an empirically tested theory.  Despite the fact that it was based upon unexamined social conventions, it had been turned into a naturalistic theory of British society’s essential structure.

At the beginning of the 20th century an environmentalist counter-movement emerged, opposing the eugenicists' idea that the poor were poor because of their inherited nature rather than because of social or environmental factors.  At the forefront of this movement were the Fabians who, although they shared a nationalistic interpretation of social Darwinism with the hereditarian biometricians, did not agree with them as to the causes of the problem or the appropriate political means to achieve the optimal nation.  They held that poverty was not the manifestation of inherited biological deficiencies, but rather that the environment was responsible for the moral and material degradation of the working man.

Sunday, September 26, 2010

The Anatomy Act

Death, Dissection and the Destitute
by Ruth Richardson, Penguin Books, 1988, 426 pp., appendices, notes, bibliography, index.


In 1518, the College of Physicians was founded to improve the state of medical knowledge in England, but improvements were hampered by one very simple fact: the lack of human bodies for dissection.  In 1540, the companies of Barbers and Surgeons were united by Royal Charter and Henry VIII granted them the rights to the bodies of four hanged felons per year.  Charles II increased that number to six.  But these dissections were ostensibly public affairs and were part of the sentence inflicted upon the criminals.  Thus, from the start, dissection was seen in the public eye as a punishment for criminals and as a defilement of the corpse, not as a means of gaining medical knowledge.

This shortfall in supply was made up by one very simple solution: robbing graves.  This was done either by disinterring freshly buried corpses or by waylaying bodies before they were buried.  Workhouses, charity hospitals and asylums were favorite sources, as their occupants were poor or indigent and often had no relatives to claim their bodies.  The supplying of anatomists and surgeons with bodies sometimes involved the collusion of grave diggers, sextons, administrators at the facilities mentioned, undertakers and even clergy.

The men who plied this trade were called resurrectionists.  Grave robbing was not a crime, per se, since the body was not considered property.  While a man could be hanged for poaching, he would not be hanged for stealing a dead body, unless he also stole the personal effects of the corpse.  It was a lucrative business, and it is perhaps not entirely surprising that at some point someone would see the advantage of using the anatomists as a means of disposing of murder victims.  The most celebrated case was that of Burke and Hare in Edinburgh.

Mrs. Hare was the owner of a cheap lodging house in which an elderly man died while still owing her money.  To pay off this debt, Burke and Hare sold his body to an anatomist for £7.10s.  When another lodger fell very ill, Burke and Hare eased him on his way and sold his body for £10.  In all they killed 16 people before they were discovered, and introduced a new verb into the English vocabulary: to burke.  Burke was hanged and dissected on 28 January 1829; Hare turned King’s evidence and was spared; and Knox, the anatomist to whom they sold the bodies, was never charged.

The first Anatomy Bill (a Bill for preventing the Unlawful Disinterment of Human Bodies, and for Regulating Schools of Anatomy) was submitted to Parliament by Henry Warburton on 12 March 1829.  It did not pass, partly because of its length, its use of the word dissection, and the fact that it obviously singled out the poor as the primary source of bodies.  In 1831 Bishop and Williams, the London Burkers, were discovered.  They had been supplying bodies to the schools for some time when they decided to help matters along.  They confessed to killing three people before their trial, although on the eve of their execution on 5 December 1831, Williams supposedly confessed that the number was closer to sixty.

Warburton introduced his second Anatomy Bill ten days after their execution.  This one was called simply A Bill for Regulating Schools of Anatomy, and the word dissection had been replaced with the phrase anatomical examination.  It was shorter than his previous bill, and though it still targeted the poor, it did not do so directly.  It merely said that unless you, your executor or another lawful party expressly forbade it, your body was liable to undergo anatomical examination.  It was eventually passed, but it did little to increase the supply of legitimate bodies.  For the most part it simply cut out the resurrectionist as middleman.

In this book, Ruth Richardson has given us a detailed social and political history of the events leading up to and surrounding the Anatomy Act, using numerous primary sources including government documents, official reports, pamphlets and newspapers.  She links the Act with a general change in attitudes towards the poor, culminating in the New Poor Law, which stigmatized poverty by connecting the deaths of the poor with a fate that had previously been reserved for criminals.  She claims that it also led to a fear among the poor of the pauper’s funeral, helping to spur the growth of burial clubs and friendly societies.  In addition, the role of workhouses as suppliers of bodies to anatomists led to a general mistrust of these institutions.  Other factors mentioned are the corruption and nepotism of the Royal College of Surgeons, the establishment of the Lancet by Thomas Wakley as a means of promulgating medical knowledge and as a vehicle for medical reform, and the role of the Benthamites in the passing of the Anatomy Act itself.

Saturday, September 18, 2010

David Noble - America by Design

A neo-Mumfordian, Noble enlists science in the conspiracy of Big Money to take over the world and turn us all into parts of its machine, whose sole motives are profit and power.

C. Hamlin & P. Shepard - Deep Disagreement in U.S. Agriculture

As we watch policy debates, and technology debates, and all the debates on environmental issues, the question arises: how can we ever hope to resolve all of these differences?  How can we ever find a solution that will satisfy all the parties concerned?  This book presents a method for doing just that.

By creating a neutral ground, and translating between the various interest groups in a disagreement, academics can enable a rational dialogue between the parties concerned that might actually lead to mutual understanding and maybe even a solution.

Sunday, September 12, 2010

Bruno Latour, Aramis

When a technology fails, how do we explain what happened?  How do we understand what happened?  In Aramis Latour uncovers the multiple narratives that underlie failure, and perhaps, by implication, success.

Among the themes that he addresses is the sexuality of technology.  Latour also wants to refute the idea that the theory of evolution can be applied to technological progress, an idea which assumes that later technology is an improvement over earlier technology and that it better meets or serves the needs of “the environment” (i.e., humanity).

He also advocates heterogeneous engineering in which major social questions concerning the spirit of the age or the century and “properly” technological questions are blended into a single discourse.  This leads to the notion of translation, in which a global problem is transformed into a local problem through a chain of intermediaries that are not “logical” in the formal sense.

In addition, in order for a project to succeed, engineers have to stimulate interest and convince the public; they must market innovation and technology.  All of which leads to the question: is technological reality rational?  Consumers, like technology, are invented, displaced, and translated through chains of interest.

He recommends two kinds of charts to help understand technology: sociograms, which chart human interests and translations; and technograms, which chart nonhuman interests and translations.  Both people and technology (human and nonhuman actors) are alike in that just as you have to compromise when dealing with a number of people, so you have to compromise when integrating any new technology.

But one of the problems of an innovative project is that the number of actors that need to be taken into account is not known from the beginning.  If you don’t have enough actors, the project loses reality; if you have too many, the project becomes over-complicated and will probably fail.

Thursday, September 9, 2010

Jerry Mander, In the Absence of the Sacred (1991)

Jerry Mander is asking “what happened to the future?” and challenging the idea that advances in technology equate to progress and that this is a good thing.

He defines a minimally successful society as one that:
    1) keeps its population healthy, peaceful and contented;
    2) has sufficient food, shelter, and a sense of participation in a shared community experience;
    3) permits and encourages access to the collective wisdom and knowledge of the society and whose members have a spiritually and emotionally satisfying existence.

Mander wants to encourage awareness, care and respect for the earth’s life support system.

But while technology has given us an improved standard of living, with greater speed, greater choice, greater leisure, greater luxury (bigger, better, faster, more), we haven’t eliminated poverty or crime and we don’t even have universal education.  So, while our society may be a material success, it doesn’t work.  And, even worse, the technological advances that have made this all possible have led to environmental degradation, but no one (except Mander?) seems to be questioning the price of technology.

In response, Mander wants to challenge what he calls the Pro-Technology Paradigm that is characterized by:
    1) dominance of best-case scenarios
    2) the pervasiveness and invisibility of technology
    3) the limitations of the personal view - we don’t see the wider effects of our tools, only how they help us
    4) the inherent appeal of the machine - its flash and promise
    5) the assumption that technology is neutral and the idea of a scientific priesthood - nuclear power leads to autocratic systems, while solar energy leads to democratic systems (centralized power vs. distributed power)

Tuesday, August 31, 2010

Philip Scranton, “Determinism and Indeterminacy in Science & Technology”

Does Technology Drive History: the Dilemma of Technological Determinism (1994)

How do we write the history of science and technology?  If we set aside technological determinism, then we must also abandon the idea that changes and shifts in technology govern the restructuring of social formations and organizations or of cultural practices.  So how do we capture the dynamics of the interactions of science and society without resorting to new reductionisms that substitute a new universal for an old one?  Neither can we assume that technical change represents a unified process.

Deterministic approaches to the history of technology have meant that the situational links between technical changes and social and political relations have often been left unspecified and under-investigated, because technological determinism insulates technological change from extra-technical initiatives.  But once we move past linear and reductionist accounts of technological change we can begin to fill in some of the gaps and silences in the history of technology and science: gaps such as non-Western concepts of technology and technical practice, technical-environmental relations, technologies of sexuality and family limitation, or technologies for the management of the incarcerated or the dead.

It is Scranton’s belief “that technological change proceeds in the absence of overarching rationalities; that it proceeds along multiple coexistent trajectories; that links between technical change and sociopolitical relations are intimate and underspecified; and that stepping beyond reductionist teleologies reveals an array of intriguing silences in the history of technology.” (p. 163)

Sunday, July 4, 2010

Daniel Boorstin - The Americans: The Democratic Experience

The meta-narrative: that the century following the Civil War was an Age of Revolution in which the meaning of community, of time and space, of present and future was being continually revised.  It was a century in which a new democratic world was being invented and discovered by Americans.

This is the story of how we got to where we are.  The creation of the everywhere community.  Mass production, chain stores, suburbia, everywhere you go, there you are.  The same commodities, the same merchandisers.  The same television shows, the same radio shows.  The homogenization, the democratization of American society.  For Boorstin it is a good thing.  Remember those Popular Science reels that showed us what the future was going to be like?  The glorification of the gadget, gee-whiz science.

But by the end even Boorstin seems a little out of breath.  The pace of scientific and technical progress has become so fast, is it out of our control?  Has it become its own power?

Sunday, June 27, 2010

Edelstein, Michael - Contaminated Communities - 1988

The aim of this book is to identify the major social and psychological impacts that stem from residential toxic exposure and to examine their significance.  Edelstein bases his analysis on four postulates: 1) that the social and psychological impacts of toxic exposure involve complex interactions among the different levels of society as well as differing across time and with environmental context; 2) that these impacts affect how the victim behaves and how they understand their lives both in the short and long term; 3) that toxic exposure incidents are traumatic and invoke coping responses in their victims; and 4) that contamination is inherently stigmatizing and the very possibility of such contamination arouses fear in the public.

Toxic exposure undermines the very fabric of society.  It leads to a loss of trust, the inversion of the home (formerly seen as a safe haven, now hopelessly poisoned), a sense of a loss of control over one’s personal life and over the present and the future, a different relationship to and assessment of the environment (now seen as dangerous, and insidious in its dangers) and a pessimistic attitude towards one’s expectations about health.  It places the adults in contaminated families under a great deal of stress as they become isolated and stigmatized by their contamination, and it teaches children to fear.

Toxic victims also become absorbed by government agencies and bureaucracies that threaten the victims’ social identity.  This is compounded by the fact that the government’s aims and values may not be the same as the victims’, especially with regard to acceptable risk, which has more to do with economic and political forces.  The regulators, on the other hand, have their own restrictions that they operate under: they are bound by regulations, political realities and limited resources.

Toxic contamination in other communities leads to anticipatory fear in communities as yet untouched, resulting in the “not-in-my-backyard” (NIMBY) response, which serves to articulate citizens’ frustration over the manner by which projects are sited.  It arises, in part, from the failure of the regulators to take the psychosocial impact of these facilities seriously.  The citizens, seeing no room for compromise in the response of the regulators, regard the situation as an all-or-nothing, win-or-lose, battle.

One of the lessons that Edelstein draws from his study is the engineering fallacy, which involves the assumption that problems can be solved in isolation, away from the complicating factors and uncertainties of the real world.  If we narrow a problem enough, it will be controllable and solvable.  He points to Bateson’s 1972 book, Steps to an Ecology of Mind, as a source for an alternative to the method of traditional science and engineering.  In this approach, any learning is done within a context.  Called metalearning, it seeks to recognize the context of a problem rather than to deduce and isolate it.

Sunday, May 23, 2010

Wynne, Brian, “Misunderstood misunderstandings” (pp. 19-46)

Misunderstanding Science?  The Public Reconstruction of Science & Technology (1996)

A case study of the hill sheep-farmers of the Lake District of northern England, who were affected not only by Chernobyl but also by the nuclear reactors at Sellafield (formerly Windscale).  Wynne examines the interplay between social and cultural identities, especially those of the sheep-farmers, as they see themselves threatened by the scientists’ interventions.  What is revealed is arrogance on the part of the scientists, who discount the local knowledge of the sheep-farmers even when that knowledge is essential to understanding the scientific issue at hand, namely radioactive contamination.

The relationship between the scientists and the sheep-farmers was further undermined by the lack of full disclosure on the part of the scientists and by their changing assertions.  At first they said there would be no effects from Chernobyl, but six weeks later (20 June 1986) the Minister of Agriculture announced a ban on sheep sales and movement in several of the affected areas.  Once the scientists had admitted the contamination, they insisted that the initially high cesium levels would soon fall, but their predictions were based upon a false scientific model: a model derived from empirical data on alkaline clay soils, not the acid peaty soil found in these upland areas.

The degree of certainty that the scientists expressed in their statements denied the farmers' ability to cope with ignorance and lack of control, and the degree of standardization in the scientists' knowledge denied the variation in conditions, which differed from their models across the region and even from farm to farm.  We thus see scientists inappropriately applying their specialized knowledge while failing to acknowledge the specialized knowledge of the sheep-farmers; where the farmers were willing to work with the scientists, the scientists did not seem to be willing to work with the farmers.

The distrust that the farmers soon came to have for the scientists, and then for the government that was employing them, also caused the farmers to question the government’s assertions about Sellafield.  The scientists asserted that contamination from Chernobyl could be distinguished from contamination due to Sellafield, thus making Chernobyl a convenient cover or scapegoat for previous misdeeds.  As the distrust grows, you move from considering the scientists merely arrogant to thinking that maybe there has been some kind of coverup or conspiracy, all of which only serves to further undermine the public’s trust and understanding of science.

Wynne sees the conflict as one between social identities: both groups, the scientists and the sheep-farmers, have their identities threatened by the other.  These sorts of conflicts bring to light the whole issue of knowledge systems and the problems that arise when formal knowledge systems interact with informal ones.  The formal knowledge systems often don’t know how to acknowledge or understand the informal ones, because the former have a hard time quantifying the latter.  The problem may simply be one of communication: these two types of knowledge systems may not speak the same language, or, even worse, they may speak the same language but mean subtly different things.

Sunday, May 16, 2010

B. Campbell - “Uncertainty as Symbolic Action in Disputes among Experts” - Social Studies of Science 15 (1985): 429-53

Campbell claims that, because the content of scientific knowledge is a social construction, uncertainty does not in itself cause controversy; rather, uncertainty is something that is negotiated, discussed and argued about.  He further argues that the adequacy of empirical evidence thus becomes a prop in the social negotiations that occur over the credibility of expert statements made in public arenas, where the authority of a scientist as an expert is connected to the image of the relationship between scientific understanding and empirical evidence.
    He is attempting to establish five points.
    1) uncertainty is a strategic element of argument as opposed to something that causes argument;
    2) adequacy of evidence and knowledge is relative and varies with the social situation of experts;
    3) the social structuring of expert arguments does not mean that the scientists’ arguments have been ‘distorted’ by the social circumstances of their expertise;
    4) uncertainty arguments don’t necessarily undermine the credibility of scientific expert knowledge;
    5) the approach that he takes emphasizes the political dynamics of expertise and the complex relationships between scientific and policy issues.

Sunday, May 2, 2010

E. Chargaff - Heraclitean Fire (1978)

An idealist, a romantic, a classical man in a modern world.  He never found his place, never found himself. [Oscar Levant: It’s not what we are that hurts, it’s what we aren’t.]  In the 20th century, especially in the latter half, the pace of scientific advance became inhuman, and science became inhuman, accumulating facts but not understanding.  Increasingly fragmented, increasingly specialized, a Red Queen’s race.  There is no time to understand the ramifications or the importance of a discovery, because a new one is just around the corner.  Even the sciences are forgetting their history.  The citations and references of research papers rarely go further back than a decade.  Everything is compressed, eternally in the present.  Without a past, how can there be a future?  Scientists need to ask why they are doing what they are doing, to what ends they are working, and how their knowledge will be used.

Echoing Mitroff, Chargaff is a humanist scientist making a plea for the development of Dionysian science.

Sunday, April 25, 2010

Ian S. Mitroff - The Subjective Side of Science (1974)

The author, a practitioner of applied social science, believes that disagreement (rather than consensus) is the essence of scientific inquiry.  He further claims that the causes of disagreements between scientists are social and psychological in kind, and that truth is as much psychological as it is logical.  This view is in contrast to the Descartes-Locke view that all humans are endowed with the common capabilities of reasoning and observing, and that if we strip away (purify) the personal and social contamination, then scientific inquiry will necessarily yield results that are acceptable to all who have also achieved this “pure state.”

The aim of the book is to lay a foundation for modeling scientific practice on a critical analysis of the actual behavior of scientists, and to also ask what a science looks like that better understands itself.  The book (and the author) is critical of what he calls “The Storybook Image of Science.”  This myth depicts scientists as emotionally neutral, willing to change their opinions when confronted with reliable evidence, humble about their knowledge and understanding, loyal to the truth above all else, and possessed of an objective attitude and the ability to suspend judgement until the issue under consideration has been investigated.

What Mitroff fails to realize, though, is that myths and archetypes have an important psychological place in our existence.  Would science be better off if the storybook image of science was dispelled?  Or would science, and the world, lose their ideal of what a scientist is supposed to be?  And if that happened, would science suffer as a result?

As a sociologist of science, he is concerned with examining the norms of science, which he defines as: faith in rationality, emotional neutrality, universalism (rational knowledge is the realm of all), individualism (anti-authoritarian), community (knowledge is not private property), disinterestedness (no personal glory), impartiality (concern is with knowledge, not with its consequences), suspension of judgement, absence of bias, group loyalty and freedom of investigation.

And yet he’s willing to accept the norms as ideals.

Mitroff bases the study in this book on the Churchmanian program in the philosophy of science, in which the logic of research and the social psychology of research are viewed not as antagonistic to each other but rather as vital, though partial, components of scientific research as a whole.  He is a supporter of Kuhn, rather than Popper.

After interviewing scientists studying moon rocks, he comes to the following conclusions:  1) there are two sets of opposing norms of science; 2) there are distinct styles of inquiry in science and distinct psychological types of scientists; and 3) the methods that scientists use to test their ideas involve adversarial proceedings that combine formal and informal elements.

Styles of Inquiry:

Leibnizian:  formal-deductive;
Lockean: experiential, inductive, consensual;
Kantian: synthetic, multimodal;
Hegelian or Dialectical: conflictual, synthetic;
Singerian-Churchmanian: synthetic, interdisciplinary, holistic.

Psychological types of scientists (based upon Jung):

Hard experimentalist (sensation);
Abstract theorizer (thinking);
Intuitive synthesizer (intuition);
Humanist scientist (feeling).

Science as a Game possesses the following features: 1) individual players; 2) coalitions, teams; 3) designation of teams as home side and opposing side; 4) special field of play; 5) entrance fees or skills; 6) schedule of games; 7) recruitment and development of talent; 8) rules; 9) umpires; 10) objectives; 11) awards; 12) historians or preservers of the tradition; 13) fans; 14) public sponsors; 15) societal sanction or support.  It is intensely human and personal, and also mostly masculine (conquering nature), and has at least four general aims or ideals: the knowledge ideal (perfect knowledge); the politico-economical ideal (efficient pursuit of knowledge); the ethical-moral ideal (conflict removal between scientists and science and society); the esthetic ideal (enlarge the range and scope of scientific inquiry).  And even though it does have subjective elements, it is not completely subjective, irrational or relativistic.  We have reached Apollonian science that knows how to reach “the starry heavens above,” but we still don’t know how to develop Dionysian science that knows how to reach “the moral law within.”  We may be intellectual giants, but we are still moral dwarfs, and tackling that problem is the task of the future.

Sunday, April 18, 2010

S. Hilgartner - “The Dominant View of Popularization: Conceptual Problems” Social Studies of Science, V. 20, #3, 519-41

Hilgartner describes the dominant view of popularization: scientists develop genuine scientific knowledge, which is afterwards popularized, and this popularized knowledge is seen (by the scientific community) as either an appropriate simplification or a pollution, something distorted or misunderstood.  This view serves scientists by providing a vocabulary of non-science for rhetorical use and by demarcating ‘genuine’ science from ‘popularized’ knowledge.  It asserts that there is a ‘gold standard’ of knowledge, that only scientists know the ‘truth’, and it gives scientists the authority to decide which popularizations are appropriate simplifications and which are distortions.

But there are problems with this view.  Popularized knowledge feeds back into the research process, especially by scientists outside the field, who use it, and whose beliefs about the content and conduct of science are shaped by it.  Simplified explanations of science are used in communicating with students, funding agencies and specialists in adjacent fields.  And scientific knowledge is seen as constructed through the collective statements of the scientific community and popularization is a part of this (social studies of science perspective).

In drawing the line between genuine knowledge and popularization, the scientific community prefers a binary mode.  When scientists communicate with other scientists they are exchanging genuine knowledge; when scientists communicate with the public, it is popularization.  The difference between these two modes is one of content, the nature of the claims and the precision with which they are stated, and also the difference between ‘original’ knowledge and its subsequent spread.  But drawing the line between appropriate simplification and distortion is not that easy, since virtually every ‘downstream’ retelling involves some simplification.  How do we know when simplification becomes oversimplification becomes distortion (the telephone game)?

The dominant view of popularization reinforces the epistemic authority of scientists because ‘genuine’ knowledge is only available to them.  It also provides scientists with a rhetorical tool for representing science and communicating it to non-scientists.  Because scientists often control the simplification, they can shape public opinion by how and what they simplify.  They can even use the notion of distortion via popularization to debunk popular claims and reassert their authority.  But...there is no ‘central’ bank to enforce the ‘gold standard’ of knowledge, and there is no ‘police force’ to detect counterfeit claims.

Sunday, April 11, 2010

Scott L. Montgomery - Science as Kitsch: The Dinosaur and Other Icons, Science as Culture, Volume 2, part 1, No. 10, 7-58, 1991.

The word ‘kitsch’ may come from ‘verkitschen’, which means ‘to turn out cheaply’, but regardless of its origins, it represents a style of mercenary aesthetics.  For Montgomery, it connects aspects of the role that images play in modern society, and the ability to turn thought and feeling into formula, and hence into products for consumption.  The resulting power of this connection between images and the consumer industry helps ingrain and recycle the existing modes of thought and helps stabilize particular institutional structures.

When applied to science, as in science as kitsch, he is referring to pseudo or crank science and to the use of science to legitimate political ends or cultural prejudices.  It also refers to mass-media science (including some popularization), science as myth and as effect, such as tabloid science.  Even the scientist as celebrity falls under the heading of science as kitsch, as does the notion of a national mission for science, or the use of scientists as intellectual deputies of the state.

(So to popularize anything is to vulgarize it?  Popular culture is obsessed with images and consumption, and to popularize science is to sell out, to turn it into a mass-market consumable, the appearance of which has little if anything to do with the reality?)

According to Montgomery, kitsch is society’s favored discourse about itself, it is ‘integrational propaganda’, and acts through textbooks, films, TV, toys, and any other mechanism that helps to condition or otherwise limit one’s curiosity about and critique of existing concepts and social realities.  The result is that kitsch blocks the will for new insight by dominating our images.  We do not move past the old myths and legends and the inaccessibility and mystery of science is maintained.  “It keeps alive concepts and beliefs that are false and closed, that dazzle or distract with an appeal to distant expertise, and that attempt in every case to satisfy without the benefit of substance.” (56)

Monday, April 5, 2010

John Burnham, How Superstition Won and Science Lost, 1989

The skeptics' claims against superstition are that it involves defective reasoning about the natural world or the inexplicable, that certain people encourage superstitious beliefs for their own (usually not altruistic) reasons, and that superstition represents the surrender of the rational point of view.  The skeptics fear that the irrational might destroy civilization.

The major tenet of the religion of science was that scientists offered natural explanations of all natural events, including human beings and their thinking.  In the 1930s, the last days of the undiluted religion of science, the aim of the popularizers was to get readers to see the world as scientists saw it, orderly and unified, and to get people to think scientifically using the scientific method.  Their goal was the evolution of human beings and society into a progressively scientific civilization.

In the late 19th century the man of science was someone who was devoted to the scientific method and who believed that such devotion led to moral superiority.  Men of science upheld a traditional view of culture while at the same time adding to it, and they believed that science stood for objectivity.  In their pursuit of knowledge, they sought to eliminate the self, forswearing subjective emotion and personal advantage.  They sought to emancipate themselves from the misconceptions and prejudices of society and saw it as their duty to popularize science, to spread the religion of science.  (The myth of the man of science.  Scientists as a priesthood.)

At first popularization was the teaching of what scientists knew, but as science became professionalized and specialized, the function of popularization became the translation between scientists and the public.  Burnham identifies four main phases in the popularization of the natural sciences.

The first half of the 19th century was characterized by the dominance of natural theology and an increase in the number of full-time scientists.  In the 1820s popularizing activity grew rapidly through schools, colleges, magazines and lyceums, and by the 1830s it was concentrating on the practical applications of science.  In 1850 and 1852 yearbooks were published chronicling the progress of science and technology.  A connection was made between popularization and modernization, promoting a sense of human power connected to the natural sciences and the idea of progress.  After the Civil War came a period of positivism and scientism, displacing the ante-bellum synthesis of piety and practicality in popular science, and nature began to substitute for God.  The legacy of the early 19th-century popularizers was a set of institutions and a concern for meaning and context, with the goal of conveying science to the public and diminishing superstition.

In the late 19th century these popularizers emphasized facts, progress and practicality, but at the same time science was being more intensely applied to human beings and human affairs.  This was the dawn of the men of science and the beginning of the professionalization of science.  From the 1870s to the end of the century, scientists took the lead in presenting science to the public, and science was part of high culture, something to be aspired to.  To love science was to love Truth.  Reductionism followed, along with the belief that facts serve to explain the unexplainable, to illustrate discovery and to show the practical contribution of scientific knowledge.  In the 19th century the popularization of science was a missionary activity aimed at converting people to the scientific way of life.  (Let’s all become Vulcans, live long and prosper.)

In the first half of the 20th century the religion and ideology of positivistic science peaked, and popularization ceased to be under the control of scientists.  After 1945 the media changed the character of the popularization of science, and it no longer supported the religion of science.  In the 20th century, as science became more specialized, popularization shifted from science to technology (from basic to applied).

Sunday, March 28, 2010

Joseph Ben-David - Scientific Growth - Ch. 9: The Profession of Science and Its Powers

Ben-David looks at scientific research as a “profession” with certain defining characteristics, including: higher education as an entry requirement; a monopoly over the practice of the profession; control over who is admitted to its ranks; and a limitation of practitioners' contractual obligations to their clients.  But the corporate institutions of scientists need to legitimate their activities within the social order, lest they be seen as subversive, and the scientific method, as practiced by scientists, is one of the tools used to prevent such subversion by promoting self-regulation.

Another aspect of professionalization is the creation of autonomous academies with some official standing, which can provide scientifically competent judgements that will be accepted and honored by the general public.  Such institutions include the Royal Society of London, which represented science to the public and rewarded scientific discovery, and the Paris Academy of Sciences, which was more controlling and came to be perceived as a political body regulating science rather than representing it.

Ben-David claims that in pure or basic science only the scientific community has the competence to assess the results of research and to make informed estimates about the scientific potentialities of persons and projects (shades of Polanyi), but that when it comes to the application of science, the scientific community is no more capable than anyone else of judging the practical results of research or guessing the practical potential of people or projects.  Nor does he believe that the scientific community should act as the allocator of funds between different fields, because purely scientific considerations do not provide all the necessary criteria for a rational choice (shades of Weinberg’s trans-science).  Scientists can, however, estimate the upper limit of the funds that can be expended on research without undue risk of waste and the lower limit needed to maintain scientific capacity.

He seems to want a middle ground between science and the public, science and governments, and the republic of science and trans-science.

Sunday, March 21, 2010

Alvin M. Weinberg - “Science and Trans-Science” - Minerva 10 (1972): 209-22

For Weinberg, the relationship between science and society is more complicated than the view that science provides the means and politics provides the ends.  He wants to introduce the notion of trans-science: questions of fact that can be asked of science but cannot be answered by science.  Questions such as the biological effects of low-level radiation, or the probability of extremely improbable events.  Even entire disciplines have trans-scientific aspects: engineering judgment, for example, or simply the elements of scientific uncertainty inherent in any advancing technology.  He also considers the social sciences trans-scientific, since we cannot predict human behavior (no psycho-history à la Asimov); the subject matter is too variable for rationalization.  There is also the axiology of science: questions of scientific value, criteria for scientific choice, the valuation of different styles of science, moral and aesthetic judgments.  All of these fall into the realm of trans-science.

But how do we settle the issues of trans-science?  How do we weigh the benefits and risks of new technology?  There is the political process, which establishes priorities and allocates resources.  There are adversary procedures: formal, legal and quasi-legal proceedings where opposing views are heard before some sort of board empowered to decide the issue.  But in order for these processes to work, scientists must help define the science/trans-science border and inject some intellectual discipline into the republic of trans-science.

Sunday, March 14, 2010

M Polanyi, "The Republic of Science: its Political and Economic Theory," Minerva 1 (1962): 54-73

Polanyi wants to model the scientific community as a republic, and hence as a political body, with activities coordinated by the mutual adjustment of individual initiatives, each taking into account the activities of the others.  Under this model the problems to be investigated are chosen by the scientific community in order to guarantee that its efforts and resources will not be wasted.  The criteria for selection are: plausibility; scientific value (accuracy, systematic importance, intrinsic interest); and originality.  He recognizes that there is a tension between the first criterion (plausibility) and the third (originality).

As a republic, the authority of scientific opinion is mutual and is established between scientists, not over them.  He claims that scientific activities cannot be controlled from a central authority or directed from outside the community in order to serve public interest.  It is an organic growth from existing knowledge to new knowledge and cannot be predicted or shaped.  Any such attempts at direction or shaping will only result in mutilation.  The paradox of the republic of science is that its tradition is one that upholds authority while at the same time cultivating originality.  It is an association of independent initiatives that combine towards an indeterminate achievement.  It is a society of explorers.

No authoritarian technics here, but neither do we have the federal government controlling science.

Sunday, March 7, 2010

D. Kevles, "The Physicists: The History of a Scientific Community in Modern America" (NY, 1978)

Kevles is writing a history of the professionalization of science (and perhaps of the “Greatest Generation” of scientists?).  He traces the professionalization of science back to the late 19th century (post Civil War).  The goal of the emerging class of scientists (the name was coined by William Whewell in 1840, and replaced the term “natural philosopher” during this period) was to exclude amateurs, improve the condition of science in the colleges and universities, and enlarge the role of science in the federal government.  One of the early marriages between science and the government was the Army sponsorship of geographic and geologic surveys, a connection that linked science and western exploration.

 Even at this early stage of things, the scientific community made the distinction between “abstract” and “practical” science.  Abstract science was the study of nature for the sake of understanding its substance, its workings, its laws.  Practical science was the exploitation of nature and of nature’s laws for the sake of material development.  But the public did not understand this distinction, nor did they understand the dependence of technology on scientific progress.

Of course the connections between science and government grow tighter during military conflicts, especially World War II, where it can probably be claimed that science won the war.  (The atomic bomb ended the war; radar won it.)  The story that Kevles tells is one of the growing involvement of science with government.  We see increases in federal spending on science, the creation of the President’s Science Advisory Committee, the National Science Foundation, the Atomic Energy Commission, and the expansion of military research laboratories.  (Is this authoritarian technics taking over democratic government?)  And ultimately we see the politicization of science with the formation of the Union of Concerned Scientists, and scientists speaking out against government policies and research efforts (ABM, for example, or SDI).

So does science serve society, or do society and government serve science?  With the political and social changes that took place in the 1960s and 1970s we see science going from a position of prestige to one of distrust.  Does science get tarred with the same brush as the political institutions?  Or do failures in big science contribute to the distrust of government?  Big science, big government, where do we draw the lines?  Or has government fallen into the technology trap?  Only technology can save us from the problems that technology has created; only technology can provide us with the security that is increasingly hard to find in an increasingly unstable world.  Can we spend our way out of a recession?  Can we invent our way out of the mess that our inventions have left us in?

If science won WWII, did it lose Vietnam?  Are we giving too much agency to science?  Ultimately it all comes down to human beings and how we use the knowledge available to us.  If science has agency, it is because we have given it that agency.  The Frankenstein of the novel is the scientist, not the monster; only in the movies does the monster get a name.

Monday, March 1, 2010

Lewis Mumford, "Technics and the Nature of Man," Technology and Culture 7 (1966): 303-17

In this article Mumford challenges the basic assumption that defines man as a tool-using animal.  Many anthropologists and ethnologists have claimed that it was tool use that led to the development of the human brain, but as Mumford rightly points out, other species use tools (chimpanzees, for example) and their tool use has not led them down the same developmental pathways that man has followed.  He argues that it was the creation of significant modes of symbolic expression, rather than more effective tools, that was the basis of Homo sapiens’ further development.  As evidence for this claim he points to the cave paintings, created by early men who were still quite primitive in terms of the tools they had.

The fixation upon man as the tool user, which may also be an expression of presentism, casting our modern-day obsession with machines back upon our ancestors, has led to a fascination with the machine to the exclusion of other aspects of humanity’s existence.  But, Mumford would argue, at its origins technics was life-centered, not work-centered or power-centered.  The greatest technical feat of early man was the domestication of plants and animals, a feat that did not require great sophistication in tools, but did require a concentration upon sexuality in all its varied manifestations, a concentration that was abundantly evident in cult objects and symbolic art (the Venus sculptures, for example).

The mechanization and regimentation of society through industrial and bureaucratic organization eventually replaced religious ritual as a means of promoting the stability of mass populations.  This leads us, ultimately, to a present in which the focus of human activity has shifted from an organic environment to the Megamachine, and a future in which all forms of life and culture will be reduced to something that can be translated into the current system of scientific abstractions and transferred en masse to machines and electronic apparatus.

In order to bring technics back into the service of human culture, we need to cease our further expansion of the Megamachine and instead concentrate on the development of those parts of the organic environment and the human personality that have been suppressed.  We must replace automation, the proper end for a machine, with autonomy, the proper end for a human being.

Sunday, February 21, 2010

Lewis Mumford, "Authoritarian and Democratic Technics," Technology and Culture 5 (1964): 1-9.

For Mumford, democracy consists in giving final authority to the whole, rather than the part, and only living human beings are an expression of that whole.  Associated with this central principle are ideas of communal self-government, free communication, unimpeded access to the common store of knowledge, protection against arbitrary external control and a sense of individual moral responsibility for behavior that affects the entire community.

Democratic technics, then, is characterized by small-scale methods of production that rest mostly on human skill and energy and that remain under human control, even when machines are used.  But in society, as in technics, there is a tension between small-scale association and large-scale organization, between personal autonomy and institutional regulation.  The irony of civilization is that as our societies have been moving from authoritarian regimes to democratic ones, our technology has been moving from democratic technics to authoritarian technics.

Mumford traces democratic technics back to the earliest use of tools, claiming that it has been the underlying support of every historic culture, balancing the authoritarian regimes of the day.  Authoritarian technics, on the other hand, is a more recent trend (relatively speaking), traced back to the fourth millennium B.C. and coinciding with the rise of civilization in the form of centralized political control.  Drawing on inventions and discoveries in mathematics, writing, irrigation, and astronomy, it created complex human machines - the work army, the military army, the bureaucracy.  Authoritarian technics was tolerated, despite its potential for destruction, because it also created abundance.

Unfortunately, through mechanization and automation authoritarian technics has overcome its greatest weakness: its dependence upon human beings as its component parts.  Now the center of authority no longer lies with people but with the system itself; even the scientists who created it have become trapped within the organization they built.  The ultimate end of this technics is to transfer the attributes of life to the machine and the mechanical collective (we are the Borg, resistance is futile, you will be assimilated).  The only way to maintain our democratic institutions is to make sure that our constructive efforts include technology.  We must reconstruct our science and our technics so that they include the human personality, and favor variety and ecological complexity over uniformity and standardization.  We must put humanity back at the center of our technology.

Monday, February 8, 2010

Lewis Mumford, Technics and Civilization (New York: Harper, 1934)

In the Fall of 2001 I did a directed reading with my advisor.  The subject was technology/science and society.  For coursework I wrote summaries of the books that we read.

Mumford wants to know how and why Western Europeans carried the physical sciences to the point where the whole mode of life had been adapted to the pace and capacities of the machine, so that, in effect, society had surrendered to the machine.  He traces this development to the invention of the clock, which allowed time to be divided up and measured in the same sense that space is, and which helped create the belief in an independent world of mathematically measurable sequences.

The first wave of the machine came in the 10th century and was characterized by an effort to achieve order and power by purely external means.  The second wave occurred in the 18th century with improvements in mining and iron-working.  The disciples of Watt and Arkwright attempted to universalize the ideological premises of the first effort to create the machine, and to take advantage of the practical consequences.  With the third wave (20th century) the machine ceases to be a substitute for God or for an orderly society, and its success is now measured by the mechanization of life.

He also links the emergence of the present-day form of capitalism with the beginning of the machine age and the substitution of money-values for life-values.  In the quest for power by means of abstractions, one abstraction reinforces another.  Time is money, money is power, power requires the furtherance of trade and production, and increases in production drive increases in mechanization.  This abstraction of capitalism preceded the abstractions of modern science, so that the power that was science and the power that was money became the same kind of power - the power of abstraction, measurement and quantification.

Because of this link between technics and capitalism, technics takes on the characteristics of capitalism, which utilized the machine not to further social welfare but to increase private profit.  It was capitalism that destroyed the handicraft industries, even though the machine products were inferior.  It was because of the possibilities of profit that the place of the machine was over-emphasized and the degree of regimentation pushed beyond what was necessary for harmony or efficiency.  It was because of capitalism that the machine (a neutral agent) has seemed the malicious element in human society, careless of human life, indifferent to human interests.  The machine has been blamed for the sins of capitalism, and capitalism has taken credit for the virtues of the machine.  (Marxism)

The development of the machine civilization is divided into three successive but over-lapping and interpenetrating phases.  The Eotechnic phase is characterized by wood and water with the primary inventions being mechanical clocks, the telescope, cheap paper, print, the printing-press, the magnetic compass and the scientific method.  The Paleotechnic phase is characterized by coal and iron.  After 1750 industry passed into a new phase with different sources of power, different materials and different social objectives that multiplied, vulgarized, and spread the methods and goals of the first wave that were directed towards the quantification of life.  The source of mechanical power in the Paleotechnic phase was coal, and its industry rested on the mine, whose products dominated its life and determined the characteristics of its inventions and improvements.  This period is also marked by environmental degradation and the treatment of the environment as another abstraction along with money, prices, capital and most of human existence.  It also saw the worker as a resource to be exploited, mined, exhausted and discarded.

In the Neotechnic phase the scientific method took possession of the other domains of experience and turned the living organism and human society into objects of systematic investigation.  It is characterized by electricity and alloys and in order to survive it has to organize industry and its polity on a worldwide scale.  This phase is marked by instantaneous personal communication over long distances, and this instantaneous personal communication is the mechanical symbol of the world-wide cooperation of thought and feeling that must emerge if the world is not to sink into ruin.  (Wells’ World State?)

Monday, January 25, 2010

Stalin and the Bomb

Stalin and the Bomb by David Holloway - Modern European Intellectual History 12
(The Soviet Union and Atomic Energy, 1939-1956)

Science was seen in Russia, by both its friends and its enemies, as a progressive and democratic force.  But even after it was assimilated into Russian culture, it was mistrusted by many because it was seen as embodying Western values.  We have already seen the influence of the Bolshevik revolution on biology with Lysenko, and physics was also at risk of being politicized.  It was saved from that fate because Lenin understood that science and technology were essential for defense and economic security (“it is necessary to master the highest technology or be crushed”).

The first 30 years of the 20th century saw a rising interest in nuclear physics in the West, reaching a peak in the 1930s with the realization of the possibility of fission and the consequent release of energy.  Soviet scientists followed the advances in nuclear physics as well as participating in them, although they were hampered in their research by not always having access to the best equipment.  The State wanted science that would benefit the people; pure research was harder to justify.  Although in the West the notion that nuclear fission could be used to create an extremely powerful bomb was being discussed at this time, in the Soviet Union the primary interest was in the possibilities for power generation.  Physicists in the Soviet Union did not grow concerned about the possibility of an atomic bomb until work in the US, Britain and Germany was already underway.

In 1942, a review of journals by Flerov revealed that articles on fission were no longer appearing and that the scientists who had been doing the research on fission were not publishing on other topics either.  From “the dogs that do not bark” he determined that research on fission had gone secret in the US, which meant that the Americans were trying to build an atomic bomb.  He wrote to several people, including Stalin.  There was no response.  An ongoing concern in the Soviet Union, however, was the supply of uranium, needed for power plants as well as for bombs, and over the years there were attempts by physicists to get the state to organize the search for sources of uranium, with varying degrees of success.

Stalin, it seems, did not really understand the significance of the bomb, even when he knew that the Americans possessed one.  (The Soviets had details of the Manhattan Project as well as the MAUD Committee’s report.)  It was not until the US dropped the atomic bomb on Hiroshima that Stalin took a real interest in it, and only then did the Soviet Union begin a concerted effort to build one of its own.  The detailed intelligence that they obtained was not shown to all the scientists working on the project, however; it was shown only to Kurchatov, who was in charge and who used this knowledge to help guide the work.  The Soviet Union exploded its first atomic bomb on August 29, 1949.  It had taken them only a little longer to develop than it took the US to develop theirs.

It is important to remember that the relationship between politics and science was not an easy one in the Soviet Union.  Stalin distrusted the scientists, probably because he could not really understand what they were doing, and it is extremely likely that, had the test on August 29th been a failure, the scientists in charge would have been taken out and shot.

Stalin perceived US foreign policy as one of atomic blackmail.  After WWII, when the US was the only nation that had the bomb, he expected the Americans to use it to establish hegemony over the world.  This was something that the Soviet Union must, at all costs, resist.  The only people who feared the atomic bomb were those who had “weak nerves.”  This led to a war of nerves and of atomic brinkmanship, especially once the Soviet Union had its own bomb.  The Soviets didn’t want to give in to the US in international affairs, because that would make them look weak, but at the same time they didn’t want to provoke a war.  Stalin, however, believed that another war was inevitable so long as capitalism survived in the world.  WWI had heralded the Bolshevik revolution, WWII the rise of the Soviet Union; WWIII would crush capitalism forever.

After WWII, Stalin invested heavily in other military technology besides nuclear weapons, including jet engines, radar and missile technology, and the size of the military also increased markedly.  Immediately after building the atomic bomb, the Soviet physicists were set to work on the hydrogen bomb.  In this effort they did not duplicate the work being done in the US, but rather developed the technology on their own.  They tested their hydrogen bomb on August 12, 1953.  Under Stalin’s leadership the command economy, combined with the large defense industry and large military establishment, set the Soviet Union on a path of militarized development from which it was unable to escape, even after Stalin’s death.

Sunday, January 17, 2010

Francis Galton

Modern European Intellectual History - 11

Francis Galton - Hereditary Genius
Mathew Thomson - The Problem of Mental Deficiency: Eugenics, Democracy and Social Policy in Britain c. 1870-1959

Francis Galton published Hereditary Genius in 1869; a second edition was published in 1892.  The point of the book was to show that both the mental and the physical faculties of individuals obeyed the laws of heredity.  To demonstrate the heritability of mental faculties he examines the families of what he calls “eminent men” in England: judges between 1660 & 1865, statesmen, English peerages, commanders, literary men, men of science, poets, musicians, painters, divines and senior classmen of Cambridge.  To demonstrate the heritability of physical faculties he examines the families of oarsmen and wrestlers of the North Country.

In order to do this analysis he must quantify mental and physical faculties, and his solution is to apply the law of “frequency of error,” which is used by mathematicians to estimate the value that is probably nearest to the correct one from a collection of measurements of the same quantity.  This method had been extended to the proportions of the human body by Quetelet on the grounds that differences in something like physical stature could be treated as if they were errors from some norm.
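The mechanics behind that move are easy to see with a toy calculation: treat a batch of measurements of one trait as deviations scattered around a norm, take the mean as the value “probably nearest to the correct one,” and grade individuals by how far they fall from that norm.  The Python sketch below is my own illustration of the idea, not anything taken from Galton, Quetelet or Thomson; the stature figures are invented, and the old-fashioned “probable error” is used as the grading unit only to echo how Galton divided a trait into classes.

```python
# Toy illustration of the "law of frequency of error" applied to a human
# trait, in the spirit of Quetelet and Galton.  The data are invented.
import math
import statistics

stature_cm = [168.2, 171.5, 174.0, 169.8, 176.3,
              172.1, 170.4, 173.7, 167.9, 175.2]

norm = statistics.mean(stature_cm)       # the value "probably nearest to the correct one"
spread = statistics.stdev(stature_cm)    # scale of the "errors" around that norm
probable_error = 0.6745 * spread         # 19th-century unit: half of all cases fall within +/- 1 p.e.

def share_beyond(k_probable_errors: float) -> float:
    """Fraction of a normal population deviating from the norm by more than
    k probable errors in either direction."""
    z = k_probable_errors * 0.6745
    return 1.0 - math.erf(z / math.sqrt(2.0))

# Grade each individual by deviation from the norm, much as Galton graded ability.
for person in stature_cm:
    grade = (person - norm) / probable_error
    print(f"{person:6.1f} cm -> {grade:+.2f} probable errors from the norm")

# The same machinery implies that only a tiny share of the population lies
# far above (or below) the norm: Galton's "eminent" tail.
print(f"share beyond 4 probable errors: {share_beyond(4.0):.4f}")
```

The point of the sketch is only that once differences are treated as errors around a norm, ranking any trait, physical or mental, reduces to asking how many units of deviation separate an individual from the average.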

Galton felt that just as we breed our dogs and our horses for certain traits, so we could influence the mental and physical fitness of the human race, and that we owed it to the future generations of humanity to attempt such an improvement.  He coined the word eugenics in 1883.  Two decades later a movement emerged with the formation of the Eugenics Education Society in 1907, which began publication of the journal Eugenics Review in 1909.  The core concerns of the eugenics debate developed within the framework of social Darwinism.

By the end of the 19th century there was widespread concern that modern society was reversing evolution, leading to the degeneration of the English people.  This was partly driven by an increase in the recorded rates of lunacy from 2.26/10,000 in 1807 to 29.26/10,000 in 1890.  By the first decade of the 20th century mental defectives had become defined as the central eugenic threat facing the nation.  Greater social awareness plus universal education led to a growing realization of the presence of mentally deficient people in the population.  This heightened awareness coincided with growing fears about the fitness of the population.  In addition, the declining birth rate among the middle class, combined with the increasing birth rate among the lower class, led to the fear of mental defectives breeding without control.  Feeble-minded women were seen to be especially at risk of sexual exploitation by men, which created a link between morality and mental deficiency, leading to the notion of immorality as a sign of mental deficiency.  All of these factors contributed to the growing feeling that the ills of society could be traced back to mental deficiency.

In response to these growing concerns the Royal Commission on the Care and Control of the Feeble-Minded was created.  Their 1908 report contained the conclusion that mental deficiency was in large part hereditary.  In 1913 Britain passed the Mental Deficiency Act, which proposed mass segregation of the ‘feeble minded’.

Among the measures proposed to deal with the problem of mental defectives in society were colonies, community care and sterilization.  Colonies were seen as ways of segregating mental defectives from society at large in a positive environment in which they could be useful and maybe even educated to some degree.  Community care provided supervision, guardianship and occupation centers, but with limited resources it was difficult to exercise within the community the tight moral and sexual control over defectives that a separate colony would provide.  This led, in the 1930s, to the notion of linking community care with sterilization.

Sterilization as an option for dealing with mental defectives had been resisted both by the eugenics community and the mental institutions in Britain.  They did not want to separate sex from reproduction and were worried about the sexual exploitation of mental defectives and the spread of venereal disease in the community.  But by the 1930s it had become obvious that segregation was unable to cope with demand, that sterilization could relieve this pressure on the colony institutions as a part of community care and that it would also reverse the current trend of rising mental deficiency and improve the eugenic fitness and social welfare of the population as a whole.

The Eugenics Society failed to get sterilization adopted via legislation; the issue was simply too hot politically.  Sterilization was used, however, and by the 1960s the legislature approved it as a fait accompli.  The operation was directed at just that class of women that had concerned eugenicists in the 1930s.

Sunday, January 10, 2010

Friedrich Nietzsche

Modern European Intellectual History -  10

He was born at Röcken, Prussia, on October 15, 1844 - the birthday of the reigning Prussian King, Frederick William IV, and was named after him.  His father was a minister, and had tutored several members of the royal family.  His mother was a Puritan.  He was raised in a very religious household by women (his father died when Nietzsche was still young).  He read the Bible a great deal, and even read it to others.  This led his schoolmates to call him “the little minister” and to describe him as “a Jesus in the Temple.”

He lost his faith in the God of his fathers at 18, and spent the remainder of his life searching for a replacement (he thought he found one in the Superman).  At 23 he was conscripted into the military, but a fall from a horse injured him and he was released from service.  His brief experience of the military left him with almost as many delusions about soldiers as he had on entering it: the hard Spartan life of commanding and obeying, the endurance and discipline, appealed to him.  He worshiped the ideal of the warrior although he could never become one.  Instead he became a scholar.  He earned a Ph.D., and at 25 was appointed to the chair of classical philology at the University of Basle.

This conflict of opposites, of Parsifal and Siegfried, underlies Nietzsche’s philosophy.  In Beyond Good and Evil (1886) and The Genealogy of Morals (1887) he is trying to destroy the old morality and pave the way for the morality of the superman.  He seeks an understanding of the current morality through the etymologies of its words and finds two contradictory valuations of human behavior, two ethical standpoints and criteria: a morality of masters and a morality of the herd (or slaves).  The master morality values manhood, courage, enterprise, bravery.  The herd morality was born of subjection and values humility, altruism, and the love of security and peace.  Honor is pagan, Roman, feudal, aristocratic; conscience is Jewish, Christian, bourgeois, democratic.

But this morality is merely a veneer covering our secret will to power.  Love is a desire for possession, courtship is combat, and mating is mastery.  This passion for power makes reason and morality helpless.  The Judeo-Christian ethos has subverted the true nature of mankind by exalting the herd morality and suppressing our instincts, which are the most intelligent of all kinds of intelligence.  Moral systems are not universal; different functions require different qualities, and the “evil” virtues of the strong are as necessary as the “good” virtues of the weak.  The ultimate ethic is biological: good is that which survives, which wins; bad is that which gives way and fails.  Morality, as well as theology, must be reconstructed in terms of evolution theory.  The function of life is to bring about “not betterment of the majority, who, taken as individuals, are the most worthless types,” but “the creation of genius,” the development and elevation of superior personalities (Schopenhauer as Educator).

The goal of human effort, therefore, is not the improvement of mankind (which exists only as an abstraction) but the superman.  Nietzsche at first thought of the superman as a new species, but later came to think of him as a superior individual rising out of the mire of mediocrity, owing his existence to careful breeding and education.  Such a man would be beyond good and evil, because what is good is all that increases the feeling of power, the will to power, and what is bad is what is weak.  Mankind should give itself to this goal of creating the Superman just as Europeans once gave themselves as the means to the ends of Bonaparte.  But the Superman cannot come about from democracy, which was born in Christianity’s rebellion against everything privileged; he can come only from aristocracy.

Democracy means the worship of mediocrity and the hatred of excellence.  Great men are impossible in a democracy, because they would not submit to the indignities of a system that presumes equality.  Great men are like wolves among dogs, and the dogs hate the wolf, the free spirit.  Along with democracy, Nietzsche also condemns feminism, by which women become more like men, and socialism and anarchism, which are the offspring of democracy.  If you have social equality, why not economic equality, why have leaders at all?  But nature abhors equality; all life is exploitation and subsists ultimately on other life.

Sunday, January 3, 2010

Isaiah Berlin - Two Concepts of Liberty

Modern European Intellectual History - 9

For Isaiah Berlin one of the dominant issues of the world was the question of obedience (Why should I, or anyone, obey anyone else?) and coercion (If I disobey, may I be coerced?), a question that he felt had long been a central one in politics. In investigating the answers to this question Berlin proposes two concepts of liberty, negative and positive liberty. Negative liberty addresses the question: “What is the area within which the subject–a person or group of persons–is or should be left to do or be what he is able to do or be, without interference by other persons?” Positive liberty addresses the question: “What, or who, is the source of control or interference that can determine someone to do, or be, this rather than that?” While these questions are different, he acknowledges that the answers to them may overlap.

Political liberty in the sense of negative freedom (he uses the terms freedom and liberty interchangeably) is the area within which a man can act unobstructed by others. If I am prevented from so acting by the deliberate interference of other human beings I can describe myself as being coerced or even enslaved, depending upon the degree of interference. But for human society to avoid dissolving into chaos, some limits on individual freedom must exist. The question is, then, where are those limits drawn? Where is the line between private life and public authority?

Notions of positive liberty derive from the desire on the part of the individual to be their own master. It is not freedom from, but freedom to–freedom to live as you see fit. These notions may not seem to be very far apart, but Berlin claims that these ideas of freedom historically developed in different directions and ended up coming in direct conflict with each other. This can be seen by examining the question of what it means to be one’s own master. We can be slaves to our nature, or spiritual slaves, just as we can be physical slaves. And we can justify coercion by claiming that we are acting in the best interests of those whom we are coercing, and that if they were but more self-aware they would recognize the probity of our actions. Berlin believed that conceptions of freedom are derived directly from conceptions of what constitutes the self, a person, or a man. Manipulate the definition of what it is to be a man, and you can change the definition of freedom into whatever you wish it to be.

One way of attaining freedom is to liberate yourself from desires for things that you cannot attain. If we cannot change our environment (social, political or physical), we change ourselves to suit it. But this attitude results in the reduction of the sphere of negative liberty, and leads Berlin to the conclusion that defining negative liberty as the ability to do what one wishes will not work. If I can do little of what I wish, I need only reduce or eliminate my wishes and I am free.

Another way of attaining freedom is through knowledge and understanding. This is the rational approach to liberty. If I understand why things must be the way they are then, as a rational creature, I must will them to be that way. Our knowledge liberates us not by offering us more choices, but by keeping us from attempting the impossible. Berlin calls this the positive doctrine of liberation by reason. But is such a rational life possible not just for the individual, but also for society? And if it is, how is it to be attained? I wish to be free and to live my life as my rational will commands, but so does everyone else. How do we avoid the collision of wills? Where is the line between my rationally determined rights and the rationally determined rights of others? And is there only one variety of rational society?

Will our rationality save us from tyrants and oppression? Will it lead us to truths that cannot be disputed, that are universal? There is also a notion here (perhaps naive?) that rational men would not want to oppress or exploit others. But who defines what is rational? And what do we do about those individuals who do not meet that criterion? Can we legitimately coerce them for their own good? Can we educate them or re-educate them, even against their will?

Berlin believed that the positive notion of liberty was at the heart of the demands for national or social self-direction that resulted in the most powerful and morally just public movements of his time. But at the same time he thought that the idea that a single formula could be found that would harmonize all the diverse ends of humanity was wrong. Instead, pluralism, with the measure of negative liberty that it entails, seemed to him a truer and more humane ideal than the goals of those who seek the ideal of positive self-mastery by classes, people, or humanity as a whole. In his view, pluralism is truer because it recognizes the fact that human goals are many and that they are not all commensurate with each other, and that some may in fact be in a state of perpetual rivalry.