About

The Gray Box is operated by David Voelker, Associate Professor of Humanistic Studies and History at the University of Wisconsin-Green Bay.

Sunday, February 10, 2013

The Humanities Know (Part 2): Meaning Matters

This post is the second installment in a series titled “The Humanities Know.” Click here for the first installment.

My initial post about the value of the humanities purported to be an argument “against the materialists who believe that anything without an obvious, short-term cash value is in fact valueless.” By “materialists” here, I don’t mean anything very technical: simply the valuing of material wealth to an extent that crowds out other values. In recent debates over higher education, we’ve seen materialists of this ilk repeatedly deny the value of the humanities, suggesting unfairly that college graduates with humanities degrees lack useful skills.[1] Not only is this characterization of humanities graduates inaccurate, but students of the humanities know something very important about the way the “real world” works that flies in the face of materialism: meaning matters. Neither individuals nor groups make decisions based solely upon financial incentives. Rather, most people search for meaning by pursuing values that transcend material concerns. Understanding this fact about the human condition, humanists take the activity of making meaning, often labeled as “culture,” as a subject of serious study. Sometimes we even contribute to this culture through our scholarly and creative work.

Skeptics of this claim about the orientation of the humanities would rightly point out that some humanists (especially in literary criticism, philosophy, and cultural studies) have assaulted meaning in recent decades, interrogating and undermining knowledge claims through an intellectual project often (simplistically) labeled “postmodernism.” Be that as it may, most humanists are capable of putting aside our anxiety regarding the uncertainty or relativity of knowledge in order to recognize that meaning does indeed matter – and it matters as much for us as economic beings as it does for other aspects of living in the “real world.”[2]

We (human beings, whether humanists or not) make meaning as individuals and as groups, from the family on up to the nation-state and perhaps even, as one of my colleagues argues, through a “global civil society.” We make this meaning in myriad ways, most of which are forms of storytelling. Much of the work of the humanities is to show how our narratives (wherever they are embedded) shape social relations, build communities, motivate political participation, and also constitute the life-blood of the cultural segment of the economy (including but not limited to film, fiction, music, poetry, and the arts). In other words, the humanities (and their elder siblings, the arts) explore how meaning shapes and guides the important life choices of individuals and societies.[3]

Before exploring the humanist view further, let’s walk into the thorny thicket of the materialists. These folks take their ideas from neo-liberal economists, but as a general rule they don’t sweat the theory (or its evidential base) so much as they adhere to the ideology that “the market” should be the sole and final arbiter of value. From this point of view, the meanings we make are no more than mere stories we tell ourselves – a kind of epiphenomenal frosting on a purely material substrate – functioning as after-the-fact “rationales” for determined outcomes. For market ideologues, our decisions are (and should be) motivated by a very narrowly defined form of economic “rationality” – a sort of personal cost-benefit calculus.

To be fair, the materialists (who have become so influential in our halls of state) do not always wear their materialism on their coat sleeves, but usually clothe themselves in the language of family values, responsibility, patriotism, enterprise, growth, and, perhaps most of all, “job creation.” Perhaps some of them have a sincere personal dedication to these values. But if you look at their actions as public officials, they tend to focus on eviscerating public spending on education, healthcare, and social welfare programs, as well as on removing as many “barriers” to doing business as possible. Although the evidence for their position is dubious, they maintain that slashing spending at all levels of government is necessary in order to cut taxes, because the “job creators” won’t create jobs if taxes creep too high or if there are too many regulations.

While I would argue that neither American economic history nor comparative economics supports this story told by market ideologues, I don’t deny that there is a certain logic to it. But it’s a logic that rests upon the flawed supposition that “economic growth” necessarily makes everyone prosperous – or at least everyone who deserves it. With this deeply problematic assumption in mind, it seems reasonable to some people to subordinate the government to business interests. The government should do what it can to promote business and should otherwise “get out of the way.”

Economic growth, measured in a very gross fashion, is thus seen as a panacea for almost all of our problems. The socio-economic inequality and the environmental degradation that have often accompanied growth when it is undertaken at any cost are ignored, denied, or blamed on someone else. The materialists’ accounting sheet simply doesn’t accommodate these other costs, and they try to keep these issues off the table, politically. If we put aside for a moment that the folks who support this mindset and the policies that follow often reap unfair financial benefits, we can perhaps also see that their argument – Hey, everybody needs a job – does have something to be said for it. But if we step outside of the materialist frame of mind, we’d quickly ask: Yes, but what kind of job? And, will the work be good for me and my community?

In education, this materialist mindset suggests the need for the following “reforms,” to be engineered in large part by people who have no expertise in education or the fields being taught: first, define education as job training; second, privatize as much of education as possible; and third, subject higher education, which has obviously run afoul of the previous two principles, to the yoke of business logic (which, incidentally, will deprive many critics of market ideology of a platform for sharing their ideas).

As a counter-weight to the materialist worldview, consider the following – not exactly as an argument but rather as a collage of the human experience that hints at their errors. If meaning did not matter – if all that really mattered to us was short-term economic value: Would earlier Americans have created a system of government that was meant to check the accumulation of power? Would democratic politics continue to exist, even in its currently diminished form? Would we even have public schools in the first place? Would religious controversies so powerfully shape world history, from ancient times to the present? Would indigenous peoples around the world continue to preserve their cultures in the face of globalization? Would tens of millions of people bother to creatively express themselves over the Internet? Would millions of Americans choose careers in fields where their primary work is to support the growth and wellbeing of other people?

Would college graduates be better prepared to face “the real world” as it exists today if they no longer had opportunities to learn about the history of democracy, if they no longer had the chance to study creative works, if they never explored the diverse religions, cultures, and languages of our nation and of the world?

Currently, students who major or minor in the humanities (and other liberal arts programs) carefully study topics like these (politics, culture, religion, and language). But students in supposedly more “strategic” fields also have some exposure to these areas of study through general education programs – the wide array of courses in the humanities, social sciences, arts, and sciences that they must take to complete a degree. Thus the study of this activity of making meaning, which is central to the humanities, is an integral part of any “higher” education. To neglect this facet of the human experience would be to leave students without the knowledge and understanding that they need to thrive in our diverse and culturally complex society, not only as workers but just as importantly as citizens and members of families and communities.

It should be clear by now that despite the attempts of materialists to deny the importance of meaning and culture – and the educational apparatus that supports it – they are in fact making a meaning of the crudest sort. I think it's high time for those of us who don't accept their diminished vision for public and private life to stand up and say so.

 

Postscript: I hope that it goes without saying here that I want people who are seeking jobs to be able to find good jobs. Nothing I say here should be taken as an argument against jobs or employers. I think that many successful business owners avoid the sort of materialism that I am criticizing here, and many of them also support public policy that transcends materialism. My concern about "good jobs" derives from the simple fact that most economic growth in the U.S. in recent decades has benefited people who were already quite well off, while leaving many working people underemployed, uninsured, and insecure for the future. A recent piece by Nobel Prize-winning economist Joseph Stiglitz supports my concern, as do countless other studies of growing economic inequality in the U.S.


  1. Corporate employers routinely report that most new college graduates, regardless of their major, need a year or two of on-the-job training. (In other words, business and communications majors need as much training as English and philosophy majors.) Yet it would be a serious mistake to therefore assume that higher education needs to be reformed as “job training.” If it were true that college graduates hadn’t really learned anything, then why do employers hire them rather than new employees with high school diplomas only, which would certainly be cheaper? The fact is that modern corporate jobs require broad knowledge of proprietary information about employer-specific computer systems, product lines, clientele, and bureaucracies. These systems are properly the arena of job-specific training – not higher education. Employers need to be prepared to make this sort of investment in their workers.  ↩

  2. I reiterate the phrase “real world” here as a reminder that living in the real world includes a wide range of experiences that happen outside of compensated labor but are equally or more important to our existence as human beings. Our rhetorical habit of conflating the “real world” with the workplace is dehumanizing – and not incidentally so. Consider: who benefits from that sort of language?  ↩

  3. I realize that I am making a sweeping claim about the humanities. As a qualification, I might add that the humanities pursue this project in diverse and complex ways, and they do other things as well. I take the liberty of referring to the arts as the “elder siblings” of the humanities because historically the creation of meaning has come before the formal study of said meaning.  ↩

Sunday, January 27, 2013

The Humanities Know (Part 1)

This post is the first installment in a series titled “The Humanities Know.”

There has been a lot of discussion lately of the need for the humanities to teach “21st-century skills” – especially focused on the idea that humanities programs could do a much better job of preparing students to make their way in the contemporary economy, where knowledge of history, literature, culture, and language is more often applied and expressed through the use of digital technologies than via a conventional “paper.” Fair enough. This conversation is a welcome improvement over the parallel discussion that suggests the humanities should be “cut out” because they are “non-strategic.” Although I support the “digital humanities,” I would like to defend the value of the humanities above and beyond their potential affiliation with new technologies. More generally, I would like to point out that the current convergence of conservatives’ desire to cut education funding with technologists’ fantasies regarding online education poses a serious threat to higher education, especially to higher education that includes the humanities.

I am taking up a position against the materialists who believe that anything without an obvious, short-term cash value is in fact valueless. People of this sort seem increasingly to populate our elected assemblies. They often come out of the business world and pride themselves on their practicality. Their hostility toward public higher education seems to derive only in part from the expense.[1] Although the “liberal” in “liberal education” does not mean politically liberal, the materialists understand correctly that liberal education – which includes the study of the arts and humanities alongside the social and natural sciences – poses a threat to the ideological position that “the market” should be the sole and ultimate arbiter of value.

Many, perhaps most Americans recognize the limitations of this materialistic thinking, which presses technocratic means into the service of a business oligarchy. I do not mean to dismiss the usefulness of technical expertise in business and government, which historically was a progressive development in some contexts. But it seems often to be the case that the proponents of technical values (efficiency, profitability, development) lose track of the larger purpose of their endeavors. I think that there are a couple of explanations for this problem. First, there is a tendency to deal mainly in quantitative measures of efficacy, which are necessary and useful but necessarily strip away context and oversimplify. Second, there is over-confidence in technical solutions to both human and natural problems. By a technical solution I mean a solution that consists of a discrete intervention into an economy, a society, or a natural environment, often aimed more at addressing a symptom than an underlying problem. I don’t mean to thumb my nose at technical solutions, which could include anything from offering a new tax deduction to deploying a vaccine. We need technical solutions. But we also need a public that is capable of establishing a framework of values within which to identify and address problems.

A problem is only a problem within a particular context. A solution is only a solution within a particular context. Some branches of natural and social science are more context-aware than others, but it’s the humanities and arts that are hyper-alert to context. If our society is going to have a fighting chance of wrestling effectively with the wide range of problems that we currently face, we need a wide variety of perspectives on how the problems are defined and approached. We need not simply technical descriptions of the problems but humane understandings of how the problems emerged out of a particular context, and a creative envisioning of how we might find a way of addressing (if not solving) problems.

The particular sect of the business-political class that worries me has shed context. It has donned the cloak of thrift and practicality (usually using key phrases such as “tax cuts” and “job creation,” which are magically synonymous) with plans to establish a minimal society – a society where everyone looks after themselves while ignoring all of the ways that we depend on one another for our prosperity, success, health, and happiness. This iron law of interdependence applies as much (perhaps even more) to the wealthiest among us as it does to the most humble members of our society. In trying to reform society along minimalist lines, the materialists (despite their frequent use of the rhetoric of patriotism) have been striking out against those who disagree with them, whether in government, labor unions, education, or elsewhere. They champion a reductive point of view. Education is reduced to a narrow economic utility. Insofar as public education is tolerated, it is required to serve a narrowly economic purpose. Higher education is particularly vulnerable under this regime. Higher education at its best supports critical thinking, diverse perspectives, and patient inquiry. None of these values are short-term profit centers.

In fact, there is something inherently inefficient about higher education. Insofar as higher education is democratic, it is also inefficient. The American system of higher education, storm-beaten as it is, is still probably the best in the world. It is based upon the assumption that it is simply not adequate to have a few brilliant minds at a few well-funded private institutions charged with defining and distributing knowledge. One of the frightening prospects of massive online courses (combined with a business model for education more generally) is that a few good lecturers could be given exactly this charge. Proponents of efficiency will ask: why not “bottle” the great lectures of this or that professor, have students watch and listen and take online objective exams, and call it higher education? (Indeed, they are already celebrating this model.) While this approach may work for some courses, it is hardly a platform for higher education.

This model typifies the technocratic thinking that I critiqued above. The “problem” is that higher education (and education more generally) is too expensive. The “solution” is to use technology to make education more efficient and cost-effective. Meanwhile, who can complain if a system that was once exclusively public or non-profit is now “open for business”?

I would like to suggest some questions that I think we should ask in response to the trajectory we are on: What would we be giving up here? Would we still have something worthy of being called higher education?

There is a reason that higher education has not made major “productivity” gains over the past several decades. (The reason is not lazy professors. One does not earn a Ph.D., much less find a job and get tenure, without working very hard.) The reason is that both teaching and learning are time- and labor-intensive endeavors. Although I think we should celebrate the new connectivity afforded by computer technology, there is little evidence that computers can reduce this time and labor. In many cases, the opportunities afforded by technology ramp up the required labor.

I am not defending higher education’s pre-recession status quo. Nor am I attacking the use of technology in higher education. (I am much closer to being a technophile than a technophobe.) I think that our colleges and universities can, must, and will do a better job of educating students. But we are not going to do this by stripping public universities of full-time faculty and replacing them with internet connections and a few super lecturers.


  1. Public higher education is not nearly so “public” as it used to be. For example, according to recent estimates, the state of Wisconsin pays only about one out of every five dollars needed to fund my institution.  ↩

Sunday, September 30, 2012

Telling Students the Truth about Good Writing

Most of my posts here will be a bit more scholarly than what follows, which is a brief reflection on some evolution in the way I think about student writing. In a nutshell, I am increasingly comfortable with the idea that some students might always need help with aspects of their writing, and they may as well figure this out sooner rather than later and strategically ask for help.

To begin, I should confess that I am a stickler when it comes to writing. I like my thesis statements obvious, my compound adjectives hyphenated, and my independent clauses separated by proper punctuation. When I read uncopyedited manuscripts, I make sure the hyphens, em dashes, and en dashes are all in order. And so on.

Although I no longer “mark up” student papers[1] – because the research on teaching writing shows that this is not usually an effective pedagogical strategy – I do hold my students accountable for several basic writing rules, which I publish in a custom “Handbook for Writing Historical Essays.” (Here’s a link to a slightly dated version.) Among other things, I require student papers to use standard spelling, grammatically correct sentences, gender-inclusive language, and integrated quotations.

One of the most vexing problems that I encounter in student writing deals with the “complete sentences” rule that is so dear to my heart. Every semester, I have two or three students who habitually write comma splices or other kinds of run-on sentences. I think that comma splices bother me so much because I misread them every time, as my mind tries to make some tortured syntax work. Once I recognize the run-on, of course, I simply back up and re-read the would-be sentence, perhaps a little grumpy for being forced to retrace my steps.

Whenever I encounter a student who tends to write comma splices, I send them to this excellent online handout from the Writing Center at the University of North Carolina at Chapel Hill. One time a student thanked me for the explanation and really did seem to benefit, but I have learned from long experience that few students who are writing comma splices by the time they enter college are able to mend their ways so easily.

I think that I am finally coming to terms with this cold, harsh truth. Although I will continue to fling around handouts about how to write complete sentences and such, I am ready to embrace a new strategy: letting students in on the secret that (virtually) all good writers (almost) always have another good writer read over their material before they share it with their intended audience.

For several years, in fact, my writing assignments have encouraged students to have someone (not in the class) read over their essays before they submit them. This fall, I went a step further and included the following note on the syllabus for an upper-level history course:

A note about writing: Producing a polished final document does not have to be an individualistic effort. Although the standard rules about academic honesty always apply—meaning that you must cite the sources from which you borrow ideas and information—I encourage you to ask a friend with strong grammar and spelling skills to help proofread your work. Even strong writers benefit from this practice. If your own writing is technically shaky at this stage of your academic career, then you should probably accept this limitation and find ways to deal with it.

Most important here is the final sentence, in which I suggest that students who know they have particular writing wrinkles that they haven’t been able to iron out should make a habit of relying on a friend to help them clean up their prose.

There are times and places where this practice might be seen as cheating, but I think that this is an unfair conception of what good writing should be. If I am correct that (virtually) all good writers (almost) always have another good writer read over their material before they share it with their intended audience, then we should be teaching our students to do the same.[2]


  1. There are exceptions to this rule, of course. I personally did learn from heavily marked-up papers – but then I was the kind of student who almost immediately wanted to become a professor. I return the favor when I am working with students under certain circumstances, such as independent studies and honors projects. But I generally give feedback using a flexible but detailed rubric, which requires students to identify their own errors and figure out how to apply the advice that I am giving them rather than making changes mechanically.  ↩

  2. I am well aware that peer workshops and peer editing have been an essential part of composition pedagogy for decades, but I do not see that this practice has caught on outside of “comp” courses.  ↩

Wednesday, September 12, 2012

History Pedagogy and Engaged Citizenship

On the last day of August, my excellent collaborator Joel Sipress (Professor of History at UW-Superior) and I had the honor of speaking with history faculty (at a casual lunchtime colloquium) and graduate students at the University of Notre Dame. Over 25 history graduate students attended our talk, “Beyond the Coverage Model: The Argument-Based History Course.” (Thanks to Craig Kinnear for organizing this visit and for drawing my attention to the citizenship issue that I discuss below.)

Joel and I used a combination of evidence from historical and pedagogical research to argue that the coverage model of teaching history has been falling short for over a century. We made much of this case in our March 2011 article in the Journal of American History, but we have now expanded our explanation of the argument-based introductory history course that we propose to replace the traditional coverage-based model. I’d like to briefly explain the differences between these two models in order to comment on the different assumptions that they make about citizenship.

The coverage model is designed primarily to ensure that students are exposed to (and become familiar with) key factual and conceptual information about the place and time in question (e.g., the U.S. since 1865, or the western world since the French Revolution). Joel and I acknowledge that there is a great deal of variation among these courses and what they choose to cover and that many of them include primary source analysis. In that sense, there is no such thing as “the History Survey course.” But we contend nonetheless that the dominant design imperative of history survey courses is to cover a wide array of material, which is mostly presented to students as fixed knowledge. The most obvious (but not the only) problem with this sort of course design, which has been passed down for generations, is that exposure to huge piles of information produces little long-term learning. It’s no wonder that various tests and surveys of the historical knowledge of American youth have yielded disappointing results – for nearly a century! [1]

The argument-based introductory course that Joel and I promote as an alternative to coverage also has many possible manifestations. We have documented this course design used by other instructors and have developed it extensively ourselves. [2] We emphasize several key features of the argument-based course. In a nutshell, an argument-based course is designed from the ground up around a small number of significant historical questions – questions that scholars actually debate – that students will explore in depth with the expectation that they will devise and support their own historical arguments. In this model, instead of encountering historical information through textbooks and lectures, students engage with a manageable number of competing scholarly interpretations, alongside a modest body of primary documents and reference sources to help them understand the debate and take positions of their own. Note that the instructor still plays an essential role here as a guide to the authentic debate, and what is usually called “content” (in the coverage model) takes center stage as an object of scrutiny.

One of the interesting points of disagreement between Joel and me, on the one hand, and the defenders of coverage, on the other hand, centers on how we understand the possible role of history education in promoting engaged citizenship. The defenders of coverage say things such as, “there are certain things that every educated person should know,” paired with the assertion that the coverage-based course “covers” some significant set of those “things” that can both enrich one’s life and prepare one for citizenship and public engagement. Thus some defenders of coverage may see a reduction in coverage as a sort of “dumbing down.”

I think that this view is mistaken for a number of reasons. First, as noted above, the coverage model has not proven effective at cultivating the kind of cultural literacy that defenders of coverage seek to promote. Second, the argument-based model, far from jettisoning “content,” gives students an opportunity to engage seriously and deeply with important historical questions, contexts, and sources. [3] Finally, and central to my point here: the coverage model wrongly takes for granted that students will be able, at some future date, to usefully apply whatever historical knowledge they have retained from their “exposure” to heaps of information. This assumption violates the basic pedagogical principle that learners must actually practice whatever it is that they are supposed to be learning. [4]

Here’s where the argument-based model comes in. Joel and I contend that introductory history courses should be more intentionally designed to guide students through the process of thinking historically. We are not suggesting that introductory-level students learn the methods of historical research; we don’t harbor hopes that we are preparing our general education students for a life of slipping off to the archives during their lunch hours. But these students can and should learn that to study history is to analyze sources from various perspectives, to interpret a body of evidence, and to draw conclusions in conversation with other people who are studying the past. To really learn these things, students must actually practice them extensively. And if college students learn to engage critically with historical claims and evidence in this way, they will be much better prepared for democratic citizenship than if they have taken a history course that prioritizes the passive reception of historical information and perhaps thereby reinforces common misunderstandings about the nature of historical knowledge. [5]

The best defense for keeping history in our schools and colleges is not primarily that it exposes students to certain information about the past and helps some tiny percentage of them achieve cultural literacy. Rather, the study of history is an essential part of education because it provides a critical apparatus for understanding why things are the way they are, how the way things are compares to the way things have been, and how we might go about changing them – should change seem desirable.

With this in mind, I thank Professor Dan Graff, the Director of Undergraduate Studies in history at Notre Dame, for giving me a departmental T-shirt inscribed with the following: “People who want to change the world know how the world has changed.”

Thanks to both Joel Sipress and Brian Steele (Univ. of Alabama-Birmingham) for reviewing a draft of this post.


  1. See Sam Wineburg, “Crazy for History,” Journal of American History 90 (March 2004): 1401–1414.  ↩

  2. See Lendol Calder, “Uncoverage: Toward a Signature Pedagogy for the History Survey,” Journal of American History (2006) for a seminal critique of the coverage model. For a broader overview of the issues, see Joel Sipress and David Voelker, “From Learning History to Doing History,” in Exploring Signature Pedagogies: Approaches to Teaching Disciplinary Habits of Mind, pp. 19–35, edited by Regan Gurung, Nancy Chick, and Aeron Haynie (Stylus, 2008), as well as the aforementioned March 2011 article in the Journal of American History.  ↩

  3. For example, see the fall 2012 syllabus for my introductory early American history course. Many of the course details appear in a website that is not open to public access, but the syllabus outlines the basic course structure.  ↩

  4. For a thoroughly documented exposition of this principle, see Grant Wiggins and Jay McTighe, Understanding by Design, 2nd ed. (Upper Saddle River, NJ: Pearson Education, 2006).  ↩

  5. For a brief discussion of common misunderstandings, see Terrie Epstein, “Preparing History Teachers to Develop Young People’s Historical Thinking,” Perspectives on History (May 2012).  ↩

Thursday, August 30, 2012

Studying the Impact of Course Design on Student Learning

Wherein I report preliminary results of my inquiry into student learning in my argument-based introductory history course.  (Updated at bottom with some statistical details.)

It just makes sense that my inaugural post here addresses my own work in the Scholarship of Teaching and Learning (SoTL) in history. I've carried out several SoTL projects over the years – collecting and analyzing evidence of my own students' learning – but I have for the first time collected data from a comparison group of students, and it is really interesting for me to see how my own students measure up against students in a similar course.

More specifically, I asked students in my introductory (early) American history course and in a colleague's introductory (modern) American history course to explain, in their own words, "what historians do" and to give specific examples if possible.

It's important to note that these were not questions that any of the students intentionally prepared to answer. Rather, students received a small amount of extra credit simply for responding briefly to these prompts after they completed their in-class final exams. (There was no added incentive to be especially thoughtful or complete.) In all, I collected a set of over 150 quickly penned responses from students who were probably pretty tired of answering questions for professors.

I will have more to say about the data that I collected in due time, but I want to explain here that despite the limitations noted above, I was able to see marked differences between the responses of my students, who had just completed a question-driven, argument-based introductory history course, and responses of students who had taken a more standard history survey course (taught by an excellent teacher, by the way).

While the evidence that I collected was textual, and I will pay close attention to the language students used, I also analyzed the responses using a rubric, marking each one with a series of codes based upon the content of the response. (I have shared my rubric here.) I then entered the codes for each response into a spreadsheet for collective, quantitative analysis.

[Image: snippet from a coded response]
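To make the coding-and-tallying workflow concrete, here is a minimal sketch (in Python) of the kind of bookkeeping involved: each response gets a set of rubric codes, and the codes are tallied per group for comparison. The code names and sample entries below are invented for illustration only; they are not the actual codes from my rubric.

```python
# Hypothetical illustration of tallying rubric codes per group.
# The code names and entries are invented, not the actual rubric data.
from collections import Counter

coded_responses = {
    "my_class": [
        {"NAMES_HISTORIAN", "MENTIONS_DEBATE"},   # one student's codes
        {"MENTIONS_EVIDENCE", "MENTIONS_DEBATE"},
        # ... one entry per student response
    ],
    "comparison_class": [
        {"MENTIONS_EVIDENCE"},
        set(),                                    # a response earning no codes
        # ...
    ],
}

for group, responses in coded_responses.items():
    tally = Counter(code for codes in responses for code in codes)
    n = len(responses)
    print(group, {code: f"{count}/{n}" for code, count in sorted(tally.items())})
```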

Here, perhaps, is the most striking number that emerges from the first 100 responses that I have analyzed. In the comparison class (44 responses), not one student (0%) mentioned a specific historian or scholar by name. In my class (56 responses), over 20% of students mentioned a specific historian by name, without being prompted to do so. (Yes, I realize that 20% is not a stunning number, but compared to 0% it looks pretty good!)

Why do I care if students mention specific historians when asked to give examples of what historians do? I don't actually expect introductory-level students to have memorized historians' names that they will remember over the long term. That's not my goal. I do, however, want them to develop a deep and lasting awareness that to study history is to enter into an evidence-based discussion or debate. Those students who mentioned historians by name showed that they understood that history is not a simple description of the past produced by anonymous experts who are simply reporting indisputable facts. (By the way, I also coded for mentions of analysis and interpretation.) Here's another area where my students out-performed the comparison group: about 37% of my students mentioned the idea of debate or discussion among historians, while only 14% of the comparison group referred to this aspect of studying history.

Again, I've just begun to analyze my data, and I will want to refine this study and collect additional evidence over subsequent semesters. But what I am seeing here is that my course design is making a difference in student learning. Having carried out this inquiry, I now have a more complex understanding of how my students think about history.

UPDATE (9/8/12): I have processed the remainder of my data from my spring 2012 students. Although the percentages changed a bit when I added in the second section of students, the basic trend is the same. With the help of my generous colleague Ryan Martin (Assoc. Professor of Human Development and Psychology at UW-Green Bay), I ran my data through SPSS and found that the major differences that I noted above between my students and the comparison students were statistically significant. For those of you who speak the language of statistics, the t-test comparing the means of the total scores of my students versus the comparison students yielded a p-value of 0.001. (This means that, assuming my data are good and the samples are representative, etc., the probability of seeing a difference this large by chance alone – if the two courses actually made no difference – is about one in one thousand.)
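For readers who would like to check this kind of comparison without SPSS, here is a minimal sketch using Python's scipy.stats. It is not my actual analysis: the counts below are reconstructed approximately from the percentages reported above ("over 20%" of 56 responses, roughly 37% of 56, 14% of 44, and 0% of 44), so treat them as illustrative; the t-test on total rubric scores would require the per-student score lists, which I have not published here.

```python
# Approximate re-analysis sketch: counts are reconstructed from the
# reported percentages, not taken from the actual coded data.
from scipy.stats import fisher_exact

# 2x2 tables: rows = [my class (n=56), comparison class (n=44)],
# columns = [showed the feature, did not show it]
named_historian = [[12, 56 - 12],   # "over 20%" of 56 responses
                   [0, 44]]         # 0% of 44 responses
mentioned_debate = [[21, 56 - 21],  # ~37% of 56
                    [6, 44 - 6]]    # ~14% of 44

for label, table in [("named a specific historian", named_historian),
                     ("mentioned debate/discussion", mentioned_debate)]:
    odds_ratio, p_value = fisher_exact(table)
    print(f"{label}: p = {p_value:.4f}")

# The t-test reported above compared mean total rubric scores; with the
# per-student score lists in hand, the analogous call would be
# scipy.stats.ttest_ind(my_scores, comparison_scores).
```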