Thursday, July 11, 2013

Conflict in Egypt

By: Nancy Gallagher

On July 3, 2013, the military overthrew the elected Egyptian government. Led by the Muslim Brotherhood (the Society of Muslim Brothers), the government had come to power on June 30, 2012. Was it a second revolution, a necessary correction in the path of the January 25 (2011) Egyptian Revolution, or a military coup? Who were the Muslim Brothers, how did they rise to power, and why did they fall so spectacularly?

Hasan El-Banna, a schoolteacher and religious leader, established the Muslim Brothers in 1928. The organization initially sought to Islamize society in order to drive the British out, but King Farouk (reigned 1936–1952) and Gamal Abdel Nasser (president 1952–1970) severely suppressed it. Most of the Muslim Brothers' leadership came to follow the ideas of Sayyid Qutb, who advocated the assassination of political leaders who did not adopt his interpretation of Islamic Sharia law, calling them "infidels." Over the years, the organization won many supporters because of its grassroots religious leadership and extensive social welfare organizations. In the elections that followed the January 25 Revolution, the Muslim Brothers' candidate, Mohammad Morsi, won the presidency with 52% of the vote. His opponent, the candidate of the old regime, went into exile.

Many people voted for Morsi because he had vowed to carry out the goals of the revolution. He soon demonstrated, however, that he would instead carry out the goals of the Muslim Brothers. He proved unwilling to work with other factions. He did not take steps to reform the brutal and corrupt state security sector. He did not reach out to the opposition. He did not include Coptic Christians and women in his government, despite his many campaign promises. In November, less than six months after taking office, he issued a decree that would shield him from judicial review. After intense opposition, he revoked the decree, but the damage was done. He then forced through a constitution that was narrow and exclusionary. Women feared they would lose their hard-won and recently won rights. Copts and other minorities feared for their future in Egypt. The government did little to reassure them. Morsi and his appointees proceeded to go after the media, the NGOs, the judiciary, and the arts.

Before being elected, he had announced a program to revive the economy, but it did not materialize. The economy continued to sink. A loan from the International Monetary Fund could not be obtained. Tourists did not return. He appointed a member of al-Gama'a al-Islamiyya—the organization allegedly responsible for the massacre of 62 people, mostly tourists, in Luxor in 1997—to the post of governor of Luxor.

He thought he had tamed the military when he dismissed Defense Minister Hussein Tantawi, who had been Egypt's de facto ruler after the 2011 revolution, but Tantawi's replacement, Abdul Fatah El-Sisi, proved to be the ultimate power broker. When Morsi broke relations with Syria and encouraged his followers to wage jihad against the Syrian regime without first informing Sisi, Sisi decided Morsi had to go.

On June 30, a year after Morsi came to power, at least two million people demonstrated against the government in response to the Tamarod (rebel) campaign that began with a petition calling for early elections. Egyptians claimed it was the largest demonstration in history. The Muslim Brothers bused members from outside Cairo to stage a rival demonstration, but the extent of public disaffection was clear.

On July 3, the army arrested Morsi. His supporters then confronted the military, and dozens were killed. On July 8, the military opened fire on a demonstration in front of the Republican Guards headquarters, killing 59 and wounding over 300.

It was a military coup against an elected government, but one supported by a vast number of people who felt that the country could not survive such misrule much longer. The military hastily made Adly Mansour, head of the High Constitutional Court, interim president. Six judges and four lawyers were to revise the 2012 constitution. Noted economist Hazem el-Beblawi became prime minister, and opposition leader and Nobel Prize winner Mohamed ElBaradei vice-president. The Gulf States rushed billions of dollars in aid to support the collapsing Egyptian economy. Nearly all the leaders of the Muslim Brothers were arrested and its television stations were closed.

The Muslim Brothers remain convinced that they were elected in free and fair elections and should have been allowed to complete their terms. Egypt is deeply polarized. The economy is weak, the security forces are unreformed, and the role of the military in future governments is unclear. Will the revolutionaries be able to realize their goals of "bread, freedom, and social justice"? The struggle has barely begun.

Nancy Gallagher teaches Middle East history at the American University in Cairo and is a research professor at the University of California at Santa Barbara (UCSB). She is a widely published expert on the Middle East and Arab North Africa.

. . . 

For more on Egypt and its history, check out these resources:

Denis J. Sullivan and Kimberly Jones

Mona Russell


Monday, July 1, 2013

B(l)ack in the Kitchen: Food Network

by Lisa Guerrero
 

The conversation surrounding Paula Deen and her use of the "N word" has simultaneously erased the accusations of job discrimination and harassment while ignoring the larger issues of race at Food Network. In fact, Deen's ultimate firing by the Food Network has allowed the network to position itself as anti-racist, as America's moral conscience. Refusing to allow prejudice to stain its airwaves, the Food Network has situated itself as a progressive force of accountability and justice.

Deen, however, is reflective of the network's brand—one that normalizes and operationalizes whiteness while reimagining the world of food as racially transcendent. Revelations regarding Deen burst that illusion. With this in mind, we are sharing an excerpt from Lisa Guerrero's brilliant chapter in our recent book, African Americans on Television: Race-ing for Ratings.



. . .

In all of its programming, even within programs where race is undeniably apparent, either because of the celebrity or the cuisine, food is presented as a race-neutral cultural object. Unfortunately, in a race-based society, as the United States is, "race-neutral" invariably gets translated as "white." Food Network trades in the notion of the "racelessness" of food to create a commodified sense of neoliberal inclusion and equality, wherein the focus is placed on individuals and not on systems. Food is portrayed across the network as a "universal language"; but as discussed above, it is definitely constructed as a specifically class-based language, as well as a language constructed in specifically racialized terms. To be fair, Food Network is no different from most other cable television networks, where whiteness is predominant and becomes easily normalized and rendered invisible to most viewers.
Ironically, the relatability that Food Network carefully crafts around its personalities is almost completely belied by the "everyday" lifestyles many of the network's celebrities are shown to have as they are strategically integrated into their respective shows, most notably with Ina Garten, Giada De Laurentiis, and Bobby Flay. While the wealth and whiteness displayed in these and much of Food Network's other programming are conspicuous, they are treated as commonplace, the effect of which is twofold: 1) it creates a socioracial standard when it comes to the act of food consumption; and 2) it suggestively endorses the idea of food as a racial and economic privilege.
Through its successful erasure of race and class, Food Network perpetuates certain understandings about the social landscape in which people think about food consumption and commodification as being generally equal amongst various populations, even as statistically and programmatically most people can see that food equality isn't a reality. But Food Network is able to maintain this profitable food fantasy by constructing its food narratives in a very particular sociohistorical vacuum that allows audiences to distance themselves not only from certain tediums surrounding daily food habits, but also from the sociohistorical and socioeconomic systems of food production and preparation in the United States. The strategic use of blackness on the network is one of the primary ways in which this distancing is enabled.
The relative absence of blackness on Food Network, while not unlike the relative absence of blackness on network television generally speaking, succeeds in denying the significant place African Americans have, both historically and contemporaneously, in the creation of American food culture and foodways.  This erasure, while creating an amputated impression of American food backgrounds, does so in deliberate ways that are in keeping with long histories of using whiteness to signify notions of expertise, virtuosity, superiority, propriety, and polish.  In other words, in order to cement the network’s guiding narrative of elevating food to a craft, an art, an aspiration, it needs to simultaneously elevate whiteness, usually white maleness. 
 Not surprisingly, the programming on Food Network frames American food in very Eurocentric terms, tracing food origins and traditions to primarily Western, European nations, while periodically recognizing the “exotic” fare of Latin America or Asia.  There is little to no recognition of African cuisines within programming, despite the growing popularity of African food and restaurants among American consumers sparked by growing numbers of African immigrants to the United States, and probably represented most notably by the often tokenized celebrity chef, Marcus Samuelsson, who was born in Ethiopia and raised in Sweden.  Neither is there much linkage drawn between the specificity of African American soul food and the development of much of what is considered American “southern food.”  The erasure of these African and African-American cultural linkages to American food habits and histories effectively reimagines a significant portion of American food architecture as almost exclusively white, a reimagining not supported by history. 
Now certainly Food Network isn’t The History Channel, and viewers aren’t necessarily expecting to be provided with critically accurate or developed histories of food origins, routes, or social significances.  Nonetheless, its lack of wider, more representative narrative frames within its programming results in two things:  First, there is a barely perceptible, encompassing whitening of both the network itself, as well as the perspectives it creates about food relationships within American populations.  Secondly, when racial “diversity” and representation do occur, they have the effect of “tokenism” rather than inclusion.  Nowhere is this latter effect more apparent than in the network’s small club of Black cooking personalities.
The framing of Food Network and The Cooking Channel breaks down into simplistic terms as "The U.S." and "The Global," respectively. As such, The Cooking Channel does appear to embrace diversity in a larger, more transparent way than Food Network. However, the apparent differentials of framing are really only on a cosmetic level. There are more people of color that appear regularly on The Cooking Channel, but only slightly more, and considering the overbearing whiteness of Food Network, it really wouldn't take much to have "more" racial diversity. But this diversity is neutralized by emphasizing the notion of "the exotic." The people of color on The Cooking Channel are, by and large, not of the United States, creating a comforting distance between U.S. audiences and any troublesome considerations about racism.
In scholarly terms, it wouldn't be far off the mark to think about Food Network as "the colonial" and The Cooking Channel as "the postcolonial." In other words, Food Network denies race and its systems by trying to devalue and/or erase race altogether, while The Cooking Channel denies race and its systems by putting race on display in almost exhibitional terms so that audiences don't relate to it as a "real" thing. In both cases, whiteness is positioned as the fulcrum of food experiences and knowledges. And ultimately, blackness, especially American blackness, is relegated to becoming the specialty ingredient that gets used sparingly in the recipe of televisual food programming for fear that its flavor won't be palatable to American consumers.

 
Postscript: 

As we've seen over the last few days, not only with the vociferous response by Deen supporters but also with SCOTUS gutting the Voting Rights Act, Texas scrambling to capitalize on that decision by pushing through a Voter ID bill, the dehumanizing tactics of the defense counsel in the George Zimmerman trial, and the countless racist microaggressions whose accounts bombard us daily, Paula Deen's words and behaviors are, in themselves, unsurprising and relatively unremarkable; rather, they are indicative of the banality of American racism. As several scholars (including David J. Leonard) have articulately pointed out in response to the Deen controversy, and as I have tried to address in this piece in broader ways, while Deen should certainly be held responsible for the ways in which her actions contribute to the continuation of systemic and ideological racisms in the United States, the problem is much bigger than her use of racial epithets and her disturbing bucolic nostalgia for the racial order of the antebellum South.

Perhaps the biggest problem, of which Deen is but one very small symptom, is one that will in all likelihood strangle equality and freedom for all American citizens: the United States' misguided belief in its own magnanimity of race, the delusion that we have remedied our racial illnesses and no longer need to be vigilant about the sickness, and in fact can be prideful about the "past tense" of our racial struggles. This blind hubris (which Justice Ginsburg so aptly identified in her dissent from the Voting Rights Act decision) allows people like Paula Deen to sincerely dislocate their actions from the insidiousness of racism: since racism has been fixed (so it goes), then certainly what people do and to whom they do it can't be considered racism.

Unfortunately, this racist psychosis, the inability to see racism even as you are enacting it, supporting it, contributing to it, and benefitting from it, is one of the many deleterious side effects of our post-racial nation, and is sure to kill us quicker than a Paula Deen recipe.

. . . 


Lisa A. Guerrero is Associate Professor of Comparative Ethnic Studies at Washington State University, Pullman. She is the editor of Teaching Race in the 21st Century: College Professors Talk About Their Fears, Risks, and Rewards (Palgrave Macmillan, 2009) and co-editor, with David J. Leonard, of African Americans on Television: Race-ing for Ratings (Praeger).

Thursday, June 27, 2013

Interview with Michael Frassetto, Author of The Early Medieval World: From the Fall of Rome to the Time of Charlemagne

How does the early medieval world differ from the classical world and the later Middle Ages?

The early medieval world differed in a number of ways from the ancient and later medieval worlds. It was much more rural than the ancient world; cities virtually disappeared in the early medieval world, and the literate, urban culture associated with ancient Rome vanished. The early medieval world was an increasingly Christian world, unlike the polytheistic world of antiquity, and its primary cultural center was the monastery. Politically, the early medieval world was ruled by kings rather than the emperors of antiquity, and government itself was understood in more personal terms. In part building upon the traditions of the early medieval world, the later Middle Ages differed markedly from it. City life revived in the later Middle Ages, and population and the economy grew dramatically. The later Middle Ages experienced a commercial revolution that revived international trade, which had virtually disappeared in the early medieval world. The use of the written word throughout society expanded in the later Middle Ages, new institutions of learning such as the university were established, and the institutions of Church and state grew in power and organization.

What can the early medieval world teach us about our modern world? Are there any similarities?

It has often been said that the past is a foreign country, and nowhere is this more true than in the early medieval world, which had a worldview fundamentally different from the worldview held today. Having said that, it must be noted that the early medieval world has much to teach us today. People of the early medieval period left an important legacy in terms of spirituality and religious belief and practice that can provide comfort and important insights to many people today. Early medieval rulers faced numerous challenges of governance and had to create new institutions of government, an experience that could help guide modern political leaders. The early medieval world was also one of surprising diversity, as peoples with a wide range of cultural practices, languages, and traditions came to create a new social order out of the old Roman Empire, and lessons for our own increasingly diverse world could be learned from our medieval forebears.

What do you think is a common misunderstanding about the early medieval world?

The most common misunderstanding of the early Middle Ages is that it was a “dark age.” Although the early medieval world suffered decline in population, city life, and other areas, it was a period of important cultural transformation and growth. During this period, Europe underwent a process of Christianization, and it was during the early Middle Ages that the Christian, Roman, and Germanic traditions merged to lay the foundation for later European civilization. Important institutions such as the papacy and monasticism took shape during this period, and influential Christian and encyclopedic texts were written. There was also a series of cultural revivals, most notably the Carolingian Renaissance in the eighth to ninth centuries, that produced important artistic works and literary texts. The Carolingian revival was most important for the later development of European civilization. Many ancient classical and Christian works were copied and preserved by Carolingian authors who also wrote works of history, biography, theology, and law. Carolingian artists lavishly illuminated these texts with dazzling images that borrowed from earlier Christian and Roman works of art.

What are some of the contributions the early medieval world gave to us?

The early medieval world has left a number of important cultural artifacts. The Book of Kells and the Lindisfarne Gospels are two beautifully illuminated manuscripts from the early Middle Ages, and Carolingian artists produced a number of equally beautiful illuminated manuscripts. The standard version of what became the Catholic Bible took shape during the early medieval period. Carolingian scholars preserved much of ancient classical and Christian literature; the earliest surviving copies of nearly all ancient Latin manuscripts were made by Carolingian scholars in the ninth century. The Code of Justinian, which shaped European legal and judicial traditions, and the Rule of Saint Benedict, which defined the practice of religious life into the modern era, were creations of the early medieval world. Charlemagne's chapel at Aachen, Theodoric's mausoleum, and the Hagia Sophia are among the great architectural monuments created during the early Middle Ages.

In working on the book, did you discover anything particularly surprising or interesting?

One thing I discovered is the wide range of truly interesting personalities that lived during this period. The people of the early medieval world are a fascinating group of scholars, holy men and women, and political leaders. Many of them are interesting because of their courage and integrity and others are interesting—perhaps more interesting—because of their ruthlessness and quest for power at any cost. I was also surprised by the incredible creativity of the period during which society went through a profound transformation. New forms of religious life developed, and kings and other political leaders devised new ideas about political power and created new forms of government. Patterns of daily life were transformed and new social institutions developed. And although I have long known this, I am continually surprised by the literary and artistic creativity of this period that includes the great achievements of the Church fathers, Carolingian Renaissance scholars, and many other early medieval writers and scholars.




Michael Frassetto, PhD, teaches medieval and world history at the University of Delaware, La Salle University, and Richard Stockton College of New Jersey. He has published numerous articles on medieval religious and social history. Frassetto is author of The Great Medieval Heretics: Five Centuries of Religious Dissent and editor of Christian Attitudes toward the Jews in the Middle Ages: A Casebook and Heresy and the Persecuting Society in the Middle Ages: Essays on the Work of R.I. Moore.

Wednesday, June 26, 2013

Affirmative Action Survives to See Another Day—For Now…

The following is a piece from James A. Beckman, author of the forthcoming 2014 title Affirmative Action: Contemporary Perspectives and Associate Professor of Legal Studies at the University of Central Florida:

The dust has settled from yet another constitutional battle in the war over affirmative action in America. On Monday, June 24, 2013, the United States Supreme Court rendered the latest in a long line of decisions spanning more than three decades, again placing restrictions on (but not outright eliminating) the practice of affirmative action in Fisher v. University of Texas at Austin. Proponents of affirmative action can take solace in the fact that the concept of affirmative action still survives—at least until the next major challenge. In ruling in Fisher, the Court declined to overturn any of its landmark affirmative action cases—like Grutter v. Bollinger in 2003 and Regents of the University of California v. Bakke in 1978—and continued to allow universities to use race in admissions decisions so long as no other "workable race-neutral alternatives would produce the educational benefits of diversity."

In Fisher, by a 7-1 ruling, the Supreme Court avoided the most extreme path of entirely dismantling affirmative action and instead opted for a "middle of the road" approach. It reversed the federal Fifth Circuit Court of Appeals (which had upheld the University of Texas affirmative action admissions plan as constitutional) for failing to apply the rigorous level of judicial review that the Supreme Court has previously mandated in race classification cases (as the Court said in Bakke in 1978 and Grutter in 2003), and remanded the case to the lower courts for further review.
 
Thus, while the Court reversed the lower federal court's decision as not meeting its exacting standards under "strict scrutiny," the majority again declined to strike down the general practice of affirmative action as per se unconstitutional and refused to characterize the practice as no longer being needed in society. Indeed, going into the Fisher case, proponents of affirmative action were acutely aware that a majority on the Court could have dismantled affirmative action outright, pronounced a complete prohibition on the use of race or ethnicity in admissions decisions (or related governmental actions), and declared America's experiment with remedial race-conscious preferences to be at an end and no longer necessary in modern society.

There was nothing overly revolutionary or radical in the ruling, and the Court seems to reaffirm that diversity is a compelling governmental interest and that the Bakke and Grutter decisions are still good law (despite Justice Scalia's and Justice Thomas's concurring opinions to the contrary). This alone should give some comfort to supporters of affirmative action—at least in the short term. Given that the Court has basically used the Fisher ruling to reaffirm the rules set out in Grutter—specifically that "strict scrutiny" needs to be truly meaningful scrutiny, and not (as the Court says) "strict in theory and feeble in fact"—the standard of review in future cases will certainly need to be more exacting, and states will need to show that "no workable race-neutral alternatives would produce the educational benefits of diversity." While this is a more exacting standard of review moving forward, the Court clearly did not decide that UT's use of race was unconstitutional. The decision also references and upholds the standards set forth in Bakke and Grutter—so Bakke and Grutter are still good law, and diversity in higher education can still be considered a permissible compelling governmental interest. The Court signaled that race-based affirmative action plans can still be considered constitutional if implemented properly (and if no workable race-neutral alternatives are available).

Thus, the ruling in Fisher was a narrow one, saving the broader battle over affirmative action (and a possible final end point) for another day. However, while holding that affirmative action survives, the Supreme Court made clear that reviewing courts have the obligation to make their own independent judgments about whether a university's critical mass determination is a valid one. That is, strict scrutiny requires real and meaningful searching inquiries on the part of the court, not deference to the institution at issue. Further, as diversity increases on campus, it should become harder for institutions to consider race and use affirmative action at all.

Thus, through the settling haze, the practice of affirmative action still stands, alive but battered. The practice has withstood the Court's restrictions and caveats in such cases as Regents of the University of California v. Bakke in 1978, Adarand v. Pena in 1995, Gratz v. Bollinger in 2003, Grutter v. Bollinger in 2003, and now Fisher v. University of Texas at Austin in 2013. It is battered, bruised, and wobbling, like a punch-drunk pugilist recoiling from one too many uppercuts to the jaw; but yet, still it stands. Weaker, more tempered, but still in the fight. While judicial concepts like "strict scrutiny" have been further defined and the level of review has been increased, proponents of affirmative action can take solace in the fact that the concept of affirmative action still survives—at least until the next major challenge.

One final note: the next major challenge may not be too far off in the distance. The Supreme Court has already granted review of the next affirmative action case, Schuette v. Coalition to Defend Affirmative Action by Any Means Necessary, which will be argued in the Fall 2013 term. The case deals with the propriety and fate of state-law bans on the practice of affirmative action, specifically the constitutionality of Michigan Proposal 2, which amended the Michigan state constitution to prohibit (as a matter of state law) public institutions within the state from utilizing racial preferences in admissions, employment, and contracting. In the petition to the Supreme Court requesting review, Michigan Attorney General Bill Schuette stressed that he was not asking the Court to constitutionally dismantle affirmative action itself (as was a possibility leading up to the Fisher ruling), but rather to decide whether state governments can do so on their own. Thus, according to Michigan Attorney General Schuette, "this case presents the different issue whether a state has the right to accept this Court's invitation in Grutter to bring an end to all race-based preferences." This "invitation" is clearly a reference to Justice O'Connor's language in Grutter that affirmative action should not be a permanent program and should have a logical end point, and that end point should come within the quarter century following the Grutter decision (i.e., by 2028). The stage is already set for this next battle over affirmative action. Stay tuned in the Fall.

. . .

James A. Beckman (J.D., Ohio State; LL.M., Georgetown University) is Associate Professor of Legal Studies at the University of Central Florida, where he also serves as the inaugural chair of the Department of Legal Studies. He is the author of Comparative Legal Approaches to Homeland Security and Anti-terrorism (2007) and Affirmative Action Now: A Guide for Students, Families, and Counselors (2006); he is also the General Editor of Affirmative Action: An Encyclopedia (2004). Before his entrance into academia in 2000, he served as an attorney-advisor for the Bureau of Alcohol, Tobacco & Firearms (ATF) at its headquarters in Washington, DC. Among other awards, he was the recipient of the United States Department of Defense Meritorious Service Medal for his legal work as an active duty judge advocate from 1994–1998 and the Department of Justice Meritorious Service Award (1999) for legal work on behalf of the Department of Justice and ATF.

   



Tuesday, June 25, 2013

Spying in America

The following is a piece from Ronald A. Marks, author of Spying in America in the Post 9/11 World: Domestic Threat and the Need for Change:

Since 9/11, the United States has engaged in an unprecedented amount of spying within the American homeland. An enemy that recognizes no borders, recruits individuals and small groups, and is ruthless in its desire to kill civilians has prompted the effort. We have engaged our spy community, our military, and our law enforcement community to stop these attacks. The record is now up for review.

In the 12 years since the attacks on the World Trade Center towers and the Pentagon, we have spent nearly half a trillion dollars on homeland security alone. The Federal government has established deep information and law enforcement relationships with the 17,600 state, local, and tribal law enforcement authorities. It has reached out in unprecedented ways to businesses and the public for information. It has intruded into our personal lives every time we travel, every time we remove our shoes in an airport or get wanded entering a public building.

Authorities say some 50 terrorist plots have been stopped. But the Boston bombings this year made Americans uneasy over the effectiveness of what is being done to stop terrorism. The exposure of the super-secret, extensive, and legally approved effort by the National Security Agency (NSA) to take in and mine unprecedented volumes of information from innumerable private and public sources has stunned the country and forced the questions: Are we doing too much, and how much do we, the public, need to know about it?

Contained in the DNA of America's citizens is a concern over big government. We neither like nor trust it. The U.S. Constitution, the very essence of our political identity, splits power among three separate, co-equal branches of Federal government. Additionally, it allows for states' rights and specifically lays out individual freedoms in the Bill of Rights.

So the time has come to debate our actions publicly: whither America in its war on terror at home? The challenge the U.S. Government will have in making its case lies in the secret methods it has used to build up our defenses. Government officials argue for not tipping our hand to the terrorists—the traditional argument of sources and methods. And, unlike other times in our history, such as the anti-communist hunts of the 1950s and 1960s, our government has gone through extraordinary measures to make sure its actions were legal and reviewed.

In the past few weeks, prompted by an unlikely so-called whistleblower from the NSA, the average American has been exposed to the issues of FISA courts, the Patriot Act, and presidential executive orders designed to check and double-check surveillance programs. The problem lies not in the court of law, but in the court of public opinion.

Americans are a tolerant people if things are explained to them, if they are brought into the process and reasoning behind their government's protection. That public "light" has not been shined. The public "security" boards set up under law years ago to provide this insight and outside oversight of government are only now being filled and put into action.


It is up to the U.S. Government to make its case for spying in America to its citizens. It is up to its citizens to determine how much they want or are willing to tolerate. That is what America’s Constitution calls for and what should be done.



Ronald A. Marks is senior fellow at George Washington University's Homeland Security Policy Institute, Washington, DC, and a former CIA senior official. Marks has written about intelligence and homeland security issues for the last ten years.


Monday, June 10, 2013

ABC-CLIO Solutions Helps Nevada Student with National History Day Contest


Each year, more than half a million children across the country participate in the National History Day Contest. Students are challenged to choose a historical topic related to the annual theme and then conduct primary and secondary research. They are then asked to present this research in a creative way via performance, exhibit, documentary, or website.

This year, student Bennett Wallace's creative website on Valley Forge was selected to represent his state at the national level. Bennett used ABC-CLIO Solutions as his primary source of information in creating his website. We took a moment to ask Bennett about the project and how ABC-CLIO Solutions helped him create his winning website.


Screenshot of Bennett's webpage. Visit it here: http://94560837.nhd.weebly.com/index.html


ABC-CLIO (AC): Why did you choose this topic?


Bennett Wallace (BW): I chose Valley Forge as a topic because I had visited the Valley Forge National Park when I was 11 and learned so many interesting things there about how Valley Forge was a turning point in the war. I felt like it would fit the topic perfectly. 

AC: How did ABC-CLIO resources help your research for this project? 

BW: ABC-CLIO resources helped me so much on this project because it was quick and easy to find reliable sources from their database and they even have the MLA citation at the bottom of each source. ABC-CLIO made it easy to cite sources for my annotated bibliography. 


AC: What challenges did you face during the course of this project? How did you overcome these challenges?

BW: The biggest challenge I had during this project was trying to stay under the word limit. There is quite a lot of information on Valley Forge, and I wish I could have added more. Another problem I faced was making the annotated bibliography. I used so many sources that it was hard to cite them all. I overcame these problems by getting rid of some pages on my website and also by using sites like EasyBib and ABC-CLIO that made making my bibliography easier.

AC: What surprised you the most about your subject during the course of your research?

BW: The thing that surprised me most about my subject is that Valley Forge was a turning point not only in the Revolutionary War but also in George Washington's life and really our country's history.  Also, what surprised me were the conditions at Valley Forge and how harsh the winter was.  


...

We also asked Bennett's teacher, Lindsey Clewell, for her perspective on ABC-CLIO and the project:

AC: What made you decide to have your students participate in the contest? 

Lindsey Clewell (LC): I heard about National History Day from the coordinator of Social Studies for Washoe County, Sue Davis.  I thought that this sounded like an amazing opportunity for students to learn lifelong skills while researching something they are interested in.

AC: What did you find most useful about ABC-CLIO Solutions for your students while working on this project? 

LC: ABC-CLIO offers students reliable information. In today's world, students have the tedious task of sorting through information to find out what is correct and reliable. ABC-CLIO offers a resource that students can go to knowing that the information they are reading is accurate.

AC: What challenges did you face during the course of this project? How did you overcome these challenges? 

LC: I feel the biggest challenge of this project was teaching the students what is a reliable resource and what is not. They are used to going to Google, typing in a search term, and believing everything they read is reliable. Getting them to dig a bit deeper into a resource and ask the questions "Where did this source come from?" and "How do I know if this is reliable?" was an important task and something that we spent a lot of time on. I also made sure to give my students websites that are reliable and offer many primary and secondary resources. This is a skill that my students will need throughout their lives and one that is worth spending extra time to teach.

AC: How does ABC-CLIO Solutions compare to other research tools you've used in the classroom?  

LC: ABC-CLIO is easy for students to navigate, and this is why my students tended to gravitate to the source. Their generation is used to getting answers fast, and ABC-CLIO offered great answers in a timely manner. My students found multiple resources relating to their topics in one place, and they really enjoyed this online resource as a primary source for their research.

...

We asked Christine Hull, Director of Social Studies and Content Literacy Programs at the Nevada Department of Education, to give her feedback on how ABC-CLIO Solutions plays a role at the state level:

AC: What made you decide to have NV schools participate in the contest?

Christine Hull (CH): When I took my current position, I inherited the role of History Day Coordinator for the State of Nevada. I encourage schools to participate in this contest because the process of preparing their projects aligns with Common Core and gives teachers an authentic learning and assessment opportunity in their classrooms. I really believe the process is the most important part of the entire contest. The Director of National History Day, Cathy Gorn, always says, "History Day is every day!" I truly believe that, and the skills that students learn through this process truly are preparing them for their next step in education.

AC: What challenges did you face during the course of this project? How did you overcome these challenges?

CH: Our state is so diverse in geography and population. We are unable to have one state contest like every other state, so the first time our entire delegation meets is in Maryland. We also run into problems reaching our districts in the eastern part of the state, something that I would really like to focus on in the future.

AC: How has ABC-CLIO Solutions helped you accomplish your overall goals for the social studies programs in NV schools?

CH: Having ABC-CLIO Solutions available to every K-12 student in the entire state makes it easy for me to encourage teachers to use it as their starting point for research. Because they can all access the same articles and resources, I know that if I show an example during a webinar or face-to-face training, everyone has access to a trusted source of information.

AC: How has ABC-CLIO Solutions helped NV teachers to implement the Common Core State Standards?

CH: Our teachers are loving the ability to search not only by content standards but also by CCSS. Using the primary sources and articles available in ABC-CLIO Solutions gives our teachers an updated textbook of sorts that is aligned to the types of literacy activities they are implementing in their classrooms.


...
If you haven't already explored ABC-CLIO Solutions, sign up today for a FREE trial!

ABC-CLIO's American History online solution




Monday, May 20, 2013

Prophylactic Mastectomy: Angelina Jolie Opens a Door on a World of Challenging Decisions


Angelina Jolie's brave announcement in the New York Times last week that she has undergone a prophylactic mastectomy and reconstruction because of her high hereditary risk of breast and ovarian cancer has resonated loudly with other women who come from families with high cancer rates. Being a carrier of a deleterious mutation in a BRCA1 or BRCA2 gene means that a woman faces a 56–87% risk of developing breast cancer and a 20–60% risk of developing ovarian cancer, both rates far above those faced by women in the general population. Many of these breast cancers occur at unusually early ages, so breast screening in women at high risk is recommended to begin at age 25, and women in screening programs are also advised to consider prophylactic mastectomy and prophylactic oophorectomy when they complete their childbearing.

As Director of Psychology Research and Clinical Services for the Cancer Genetics and Prevention Clinic and the author of Prophylactic Mastectomy: Insights from Women Who Chose to Reduce Their Risk (Praeger, 2012), I have heard many women's stories about how they came to the same decision Angelina made and how they have coped with the physical and psychological challenges that surgery created. The vast majority of women feel as Angelina said she did: grateful for the chance to avoid cancer and to be able to reassure their children that they will not lose their mother to that disease. Having lost a parent to cancer at a young age and having small children are two of the most common motivations for women to choose prophylactic mastectomy. What Angelina could not cover in her letter were the many dilemmas, challenges, decisions, problems, and adaptations that a woman opting for prophylactic mastectomy faces along the way to her successful surgery and recovery. The 21 women I interviewed for the book talked openly about difficulties finding sympathetic doctors, countering well-meaning relatives who opposed the surgery, confronting innermost feelings about their breasts, figuring out how to explain the surgery to small children (one woman told her young children her surgery was like when their stuffed animals needed new stuffing!), and adjusting to a changed sexual experience and body image. The road to "saving my own life" or "feeling safe within my body" is often a bumpy one, leading to a good place but sometimes requiring support from family, friends, and professionals along the way. Hats off to Angelina for pointing the GPS down that road! She has made it much easier for other women to follow.





Andrea Farkas Patenaude, PhD, is Director of Psychology Research and Clinical Services at the Dana-Farber Cancer Institute and author of Prophylactic Mastectomy: Insights from Women Who Chose to Reduce Their Risk (Praeger, 2012).