By DOUGLAS QUENQUA
Published: June 5, 2009
“HI, I’m Judy Nichols. Welcome to my rant.”
Nancy Sun stopped blogging by 2004, but now she has a new blog. (Photo: Evan Sung for The New York Times)
Thus was born Rantings of a Crazed Soccer Mom, the blog of a stay-at-home mother and murder-mystery writer from Wilmington, N.C. Mrs. Nichols, 52, put up her first post in late 2004, serving up a litany of gripes about the Bush administration and people who thought they had “a monopoly on morality.” After urging her readers to vote for John Kerry, she closed with a flourish: “Practice compassionate regime change.”
The post generated no comments.
Today, Mrs. Nichols speaks about her blog as if it were a diet or half-finished novel. “I’m going to get back to it,” she swears. Her last entry, in December of last year, was curt and none too profound. “Books make great gifts,” she began, breaking a silence of nearly a month.
Like Mrs. Nichols, many people start blogs with lofty aspirations — to build an audience and leave their day job, to land a book deal, or simply to share their genius with the world. Getting started is easy, since all it takes to maintain a blog is a little time and inspiration. So why do blogs have a higher failure rate than restaurants?
According to a 2008 survey by Technorati, which runs a search engine for blogs, only 7.4 million out of the 133 million blogs the company tracks had been updated in the past 120 days. That translates to 95 percent of blogs being essentially abandoned, left to lie fallow on the Web, where they become public remnants of a dream — or at least an ambition — unfulfilled.
Judging from conversations with retired bloggers, many of the orphans were cast aside by people who had assumed that once they started blogging, the world would beat a path to their digital door.
“I was always hoping more people would read it, and it would get a lot of comments,” Mrs. Nichols said recently by telephone, sounding a little betrayed. “Every once in a while I would see this thing on TV about some mommy blogger making $4,000 a month, and thought, ‘I would like that.’ ”
Not all fallow blogs die from lack of reader interest. Some bloggers find themselves too busy — what with, say, homework and swim practice, or perhaps even housework and parenting. Others graduate to more immediate formats, like Twitter and Facebook. And a few — gasp — actually decide to reclaim some smidgen of personal privacy.
“Before you could be anonymous, and now you can’t,” said Nancy Sun, a 26-year-old New Yorker who abandoned her first blog after experiencing the dark side of minor Internet notoriety. She had started it in 1999, back when blogging was in its infancy and she did not have to worry too hard about posting her raw feelings for a guy she barely knew.
Ms. Sun’s posts to her blog — www.cromulent.org, named for a fake word from “The Simpsons” — were long and artful. She quickly attracted a large audience and, in 2001, was nominated for the “best online diary” award at the South by Southwest media powwow.
But then she began getting e-mail messages from strangers who had seen her at parties. A journalist from Philadelphia wanted to profile her. Her friends began reading her blog and drawing conclusions — wrong ones — about her feelings toward them. Ms. Sun found it all very unnerving, and by 2004 she stopped blogging altogether.
“The Internet is different now,” she said over a cup of tea in Midtown. “I was too Web 1.0. You want to be anonymous, you want to write, like, long entries, and no one wants to read that stuff.”
Richard Jalichandra, chief executive of Technorati, said that at any given time there are 7 million to 10 million active blogs on the Internet, but “it’s probably between 50,000 and 100,000 blogs that are generating most of the page views.” He added, “There’s a joke within the blogging community that most blogs have an audience of one.”
That’s a serious letdown from the hype that greeted blogs when they first became popular. No longer would writers toil in anonymity or suffer the indignities of the publishing industry, we were told. Finally the world of ideas would be democratized! This was the catnip that intoxicated Mrs. Nichols. “That was when people were starting to talk about blogs and how anyone could, if not get famous, get their opinions out there and get them read,” she recalled. “I just wanted to post something interesting and get people talking, but mostly it was just my sister commenting.”
Many people who think blogging is a fast path to financial independence also find themselves discouraged. Matt Goodman, an advertising executive in Atlanta, had no trouble attracting an audience to his self-explanatory site, Things My Dog Ate, which included tales of his foxhound, Watson, eating remote controls, a wig and a $400 pair of Prada shoes.
“I did some Craigslist postings to advertise it, and I very quickly got an audience of about 50,000 viewers a month,” he said. That led to some small advertising deals, including one with PetSmart and another with a company that made dog-proof cellphone chargers. Mr. Goodman posted a video of his dog failing to destroy one.
“I guess the charger wasn’t very popular,” he said. “I think I made about $20” from readers clicking on the ads. He last updated the site in November.
Mr. Jalichandra of Technorati — a blogger himself — also points out that some retired bloggers have merely found new platforms. “Some of that activity has gone to Facebook and MySpace, and obviously Twitter is a new phenomenon,” he said.
Others simply tire of telling their stories. “Stephanie,” a semi-anonymous 17-year-old with a precocious knowledge of designers and a sharp sense of humor, abandoned her blog, Fashion Robot, about a week before it got a shout-out in the “blog watch” column of The Wall Street Journal last December. Her final post, simply titled “The End,” said she just didn’t feel like blogging any more. She declined an e-mail request for an interview, saying she was no longer interested in publicity.
As for Ms. Sun of Cromulent.org, she has made peace with being public. She has a new blog, SaladDays.org, where she keeps her posts short and jaunty, not personally revealing; mostly, she offers up health and diet tips, with the occasional quote from Simone de Beauvoir.
What is she after this time around? In person, she was noncommittal, but that night she sent a follow-up e-mail message.
“To be honest, I would love a book deal to come out of my blog,” she wrote. “Or I would love for Salad Days to give me a means to be financially independent to continue pursuing and sharing what I love with the world.”
11 Jun 2009, 1306 hrs IST, PTI
LONDON: The magnificent Basilica of Bom Jesus in Old Goa and the Fortress of Diu are among the '7 Wonders of Portuguese Origin in the World'.
The two Indian sites are rooted in Portuguese colonialism that held sway over Goa, Daman & Diu until the colonies were liberated on 19 December 1961. Goa is now a separate state while Daman & Diu are Union Territories.
Fortress of Diu (India)
Basilica of Bom Jesus Old Goa
Unrest Deepens as Critics Are Detained
By ROBERT F. WORTH and NAZILA FATHI
Published: June 14, 2009
TEHRAN — Violence and acrimony over Iran’s disputed election intensified on Sunday, with word spreading that more than 100 prominent opposition members had been detained, riots erupting in Tehran and other cities, and the triumphant incumbent hinting that his top challenger risks punishment for questioning the result.
Kavya Shivashankar wins 2009 Scripps National Spelling Bee in USA
Thursday, May 28th 2009, 10:49 PM
WASHINGTON - Cool and collected, Kavya Shivashankar wrote out every word on her palm and always ended with a smile. The 13-year-old Kansas girl saved the biggest smile for last, when she rattled off the letters to "Laodicean" to become the nation's spelling champion.
Saina Nehwal smashes Super Series barrier in badminton
New Delhi, June 21: Saina Nehwal today became the first Indian to win a Super Series badminton tournament, the game’s equivalent of a tennis Grand Slam title.
The 19-year-old Hyderabadi girl won the Indonesian Open in Jakarta beating world No. 3 Ling Wang of China 12-21, 21-18, 21-9, and is likely to rise three spots to world No. 5.
June 26, 2009
LOS ANGELES – Michael Jackson, the sensationally gifted child star who rose to become the "King of Pop" and the biggest celebrity in the world only to fall from his throne in a freakish series of scandals, died Thursday, a person with knowledge of the situation told The Associated Press. He was 50.
By JOE McDONALD, AP Business Writer – Mon Jun 22, 6:36 am ET
BEIJING – The World Bank has cut its 2009 global growth forecast, saying the world economy will shrink by 2.9 percent and warning that a drop in investment in developing countries will increase poverty.
"The global recession has deepened," the Washington-based multilateral lender said in a report.
Global trade is expected to plunge by 9.7 percent this year, while total gross domestic product for high-income countries contracts by 4.2 percent, the bank said. It said economic growth in developing countries should slow to 1.2 percent — but excluding relatively strong China and India, developing economies will contract by 1.6 percent.
The bank's latest forecast is a sharp reduction from its March prediction of a 1.7 percent global contraction, which it said then would be the worst on record.
Economic damage to developing countries "has been much deeper and broader than previous crises," warned the report, issued Sunday in Washington.
"Unemployment is on the rise, and poverty is set to increase in developing economies," it said.
The global economy should start to grow again in late 2009, but "the expected recovery is projected to be much less vigorous than normal," the report said. It said banks' ability to finance investment and consumer spending would be hampered by the overhang of unpaid loans and devalued assets.
"To break the cycle and revive lending and growth, bold policy measures, along with substantial international coordination, are needed," the World Bank said.
Investment and other financial flows to developing countries plunged by an estimated 39 percent in 2008 to $707 billion, the World Bank said. It said foreign direct investment in developing countries is projected to drop by 30 percent this year to $385 billion.
Eastern Europe and Central Asia have been hit hardest and the region's gross domestic product is expected to plunge by 4.7 percent this year, the bank said. It said growth should recover next year to 1.6 percent.
GDP in Latin America and the Caribbean should shrink by 2.3 percent this year before rebounding to expand by 2 percent in 2010, the report said.
In the Middle East and North Africa, growth is expected to fall by half this year to 3.1 percent, while that of sub-Saharan Africa will drop to 1 percent from an annual average of 5.7 percent over the past three years, the bank said.
East Asia should post a 5 percent expansion, supported in part by China's stimulus-fueled growth, the bank said.
By JENNIFER BARRETT
Published: January 6, 2009
You may not have checked your credit score lately, but there’s a good chance someone else has.
If you have applied for a mortgage or a loan — or even received a credit card offer in the mail — someone accessed that three-digit number to help determine the amount you can borrow and the interest you’ll owe on it.
So what goes into this all-important score? And how can you make sure you’ve got a good one?
The term credit score usually refers to your FICO score, a number based on a formula developed by the Fair Isaac Corporation. Fair Isaac looks at a summary of all your credit accounts and payment history. If you’ve got a mortgage, a MasterCard or a Macy’s account, it will be included in the report, as will late or missed payments. FICO scores range from 300 to 850, and Fair Isaac calculates them for each of the three big credit-reporting agencies: Equifax, Experian and TransUnion. That’s one reason why your FICO score with each may differ slightly. Generally speaking, the higher your score, the more money you can borrow and the less you’ll pay for the loan.
Here’s how your score is determined:
¶ 35 percent is determined by your payment history. Do you regularly pay your bills or fines on time to any creditor that submits your information to the credit bureau? Even unpaid library fines, medical bills or parking tickets may appear here.
¶ 30 percent is based on the amounts you owe each of your creditors, and how that compares with the total credit available to you or the total loan amount you took out. If you’re maxing out your credit cards, your score may suffer.
¶ 15 percent is based on the length of your credit history, both how long you’ve had each account and how long it’s been since you had any activity on those accounts. The fewer and older the accounts, the better (assuming you’ve made timely payments).
¶ 10 percent is based on how many accounts you’ve recently opened compared with the total number of your accounts, as well as the number of recent inquiries on your report made by lenders to whom you’ve applied for credit. Your score can drop if it looks as if you’re seeking several new sources of credit — a sign that you may be in financial trouble. (If a lender initiates an inquiry about your credit report without your knowledge, though, it should not affect your score.) Shopping around for an auto loan or mortgage shouldn’t hurt, if you keep your search to six weeks or less. But every inquiry you trigger when you apply for a credit card can affect your score, says Craig Watts, a spokesman for Fair Isaac. So be selective.
¶ The final 10 percent is determined by the types of credit used. Having installment debt — like a mortgage, in which you pay a fixed amount each month — demonstrates that you can manage a large loan. But how you handle revolving debt, like credit cards, tends to carry more weight since it’s seen as more predictive of future behavior. (You can pay off the balance each month or just the minimum, for example, charge to the limit of your cards or rarely use them.)
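The five weightings above can be illustrated with a toy calculation. To be clear, the actual FICO formula is proprietary; the 0-to-1 component values and the linear mapping onto the 300-to-850 range below are hypothetical, intended only to show how the published weights might combine:

```python
# Toy sketch of the published FICO weightings. The real scoring
# model is proprietary; the component values and the linear
# mapping below are made up for illustration.
WEIGHTS = {
    "payment_history":   0.35,
    "amounts_owed":      0.30,
    "length_of_history": 0.15,
    "new_credit":        0.10,
    "credit_mix":        0.10,
}

def toy_fico(components, lo=300, hi=850):
    """Combine 0-1 component scores using the published weights."""
    weighted = sum(WEIGHTS[k] * components[k] for k in WEIGHTS)
    return round(lo + (hi - lo) * weighted)

# A hypothetical borrower with a spotless payment record
# but heavily used credit cards:
profile = {
    "payment_history":   1.0,
    "amounts_owed":      0.2,  # high utilization drags this component down
    "length_of_history": 0.7,
    "new_credit":        0.9,
    "credit_mix":        0.8,
}
print(toy_fico(profile))  # → 677
```

Note how the 30 percent weight on amounts owed pulls an otherwise strong profile well below the 700 threshold mentioned below, which is the intuition behind keeping balances under a third of your limit.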
For the best rates on a loan or credit card, you want a score that’s above 700, at least. To achieve that, make sure to pay all your bills on time. It’s also a good idea to have at least one credit card you plan to use for a long time, but not too many. Keep a low balance — generally less than one-third of your total credit limit. Of course, it’s best to pay off your balance entirely each month. And stay on top of the information in your reports.
You can get a free copy of your credit report from each of the three major credit agencies once a year. Be sure to order it through annualcreditreport.com, the only authorized online site under federal law. If you notice information that’s inaccurate, you can submit a request for removal online at Equifax, Experian or TransUnion. Or submit your request by mail. Be sure to specify what information you think is inaccurate and why, and include any documents that support your argument. Ask in writing that the information be corrected or removed from your report. By law, the bureaus must investigate your complaint, usually within 30 days, and give you a response in writing (or via e-mail, if your request was made online) and a free copy of your report, if the information is changed as a result. Your score should reflect that change shortly after.
To see your actual score, you’ll generally have to pay. You can go through Equifax, Experian or TransUnion directly, but be aware that the score you order may be one developed by the agencies themselves, like the TransUnion TransRisk New Account Score, Experian Plus or VantageScore. These are different from the FICO scores lenders generally use when they evaluate your loan applications.
Whether you need to monitor your credit that often is debatable. For most, a close look at the free annual reports from each bureau is probably enough. But if you plan to apply for a loan or credit card, check your score and report at least a couple of months beforehand. Not only will you be aware of how creditworthy you are, you’ll also have time to remove any errors you spot and make sure your score reflects the changes before you fill out any applications.
Updated 6:39 a.m. EDT, Wed June 10, 2009
HONG KONG, China (CNN) -- If you like to search for "music lyrics" or "free" things, you are engaging in risky cyber behavior. And searching for "free music downloads" puts 20 percent of Web surfers in harm's way of malicious software, known as "malware."
Searches for words such as "free," "music" and "download" put users at increased risk for malicious software.
A new research report by U.S.-based antivirus software company McAfee has identified the most dangerous Internet search words that place users on pages with a higher likelihood of cyber attacks.
The study examined 2,600 popular keywords on five major search engines -- Google, Yahoo, Live, AOL and Ask -- and analyzed 413,000 Web pages.
"Just in the past year, we've seen a pretty dramatic shift in what we call malware," David DeWalt, president and CEO of McAfee, told Richard Quest for CNN International's "Quest Means Business."
"It went from a hacker in a basement, to organized cybercrime to now, literally, terrorism and other forms of organized geopolitical attacks," he said.
Categories that had the highest risk of run-ins with malware: screen savers, free games, work from home, Olympics, videos, celebrities, music and news.
Riskiest terms: word unscrambler, lyrics, myspace, free music downloads, phelps, game cheats, printable fill-in puzzles, free ringtones and solitaire.
The study shows how cyber criminals are increasing in sophistication.
"We can have massive outages with a hacker in the basement. We saw that recently with the 'Twitter worm,' a 17-year-old in his basement basically perpetrated tens of millions of (computer) outages. Or, we can see an organized attack bringing down infrastructure," DeWalt said.
Antivirus software companies lag behind the latest developments by cyber criminals, he said.
"We've been way behind, that's true for the entire world, the global infrastructure of the Internet has grown dramatically -- 50 percent of the world's PCs are unprotected," DeWalt said.
Despite the increased risk, DeWalt doesn't believe there will be a "cyber Armageddon" causing widespread destruction of computers and Internet infrastructure.
"Last week, you saw President Obama in the United States talk about a major cyber-security initiative sponsored by the government, other governments are sponsoring this as well," DeWalt said. "I think we're learning this can happen, and if we get ahead of it, we can prevent it."
Microsoft's new operating system is coming this fall, but experts say a PC rebound is a ways away -- and will happen in spite of the new software.
By David Goldman, CNNMoney.com staff writer
June 12, 2009: 4:45 AM ET
Windows 7 is set to debut on Oct. 22.
NEW YORK (CNNMoney.com) -- Windows 7 is coming soon. But having a PC sales rebound come with it seems unlikely.
Computer sales are mired in an awful slump, as businesses and consumers have reined in spending during the recession. Year-over-year global PC shipments fell 7% in the first quarter, the largest drop in that measure since the third quarter of 2001 amid the dot-com bust, according to tech analysis firm IDC.
But Microsoft's new operating system is set to debut on Oct. 22, and experts for the most part like what they have seen. That's a dramatic shift from the largely negative reviews -- and disappointing sales -- of its current Windows version, Vista.
"Microsoft learned a lot from its mistakes with Vista," said Richard Shim, analyst at IDC. "They fixed some very important features and made an impressive operating system."
Despite the positive reviews, most analysts say Windows 7 alone is not enough to jumpstart lackluster PC sales. They cite customer animosity toward Microsoft (MSFT, Fortune 500), a change in consumer trends and the typically slow pace of businesses' OS integration as reasons.
Microsoft's problem: Windows is by far the most-used PC operating system. Last year, 83% of new PCs sold had Windows built in.
Windows 7 may be a shiny and new version of the world's No. 1 OS, but analysts wonder if customers are willing to give Microsoft a second chance after Vista. Users complain that Vista is sluggish, has too many versions and is susceptible to bugs.
According to a survey of more than 1,000 IT professionals nationwide conducted in March by Dimension Research, 50% said they were considering leaving Windows altogether rather than switching to Windows 7. Apple's (AAPL, Fortune 500) Mac OS X was the system they were most likely to switch to.
"Microsoft tried to stuff too many features into the Vista bag, and the bag burst," said Zeus Kerravala, analyst with Yankee Group. "There was a big loss of goodwill towards Microsoft [over Vista]."
Happy with my clunker: Disenchantment with Windows may not help a PC sales rebound, but PCs may have been doomed anyway.
As more and more applications become available on the Internet, consumers have begun to rely less on computer processing speeds and operating systems' bells and whistles. Online programs such as Google Docs, Yahoo Mail and Wikipedia, for the average consumer's purposes, can stand in for Microsoft Word, Outlook and Encarta.
It's called "cloud computing" (the software is "in the clouds"), and analysts say it will likely continue to damage PC sales growth.
"The most important part of valuating a PC five years ago was its OS, but with more applications moving to cloud, the browser is quickly becoming the most important feature," said Kerravala.
With Internet browsing becoming faster on non-computer devices such as mobile phones and video game systems, analysts say customers are delaying the purchase of new computers from the typical three-year refresh to as much as five years.
Slow to catch on: One set of customers that still rely heavily on PCs are businesses. But companies' integration of operating systems is typically slow.
According to the Dimension Research survey, 84% of IT professionals who are planning on switching to Windows 7 said they are going to wait at least a year to upgrade.
Gartner analysts John Enck and Mike Silver said a six- to 12-month period of compatibility testing is typical before businesses begin to adopt a new OS, as IT departments work in companies' customized features and applications.
On a conference call in May, Dell (DELL, Fortune 500) Founder and Chief Executive Michael Dell agreed that a technology refresh cycle is about "nine months to a year out."
Consumers are slow to catch on too. According to IDC, a release of an OS has not led to an immediate boost in shipments since Windows 95 was launched in 1995. Sales have picked up in successive years, though. For instance, Windows XP, which was launched in 2001, had a record sales year in 2007.
Rebound still coming: Though Windows 7 may not be the catalyst needed to spark a rebound in PC sales, most experts say a natural PC sales boost is due by late next year anyway.
A stunning 83% of IT professionals in the Dimension Research survey skipped a Vista upgrade and continue to use the eight-year-old Windows XP -- which is ancient, by computer standards. Analysts say businesses will often upgrade their hardware with a new operating system, and the lack of a Vista upgrade means many companies are using older computers that are two or more years past their typical decommissioning period.
"A natural PC sales refresh is coming up from the commercial side, as big companies pushed off a refresh a couple of times during the downturn and kept their old systems," said Shim.
Still, the rebound won't happen anytime soon. Enck said the PC market slump could last until at least the third quarter of 2010. And according to a recent Gartner forecast, U.S. PC sales won't rebound past 2008 levels until 2011.
May 28, 2009 1:58 PM PDT
Microsoft on Thursday took the wraps off Bing, the rebranded and rebuilt search engine formerly code-named Kumo, designed to replace Live Search. It's a solid improvement over the previous search product, surprisingly competitive with Google, and it beats Google in some important areas. It should help Microsoft gain share in the search business.
Bing isn't available to the public yet, but you won't have to wait long. Starting on June 1, some users will get Bing search results from Live Search. On June 3, we're told, Bing will be Microsoft's new default search. We got early access to the service. Here's how it looks.
In search presentation, Bing wins. It uses technology from Powerset (a search technology company Microsoft acquired) to display refined versions of your query down the left side of the page. For example, I searched for the game "Fallout 3" on Google and Bing. While Google gave me good results, Bing gave me a menu of "related searches" that included Walkthrough, News, and so on.
Bing also pops up an excerpt of the text on a search result if you hover over it. This saves a lot of time if you're not quite sure whether you want to follow a result.
In the content of search results, Bing is not consistently superior to Google. In many searches I did (not the sample searches Microsoft sent me), the Google results were more relevant and useful. Not by miles, mind you, but in many cases Google delivered the goods just enough better than Bing to make me question the wisdom of adopting Bing as a replacement search engine. Just one example: Searching for "Best house paint for humid climates" gave me better advice links at the top of the search results with Google than with Bing.
When searching for product reviews, Google's search result pages were mostly better than Bing's -- although, again, not by a lot. However, Bing also collates user and expert reviews on many products, and this gives you a great overview. This feature doesn't always show up, though; and I wouldn't even have known about it had it not been for the Wired review of Bing.
When you want to shop for an item, both services have very strong "shopping" tabs that organize results well. Google gives you seller ratings, which Bing doesn't. But Bing offers a cashback program, which is hard to beat.
And in some searches, Bing won on results outright. When searching for "Facebook sandberg" on Google, the top link was a story from 2008. On Bing, the top item was "News about facebook sandberg" with three sublinks to very recent articles. When searching for "Obama Supreme Court," Google did show news results, but the top link was a day-old story. Bing's was from 32 minutes ago.
To be fair to Google, you can also click through to Google News on any result and sort results by date. But that's extra clicks. Bing is more aggressive about including news.
All search engines have their strengths, and many of Bing's lie in areas where Microsoft has its own content companies. For example, Microsoft owns the airfare prediction service Farecast, and it includes Farecast buying advice whenever you search for airplane travel. Bing also displays some medical data inside the search engine itself.
Bing also does very well in at least one area where Google should do better. The video search result page for "Thomas Jefferson" in Google gives you a vertical list of videos. On Bing, you get a big grid that's easier to scan, and a list of related videos on the left for "George Washington," "James Madison," and so on. The search results are about equivalent, but Bing's presentation is far superior.
Drinking Water From Air Humidity
ScienceDaily (June 8, 2009) — Not a plant to be seen, the desert ground is too dry. But the air contains water, and research scientists have found a way of obtaining drinking water from air humidity. The system is based completely on renewable energy and is therefore autonomous.
Drinking water from air humidity. (Credit: Image courtesy of Fraunhofer-Gesellschaft)
Cracks permeate the dried-out desert ground, the landscape bears testimony to the lack of water. But even here, where there are no lakes, rivers or groundwater, considerable quantities of water are stored in the air. In the Negev desert in Israel, for example, annual average relative air humidity is 64 percent – in every cubic meter of air there are 11.5 milliliters of water.
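The figure of 11.5 milliliters per cubic meter can be roughly checked with the Magnus approximation for saturation vapor pressure. This is a back-of-the-envelope sketch, not part of the Fraunhofer study, and it assumes an air temperature of about 20 degrees Celsius (the article does not state one):

```python
import math

def absolute_humidity(temp_c, rel_humidity):
    """Grams of water vapor per cubic meter of air.

    Uses the Magnus approximation for saturation vapor pressure
    and the ideal gas law for water vapor.
    """
    # Magnus formula: saturation vapor pressure in hPa
    e_sat = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))
    e = rel_humidity * e_sat * 100.0  # actual vapor pressure in Pa
    r_v = 461.5                       # specific gas constant of water vapor, J/(kg K)
    return e / (r_v * (temp_c + 273.15)) * 1000.0  # g per cubic meter

# At 20 C and 64 percent relative humidity:
print(absolute_humidity(20.0, 0.64))  # roughly 11 g, i.e. about 11 ml of liquid water
```

The result of roughly 11 grams per cubic meter is consistent with the article's 11.5 milliliters; the small gap is explained by the assumed temperature.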
Research scientists at the Fraunhofer Institute for Interfacial Engineering and Biotechnology IGB in Stuttgart, working in conjunction with their colleagues from the company Logos Innovationen, have found a way of converting this air humidity into drinkable water autonomously and in a decentralized manner. “The process we have developed is based exclusively on renewable energy sources such as thermal solar collectors and photovoltaic cells, which makes this method completely energy-autonomous. It will therefore function in regions where there is no electrical infrastructure,” says Siegfried Egner, head of department at the IGB. The principle of the process is as follows: hygroscopic brine – a saline solution that absorbs moisture – runs down a tower-shaped unit and absorbs water from the air. It is then sucked into a tank a few meters off the ground in which a vacuum prevails. Energy from solar collectors heats up the brine, which is diluted by the water it has absorbed.
Because of the vacuum, the boiling point of the liquid is lower than it would be under normal atmospheric pressure. The same effect is familiar from the mountains: because the atmospheric pressure there is lower than in the valley, water boils at temperatures distinctly below 100 degrees Celsius. The evaporated, non-saline water is condensed and runs down through a completely filled tube in a controlled manner. The weight of this water column continuously maintains the vacuum, so a vacuum pump is not needed. The reconcentrated brine then runs down the tower surface again to absorb moisture from the air.
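The drop in boiling point with pressure can be sketched with the Antoine equation for water. The constants below are the standard fit for the 1-to-100 degree Celsius range; the article does not state the operating pressure of the Fraunhofer unit, so the vacuum value here is only illustrative:

```python
import math

def water_boiling_point(pressure_mmhg):
    """Boiling point of water in Celsius via the Antoine equation.

    Constants are the standard fit for water between roughly
    1 and 100 C, with pressure in mmHg.
    """
    a, b, c = 8.07131, 1730.63, 233.426
    return b / (a - math.log10(pressure_mmhg)) - c

print(water_boiling_point(760))  # about 100 C at sea-level pressure
print(water_boiling_point(50))   # about 38 C under a rough vacuum
```

At a fifteenth of atmospheric pressure, water already boils near body temperature, which is why modest solar heating of the brine suffices to drive the evaporation step.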
“The concept is suitable for various sizes of installation. Single-person units and plants supplying water to entire hotels are conceivable,” says Egner. Prototypes have been built for both system components – air moisture absorption and vacuum evaporation – and the research scientists have already tested their interplay on a laboratory scale. In a further step the researchers intend to develop a demonstration facility.
Adapted from materials provided by Fraunhofer-Gesellschaft.
Richard M. Felder
Department of Chemical Engineering
North Carolina State University
Raleigh, NC 27695-7905
Felder, Richard, "The Myth of the Superhuman Professor."
J. Engr. Education, 82(2), 105-110 (1994).
The usual justification for trying to make all professors researchers is the argument that teaching and research are inextricably linked, to an extent that the first cannot be done well in the absence of the second. This argument is a strange one. Its proponents - usually academicians, trained in scientific method and the rules of logical inference - offer it with unbounded conviction, passion, and a total absence of evidence. They argue that only researchers are aware of recent developments in their field, so that courses taught by non-researchers must be irrelevant or obsolete. They add that non-researchers whom students rate as good teachers must be merely "entertainers," providing style without substance. When challenged to produce some evidence for the linkage between research and teaching, they name professors they know who have both admirable research records and teaching awards, which is like claiming that you can only be a world-class organist if you practice medicine in Africa and pointing to Albert Schweitzer to prove it.
In this essay I want to take a closer look at the purported linkage between teaching and academic research, to see how it stands up to the tests of common sense and educational research. I will argue that it stands up to neither. Before I get started, though, perhaps I should clarify a point. I am not saying that research and teaching are necessarily in conflict; I cheerfully grant that in some cases the two activities are indeed complementary. An advanced graduate course on a currently hot research topic, for example, is likely to be taught best by an instructor actively doing research on that topic. There is no logical basis, however, for requiring active research involvement to teach an introductory course on engineering mechanics or mass and energy balances. My discussion of teaching in the balance of this paper should therefore be understood to relate mainly to courses that stress engineering fundamentals, practice, and problem-solving methodologies - which is to say, most undergraduate engineering courses.
THE TEACHING/RESEARCH LINKAGE IS MOSTLY FICTION
Teaching and research have different goals and require different skills
Rugarcia(1) points out several distinctions:
The principal goal of research is to discover new knowledge, while that of teaching is to impart well-established knowledge and provide training in problem-solving. Repetition of previous work using standard procedures may be necessary in research, but what really matters is the result. On the other hand, prior knowledge and solution algorithms are (or should be) the focal points of undergraduate teaching. Glossing over them in one's zeal to get to the results misses the point.
Ability to communicate is a desirable but not a necessary condition to be a good researcher and a mandatory condition to be a good teacher. Some of the most eminent scientists in history - Gibbs and Einstein come to mind - are well known for the obscurity of their lecturing. Their lack of clarity in presentation in no way diminished their stature as researchers. However, an outstanding teacher who cannot communicate is inconceivable, a contradiction in terms.
The personality traits associated with outstanding researchers are not the same as those associated with outstanding teachers. Most excellent researchers are intensely involved with their work. They feel the greatest satisfaction when performing their experiments, interpreting the data, struggling through their derivations. Many of them feel compelled to minimize the time they spend on activities that distract them from their research, such as teaching: they view having to go over old material as a waste of time and may be impatient with students who don't get it quickly.
Outstanding teachers are more outwardly directed. They enjoy contacts with students and may get as much satisfaction out of delivering a good lecture or seeing a student finally grasp a concept as out of getting an experiment or derivation to work. They may or may not be dynamic or entertaining in lectures, but they share a clarity of expression and convey a sense of enthusiasm that may be noticeably lacking in their research-oriented colleagues.
Good research and good teaching each take a lot of time. Doing both takes more time than most professors have.
It is no secret that research is a major time sink. It takes time - preferably in large uninterrupted blocks - to define problems, generate support, collect, read, and understand all relevant published work on the topic, plan a method of attack, make false starts and wander down blind alleys, wait out the inevitable unproductive periods, clean out logical flaws or weak points, replicate experiments, explore possible consequences and applications of results, write papers, and give seminars. Doing all that is under any circumstances a full-time job; doing it well enough to gain national recognition - now the principal criterion for promotion and tenure almost everywhere - requires an intensity of effort that tolerates few distractions.
That excellent teaching takes just as much time and intensity of effort is not as well appreciated. Consider the preparation of lectures. Most course notes and texts are written from the point of view of someone who already understands the concepts; the trick is to find a way to make the ideas clear to someone approaching them for the first time. Just stating a concept is likely to be useless. To make it comprehensible to most students, the instructor must first provide examples to establish relevance and motivate interest, then imbed the concept in a web of alternative expressions and visual representations, and finally provide more examples and participatory exercises to solidify understanding. Finding a way to do all that for just one relatively straightforward concept can take hours or even days - and a course contains lots of concepts.
Making up good problems is another time-intensive chore. Students almost never learn anything nontrivial in formal lectures; they only start to get it when they try to solve problems. For true learning to take place, however, the problems must vary in scope and difficulty - some drilling basic concepts, others integrating new and prior material, and still others challenging the problem-solving skills and creativity of the best students. Relatively few textbooks offer problems that provide the necessary variety and scope; the burden on the instructor is to collect problems from several sources and to make up and work out solutions to others. Doing so takes immense amounts of time.
Educational research does not confirm the purported linkage between teaching and academic research
Reviewing studies done before 1965, Brown and Mayhew(2) concluded that "Whenever studies of teaching effectiveness are made as judged by students, no relationship is found between judged teaching effectiveness and research productivity." Finkelstein(3) and Feldman(4) reviewed more recent research studies and found that the correlation between good teaching and strong research was either nonexistent or, in a minority of cases, slightly positive. Interestingly, quality of publications (as assessed by frequency of citation) was considerably more likely than any other publication measure to correlate negatively with teaching effectiveness, and individual authorship of books and first authorship of articles also showed strong negative correlations. The implication is that professors doing individual research good enough to gain widespread peer recognition are least likely to be judged effective as teachers.
Perhaps the most telling indication of the nature of the research-teaching interaction is provided by Alexander Astin (5) in a landmark study conducted in the late 1980s. Astin accumulated data on faculty members and almost 25,000 students at 309 institutions of higher education. For each institution, he assessed the faculty's research orientation (as measured by research publications, research funding, time spent away from campus on research-related activities, and self-rated importance of engaging in research and being recognized for research achievement) and student orientation (level of interest in students' academic and personal problems, sensitivity to minority issues, accessibility outside office hours, and opportunities for student-faculty interaction), correlating each orientation with a variety of measures of student performance and attitudes.
The results are striking. Research orientation of the faculty correlates negatively with completion of the bachelor's degree, various other measures of academic performance, and student satisfaction with quality of instruction and the overall college experience (p. 338). Student orientation of the faculty correlates positively with bachelor's degree completion, overall academic attainment, student satisfaction with quality of instruction, and self-reported growth in preparation for graduate school, writing skills, leadership abilities, general knowledge, and public speaking skills (pp. 341-342). Research orientation and student orientation are negatively correlated (p. 338).
The quantitative results of the study led Astin to reject the assertion that research and teaching are mutually supportive. On the contrary, he concludes that "In certain respects, the two poles of this factor [research vs. student orientation] reinforce the commonly held notion that, in American higher education, there is a fundamental conflict between research and teaching" (p. 67) and that "Attending a college whose faculty is heavily Research-Oriented increases student dissatisfaction and impacts negatively on most measures of cognitive and affective development. Attending a college that is strongly oriented toward student development shows the opposite pattern of effects (p. 363)."
Certainly there are professors who are both good researchers and good teachers, but their presence on faculties (and hence the occasional slight positive correlation between research and teaching performance) proves nothing, since they are likely to get promotion and tenure where professors who are excellent teachers and fair or poor researchers are not. The real question is whether an institutional emphasis on research activity improves or detracts from teaching quality. The evidence clearly points to the latter.
Cartoon on Professor and Research Grant
By TARA PARKER-POPE
Published: June 22, 2009
As head of the Food and Drug Administration, Dr. David A. Kessler served two presidents and battled Congress and Big Tobacco. But the Harvard-educated pediatrician discovered he was helpless against the forces of a chocolate chip cookie.
In an experiment of one, Dr. Kessler tested his willpower by buying two gooey chocolate chip cookies that he didn’t plan to eat. At home, he found himself staring at the cookies, and even distracted by memories of the chocolate chunks and doughy peaks as he left the room. He left the house, and the cookies remained uneaten. Feeling triumphant, he stopped for coffee, saw cookies on the counter and gobbled one down.
“Why does that chocolate chip cookie have such power over me?” Dr. Kessler asked in an interview. “Is it the cookie, the representation of the cookie in my brain? I spent seven years trying to figure out the answer.”
The result of Dr. Kessler’s quest is a fascinating new book, “The End of Overeating: Taking Control of the Insatiable American Appetite” (Rodale).
During his time at the Food and Drug Administration, Dr. Kessler maintained a high profile, streamlining the agency, pushing for faster approval of drugs and overseeing the creation of the standardized nutrition label on food packaging. But Dr. Kessler is perhaps best known for his efforts to investigate and regulate the tobacco industry, and his accusation that cigarette makers intentionally manipulated nicotine content to make their products more addictive.
In “The End of Overeating,” Dr. Kessler finds some similarities in the food industry, which has combined and created foods in a way that taps into our brain circuitry and stimulates our desire for more.
When it comes to stimulating our brains, Dr. Kessler noted, individual ingredients aren’t particularly potent. But by combining fats, sugar and salt in innumerable ways, food makers have essentially tapped into the brain’s reward system, creating a feedback loop that stimulates our desire to eat and leaves us wanting more and more even when we’re full.
Dr. Kessler isn’t convinced that food makers fully understand the neuroscience of the forces they have unleashed, but food companies certainly understand human behavior, taste preferences and desire. In fact, he offers descriptions of how restaurants and food makers manipulate ingredients to reach the aptly named “bliss point.” Foods that contain too little or too much sugar, fat or salt are either bland or overwhelming. But food scientists work hard to reach the precise point at which we derive the greatest pleasure from fat, sugar and salt.
The result is that chain restaurants like Chili’s cook up “hyper-palatable food that requires little chewing and goes down easily,” he notes. And Dr. Kessler reports that the Snickers bar, for instance, is “extraordinarily well engineered.” As we chew it, the sugar dissolves, the fat melts and the caramel traps the peanuts so the entire combination of flavors is blissfully experienced in the mouth at the same time.
Foods rich in sugar and fat are relatively recent arrivals on the food landscape, Dr. Kessler noted. But today, foods are more than just a combination of ingredients. They are highly complex creations, loaded up with layer upon layer of stimulating tastes that result in a multisensory experience for the brain. Food companies “design food for irresistibility,” Dr. Kessler noted. “It’s been part of their business plans.”
But this book is less an exposé about the food industry and more an exploration of us. “My real goal is, how do you explain to people what’s going on with them?” Dr. Kessler said. “Nobody has ever explained to people how their brains have been captured.”
The book, a New York Times best seller, includes Dr. Kessler’s own candid admission that he struggles with overeating.
“I wouldn’t have been as interested in the question of why we can’t resist food if I didn’t have it myself,” he said. “I gained and lost my body weight several times over. I have suits in every size.”
This is not a diet book, but Dr. Kessler devotes a sizable section to “food rehab,” offering practical advice for using the science of overeating to our advantage, so that we begin to think differently about food and take back control of our eating habits.
One of his main messages is that overeating is not due to an absence of willpower, but a biological challenge made more difficult by the overstimulating food environment that surrounds us. “Conditioned hypereating” is a chronic problem that is made worse by dieting and needs to be managed rather than cured, he said. And while lapses are inevitable, Dr. Kessler outlines several strategies that address the behavioral, cognitive and nutritional factors that fuel overeating.
Planned and structured eating and understanding your personal food triggers are essential. In addition, educating yourself about food can help alter your perceptions about what types of food are desirable. Just as many of us now find cigarettes repulsive, Dr. Kessler argues that we can also undergo similar “perceptual shifts” about large portion sizes and processed foods. For instance, he notes that when people who once loved to eat steak become vegetarians, they typically begin to view animal protein as disgusting.
The advice is certainly not a quick fix or a guarantee, but Dr. Kessler said that educating himself in the course of writing the book had helped him gain control over his eating.
“For the first time in my life, I can keep my weight relatively stable,” he said. “Now, if you stress me and fatigue me and put me in an airport and the plane is seven hours late — I’m still going to grab those chocolate-covered pretzels. The old circuitry will still show its head.”
12:09 04 September 2006 by John Pickrell
Fast-forward to the end of the 21st Century: surgeons can create new organs to order, regrow crippled spines and hearts and reverse the damage of Parkinson's disease or diabetes with ease. Immune rejection and waiting lists for replacement organs are consigned to history.
Coloured scanning electron micrograph (SEM) of groups of embryonic stem cells (ESCs) (Image: Steve Gschmeissner / SPL)
Doctors have been transplanting adult blood stem cells, in the form of bone marrow transplants, for many decades, but stem cells from human embryos were only isolated and cultured in 1998. Though research has progressed rapidly since then, we still have much to understand: not least what gives stem cells their unique properties, but also exactly how they are able to differentiate into the 300 or so different types of human cell.
Despite their medical promise, stem cells have been dogged by political and ethical controversy because some are derived from discarded human embryos, and because of fears and confusion about links with human reproductive cloning. The future of stem cell therapies was thrown deeper into doubt in late 2005, when a leader of the field - Woo Suk Hwang, South Korea's "stem cell king" - was found to have forged key discoveries and flouted ethical protocols. So has the stem cell miracle been postponed?
Full of potential
Embryonic stem cells (ESCs) come from fertilised human embryos - pinhead-sized balls of cells called blastocysts - just a few days old. In the embryo, these cells go on to form all the tissues of the developing body. They have generated so much interest because they are virtually immortal in the laboratory and can also generate any tissue type from bones to brain cells - making them pluripotent.
Besides regeneration, stem cells could also be studied to provide insights into how human bodies develop from fertilised eggs. Stem cells with genetic defects could further be used to understand how congenital diseases, such as cystic fibrosis, develop. Stem cells might also be used to test new drugs in the lab on a range of tissues, instead of on people or animals.
Adult stem cells
As well as in the fetus, stem cells are found in the placenta, amniotic fluid and umbilical cord, and they remain in many adult tissues. Cord blood is sometimes collected at birth today, and the stem cells stored.
Adult stem cells have been found in: bone marrow, blood, the cornea and retina, intestine, liver, muscles, nervous system and the brain, pancreas and skin. These "multipotent" stem cells are less flexible than ESCs and are typically only able to form cells of the tissue in which they reside. "Adult" distinguishes these cells from their embryonic equivalents, but they are present in children too.
For example, hematopoietic stem cells are blood-forming stem cells, which largely reside in bone marrow. They are responsible for replenishing all blood cell types on a continual basis. It is these stem cells that rebuild the damaged blood system of leukaemia sufferers after successful bone marrow transplants. Mesenchymal stem cells, also found in bone marrow, can go on to form cells including muscle, fat, skin and cartilage.
Though adult stem cells are less flexible than ESCs, and are not immortal in the laboratory, they sidestep the ethical quandary of destroying embryos. Furthermore, we may be able to stimulate the adult stem cells we already possess to travel to and repair damaged tissues within our bodies.
Currently stem cells of both types are being tested to treat many conditions, including: Alzheimer's disease, blood disorders, blood loss, baldness, blindness, cystic fibrosis, deafness, diabetes, heart disease, kidney failure, liver damage, lupus, motor neuron disease, multiple sclerosis, osteoporosis, Parkinson's disease, spinal cord injuries and stroke.
Researchers still have much to learn about how to direct stem cells to form and repair different tissues and how they behave within a patient's body. Even identifying stem cells is difficult currently. Concern that stem cells could divide uncontrollably to form tumours called teratocarcinomas is also likely to delay major clinical trials for some years. Stem cells might also become cancerous in the lab.
The cloning connection
The most significant hurdle, however, is immune rejection. As with any tissue transplant (from a donor other than an identical twin), the body will recognise ESCs as foreign and mount an attack which could destroy them. ESC recipients would have to take immune suppressant drugs for the rest of their lives.
Multiplying a patient's own adult stem cells in the lab and then reinjecting them is one way to avoid rejection. Duping the immune system is another possibility, perhaps using stem cells from the brain that somehow avoid detection.
Therapeutic cloning is a clever technique that circumvents the problem. We can make custom-made ESCs using a patient's own DNA and a donor egg. In the same way as reproductive cloning, the nucleus of a skin or muscle cell from the patient is added to an unfertilised egg that has had its own genetic material removed. This egg is then persuaded to divide as though it had been fertilised and, with luck, goes on to form the ball of cells called a blastocyst. At this point, the inner cell mass is removed and cultured in the lab to derive stem cells. These stem cells now contain the DNA of the recipient and would not be treated as foreign by the immune system.
But, in theory, the cloned embryo could be implanted into a womb where it might develop into a cloned human baby. This would be reproductive cloning, and is the same method used to produce Dolly the sheep.
In any case, reproductive cloning has been banned in many countries for ethical reasons and because of suspected health risks to the clone. It was banned in the UK in 2001. Despite strong opposition, it has yet to be banned in the US.
For many, the destruction of embryos for scientific purposes is unacceptable, so numerous countries - such as Germany and France - also support bans on therapeutic cloning and using embryos to derive stem cells. In March 2005, 87 nations voted for a United Nations declaration calling for a total ban on both types of human cloning, but efforts at a binding treaty were abandoned after countries failed to agree on therapeutic cloning.
For others, the medical benefits outweigh these concerns. For example, in the UK, Belgium, Sweden, Japan, China and South Korea, therapeutic cloning has been allowed, but regulated. In the UK, licenses have been granted for studies into diabetes and motor neuron disease. ESC lines have been created in the UK since 2003. The EU provides some funding for ESC research in those countries that have embraced it.
In the US, the situation has become complicated. Disagreement between the religious groups who want a total ban on cloning and an equally vociferous pro-therapeutic cloning lobby has stalled legislation. In the place of a ban, US president George W Bush introduced a policy that restricted federally funded research to 22 stem cell lines created before 2001. However, research now suggests that these lines may have been tainted with material from mouse feeder cells in the lab, rendering them useless for human therapy. New ESC colonies free of this contamination have now been created.
Some US states have taken the situation into their own hands. California agreed a plan in 2004 to provide $3 billion for stem cell research over 10 years. By contrast, the Bush administration has pledged just $25 million annually to stem cell research.
In response to these restrictions, the race is on to find an ethical stem cell source - one that does not involve destroying embryos. One method does exist, but it creates ESCs with abnormal chromosomes. Other methods extract stem cells without destroying embryos, or create embryos that could never become babies. Further possible sources are: baby teeth cells, "universal" adult stem cells, umbilical cord blood and testicle cells.
Fall from grace
Politics is not the only controversy that has gripped the stem cell world.
In May 2005, one of the world's top stem cell scientists - South Korea's Woo Suk Hwang - announced that his team had used therapeutic cloning to produce 11 ESC lines tailored to individual patients. This was one of a string of remarkable achievements. In 2004 Hwang cloned human embryos for the first time, and he later produced the world's first cloned dog - an Afghan hound named Snuppy.
Then in late 2005, the research community was rocked by claims that Hwang had flouted ethical guidelines by obtaining eggs from women in his own research group. As investigations proceeded and other transgressions unfolded, it became clear that much of his research had been fabricated. There are now questions over his use of funds too.
The fall from grace has been spectacular for a man who was revered as a national hero in South Korea, and the repercussions have travelled far and wide. Collaborating researchers have been tarnished by association, other stem cell science is in doubt, investors are wary of stem cell medicine, and there are now questions about how easy it is to fabricate results.
The already controversial field of stem cell research was brought further into disrepute, and it remains to be seen how much the scandal will delay the development of the miracle therapies that are so desperately desired.