Published on Oct 11, 2010 - 6:42:33 AM
By: AlphaGalileo Foundation
Oct. 11, 2010 - The Royal Swedish Academy of Sciences has decided to award The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel for 2010 jointly to:
* Peter A. Diamond (Massachusetts Institute of Technology, Cambridge, MA, USA)
* Dale T. Mortensen (Northwestern University, Evanston, IL, USA)
* and Christopher A. Pissarides (London School of Economics and Political Science, UK) "for their analysis of markets with search frictions"
This year's Laureates have developed a theory that can be used to answer some of the central questions about the labour market. According to the Academy, the Laureates' models “have substantially improved our understanding of markets with search frictions”, especially where goods or services are not standardised. The theory also applies to markets other than the labour market, such as housing.
Frictions occur on both sides of the market, and sellers and buyers often have difficulty finding each other. On one side there are many unemployed people; on the other, a large number of job openings. Employers struggle to find suitable candidates, and job seekers struggle to find available positions. In the same way, families look for suitable accommodation while many empty properties go unsold.
“The three Laureates have formulated a theoretical framework for search markets. Peter Diamond has analyzed the foundations of search markets. Dale Mortensen and Christopher Pissarides have expanded the theory and have applied it to the labour market. The Laureates' models help us understand the ways in which unemployment, job vacancies, and wages are affected by regulation and economic policy. This may refer to benefit levels in unemployment insurance or rules in regard to hiring and firing. One conclusion is that more generous unemployment benefits give rise to higher unemployment and longer search times.”
The Academy described the model as an important policy tool that can be used to interpret data and implement better policies regarding unemployment and housing, for example.
During the press conference, Pissarides described his reaction to the phone call announcing the prize as a “mixture of surprise and happiness and generally satisfaction”. He explained that he started his research at a time when unemployment in Europe was rising, and that he saw it as a real problem where economics could help.
A model for the labor market:
Several important studies on search and matching markets were published around 1980. Peter Diamond, Dale Mortensen and Christopher Pissarides examined the properties of such markets. They provided new answers to many unresolved issues and were also able to pose entirely new questions that earlier research had not managed to formulate.
In a number of studies, Dale Mortensen and Christopher Pissarides have systematically developed and applied the theory to examine the labor market – particularly the determinants of unemployment. This has resulted in a model known as the Diamond-Mortensen-Pissarides (DMP) model. Today, the DMP model is the most frequently used tool for analyzing unemployment, wage formation and job vacancies.
The DMP model describes the search activity of the unemployed, the recruiting behavior of firms and wage formation. When a job seeker and an employer find one another, the wage is determined on the basis of the situation on the labor market (the number of unemployed workers and the number of vacancies). The model can thus be used to estimate the effects of different labor-market factors on unemployment, the average duration of spells of unemployment, the number of vacancies and the real wage. Such factors may include the benefit level in unemployment insurance, the real interest rate, the efficiency of employment agencies, hiring and firing costs, etc.
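To make the mechanics concrete, here is a minimal numerical sketch of the flow logic behind a DMP-style analysis. The Cobb-Douglas matching function, the parameter values, and the helper names below are illustrative assumptions for this example, not the Laureates' own specification; the only substantive condition it uses is that, in steady state, flows into unemployment equal flows out.

```python
# Minimal sketch of steady-state unemployment in a DMP-style setting.
# The matching function and all parameter values are illustrative assumptions.

def job_finding_rate(theta, match_efficiency=0.6, elasticity=0.5):
    """Job-finding rate f(theta) implied by a Cobb-Douglas matching function,
    where theta = vacancies / unemployed is labour-market tightness and
    'elasticity' is the elasticity of matches with respect to unemployment."""
    return match_efficiency * theta ** (1 - elasticity)

def steady_state_unemployment(separation_rate, finding_rate):
    """In steady state, inflows s * (1 - u) equal outflows f * u,
    which gives u = s / (s + f)."""
    return separation_rate / (separation_rate + finding_rate)

if __name__ == "__main__":
    s = 0.02  # assumed monthly job-separation rate
    for theta in (0.3, 0.6, 1.0):  # tighter markets: more vacancies per job seeker
        f = job_finding_rate(theta)
        u = steady_state_unemployment(s, f)
        print(f"tightness {theta:.1f}: finding rate {f:.2f}, unemployment {u:.1%}")
```

In a sketch like this, lowering matching efficiency (for instance, less effective employment agencies) reduces the finding rate, which both raises steady-state unemployment and lengthens the average unemployment spell (roughly 1/f); that is the kind of comparative exercise the paragraph above describes.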
Monday, November 01, 2010
Khan Academy
Bill Gates says of Khan's ability to teach, "I kind of envy him"!
http://www.khanacademy.org/
Tuesday, September 14, 2010
Remarks of President Barack Obama - Back to School Speech Philadelphia, Pennsylvania
September 14, 2010
Hello Philadelphia! It’s wonderful to be here. Today is about welcoming all of you and all of America’s students back to school – and I can’t think of a better place to do it than Masterman. You’re one of the best schools in Philadelphia – a leader in helping students succeed in the classroom. And just last week, you were recognized as a National Blue Ribbon School for your record of achievement. That’s a testament to everyone here – students and parents, teachers and school leaders. And it’s an example of excellence I hope communities across America embrace.
Over the past few weeks, Michelle and I have been getting Sasha and Malia ready for school. And I bet a lot of you are feeling the same way they’re feeling. You’re a little sad to see the summer go, but you’re also excited about the possibilities of a new year. The possibilities of building new friendships and strengthening old ones. Of joining a school club, or trying out for a team. The possibilities of growing into a better student, and a better person, and making your family proud.
But I know some of you may also be nervous about starting a new school year. Maybe you’re making the jump from elementary to middle school, or from middle to high school, and worried about what that’ll be like. Maybe you’re starting a new school, and not sure how you’ll like it. Or maybe you’re a senior who’s feeling anxious about the whole college process; about where to apply and whether you can afford to go.
And beyond all these concerns, I know a lot of you are also feeling the strain of these difficult times. You know what’s going on in the news and your own family’s lives. You read about the war in Afghanistan. You hear about the recession we’ve been through. You see it in your parents’ faces and sense it in their voice.
A lot of you are having to act a lot older than you are; to be strong for your family while your brother or sister is serving overseas; to look after younger siblings while your mom works that second shift; to take on a part-time job while your dad is out of work.
It’s a lot to handle; it’s more than you should have to handle. And it may make you wonder at times what your own future will look like; whether you’ll be able to succeed in school; whether you should set your sights a little lower, and scale back your dreams.
But here is what I came to Masterman to tell you: nobody gets to write your destiny but you. Your future is in your hands. Your life is what you make of it. And nothing – absolutely nothing – is beyond your reach. So long as you’re willing to dream big. So long as you’re willing to work hard. So long as you’re willing to stay focused on your education.
That last part is absolutely essential – because an education has never been more important. I’m sure there will be times in the months ahead when you’re staying up late cramming for a test, or dragging yourselves out of bed on a rainy morning, and wondering if it’s all worth it. Let me tell you, there is no question about it. Nothing will have as great an impact on your success in life as your education.
More and more, the kinds of opportunities that are open to you will be determined by how far you go in school. In other words, the farther you go in school, the farther you’ll go in life. And at a time when other countries are competing with us like never before; when students around the world are working harder than ever, and doing better than ever; your success in school will also help determine America’s success in the 21st century.
So, you have an obligation to yourselves, and America has an obligation to you to make sure you’re getting the best education possible. And making sure you get that kind of education is going to take all of us working hand-in-hand.
It will take all of us in government – from Harrisburg to Washington – doing our part to prepare our students, all of them, for success in the classroom, in college, and in a career. It will take an outstanding principal and outstanding teachers like the ones here at Masterman; teachers who go above and beyond for their students. And it will take parents who are committed to your education.
That’s what we have to do for you. That’s our responsibility. That’s our job. But here’s your job. Showing up to school on time. Paying attention in class. Doing your homework. Studying for exams. Staying out of trouble. That kind of discipline and drive – that kind of hard work – is absolutely essential for success.
I know – because I didn’t always have it. I wasn’t always the best student when I was younger; I made my share of mistakes. In fact, I can still remember a conversation I had with my mother in high school, when I was about the age of some of you here today. It was about how my grades were slipping, how I hadn’t even started my college applications, how I was acting, as she put it, “casual” about my future. It’s a conversation I suspect will sound familiar to some of the students and parents here today.
And my attitude was what I imagine every teenager’s attitude is in a conversation like that. I was like, I don’t need to hear all this. So, I started to say that, and she just cut me right off. You can’t just sit around, she said, waiting for luck to see you through. She said I could get into any school in the country if I just put in a little effort. Then she gave me a hard look and added, “Remember what that’s like? Effort?”
It was pretty jolting, hearing my mother say that. But eventually, her words had their intended effect. I got serious about my studies. I made an effort. And I began to see my grades – and my prospects – improve. And I know that if hard work could make the difference for me, it can make the difference for you, too.
I know some of you may be skeptical about that. You may wonder if some people are just better at certain things. And it’s true that we each have our own gifts and talents we need to discover and nurture. But just because you’re not the best at something today doesn’t mean you can’t be tomorrow. Even if you don’t think of yourself as a math person or as a science person – you can still excel in those subjects if you’re willing to make the effort. And you may find out you have talents you’d never dreamed of.
You see, excelling in school or in life isn’t mainly about being smarter than everybody else. It’s about working harder than everybody else. Don’t avoid new challenges – seek them out, step out of your comfort zone, and don’t be afraid to ask for help; your teachers and family are there to guide you. Don’t feel discouraged or give up if you don’t succeed at something – try it again, and learn from your mistakes. Don’t feel threatened if your friends are doing well; be proud of them, and see what lessons you can draw from what they’re doing right.
That’s the kind of culture of excellence you promote here at Masterman; and that’s the kind of excellence we need to promote in all America’s schools. That’s why today, I’m announcing our second Commencement Challenge. If your school is the winner; if you show us how teachers, students, and parents are working together to prepare your kids for college and a career; if you show us how you’re giving back to your community and our country – I’ll congratulate you in person by speaking at your commencement.
But the truth is, an education is about more than getting into a good college or getting a good job when you graduate. It’s about giving each and every one of us the chance to fulfill our promise; to be the best version of ourselves we can be. And part of what that means is treating others the way we want to be treated – with kindness and respect.
Now, I know that doesn’t always happen. Especially not in middle or high school. Being a teenager isn’t easy. It’s a time when we’re wrestling with a lot of things. When I was your age, I was wrestling with questions about who I was; about what it meant to be the son of a white mother and a black father, and not having that father in my life. Some of you may be working through your own questions right now, and coming to terms with what makes you different.
And I know that figuring all that out can be even more difficult when you’ve got bullies in class who try to use those differences to pick on you or poke fun at you; to make you feel bad about yourself. In some places, the problem is more serious. There are neighborhoods in my hometown of Chicago, where kids have hurt one another. And the same thing has happened here in Philly.
So, what I want to say to you today – what I want all of you to take away from my speech – is that life is precious, and part of its beauty lies in its diversity. We shouldn’t be embarrassed by the things that make us different. We should be proud of them. Because it’s the things that make us different that make us who we are. And the strength and character of this country have always come from our ability to recognize ourselves in one another, no matter who we are, or where we come from, what we look like, or what abilities or disabilities we have.
I was reminded of that idea the other day when I read a letter from Tamerria Robinson, an 11-year old girl in Georgia. She told me about how hard she works, and about all the community service she does with her brother. And she wrote, “I try to achieve my dreams and help others do the same.” “That,” she wrote, “is how the world should work.”
I agree with Tamerria. That is how the world should work. Yes, we need to work hard. Yes, we need to take responsibility for our own education. Yes, we need to take responsibility for our own lives. But what makes us who we are is that here, in this country, we not only reach for our own dreams, we help others do the same. This is a country that gives all its daughters and all its sons a fair chance. A chance to make the most of their lives. A chance to fulfill their God-given potential.
And I’m absolutely confident that if all our students – here at Masterman, and across this country – keep doing their part; if you keep working hard, and focusing on your education; if you keep fighting for your dreams and if all of us help you reach them; then not only will you succeed this year, and for the rest of your lives, but America will succeed in the 21st century. Thank you, God bless you, and may God bless the United States of America.
Friday, July 09, 2010
Lessons from History - Ten Economic Blunders from History
Mises Daily: Wednesday, July 07, 2010, by John S. Chamberlain
Take cover when you hear a political leader talking about economic affairs. You can bet a bad decision is incoming. Luckily for the leaders, their meddling usually has a slow, erosive effect on the economy. Every so often, however, the great ones manage to land a real whopper that takes them down along with their whole country. Here are ten examples from history.
1. Charge Too Much and You Die
In the year 301, the Roman emperor Diocletian issued the Edictum De Pretiis Rerum Venalium, i.e., the Edict on Prices of Foodstuffs, which rebalanced the coinage system and set maximums on wages and the prices of many types of goods, especially food. The penalty for selling above the stipulated prices was death. Copies of the edict were inscribed on stone monuments all over the empire. Here's a tip for future dictators: never inscribe your blunders on stone unless you want people to laugh at you for the rest of eternity. The edict was a disaster. Sellers withdrew their goods, unwilling to sell at the fixed prices or even risk being falsely accused of selling beyond the maximum and thus be subject to execution. Workers responded to the wage edicts by vanishing or sitting around doing nothing. Eventually the edict was ignored and became a subject of derision and mockery, which permanently lowered the prestige and authority of the empire.
2. Shearing the English Wolf
You know you are doing something wrong when your enemies become folk heroes like Robin Hood. Common sense is to tax the weak and give money to the strong, but after his failure in forestry policy King John of England decided to try the reverse. He relieved the knights of the realm from their military service requirements, but then ordered them to pay instead a hefty "scutage" (shield) tax. Soon, there were 10,000 Robin Hoods trying to kill him and going about it in an organized fashion. Signing the humiliating Magna Carta in 1215 bought him some time, but by the next year he was living on the lam. After his folly-won treasure was washed away in a mistimed river crossing, he went crazy and died soon after.
3. Paper Money Is Amazing
The fifth Khan of Persia was named "Gaykhatu," which means "amazing" in Mongolian. After recklessly squandering the money left by his predecessors, he was in no position to cope with a massive rinderpest epidemic that began devastating his subjects' livestock in 1294. Amazing came up with an amazing solution to his financial problems: paper money. Invented by his boss, Kublai Khan, back in China, the idea of paper money was a godsend. He would print up certificates just like the Chinese ones, decree death for anyone who refused them, and all his problems would be solved. Amazing! Unfortunately for Amazing, he did not fuss too much with technical details like convertibility and capital controls, which Kublai Khan had agonized over, and the result was the total failure of the project. Economic chaos ensued. Amazing was deposed and put to death the next year.
4. I'll Buy Every Sword You've Got
In the Muromachi period (1336 to 1573), Ming dynasty Mandarins in China adopted a policy of buying and importing swords from the Japanese with the goal of depriving the troublesome "barbarians" occupying those islands of their weapons. The gleeful reaction of the Japanese was along the lines of Jay Leno's Doritos commercial: buy all you like; we'll make more.
5. No Smuggling Allowed
Price controls are stupid anytime, but it takes true idiocy to apply them in the middle of a siege. In 1584 forces controlled by Alexander Farnese, the duke of Parma, were besieging Holland's grandest city, Antwerp, in the Dutch War of Independence. At first the siege was ineffectual because the duke's lines were porous and Antwerp could be supplied by sea, but the duke was in luck because the city decided to blockade itself voluntarily. The magistrates of the city declared a maximum on the price of grain. The smugglers who had been running the blockade up to that point became considerably less enthusiastic about making food deliveries after that. Facing starvation, the city surrendered the next year.
6. The Gold Factory of Venice
In 1590 the Republic of Venice was in decline. Nineteen years earlier it had gloriously fended off the Ottoman Turks by a tremendous victory at the Battle of Lepanto, but had nevertheless lost Cyprus, the republic's greatest possession. In 1585 the newly elected doge had thrown silver coins instead of the traditional gold at his ascension. Weighed down by taxes, imposts, tariffs, duties, tithes, assessments and fees, the economy had seen better days. From out of this gloom a new hope unexpectedly appeared. A long-lost Venetian named Marco Bragadini, currently resident in nearby Lombardy, had discovered how to make gold. The republic had to act fast, though, because the duke of Mantua was trying to lay his hands on this valuable goose. A cohort of soldiers was sent forthwith and Bragadini was securely delivered into the city in triumph by three galleys. Rigorous scientific tests were ordered by the senate to verify the power of the "anima d'oro," which Bragadini alone possessed. The alchemist filled a crucible with quicksilver, added a pinch of his secret powder and set it to fire. Soon the quicksilver turned to gold; it was all true. The price of alchemist capes and retorts skyrocketed. Signor Bragadini coolly informed the senate he could produce six million ducats or whatever they would require. For himself he wanted nothing but to be the humble servant of his country. Naturally, the senate put all the resources of Venice at his disposal. Nobles flocked to Bragadini by the dozen, imploring him to cut them in on his business. The months wore on, but the production of the new gold factory was disappointingly meager. Apparently there were limits to the speed with which the gold could be manufactured. Sensing a mounting impatience with his operations, Bragadini absconded to Munich, where Duke William the Pious was wooing him. Unfortunately for the maestro, in the meantime Pope Sixtus had died and been replaced with the sanctimonious Pope Gregory XIV, who considered the alchemist and his two dogs to be the devil's spawn and sent orders for their execution — with which William complied. The senate of Venice decided to pretend the whole thing never happened.
7. How to Deal with Hoarders
As the famine-fueled French Revolution careened out of control in 1793, a radical clique called the "Committee of Public Safety" headed by Maximilien Robespierre took power. The committee resolved to solve the food problem by enacting the "General Maximum," a set of policies fixing the maximum price of bread and other common goods. When those measures failed to increase the supply of food, they sent soldiers into the countryside to forcibly seize grain from the evil farmers who were "hoarding" it. Robespierre and the committee went to the guillotine the next year.
8. A Hobo's Dream, An Empire's End
In 1880, railroad technology was advancing rapidly, and the Russians received several private petitions for a concession in the Far East. To the paranoid patricians of Moscow, it was not enough to merely deny these foreign schemers; they needed to build their own railroad to the east to keep them out. Under the leadership of His Royal Paranoidness, Czar Alexander III, the Russian state began taking out massive foreign loans and constructing the 5,000-mile Trans-Siberian Railway, the largest civil-works project since the Great Pyramid of Giza. Alexander (and his empire) would later die from injuries sustained in a railroad accident. By the time the corruption-ridden boondoggle was completed in 1904, Alexander's son, Nicholas II, was technically bankrupt. Wars and revolts started to plague the empire. Instead of carrying trade goods, the new railway was carrying political prisoners and supplies for soldiers. When Russia rolled over its debts in 1907, it was obvious to the large banking houses that the empire was financially doomed and only small investors could be found to subscribe the new loans. Even with these loans suspended, Russia's economy was so weak that it would not survive the coming war. Nicholas was executed July 16, 1918.
9. It Takes a Village to Build a Famine
The 1984 crop failure in Ethiopia presented a fresh set of problems for the Marxist junta called the "Derg" that controlled the government. The nationalization programs and price controls they had been experimenting with for years seemed less effective than ever. Obviously the remnants of capitalism were still infecting the economy, so they took vigorous measures such as outlawing grain trading. Oddly enough, that did not stop the famine. The chairman, Mengistu Haile Mariam, inspired by the brilliant agricultural successes of Secretary Stalin in the 1930s, thereupon sponsored a whole new idea dubbed "villagization." Under this plan the scattered rural inhabitants of Ethiopia would be gathered together in modernized villages with all the latest civic infrastructure. As might be expected, not all the beneficiaries of this plan realized what utopias these villages would be, so they had to be driven there at gunpoint for their own good. Unfortunately, the expected increases in agricultural production never materialized and millions starved. The country descended into a permanent state of civil war, which only ended in 1990 after the Soviet Union stopped supplying the Derg. Mengistu fled to Zimbabwe, where he has become an important advisor to that nation's rulers.
10. Rubles: Now You See Them, Now You Don't
On January 22, 1991, Mikhail Gorbachev, the president of the Soviet Union, decreed that all existing 50- and 100-ruble banknotes were no longer legal tender and that they could be exchanged for new notes for three days only and only in small quantities. This had the effect of instantly deleting large portions of the savings and accumulated capital of private citizens. He followed up this genius move on January 26 by ordering that the police had the authority to search any place of business and to demand the records of any business at any time. The union's economic problems accelerated into a death spiral. Gorbachev resigned on December 25, and on the next day the Supreme Soviet dissolved itself and the Union of Soviet Socialist Republics.
Tuesday, May 18, 2010
Russia's Conquering Zeros
The strength of post-Soviet math stems from decades of lonely productivity
By MASHA GESSEN
Moscow
It may be no accident that, while some of the best American mathematical minds worked to solve one of the century's hardest problems—the Poincaré Conjecture—it was a Russian mathematician working in Russia who, early in this decade, finally triumphed.
Decades before, in the Soviet Union, math placed a premium on logic and consistency in a culture that thrived on rhetoric and fear; it required highly specialized knowledge to understand; and, worst of all, mathematics lay claim to singular and knowable truths—when the regime had staked its own legitimacy on its own singular truth. All this made mathematicians suspect. Still, math escaped the purges, show trials and rule by decree that decimated other Soviet sciences.
Three factors saved math. First, Russian math happened to be uncommonly strong right when it might have suffered the most, in the 1930s. Second, math proved too obscure for the sort of meddling Joseph Stalin most liked to exercise: It was simply too difficult to ignite a passionate debate about something as inaccessible as the objective nature of natural numbers (although just such a campaign was attempted). And third, at a critical moment math proved immensely useful to the state.
Three weeks after Nazi Germany invaded the Soviet Union in June 1941, the Soviet air force had been bombed out of existence. The Russian military set about retrofitting civilian airplanes for use as bombers. The problem was, the civilian airplanes were much slower than the military ones, rendering moot everything the military knew about aim.
What was needed was a small army of mathematicians to recalculate speeds and distances to let the air force hit its targets.
The greatest Russian mathematician of the 20th century, Andrei Kolmogorov, led a classroom of students, armed with adding machines, in recalculating the Red Army's bombing and artillery tables. Then he set about creating a new system of statistical control and prediction for the Soviet military.
Following the war, the Soviets invested heavily in high-tech military research, building over 40 cities where scientists and mathematicians worked in secret. The urgency of the mobilization recalled the Manhattan Project—only much bigger and lasting much longer. Estimates of the number of people engaged in the Soviet arms effort in the second half of the century range up to 12 million people, with a couple million of them employed by military-research institutions.
These jobs spelled nearly total scientific isolation: For defense employees, any contact with foreigners would be considered treasonous rather than simply suspect. In addition, research towns provided comfortably cloistered social environments but no possibility for outside intellectual contact. The Soviet Union managed to hide some of its best mathematical minds away in plain sight.
In the years following Stalin's death in 1953, the Iron Curtain began to open a tiny crack—not quite enough to facilitate much-needed conversation with non-Soviet mathematicians but enough to show off some of Soviet mathematics' proudest achievements.
By the 1970s, a Soviet math establishment had taken shape. A totalitarian system within a totalitarian system, it provided its members not only with work and money but also with apartments, food, and transportation. It determined where they lived and when, where, and how they traveled for work or pleasure. To those in the fold, it was a controlling and strict but caring mother: Her children were undeniably privileged.
Even for members of the math establishment, though, there were always too few good apartments, too many people wanting to travel to a conference. So it was a vicious, back-stabbing little world, shaped by intrigue, denunciations and unfair competition.
Then there were those who could never join the establishment: those who happened to be born Jewish or female, those who had had the wrong advisers at university or those who could not force themselves to join the Party. For these people, "the most they could hope for was being able to defend their doctoral dissertation at some institute in Minsk, if they could secure connections there," says Sergei Gelfand, publisher of the American Mathematical Society—who also happens to be the son of one of Russia's top 20th-century mathematicians, Israel Gelfand, a student of Mr. Kolmogorov. Some Western mathematicians, Sergei Gelfand adds, "even came for an extended stay because they realized there were a lot of talented people. This was unofficial mathematics."
Math Stars
Besides Grigory Perelman and the Poincaré Conjecture, there are numerous other famous math solvers, and there are still problems to solve.
Andrew Wiles (1953-)
This Princeton mathematician resolved the most famous problem in numbers—Fermat's Last Theorem—in 1995.
Leonhard Euler (1707–1783)
A Swiss mathematician who made so many contributions, particularly in the early foundations of calculus, that it gets hard to keep track of all that's named for him.
Kurt Gödel (1906–1978)
This Austrian logician demonstrated that any reasonably powerful system of math contains true statements that can't be proven.
The Riemann Hypothesis
To the enduring befuddlement of mathematicians, prime numbers—numbers divisible only by themselves and 1—exhibit no pattern at all: 2, 3, 5, 7, 11, 13 are the first few. They aren't evenly spaced but get scarcer the further out you go. No formula can tell you what the next one will be. In 1859, the German mathematician Bernhard Riemann discovered that a function—known now as the Riemann zeta function—appeared to give signposts to where primes lie in the great field of numbers. It provided some order to the mystery. Riemann conjectured that these key signposts—"zeros" of the function—all lie on a single straight line out to infinity, that none are flung off in strange places. In the 150 years since, no one has proved his hypothesis. To a mathematician, the hypothesis looks like this: All non-trivial zeros of the Riemann zeta function have a real part equal to ½.
--Charles Forelle
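As a small numerical illustration of the Riemann Hypothesis note above, the sketch below evaluates the zeta function on the critical line Re(s) = 1/2 at the first few known zero ordinates, using the mpmath library. It is only a spot check at a handful of points, not evidence for the hypothesis; the ordinate values are the standard published ones, truncated.

```python
# Spot check (not a proof): zeta is numerically zero on the critical line
# at the first few known zero ordinates. Requires mpmath (pip install mpmath).
from mpmath import mp, mpc, zeta

mp.dps = 25  # 25 decimal digits of working precision

# Imaginary parts of the first three non-trivial zeros (standard values, truncated).
known_ordinates = [14.134725141734693, 21.022039638771555, 25.010857580145688]

for t in known_ordinates:
    s = mpc(0.5, t)  # a point on the critical line Re(s) = 1/2
    print(f"|zeta(0.5 + {t:.6f}i)| = {float(abs(zeta(s))):.2e}")

# For contrast, zeta is clearly non-zero at a nearby point on the same line.
print(f"|zeta(0.5 + 15i)| = {float(abs(zeta(mpc(0.5, 15)))):.2e}")
```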
One such visitor was Dusa McDuff, then a British algebraist and now a professor emerita at the State University of New York at Stony Brook. She studied with the older Mr. Gelfand for six months, and credits the experience with opening her eyes to what mathematics really is: "It was a wonderful education... Gelfand amazed me by talking of mathematics as though it were poetry."
In the mathematical counterculture, math "was almost a hobby," recalls Sergei Gelfand. "So you could spend your time doing things that would not be useful to anyone for the nearest decade." Mathematicians called it "math for math's sake." There was no material reward in this—no tenure, no money, no apartments, no foreign travel; all they stood to gain was the respect of their peers.
Math not only held out the promise of intellectual work without state interference (if also without its support) but also something found nowhere else in late-Soviet society: a knowable singular truth. "If I had been free to choose any profession, I would have become a literary critic," says Georgii Shabat, a well-known Moscow mathematician. "But I wanted to work, not spend my life fighting the censors." The search for that truth could take long years—but in the late Soviet Union, time seemed to stand still.
When it all collapsed, the state stopped investing in math and holding its mathematicians hostage. It's hard to say which of these two factors did more to send Russian mathematicians to the West, primarily the U.S., but leave they did, in what was probably one of the biggest outflows of brainpower the world has ever known. Even the older Mr. Gelfand moved to the U.S. and taught at Rutgers University for nearly 20 years, almost until his death in October at the age of 96. The flow is probably unstoppable by now: A promising graduate student in Moscow or St. Petersburg, unable to find a suitable academic adviser at home, is most likely to follow the trail to the U.S.
But the math culture they find in America, while less back-stabbing than that of the Soviet math establishment, is far from the meritocratic ideal that Russia's unofficial math world had taught them to expect. American math culture has intellectual rigor but also suffers from allegations of favoritism, small-time competitiveness, occasional plagiarism scandals, as well as the usual tenure battles, funding pressures and administrative chores that characterize American academic life. This culture offers the kinds of opportunities for professional communication that a Soviet mathematician could hardly have dreamed of, but it doesn't foster the sort of luxurious, timeless creative work that was typical of the Soviet math counterculture.
For example, the American model may not be able to produce a breakthrough like the proof of the Poincaré Conjecture, carried out by the St. Petersburg mathematician Grigory Perelman.
Mr. Perelman came to the United States as a young postdoctoral student in the early 1990s and immediately decided that America was math heaven; he wrote home demanding that his mother and his younger sister, a budding mathematician, move here. But three years later, when his postdoc hiatus was over and he was faced with the pressures of securing an academic position, he returned home, disillusioned.
In St. Petersburg he went on the (admittedly modest) payroll of the math research institute, where he showed up infrequently and generally kept to himself for almost seven years, one of the greatest mathematical discoveries of at least the last hundred years. It's all but impossible to imagine an American institution that could have provided Mr. Perelman with this kind of near-solitary existence, free of teaching and publishing obligations.
After posting his proof on the Web, Mr. Perelman traveled to the U.S. in the spring of 2003, to lecture at a couple of East Coast universities. He was immediately showered with offers of professorial appointments and research money, and, by all accounts, he found these offers gravely insulting, as he believes the monetization of achievement is the ultimate insult to mathematics. So profound was his disappointment with the rewards he was offered that, I believe, it contributed a great deal to his subsequent decision to quit mathematics altogether, along with the people who practice it. (He now lives with his mother on the outskirts of St. Petersburg.)
A child of the Soviet math counterculture, he still held a singular truth to be self-evident: Math as it ought to be practiced, math as the ultimate flight of the imagination, is something money can't buy.
This essay was adapted from Masha Gessen's latest book, "Perfect Rigor: A Genius and the Mathematical Breakthrough of the Century," a story of Grigory Perelman and the Poincaré Conjecture. She lives in Moscow and is the author of three previous books.
Copyright 2009 Dow Jones & Company, Inc. All Rights Reserved
This copy is for your personal, non-commercial use only. Distribution and use of this material are governed by our Subscriber Agreement and by copyright law. For non-personal use or to order multiple copies, please contact Dow Jones Reprints at 1-800-843-0008 or visit
www.djreprints.com
More In
By MASHA GESSEN
Moscow
It may be no accident that, while some of the best American mathematical minds worked to solve one of the century's hardest problems—the Poincaré Conjecture—it was a Russian mathematician working in Russia who, early in this decade, finally triumphed.
Decades before, in the Soviet Union, math placed a premium on logic and consistency in a culture that thrived on rhetoric and fear; it required highly specialized knowledge to understand; and, worst of all, mathematics laid claim to singular and knowable truths—when the regime had staked its legitimacy on its own singular truth. All this made mathematicians suspect. Still, math escaped the purges, show trials and rule by decree that decimated other Soviet sciences.
Three factors saved math. First, Russian math happened to be uncommonly strong right when it might have suffered the most, in the 1930s. Second, math proved too obscure for the sort of meddling Joseph Stalin most liked to exercise: It was simply too difficult to ignite a passionate debate about something as inaccessible as the objective nature of natural numbers (although just such a campaign was attempted). And third, at a critical moment math proved immensely useful to the state.
Three weeks after Nazi Germany invaded the Soviet Union in June 1941, the Soviet air force had been bombed out of existence. The Russian military set about retrofitting civilian airplanes for use as bombers. The problem was, the civilian airplanes were much slower than the military ones, rendering moot everything the military knew about aim.
What was needed was a small army of mathematicians to recalculate speeds and distances to let the air force hit its targets.
The greatest Russian mathematician of the 20th century, Andrei Kolmogorov, led a classroom of students, armed with adding machines, in recalculating the Red Army's bombing and artillery tables. Then he set about creating a new system of statistical control and prediction for the Soviet military.
Following the war, the Soviets invested heavily in high-tech military research, building over 40 cities where scientists and mathematicians worked in secret. The urgency of the mobilization recalled the Manhattan Project—only much bigger and lasting much longer. Estimates of the number of people engaged in the Soviet arms effort in the second half of the century range up to 12 million people, with a couple million of them employed by military-research institutions.
These jobs spelled nearly total scientific isolation: For defense employees, any contact with foreigners would be considered treasonous rather than simply suspect. In addition, research towns provided comfortably cloistered social environments but no possibility for outside intellectual contact. The Soviet Union managed to hide some of its best mathematical minds away in plain sight.
In the years following Stalin's death in 1953, the Iron Curtain began to open a tiny crack—not quite enough to facilitate much-needed conversation with non-Soviet mathematicians but enough to show off some of Soviet mathematics' proudest achievements.
By the 1970s, a Soviet math establishment had taken shape. A totalitarian system within a totalitarian system, it provided its members not only with work and money but also with apartments, food, and transportation. It determined where they lived and when, where, and how they traveled for work or pleasure. To those in the fold, it was a controlling and strict but caring mother: Her children were undeniably privileged.
Even for members of the math establishment, though, there were always too few good apartments, too many people wanting to travel to a conference. So it was a vicious, back-stabbing little world, shaped by intrigue, denunciations and unfair competition.
Then there were those who could never join the establishment: those who happened to be born Jewish or female, those who had had the wrong advisers at university or those who could not force themselves to join the Party. For these people, "the most they could hope for was being able to defend their doctoral dissertation at some institute in Minsk, if they could secure connections there," says Sergei Gelfand, publisher of the American Mathematical Society—who also happens to be the son of one of Russia's top 20th-century mathematicians, Israel Gelfand, a student of Mr. Kolmogorov. Some Western mathematicians, Sergei Gelfand adds, "even came for an extended stay because they realized there were a lot of talented people. This was unofficial mathematics."
Math Stars
Besides Grigory Perelman and the Poincaré Conjecture, there are numerous other famous math solvers, and there are still problems to solve.
Andrew Wiles (1953-)
This Princeton mathematician resolved the most famous problem in numbers—Fermat's Last Theorem—in 1995.
Leonhard Euler (1707–1783)
A Swiss mathematician who made so many contributions, particularly in the early foundations of calculus, that it gets hard to keep track of all that's named for him.
Kurt Gödel (1906–1978)
This Austrian logician demonstrated that any reasonably powerful system of math contains true statements that can't be proven.
The Riemann Hypothesis
To the enduring befuddlement of mathematicians, prime numbers—numbers divisible only by themselves and 1—exhibit no pattern at all: 2, 3, 5, 7, 11, 13 are the first few. They aren't evenly spaced but get scarcer the further out you go. No formula can tell you what the next one will be. In 1859, the German mathematician Bernhard Riemann discovered that a function—known now as the Riemann zeta function—appeared to give signposts to where primes lie in the great field of numbers. It provided some order to the mystery. Riemann conjectured that these key signposts—"zeros" of the function—all lie on a single straight line out to infinity, that none are flung off in strange places. In the 150 years since, no one has proved his hypothesis. To a mathematician, the hypothesis looks like this: All non-trivial zeros of the Riemann zeta function have a real part equal to ½.
--Charles Forelle
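For readers who want the sidebar's closing statement in symbols, here is the standard formulation (an editorial addition, not part of the original sidebar). The zeta function is defined by
\[
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}} \quad \text{for } \operatorname{Re}(s) > 1,
\]
and extended to the rest of the complex plane by analytic continuation; the Riemann Hypothesis asserts that every non-trivial zero \(\rho\) of \(\zeta\) satisfies \(\operatorname{Re}(\rho) = \tfrac{1}{2}\).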
One such visitor was Dusa McDuff, then a British algebraist and now a professor emerita at the State University of New York at Stony Brook. She studied with the older Mr. Gelfand for six months, and credits this experience with opening her eyes to what mathematics really is: "It was a wonderful education... Gelfand amazed me by talking of mathematics as though it were poetry."
In the mathematical counterculture, math "was almost a hobby," recalls Sergei Gelfand. "So you could spend your time doing things that would not be useful to anyone for the nearest decade." Mathematicians called it "math for math's sake." There was no material reward in this—no tenure, no money, no apartments, no foreign travel; all they stood to gain was the respect of their peers.
Math not only held out the promise of intellectual work without state interference (if also without its support) but also something found nowhere else in late-Soviet society: a knowable singular truth. "If I had been free to choose any profession, I would have become a literary critic," says Georgii Shabat, a well-known Moscow mathematician. "But I wanted to work, not spend my life fighting the censors." The search for that truth could take long years—but in the late Soviet Union, time seemed to stand still.
When it all collapsed, the state stopped investing in math and holding its mathematicians hostage. It's hard to say which of these two factors did more to send Russian mathematicians to the West, primarily the U.S., but leave they did, in what was probably one of the biggest outflows of brainpower the world has ever known. Even the older Mr. Gelfand moved to the U.S. and taught at Rutgers University for nearly 20 years, almost until his death in October at the age of 96. The flow is probably unstoppable by now: A promising graduate student in Moscow or St. Petersburg, unable to find a suitable academic adviser at home, is most likely to follow the trail to the U.S.
But the math culture they find in America, while less back-stabbing than that of the Soviet math establishment, is far from the meritocratic ideal that Russia's unofficial math world had taught them to expect. American math culture has intellectual rigor but also suffers from allegations of favoritism, small-time competitiveness, occasional plagiarism scandals, as well as the usual tenure battles, funding pressures and administrative chores that characterize American academic life. This culture offers the kinds of opportunities for professional communication that a Soviet mathematician could hardly have dreamed of, but it doesn't foster the sort of luxurious, timeless creative work that was typical of the Soviet math counterculture.
For example, the American model may not be able to produce a breakthrough like the proof of the Poincaré Conjecture, carried out by the St. Petersburg mathematician Grigory Perelman.
Mr. Perelman came to the United States as a young postdoctoral student in the early 1990s and immediately decided that America was math heaven; he wrote home demanding that his mother and his younger sister, a budding mathematician, move here. But three years later, when his postdoc hiatus was over and he was faced with the pressures of securing an academic position, he returned home, disillusioned.
In St. Petersburg he went on the (admittedly modest) payroll of the math research institute, where he showed up infrequently and generally kept to himself for almost seven years while working on one of the greatest mathematical discoveries of at least the last hundred years. It's all but impossible to imagine an American institution that could have provided Mr. Perelman with this kind of near-solitary existence, free of teaching and publishing obligations.
After posting his proof on the Web, Mr. Perelman traveled to the U.S. in the spring of 2003, to lecture at a couple of East Coast universities. He was immediately showered with offers of professorial appointments and research money, and, by all accounts, he found these offers gravely insulting, as he believes the monetization of achievement is the ultimate insult to mathematics. So profound was his disappointment with the rewards he was offered that, I believe, it contributed a great deal to his subsequent decision to quit mathematics altogether, along with the people who practice it. (He now lives with his mother on the outskirts of St. Petersburg.)
A child of the Soviet math counterculture, he still held a singular truth to be self-evident: Math as it ought to be practiced, math as the ultimate flight of the imagination, is something money can't buy.
This essay was adapted from Masha Gessen's latest book, "Perfect Rigor: A Genius and the Mathematical Breakthrough of the Century," a story of Grigory Perelman and the Poincaré Conjecture. She lives in Moscow and is the author of three previous books.
Copyright 2009 Dow Jones & Company, Inc. All Rights Reserved
Friday, May 07, 2010
The grand challenges of Indian science
R.A. Mashelkar
We need to recognise that there is no intellectual democracy; elitism in science is inevitable and needs to be promoted.
The Nobel Laureate Richard Feynman had famously said, ‘the difficulty with science is often not with the new ideas, but in escaping the old ones. A certain amount of irreverence is essential for creative pursuit in science.’
The first grand challenge before Indian science is that of building some irreverence. Our students are too reverent. Our existing hierarchical structures kill irreverence. Promoting irreverence means building a questioning attitude. It means education systems that do not have rigid, unimaginative curricula; it means replacing ‘learning by rote’ with ‘learning by doing’ and doing away with examination systems that admit only a single correct answer.
Paper or people?
More often than not, in our systems, paper becomes more important than people. Bureaucracy overrides meritocracy. Risk-taking innovators are shot down. Decision-making time cycles are longer than product life cycles. Therefore, the second grand challenge is that of creating an ‘innovation ecosystem’ in which questioning attitudes and healthy irreverence can grow.
The third grand challenge is that of creating truly innovative scientists, who see what everyone else sees but think what no one else thinks. The 2005 Nobel Prize winners for medicine, Warren and Marshall, for instance, were such innovators. Everyone had thought that the cause of gastric inflammation and stomach ulceration was excessive acid secretion due to irregularities in diet and lifestyle. Warren and Marshall postulated that the causative agent was a bacterium, Helicobacter pylori. They were ridiculed but they stuck to their guns. They saw what the others did not see. And they were proved right.
The fourth grand challenge is the ability to pose, rather than merely solve, big problems. For example, James Watson felt sure that it was going to be possible to discover the molecular nature of the gene and worked hard at it — even to such an extent that he was fired from the Rockefeller Fellowship that he held. Einstein, when he was 15 years old, asked himself what the world would look like if he were moving at the velocity of light. This big question led finally to his special theory of relativity.
The fifth grand challenge is to create new mechanisms by which out-of-the-box thinking will be triggered in Indian science. In the early nineties, when I was the Director of the National Chemical Laboratory, we tried to promote this by creating a small “kite flying fund”, where an out-of-the-box idea with even a one-in-a-thousand chance of success would be supported. Bold thinking was applauded and failure was not punished. The result was a remarkable ‘free thinking’ that gave us quite a few breakthroughs.
When I moved to the Council of Scientific and Industrial Research (CSIR) as Director-General in the mid-nineties, we created a “New Idea Fund” with a similar objective. Here, over time, it turned out that it was not the lack of funds but the lack of great ideas that was the bottleneck!
But great ideas did come to Indian scientists in the distant past. In 2003, Jayant Narlikar wrote a book, The Scientific Edge, in which he listed the top 10 achievements of Indian science and technology in the 20th century. Five come before 1950 and five after 1950. Interestingly, the five before 1950 are all individual efforts, namely, the works of Ramanujam (the products of his mathematical genius are still being researched), Meghnad Saha (his ionization equation played a vital role in stellar astrophysics), S.N. Bose (his work on particle statistics was path-breaking), C.V. Raman (his discovery of the Raman effect led to the one and only Nobel Prize that an Indian scientist doing work in India has won) and G.N. Ramachandran (the father of molecular biophysics).
After 1950, Narlikar lists the other five achievements, namely the green revolution, space research, nuclear energy, superconductivity and the transformation of CSIR in the nineties. Of these, except for the superconductivity research, in which the likes of C.N.R. Rao made pioneering contributions, all are government-funded “organised science and technology”. Why is it that in the second half of the 20th century we could not recreate the magic that the Ramanujams, Ramans and Boses created in the early part of the century?
The potential Ramans and Ramanujams are there even today somewhere. We need to find them early enough and nurture them. For this, we need to recognise that there is no intellectual democracy; elitism in science is inevitable and needs to be promoted.
When the 2005 Nobel Prize for physics was shared by Glauber, Hall and Hansch, a controversy erupted, since many Indian scientists felt that it should also have been shared by E.C.G. Sudarshan, a scientist of Indian origin. In 2009, we did better: a scientist of Indian origin, Venky Ramakrishnan, shared the chemistry Nobel Prize with Steitz and Yonath. The fact that Venky was born in India was a cause for great Indian celebration. Next, will we have a Nobel Prize for an Indian working in India?
Why not? It certainly can happen. The government has created new institutions such as the Indian Institutes of Science Education and Research. It has created schemes such as Innovation in Science Pursuit for Inspired Research (INSPIRE) to draw millions of bright young students into science and retain them there. There are clear signs of a reversal of the brain drain. Infosys has taken a giant step forward by creating mini Indian Nobel prizes worth half a crore rupees each for different scientific disciplines. If we can leverage all this by promoting that irreverence in Indian science, creating new organisational values, and creating tolerance for risk-taking and failure, then Indian science will certainly make that ‘much awaited’ difference. Nobel prizes will then follow inevitably.
(Dr. R.A. Mashelkar, FRS, is chairman, National Innovation Foundation & president, Global Research Alliance.)
© Copyright 2000 - 2009 The Hindu
Wednesday, May 05, 2010
Ratan Tata - A Leadership Lesson!
One of Mr. Ratan N Tata's (RNT) first assignments was the stewardship of the ailing electronics company in the Tata portfolio - Nelco.
Story goes that a team of senior managers from Nelco was driving to Nasik along with RNT. Halfway into the journey, the car had a flat tyre, and as the driver pulled up, the occupants - including Mr. Tata - got off for a comfort break, leaving the driver to replace the tyre.
Some of the managers welcomed the forced break, as it allowed them a much-needed chance to light up a cigarette. Some used the opportunity to stretch, and smile, and share a joke. And then, one of them suddenly noticed that Mr. Tata was not to be seen, and wondered aloud where Ratan Tata might have vanished!
Was he behind some bush?
Had he wandered off inside the roadside dhaba for a quick cup of tea?
Or was he mingling with some passers-by, listening to their stories?
None of these. In fact, while his colleagues were taking a break, Ratan Tata was busy helping the driver change the tyre. Sleeves rolled up, tie swatted away over the shoulder, hands expertly working the jack and the spanner, bouncing the spare tyre to check whether the pressure was OK. Droplets of sweat on the brow, and a smile on the face.
At that moment, the managers accompanying Ratan Tata got a master class in Leadership they haven't forgotten.
And that's a moment that the driver of that car probably hasn't forgotten either!
Questions to ask:
· When was the last time I rolled up my sleeves to do a task much below my hierarchy?
· Do I wait for the big opportunity to showcase my leadership?
· Is that big opportunity ever going to come?
· Am I trying to manage upwards so much that I've lost the feel of the field?
Ideas for action:
· Humility is the essence of success. Be humble and even teach your children to be so.
· To reach the top and remain there, always start from the bottom, else your days at the top will not last long.
· Practice leadership in small things instead of waiting for the big crisis or a major product launch.
· Seek to find opportunities to lead in everyday moments.
· Build your leadership skills one baby step at a time.
· When one’s hands get dirty - The mind remains clean!!
Monday, May 03, 2010
The Unreasonable Effectiveness of Mathematics
R. W. HAMMING
Reprinted From: The American Mathematical Monthly
Volume 87 Number 2 February 1980
Prologue. It is evident from the title that this is a philosophical discussion. I shall not apologize for the philosophy, though I am well aware that most scientists, engineers, and mathematicians have little regard for it; instead, I shall give this short prologue to justify the approach.
Man, so far as we know, has always wondered about himself, the world around him, and what life is all about. We have many myths from the past that tell how and why God, or the gods, made man and the universe. These I shall call theological explanations. They have one principal characteristic in common-there is little point in asking why things are the way they are, since we are given mainly a description of the creation as the gods chose to do it.
Philosophy started when man began to wonder about the world outside of this theological framework. An early example is the description by the philosophers that the world is made of earth, fire, water, and air. No doubt they were told at the time that the gods made things that way and to stop worrying about it.
From these early attempts to explain things slowly came philosophy as well as our present science. Not that science explains "why" things are as they are-gravitation does not explain why things fall-but science gives so many details of "how" that we have the feeling we understand "why." Let us be clear about this point; it is by the sea of interrelated details that science seems to say "why" the universe is as it is.
Our main tool for carrying out the long chains of tight reasoning required by science is mathematics. Indeed, mathematics might be defined as being the mental tool designed for this purpose. Many people through the ages have asked the question I am effectively asking in the title, "Why is mathematics so unreasonably effective?" In asking this we are merely looking more at the logical side and less at the material side of what the universe is and how it works.
Mathematicians working in the foundations of mathematics are concerned mainly with the self-consistency and limitations of the system. They seem not to concern themselves with why the world apparently admits of a logical explanation. In a sense I am in the position of the early Greek philosophers who wondered about the material side, and my answers on the logical side are probably not much better than theirs were in their time. But we must begin somewhere and sometime to explain the phenomenon that the world seems to be organized in a logical pattern that parallels much of mathematics, that mathematics is the language of science and engineering.
Once I had organized the main outline, I had then to consider how best to communicate my ideas and opinions to others. Experience shows that I am not always successful in this matter. It finally occurred to me that the following preliminary remarks would help.
In some respects this discussion is highly theoretical. I have to mention, at least slightly, various theories of the general activity called mathematics, as well as touch on selected parts of it. Furthermore, there are various theories of applications. Thus, to some extent, this leads to a theory of theories. What may surprise you is that I shall take the experimentalist's approach in discussing things. Never mind what the theories are supposed to be, or what you think they should be, or even what the experts in the field assert they are; let us take the scientific attitude and look at what they are. I am well aware that much of what I say, especially about the nature of mathematics, will annoy many mathematicians. My experimental approach is quite foreign to their mentality and preconceived beliefs. So be it!
The inspiration for this article came from the similarly entitled article, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences" [1], by E. P. Wigner. It will be noticed that I have left out part of the title, and by those who have already read it that I do not duplicate much of his material (I do not feel I can improve on his presentation). On the other hand, I shall spend relatively more time trying to explain the implied question of the title. But when all my explanations are over, the residue is still so large as to leave the question essentially unanswered.
The Effectiveness of Mathematics. In his paper, Wigner gives a large number of examples of the effectiveness of mathematics in the physical sciences. Let me, therefore, draw on my own experiences that are closer to engineering. My first real experience in the use of mathematics to predict things in the real world was in connection with the design of atomic bombs during the Second World War. How was it that the numbers we so patiently computed on the primitive relay computers agreed so well with what happened on the first test shot at Alamogordo? There were, and could be, no small-scale experiments to check the computations directly. Later experience with guided missiles showed me that this was not an isolated phenomenon - constantly what we predict from the manipulation of mathematical symbols is realized in the real world. Naturally, working as I did for the Bell System, I did many telephone computations and other mathematical work on such varied things as traveling wave tubes, the equalization of television lines, the stability of complex communication systems, the blocking of calls through a telephone central office, to name but a few. For glamour, I can cite transistor research, space flight, and computer design, but almost all of science and engineering has used extensive mathematical manipulations with remarkable successes.
Many of you know the story of Maxwell's equations, how to some extent for reasons of symmetry he put in a certain term, and in time the radio waves that the theory predicted were found by Hertz. Many other examples of successfully predicting unknown physical effects from a mathematical formulation are well known and need not be repeated here.
The fundamental role of invariance is stressed by Wigner. It is basic to much of mathematics as well as to science. It was the lack of invariance of Newton's equations (the need for an absolute frame of reference for velocities) that drove Lorentz, Fitzgerald, Poincare, and Einstein to the special theory of relativity.
Wigner also observes that the same mathematical concepts turn up in entirely unexpected connections. For example, the trigonometric functions which occur in Ptolemy's astronomy turn out to be the functions which are invariant with respect to translation (time invariance). They are also the appropriate functions for linear systems. The enormous usefulness of the same pieces of mathematics in widely different situations has no rational explanation (as yet).
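To make the remark about translation invariance concrete (this worked line is an editorial aside, not part of Hamming's text): a sinusoid shifted in time is again a combination of the same two sinusoids at the same frequency,
\[
\sin\bigl(\omega(t+\tau)\bigr) = \cos(\omega\tau)\sin(\omega t) + \sin(\omega\tau)\cos(\omega t),
\]
so the space spanned by \(\sin\omega t\) and \(\cos\omega t\) is carried into itself by every time shift. This is also why a linear time-invariant system driven by a sinusoid responds with a sinusoid of the same frequency.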
Furthermore, the simplicity of mathematics has long been held to be the key to applications in physics. Einstein is the most famous exponent of this belief. But even in mathematics itself the simplicity is remarkable, at least to me; the simplest algebraic equations, linear and quadratic, correspond to the simplest geometric entities, straight lines, circles, and conics. This makes analytic geometry possible in a practical way. How can it be that simple mathematics, being after all a product of the human mind, can be so remarkably useful in so many widely different situations?
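As a brief illustration of the correspondence (added for clarity; Hamming leaves it implicit): the general first- and second-degree equations in two variables,
\[
ax + by + c = 0, \qquad Ax^{2} + Bxy + Cy^{2} + Dx + Ey + F = 0,
\]
describe exactly the straight lines and the conic sections (circles, ellipses, parabolas, hyperbolas and their degenerate cases), which is what makes analytic geometry workable in practice.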
Because of these successes of mathematics there is at present a strong trend toward making each of the sciences mathematical. It is usually regarded as a goal to be achieved, if not today, then tomorrow. For this audience I will stick to physics and astronomy for further examples.
Pythagoras is the first man to be recorded who clearly stated that "Mathematics is the way to understand the universe." He said it both loudly and clearly, "Number is the measure of all things."
Kepler is another famous example of this attitude. He passionately believed that God's handiwork could be understood only through mathematics. After twenty years of tedious computations, he found his famous three laws of planetary motion-three comparatively simple mathematical expressions that described the apparently complex motions of the planets.
It was Galileo who said, "The laws of Nature are written in the language of mathematics." Newton used the results of both Kepler and Galileo to deduce the famous Newtonian laws of motion, which together with the law of gravitation are perhaps the most famous example of the unreasonable effectiveness of mathematics in science. They not only predicted where the known planets would be but successfully predicted the positions of unknown planets, the motions of distant stars, tides, and so forth.
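For reference (an editorial addition; the essay states these only in words), the two compact statements behind that success are Newton's second law and the law of universal gravitation,
\[
F = ma, \qquad F = G\,\frac{m_{1}m_{2}}{r^{2}},
\]
and from them the planetary orbits, the tides and the perturbations that pointed to unknown planets could all be worked out.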
Science is composed of laws which were originally based on a small, carefully selected set of observations, often not very accurately measured originally; but the laws have later been found to apply over much wider ranges of observations and much more accurately than the original data justified. Not always, to be sure, but often enough to require explanation.
During my thirty years of practicing mathematics in industry, I often worried about the predictions I made. From the mathematics that I did in my office I confidently (at least to others) predicted some future events-if you do so and so, you will see such and such-and it usually turned out that I was right. How could the phenomena know what I had predicted (based on human-made mathematics) so that it could support my predictions? It is ridiculous to think that is the way things go. No, it is that mathematics provides, somehow, a reliable model for much of what happens in the universe. And since I am able to do only comparatively simple mathematics, how can it be that simple mathematics suffices to predict so much?
I could go on citing more examples illustrating the unreasonable effectiveness of mathematics, but it would only be boring. Indeed, I suspect that many of you know examples that I do not. Let me, therefore, assume that you grant me a very long list of successes, many of them as spectacular as the prediction of a new planet, of a new physical phenomenon, of a new artifact. With limited time, I want to spend it attempting to do what I think Wigner evaded-to give at least some partial answers to the implied question of the title.
What is Mathematics? Having looked at the effectiveness of mathematics, we need to look at the question,"What is Mathematics?" This is the title of a famous book by Courant and Robbins [2]. In it they do not attempt to give a formal definition, rather they are content to show what mathematics is by giving many examples. Similarly, I shall not give a comprehensive definition. But I will come closer than they did to discussing certain salient features of mathematics as I see them.
Perhaps the best way to approach the question of what mathematics is, is to start at the beginning. In the far distant prehistoric past, where we must look for the beginnings of mathematics, there were already four major faces of mathematics. First, there was the ability to carry on the long chains of close reasoning that to this day characterize much of mathematics. Second, there was geometry, leading through the concept of continuity to topology and beyond. Third, there was number, leading to arithmetic, algebra, and beyond. Finally there was artistic taste, which plays so large a role in modern mathematics. There are, of course, many different kinds of beauty in mathematics. In number theory it seems to be mainly the beauty of the almost infinite detail; in abstract algebra the beauty is mainly in the generality. Various areas of mathematics thus have various standards of aesthetics.
The earliest history of mathematics must, of course, be all speculation, since there is not now, nor does there ever seem likely to be, any actual, convincing evidence. It seems, however, that in the very foundations of primitive life there was built in, for survival purposes if for nothing else, an understanding of cause and effect. Once this trait is built up beyond a single observation to a sequence of, "If this, then that, and then it follows still further that . . . ," we are on the path of the first feature of mathematics I mentioned, long chains of close reasoning. But it is hard for me to see how simple Darwinian survival of the fittest would select for the ability to do the long chains that mathematics and science seem to require.
Geometry seems to have arisen from the problems of decorating the human body for various purposes, such as religious rites, social affairs, and attracting the opposite sex, as well as from the problems of decorating the surfaces of walls, pots, utensils and clothing. This also implies the fourth aspect I mentioned, aesthetic taste, and this is one of the deep foundations of mathematics. Most textbooks repeat the Greeks and say that geometry arose from the needs of the Egyptians to survey the land after each flooding by the Nile River, but I attribute much more to aesthetics than do most historians of mathematics and correspondingly less to immediate utility.
The third aspect of mathematics, numbers, arose from counting. So basic are numbers that a famous mathematician once said, "God made the integers, man did the rest" [3]. The integers seem to us to be so fundamental that we expect to find them wherever we find intelligent life in the universe. I have tried, with little success, to get some of my friends to understand my amazement that the abstraction of integers for counting is both possible and useful. Is it not remarkable that 6 sheep plus 7 sheep make 13 sheep; that 6 stones plus 7 stones make 13 stones? Is it not a miracle that the universe is so constructed that such a simple abstraction as a number is possible? To me this is one of the strongest examples of the unreasonable effectiveness of mathematics. Indeed, I find it both strange and unexplainable.
In the development of numbers, we next come to the fact that these counting numbers, the integers, were used successfully in measuring how many times a standard length can be used to exhaust the desired length that is being measured. But it must have soon happened, comparatively speaking, that a whole number of units did not exactly fit the length being measured, and the measurers were driven to the fractions-the extra piece that was left over was used to measure the standard length. Fractions are not counting numbers; they are measuring numbers. Because of their common use in measuring, the fractions were, by a suitable extension of ideas, soon found to obey the same rules for manipulations as did the integers, with the added benefit that they made division possible in all cases (I have not yet come to the number zero). Some acquaintance with the fractions soon reveals that between any two fractions you can put as many more as you please and that in some sense they are homogeneously dense everywhere. But when we extend the concept of number to include the fractions, we have to give up the idea of the next number.
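A one-line check of the density just mentioned (an editorial aside): between any two distinct fractions \(p/q < r/s\), their average lies strictly between them,
\[
\frac{p}{q} \;<\; \frac{1}{2}\!\left(\frac{p}{q} + \frac{r}{s}\right) = \frac{ps + qr}{2qs} \;<\; \frac{r}{s},
\]
and the step can be repeated indefinitely, so no fraction has a "next" fraction.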
This brings us again to Pythagoras, who is reputed to be the first man to prove that the diagonal of a square and the side of the square have no common measure-that they are irrationally related. This observation apparently produced a profound upheaval in Greek mathematics. Up to that time the discrete number system and the continuous geometry flourished side by side with little conflict. The crisis of incommensurability tripped off the Euclidean approach to mathematics. It is a curious fact that the early Greeks attempted to make mathematics rigorous by replacing the uncertainties of numbers by what they felt was the more certain geometry (due to Eudoxus). It was a major event to Euclid, and as a result you find in The Elements [4] a lot of what we now consider number theory and algebra cast in the form of geometry. Opposed to the early Greeks, who doubted the existence of the real number system, we have decided that there should be a number that measures the length of the diagonal of a unit square (though we need not do so), and that is more or less how we extended the rational number system to include the algebraic numbers. It was the simple desire to measure lengths that did it. How can anyone deny that there is a number to measure the length of any straight line segment?
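The classical argument behind this incommensurability, sketched here as an editorial aside: if the diagonal and the side of a square had a common measure, their ratio would be a fraction \(p/q\) in lowest terms with
\[
\left(\frac{p}{q}\right)^{2} = 2, \quad \text{hence} \quad p^{2} = 2q^{2},
\]
so \(p\) is even, say \(p = 2k\); then \(2k^{2} = q^{2}\) forces \(q\) to be even as well, contradicting the assumption that \(p/q\) was in lowest terms.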
The algebraic numbers, which are roots of polynomials with integer, fractional, and, as was later proved, even algebraic numbers as coefficients, were soon under control by simply extending the same operations that were used on the simpler system of numbers.
However, the measurement of the circumference of a circle with respect to its diameter soon forced us to consider the ratio called pi. This is not an algebraic number, since no linear combination of the powers of pi with integer coefficients will exactly vanish. One length, the circumference, being a curved line, and the other length, the diameter, being a straight line, make the existence of the ratio less certain than is the ratio of the diagonal of a square to its side; but since it seems that there ought to be such a number, the transcendental numbers gradually got into the number system. Thus by a further suitable extension of the earlier ideas of numbers, the transcendental numbers were admitted consistently into the number system, though few students are at all comfortable with the technical apparatus we conventionally use to show the consistency.
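Stated symbolically (an added clarification, not Hamming's wording): pi is transcendental in the sense that
\[
a_{n}\pi^{n} + a_{n-1}\pi^{n-1} + \cdots + a_{1}\pi + a_{0} \neq 0
\]
for every choice of integers \(a_{0}, \dots, a_{n}\) that are not all zero, whereas an algebraic number such as \(\sqrt{2}\) does satisfy such an equation, namely \(x^{2} - 2 = 0\).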
Further tinkering with the number system brought both the number zero and the negative numbers. This time the extension required that we abandon division by the single number zero. This seems to round out the real number system for us (as long as we confine ourselves to the process of taking limits of sequences of numbers and do not admit still further operations) -not that we have to this day a firm, logical, simple, foundation for them; but they say that familiarity breeds contempt, and we are all more or less familiar with the real number system. Very few of us in our saner moments believe that the particular postulates that some logicians have dreamed up create the numbers - no, most of us believe that the real numbers are simply there and that it has been an interesting, amusing, and important game to try to find a nice set of postulates to account for them. But let us not confuse ourselves-Zeno's paradoxes are still, even after 2,000 years, too fresh in our minds to delude ourselves that we understand all that we wish we did about the relationship between the discrete number system and the continuous line we want to model. We know, from nonstandard analysis if from no other place, that logicians can make postulates that put still further entities on the real line, but so far few of us have wanted to go down that path. It is only fair to mention that there are some mathematicians who doubt the existence of the conventional real number system. A few computer theoreticians admit the existence of only "the computable numbers."
The next step in the discussion is the complex number system. As I read history, it was Cardan who was the first to understand them in any real sense. In his The Great Art or Rules of Algebra [5] he says, "Putting aside the mental tortures involved, multiply (5 + sqrt -15) by (5 - sqrt -15), making 25 - (-15) ...." Thus he clearly recognized that the same formal operations on the symbols for complex numbers would give meaningful results. In this way the real number system was gradually extended to the complex number system, except that this time the extension required giving up the property of ordering the numbers-the complex numbers cannot be ordered in the usual sense.
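Carrying out the computation Cardan describes (this worked line is an editorial addition): treating \(\sqrt{-15}\) as a formal symbol whose square is \(-15\),
\[
\bigl(5 + \sqrt{-15}\bigr)\bigl(5 - \sqrt{-15}\bigr) = 25 - \bigl(\sqrt{-15}\bigr)^{2} = 25 - (-15) = 40,
\]
so the purely formal rules of multiplication applied to these strange symbols deliver an ordinary real number.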
Cauchy was apparently led to the theory of complex variables by the problem of integrating real functions along the real line. He found that by bending the path of integration into the complex plane he could solve real integration problems.
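A standard textbook instance of the technique (offered as an illustration; it is not Hamming's example): to evaluate
\[
\int_{-\infty}^{\infty} \frac{dx}{1 + x^{2}},
\]
one closes the path of integration with a large semicircle in the upper half of the complex plane, whose contribution vanishes as the radius grows. The integrand has a single pole there, at \(z = i\), with residue \(1/(2i)\), so the residue theorem gives \(2\pi i \cdot \tfrac{1}{2i} = \pi\), agreeing with the elementary answer \(\arctan x \,\big|_{-\infty}^{\infty} = \pi\).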
A few years ago I had the pleasure of teaching a course in complex variables. As always happens when I become involved in the topic, I again came away with the feeling that "God made the universe out of complex numbers." Clearly, they play a central role in quantum mechanics. They are a natural tool in many other areas of application, such as electric circuits, fields, and so on.
To summarize, from simple counting using the God-given integers, we made various extensions of the ideas of numbers to include more things. Sometimes the extensions were made for what amounted to aesthetic reasons, and often we gave up some property of the earlier number system. Thus we came to a number system that is unreasonably effective even in mathematics itself; witness the way we have solved many number theory problems of the original highly discrete counting system by using a complex variable.
From the above we see that one of the main strands of mathematics is the extension, the generalization, the abstraction - they are all more or less the same thing-of well-known concepts to new situations. But note that in the very process the definitions themselves are subtly altered. Therefore, what is not so widely recognized, old proofs of theorems may become false proofs. The old proofs no longer cover the newly defined things. The miracle is that almost always the theorems are still true; it is merely a matter of fixing up the proofs. The classic example of this fixing up is Euclid's The Elements [4]. We have found it necessary to add quite a few new postulates (or axioms, if you wish, since we no longer care to distinguish between them) in order to meet current standards of proof. Yet how does it happen that no theorem in all the thirteen books is now false? Not one theorem has been found to be false, though often the proofs given by Euclid seem now to be false. And this phenomenon is not confined to the past. It is claimed that an ex-editor of Mathematical Reviews once said that over half of the new theorems published these days are essentially true though the published proofs are false. How can this be if mathematics is the rigorous deduction of theorems from assumed postulates and earlier results? Well, it is obvious to anyone who is not blinded by authority that mathematics is not what the elementary teachers said it was. It is clearly something else.
What is this "else"? Once you start to look you find that if you were confined to the axioms and postulates then you could deduce very little. The first major step is to introduce new concepts derived from the assumptions, concepts such as triangles. The search for proper concepts and definitions is one of the main features of doing great mathematics.
While on the topic of proofs, classical geometry begins with the theorem and tries to find a proof. Apparently it was only in the 1850's or so that it was clearly recognized that the opposite approach is also valid (it must have been occasionally used before then). Often it is the proof that generates the theorem. We see what we can prove and then examine the proof to see what we have proved! These are often called "proof generated theorems" [6]. A classic example is the concept of uniform convergence. Cauchy had proved that a convergent series of terms, each of which is continuous, converges to a continuous function. At the same time there were known to be Fourier series of continuous functions that converged to a discontinuous limit. By a careful examination of Cauchy's proof, the error was found and fixed up by changing the hypothesis of the theorem to read, "a uniformly convergent series."
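To spell out the repaired statement (an editorial gloss): the corrected theorem says that if continuous functions \(f_{n}\) converge uniformly to \(f\), that is,
\[
\sup_{x}\,\lvert f_{n}(x) - f(x)\rvert \to 0 \quad \text{as } n \to \infty,
\]
then \(f\) is continuous. Pointwise convergence alone is not enough: the partial sums of the Fourier series of a square wave are each continuous, yet their pointwise limit has jumps.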
More recently, we have had an intense study of what is called the foundations of mathematics-which in my opinion should be regarded as the top battlements of mathematics and not the foundations. It is an interesting field, but the main results of mathematics are impervious to what is found there-we simply will not abandon much of mathematics no matter how illogical it is made to appear by research in the foundations.
I hope that I have shown that mathematics is not the thing it is often assumed to be, that mathematics is constantly changing and hence even if I did succeed in defining it today the definition would not be appropriate tomorrow. Similarly with the idea of rigor-we have a changing standard. The dominant attitude in science is that we are not the center of the universe, that we are not uniquely placed, etc., and similarly it is difficult for me to believe that we have now reached the ultimate of rigor. Thus we cannot be sure of the current proofs of our theorems. Indeed it seems to me:
The Postulates of Mathematics Were Not on the Stone Tablets that Moses Brought Down from Mt. Sinai.
It is necessary to emphasize this. We begin with a vague concept in our minds, then we create various sets of postulates, and gradually we settle down to one particular set. In the rigorous postulational approach the original concept is now replaced by what the postulates define. This makes further evolution of the concept rather difficult and as a result tends to slow down the evolution of mathematics. It is not that the postulation approach is wrong, only that its arbitrariness should be clearly recognized, and we should be prepared to change postulates when the need becomes apparent.
Mathematics has been made by man and therefore is apt to be altered rather continuously by him. Perhaps the original sources of mathematics were forced on us, but as in the example I have used we see that in the development of so simple a concept as number we have made choices for the extensions that were only partly controlled by necessity and often, it seems to me, more by aesthetics. We have tried to make mathematics a consistent, beautiful thing, and by so doing we have had an amazing number of successful applications to the real world.
The idea that theorems follow from the postulates does not correspond to simple observation. If the Pythagorean theorem were found to not follow from the postulates, we would again search for a way to alter the postulates until it was true. Euclid's postulates came from the Pythagorean theorem, not the other way. For over thirty years I have been making the remark that if you came into my office and showed me a proof that Cauchy's theorem was false I would be very interested, but I believe that in the final analysis we would alter the assumptions until the theorem was true. Thus there are many results in mathematics that are independent of the assumptions and the proof.
How do we decide in a "crisis" what parts of mathematics to keep and what parts to abandon? Usefulness is one main criterion, but often it is usefulness in creating more mathematics rather than in the applications to the real world! So much for my discussion of mathematics.
Some Partial Explanations. I will arrange my explanations of the unreasonable effectiveness of mathematics under four headings.
1. We see what we look for. No one is surprised if after putting on blue tinted glasses the world appears bluish. I propose to show some examples of how much this is true in current science. To do this I am again going to violate a lot of widely, passionately held beliefs. But hear me out.
I picked the example of scientists in the earlier part for a good reason. Pythagoras is to my mind the first great physicist. It was he who found that we live in what the mathematicians call L2-the sum of the squares of the two sides of a right triangle gives the square of the hypotenuse. As I said before, this is not a result of the postulates of geometry-this is one of the results that shaped the postulates.
Let us next consider Galileo. Not too long ago I was trying to put myself in Galileo's shoes, as it were, so that I might feel how he came to discover the law of falling bodies. I try to do this kind of thing so that I can learn to think like the masters did-I deliberately try to think as they might have done.
Well, Galileo was a well-educated man and a master of scholastic arguments. He well knew how to argue the number of angels on the head of a pin, how to argue both sides of any question. He was trained in these arts far better than any of us these days. I picture him sitting one day with a light and a heavy ball, one in each hand, and tossing them gently. He says, hefting them, "It is obvious to anyone that heavy objects fall faster than light ones-and, anyway, Aristotle says so." "But suppose," he says to himself, having that kind of a mind, "that in falling the body broke into two pieces. Of course the two pieces would immediately slow down to their appropriate speeds. But suppose further that one piece happened to touch the other one. Would they now be one piece and both speed up? Suppose I tied the two pieces together. How tightly must I do it to make them one piece? A light string? A rope? Glue? When are two pieces one?"
The more he thought about it-and the more you think about it-the more unreasonable becomes the question of when two bodies are one. There is simply no reasonable answer to the question of how a body knows how heavy it is-if it is one piece, or two, or many. Since falling bodies do something, the only possible thing is that they all fall at the same speed-unless interfered with by other forces. There's nothing else they can do. He may have later made some experiments, but I strongly suspect that something like what I imagined actually happened. I later found a similar story in a book by Polya [7]. Galileo found his law not by experimenting but by simple, plain thinking, by scholastic reasoning.
I know that the textbooks often present the falling body law as an experimental observation; I am claiming that it is a logical law, a consequence of how we tend to think.
Newton, as you read in books, deduced the inverse square law from Kepler's laws, though they often present it the other way; from the inverse square law the textbooks deduce Kepler's laws. But if you believe in anything like the conservation of energy and think that we live in a three-dimensional Euclidean space, then how else could a symmetric central-force field fall off? Measurements of the exponent by doing experiments are to a great extent attempts to find out if we live in a Euclidean space, and not a test of the inverse square law at all.
But if you do not like these two examples, let me turn to the most highly touted law of recent times, the uncertainty principle. It happens that recently I became involved in writing a book on Digital Filters [8] when I knew very little about the topic. As a result I early asked the question, "Why should I do all the analysis in terms of Fourier integrals? Why are they the natural tools for the problem?" I soon found out, as many of you already know, that the eigenfunctions of translation are the complex exponentials. If you want time invariance, and certainly physicists and engineers do (so that an experiment done today or tomorrow will give the same results), then you are led to these functions. Similarly, if you believe in linearity then they are again the eigenfunctions. In quantum mechanics the quantum states are absolutely additive; they are not just a convenient linear approximation. Thus the trigonometric functions are the eigenfunctions one needs in both digital filter theory and quantum mechanics, to name but two places.
Now when you use these eigenfunctions you are naturally led to representing various functions, first as a countable number and then as a non-countable number of them-namely, the Fourier series and the Fourier integral. Well, it is a theorem in the theory of Fourier integrals that the variability of the function multiplied by the variability of its transform exceeds a fixed constant, in one notation l/2pi. This says to me that in any linear, time invariant system you must find an uncertainty principle. The size of Planck's constant is a matter of the detailed identification of the variables with integrals, but the inequality must occur.
As another example of what has often been thought to be a physical discovery but which turns out to have been put in there by ourselves, I turn to the well-known fact that the distribution of physical constants is not uniform; rather the probability of a random physical constant having a leading digit of 1. 2, or 3 is approximately 60%, and of course the leading digits of 5, 6, 7, 8, and 9 occur in total only about 40% of the time. This distribution applies to many types of numbers, including the distribution of the coefficients of a power series having only one singularity on the circle of convergence. A close examination of this phenomenon shows that it is mainly an artifact of the way we use numbers.
Having given four widely different examples of nontrivial situations where it turns out that the original phenomenon arises from the mathematical tools we use and not from the real world, I am ready to strongly suggest that a lot of what we see comes from the glasses we put on. Of course this goes against much of what you have been taught, but consider the arguments carefully. You can say that it was the experiment that forced the model on us, but I suggest that the more you think about the four examples the more uncomfortable you are apt to become. They are not arbitrary theories that I have selected, but ones which are central to physics,
In recent years it was Einstein who most loudly proclaimed the simplicity of the laws of physics, who used mathematics so exclusively as to be popularly known as a mathematician. When examining his special theory of relativity paper [9] one has the feeling that one is dealing with a scholastic philosopher's approach. He knew in advance what the theory should look like. and he explored the theories with mathematical tools, not actual experiments. He was so confident of the rightness of the relativity theories that, when experiments were done to check them, he was not much interested in the outcomes, saying that they had to come out that way or else the experiments were wrong. And many people believe that the two relativity theories rest more on philosophical grounds than on actual experiments.
Thus my first answer to the implied question about the unreasonable effectiveness of mathematics is that we approach the situations with an intellectual apparatus so that we can only find what we do in many cases. It is both that simple, and that awful. What we were taught about the basis of science being experiments in the real world is only partially true. Eddington went further than this; he claimed that a sufficiently wise mind could deduce all of physics. I am only suggesting that a surprising amount can be so deduced. Eddington gave a lovely parable to illustrate this point. He said, "Some men went fishing in the sea with a net, and upon examining what they caught they concluded that there was a minimum size to the fish in the sea."
2. We select the kind of mathematics to use. Mathematics does not always work. When we found that scalars did not work for forces, we invented a new mathematics, vectors. And going further we have invented tensors. In a book I have recently written [10] conventional integers are used for labels, and real numbers are used for probabilities; but otherwise all the arithmetic and algebra that occurs in the book, and there is a lot of both, has the rule that
1+1=0.
Thus my second explanation is that we select the mathematics to fit the situation, and it is simply not true that the same mathematics works every place.
3. Science in fact answers comparatively few problems. We have the illusion that science has answers to most of our questions, but this is not so. From the earliest of times man must have pondered over what Truth, Beauty, and Justice are. But so far as I can see science has contributed nothing to the answers, nor does it seem to me that science will do much in the near future. So long as we use a mathematics in which the whole is the sum of the parts we are not likely to have mathematics as a major tool in examining these famous three questions.
Indeed, to generalize, almost all of our experiences in this world do not fall under the domain of science or mathematics. Furthermore, we know (at least we think we do) that from Godel's theorem there are definite limits to what pure logical manipulation of symbols can do, there are limits to the domain of mathematics. It has been an act of faith on the part of scientists that the world can be explained in the simple terms that mathematics handles. When you consider how much science has not answered then you see that our successes are not so impressive as they might otherwise appear.
4. The evolution of man provided the model. I have already touched on the matter of the evolution of man. I remarked that in the earliest forms of life there must have been the seeds of our current ability to create and follow long chains of close reasoning. Some people [11] have further claimed that Darwinian evolution would naturally select for survival those competing forms of life which had the best models of reality in their minds-"best" meaning best for surviving and propagating. There is no doubt that there is some truth in this. We find, for example, that we can cope with thinking about the world when it is of comparable size to ourselves and our raw unaided senses, but that when we go to the very small or the very large then our thinking has great trouble. We seem not to be able to think appropriately about the extremes beyond normal size.
Just as there are odors that dogs can smell and we cannot, as well as sounds that dogs can hear and we cannot, so too there are wavelengths of light we cannot see and flavors we cannot taste. Why then, given our brains wired the way they are, does the remark "Perhaps there are thoughts we cannot think," surprise you? Evolution, so far, may possibly have blocked us from being able to think in some directions; there could be unthinkable thoughts.
If you recall that modern science is only about 400 years old, and that there have been from 3 to 5 generations per century, then there have been at most 20 generations since Newton and Galileo. If you pick 4,000 years for the age of science, generally, then you get an upper bound of 200 generations. Considering the effects of evolution we are looking for via selection of small chance variations, it does not seem to me that evolution can explain more than a small part of the unreasonable effectiveness of mathematics.
Conclusion. From all of this I am forced to conclude both that mathematics is unreasonably effective and that all of the explanations I have given when added together simply are not enough to explain what I set out to account for. I think that we-meaning you, mainly-must continue to try to explain why the logical side of science-meaning mathematics, mainly-is the proper tool for exploring the universe as we perceive it at present. I suspect that my explanations are hardly as good as those of the early Greeks, who said for the material side of the question that the nature of the universe is earth, fire, water, and air. The logical side of the nature of the universe requires further exploration.
I (Larry Frazier, who (with R. Hamming's permission) scanned this and put it online) was pleased to note that 58 people visited this essay in a recent 2-month period. I assume most of you are finding this from a pointer in the Gutenberg Project hierarchy.
On the other hand, I feel like thousands of people should be reading this. It is the most profound essay I have seen regarding philosophy of science; important, significant, in fact, for our whole understanding of thought, of knowing, or reality.
Drop me a note if you have any comments. Larry Frazier
1. E. P. Wigner, The unreasonable effectiveness of mathematics in the natural sciences, Comm. Pure Appl. Math., 13 (Feb. 1960).
2. R. Courant and H. Robbins, What Is Mathematics? Oxford University Press, 1941.
3. L. Kronecker, Item 1634. in On Mathematics and Mathematicians, by R E Moritz.
4. Euclid, Euclid's Elements, T. E. Heath, Dover Publications, New York, 1956.
5. G. Cardano, The Great Art or Rules of Algebra, transl. by T. R. Witmer, MIT Press, 1968, pp. 219-220
6. Imre Lakatos, Proofs and Refutations; Cambridge University Press, 1976, p. 33.
7. G. Polya, Mathematical Methods in Science, MAA, 1963, pp. 83-85.
8. R. W. Hamming, Digital Filters, Prentice-Hall, Englewood Cliffs, NJ., 1977.
9. G. Holton Thematic Origins of Scientific Thought, Kepler to Einstein, Harvard University Press, 1973.
10. R. W. Hamming, Coding and Information Theory, Prentice-Hall, Englewood Cliffs, NJ., 1980.
11. H. Mohr, Structure and Significance of Science, Springer- Verlag, 1977.
On 2001 May 24 Larry Frazier gave me permission to post this.
Tom Schneider
2003 April 10. I noticed that one paragraph ends incorrectly with "idea of the next number," To determine if there is a corrected copy somewhere I did a search (see below). The Dartmouth version has the same error!
* Google search for The Unreasonable Effectiveness of Mathematics
* The Unreasonable Effectiveness of Mathematics by R. W. HAMMING (at Dartmouth) Another copy of Hamming's article.
* The Unreasonable Effectiveness of Mathematics in the Natural Sciences by Eugene Wigner (at Dartmouth) This was cited in Hamming's article. It's also well worth reading.
Schneider Lab
origin: 1998 or 1999 sometime?
updated: 2001 May 24
updated: 2003 Apr 10
THE UNREASONABLE EFFECTIVENESS OF MATHEMATICS
R. W. HAMMING
Reprinted from The American Mathematical Monthly, Volume 87, Number 2, February 1980.
Prologue. It is evident from the title that this is a philosophical discussion. I shall not apologize for the philosophy, though I am well aware that most scientists, engineers, and mathematicians have little regard for it; instead, I shall give this short prologue to justify the approach.
Man, so far as we know, has always wondered about himself, the world around him, and what life is all about. We have many myths from the past that tell how and why God, or the gods, made man and the universe. These I shall call theological explanations. They have one principal characteristic in common-there is little point in asking why things are the way they are, since we are given mainly a description of the creation as the gods chose to do it.
Philosophy started when man began to wonder about the world outside of this theological framework. An early example is the description by the philosophers that the world is made of earth, fire, water, and air. No doubt they were told at the time that the gods made things that way and to stop worrying about it.
From these early attempts to explain things slowly came philosophy as well as our present science. Not that science explains "why" things are as they are-gravitation does not explain why things fall-but science gives so many details of "how" that we have the feeling we understand "why." Let us be clear about this point; it is by the sea of interrelated details that science seems to say "why" the universe is as it is.
Our main tool for carrying out the long chains of tight reasoning required by science is mathematics. Indeed, mathematics might be defined as being the mental tool designed for this purpose. Many people through the ages have asked the question I am effectively asking in the title, "Why is mathematics so unreasonably effective?" In asking this we are merely looking more at the logical side and less at the material side of what the universe is and how it works.
Mathematicians working in the foundations of mathematics are concerned mainly with the self-consistency and limitations of the system. They seem not to concern themselves with why the world apparently admits of a logical explanation. In a sense I am in the position of the early Greek philosophers who wondered about the material side, and my answers on the logical side are probably not much better than theirs were in their time. But we must begin somewhere and sometime to explain the phenomenon that the world seems to be organized in a logical pattern that parallels much of mathematics, that mathematics is the language of science and engineering.
Once I had organized the main outline, I had then to consider how best to communicate my ideas and opinions to others. Experience shows that I am not always successful in this matter. It finally occurred to me that the following preliminary remarks would help.
In some respects this discussion is highly theoretical. I have to mention, at least slightly, various theories of the general activity called mathematics, as well as touch on selected parts of it. Furthermore, there are various theories of applications. Thus, to some extent, this leads to a theory of theories. What may surprise you is that I shall take the experimentalist's approach in discussing things. Never mind what the theories are supposed to be, or what you think they should be, or even what the experts in the field assert they are; let us take the scientific attitude and look at what they are. I am well aware that much of what I say, especially about the nature of mathematics, will annoy many mathematicians. My experimental approach is quite foreign to their mentality and preconceived beliefs. So be it!
The inspiration for this article came from the similarly entitled article, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences" [1], by E. P. Wigner. It will be noticed that I have left out part of the title, and by those who have already read it that I do not duplicate much of his material (I do not feel I can improve on his presentation). On the other hand, I shall spend relatively more time trying to explain the implied question of the title. But when all my explanations are over, the residue is still so large as to leave the question essentially unanswered.
The Effectiveness of Mathematics. In his paper, Wigner gives a large number of examples of the effectiveness of mathematics in the physical sciences. Let me, therefore, draw on my own experiences that are closer to engineering. My first real experience in the use of mathematics to predict things in the real world was in connection with the design of atomic bombs during the Second World War. How was it that the numbers we so patiently computed on the primitive relay computers agreed so well with what happened on the first test shot at Alamogordo? There were, and could be, no small-scale experiments to check the computations directly. Later experience with guided missiles showed me that this was not an isolated phenomenon - constantly what we predict from the manipulation of mathematical symbols is realized in the real world. Naturally, working as I did for the Bell System, I did many telephone computations and other mathematical work on such varied things as traveling wave tubes, the equalization of television lines, the stability of complex communication systems, the blocking of calls through a telephone central office, to name but a few. For glamour, I can cite transistor research, space flight, and computer design, but almost all of science and engineering has used extensive mathematical manipulations with remarkable successes.
Many of you know the story of Maxwell's equations, how to some extent for reasons of symmetry he put in a certain term, and in time the radio waves that the theory predicted were found by Hertz. Many other examples of successfully predicting unknown physical effects from a mathematical formulation are well known and need not be repeated here.
The fundamental role of invariance is stressed by Wigner. It is basic to much of mathematics as well as to science. It was the lack of invariance of Newton's equations (the need for an absolute frame of reference for velocities) that drove Lorentz, Fitzgerald, Poincare, and Einstein to the special theory of relativity.
Wigner also observes that the same mathematical concepts turn up in entirely unexpected connections. For example, the trigonometric functions which occur in Ptolemy's astronomy turn out to be the functions which are invariant with respect to translation (time invariance). They are also the appropriate functions for linear systems. The enormous usefulness of the same pieces of mathematics in widely different situations has no rational explanation (as yet).
Furthermore, the simplicity of mathematics has long been held to be the key to applications in physics. Einstein is the most famous exponent of this belief. But even in mathematics itself the simplicity is remarkable, at least to me; the simplest algebraic equations, linear and quadratic, correspond to the simplest geometric entities, straight lines, circles, and conics. This makes analytic geometry possible in a practical way. How can it be that simple mathematics, being after all a product of the human mind, can be so remarkably useful in so many widely different situations?
Because of these successes of mathematics there is at present a strong trend toward making each of the sciences mathematical. It is usually regarded as a goal to be achieved, if not today, then tomorrow. For this audience I will stick to physics and astronomy for further examples.
Pythagoras is the first man to be recorded who clearly stated that "Mathematics is the way to understand the universe." He said it both loudly and clearly, "Number is the measure of all things."
Kepler is another famous example of this attitude. He passionately believed that God's handiwork could be understood only through mathematics. After twenty years of tedious computations, he found his famous three laws of planetary motion-three comparatively simple mathematical expressions that described the apparently complex motions of the planets.
It was Galileo who said, "The laws of Nature are written in the language of mathematics." Newton used the results of both Kepler and Galileo to deduce the famous Newtonian laws of motion, which together with the law of gravitation are perhaps the most famous example of the unreasonable effectiveness of mathematics in science. They not only predicted where the known planets would be but successfully predicted the positions of unknown planets, the motions of distant stars, tides, and so forth.
Science is composed of laws which were originally based on a small, carefully selected set of observations, often not very accurately measured originally; but the laws have later been found to apply over much wider ranges of observations and much more accurately than the original data justified. Not always, to be sure, but often enough to require explanation.
During my thirty years of practicing mathematics in industry, I often worried about the predictions I made. From the mathematics that I did in my office I confidently (at least to others) predicted some future events-if you do so and so, you will see such and such-and it usually turned out that I was right. How could the phenomena know what I had predicted (based on human-made mathematics) so that it could support my predictions? It is ridiculous to think that is the way things go. No, it is that mathematics provides, somehow, a reliable model for much of what happens in the universe. And since I am able to do only comparatively simple mathematics, how can it be that simple mathematics suffices to predict so much?
I could go on citing more examples illustrating the unreasonable effectiveness of mathematics, but it would only be boring. Indeed, I suspect that many of you know examples that I do not. Let me, therefore, assume that you grant me a very long list of successes, many of them as spectacular as the prediction of a new planet, of a new physical phenomenon, of a new artifact. With limited time, I want to spend it attempting to do what I think Wigner evaded-to give at least some partial answers to the implied question of the title.
What is Mathematics? Having looked at the effectiveness of mathematics, we need to look at the question, "What is Mathematics?" This is the title of a famous book by Courant and Robbins [2]. In it they do not attempt to give a formal definition; rather they are content to show what mathematics is by giving many examples. Similarly, I shall not give a comprehensive definition. But I will come closer than they did to discussing certain salient features of mathematics as I see them.
Perhaps the best way to approach the question of what mathematics is, is to start at the beginning. In the far distant prehistoric past, where we must look for the beginnings of mathematics, there were already four major faces of mathematics. First, there was the ability to carry on the long chains of close reasoning that to this day characterize much of mathematics. Second, there was geometry, leading through the concept of continuity to topology and beyond. Third, there was number, leading to arithmetic, algebra, and beyond. Finally there was artistic taste, which plays so large a role in modern mathematics. There are, of course, many different kinds of beauty in mathematics. In number theory it seems to be mainly the beauty of the almost infinite detail; in abstract algebra the beauty is mainly in the generality. Various areas of mathematics thus have various standards of aesthetics.
The earliest history of mathematics must, of course, be all speculation, since there is not now, nor does there ever seem likely to be, any actual, convincing evidence. It seems, however, that in the very foundations of primitive life there was built in, for survival purposes if for nothing else, an understanding of cause and effect. Once this trait is built up beyond a single observation to a sequence of, "If this, then that, and then it follows still further that . . . ," we are on the path of the first feature of mathematics I mentioned, long chains of close reasoning. But it is hard for me to see how simple Darwinian survival of the fittest would select for the ability to do the long chains that mathematics and science seem to require.
Geometry seems to have arisen from the problems of decorating the human body for various purposes, such as religious rites, social affairs, and attracting the opposite sex, as well as from the problems of decorating the surfaces of walls, pots, utensils and clothing. This also implies the fourth aspect I mentioned, aesthetic taste, and this is one of the deep foundations of mathematics. Most textbooks repeat the Greeks and say that geometry arose from the needs of the Egyptians to survey the land after each flooding by the Nile River, but I attribute much more to aesthetics than do most historians of mathematics and correspondingly less to immediate utility.
The third aspect of mathematics, numbers, arose from counting. So basic are numbers that a famous mathematician once said, "God made the integers, man did the rest" [3]. The integers seem to us to be so fundamental that we expect to find them wherever we find intelligent life in the universe. I have tried, with little success, to get some of my friends to understand my amazement that the abstraction of integers for counting is both possible and useful. Is it not remarkable that 6 sheep plus 7 sheep make 13 sheep; that 6 stones plus 7 stones make 13 stones? Is it not a miracle that the universe is so constructed that such a simple abstraction as a number is possible? To me this is one of the strongest examples of the unreasonable effectiveness of mathematics. Indeed, I find it both strange and unexplainable.
In the development of numbers, we next come to the fact that these counting numbers, the integers, were used successfully in measuring how many times a standard length can be used to exhaust the desired length that is being measured. But it must have soon happened, comparatively speaking, that a whole number of units did not exactly fit the length being measured, and the measurers were driven to the fractions-the extra piece that was left over was used to measure the standard length. Fractions are not counting numbers; they are measuring numbers. Because of their common use in measuring, the fractions were, by a suitable extension of ideas, soon found to obey the same rules for manipulations as did the integers, with the added benefit that they made division possible in all cases (I have not yet come to the number zero). Some acquaintance with the fractions soon reveals that between any two fractions you can put as many more as you please and that in some sense they are homogeneously dense everywhere. But when we extend the concept of number to include the fractions, we have to give up the idea of the next number.
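A concrete illustration of that density, not in the original article but standard: if a/b and c/d are fractions with positive denominators and a/b < c/d, then their mediant lies strictly between them,

    \frac{a}{b} < \frac{a+c}{b+d} < \frac{c}{d},

so between any two fractions a third can always be manufactured, and by repeating the construction, as many more as you please.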
This brings us again to Pythagoras, who is reputed to be the first man to prove that the diagonal of a square and the side of the square have no common measure-that they are irrationally related. This observation apparently produced a profound upheaval in Greek mathematics. Up to that time the discrete number system and the continuous geometry flourished side by side with little conflict. The crisis of incommensurability tripped off the Euclidean approach to mathematics. It is a curious fact that the early Greeks attempted to make mathematics rigorous by replacing the uncertainties of numbers by what they felt was the more certain geometry (due to Eudoxus). It was a major event to Euclid, and as a result you find in The Elements [4] a lot of what we now consider number theory and algebra cast in the form of geometry. Opposed to the early Greeks, who doubted the existence of the real number system, we have decided that there should be a number that measures the length of the diagonal of a unit square (though we need not do so), and that is more or less how we extended the rational number system to include the algebraic numbers. It was the simple desire to measure lengths that did it. How can anyone deny that there is a number to measure the length of any straight line segment?
The algebraic numbers, which are roots of polynomials with integer, fractional, and, as was later proved, even algebraic numbers as coefficients, were soon under control by simply extending the same operations that were used on the simpler system of numbers.
However, the measurement of the circumference of a circle with respect to its diameter soon forced us to consider the ratio called pi. This is not an algebraic number, since no linear combination of the powers of pi with integer coefficients will exactly vanish. One length, the circumference, being a curved line, and the other length, the diameter, being a straight line, make the existence of the ratio less certain than is the ratio of the diagonal of a square to its side; but since it seems that there ought to be such a number, the transcendental numbers gradually got into the number system. Thus by a further suitable extension of the earlier ideas of numbers, the transcendental numbers were admitted consistently into the number system, though few students are at all comfortable with the technical apparatus we conventionally use to show the consistency.
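Stated a little more formally than in the paragraph above (an aside on the phrase "no linear combination of the powers of pi"): to say that pi is transcendental is to say that

    c_n \pi^n + c_{n-1} \pi^{n-1} + \cdots + c_1 \pi + c_0 \neq 0

for every choice of integers c_0, ..., c_n not all zero; this is Lindemann's theorem of 1882.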
Further tinkering with the number system brought both the number zero and the negative numbers. This time the extension required that we abandon division by the single number zero. This seems to round out the real number system for us (as long as we confine ourselves to the process of taking limits of sequences of numbers and do not admit still further operations) - not that we have to this day a firm, logical, simple, foundation for them; but they say that familiarity breeds contempt, and we are all more or less familiar with the real number system. Very few of us in our saner moments believe that the particular postulates that some logicians have dreamed up create the numbers - no, most of us believe that the real numbers are simply there and that it has been an interesting, amusing, and important game to try to find a nice set of postulates to account for them. But let us not confuse ourselves-Zeno's paradoxes are still, even after 2,000 years, too fresh in our minds to delude ourselves that we understand all that we wish we did about the relationship between the discrete number system and the continuous line we want to model. We know, from nonstandard analysis if from no other place, that logicians can make postulates that put still further entities on the real line, but so far few of us have wanted to go down that path. It is only fair to mention that there are some mathematicians who doubt the existence of the conventional real number system. A few computer theoreticians admit the existence of only "the computable numbers."
The next step in the discussion is the complex number system. As I read history, it was Cardan who was the first to understand them in any real sense. In his The Great Art or Rules of Algebra [5] he says, "Putting aside the mental tortures involved, multiply (5 + sqrt -15) by (5 - sqrt -15) making 25 - (-15) ...." Thus he clearly recognized that the same formal operations on the symbols for complex numbers would give meaningful results. In this way the real number system was gradually extended to the complex number system, except that this time the extension required giving up the property of ordering the numbers-the complex numbers cannot be ordered in the usual sense.
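Carried out formally, the multiplication Cardan describes gives

    (5 + \sqrt{-15})(5 - \sqrt{-15}) = 25 - (-15) = 40,

and since the two factors also add to 10, the seemingly impossible problem of dividing 10 into two parts whose product is 40 acquires a solution, provided one is willing to compute with the new symbols.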
Cauchy was apparently led to the theory of complex variables by the problem of integrating real functions along the real line. He found that by bending the path of integration into the complex plane he could solve real integration problems.
A few years ago I had the pleasure of teaching a course in complex variables. As always happens when I become involved in the topic, I again came away with the feeling that "God made the universe out of complex numbers." Clearly, they play a central role in quantum mechanics. They are a natural tool in many other areas of application, such as electric circuits, fields, and so on.
To summarize, from simple counting using the God-given integers, we made various extensions of the ideas of numbers to include more things. Sometimes the extensions were made for what amounted to aesthetic reasons, and often we gave up some property of the earlier number system. Thus we came to a number system that is unreasonably effective even in mathematics itself; witness the way we have solved many number theory problems of the original highly discrete counting system by using a complex variable.
From the above we see that one of the main strands of mathematics is the extension, the generalization, the abstraction - they are all more or less the same thing - of well-known concepts to new situations. But note that in the very process the definitions themselves are subtly altered. Therefore, though it is not so widely recognized, old proofs of theorems may become false proofs. The old proofs no longer cover the newly defined things. The miracle is that almost always the theorems are still true; it is merely a matter of fixing up the proofs. The classic example of this fixing up is Euclid's The Elements [4]. We have found it necessary to add quite a few new postulates (or axioms, if you wish, since we no longer care to distinguish between them) in order to meet current standards of proof. Yet how does it happen that no theorem in all the thirteen books is now false? Not one theorem has been found to be false, though often the proofs given by Euclid seem now to be false. And this phenomenon is not confined to the past. It is claimed that an ex-editor of Mathematical Reviews once said that over half of the new theorems published these days are essentially true though the published proofs are false. How can this be if mathematics is the rigorous deduction of theorems from assumed postulates and earlier results? Well, it is obvious to anyone who is not blinded by authority that mathematics is not what the elementary teachers said it was. It is clearly something else.
What is this "else"? Once you start to look you find that if you were confined to the axioms and postulates then you could deduce very little. The first major step is to introduce new concepts derived from the assumptions, concepts such as triangles. The search for proper concepts and definitions is one of the main features of doing great mathematics.
While on the topic of proofs, classical geometry begins with the theorem and tries to find a proof. Apparently it was only in the 1850's or so that it was clearly recognized that the opposite approach is also valid (it must have been occasionally used before then). Often it is the proof that generates the theorem. We see what we can prove and then examine the proof to see what we have proved! These are often called "proof generated theorems" [6]. A classic example is the concept of uniform convergence. Cauchy had proved that a convergent series of terms, each of which is continuous, converges to a continuous function. At the same time there were known to be Fourier series of continuous functions that converged to a discontinuous limit. By a careful examination of Cauchy's proof, the error was found and fixed up by changing the hypothesis of the theorem to read, "a uniformly convergent series."
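A standard instance of the difficulty, though not necessarily the series Cauchy's contemporaries had before them: the square-wave series

    \sum_{k=0}^{\infty} \frac{\sin((2k+1)x)}{2k+1} = \begin{cases} \;\;\pi/4, & 0 < x < \pi \\ -\pi/4, & -\pi < x < 0 \end{cases}

has partial sums that are finite sums of sines and hence continuous, yet it converges to a function with a jump at x = 0. The convergence fails to be uniform near the jump, which is precisely the hypothesis the repaired theorem adds.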
More recently, we have had an intense study of what is called the foundations of mathematics-which in my opinion should be regarded as the top battlements of mathematics and not the foundations. It is an interesting field, but the main results of mathematics are impervious to what is found there-we simply will not abandon much of mathematics no matter how illogical it is made to appear by research in the foundations.
I hope that I have shown that mathematics is not the thing it is often assumed to be, that mathematics is constantly changing and hence even if I did succeed in defining it today the definition would not be appropriate tomorrow. Similarly with the idea of rigor-we have a changing standard. The dominant attitude in science is that we are not the center of the universe, that we are not uniquely placed, etc., and similarly it is difficult for me to believe that we have now reached the ultimate of rigor. Thus we cannot be sure of the current proofs of our theorems. Indeed it seems to me:
The Postulates of Mathematics Were Not on the Stone Tablets that Moses Brought Down from Mt. Sinai.
It is necessary to emphasize this. We begin with a vague concept in our minds, then we create various sets of postulates, and gradually we settle down to one particular set. In the rigorous postulational approach the original concept is now replaced by what the postulates define. This makes further evolution of the concept rather difficult and as a result tends to slow down the evolution of mathematics. It is not that the postulation approach is wrong, only that its arbitrariness should be clearly recognized, and we should be prepared to change postulates when the need becomes apparent.
Mathematics has been made by man and therefore is apt to be altered rather continuously by him. Perhaps the original sources of mathematics were forced on us, but as in the example I have used we see that in the development of so simple a concept as number we have made choices for the extensions that were only partly controlled by necessity and often, it seems to me, more by aesthetics. We have tried to make mathematics a consistent, beautiful thing, and by so doing we have had an amazing number of successful applications to the real world.
The idea that theorems follow from the postulates does not correspond to simple observation. If the Pythagorean theorem were found to not follow from the postulates, we would again search for a way to alter the postulates until it was true. Euclid's postulates came from the Pythagorean theorem, not the other way. For over thirty years I have been making the remark that if you came into my office and showed me a proof that Cauchy's theorem was false I would be very interested, but I believe that in the final analysis we would alter the assumptions until the theorem was true. Thus there are many results in mathematics that are independent of the assumptions and the proof.
How do we decide in a "crisis" what parts of mathematics to keep and what parts to abandon? Usefulness is one main criterion, but often it is usefulness in creating more mathematics rather than in the applications to the real world! So much for my discussion of mathematics.
Some Partial Explanations. I will arrange my explanations of the unreasonable effectiveness of mathematics under four headings.
1. We see what we look for. No one is surprised if after putting on blue tinted glasses the world appears bluish. I propose to show some examples of how much this is true in current science. To do this I am again going to violate a lot of widely, passionately held beliefs. But hear me out.
I picked the example of scientists in the earlier part for a good reason. Pythagoras is to my mind the first great physicist. It was he who found that we live in what the mathematicians call L2-the sum of the squares of the two sides of a right triangle gives the square of the hypotenuse. As I said before, this is not a result of the postulates of geometry-this is one of the results that shaped the postulates.
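For readers who have not met the notation: "L2" refers to measuring distance by a sum of squares, so the Pythagorean relation c^2 = a^2 + b^2 is the two-dimensional case of the Euclidean distance

    d(x, y) = \Bigl( \sum_i (x_i - y_i)^2 \Bigr)^{1/2},

and it is in this sense that we appear to "live in L2."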
Let us next consider Galileo. Not too long ago I was trying to put myself in Galileo's shoes, as it were, so that I might feel how he came to discover the law of falling bodies. I try to do this kind of thing so that I can learn to think like the masters did-I deliberately try to think as they might have done.
Well, Galileo was a well-educated man and a master of scholastic arguments. He well knew how to argue the number of angels on the head of a pin, how to argue both sides of any question. He was trained in these arts far better than any of us these days. I picture him sitting one day with a light and a heavy ball, one in each hand, and tossing them gently. He says, hefting them, "It is obvious to anyone that heavy objects fall faster than light ones-and, anyway, Aristotle says so." "But suppose," he says to himself, having that kind of a mind, "that in falling the body broke into two pieces. Of course the two pieces would immediately slow down to their appropriate speeds. But suppose further that one piece happened to touch the other one. Would they now be one piece and both speed up? Suppose I tied the two pieces together. How tightly must I do it to make them one piece? A light string? A rope? Glue? When are two pieces one?"
The more he thought about it-and the more you think about it-the more unreasonable becomes the question of when two bodies are one. There is simply no reasonable answer to the question of how a body knows how heavy it is-if it is one piece, or two, or many. Since falling bodies do something, the only possible thing is that they all fall at the same speed-unless interfered with by other forces. There's nothing else they can do. He may have later made some experiments, but I strongly suspect that something like what I imagined actually happened. I later found a similar story in a book by Polya [7]. Galileo found his law not by experimenting but by simple, plain thinking, by scholastic reasoning.
I know that the textbooks often present the falling body law as an experimental observation; I am claiming that it is a logical law, a consequence of how we tend to think.
Newton, as you read in books, deduced the inverse square law from Kepler's laws, though they often present it the other way; from the inverse square law the textbooks deduce Kepler's laws. But if you believe in anything like the conservation of energy and think that we live in a three-dimensional Euclidean space, then how else could a symmetric central-force field fall off? Measurements of the exponent by doing experiments are to a great extent attempts to find out if we live in a Euclidean space, and not a test of the inverse square law at all.
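A back-of-the-envelope version of the argument, under exactly the assumptions named (a conserved quantity spreading symmetrically in three-dimensional Euclidean space): whatever emanates from the central body is spread over a sphere of area 4 \pi r^2, so

    F(r) \cdot 4 \pi r^2 = \text{constant}, \qquad \text{hence} \qquad F(r) \propto \frac{1}{r^2}.

In a space of n dimensions the same reasoning gives F \propto 1/r^{n-1}, which is why measuring the exponent amounts to probing the geometry of space.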
But if you do not like these two examples, let me turn to the most highly touted law of recent times, the uncertainty principle. It happens that recently I became involved in writing a book on Digital Filters [8] when I knew very little about the topic. As a result I early asked the question, "Why should I do all the analysis in terms of Fourier integrals? Why are they the natural tools for the problem?" I soon found out, as many of you already know, that the eigenfunctions of translation are the complex exponentials. If you want time invariance, and certainly physicists and engineers do (so that an experiment done today or tomorrow will give the same results), then you are led to these functions. Similarly, if you believe in linearity then they are again the eigenfunctions. In quantum mechanics the quantum states are absolutely additive; they are not just a convenient linear approximation. Thus the trigonometric functions are the eigenfunctions one needs in both digital filter theory and quantum mechanics, to name but two places.
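The eigenfunction calculation alluded to can be sketched in two lines, assuming enough regularity. Let T_a be the shift operator, (T_a f)(t) = f(t + a). If f is an eigenfunction of every shift, then

    f(t + a) = \lambda(a) f(t) \;\Rightarrow\; \lambda(a + b) = \lambda(a)\lambda(b) \;\Rightarrow\; \lambda(a) = e^{sa},

so f(t) = f(0) e^{st}; demanding bounded, purely oscillatory behavior forces s = i\omega, and the complex exponentials e^{i\omega t}, equivalently the sines and cosines, are the only candidates.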
Now when you use these eigenfunctions you are naturally led to representing various functions, first as a countable number and then as a non-countable number of them-namely, the Fourier series and the Fourier integral. Well, it is a theorem in the theory of Fourier integrals that the variability of the function multiplied by the variability of its transform exceeds a fixed constant, in one notation 1/(2 pi). This says to me that in any linear, time invariant system you must find an uncertainty principle. The size of Planck's constant is a matter of the detailed identification of the variables with integrals, but the inequality must occur.
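In one common normalization the theorem reads: if \sigma_t and \sigma_\omega denote the standard deviations of |f(t)|^2 and of |\hat{f}(\omega)|^2 (the Fourier transform), each normalized to unit area, then

    \sigma_t \, \sigma_\omega \ge \tfrac{1}{2}, \qquad \text{equivalently} \qquad \Delta t \, \Delta f \ge \frac{1}{4\pi}

in terms of ordinary frequency. The exact constant depends on how "variability" and the transform are defined, which is the point being made: identifying \omega with momentum or energy through Planck's constant converts this purely mathematical inequality into the Heisenberg relations.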
As another example of what has often been thought to be a physical discovery but which turns out to have been put in there by ourselves, I turn to the well-known fact that the distribution of physical constants is not uniform; rather the probability of a random physical constant having a leading digit of 1, 2, or 3 is approximately 60%, and of course the leading digits of 4, 5, 6, 7, 8, and 9 occur in total only about 40% of the time. This distribution applies to many types of numbers, including the distribution of the coefficients of a power series having only one singularity on the circle of convergence. A close examination of this phenomenon shows that it is mainly an artifact of the way we use numbers.
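The distribution being described is what is now usually called Benford's law: the probability of leading digit d is log10(1 + 1/d). The short sketch below (illustrative only; the function names are mine, not from the original article) checks the 60/40 split and shows the same law emerging from a scale-covariant sequence such as the powers of 2.

    import math

    # Benford's law: probability that a "random" constant has leading digit d.
    def benford(d: int) -> float:
        return math.log10(1 + 1 / d)

    print("digits 1-3:", sum(benford(d) for d in range(1, 4)))   # about 0.602
    print("digits 4-9:", sum(benford(d) for d in range(4, 10)))  # about 0.398

    # The leading digits of the first 1000 powers of 2 follow the same law closely.
    counts = {d: 0 for d in range(1, 10)}
    for n in range(1, 1001):
        counts[int(str(2 ** n)[0])] += 1

    for d in range(1, 10):
        print(d, counts[d] / 1000, round(benford(d), 3))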
Having given four widely different examples of nontrivial situations where it turns out that the original phenomenon arises from the mathematical tools we use and not from the real world, I am ready to strongly suggest that a lot of what we see comes from the glasses we put on. Of course this goes against much of what you have been taught, but consider the arguments carefully. You can say that it was the experiment that forced the model on us, but I suggest that the more you think about the four examples the more uncomfortable you are apt to become. They are not arbitrary theories that I have selected, but ones which are central to physics.
In recent years it was Einstein who most loudly proclaimed the simplicity of the laws of physics, who used mathematics so exclusively as to be popularly known as a mathematician. When examining his special theory of relativity paper [9] one has the feeling that one is dealing with a scholastic philosopher's approach. He knew in advance what the theory should look like, and he explored the theories with mathematical tools, not actual experiments. He was so confident of the rightness of the relativity theories that, when experiments were done to check them, he was not much interested in the outcomes, saying that they had to come out that way or else the experiments were wrong. And many people believe that the two relativity theories rest more on philosophical grounds than on actual experiments.
Thus my first answer to the implied question about the unreasonable effectiveness of mathematics is that we approach the situations with an intellectual apparatus so that we can only find what we do in many cases. It is both that simple, and that awful. What we were taught about the basis of science being experiments in the real world is only partially true. Eddington went further than this; he claimed that a sufficiently wise mind could deduce all of physics. I am only suggesting that a surprising amount can be so deduced. Eddington gave a lovely parable to illustrate this point. He said, "Some men went fishing in the sea with a net, and upon examining what they caught they concluded that there was a minimum size to the fish in the sea."
2. We select the kind of mathematics to use. Mathematics does not always work. When we found that scalars did not work for forces, we invented a new mathematics, vectors. And going further we have invented tensors. In a book I have recently written [10] conventional integers are used for labels, and real numbers are used for probabilities; but otherwise all the arithmetic and algebra that occurs in the book, and there is a lot of both, has the rule that
1+1=0.
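The book referred to is on coding theory, and arithmetic in which 1 + 1 = 0 is arithmetic modulo 2 (the two-element field GF(2)), the natural setting for parity checks. The fragment below is a minimal sketch of that arithmetic, written for this transcription and not taken from the book [10].

    # Arithmetic over GF(2): addition is exclusive-or, multiplication is AND.
    def gf2_add(a: int, b: int) -> int:
        return (a + b) % 2          # in particular, 1 + 1 = 0

    def gf2_mul(a: int, b: int) -> int:
        return (a * b) % 2

    # A parity bit is the GF(2) sum of the message bits; appending it makes the
    # total sum zero, so any single flipped bit is detected.
    def parity(bits):
        total = 0
        for b in bits:
            total = gf2_add(total, b)
        return total

    message = [1, 0, 1, 1]
    codeword = message + [parity(message)]
    print(parity(codeword))          # 0 for an undamaged codeword
    codeword[2] ^= 1                 # flip one bit
    print(parity(codeword))          # 1 signals the error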
Thus my second explanation is that we select the mathematics to fit the situation, and it is simply not true that the same mathematics works every place.
3. Science in fact answers comparatively few problems. We have the illusion that science has answers to most of our questions, but this is not so. From the earliest of times man must have pondered over what Truth, Beauty, and Justice are. But so far as I can see science has contributed nothing to the answers, nor does it seem to me that science will do much in the near future. So long as we use a mathematics in which the whole is the sum of the parts we are not likely to have mathematics as a major tool in examining these famous three questions.
Indeed, to generalize, almost all of our experiences in this world do not fall under the domain of science or mathematics. Furthermore, we know (at least we think we do) that from Gödel's theorem there are definite limits to what pure logical manipulation of symbols can do; there are limits to the domain of mathematics. It has been an act of faith on the part of scientists that the world can be explained in the simple terms that mathematics handles. When you consider how much science has not answered then you see that our successes are not so impressive as they might otherwise appear.
4. The evolution of man provided the model. I have already touched on the matter of the evolution of man. I remarked that in the earliest forms of life there must have been the seeds of our current ability to create and follow long chains of close reasoning. Some people [11] have further claimed that Darwinian evolution would naturally select for survival those competing forms of life which had the best models of reality in their minds-"best" meaning best for surviving and propagating. There is no doubt that there is some truth in this. We find, for example, that we can cope with thinking about the world when it is of comparable size to ourselves and our raw unaided senses, but that when we go to the very small or the very large then our thinking has great trouble. We seem not to be able to think appropriately about the extremes beyond normal size.
Just as there are odors that dogs can smell and we cannot, as well as sounds that dogs can hear and we cannot, so too there are wavelengths of light we cannot see and flavors we cannot taste. Why then, given our brains wired the way they are, does the remark "Perhaps there are thoughts we cannot think," surprise you? Evolution, so far, may possibly have blocked us from being able to think in some directions; there could be unthinkable thoughts.
If you recall that modern science is only about 400 years old, and that there have been from 3 to 5 generations per century, then there have been at most 20 generations since Newton and Galileo. If you pick 4,000 years for the age of science, generally, then you get an upper bound of 200 generations. Considering the effects of evolution we are looking for via selection of small chance variations, it does not seem to me that evolution can explain more than a small part of the unreasonable effectiveness of mathematics.
Conclusion. From all of this I am forced to conclude both that mathematics is unreasonably effective and that all of the explanations I have given when added together simply are not enough to explain what I set out to account for. I think that we-meaning you, mainly-must continue to try to explain why the logical side of science-meaning mathematics, mainly-is the proper tool for exploring the universe as we perceive it at present. I suspect that my explanations are hardly as good as those of the early Greeks, who said for the material side of the question that the nature of the universe is earth, fire, water, and air. The logical side of the nature of the universe requires further exploration.
1. E. P. Wigner, The unreasonable effectiveness of mathematics in the natural sciences, Comm. Pure Appl. Math., 13 (Feb. 1960).
2. R. Courant and H. Robbins, What Is Mathematics? Oxford University Press, 1941.
3. L. Kronecker, Item 1634, in On Mathematics and Mathematicians, by R. E. Moritz.
4. Euclid, Euclid's Elements, trans. T. L. Heath, Dover Publications, New York, 1956.
5. G. Cardano, The Great Art or Rules of Algebra, transl. by T. R. Witmer, MIT Press, 1968, pp. 219-220.
6. I. Lakatos, Proofs and Refutations, Cambridge University Press, 1976, p. 33.
7. G. Polya, Mathematical Methods in Science, MAA, 1963, pp. 83-85.
8. R. W. Hamming, Digital Filters, Prentice-Hall, Englewood Cliffs, NJ, 1977.
9. G. Holton, Thematic Origins of Scientific Thought: Kepler to Einstein, Harvard University Press, 1973.
10. R. W. Hamming, Coding and Information Theory, Prentice-Hall, Englewood Cliffs, NJ, 1980.
11. H. Mohr, Structure and Significance of Science, Springer-Verlag, 1977.
On 2001 May 24 Larry Frazier gave me permission to post this.
Tom Schneider
2003 April 10. I noticed that one paragraph ends incorrectly with "idea of the next number," so I did a search (see below) to determine whether a corrected copy exists somewhere. The Dartmouth version has the same error!
* Google search for The Unreasonable Effectiveness of Mathematics
* The Unreasonable Effectiveness of Mathematics by R. W. Hamming (at Dartmouth). Another copy of Hamming's article.
* The Unreasonable Effectiveness of Mathematics in the Natural Sciences by Eugene Wigner (at Dartmouth). This was cited in Hamming's article; it's also well worth reading.
Schneider Lab
origin: 1998 or 1999 sometime?
updated: 2001 May 24
updated: 2003 Apr 10
THE UNREASONABLE EFFECTIVENESS OF MATHEMATICS IN THE NATURAL SCIENCES
This HTML page was prepared based on the original PDF file found here: http://www.physik.uni-wuerzburg.de/fileadmin/tp3/QM/wigner.pdf
Reprinted from Communications in Pure and Applied Mathematics, Vol. 13, No. 1 (February 1960).
New York: John Wiley & Sons, Inc.
Copyright © 1960 by John Wiley & Sons, Inc.
THE UNREASONABLE EFFECTIVENESS OF MATHEMATICS IN THE NATURAL SCIENCES
by Eugene Wigner
Mathematics, rightly viewed, possesses not only truth, but supreme beauty - a beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature, without the gorgeous trappings of painting or music, yet sublimely pure, and capable of a stern perfection such as only the greatest art can show. The true spirit of delight, the exaltation, the sense of being more than Man, which is the touchstone of the highest excellence, is to be found in mathematics as surely as in poetry.
— BERTRAND RUSSELL, Study of Mathematics
There is a story about two friends, who were classmates in high school, talking about their jobs. One of them became a statistician and was working on population trends. He showed a reprint to his former classmate. The reprint started, as usual, with the Gaussian distribution and the statistician explained to his former classmate the meaning of the symbols for the actual population, for the average population, and so on. His classmate was a bit incredulous and was not quite sure whether the statistician was pulling his leg. "How can you know that?" was his query. "And what is this symbol here?" "Oh," said the statistician, "this is pi." "What is that?" "The ratio of the circumference of the circle to its diameter." "Well, now you are pushing your joke too far," said the classmate, "surely the population has nothing to do with the circumference of the circle."
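An editorial aside, not part of Wigner's text: the formula the statistician was pointing to is the Gaussian (normal) density, in which pi enters only through the normalizing constant; mu and sigma below are the standard symbols for the mean and the standard deviation.

\[
  f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/2\sigma^2}.
\]

The factor \(1/(\sigma\sqrt{2\pi})\) is what makes the total probability equal to one, and that is how the ratio of a circle's circumference to its diameter finds its way into population statistics.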
Naturally, we are inclined to smile about the simplicity of the classmate's approach. Nevertheless, when I heard this story, I had to admit to an eerie feeling because, surely, the reaction of the classmate betrayed only plain common sense. I was even more confused when, not many days later, someone came to me and expressed his bewilderment [ The remark to be quoted was made by F. Werner when he was a student in Princeton.] with the fact that we make a rather narrow selection when choosing the data on which we test our theories. "How do we know that, if we made a theory which focuses its attention on phenomena we disregard and disregards some of the phenomena now commanding our attention, that we could not build another theory which has little in common with the present one but which, nevertheless, explains just as many phenomena as the present theory?" It has to be admitted that we have no definite evidence that there is no such theory.
The preceding two stories illustrate the two main points which are the subjects of the present discourse. The first point is that mathematical concepts turn up in entirely unexpected connections. Moreover, they often permit an unexpectedly close and accurate description of the phenomena in these connections. Secondly, just because of this circumstance, and because we do not understand the reasons of their usefulness, we cannot know whether a theory formulated in terms of mathematical concepts is uniquely appropriate. We are in a position similar to that of a man who was provided with a bunch of keys and who, having to open several doors in succession, always hit on the right key on the first or second trial. He became skeptical concerning the uniqueness of the coordination between keys and doors.
Most of what will be said on these questions will not be new; it has probably occurred to most scientists in one form or another. My principal aim is to illuminate it from several sides. The first point is that the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and that there is no rational explanation for it. Second, it is just this uncanny usefulness of mathematical concepts that raises the question of the uniqueness of our physical theories. In order to establish the first point, that mathematics plays an unreasonably important role in physics, it will be useful to say a few words on the question, "What is mathematics?", then, "What is physics?", then, how mathematics enters physical theories, and last, why the success of mathematics in its role in physics appears so baffling. Much less will be said on the second point: the uniqueness of the theories of physics. A proper answer to this question would require elaborate experimental and theoretical work which has not been undertaken to date.
WHAT IS MATHEMATICS?
Somebody once said that philosophy is the misuse of a terminology which was invented just for this purpose.[This statement is quoted here from W. Dubislav's Die Philosophie der Mathematik in der Gegenwart (Berlin: Junker and Dunnhaupt Verlag, 1932), p. 1.] In the same vein, I would say that mathematics is the science of skillful operations with concepts and rules invented just for this purpose. The principal emphasis is on the invention of concepts. Mathematics would soon run out of interesting theorems if these had to be formulated in terms of the concepts which already appear in the axioms. Furthermore, whereas it is unquestionably true that the concepts of elementary mathematics and particularly elementary geometry were formulated to describe entities which are directly suggested by the actual world, the same does not seem to be true of the more advanced concepts, in particular the concepts which play such an important role in physics. Thus, the rules for operations with pairs of numbers are obviously designed to give the same results as the operations with fractions which we first learned without reference to "pairs of numbers." The rules for the operations with sequences, that is, with irrational numbers, still belong to the category of rules which were determined so as to reproduce rules for the operations with quantities which were already known to us. Most more advanced mathematical concepts, such as complex numbers, algebras, linear operators, Borel sets - and this list could be continued almost indefinitely - were so devised that they are apt subjects on which the mathematician can demonstrate his ingenuity and sense of formal beauty. In fact, the definition of these concepts, with a realization that interesting and ingenious considerations could be applied to them, is the first demonstration of the ingeniousness of the mathematician who defines them. The depth of thought which goes into the formulation of the mathematical concepts is later justified by the skill with which these concepts are used. The great mathematician fully, almost ruthlessly, exploits the domain of permissible reasoning and skirts the impermissible. That his recklessness does not lead him into a morass of contradictions is a miracle in itself: certainly it is hard to believe that our reasoning power was brought, by Darwin's process of natural selection, to the perfection which it seems to possess. However, this is not our present subject. The principal point which will have to be recalled later is that the mathematician could formulate only a handful of interesting theorems without defining concepts beyond those contained in the axioms and that the concepts outside those contained in the axioms are defined with a view of permitting ingenious logical operations which appeal to our aesthetic sense both as operations and also in their results of great generality and simplicity. [ M. Polanyi, in his Personal Knowledge (Chicago: University of Chicago Press, 1958), says: "All these difficulties are but consequences of our refusal to see that mathematics cannot be defined without acknowledging its most obvious feature: namely, that it is interesting" (p. 188)].
The complex numbers provide a particularly striking example for the foregoing. Certainly, nothing in our experience suggests the introduction of these quantities. Indeed, if a mathematician is asked to justify his interest in complex numbers, he will point, with some indignation, to the many beautiful theorems in the theory of equations, of power series, and of analytic functions in general, which owe their origin to the introduction of complex numbers. The mathematician is not willing to give up his interest in these most beautiful accomplishments of his genius. [ The reader may be interested, in this connection, in Hilbert's rather testy remarks about intuitionism which "seeks to break up and to disfigure mathematics," Abh. Math. Sem., Univ. Hamburg, 157 (1922), or Gesammelte Werke (Berlin: Springer, 1935), p. 188.]
WHAT IS PHYSICS?
The physicist is interested in discovering the laws of inanimate nature. In order to understand this statement, it is necessary to analyze the concept, "law of nature."
The world around us is of baffling complexity and the most obvious fact about it is that we cannot predict the future. Although the joke attributes only to the optimist the view that the future is uncertain, the optimist is right in this case: the future is unpredictable. It is, as Schrodinger has remarked, a miracle that in spite of the baffling complexity of the world, certain regularities in the events could be discovered. One such regularity, discovered by Galileo, is that two rocks, dropped at the same time from the same height, reach the ground at the same time. The laws of nature are concerned with such regularities. Galileo's regularity is a prototype of a large class of regularities. It is a surprising regularity for three reasons.
The first reason that it is surprising is that it is true not only in Pisa, and in Galileo's time, it is true everywhere on the Earth, was always true, and will always be true. This property of the regularity is a recognized invariance property and, as I had occasion to point out some time ago, without invariance principles similar to those implied in the preceding generalization of Galileo's observation, physics would not be possible. The second surprising feature is that the regularity which we are discussing is independent of so many conditions which could have an effect on it. It is valid no matter whether it rains or not, whether the experiment is carried out in a room or from the Leaning Tower, no matter whether the person who drops the rocks is a man or a woman. It is valid even if the two rocks are dropped, simultaneously and from the same height, by two different people. There are, obviously, innumerable other conditions which are all immaterial from the point of view of the validity of Galileo's regularity. The irrelevancy of so many circumstances which could play a role in the phenomenon observed has also been called an invariance. However, this invariance is of a different character from the preceding one since it cannot be formulated as a general principle. The exploration of the conditions which do, and which do not, influence a phenomenon is part of the early experimental exploration of a field. It is the skill and ingenuity of the experimenter which show him phenomena which depend on a relatively narrow set of relatively easily realizable and reproducible conditions. [ see, in this connection, the graphic essay of M. Deutsch, Daedalus 87, 86 (1958). A. Shimony has called my attention to a similar passage in C. S. Peirce's Essays in the Philosophy of Science (New York: The Liberal Arts Press, 1957), p. 237.] In the present case, Galileo's restriction of his observations to relatively heavy bodies was the most important step in this regard. Again, it is true that if there were no phenomena which are independent of all but a manageably small set of conditions, physics would be impossible.
The preceding two points, though highly significant from the point of view of the philosopher, are not the ones which surprised Galileo most, nor do they contain a specific law of nature. The law of nature is contained in the statement that the length of time which it takes for a heavy object to fall from a given height is independent of the size, material, and shape of the body which drops. In the framework of Newton's second "law," this amounts to the statement that the gravitational force which acts on the falling body is proportional to its mass but independent of the size, material, and shape of the body which falls.
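An editorial aside, not part of Wigner's text: in symbols, the statement is that the gravitational force is proportional to the mass it acts on, so the mass cancels from the acceleration; here h denotes the height of the drop and g the (locally constant) gravitational acceleration.

\[
  F = mg, \qquad ma = F \;\Rightarrow\; a = g, \qquad t_{\mathrm{fall}} = \sqrt{\frac{2h}{g}},
\]

so that, with air resistance neglected, the time of fall depends only on h and g and not on the size, material, or shape of the falling body.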
The preceding discussion is intended to remind us, first, that it is not at all natural that "laws of nature" exist, much less that man is able to discover them. [ E. Schrodinger, in his What Is Life? (Cambridge: Cambridge University Press, 1945), p. 31, says that this second miracle may well be beyond human understanding.] The present writer had occasion, some time ago, to call attention to the succession of layers of "laws of nature," each layer containing more general and more encompassing laws than the previous one and its discovery constituting a deeper penetration into the structure of the universe than the layers recognized before. However, the point which is most significant in the present context is that all these laws of nature contain, in even their remotest consequences, only a small part of our knowledge of the inanimate world. All the laws of nature are conditional statements which permit a prediction of some future events on the basis of the knowledge of the present, except that some aspects of the present state of the world, in practice the overwhelming majority of the determinants of the present state of the world, are irrelevant from the point of view of the prediction. The irrelevancy is meant in the sense of the second point in the discussion of Galileo's theorem. [ The writer feels sure that it is unnecessary to mention that Galileo's theorem, as given in the text, does not exhaust the content of Galileo's observations in connection with the laws of freely falling bodies.]
As regards the present state of the world, such as the existence of the earth on which we live and on which Galileo's experiments were performed, the existence of the sun and of all our surroundings, the laws of nature are entirely silent. It is in consonance with this, first, that the laws of nature can be used to predict future events only under exceptional circumstances - when all the relevant determinants of the present state of the world are known. It is also in consonance with this that the construction of machines, the functioning of which he can foresee, constitutes the most spectacular accomplishment of the physicist. In these machines, the physicist creates a situation in which all the relevant coordinates are known so that the behavior of the machine can be predicted. Radars and nuclear reactors are examples of such machines.
The principal purpose of the preceding discussion is to point out that the laws of nature are all conditional statements and they relate only to a very small part of our knowledge of the world. Thus, classical mechanics, which is the best known prototype of a physical theory, gives the second derivatives of the positional coordinates of all bodies, on the basis of the knowledge of the positions, etc., of these bodies. It gives no information on the existence, the present positions, or velocities of these bodies. It should be mentioned, for the sake of accuracy, that we discovered about thirty years ago that even the conditional statements cannot be entirely precise: that the conditional statements are probability laws which enable us only to place intelligent bets on future properties of the inanimate world, based on the knowledge of the present state. They do not allow us to make categorical statements, not even categorical statements conditional on the present state of the world. The probabilistic nature of the "laws of nature" manifests itself in the case of machines also, and can be verified, at least in the case of nuclear reactors, if one runs them at very low power. However, the additional limitation of the scope of the laws of nature which follows from their probabilistic nature will play no role in the rest of the discussion.
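An editorial aside, not part of Wigner's text: the conditional character of classical mechanics can be displayed explicitly. The theory supplies the second derivatives (the accelerations); the present positions and velocities must be supplied from outside as initial conditions.

\[
  m_i\,\frac{d^2\mathbf{x}_i}{dt^2} = \mathbf{F}_i(\mathbf{x}_1,\ldots,\mathbf{x}_N), \qquad i = 1,\ldots,N.
\]

The right-hand side says nothing about why the bodies exist or where they happen to be; it only tells us how a given configuration will change once those facts are known.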
THE ROLE OF MATHEMATICS IN PHYSICAL THEORIES
Having refreshed our minds as to the essence of mathematics and physics, we should be in a better position to review the role of mathematics in physical theories. Naturally, we do use mathematics in everyday physics to evaluate the results of the laws of nature, to apply the conditional statements to the particular conditions which happen to prevail or happen to interest us. In order that this be possible, the laws of nature must already be formulated in mathematical language. However, the role of evaluating the consequences of already established theories is not the most important role of mathematics in physics. Mathematics, or, rather, applied mathematics, is not so much the master of the situation in this function: it is merely serving as a tool.
Mathematics does play, however, also a more sovereign role in physics. This was already implied in the statement, made when discussing the role of applied mathematics, that the laws of nature must have been formulated in the language of mathematics to be an object for the use of applied mathematics. The statement that the laws of nature are written in the language of mathematics was properly made three hundred years ago; [ It is attributed to Galileo.] it is now more true than ever before. In order to show the importance which mathematical concepts possess in the formulation of the laws of physics, let us recall, as an example, the axioms of quantum mechanics as formulated, explicitly, by the great physicist, Dirac. There are two basic concepts in quantum mechanics: states and observables. The states are vectors in Hilbert space, the observables self-adjoint operators on these vectors. The possible values of the observations are the characteristic values of the operators - but we had better stop here lest we engage in a listing of the mathematical concepts developed in the theory of linear operators.
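An editorial aside, not part of Wigner's text: the axioms just summarized can be compressed into one line of standard notation.

\[
  \psi \in \mathcal{H}, \qquad \hat{A} = \hat{A}^{\dagger}, \qquad \hat{A}\,\psi_n = a_n\,\psi_n,
\]

that is, a state \(\psi\) is a vector in the Hilbert space \(\mathcal{H}\), an observable \(\hat{A}\) is a self-adjoint operator on that space, and the possible results of measuring \(\hat{A}\) are its characteristic values \(a_n\).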
It is true, of course, that physics chooses certain mathematical concepts for the formulation of the laws of nature, and surely only a fraction of all mathematical concepts is used in physics. It is true also that the concepts which were chosen were not selected arbitrarily from a listing of mathematical terms but were developed, in many if not most cases, independently by the physicist and recognized then as having been conceived before by the mathematician. It is not true, however, as is so often stated, that this had to happen because mathematics uses the simplest possible concepts and these were bound to occur in any formalism. As we saw before, the concepts of mathematics are not chosen for their conceptual simplicity - even sequences of pairs of numbers are far from being the simplest concepts - but for their amenability to clever manipulations and to striking, brilliant arguments. Let us not forget that the Hilbert space of quantum mechanics is the complex Hilbert space, with a Hermitean scalar product. Surely to the unpreoccupied mind, complex numbers are far from natural or simple and they cannot be suggested by physical observations. Furthermore, the use of complex numbers is in this case not a calculational trick of applied mathematics but comes close to being a necessity in the formulation of the laws of quantum mechanics. Finally, it now begins to appear that not only complex numbers but so-called analytic functions are destined to play a decisive role in the formulation of quantum theory. I am referring to the rapidly developing theory of dispersion relations.
It is difficult to avoid the impression that a miracle confronts us here, quite comparable in its striking nature to the miracle that the human mind can string a thousand arguments together without getting itself into contradictions, or to the two miracles of the existence of laws of nature and of the human mind's capacity to divine them. The observation which comes closest to an explanation for the mathematical concepts' cropping up in physics which I know is Einstein's statement that the only physical theories which we are willing to accept are the beautiful ones. It stands to argue that the concepts of mathematics, which invite the exercise of so much wit, have the quality of beauty. However, Einstein's observation can at best explain properties of theories which we are willing to believe and has no reference to the intrinsic accuracy of the theory. We shall, therefore, turn to this latter question.
IS THE SUCCESS OF PHYSICAL THEORIES TRULY SURPRISING?
A possible explanation of the physicist's use of mathematics to formulate his laws of nature is that he is a somewhat irresponsible person. As a result, when he finds a connection between two quantities which resembles a connection well-known from mathematics, he will jump at the conclusion that the connection is that discussed in mathematics simply because he does not know of any other similar connection. It is not the intention of the present discussion to refute the charge that the physicist is a somewhat irresponsible person. Perhaps he is. However, it is important to point out that the mathematical formulation of the physicist's often crude experience leads in an uncanny number of cases to an amazingly accurate description of a large class of phenomena. This shows that the mathematical language has more to commend it than being the only language which we can speak; it shows that it is, in a very real sense, the correct language. Let us consider a few examples.
The first example is the oft-quoted one of planetary motion. The laws of falling bodies became rather well established as a result of experiments carried out principally in Italy. These experiments could not be very accurate in the sense in which we understand accuracy today partly because of the effect of air resistance and partly because of the impossibility, at that time, to measure short time intervals. Nevertheless, it is not surprising that, as a result of their studies, the Italian natural scientists acquired a familiarity with the ways in which objects travel through the atmosphere. It was Newton who then brought the law of freely falling objects into relation with the motion of the moon, noted that the parabola of the thrown rock's path on the earth and the circle of the moon's path in the sky are particular cases of the same mathematical object of an ellipse, and postulated the universal law of gravitation on the basis of a single, and at that time very approximate, numerical coincidence. Philosophically, the law of gravitation as formulated by Newton was repugnant to his time and to himself. Empirically, it was based on very scanty observations. The mathematical language in which it was formulated contained the concept of a second derivative and those of us who have tried to draw an osculating circle to a curve know that the second derivative is not a very immediate concept. The law of gravity which Newton reluctantly established and which he could verify with an accuracy of about 4% has proved to be accurate to less than a ten thousandth of a per cent and became so closely associated with the idea of absolute accuracy that only recently did physicists become again bold enough to inquire into the limitations of its accuracy. [ see, for instance, R. H. Dicke, Am. Sci., 25 (1959).] Certainly, the example of Newton's law, quoted over and over again, must be mentioned first as a monumental example of a law, formulated in terms which appear simple to the mathematician, which has proved accurate beyond all reasonable expectations. Let us just recapitulate our thesis on this example: first, the law, particularly since a second derivative appears in it, is simple only to the mathematician, not to common sense or to non-mathematically-minded freshmen; second, it is a conditional law of very limited scope. It explains nothing about the earth which attracts Galileo's rocks, or about the circular form of the moon's orbit, or about the planets of the sun. The explanation of these initial conditions is left to the geologist and the astronomer, and they have a hard time with them.
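An editorial aside, not part of Wigner's text: written out, Newton's law displays the second derivative that Wigner calls "not a very immediate concept"; G is the gravitational constant, M and m the two masses, and r their separation.

\[
  m\,\frac{d^2\mathbf{r}}{dt^2} = -\,\frac{G M m}{r^2}\,\hat{\mathbf{r}}.
\]

Near the earth's surface the right-hand side is nearly constant, recovering Galileo's regularity of uniform acceleration, while the same equation applied to the moon yields the (nearly circular) ellipse of its orbit.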
The second example is that of ordinary, elementary quantum mechanics. This originated when Max Born noticed that some rules of computation, given by Heisenberg, were formally identical with the rules of computation with matrices, established a long time before by mathematicians. Born, Jordan, and Heisenberg then proposed to replace by matrices the position and momentum variables of the equations of classical mechanics. They applied the rules of matrix mechanics to a few highly idealized problems and the results were quite satisfactory. However, there was, at that time, no rational evidence that their matrix mechanics would prove correct under more realistic conditions. Indeed, they say "if the mechanics as here proposed should already be correct in its essential traits." As a matter of fact, the first application of their mechanics to a realistic problem, that of the hydrogen atom, was given several months later, by Pauli. This application gave results in agreement with experience. This was satisfactory but still understandable because Heisenberg's rules of calculation were abstracted from problems which included the old theory of the hydrogen atom. The miracle occurred only when matrix mechanics, or a mathematically equivalent theory, was applied to problems for which Heisenberg's calculating rules were meaningless. Heisenberg's rules presupposed that the classical equations of motion had solutions with certain periodicity properties; and the equations of motion of the two electrons of the helium atom, or of the even greater number of electrons of heavier atoms, simply do not have these properties, so that Heisenberg's rules cannot be applied to these cases. Nevertheless, the calculation of the lowest energy level of helium, as carried out a few months ago by Kinoshita at Cornell and by Bazley at the Bureau of Standards, agrees with the experimental data within the accuracy of the observations, which is one part in ten million. Surely in this case we "got something out" of the equations that we did not put in.
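An editorial aside, not part of Wigner's text: the formal identity Born noticed is that Heisenberg's arrays of transition amplitudes combine by the rule of matrix multiplication, and the resulting kinematics is usually summarized by the commutation relation

\[
  \hat{q}\,\hat{p} - \hat{p}\,\hat{q} = i\hbar\,\mathbf{1},
\]

where \(\hat{q}\) and \(\hat{p}\) are the position and momentum matrices and \(\mathbf{1}\) is the unit matrix. Because matrix products depend on the order of the factors, this relation has no counterpart among ordinary numbers.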
The same is true of the qualitative characteristics of the "complex spectra," that is, the spectra of heavier atoms. I wish to recall a conversation with Jordan, who told me, when the qualitative features of the spectra were derived, that a disagreement of the rules derived from quantum mechanical theory and the rules established by empirical research would have provided the last opportunity to make a change in the framework of matrix mechanics. In other words, Jordan felt that we would have been, at least temporarily, helpless had an unexpected disagreement occurred in the theory of the helium atom. This was, at that time, developed by Kellner and by Hilleraas. The mathematical formalism was too dear and unchangeable so that, had the miracle of helium which was mentioned before not occurred, a true crisis would have arisen. Surely, physics would have overcome that crisis in one way or another. It is true, on the other hand, that physics as we know it today would not be possible without a constant recurrence of miracles similar to the one of the helium atom, which is perhaps the most striking miracle that has occurred in the course of the development of elementary quantum mechanics, but by far not the only one. In fact, the number of analogous miracles is limited, in our view, only by our willingness to go after more similar ones. Quantum mechanics had, nevertheless, many almost equally striking successes which gave us the firm conviction that it is, what we call, correct.
The last example is that of quantum electrodynamics, or the theory of the Lamb shift. Whereas Newton's theory of gravitation still had obvious connections with experience, experience entered the formulation of matrix mechanics only in the refined or sublimated form of Heisenberg's prescriptions. The quantum theory of the Lamb shift, as conceived by Bethe and established by Schwinger, is a purely mathematical theory and the only direct contribution of experiment was to show the existence of a measurable effect. The agreement with calculation is better than one part in a thousand.
The preceding three examples, which could be multiplied almost indefinitely, should illustrate the appropriateness and accuracy of the mathematical formulation of the laws of nature in terms of concepts chosen for their manipulability, the "laws of nature" being of almost fantastic accuracy but of strictly limited scope. I propose to refer to the observation which these examples illustrate as the empirical law of epistemology. Together with the laws of invariance of physical theories, it is an indispensable foundation of these theories. Without the laws of invariance the physical theories could have been given no foundation of fact; if the empirical law of epistemology were not correct, we would lack the encouragement and reassurance which are emotional necessities, without which the "laws of nature" could not have been successfully explored. Dr. R. G. Sachs, with whom I discussed the empirical law of epistemology, called it an article of faith of the theoretical physicist, and it is surely that. However, what he called our article of faith can be well supported by actual examples - many examples in addition to the three which have been mentioned.
THE UNIQUENESS OF THE THEORIES OF PHYSICS
The empirical nature of the preceding observation seems to me to be self-evident. It surely is not a "necessity of thought" and it should not be necessary, in order to prove this, to point to the fact that it applies only to a very small part of our knowledge of the inanimate world. It is absurd to believe that the existence of mathematically simple expressions for the second derivative of the position is self-evident, when no similar expressions for the position itself or for the velocity exist. It is therefore surprising how readily the wonderful gift contained in the empirical law of epistemology was taken for granted. The ability of the human mind to form a string of 1000 conclusions and still remain "right," which was mentioned before, is a similar gift.
Every empirical law has the disquieting quality that one does not know its limitations. We have seen that there are regularities in the events in the world around us which can be formulated in terms of mathematical concepts with an uncanny accuracy. There are, on the other hand, aspects of the world concerning which we do not believe in the existence of any accurate regularities. We call these initial conditions. The question which presents itself is whether the different regularities, that is, the various laws of nature which will be discovered, will fuse into a single consistent unit, or at least asymptotically approach such a fusion. Alternatively, it is possible that there always will be some laws of nature which have nothing in common with each other. At present, this is true, for instance, of the laws of heredity and of physics. It is even possible that some of the laws of nature will be in conflict with each other in their implications, but each convincing enough in its own domain so that we may not be willing to abandon any of them. We may resign ourselves to such a state of affairs or our interest in clearing up the conflict between the various theories may fade out. We may lose interest in the "ultimate truth," that is, in a picture which is a consistent fusion into a single unit of the little pictures, formed on the various aspects of nature.
It may be useful to illustrate the alternatives by an example. We now have, in physics, two theories of great power and interest: the theory of quantum phenomena and the theory of relativity. These two theories have their roots in mutually exclusive groups of phenomena. Relativity theory applies to macroscopic bodies, such as stars. The event of coincidence, that is, in ultimate analysis of collision, is the primitive event in the theory of relativity and defines a point in space-time, or at least would define a point if the colliding particles were infinitely small. Quantum theory has its roots in the microscopic world and, from its point of view, the event of coincidence, or of collision, even if it takes place between particles of no spatial extent, is not primitive and not at all sharply isolated in space-time. The two theories operate with different mathematical concepts - the four dimensional Riemann space and the infinite dimensional Hilbert space, respectively. So far, the two theories could not be united, that is, no mathematical formulation exists to which both of these theories are approximations. All physicists believe that a union of the two theories is inherently possible and that we shall find it. Nevertheless, it is possible also to imagine that no union of the two theories can be found. This example illustrates the two possibilities, of union and of conflict, mentioned before, both of which are conceivable.
In order to obtain an indication as to which alternative to expect ultimately, we can pretend to be a little more ignorant than we are and place ourselves at a lower level of knowledge than we actually possess. If we can find a fusion of our theories on this lower level of intelligence, we can confidently expect that we will find a fusion of our theories also at our real level of intelligence. On the other hand, if we would arrive at mutually contradictory theories at a somewhat lower level of knowledge, the possibility of the permanence of conflicting theories cannot be excluded for ourselves either. The level of knowledge and ingenuity is a continuous variable and it is unlikely that a relatively small variation of this continuous variable changes the attainable picture of the world from inconsistent to consistent. [ This passage was written after a great deal of hesitation. The writer is convinced that it is useful, in epistemological discussions, to abandon the idealization that the level of human intelligence has a singular position on an absolute scale. In some cases it may even be useful to consider the attainment which is possible at the level of the intelligence of some other species. However, the writer also realizes that his thinking along the lines indicated in the text was too brief and not subject to sufficient critical appraisal to be reliable.]
Considered from this point of view, the fact that some of the theories which we know to be false give such amazingly accurate results is an adverse factor. Had we somewhat less knowledge, the group of phenomena which these "false" theories explain would appear to us to be large enough to "prove" these theories. However, these theories are considered to be "false" by us just for the reason that they are, in ultimate analysis, incompatible with more encompassing pictures and, if sufficiently many such false theories are discovered, they are bound to prove also to be in conflict with each other. Similarly, it is possible that the theories, which we consider to be "proved" by a number of numerical agreements which appears to be large enough for us, are false because they are in conflict with a possible more encompassing theory which is beyond our means of discovery. If this were true, we would have to expect conflicts between our theories as soon as their number grows beyond a certain point and as soon as they cover a sufficiently large number of groups of phenomena. In contrast to the article of faith of the theoretical physicist mentioned before, this is the nightmare of the theorist.
Let us consider a few examples of "false" theories which give, in view of their falseness, alarmingly accurate descriptions of groups of phenomena. With some goodwill, one can dismiss some of the evidence which these examples provide. The success of Bohr's early and pioneering ideas on the atom was always a rather narrow one and the same applies to Ptolemy's epicycles. Our present vantage point gives an accurate description of all phenomena which these more primitive theories can describe. The same is not true any longer of the so-called free-electron theory, which gives a marvelously accurate picture of many, if not most, properties of metals, semiconductors, and insulators. In particular, it explains the fact, never properly understood on the basis of the "real theory," that insulators show a specific resistance to electricity which may be 10^26 times greater than that of metals. In fact, there is no experimental evidence to show that the resistance is not infinite under the conditions under which the free-electron theory would lead us to expect an infinite resistance. Nevertheless, we are convinced that the free-electron theory is a crude approximation which should be replaced, in the description of all phenomena concerning solids, by a more accurate picture.
If viewed from our real vantage point, the situation presented by the free-electron theory is irritating but is not likely to forebode any inconsistencies which are unsurmountable for us. The free-electron theory raises doubts as to how much we should trust numerical agreement between theory and experiment as evidence for the correctness of the theory. We are used to such doubts.
A much more difficult and confusing situation would arise if we could, some day, establish a theory of the phenomena of consciousness, or of biology, which would be as coherent and convincing as our present theories of the inanimate world. Mendel's laws of inheritance and the subsequent work on genes may well form the beginning of such a theory as far as biology is concerned. Furthermore, it is quite possible that an abstract argument can be found which shows that there is a conflict between such a theory and the accepted principles of physics. The argument could be of such abstract nature that it might not be possible to resolve the conflict, in favor of one or of the other theory, by an experiment. Such a situation would put a heavy strain on our faith in our theories and on our belief in the reality of the concepts which we form. It would give us a deep sense of frustration in our search for what I called "the ultimate truth." The reason that such a situation is conceivable is that, fundamentally, we do not know why our theories work so well. Hence, their accuracy may not prove their truth and consistency. Indeed, it is this writer's belief that something rather akin to the situation which was described above exists if the present laws of heredity and of physics are confronted.
Let me end on a more cheerful note. The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. We should be grateful for it and hope that it will remain valid in future research and that it will extend, for better or for worse, to our pleasure, even though perhaps also to our bafflement, to wide branches of learning.
Reprinted from Communications in Pure and Applied Mathematics, Vol. 13, No. I (February 1960).
New York: John Wiley & Sons, Inc.
Copyright © 1960 by John Wiley & Sons, Inc.
Page 1
THE UNREASONABLE EFFECTIVENESS OF MATHEMATICS IN THE NATURAL SCIENCES
by Eugene Wigner
Mathematics, rightly viewed, possesses not only truth, but supreme beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature, without the gorgeous trappings of painting or music, yet sublimely pure, and capable of a stern perfection such as only the greatest art can show. The true spirit of delight, the exaltation, the sense of being more than Man, which is the touchstone of the highest excellence, is to be found in mathematics as surely as in poetry.
— BERTRAND RUSSELL, Study of Mathematics
There is a story about two friends, who were classmates in high school, talking about their jobs. One of them became a statistician and was working on population trends. He showed a reprint to his former classmate. The reprint started, as usual, with the Gaussian distribution and the statistician explained to his former classmate the meaning of the symbols for the actual population, for the average population, and so on. His classmate was a bit incredulous and was not quite sure whether the statistician was pulling his leg. "How can you know that?" was his query. "And what is this symbol here?" "Oh," said the statistician, "this is pi." "What is that?" "The ratio of the circumference of the circle to its diameter." "Well, now you are pushing your joke too far," said the classmate, "surely the population has nothing to do with the circumference of the circle."
Naturally, we are inclined to smile about the simplicity of the classmate's approach. Nevertheless, when I heard this story, I had to admit to an eerie feeling because, surely, the reaction of the classmate betrayed only plain common sense. I was even more confused when, not many days later, someone came to me and expressed his bewilderment [ The remark to be quoted was made by F. Werner when he was a student in Princeton.] with the fact that we make a rather narrow selection when choosing the data on which we test our theories. "How do we know that, if we made a theory which focuses its attention on phenomena we disregard and disregards some of the phenomena now commanding our attention, that we could not build another theory which has little in common with the present one but which, nevertheless, explains just as many phenomena as the present theory?" It has to be admitted that we have no definite evidence that there is no such theory.
The preceding two stories illustrate the two main points which are the subjects of the present discourse. The first point is that mathematical concepts turn up in entirely unexpected connections. Moreover, they often permit an unexpectedly close and accurate description of the phenomena in these connections. Secondly, just because of this circumstance, and because we do not understand the reasons of their usefulness, we cannot know whether a theory formulated in terms of mathematical concepts is uniquely appropriate. We are in a position similar to that of a man who was provided with a bunch of keys and who, having to open several doors in succession, always hit on the right key on the first or second trial. He became skeptical concerning the uniqueness of the coordination between keys and doors.
Page 2 Most of what will be said on these questions will not be new; it has probably occurred to most scientists in one form or another. My principal aim is to illuminate it from several sides. The first point is that the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and that there is no rational explanation for it. Second, it is just this uncanny usefulness of mathematical concepts that raises the question of the uniqueness of our physical theories. In order to establish the first point, that mathematics plays an unreasonably important role in physics, it will be useful to say a few words on the question, "What is mathematics?", then, "What is physics?", then, how mathematics enters physical theories, and last, why the success of mathematics in its role in physics appears so baffling. Much less will be said on the second point: the uniqueness of the theories of physics. A proper answer to this question would require elaborate experimental and theoretical work which has not been undertaken to date.
WHAT IS MATHEMATICS?
Somebody once said that philosophy is the misuse of a terminology which was invented just for this purpose.[This statement is quoted here from W. Dubislav's Die Philosophie der Mathematik in der Gegenwart (Berlin: Junker and Dunnhaupt Verlag, 1932), p. 1.] In the same vein, I would say that mathematics is the science of skillful operations with concepts and rules invented just for this purpose. The principal emphasis is on the invention of concepts. Mathematics would soon run out of interesting theorems if these had to be formulated in terms of the concepts which already appear in the axioms. Furthermore, whereas it is unquestionably true that the concepts of elementary mathematics and particularly elementary geometry were formulated to describe entities which are directly suggested by the actual world, the same does not seem to be true of the more advanced concepts, in particular the concepts which play such an important role in physics. Thus, the rules for operations with pairs of numbers are obviously designed to give the same results as the operations with fractions which we first learned without reference to "pairs of numbers." The rules for the operations with sequences, that is, with irrational numbers, still belong to the category of rules which were determined so as to reproduce rules for the operations with quantities which were already known to us. Most more advanced mathematical concepts, such as complex numbers, algebras, linear operators, Borel sets - and this list could be continued almost indefinitely - were so devised that they are apt subjects on which the mathematician can demonstrate his ingenuity and sense of formal beauty. In fact, the definition of these concepts, with a realization that interesting and ingenious considerations could be applied to them, is the first demonstration of the ingeniousness of the mathematician who defines them. The depth of thought which goes into the formulation of the mathematical concepts is later justified by the skill with which these concepts are used. The great mathematician fully, almost ruthlessly, exploits the domain of permissible reasoning and skirts the impermissible. That his recklessness does not lead him into a morass of contradictions is a miracle in itself: certainly it is hard to believe that our reasoning power was brought, by Darwin's process of natural selection, to the perfection which it seems to possess. However, this is not our present subject. The principal point which will have to be recalled later is that the mathematician could formulate only a handful of interesting theorems without defining concepts beyond those contained in the axioms and that the concepts outside those contained in the axioms are defined with a view of permitting ingenious logical operations which appeal to our aesthetic sense both as operations and also in their results of great generality and simplicity. [ M. Polanyi, in his Personal Knowledge (Chicago: University of Chicago Press, 1958), says: "All these difficulties are but consequences of our refusal to see that mathematics cannot be defined without acknowledging its most obvious feature: namely, that it is interesting" (p. 188)].
The complex numbers provide a particularly striking example for the foregoing. Certainly, nothing in our experience suggests the introduction of these quantities. Indeed, if a mathematician is asked to justify his interest in complex numbers, he will point, with some indignation, to the many Page 3 beautiful theorems in the theory of equations, of power series, and of analytic functions in general, which owe their origin to the introduction of complex numbers. The mathematician is not willing to give up his interest in these most beautiful accomplishments of his genius. [ The reader may be interested, in this connection, in Hilbert's rather testy remarks about intuitionism which "seeks to break up and to disfigure mathematics," Abh. Math. Sem., Univ. Hamburg, 157 (1922), or Gesammelte Werke (Berlin: Springer, 1935), p. 188.]
WHAT IS PHYSICS?
The physicist is interested in discovering the laws of inanimate nature. In order to understand this statement, it is necessary to analyze the concept, "law of nature."
The world around us is of baffling complexity and the most obvious fact about it is that we cannot predict the future. Although the joke attributes only to the optimist the view that the future is uncertain, the optimist is right in this case: the future is unpredictable. It is, as Schrodinger has remarked, a miracle that in spite of the baffling complexity of the world, certain regularities in the events could be discovered. One such regularity, discovered by Galileo, is that two rocks, dropped at the same time from the same height, reach the ground at the same time. The laws of nature are concerned with such regularities. Galileo's regularity is a prototype of a large class of regularities. It is a surprising regularity for three reasons.
The first reason that it is surprising is that it is true not only in Pisa, and in Galileo's time, it is true everywhere on the Earth, was always true, and will always be true. This property of the regularity is a recognized invariance property and, as I had occasion to point out some time ago, without invariance principles similar to those implied in the preceding generalization of Galileo's observation, physics would not be possible. The second surprising feature is that the regularity which we are discussing is independent of so many conditions which could have an effect on it. It is valid no matter whether it rains or not, whether the experiment is carried out in a room or from the Leaning Tower, no matter whether the person who drops the rocks is a man or a woman. It is valid even if the two rocks are dropped, simultaneously and from the same height, by two different people. There are, obviously, innumerable other conditions which are all immaterial from the point of view of the validity of Galileo's regularity. The irrelevancy of so many circumstances which could play a role in the phenomenon observed has also been called an invariance. However, this invariance is of a different character from the preceding one since it cannot be formulated as a general principle. The exploration of the conditions which do, and which do not, influence a phenomenon is part of the early experimental exploration of a field. It is the skill and ingenuity of the experimenter which show him phenomena which depend on a relatively narrow set of relatively easily realizable and reproducible conditions. [ see, in this connection, the graphic essay of M. Deutsch, Daedalus 87, 86 (1958). A. Shimony has called my attention to a similar passage in C. S. Peirce's Essays in the Philosophy of Science (New York: The Liberal Arts Press, 1957), p. 237.] In the present case, Galileo's restriction of his observations to relatively heavy bodies was the most important step in this regard. Again, it is true that if there were no phenomena which are independent of all but a manageably small set of conditions, physics would be impossible.
The preceding two points, though highly significant from the point of view of the philosopher, are not the ones which surprised Galileo most, nor do they contain a specific law of nature. The law of nature is contained in the statement that the length of time which it takes for a heavy object to fall from a given height is independent of the size, material, and shape of the body which drops. In the framework of Newton's second "law," this amounts to the statement that the gravitational force which acts on the falling body is proportional to its mass but independent of the size, material, and shape of the body which falls.
Page 4
The preceding discussion is intended to remind us, first, that it is not at all natural that "laws of nature" exist, much less that man is able to discover them. [ E. Schrodinger, in his What Is Life? (Cambridge: Cambridge University Press, 1945), p. 31, says that this second miracle may well be beyond human understanding.] The present writer had occasion, some time ago, to call attention to the succession of layers of "laws of nature," each layer containing more general and more encompassing laws than the previous one and its discovery constituting a deeper penetration into the structure of the universe than the layers recognized before. However, the point which is most significant in the present context is that all these laws of nature contain, in even their remotest consequences, only a small part of our knowledge of the inanimate world. All the laws of nature are conditional statements which permit a prediction of some future events on the basis of the knowledge of the present, except that some aspects of the present state of the world, in practice the overwhelming majority of the determinants of the present state of the world, are irrelevant from the point of view of the prediction. The irrelevancy is meant in the sense of the second point in the discussion of Galileo's theorem. [ The writer feels sure that it is unnecessary to mention that Galileo's theorem, as given in the text, does not exhaust the content of Galileo's observations in connection with the laws of freely falling bodies.]
As regards the present state of the world, such as the existence of the earth on which we live and on which Galileo's experiments were performed, the existence of the sun and of all our surroundings, the laws of nature are entirely silent. It is in consonance with this, first, that the laws of nature can be used to predict future events only under exceptional circumstances - when all the relevant determinants of the present state of the world are known. It is also in consonance with this that the construction of machines, the functioning of which he can foresee, constitutes the most spectacular accomplishment of the physicist. In these machines, the physicist creates a situation in which all the relevant coordinates are known so that the behavior of the machine can be predicted. Radars and nuclear reactors are examples of such machines.
The principal purpose of the preceding discussion is to point out that the laws of nature are all conditional statements and they relate only to a very small part of our knowledge of the world. Thus, classical mechanics, which is the best known prototype of a physical theory, gives the second derivatives of the positional coordinates of all bodies, on the basis of the knowledge of the positions, etc., of these bodies. It gives no information on the existence, the present positions, or velocities of these bodies. It should be mentioned, for the sake of accuracy, that we discovered about thirty years ago that even the conditional statements cannot be entirely precise: that the conditional statements are probability laws which enable us only to place intelligent bets on future properties of the inanimate world, based on the knowledge of the present state. They do not allow us to make categorical statements, not even categorical statements conditional on the present state of the world. The probabilistic nature of the "laws of nature" manifests itself in the case of machines also, and can be verified, at least in the case of nuclear reactors, if one runs them at very low power. However, the additional limitation of the scope of the laws of nature which follows from their probabilistic nature will play no role in the rest of the discussion.
THE ROLE OF MATHEMATICS IN PHYSICAL THEORIES
Having refreshed our minds as to the essence of mathematics and physics, we should be in a better position to review the role of mathematics in physical theories. Naturally, we do use mathematics in everyday physics to evaluate the results of the laws of nature, to apply the conditional statements to the particular conditions which happen to prevail or happen to interest us. In order that this be possible, the laws of nature must already be formulated in mathematical language. However, the role of evaluating the consequences of already established Page 5 theories is not the most important role of mathematics in physics. Mathematics, or, rather, applied mathematics, is not so much the master of the situation in this function: it is merely serving as a tool.
Mathematics does play, however, also a more sovereign role in physics. This was already implied in the statement, made when discussing the role of applied mathematics, that the laws of nature must have been formulated in the language of mathematics to be an object for the use of applied mathematics. The statement that the laws of nature are written in the language of mathematics was properly made three hundred years ago; [ It is attributed to Galileo.] it is now more true than ever before. In order to show the importance which mathematical concepts possess in the formulation of the laws of physics, let us recall, as an example, the axioms of quantum mechanics as formulated, explicitly, by the great physicist, Dirac. There are two basic concepts in quantum mechanics: states and observables. The states are vectors in Hilbert space, the observables self-adjoint operators on these vectors. The possible values of the observations are the characteristic values of the operators - but we had better stop here lest we engage in a listing of the mathematical concepts developed in the theory of linear operators.
It is true, of course, that physics chooses certain mathematical concepts for the formulation of the laws of nature, and surely only a fraction of all mathematical concepts is used in physics. It is true also that the concepts which were chosen were not selected arbitrarily from a listing of mathematical terms but were developed, in many if not most cases, independently by the physicist and recognized then as having been conceived before by the mathematician. It is not true, however, as is so often stated, that this had to happen because mathematics uses the simplest possible concepts and these were bound to occur in any formalism. As we saw before, the concepts of mathematics are not chosen for their conceptual simplicity - even sequences of pairs of numbers are far from being the simplest concepts - but for their amenability to clever manipulations and to striking, brilliant arguments. Let us not forget that the Hilbert space of quantum mechanics is the complex Hilbert space, with a Hermitean scalar product. Surely to the unpreoccupied mind, complex numbers are far from natural or simple and they cannot be suggested by physical observations. Furthermore, the use of complex numbers is in this case not a calculational trick of applied mathematics but comes close to being a necessity in the formulation of the laws of quantum mechanics. Finally, it now begins to appear that not only complex numbers but so-called analytic functions are destined to play a decisive role in the formulation of quantum theory. I am referring to the rapidly developing theory of dispersion relations.
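The remark above about the Hermitean scalar product can be made concrete by recalling its defining properties (again a standard definition, quoted here only as a sketch): for vectors \(\varphi, \psi\) and complex numbers \(a, b\),

\[
\langle \varphi, \psi \rangle = \overline{\langle \psi, \varphi \rangle}, \qquad
\langle \varphi,\, a\psi_{1} + b\psi_{2} \rangle = a\,\langle \varphi, \psi_{1} \rangle + b\,\langle \varphi, \psi_{2} \rangle, \qquad
\langle \psi, \psi \rangle \geq 0,
\]

where the bar denotes complex conjugation. It is this conjugation, and the complex scalars it presupposes, that distinguishes the Hermitean product from an ordinary symmetric one.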
It is difficult to avoid the impression that a miracle confronts us here, quite comparable in its striking nature to the miracle that the human mind can string a thousand arguments together without getting itself into contradictions, or to the two miracles of the existence of laws of nature and of the human mind's capacity to divine them. The observation which comes closest to an explanation for the mathematical concepts' cropping up in physics which I know is Einstein's statement that the only physical theories which we are willing to accept are the beautiful ones. It stands to argue that the concepts of mathematics, which invite the exercise of so much wit, have the quality of beauty. However, Einstein's observation can at best explain properties of theories which we are willing to believe and has no reference to the intrinsic accuracy of the theory. We shall, therefore, turn to this latter question.
IS THE SUCCESS OF PHYSICAL THEORIES TRULY SURPRISING?
A possible explanation of the physicist's use of mathematics to formulate his laws of nature is that he is a somewhat irresponsible person. As a result, when he finds a connection between two quantities which resembles a connection well-known from mathematics, he will jump at the conclusion that the connection is that discussed in mathematics simply because he does not know of any other similar connection. It is not the intention of the present discussion to refute the charge that the physicist is a somewhat irresponsible person. Perhaps he is. However, it is important to point out that the mathematical formulation of the physicist's often crude experience leads in an uncanny number of cases to an amazingly accurate description of a large class of phenomena. This shows that the mathematical language has more to commend it than being the only language which we can speak; it shows that it is, in a very real sense, the correct language. Let us consider a few examples.
The first example is the oft-quoted one of planetary motion. The laws of falling bodies became rather well established as a result of experiments carried out principally in Italy. These experiments could not be very accurate in the sense in which we understand accuracy today, partly because of the effect of air resistance and partly because of the impossibility, at that time, of measuring short time intervals. Nevertheless, it is not surprising that, as a result of their studies, the Italian natural scientists acquired a familiarity with the ways in which objects travel through the atmosphere. It was Newton who then brought the law of freely falling objects into relation with the motion of the moon, noted that the parabola of the thrown rock's path on the earth and the circle of the moon's path in the sky are particular cases of the same mathematical object of an ellipse, and postulated the universal law of gravitation on the basis of a single, and at that time very approximate, numerical coincidence. Philosophically, the law of gravitation as formulated by Newton was repugnant to his time and to himself. Empirically, it was based on very scanty observations. The mathematical language in which it was formulated contained the concept of a second derivative, and those of us who have tried to draw an osculating circle to a curve know that the second derivative is not a very immediate concept. The law of gravity which Newton reluctantly established, and which he could verify with an accuracy of about 4%, has proved to be accurate to less than a ten thousandth of a per cent and became so closely associated with the idea of absolute accuracy that only recently did physicists become again bold enough to inquire into the limitations of its accuracy. [ see, for instance, R. H. Dicke, Am. Sci., 25 (1959).] Certainly, the example of Newton's law, quoted over and over again, must be mentioned first as a monumental example of a law, formulated in terms which appear simple to the mathematician, which has proved accurate beyond all reasonable expectations. Let us just recapitulate our thesis on this example: first, the law, particularly since a second derivative appears in it, is simple only to the mathematician, not to common sense or to non-mathematically-minded freshmen; second, it is a conditional law of very limited scope. It explains nothing about the earth which attracts Galileo's rocks, or about the circular form of the moon's orbit, or about the planets of the sun. The explanation of these initial conditions is left to the geologist and the astronomer, and they have a hard time with them.
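The "single numerical coincidence" mentioned above can be illustrated with modern round numbers (an editorial sketch, not Newton's own figures): if gravity falls off as the inverse square of the distance, the moon, at roughly sixty earth radii, should fall toward the earth with an acceleration of

\[
\frac{g}{60^{2}} \approx \frac{9.8\ \mathrm{m/s^{2}}}{3600} \approx 2.7 \times 10^{-3}\ \mathrm{m/s^{2}},
\]

which is indeed close to the centripetal acceleration actually required to keep the moon on its orbit,

\[
\frac{4\pi^{2} r}{T^{2}} \approx \frac{4\pi^{2} \times 3.84 \times 10^{8}\ \mathrm{m}}{\left(2.36 \times 10^{6}\ \mathrm{s}\right)^{2}} \approx 2.7 \times 10^{-3}\ \mathrm{m/s^{2}}.
\]

With the cruder measurements available at the time, the agreement was correspondingly rougher, of the order of the four per cent mentioned above.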
The second example is that of ordinary, elementary quantum mechanics. This originated when Max Born noticed that some rules of computation, given by Heisenberg, were formally identical with the rules of computation with matrices, established a long time before by mathematicians. Born, Jordan, and Heisenberg then proposed to replace by matrices the position and momentum variables of the equations of classical mechanics. They applied the rules of matrix mechanics to a few highly idealized problems and the results were quite satisfactory. However, there was, at that time, no rational evidence that their matrix mechanics would prove correct under more realistic conditions. Indeed, they say "if the mechanics as here proposed should already be correct in its essential traits." As a matter of fact, the first application of their mechanics to a realistic problem, that of the hydrogen atom, was given several months later, by Pauli. This application gave results in agreement with experience. This was satisfactory but still understandable because Heisenberg's rules of calculation were abstracted from problems which included the old theory of the hydrogen atom. The miracle occurred only when matrix mechanics, or a mathematically equivalent theory, was applied to problems for which Heisenberg's calculating rules were meaningless. Heisenberg's rules presupposed that the classical equations of motion had solutions with certain periodicity properties; and the equations of motion of the two electrons of the helium atom, or of the even greater number of electrons of heavier atoms, simply do not have these properties, so that Heisenberg's rules cannot be applied to these cases. Nevertheless, the calculation of the lowest energy level of helium, as carried out a few months ago by Kinoshita at Cornell and by Bazley at the Bureau of Standards, agrees with the experimental data within the accuracy of the observations, which is one part in ten million. Surely in this case we "got something out" of the equations that we did not put in.
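The replacement of classical variables by matrices described above can be indicated schematically (a standard modern statement of the rule, not the notation of the original papers): the position and momentum of classical mechanics become matrices \(\hat{x}\) and \(\hat{p}\) obeying

\[
\hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar\,\mathbb{1},
\]

and the classical equations of motion are then read as relations between these matrices rather than between numbers.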
The same is true of the qualitative characteristics of the "complex spectra," that is, the spectra of heavier atoms. I wish to recall a conversation with Jordan, who told me, when the qualitative features of the spectra were derived, that a disagreement of the rules derived from quantum mechanical theory and the rules established by empirical research would have provided the last opportunity to make a change in the framework of matrix mechanics. In other words, Jordan felt that we would have been, at least temporarily, helpless had an unexpected disagreement occurred in the theory of the helium atom. This was, at that time, developed by Kellner and by Hilleraas. The mathematical formalism was too dear and unchangeable so that, had the miracle of helium which was mentioned before not occurred, a true crisis would have arisen. Surely, physics would have overcome that crisis in one way or another. It is true, on the other hand, that physics as we know it today would not be possible without a constant recurrence of miracles similar to the one of the helium atom, which is perhaps the most striking miracle that has occurred in the course of the development of elementary quantum mechanics, but by far not the only one. In fact, the number of analogous miracles is limited, in our view, only by our willingness to go after more similar ones. Quantum mechanics had, nevertheless, many almost equally striking successes which gave us the firm conviction that it is, what we call, correct.
The last example is that of quantum electrodynamics, or the theory of the Lamb shift. Whereas Newton's theory of gravitation still had obvious connections with experience, experience entered the formulation of matrix mechanics only in the refined or sublimated form of Heisenberg's prescriptions. The quantum theory of the Lamb shift, as conceived by Bethe and established by Schwinger, is a purely mathematical theory and the only direct contribution of experiment was to show the existence of a measurable effect. The agreement with calculation is better than one part in a thousand.
The preceding three examples, which could be multiplied almost indefinitely, should illustrate the appropriateness and accuracy of the mathematical formulation of the laws of nature in terms of concepts chosen for their manipulability, the "laws of nature" being of almost fantastic accuracy but of strictly limited scope. I propose to refer to the observation which these examples illustrate as the empirical law of epistemology. Together with the laws of invariance of physical theories, it is an indispensable foundation of these theories. Without the laws of invariance the physical theories could have been given no foundation of fact; if the empirical law of epistemology were not correct, we would lack the encouragement and reassurance which are emotional necessities, without which the "laws of nature" could not have been successfully explored. Dr. R. G. Sachs, with whom I discussed the empirical law of epistemology, called it an article of faith of the theoretical physicist, and it is surely that. However, what he called our article of faith can be well supported by actual examples - many examples in addition to the three which have been mentioned.
THE UNIQUENESS OF THE THEORIES OF PHYSICS
The empirical nature of the preceding observation seems to me to be self-evident. It surely is not a "necessity of thought" and it should not be necessary, in order to prove this, to point to the fact that it applies only to a very small part of our knowledge of the inanimate world. It is absurd to believe that the existence of mathematically simple expressions for the second derivative of the position is self-evident, when no similar expressions for the position itself or for the velocity exist. It is therefore surprising how readily the wonderful gift contained in the empirical law of epistemology was taken for granted. The ability of the human mind to form a string of 1000 conclusions and still remain "right," which was mentioned before, is a similar gift.
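The contrast drawn above between the second derivative and the position itself can be written out explicitly (a schematic illustration using the gravitational example discussed earlier): for a system of bodies, the law gives a compact, universal expression for the second derivatives,

\[
\ddot{\mathbf{x}}_{i} = -\sum_{j \neq i} \frac{G m_{j}\,(\mathbf{x}_{i} - \mathbf{x}_{j})}{|\mathbf{x}_{i} - \mathbf{x}_{j}|^{3}},
\]

whereas no comparably simple, universal formula exists for the positions \(\mathbf{x}_{i}(t)\) themselves, which depend on the accidental initial conditions.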
Every empirical law has the disquieting quality that one does not know its limitations. The question which presents itself is whether the various laws of nature which will be discovered will fuse into a single consistent unit, or at least asymptotically approach such a fusion. Alternatively, it is possible that there always will be some laws of nature which have nothing in common with each other. At present, this is true, for instance, of the laws of heredity and of physics. It is even possible that some of the laws of nature will be in conflict with each other in their implications, but each convincing enough in its own domain so that we may not be willing to abandon any of them. We may resign ourselves to such a state of affairs or our interest in clearing up the conflict between the various theories may fade out. We may lose interest in the "ultimate truth," that is, in a picture which is a consistent fusion into a single unit of the little pictures, formed on the various aspects of nature.
It may be useful to illustrate the alternatives by an example. We now have, in physics, two theories of great power and interest: the theory of quantum phenomena and the theory of relativity. These two theories have their roots in mutually exclusive groups of phenomena. Relativity theory applies to macroscopic bodies, such as stars. The event of coincidence, that is, in ultimate analysis, of collision, is the primitive event in the theory of relativity and defines a point in space-time, or at least would define a point if the colliding particles were infinitely small. Quantum theory has its roots in the microscopic world and, from its point of view, the event of coincidence, or of collision, even if it takes place between particles of no spatial extent, is not primitive and not at all sharply isolated in space-time. The two theories operate with different mathematical concepts - the four-dimensional Riemann space and the infinite-dimensional Hilbert space, respectively. So far, the two theories could not be united, that is, no mathematical formulation exists to which both of these theories are approximations. All physicists believe that a union of the two theories is inherently possible and that we shall find it. Nevertheless, it is possible also to imagine that no union of the two theories can be found. This example illustrates the two possibilities, of union and of conflict, mentioned before, both of which are conceivable.
In order to obtain an indication as to which alternative to expect ultimately, we can pretend to be a little more ignorant than we are and place ourselves at a lower level of knowledge than we actually possess. If we can find a fusion of our theories on this lower level of intelligence, we can confidently expect that we will find a fusion of our theories also at our real level of intelligence. On the other hand, if we would arrive at mutually contradictory theories at a somewhat lower level of knowledge, the possibility of the permanence of conflicting theories cannot be excluded for ourselves either. The level of knowledge and ingenuity is a continuous variable and it is unlikely that a relatively small variation of this continuous variable changes the attainable picture of the world from inconsistent to consistent. [ This passage was written after a great deal of hesitation. The writer is convinced that it is useful, in epistemological discussions, to abandon the idealization that the level of human intelligence has a singular position on an absolute scale. In some cases it may even be useful to consider the attainment which is possible at the level of the intelligence of some other species. However, the writer also realizes that his thinking along the lines indicated in the text was too brief and not subject to sufficient critical appraisal to be reliable.]
Considered from this point of view, the fact that some of the theories which we know to be false give such amazingly accurate results is an adverse factor. Had we somewhat less knowledge, the group of phenomena which these "false" theories explain would appear to us to be large enough to "prove" these theories. However, these theories are considered to be "false" by us just for the reason that they are, in ultimate analysis, incompatible with more encompassing pictures and, if sufficiently many such false theories are discovered, they are bound to prove also to be in conflict with each other. Similarly, it is possible that the theories, which we consider to be "proved" by a number of numerical agreements which appears to be large enough for us, are false because they are in conflict with a possible more encompassing theory which is beyond our means of discovery. If this were true, we would have to expect conflicts between our theories as soon as their number grows beyond a certain point and as soon as they cover a sufficiently large number of groups of phenomena. In contrast to the article of faith of the theoretical physicist mentioned before, this is the nightmare of the theorist.
Let us consider a few examples of "false" theories which give, in view of their falseness, alarmingly accurate descriptions of groups of phenomena. With some goodwill, one can dismiss some of the evidence which these examples provide. The success of Bohr's early and pioneering ideas on the atom was always a rather narrow one and the same applies to Ptolemy's epicycles. Our present vantage point gives an accurate description of all phenomena which these more primitive theories can describe. The same is not true any longer of the so-called free-electron theory, which gives a marvelously accurate picture of many, if not most, properties of metals, semiconductors, and insulators. In particular, it explains the fact, never properly understood on the basis of the "real theory," that insulators show a specific resistance to electricity which may be 10^26 times greater than that of metals. In fact, there is no experimental evidence to show that the resistance is not infinite under the conditions under which the free-electron theory would lead us to expect an infinite resistance. Nevertheless, we are convinced that the free-electron theory is a crude approximation which should be replaced, in the description of all phenomena concerning solids, by a more accurate picture.
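For concreteness, the kind of statement the free-electron picture makes can be indicated by its standard expression for the electrical conductivity (quoted here only as a sketch of the theory under discussion):

\[
\sigma = \frac{n e^{2} \tau}{m},
\]

where \(n\) is the density of electrons that are free to move, \(e\) and \(m\) their charge and mass, and \(\tau\) a mean time between collisions. Roughly speaking, the picture accounts for the enormous spread in resistivities by the enormous spread in \(n\): in an insulator essentially no electrons are free to move.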
If viewed from our real vantage point, the situation presented by the free-electron theory is irritating but is not likely to forebode any inconsistencies which are unsurmountable for us. The free-electron theory raises doubts as to how much we should trust numerical agreement between theory and experiment as evidence for the correctness of the theory. We are used to such doubts.
A much more difficult and confusing situation would arise if we could, some day, establish a theory of the phenomena of consciousness, or of biology, which would be as coherent and convincing as our present theories of the inanimate world. Mendel's laws of inheritance and the subsequent work on genes may well form the beginning of such a theory as far as biology is concerned. Furthermore, it is quite possible that an abstract argument can be found which shows that there is a conflict between such a theory and the accepted principles of physics. The argument could be of such abstract nature that it might not be possible to resolve the conflict, in favor of one or of the other theory, by an experiment. Such a situation would put a heavy strain on our faith in our theories and on our belief in the reality of the concepts which we form. It would give us a deep sense of frustration in our search for what I called "the ultimate truth." The reason that such a situation is conceivable is that, fundamentally, we do not know why our theories work so well. Hence, their accuracy may not prove their truth and consistency. Indeed, it is this writer's belief that something rather akin to the situation which was described above exists if the present laws of heredity and of physics are confronted.
Let me end on a more cheerful note. The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. We should be grateful for it and hope that it will remain valid in future research and that it will extend, for better or for worse, to our pleasure, even though perhaps also to our bafflement, to wide branches of learning.