Wednesday, February 24, 2010
Cost of public school education
Earlier, I wrote a blogpost about the need for educational reform, citing the increasing number of college dropouts. In it, I mentioned that public universities provide excellent education at an affordable cost. Recently, Dr. Mark Perry of the University of Michigan wrote an article comparing the per-capita money spent on public school education with Harvard's tuition fee. The results are astounding. A little number crunching removes the fog that surrounds the truth, and what emerges is sometimes stranger than fiction. Here is the link to Mark Perry's blogpost. The US seriously needs educational reform.
Monday, February 22, 2010
Growth in computing power
I am running a fever, with a cough and cold that together tire both body and heart. The tiredness unfortunately keeps me away from blogging, as research, work, and study occupy most of my reduced "active" time. When the going gets tough, I have to give priority to what I am paid for and what I pay for. Somehow I found some time to write this while my Python script runs in the background.
Recently I read the 2002 NBER paper The Progress of Computing by Yale economics professor W.D. Nordhaus. It is a solid 61-page article covering the 150-year history of computing. I have always been a big fan of the evolution of computers, because it teaches many lessons about what not to do. It is still a serious paper, not a collection of anecdotes.
The key observation is the rate at which computational power has grown since World War II. Believe it or not, over the last 150 years computational power has grown roughly a trillion-fold. I don't think any other technology or field can boast this kind of growth. Medicine, of course, advances very quickly, although its progress is not as easily measurable as that of computers. But even medicine accelerated enormously once computers began improving the diagnostic and surgical abilities of doctors, as they have in every other field.
The most interesting part of the paper is a graph plotting the price of computation over a forty-year period on a semilog scale, which shows its exponential decrease. As the graph makes clear, the price of computation fell by a factor of one million between 1970 and 2000. I would like to see the gains in efficiency and productivity across industries that this reduced cost of computation has brought about. This may arguably be the most important testament to human achievement over the past half-century.
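To get a feel for what that graph implies, here is a quick back-of-the-envelope calculation in Python. The factor-of-a-million decline over the 1970-2000 window comes from the paper; the arithmetic is mine:

```python
import math

# Price of computation fell by a factor of one million between 1970 and 2000
factor = 1e6
years = 2000 - 1970

# Constant annual rate r such that (1 - r)^years == 1/factor
annual_rate = 1 - (1 / factor) ** (1 / years)
print(f"Implied annual price decline: {annual_rate:.1%}")  # ~36.9% per year

# Equivalently, how long it takes the price to halve
halving_time = years * math.log(2) / math.log(factor)
print(f"Price halves every {halving_time:.2f} years")      # ~1.5 years
```

A halving time of roughly a year and a half lines up neatly with the popular statement of Moore's law.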
Thursday, February 04, 2010
Inequality and Income distribution of G20
Recently I read an extremely informative article on the G20 website about the growth in personal income across the populations of different G20 countries; there is also a similar NBER working paper. The article is a little long, but it gives deep insight into poverty, inequality, and income distribution. The research tracks income growth since 1970. One interesting observation is that the G20 income distribution closely traces the entire world's income distribution. Since the G20 represents more than 60% of the world's population, that may make sense. But what makes them part of this club is not their population but their economic growth. If that is the case, shouldn't G20 income be significantly, or at least notably, larger than the world average? The fact that it is not simply implies that even in fast-growing economies, poverty remains a persistent problem. The US, Europe, and Japan occupy the highest income segment within the G20, as expected.
Another observation is that the income range in 1980 was more or less the same as in 1970. The seventies were a decade of small, recurring recessions across the western world, much like the current decade, although nothing as severe as the current downturn had occurred since the 1930s. The seventies also saw the peak of the Cold War. In that decade, more than one-fourth of the populations of India and China earned less than $1 per day. China's socialist market economy had not yet kick-started, and India was still confused, with no stable economic or trade policy to speak of. In fact, it tilted more towards the USSR.
By 1980, China had started its internal liberalization. India was still confused, with the government controlling most industries. Those industries lacked technological growth, innovation, and productivity, and society was split along various lines, constantly inventing new ways not to stick together.
Then came the roaring nineties and globalization. India made some right choices and jumped on the globalization bandwagon; or rather, it can be argued that India had no other choice. One thing that still bothers me, even now, is the marked absence of internal liberalization in India. Except in certain sectors like banking, when the Indian market opened to private investment it immediately invited global players, so the number of purely Indian players competing with these global giants was small. At the end of the roaring nineties, Asia was hit by a currency crisis with Thailand at its epicenter. The countries that were the proud foster children of the IMF fell first, almost pulling down everyone around them. What did India and China do differently? The better question is what they did not do: they did not drink the capital-market liberalization potion, and thus they survived.
The best observation we can draw from these curves is that throughout this period, the shape of India's income distribution curve remained the same. This means that when India's economy started developing through globalization, the gains were felt across the entire histogram. China's curve, on the other hand, became flatter and flatter, meaning the inequality between rich and poor increased steadily and dramatically. Although India did not achieve the income shift that China did, it managed to preserve the shape of its income distribution.
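To make the shift-versus-flatten distinction concrete, here is a small illustrative sketch in Python. The distributions and parameters are invented toy numbers, not data from the article; the point is only that scaling every income by the same factor leaves the Gini coefficient unchanged, while widening the spread raises it:

```python
import numpy as np

def gini(incomes):
    """Gini coefficient via the sorted mean-difference formula."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n * np.sum(x))

rng = np.random.default_rng(0)
base = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # toy income distribution

shifted = base * 3.0                                     # everyone 3x richer
flatter = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # wider spread

print(f"base:    Gini = {gini(base):.3f}")     # ~0.28
print(f"shifted: Gini = {gini(shifted):.3f}")  # unchanged: scaling preserves Gini
print(f"flatter: Gini = {gini(flatter):.3f}")  # ~0.52: more inequality
```

On a log-income histogram, the first operation slides the whole curve to the right without changing its shape, roughly the Indian pattern; the second flattens it, which is the Chinese pattern described above.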
Whatever short-term political stupidity may have occurred in India over the past two decades, this could arguably be its greatest achievement since independence. Kudos to India's policymakers!
Note: All images courtesy: www.g20.org, www.nber.org
Labels: china, economics, G20, income distribution, inequality, poverty
Wednesday, February 03, 2010
3D-FPGA Reinvented
The concept of the 3D chip keeps coming and going. I have not seen many commercial chips that are 3D, but that may become the norm in the future, as it gets harder to integrate more transistors into a given area.
If the area is small, stack it up. That's what was done in New York in the mid-nineties. In the future, at least the distant future, that's what they will do in Atlanta and Phoenix. What applies to geography applies to chip design; I find the two extremely similar.
This time, the famous and well-respected innovator Zvi Or-Bach has reinvented the 3D FPGA. FPGAs have a big advantage over ASICs in terms of rapid development cycles and lower initial investment, but in practice they are much slower and not dense enough. FPGA architecture and synthesis technology have undergone several changes, bringing them closer and closer to ASICs in performance. The 3D FPGA may be yet another such step, as Or-Bach claims.
Nu-PGA tries to increase the interconnect density of the FPGA by moving the antifuses to a layer separate from the configurable logic blocks. What does increased interconnect density mean? A lot, actually. With richer interconnect availability, the bounding box of the design shrinks considerably, so the placement and routing algorithms need not worry as much about where to lay each track, how many tracks run in each direction, or what the channel width is. It also speeds up the placement and routing process. All of this clearly means getting closer to the ASIC.
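As a rough illustration of why placement and routing care about bounding boxes, here is a minimal Python sketch of the half-perimeter wirelength (HPWL) metric that placers commonly minimize. The netlist and pin coordinates are made up for the example:

```python
def hpwl(net_pins):
    """Half-perimeter wirelength of one net: the semi-perimeter of the
    smallest bounding box enclosing all of the net's pins."""
    xs = [x for x, _ in net_pins]
    ys = [y for _, y in net_pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

# Hypothetical netlist: each net is a list of (x, y) pin coordinates on the grid.
netlist = {
    "clk_enable": [(0, 0), (3, 4), (1, 2)],
    "data_bus_0": [(5, 5), (6, 5)],
    "carry_out":  [(2, 7), (4, 1), (9, 3)],
}

for name, pins in netlist.items():
    print(f"{name:>10}: HPWL = {hpwl(pins)}")
print(f"Total estimated wirelength: {sum(hpwl(p) for p in netlist.values())}")
```

With richer interconnect, the placer can realize a low total wirelength with far less channel-width juggling, because more candidate tracks exist inside each net's bounding box.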
But the 3D FPGA described above is more like a building with two storeys. Not so fancy, except that we have never built anything taller than a single storey before. I would, however, expect the building to go up fast: stacking CLBs on top of each other in different layers, or having a layer of CLBs and a layer of IO blocks. I seriously think there is good scope for growing tall, as there is plenty of space available up there. Let's see how the market reacts to these innovative ideas and where they take us.
Monday, February 01, 2010
Turned on by COTSon
A few months ago, I wrote an article expressing the need for a multicore processor simulator, preferably a free one. A reader of my blog left a message about COTSon (I wonder why (s)he wanted to remain anonymous). So I started reading more about it, beginning with the white paper in SIGOPS, and I liked what I read.
COTSon is the result of a collaborative effort between HP and AMD. The best part about COTSon is that it is not just a multicore processor simulator; it can also act as a full-blown system simulator. That means it can simulate a range of hardware models along with the complete software stack. It is based on AMD's SimNow, which performs high-speed instruction-set translation for x86 and AMD Athlon-type processors. The software stack it can simulate includes everything that runs on x86 and AMD Athlon, including proprietary software such as MATLAB.
What COTSon does is not cycle-accurate or bit-accurate simulation, the kind that runs over an entire weekend before telling you what is wrong; rather, it is a fully functional simulation. So perhaps COTSon can act as a first-cut simulation one level above Transaction-Level Modeling (TLM), or alternatively as a tool to visualize TLM paradigms. Maybe COTSon is what we need in the design cycle in a world where time-to-market is of prime importance.
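To illustrate the difference in spirit between the two styles of simulation, here is a toy sketch of my own in Python; nothing here reflects COTSon's actual interface. A functional simulator only computes each instruction's effect on architectural state, while a timed model also charges latencies (invented ones, in this toy):

```python
# A made-up three-instruction "ISA" with invented per-op latencies.
LATENCIES = {"set": 1, "add": 1, "mul": 3}

program = [("set", "r0", 6), ("set", "r1", 7), ("mul", "r0", "r1")]

def run_functional(program):
    """Functional simulation: compute architectural state, no notion of time."""
    regs = {}
    for op, dst, src in program:
        if op == "set":
            regs[dst] = src
        elif op == "add":
            regs[dst] = regs[dst] + regs[src]
        elif op == "mul":
            regs[dst] = regs[dst] * regs[src]
    return regs

def run_timed(program):
    """A crude timing layer: charge each instruction a fixed latency.
    A true cycle-accurate simulator would model pipeline stages, caches,
    and stalls -- which is exactly why it runs all weekend."""
    return run_functional(program), sum(LATENCIES[op] for op, *_ in program)

print(run_functional(program))              # {'r0': 42, 'r1': 7}
state, cycles = run_timed(program)
print(f"completed in {cycles} toy cycles")  # 5
```

Scale the second idea up to pipelines, caches, and interconnects and you get the weekend-long cycle-accurate run; stop at the first and you get the fast functional answer COTSon aims for.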
I am now discussing COTSon with a friend doing his doctorate at Vrije University, to figure out how to learn it and put it to use. We will know the reality only when the rubber hits the road. Let's see how good it is; I will post again in greater detail once my analysis is done.
Labels: cotson, microprocessor, processor simulator