A major problem with this belief is that it relies on the existence of a program capable of designing computers at least as well as human engineers can. We have no such program now, and we probably could not build one even if we tried. It would be a mammoth task, requiring a massive research program and a far better knowledge of the workings of the human mind and the scientific principles of computer design than we currently have. We may be able to build better computers, but an algorithm that builds better computers is a far more daunting task. It takes years of research and scientific breakthroughs in many fields to make a new computer chip, not just a simple improvement on the old design. In principle it may be possible, but the predictions that give a 50-year timeline based purely on Moore's Law are flawed, as they pay no heed to whether or not we will have the software to use this mammoth power.

Also, this belief assumes - wrongly - that a computer with as many transistors as the human brain has neurons will automatically be as intelligent as a human. The prediction of Moore's Law is simply that the number of transistors on a chip will double every 18 months: not the processing power, not the intelligence, but the number of switches. We can only use this extra power to run our programs faster and to run larger programs. Real intelligence will probably take not just speed and high technology but a fundamental change in the type of machine we make, probably to a neural-net-based device.
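To make concrete how little Moore's Law actually promises, here is a minimal sketch. The starting transistor count, the neuron estimate, and the 18-month doubling period are all illustrative assumptions, not figures from the argument above; the point is only that the projection counts switches and nothing else.

```python
# Minimal sketch of what Moore's Law literally predicts: switch count, nothing more.
# All figures below are illustrative assumptions.

transistors = 1e9          # assumed transistor count on a current high-end chip
neurons = 8.6e10           # rough popular estimate of neurons in a human brain
doubling_years = 1.5       # 18-month doubling period

years = 0.0
while transistors < neurons:
    transistors *= 2
    years += doubling_years

print(f"Transistor count matches neuron count after ~{years:.0f} years")
# Nothing in this projection says anything about intelligence --
# only about the number of switches on the chip.
```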

Finally, there is no guarantee that Moore's Law will hold. Its original form had the doubling time as 1 year; this was accurate until the 1970s, when it was revised to 18 months. Change in the rate is therefore not without precedent; it may slow again, or even stop entirely, thus invalidating the current 30-50 year predictions for the "singularity" being reached.
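As a rough illustration of how sensitive these timelines are to the doubling period, the sketch below reuses the same illustrative figures as above and simply varies the period. The specific periods chosen are hypothetical; the point is that a modest slowdown stretches the timeline considerably, and a halt removes it altogether.

```python
import math

# Rough sensitivity check: how the projected "parity" date moves
# as the assumed doubling period changes. Figures are illustrative.

start, target = 1e9, 8.6e10
doublings = math.log2(target / start)

for months in (12, 18, 24, 36):   # hypothetical doubling periods
    years = doublings * months / 12
    print(f"doubling every {months} months -> parity in ~{years:.0f} years")
```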

Update: I'd like to point out that I am very much of the opinion that AI is possible; my point of contention with the "technology singularity" idea is that it seems to assume that computers will magically be as intelligent as humans once they possess the same processing power. My major objection to this is that we will need to know how to effectively apply that power, and that knowledge may elude us for far longer than the necessary computing power does.