America’s Worst President


“You can fool some of the people all of the time, and all of the people some of the time, but you can’t fool all of the people all of the time.”


-Unknown (apocryphally attributed to Abraham Lincoln)


No, it’s not Barack Obama – he’s trying hard, but he hasn’t had much time yet and the competition is very stiff. It wasn’t Richard Nixon – a vile person can sometimes be a tolerable leader, and Nixon was actually one of the better Presidents. FDR gets a lot of crap from certain quarters, and he certainly made some big mistakes, but he dealt adequately with a couple of very serious problems not of his own creation (though Roosevelt might have turned out to be a horrific failure had Hitler been satisfied with Czechoslovakia). Warren Harding and Ulysses Grant are well known for the corruption they tolerated, but neither managed to really do much damage. It was during the Coolidge administration that the groundwork was laid for the First Great Depression, but there wasn’t much he could have done about it even if he had wanted to. Herbert Hoover is reviled for his failure to address the Depression (as Obama will be) but Hoover, like Obama, would have faced serious political challenges had he attempted to implement a meaningful policy. There are plenty of other candidates for last place – Reagan, Carter, George Bush II, Lyndon Johnson (whom I’d give a solid second-to-last), Andrew Johnson, James Buchanan…

Buchanan was faced, near the end of his one term, with the secession of South Carolina and then six other Southern states (Georgia and the so-called “Gulf Squadron” of Florida, Alabama, Mississippi, Louisiana, and later Texas). His response was to do very little – he said that although secession was illegal, the US Government was not empowered to intervene (a very dubious Constitutional interpretation). While lame-duck Buchanan waited passively to leave office, the seceding states seized most of the Federal property in their territory – including forts and arsenals. Buchanan did send a civilian ship to resupply Fort Sumter in Charleston Harbor, which the Secessionists were trying to starve out, but Star of the West fled when fired on by South Carolina’s shore batteries. Buchanan was still not ready to precipitate the country into war, though, and let the incident pass. Though he had nothing to do with causing the secessions, Buchanan is usually severely castigated by historians for his failure to do anything about them.

And yet – what was he supposed to do? The US Army was hardly prepared for an invasion of the South in January of 1861 – as usual the peacetime army had been very small*, and loyalties to states were then much stronger than now. Any attempt to prevent the capture of the Federal arsenals might well have failed, and/or caused a mutiny, and would certainly have provoked a war. Up until March of 1861, desperate eleventh-hour negotiations were still going on to reverse the secession, and it was far from obvious at the time that these were doomed. One Congressional peace plan was stalled by only a single vote. It is likely that the seceded states would have rejected it, but not certain: there was still a great deal of pro-Union sentiment in the South, and some people plainly regarded Secession as a ploy to gain concessions. Jefferson Davis himself had at first opposed Secession. Could we really expect Buchanan to throw away the last chance, however slim, of escaping the horror of a civil war, in order to resolve a crisis that he had not created, and to take the blame on behalf of the man who had created it?

It is said that the Civil War was caused by Secession, but there was no certainty that Secession need bring about war. Furthermore, Secession itself could not have happened without the election of Abraham Lincoln. Secessionists in the South even made it their first priority to ensure Lincoln’s election by attacking Stephen Douglas, the only truly national candidate in the 1860 Presidential race and the only one likely to prevent Lincoln’s victory. It was Lincoln’s intemperate anti-slavery rhetoric that guaranteed his unacceptability to the slave-dependent cotton states of the Deep South. Although Douglas was denounced as a “rank Abolitionist”, he clearly and consistently favored protecting slavery wherever the citizens might want it, and his election would have made Secession politically impossible for the time being.

Had Abraham Lincoln been sincerely dedicated to the abolition of slavery, we would surely forgive him for using anti-slavery agitation as his chief political weapon, but he was not. Lincoln, like most other politicians, claimed whatever positions were politically useful to him, and contradicted them without the slightest hesitation whenever it was convenient. He is famous for declaring that “this government cannot endure, permanently, half slave and half free” – a clear declaration of Abolitionist intent; he also said repeatedly that his only priority was preservation of the Union, and he admitted that if he could do this without freeing a single slave he would do so. Lincoln promised the South that he would not interfere with slavery wherever it already existed, and even supported a Constitutional amendment (the Corwin Amendment) protecting the “peculiar institution” in perpetuity. Even after the outbreak of war, Lincoln revoked the emancipation orders of Union generals (Fremont and Hunter), effectively returning freed slaves to their owners.

Lincoln was also a racist, at least in public and in policy. He stated that blacks were by nature unequal to whites and could not be integrated into white civilization. Mexicans he despised as “a mongrel race not one in eight of whom are white.” His plan for dealing with freed slaves was to deport them to Africa. To this end, near the end of the war, a pilot colony was even established in the Caribbean. When the colony failed and many of the settlers died, Lincoln abandoned the plan for the time being, but it is uncertain whether he would later have undertaken mass deportations, had he lived and had Congress permitted.

It is said that Lincoln was no more racist than other men of his time, and that he could not afford to declare his supposedly genuine Abolitionist sentiments openly for political reasons. But there were Abolitionist political leaders, and even opponents of racism, who did declare themselves openly. If he lacked that much courage, why is Lincoln given any credit for being a visionary? He used Abolitionism to gain power, but at no time did he take any political risk for it, even though his strident exploitation of the anti-slavery movement had already destroyed any credibility his promise not to interfere with slavery could have had in the South. In fact, even during the war, Lincoln approached the abolition of slavery with all the eagerness of a man contemplating a dive into raw sewage. The Emancipation Proclamation had to wait until the tide of war was clearly running against the Confederacy, and even then was framed so that it did not immediately free any slaves. When slaves were eventually freed, it was at first only so that they could be conscripted to labor for the Union armies, while the slaves of owners in non-seceded states were left in bondage.

Even without having any firm convictions, Lincoln managed to be perceived as an extreme partisan and was the most divisive Presidential candidate of his century, and perhaps ever. He achieved this partly through the manipulation of mobs. He owed the Republican nomination, which he captured from the leading (and more dignified) Republican, William Seward, to the fact that the Convention was held in Chicago, where organized crowds of Lincoln supporters could shout down pro-Seward speakers. (And, also, to backroom dealing wherein “Honest Abe” traded a Cabinet post for the votes of Simon Cameron’s delegation). His election campaign included virulent anti-Southern speeches and ominous pseudo-military parades, often conducted late at night by torch-bearing mobs. Lincoln worked hard to be perceived as the enemy of the South – a project in which the Secessionists wholeheartedly participated. Lincoln was not even on the ballot in most slave states, and where he was, his showing was miserable (26,395 votes altogether, most of them in Missouri).

Though he won the election, it was with a minority of the popular vote**. Lincoln was about as unpopular as a candidate can be and still win a majority in the Electoral College; he won almost all of the free states but in several cases (including his home state of Illinois) by small margins, and had essentially no votes in the slave states. Given Lincoln’s known political habits, it is not unlikely that he had some help from electoral fraud in certain key states (especially Pennsylvania, which Cameron had promised to deliver for him, and Illinois) in achieving this minority victory.

When his election precipitated the secession of seven slave states – as they had already been threatening to do – Lincoln participated in attempts to negotiate a peaceful solution (i.e., he cooperated with Buchanan’s much-derided policy), but any promises he made were of course disregarded in the South. Meanwhile, certain Northern states were already mobilizing troops.

Even before his inauguration, Lincoln had thus brought about the creation of the Confederacy. But the major questions remained unsettled: Would there be a war? If so, would the remaining Union hang together? Who would win? Lincoln decided the first two of these questions with some atrocious bungling which showed how completely out of touch he was with half of the country he had hijacked.

Lincoln was determined to rule over the entire United States, and not let any parts of it out of his grasp – this, not any Abolitionist intent, was his prime motivation throughout his administration. Negotiations having failed, his only alternative was to conquer the South by arms. On the surface, this seemed easy enough – the seven-state Confederacy had only a tiny fraction of the manpower and resources that the Union held, and no Navy or arms manufacturing capacity. Its only hope was foreign assistance. Lincoln’s problem was that, while the northernmost states were eager for bloodshed, in much of the Union there was little or no support for war. In particular, eight slave states had remained loyal but were obviously unsympathetic toward Lincoln’s aspirations.

Lincoln chose Fort Sumter, still under siege in Charleston harbor, as the place to start the war. Sumter was running out of food, and had to be immediately resupplied or surrendered. The President was presented with a plan for smuggling supplies in at night by small boats slipping into the harbor from warships lurking at sea. This plan was adopted, but Lincoln insisted on making some modifications.

What Lincoln ultimately did was to send warships openly into Charleston harbor in broad daylight, under the Confederate guns, flying the Union flag. To ensure that no one missed the point, Lincoln even cabled the Charleston authorities in advance to warn them of the mission. Since the latter had earlier fired on Star of the West, and had already announced that they would fire on any other resupply attempt, there was little risk that they would allow Union warships to defy them openly. They did not; the warships were forced to withdraw and Fort Sumter surrendered.

Lincoln’s intent was to force the Confederacy into firing the first shot so as to unite public opinion against them – as he put it, they would be “firing on bread” in view of the whole country. But Lincoln was sadly ignorant of the real state of opinion in the Border states, and unaware that no one was fooled by his clumsy ploy. When he issued an immediate demand that all the states provide quotas of troops to put down the rebellion, four more slave states promptly seceded and three more exhibited marked disloyalty.

The expansion of the Confederacy from seven states to eleven might not at first seem critical, but it was. The original seven had very little industry, and though they had a large (and almost indefensible) territory, they were comparatively sparsely populated (most of Florida and Texas was then empty). A large proportion of their population was slaves, who could not be used for troops and whose loyalty could hardly be counted on. The seven-state Confederacy had very little liquid capital, as all its wealth was in land, and its economy was entirely dependent on crop exports, mainly cotton – for which the main consumers were England and the North they had just seceded from. Ships, shipyards, foundries, weapon factories, iron, coal, etc. – even the manufacture of clothing and shoes – were all severely or totally lacking. To worsen its defensive situation, the Confederacy had a very long front with the Union – stretching from Texas to South Carolina – but little depth: not a single significant port was more than 400 miles from Union territory (and New Orleans and Savannah were much closer). The large slave population was a major liability as well, as was the pro-Union white population of the hill regions.

Three of the four new Confederate states – Virginia, North Carolina, and Tennessee – were much more developed, and all four, including Arkansas, had relatively few slaves (which is why secession had not at first seemed necessary to them). Their accession doubled the population of the Confederacy and tripled its industrial capacity. Most of the new population was white, and the loyalty of whites in Appalachia was to some degree cemented by Lincoln’s aggression and (presumably) by the fact that the Confederacy now appeared to have a fighting chance.

The Confederacy’s geographic defensibility was improved as it not only increased in depth (without adding much length) but gained the Appalachians as a defensive barrier. The Southern seaports, and the potentially rebellious concentrations of slaves in the Deep South, were now farther from Northern interference. More Federal arsenals were seized, and through Virginia the Confederacy was able to burn the Navy yard at Norfolk and threaten the capital at Washington. Meanwhile the Union was forced to devote part of its forces to occupying Missouri, Kentucky, and Maryland – slave states which had not managed to formally secede, but were now as much pro-Confederate as pro-Union.

Every one of the four states which Lincoln forced into the Confederacy had already rejected secession, either explicitly or by refusing to vote on it. The regional, partisan President had ignorantly misinterpreted these decisions as conclusive, and even more foolishly had thought that his ruse at Sumter would turn local Unionism in the Upper South into support for a national war against their fellow slave societies. On the contrary, the Border states detested and feared Lincoln, and they supported the right of secession even when declining to exercise it themselves. A Union military conquest of the South would mean the end of their own slave societies and the subjugation of the South to the North, and they knew it. They preferred peace and Union, but given a choice between taking arms against their slave-owning allies and taking arms against their Abolitionist enemies, it should have been plain that they would choose the latter.

Lincoln had enabled the creation of the Confederacy by his vocal anti-Southern partisanship; by his arrogant fumbling he had turned it from a hopelessly weak and helpless nation into one with some ability to fight. But the North still had a huge advantage in resources – more than twice as many people, the vast majority of the nation’s industry, an excellent (by comparison) rail network, reserves of gold and silver, and (perhaps most importantly of all) virtually the entire Navy. The conquest of the Confederacy should not have been too difficult – but Lincoln wasn’t yet done screwing things up.

The incompetence of Union general officers during the first half of the Civil War is legendary. Their specific failures receive plenty of attention (and deservedly so); the fact that they were all political appointees approved by Lincoln is less emphasized. It may be that they were forced on him by Congress, but in this (as in so many other things) the best that may be said of our sixteenth President is that he failed to stand up against pressure. The bloodthirsty alcoholic Ulysses Grant, often considered the best Union general, finally won the war, but at horrendous cost in life – his most remarkable quality was his relentless aggression and disregard for casualties.

Lincoln wasn’t always content to let his generals make all the military mistakes, either. For political reasons he sometimes insisted on attacks even when these were militarily wasteful or even counterproductive, and he refused to withdraw the government from Washington where its vulnerability crippled Union operations. Lincoln was much more concerned with his own political future (which was obviously tied to the war’s progress) than with the carnage among the soldiers who had to fight his war.

If it hadn’t been for Lincoln’s assistance in winning over the Border states for the Confederacy, it is unlikely that a war would even have been needed to bring the original seven seceders (eventually) back into the Union. The Deep South was ill-equipped to survive on its own and, heavily dependent on imports, very vulnerable to blockade. Even the eleven-state Confederacy might eventually have succumbed to the so-called “Anaconda Plan” without a massive war; perhaps the Anaconda itself was unnecessary. The Confederacy was very loosely organized, united only by its fear and hatred of Abolition, and each state was naturally determined to retain its own full sovereignty. It was also very dependent on Northern markets for cotton.

President Lincoln, however, was not willing to wait for economic pressure and internal disorder to bring the South back into the Union. Such a reunion could only have been completed under a different President, one less hateful to the South, and Lincoln was not about to surrender the power over the slave states to which he felt a quirky electoral system had entitled him. He was also surely aware that his chance of reelection was small if he displeased his fanatical followers by failing to take firm action against the “traitors”. His ambition, as he himself showed through both words and actions, was never to liberate slaves but to preserve the Union, and it would be fair to say that by “Union” he meant Union under his own personal rule.

During the war to restore the Constitutional Union, Lincoln constantly and flagrantly violated the Constitution he was allegedly defending. His opponents were arrested without charge and imprisoned without trial; critical newspapers were seized; the Supreme Court was even prevented by armed force from hearing key cases. By 1864, the outcome of the war was no longer in doubt and Lincoln was less unpopular than before (and of course most of the slave states weren’t voting), but he still hedged his bets with ballot fraud to ensure his reelection. Quite likely the near-dictatorship that he created for himself would have carried on indefinitely, but for two fortuitous circumstances: his early death and the fact that he had picked a Democrat for Vice President in 1864, vainly hoping to thereby appear less partisan. Andrew Johnson was in no position to take over the dictatorship, and it died with Lincoln.

Had Lincoln lived longer, his present fame would certainly be much diminished. He would surely have sought a third term, and to retain his dictatorial powers. As the thrill of victory wore off, his impositions would have become harder to tolerate, and his flaws more controversial to his allies. But a dead man is always safe to make into a hero, and the time and circumstances of Lincoln’s death made him an ideal martyr for the Republicans and Abolitionists whereas, had he lived much longer, many of them would have been working to undermine his dangerous autocracy.

The Lincoln legend was reinforced during the rapprochement of the later nineteenth century between North and South, when the leaders of both were re-cast as heroes. It was at this time that the motive of the North, which had been to annihilate States’ Rights and preserve the personal authority of Lincoln, was reinterpreted as the liberation of the slaves, and the motive of the South, which had never in reality been more than a defense of its brutal institution of slavery, was transmogrified into an idealistic crusade for States’ Rights. By mutual agreement (among whites) the self-serving perpetrators of slavery and of slaughter became saints.

When Lincoln’s widow, Mary Todd, died in 1882 of the syphilis which surely contributed to her insanity, the real cause of death was suppressed by her doctors, who attributed her tabes dorsalis (known even then to be caused only by syphilis) to an accidental fall. The legend of Abraham Lincoln was already too entrenched to be tarnished by mere truth. The likelihood is that Abraham, true to his character as we know it, knowingly infected his wife but never told her even after she became ill. Mary Todd sought treatment only in 1869, when she had been seriously ill for several years. There is evidence that “Honest Abe”, however, was being treated for syphilis before he even met Mary Todd.

The historical picture of Lincoln is of a man rather less than heroic: a venal, inept, and unscrupulous megalomaniac. He brought about a terrible war, which killed more Americans than both World Wars combined; he prosecuted that war incompetently; he was a typical crooked politician who lied, contradicted himself, broke his promises, and even cheated in elections; he scorned the Constitution and ruled as a dictator. But at least he freed the slaves, even if he hadn’t set out to do so, even if he dragged his feet at every turn and planned a mass deportation of blacks?

The simple answer is yes – but with or without the Civil War, slavery was already a dying institution by 1860. Slavery – of the antebellum American kind – was principally an adjunct of plantation agriculture. It had completely failed to take root in the New Mexico or Kansas territories, and even in the oldest districts of the Old South the small farmers of the hill country owned few or no slaves. By 1850 slavery was already fading out in the Border states. In the East, slaves were more plentiful than needed, and the real motivation behind much of the obsession with legalizing slavery in the territories was the vain hope that new markets would open up where these unwanted slaves could be disposed of. It was for financial, not humanitarian, reasons that the Confederate Constitution banned the importation of slaves. Some Southerners even hoped to conquer new territories in Mexico or the Caribbean where their slaves could be sold, not realizing that these places had no use for slaves (or, like Cuba, already had their own surplus). In reality, the handful of slavery-dependent cotton states had a fixed realm which could not expand and they would have been increasingly powerless as the rest of the world grew around them.

Before the Civil War, there was controversy in the South about whether slavery was even profitable. The slaves cost less than free workers, but they also produced less, stole more, and could not be simply fired when they got sick or were too old to work. In post-war writings, it is not unusual to find Southerners claiming that the end of slavery was an economic boon, which is not implausible. The real purpose of slavery was, arguably, the social control of the Negro, and this control did not end with slavery.

An independent South, especially if it were just South Carolina and the Gulf Squadron, would have had other problems in maintaining slavery. Without the Southern votes in Congress, the Fugitive Slave Laws would have disappeared, making it easier for slaves to escape. Public opinion in the industrial world in general was increasingly anti-slavery, especially in Britain which was the greatest buyer of cotton, and a country totally dependent on exports could not ignore this forever. If nothing else, agricultural machinery would eventually have replaced most slave labor, being cheaper.

The process of emancipation might have taken several decades, but not necessarily: slavery had been abolished throughout the Western world by the end of the century, even in places like Brazil and Cuba that had been totally dependent on slave-worked plantations. Was the nominal freedom of a generation or two of American slaves worth the tremendous cost in lives, the dislocation of war, and the permanent end of the balance between State and Federal power? Maybe so, but we should remember that the “Great Emancipator” was perfectly willing to take all those lives without freeing a single slave.

President Obama likes to be compared with Abraham Lincoln. He does in fact share many of Lincoln’s virtues: divisiveness, partisanship, narrow-mindedness, arrogance, corruption, dishonesty, total unscrupulousness, fanaticism in the pursuit of personal power, expertise at dirty politics and a facility for rhetorical public speaking. Both men rose suddenly to the Presidency from a position of obscurity, partly by means of cultivating a mob mentality among their followers. The comparison is less flattering than Mr. Obama thinks.



I am not much in the habit of citing sources, since I write on my own time, but I feel obliged to draw attention to Bruce Catton’s The Coming Fury as a key resource for events leading up to the Civil War, especially the details about the Fort Sumter affair, which are usually censored from popular histories.

*The US Army had only 16,000 men total in 1860, of whom a substantial part had already deserted or surrendered to Secessionist militias even before the debacle at Fort Sumter.

**Lincoln won slightly less than 40% of the ballots in a four-way race. Curiously, Lincoln would have carried the Electoral College even if the votes of all three other candidates had been united behind a single candidate. The election of 1860 was by far the most regionally-dominated in American history.

The King Is Dead


“No society can surely be flourishing and happy, of which the far greater part of the members are poor and miserable.”


-Adam Smith, Wealth of Nations


    In Anatomy of a Depression I explained the proximate cause of the ongoing depression. Of course, when I correctly predicted the current economic trainwreck, I gave much too optimistic an impression by exploring only the most immediate source of the present trouble – that is, an extreme inequity in the distribution of income which has led to an economy driven by untenable consumer borrowing. That’s plain old-fashioned Keynesian economics. Conceivably we could salvage the economy, at least for a while, by redistributing wealth (the pork stimulus won’t work, as you already know because I explained it in Why the “Stimulus” Will Fail). But the underlying problems go deeper than Reagan’s tax cuts for the rich: for several reasons, capitalism itself is no longer viable.


    I can already hear the whining: “Those Commie Pinko Socialists have been saying for a hundred years that capitalism was obsolete, and they were wrong! Capitalism has to last forever, because it’s still around!” Bullshit. The Commie Pinko Socialists were right. Capitalism was obsolescent more than a hundred years ago. Economic history ever since the Industrial Revolution has been a history of struggling to find solutions for the problems caused by capitalism. In the first Great Depression, in the Thirties, it failed completely and did not recover. Capitalism isn’t dying; it’s long since dead.


    But what about the boom period of the Forties, Fifties, and Sixties? Didn’t FDR save capitalism from extinction? No, he didn’t; what he did was put the terminal patient on life support. The American economy since the New Deal has been modeled on Mussolini’s Corporatist plan – a close partnership between industry and government, with a huge chunk of GDP directly ordered by the government and the rest tightly regulated. Sometimes it’s hard to even tell where the government ends and the corporations start. It’s a sweet deal for the big corporations, who are supported by government contracts and subsidies, protected by regulations that smother competition, and bailed out if they somehow manage to fail anyway. Even the big unions get in on the action, skimming a share from the surplus provided by the remainder of the workforce that is still productive.


    What keeps this government/corporate Frankenstein going is war. The high spending and high taxes (or prodigal borrowing) necessary to keep the masses employed have to be justified to the sub-literate public and the greedy corporate execs. “Emergency” economic measures may be acceptable during a collapse, but as soon as things have recovered somewhat, people start complaining about the impositions of “big government”. The underlying problem, though, has not been solved; massive government spending is the only way to keep aggregate demand high enough and stable enough to sustain the economy. Without it, any flicker in public confidence could lead to a swift and total shutdown.


    World War Two was a necessity for America, as FDR realized; support for the New Deal couldn’t last much longer. Soon after the war was over, the economy came apart again, and a new war (Korea) had to be found. After that, the economy slid again, but the Korean War was replaced by escalation of the Cold War, which provided a good excuse to keep military spending high even in peacetime, and then there was Vietnam on top of that. Times were good; except for a couple of brief interludes where military spending declined, the economy soared for three decades. But it wasn’t a capitalist economy; it was a wartime corporatist economy.


    Then some misguided people with insufficient knowledge of macroeconomics got the heretical idea that wholesale killing with no compelling political justification was a bad thing, and, even worse, that colossal peacetime preparations for total war were unnecessary and even dangerous. The Vietnam stimulus plan was cancelled; the economy went in the crapper and finally collapsed. It only recovered (sort of) when Reagan reinvigorated the Cold War stimulus plan. After it broke down again in 2000, it had to be restarted with the Iraq stimulus plan…


    There are a couple of problems with this way of doing things, aside from the fact that we can’t seem to keep the American economy functioning without bombing anyone. One of them is that Reagan’s regressive tax cut not only left the government ultimately insolvent but undermined the effectiveness of the system. Another is that we’ve created a monstrosity of government that has made democracy meaningless. But even if we were to continually fight big enough wars to keep things moving, and tax the rich enough to prevent the gradual concentration of all wealth in a few hands, it wouldn’t keep us afloat much longer. The historical conditions that enabled capitalism (even our bastardized modern corporatism) to create so much wealth (and it did, indeed, create a vast amount of wealth) are disappearing.


    Capitalism is based on certain fundamental assumptions, some of which are no longer valid. Among them are:

  • Competition among producers. Most of the purported benefits of capitalism come from competition, both among businesses and among workers. Competition is supposed to regulate business profits, eliminate products that people don’t want, and encourage workers to be productive. In reality, genuine competition between businesses is a rare exception (competing advertising is not competition in any useful sense). Monopolism and collusion are problems that have long been recognized but never successfully dealt with. Even workers sometimes manage to beat the principle of competition by forming unions, allowing them to leech off a non-competitive industry or the government (i.e., the taxpayers). But should we even want businesses to compete? The pressure of short-term competition encourages them to do irresponsible things ranging from long-term degradation of the industry to deliberate environmental contamination. The modern world requires a level of integration and planning that is inconsistent with ruthless competition.
  • Boundless growth is both possible and desirable. Continual growth is necessary for capitalism to work. A growing economy creates new industries that haven’t been monopolized yet, invents novel products that people will buy even though they don’t need them, and provides opportunities even for people who aren’t already rich and connected. Without growth, there are never enough jobs, competition disappears from the stagnant economy, and only the rich can get richer. But it should be evident to any sane person that growth cannot go on forever, at least not without a declining population to compensate. The Earth can only handle so much waste, only produce so much food, only provide so much energy. Every technological fix we find for an environmental problem or resource limitation creates more problems. Even when solutions are found in time, there is no guarantee of them being used. (See Hippies Cause Global Warming for example.) If we try to sustain infinite growth in a biosphere that isn’t growing at all, sooner or later we will make a fatal stumble. But even if we don’t, do we really wish to live in a world every square foot of which is overrun with people, superfluous consumer junk, and waste?
  • Consumption is unlimited. Implicit in the concept of capitalism is the assumption that people will always want to purchase more stuff, no matter how much they already have. However, most people (in the West) now have everything they actually need to live, and many people have so much crap that new crap has little marginal utility. It takes hundreds of billions of dollars of advertising every year to keep people buying, and most of them can still very easily cut their purchasing dramatically (for instance, if they are uncertain of the future and want to save). This is why a depression like the current one can happen so swiftly – the whole card castle depends on nearly everyone spending money as fast as they can borrow it, but there’s little or no real need for much of the spending and it can stop at any time. The mere expectation of hard times can cause total collapse.
  • Human labor is valuable. Capitalism is a cycle of production and consumption in which people freely exchange their own production for that of others; that is, they are only able to consume if their production is valued by others. This system worked great in the era when human labor was the key factor in production (i.e., before machinery) and was relatively scarce (i.e., before modern medicine) – it was certainly a huge improvement over the slavery that preceded it. But in the modern world, most human labor has very little value. The supply of people in the world is much larger than could ever be efficiently employed, and most kinds of labor can readily be replaced by machines. The price of labor in the world market accordingly can be no higher than the cost of mere subsistence; any deviations from this are due to national markets being protected from competition against nearly-free Asian labor, and these protections are crumbling. Once, automation was widely believed to be the future of manufacturing; now, an endless supply of arbitrarily cheap labor has mired us firmly in the sweatshop era.

    Better-educated workers are not a solution to this problem. A large supply of skilled workers would just drive the cost of skilled labor down to the same starvation level. Even now, the high incomes of the most intensively trained professions (medicine, law, engineering) are maintained only by artificial barriers to entry (see Education is Class Warfare) and strictly rationed education. If we produced more doctors and engineers, this would have some benefits, but if every human capable of absorbing the education received it, doctors and engineers would be reduced to the same poverty as garment workers and without eliminating the surplus of the latter. There are just too many people, and technology is getting better and better at replacing even skilled labor.


    Another problem with capitalism is that it deals poorly with what economists call “externalities”. These are costs or benefits of an activity that aren’t automatically charged to the person who causes or benefits from them; it is often infeasible to allocate them at all. National defense is an example; it is very difficult to say who benefits from it or how much. No one is going to mail in a check for what they think it is worth to them, and there is no way to just cut off the national defense service to your house if you don’t pay. Roads and public education also provide major externalities. Pollution is a negative externality – it is not practical for a company which causes some pollution to negotiate with every person who might ever be affected by it and pay each of them whatever they would accept to put up with it. Government intervention is required for externalities to be accounted for. When externalities were a minor aspect of the economy (i.e., when things were simpler and nobody cared about pollution), government interference could be minor. In the modern world, however, externalities make up a huge share of the economy, perhaps most of it.


    In the future, the limiting factor in production will no longer be the supply of human labor, or even the supply of capital; it will be factors in the natural environment: energy, land, and above all the need to preserve a livable environment. In fact we have already reached the point where environmental factors should be limiting, even though some countries (China) allow horrendous pollution. The ever-escalating consumption that capitalism demands cannot be sustained or allowed, nor can unrestrained competition, nor are these things even possible under capitalism without massive government interference. We need a different solution, one that can provide a bearable life to the people who inhabit this planet while preserving it for many future generations.


    What about isolationism? Except for oil and tropical fruit, America is capable of producing everything it needs, yet we import most of our manufactured goods, with dire economic consequences. If we banned the imports we don’t need, and expelled the illegal aliens, we could have full employment – for a while. We’d still be dependent on the whim of the public to keep buying unnecessary junk, and we’d run down our environment that much quicker with more manufacturing – infinite growth would still not be possible. And after a generation or two, better automation might bring back mass unemployment anyway.


    What about an economy based on services and intangible (intellectual) products? If people were content to consume mostly software and entertainment, a lot less waste would be generated. Unfortunately, it’s really easy to steal intellectual property – so easy that some people think it isn’t stealing, just like some people think there’s nothing wrong with helping themselves to your wallet if you’re careless enough to drop it. Most people are never going to be good enough at anything creative (or at programming) to be paid for it anyway, and people can abruptly stop buying such things just as easily as they can any luxury goods. Most services are far from necessities, too, or are needed only in relation to consumer goods, or require exceptional talent. The only services that people will reliably purchase are medical. Could we build an economy in which most of the population works full time just to make sure no one-in-a-million disease goes undetected and everyone who is too fat to stand up has their own personal bed pan changer? Maybe, if you don’t mind the slack-jawed girl who got through high school by copying your homework being your operating room nurse – but I’m not exactly looking forward to the world where all of society’s efforts go to extending the average lifespan by three weeks, the only employment for most people is nursing homes, and most of the gross national product is controlled by insurance companies. I think we can do better than that.


    If they were given a choice, many people would likely prefer less work and more leisure to achieving the maximum possible throughput of disposable consumer goods. The main goal of technology and capital in the past has always been to produce more junk, but higher productivity could just as well be used to reduce work. Manufacturing less superfluous crap would alleviate a lot of our environmental problems, it would stabilize the economy against sudden lapses in the crap-buying behavior of consumers, and it would give people more time to educate themselves, exercise, travel, get drunk, or whatever they want.


    If the amount of available labor were reduced or restricted, it would make labor scarce and valuable again. This alone would stop many abuses by employers, who could no longer count on replacing any employee at will, but without protecting abuses by workers (which current labor laws do, when they are enforced). The distribution of income problem which has led to the present crisis would be solved by a combination of higher wages and paying people not to work. (We already do the latter, but we try to pretend it is somehow based on “need” or “merit”, which is pure baloney because no one is a worse judge of need or merit than a bureaucracy.)


    Paying people not to work would reduce the oversupply of labor, remedy the distribution of income, and stabilize the economy without frantic unsustainable growth. It could replace many existing programs that subsidize non-work, such as welfare, unemployment benefits, disability, and perhaps social security retirement. Unlike those programs, it would be fair, because everyone would have equal access to the benefits. There would be no need to pay armies of “social” “workers” to recruit “clients”. Every American would be guaranteed at least a minimum survival level of income, and they could decide for themselves whether they should work. With labor scarce and valuable, those able to work would have a strong incentive to do so. Production could be limited to what is environmentally acceptable, without depriving tens of millions of Americans of their livelihood; there would be no need for gratuitous wars to accelerate public spending.


    Limiting the length of the work week would help stabilize the supply of labor; shorter hours would encourage more people to take jobs while preventing the more ambitious from creating an excess labor supply through sheer overwork. The so-called forty hour week is a joke; employers can and do demand as many work hours as they want, and overtime pay is no penalty because they just pay a lower base rate to make up for it. Workweek length should be an absolute limitation, or at least there should be a penalty to discourage overtime (a surtax, for instance).


    A simple guaranteed income plan would solve the distribution of income problem and stabilize the economy without the need for any “stimulus” spending or monetary shenanigans, ever again. It would cost a lot, certainly: the simplest and fairest way to do it would be to make a payment to every adult citizen, without trying to find out who is really unemployed and who is cheating the system by working “off the books”, and without punishing anyone for working. That would cost two or three trillion dollars a year. But there is no doubt that we can afford it; after all, we are already affording a basic living to almost everyone. Taxes would have to be higher, of course, but the subsidy would outweigh the tax increase for most people. The rich would have to pay more, but that is far overdue anyway.
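

    As a rough check on that “two or three trillion dollars a year” figure, here is a minimal back-of-envelope sketch in Python; the adult-population count and the per-person payment levels are assumed round numbers for illustration, not figures from this essay.

```python
# Back-of-envelope cost of a universal payment to every adult citizen.
# Both numbers below are assumptions for illustration: roughly 230 million
# U.S. adults (circa 2009) and a few plausible survival-level payment sizes.

ADULT_CITIZENS = 230_000_000

for annual_payment in (10_000, 12_000, 13_000):
    total = ADULT_CITIZENS * annual_payment
    print(f"${annual_payment:,} per person per year -> ${total / 1e12:.1f} trillion per year")
```

    At these assumed levels the total comes out between roughly two and three trillion dollars a year, consistent with the figure above.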


    [By the way, don’t believe any liar who tells you that high taxes on the rich will ruin the economy. Some of this country’s biggest booms have happened when the top rate was 91% or even higher.]


    Here are some other things we need to do to fix the economy:

  • End competition with impoverished foreign workers who breed like flies. That means high tariffs or outright import bans targeted at countries with low standards of living – not tariffs designed to protect particular industries. It also means getting rid of illegal aliens, if necessary by closing and mining the Mexican border, and banning the employment of legal aliens. Eliminating imports would also mean we’d quit paying the Chinese to pollute the atmosphere.
  • Provide free training in useful occupations to those with the necessary aptitude. It’s a stupid ideological pretension that people should pay for their own education; the benefit to society outweighs the cost so it’s a common sense investment (provided of course that people are trained in useful things like teaching, medicine, or auto repair, not fripperies like drama and journalism). We also need to reform higher education to make it less wasteful, and lower education to make it effective – but that’s a different topic.
  • Fix the broken healthcare system before it devours us. We don’t need to spend more on healthcare; that only makes the problem worse. We can have better medicine for a lot less money – maybe I’ll explain how sometime.
  • Abolish labor unions. They prevent businesses from having the flexibility they need, and all they accomplish is allowing freeloaders to be grossly overpaid – largely at the expense of real workers and taxpayers. The workers who actually need protection are never unionized.
  • Get rid of regulations like the notorious Americans with Disabilities Act that are supposed to promote social justice. They’re a huge burden and more often abused than not. If the supply of labor is kept proportionate to the demand, workers will have the bargaining power to take care of themselves as they see fit.


    Capitalism was once a great economic system; it would have been perfect 300 years ago (i.e., before it was realized). Capitalism industrialized the world and gave us enormous wealth – including luxuries like education that enables us to find better solutions now that capitalism has died. It’s been gone for eighty years now; with dramatic but simple reforms we can replace its bastard child, the corrupt corporatist quagmire that we facetiously call “free enterprise”, with a functioning, stable economic system that will provide for the needs of all, preserve the environment from the consequences of profligate consumption, remove the economic imperative for continual warfare, and give us a solid foundation for adapting to future change.

Big Oil, Big Lies


“The broad mass of a nation… will more easily fall victim to a big lie than to a small one.”


-Adolf Hitler, Mein Kampf


    If you’ve been awake in the last five years, you probably already know that the production of corn ethanol consumes nearly as much energy as the final product contains (maybe even more, though it’s possible there’s a net energy gain of as much as 30%). That may still seem like a prodigal way of getting energy, but that’s just the beginning of the story. The real costs of ethanol production are much larger than the natural gas wasted to make it.
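

    To make that energy-balance range concrete, here is a tiny Python sketch of what it means as a ratio of energy out to fossil energy in; the output values are hypothetical, chosen only to bracket the range quoted above.

```python
# Net energy gain of a fuel, defined as (energy out - energy in) / energy in.
# The output figures below are hypothetical, chosen to span the range quoted
# in the text: slightly negative, break-even, or up to +30%.

def net_gain(energy_out, energy_in):
    return (energy_out - energy_in) / energy_in

for energy_out in (0.95, 1.0, 1.3):      # energy yielded per 1.0 unit of energy spent
    print(f"out/in = {energy_out:.2f} -> net gain = {net_gain(energy_out, 1.0):+.0%}")
```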


    First, there is the severe impact on the food supply, which has already caused large increases in U.S. food prices and shortages in countries that we used to feed. This is the inevitable result of the law of supply and demand; when you reduce the supply, the price goes up. There are some brazen liars who try to blame the price increase on increased costs of production, but this is ridiculous when you think about it: farmers have no control over the price of their product and cannot pass their costs on to consumers. This has been their major whine for centuries! Agricultural commodities sell in a competitive market and the seller will get exactly what the market gives, not a penny more or less. If the cost of growing crops is more than their value, the farmer just has to take a loss – unlike a manufacturer, he can’t charge higher prices and sell a little less. In agriculture, you sell at the market price or you sell nothing at all.


    Some of the sneakier “bio” fuels crowd want to start in on alternative crops that would supposedly be grown on “marginal” land and not compete with food production. Here’s a fact of life: any crop that will grow on marginal land will grow better on prime land. If there’s better money in growing switchgrass than in growing wheat, farmers won’t waste money developing low-yield marginal land for a crop that has zero non-subsidized value; first they’ll plant switchgrass in their existing fields for a better yield with no investment or risk.


    What about “bio” diesel? The energy balance is a little better than ethanol’s, but it’s still pretty shabby and it still destroys badly needed food supplies. What’s more, diesel – of any kind – is an inherently dirty and polluting fuel. This is just the nature of the diesel engine cycle and cannot be changed. Diesel engines require high compression ratios, which gives them their high fuel efficiency but also means that they produce more nitrogen oxides (which contribute to smog, acid rain, and global warming). Also, because diesel fuel is injected directly into the cylinder and burns before it can fully mix with the air, complete combustion is all but impossible, and thus diesel engines produce soot, especially in cold weather. Diesel engines are sometimes necessary, but they don’t belong on light vehicles like cars.


    Ethanol, though not as bad, is also a dirty fuel – worse than gasoline, though in different ways. Because alcohol burns cooler than gasoline, fewer nitrogen oxides and less carbon monoxide are formed, but instead you get nasty chemical byproducts like formaldehyde. What’s more, the production of ethanol creates much larger amounts of pollution of all kinds – carbon dioxide from fertilizer manufacture, various pollutants from diesel tractors, nitrogen oxides from fertilizer decay, aldehydes and other processing byproducts, and runoff of fertilizers, pesticides, herbicides, and silt.


    But pollution is just the beginning of the eco-catastrophe being wrought by “bio” fuels. The real damage is in the depletion of our precious land and water resources. Most modern cereal production (and certainly any increased production) requires irrigation. Using up precious water for unnecessary crops that contribute very little net energy is incredibly foolhardy; water shortages are a far more serious problem than energy shortages. Irrigation puts a heavy demand on dwindling water supplies, often using up groundwater that is (for all practical purposes) irreplaceable.


    Irrigation also destroys the soil over the long run by salinating it. Any water that’s been in contact with the ground has some salt in it, and inevitably some of this salt is left behind in the soil. In some places irrigation can also bring up additional salt from subsurface soil layers, which can destroy the soil quite rapidly. More often, it is a very long process as only a trace of salt is added each year, but it does add up and some of the best agricultural land in America is already suffering heavy attrition from salination.


    Needless to say, agriculture also depletes the soil of nutrients. We routinely replace nitrogen in mass quantities (which consumes a lot of energy and creates a lot of pollution), but other nutrients are much more problematic. Calcium has to be replaced by quarrying limestone and grinding it up, which is very expensive if the limestone has to be hauled any distance. Phosphorus is also mined (in the form of phosphates); it is already expensive and supply shortages are expected. Potassium is still more difficult to replace – and these are just the major plant nutrients. Iron, sulfur, magnesium, zinc, boron, copper, manganese, selenium and molybdenum are also gradually used up by cropping (and sometimes washed out by irrigation). Just as importantly, tilling causes rapid decay of the vital organic matter that helps keep soil permeable to air and water and able to retain water and nutrients.


    The biggest source of fertility loss, however, is erosion. Modern farming practices have greatly reduced this, but millions of acres of land are still ruined every year in the U.S. alone. Much of the eroded silt winds up in rivers and lakes, where it wreaks havoc on ecosystems. Silt and fertilizer runoff are responsible for the vast dead zone – thousands of square miles – in the Gulf of Mexico where the Mississippi river delivers the waste from most of North America’s farming.


    Every ninety hours, the combined effects of erosion and salination cost the world as much cropland as the entire Chernobyl exclusion zone – that’s a hundred Chernobyls, every single year, caused mostly by agriculture.
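

    The time arithmetic in that claim is easy to check, whatever one thinks of the underlying land-loss estimate:

```python
# Consistency check: one exclusion-zone-sized patch of cropland lost every
# ninety hours works out to roughly a hundred per year.

HOURS_PER_YEAR = 365 * 24        # 8,760
LOSS_INTERVAL_HOURS = 90         # interval quoted in the text

print(round(HOURS_PER_YEAR / LOSS_INTERVAL_HOURS), "exclusion-zone equivalents per year")
```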


    Of course, most of this is for food production, not “bio” fuels – but we should hardly be planning to make large increases in it without a very good reason! And regarding the prospect of growing switchgrass or the like on marginal land, we should keep in mind that this “marginal” land isn’t just waste – most of it is either pasture (now being naturally and sustainably fertilized!) or part of the last remaining wilderness and wildlife refuge in the world. And much of this “marginal” land is very vulnerable to erosion, being relatively steeply sloped. If we destroy it, we won’t get it back.


    Brazil has already ruined a great deal of its “marginal” land (in this case, rain forest) in its misguided quest for temporary energy independence. The production of ethanol from cane sugar in the tropics has a far better energy balance than any “bio” fuel available to the U.S., yet even with this advantage Brazil has needed to raze most of its forests for crops. Unfortunately, the cleared forest soil is low in nutrients and subject to hardening when exposed to rainfall, and is often useless after just a few years. Then the only option is to burn some more rain forest… but the rain forest is starting to run out. Some of this devastation is due to the demand for cattle, not sugar cane – and isn’t it a bitter irony that the same “environmentalists” who would rather see people starve than see forests cut down applaud the destruction of those same forests for the purpose of replacing gasoline with ethanol, which is dirtier and more expensive?


    One of the side effects of clearing more land for crops is a huge increase in atmospheric CO2. Trees tie up large amounts of carbon, and Brazil has burned so much forest that it now has a larger carbon footprint than the U.S. “Bio” fuels in general cause a major increase in global warming (as compared to fossil fuels). In addition to the CO2 released when they are burned, there is all the CO2 released in the process of growing and making them – it takes a lot of energy, remember? Most of that energy comes from natural gas. There is also a substantial amount of nitrous oxide created as a byproduct of fertilization – and nitrous oxide is 300 times more powerful a greenhouse gas than CO2. And then there’s the CO2 released by clearing land, whether that’s done the quick way by burning vegetation or the slow way by tilling the soil so that the organic content decays.


    On an energy-for-energy basis, the burning of ethanol (by itself, without any of the other contributions) releases the same amount of CO2 as gasoline. Only on a liter-for-liter basis does ethanol release less CO2 – the energy content of ethanol is a third less than that of gasoline. It has also been deceptively claimed that “bio” fuels are carbon-neutral because growing them takes the same amount of carbon out of the atmosphere. This is false, because it doesn’t account for the greenhouse gases emitted in the process of making “bio” fuels – and it should be obvious to anyone that the land used for growing crops would not be totally barren if left uncultivated; some of it will actually sequester less carbon once it has been converted from forest into crops. This is certainly true of the “marginal” land that is temporarily cultivated and then abandoned because it can no longer support crops!
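

    The per-liter versus per-unit-of-energy distinction can be seen directly from combustion chemistry; the densities and heating values in the sketch below are standard approximations I am supplying for illustration, with gasoline modeled as octane.

```python
# CO2 per liter and per megajoule for ethanol vs. gasoline, from combustion
# stoichiometry and approximate fuel properties (assumed typical values:
# density in kg/L, lower heating value in MJ/kg).

fuels = {
    # name        density  LHV    kg CO2 per kg fuel (from carbon content)
    "ethanol":   (0.789,   26.8,  2 * 44.0 / 46.07),    # C2H5OH -> 2 CO2
    "gasoline":  (0.74,    43.4,  8 * 44.0 / 114.23),   # C8H18  -> 8 CO2
}

for name, (density, lhv, co2_per_kg) in fuels.items():
    mj_per_litre  = density * lhv
    co2_per_litre = density * co2_per_kg
    co2_per_mj    = co2_per_kg / lhv
    print(f"{name:8s}: {mj_per_litre:4.1f} MJ/L, "
          f"{co2_per_litre:.2f} kg CO2/L, {1000 * co2_per_mj:.0f} g CO2/MJ")
```

    With these figures the two fuels come out nearly identical per megajoule of heat (about 71 grams of CO2), while a liter of ethanol carries roughly a third less energy, which is exactly the distinction drawn above.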


    Anyone who cares about global warming would make it their first priority to put a stop to the use of so-called “bio” fuels. There is no other way of achieving so large a reduction in greenhouse gases so quickly, and there is no advantage whatever to using them. They are catastrophic for the environment, bad for the economy, bad for the poor (who are hurt the most by food shortages), and sometimes bad for your car as well.


    “Bio” fuels have never had any prospect for making a major contribution to our energy supply. There simply isn’t enough land in the U.S.; even Brazil, with its vastly better conditions, doesn’t have enough land. Because they are expensive, “bio” fuels transfer wealth from taxpayers (who are forced to pay for the subsidies) to the giant agribusinesses that produce them. The artificially high food prices also benefit the owners of factory farms at the expense of everyone else. Smaller farmers benefit too, of course, like remoras clinging to a shark, but less than one percent of Americans work full time at farming – and one hundred percent of Americans have to eat.


    Another beneficiary of the “bio” fuels dementia has been the natural gas industry, because the extra demand for natural gas (used to make fertilizer and drive chemical processes) has forced the price up sharply. Again, it is the poor who are most hurt by the higher cost of home heating. And who benefits – why, the same oil companies who own the natural gas wells, and who also own huge tracts of farmland and have partnership deals with the big manufacturers of ethanol!


    If you don’t smell a rat yet, you should cut off your nose and mail it in for a refund. The oil companies are raping us at both ends – they profit from the ethanol itself, from the higher food prices, and from the waste of natural gas – while at the same time greenwashing their own dirty image and diverting public attention from energy sources that might actually compete with oil. They get around the necessity of finding replacements for gasoline additives like MTBE, and as an added bonus, the presence of ethanol allows gas stations to water your gasoline without detection. (If your mileage goes down by more than 2-3% on 10% gasohol, you should suspect adulteration. On the other hand, you should protect the environment by avoiding gasohol if at all possible.)
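

    If you want to sanity-check your own mileage on gasohol, the blend arithmetic is simple enough to sketch. The energy-content values here are assumptions of mine (real-world mileage also depends on the engine and how you drive), so the output is only a rough expectation.

# Rough sketch: expected mileage penalty of an ethanol/gasoline blend, based
# purely on energy content. The MJ/L figures are assumed approximations.

GASOLINE_MJ_PER_L = 32.0
ETHANOL_MJ_PER_L = 21.3

def expected_mileage_drop(ethanol_fraction):
    """Fractional mileage reduction vs. pure gasoline for a given blend."""
    blend = (1 - ethanol_fraction) * GASOLINE_MJ_PER_L + ethanol_fraction * ETHANOL_MJ_PER_L
    return 1 - blend / GASOLINE_MJ_PER_L

for pct in (10, 15, 85):
    print(f"E{pct}: expect roughly a {expected_mileage_drop(pct / 100):.1%} drop in mileage")

# With these assumed figures, E10 works out to roughly a 3% energy penalty;
# a much larger drop is the sort of discrepancy worth questioning.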


    Ethanol is a gigantic swindle perpetrated by Big Oil – along with soy diesel, switchgrass, and any other crop-based “bio” fuels. But what about energy generated from other bio-sources like “cellulosic waste”? Some of these may be acceptable, but remember that everything “bio” is something we take from the living environment – there’s nothing “green” about it; “bio” fuels don’t support life, they burn it. All biological “waste” contains nutrients that could be returned to the soil – nitrogen, phosphorus, trace minerals, organic matter, whatever. Even if we’re presently wasting it, making it into fuel is not necessarily preferable to recycling it. And a certain amount of crop waste should always be left in fields to protect them from erosion – it’s the single most effective defense against the single biggest soil thief.


    The only “bio” fuel that is environmentally positive is biogas – methane generated from the decay of sewage, landfill trash, etc. This gas gets produced anyway, it contains no nutrients, and it’s a very powerful greenhouse gas, so we actually benefit by burning it. However, the available amount is quite small – a well-designed sewage treatment plant might recover enough energy to meet its own needs, but that’s about it. Any other “bio” fuels that are environmentally acceptable will also be strictly limited in quantity. There’s a niche there, but not a very big one.


    Sadly, there’s not much chance of changing the legislative ethanol agenda set by Big Oil. Nor is there much chance of a shift to sustainable agriculture before it’s too late. But most of us still have, at least, the option not to buy their damned moonshine.

Abgrund’s Investment Tips


“Lay not up for yourselves treasures upon earth, where moth and rust doth corrupt, and where thieves break through and steal.”


Matthew 6:19


    If you’re alive and over the age of twelve, you’ve already been lectured on the subject of saving up for retirement. If you’re literate and old enough to work, you’ve been told at least a hundred times how important it is to start your retirement account right away and invest a regular portion of the fruits of your toil in the volatile stock market which, no matter how bad it may look at any one time, is certain to earn huge returns in the long run.


    Like everything else you think you know, this is a combination of distortion, misdirection, and lies.


    Take the stock market, which bounces randomly up and down like your girlfriend’s mood on the rag but (like your girlfriend’s weight) always gains in the long run. Except that it doesn’t. If you bought stock in the Dow Jones companies at the market peak in 1929, you’d have to wait forty years before it got back up to its original real (inflation adjusted) value. That’s a whopping zero percent return for forty years! How long do you plan to work before you retire? Right around forty years, ain’t it?


    Of course, every expert will tell you that an episode like that could never happen again. That’s exactly what the experts said in 1929, too, and before the 1987 crash, and in 1999 right before the bubble burst – no creature on this green Earth is more resistant to learning from experience than an expert in any kind of economics. It’s true that the stock market has had an overall upward trend for the last forty years, but that’s hardly a valid predictor, unless you’re the kind of person who thinks you’re going to live forever because you’ve never died yet, and you expect the next forty years of history to be the same as the last forty years. Anyway, half of the “gains” of the last forty years were just inflation.


    But wait, it doesn’t even get that good – you can’t actually buy stock in the Dow Jones. The companies that make up the average change from time to time, with losers being dropped and strong newcomers added. That means the actual returns from owning stock in these “blue chip” companies will average less than the Dow Jones over time. Most of the “blue chip” stocks will pay some dividends as well, but for some years now these have generally been paltry compared to the price of the stock – and companies have no legal obligation to pay any dividends at all. If that doesn’t sound like a good enough deal yet, here’s an added bonus if you order now: any managed fund you invest in will skim a generous helping off any earnings, with extra charges for any transactions you might want to make. Think you can beat the market by second guessing the pros and hand-picking your own stocks? So did the fools that bought into Enron. Unless you have genuine inside information – not a “tip” from someone who is probably trying to unload his own stock or a glowing report from a fucking magazine – the stock market is like a casino: the odds are stacked against you and there is no way to change that.


    Don’t despair yet, there’s other things to invest in, nice safe things like bonds. Actually there’s tons of things, like money markets for the noncommittal, commodities for the daredevils, and real estate for the swindlers, but none of them are worth a lick to the typical small investor.


    The price of bonds fluctuates and you can speculate in bonds (and get skinned) the same way you can in the stock market, but (unlike stocks) bonds actually have some intrinsic value – they pay interest. Unless the company goes belly-up, of course. Government bonds (of a real government, not some shithole like El Salvador) are reasonably secure, but they pay even less interest. Still, something is better than nothing, right?


    Not if something is less than nothing after inflation. Now, if you go by published figures you can generally count on getting a 2% net yield out of government bonds, which means your investments will double in a mere 35 years – that twenty cents you save now will buy you two packages of ramen noodles for your old age!
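

    The doubling arithmetic, for anyone who wants to check it – the 2% figure is from the paragraph above; the little sketch is mine.

import math

# Years needed to double your money at a constant annual yield.
def years_to_double(annual_yield):
    return math.log(2) / math.log(1 + annual_yield)

print(round(years_to_double(0.02)))   # ~35 years at a 2% net yield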


    Except that it won’t. To start with, the taxes you pay on interest don’t account for inflation. Suppose your marginal rate is 25% (I’m pulling this number out of my ass, but your combined state and federal marginal rate will probably be at least that much), secure bonds pay 5.2%, and inflation is a modest 2.5% – your net yield is really only 1.4%. But that’s probably understating inflation.
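

    Here is the arithmetic behind that 1.4%, as a sketch – the 25%, 5.2%, and 2.5% are the illustrative numbers from the paragraph above, and the simple subtraction of inflation is itself an approximation.

# Net real yield on a bond after taxes and inflation. Note that the tax is
# charged on the full nominal interest, not on the real gain, which is part
# of why the result comes out so small.
def net_real_yield(nominal_yield, marginal_tax, inflation):
    after_tax = nominal_yield * (1 - marginal_tax)
    return after_tax - inflation   # simple approximation of the real yield

print(f"{net_real_yield(0.052, 0.25, 0.025):.1%}")   # 1.4%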


    Even if we don’t have another surge of high inflation like the Seventies, which would totally devalue any long-term bonds you might have, the published consumer price indexes do NOT reflect the actual increase in the cost of living, which is what you’re probably going to be concerned with when (if) you retire. They include a composite of luxuries and necessities that is probably not representative of what you’re going to need when you’re seventy – they especially underweight the cost of medical care, which is going up far faster than inflation. Rent, energy, and food are also going up much faster than the published rate. Maybe you plan to maintain a middle-class level of expenditure and spend most of your retirement income on fancy clothes and toys like new electronics – in which case, if you’re thirty and expect to maintain your current level of expenditure until you’re eighty, retiring at sixty-five, you’ll only need to invest roughly a fifth of your current pre-tax income (or slightly less, if you keep your money tied up where you can’t get it and take the tax shelter, gambling that tax rates won’t go up). That’s a heavy burden, but it’s doable if you don’t have any major problems in 35 years and your medical expenses don’t go up when you get old. Unfortunately, major problems are quite likely and rising medical expenses are almost certain, if you live that long at all.


    That’s a relatively benign scenario, though. The actual increase in cost of living – housing, utilities, food, health care – is unknown (to me at least), but certainly higher than 5%. This means these necessities will be taking a larger and larger share of consumer expenses, and the rate of inflation is going to go up. If you’re only concerned about having enough to live on when you retire, and not having anything extra, the rate of inflation you have to account for is already over 5%. That means that, even without paying taxes on interest, you are getting NO return from any secure investment – more likely, you are losing money on it. If real inflation is 6%, the amount of your pre-tax income you’ll need to save goes from one-fifth to about one-third, with or without tax deferment. For most people, that isn’t even in the realm of possibility.
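

    A crude sketch of where the one-fifth and one-third figures come from. Everything here is an assumption for illustration – spending taken as about 60% of pre-tax income, a fifteen-year retirement, returns during retirement ignored, no Social Security – and only the 35-year horizon comes from the text above.

# Crude model: what fraction of pre-tax income must be saved each year so
# that 35 years of saving covers 15 years of current spending, in real terms?
# All parameters are assumptions for illustration.

def required_savings_fraction(real_return, years_saving=35, years_retired=15,
                              spend_frac_of_income=0.60):
    if real_return == 0:
        annuity_factor = years_saving
    else:
        annuity_factor = ((1 + real_return) ** years_saving - 1) / real_return
    return years_retired * spend_frac_of_income / annuity_factor

print(f"{required_savings_fraction(0.02):.0%}")    # ~18% -- roughly a fifth
print(f"{required_savings_fraction(-0.01):.0%}")   # ~30% -- closer to a third

    A small positive real yield lands near the one-fifth figure; let the real yield slip slightly below zero and the required fraction climbs toward a third, which is exactly the jump described above.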


    If you start investing for retirement fresh out of college, as you are constantly being urged to do (or reminded that you disgracefully failed to do), you aren’t accomplishing as much as you think. In the first place, your money is NOT going to double every ten or fifteen years or whatever lie you’ve been told; most of that hypothetical yield will be eaten by risk and inflation and the “miracle” of compound interest is more like finding a quarter on the floor at the laundromat – it doesn’t hurt, but you’re not likely to get on your knees and thank God for the miracle. Money you invest when you are twenty-five may (if you are lucky) be worth twice the same amount invested at fifty-five, but just as likely it’ll be the same, and it damn sure won’t be worth ten or twenty times as much like some sleazy investment managers will tell you.
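

    To put rough numbers on that, here is what thirty extra years of compounding does at a few real (after-inflation) rates of return – the rates are assumptions of mine, chosen only to bracket the claim.

# Growth factor from 30 extra years of compounding at an assumed real return.
for label, real_return in (("lucky real return (2.4%)", 0.024),
                           ("merely matching inflation (0%)", 0.0),
                           ("the 8% real return the salesman implies", 0.08)):
    print(f"{label:40s} -> {(1 + real_return) ** 30:4.1f}x after 30 years")

# Roughly 2x in the lucky case, 1x if you only keep up with inflation, and
# the ten-times story requires three decades of 8% real returns with no
# bad surprises along the way.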


    In the second place, in real life, it’s unlikely your income will be constant for thirty or forty years. If you’re stuck in the working class, it will probably not keep up with inflation, and you won’t be making enough to invest anyway; if you have kids, you’ll be lucky not to have a net debt when you turn 65. On the other hand, if you make it into management or a profession, your income will probably rise faster than inflation for at least a part of your career. This means that the painful and risky investments of a third of your income that you make in your twenties aren’t likely to make a huge difference to your retirement anyway. If you’re prosperous enough to even have a shot at retirement, you will probably make twice as much real money (after inflation) after the midpoint of your working life as you do before – and if you’ve lived within your means for the first half, you should be able to painlessly save a large part of what you make in the second half.


    Taking a “long view” when you are young also increases your risk of making bad decisions, unless you have some kind of psychic vision of the future. The whole notion of investing for retirement assumes that there will be no major changes in the structure of society, the economy, or the laws before you cash in. It’s easy to look at the last forty years of relative stability and assume that the conditions of 2047 will be much the same as those of 2007, but history says it ain’t necessarily so.


    In 1917, Russia was a backward, agrarian nation, convulsed with civil strife, defeated in war, and partially occupied by Germany. Forty years later, the Soviet juggernaut, a rival superpower to America, led the world into space and had a huge nuclear arsenal and a formidable, modern industrial base. Another forty years, and Russia was again backward, impoverished, and in chaos, no longer a major power. In 1910, government welfare programs were the hare-brained idea of Socialist radicals; forty years later, most of Europe was ruled either by welfare states or outright Communism. Before WWII, periods of inflation were offset by periods of deflation – the price level on the eve of WWII was much the same as the price level during the War of 1812. Forty years after WWII, continuous inflation had become almost universally accepted (if not exactly appreciated).


    It’s difficult enough to account for the possibility of minor changes in conditions, like revisions in tax codes or bankruptcy laws or surges of inflation or a savings and loan collapse or the invention of new excuses for suing people; all of these are likely to happen in forty years. It’s quite impossible to account for the possibility of major changes, like the hyperinflation that destroyed the middle class of Weimar Germany, a major war, or the introduction of radical government policies (or even of a new government). U.S. government bonds are now considered perhaps the most reliable security in the world, yet it’s entirely plausible that out-of-control debt will force the U.S. government to default well before forty years have passed. And don’t assume you can just sell them before that happens – the government is not above restricting your right to buy and sell.


    In some ways, the rate of change in Europe and America since WWII has been slow, but don’t expect this to last forever. The world as a whole is changing rapidly, and someday our institutions will have to adjust. Other nations with different economic models are competing with us – and winning. A major catastrophe like repeated nuclear terrorism, global pandemic disease, or an environmental disaster is at least possible – and even the mere possibility could motivate very drastic reforms. Technology continues to race at an accelerating rate into an unpredictable future. There’s no guarantee that the stock market, the dollar, the United States, or even the money economy will even exist in forty years. Maybe we will enter a new era of prosperity in which no one need want for basic necessities; maybe those who have vast imaginary fortunes in the electronic vaults will lose it all when the System breaks down – maybe both.


    Drastic change isn’t the only risk to hoarded treasure; lawyers and government are bad enough. I don’t know what the laws are (if any) protecting retirement savings, but consider that these can always be changed anyway, and probably will be. There’s no way to ensure that your IRA won’t be confiscated due to some bogus lawsuit, a conniving spouse, or the schemes of greedy politicians. Even now, part of your funds are likely to be indirectly eroded by taxation of your meager “Social Security” check, based on your total income. There’s nothing to stop the government from pilfering as much of it as they please, indirectly or otherwise.


    Perhaps even worse than global catastrophe, civil war, or lawyers, there’s also a very real possibility that you won’t live to retire – or that you’ll be too debilitated to enjoy it, and spend your last years in a hospital, maybe not even remembering or caring that you have money you can’t enjoy. The average lifespan is supposedly going to go up a bit (though I’d take that with a grain of salt), but even if you make it to thirty with an expectancy of eighty you have a substantial chance of shuffling off this mortal coil before sixty-five, or seventy, or seventy-five, or whatever age the government requires.


    This doesn’t necessarily mean that there isn’t anything you can do to prepare for eventual retirement – but the conventional method is, at best, a minefield. The further you are from retirement, the riskier it is: if you’re sixty you’re probably not worried about the possibility of a Communist takeover or even an increase of the retirement age to eighty, but if you’re twenty-five, you should be. Government-regulated accounts like the 401k are particularly risky because the government knows you have the money and has unlimited power to change the rules (which may have something to do with why the government is so anxious for you to have one). Market investments in general tend to be weighted against you because as a small investor you have very little knowledge and are gambling against people who have a great deal of knowledge and perhaps even some influence over the outcome. This is particularly true of the stock market, which has become a gigantic pyramid scheme in which people buy “securities” at far above their reasonable value intending to sell them later at an even more outrageous price – sooner or later, the suckers run out and the whole house of cards collapses. You might get lucky, but the odds don’t run that direction. The real yields you can expect from conventional investments are little or none, or less. Hoarding cash is a guaranteed loss – even four percent minus inflation will always beat zero percent minus inflation. Fortunately, there are better (if more difficult) options.


    The most important thing you can do is stay out of debt. It’s amazing how many people think they are doing something for their future by putting token funds into a 401k while carrying tens of thousands of dollars in credit card debt at twenty or thirty percent vigorish. There is absolutely no sane reason ever to purchase anything with a credit card – if you must have one to build your credit, you can use it to pay bills. If there’s something really important that you have to borrow for, like a house or a car or an education, get a bank loan. If you can’t, then live without. The bank doesn’t think you can handle the loan at ten percent, and they are probably right; how the fuck are you going to handle it at twenty-five percent? Answer: you won’t. You’ll wind up paying several times the original principal to the loan sharks and still owing more than you borrowed. If you have money to invest, you have money to pay off debt. Consider that when you pay down debt you are effectively getting a guaranteed return on your money of whatever the interest rate is, whereas if you invest it you are getting a smaller return and/or taking some risk as well. Even a home mortgage at a choice rate like six or seven percent is a better yield than anything you will find on the market. You should pay that mortgage down before even thinking of investing – just make sure you have a clear title, and for God’s sake don’t be a fool and get married so some gold-digging bitch can steal the home you paid for. The only reason to get married is if you are the gold-digging bitch.
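

    The debt-versus-investment comparison in numbers, as a sketch – the interest rates here are assumptions for illustration, not quotes of anything.

# One year of putting spare cash toward debt vs. investing it.
def one_year_comparison(extra_cash, debt_rate, invest_rate):
    interest_avoided = extra_cash * debt_rate     # guaranteed, tax-free
    hoped_for_gain = extra_cash * invest_rate     # taxable, and at risk
    return interest_avoided, hoped_for_gain

avoided, hoped = one_year_comparison(1000.0, debt_rate=0.22, invest_rate=0.05)
print(f"Pay down the card: ${avoided:.0f} of interest avoided, guaranteed")
print(f"Invest instead:    ${hoped:.0f} of gains, before tax, if nothing goes wrong")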


    One thing you can invest in that can’t be stolen is education. It’s highly risky; you have to pick a field that actually pays something, and gamble that you’ll be able to finish a degree and find work in that field. There’s also the chance that your chosen specialty will become obsolete while you still need to work and you’ll wind up being an over-educated file clerk. But if you stay away from the liberal arts and work hard in school, there’s a good chance that you will get an excellent return. A bachelor’s degree in a marketable area is estimated to double your lifetime earnings, and if you’re not smart enough for a technical specialty, you can get a business degree. If you don’t have the money or time for college, there are endless other opportunities. In the past (though not necessarily the future) truck driving was a high-paying field which one could get into with a very modest investment of time and money, and there’s plenty of other things if you’re too lazy for that.


    Of course, even more so with education than with other investments, you’re gambling that there won’t be any radical societal changes that would make your qualifications irrelevant. But at least if you’re well educated you’ll be better prepared to deal with that sort of thing, even if you have a pansy-ass liberal arts degree.


    If you can get the right job, you can still get a reasonable guarantee of a good retirement. Mostly these days that means a government job; there are still a few jobs in the private sector with good retirement packages but in most cases there is a substantial likelihood you will lose out: the company may go under or they may fire you the day before your retirement plan is vested or find some other way to cheat you out of all or part of it. Mostly, company retirement plans just give you some stock in proportion to your investment, which may be a good deal or quite worthless depending on the company. Don’t let a paltry amount of stock in a dubious corporation (sure they’re dubious, didn’t they hire you?) lure you into starting a 401k that will only tie up your money for decades without giving you a real return – and don’t leave their stock as a large share of your account. Putting all your eggs in one basket is a good way to magnify your risk, and the company you work for doesn’t have any special advantage. Probably just the opposite, unless you quit soon.


    There’s one kind of major investment that’s usually worth making, provided you aren’t married: a home. The cost is usually about the same as renting – in fact landlords often set rent equal to their mortgage payment plus property tax and insurance, so that they have no expenses while you buy the property for them. For the price of the down payment, you can accumulate equity for yourself instead of for the parasitic landlord, and eventually escape the crushing burden of housing expense – or most of it, at least; you’ll always have to give the government its pound of flesh for the privilege of owning what you’ve paid for. Just be careful what you buy – real estate sellers are worse than used car dealers, and it’s worth a few thousand dollars of investigation to avoid being cheated out of a hundred thousand.
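

    For what the equity claim looks like in numbers, here is a toy amortization sketch. The price, down payment, rate, and term are invented for illustration, and appreciation, property tax, maintenance, and closing costs are all ignored.

# Toy amortization: equity built up (down payment plus principal repaid)
# after some years of a standard fixed-rate mortgage. Assumed numbers only.
def equity_after(price, down, annual_rate, years_total, years_elapsed):
    principal = price - down
    r = annual_rate / 12
    n = years_total * 12
    payment = principal * r / (1 - (1 + r) ** -n)
    balance = principal
    for _ in range(years_elapsed * 12):
        balance -= payment - balance * r   # principal portion of each payment
    return down + (principal - balance)

print(f"${equity_after(150_000, 15_000, 0.065, 30, 10):,.0f} of equity after 10 years")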


    One thing you shouldn’t do is buy and sell real estate as an investment without knowing what the hell you are doing. Property is in some ways better than monetary instruments – its value will go up right along with inflation, unlike the value of a bond, and it can’t become worthless overnight, like stock (unless you fail to insure it) – but even more than other markets, real estate is a great place for the non-expert to get skinned alive by the expert. Even going into the landlord business for yourself isn’t likely to be a cakewalk and you can wind up losing.


    Which reminds me what I meant to say in the first place: like everything else, investment requires work to succeed. Simply having money isn’t enough, no matter how hard you worked for it; if you want to turn some money into more money you can’t just write a check and wait. Or rather, you can, but only dumb luck will help you then. If you want to get any reasonable surety of real returns, you have to work at investing. If you just give your money to someone else to manage, you can expect them to bend you over like a woman at a service garage. If you manage it yourself, following interest rates and stock prices and news is NOT good enough, because there are plenty of other people who do that and some who do more (maybe illegally, but that won’t help you). The better a form of investment (potentially) is, the more knowledge and attention and outright work will be required to make it profitable.


    If you loan someone money to start a business, you can expect to lose it if they fail, but if they succeed you won’t get one dime more than they can possibly avoid giving you. If you want something done right, you have to do it yourself, and if you want to get the benefits of someone else’s work you have to have some leverage over them and some knowledge of what they’re doing. A successful business is a great investment, if you own it – so why would you give someone else money for a business, taking the risk, without having ownership or any involvement? Because involvement is work, and you’re hoping to get the benefits without doing the work. That’s exactly what you do when you invest money in a market you don’t understand or an enterprise you don’t control – you take the risk but give up the responsibility.


    At this point (unless you’ve decided that I have no clue what I’m talking about, or you’re already rich) you may be feeling less than confident about your chances of a comfortable early retirement. Well, don’t sweat it – you are not legally required to quit working at any age.


    It’s almost a religion of our culture that the goal of life is to cease productive work (or at the very least, compensated work) at approximately the same age that a person would formerly have been too old and infirm for the heaviest agricultural labor. Of course, in those days, the few people lucky enough to live that long didn’t loaf around watching television or even go on Branson vacations; they did whatever useful work they could do, for as long as they lived. Modern Americans are far more likely to reach old age, and probably more likely to remain able-bodied, but we have an irrational horror of having to keep working until we die or reach total incapacity – to the extent that many of us will work brutally hard for the largest part of our lives in the hope of having ten years of complete idleness at the end. Yet most people in their seventies are still capable of making a living, even if they can no longer harness a team of oxen. Why should it be so important to have abundant leisure time at the end of life, as opposed to the greater part between childhood and old age?


    The most unorthodox solution to the problem of saving for retirement is simply not to do it – not necessarily to not save at all, but not to save with retirement and ten or fifteen years of idleness as the goal. Instead, if leisure is the true reward of work, it’s possible to take it in installments, working less exhaustively in exchange for working a bit longer. All you really have to do is to be qualified for work you can still do when the body is weak; if the mind fails, you won’t be able to work at all, but you won’t know the difference, either.


    This approach has advantages: there is less accumulated wealth to attract thieves or be eaten by inflation, no risk of losing all the fruits of your labor before you can enjoy them (by dying young), and no sudden stressful transition from full-time worker to full-time idler. It may not be quite respectable for an American, but what the hell – having one’s own ideas is never respectable.

Misery is a Virtue


“Give strong drink unto him that is ready to perish, and wine unto those that be of heavy hearts. Let him drink, and forget his poverty, and remember his misery no more.”


Proverbs 31:6-7


    There’s this popular pseudo-Eastern philopsychical notion that the Way to Happiness is to lower your standards. In the East, this is called Letting Go Of Attachment/Desire or some such, and typically involves living naked in a cave and eating nettles and birdshit. In the West, we say “Be content with what you have”, or some such trite homily, and the idea is to shut up and tolerate whatever abuse and exploitation are imposed on you by bitch Fortune, your neighbors, or the government. You don’t get to live in a cave, though – that wouldn’t get any taxes paid.


    Think you should work hard to make a better life? Forget it, you’ll never be happy, don’t be so shallow. Rather get rich by inventing or creating something than stay poor slaving for someone else? Well, it’s okay to chase your dreams, provided your happiness isn’t riding on the outcome. Tired of struggling just to survive while working sixty hours a week and paying half your income in taxes, while sleazeball lawyers “work” six hours a week and pay lower taxes than you on their $400,000 a year incomes? Shut the fuck up and be content with what you have. Dying of cancer because you couldn’t afford routine screening, while some seventy year old whore gets her face lifted for the third time in a futile attempt to get her husband to quit staring at twenty year olds? Don’t worry, be happy.


    If watching the poor get poorer while the rich get more arrogant, seeing corporations steal with impunity and dodge taxes, watching judges scoff at the law while murderers are set free and honest citizens face draconian fines for the pettiest of infractions, being deprived of any meaningful political choice while one corrupt regime after another squanders the resources of the present and future – if these things should make you a little uneasy, there’s something wrong with you. You’re not unhappy because you’re getting the shaft and the world is going to hell, you’re unhappy because there’s a chemical “imbalance” in your brain. If your head was on straight, you’d be happy regardless of how badly you’re treated, and if not, you need to be drugged until you accept your lot in life.


    Way back when, Aldous Huxley anticipated much of modern culture. In his (then) futuristic novel “Brave New World”, he predicted a society where the populace is kept relaxed and docile by the ubiquitous ingestion of a drug called soma. Instead of just one such drug, we have a hatful of them – paxil, zoloft, valium, xanax, librium, prozac, wellbutrin, elavil, ambien, the list goes on and on… if one doesn’t work, there’s always another one to try. We even have herbal tranquilizers for those who don’t want to pay for patent medicine, Ritalin for nonconforming children, marijuana for nonconformist adults, and much worse things for those who like a bit of adventure along with their escape.


    Our government is pleased to have as many citizens sedated and tranquilized as possible; they even spend our money to encourage any of us who might still be unhappy to get ourselves medicated, and to persuade us that Depression Is An Illness and it’s abnormal to be unhappy. Do the politicians really care if we’re unhappy? If they did, maybe they would stop screwing us over.


    The real function of antidepressants is to keep resentment under control. It’s okay for the public to be annoyed at the currently dominant political party, but if too many people feel miserable and hopeless under the two-party plutocracy, they might get out of hand and actually demand real change. That would never do.


    People who are not unhappy don’t go to protests or organize third parties. They don’t riot or arm themselves in militias and they don’t resist force with force. Would popular pressure have forced FDR to reform the labor laws if the unemployed had been sitting in their shacks being mellow instead of going to Socialist rallies and marching on Washington? Maybe not. Would the Vietnam War have been cut short without urban riots and unruly demonstrations? Not likely. Would the American colonists have revolted against George III if they’d had Prozac in the medicine cabinet to keep them calm? Hell no.


    Not that there’s any conspiracy. I have nothing against plausible conspiracy theories – there’s nothing in life more predictable than that people conspire – but any conspiracy that requires a large number of people with dubious integrity to keep a secret for a long time is bullshit. Actually it’s very unlikely that anyone in the drug companies or in the various government agencies that encourage drug use has ever given a moment’s thought to the sociopolitical implications of “treating” discontent with sedatives – they’re just trying to maximize profits and justify budgets, respectively. The system works because we, as a culture, have given up on the idea of taking responsibility for solving problems – both in our personal lives and in society in general.


    Most people just don’t see anything suspicious or inappropriate about using drugs to deal with unhappiness. They might wish that their personal circumstances would improve, but they see the task as too overwhelming, too risky, or just plain hopeless. This might very well be true; in a country with declining standards of living for most inhabitants and rapidly multiplying government restrictions, it can be quite difficult to get anywhere. Legal political action is worthless, the two-party plutocracy having long since become utterly nonresponsive to public needs. The only realistic recourse for the American people, as a whole, is armed revolt, but that’s a course of action for the angry and the desperate; it’s not a course of action for the drugged and placid. One must be unhappy to be inspired to struggle for change, and doubly unhappy to purposefully put one’s life in danger for it.


    Revolution, and all other kinds of progress, are driven by unhappiness. People who are satisfied with the status quo aren’t going to bust their nuts or go out on a limb to achieve anything better. Every invention, every business enterprise, every accumulation of capital, every reform in government and religion, every major human accomplishment, has been the work of people who weren’t happy with the labor they had to do, the amount of wealth they had, or the way they were being treated – and did something about it, instead of taking drugs to make them feel better.


    For the past three thousand years or so, the dominant religions of the Orient have advocated giving up the desire for improvement, as the best way to achieve happiness. This is not dissimilar to what tranquilizers do – give up the discontentment, the struggle for more, and accept whatever you’re stuck with. It is, in truth, a better way to be happy. It’s easier, quicker, more reliable, and more lasting. To actually change one’s circumstances is generally hard, patient work, and uncertain at best; moreover, most people do indeed find that when (if) they have gotten what they thought they wanted, they are not satisfied with it for long. One of the few people I’ve ever met who seemed genuinely happy was a homeless vagrant, who wandered the world free of all obligations. But that attitude doesn’t favor progress.


    While Easterners have (perhaps) lived and died in greater contentment, it’s restless, displeased Westerners that have built modern civilization – nearly all the innovations, in both technology and society, for the past two millennia have been Western. It was men unhappy with what they had, men driven to seek for more, who explored the globe, settled and cultivated the New World, harnessed the power of coal and steam, broke the ancient bonds of despotism and slavery, and, with all of our wars and exploitations and other missteps, created a world of miracles and abundance, where food, water, literacy, and even electric power can be taken for granted and premature death is the exception, not the rule.


    If we’re not happy with it, we should at least be thankful to all the generations of malcontents before us, who gave us our world, that we don’t have to live in theirs.

Education is Class Warfare


“The exclusive privileges of corporations, statutes of apprenticeship, and all those laws which restrain, in particular employments, the competition to a smaller number than might otherwise go into them… are a sort of enlarged monopolies, and may frequently, for ages together, and in whole classes of employments, keep up the market price of particular commodities above the natural price…”


-Adam Smith, The Wealth of Nations


“The history of all hitherto existing society is the history of class struggles.”


-Karl Marx, The Communist Manifesto


    Everyone knows that education is the key to success, right? You have to have a good education to get a decent job to make enough money to pay for the education and hang on to your own little slice of The Middle Class so you can go to the doctor when you are sick and live indoors when you are too old to work and pay some shyster to bribe a judge to give your jackass kids community service when they get caught driving drunk with a carful of weed. It’s the American Dream.


    So it’s your responsibility if you’re poor, because you didn’t get an education. Shame on you, everyone has opportunity and it’s up to you to grab it. All you have to do is have parents with an income in the top quartile who can pay for it, or be a star athlete or a certified genius with an obsession for writing essays and kissing ass. These are, after all, necessary prerequisites without which you could never be qualified to listen to some sick fool whining for five minutes and then write him a prescription for amoxicillin, to shout down a class of unruly sixth graders, or interview drug addicts at the welfare office. You don’t deserve to have a decent job because you didn’t try hard enough – you could have worked two full time jobs at once, one to keep you alive and one to pay for your tuition, while going to school full time and still making passing grades. Dozens of people do it every year, some of them without using meth.


    If you think this system sucks, you’re not alone. If you think the answer is for the government to pour more and more money into education, you’re still not alone. You’re also a fool.


    Here is what happens when government puts more money into education: the price goes up. The number of people who get degrees remains about the same, because there is still the same number of schools, the same classrooms, the same teachers. When more money comes from the Federal government, state governments make up the difference by reducing their own contribution; i.e., they raise tuition. Every time Federal aid goes up, tuition increases by the same amount. Of course, the Federal money is mostly loans, so the students not only pay more for college, they have ten years’ worth of debt just for a bachelor’s. If by some bizarre chance the state fails to cut education funding, the extra will invariably be devoured by administration salaries, idiot projects like having classes on a Caribbean cruise (I’m not making that up), new sports facilities, or “renovations” that suddenly become necessary.


    The last damn thing they will ever do is to actually educate more students – to build more classrooms, hire more teachers, start new schools. When a university gets too crowded, they are about a hundred times more likely to raise fees than they are to expand. And why is this?


    Because “education” isn’t about creating skilled workers who can do the things that are in demand. Well, maybe about 20% of it is. The other 80% is about making sure there aren’t enough skilled workers to meet the demand. Universities are a social institution to maintain class differentiation.


    If everyone who had the talent and desire to be a doctor was able to afford the ten years of training, would doctors be making a quarter million dollars a year? (Hint: NO.) The cost of health care (in this country at least) is rising at a rate that would embarrass many unstable third world dictatorships whose exchange rates have to be given in scientific notation, but it damn sure isn’t because of the quality. It’s because doctors can charge whatever they please. How do they get away with charging hundreds of dollars for a five minute “visit” during which they check your insurance status and then recommend some five thousand dollar screening test for whatever is the least likely cause of your symptoms?


    Education, my friend. Price is controlled by supply and demand; if you can limit the supply, you can control the price. The decade of training has far more to do with limiting the quantity of doctors than ensuring the quality. If everyone who was able and willing to be a physician could be, physicians would no longer be an elite, super-wealthy class. There would be enough doctors to go around, and people could pick and choose the best or the cheapest instead of being assigned one by a crooked HMO that pays the doctor extra if he keeps you from getting any treatment.
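

    The supply-and-demand point can be made with a toy model. The demand curve below is invented purely for illustration; the only thing it is meant to show is that cutting the number of sellers on a downward-sloping demand curve raises the price, with no change in quality.

# Toy demand curve: price buyers will pay when a given quantity of service
# is available. The coefficients are invented for illustration.
def market_price(quantity_supplied):
    return max(0.0, 300.0 - 0.002 * quantity_supplied)

for providers in (100_000, 50_000):
    print(f"{providers:7,d} providers -> going rate ~${market_price(providers):.0f} per visit")

# Halve the number of people allowed to practice and, on this toy curve,
# the going rate doubles.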


    If you’ve ever looked at the requirements for admission to a medical school, you already know that a large part of that education is pure bogus. A typical requirement, for instance (I’m not making this up, either), is to already have three years of college classes. Which classes? Doesn’t even matter! Half of them aren’t specified at all, and most of the rest are “this or that” alternatives: meaning that neither alternative is actually necessary (if they were, they’d both be required). Of the remaining handful, almost none have any relevance to the practice of medicine. Chemistry? What the hell for? How often does your doctor synthesize a new medicine for you? Does he really need to know how to calculate how many joules of heat will be released in your stomach if you swallow 37.5 grams of crystal soda lye? He’s not even going to do your lab tests.


    Here’s the funny part: Even though they have to study chemistry that they will never use, doctors are not required to have any formal training in pharmacology. They get this information – probably the most heavily used knowledge they have – from reference books, periodicals, and advertisements by drug companies. Instead of three expensive years of mainly irrelevant learning, wouldn’t it make more sense to have pre-med students spend just one year studying, oh, I don’t know – physiology, sickness and treatment?


    But shorter training (or more schools) would mean more doctors, and that wouldn’t serve the real function of medical “education”, which is to maintain the position of a wealthy and exclusive class.


    Lest anyone think I’m only talking about doctors, I should point out that all professions use the same method of exclusion. Lawyers have at least as much wasted quasi-training as doctors, and they make (i.e., extort) even more money while the net gain to society of having them is decidedly negative (doctors at least do some good on the whole, and we’ll need them to implement my plan of turning all lawyers into organ donors).


    Nearly every job with decent pay or any security requires a college degree. Often it doesn’t even matter what the degree is! If you have a “college education”, you’re eligible for many lower middle class jobs, such as management, that don’t actually require any skills beyond high school level. Why do employers insist that you have a college degree?


    Class solidarity is why. If you’ve bought your college degree, they know that you have an investment in their class. You’re a Responsible Person and can be trusted to share their values, act predictably, and uphold the system. There might be a hundred working class drones with high school diplomas who are better able to do the job, but such an inferior person, with different tastes, different values, and different manners, would never be trusted, might offend his respectable coworkers, would probably suck at golf, might steal the toilet paper. America has a diverse middle class, but one thing they nearly all have in common is a college education and the sense of superiority that comes with it. It’s this country’s foremost class distinction.


    Engineering is another profession where “education” is in large part a matter of buying entry to a privileged class. Engineers, with only five years of quasi-training, make a lot less money than doctors or lawyers, but the principle is the same. A glance at the curriculum for a certain college shows that a third of the classes are at best totally unnecessary, and often sublimely absurd (like art and philosophy courses for math geeks who will spend their careers figuring out ways to minimize the cost of making dishwashers and running power plants). Another third are important only to certain sub-disciplines, and a good part of the rest are of dubious value (Differential Equations, for instance; at one time a necessity, but these days computers do all that.)
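

    To illustrate the “computers do all that” remark: here is the kind of thing a few lines of code now grind out numerically. The equation (dx/dt = -x) and the step size are my own choices, picked only because the numerical answer can be checked against the exact one.

import math

# Fixed-step Euler integration of dx/dt = -x from t = 0 to t = t_end.
def integrate(x0, t_end, dt=1e-4):
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (-x)   # dx = f(x) * dt with f(x) = -x
        t += dt
    return x

numeric = integrate(1.0, 2.0)
exact = math.exp(-2.0)
print(f"numeric {numeric:.5f}  vs  exact {exact:.5f}")   # agree to about four decimals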


    Yet in spite of all this padding, there is virtually no training in the tools that engineers will actually use on the job – specifically, software. After five years, many of the basic skills have yet to be acquired! Getting a degree is not so much a matter of learning to be an engineer as it is of preparing to learn on the job and of purchasing the right to do so.


    Some people will try to tell you that all this superfluous “education” has to do with making the student a more “rounded” or “broader” person. Well, that’s a load of horseshit. No one really cares if an engineer can quote Shakespeare; they want buildings that don’t fall down when the wind blows. Does it matter to you whether your doctor spends her leisure time reading Heidegger as opposed to bowling? How many hundreds of thousands of dollars are you willing to part with to know that she’s a “rounded” person?


    Anyway, real personal depth does not come from slogging through pointless mandatory classes – that just makes the victim hate the subject. Depth comes from actually spending time living life. Skill in a profession also comes mostly from real world experience, not the classroom. Irrelevant education is not only a waste of money and a barrier to entry, it delays the beginning of actual practical experience. Seriously, if you had your pick, would you want the surgeon who, after twelve years of learning about everything from Russian history to neurology, is about to perform his first real heart surgery on you – or would you prefer one who had three years of training practicing different procedures on cadavers and nine years of experience doing them on live patients?


    A lot of what we call “education” is not only giving us fewer and more expensive professionals, it’s actually making them less competent.


    Every society has its way of maintaining economic class barriers; in the Middle Ages, prosperity (for a commoner) could only come from buying one’s way into a trade guild and enduring an apprenticeship of many years in order to earn the right and (supposedly) acquire the arcane skill needed to tan hides or hammer horseshoes. Then, as now, it was a crock of shit.


    So what should be done about it? Well, obviously, nothing will be done. The majority who would benefit from a change have no political power, and the privileged few who benefit most from the status quo certainly do. But we should at least realize what is going on. “Putting more money into education” is one of the top slogans of every backstabber in Washington, because few people object to it and those that do are only concerned with the immediate cost.


    Yet everyone who cares about education should oppose increased aid to students – in fact they should oppose any increase in funding that does not go directly into increasing the capacity to educate students. That means more teachers and more schools, NOT more money spent on the same facilities we have now. Giving more money to students is the worst thing you can do for them (and I say this as a college student): it only drives up the prices, resulting in them graduating with more debt and raising the class barrier ever higher.


    If we, as a society, were actually capable of reform (we’re not), or interested in remaining an economic superpower (we’re not), we’d not only make education available to more people, we’d make education actually fit its ostensible goal. We’d radically revise the current requirements for entry into the various professions, and replace this twelfth century crap, where “education” means learning a little bit of everything, with specialized training in useful skills and quicker introduction to practical experience. The world needs doctors more than dilettantes.


    Students should be able to choose for themselves whether they want to learn about (for instance) literature or geography before attempting to design an electrical circuit. Sick people should be able to decide whether they need to pay five times as much to receive care from a doctor who has a deep grasp of the relation between Picasso and Existentialism.


    And the professions should be open to all those with talent, not just those who can buy their way past artificial roadblocks.

AIDS Heresy

“Convictions are more dangerous enemies of truth than lies.”

-Friedrich Wilhelm Nietzsche, Human, All Too Human


    For more than twenty years, Americans have been taught that HIV is the cause of AIDS, that AIDS is invariably fatal, that HIV and AIDS are sexually transmitted, and that there is a tremendous pandemic of AIDS in sub-Saharan Africa. A multi-billion dollar industry and a global civic movement have been based on these beliefs – yet they remain unproven, even dubious. Considerable evidence indicates that AIDS actually represents different disease processes, at least some of which are not necessarily fatal, that HIV is not the sole cause of AIDS, that neither HIV nor AIDS is sexually transmitted in any significant degree, and that the phenomenon called “AIDS” in Africa may have little to do either with HIV or immune deficiency.

    One of the problems with the HIV theory (which asserts that HIV is the only cause of AIDS) has always been that there is no known mechanism by which the virus can cause immune deficiency. After several years of fruitless searching for this mechanism, the U.S. government “solved” the problem by announcing that the mechanism had been “discovered”. What had been “discovered”, however, was merely what had been known all along – the ordinary means by which any virus kills cells (runaway virus production crowds out the cell’s normal processes). HIV does kill T-cells (white blood cells which are critical to the body’s immune response) in this way; however (as was also known all along), it simply does not kill enough of them to make a difference. The virus has great difficulty even entering T-cells, and is found in only a small fraction of them, even in advanced AIDS patients. HIV can be cultured in vitro in cultures of human T-cells without ever harming them!

    While the official “truth” is still being taught to the public, the search for an actual mechanism goes on. Several theories have been proposed, for instance that HIV somehow triggers a migration of T-cells from the blood to the lymph glands, or that it interferes with T-cell reproduction, among others. None of these mechanisms has so far been demonstrated, despite decades of research.

    It has been standard practice for more than a hundred years in medical research to insist that before a pathogen may be definitely assumed to be the cause of a disease, it must satisfy Koch’s postulates. These are the four postulates:

  • The putative disease organism must be found in all persons (or animals) with the disease, but not in healthy persons.
  • It should be possible to isolate and culture the organism from any diseased person.
  • Injecting this culture into a disease-free person should produce the disease.
  • The disease organism should be re-isolated from this person.

    HIV has not satisfied these postulates; around 5% of AIDS cases do not involve HIV (an annoying fact which the Centers for Disease Control evaded in 1989 by simply renaming them), while around 15% of HIV-infected persons who receive no treatment against the virus never get AIDS (a fact which the CDC dealt with in 1993 by re-defining AIDS to include healthy persons). HIV cannot be cultured in isolation because, like any virus, it needs a host cell to reproduce. The injection test cannot be performed on humans; no one is likely to volunteer for it. (Normally animals are used, but viruses are often harmful only to a narrow range of host species.) A very few persons have been accidentally exposed through needlesticks, but of course none of these exposures involved a pure virus culture – dirty needles could be contaminated with almost anything. HIV has been shown to be harmful to some primates, but does not cause AIDS in them – and it is harmless to some other primates.
    Obviously, it is not very reasonable to expect HIV to satisfy Koch’s postulates, which were invented before the discovery of viruses and are not really applicable to viral disease. The postulates also do not allow for long latency periods, which are known to occur in diseases other than AIDS. Another issue is that AIDS, which is properly defined as a syndrome, has no distinctive set of symptoms, so that its diagnosis is somewhat arbitrary – it is almost inevitable that some people will fit the symptoms of AIDS without having HIV.
    Skeptics of the HIV theory have often pointed out that HIV does not satisfy Koch’s postulates. Government officials could easily counter this by showing that it is irrelevant, and citing other evidence that HIV causes AIDS, but instead they have chosen merely to announce frequently, forcefully – and falsely – that HIV has satisfied Koch’s postulates. One reason for their doing so may be that the evidence linking HIV to AIDS is not entirely convincing.

    Most of the evidence that HIV causes AIDS revolves around correlations or chronological associations between HIV infection and AIDS symptoms. Correlation, however, does not prove causation; skeptics have argued that HIV infection and AIDS are both attributable to other factors, especially the abuse of intravenous or inhaled recreational drugs. Also, part of the correlation undoubtedly arises from the fact that HIV-negative persons are less likely to be monitored for rare diseases or tested for T-cell levels, while HIV-positive persons are usually affected by toxic anti-retroviral therapy and subject to high levels of stress. Chronological observations are also suspect, for similar reasons. The fact that some people eventually got a rare opportunistic disease, many years after being infected with HIV, does not prove that HIV was the cause.
    There is, on the other hand, ample clinical evidence indicating that HIV can be harmful to the immune system in at least some people (especially children). AIDS skeptics (including the retrovirologist Peter Duesberg) generally claim that HIV is a harmless “passenger” virus – a belief that is untenable in the face of the evidence. “Harmful” is not, however, the same thing as “lethal”. There has never been any good reason to regard HIV infection, in itself, as being necessarily fatal.

    So how did it come to be so regarded? The answer is that HIV was prematurely identified with AIDS, a poorly understood syndrome that was never adequately researched. In the early years of AIDS, it was associated exclusively with homosexuals, and later with intravenous drug abusers and hemophiliacs. The homosexual victims were almost entirely highly promiscuous individuals who were frequently exposed to sexually transmitted diseases; many of them made excessive use of antibiotics as a form of prophylaxis – and antibiotics tend to suppress the body’s immune system. Nearly all of these men were also regular users of “poppers”, amyl or butyl nitrite used as an inhalant by some male homosexuals. Nitrites are highly toxic and damage the immune system. The drugs and diseases characteristic of intravenous drug abusers – such as heroin and hepatitis – are also immunosuppressive. Hemophiliacs are exposed to a multitude of diseases from the blood supply, which was at the time very poorly monitored, and tend to have weak immune systems as a consequence of their illness. It is no great surprise that some members of these risk groups experienced deterioration and failure of the immune system – especially those infected with yet another immunity-degrading virus, HIV. But for political reasons – because the first risk group identified was homosexuals – “lifestyle” factors were not considered; blaming the syndrome on promiscuity and drug abuse was considered homophobic, and the assumption was made from the beginning that a single infectious agent was responsible.
    In these early groups of AIDS patients, death was surely a great likelihood – the victims were very unhealthy and vulnerable to begin with. Those few who recovered were later reclassified as not having AIDS – how could they have it, when AIDS was “known” to be incurable? No one knows for sure how many of these patients actually had HIV; the virus was unknown at the time. Possibly most of them had HIV, but definitely not all. In the first two studies of HIV – on the basis of which the government announced that the cause of AIDS had been discovered – fewer than half of the AIDS patients involved tested positive for HIV exposure!

    A few years after the discovery of HIV, drugs began to be introduced for its “treatment”. These were substances that could kill retroviruses, but were so toxic that they would never have been authorized if not for the general public hysteria and the assumption that all of the AIDS patients would soon die anyway. The first of these drugs to be approved, AZT (which had been rejected years before as a chemotherapy agent because of its terrible side effects), was entirely capable of killing a healthy person who took it regularly for years. AIDS patients, unhealthy to begin with, did just that. Many of them were so adversely affected that they were unable to continue the drug, but some (about one in three) showed a temporary remission of symptoms – possibly due to the fact that the drug killed off various concurrent infections from which the patients were suffering, as well as temporarily suppressing HIV activity. In any case, the benefits were only short term, and all of the patients died. While some people have lived with HIV for decades, no one has ever survived high-dose AZT therapy for more than five years.
    Based on the short-term remission effect and the belief that anyone who developed AIDS was doomed, the manufacturer of AIDS sought and received permission to sell AZT as a prophylactic against AIDS. Hundreds of thousands of otherwise healthy HIV-positive persons were induced to take 1200 milligrams of AZT every day, hoping that this would delay the “inevitable” onset of AIDS. It did not, and the death rate from “AIDS” soared. Most of these people were still in one of the original risk categories, and many of them probably had other serious infections as well (such as hepatitis C), but they did not have AIDS, and in the absence of AZT treatment it is possible that some of them would have survived and had normal life spans. There has never been any proof that HIV alone, in the absence other health problems, is lethal.
    Part of the politics of AIDS was the insistence that there would be a heterosexually spread epidemic. This never occurred, but huge numbers of people outside the risk groups were tested, and a few were found to be infected. These unfortunates, though otherwise healthy, took AZT and subsequently died.

    In 1993, the standard dosage of AZT was reduced from 1200 to 600 milligrams. The death rate from AIDS soon tapered off; when AZT was partly replaced by the less harmful protease inhibitors, the death rate plummeted. The fact that modern “AIDS” patients have a much better prognosis than those of twenty or more years ago has been touted as proof that HIV causes AIDS and that the anti-retroviral drugs are effective against AIDS, but this ignores a crucial circumstance: the AIDS patients of the early years had little resemblance to those of today. The former were already desperately ill when diagnosed; nearly all were chronic drug abusers, and most if not all suffered from other complicating factors, such as repeated hepatitis and syphilis infections, malnutrition, and overuse of antibiotics. Modern patients, by contrast, are more likely to have contracted HIV from an isolated instance of drug injection, and to be reasonably healthy when they are diagnosed. Medical treatment in general, including that of specific AIDS diseases such as Pneumocystic Carinii Pneumonia, has also advanced. It is hardly surprising that modern patients live longer.

    Outside of sub-Saharan Africa, there have thus been two principal groups of AIDS victims: those who suffered critical immune failure due to various (often multiple) factors, and those who suffered long-term health degradation primarily due to treatment with toxic drugs, but HIV has likely been a contributing factor for most members of both groups. “AIDS” in Africa is a wholly different phenomenon, which may have little to do with HIV or immune failure.

    Non-African AIDS is very different from African AIDS in several ways. The relationship between HIV and AIDS in Africa is unknown, because virtually none of the victims have ever been tested for HIV exposure. Estimates of the prevalence of HIV infection in Africa are largely speculative; testing is inadequate and random samples of the population are impossible. Demographic data are poor; many sub-Saharan countries do not even keep records of births and deaths. The situation is further complicated by the fact that foreign assistance often depends on the perceived AIDS threat; governments have a powerful incentive to exaggerate the number of AIDS cases and deaths that they report. The diagnostic criteria used in Africa are quite different from those used elsewhere; instead of testing for HIV exposure and low T-cell counts, only the presence of chronic symptoms is considered, and these are different symptoms from those associated with AIDS in the rest of the world. In Africa, a chronic cough combined with diarrhea is diagnosed as AIDS – even though tuberculosis and dysentery are quite common throughout most of the sub-Saharan region. In America, the same patient would undergo medical testing and might be treated and cured without AIDS ever having been suspected. In Africa, the patient is doomed.

    It is interesting to note that when an African is found (usually by some Western-sponsored test) to be HIV-positive, he or she typically sickens and dies within a year. In America, a person newly diagnosed with HIV can expect an average of about ten years before experiencing any symptoms – with or without treatment. In the West, such exotic diseases as Pneumocystic Carinii Pneumonia, toxoplasmosis, or disseminated Mycobacterium Avium Complex are considered characteristic of AIDS; in Africa, common diseases such as tuberculosis and dysentery are AIDS indicators. Nearly all African AIDS victims experience rapid weight loss, but this symptom occurs in a minority of Western patients. The lack of drug treatment in Africa cannot explain these differences – Westerners typically experience a long incubation period with or without treatment, and the drugs are more likely to cause weight loss than to prevent it.

    The most significant difference between African and non-African AIDS is, however, its distribution; AIDS in sub-Saharan Africa is distributed equally between men and women and does not appear to have a strong preference for drug users. This is a plausible distribution for a sexually transmitted or otherwise contagious disease, but AIDS outside Africa does not fit this distribution at all. The distribution of HIV outside Africa is totally inconsistent with a sexual mode of transmission – after decades of warnings of an impending heterosexual pandemic, HIV outside Africa remains confined mostly to homosexuals and intravenous drug injectors, with male victims outnumbering females by three or four to one (and far more in many countries). This is not due to more use of condoms in the West, or less promiscuity; other sexually transmitted diseases, such as herpes and chlamydia, were (and still are) spreading rapidly at the same time that new HIV infections were declining. These sexually transmitted diseases, unlike HIV, affect males and females in equal proportion.
    In fact, there is very little reason to believe that HIV can be transmitted by normal sex, although it may be slightly transmissible by rough anal sex. While official statistics claim that many people are heterosexually infected, these claims are dubious in the extreme. They are based only on what the patients themselves report; actual investigation of even the most cursory nature is virtually unknown. Patients generally avoid admitting to behavior that might be disapproved or illegal, preferring to admit to some more acceptable cause. It is instructive that the proportion of reportedly homosexually transmitted HIV is very small among countries where homosexuals are strongly despised, while the proportion of transmission by intravenous drug use drops to zero in countries where such is severely punished. In some countries, all of the reported adult AIDS cases in 1997 were listed as “heterosexual” in origin – yet they are virtually all male! In addition, the wives of hemophiliacs (most hemophiliacs contracted HIV during the Eighties) have no higher rate of HIV infection than the general population, even though many of their husbands were infected before they were even known to be at risk. It is reasonable to believe that all or most of the alleged cases of heterosexually transmitted HIV were actually transmitted by other means.

    The “success” of programs directed against sexual transmission in controlling the spread of HIV has been cited as evidence that sexual transmission occurs, but these “successes” do not bear close examination. Senegal, for instance, is often touted as such a case; yet the rates of HIV prevalence in Senegal have always been comparable to those of its neighbors, both before and after the start of the control program. The same is true of Thailand, another oft-cited case. In Uganda, a very sharp decline did occur – but it began two years before the initiation of the condom-promoting program “So Strong So Smooth”. South Africa’s President Mbeki was much vilified for refusing to take the official AIDS theory at face value, and was blamed for South Africa’s high rate of HIV infection – yet that rate has never differed much from that of South Africa’s neighbors.

    African AIDS appears to be a different disease from non-African AIDS; it is spread by different means, has a different course and symptoms, and may be unrelated to HIV. Some have excused the differences as being due to a different strain of the virus (HIV2), but most of sub-Saharan Africa, including the most severely affected areas, harbor the same virus strain (HIV1) as the rest of the world. It is more likely that HIV is being erroneously blamed for the consequences of widespread malnutrition, malaria, tuberculosis, syphilis, and other problems. The swift demise of Africans who are diagnosed with AIDS may be attributed to psychosomatic and social causes: already ill, they are often vigorously ostracized, even by their own families, and the expectation of certain death is stressful and can have a dramatic effect on health, as illustrated by the efficacy of voodoo magic against those who believe in it. African “AIDS” victims are also sometimes denied medical care, on the assumption that others can better benefit from it.

    It may be that HIV is a complicating disease factor that has been there along, undetected. HIV is at least ten times as common among persons of Bantu as non-Bantu ancestry, both globally and in the United States; except for Southeast Asia, the global distribution of HIV follows the distribution of Bantu peoples surprisingly closely. Retroviruses generally can only be transmitted by blood-to-blood contact; lacking such modern innovations as hypodermics and organ transplants, they can only be transmitted from mother to child. If HIV is less deadly than heretofore believed, it may have existed in Bantu populations for many generations, and the supposed recent increase in its prevalence in Africa may reflect better testing, different statistical methods, competition among African countries for foreign assistance, or even false-positive test results caused by some other disease that is spreading through Africa (tuberculosis, flu, herpes, hepatitis, and malaria are among the illnesses that can sometimes cause false positive tests for HIV exposure).
    Official theories for why HIV spreads so much more quickly and indiscriminately in Africa than elsewhere revolve around the poverty and generally poor health of Africans, which is supposed to make infection easier, but non-African countries of equal poverty have experienced no HIV epidemic even remotely comparable. Whatever is happening in Africa that we call “AIDS” cannot be explained by the conventional HIV theory.

    Many alternatives have been suggested to the HIV theory of AIDS; one of the best-known theories blames “poppers” (nitrite inhalants), but this is only plausible for the earliest group of AIDS patients. A second theory, generally coexisting with the first, is that AZT itself is sufficient to cause AIDS. This theory is also weak, however; studies have failed to reveal any substantial difference in long-term survival, either better or worse, associated with AZT treatment for those already ill. Patients who have stopped taking the drug have on average fared no better than those who continued. Drug abuse in general, especially of injective drugs, is a likely cause of AIDS, but cannot explain all cases. Other possible causes include multiple chronic infections, tertiary syphilis (which may be undetectable in some patients), hepatitis C, or some other infectious agent. Yet the most likely explanation is that there is no single explanation; AIDS is a syndrome, and has manifested quite differently in different times, places, and persons. HIV does not appear to be the sole cause of AIDS, though it may well be the principal factor in most present cases of non-African AIDS.

    Investigation of alternative etiologies of AIDS might prove more fruitful than continued concentration on HIV exclusively. This is especially true for Africa, where “AIDS” is poorly defined, poorly understood, and may have little connection to HIV. Prevention programs have focused on safe sex, which is a complete waste of money – not only is HIV not sexually transmitted, these programs have not impacted the spread of genuine sexually transmitted diseases. By far the best way to prevent the spread of HIV (and certain other diseases such as Hepatitis C) is to provide clean needles to drug abusers, but this has been difficult for political reasons.
    Unfortunately, major change in existing policies seems unlikely. The principal medical agency of the U.S. government, the Center for Disease Control, has persistently refused to acknowledge the possible existence of flaws in the conventional theory. AIDS has become intensely politicized, and “fighting AIDS” has acquired the status of a universal symbol for compassion, tolerance, and internationalism. Skeptics are attacked not on the basis of evidence or ideas, but on the moral grounds of perceived opposition to this symbol. The conventional theory also has deep economic and academic roots – tens of thousands of jobs and tens of billions of dollars are tied to AIDS research, AIDS relief, AIDS treatment, AIDS advocacy, AIDS drugs, AIDS prevention, and other AIDS industries. It is no great wonder that the status quo resists interference. The last quarter century of Herculean expenditures has failed to defeat AIDS; the next quarter century will fail as well.