What punch cards teach us about AI risk

I (finally) read Edwin Black’s IBM and the Holocaust, and I can’t recommend it strongly enough. This book had been in my queue for years, and I put it off for the same reason you have probably put it off: we don’t like to confront difficult things. But the book is superlative: not only is it fascinating and well-researched, but given the current level of anxiety about the consequences of technological development, it feels especially timely. Black makes clear in his preface that IBM did not cause the Holocaust (unequivocally, the Holocaust would have happened without IBM), but he also makes clear in the book that information management was essential to every aspect of the Nazi war machine – and that that information management was made possible through IBM equipment and (especially) its punch cards.

I knew little of computing before the stored-program computer, and two aspects of the punch card systems of this era surprised me. First, to ensure correct operation in these most mechanical of systems, the punch cards themselves had to be very precisely composed, manufactured, and handled – and the manufacturing process itself was difficult to replicate. Second, punch cards of this era were essentially single-use items: once a punch card had been through a calculation, it had to be scrapped. Given that IBM was the sole maker of punch cards for its machines, this may sound like an early example of the razor blade model, but it was in fact even more lucrative: IBM didn’t sell the machines at a discount because it didn’t sell the machines at all – it rented them. This was an outrageously profitable business model, and a reflection of the most dominant trait of its CEO, Thomas J. Watson: devotion to profit over all else.

In the Nazis, Watson saw a business partner to advance that profit – and they saw in him an American advocate for appeasement, with Hitler awarding Watson the regime’s highest civilian medal in 1937. (In this regard, the Nazis themselves didn’t understand that Watson cared only about profit: unlike other American Nazi sympathizers, Watson would support an American war effort if he saw profit in it – and he publicly returned the medal after the invasion of Holland in 1940, when public support of the Nazis had become a clear commercial liability.) A particularly revealing moment with respect to Watson’s disposition came in September 1939 (after the invasion of Poland!), when IBM’s German subsidiary (known at the time as Dehomag) made the case to him that the IBM 405 alphabetizers owned by IBM’s Austrian entity in annexed Austria now belonged to the German entity to lease as it pleased. These particular alphabetizers were important: the 405 was an order-of-magnitude improvement over the IBM 601 – and it was not broadly found in Europe. Watson resisted handing over the Austrian 405s, though not over any point of principle but rather out of avarice: in exchange for the 405s, he demanded (as he had throughout the late 1930s) complete ownership of IBM’s German subsidiary rather than the mere 90% that IBM controlled. The German subsidiary refused the demand; Watson ultimately relented – and the machines effectively became enlisted as German weapons of war.

IBM has made the case that it did not know how its machines were used to effect the Holocaust, but this is hard to believe given Watson’s level of micromanagement of the German subsidiary through Switzerland during the war: IBM knew which machines were where (and knew, for example, that concentration camps all had ample sorters and tabulators), to the point that the company was able to retrieve them all after the war – along with the profits that the machines had earned.

This all has much to teach us about the present day with respect to the true risks of technology. Technology serves as a force multiplier on humanity, for good and for ill. The most horrific human act – genocide – requires organization and communication, two problems for which we have long developed technological solutions. Whether it was punch cards and tabulators in the Holocaust, radio transmission in the Rwandan genocide, or Facebook in the Rohingya genocide, technology has sadly been used as an essential tool for our absolute worst. It may be tempting to blame the technology itself, but that in fact absolves the humans at the helm. Should we have stymied the development of tabulators and sorters in the 1920s and 1930s? No, of course not. Nor, for that matter, should Rwanda have been deprived of radio or Myanmar of social media. But this is not to say that we should ignore technology’s role, either: the UN erred in not destroying the radio transmission capabilities in Rwanda; Facebook erred by willfully ignoring the growing anti-Rohingya violence; and IBM emphatically erred by being willing to supply the Nazis in the name of its own profits.

To bring this into the present day: as I relayed in my recent Monktoberfest talk, the fears of AI autonomously destroying humanity are worse than nonsense, because they distract us from the very real possibilities of how AI may be abused. To allow ourselves to even contemplate a prohibition of the development of certain kinds of computer programs is to delude ourselves into thinking that the problem is a technical problem rather than a human one. Worse, the very absurdity of prohibition has itself created a reactionary movement in the so-called “effective accelerationists” who, like some AI equivalent of rolling coal, refuse to contemplate any negative ramifications of technological development whatsoever. This, too, is grievously wrong, and we need look no further than IBM’s involvement in the Holocaust to see the peril of absolute adherence to technology-based profit.

So what course to chart with respect to the (real, human) risks of AI? We should consider another important fact of IBM’s involvement with the Nazis: IBM itself skirted the law. Some of the most interesting findings in Black’s book are from the US Treasury Department’s 1943 investigation into IBM’s collusion with Hitler. The investigator – Harold Carter – had plenty of evidence that IBM was violating the Trading with the Enemy Act, but Watson had also so thoroughly supported the Allied war effort that he was unassailable within the US. We already have regulatory regimes with respect to safety: you can’t just obtain fissile material or make a bioweapon – it doesn’t matter whether ChatGPT told you to do it or not. We should be unafraid to enforce existing laws. Believing that (say) Uber was wrong to illegally put its self-driving cars on the street does not make one a “decel” or whatever – it makes one a believer in the rule of law in a democratic society. That this sounds radical – that one might believe in a democracy that creates laws, affords companies economic freedom within those laws, and enforces those laws against companies that choose to violate them – says much about our divisive times.

And all of this brings us to the broadest lesson of IBM and the Holocaust: technological development is by its nature new – a lurch into the unknown and unexplored – but as I have discovered over and over again in my career, history has much to teach us. Even though the specifics of the technologies we work on may be without precedent, the humanity they serve to advance endures across generations; those who fret about the future would be well advised to learn from the past!