Los Alamos Chess

265 points
keid 7 days ago

I knew one of the authors, Mark Wells. After he retired from Los Alamos National Laboratory he moved, temporarily, to Las Cruces, NM, to be head of the CS department at New Mexico State University, so became my boss as a student employee in the department. I don't recall his authorship of the chess program being mentioned by any of the profs or grad students (it was a very small department). When I moved to Los Alamos many years later I discovered that he lived just down the street--they'd kept their house in Los Alamos while in Las Cruces and had returned (for the skiing, he said).

The first article linked below has some details about the chess program not in the Wikipedia article.

Chess was his (serious) hobby; his research was in programming language design, specifically concerning type theory.

chemeril 7 days ago

I'd keep a house for the skiing at Pajarito too! For those reading along, it's certainly worth the trip if you're in the area.

I have a vague recollection of coming across a physical 6x6 chessboard somewhere on lab property and finding it a little odd, but I never knew it had ties to MANIAC. Lots of history floating around in that place.

QuantumSeed 7 days ago

I was at NMSU during his time as head of the CS department. His interest in chess inspired me to write my first chess-playing program.

jhbadger 7 days ago

I'm not sure anyone actually plays this as a chess variant, but it is fun to write a chess player for it, mirroring the 1956 version. I remember reading about this in the 1980s and wrote a version in BASIC that could beat me every time (maybe not that impressive objectively, but it impressed me).

huytersd 7 days ago

I hate sudoku variant videos that rope me in with some impossible scenario and then explain a totally different set of rules.

alhirzel 7 days ago

Then you will hate this:

I love it though, his reactions are priceless.

huytersd 7 days ago

It was this exact person that made me hate it. Oh look an empty puzzle, except you can’t have consecutive numbers, the 2s can’t be in the corners and every alternate cell is the sum of the previous two. It’s stupid.

smrq 7 days ago
lanternfish 7 days ago
passwordoops 7 days ago

For some reason your comment made me think of the classic Simpsons bit, "When I stamp your foot and say 'Hello Mister Thompson' you say hi back." Funny how the human mind works

dvh 7 days ago

Kruskal (author of the kruskal algorithm) won.

edit: pardon, wrong Kruskal

Joseph Kruskal had two notable brothers, Martin David Kruskal, co-inventor of solitons, and William Kruskal, who developed the Kruskal–Wallis one-way analysis of variance. One of Joseph Kruskal's nephews is notable computer scientist and professor Clyde Kruskal.

pavel_lishin 7 days ago

And apparently his mother popularized origami in America!

mturmon 7 days ago

Amazing stuff!

And origami is not unrelated to (hexa)flexagons, which are a perennial HN favorite and also a math/physics fascination.

thriftwy 7 days ago

This one seems to be the author of Kruskal–Szekeres coordinates.

Which are great at challenging commonly held beliefs about black holes.

frankfrank13 7 days ago

Found an example to play online with a bot!

procgen 7 days ago

I'd love to play against the original algorithm.

Stevvo 6 days ago

Doesn't follow the rules; enemy pawn promoted to queen upon reaching other side of board.

davidw 7 days ago

> The computer still needed about 20 minutes between moves.

Wow. Coding and debugging that is... a whole other level of difficulty compared to coding these days.

pacaro 7 days ago

My mother describes laying out the program flow on index cards (or recycled punch cards depending on what was available) on the living room floor and walking through the program/room with a notebook tracking the program state.

To her that was debugging

This was mid to late 60s

davidw 7 days ago

That's pretty cool! You should get her to write it up so it's not forgotten.

huytersd 6 days ago

The programs were so simple back then with almost no abstraction. I can’t even imagine laying out punch cards for a basic web dev project now. It might cover the earth or something.

tombert 7 days ago

Every time I read about how slow really old computers were, I say an internal "thank you" to Bell Labs for the transistor. I'll occasionally see a YouTube video about vacuum tube or relay based computers, and they'll show their operations on the order of 3-10 operations per second. And here I am writing this comment with something whose operations are on the order of billions per second.

kragen 7 days ago

lumping vacuum tubes together with relays is a fundamental misunderstanding, and this misunderstanding has led you to make a statement that is false by about three or four orders of magnitude

electromagnetic relays can switch at up to a few kilohertz, while some vacuum tubes can switch at up to a few gigahertz—billions of cycles per second—though most of them can only handle signals up in the megahertz. in world war ii, before the first computer¹, people were already using vacuum tubes to produce, amplify, and decode multi-gigahertz radio signals. typically such microwave tubes only have a bandwidth of an octave or less; they don't work down to dc with the same transition times, a problem that also afflicts transistors

the ibm 709 was a vacuum-tube computer delivered in 01958 which was capable of 42000 arithmetic operations per second. this is literally ten thousand times faster than the '3–10 operations per second' you remember from the youtube videos you have occasionally seen. the 709 wasn't the fastest vacuum-tube computer; the enormous an/fsq-7 reached 75000 instructions per second. and, as i understand it, in the 709, the limiting factor wasn't the vacuum tubes, which could switch a great deal faster than that, but the core memory—the only part of the entire machine that used any transistors

the an/fsq-7 contained sixty thousand tubes. i haven't been able to find a citation for the total number of tubes in the 709. says its arithmetic unit contained 2000 tubes and 14500 diodes. the cathode heating filaments in the tubes used in many contemporary computers were about six watts each, and the whole machine used about a hundred kilowatts, so it couldn't have had more than about fifteen or twenty thousand tubes in it; my guess is that it had most of that number

a variety of schemes for computing at microwave frequencies were considered in the 01950s, but not adopted, in large part because of the absence of sufficiently large memories that could work at those speeds. the cycle time for core memory remained in the microsecond range from its invention until intel finally displaced it with sram in the 70s

unless you are very young, you have probably seen a cathode-ray tube television set, perhaps even a cathode-ray tube computer monitor. you may be unaware that cathode-ray tubes are a kind of vacuum tube, and that ntsc broadcast is about 5 million pels per second, so that single vacuum tube is successfully processing a signal of a few megahertz—down to dc, since it can successfully display either a solid white screen or a solid black screen. knowing this, it should now be inconceivable to you that vacuum-tube computers could have been limited to 3–10 operations per second. but there's more; the monochrome version of the ntsc standard is from 01941, and not just the data transfer from the electron-gun signal onto the screen but also the radio-frequency demodulation of that signal was being done by vacuum tubes, and not in military or scientific equipment, but in mass-market home appliances!

(525 lines by about 700 pels per line at 30 frames per second would give you about 11 million pels per second, implying frequencies up to about 5.5 megahertz, but actual ntsc video is bandlimited to about 4.2 megahertz, so the horizontal definition is quite a bit poorer than the vertical)

tubes weren't the only game in town, either. the square-loop ferrite used for core memory was also capable of solid-state digital logic, which won't surprise you if you know how core memory works. it was in fact commonly used for digital logic throughout the 01960s, especially in the soviet union, usually in combination with semiconductor diodes, which had been in wide use for radio electronics since the 01920s, long before the transistor. (bose's original 60-gigahertz submillimeter radio experiments in 01895 used semiconductor diodes but it took quite a while to simplify the apparatus to the point that it completely displaced coherers and electrolytic detectors.) as i understand it, in such 'ferrite–diode' systems, the ferrite provided logical inversion, state (memory), and amplification, while diodes provided all the combinational logic, just as diodes did in some vacuum-tube computers like the lgp-30 (3800 additions per second on 113 vacuum tubes: a twentieth of the performance of the an/fsq-7 for a 500th of the number of tubes). what eventually killed ferrite–diode systems, as i understand it, was the same performance ceiling that killed core memory

but many older atx power supplies still contain magnetic amplifiers, so-called 'magamps', which don't require square-loop ferrite. magamps were also used in the guidance systems for the wwii v-2 rocket

other digital logic technologies that have been used include pneumatic, fluidic, and neon-tube logic, but all of these are much slower than vacuum tubes, transistors, and even the magnetic logic types described above

computer architecture has vastly improved since the 01950s. if you were to build a computer out of 5200 vacuum tubes and 18000 diodes (like the 01958 univac ii, which also included 1200 transistors) you could surely implement a bit-parallel rv32iczmmul with dual-tube flip-flops for all the register bits and a very small instruction cache (say, 16 16-bit compressed instructions, 256 bits stored in 512 tubes) and run at several million instructions per second as long as you didn't have to touch core, instead of the 6000 that the univac ii did. but instruction caches hadn't been invented yet, univac ii was bit-serial and bcd, and the cdc 6600 was years in the future

so we've gotten about three orders of magnitude speedup from architectural improvements, quite aside from switching elements going faster. but early transistors weren't any faster than contemporary vacuum tubes; in fact, most discrete transistors still can't handle microwave frequencies. the main thing that is responsible for the transition from kiloflops on the desktop² to teraflops is not transistors but miniaturization. in the 01930s, people were already miniaturizing vacuum tubes to 'acorn tubes' to get them to run successfully at gigahertz frequencies. transistors are more amenable to miniaturization than vacuum tubes or relays, it's true, but nanometer-sized electrostatic relays would only be about an order of magnitude slower

as i mentioned above, wikipedia says that the univac i and a lot of other early vacuum tube computers used the not-at-all-miniaturized sylvania 7ak7 tube; as i read the datasheet at it has an input capacitance of 12 picofarads, passes about 35 milliamps when fully on, and turns all the way off when you bring the control grid to -11 volts. so if you had one tube like this driving the control grids of two other tubes, i think you'd get a transition time of about 8 nanoseconds (60 megahertz) if the tubes were the part of the circuit that limited the speed. but i don't understand how vacuum-tube logic circuits work, so i might be off by an order of magnitude here

corrections on any of this would be most welcome


¹ possibly someone will post a correction to this based on the fact that there were things called 'computers' before the zuse z4, the johnniac, the pilot ace, and the eniac's transition to programmability. yes, there were, but those things were not the things we call 'computers' today, which are machines capable of turing-complete computation up to their memory limits. volume 2 of the first edition of the oxford english dictionary, published in 01893, contains a definition for 'computer', one which was later extended to all kinds of machines used for calculating; the copy i scanned is at but the things we call 'computers' today did not exist at that time or at the beginning of world war ii. zuse's z3 from 01941 was discovered to be in some theoretical sense turing-complete decades later

² or under it, in the case of the lgp-30, which was the desk

Cheer2171 6 days ago

ok I believe you

kragen 6 days ago


kevindamm 7 days ago

...and that's not all! With access to our modern network you effectively have Teraflops at your fingertips, possibly even Petaflops if you know what you're doing.

ForHackernews 7 days ago

Thank god, because I can `npm install` in mere minutes and run two, possibly even three Electron apps concurrently.

FredPret 7 days ago

He said petaflops, not exaflops

moffkalast 7 days ago

> This program was written at Los Alamos Scientific Laboratory by Paul Stein and Mark Wells for the MANIAC I computer

I see the names of computers have gone significantly downhill since the 60s.

kijin 7 days ago

> Metropolis chose the name MANIAC in the hope of stopping the rash of silly acronyms for machine names [1]

Looks like even MANIAC was deliberately restrained to make it sound less maniacal than the names of computers that came before it. Can't beat Manchester Baby, Colossus, The WITCH, SWAC and MADDIDA.

Being Los Alamos, they really should have named it ATOMIAC or something.


moffkalast 7 days ago

ATOMIAC is a great option, but if you look at today's super computers it's all Sierra, Selene, Summit, etc. Lame ass corporate names.

I'm at least glad my university lab named their main GPU server The Crushinator.

nxobject 7 days ago

I miss adding "AC" to the end of computer names to emphasize that you indeed have an "automatic computer". ("MACbook Pro" doesn't count, I swear!)

tibbetts 7 days ago

I dunno, the new Microsoft / OpenAI computer is named “Stargate,” that’s a pretty solid name.

CrociDB 7 days ago

> The computer played three games. The first was played against itself. The second one was against a strong human player, who played without a queen. The human player won. In the third game, MANIAC I played against a laboratory assistant who had been taught the rules of chess in the preceding week specifically for the game. The computer won, marking the first time that a computer had beaten a human player in a chess-like game

I think this paragraph is kinda funny.

tibbetts 7 days ago

I was disappointed that the lab assistant is not named.

ryukoposting 7 days ago

This looks like fun. I like chess variants that use smaller boards like this. They offer much of the same high-level strategic thinking, but a much shorter play time and a lower barrier to entry.

My favorite one is Onitama, it's actually quite different from chess but the same intuitions apply.

jamager 6 days ago

Onitama is brilliant. For me the best along with Hive and Santorini.

QuesnayJr 7 days ago

If anyone wants to try playing this, it's one of the variants supported by Fairy Stockfish.

JoeDaDude 6 days ago

Not to be nitpicky (well, maybe a little)...

El Ajedrecista [1], a chess machine designed by Spanish engineer Leonardo Torres Quevedo in 1912 and later improved in 1915, defeated chess Grandmaster Savielly Tartakower in 1951 [2], a few years earlier than the unnamed lab assistant's loss to Los Alamos Chess. This would make Tartakower the first human player defeated by a machine in a chess-like game. The lab assistant's loss must have occurred no earlier than 1956.

To be fair, El Ajedrecista played a very limited game, only capable of playing a white king and rook against a black king moved by a human opponent. Still, I think it qualifies as a "chess-like game".

As a matter of casual trivia, El Ajedrecista inspired Claude Shannon, the founder of information theory, to build his own chess machine [3], which he named Caissac (after the goddess of chess Caissa [4]) in 1950.





keid 6 days ago

Just to add some details, the Los Alamos Chess people acknowledged that theirs was not the first such computer program; Alan Turing had previously created one. Presumably Turing's program didn't ever beat a human, though.

redbell 7 days ago

> The reduction of the board size and the number of pieces from standard chess was due to the very limited capacity of computers at the time.

I wonder what the authors might have achieved with access to modern computers that are billions of times faster than their machines.

> The computer played three games. The first was played against itself. The second one was against a strong human player who played without a queen. The human player won.

Again, I wonder what they would think seeing AlphaGo beating Lee Sedol [1],[2]



polynomial 7 days ago

Why did they lose the bishops rather than knights? Seems like if you were simplifying the game for early computers to learn, that's the move (remove the one piece that moves weird).

kqr 7 days ago

Bishops have a worst-case branching factor of 9, while knights have only 8. And the difference is even larger on average; a bishop at b2 can move to 7 squares, while a knight on the same square can move to just 4.

(This can be thought of dynamically as well. If a bishop moves one space closer to a corner, it loses three potential moves, but gains one move in the opposite corner. If a knight moves closer to a corner, it just loses moves, it never gains them.)
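These mobility figures are easy to check by brute force. A quick sketch (my own, not from the thread) that counts moves for a lone bishop and a lone knight on an empty 6x6 board:

```python
N = 6  # Los Alamos board

def bishop_moves(r, c):
    """Number of moves for a lone bishop on an empty N x N board."""
    count = 0
    for dr, dc in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
        rr, cc = r + dr, c + dc
        while 0 <= rr < N and 0 <= cc < N:  # slide until the edge
            count += 1
            rr, cc = rr + dr, cc + dc
    return count

def knight_moves(r, c):
    """Number of moves for a lone knight on an empty N x N board."""
    jumps = ((1, 2), (2, 1), (-1, 2), (-2, 1),
             (1, -2), (2, -1), (-1, -2), (-2, -1))
    return sum(0 <= r + dr < N and 0 <= c + dc < N for dr, dc in jumps)

print(max(bishop_moves(r, c) for r in range(N) for c in range(N)))  # 9
print(max(knight_moves(r, c) for r in range(N) for c in range(N)))  # 8
print(bishop_moves(1, 1), knight_moves(1, 1))  # b2 (row 1, col 1): 7 4
```

The maxima occur on the four central squares, which is where the bishop's extra reach shows up.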

polynomial 23 hours ago

I like the knight on the edge analogy.

Bishops control up to 13 squares, knights up to 8. The math maths.

Unless you ask ChatGPT, which will tell you bishops do better in closed positions than knights (before going on to say (correctly) that bishops do better in open positions, without acknowledging or seeming to "notice" that it is flatly contradicting itself).

OscarCunningham 7 days ago

Has this been solved?

bongodongobob 7 days ago

No, we only have endgames solved up to 7 pieces. 24 pieces is waaaay too big to solve.

tbrake 7 days ago

24 pieces on an 8x8 is a nightmare, sure.

However, other smaller chess versions (e.g. 5x5 w/ 20 pieces) are at least weakly solved and given the piece/pawn/move restrictions of this 6x6 variant, I wouldn't be surprised if a weak solution here is possible as well.

Sesse__ 7 days ago

We can do a rough estimate.

There are (_very_) roughly half as many squares in 6x6 as there are in 8x8. This means that (again very roughly) each piece you add multiplies the position count by only half as much as it would on the full board. Now look at the best tablebases currently available (Syzygy):

Up to 5 pieces: 1 GB
Up to 6 pieces: 151 GB
Up to 7 pieces: 17277 GB

So, let's assume the factor of 140 per extra piece; of course this doesn't hold perfectly, but still:

Up to 8 pieces: 2418 TB
Up to 9 pieces: 338 PB
Up to 10 pieces: 47320 PB

Now divide that last number by our 2^10 bonus. That leaves 46PB for 10-piece tablebases. We were asked for 24-piece. This is unlikely to go down well.

Of course, there will be some small additional bonus for the fact that fewer positions are possible (easier to be in check), double-pawn-push doesn't apply, no en passant or castling to worry about, and fewer piece types. Still, it honestly doesn't look that good. :-)
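The extrapolation above is just repeated multiplication; a throwaway sketch of the arithmetic (the factor of 140 per extra piece is the comment's assumption, bracketed by the observed ratios 151/1 and 17277/151):

```python
size_gb = 17277.0   # known size of the 7-piece Syzygy tablebases, in GB
factor = 140        # assumed growth per added piece

for _ in range(3):  # extrapolate from 7 pieces up to 10
    size_gb *= factor

# The 6x6 board roughly halves each piece's placement count,
# hence the 2^10 discount for a 10-piece tablebase:
print(f"10-piece estimate: ~{size_gb / 1e6:,.0f} PB")            # ~47,408 PB
print(f"after the 2^10 bonus: ~{size_gb / 1e6 / 2**10:.0f} PB")  # ~46 PB
```

This lands within rounding of the comment's 47320 PB and 46 PB figures; carrying it on to 24 pieces adds another fourteen factors of ~70, which makes the conclusion obvious.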

thethirdone 7 days ago
tbrake 7 days ago
zone411 7 days ago
bongodongobob 7 days ago

I would be. At a minimum you've got 16 possible moves on each of the first couple of moves. There's no way.

once_inc 7 days ago

How did I go through 4 decades of life without knowing about the en-passant capture? I've never heard of it before...

vasco 7 days ago

It's a common and fun way to make a noob angry that you're breaking the rules before you google it in front of them. I've had tough times "proving" it before Google though!

alberto_ol 7 days ago

When I was in high school I won a game in a tournament because I hoped that my opponent didn't know the rule. Otherwise it would have been a draw.

vasco 6 days ago

Also had that happen in a school tournament when I was young! Nowadays, over the board I try to force it in the first game with a new person to see if they know it, and if they don't take my piece, I'll bring it up to prevent it being a problem later :)

QuesnayJr 6 days ago

While it has been part of the standard chess rules for over a century, it's common for people who learn to play as kids with their family never to learn it. I didn't learn it either, and only found out about it when I started playing chess programs online.

This is common enough that "Google en passant" is a meme in some circles.

ChilledTonic 7 days ago

I'm actually very interested in picking this up as a variant to play regularly; the smaller board size and reduced piece count would (theoretically) make for faster pickup play.

prometheus76 7 days ago

5 X 5 is also really fun. It's the queen side basically, with a row of pawns in front. So there's only one row of open spaces between the two sides. It really stretches my brain to play it, but has helped me get better at calculation for sure.

listenlight 7 days ago

Has anyone here played losers chess? Some of the top players on ICC back in the day played losers…

michaelcampbell 7 days ago

Looks interesting to a total onlooker of chess. Do the smaller board and reduced rules make the game any easier?

ryukoposting 7 days ago

It makes the game go a lot faster, that's for sure. It lowers the barrier to entry by a lot, while still requiring the same modes of thinking.

There are a ton of chess-like games out there that have small boards and play times of 10-20 minutes. IMO as an everyday thing to do with friends, they're a lot more fun than chess.

mxwsn 7 days ago

Any suggestions or favorites?

ryukoposting 7 days ago

Onitama for sure:

It's played on a 5x5 board. You get two cards that tell you what moves you can make. Either card can be used with any of your pieces. Once you've used a card, it will end up in your opponent's hand a couple turns later.

Every time you play, it's a little bit different because the cards are different. In addition to the chess-like thinking, there's a lot of strategy in learning when to use particular cards. Some cards are really powerful, and it's worth keeping them in your hand until the right moment.

IME a single round of Onitama usually lasts 10-15 minutes. The shortest game I've played was probably about 6 minutes, and the longest was 30.

It's simple enough that you can just pull it out and play with anyone, even if they've never seen it before. Pretty much everything you need to know to play Onitama is printed on the cards sitting right in front of you. It's pretty easy to pick up.

It's also nice that you only have to clean up 10 pieces after a game instead of 32.

umanwizard 7 days ago

Yes, that reduces the search space

abound 7 days ago

Not to be confused with Atomic Chess:

Y_Y 7 days ago

1. Feynman to U-235

AdmiralAsshat 7 days ago

So judging from the omission of the bishop and the queen, the computer really hated calculating diagonals, huh? I would've assumed that calculating the knight's movements would've been the bigger headache.

InitialLastName 7 days ago

The queen is there. It probably made for a more interesting game to keep the knights for their moveset rather than the bishops (given the need to reduce to 6x6).

Between the ~half-sized board, the intrinsic limitations in where they can go, and the reduced pawn movement, bishops would be very clumsy if left in.

Sharlin 7 days ago

I don’t think there’s anything difficult in calculating a knight’s moves. On the other hand rooks, bishops, and queens generate many moves at a time, making the game tree much wider, which might make a difference, or it might not. But anyway, as noted by the sibling, queens were there (except in the second game, in which Kruskal played without one and the computer played with one, to even the odds), and the omission of bishops in particular had nothing to do with evaluation difficulty per se.

Sesse__ 7 days ago

Humans are probably worse with knights than any other piece, relative to computers. Queen is a possible second; queen endgames are super-complex for humans since there are so many options and very few of them can be easily pruned. (I guess you can say that humans are bad with knights and computers are good with queens, especially endgame queens.)

Diagonal sliders do create lots of headaches for bitboards, but those were not used for chess before 1967 or so, i.e., more than a decade later. Generally one uses magic bitboards today (basically a perfect-hashing technique to extract the bits of the diagonals a piece can attack into a neat little linear bit line), but they depend on a fast multiplier and a fair amount of RAM for LUTs, neither of which would have been available on the MANIAC. I would assume that MANIAC just looped over all possible moves in the move generation phase.
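To make the two approaches concrete, here's a toy sketch (mine, heavily simplified, not real engine code): bishop attacks computed the slow way by walking each ray until a blocker, then the same answers served from a table keyed by the occupancy along the piece's rays. Real magic bitboards compress that key into a LUT index with a multiply-and-shift; a Python dict stands in for that here:

```python
from itertools import chain, combinations

N = 8
DIAGS = ((1, 1), (1, -1), (-1, 1), (-1, -1))

def ray_attacks(sq, occupied):
    """Bishop attacks from sq: walk each diagonal, stopping at the first blocker."""
    r, c = divmod(sq, N)
    attacks = set()
    for dr, dc in DIAGS:
        rr, cc = r + dr, c + dc
        while 0 <= rr < N and 0 <= cc < N:
            attacks.add(rr * N + cc)
            if rr * N + cc in occupied:  # blocker: include it, then stop the ray
                break
            rr, cc = rr + dr, cc + dc
    return attacks

# Precompute a lookup table for one square (d4 = 27): every possible
# occupancy pattern along its rays maps to the resulting attack set.
sq = 27
mask = sorted(ray_attacks(sq, set()))  # the squares whose occupancy matters
TABLE = {
    frozenset(occ): ray_attacks(sq, set(occ))
    for occ in chain.from_iterable(
        combinations(mask, k) for k in range(len(mask) + 1))
}

occupied = {13, 36}  # two arbitrary blockers on the d4 diagonals
key = frozenset(s for s in occupied if s in set(mask))
assert TABLE[key] == ray_attacks(sq, occupied)  # lookup agrees with the ray walk
```

The dict lookup replaces the per-move inner loop, at the cost of precomputation and memory — which is exactly the trade-off a machine like MANIAC couldn't afford.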

Sharlin 7 days ago

Thanks, interesting!

silisili 7 days ago

They retained the queen; it was just that the human played without one against the computer for some reason. Probably to give MANIAC a fighting chance.

cyberax 7 days ago

I am disappointed because it doesn't involve nuclear bombs.

jameshart 7 days ago

Hmm… something like:

Each player starts with only a king on the board.

Player turns consist of either:

1) adding a pawn to the board

2) moving the king

Or 3) launching one, some, or all of their pawns already on the board.

Launching a pawn involves removing the pawn from the board, and nominating a square where the pawn will land. If there is an opposing piece on that square at the end of the opponent’s next turn, that piece is removed from the board.

If your king is removed you lose.

hammock 7 days ago

I was thinking spontaneous decay - at a random time every so often, one of your pieces may "decay" and be demoted, from a major piece to a minor one, or from a minor one to a pawn :)

kevindamm 7 days ago

yeah, I was expecting chess piece variants like Fairy Chess.. but there still can be! But do we add wind and fallout? Or make it a MAD type of thing where the meta-game is a kind of Prisoner's Dilemma? Either way, we could call it Manhattan and give it a go..

csmeyer 7 days ago

"Pawns may not be promoted to bishops;"


For real, I would absolutely love to know why that constraint was needed. Does anyone have info on why?

billforsternz 7 days ago

I don't know why you've put lots of question marks; surely this is completely understandable. The whole point of this was that real chess was too big and complex, hence this dramatically smaller, simpler version. There are no bishops in the starting position, so there's absolutely no way they'd bother implementing the bishop-underpromotion corner case.

Amusingly enough, Rybka, the top engine for many years until comparatively recently, didn't implement bishop underpromotion either, for a much different reason. The programmer chose a not-so-great low-level bit representation convention that forbade representing this case. He refused to fix the error because it would be too much work for the more-or-less nonexistent situations where promoting to a bishop is the best move.

teo_zero 7 days ago

Interesting. Does anybody have an example where a promotion to bishop would be preferable to a queen?

Sesse__ 7 days ago

Basically anything where a bishop would win, but a queen would put the other side in stalemate.

For an extreme example, see

billforsternz 7 days ago

I'm at work so I'm not going to try to run experiments, but my guess at the relative frequency of (under)promotion being the best move (Queen = 1) is something like:

Knight 1e-3, Rook 1e-5, Bishop 1e-7

A complicating factor in measuring this is that often a rook serves just as well as a queen, the opponent is going to capture immediately anyway, so players sometimes underpromote to a rook just for the fun factor. My number is more meant to represent cases where underpromotion really is the best move. Knight underpromotion is by far the most important, because a knight can do things a Queen can't, most typically give check to keep a combination going without giving the opponent an opportunity to play an active move. Rook and Bishop underpromotions can only be advantageous when they simultaneously avoid stalemate and keep a winning advantage. This is much more likely with a rook, for example in the famous Saavedra position, perhaps the most famous chess problem of all (it featured as decoration on our wedding cake :).

seanhunter 7 days ago

It is extremely rare. I saw an example involving Fabiano Caruana (current world #2) [1], and he was visibly excited in the post-match interview and said that was the first time in his whole career (which is thousands and possibly tens of thousands of games, especially if you consider online play) that he had done it.

Edit to add: Here's the game. It was Caruana vs Robson 2021, in the 7th round of the USCF championship. Fabi's final move was to promote to a bishop, at which point Ray Robson resigned. He did it for style points; a queen would have been just fine in that position.


rdlw 7 days ago

An example from a real GM game:

abricq 7 days ago

This rule is needed because otherwise, "as in chess", a pawn could promote to a bishop.

If you only remove the bishops from the starting position, then the rules of chess still allow bishops to come in later in the game (through a promotion).

I am assuming they did not have enough memory to hold the rules for bishops, and therefore decided to remove them entirely from the set of possible pieces. Rook and knight promotions are allowed because those pieces are in the initial setup: their rules are already implemented.

csmeyer 7 days ago

Oh duh... well my question was not insightful but yes your answer makes sense, there are no bishops on the board lol

seanhunter 7 days ago

Because they modelled the game with no bishops. You don't start with bishops and you also can't promote to a bishop.