The 70^2 Puzzle
Another geometry puzzle! This one is fairly old. It was popularized by Martin Gardner in the Sept. 1966 issue of Scientific American, but the problem itself appears to be at least a few years older. It goes like this:
"Given that 1*1 + 2*2 + 3*3 + ... + 23*23 + 24*24 = 70*70, is it possible to completely fill a 70*70 square with the 24 smaller squares of sizes 1*1, 2*2, 3*3, ..., 24*24?"
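The identity itself is quickly verified, e.g. in Python:

```python
# Sum of the first 24 square numbers: 1*1 + 2*2 + ... + 24*24.
total = sum(k * k for k in range(1, 25))
print(total, 70 * 70)  # prints "4900 4900"
```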
Apparently, it is unknown whether this is possible. Approximate solutions were found even way back in '66 - probably without the help of a computer program... The best solution found is a covering of the 70*70 square using all smaller squares except the 7*7 square, covering 4851 units out of a total of 4900. But it is unknown whether this is the best solution!
(See also the geometry section of the rec.puzzles FAQ.)
(By the way, 70*70 is the only square number that ever appears in the sum of consecutive square numbers 1*1 + 2*2 + ... + n*n, apart from the trivial case n = 1. In other words, you can continue to add 25*25, 26*26, etc., but the result will never again be a square number. The proof of this was published in 1918 by G.N. Watson in Messenger of Mathematics, New Series, Vol. 48, pp. 1-22. It follows that if it is at all possible to dissect a square into smaller unique squares with consecutive side lengths, it must be the 70*70 square.)
In July 1996, somebody raised this question in the newsgroup comp.programming.contests, asking for a program to solve that puzzle.
Tall order! This problem looks suspiciously like the knapsack problem. The knapsack problem belongs to a class of problems known as "NP-complete", which basically means that no efficient (polynomial-time) way to compute a solution is known - and, unless P = NP, none exists.
But not all is lost. Approximate solutions for NP-complete problems can be found using so-called "heuristics" - the computer scientists' word for "educated guess". Sometimes these heuristics can even find optimal solutions.
Programs trying to solve such problems often use another technique called "backtracking". This means "trial and error": in our case, a backtracking program would place a few small squares according to a heuristic, and if it found that this arrangement didn't work out, it would remove some of the squares it had already placed and try rearranging them.
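To make that trial-and-error loop concrete, here is a minimal backtracking tiler in Python. This is a toy sketch, not the actual program: the grid representation, the largest-first ordering, and the 5*5 toy instance are my own illustrative choices.

```python
from collections import Counter

def fits(grid, n, r, c, k):
    """Does a k-by-k square fit with its top-left corner at (r, c)?"""
    if r + k > n or c + k > n:
        return False
    return all(grid[i][j] == 0 for i in range(r, r + k) for j in range(c, c + k))

def mark(grid, r, c, k, value):
    for i in range(r, r + k):
        for j in range(c, c + k):
            grid[i][j] = value

def solve(grid, n, sizes):
    """Tile the n-by-n grid with the multiset `sizes`; backtrack on dead ends."""
    # Find the first empty cell, scanning row by row.
    cell = next(((r, c) for r in range(n) for c in range(n) if grid[r][c] == 0), None)
    if cell is None:
        return True                        # no empty cell left: fully tiled
    r, c = cell
    for k in sorted(sizes, reverse=True):  # simple heuristic: big squares first
        if sizes[k] > 0 and fits(grid, n, r, c, k):
            mark(grid, r, c, k, k)
            sizes[k] -= 1
            if solve(grid, n, sizes):
                return True
            sizes[k] += 1                  # the "error" part: undo and retry
            mark(grid, r, c, k, 0)
    return False                           # this cell cannot be covered

# Toy instance: a 5*5 square from one 3*3, three 2*2, and four 1*1 squares.
n = 5
grid = [[0] * n for _ in range(n)]
print(solve(grid, n, Counter({3: 1, 2: 3, 1: 4})))  # prints "True"
```

The real puzzle has the same structure, only with a 70*70 grid and 24 distinct squares - which is exactly where the combinatorial explosion discussed below sets in.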
I'm currently fine-tuning a program using these techniques. The problem is finding good heuristics to cut down the number of possibilities that are tried. A preliminary version of the program is available (last update: July 24, 1996, at 12:48 GMT+2).
Note that you can't just try out all possible combinations - there are so many that the program would still run for about 3'000'000 years, even if it could check 1'000'000'000 possible arrangements in one second. My program currently checks about 500 positions per second... Now you have an idea what "NP-complete" means!
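For a rough feeling of such magnitudes (counted differently from the figure above - here only the orders in which the 24 squares could be placed, ignoring where they go):

```python
import math

# Just the number of orders in which 24 distinct squares can be placed,
# ignoring positions entirely - already an astronomical count.
orderings = math.factorial(24)
per_second = 10**9                 # a generous 10^9 checks per second
seconds_per_year = 3600 * 24 * 365
years = orderings / per_second / seconds_per_year
print(f"{orderings} orderings, roughly {years:.1e} years")  # on the order of 10^7 years
```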
My program is fairly primitive: it uses some very simple placement heuristics and then does a depth-first search of all possible arrangements. This works quite well until it finds a covering of 4848 units out of a total of 4900, skipping the 4*4 and the 6*6 squares. After that, it takes a looooooooong time to find a better solution. Even with improvements to avoid symmetric solutions, there are still far too many possibilities to check.
Some ideas to improve this simple algorithm could be:
(Have a look at the program's source to learn what I mean by "corridor".)
As you can see, the basic idea is to treat the problem like a zero-sum game similar to chess, checkers, othello (reversi), go-moku, or go. Whether this really is the right approach remains to be seen - after all, where is the second player in our problem? Maybe I should rather look into one-player games...
One thing that could speed up the search process significantly would be to identify subregions: once we had found a placement for such a region, we'd never have to re-try filling it. For instance, it happens quite often that the program is trying to place the same squares within the same sub-rectangle, with only the other squares placed slightly differently. Avoiding these cases could save a lot of time.
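A toy sketch of that memoization idea, using a 1-D strip as a stand-in for a sub-rectangle (the real 2-D version would also have to encode the region's shape in the cache key). Everything here is hypothetical, not taken from the actual program:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fillable(length, sizes):
    """Can a strip of `length` cells be covered exactly by pieces whose
    lengths form the sorted tuple `sizes` (not all pieces need be used)?
    Once a sub-problem is known to fail, it is cached and never re-searched."""
    if length == 0:
        return True                          # strip completely covered
    return any(
        k <= length and fillable(length - k, sizes[:i] + sizes[i + 1:])
        for i, k in enumerate(sizes)
        if i == 0 or sizes[i - 1] != k       # skip duplicate sizes at this level
    )

print(fillable(10, (1, 2, 3, 4)))  # prints "True":  1 + 2 + 3 + 4 = 10
print(fillable(11, (1, 2, 3, 4)))  # prints "False": total length is only 10
```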
Another approach is to find good heuristics! If you think you know of one, check out the program, implement your approach, test it, and tell me the results! Good luck!
Of course I checked the literature on this problem. It appears that it is not a knapsack problem but a two-dimensional bin-packing problem. Anyway, both are NP-complete. The proof that our particular instance of this problem is also NP-complete is given in
Leung, J.Y-T.; Tam, T.W.; Wong, C.S.; Young, G.H.; Chin, Y.L: Packing Squares into Squares, Journal of Parallel and Distributed Computing 10, pp. 271 - 275, 1990
Two-dimensional bin packing has received less attention than the classical one-dimensional case. This is rather surprising because it has industrial applications, e.g., in VLSI design (planning chip layouts). Yet there are a few publications. Those I looked up are the following:
Baker, B.S.; Coffman, E.G.; Rivest, R.L.: Orthogonal Packings in Two Dimensions, SIAM Journal on Computing 9, pp. 846 - 855, 1980
Baker, B.S.; Calderbank, A.R; Coffman, E.G.; Lagarias, J.C: Approximation Algorithms for Maximizing the Number of Squares Packed Into a Rectangle, SIAM Journal on Algebraic and Discrete Methods, 1983
Coffman, E.G.; Lagarias, J.C.: Algorithms for Packing Squares: A Probabilistic Analysis, SIAM Journal on Computing 18, pp. 166 - 185, 1989
Jennings, D.: On Packings of Squares and Rectangles, Discrete Mathematics 138(1-3), pp. 293 - 300, 1995
None of the proposed heuristics looks likely to perform better than my simple program in this particular case. (Note that this only means that my program is very, very lucky to find a 4848 solution so "fast" - serious research of course tries to find heuristics that may not find such good solutions, but have far better worst-case complexity bounds.)
The original problem statement appeared first in
Gardner, M.: Mathematical Games: The problem of Mrs. Perkins' quilt, and answers to last month's puzzles, Scientific American 215(3), pp. 264 - 272, Sep. 1966
where Martin Gardner attributes the origin of the Puzzle to one Richard B. Britton of Carlisle, Mass. and also claims that "Although I know of no proof that this is impossible, the work of Tutte and others makes it extremely unlikely, and perhaps an impossibility proof would not be that hard to find."
In the reprint of his columns in the book
Gardner, M.: Mathematical Carnival, Allen & Unwin, London 1976 (also published by Alfred A. Knopf, Inc., New York 1975)
he also says that in 1974 two computer scientists at the University of Illinois at Urbana-Champaign, Edward M. Reingold and James Bitner, already had written a program to solve this problem.
I formerly had stated here that they had to give up because their program took too much time... I don't know how it happened (after all, I wrote this page way back in 1996!), but apparently that information was wrong.
In 2002, two remarkable things happened:
Here's that reference:
Bitner, James R. and Reingold, Edward M.: "Backtrack Programming Techniques", CACM 18(11), November 1975, p. 655 (f?)
Quoting Mr. Mendes (I didn't have the time to dig out that old copy of CACM), that paper reads in part:
... As a final example, consider the problem of tiling a 70 x 70 square with square tiles of sizes 1 through 24. The identity 1^2 + 2^2 + ... + 24^2 = 70^2 suggests it may be possible, but a backtrack with [a] macros program demonstrated that it is not. The program looks for sections of the square that would be hard to tile, and tries to tile them first, hoping to force backtracking as soon as possible. It is difficult to tile valleys, i.e. groups of consecutive squares with tiles rising higher at both ends, because among the unused tiles, only those narrower than the valley can be used. Since the smallest valleys are hardest to fill, the program searches for the smallest valley, and tries to put a piece in it. If all unused pieces are too big, we backtrack. Otherwise the piece is placed at the far left of the valley, and the program searches for the new smallest valley. This is an example of search rearrangement. We work with the squares that are hardest to tile first, hoping to force the program to backtrack as soon as possible. In order to place the pieces, a macro is called that generates 24 blocks of code, one to place each piece. The first statement of each block is either a no-op (indicating the piece is available) or a branch to skip to the next block (if the piece has been used already). If the piece is available, the block checks if it fits; if so it places the piece, changes the no-op to branch, and does a subroutine jump to a search for the smallest valley. The return address is stacked for return when backtrack is necessary. In order to check the program, it was run on a set of tiles for which a solution is known to exist; the program found the solution. The program was then run on 24 tiles of sides 1, 2, ..., 24 and found, after a 16-minute search, that no such tiling of the 70 x 70 square is possible.
Quite interesting! Note the trick - seemingly archaic today, but at the time considered ingenious (or even necessary, due to small memory sizes) - of using self-modifying code instead of proper data structures. Also note that the program took only some 16 minutes to find that no tiling is possible. That still does not invalidate my crude timing analysis: my program (and the timing analysis) were about finding the best possible arrangement of squares. According to the quoted description above, the program of Reingold and Bitner just proves that no complete tiling exists (or would have returned one, had it existed). That is a different approach - and a far more clever one than mine for that problem statement, I must admit!
Case closed. Well, if I have time, I might try to re-implement their approach...
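For the curious, here is a minimal Python sketch of the "smallest valley" idea described in the quote. The skyline representation (one filled-up height per column) and all names are my assumptions, not Reingold and Bitner's actual code:

```python
def valleys(skyline):
    """Yield (start, width, height) for every maximal run of equal-height
    columns whose neighbours on both sides are strictly higher (the board
    edges count as walls, i.e. as 'higher')."""
    n, i = len(skyline), 0
    while i < n:
        j = i
        while j < n and skyline[j] == skyline[i]:
            j += 1
        left_higher = i == 0 or skyline[i - 1] > skyline[i]
        right_higher = j == n or skyline[j] > skyline[i]
        if left_higher and right_higher:
            yield i, j - i, skyline[i]
        i = j

def smallest_valley(skyline):
    """The narrowest valley: only pieces narrower than it can go there,
    so working on it first is most likely to force an early backtrack."""
    return min(valleys(skyline), key=lambda v: v[1])

def place(skyline, start, k):
    """Drop a k-by-k piece at the far left of a valley starting at `start`."""
    h = skyline[start]
    for c in range(start, start + k):
        skyline[c] = h + k

sky = [3, 3, 0, 0, 0, 5, 1, 5]
print(smallest_valley(sky))  # prints "(6, 1, 1)": the lone column of height 1
place(sky, 2, 3)             # put a 3x3 piece into the wide middle valley
print(sky)                   # prints "[3, 3, 3, 3, 3, 5, 1, 5]"
```

In the actual search one would place a piece in the smallest valley (or backtrack if none fits) and repeat; the sketch only shows the bookkeeping that makes that loop cheap.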
Copyright © 2003 by Thomas Wolf. All rights reserved.
TW, May 28, 2003