In this case study, we take a closer look at a typical classroom situation. The floorplan of the classroom is shown in the following figure (units are centimeters); the space delimited for the placement of individual students is colored in blue.

We applied our discretization procedure to the delimited space, which resulted in a mesh with 2,148 points, depicted in the following figure. We measure the granularity of a mesh as the ratio between the square root of the area of the largest rectangle in the mesh and the largest distance between any two points in the mesh. In this case, the area of the largest rectangle was 225.9 cm^{2}, its square root being 15.03 cm, and the largest distance between any two points was 943.4 cm. This gives a ratio of 0.0159 (or 1.59 %).
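The granularity measure described above can be sketched in a few lines. This is an illustrative implementation, not the exact code used in the study; the function name and the representation of points as coordinate tuples are our assumptions, and the area of the largest rectangle is passed in, since it depends on how the mesh itself is stored.

```python
from itertools import combinations
from math import dist, sqrt

def granularity(points, largest_rect_area):
    """Granularity of a mesh: the ratio of the square root of the area of
    the largest rectangle in the mesh to the mesh diameter (the largest
    distance between any two mesh points)."""
    diameter = max(dist(p, q) for p, q in combinations(points, 2))
    return sqrt(largest_rect_area) / diameter
```

With the numbers reported above (largest rectangle 225.9 cm², diameter 943.4 cm), the ratio comes out to roughly 0.0159, matching the 1.59 % figure.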

After the mesh was generated, we ran the optimization algorithm. We chose to investigate the optimal layout design for every number of locations between 10 and 20. Each of these computations took several hours to complete. The following figures show the resulting arrangements and the distances between selected points.
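The source does not spell out the optimization algorithm, but the objective it describes, selecting k mesh points so that the minimum pairwise distance is as large as possible, is the max-min dispersion (p-dispersion) problem. The sketch below is a simple greedy heuristic for that objective, shown only to make the problem concrete; it is not the authors' method and, unlike theirs, it is not guaranteed to find the optimum.

```python
from math import dist

def greedy_dispersion(points, k):
    """Greedy heuristic for max-min dispersion: start from the farthest
    pair, then repeatedly add the point whose nearest selected point is
    as far away as possible. Fast but not guaranteed optimal."""
    chosen = list(max(((p, q) for p in points for q in points),
                      key=lambda pq: dist(*pq)))
    while len(chosen) < k:
        nxt = max((p for p in points if p not in chosen),
                  key=lambda p: min(dist(p, c) for c in chosen))
        chosen.append(nxt)
    return chosen

def min_pairwise_distance(points):
    """The quantity being maximized: the smallest distance between any
    two selected locations."""
    return min(dist(p, q) for i, p in enumerate(points)
               for q in points[i + 1:])
```

On a small 3×3 grid with k = 4, the heuristic picks the four corners, which is in fact optimal there; on a realistic 2,148-point mesh an exact solver is needed to certify optimality, which is why each computation took hours.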

Naturally, as the number of locations to find increases, the minimum distance between any two selected locations goes down, as summarized in the table below. What is somewhat counterintuitive, however, is that the rate at which the minimum distance decreases is far from uniform. There are instances with quite a large jump, such as between 12 and 13 or between 17 and 18 locations, but comparatively much smaller jumps between 14 and 15 or between 18 and 19 locations. It is not really possible to tell in advance exactly how much impact an increase (or decrease) in the number of selected locations will have on the minimal distance.

| # of locations | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| min distance [cm] | 282 | 261 | 248 | 233 | 228 | 224 | 207 | 201 | 189 | 184 | 176 |
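The non-uniformity of the decrease is easy to see by taking successive differences of the table values:

```python
# minimum distances (cm) from the table, for 10 through 20 locations
min_distance = [282, 261, 248, 233, 228, 224, 207, 201, 189, 184, 176]

# drop in minimum distance when going from k to k + 1 locations
drops = [a - b for a, b in zip(min_distance, min_distance[1:])]
```

The drops range from 4 cm (14 to 15 locations) up to 21 cm (10 to 11 locations), with no obvious pattern.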

Is this the best we can do? Yes and no. The results are optimal, but only with respect to the discretization. This means that we could make a finer mesh and try to squeeze out a little extra distance. The problem is that a mesh finer by a factor of 2 requires roughly 4 times as many points, and the computational requirements then start to rise dramatically. Moreover, from our experience, we would only gain a 1-2 % increase in the optimal value. The one case where a finer mesh might be worthwhile is when we are pressured to comply with some restriction. Imagine, for instance, a non-negotiable condition that the locations must be at least 250 cm apart. We could settle for the layout with 11 locations and call it a day, or we could try to find the solution for 12 locations on a finer mesh, hoping that it will land above the 250 cm mark.
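Given a hard distance requirement like the 250 cm condition, the table above directly answers how many locations are feasible. A small sketch of that lookup (the function name is ours):

```python
# minimum pairwise distance (cm) achieved by the optimal layout for each
# number of locations, taken from the table above
min_distance = {10: 282, 11: 261, 12: 248, 13: 233, 14: 228, 15: 224,
                16: 207, 17: 201, 18: 189, 19: 184, 20: 176}

def max_locations(required_cm):
    """Largest number of locations whose optimal layout still keeps every
    pair of locations at least `required_cm` apart, or None if even 10
    locations cannot satisfy the requirement."""
    feasible = [k for k, d in min_distance.items() if d >= required_cm]
    return max(feasible) if feasible else None
```

For the 250 cm condition this returns 11, which is why 11 locations is the safe choice on the current mesh; whether 12 locations can clear 250 cm on a finer mesh is an open question, since the refinement typically buys only 1-2 %.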