IMOSL-2024-C3
Let \(n\) be a positive integer. There are \(2n\) knights sitting at a round table. They consist of \(n\) pairs of partners, each pair of which wishes to shake hands. A pair can shake hands only when next to each other. Every minute, one pair of adjacent knights swaps places.
Find the minimum number of exchanges of adjacent knights such that, regardless of the initial arrangement, every knight can meet her partner and shake hands at some time.
Answer: The minimum number of exchanges is \(\frac{n(n - 1)}{2}\) .
Common remarks. The solution is divided into three lemmas. We provide multiple proofs of each lemma.
Solution. Join each pair of knights with a chord across the table. We'll refer to these chords as chains.
First we show that \(n(n - 1) / 2\) exchanges are required for some arrangements.
Lemma 1. If each knight is initially sitting directly opposite her partner, then at least \(n(n - 1) / 2\) exchanges are required for all knights to meet and shake hands with their partners.
Proof 1. In this arrangement any two chains are initially intersecting. For two knights to be adjacent to each other, it is necessary that their chain does not cross any other chain, and thus every pair of chains must be uncrossed at some time. Each exchange of adjacent knights can only uncross a single pair of intersecting chains, and thus the number of exchanges required is at least the number of pairs of chains, which is \(n(n - 1) / 2\) . \(\square\)
Proof 2. In this arrangement the two knights in each pair are initially separated by \(n - 1\) seats in either direction around the table, and so each pair must move a total of at least \(n - 1\) steps so as to be adjacent. There are \(n\) pairs, and each exchange moves two knights by a single step. Hence at least \(n(n - 1) / 2\) moves are required. \(\square\)
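Both proofs rest on the fact that in the antipodal arrangement every two chains cross. This can be checked numerically for small \(n\) (a sketch; the seat numbering and crossing test are my own conventions, with seats \(0, \ldots, 2n-1\) around the table):

```python
# Check that when each knight sits directly opposite her partner,
# every two chords ("chains") cross, so the number of crossing pairs
# is exactly n(n-1)/2.

def crossing_pairs(chords, seats):
    """Count unordered pairs of chords that cross: (a, b) crosses (c, d)
    iff exactly one of c, d lies on the arc strictly between a and b."""
    def crosses(p, q):
        a, b = p
        c, d = q
        def between(x, lo, hi):  # x strictly inside the arc lo -> hi
            return (x - lo) % seats < (hi - lo) % seats and x != lo
        return between(c, a, b) != between(d, a, b)
    count = 0
    for i in range(len(chords)):
        for j in range(i + 1, len(chords)):
            if crosses(chords[i], chords[j]):
                count += 1
    return count

def antipodal_crossings(n):
    """Crossing pairs when the partner of seat a is seat a + n (mod 2n)."""
    chords = [(a, a + n) for a in range(n)]
    return crossing_pairs(chords, 2 * n)

for n in range(2, 9):
    assert antipodal_crossings(n) == n * (n - 1) // 2
```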
We will now prove that \(n(n - 1) / 2\) exchanges is sufficient in all cases. We'll prove a stronger version of this bound than is required, namely that every knight can shake hands with her partner at the end, after all exchanges have finished.
Begin by adding a pillar at the centre of the table. For each chain that passes through the centre of the table, we arbitrarily choose one side of the chain and say that the pillar lies on that side of the chain. While the pillar may lie on a chain, we will never move a knight if that causes the pillar to cross to the other side of a chain. Say that a chain passes in front of a knight if it passes between that knight and the pillar, and define the length of a chain to be the number of knights it passes in front of. Then each chain has a length between 0 and \(n - 1\) inclusive.
Say that a chain \(C\) encloses another chain \(C^{\prime}\) if \(C\) and \(C^{\prime}\) do not cross, and \(C\) passes between \(C^{\prime}\) and the pillar. Say that two chains are intersecting if they cross on the table; enclosing if one chain encloses the other; and disjoint otherwise. Let \(k\) , \(l\) and \(m\) denote respectively the number of enclosing, intersecting and disjoint pairs of chains. Then we have
\[k + l + m = \frac{n(n - 1)}{2}.\]
Lemma 2. \(2k + l\) exchanges are sufficient to reach a position with all pairs of knights sitting adjacent to each other.
Proof 1. We proceed by induction on \(2k + l\) .
If every chain has length 0, then every pair of knights is adjacent and the statement is trivial.
Otherwise, let \(A\) and \(B\) be a pair of knights whose chain \(C_0\) has length \(q \geqslant 1\) . Let \(S_0 = A\) , and let \(S_1, \ldots , S_q\) be the knights which \(C_0\) passes in front of, sitting in that order from \(A\) to \(B\) . We know that \(C_0\) passes in front of \(S_1\) , and there are three cases for the chain \(C_1\) for knight \(S_1\) .
If \(C_1\) passes in front of \(S_0\) then \(C_0\) and \(C_1\) are intersecting, and we can make them disjoint by exchanging the positions of \(S_0\) and \(S_1\) . This reduces the sum \(2k + l\) by 1.
If \(C_1\) passes in front of neither \(S_0\) nor \(B\) then \(C_1\) is enclosed by \(C_0\) , and we can swap \(S_0\) and \(S_1\) to make \(C_0\) and \(C_1\) an intersecting pair. This increases \(l\) by 1 and decreases \(k\) by 1, and hence reduces the sum \(2k + l\) by 1.
If \(C_1\) passes in front of \(B\) then we cannot immediately find a beneficial exchange.
In the third case, we look instead at the knights \(S_i\) and \(S_{i + 1}\) , for each \(i\) in turn. Each time, we will either find a beneficial exchange, or find that the chain \(C_{i + 1}\) for knight \(S_{i + 1}\) passes in front of \(B\) . Eventually we will either find a beneficial exchange in one of the first two cases above, or we will find that the chain \(C_q\) for \(S_q\) passes in front of \(B\) , in which case \(C_q\) and \(C_0\) are intersecting and we can make \(C_q\) and \(C_0\) disjoint by swapping \(S_q\) and \(B\) .
Also note that the only time a chain increases in length is when it is enclosed by another chain. But this cannot happen for a chain containing the pillar, so no chain ever crosses the pillar. \(\square\)
Proof 2. We begin by ignoring the seats, and let each knight walk freely to a predetermined destination. Each pair of knights will walk around the table to one of the two points on the circumference midway between their initial locations, such that the chain between them passes between the pillar and the destination. If more than one pair of knights would have the same destination point, then we make small adjustments to the destination points so that each pair has a distinct destination point.
We then imagine each knight walking at a constant speed (which may be different for each knight). They all start and stop walking at the same time. We want to count how many times two knights pass (either in opposite directions, or in the same direction but at different speeds). For any two pairs of knights, the number of passes depends on the relation between their two chains.
If their two chains are intersecting then there will be one pass, involving the two knights for whom the other chain passes between them and the pillar.
If their two chains are enclosing then there will be two passes, with one of the knights with the enclosing chain passing both of the knights with the shorter enclosed chain.
If their two chains are disjoint then there will be no passes.
The number of passes is therefore \(2k + l\) . If multiple pairs of knights would pass at the same time, we can slightly adjust the walking speeds so that the passes happen at distinct times. We can then convert this sequence of passes into a sequence of seat exchanges in the original problem, which shows that \(2k + l\) exchanges is sufficient.
Lemma 3. \(k \leqslant m\) .
Proof 1. We proceed by induction on \(n\) . The base case \(n = 2\) is clear.
Consider a chain \(C\) of greatest length, and suppose it joins knights \(A\) and \(B\) . Let \(x\) be the number of chains that intersect \(C\) , and let \(y\) be the number of chains that are enclosed by \(C\) . Note that no chain can enclose \(C\) . Then \(C\) passes in front of one knight from each pair whose chain intersects \(C\) , and both knights in any pair whose chain is enclosed by \(C\) . Thus the length of \(C\) is \(x + 2y \leqslant n - 1\) . The number of chains that form a disjoint pair with \(C\) is then
\[n - 1 - x - y \geqslant (x + 2y) - x - y = y.\]
Now we can remove \(A\) and \(B\) and use the induction hypothesis. We need to show that the length of each remaining chain is at most \(n - 2\), so that the chains remain valid. No chain increases in length after removing \(A\) and \(B\). If any remaining chain \(C'\) had length \(n - 1\), then \(C\), being of greatest length, also had length \(n - 1\); in that case \(C'\) must have passed in front of exactly one of \(A\) and \(B\), and so has length \(n - 2\) after removing \(A\) and \(B\). Removing \(A\) and \(B\) deletes exactly \(y\) enclosing pairs (those involving \(C\)) and at least \(y\) disjoint pairs, so the inequality \(k \leqslant m\) for the smaller configuration implies it for the original one. \(\square\)
Proof 2. Let \(k_{C}\) denote the number of chains \(C^{\prime}\) such that \(C\) encloses \(C^{\prime}\) .
Note that if \(C\) encloses \(C^{\prime}\) , then \(k_{C^{\prime}} < k_{C}\) .
First we will show that there are at least \(k_{C}\) chains that are disjoint from \(C\). Let \(x\) be the length of \(C\), let \(\mathcal{S}\) be the set of \(x\) knights that \(C\) passes in front of, and let \(\mathcal{T}\) be the set of \(x\) knights sitting directly opposite them. None of the knights in \(\mathcal{T}\) can have a chain that encloses or is enclosed by \(C\), and if any knight in \(\mathcal{T}\) has a chain that intersects \(C\), then her partner must be a knight in \(\mathcal{S}\). So we have that
\[\begin{aligned} 2k_{C} &= \text{number of knights in } \mathcal{S} \text{ whose chain is enclosed by } C\\ &= x - \text{number of knights in } \mathcal{S} \text{ whose chain intersects } C\\ &\leqslant x - \text{number of knights in } \mathcal{T} \text{ whose chain intersects } C\\ &\leqslant \text{number of knights in } \mathcal{T} \text{ whose chain is disjoint from } C\\ &\leqslant 2\times \text{number of chains that are disjoint from } C.\end{aligned}\]
Now let \(m_{C}\) denote the number of chains \(C^{\prime}\) with \(C\) and \(C^{\prime}\) disjoint, and \(k_{C^{\prime}} < k_{C}\) . We will show that \(m_{C} \geqslant k_{C}\) .
Let \(\mathcal{R}\) be a set of \(k_{C}\) chains that are disjoint from \(C\), such that \(\sum_{D \in \mathcal{R}} k_{D}\) is minimal. If every chain \(C' \in \mathcal{R}\) has \(k_{C'} < k_{C}\), then we are done. Otherwise, consider a chain \(C' \in \mathcal{R}\) with \(k_{C'} \geqslant k_{C}\). There are then at least \(k_{C}\) chains \(C''\) for which \(C'\) passes between \(C''\) and the pillar. Each of these chains has \(k_{C''} < k_{C'}\), and at least one of them is not in \(\mathcal{R}\) (otherwise \(\mathcal{R}\) would contain \(C'\) and at least \(k_{C}\) other chains), so we can swap this chain with \(C'\) to obtain a set \(\mathcal{R}'\) with \(\sum_{D \in \mathcal{R}'} k_{D} < \sum_{D \in \mathcal{R}} k_{D}\). But this contradicts the minimality of \(\mathcal{R}\). \(\square\)
We finish by summing these inequalities over all chains \(C\). Each enclosing pair is counted exactly once in \(\sum_C k_C\), and each disjoint pair is counted at most once in \(\sum_C m_C\) (only by the chain with the larger value of \(k_C\)), so
\[k = \sum_{C} k_{C} \leqslant \sum_{C} m_{C} \leqslant m. \qquad \square\]
By Lemma 3, we have that \(2k + l \leqslant k + l + m = n(n - 1) / 2\) . Combining this with Lemma 2, we have that \(n(n - 1) / 2\) exchanges is enough to reach an arrangement where every knight is sitting next to her partner, as desired.
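The inequalities \(k \leqslant m\) and \(2k + l \leqslant n(n-1)/2\) can also be checked empirically over random arrangements. The sketch below uses my own conventions: the pillar sits at the table's centre, so the side of a chain away from the pillar is its shorter arc, and for a chain through the centre (a diameter) one side is chosen arbitrarily but deterministically:

```python
# Classify each pair of chains as intersecting, enclosing or disjoint,
# then verify k + l + m = n(n-1)/2, k <= m (Lemma 3) and 2k + l <= n(n-1)/2.

import itertools
import random

def classify(chords, seats):
    """Return (k, l, m): enclosing, intersecting and disjoint pairs."""
    def arc(a, b):  # seats strictly between a and b, going a -> b
        return {(a + t) % seats for t in range(1, (b - a) % seats)}
    def pillar_free_side(c):
        a, b = c
        # side of the chord away from the centre pillar = the shorter arc
        # (for a diameter, an arbitrary but fixed choice)
        return arc(a, b) if (b - a) % seats <= seats // 2 else arc(b, a)
    def crosses(p, q):
        a, b = p
        c, d = q
        inside = arc(a, b)
        return (c in inside) != (d in inside)
    k = l = m = 0
    for p, q in itertools.combinations(chords, 2):
        if crosses(p, q):
            l += 1
        elif set(q) <= pillar_free_side(p) or set(p) <= pillar_free_side(q):
            k += 1
        else:
            m += 1
    return k, l, m

random.seed(0)
for trial in range(200):
    n = random.randrange(2, 7)
    seats = list(range(2 * n))
    random.shuffle(seats)
    chords = [(seats[2 * i], seats[2 * i + 1]) for i in range(n)]
    k, l, m = classify(chords, 2 * n)
    assert k + l + m == n * (n - 1) // 2
    assert k <= m                          # Lemma 3
    assert 2 * k + l <= n * (n - 1) // 2   # the bound used above
```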
Comment. Either proof of Lemma 3 can be adapted to show that the configuration in Lemma 1 is the only one which achieves the bound.
IMOSL-2024-C4
On a board with 2024 rows and 2023 columns, Turbo the snail tries to move from the first row to the last row. On each attempt, he chooses to start on any cell in the first row, then moves one step at a time to an adjacent cell sharing a common side. He wins if he reaches any cell in the last row. However, there are 2022 predetermined, hidden monsters in 2022 of the cells, one in each row except the first and last rows, such that no two monsters share the same column. If Turbo unfortunately reaches a cell with a monster, his attempt ends and he is transported back to the first row to start a new attempt. The monsters do not move.
Suppose Turbo is allowed to take \(n\) attempts. Determine the minimum value of \(n\) for which he has a strategy that guarantees reaching the last row, regardless of the locations of the monsters.
Comment. One of the main difficulties of solving this question is in determining the correct expression for \(n\) . Students may spend a long time attempting to prove bounds for the wrong value for \(n\) before finding better strategies.
Students may incorrectly assume that Turbo is not allowed to backtrack to squares he has already visited within a single attempt. Fortunately, making this assumption does not change the answer to the problem, though it may make it slightly harder to find a winning strategy.
Answer: The answer is \(n = 3\) .
Solution. First we demonstrate that there is no winning strategy if Turbo has 2 attempts.
Suppose that \((2, i)\) is the first cell in the second row that Turbo reaches on his first attempt. There can be a monster in this cell, in which case Turbo must return to the first row immediately, and he cannot have reached any other cells past the first row.
Next, suppose that \((3, j)\) is the first cell in the third row that Turbo reaches on his second attempt. Turbo must have moved to this cell from \((2, j)\) , so we know \(j \neq i\) . So it is possible that there is a monster on \((3, j)\) , in which case Turbo also fails on his second attempt. Therefore Turbo cannot guarantee to reach the last row in 2 attempts.
Next, we exhibit a strategy for \(n = 3\) . On the first attempt, Turbo travels along the path
\[(1,1)\to (2,1)\to (2,2)\to \dots \to (2,2023).\]
This path meets every cell in the second row, so Turbo will find the monster in row 2 and his attempt will end.
If the monster in the second row is not on the edge of the board (that is, it is in cell \((2, i)\) with \(2 \leq i \leq 2022\) ), then Turbo takes the following two paths in his second and third attempts:
\[(1,i - 1)\to (2,i - 1)\to (3,i - 1)\to (3,i)\to (4,i)\to \dots \to (2024,i).\] \[(1,i + 1)\to (2,i + 1)\to (3,i + 1)\to (3,i)\to (4,i)\to \dots \to (2024,i).\]
The only cells that may contain monsters in either of these paths are \((3, i - 1)\) and \((3, i + 1)\) . At most one of these can contain a monster, so at least one of the two paths will be successful.

<center>Figure 1: Turbo's first attempt, and his second and third attempts in the case where the monster on the second row is not on the edge. The cross indicates the location of a monster, and the shaded cells are cells guaranteed to not contain a monster. </center>
If the monster in the second row is on the edge of the board, without loss of generality we may assume it is in \((2,1)\) . Then, on the second attempt, Turbo takes the following path:
\[(1,2)\to (2,2)\to (2,3)\to (3,3)\to \dots \to (2022,2023)\to (2023,2023)\to (2024,2023).\]

<center>Figure 2: Turbo's second and third attempts in the case where the monster on the second row is on the edge. The light gray cells on the right diagram indicate cells that were visited on the previous attempt. Note that not all safe cells have been shaded. </center>
If there are no monsters on this path, then Turbo wins. Otherwise, let \((i,j)\) be the first cell on which Turbo encounters a monster. We have that \(j = i\) or \(j = i + 1\) . Then, on the third attempt, Turbo takes the following path:
\[(1,2)\to (2,2)\to (2,3)\to (3,3)\to \dots \to (i - 2,i - 1)\to (i - 1,i - 1)\] \[\qquad \to (i,i - 1)\to (i,i - 2)\to \dots \to (i,2)\to (i,1)\] \[\qquad \to (i + 1,1)\to \dots \to (2023,1)\to (2024,1).\]
Now note that
- The cells from \((1,2)\) to \((i - 1,i - 1)\) do not contain monsters because they were reached earlier than \((i,j)\) on the previous attempt.
- The cells \((i,k)\) for \(1 \leqslant k \leqslant i - 1\) do not contain monsters because there is only one monster in row \(i\), and it lies in \((i,i)\) or \((i,i + 1)\).
- The cells \((k,1)\) for \(i \leqslant k \leqslant 2024\) do not contain monsters because there is at most one monster in column 1, and it lies in \((2,1)\).
Therefore Turbo will win on the third attempt.
Comment. A small variation on Turbo's strategy when the monster on the second row is on the edge is possible. On the second attempt, Turbo can instead take the path
\[(1,2023)\to (2,2023)\to (2,2022)\to \dots \to (2,3)\to (2,2)\to (2,3)\to \dots \to (2,2023)\] \[\qquad \to (3,2023)\to (3,2022)\to \dots \to (3,4)\to (3,3)\to (3,4)\to \dots \to (3,2023)\] \[\qquad \to \dots\] \[\qquad \to (2022,2023)\to (2022,2022)\to (2022,2023)\] \[\qquad \to (2023,2023)\] \[\qquad \to (2024,2023).\]
If there is a monster on this path, say in cell \((i,j)\) , then on the third attempt Turbo can travel straight down to the cell just left of the monster instead of following the path traced out in the second attempt.
\[(1,j - 1)\to (2,j - 1)\to \dots \to (i - 1,j - 1)\to (i,j - 1)\] \[\qquad \to (i,j - 2)\to \dots \to (i,2)\to (i,1)\] \[\qquad \to (i + 1,1)\to \dots \to (2023,1)\to (2024,1).\]

<center>Figure 3: Alternative strategy for Turbo's second and third attempts. </center>
IMOSL-2024-C5
Let \(N\) be a positive integer. Geoff and Ceri play a game in which they start by writing the numbers 1, 2, ..., \(N\) on a board. They then take turns to make a move, starting with Geoff. Each move consists of choosing a pair of integers \((k, n)\) , where \(k \geq 0\) and \(n\) is one of the integers on the board, and then erasing every integer \(s\) on the board such that \(2^{k} \mid n - s\) . The game continues until the board is empty. The player who erases the last integer on the board loses.
Determine all values of \(N\) for which Geoff can ensure that he wins, no matter how Ceri plays.
Answer: The answer is that Geoff wins when \(N\) is of the form \(2^{n}\) for \(n\) odd or of the form \(t2^{n}\) for \(n\) even and \(t > 1\) odd.
Common remarks. We will say that a set \(S\) wins if the current player wins given \(S\) as the current set of integers on the board. Otherwise, we will say that \(S\) loses.
We will let \(J(\mathcal{S}, \mathcal{T}) = (2\mathcal{S} - 1) \cup (2\mathcal{T})\) . Note that every subset of \(\mathbb{Z}\) can be written as \(J(\mathcal{S}, \mathcal{T})\) for some unique pair \((\mathcal{S}, \mathcal{T})\) of subsets of \(\mathbb{Z}\) .
We will let \([n]\) denote the set \(\{1, 2, \ldots , n\}\) .
Solution.
Lemma 1. For any set \(\mathcal{S}\), \(\mathcal{S}\) wins if and only if \(J(\mathcal{S}, \emptyset)\) wins. Similarly, \(\mathcal{S}\) wins if and only if \(J(\emptyset, \mathcal{S})\) wins.
Proof. Let \((k, m)\) be a move on \(\mathcal{S}\), and let \(\mathcal{T}\) be the result of applying the move. Then we can reduce \(J(\mathcal{S}, \emptyset)\) to \(J(\mathcal{T}, \emptyset)\) by applying the move \((k + 1, 2m - 1)\).
Conversely, let \((k, m)\) be a move on \(J(\mathcal{S}, \emptyset)\). We can express the result of this move as \(J(\mathcal{T}, \emptyset)\) for some \(\mathcal{T}\). Then we can reduce \(\mathcal{S}\) to \(\mathcal{T}\) by applying the move \((\max (k - 1, 0), (m + 1) / 2)\).
This gives us a natural bijection between games starting with \(\mathcal{S}\) and games starting with \(J(\mathcal{S}, \emptyset)\), and thus proves the first part of the lemma. The second part follows by a similar argument. \(\square\)
Lemma 2. If \(\mathcal{S}\) and \(\mathcal{T}\) are nonempty and at least one of them loses, then \(J(\mathcal{S}, \mathcal{T})\) wins.
Proof. If \(\mathcal{S}\) loses, then we can delete \(J(\emptyset , \mathcal{T})\) using the move \((1, t)\) for some \(t \in J(\emptyset , \mathcal{T})\), which leaves the losing set \(J(\mathcal{S}, \emptyset)\). Similarly, if \(\mathcal{T}\) loses, then we can delete \(J(\mathcal{S}, \emptyset)\) using the move \((1, s)\) for some \(s \in J(\mathcal{S}, \emptyset)\), leaving the losing set \(J(\emptyset , \mathcal{T})\). \(\square\)
Lemma 3. If \(\mathcal{S}\) is nonempty and wins, then \(J(\mathcal{S}, \mathcal{S})\) loses.
Proof. From this position, we can convert any sequence of moves into another valid sequence of moves by replacing \((k, 2n - 1)\) with \((k, 2n)\) , and vice versa. Thus we may assume that the initial move \((k, m)\) has \(m\) odd. We want to show that any such move results in a winning position for the other player.
The move \((0, m)\) loses immediately. Otherwise, the move results in the set \(J(\mathcal{T}, \mathcal{S})\) for some set \(\mathcal{T}\) . There are three cases.
If \(\mathcal{T}\) is empty then the other player gets the winning set \(J(\emptyset , \mathcal{S})\) .
If \(\mathcal{T}\) is losing then the other player can choose the move \((1, s)\) for some \(s \in J(\emptyset , \mathcal{S})\) , which leaves the losing set \(J(\mathcal{T}, \emptyset)\) .
If \(\mathcal{T}\) is nonempty and winning, then the other player can choose the move \((k, m + 1)\), which results in the position \(J(\mathcal{T}, \mathcal{T})\). We can then proceed by induction on \(|\mathcal{S}|\) to show that this is a losing set. \(\square\)
Lemma 4. \([2n]\) wins if and only if \([n]\) loses.
Proof. Note that \([2n] = J([n],[n])\) . The result then follows directly from the previous two lemmas. \(\square\)
Lemma 5. For any integer \(n \geqslant 1\) , \([2n + 1]\) wins.
Proof. By Lemma 4, either \([n]\) or \([2n]\) loses. If \([n]\) loses, then by Lemma 2 we have that \([2n + 1] = J([n + 1],[n])\) wins. Otherwise, \([2n]\) loses, and therefore \([2n + 1]\) wins by choosing the move \((k,2n + 1)\) for sufficiently large \(k\) so that only \(2n + 1\) is eliminated. \(\square\)
It remains to verify the original answer. We have two cases to consider:
- Suppose \(N = 2^{n}\) for some \(n\) . For \(N = 1\) , every move is an instant loss for Geoff. Then by Lemma 4, Geoff wins for \(N = 2^{n}\) if and only if Geoff loses for \(N = 2^{n - 1}\) , and thus by induction we have that Geoff wins for \(N = 2^{n}\) if and only if \(n\) is odd.
- Otherwise, \(N = t2^{n}\) , for some \(n\) and some \(t > 1\) with \(t\) odd. By Lemma 5, Geoff wins when \(n = 0\) . Then by Lemma 4, Geoff wins for \(N = t2^{n}\) if and only if Geoff loses for \(N = t2^{n - 1}\) , and thus by induction on \(n\) we have that Geoff wins for \(N = t2^{n}\) if and only if \(n\) is even.
Comment. We can represent this game as a game on partial binary trees. This representation could be common in rough working, as it facilitates exploration of small cases. If two sets produce trees which are topologically equivalent, then this equivalence leads to a natural bijection between games starting with the two sets. Such equivalences lead to a significant reduction in the number of distinct cases that need to be considered when exploring the game for small \(N\) .
The construction is as follows. First we begin by considering an infinite binary tree. For each positive integer \(n\) , we consider the binary representation of \(n - 1\) , starting with the least significant bit and ending with an infinite sequence of leading zeroes. We map this sequence of bits to a path on the binary tree by starting at the root, and then repeatedly choosing the left child if the bit is 0 and the right child if the bit is 1. We can then truncate each path after reaching a sufficient depth to distinguish the path from all other paths in the tree.

Valid moves in this representation of the game consist of selecting a node with two children, and removing either the left child or the right child (together with its descendants). Selecting and removing the entire tree is also an allowed move (which loses instantly).
Two trees have equivalent games if they're topologically identical. This equivalence includes swapping the left and right children of any single node, or removing a node with a single child by merging the edges above and below it (and decreasing the depth of its children by one).
Comment. We can also analyse this game using Grundy values (also known as nim- values or nimbers). This requires a slight modification to the rules, wherein any move that would erase all integers on the board is disallowed, and the first player who cannot move loses. This is clearly equivalent to the original game.
Let \(g(\mathcal{S})\) denote the Grundy value of the game starting with the set \(\mathcal{S}\) . Note that the bijection in Lemma 1 shows that
\[g(\mathcal{S}) = g(J(\mathcal{S},\emptyset)) = g(J(\emptyset ,\mathcal{S})).\]
For any set \(V\) of nonnegative integers, let \(\mathrm{mex}(V)\) denote the least nonnegative integer that is not an element of \(V\). For nonnegative integers \(x\) and \(y\), define \(j(x,y)\) recursively as
\[j(x,y) = \mathrm{mex}(\{x,y\} \cup \{j(w,y) \mid w< x\} \cup \{j(x,z) \mid z< y\}).\]
The values of \(j(x,y)\) for small \(x\) and \(y\) are:
<table><tr><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>1</td><td>0</td></tr><tr><td>4</td><td>5</td><td>3</td><td>6</td><td>2</td><td>0</td><td>1</td></tr><tr><td>3</td><td>4</td><td>5</td><td>1</td><td>0</td><td>2</td><td>9</td></tr><tr><td>2</td><td>3</td><td>4</td><td>0</td><td>1</td><td>6</td><td>8</td></tr><tr><td>1</td><td>2</td><td>0</td><td>4</td><td>5</td><td>3</td><td>7</td></tr><tr><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td></tr><tr><td>—</td><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr></table>
We can show that \(g(J(\mathcal{S},\mathcal{T})) = j(g(\mathcal{S}),g(\mathcal{T}))\) for any nonempty sets \(\mathcal{S}\) and \(\mathcal{T}\) . The remainder of the proof follows a similar structure to the given solution.
IMOSL-2024-C6
Let \(n\) and \(T\) be positive integers. James has \(4n\) marbles with weights 1, 2, ..., \(4n\) . He places them on a balance scale, so that both sides have equal weight. Andrew may move a marble from one side of the scale to the other, so that the absolute difference in weights of the two sides remains at most \(T\) .
Find, in terms of \(n\) , the minimum positive integer \(T\) such that Andrew may make a sequence of moves such that each marble ends up on the opposite side of the scale, regardless of how James initially placed the marbles.
Answer: The minimum value of \(T\) is \(4n\) .
Solution 1. We must have \(T \geqslant 4n\) , as otherwise we can never move the marble of weight \(4n\) . We will show that \(T = 4n\) by showing that, for any initial configuration, there is a sequence of moves, never increasing the absolute value of the difference above \(4n\) , that results in every marble ending up on the opposite side of the scale. Because moves are reversible, it suffices to do the following: exhibit at least one configuration \(C\) for which this can be achieved, and show that any initial configuration can reach such a configuration \(C\) by some sequence of moves.
Consider partitioning the weights into pairs \((t, 4n + 1 - t)\) . Suppose that each side of the balance contains \(n\) of those pairs. If one side of the balance contains the pair \((t, 4n + 1 - t)\) for \(1 \leqslant t < 2n\) and the other side contains \((2n, 2n + 1)\) , then the following sequence of moves swaps those pairs between the sides without ever increasing the absolute value of the difference above \(4n\) .
\[\begin{array}{c}{t,4n + 1 - t\mid 2n,2n + 1}\\ {t,2n,4n + 1 - t\mid 2n + 1}\\ {t,2n\mid 2n + 1,4n + 1 - t}\\ {t,2n,2n + 1\mid 4n + 1 - t}\\ {2n,2n + 1\mid t,4n + 1 - t} \end{array} \quad (1)\]
Applying this sequence twice swaps any two pairs \((t, 4n + 1 - t)\) and \((t', 4n + 1 - t')\) between the sides. So we can achieve an arbitrary exchange of pairs between the sides, and \(C\) can be any configuration where each side of the balance contains \(n\) of those pairs.
We now show that any initial configuration can reach one where each side has \(n\) of those pairs. Consider a configuration where one side has total weight \(A - s\) and the other has total weight \(A + s\) , for some \(0 \leqslant s \leqslant 2n\) , and where some pair is split between the two sides. (If no pair is split between the two sides, they must have equal weights and we are done.) Valid moves include moving any weight \(w\) with \(1 \leqslant w \leqslant 2n + s\) from the \(A + s\) side to the \(A - s\) side, and moving any weight \(w\) with \(1 \leqslant w \leqslant 2n - s\) from the \(A - s\) side to the \(A + s\) side. Suppose the pair \((t, 4n + 1 - t)\) , with \(t \leqslant 2n\) , is split between the sides. If \(t\) is on the \(A + s\) side, or on the \(A - s\) side and \(t \leqslant 2n - s\) , it can be moved to the other side. Otherwise, \(t\) is on the \(A - s\) side and \(t \geqslant 2n - s + 1\) , so \(4n + 1 - t \leqslant 2n + s\) is on the \(A + s\) side and can be moved to the other side. So we can unite the two weights from that pair without splitting any other pair, and repeating this we reach a configuration where no pair is split between the sides.
Solution 2. As in Solution 1, \(T \geqslant 4n\) . Let \(\delta\) be the weight of the left side minus the weight of the right side. A configuration is called legal if \(|\delta | \leqslant 4n\) , and a move is legal if it results in a legal configuration. We will show that if \(\delta = 0\) then there is a sequence of legal moves after which every marble is on the opposite side.
We treat the \(n = 1\) case separately. The initial configuration has marbles 1, 4 on one side and 2, 3 on the other. So moving marbles 2, 4, 3, 1 in that order is legal and every marble ends on the opposite side. Now assume \(n \geqslant 2\) .
Marbles of weight at most \(2n\) are called small. We will make use of the following lemmas:
Lemma 1. If a pair of legal configurations differ only in the locations of small marbles then there is a sequence of legal moves to get from one to the other.
Proof. At first we only move marbles in the wrong position if they are not on the lighter side. (In the case of a tie, neither side is lighter.) Such a move is always legal. Since this reduces the number of marbles in the wrong position, eventually it will no longer be possible to perform such a move.
Then the only marbles in the wrong position are on the lighter side. So moving one marble in the wrong position at a time will always increase \(|\delta |\) , and \(|\delta | \leqslant 4n\) at the end. Hence every move is legal. \(\square\)
Lemma 2. Let \(k \in \mathbb{N}\) . A positive integer can be expressed as a sum of distinct positive integers up to \(k\) if and only if it is at most \(k(k + 1) / 2\) .
Proof. The maximum possible sum of distinct positive integers up to \(k\) is \(k(k + 1) / 2\) . For the other direction we use induction on \(k\) . The case \(k = 1\) is trivial. Assume the statement is true for \(k - 1\) . For positive integers up to \(k\) we only need a single term. For larger integers, including \(k\) in the expression means we are done by the inductive hypothesis. \(\square\)
Also note that \(n(2n + 1) \geqslant 4n\) for \(n \geqslant 2\) .
Let \(2n < m \leqslant 4n\) . Marbles of weight greater than \(m\) are called big and marbles from \(2n + 1\) to \(m\) are called medium.
Suppose all big marbles are on the correct side (that is, opposite where they started), \(m\) is on the incorrect side and the configuration is legal. Then the following steps give a sequence of legal moves after which \(m\) is on the correct side and the big marbles were never moved.
Assume \(m\) is on the left. In Step 2, we rearrange the small marbles so we can move \(m\) . But this is only possible if the weight of big and medium marbles on the right is not too large. So we may need to move some medium marbles from the right first, which we do in Step 1.
Step 1 Skip to Step 2 if the total weight of medium and big marbles on the right side is at most \(n(4n + 1) + 2n - m\) . Since the big marbles are in the correct position and \(m\) is in the incorrect position, the big marbles on the right can weigh at most \(n(4n + 1) - m\) . So there must be a medium marble \(m' < m\) on the right.
From the first assumption, it is legal to move all small marbles to the left. Then by Lemma 2 we can move some of the small marbles to the right so the right side has weight exactly \(n(4n + 1) + 2n\) . Then moving \(m'\) is legal. Repeat this step. Since the total weight of medium marbles on the right decreases, this step will occur a bounded number of times.
Step 2 Let the total weight of the right side be \(n(4n + 1) + 2n - m + x\) and the weight of small marbles on the right side be \(y\) . Note that \(y \geqslant x\) . If \(x \leqslant 0\) then moving \(m\) is legal.
Otherwise, by Lemma 2 there is a set of small marbles of weight \(y - x\) . By Lemma 1, there is a sequence of legal moves of small marbles such that the right side has weight exactly \(n(4n + 1) + 2n - m\) . Now moving \(m\) is legal.
Applying the process above for \(m = 4n\) , \(4n - 1\) , ..., \(2n + 1\) will move all nonsmall marbles to the opposite side. Then Lemma 1 completes the proof.
|
IMOSL-2024-C7
|
Let \(N\) be a positive integer and let \(a_{1}\) , \(a_{2}\) , ... be an infinite sequence of positive integers. Suppose that, for each \(n > N\) , \(a_{n}\) is equal to the number of times \(a_{n - 1}\) appears in the list \(a_{1}\) , \(a_{2}\) , ..., \(a_{n - 1}\) .
Prove that at least one of the sequences \(a_{1}\) , \(a_{3}\) , \(a_{5}\) , ... and \(a_{2}\) , \(a_{4}\) , \(a_{6}\) , ... is eventually periodic.
|
Solution 1. Let \(M > \max (a_{1}, \ldots , a_{N})\) . We first prove that some integer appears infinitely many times. If not, then the sequence contains arbitrarily large integers. The first time each integer larger than \(M\) appears, it is followed by a 1. So 1 appears infinitely many times, which is a contradiction.
Now we prove that every integer \(x \geqslant M\) appears at most \(M - 1\) times. If not, consider the first time that any \(x \geqslant M\) appears for the \(M^{\mathrm{th}}\) time. Up to this point, each appearance of \(x\) is preceded by an integer which has appeared \(x \geqslant M\) times. So there must have been at least \(M\) numbers that have already appeared at least \(M\) times before \(x\) does, which is a contradiction.
Thus there are only finitely many numbers that appear infinitely many times. Let the largest of these be \(k\) . Since \(k\) appears infinitely many times there must be infinitely many integers greater than \(M\) which appear at least \(k\) times in the sequence, so each integer 1, 2, ..., \(k - 1\) also appears infinitely many times. Since \(k + 1\) doesn't appear infinitely often there must only be finitely many numbers which appear more than \(k\) times. Let the largest such number be \(l \geqslant k\) . From here on we call an integer \(x\) big if \(x > l\) , medium if \(l \geqslant x > k\) and small if \(x \leqslant k\) . To summarise, each small number appears infinitely many times in the sequence, while each big number appears at most \(k\) times in the sequence.
Choose a large enough \(N' > N\) such that \(a_{N'}\) is small, and in \(a_{1}\) , ..., \(a_{N'}\) :
- every medium number has already made all of its appearances;
- every small number has made more than \(\max (k, N)\) appearances.
Since every small number has appeared more than \(k\) times, past this point each small number must be followed by a big number. Also, by definition each big number appears at most \(k\) times, so it must be followed by a small number. Hence the sequence alternates between big and small numbers after \(a_{N'}\) .
Lemma 1. Let \(g\) be a big number that appears after \(a_{N'}\) . If \(g\) is followed by the small number \(h\) , then \(h\) equals the number of small numbers which have appeared at least \(g\) times before that point.
Proof. By the definition of \(N'\) , the small number immediately preceding \(g\) has appeared more than \(\max (k, N)\) times, so \(g > \max (k, N)\) . And since \(g > N\) , the \(g^{\mathrm{th}}\) appearance of every small number must occur after \(a_{N}\) and hence is followed by \(g\) . Since there are \(k\) small numbers and \(g\) appears at most \(k\) times, \(g\) must appear exactly \(k\) times, always following a small number after \(a_{N}\) . Hence on the \(h^{\mathrm{th}}\) appearance of \(g\) , exactly \(h\) small numbers have appeared at least \(g\) times before that point.
Denote by \(a_{[i,j]}\) the subsequence \(a_{i}\) , \(a_{i + 1}\) , ..., \(a_{j}\) .
Lemma 2. Suppose that \(i\) and \(j\) satisfy the following conditions:
(a) \(j > i > N' + 2\) ,
(b) \(a_{i}\) is small and \(a_{i} = a_{j}\) ,
(c) no small value appears more than once in \(a_{[i,j - 1]}\) .

Then \(a_{i - 2}\) is equal to some small number in \(a_{[i,j - 1]}\) .
Proof. Let \(\mathcal{I}\) be the set of small numbers that appear at least \(a_{i - 1}\) times in \(a_{[1,i - 1]}\) . By Lemma 1, \(a_{i} = |\mathcal{I}|\) . Similarly, let \(\mathcal{J}\) be the set of small numbers that appear at least \(a_{j - 1}\) times in \(a_{[1,j - 1]}\) . Then by Lemma 1, \(a_{j} = |\mathcal{J}|\) and hence by (b), \(|\mathcal{I}| = |\mathcal{J}|\) . Also by definition, \(a_{i - 2} \in \mathcal{I}\) and \(a_{j - 2} \in \mathcal{J}\) .

Suppose the small number \(a_{j - 2}\) is not in \(\mathcal{I}\) . This means \(a_{j - 2}\) has appeared fewer than \(a_{i - 1}\) times in \(a_{[1,i - 1]}\) . By (c), \(a_{j - 2}\) has appeared at most \(a_{i - 1}\) times in \(a_{[1,j - 1]}\) , hence \(a_{j - 1} \leqslant a_{i - 1}\) . Combining with \(a_{[1,i - 1]} \subset a_{[1,j - 1]}\) , this implies \(\mathcal{I} \subseteq \mathcal{J}\) . But since \(a_{j - 2} \in \mathcal{J} \setminus \mathcal{I}\) , this contradicts \(|\mathcal{I}| = |\mathcal{J}|\) . So \(a_{j - 2} \in \mathcal{I}\) , which means it has appeared at least \(a_{i - 1}\) times in \(a_{[1,i - 1]}\) and one more time in \(a_{[i,j - 1]}\) . Therefore \(a_{j - 1} > a_{i - 1}\) .

By (c), any small number appearing at least \(a_{j - 1}\) times in \(a_{[1,j - 1]}\) has also appeared at least \(a_{j - 1} - 1 \geqslant a_{i - 1}\) times in \(a_{[1,i - 1]}\) . So \(\mathcal{J} \subseteq \mathcal{I}\) and hence \(\mathcal{I} = \mathcal{J}\) . Therefore, \(a_{i - 2} \in \mathcal{J}\) , so it must appear at least \(a_{j - 1} - a_{i - 1} \geqslant 1\) more times in \(a_{[i,j - 1]}\) . \(\square\)
For each small number \(a_{n}\) with \(n > N' + 2\) , let \(p_{n}\) be the smallest number such that \(a_{n + p_{n}} = a_{i}\) is also small for some \(i\) with \(n \leqslant i < n + p_{n}\) . In other words, \(a_{n + p_{n}} = a_{i}\) is the first small number to occur twice after \(a_{n - 1}\) . If \(i > n\) , Lemma 2 (with \(j = n + p_{n}\) ) implies that \(a_{i - 2}\) appears again before \(a_{n + p_{n}}\) , contradicting the minimality of \(p_{n}\) . So \(i = n\) . Lemma 2 also implies that \(p_{n} \geqslant p_{n - 2}\) . So \(p_{n}\) , \(p_{n + 2}\) , \(p_{n + 4}\) , ... is a nondecreasing sequence bounded above by \(2k\) (as there are only \(k\) small numbers). Therefore, \(p_{n}\) , \(p_{n + 2}\) , \(p_{n + 4}\) , ... is eventually constant and the subsequence of small numbers is eventually periodic with period at most \(k\) .
Note. Since every small number appears infinitely often, Solution 1 additionally proves that the sequence of small numbers has period \(k\) . The repeating part of the sequence of small numbers is thus a permutation of the integers from 1 to \(k\) . It can be shown that every permutation of the integers from 1 to \(k\) is attainable in this way.
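The structure described above is easy to observe empirically. The sketch below uses a hypothetical starting block \(a_1 = a_2 = 2\) (so \(N = 2\)), generates the sequence, and checks that the odd-indexed subsequence settles into a cycle of period \(k = 2\) (a permutation of \(1, \ldots, k\), as in the Note), while the even-indexed subsequence consists of ever larger big numbers and never becomes periodic:

```python
from collections import Counter

def extend(a, total):
    """Extend the initial block a by the rule: for n > N, a_n is the
    number of times a_{n-1} appears among a_1, ..., a_{n-1}."""
    cnt = Counter(a)
    while len(a) < total:
        nxt = cnt[a[-1]]
        a.append(nxt)
        cnt[nxt] += 1
    return a

def tail_period(seq, tail_len=500, max_period=20):
    """Smallest p <= max_period such that the last tail_len entries of
    seq are p-periodic, or None if there is no such p."""
    tail = seq[-tail_len:]
    for p in range(1, max_period + 1):
        if all(tail[t] == tail[t + p] for t in range(len(tail) - p)):
            return p
    return None

a = extend([2, 2], 20000)   # hypothetical initial block, N = 2
odd_positions = a[0::2]     # a_1, a_3, a_5, ...
even_positions = a[1::2]    # a_2, a_4, a_6, ...
# Here k = 2: the odd-indexed terms end in the cycle 1, 2, 1, 2, ...,
# while the even-indexed terms are eventually big and keep growing.
assert tail_period(odd_positions) == 2
assert tail_period(even_positions) is None
```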
Solution 2. We follow Solution 1 until after Lemma 1. For each \(n > N'\) we keep track of how many times each of 1, 2, ..., \(k\) has appeared in \(a_{1}\) , ..., \(a_{n}\) . We will record this information in an updating \((k + 1)\) - tuple
\[(b_{1},b_{2},\ldots ,b_{k};j)\]
where each \(b_{i}\) records the number of times \(i\) has appeared. The final element \(j\) of the \((k + 1)\) - tuple, also called the active element, represents the latest small number that has appeared in \(a_{1}\) , ..., \(a_{n}\) .
As \(n\) increases, the value of \((b_{1},b_{2},\ldots ,b_{k};j)\) is updated whenever \(a_{n}\) is small. The \((k + 1)\) - tuple updates deterministically based on its previous value. In particular, when \(a_{n} = j\) is small, the active element is updated to \(j\) and we increment \(b_{j}\) by 1. The next big number is \(a_{n + 1} = b_{j}\) . By Lemma 1, the next value of the active element, or the next small number \(a_{n + 2}\) , is given by the number of \(b\) terms greater than or equal to the newly updated \(b_{j}\) , or
\[\left|\{i\mid 1\leqslant i\leqslant k,b_{i}\geqslant b_{j}\} \right|. \quad (1)\]
Each sufficiently large integer which appears \(i + 1\) times must also appear \(i\) times, with both of these appearances occurring after the initial block of \(N\) . So there exists a global constant \(C\) such that \(b_{i + 1} - b_{i}\leqslant C\) . Suppose that for some \(r\) , \(b_{r + 1} - b_{r}\) is unbounded from below. Since the value of \(b_{r + 1} - b_{r}\) changes by at most 1 when it is updated, there must be some update where \(b_{r + 1} - b_{r}\) decreases and \(b_{r + 1} - b_{r}< - (k - 1)C\) . Combining with the fact that \(b_{i} - b_{i - 1}\leqslant C\) for all \(i\) , we see that at this particular point, by the triangle inequality
\[\min (b_{1},\ldots ,b_{r}) > \max (b_{r + 1},\ldots ,b_{k}). \quad (2)\]
Since \(b_{r + 1} - b_{r}\) just decreased, the new active element is \(r\) . From this point on, if the new active element is at most \(r\) , by (1) and (2), the next element to increase is once again from \(b_{1}\) , ..., \(b_{r}\) . Thus only \(b_{1}\) , ..., \(b_{r}\) will increase from this point onwards, and \(b_{k}\) will no longer increase, contradicting the fact that \(k\) must appear infinitely often in the sequence. Therefore \(|b_{r + 1} - b_{r}|\) is bounded.
Since \(|b_{r + 1} - b_{r}|\) is bounded for every \(r\) , it follows that each \(|b_{i} - b_{1}|\) is bounded for \(i = 1\) , ..., \(k\) . This means that there are only finitely many different states for \((b_{1} - b_{1},b_{2} - b_{1},\ldots ,b_{k} - b_{1};j)\) . Since the next active element is completely determined by the relative sizes of \(b_{1}\) , \(b_{2}\) , ..., \(b_{k}\) , and the update of the \(b\) terms depends only on the active element, the active element must be eventually periodic. Therefore the small numbers subsequence, which is either \(a_{1}\) , \(a_{3}\) , \(a_{5}\) , ..., or \(a_{2}\) , \(a_{4}\) , \(a_{6}\) , ..., must be eventually periodic.
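The tuple dynamics can be simulated directly. The sketch below implements the update rule (1) and detects the first recurrence of a normalised state \((b_1 - \min b, \ldots, b_k - \min b;\, j)\); the starting values are illustrative assumptions, not taken from the text:

```python
def active_period(b, j):
    """Iterate the update of the tuple (b_1, ..., b_k; j): by rule (1)
    the next active element is |{i : b_i >= b_j}|, and its counter is
    then incremented. States are compared up to a common shift of the
    b_i, and the length of the resulting cycle is returned."""
    seen = {}
    t = 0
    while True:
        m = min(b)
        key = (tuple(x - m for x in b), j)
        if key in seen:
            return t - seen[key]
        seen[key] = t
        h = sum(1 for x in b if x >= b[j])  # rule (1): next small number h
        j = h - 1                           # 0-based index of h
        b[j] += 1
        t += 1

# Two illustrative starting states (assumed, not from the text): in
# both runs the active element cycles with period exactly k, consistent
# with the Note after Solution 1.
assert active_period([4, 5], 0) == 2
assert active_period([3, 4, 5], 0) == 3
```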
|
IMOSL-2024-C8
|
Let \(n\) be a positive integer. Given an \(n \times n\) board, the unit cell in the top left corner is initially coloured black, and the other cells are coloured white. We then apply a series of colouring operations to the board. In each operation, we choose a \(2 \times 2\) square with exactly one cell coloured black and we colour the remaining three cells of that \(2 \times 2\) square black.
Determine all values of \(n\) such that we can colour the whole board black.
|
Answer: The answer is \(n = 2^{k}\) where \(k\) is a nonnegative integer.
Solution 1. We first prove by induction that it is possible to colour the whole board black for \(n = 2^{k}\) . The base case of \(k = 0\) is trivial, as the \(1 \times 1\) board is already entirely black. Assume the result holds for \(k = m\) and consider the case of \(k = m + 1\) . Divide the \(2^{m + 1} \times 2^{m + 1}\) board into four \(2^{m} \times 2^{m}\) sub-boards. Colour the top left \(2^{m} \times 2^{m}\) sub-board using the inductive hypothesis. Next, colour the centre \(2 \times 2\) square with a single operation. Finally, each of the remaining \(2^{m} \times 2^{m}\) sub-boards can be completely coloured using the inductive hypothesis, starting from the black square closest to the centre. This concludes the induction.
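The induction translates directly into a recursive procedure. In the sketch below (function names ours), `fill` colours a \(2^m \times 2^m\) sub-board starting from any already-black corner cell, and `operate` asserts that every \(2 \times 2\) square it uses contains exactly one black cell, so the run certifies that each step is a legal operation:

```python
def operate(black, r, c):
    """Apply one operation on the 2x2 square with top-left cell (r, c)."""
    square = [(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)]
    assert sum(cell in black for cell in square) == 1  # legality check
    black.update(square)

def fill(black, r0, c0, size, br, bc):
    """Colour the size x size sub-board with top-left cell (r0, c0)
    entirely black, given that its corner cell (br, bc) is black."""
    if size == 1:
        assert (br, bc) in black
        return
    s = size // 2
    # Quadrant containing the black corner; fill it first.
    qr = r0 if br < r0 + s else r0 + s
    qc = c0 if bc < c0 + s else c0 + s
    fill(black, qr, qc, s, br, bc)
    # One operation on the centre 2x2 square of this sub-board.
    operate(black, r0 + s - 1, c0 + s - 1)
    # Fill the other three quadrants from their centre-adjacent corners.
    for r, c in [(r0, c0), (r0, c0 + s), (r0 + s, c0), (r0 + s, c0 + s)]:
        if (r, c) != (qr, qc):
            fill(black, r, c, s,
                 r + s - 1 if r == r0 else r,
                 c + s - 1 if c == c0 else c)

for n in (1, 2, 4, 8, 16, 32):
    board = {(0, 0)}
    fill(board, 0, 0, n, 0, 0)
    assert len(board) == n * n  # the whole board is black
```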
Now we prove that if such a colouring is possible for \(n\) then \(n\) must be a power of 2. Suppose it is possible to colour an \(n \times n\) board where \(n > 1\) . Identify the top left corner of the board by \((0,0)\) and the bottom right corner by \((n,n)\) . Whenever an operation takes place in a \(2 \times 2\) square centred on \((i,j)\) , we immediately draw an "X", joining the four cells' centres (see Figure 4). Also, identify this X by \((i,j)\) . The first operation implies there's an X at \((1,1)\) . Since the whole board is eventually coloured, every cell centre must be connected to at least one X. The collection of all Xs forms a graph \(G\) .

<center>Figure 4: L-trominoes placements corresponding to colouring operations (left) and the corresponding X diagram (right). </center>
Claim 1. The graph \(G\) is a tree.
Proof. Since every operation requires a pre-existing black cell, each newly drawn X apart from the first must connect to an existing X. So all Xs are connected to the first X and \(G\) must be connected. Now, suppose \(G\) has a cycle. Consider the newest X involved in the cycle; it must connect to previous Xs at at least two points. But this implies the corresponding operation will colour at most two cells, which is a contradiction. \(\square\)
Note that in the following arguments, Claims 2 to 4 only require the condition that \(G\) is a tree and every cell is connected to \(G\) .
Claim 2. If there's an \(\mathbf{X}\) at \((i,j)\) , then \(1 \leqslant i,j \leqslant n - 1\) and \(i \equiv j\) (mod 2).
Proof. The inequalities \(1 \leqslant i,j \leqslant n - 1\) are clear. Call an \(\mathbf{X}\) at \((i,j)\) good if \(i \equiv j\) (mod 2), or bad if \(i \not\equiv j\) (mod 2). The first \(\mathbf{X}\) at \((i,j) = (1,1)\) is good. Suppose some \(\mathbf{X}\) s are bad. Since \(G\) is connected, there must exist a good \(\mathbf{X}\) connecting to a bad \(\mathbf{X}\) . But this can only occur if they connect at two points, creating a cycle. This is a contradiction, thus all \(\mathbf{X}\) s are good. \(\square\)
Call an \(\mathbf{X}\) at \((i,j)\) odd if \(i \equiv j \equiv 1\) (mod 2), even if \(i \equiv j \equiv 0\) (mod 2).
Claim 3. The integer \(n\) must be even. Furthermore, there must be \(4(n / 2 - 1)\) odd \(\mathbf{X}\) s connecting the cells on the perimeter of the board as shown in Figure 5.
Proof. If \(n\) is odd, the four corners of the bottom left cell are \((n,0),(n - 1,0),(n - 1,1)\) and \((n,1)\) , none of which satisfies the conditions of Claim 2. So the bottom left cell cannot connect to any \(\mathbf{X}\) . If \(n\) is even, each cell on the edge of the board has exactly one corner satisfying the conditions of Claim 2, so the \(\mathbf{X}\) connecting it is uniquely determined. Therefore the cells on the perimeter of the board are connected to \(\mathbf{X}\) s according to Figure 5. \(\square\)

<center>Figure 5: Highlighting the permitted points for \(\mathbf{X}\) s (left) and \(\mathbf{X}\) s on the perimeter (right). </center>
Divide the \(n \times n\) board into \(n^2 /4\) blocks of \(2 \times 2\) squares. Call each of these blocks a big-cell. We say a big-cell is filled if it contains an odd \(\mathbf{X}\) in its interior, and empty otherwise. By Claim 3, each big-cell on the perimeter must be filled.
Claim 4. Every big-cell is filled.
Proof. Recall that \(\mathbf{X}\) s can only be at \((i,j)\) with \(i \equiv j\) (mod 2). Suppose a big-cell centred at \((i,j)\) is empty. Then in order for its four cells to be coloured, there must be four even \(\mathbf{X}\) s at \((i - 1,j - 1)\) , \((i + 1,j - 1)\) , \((i - 1,j + 1)\) and \((i + 1,j + 1)\) , "surrounding" the big-cell (see Figure 6).
By Claim 3, no empty big-cell can be on the perimeter. So if there exist some empty big-cells, the boundary between empty and filled big-cells must consist of a number of closed loops. Each closed loop is made up of several line segments of length 2, each of which separates a filled big-cell from an empty big-cell.
Since every empty big-cell is surrounded by even \(\mathbf{X}\) s and every filled big-cell contains an odd \(\mathbf{X}\) , the two end points of each such line segment must be connected by \(\mathbf{X}\) s. Since these line segments form at least one closed loop, it implies the existence of a cycle made up of \(\mathbf{X}\) s (see Figure 6). This is a contradiction, thus no big-cell can be empty. \(\square\)

<center>Figure 6: An empty big-cell surrounded by even Xs (left) and the boundary between empty and filled big-cells creating a cycle (right). </center>
Therefore every big-cell is filled by an odd X, and the connections between them are provided by even Xs. We can now reduce the \(n\times n\) problem to an \(n / 2\times n / 2\) problem in the following way. Perform a dilation of the board by a factor of \(1 / 2\) with respect to \((0,0)\) . Each big-cell is shrunk to a regular cell. For the Xs, replace each odd X at \((i,j)\) by the point \((i / 2,j / 2)\) , and replace each even X at \((i,j)\) by an X at \((i / 2,j / 2)\) .
We claim the new resulting graph of Xs is a tree that connects all cells of an \(n / 2\times n / 2\) board. First, two connected Xs in the original \(n\times n\) board are still connected after their replacements (noting that some Xs have been replaced by single points). For each cell in the \(n / 2\times n / 2\) board, its centre corresponds to an odd X from a filled big-cell in the original \(n\times n\) board, so it must be connected to the graph. Finally, suppose there exists a cycle in the new graph. The cycle consists of Xs that correspond to even Xs in the original graph connecting big-cells, forming a cycle of big-cells. Since in every big-cell, the four unit squares were connected by an odd X, this implies the existence of a cycle in the original graph, which is a contradiction.
Thus the new graph of Xs must be a tree that connects all cells of an \(n / 2\times n / 2\) board, which are the required conditions for Claims 2 to 4. Hence we can repeat our argument, halving the dimensions of the board each time, until we reach the base case of a \(1\times 1\) board (where the tree is a single point). Therefore \(n\) must be a power of 2, completing the solution.
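For very small boards the answer can also be confirmed by brute force, searching over all colourings reachable by legal operations (a sketch of ours, exponential in \(n^2\), so only usable for tiny \(n\)):

```python
from itertools import product

def can_colour_fully(n):
    """Depth-first search over all black-cell sets reachable from the
    single black top-left cell by legal operations."""
    full = frozenset(product(range(n), repeat=2))
    start = frozenset({(0, 0)})
    seen = {start}
    stack = [start]
    while stack:
        state = stack.pop()
        if state == full:
            return True
        for r in range(n - 1):
            for c in range(n - 1):
                square = [(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)]
                if sum(cell in state for cell in square) == 1:
                    nxt = state | frozenset(square)
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
    return False

assert can_colour_fully(1) and can_colour_fully(2) and can_colour_fully(4)
assert not can_colour_fully(3)  # 3 is not a power of 2
```

For \(n = 3\) there is also a one-line reason: each operation adds exactly three black cells, so the number of black cells stays \(\equiv 1 \pmod 3\), while \(9 \equiv 0 \pmod 3\).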
Solution 2. As in Solution 1, it is possible to colour the whole board black for \(n = 2^{k}\) .
The colouring operation is equivalent to the placement of L-trominoes. For each L-tromino we place on the board, we draw an arrow and a node as shown in Figure 7. We also draw a node in the top left corner of the board.

<center>Figure 7: Tromino with corresponding arrow and node drawn. </center>
Claim 1. The arrows and nodes form a directed tree rooted at the top left corner.
Proof. The proof is similar to the proof of Claim 1 in Solution 1, with the additional note that the directions of the arrows inherit the order of the colouring operations, so they must be pointing away from the top left node. \(\square\)
Note that since all edges of the tree are diagonal, the nodes can only lie on points \((i,j)\) with \(i + j\equiv 0\) (mod 2). This implies that we can only place down L-trominoes of one particular parity: that is, with the centre of the L-tromino on a point with \(i + j\equiv 0\) (mod 2). In the remainder of the proof, we will implicitly use this parity property when determining possible positions of L-trominoes.
Next, we show that certain configurations of edges of the tree are impossible.
Claim 2. There cannot be two edges in a "parallel" configuration (see Figure 8).
Proof. In such a configuration, the two edges can either be directed in the same direction or in opposite directions. If they point in the same direction (see Figure 8), then the L-trominoes corresponding to the two edges overlap.

<center>Figure 8: Parallel configuration (left) and two parallel edges, case 1 (right). </center>
If they point in opposite directions, then we get the diagram in Figure 9. The cells marked \((\star)\) must lie inside the \(n\times n\) board, so they must be covered by L-trominoes. There is only one possible way to cover these with an L-tromino of the right parity. But this makes the arrows form a cycle, which cannot happen. So we have a contradiction. \(\square\)

<center>Figure 9: Two parallel edges, case 2. </center>
Claim 3. There cannot be three edges in a "zigzag" configuration, shown in Figure 10.

<center>Figure 10: Zigzag configuration. </center>
Proof. Assume for contradiction that there is a zigzag. Then take the zigzag with maximal distance from the root of the tree (measured by distance along the graph from the root to the middle edge of the zigzag).
We may assume without loss of generality that the middle edge is directed down-right. Then the right edge must be directed up-right, since no two arrows can point to the same node. Next, we draw in the corresponding L-trominoes, and consider the cell marked \((\star)\) . There are two possible ways to cover it with an L-tromino, because of the parity of L-tromino centres.
We could choose the centre of the L-tromino to be the top right corner of the cell (see Figure 11). This immediately gives another zigzag.

<center>Figure 11: Zigzag configuration, case 1. </center>
The other possibility is if we choose the centre of the L-tromino to be the bottom left corner of the cell (see Figure 12). Then we need to cover the cell marked \((\star \star)\) with an L-tromino. If

<center>Figure 12: Zigzag configuration, case 2. </center>
we placed the centre of the L-tromino on the top left corner of the cell, this would give two parallel edges, contradicting Claim 2. So we must place the centre of the L-tromino on the bottom right corner of the cell, which gives a zigzag.
In each case, we get another zigzag further away from the root of the tree, which contradicts our assumption of maximality. So there cannot be any zigzags.
We now colour the nodes of the tree. Colour the root node yellow. Colour each other node white if it has an arrow coming out of it in a different direction to the arrow going in, and black otherwise.
Claim 4. Any child of a black node is white.

<center>Figure 13: Black node configuration. </center>
Proof. Suppose we have a black node with a child. Then the arrow exiting the black node must be in the same direction as the arrow entering it by the definition of our colouring, giving the left diagram of Figure 13.
The cell marked \((\star)\) must be covered by an L-tromino. If the centre of this L-tromino is the bottom left corner, then this would give an arrow leaving the black node in a different direction, which cannot happen. So the centre of the L-tromino must instead be the top right corner, which gives an arrow leaving the upper node in a different direction. Thus the upper node must be white. \(\square\)
Claim 5. Every white node has three children, all of which are black.

<center>Figure 14: White node configuration. </center>
Proof. Refer to Figure 14. Suppose we have a white node, as in the leftmost diagram. The cell marked \((\star)\) must be covered by an L-tromino. If the centre of this L-tromino is the bottom right corner of the cell, then this would form a zigzag, which by Claim 3 is not allowed. So the centre must be the top left corner.
Next, the cell marked \((\star \star)\) must be covered by an L-tromino. If the centre of this L-tromino is the top right corner, this would form a zigzag, so the centre must be the bottom left corner instead. Thus we have shown that any white node has three children.
Finally, note that if any of the child nodes had three children of their own, then this would give parallel edges in the diagram, which contradicts Claim 2. Therefore the child nodes of the white node must all be black. \(\square\)
We now know that the node colours alternate between black and white as you go down the tree, so all white nodes lie on points with coordinates \((2i, 2j)\) , and all black nodes lie on points with coordinates \((2i + 1, 2j + 1)\) .
Now (assuming \(n > 1\) ) we will construct a new board whose cells are \(2 \times 2\) squares of our current board. We replace the root node and its child with a single big cell and a big root node,

<center>Figure 15: Replacing with larger cells and L-trominoes. </center>
and we replace each white node and its three children with a big L-tromino, big arrow and big node as shown in Figure 15.
Every black node is the child of the root node or a white node, so every L-tromino is involved in exactly one replacement. Also, the parent of any white node is a black node, whose parent, in turn, is a white node or the root. So the starting point of every big arrow will be on a big node. Therefore we obtain an L-tromino tiling forming a tree.
This shows for \(n > 1\) that if an \(n\times n\) board can be tiled by L-trominoes forming a tree, then \(n\) is even, and an \(n / 2\times n / 2\) board can also be tiled by L-trominoes forming a tree. Since a \(1\times 1\) board can trivially be tiled, we conclude that the only values of \(n\) for which an \(n\times n\) board can be tiled are \(n = 2^{k}\) .
|
IMOSL-2024-G1
|
Let \(ABCD\) be a cyclic quadrilateral such that \(AC< BD< AD\) and \(\angle DBA< 90^{\circ}\) . Point \(E\) lies on the line through \(D\) parallel to \(AB\) such that \(E\) and \(C\) lie on opposite sides of line \(AD\) , and \(AC = DE\) . Point \(F\) lies on the line through \(A\) parallel to \(CD\) such that \(F\) and \(C\) lie on opposite sides of line \(AD\) , and \(BD = AF\) .
Prove that the perpendicular bisectors of segments \(BC\) and \(EF\) intersect on the circumcircle of \(ABCD\) .
|
Solution 1. Let \(T\) be the midpoint of arc \(\widehat{BAC}\) and let lines \(BA\) and \(CD\) intersect \(EF\) at \(K\) and \(L\) , respectively. Note that \(T\) lies on the perpendicular bisector of segment \(BC\) .

Since \(ABCD\) is cyclic, \(\frac{BD}{\sin\angle BAD} = \frac{AC}{\sin\angle ADC}\) . From parallel lines we have \(\angle DAF = \angle ADC\) and \(\angle BAD = \angle EDA\) . Hence,
\[AF\cdot \sin \angle DAF = BD\cdot \sin \angle ADC = AC\cdot \sin \angle BAD = DE\cdot \sin \angle EDA.\]
So \(F\) and \(E\) are equidistant from the line \(AD\) , meaning that \(EF\) is parallel to \(AD\) .
We have that \(KADE\) and \(FADL\) are parallelograms, hence we get \(KA = DE = AC\) and \(DL = AF = BD\) . Also, \(KE = AD = FL\) so it suffices to prove the perpendicular bisector of \(KL\) passes through \(T\) .
Triangle \(AKC\) is isosceles so \(\angle BTC = \angle BAC = 2\angle BKC\) . Likewise, \(\angle BTC = 2\angle BLC\) . Since \(T\) , \(K\) , and \(L\) all lie on the same side of \(BC\) and \(T\) lies on the perpendicular bisector of \(BC\) , \(T\) is the centre of circle \(BKLC\) . The result follows.
Solution 2. Let \(AF\) and \(DE\) meet the circumcircle \(\omega\) of \(ABCD\) at \(X\) and \(Y\) , respectively, and let \(T\) be as in Solution 1.
As \(BD<AD\) , \(DY\parallel AB\) and \(\angle BAY=\angle DBA<90^{\circ}\) , we have \(DY<AB\) and \(Y\) lies on the opposite side of line \(AD\) to \(C\) . Also from \(BD<AD\) , we have \(B\) , \(C\) , and \(D\) all lie on the same side of the perpendicular bisector of \(AB\) , which shows \(AC>AB\) . Combining these, we get \(DY<AB<AC=DE\) and, as \(Y\) and \(E\) both lie on the same side of line \(AD\) , \(Y\) lies in the interior of segment \(DE\) . Similarly, \(X\) lies in the interior of segment \(AF\) .
Since \(AB\) is parallel to \(DY\) , we have \(YA=BD=FA\) . Likewise \(XD=AC=ED\) .

Claim 1. \(T\) is the midpoint of arc \(\widehat{XY}\) .
Proof. From \(AX\parallel CD\) and \(AB\parallel DY\) we have
\[\angle CAX=\angle AXD=\angle AYD=\angle YDB.\]
Since \(T\) is the midpoint of arc \(\widehat{BAC}\) , we have \(\angle BAT=\angle TDC\) , so
\[\angle TAX=\angle CAX+\angle BAC-\angle BAT=\angle YDB+\angle BDC-\angle TDC=\angle YDT.\]
Recall from above we have \(AB<AC\) and analogously, \(DC<DB\) , which shows that \(X\) , \(Y\) and \(T\) all lie on the same side of line \(AD\) . In particular, \(T\) and \(A\) lie on opposite sides of \(XY\) so \(T\) lies on the internal angle bisector of \(\angle XAY\) . Since \(AF=AY\) , we have \(\Delta ATF\cong \Delta ATY\) , giving \(TF=TY\) .
Likewise, \(TE=TX\) , so \(TE=TF\) , meaning that \(T\) lies on the perpendicular bisector of segment \(EF\) as required.
Comment. The statement remains true without the length and angle conditions on cyclic quadrilateral \(ABCD\) however additional care is required to consider different cases based on the ordering of points on lines \(DE\) and \(AF\) . It is also possible for \(T\) to be on the external angle bisector of \(\angle XAY\) .
Solution 3. From \(AF = DB\) , \(AC = DE\) and
\[\angle (AC,AF) = \angle (AC,CD) = \angle (AB,BD) = \angle (DE,DB),\]
triangles \(ACF\) and \(DEB\) are congruent, so \(CF = BE\) .
Let \(P = BE \cap CF\) . Since
\[\angle (CP,BP) = \angle (CF,BE) = \angle (AF,DB) = \angle (DC,DB),\]
we have that \(P\) lies on circle \(ABCD\) .

Finally, let \(T\) be the Miquel point of the quadrilateral \(BCFE\) so \(T\) lies on circles \(EFP\) and \(ABCD\) . Note that \(T\) is the centre of spiral similarity taking segments \(BE\) to \(CF\) and since \(BE = CF\) , this is in fact just a rotation, so \(TB = TC\) and \(TE = TF\) ; that is, the perpendicular bisectors of \(BC\) and \(EF\) meet at \(T\) , on circle \(ABCD\) .
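The statement can also be sanity-checked numerically. The sketch below places a hypothetical cyclic quadrilateral on the unit circle (the angles \(90^{\circ}, 150^{\circ}, 200^{\circ}, 280^{\circ}\) are our own choice, made so that \(AC < BD < AD\) and \(\angle DBA = 85^{\circ} < 90^{\circ}\)), constructs \(E\) and \(F\) from the definitions, and verifies that the perpendicular bisectors of \(BC\) and \(EF\) meet on the circumcircle:

```python
import cmath
import math

def pt(theta_deg):
    """Point on the unit circle (the circumcircle) at the given angle."""
    return cmath.exp(1j * math.radians(theta_deg))

def cross(u, v):
    return u.real * v.imag - u.imag * v.real

def opposite_sides(p, q, a, d):
    """True if p and q lie strictly on opposite sides of line ad."""
    return cross(d - a, p - a) * cross(d - a, q - a) < 0

# Hypothetical cyclic quadrilateral ABCD satisfying the constraints.
A, B, C, D = pt(90), pt(150), pt(200), pt(280)
assert abs(C - A) < abs(D - B) < abs(D - A)   # AC < BD < AD

# E: on the line through D parallel to AB, opposite side of AD to C, DE = AC.
u = (B - A) / abs(B - A)
E = next(D + s * abs(C - A) * u for s in (1, -1)
         if opposite_sides(D + s * abs(C - A) * u, C, A, D))
# F: on the line through A parallel to CD, opposite side of AD to C, AF = BD.
v = (D - C) / abs(D - C)
F = next(A + s * abs(D - B) * v for s in (1, -1)
         if opposite_sides(A + s * abs(D - B) * v, C, A, D))

def perp_bisectors_meet(p1, q1, p2, q2):
    """Intersection of the perpendicular bisectors of p1q1 and p2q2,
    solving T . (q - p) = (|q|^2 - |p|^2) / 2 for both segments."""
    a1, b1, c1 = (q1 - p1).real, (q1 - p1).imag, (abs(q1)**2 - abs(p1)**2) / 2
    a2, b2, c2 = (q2 - p2).real, (q2 - p2).imag, (abs(q2)**2 - abs(p2)**2) / 2
    det = a1 * b2 - a2 * b1
    return complex((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

T = perp_bisectors_meet(B, C, E, F)
assert abs(abs(T) - 1) < 1e-9  # T lies on the circumcircle
```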
|
IMOSL-2024-G2
|
Let \(ABC\) be a triangle with \(AB < AC < BC\) , incentre \(I\) and incircle \(\omega\) . Let \(X\) be the point in the interior of side \(BC\) such that the line through \(X\) parallel to \(AC\) is tangent to \(\omega\) . Similarly, let \(Y\) be the point in the interior of side \(BC\) such that the line through \(Y\) parallel to \(AB\) is tangent to \(\omega\) . Let \(AI\) intersect the circumcircle of triangle \(ABC\) again at \(P \neq A\) . Let \(K\) and \(L\) be the midpoints of \(AB\) and \(AC\) , respectively.
Prove that \(\angle KIL + \angle YPX = 180^{\circ}\) .
|
Solution 1. Let \(A'\) be the reflection of \(A\) in \(I\) , then \(A'\) lies on the angle bisector \(AP\) . Lines \(A'X\) and \(A'Y\) are the reflections of \(AC\) and \(AB\) in \(I\) , respectively, and so they are the tangents to \(\omega\) from \(X\) and \(Y\) . As is well-known, \(PB = PC = PI\) , and since \(\angle BAP = \angle PAC > 30^{\circ}\) , \(PB = PC\) is greater than the circumradius. Hence \(PI > \frac{1}{2} AP > AI\) ; we conclude that \(A'\) lies in the interior of segment \(AP\) .

We have \(\angle APB = \angle ACB\) in the circumcircle and \(\angle ACB = \angle A'XC\) because \(A'X \parallel AC\) . Hence, \(\angle APB = \angle A'XC\) , and so quadrilateral \(BPA'X\) is cyclic. Similarly, it follows that \(CYA'P\) is cyclic.
Now we are ready to transform \(\angle KIL + \angle YPX\) into the sum of the angles of triangle \(A'CB\) . By a homothety of factor 2 at \(A\) we have \(\angle KIL = \angle CA'B\) . In circles \(BPA'X\) and \(CYA'P\) we have \(\angle APX = \angle A'BC\) and \(\angle YPA = \angle BCA'\) , therefore
\[\angle KIL + \angle YPX = \angle CA'B + (\angle YPA + \angle APX) = \angle CA'B + \angle BCA' + \angle A'BC = 180^{\circ}.\]
Comment. The constraint \(AB < AC < BC\) was added by the Problem Selection Committee in order to reduce case-sensitivity. Without it, there would be two more possible configurations according to the possible orders of points \(A\) , \(P\) and \(A'\) , as shown in the pictures below. The solution for these cases is broadly the same, but some extra care is required in the degenerate case when \(A'\) coincides with \(P\) and line \(AP\) is a common tangent to circles \(BPX\) and \(CPY\) .

Solution 2. Let \(BC = a\) , \(AC = b\) , \(AB = c\) and \(s = \frac{a + b + c}{2}\) , and let the radii of the incircle, \(B\)-excircle and \(C\)-excircle be \(r\) , \(r_{b}\) and \(r_{c}\) , respectively. Let the incircle be tangent to \(AC\) and \(AB\) at \(B_{0}\) and \(C_{0}\) , respectively; let the \(B\)-excircle be tangent to \(AC\) at \(B_{1}\) , and let the \(C\)-excircle be tangent to \(AB\) at \(C_{1}\) . As is well-known, \(AB_{1} = s - c\) and \(\mathrm{area}(\triangle ABC) = rs = r_{c}(s - c)\) .
Let the line through \(X\) , parallel to \(A C\) be tangent to the incircle at \(E\) , and the line through \(Y\) , parallel to \(A B\) be tangent to the incircle at \(D\) . Finally, let \(A P\) meet \(B B_{1}\) at \(F\) .

It is well-known that points \(B\) , \(E\) , and \(B_{1}\) are collinear by the homothety between the incircle and the \(B\)-excircle, and \(BE \parallel IK\) because \(IK\) is a midline in triangle \(B_{0}EB_{1}\) . Similarly, it follows that \(C\) , \(D\) , and \(C_{1}\) are collinear and \(CD \parallel IL\) . Hence, the problem reduces to proving \(\angle YPA = \angle CBE\) (and its symmetric counterpart \(\angle APX = \angle DCB\) with respect to the vertex \(C\) ), so it suffices to prove that \(FYPB\) is cyclic. Since \(ACPB\) is cyclic, that is equivalent to \(FY \parallel B_{1}C\) and \(\frac{BF}{FB_{1}} = \frac{BY}{YC}\) .
By the angle bisector theorem we have
\[\frac{B F}{F B_{1}} = \frac{A B}{A B_{1}} = \frac{c}{s - c}.\]
The homothety at \(C\) that maps the incircle to the \(C\) - excircle sends \(Y\) to \(B\) , so
\[\frac{B C}{Y C} = \frac{r_{c}}{r} = \frac{s}{s - c}.\]
So,
\[\frac{B Y}{Y C} = \frac{B C}{Y C} -1 = \frac{s}{s - c} -1 = \frac{c}{s - c} = \frac{B F}{F B_{1}},\]
which completes the solution.
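For completeness, the ratio \(\frac{r_c}{r} = \frac{s}{s - c}\) used in the middle step follows directly from the two area formulas quoted at the start of the solution:

\[\frac{r_{c}}{r} = \frac{\operatorname{area}(\triangle ABC) / (s - c)}{\operatorname{area}(\triangle ABC) / s} = \frac{s}{s - c}.\]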
|
IMOSL-2024-G3
|
Let \(A B C D E\) be a convex pentagon and let \(M\) be the midpoint of \(A B\) . Suppose that segment \(A B\) is tangent to the circumcircle of triangle \(C M E\) at \(M\) and that \(D\) lies on the circumcircles of triangles \(A M E\) and \(B M C\) . Lines \(A D\) and \(M E\) intersect at \(K\) , and lines \(B D\) and \(M C\) intersect at \(L\) . Points \(P\) and \(Q\) lie on line \(E C\) so that \(\angle P D C = \angle E D Q = \angle A D B\) .
Prove that lines \(K P\) , \(L Q\) , and \(M D\) are concurrent.
|
Common remarks. Each of the solutions we present consists of three separate parts:
(a) proving \(K P \parallel M C\) and \(L Q \parallel M E\) ;
(b) proving \(K L \parallel A B\) and, optionally, showing that points \(C\) , \(E\) , \(K\) , and \(L\) are concyclic;
(c) completing the solution using homotheties, or the parallelogram enclosed by lines \(KP\), \(MK\), \(ML\) and \(LQ\), or radical axes between three circles.
Solution 1.
(a) Notice that the condition \(\angle P D C = \angle A D B\) is equivalent to \(\angle A D P = \angle B D C\) , and \(\angle E D Q = \angle A D B\) is equivalent to \(\angle E D A = \angle Q D B\) . From line \(A B\) being tangent to circle \(C M E\) , and circles \(A M D E\) and \(C D M E\) we read \(\angle E C M = \angle E M A = \angle E D A = \angle Q D B\) and \(\angle M E C = \angle B M C = \angle B D C = \angle A D P\) .
Using \(\angle A D P = \angle M E C\) , the points \(D\) , \(E\) , \(K\) , and \(P\) are concyclic, which gives that \(\angle E P K = \angle E D A = \angle E C M\) . From that, we can see that \(K P \parallel M C\) . It can be shown similarly that \(C\) , \(D\) , \(Q\) , and \(L\) are concyclic, \(\angle L Q C = \angle M E C\) and therefore \(L Q \parallel M E\) .

(b) Let rays \(D A\) and \(D B\) intersect circle \(C D E\) at \(R\) and \(S\) , respectively. We now observe that \(\angle S E C = \angle S D C = \angle M E C\) , so points \(E\) , \(M\) , and \(S\) are collinear. We similarly obtain that \(C\) , \(M\) , and \(R\) are collinear.
From \(\angle S R C = \angle S E C = \angle B M C\) we can see that \(R S \parallel A B\) . Since \(M\) bisects \(A B\) , it follows that \(K L \parallel R S\) .
(c) Consider the homothety at \(D\) that sends \(R S\) to \(K L\) . Because \(K P \parallel R C\) and \(L Q \parallel S E\) , that homothety sends the concurrent lines \(D M\) , \(R C\) , and \(S E\) to \(D M\) , \(K P\) , and \(L Q\) , so these lines are also concurrent.
Solution 2.
(a) As in Solution 1, we show the following: \(\angle ECM = \angle EMA = \angle EDA = \angle EPK\) ; \(\angle MEC = \angle BMC = \angle BDC = \angle LQC\) ; the points \(C\) , \(D\) , \(Q\) , and \(L\) are concyclic; the points \(D\) , \(E\) , \(K\) , and \(P\) are concyclic; \(KP \parallel MC\) ; and \(LQ \parallel ME\) .
(b) Notice that triangles \(EKP\) and \(EMC\) are homothetic at \(E\) , so their circumcircles \(CME\) and \(DEKP\) are tangent to each other at \(E\) . Similarly, circle \(CDQL\) is tangent to circle \(CME\) at \(C\) .
Suppose that the tangents to circle \(CME\) at \(C\) and \(E\) intersect at point \(X\) . (The case when \(CE\) is a diameter in circle \(CME\) can be considered as a limit case.) Moreover, let \(EX\) and \(CX\) intersect circles \(DEAM\) and \(BCDM\) again at \(A_1 \neq E\) and \(B_1 \neq C\) , respectively.

We have \(XE = XC\) because they are the tangents from \(X\) to circle \(CME\) . Moreover, in circle \(DEAM\) , chords \(AM\) and \(A_1E\) are tangent to circle \(CME\) , so \(A_1E = AM\) . Similarly, we have \(B_1C = BM\) , hence \(A_1E = AM = BM = B_1C\) . We conclude \(XA_1 = XB_1\) , so the powers of \(X\) with respect to circles \(DEAM\) and \(BCDM\) are equal. Therefore, \(X\) lies on the radical axis of these two circles, which is \(DM\) .
Now notice that by \(XC = XE\) , point \(X\) has equal powers to circles \(CDQL\) and \(DEKP\) , so \(DX\) is the radical axis of these circles. Point \(M\) lies on \(DX\) , so \(ME \cdot MK = MC \cdot ML\) ; we conclude that \(C\) , \(E\) , \(K\) , and \(L\) are concyclic. Hence, by \(\angle MKL = \angle ECM = \angle KMA\) we have \(KL \parallel AB\) .
(c) As \(\angle EPK = \angle EMA = \angle QLK\) , we have that \(KLQP\) is cyclic. The radical axes between circles \(DEKP\) , \(CDQL\) and \(KLQP\) are \(DM\) , \(KP\) and \(LQ\) , so they are concurrent at the radical centre of the three circles.
Solution 3.
(b) We present another proof that \(K L \parallel A B\) .
Let \(A D \cap L Q = I\) , \(B D \cap K P = H\) , \(A B \cap L Q = U\) and \(A B \cap K P = V\) . Since
\[\angle D H P = \angle D L M = 180^{\circ} - \angle C L D = 180^{\circ} - \angle C Q D = \angle D Q E,\]
point \(H\) lies on circle \(D P Q\) . Similarly, we obtain that point \(I\) lies on this circle. Hence, \(\angle L I H = \angle Q D B = \angle E D A = \angle E M A\) , and \(L Q \parallel M E\) implies that \(H I \parallel A B\) .

Let \(A M = B M = d\) , then we have
\[\frac{B U}{I H} = \frac{B L}{L H} = \frac{B M}{M V} = \frac{d}{d + A V}\quad \mathrm{and}\quad \frac{A V}{I H} = \frac{A K}{K I} = \frac{A M}{M U} = \frac{d}{d + B U}.\]
Hence, \(B U \cdot (d + A V) = A V \cdot (d + B U)\) , so \(B U = A V\) . Therefore, \(\triangle M L U \cong \triangle V K M\) which implies \(K L \parallel A B \parallel H I\) .
(c) Lines \(MK\), \(ML\), \(KP\) and \(LQ\) enclose a parallelogram. Line \(DM\) passes through the midpoint of \(KL\), which is the centre of the parallelogram, and passes through the vertex \(M\). Therefore, \(DM\) passes through the opposite vertex, which is the intersection of \(KP\) and \(LQ\).
|
IMOSL-2024-G4
|
Let \(A B C D\) be a quadrilateral with \(A B\) parallel to \(C D\) and \(A B< C D\) . Lines \(A D\) and \(B C\) intersect at a point \(P\) . Point \(X\neq C\) on the circumcircle of triangle \(A B C\) is such that \(P C = P X\) . Point \(Y\neq D\) on the circumcircle of triangle \(A B D\) is such that \(P D = P Y\) . Lines \(A X\) and \(B Y\) intersect at \(Q\) .
Prove that \(P Q\) is parallel to \(A B\) .
|
Solution 1. Let \(M\) and \(N\) be the midpoints of \(A D\) and \(B C\) , respectively and let the perpendicular bisector of \(A B\) intersect the line through \(P\) parallel to \(A B\) at \(R\) .
Lemma. Triangles \(Q A B\) and \(R N M\) are similar.
Proof. Let \(O\) be the circumcentre of triangle \(A B C\) , and let \(S\) be the midpoint of \(C X\) . Since \(N\) , \(S\) , and \(R\) are the respective perpendicular feet from \(O\) to \(B C\) , \(C X\) , and \(P R\) , we have that quadrilaterals \(P R N O\) and \(C N S O\) are cyclic. Furthermore, \(P\) , \(S\) , and \(O\) are collinear as \(P C = P X\) . Since \(A B C X\) is also cyclic, we have that
\[\angle Q A B = \angle X C B = \angle P O N = 180^{\circ} - \angle N R P = \angle M N R.\]
Analogously, we have that \(\angle A B Q = \angle R M N\) , so triangles \(Q A B\) and \(R N M\) are similar.

Let \(d(Z, \ell)\) denote the perpendicular distance from the point \(Z\) to the line \(\ell\) . Using that \(P R \parallel A B\) along with the similarities \(Q A B \sim R N M\) and \(P A B \sim P M N\) , we have that
\[\frac{d(Q, A B)}{A B} = \frac{d(R, M N)}{M N} = \frac{d(P, M N)}{M N} = \frac{d(P, A B)}{A B},\]
which implies that \(P Q \parallel A B\) .
Solution 2. Let \(BD\) and \(AC\) intersect at \(T\) and let the line through \(P\) parallel to \(AB\) intersect \(BD\) at \(V\) . Next, let \(Q'\) be the foot of the perpendicular from \(T\) to \(PV\) . Finally, let \(Q'A\) intersect circle \(ABC\) again at \(X'\) and \(Q'B\) intersect circle \(ABD\) again at \(Y'\) .

Claim. \(PQ'\) bisects \(\angle BQ'D\) externally.
Proof. Let \(PT\) intersect \(CD\) at \(L\). Let \(\infty_{CD}\) be the point at infinity on line \(CD\). From the standard Ceva–Menelaus configuration we have \((D, C; L, \infty_{CD})\) is harmonic. Hence projecting through \(P\) we have
\[-1 = (D,C;L,\infty_{CD}) = (D,B;T,V).\]
As \((D, B; T, V)\) is harmonic, and also \(\angle VQ'T = 90^{\circ}\) (by construction), the claim follows. \(\square\)
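The last step is the standard lemma that a right angle together with a harmonic range produces an angle bisector pair: since \((D, B; T, V) = -1\), the circle with diameter \(TV\) is an Apollonius circle of \(D\) and \(B\), and \(\angle VQ'T = 90^{\circ}\) places \(Q'\) on this circle, so

\[\frac{Q'D}{Q'B} = \frac{TD}{TB}.\]

By the converse of the angle bisector theorem, \(Q'T\) bisects \(\angle DQ'B\) internally, and hence \(Q'V \perp Q'T\) bisects it externally.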
Now as
\[\angle Q'PD = \angle BAD = 180^{\circ} - \angle DY'B = 180^{\circ} - \angle DY'Q'\]
we have \(Q'PDY'\) cyclic. By the claim, we have that \(P\) is the midpoint of arc \(\widehat{DQ'Y'}\) , so \(PD = PY'\) .
Since \(Y\) is the unique point not equal to \(D\) on circle \(ABD\) satisfying \(PD = PY\) , we have \(Y' = Y\) .
Likewise \(X' = X\) so \(Q' = Q\) and we are done.
Solution 3. Let \(AX\) intersect circle \(PCX\) for the second time at \(Q'\) . Then
\[\angle AQ'P = \angle XQ'P = \angle XCP = \angle XCB = 180^{\circ} - \angle BAX = \angle Q'AB\]
so \(PQ'\) is parallel to \(AB\) . Hence, it suffices to show that \(Q'\) is equal to \(Q\) . To do so, we aim to show the common chord of circles \(PCX\) and \(PDY\) is parallel to \(AB\) , since then by symmetry \(Q'\) is also the second intersection of \(BY\) and circle \(PDY\) .

Let the centres of circles \(PCX\) and \(PDY\) be \(O_X\) and \(O_Y\) , respectively. Let the centres of circles \(ABC\) and \(ABD\) be \(O_C\) and \(O_D\) , respectively.
Note \(P\) , \(O_X\) , and \(O_C\) are collinear since they all lie on the perpendicular bisector of \(CX\) . Likewise \(P\) , \(O_Y\) , and \(O_D\) are collinear on the perpendicular bisector of \(DY\) . By considering the projections of \(O_X\) and \(O_C\) onto \(BC\) , and \(O_Y\) and \(O_D\) onto \(AD\) , we have
\[\frac{P O_{X}}{P O_{C}} = \frac{\frac{P C}{2}}{\frac{P B + P C}{2}} = \frac{\frac{P D}{2}}{\frac{P A + P D}{2}} = \frac{P O_{Y}}{P O_{D}}.\]
Hence \(O_X O_Y\) is parallel to \(O_C O_D\) , which is perpendicular to \(AB\) as desired.
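Two routine verifications underlie the displayed chain of ratios. Since \(P\), \(O_X\), and \(O_C\) are collinear, the ratio \(\frac{PO_X}{PO_C}\) equals the ratio of the perpendicular projections of \(O_X\) and \(O_C\) onto \(BC\); these projections are the midpoints of the chords \(PC\) and \(BC\), at directed distances \(\frac{PC}{2}\) and \(PB + \frac{BC}{2} = \frac{PB + PC}{2}\) from \(P\). The middle equality reduces, after clearing denominators, to the similarity of triangles \(PAB\) and \(PDC\) coming from \(AB \parallel CD\):

\[\frac{PC}{PB + PC} = \frac{PD}{PA + PD} \iff PA \cdot PC = PB \cdot PD \iff \frac{PA}{PD} = \frac{PB}{PC}.\]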
|
IMOSL-2024-G5
|
Let \(ABC\) be a triangle with incentre \(I\), and let \(\Omega\) be the circumcircle of triangle \(BIC\). Let \(K\) be a point in the interior of segment \(BC\) such that \(\angle BAK < \angle KAC\). The angle bisector of \(\angle BKA\) intersects \(\Omega\) at points \(W\) and \(X\) such that \(A\) and \(W\) lie on the same side of \(BC\), and the angle bisector of \(\angle CKA\) intersects \(\Omega\) at points \(Y\) and \(Z\) such that \(A\) and \(Y\) lie on the same side of \(BC\).
Prove that \(\angle W A Y = \angle Z A X\) .
|
Common remarks. The key step in each solution is to prove that \(\angle Z A K = \angle I A Y\) and \(\angle W A K = \angle I A X\) . The problem is implied by these equalities, as we then have that
\[\angle W A Y = \angle W A K + \angle K A I + \angle I A Y = \angle I A X + \angle K A I + \angle Z A K = \angle Z A X.\]

We now present several proofs that \(\angle Z A K = \angle I A Y\) , with \(\angle W A K = \angle I A X\) following in an analogous manner.
Solution 1. Let \(\Gamma\) be circle \(A B C\) and \(\omega\) be circle \(A Y Z\) . Let \(O\) , \(M\) , and \(S\) be the centres of \(\Gamma\) , \(\Omega\) , and \(\omega\) , respectively. Let \(A K\) intersect \(\Gamma\) again at \(P\) , and let the angle bisector of \(\angle Z A Y\) intersect \(\omega\) again at \(N\) .

By power of a point from \(K\) to \(\Gamma\) and \(\Omega\) , we have that \(K A\cdot K P = K B\cdot K C = K Y\cdot K Z\) , so \(P\) also lies on \(\omega\) . The pairwise common chords of \(\Gamma\) , \(\Omega\) , and \(\omega\) are then \(A P\perp O S\) , \(B C\perp O M\) , and \(Y Z\perp M S\) , so we have that \(\angle O M S = \angle C K Y = \angle Y K A = \angle M S O\) . As \(M\) lies on \(\Gamma\) and \(O M = O S\) , \(S\) also lies on \(\Gamma\) . Note that \(N\) lies on \(M S\) as \(N Y = N Z\) , so
\[\angle P A N = \frac{1}{2}\angle P S N = \frac{1}{2}\angle P S M = \frac{1}{2}\angle P A M.\]
Thus, \(A N\) bisects \(\angle P A M\) in addition to \(\angle Z A Y\) , which means that \(\angle Z A K = \angle I A Y\) as \(K\) lies on \(A P\) and \(I\) lies on \(A M\) .
Solution 2. Define \(M\) and \(P\) as in Solution 1, and recall that \(AYPZ\) is cyclic. Let \(Q\) be the second intersection of the line parallel to \(BC\) through \(P\) with circle \(ABC\), and let \(J\) be the incentre of triangle \(APQ\).

Since \(P Q\) is parallel to \(B C\) and \(\angle B A P< \angle P A C\) , the angle bisector of \(\angle A P Q\) is parallel to the angle bisector of \(\angle A K C\) . Hence, \(P J\) is parallel to \(Y Z\) . As \(M\) is the midpoint of \(P Q\) on circle \(A P Q\) , we have that \(M P = M J\) . Then since segments \(Y Z\) and \(P J\) are parallel and have a common point \(M\) on their perpendicular bisectors, \(P J Y Z\) is cyclic with \(J Y = P Z\) . It follows that \(J\) also lies on circle \(A Y P Z\) and that \(\angle Z A P = \angle J A Y = \angle I A Y\) .
Comment. The proof of the analogous case of \(\angle W A K = \angle I A X\) is slightly different. In this case, \(J\) should be defined as the \(A\) - excentre of \(A P Q\) so that \(P J\) is the external bisector of \(\angle A P Q\) and \(P J\parallel W X\) . The proof is otherwise exactly the same.
Solution 3. As in the previous solutions, let \(M\) be the centre of \(\Omega\) . Let \(L\) be the intersection of \(AM\) and \(BC\) , and let \(L'\) be the reflection of \(L\) over \(YZ\) . Let the circle \(MYZ\) intersect \(AM\) again at \(T\) .

Note that as \(M\) is the midpoint of arc \(\widehat{BC}\) on circle \(ABC\) and \(L\) is the foot of the bisector of \(\angle BAC\), we have that \(MA \cdot ML = MI^2 = MY^2\). It follows by power of a point that \(MY\) is tangent to circle \(ALY\), so \(\angle LAY = \angle LYM\). Using directed angles, we then have that
\[\angle A Y T = \angle M T Y - \angle M A Y = \angle M Z Y - \angle L Y M = \angle Z Y M - \angle L Y M = \angle Z Y L = \angle L^{\prime}Y Z,\]
where we use the fact that \(MY = MZ\) and that \(L\) and \(L'\) are symmetric about \(YZ\) . Thus, \(YT\) and \(YL'\) are isogonal in \(\angle AYZ\) . Analogously, we have that \(ZT\) and \(ZL'\) are isogonal in \(\angle YZA\) . This means that \(T\) and \(L'\) are isogonal conjugates in triangle \(AYZ\) , which allows us to conclude that \(\angle ZAK = \angle IAY\) since \(L'\) lies on \(AK\) and \(T\) lies on \(AI\) .
Comment. Owing to the condition \(\angle BAK < \angle KAC\) , points \(L'\) and \(T\) lie inside triangle \(AYZ\) . However, if one tries to write down the same proof for \(\angle WAK = \angle IAX\) , the analogues \(L_1'\) and \(T_1\) of \(L'\) and \(T\) would lie outside triangle \(AWX\) . Thus, the solution has been written using directed angles so that it applies directly to this case as well. It is also possible that \(L_1'\) lies on circle \(AWX\) and \(T_1\) is a point at infinity. In this case, it is straightforward to interpret the directed angle chase to prove the isogonality, and the isogonality also follows from this scenario being a limit case of other configurations.
Note. The original proposal remarks that this problem is a special case of a more general property:
A convex quadrilateral \(ABCD\) is inscribed in a circle \(\omega\) . The bisectors between \(AC\) and \(BD\) intersect \(\omega\) at four points, forming a convex quadrilateral \(PQRS\) . Then the conditions
\[XA\cdot XC = XB\cdot XD\qquad \text{and}\qquad \angle (XP,XQ) = \angle (XS,XR)\]
on point \(X\) are equivalent.
The Problem Selection Committee believes that the proof of this generalisation is beyond the scope of the competition and considers the original problem to be more suitable.
|
IMOSL-2024-G6
|
Let \(ABC\) be an acute triangle with \(AB < AC\) , and let \(\Gamma\) be the circumcircle of \(ABC\) . Points \(X\) and \(Y\) lie on \(\Gamma\) so that \(XY\) and \(BC\) intersect on the external angle bisector of \(\angle BAC\) . Suppose that the tangents to \(\Gamma\) at \(X\) and \(Y\) intersect at a point \(T\) on the same side of \(BC\) as \(A\) , and that \(TX\) and \(TY\) intersect \(BC\) at \(U\) and \(V\) , respectively. Let \(J\) be the centre of the excircle of triangle \(TUV\) opposite the vertex \(T\) .
Prove that \(AJ\) bisects \(\angle BAC\) .
|
Solution 1. Let \(N\) be the midpoint of arc \(\widehat{BAC}\) on \(\Gamma\), and let \(NX\) and \(NY\) intersect \(BC\) at \(W\) and \(Z\), respectively.
Claim. Quadrilateral \(WXYZ\) is cyclic, and its circumcentre is \(J\) .
Proof. As \(N\) is the midpoint of arc \(\widehat{BAC}\), \(W\) and \(Z\) lie on \(BC\), and \(X\) and \(Y\) are the second intersections of \(NW\) and \(NZ\) with \(\Gamma\), we have that \(WXYZ\) is cyclic.
Let the parallel to \(BC\) through \(N\) intersect \(TU\) and \(TV\) at \(U'\) and \(V'\) , respectively. Then \(U'\) is the intersection of the tangents to \(\Gamma\) at \(N\) and \(X\) , so \(U'N = U'X\) . As \(NU' \parallel BC\) , \(U'NX\) is similar to \(UWX\) , so \(UW = UX\) as well. Hence, the perpendicular bisector of \(WX\) is the internal bisector of \(\angle XUW\) , which is the external bisector of \(\angle VUT\) . Analogously, the perpendicular bisector of \(YZ\) is the external bisector of \(\angle TVU\) . This means that the circumcentre of \(WXYZ\) is the intersection of the external bisectors of \(\angle VUT\) and \(\angle TVU\) , which is \(J\) . \(\square\)

Let \(AN\) intersect \(BC\) at \(L\) , so \(XY\) passes through \(L\) as well. By power of a point from \(L\) to \(\Gamma\) and circle \(WXYZ\) , we have that \(LA \cdot LN = LX \cdot LY = LW \cdot LZ\) , so \(WANZ\) is also cyclic. Thus, \(A\) is the Miquel point of quadrilateral \(WXYZ\) . As \(WXYZ\) is cyclic with circumcentre \(J\) and its opposite sides \(WX\) and \(YZ\) intersect at \(N\) , we have that \(AN \perp AJ\) . Since \(AN\) is the external bisector of \(\angle BAC\) , this implies that \(AJ\) is the internal bisector of \(\angle BAC\) .
Solution 2. Let the internal and external angle bisectors of \(\angle BAC\) intersect \(BC\) at \(K\) and \(L\) , respectively. Let \(AK\) intersect circle \(ABC\) again at \(M\) , and let \(D\) be the intersection of the tangents to \(\Gamma\) at \(B\) and \(C\) . Let \(\Omega\) be the \(T\) - excircle of \(TUV\) , and let \(\omega\) be the incircle of \(DBC\) .
Claim. The points \(T\) , \(K\) , and \(D\) are collinear.
Proof. Note that \(BC\) and \(XY\) are the polars of \(T\) and \(D\) with respect to \(\Gamma\) . By La Hire's Theorem, \(TD\) is the polar of \(L\) with respect to \(\Gamma\) . As \((B, C; K, L) = - 1\) , \(K\) also lies on the polar of \(L\) , thus proving the collinearity. \(\square\)
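Here \((B, C; K, L) = -1\) is the standard harmonic relation between the feet of the internal and external bisectors from \(A\): by the internal and external angle bisector theorems,

\[\frac{KB}{KC} = \frac{AB}{AC} = \frac{LB}{LC}\]

in unsigned lengths, and since \(K\) lies inside segment \(BC\) while \(L\) lies outside, the two directed ratios have opposite signs, so the cross-ratio equals \(-1\).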
Claim. The incentre of \(DBC\) is \(M\) .
Proof. We have that \(\angle MBC = \angle MAC = \frac{1}{2}\angle BAC = \frac{1}{2}\angle DBC\) , so \(BM\) bisects \(\angle DBC\) . Similarly, \(CM\) bisects \(\angle BCD\) , so \(M\) is the incentre of \(DBC\) . \(\square\)

Claim. The intersection of the common external tangents of \(\Omega\) and \(\omega\) is \(K\) .
Proof. Let \(K'\) be the intersection of the common external tangents of \(\Omega\) and \(\omega\). As \(\Omega\) and \(\omega\) are both tangent to \(BC\) and lie on the side of \(BC\) opposite to \(A\), \(K'\) lies on \(BC\). As \(T\) is the intersection of the common external tangents of \(\Gamma\) and \(\Omega\) and \(D\) is the intersection of the common external tangents of \(\Gamma\) and \(\omega\), by Monge's theorem \(K'\) lies on \(TD\). As \(K'\) lies on both \(BC\) and \(TD\), it is the same point as \(K\). \(\square\)
Hence, \(K\) is collinear with the centres of \(\Omega\) and \(\omega\) , which are \(M\) and \(J\) , respectively. As \(K\) and \(M\) both lie on the bisector of \(\angle BAC\) , so does \(J\) .
Note. It can be shown that circles \(AUV\) and \(ABC\) are tangent and that the tangents from \(U\) and \(V\) to circle \(ABC\) different from \(TU\) and \(TV\) intersect at a point \(W\) on line \(TK\) . Reframing the problem in terms of quadrilateral \(TUVW\) using these properties, we obtain the following problem:
Let \(ABCD\) be a convex quadrilateral with an incircle \(\omega\) , and let \(AC\) and \(BD\) intersect at \(P\) . Point \(E\) lies on \(\omega\) such that the circumcircle of \(ACE\) is tangent to \(\omega\) . Prove that if \(B\) and \(E\) lie on the same side of line \(AC\) , then the centre of the excircle of triangle \(ABC\) opposite the vertex \(B\) lies on line \(EP\) .
While this is an appealing statement, the Problem Selection Committee is uncertain about its difficulty and whether it has solutions that do not proceed by reducing to the original problem. Thus, it is believed that the original statement is more suitable for the competition.
|
IMOSL-2024-G7
|
Let \(ABC\) be a triangle with incentre \(I\) such that \(AB < AC < BC\) . The second intersections of \(AI\) , \(BI\) , and \(CI\) with the circumcircle of triangle \(ABC\) are \(M_A\) , \(M_B\) , and \(M_C\) , respectively. Lines \(AI\) and \(BC\) intersect at \(D\) and lines \(BM_C\) and \(CM_B\) intersect at \(X\) . Suppose the circumcircles of triangles \(XM_BM_C\) and \(XBC\) intersect again at \(S \neq X\) . Lines \(BX\) and \(CX\) intersect the circumcircle of triangle \(SXM_A\) again at \(P \neq X\) and \(Q \neq X\) , respectively.
Prove that the circumcentre of triangle \(SID\) lies on \(PQ\) .
|
Solution 1.

Let \(O\) be the circumcentre of \(\triangle ABC\). First we note that, from standard properties of the Miquel point \(S\), we have:
- \(\triangle SM_CM_B \sim \triangle SBC \sim \triangle SPQ\); \((*)\)
- \(I\) and \(S\) are inverses with respect to circle \(ABC\);
- \(\angle OSX = 90^{\circ}\).
Claim 1. \(\angle M_APB = \angle CDA\) .
Proof. From the above we have \(\triangle OM_AI \sim \triangle OSM_A\) and
\[\angle M_APB = \angle M_APX = \angle M_ASX = 90^{\circ} + \angle M_ASO = 90^{\circ} + \angle OM_AI = \angle M_ADB = \angle CDA.\]
Claim 2. \(\frac{M_{C}B}{BP} = \frac{M_{B}C}{CQ} = \frac{AI}{ID}\) .
Proof. Observe that \(\angle P M_{C}M_{A} = \angle B M_{C}M_{A} = \angle D A C\) and \(\angle M_{C}M_{A}B = \angle I C D\) . Combining these with Claim 1 shows \(M_{C}P M_{A}B\sim A D C I\) . Therefore, \(\frac{M_{C}B}{BP} = \frac{AI}{ID}\) . Similarly, \(\frac{M_{B}C}{CQ} = \frac{AI}{ID}\) . \(\square\)
Claim 3. \(\frac{DP}{DQ} = \frac{IB}{IC}\) .
Proof. Firstly, observe that \(\angle I C B = \angle A M_{B}M_{C}\) and \(\angle C B I = \angle M_{B}M_{C}A\) which gives that \(\triangle I B C\sim \triangle A M_{C}M_{B}\) . This, combined with Claim 2, is enough to show \(\triangle D P Q\sim \triangle I B C\) by linearity, proving the claim. \(\square\)
Claim 4. \(\frac{IP}{IQ} = \frac{IB}{IC}\) .
Proof. Combining \(\triangle I B M_{C}\sim \triangle I C M_{B}\) with Claim 2 shows \(I B M_{C}P\sim I C M_{B}Q\) giving the result. \(\square\)
Finally, we have that
\[\frac{SP}{SQ} = \frac{SB}{SC} = \frac{BM_{C}}{CM_{B}} = \frac{IB}{IC}\]
from \((\ast)\) and \(\triangle I B M_{C}\sim \triangle I C M_{B}\) . Putting this together with Claims 3 and 4, we have that
\[\frac{IB}{IC} = \frac{DP}{DQ} = \frac{IP}{IQ} = \frac{SP}{SQ},\]
which shows that circle \(S I D\) is an Apollonius circle with respect to \(P\) and \(Q\) , giving the desired conclusion.
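The final step uses the defining property of the Apollonius circle: for distinct points \(P\) and \(Q\) and a fixed ratio \(k \neq 1\), the locus

\[\left\{Z : \frac{ZP}{ZQ} = k\right\}\]

is a circle whose centre lies on line \(PQ\) (it meets line \(PQ\) at the two points dividing \(PQ\) internally and externally in ratio \(k\), and these form a diameter). Here \(S\), \(I\), and \(D\) all satisfy this relation with \(k = \frac{IB}{IC} \neq 1\) (as \(AB \neq AC\)), so circle \(SID\) is this locus and its centre lies on \(PQ\).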
Comment. The condition \(A B< A C\) ensures \(S\neq X\) . We also need to avoid the case \(\angle B A C = 60^{\circ}\) as then \(B M_{C}\parallel C M_{B}\) .
Solution 2. We use Claim 1 from Solution 1. We will show that \(P\) and \(Q\) are inverses in circle \(SID\) which implies the result. Perform an inversion in circle \(BIC\) and denote the inverse of a point \(\bullet\) by \(\bullet '\) .

Claim 1. \(S' = J\) where \(J\) is the reflection of \(I\) across \(BC\) .
Proof. We have that \(S\) and \(I\) are inverses in circle \(ABC\) . Inverting this assertion in circle \(BIC\) shows that \(S'\) and \(I\) are inverses with respect to line \(BC\) , which is just a reflection in line \(BC\) . \(\square\)
Let \(Y = M_{B}M_{C}\cap BC\). From \(\angle IM_{C}M_{B} = \angle M_{B}M_{C}A\) and \(\angle AM_{B}M_{C} = \angle M_{C}M_{B}I\), we see that \(A\) and \(I\) are reflections in line \(M_{B}M_{C}\), so \(YA = YI\). We have that circle \(SID\) maps to circle \(AIJ\), which has centre \(Y\): indeed, \(YA = YI = YJ\), the last equality holding because \(Y\) lies on \(BC\) and \(J\) is the reflection of \(I\) in \(BC\). Inverting the conclusion that \(P\) and \(Q\) are inverses with respect to circle \(SID\) in circle \(BIC\), it suffices to show that \(P'\) and \(Q'\) are inverses with respect to circle \(AIJ\), or equivalently, that \(YP' \cdot YQ' = YA^2\).
Claim 2. Circle \(XSM_{A}\) maps to line \(YJ\) under the inversion in circle \(BIC\) .
Proof. Since circle \(BIC\) has centre \(M_{A}\) , the inverse of this circle is a line. By Claim 1, this line passes through \(J\) hence it suffices to prove that circle \(XSM_{A}\) passes through \(Y'\) . From inverting line \(BC\) in circle \(BIC\) , we have that \(BCM_{A}Y'\) is cyclic so
\[YS\cdot YX = YB\cdot YC = YY^{\prime}\cdot YM_{A},\]
where we have used that \(Y\), \(S\) and \(X\) are collinear by a standard property of the Miquel point. Hence \(Y'\) lies on circle \(XSM_{A}\) as required. \(\square\)
Let \(A_{1}\) be the reflection of \(A\) in the perpendicular bisector of \(B C\) . Using Claim 1 from Solution 1,
\[\angle P^{\prime}B M_{A} = \angle M_{A}P B = \angle C D A = 180^{\circ} - \angle A C M_{A} = 180^{\circ} - \angle M_{A}B A_{1}.\]
Hence, \(P^{\prime}\) , \(B\) , and \(A_{1}\) are collinear. Similarly \(Q^{\prime}\) , \(C\) , and \(A_{1}\) are collinear. Let \(P_{1}\) and \(Q_{1}\) be the reflections of \(P^{\prime}\) and \(Q^{\prime}\) across \(B C\) . As \(P^{\prime}\) and \(Q^{\prime}\) lie on line \(Y J\) , it follows that \(P_{1}\) and \(Q_{1}\) lie on line \(Y I\) . Also from the previous collinearities, we get \(B P_{1} \parallel A C\) and \(C Q_{1} \parallel A B\) .
We have now reduced the problem to the following:
Claim 3 (Inverted Problem). Let \(A B C\) be a triangle with incentre \(I\) . Let \(Y\) be the point on \(B C\) such that \(Y A = Y I\) . Let \(P_{1}\) and \(Q_{1}\) be points on \(Y I\) such that \(B P_{1} \parallel A C\) and \(C Q_{1} \parallel A B\) . Then \(Y A^{2} = Y P_{1} \cdot Y Q_{1}\) .

Proof. Let \(Y I\) intersect \(A B\) and \(A C\) at \(E\) and \(F\) , respectively. From the parallel lines, we get that \(\triangle B E P_{1}\) and \(\triangle C Q_{1} F\) are homothetic with centre \(Y\) . Thus we have
\[\frac{Y E}{Y P_{1}} = \frac{Y Q_{1}}{Y F} \Longrightarrow Y P_{1} \cdot Y Q_{1} = Y E \cdot Y F.\]
Moreover, \(A I\) bisects \(\angle E A F\) and \(Y A = Y I\) so the circle centred at \(Y\) with radius \(Y A\) is the Apollonius circle of \(\triangle A E F\) with respect to the feet of the internal and external angle bisectors at \(A\) . This gives \(Y E \cdot Y F = Y A^{2}\) . Combining these results proves the claim. \(\square\)
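The final equality \(YE \cdot YF = YA^{2}\) can also be phrased via the harmonic range on line \(EF\): if the feet of the internal and external bisectors of \(\angle EAF\) on line \(EF\) are \(U\) and \(V\), then \((E, F; U, V) = -1\), and \(Y\) is the midpoint of \(UV\) with \(YU = YA\), so Newton's relation for harmonic ranges gives

\[YE \cdot YF = YU^{2} = YA^{2}.\]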
Solution 3. As in Solution 1, let \(O\) be the circumcentre of \(\triangle ABC\). Let \(XI\) intersect circle \(XSM_A\) again at \(Z \neq X\) and let \(Y = BC \cap M_B M_C\). Let \(X^*\) be the inverse of \(X\) in circle \(ABC\). We will use the properties of the Miquel point \(S\) noted at the top of Solution 1 and, in addition, that \(S\) lies on line \(XY\).

Claim 1. \(YSAD\) is cyclic.
Proof. From \(OM_A \perp BC\) and \(YS \perp OS\) we have \(\angle DYS = 180^\circ - \angle SOM_A\) . From inverting collinear points \(A\) , \(I\) and \(M_A\) in circle \(ABC\) we get \(ASM_AO\) is cyclic which gives
\[\angle SOM_A = \angle SAM_A = \angle SAD \implies \angle SAD + \angle DYS = 180^\circ\]
proving the claim.
Claim 2. \(X^*\) lies on circle \(BIC\) which has centre \(M_A\) .
Proof. This follows immediately from inverting circle \(SBCX\) in circle \(ABC\) .
Claim 3. \(Z\) lies on circle \(SID\) .
Proof. We have that
\[\angle IZS = \angle XM_AS = \angle OM_AS - \angle OM_AX = \angle M_AIO - \angle M_AX^*O = \angle DIO - \angle M_AX^*O\]
where in the penultimate step we inverted in circle \(ABC\) to get the angle equalities.
From Brocard’s Theorem applied to cyclic quadrilateral \(B M_{C}M_{B}C\) , we get \(Y\) , \(I\) , and \(X^{*}\) collinear and \(\angle Y X^{*}O = 90^{\circ}\) . This gives that
\[\angle M_{A}X^{*}O = 90^{\circ} - \angle I X^{*}M_{A} = 90^{\circ} - \angle M_{A}I X^{*} = 90^{\circ} - \angle A I Y,\]
where the second equality is by Claim 2. We have that \(A\) and \(I\) are reflections in line \(M_{B}M_{C}\) . Hence,
\[90^{\circ} - \angle A I Y = 90^{\circ} - \angle Y A D = 90^{\circ} - \angle Y S D = \angle D S O\]
where the second step is by Claim 1, and in the last step we are using \(O S\bot Y S\) . Putting these together,
\[\angle I Z S = \angle D I O - \angle D S O = \angle I D S,\]
proving the claim.
Let the tangents to circle \(XSM_{A}\) at \(S\) and \(Z\) intersect at \(K\). Observe from the standard Ceva–Menelaus configuration that
\[-1 = (X Y,X I;X B,X C)\stackrel {X}{=}(S,Z;P,Q).\]
This shows that \(K\) lies on line \(P Q\) . We then have
\[\angle Z K S = 180^{\circ} - 2\angle S X Z = 2\left(90^{\circ} - \angle S X I\right) = 2\left(180^{\circ} - \angle S I Z\right),\]
where we are using \(\angle I S X = 90^{\circ}\) . As \(K\) lies on the perpendicular bisector of \(S Z\) , this is enough to show that \(K\) is the centre of circle \(S I D Z\) completing the proof.
Solution 4. Solution 1 solves the problem by establishing \(\frac{SP}{SQ} = \frac{IP}{IQ} = \frac{DP}{DQ}\) , which implies that circle \(SID\) is an Apollonius circle with respect to \(P\) and \(Q\) . We demonstrate an alternate approach that only requires us to show two of the ratios \(\frac{SP}{SQ}\) , \(\frac{IP}{IQ}\) , and \(\frac{DP}{DQ}\) to be equal. This can arise from missing some of the observations in Solution 1, for example not proving Claim 3.
Claim. Given that we have shown two of the ratios listed above to be equal, it suffices to show that circle \(SID\) is orthogonal to circle \(SXM_A\), which is the same circle as circle \(SPQ\).
Proof. Supposing we have shown the orthogonality, if \(\frac{SP}{SQ} = \frac{IP}{IQ}\) or \(\frac{SP}{SQ} = \frac{DP}{DQ}\) , then we immediately have that circle \(SID\) is an Apollonius circle with respect to \(P\) and \(Q\) . If \(\frac{IP}{IQ} = \frac{DP}{DQ}\) and \(S\) does not lie on the Apollonius circle \(\mathcal{C}\) defined by this common ratio, then \(I\) and \(D\) lie on two distinct circles orthogonal to circle \(SPQ\) , namely circle \(SID\) and \(\mathcal{C}\) . This implies that \(I\) and \(D\) are inverses with respect to circle \(SPQ\) , which is a contradiction as both \(I\) and \(D\) lie inside circle \(SPQ\) . \(\square\)
Throughout this solution, we will use the properties of \(S\) from the beginning of Solution 1. Define \(O\) and \(Y\) as in previous solutions, and let \(E\) be the second intersection of circles \(SOM_A\) and \(SM_BM_C\) .

Lemma. We have that \(OE \perp AY\) .
Proof. Let \(M_A'\) , \(B'\) , and \(C'\) be the respective reflections of \(M_A\) , \(B\) , and \(C\) over line \(M_BM_C\) . As noted in Solution 3, \(A\) and \(I\) are reflections across \(M_BM_C\) . Because \(M_A\) is the centre of circle \(BIC\) , it follows that \(M_A'\) is the centre of circle \(AB'C'\) . On the other hand, \(Y\) lies on \(M_BM_C\) , so we have that \(YB \cdot YC = YB' \cdot YC'\) . Thus, \(Y\) lies on the radical axis of circles \(ABC\) and \(AB'C'\) , so \(OM_A' \perp AY\) . Finally, note that the inverses of circles \(SOM_A\) and \(SM_BM_C\) in circle \(ABC\) are line \(IM_A\) and circle \(IM_BM_C\) respectively, so \(E\) and \(M_A'\) are inverses in circle \(ABC\) . Thus, \(E\) lies on \(OM_A'\) and the lemma follows. \(\square\)
Let \(\mathcal{T}\) denote the composition of an inversion at \(S\) with radius \(\sqrt{S I\cdot S O}\) with a reflection across line \(S I\) . By standard properties of the Miquel point, \(\mathcal{T}\) swaps \(X\) and \(Y\) and any points \(Z_{1}\) and \(Z_{2}\) on circle \(A B C\) with \(I\in Z_{1}Z_{2}\) . Hence, \(\mathcal{T}\) swaps the pairs \((A,M_{A})\) , \((B,M_{B})\) , \((C,M_{C})\) , \((O,I)\) , and \((X,Y)\) . As \(D = A I\cap B C\) and \(E\) is the intersection of circles \(S O M_{A}\) and \(S M_{B}M_{C}\) , we have that \(\mathcal{T}(D) = E\) . Thus, \(\mathcal{T}\) maps circles \(S I D\) and \(S X M_{A}\) to lines \(O E\) and \(A Y\) , so by the Lemma, circles \(S I D\) and \(S X M_{A}\) are orthogonal, as required.

|
IMOSL-2024-G8
|
Let \(A B C\) be a triangle with \(A B< A C< B C\) , and let \(D\) be a point in the interior of segment \(B C\) . Let \(E\) be a point on the circumcircle of triangle \(A B C\) such that \(A\) and \(E\) lie on opposite sides of line \(B C\) and \(\angle B A D = \angle E A C\) . Let \(I\) , \(I_{B}\) , \(I_{C}\) , \(J_{B}\) , and \(J_{C}\) be the incentres of triangles \(A B C\) , \(A B D\) , \(A D C\) , \(A B E\) , and \(A E C\) , respectively.
Prove that \(I_{B}\) , \(I_{C}\) , \(J_{B}\) , and \(J_{C}\) are concyclic if and only if lines \(A I\) , \(I_{B}J_{C}\) , and \(J_{B}I_{C}\) concur.
|
Solution 1. Let \(X\) be the intersection of \(I_{B}J_{C}\) and \(J_{B}I_{C}\) . We will prove that, provided that \(A B< A C< B C\) , the following two conditions are equivalent:
(1) \(A X\) bisects \(\angle B A C\) ;
(2) \(I_{B}\) , \(I_{C}\) , \(J_{B}\) , and \(J_{C}\) are concyclic.
Let circles \(A I B\) and \(A I C\) meet \(B C\) again at \(P\) and \(Q\) , respectively. Note that \(A B = B Q\) and \(A C = C P\) because the centres of circles \(A I B\) and \(A I C\) lie on \(C I\) and \(B I\) , respectively. Thus, \(B\) , \(P\) , \(Q\) , and \(C\) are collinear in this order as \(B Q + P C = A B + A C > B C\) by the triangle inequality.
Claim 1. Points \(P\) , \(J_{B}\) , and \(I_{C}\) are collinear, and points \(Q\) , \(I_{B}\) , and \(J_{C}\) are collinear.
Proof. We have that
\[\angle A J_{B}B = 90^{\circ} + \frac{1}{2}\angle A E B = 90^{\circ} + \frac{1}{2}\angle A C B = \angle A I B = \angle A P B,\]
so \(A B J_{B}P\) is cyclic. As \(A\) is the centre of spiral similarity between \(A B E\) and \(A D C\) , it is also the centre of spiral similarity between \(A B J_{B}\) and \(A D I_{C}\) . Hence, \(A\) is the Miquel point of self- intersecting quadrilateral \(B D I_{C}J_{B}\) , so \(P\) lies on \(J_{B}I_{C}\) . Analogously, we have that \(Q\) lies on \(I_{B}J_{C}\) . \(\square\)
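Claim 1 is also easy to sanity-check numerically. The sketch below is not part of the solution: the coordinates, the tolerance, and all helper names are arbitrary choices. It constructs \(E\) as the second intersection of the reflection of ray \(AD\) in the bisector of \(\angle BAC\) with the circumcircle (which encodes \(\angle BAD = \angle EAC\)), then tests the collinearity of \(P\) , \(J_{B}\) , and \(I_{C}\) .

```python
import math

def circumcenter(A, B, C):
    # centre of the circle through A, B, C (perpendicular-bisector formula)
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * ((bx - ax) * (cy - ay) - (by - ay) * (cx - ax))
    ux = ((bx**2 - ax**2 + by**2 - ay**2) * (cy - ay)
          - (cx**2 - ax**2 + cy**2 - ay**2) * (by - ay)) / d
    uy = ((cx**2 - ax**2 + cy**2 - ay**2) * (bx - ax)
          - (bx**2 - ax**2 + by**2 - ay**2) * (cx - ax)) / d
    return (ux, uy)

def incenter(A, B, C):
    # weighted average of the vertices by the opposite side lengths
    a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
    s = a + b + c
    return ((a * A[0] + b * B[0] + c * C[0]) / s,
            (a * A[1] + b * B[1] + c * C[1]) / s)

def unit(x, y):
    n = math.hypot(x, y)
    return (x / n, y / n)

A, B, C = (0.0, 2.0), (-1.0, 0.0), (2.0, 0.0)   # AB < AC < BC
D = (0.5, 0.0)                                  # interior of BC

# direction of AE: reflect AD in the internal bisector of angle BAC
ub = unit(B[0] - A[0], B[1] - A[1])
uc = unit(C[0] - A[0], C[1] - A[1])
u = unit(ub[0] + uc[0], ub[1] + uc[1])          # bisector direction at A
v = (D[0] - A[0], D[1] - A[1])
t = v[0] * u[0] + v[1] * u[1]
w = (2 * t * u[0] - v[0], 2 * t * u[1] - v[1])  # reflected direction

O = circumcenter(A, B, C)
s = -2 * (w[0] * (A[0] - O[0]) + w[1] * (A[1] - O[1])) / (w[0]**2 + w[1]**2)
E = (A[0] + s * w[0], A[1] + s * w[1])          # second meet with circle ABC

I = incenter(A, B, C)
J_B = incenter(A, B, E)
I_C = incenter(A, D, C)

# P: second intersection of circle AIB with line BC (the x-axis);
# the two x-axis intersections of that circle sum to twice its centre's x
Ob = circumcenter(A, I, B)
P = (2 * Ob[0] - B[0], 0.0)

cross = (J_B[0] - P[0]) * (I_C[1] - P[1]) - (J_B[1] - P[1]) * (I_C[0] - P[0])
assert abs(cross) < 1e-9
print("P, J_B, I_C collinear (numerically)")
```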

Throughout the rest of the solution, we will use directed angles.
Proof of (1) \(\Longrightarrow\) (2). We assume that (1) holds.
Claim 1 and the similarities \(ABDI_{B} \sim AECJ_{C}\) and \(ABEJ_{B} \sim ADCI_{C}\) tell us that
\[\angle I_{B}X I_{C} = \angle J_{C}Q C + \angle B P J_{B} = \angle J_{C}A C + \angle B A J_{B} = \angle I_{B}A D + \angle D A I_{C} = \angle I_{B}A I_{C},\]
so \(AI_{B}X I_{C}\) is cyclic. Also, as \(X \in AI\) , we have that
\[\angle I_{B}A X = \angle B A I - \angle B A I_{B} = \angle I_{B}A I_{C} - \angle I_{B}A D = \angle D A I_{C}.\]
Using these, we have that
\[\angle I_{B}I_{C}P = \angle I_{B}A X = \angle D A I_{C} = \angle B A J_{B} = \angle B P J_{B},\]
so \(I_{B}I_{C} \parallel BC\) . Hence,
\[\angle I_{B}I_{C}J_{B} = \angle B P J_{B} = \angle B I J_{B} = \angle I_{B}I J_{B},\]
so \(II_{B}J_{B}I_{C}\) is cyclic. Analogously, we have that \(II_{C}J_{C}I_{B}\) is cyclic, so \(I_{B}J_{B}J_{C}I_{C}\) is cyclic, thus proving (2). \(\square\)
Proof of (2) \(\Longrightarrow\) (1). We assume that (2) holds.
Claim 2. Circles \(IBC\) , \(IJ_{B}I_{C}\) , and \(II_{B}J_{C}\) are tangent at \(I\) .
Proof. Using the cyclic quadrilateral \(BIJ_{B}P\) , we have that
\[\angle I B C = \angle I B P = \angle I J_{B}P = \angle I J_{B}I_{C}.\]
As \(C\) , \(I_{C}\) , and \(I\) are collinear, the tangents to circles \(IJ_{B}I_{C}\) and \(IBC\) at \(I\) coincide, so circles \(IJ_{B}I_{C}\) and \(IBC\) are tangent at \(I\) . Analogously, circles \(II_{B}J_{C}\) and \(IBC\) are tangent at \(I\) , so all three circles are tangent at \(I\) . \(\square\)
Claim 3. Point \(I\) lies on circle \(I_{B}J_{B}J_{C}I_{C}\) .
Proof. Suppose that \(I\) does not lie on circle \(I_{B}J_{B}J_{C}I_{C}\) . Then the circles \(II_{B}J_{C}\) , \(IJ_{B}I_{C}\) , and \(I_{B}J_{B}J_{C}I_{C}\) are distinct. We apply the radical axis theorem to these three circles. By Claim 2, the radical axis of circles \(II_{B}J_{C}\) and \(IJ_{B}I_{C}\) is the tangent to circle \(IBC\) at \(I\) . As \(I_{B}J_{C}\) and \(J_{B}I_{C}\) intersect at \(X\) , \(IX\) must be tangent to circle \(IBC\) .
However, by Claim 1 we have that \(X\) is the intersection of \(PI_{C}\) and \(QI_{B}\) . As \(D\) lies in the interior of segment \(BC\) , \(I_{B}\) lies in the interior of segment \(BI\) and \(I_{C}\) lies in the interior of segment \(CI\) . Hence, \(I_{B}\) , \(P\) , \(Q\) , and \(I_{C}\) all lie on the perimeter of triangle \(IBC\) in this order, so \(X\) must be in the interior of triangle \(IBC\) . This means that \(IX\) cannot be tangent to circle \(BIC\) , so \(I\) must lie on circle \(I_{B}J_{B}J_{C}I_{C}\) . \(\square\)
By Claims 2 and 3, circles \(II_{B}I_{C}\) and \(IBC\) are tangent, so \(I_{B}I_{C} \parallel BC\) . Since \(IJ_{B}J_{C}I_{C}\) is cyclic, we have that
\[\angle P J_{B}J_{C} = \angle I_{C}J_{B}J_{C} = \angle I_{C}I_{B}J_{C} = \angle P Q I_{B} = \angle P Q J_{C},\]
so \(PJ_{B}J_{C}Q\) is cyclic. By the radical axis theorem on circles \(AIPJ_{B}\) , \(AIQJ_{C}\) , and \(PJ_{B}J_{C}Q\) , we have that \(AI\) , \(I_{B}J_{C}\) , and \(J_{B}I_{C}\) concur at \(X\) , thus proving (1). \(\square\)
Solution 2. Let \(X\) be the intersection of \(I_{B}J_{C}\) and \(J_{B}I_{C}\) . As in Solution 1, we will prove that conditions (1) and (2) are equivalent. To do so, we introduce the new condition:
(3) \(I_{B}I_{C} \parallel BC\)
and show that (3) is equivalent to both (1) and (2), provided that \(AB < AC < BC\) .
Note that \(ABD \stackrel{+}{\sim} AEC\) and \(ABE \stackrel{+}{\sim} ADC\) , where \(\stackrel{+}{\sim}\) denotes positive similarity. We will make use of the following fact.
Fact. For points \(P\) , \(P_{1}\) , \(P_{2}\) , \(P_{3}\) , and \(P_{4}\) , the positive similarities
\[P P_{1}P_{2}\stackrel {+}{\sim} P P_{3}P_{4}\quad \mathrm{and}\quad P P_{1}P_{3}\stackrel {+}{\sim} P P_{2}P_{4}\]
are equivalent.
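The Fact is immediate in complex numbers: placing \(P\) at the origin, both similarities reduce to the single identity \(p_{1}p_{4} = p_{2}p_{3}\) . A quick sketch (the sample points are arbitrary choices, not from the solution):

```python
# with P at the origin, PP1P2 ~+ PP3P4 reads p1/p2 = p3/p4, and
# PP1P3 ~+ PP2P4 reads p1/p3 = p2/p4; both say p1*p4 = p2*p3
p1, p2, p3 = 1 + 2j, -0.5 + 1j, 2 - 1j
p4 = p2 * p3 / p1          # chosen so that PP1P2 ~+ PP3P4 holds

assert abs(p1 / p2 - p3 / p4) < 1e-12  # first similarity
assert abs(p1 / p3 - p2 / p4) < 1e-12  # the equivalent second similarity
print("ok")
```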
Proof of (1) \(\Longleftrightarrow\) (3). Let \(A I_{B}\) and \(A I_{C}\) meet \(B C\) at \(S\) and \(T\) , respectively. Let \(A J_{B}\) meet \(B E\) at \(K\) , \(A J_{C}\) meet \(C E\) at \(L\) , and \(K T\) and \(S L\) meet at \(Y\) .

Claim 1. Line \(A Y\) bisects \(\angle B A C\) .
Proof. Let \(Y^{\prime}\) be the intersection of \(K T\) and the bisector of \(\angle B A C\) . As
\[\angle B A K = \frac{1}{2}\angle B A E = \frac{1}{2}\angle D A C = \angle T A C,\]
\(A Y^{\prime}\) also bisects \(\angle K A T\) . Hence, \(Y^{\prime}\) is the foot of the bisector of \(\angle K A T\) in triangle \(A K T\) . Using the Fact, we have that
\[A B E\stackrel {+}{\sim}A D C\Longrightarrow A B E K\stackrel {+}{\sim}A D C T\] \[\Longrightarrow A B D\stackrel {+}{\sim}A K T\stackrel {+}{\sim}A E C\] \[\Longrightarrow A B D S\stackrel {+}{\sim}A K T Y^{\prime}\stackrel {+}{\sim}A E C L\] \[\Longrightarrow A B E K\stackrel {+}{\sim}A S L Y^{\prime}\stackrel {+}{\sim}A D C T.\]
As \(K\) lies on \(B E\) , we have that \(Y^{\prime}\) lies on \(S L\) , so \(Y = Y^{\prime}\) and \(A Y\) bisects \(\angle B A C\) . \(\square\)
We show that \(X\) lies on \(A Y\) if and only if \(I_{B}I_{C}\parallel B C\) , which implies the equivalence of (1) and (3) by Claim 1. Let \(A Y\) meet \(I_{B}J_{C}\) and \(J_{B}I_{C}\) at \(X_{1}\) and \(X_{2}\) , respectively. As \(A B D\) and \(A E C\) are similar, we have that \(\frac{A I_{B}}{A S} = \frac{A J_{C}}{A T}\) , so \(I_{B}J_{C}\parallel S L\) . Analogously, we have that \(I_{B}I_{C}\parallel K T\) . Hence, \(X_{1}\) and \(X_{2}\) coincide with \(X\) if and only if
\[\frac{A I_{B}}{A S} = \frac{A X_{1}}{A Y} = \frac{A X_{2}}{A Y} = \frac{A I_{C}}{A T},\]
which is equivalent to \(I_{B}I_{C}\parallel B C\) . \(\square\)
Proof of (2) \(\Longleftrightarrow\) (3). Let \(A J_{B}\) and \(A J_{C}\) meet circle \(A B C\) at \(M\) and \(N\) , respectively, and let \(I_{B}^{\prime}\) and \(I_{C}^{\prime}\) be the \(A\) - excentres of \(A B D\) and \(A D C\) , respectively.

Claim 2. Lines \(I_{B}I_{C}\) , \(I_{B}^{\prime}I_{C}^{\prime}\) , and \(B C\) are concurrent or pairwise parallel.
Proof. We work in the projective plane. Let \(I_{B}I_{C}\) and \(I_{B}^{\prime}I_{C}^{\prime}\) meet \(B C\) at \(Z\) and \(Z^{\prime}\) , respectively. Note that \(Z\) is the intersection of the external common tangents of the incircles of \(A B D\) and \(A D C\) and \(A D\) is a common internal tangent of the incircles of \(A B D\) and \(A D C\) , so \((A D,A Z;A I_{B},A I_{C}) = - 1\) . Applying the same argument to the \(A\) - excircles of \(A B D\) and \(A D C\) gives \((A D,A Z^{\prime};A I_{B}^{\prime},A I_{C}^{\prime}) = - 1\) , which means that \(Z = Z^{\prime}\) . Thus, \(I_{B}I_{C}\) , \(I_{B}^{\prime}I_{C}^{\prime}\) , and \(B C\) concur, possibly at infinity. \(\square\)

Claim 3. Lines \(J_{B}I_{C}\) and \(C M\) are parallel, and lines \(I_{B}J_{C}\) and \(B N\) are parallel.
Proof. Using the Fact, we have that
\[A B E\stackrel {+}{\sim}A D C\Longrightarrow A B E J_{B}\stackrel {+}{\sim}A D C I_{C}\Longrightarrow A J_{B}I_{C}\stackrel {+}{\sim}A B D.\]
Thus, \(\angle (B D,J_{B}I_{C}) = \angle B A J_{B} = \angle B C M\) , so \(J_{B}I_{C}\parallel C M\) . Similarly, we have that \(I_{B}J_{C}\parallel B N\) . \(\square\)
Claim 4. The centre of spiral similarity between \(J_{B}J_{C}\) and \(I_{B}^{\prime}I_{C}^{\prime}\) is \(A\) .
Proof. As \(I_{B}\) and \(I_{B}^{\prime}\) are respectively the incentre and \(A\) - excentre of triangle \(A B D\) , we have that \(A B I_{B}^{\prime}\stackrel {+}{\sim}A I_{B}D\) . Using the similarity \(A B D\stackrel {+}{\sim}A E C\) , this means that \(A B I_{B}^{\prime}\stackrel {+}{\sim}A J_{C}C\) , so \(A B\cdot A C = A I_{B}^{\prime}\cdot A J_{C}\) and \(\angle B A I_{B}^{\prime} = \angle J_{C}A C\) . Similarly, we have that \(A B\cdot A C = A J_{B}\cdot A I_{C}^{\prime}\) and \(\angle B A J_{B} = \angle I_{C}^{\prime}A C\) . Together, these imply that \(A I_{B}^{\prime}\cdot A J_{C} = A J_{B}\cdot A I_{C}^{\prime}\) and \(\angle J_{B}A J_{C} = \angle I_{B}^{\prime}A I_{C}^{\prime}\) , so \(A J_{B}J_{C}\stackrel {+}{\sim}A I_{B}^{\prime}I_{C}^{\prime}\) . \(\square\)
We proceed using directed angles. By Claim 3, we have that \(I_{B}J_{B}J_{C}I_{C}\) is cyclic if and only if
\[\angle I_{B}I_{C}J_{B} = \angle I_{B}J_{C}J_{B} \iff \angle I_{B}I_{C}J_{B} + \angle M C B = \angle I_{B}J_{C}J_{B} + \angle M N B\] \[\iff \angle (I_{B}I_{C},B C) = \angle (M N,J_{B}J_{C}).\]
By Claim 4, we have that
\[\angle (J_{B}J_{C},I_{B}^{\prime}I_{C}^{\prime}) = \angle J_{B}A I_{B}^{\prime}\] \[\qquad = \angle B A I_{B} + \angle M A B\] \[\qquad = \angle E A J_{C} + \angle M A B\] \[\qquad = \angle N A C + \angle M A B\] \[\qquad = \angle (M N,B C),\]
which is equivalent to \(\angle (B C,I_{B}^{\prime}I_{C}^{\prime}) = \angle (M N,J_{B}J_{C})\) . Thus, \(I_{B}J_{B}J_{C}I_{C}\) is cyclic if and only if
\[\angle (I_{B}I_{C},B C) = \angle (B C,I_{B}^{\prime}I_{C}^{\prime}). \quad (*)\]
Suppose that \(I_{B}I_{C}\) is parallel to \(B C\) . By Claim 2, \(I_{B}^{\prime}I_{C}^{\prime}\) is also parallel to \(B C\) , so we have that \(\angle (I_{B}I_{C},B C) = \angle (B C,I_{B}^{\prime}I_{C}^{\prime}) = 0^{\circ}\) . Thus, \((*)\) is satisfied, so \(I_{B}J_{B}J_{C}I_{C}\) is cyclic.

Suppose now that \(I_{B}I_{C}\) is not parallel to \(B C\) while \(I_{B}J_{B}J_{C}I_{C}\) is cyclic. By Claim 2, \(I_{B}I_{C}\) , \(I_{B}^{\prime}I_{C}^{\prime}\) , and \(B C\) concur at a point \(Z\) . As \(I_{B}\) and \(I_{C}\) lie on segments \(B I\) and \(C I\) , \(Z\) must lie outside segment \(B C\) . Since \(A\) is the intersection of the common external tangents of the incircle and \(A\) - excircle of \(A B D\) , and \(Z D\) is a common internal tangent of the incircle and \(A\) - excircle of \(A B D\) , we have that \((Z A,Z D;Z I_{B},Z I_{B}^{\prime}) = - 1\) . By \((\ast)\) , \(Z D\) bisects \(\angle I_{B}Z I_{B}^{\prime}\) , so \(\angle A Z D = 90^{\circ}\) : that is, \(Z\) is the foot of the perpendicular from \(A\) to \(B C\) . But this implies that \(\angle A B C\) or \(\angle B C A\) is obtuse, contradicting the fact that \(A B< A C< B C\) . \(\square\)
Comment. While we have written the solution using harmonic bundles for the sake of brevity, there are ways to prove Claim 2 and obtain the final contradiction without the use of projective geometry. Claim 2 can be proven using an application of Menelaus's theorem, and the final contradiction can be obtained using the fact that an excircle of a triangle is always larger than its incircle.
Solution 3. Let \(\omega_{B}\) and \(\omega_{C}\) denote circles \(A I B\) and \(A I C\) , respectively. Introduce \(P\) , \(Q\) and \(X\) as in Solution 1 and recall from Claim 1 in Solution 1 that \(P\) , \(J_{B}\) and \(I_{C}\) are collinear with \(J_{B}\) lying on \(\omega_{B}\) . From this, we can define \(J_{B}\) and \(I_{C}\) in terms of \(X\) by \(I_{C} = X P\cap C I\) and \(J_{B}\neq P\) as the second intersection of line \(X P\) with \(\omega_{B}\) . Similarly, we can define \(I_{B} = X Q\cap B I\) and \(J_{C}\neq Q\) as the second intersection of line \(X Q\) with \(\omega_{C}\) . Note that this now detaches the definitions of points \(I_{B}\) , \(I_{C}\) , \(J_{B}\) , and \(J_{C}\) from points \(D\) and \(E\) .
Let \(\ell\) be a line passing through \(I\) . We now allow \(X\) to vary along \(\ell\) while fixing \(\triangle A B C\) and points \(I\) , \(P\) , and \(Q\) . We use the definitions from above to construct \(I_{B}\) , \(I_{C}\) , \(J_{B}\) , and \(J_{C}\) . We will classify all cases where these four points are concyclic. Throughout the rest of the solution we use directed angles and directed lengths.
For nondegeneracy reasons, we exclude cases where \(X = I\) and \(X\) lies on line \(B C\) , which means that \(I_{B}\) , \(J_{B}\neq B\) and \(I_{C}\) , \(J_{C}\neq C\) . We also exclude the cases where \(\ell\) is tangent to either \(\omega_{B}\) or \(\omega_{C}\) . Similar results hold in these cases and they can be treated as limit cases.

Claim 1. Line \(I_{B}J_{B}\) passes through a fixed point on \(\omega_{B}\) , and line \(I_{C}J_{C}\) passes through a fixed point on \(\omega_{C}\) as \(X\) varies on \(\ell\) .
Proof. Let \(U\neq J_{B}\) be the second intersection of \(I_{B}J_{B}\) with \(\omega_{B}\) . We have by the law of sines that
\[\frac{\sin\angle I J_{B}U}{\sin\angle U J_{B}B} = \frac{\sin\angle I J_{B}I_{B}}{\sin\angle I_{B}J_{B}B} = \frac{\sin\angle J_{B}I I_{B}}{\sin\angle J_{B}B I_{B}}\cdot \frac{I I_{B}}{I_{B}B} = \frac{\sin\angle J_{B}I B}{\sin\angle J_{B}B I}\cdot \frac{I I_{B}}{I_{B}B} = \frac{\sin\angle X P Q}{\sin\angle X P I}\cdot \frac{I I_{B}}{I_{B}B}.\]
We also have
\[\frac{I I_{B}}{I_{B}B} = \frac{\sin\angle I Q I_{B}}{\sin\angle I_{B}Q B}\cdot \frac{|I Q|}{|B Q|} = \frac{\sin\angle I Q X}{\sin\angle X Q P}\cdot \frac{|I Q|}{|B Q|}.\]
Combining these and applying Ceva's Theorem in \(\triangle P I Q\) with point \(X\) , we get
\[\frac{\sin\angle I J_{B}U}{\sin\angle U J_{B}B} = \frac{\sin\angle X P Q}{\sin\angle X P I}\cdot \frac{\sin\angle I Q X}{\sin\angle X Q P}\cdot \frac{|I Q|}{|B Q|} = \frac{\sin\angle X I Q}{\sin\angle X I P}\cdot \frac{|I Q|}{|B Q|} = \frac{\sin\angle(\ell,I Q)}{\sin\angle(\ell,I P)}\cdot \frac{|I Q|}{|B Q|},\]
which is independent of the choice of \(X\) on \(\ell\) . As \(\angle I J_{B}U + \angle U J_{B}B = \angle I J_{B}B = \angle I A B\) is fixed, this is enough to show that the point \(U\) is fixed on \(\omega_{B}\) .
Similarly, if we define \(V\neq J_{C}\) to be the second intersection of \(I_{C}J_{C}\) with \(\omega_{C}\) , we get that \(V\) is fixed on \(\omega_{C}\) . \(\square\)
Let \(G\neq X\) and \(H\neq X\) be the second intersections of \(\ell\) with \(\omega_{B}\) and \(\omega_{C}\) , respectively, which exist as we are assuming that \(\ell\) is not tangent to either \(\omega_{B}\) or \(\omega_{C}\) .
Claim 2. Points \(U\) , \(G\) , and \(Q\) are collinear, and points \(V\) , \(H\) , and \(P\) are collinear.
Proof. Taking \(X = G\) , we have \(J_{B} = G\) and \(I_{B} = X Q\cap B I\) . Both of these points lie on line \(Q G\) which, by Claim 1, shows that \(U\) , \(G\) , \(Q\) are collinear. Similarly, \(V\) , \(H\) , \(P\) are collinear. \(\square\)
Claim 3. Points \(I_{B}\) , \(I_{C}\) , \(J_{B}\) , \(J_{C}\) are concyclic if and only if points \(P\) , \(Q\) , \(G\) , \(H\) are concyclic. In particular, this depends only on \(\ell\) , not on the choice of \(X\) on \(\ell\) .
Proof. We have that
\[\angle I_{C}J_{B}I_{B} = \angle P J_{B}U = \angle P G U = \angle P G Q\quad \mathrm{and}\quad \angle I_{C}J_{C}I_{B} = \angle V J_{C}Q = \angle V H Q = \angle P H Q.\]
Thus \(\angle I_{C}J_{B}I_{B} = \angle I_{C}J_{C}I_{B} \iff \angle P G Q = \angle P H Q\) , which proves the claim. \(\square\)
Claim 4. \(P\) , \(Q\) , \(G\) , \(H\) are concyclic if and only if \(\ell \in \{I A, I P, I Q, t\}\) where \(t\) is the tangent to circle \(B I C\) at \(I\) .
Proof. When \(\ell = I A\) , we have \(G = H = A\) so the cyclic condition from Claim 3 holds. Similarly, when \(\ell = I P\) or \(\ell = I Q\) , \(G = P\) or \(H = Q\) , respectively, so again the cyclic condition holds.
Now, consider the case where \(\ell \notin \{I A, I P, I Q\}\) . In this case it is straightforward to see that the four points \(P\) , \(Q\) , \(G\) , and \(H\) are distinct. We then have that \(\angle Q P G = \angle B P G = \angle B I G\) , so
\[P Q G H \text{ concyclic} \iff \angle Q H G = \angle Q P G \iff \angle Q H G = \angle B I G \iff Q H \parallel B I.\]
We also have that \(\angle C Q H = \angle C I H\) , so
\[\ell \text{ tangent to circle } B I C \iff \angle C I H = \angle C B I \iff \angle C Q H = \angle C B I \iff Q H \parallel B I.\]
Hence, in this case \(P\) , \(Q\) , \(G\) , \(H\) are concyclic if and only if \(\ell\) is tangent to circle \(B I C\) , as claimed. \(\square\)
We now revert to using points \(D\) and \(E\) to define points \(I_{B}\) , \(I_{C}\) , \(J_{B}\) , \(J_{C}\) , and \(X\) , returning to the original set- up.
Claim 5. Let \(\Gamma\) be the circle passing through \(P\) and \(Q\) that is tangent to \(IP\) and \(IQ\) , which exists as \(IP = IQ = IA\) . Then \(X\) lies on \(\Gamma\) . Furthermore, \(X\) lies on the same side of \(BC\) as \(A\) and does not lie on line \(BC\) .
Proof. We have that
\[\begin{aligned} \angle XPI & = \angle J_{B}PI = \angle J_{B}AI = \angle BAI - \angle BAJ_{B} = \angle J_{B}AJ_{C} - \angle J_{B}AE \\ & = \angle EAJ_{C} = \angle J_{C}AC = \angle J_{C}QC = \angle XQP, \end{aligned}\]
so circle \(XPQ\) is tangent to \(IP\) . Similarly, circle \(XPQ\) is tangent to \(IQ\) , so \(X\) lies on \(\Gamma\) .
As \(D\) lies in the interior of segment \(BC\) , \(I_{C}\) lies in the interior of segment \(CI\) . Since \(X\) is the second intersection of \(PI_{C}\) with \(\Gamma\) and \(IP\) is tangent to \(\Gamma\) , \(X\) lies on the open arc \(PQ\) of \(\Gamma\) on the same side of \(BC\) as \(A\) . This implies the second part of the claim. \(\square\)
By Claim 5, we cannot have \(\ell \in \{IP, IQ\}\) in the original problem. Furthermore, as shown in Claim 2 of Solution 1, we have that \(X\) lies inside triangle \(IBC\) , which means that \(\ell \neq t\) . Thus, the only remaining possibility in Claim 4 is \(\ell = AI\) . We then have
\[I_{B}I_{C}J_{B}J_{C}\text{ concyclic}\overset{\text{Claim 3}}{\iff}P Q G H\text{ concyclic}\overset{\text{Claim 4}}{\iff}X\text{ lies on }A I,\]
finishing the problem.
Comment. The condition \(AB < AC < BC\) is used in an essential way in the solutions. In Solution 1, it is used in the proof of Claim 3 to ensure that \(X\) lies in the interior of triangle \(IBC\) . In Solution 2, it is used in the final step to ensure that \(\angle ABC\) and \(\angle BCA\) cannot be obtuse. In Solution 3, it is used to exclude the case \(\ell = t\) . If the condition is removed, then the problem is no longer true: whenever \(\angle ABC\) or \(\angle BCA\) is obtuse, there exists a choice of \(D\) on \(BC\) such that \(I_{B}J_{B}J_{C}I_{C}\) is cyclic but \(AI\) , \(I_{B}J_{C}\) , and \(J_{B}I_{C}\) do not concur. This counterexample configuration can be constructed using Solution 3 by letting \(X\) be the intersection of \(t\) with \(\Gamma\) that lies on the same side of \(BC\) as \(A\) and constructing \(I_{B}\) , \(I_{C}\) , \(J_{B}\) , and \(J_{C}\) as described in the solution, from which we can reconstruct \(D\) .
Conversely, the problem holds whenever \(\angle ABC\) and \(\angle BCA\) are both not obtuse, as can be seen from Solution 2. This is thus the weakest possible condition on triangle \(ABC\) that is necessary for the problem to be true.

When \(X\) lies on the tangent to circle \(IBC\) at \(I\) , there is no contradiction in the proof of Claim 3 in Solution 1: circles \(II_{B}J_{C}\) and \(IJ_{B}I_{C}\) are distinct, and \(X\) is the radical centre of circles \(II_{B}J_{C}\) , \(IJ_{B}I_{C}\) , and \(I_{B}J_{B}J_{C}I_{C}\) . There is also no contradiction in the final step of Solution 2, and indeed \(I_{B}I_{C}\) and \(BC\) intersect at the foot of the altitude from \(A\) to \(BC\) .
There are no configuration issues with the direction \((1) \implies (2)\) . This implication holds without any constraint on triangle \(ABC\) , and the proofs in Solutions 1 and 2 apply without any modification.
|
IMOSL-2024-N1
|
Find all positive integers \(n\) with the following property: for all positive divisors \(d\) of \(n\) , we have that \(d + 1 \mid n\) or \(d + 1\) is prime.
|
Answer: \(n \in \{1, 2, 4, 12\}\) .
Solution 1. It is easy to verify that \(n = 1, 2, 4, 12\) all work. We must show they are the only possibilities. We write \(n = 2^{k} m\) , where \(k\) is a nonnegative integer and \(m\) is odd. Since \(m \mid n\) , either \(m + 1\) is prime or \(m + 1 \mid n\) .
In the former case, since \(m + 1\) is even it must be 2, so \(n = 2^{k}\) . If \(k \geqslant 3\) , we get a contradiction, since \(8 \mid n\) but \(9 \nmid n\) . Hence \(k \leqslant 2\) , so \(n \in \{1, 2, 4\}\) .
In the latter case, we have \(m + 1 \mid 2^{k} m\) and \(m + 1\) coprime to \(m\) , and hence \(m + 1 \mid 2^{k}\) . This means that \(m + 1 = 2^{j}\) with \(2 \leqslant j \leqslant k\) (since \(j = 1\) gives \(m = 1\) , which was considered earlier).
Then we have \(2^{k} + 1 \nmid n\) : since \(2^{k} + 1\) is odd, it would have to divide \(m\) but is larger than \(m\) . Hence, by the condition of the problem, \(2^{k} + 1\) is prime. If \(k = 2\) , \(j\) must be 2 as well, and this gives the solution \(n = 12\) . Also, \(2^{k - 1} + 1 \nmid n\) for \(k > 2\) : since it is odd, it would have to divide \(m\) . However, we have no solutions to \(2^{k - 1} + 1 \mid 2^{j} - 1\) with \(j \leqslant k\) : the left- hand side is greater than the right unless \(j = k\) , when the left- hand side is just over half the right- hand side.
Since we have \(2^{k} \mid n\) and \(2^{k} + 1 \nmid n\) , and \(2^{k - 1} \mid n\) and \(2^{k - 1} + 1 \nmid n\) , we must have \(2^{k} + 1\) and \(2^{k - 1} + 1\) both prime. However, \(2^{a} + 1\) is a multiple of three if \(a\) is odd, so we must have \(2^{k} + 1 = 3\) (impossible as this gives \(k = 1\) ) or \(2^{k - 1} + 1 = 3\) , which gives \(j = k = 2\) , whence \(n = 12\) .
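The classification can also be double-checked by brute force; the following sketch (the search bound 1000 is an arbitrary choice) confirms that no other \(n\) up to that bound has the property:

```python
def is_prime(m):
    # trial division is plenty for this range
    return m >= 2 and all(m % p for p in range(2, int(m**0.5) + 1))

def has_property(n):
    # every divisor d of n must satisfy d + 1 | n or d + 1 prime
    return all(n % (d + 1) == 0 or is_prime(d + 1)
               for d in range(1, n + 1) if n % d == 0)

print([n for n in range(1, 1001) if has_property(n)])  # → [1, 2, 4, 12]
```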
Solution 2. We proceed as in Solution 1 as far as determining that \(n = 2^{k}(2^{j} - 1)\) with \(j \leqslant k\) .
Now, we have \(2^{j} \mid n\) but \(2^{j} + 1 \nmid n\) , as it is odd and does not divide \(2^{j} - 1\) . Thus \(2^{j} + 1\) is prime. The theory of Fermat primes tells us we must have \(j = 2^{h}\) with \(h > 0\) .
Then \(2^{2^{h}} - 1\) is congruent to 3 or 6 (modulo 9) depending on whether \(h\) is odd or even, respectively. In particular it is not divisible by 9, so \(n = 2^{k}(2^{2^{h}} - 1)\) is not divisible by 9; so we must have \(k \leqslant 2\) , since if \(k \geqslant 3\) then \(8 \mid n\) but \(9 \nmid n\) with 9 not prime. Hence \(j = 2^{h} = k = 2\) , giving \(n = 12\) again.
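The residues of \(2^{2^{h}} - 1\) modulo 9 can be checked directly, using the fact that 2 has multiplicative order 6 modulo 9; a quick sketch:

```python
# 2^(2^h) mod 9 depends only on 2^h mod 6, and 2^h alternates between
# 2 and 4 mod 6 for h >= 1, giving residues 3 and 6 for 2^(2^h) - 1 mod 9
for h in range(1, 11):
    r = (pow(2, 2**h, 9) - 1) % 9
    assert r == (3 if h % 2 == 1 else 6)
print("residues alternate 3, 6 as claimed")
```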
Solution 3. Let \(p\) be the smallest integer not dividing \(n\) . Since \(p - 1\) is a divisor of \(n\) , \(p\) must be a prime. Let \(1 \leqslant r \leqslant p - 1\) be the remainder of \(n\) modulo \(p\) . Since \(p - r < p\) , we have \(p - r \mid n\) , so we may consider the divisor \(d = \frac{n}{p - r}\) .
Since \(p \mid n - r\) , we have \(p \mid n + p - r\) , whence \(p \mid d + 1\) . Thus \(d + 1 \nmid n\) ; so it must be prime. On the other hand, this prime is divisible by \(p\) , so we conclude \(d + 1 = p\) , which means that \(n = (p - 1)(p - r)\) .
Then from \(p - 2, p - 3 \mid n\) we get \((p - 2)(p - 3) \mid 2(p - r)\) , from which we find
\[(p - 2)(p - 3) \leqslant 2(p - r) \leqslant 2(p - 1).\]
Solving this quadratic inequality gives \(p \leqslant 5\) , which means that \(n \in \{1, 2, 4, 8, 12, 16\}\) . Of this set, \(n = 8\) and \(n = 16\) are not solutions.
Solution 4. We suppose that \(n\) is not 1 or 2.
Since \(n \mid n\) and \(n + 1 \nmid n\) , we know that \(n + 1\) is prime. Thus it is odd, so \(2 \mid n\) ; as \(n > 2\) , we have \(\frac{n}{2} \mid n\) and \(\frac{n}{2} + 1 \nmid n\) , so \(\frac{n}{2} + 1\) is prime. Thus it is also odd, so \(4 \mid n\) .
We must then have \(\frac{n}{4} + 1 \mid n\) or \(\frac{n}{4} + 1\) prime.
In the former case, \(\frac{n}{4} + 1 \mid 4\left(\frac{n}{4} + 1\right) - n\) , so \(\frac{n}{4} + 1 \mid 4\) . This means that \(n = 4\) or \(n = 12\) .
In the latter case, \(\frac{n}{4} +1\) must be odd if \(n \neq 4\) . Thus we have \(n = 8m\) where \(2m + 1\) , \(4m + 1\) , \(8m + 1\) are all prime; \(n = 8\) does not work, so \(3 | m\) (otherwise one of those numbers would be divisible by 3). Thus \(24 | n\) , so \(25 | n\) as 25 is not prime.
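The parenthetical divisibility observation is easy to verify exhaustively; a small sketch (the bound 300 is arbitrary):

```python
# if 3 does not divide m, then one of 2m+1, 4m+1 is divisible by 3
# (m = 1 mod 3 gives 3 | 2m+1, and m = 2 mod 3 gives 3 | 4m+1),
# so for m >= 1 that number exceeds 3 and cannot be prime
for m in range(1, 300):
    if m % 3 != 0:
        assert any((k * m + 1) % 3 == 0 for k in (2, 4, 8))
print("ok")
```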
Now suppose that \(p\) is the least positive integer not dividing \(n\) : as in Solution 3 we know that \(p\) is prime, and what we have done so far shows that \(p \geqslant 7\) . If \(p^2 - 1 = (p - 1)(p + 1)\) is the product of two coprime integers less than \(p\) , then it divides \(n\) ; since \(p^2\) is not prime, \(p^2\) must then also divide \(n\) , a contradiction. Now \(p - 1\) and \(p + 1\) are even and have no common factor greater than 2, so all odd prime power divisors of their product are less than \(p\) , and the only case where \(p^2 - 1\) is not a product of coprime integers less than \(p\) is when one of \(p - 1\) and \(p + 1\) is a power of 2, say \(2^m\) (with \(m \geqslant 3\) ). If \(p = 2^m - 1\) , then \(3p - 1 = 4(3 \times 2^{m - 2} - 1)\) , and \(3 \times 2^{m - 2} - 1\) is an odd integer less than \(p\) , so \(3p - 1 \mid n\) and hence \(3p \mid n\) , so \(p \mid n\) , a contradiction. Finally, if \(p = 2^m + 1\) , then \(m\) is even and \(2p - 1 = 2^{m + 1} + 1\) is a multiple of 3; the only case where it is a power of 3 is \(m = 2\) , but we have \(m \geqslant 3\) , so \(2p - 1\) is a product of coprime integers less than \(p\) and again we have a contradiction.
Solution 5. As in Solution 4, we deduce that if \(n > 2\) then \(n\) must be even. We write \(n = 2 \cdot 3^k \cdot r\) , where \(k\) is a nonnegative integer and \(3 \nmid r\) .
Since \(r\) and \(2r\) are both different and nonzero modulo 3, one of them must be congruent to 2 modulo 3. We'll say that it is \(ar\) , where \(a \in \{1, 2\}\) .
Since \(ar | n\) , we must have that \(ar + 1\) is either prime or a factor of \(n\) . In the first case, \(ar + 1 = 3\) because \(3 | ar + 1\) , and so \(n = 2 \cdot 3^k \cdot r\) , where \(r = 2 / a\) is 1 or 2. Noting that we must have \(k \leqslant 1\) (else \(9 | n\) but \(10 \nmid n\) ), we can examine cases to deduce that \(n \in \{2, 4, 12\}\) are the only possibilities.
Otherwise, \(ar + 1 | n\) . Since \(ar + 1\) is coprime to \(r\) , we must in fact have that \(ar + 1 | 2 \cdot 3^k\) , and since \(3 | ar + 1\) by assumption we deduce that \(k \geqslant 1\) . In particular, \(3^k + 1\) is an even number that is at least 4, so is not prime and must divide \(n\) . As it is coprime to 3, we must in fact have \(3^k + 1 | 2r\) .
Let \(q_1\) and \(q_2\) be such that \(q_1(ar + 1) = 2 \cdot 3^k\) and \(q_2(3^k + 1) = 2r\) . We have that \(q_1ar < 2 \cdot 3^k\) and \(q_2 3^k < 2r\) , and multiplying these together gives \(q_1q_2a < 4\) .
If \(a = 2\) then \(q_1 = q_2 = 1\) , so \(2r + 1 = 2 \cdot 3^k\) , which is not possible (considering both sides modulo 2).
If \(a = 1\) then \(r\) must be equivalent to 2 modulo 3, so \(q_2(3^k + 1) = 2r\) gives that \(q_2\) is equivalent to 1 modulo 3, whence \(q_2 = 1\) . So we deduce that \(2r = 3^k + 1\) . Thus, we deduce that \(q_1(3^k + 3) = 4 \cdot 3^k\) , which rearranges to give \(3^{k - 1}(4 - q_1) = q_1\) , whence \(3^{k - 1} \leqslant q_1 < 4\) and so \(k \leqslant 2\) . We can examine cases to deduce that \(n = 12\) is the only possibility.
|
IMOSL-2024-N2
|
Determine all finite, nonempty sets \(\mathcal{S}\) of positive integers such that for every \(a\) , \(b \in \mathcal{S}\) there exists \(c \in \mathcal{S}\) with \(a \mid b + 2c\) .
|
Answer: The possible sets are \(\mathcal{S} = \{t\}\) and \(\mathcal{S} = \{t, 3t\}\) for any positive integer \(t\) .
Solution 1. Without loss of generality, we may divide all elements of \(\mathcal{S}\) by any common factor, after which they cannot all be even. As \(a \nmid b + 2c\) for \(a\) even and \(b\) odd, the elements of \(\mathcal{S}\) are all odd.
We now divide into three cases:
Case 1: \(|\mathcal{S}| = 1\) .
The set \(\mathcal{S} = \{t\}\) clearly works.
Case 2: \(|\mathcal{S}| = 2\) .
Say \(\mathcal{S} = \{r, s\}\) with \(r < s\) , so either \(s \mid r + 2r\) or \(s \mid r + 2s\) , and in either case \(s \mid 3r\) . We cannot have \(s = 3r / 2\) as we assumed that \(r\) is odd, so \(s = 3r\) and \(\mathcal{S} = \{r, 3r\}\) , which clearly works by examining cases for \(a\) and \(b\) .
Case 3: \(|\mathcal{S}| \geq 3\) .
If all elements of \(\mathcal{S}\) are odd then for any \(b\) , \(c \in \mathcal{S}\) , \(b + 2c \not\equiv b \pmod{4}\) . If \(a \mid b + 2c\) with \(a \equiv b \pmod{4}\) , this means there exists \(k\) with \(b + 2c = ka\) and \(k \equiv 3 \pmod{4}\) , so \(k \geq 3\) . If \(a\) is the greatest element of \(\mathcal{S}\) and \(b < a\) , we have \(b + 2c < 3a\) , a contradiction. Thus when \(a\) is the greatest element, no \(b < a\) has \(b \equiv a \pmod{4}\) (and thus all elements other than the greatest are congruent modulo 4).
Let \(d\) and \(e\) be the largest and second largest elements of \(\mathcal{S}\) respectively. Let \(f \neq d, e\) be any other element of \(\mathcal{S}\) . There is some \(c \in \mathcal{S}\) with \(e \mid f + 2c\) , and \(f + 2c \not\equiv e \pmod{4}\) , so \(f + 2c \geq 3e\) , so \(c > e\) . Since \(e\) is the second largest element of \(\mathcal{S}\) , \(c = d\) , so \(e \mid f + 2d\) , and this holds for all \(f \in \mathcal{S}\) with \(f < e\) , but can only hold for at most one such \(f\) . So \(|\mathcal{S}| \leq 3\) .
Hence the elements of \(\mathcal{S}\) are \(d > e > f\) , and by the discussion above without loss of generality we may suppose these elements are all odd, \(e \equiv f \pmod{4}\) and \(d \not\equiv e \pmod{4}\) . We have shown above that \(e \mid f + 2d\) . Furthermore, there exists some \(c \in \mathcal{S}\) with \(d \mid f + 2c\) , and \(c \neq d\) as \(d > f\) so \(d \nmid f\) , so \(c \leq e\) ; as \(f + 2e < 3e\) , we have \(e > d / 3\) . Since \(f + 2c\) is odd and \(f + 2c < 3d\) , we have \(f + 2c = d\) .
Subcase 3.1: \(c = f\) .
Here \(d = 3f\) and \(e \mid f + 2d = 7f\) . As \(e > f\) and \(e \equiv f \pmod{4}\) , we have \(e = 7f / 3\) , so the elements are in ratio \(3 : 7 : 9\) . But \(a = 7\) and \(b = 9\) have no corresponding value of \(c\) .
Subcase 3.2: \(c = e\) .
Here \(d = f + 2e\) and \(e \mid f + 2d = 3f + 4e\) so \(e \mid 3f\) . But this is not possible with \(e > f\) and \(e \equiv f \pmod{4}\) .
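A brute-force search over small sets is consistent with this classification; the following sketch (the range bound 20 and set sizes up to 3 are arbitrary choices) finds exactly the singletons and the pairs \(\{t, 3t\}\) :

```python
from itertools import combinations

def valid(S):
    # for every a, b in S there must exist c in S with a | b + 2c
    return all(any((b + 2 * c) % a == 0 for c in S) for a in S for b in S)

found = [S for k in (1, 2, 3)
         for S in combinations(range(1, 21), k) if valid(S)]

# every singleton works; the only larger sets found are the pairs {t, 3t}
assert all(len(S) == 1 or (len(S) == 2 and S[1] == 3 * S[0]) for S in found)
print([S for S in found if len(S) > 1])  # → [(1, 3), (2, 6), (3, 9), (4, 12), (5, 15), (6, 18)]
```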
Solution 2. As in Solution 1, we reduce to the case where all elements of \(\mathcal{S}\) are odd. Since all one- element sets satisfy the given conditions, we show that if \(|\mathcal{S}| \geq 2\) , then \(|\mathcal{S}| = 2\) and \(\mathcal{S} = \{t, 3t\}\) for some positive integer \(t\) .
Let \(d\) be the largest element. For any \(e \in \mathcal{S}\) with \(e \neq d\) there must be an \(f \in \mathcal{S}\) such that \(d \mid e + 2f\) . This implies \(2f \equiv -e \pmod{d}\) , hence \(2f \equiv d - e \pmod{d}\) . Now \(d - e\) is even (because all elements in \(\mathcal{S}\) are odd) and \(d\) is odd, so \(\frac{d - e}{2}\) is an integer and we have \(f \equiv \frac{d - e}{2} \pmod{d}\) . Further, \(0 < \frac{d - e}{2} < d\) , while we must also have \(0 < f \leq d\) , so \(f = \frac{d - e}{2}\) . We conclude that for any \(e \in \mathcal{S}\) with \(e \neq d\) the integer \(\frac{d - e}{2}\) is also in \(\mathcal{S}\) and not equal to \(d\) .
Denote by \(e_{1} < e_{2} < \dots < e_{k} < d\) the elements of \(\mathcal{S}\) , where \(k \geq 1\) . Then \(\frac{d - e_{1}}{2} > \frac{d - e_{2}}{2} > \dots > \frac{d - e_{k}}{2}\) are also elements of \(\mathcal{S}\) , none of them equal to \(d\) . Hence we must have \(e_{1} = \frac{d - e_{k}}{2}\) and
\(e_{k} = \frac{d - e_{1}}{2}\) , so \(2e_{1} + e_{k} = d = 2e_{k} + e_{1}\) . We conclude \(e_{1} = e_{k}\) , so \(k = 1\) , and also \(d = 2e_{k} + e_{1} = 3e_{1}\) . Hence \(\mathcal{S} = \{e_{1},3e_{1}\}\) for some positive integer \(e_{1}\) .
Solution 3. As in Solution 1, we reduce to the case where all elements of \(\mathcal{S}\) are odd. Since all one-element sets satisfy the given conditions, we show that if \(|\mathcal{S}|\geq 2\) , then \(|\mathcal{S}| = 2\) and \(\mathcal{S} = \{t,3t\}\) for some positive integer \(t\) .
Let \(d\) be the largest element, and let \(e\in \mathcal{S}\) be any other element. We will say that \(x\in \mathcal{S}\) (mod \(d\) ) if the unique element \(y\) in \(\{1,\ldots ,d\}\) such that \(x\equiv y \pmod{d}\) is an element of \(\mathcal{S}\) . Note that since \(d\) is the largest element, if \(x\in\mathcal{S}\) and \(x\neq d\) , then \(x\not\equiv 0 \pmod{d}\) . The given condition implies that if \(b\in \mathcal{S}\) , then \(- \frac{b}{2}\in \mathcal{S}\) (mod \(d\) ). Repeating this gives \(- \frac{b}{2}\in \mathcal{S}\Rightarrow \frac{b}{4}\in \mathcal{S}\) (mod \(d\) ), and by iterating, we have \(b\in \mathcal{S}\Rightarrow \frac{b}{(- 2)^{k}}\in \mathcal{S}\) (mod \(d\) ) for all \(k\) . Since \(d\) is odd, there is some \(g\) such that \((- 2)^{g}\equiv 1 \pmod{d}\) , so by setting \(k = g - 1\) , we get that
\[\mathrm{for~all~}d\neq e\in \mathcal{S}, - 2e\in \mathcal{S}\pmod {d}.\]
Now, if \(e > \frac{d}{2}\) , then \(- 2e\in \mathcal{S}\) (mod \(d\) ) and \(d - 2e< 0\) , so \(2d - 2e\in \mathcal{S}\) , contradicting the lack of even elements. Then \(e< \frac{d}{2}\) for any \(e\in \mathcal{S}\setminus \{d\}\) , so we have \(e\in \mathcal{S}\Rightarrow d - 2e\in \mathcal{S}\) . Since \(d - 2e\neq d\) , we must have \(d - 2e< \frac{d}{2}\) , which rearranges to \(e > \frac{d}{4}\) .
Let \(\lambda \in (0,1)\) be a real number and suppose we have proved that \(e > \lambda d\) for any \(e\in \mathcal{S}\setminus \{d\}\) . Then \(d - 2e > \lambda d\) , which rearranges to \(e< \frac{(1 - \lambda)d}{2}\) . Then \(d - 2e< \frac{(1 - \lambda)d}{2}\) , which rearranges to \(e > \frac{(1 + \lambda)d}{4}\) . Defining \(\lambda_{0} = \frac{1}{4}\) and \(\lambda_{i} = \frac{1 + \lambda_{i - 1}}{4}\) for \(i\geq 1\) , we have shown by induction that \(e > \lambda_{i}d\) for all \(e\in \mathcal{S}\setminus \{d\}\) and all \(i\) . Now note that the sequence \(\lambda_{i}\) is increasing and bounded above by \(\frac{1}{3}\) , so it converges to some limit \(\ell\) , which satisfies \(\ell = \frac{1 + \ell}{4}\) , so \(\ell = \frac{1}{3}\) . Hence \(e\geq \frac{d}{3}\) , but then \(d - 2e\geq \frac{d}{3}\) implies \(e\leq \frac{d}{3}\) , so \(e\) must be \(\frac{d}{3}\) , and we are done.
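The limiting behaviour of these bounds is easy to check numerically. A short Python sketch (illustrative only, not part of the proof) iterates the map \(\lambda \mapsto \frac{1+\lambda}{4}\) from the fixed-point equation \(\ell = \frac{1+\ell}{4}\) using exact rational arithmetic:

```python
from fractions import Fraction

# Iterate lambda_0 = 1/4 under lambda -> (1 + lambda)/4 with exact rationals:
# the sequence increases, stays below 1/3, and converges to the fixed point 1/3.
lam = Fraction(1, 4)
for _ in range(50):
    new = (1 + lam) / 4
    assert lam < new < Fraction(1, 3)   # strictly increasing, bounded by 1/3
    lam = new
assert Fraction(1, 3) - lam < Fraction(1, 10**18)   # within 10^{-18} of 1/3
```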
Comment. We can finish Solution 3 alternatively as follows: after showing that if \(e\in \mathcal{S}\setminus \{d\}\) then \(d - 2e\in \mathcal{S}\setminus \{d\}\) , note that
\[(d - 2e) - \frac{d}{3} = \frac{2d}{3} -2e = -2\left(e - \frac{d}{3}\right).\]
So consider \(e\in \mathcal{S}\setminus \{d\}\) maximising \(|e - \frac{d}{3} |\) . If \(e\neq \frac{d}{3}\) , then the above shows that \(|(d - 2e) - \frac{d}{3} | > |e - \frac{d}{3} |\) , which is a contradiction. Thus \(\mathcal{S}\setminus \{d\}\) is empty or equal to \(\{\frac{d}{3}\}\) , which completes the proof.
|
IMOSL-2024-N3
|
Determine all sequences \(a_{1}\) , \(a_{2}\) , ... of positive integers such that, for any pair of positive integers \(m \leqslant n\) , the arithmetic and geometric means
\[\frac{a_{m} + a_{m + 1} + \cdots + a_{n}}{n - m + 1} \quad \text{and} \quad (a_{m}a_{m + 1}\cdots a_{n})^{\frac{1}{n - m + 1}}\]
are both integers.
|
Answer: The only such sequences are the constant sequences (which clearly work).
Solution 1. We say that an integer sequence \(b_{1}\) , \(b_{2}\) , ... is good if for any pair of positive integers \(m \leqslant n\) , the arithmetic mean \(\frac{b_{m} + b_{m + 1} + \cdots + b_{n}}{n - m + 1}\) is an integer. Then the condition in the question is equivalent to saying that the sequences \((a_{i})\) and \((\nu_{p}(a_{i}))\) for all primes \(p\) are good.
Claim 1. If \((b_{i})\) is a good sequence, then \(n - m \mid b_{n} - b_{m}\) for all pairs of integers \(m\) , \(n\) .
Proof. This follows from \(n - m\) dividing \(b_{m} + b_{m + 1} + \dots + b_{n - 1}\) and \(b_{m + 1} + b_{m + 2} + \dots + b_{n}\) , and then taking the difference. \(\square\)
Claim 2. If \((b_{i})\) is a good sequence where some integer \(b\) occurs infinitely many times, then \((b_{i})\) is constant.
Proof. Say \(b_{n_{1}}\) , \(b_{n_{2}}\) , \(b_{n_{3}}\) , ... are equal to \(b\) . Then for all \(m\) , we have that \(b - b_{m} = b_{n_{j}} - b_{m}\) is divisible by infinitely many different integers \(n_{j} - m\) , so it must be zero. Therefore the sequence is constant. \(\square\)
Now, for a given prime \(p\) , we look at the sequence \((\nu_{p}(a_{i}))\) . Let \(k = \nu_{p}(a_{1})\) . Then Claim 1 tells us that \(a_{1} \equiv a_{np^{k + 1} + 1} \pmod{p^{k + 1}}\) for all \(n\) , which implies that \(\nu_{p}(a_{np^{k + 1} + 1}) = k\) for all \(n\) . We now have that \(k\) appears infinitely many times in this good sequence, so by Claim 2, the sequence \((\nu_{p}(a_{i}))\) is constant. This holds for all primes \(p\) , so \((a_{i})\) must in fact be constant.
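Both conditions are straightforward to test by brute force for short sequences. The following Python sketch (illustrative only; the perfect-power test is a simple float-based check adequate for small values) confirms that constant sequences pass while small nonconstant ones fail:

```python
from itertools import combinations

def means_ok(seq):
    # Check that every contiguous block has an integer arithmetic mean and
    # that its product is a perfect k-th power (k = block length).
    for m, n in combinations(range(len(seq) + 1), 2):
        block = seq[m:n]
        k = len(block)
        if sum(block) % k != 0:
            return False
        prod = 1
        for v in block:
            prod *= v
        root = round(prod ** (1.0 / k))
        if all(c**k != prod for c in (root - 1, root, root + 1)):
            return False
    return True

assert means_ok([5] * 8)              # constant sequences satisfy both conditions
assert not means_ok([1, 2, 3, 4])     # e.g. (1 + 2)/2 is not an integer
assert not means_ok([2, 4, 8, 16])    # e.g. (2 + 4 + 8)/3 is not an integer
```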
Solution 2. As in Claim 1 of Solution 1, we have that \(a_{i + r} \equiv a_{i} \pmod{r}\) , which tells us that the sequence \(a_{i}\) is periodic modulo \(p\) with period \(p\) . Also, by a similar argument, we have that \(a_{i + r} / a_{i}\) is the \(r^{\mathrm{th}}\) power of a rational number.
Now suppose that for some \(i \not\equiv j \pmod{p}\) we have \(a_{i}, a_{j} \not\equiv 0 \pmod{p}\) . As \(p\) and \(p - 1\) are coprime, we can find some \(i' \equiv i \pmod{p}\) , \(j' \equiv j \pmod{p}\) such that \(p - 1 \mid i' - j'\) . Then \(a_{i'} / a_{j'}\) is a perfect \((p - 1)^{\mathrm{th}}\) power, so
\[a_{i'} = t u^{p - 1}, \quad a_{j'} = t v^{p - 1}\]
for some positive integers \(t\) , \(u\) , \(v\) not divisible by \(p\) . By Fermat's little theorem, \(u^{p - 1}\) and \(v^{p - 1}\) must be 1 modulo \(p\) . So we must have
\[a_{i} \equiv a_{i'} \equiv t \equiv a_{j'} \equiv a_{j} \pmod{p}.\]
Thus all values of \(a_{i}\) that are not divisible by \(p\) are congruent modulo \(p\) .
For the sum of \(p\) consecutive values to be divisible by \(p\) , this means that all the \(a_{i}\) are congruent modulo \(p\) . Since this is true for all primes \(p\) , the sequence must therefore be constant.
Solution 3. Fix an arbitrary index \(m\) . First, we show that \(a_{m}\) divides \(a_{n}\) for sufficiently large \(n\) . Let \(n\) be sufficiently large that \(n > \nu_{p}(a_{m}) + m\) for every prime \(p\) . By Claim 1 of Solution 1, we have
\[\nu_{p}(a_{m})\equiv \nu_{p}(a_{n})\pmod {n - m}.\]
Since \(\nu_{p}(a_{m})< n - m\) , it follows that \(\nu_{p}(a_{m})\leqslant \nu_{p}(a_{n})\) . This holds for every prime \(p\) , so \(a_{m}\mid a_{n}\) .
Next, suppose that there is some index \(k\) such that \(a_{m}\) does not divide \(a_{k}\) . By the previous paragraph, there is a maximal such \(k\) . Then \(a_{k + 1}\) , \(a_{k + 2}\) , . . . are all divisible by \(a_{m}\) . But now the arithmetic mean condition gives
\[a_{m}\mid a_{k} + a_{k + 1} + \cdots +a_{k + a_{m} - 1},\]
so \(a_{m}\) divides \(a_{k}\) , a contradiction. Therefore every term \(a_{n}\) is divisible by \(a_{m}\) .
As \(m\) was arbitrary, we now have \(a_{m}\mid a_{n}\) and vice versa for all \(m\) , \(n\) . So the sequence must be constant.
|
IMOSL-2024-N4
|
Determine all positive integers \(a\) and \(b\) such that there exists a positive integer \(g\) such that \(\gcd (a^{n} + b, b^{n} + a) = g\) for all sufficiently large \(n\) .
|
Answer: The only solution is \((a, b) = (1, 1)\) .
Solution 1. It is clear that we may take \(g = 2\) for \((a, b) = (1, 1)\) . Supposing that \((a, b)\) satisfies the conditions in the problem, let \(N\) be a positive integer such that \(\gcd (a^{n} + b, b^{n} + a) = g\) for all \(n \geq N\) .
Lemma. We have that \(g = \gcd (a, b)\) or \(g = 2 \gcd (a, b)\) .
Proof. Note that both \(a^{N} + b\) and \(a^{N + 1} + b\) are divisible by \(g\) . Hence
\[a(a^{N} + b) - (a^{N + 1} + b) = ab - b = a(b - 1)\]
is divisible by \(g\) . Analogously, \(b(a - 1)\) is divisible by \(g\) . Their difference \(a - b\) is then divisible by \(g\) , so \(g\) also divides \(a(b - 1) + a(a - b) = a^{2} - a\) . All powers of \(a\) are then congruent modulo \(g\) , so \(a + b \equiv a^{N} + b \equiv 0\) (mod \(g\) ). Then \(2a = (a + b) + (a - b)\) and \(2b = (a + b) - (a - b)\) are both divisible by \(g\) , so \(g \mid 2 \gcd (a, b)\) . On the other hand, it is clear that \(\gcd (a, b) \mid g\) , thus proving the Lemma. \(\square\)
Let \(d = \gcd (a, b)\) , and write \(a = dx\) and \(b = dy\) for coprime positive integers \(x\) and \(y\) . We have that
\[\gcd \left((dx)^{n} + dy, (dy)^{n} + dx\right) = d\gcd \left(d^{n - 1}x^{n} + y, d^{n - 1}y^{n} + x\right),\]
so the Lemma tells us that
\[\gcd \left(d^{n - 1}x^{n} + y, d^{n - 1}y^{n} + x\right) \leq 2\]
for all \(n \geq N\) . Defining \(K = d^{2}xy + 1\) , note that \(K\) is coprime to each of \(d, x\) , and \(y\) . By Euler's theorem, for \(n \equiv - 1\) (mod \(\phi (K)\) ) we have that
\[d^{n - 1}x^{n} + y \equiv d^{-2}x^{-1} + y \equiv d^{-2}x^{-1}(1 + d^{2}xy) \equiv 0 \pmod {K},\]
so \(K \mid d^{n - 1}x^{n} + y\) . Analogously, we have that \(K \mid d^{n - 1}y^{n} + x\) . Taking such an \(n\) which also satisfies \(n \geq N\) gives us that
\[K \mid \gcd (d^{n - 1}x^{n} + y, d^{n - 1}y^{n} + x) \leq 2.\]
This is only possible when \(d = x = y = 1\) , which yields the only solution \((a, b) = (1, 1)\) .
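The key congruences are easy to confirm numerically. A small Python sketch (illustrative only) checks \(g = 2\) for \((a,b) = (1,1)\) , and for \((a,b) = (2,3)\) (so \(d = 1\) , \(x = 2\) , \(y = 3\) , \(K = 7\) , \(\phi(7) = 6\) ) verifies that \(K\) divides both terms whenever \(n \equiv -1 \pmod 6\) , while other exponents give gcds coprime to \(K\) :

```python
from math import gcd

# (a, b) = (1, 1): the gcd is 2 for every n.
assert all(gcd(1**n + 1, 1**n + 1) == 2 for n in range(1, 50))

# (a, b) = (2, 3): here d = 1, x = 2, y = 3, so K = d^2*x*y + 1 = 7, phi(7) = 6.
a, b, K = 2, 3, 7
for n in range(5, 100, 6):          # exponents n with n = -1 (mod 6)
    assert (a**n + b) % K == 0 and (b**n + a) % K == 0
# ... while another exponent gives a gcd coprime to K, so no single g works.
assert gcd(a**6 + b, b**6 + a) % K != 0
```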
Solution 2. After proving the Lemma, one can finish the solution as follows.
For any prime factor \(p\) of \(ab + 1\) , \(p\) is coprime to \(a\) and \(b\) . Take an \(n \geq N\) such that \(n \equiv - 1\) (mod \(p - 1\) ). By Fermat's little theorem, we have that
\[a^{n} + b \equiv a^{-1} + b = a^{-1}(1 + ab) \equiv 0 \pmod {p},\] \[b^{n} + a \equiv b^{-1} + a = b^{-1}(1 + ab) \equiv 0 \pmod {p},\]
then \(p\) divides \(g\) . By the Lemma, we have that \(p \mid 2 \gcd (a, b)\) , and thus \(p = 2\) . Therefore, \(ab + 1\) is a power of 2, and \(a\) and \(b\) are both odd numbers.
If \((a, b) \neq (1, 1)\) , then \(ab + 1\) is divisible by 4; as \(a\) and \(b\) are odd, this means one of them is \(1 \pmod{4}\) and the other is \(-1 \pmod{4}\) . For odd \(n \geq N\) , we have that
\[a^{n} + b \equiv b^{n} + a \equiv (-1) + 1 = 0 \pmod {4},\]
so \(4 \mid g\) . But by the Lemma, we have that \(\nu_{2}(g) \leq \nu_{2}(2 \gcd (a, b)) = 1\) , which is a contradiction. So the only solution to the problem is \((a, b) = (1, 1)\) .
|
IMOSL-2024-N5
|
Let \(\mathcal{S}\) be a finite nonempty set of prime numbers. Let \(1 = b_{1}< b_{2}< \dots\) be the sequence of all positive integers whose prime divisors all belong to \(\mathcal{S}\) . Prove that, for all but finitely many positive integers \(n\) , there exist positive integers \(a_{1}\) , \(a_{2}\) , ..., \(a_{n}\) such that
\[\frac{a_{1}}{b_{1}} +\frac{a_{2}}{b_{2}} +\dots +\frac{a_{n}}{b_{n}} = \left[\frac{1}{b_{1}} +\frac{1}{b_{2}} +\dots +\frac{1}{b_{n}}\right].\]
|
Solution 1. If \(\mathcal{S}\) has only one element \(p\) , then \(b_{i} = p^{i - 1}\) and, for \(n \geq 2\) , we can easily find \(a_{1}\) , ..., \(a_{n}\) with \(2 = \left[\sum_{i = 1}^{n}\frac{1}{p^{i - 1}}\right] = \sum_{i = 1}^{n}\frac{a_{i}}{p^{i - 1}}\) by taking \(a_{1} = a_{2} = \dots = a_{n - 1} = 1\) and choosing \(a_{n} = p^{n - 1} - (p + p^{2} + \dots +p^{n - 2})\) .
More generally, observe that the sum of \(\frac{1}{b_{i}}\) over all \(i\) is
\[\sum_{i}\frac{1}{b_{i}} = \prod_{p\in\mathcal{S}}\left(1 + \frac{1}{p} +\frac{1}{p^{2}} +\dots\right) = \prod_{p\in \mathcal{S}}\frac{p}{p - 1}.\]
In particular, if \(n\) is large enough, then
\[\left[\sum_{j = 1}^{n}\frac{1}{b_{j}}\right] = \left[\prod_{p\in \mathcal{S}}\frac{p}{p - 1}\right].\]
For the remainder of the proof, we will only consider \(n\) large enough that this equality holds. Next, we handle the special case \(\mathcal{S} = \{2,3\}\) , for which this product is 3. Start by setting
\[
a_i =
\begin{cases}
1, & \text{if } 2b_i \le b_n, \\
2, & \text{if } 2b_i > b_n.
\end{cases}
\]
Then, for each \(t \geqslant 0\) ,

\[\sum_{\substack{i\leqslant n \\ \nu_{3}(b_{i}) = t}}\frac{a_{i}}{b_{i}} = \begin{cases}\frac{2}{3^{t}}, & \text{if } b_{n}\geqslant 3^{t};\\ 0, & \text{otherwise,}\end{cases}\]

since among the \(b_{i}\leqslant b_{n}\) with exactly \(t\) factors of 3, precisely the largest one has \(a_{i} = 2\) .
As a result,
\[\sum_{i\leqslant n}\frac{a_{i}}{b_{i}} = \sum_{\substack{t\geqslant 0 \\ 3^{t}\leqslant b_{n}}}\frac{2}{3^{t}} = 3 - \frac{1}{3^{T}},\]
where \(T\) is the largest \(t\geqslant 0\) with \(3^{t}\leqslant b_{n}\) . Thus, increasing \(a_{j}\) by one (where \(b_{j} = 3^{T}\) ) gives a sequence of \(a_{i}\) that works.
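This construction for \(\mathcal{S} = \{2,3\}\) can be verified directly. A Python sketch (illustrative only) builds the 3-smooth numbers up to a bound, assigns the \(a_i\) as above, bumps \(a_j\) at \(b_j = 3^T\) , and checks that the resulting sum equals \(\left[\sum_i \frac{1}{b_i}\right] = 3\) :

```python
from fractions import Fraction
from math import ceil

def smooth23(limit):
    # All 3-smooth numbers (of the form 2^i * 3^j) up to `limit`.
    k = limit.bit_length() + 1
    return sorted(2**i * 3**j for i in range(k) for j in range(k)
                  if 2**i * 3**j <= limit)

for bound in (10, 100, 1000):
    bs = smooth23(bound)
    bn = bs[-1]
    a = [1 if 2 * b <= bn else 2 for b in bs]       # the initial assignment
    T = max(t for t in range(20) if 3**t <= bn)     # largest power of 3 present
    a[bs.index(3**T)] += 1                          # bump a_j at b_j = 3^T
    total = sum(Fraction(ai, bi) for ai, bi in zip(a, bs))
    target = ceil(sum(Fraction(1, b) for b in bs))
    assert total == target == 3
```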
Otherwise, we may assume that \(|\mathcal{S}| > 1\) and \(\mathcal{S}\neq \{2,3\}\) , which means that the product \(\prod_{p\in \mathcal{S}}\frac{p}{p - 1}\) is not an integer. Indeed,
- if \(|\mathcal{S}| > 2\) then 2 divides the denominator at least twice and so divides the denominator of the overall fraction;
- if \(|\mathcal{S}| = 2\) and \(2\notin \mathcal{S}\) then 2 divides the denominator and not the numerator;
- if \(\mathcal{S} = \{2, p\}\) then the product is \(2p / (p - 1)\) which is not an integer for \(p > 3\) .
It follows that for some fixed \(\alpha > 0\) , we have that
\[\left[\prod_{p\in \mathcal{S}}\frac{p}{p - 1}\right] = \prod_{p\in \mathcal{S}}\frac{p}{p - 1} +\alpha ,\]
from which it follows that
\[\left[\sum_{i = 1}^{n}\frac{1}{b_{i}}\right] - \sum_{i = 1}^{n}\frac{1}{b_{i}} >\alpha .\]
It will now suffice to prove the following claim.
Claim. Suppose that \(n\) is large enough, and let \(e_{p}\) be the largest nonnegative integer such that \(p^{e_{p}}\leqslant b_{n}\) . Let \(M = \prod_{p\in \mathcal{S}}p^{e_{p}}\) . If \(u\) is a positive integer such that \(u / M > \alpha\) , then there exist nonnegative integers \(a_{i}\) such that
\[\sum_{i}\frac{a_{i}}{b_{i}} = \frac{u}{M}.\]
The problem statement follows after replacing \(a_{i}\) with \(a_{i} + 1\) for each \(i\) .
To prove this, choose some positive integer constant \(c\) such that \(\sum_{p\in \mathcal{S}}p^{- c}< \alpha\) , and suppose \(n\) is large enough that \(p^{c}< b_{n}\) for each \(p\in \mathcal{S}\) ; in particular, \(p^{c}\mid M\) with \(M\) defined as above.
For each \(p\in \mathcal{S}\) , let \(i_{p}\) be such that \(b_{i_{p}} = p^{e_{p}}\) and choose the smallest nonnegative integer \(a_{i_{p}}\) satisfying
\[p^{e_{p} - c}\mid a_{i_{p}}\left(\frac{M}{p^{e_{p}}}\right) - u.\]
Such an \(a_{i_{p}}\) must exist and be at most \(p^{e_{p} - c}\) ; indeed, \(\frac{M}{p^{e_{p}}}\) is an integer coprime to \(p\) , so we can take \(a_{i_{p}}\) to be \(u\) times its multiplicative inverse modulo \(p^{e_{p} - c}\) . The total contribution of the terms \(a_{i_{p}} / b_{i_{p}}\) to the sum is at most
\[\sum_{p\in \mathcal{S}}\frac{p^{e_{p} - c}}{p^{e_{p}}} = \sum_{p\in \mathcal{S}}p^{-c}< \alpha .\]
So, we have
\[\frac{u}{M} = \sum_{p\in \mathcal{S}}\frac{a_{i_{p}}}{p^{e_{p}}} +\frac{r}{\prod_{p\in \mathcal{S}}p^{c}},\]
where \(r\) is an integer because of our choice of \(a_{i_{p}}\) and \(r\) is nonnegative because of the bound on \(u\) . Simply choose \(a_{i} = r\) where \(b_{i} = \prod_{p\in \mathcal{S}}p^{c}\) to complete the proof.
Solution 2. We reduce to the claim as in Solution 1, and provide an alternative approach for constructing the \(a_{i}\) .
Let \(p_{0}\in \mathcal{S}\) be the smallest prime in \(\mathcal{S}\) . Let \(z_{0} = u / M\) . We construct a sequence \(z_{0}\) , \(z_{1}\) , \(z_{2}\) , ... and values of \(a_{i}\) by the following iterative process: to construct \(z_{j + 1}\) ,
- select the largest prime \(p\in \mathcal{S}\) dividing the denominator of \(z_{j}\) , and let \(\mu\) be the number of times \(p\) divides the denominator of \(z_{j}\) ;
- choose the largest \(\nu\) such that \(p_{0}^{\nu}p^{\mu}\leqslant b_{n}\) , and let \(i\leqslant n\) be such that \(b_{i} = p_{0}^{\nu}p^{\mu}\) ;
- choose \(0\leqslant a_{i}< p\) such that the denominator of \(z_{j} - a_{i} / b_{i}\) has at most \(\mu - 1\) factors of \(p\) , and let \(z_{j + 1} = z_{j} - a_{i} / b_{i}\) ;
- continue until \(p_{0}\) is the only prime dividing the denominator of \(z_{j}\) .
Note that we can always choose \(a_{i}\) in the third step: by construction, \(z_{j}b_{i}\) has no factors of \(p\) in its denominator, so it is an element of \(\mathbb{Z}_{p}\) and we may take \(a_{i} \equiv z_{j}b_{i} \pmod{p}\) .
Each time we do this, \(b_{i} > M / p_{0}\) by construction, so
\[\frac{a_{i}}{b_{i}} < \frac{p p_{0}}{M} \leqslant \frac{p_{0}p_{1}}{M},\]
where \(p_{1}\) is the largest prime in \(\mathcal{S}\) . And the number of times we do this operation is at most
\[\sum_{p\in \mathcal{S}\atop p > p_{0}}e_{p}\leqslant |\mathcal{S}|\log_{2}(M),\]
so the sum of the \(a_{i} / b_{i}\) we have assigned is at most \(|\mathcal{S}|p_{0}p_{1}\log_{2}(M) / M\) .
Choose \(n\) large enough that \(|\mathcal{S}|p_{0}p_{1}\log_{2}(M) / M < \alpha\) ; after subtracting the above choices of \(a_{i} / b_{i}\) from \(u / M\) , we have a quantity of the form \(r / p_{0}^{e_{p_{0}}}\) , where \(r\) is an integer by construction and \(r\) is positive by the above bounds. Simply set \(a_{i} = r\) where \(b_{i} = p_{0}^{e_{p_{0}}}\) to complete the proof.
Solution 3. As in Solution 1, we may handle \(|\mathcal{S}| = 1\) and \(\mathcal{S} = \{2,3\}\) separately; otherwise, we can define \(\alpha\) as we did in that solution. Also define \(e_{p}\) to be the largest nonnegative integer such that \(p^{e_{p}} \leqslant b_{n}\) as we did in Solution 1.
We will show that, for \(n\) sufficiently large, we may choose some \(j \leqslant n\) , and positive integers \(a_{i}\) , such that
\[\sum_{i \neq j} \frac{a_{i}}{b_{i}} - \sum_{i \neq j} \frac{1}{b_{i}} < \alpha ,\]
and all \(\frac{a_{i}}{b_{i}}\) are integer multiples of \(\frac{1}{b_{j}}\) . We then set \(a_{j}\) to be the least positive integer such that the total sum \(\sum_{i}\frac{a_{i}}{b_{i}}\) is an integer, which will obviously have the required value.
Concretely, choose \(j\) such that \(b_{j} = \prod_{p \in \mathcal{S}} p^{\lfloor e_{p} / |\mathcal{S}| \rfloor}\) , which is less than \(b_{n}\) by construction. For \(i \neq j\) , set \(a_{i} = b_{i} / \gcd (b_{i}, b_{j})\) . We have
\[\sum_{i \neq j} \frac{a_{i}}{b_{i}} - \sum_{i \neq j} \frac{1}{b_{i}} < \sum_{\substack{i \neq j \\ a_{i} > 1}} \frac{a_{i}}{b_{i}}.\]
If \(a_{i} > 1\) , then there must be some \(p \in \mathcal{S}\) for which \(p^{\lfloor e_{p} / |\mathcal{S}| \rfloor + 1} \mid b_{i}\) , and so
\[\frac{a_{i}}{b_{i}} = \frac{1}{\gcd (b_{i}, b_{j})} \leqslant \frac{1}{p^{\lfloor e_{p} / |\mathcal{S}| \rfloor}} < \frac{p}{b_{n}^{1 / |\mathcal{S}|}},\]
where the last inequality follows from the fact that \(p^{e_{p} + 1} > b_{n}\) .
Now \(n \leqslant \prod_{p \in \mathcal{S}} (\log_{p}(b_{n}) + 1) \leqslant (2 \log b_{n})^{|\mathcal{S}|}\) , so
\[\sum_{\substack{i \neq j \\ a_{i} > 1}} \frac{a_{i}}{b_{i}} \leqslant \frac{(2 \log b_{n})^{|\mathcal{S}|}}{b_{n}^{1 / |\mathcal{S}|}},\]
and so we can choose \(n\) large enough that this quantity is less than \(\alpha\) , as required.
|
IMOSL-2024-N6
|
Let \(n\) be a positive integer. We say that a polynomial \(P\) with integer coefficients is \(n\)-good if there exists a polynomial \(Q\) of degree 2 with integer coefficients such that \(Q(k)(P(k) + Q(k))\) is not divisible by \(n\) for any integer \(k\) .
Determine all integers \(n\) such that every polynomial with integer coefficients is \(n\)-good.
|
Answer: All integers \(n > 2\) .
Solution 1. First, observe that no polynomial is 1-good (because \(Q(X)(P(X) + Q(X))\) always has roots modulo 1) and the polynomial \(P(X) = 1\) is not 2-good (because \(Q(X)(Q(X) + 1)\) is always divisible by 2).
Now, if \(P\) is \(d\)-good with some \(Q\) , then \(Q \cdot (P + Q)\) has no roots mod \(d\) . Therefore, it certainly has no roots mod \(n\) for \(d \mid n\) , so \(P\) must be \(n\)-good. Consequently, it suffices to show that all polynomials are \(n\)-good whenever \(n\) is an odd prime, or \(n = 4\) .
We start by handling the case \(n = 4\) . We will construct a \(Q\) such that \(Q(X)\) is never divisible by 4 and \(Q(X) + P(X)\) is always odd; this will clearly show that \(P\) is 4-good. Note that any function modulo 2 must be either constant or linear; in other words, there are \(a, b \in \{0, 1\}\) such that \(P(X) \equiv aX + b \pmod{2}\) for all \(X\) . If \(a = 0\) then set \(Q(X) = 4X^2 + b + 1\) , and if \(a = 1\) then set \(Q(X) = X^2 + b + 1\) ; in either case, \(Q\) satisfies the required properties.
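The \(n = 4\) construction can be checked directly. A Python sketch (illustrative only) runs over sample polynomials \(P\) of degree at most 2 with small coefficients:

```python
from itertools import product

# For each sample P, read off a = (P(1) - P(0)) mod 2 and b = P(0) mod 2, build
# Q as in the construction, and check Q(k)(P(k) + Q(k)) is never 0 mod 4.
for coeffs in product(range(-2, 3), repeat=3):
    P = lambda k, cs=coeffs: sum(c * k**i for i, c in enumerate(cs))
    a, b = (P(1) - P(0)) % 2, P(0) % 2
    if a == 0:
        Q = lambda k, b=b: 4 * k * k + b + 1
    else:
        Q = lambda k, b=b: k * k + b + 1
    assert all(Q(k) * (P(k) + Q(k)) % 4 != 0 for k in range(-8, 8))
```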
It remains to prove that any polynomial is \(p\)-good, where \(p\) is an odd prime. We will prove that for any function \(f\) defined mod \(p\) , there is a quadratic \(Q\) with no roots mod \(p\) such that \(Q(x) \neq f(x) \bmod p\) for all \(x\) ; the statement about \(P\) then follows with \(f\) replaced by \(- P\) . For the remainder of the proof, we will consider all equalities modulo \(p\) .
Suppose that a function \(f\) not satisfying the above exists; in other words, \(f\) has the property that for any quadratic \(Q\) with no roots mod \(p\) , there is some \(x\) such that \(Q(x) = f(x)\) . Without loss of generality, we may assume that \(f\) has no roots mod \(p\) . To see why, suppose that \(f(u) = 0\) for some \(u\) , and let \(g\) be the function such that \(g(x) = f(x)\) for \(x \neq u\) and \(g(u) = 1\) . For any \(Q\) with no roots, there is some \(x\) such that \(Q(x) = f(x)\) ; since \(Q(x) \neq 0 = f(u)\) , we have \(x \neq u\) , and so \(Q(x) = g(x)\) for that choice of \(x\) . In particular, \(g\) also fails to satisfy the above.
Now, suppose first that there is some nonzero \(t\) such that \(t\) is not in the image of \(f\) . Then we may take \(Q(X) = pX^2 + t\) ; this quadratic is never equal to \(f\) and is never zero. Thus, \(f\) must be surjective onto the nonzero residues mod \(p\) . There are \(p\) choices for \(X\) and \(p - 1\) nonzero residues mod \(p\) , so there must be some \(x_1 \neq x_2 \bmod p\) such that \(f(x_1) = f(x_2)\) , and \(f\) is a bijection from the set of residues mod \(p\) not equal to \(x_2\) to the set of nonzero residues mod \(p\) .
Now, note that we may choose any \(b\) and \(c\) with \(b\) nonzero and replace \(f(X)\) with \(g(X) = f(bX + c)\) ; if there were some \(Q\) with no roots such that \(Q(x) \neq g(x)\) for all \(x\) , then \(Q(X / b - c / b)\) would work for \(f\) . Choose \(b\) and \(c\) such that \(bx_1 + c = 1\) and \(bx_2 + c = - 1\) ; such \(b\) and \(c\) must exist (we may take \(b = 2 / (x_1 - x_2)\) and \(c = (x_1 + x_2) / (x_2 - x_1)\) ). Renaming \(g\) to \(f\) , we see that we may assume \(f(1) = f(- 1)\) .
Let \(r'\) be a quadratic nonresidue mod \(p\) . Choose \(y \neq 0\) such that \(f(y) = (1 - r')f(0)\) , which must exist as the right hand side is nonzero and \(1 - r'\) is not equal to 1. Choose \(r = y^2 /r'\) , which is a quadratic nonresidue.
Consider \(\phi (X) = f(X) / (X^2 - r)\) . By definition, \(\phi (1) = \phi (- 1)\) and \(\phi (0) = \phi (y)\) , so there are no more than \(p - 2\) values in the image of \(\phi\) . Choose some nonzero \(a\) not in the image of \(\phi\) , so \(f(X) / (X^2 - r)\) is never equal to \(a\) . The quadratic \(Q(X) = a(X^2 - r)\) is never zero and also never equal to \(f(X)\) , which completes the proof.
Comment. In fact, there is no need to pass from polynomials \(P\) to functions \(f\) , as any function mod \(p\) is a polynomial. Concretely, instead of passing from \(f\) to \(g\) , we would have instead replaced \(P(X)\) with \(P(X) + 1 - (X - u)^{p - 1}\) , which is a polynomial that is unchanged except at \(X = u\) .
Solution 2. Given \(f\) a function mod \(p\) such that \(f\) is surjective onto the nonzero elements of \(\mathbb{Z} / p\mathbb{Z}\) and \(f(1) = f(- 1)\) , we provide an alternative approach to construct a nonzero quadratic \(Q(X)\) such that \(Q(X) \neq f(X)\) . Let \(r\) be the smallest quadratic nonresidue mod \(p\) (so \(r - 1\) is a square) and let \(a\) vary over the nonzero elements mod \(p\) ; we will show that it is possible to choose \(Q_{a}(X) = a(X^{2} - r)\) for some choice of \(a\) . Note that any quadratic of this form will be nowhere zero.
Suppose that no such \(Q_{a}\) works. Then, for each \(a\) , there exists \(x\) such that \(a(x^{2} - r) = f(x)\) . We may assume that \(x \neq - 1\) , as if the equality holds for \(x = - 1\) then it also holds for \(x = 1\) . However, \(a(x^{2} - r) = f(x)\) implies \(a = f(x) / (x^{2} - r)\) , so \(f(x) / (x^{2} - r)\) must be a surjection from \(\{x \neq - 1\}\) to the set of nonzero \(a\) , and so this is a bijection. In particular, for each \(a\) , there exists a unique \(x_{a}\) such that \(f(x_{a}) = a(x_{a}^{2} - r)\) .
We now have
\[\prod_{t\neq 0}t = \prod_{a\neq 0}f(x_{a})\] \[\qquad = \prod_{a\neq 0}a\prod_{a\neq 0}(x_{a}^{2} - r)\] \[\qquad = \prod_{a\neq 0}a\prod_{x\neq -1}(x^{2} - r)\]
where the first equality follows because \(f\) is surjective onto the nonzero residues mod \(p\) , and the second equality follows from the definition of \(x_{a}\) . The two products cancel, which means that \(\prod_{x \neq - 1}(x^{2} - r) = 1\) .
However, we also get
\[\prod_{x\neq -1}(x^{2} - r) = (-r)(1 - r)\left(\prod_{x = 2}^{(p - 1) / 2}(x^{2} - r)\right)^{2}.\]
This is a contradiction, as \(- r(1 - r) = r(r - 1)\) , which is not a quadratic residue (by our choice of \(r\) ).
Comment. By Wilson's theorem, we know that the product of the nonzero elements mod \(p\) is \(- 1\) ; however, this fact was not necessary for the solution so we chose to present the solution without needing to state it.
Comment. One can in fact show that
\[\prod_{x\neq -1}(x^{2} - r) = \frac{-4r}{1 - r}.\]
To do this, note that the polynomial \(X^{\frac{p - 1}{2}} - 1\) has the \(\frac{p - 1}{2}\) quadratic residues as roots, so we have
\[\prod_{s\mathrm{~quad.~res.}}(X - s) = X^{\frac{p - 1}{2}} - 1\]
and so
\[\prod_{x\neq 0}(X - x^{2}) = (X^{\frac{p - 1}{2}} - 1)^{2}.\]
Since \(r\) is a quadratic nonresidue, by Euler's criterion \(r^{\frac{p - 1}{2}} = - 1\) , and the result follows.
Therefore, one can replace the condition that \(r\) is the smallest quadratic nonresidue with the condition that \(r\) is a quadratic nonresidue not equal to \(- \frac{1}{3}\) (which is possible for all \(p \geqslant 3\) ).
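The product identity is easy to confirm numerically. A Python sketch (illustrative only) checks \(\prod_{x\neq -1}(x^{2} - r) \equiv \frac{-4r}{1 - r} \pmod{p}\) for every quadratic nonresidue \(r\) and several small odd primes:

```python
# Check prod_{x != -1} (x^2 - r) = -4r/(1 - r) (mod p) for nonresidues r.
for p in (3, 5, 7, 11, 13):
    residues = {x * x % p for x in range(1, p)}
    for r in range(2, p):
        if r in residues:
            continue                      # only quadratic nonresidues r
        prod = 1
        for x in range(p):
            if x != p - 1:                # exclude x = -1
                prod = prod * (x * x - r) % p
        rhs = -4 * r * pow(1 - r, -1, p) % p  # 1 - r invertible since r != 1
        assert prod == rhs
```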
Solution 3. As in Solution 1, we will reduce to the case of \(p\) being an odd prime and \(f\) being a function mod \(p\) with no roots which is surjective onto the set of nonzero residues mod \(p\) , although we make no assumption about the values of \(x_{1}\) and \(x_{2}\) with \(f(x_{1}) = f(x_{2})\) .
We will again consider quadratics of the form \(Q_{a,b,c}(X) = aR(bX + c)\) , where \(R(X) = X^{2} - r\) for an arbitrary fixed quadratic nonresidue \(r\) , \(a\) and \(b\) are nonzero mod \(p\) , and \(c\) is any residue mod \(p\) .
For each fixed \(b\) and \(c\) , there must be \(p\) pairs \((a,x)\) such that \(aR(bx + c) = f(x)\) , because there must be exactly one value of \(a\) for each \(x\) . If any \(a\) appears in no such pair then we are done, so assume otherwise. In other words, there must be exactly one \(a\) such that there are two such \(x\) , and for all other \(a\) there is only one such \(x\) .
Thus, for each \((b,c)\) , there is exactly one unordered pair \(\{x_{1},x_{2}\}\) such that for some \(a\) we have \(f(x_{i}) = aR(bx_{i} + c)\) ; in other words, there is exactly one unordered pair \(\{x_{1},x_{2}\}\) such that \(f(x_{1}) / R(bx_{1} + c) = f(x_{2}) / R(bx_{2} + c)\) .
Now, we show that for each unordered pair \(\{x_{1},x_{2}\}\) there must be at least one pair \((b,c)\) such that \(f(x_{1}) / R(bx_{1} + c) = f(x_{2}) / R(bx_{2} + c)\) . Indeed, let \(t = f(x_{1}) / f(x_{2})\) . There must be some \(x_{1}^{\prime},x_{2}^{\prime}\) such that \(R(x_{1}^{\prime}) / R(x_{2}^{\prime}) = t\) ; this is because \(R(X)\) and \(tR(X)\) both take \(\frac{p + 1}{2}\) nonzero values mod \(p\) , so the intersection must be nonempty by the pigeonhole principle. Choosing \(b\) and \(c\) such that \(bx_{1} + c = x_{1}^{\prime}\) and \(bx_{2} + c = x_{2}^{\prime}\) gives the claim.
Note further that if \((b,c)\) and \(\{x_{1},x_{2}\}\) satisfy the relation, then the same is true for \((- b, - c)\) and \(\{x_{1},x_{2}\}\) because \(R(bx + c) = R(- bx - c)\) . Since \(b\) is nonzero, this means that each pair \(\{x_{1},x_{2}\}\) corresponds to at least two pairs \((b,c)\) . However, since there are \(p(p - 1)\) pairs \((b,c)\) with \(b\) nonzero and \(p(p - 1) / 2\) unordered pairs \(\{x_{1},x_{2}\}\) , each \(\{x_{1},x_{2}\}\) must correspond to exactly two pairs \((b,c)\) and \((- b, - c)\) for some \((b,c)\) .
Now, since the image of \(f\) has only \(p - 1\) elements, there must be some \(x_{1}\) , \(x_{2}\) such that \(f(x_{1}) = f(x_{2})\) . Choose any \(b\) , \(c\) such that \(bx_{1} + c = -(bx_{2} + c)\) , so \(R(bx_{1} + c) = R(bx_{2} + c)\) and so \(f(x_{1}) / R(bx_{1} + c) = f(x_{2}) / R(bx_{2} + c)\) . There is such a pair \(b\) , \(c\) for any nonzero \(b\) , so there are at least \(p - 1\) such pairs, and this quantity is greater than 2 for \(p \geq 5\) .
Finally, for the special case that \(p = 3\) , we observe that there must be at least one allowed value for \(Q(x)\) for each \(x\) , so there must exist such a quadratic \(Q\) by Lagrange interpolation.
Comment. We may also handle the case \(p = 3\) as follows. Recall that we may assume \(f\) is nonzero and surjective onto \(\{1,2\}\) mod 3, so the image of \(f\) must be \((1,1,2)\) or \((1,2,2)\) in some order. Without loss of generality \(f(1) = f(2)\) , so we either have \((f(0),f(1),f(2)) = (1,2,2)\) or \((2,1,1)\) . In the first case, take \(Q(X) = 2X^{2} + 2\) , and in the second case take \(Q(X) = X^{2} + 1\) .
In some sense, this is equivalent to the Lagrange interpolation approach, as in each case the polynomial \(Q(X)\) can be determined by Lagrange interpolation.
Solution 4. Again, we reduce to the case of \(p\) being an odd prime and \(f\) being a function mod \(p\) ; we will show that there is a quadratic \(Q\) , nowhere zero mod \(p\) , such that \(Q(x) = f(x)\) has no solution. We can handle the case of \(p = 3\) separately as in Solution 3, so assume that \(p \geq 5\) .
We will prove the following more general statement: let \(p \geq 5\) be a prime and let \(\mathcal{A}_{1}\) , \(\mathcal{A}_{2}\) , ..., \(\mathcal{A}_{p}\) be subsets of \(\mathbb{Z} / p\mathbb{Z}\) with \(|\mathcal{A}_{i}| = 2\) for all \(i\) . Then there exists a polynomial \(Q \in \mathbb{Z} / p\mathbb{Z}[X]\) of degree at most 2 such that \(Q(i) \notin \mathcal{A}_{i}\) for all \(i\) . Indeed, applying this statement to the sets \(\mathcal{A}_{i} = \{0, f(i)\}\) (and adding \(pX^{2}\) if necessary) produces a quadratic \(Q\) satisfying the desired property.
Choose the coefficients of \(Q\) uniformly at random from \(\mathbb{Z} / p\mathbb{Z}\) , and let \(T\) be the random variable denoting the number of \(i\) for which \(Q(i) \in \mathcal{A}_{i}\) . Observe that for \(k \leq 3\) , we have
\[\mathbb{E}\left[\binom{T}{k}\right] = 2^{k}\binom{p}{k}p^{-k}.\]
To see why, let \(k \leq 3\) . If \(\mathcal{S} \subseteq \mathbb{Z} / p\mathbb{Z}\) has size \(k\) and \((a_{i})_{i \in \mathcal{S}}\) is a \(k\) - tuple, the probability that \(Q(i) = a_{i}\) on \(\mathcal{S}\) is equal to \(p^{- k}\) ; for \(k = 3\) this follows by Lagrange interpolation, and for \(k < 3\)
it follows from the \(k = 3\) case by summing. The expectation is therefore equal to the number of \(\mathcal{S} \subseteq \mathbb{Z} / p\mathbb{Z}\) of size \(k\) times the probability that \(Q(i) \in \mathcal{A}_{i}\) for each \(i \in \mathcal{S}\) , which is equal to the right hand side as each \(\mathcal{A}_{i}\) has size 2.
Now, observe that we have the identity \((t - 1)(t - 3)(t - 4) = - 12 + 12\binom{t}{1} - 10\binom{t}{2} + 6\binom{t}{3}\) , so
\[\mathbb{E}[(T - 1)(T - 3)(T - 4)] = -12 + 12\mathbb{E}\left[\binom{T}{1}\right] - 10\mathbb{E}\left[\binom{T}{2}\right] + 6\mathbb{E}\left[\binom{T}{3}\right]\] \[\qquad = -12 + 12\cdot 2 - 10\cdot 2\left(1 - \frac{1}{p}\right) + 6\cdot \frac{4}{3}\left(1 - \frac{1}{p}\right)\left(1 - \frac{2}{p}\right)\] \[\qquad = -\frac{4}{p} +\frac{16}{p^{2}}.\]
This is negative for \(p \geqslant 5\) . Because \((t - 1)(t - 3)(t - 4) \geqslant 0\) for all integers \(t > 0\) , it then follows that \(T = 0\) with positive probability, which implies that there must exist some \(Q\) with \(Q(i) \notin \mathcal{A}_{i}\) for all \(i\) , as desired.
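The expectation computation above holds for any fixed choice of the size-2 sets, so it can be double-checked numerically by averaging \((T-1)(T-3)(T-4)\) over all \(p^3\) polynomials of degree at most 2. A sketch for \(p = 5\) (the sets \(\mathcal{A}_i\) here are an arbitrary illustrative choice):

```python
from itertools import product
from fractions import Fraction

p = 5
A = [{0, 1}, {0, 2}, {0, 3}, {0, 4}, {0, 1}]  # arbitrary size-2 sets, one per residue
total = 0
for a, b, c in product(range(p), repeat=3):   # uniform over all p^3 polynomials of deg <= 2
    T = sum((a * i * i + b * i + c) % p in A[i] for i in range(p))
    total += (T - 1) * (T - 3) * (T - 4)
mean = Fraction(total, p ** 3)
assert mean == Fraction(-4, p) + Fraction(16, p * p)  # exactly -4/25 for p = 5
```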
Comment. We do not have much freedom to choose a different polynomial in place of \(R(T) = (T - 1)(T - 3)(T - 4)\) in this argument. Indeed, it can be shown (by comparing coefficients of \(\binom{T}{k}\) ) that if \(R\) has degree at most 3, then the expected value of \(R(T)\) tends to \(\frac{1}{3} (R(4) + 2R(1))\) as \(p\) tends to infinity, so \(R\) must have both 1 and 4 as roots. In particular, \(R\) must be of the form \(R(T) = (T - 1)(T - 4)(T - d)\) for some \(d \geqslant 3\) , and if \(d < 4\) then the argument works for any \(p\) with \(p > 4 / (4 - d)\) .
|
IMOSL-2024-N7
|
Let \(\mathbb{Z}_{>0}\) denote the set of positive integers. Let \(f\colon \mathbb{Z}_{>0}\to \mathbb{Z}_{>0}\) be a function satisfying the following property: for \(m\) , \(n\in \mathbb{Z}_{>0}\) , the equation
\[f(mn)^{2} = f(m^{2})f(f(n))f(mf(n))\]
holds if and only if \(m\) and \(n\) are coprime.
For each positive integer \(n\) , determine all the possible values of \(f(n)\) .
|
Answer: All numbers with the same set of prime factors as \(n\) .
Common remarks. We refer to the given property as \(P(m,n)\) . We use the notation \(\operatorname {rad}(n)\) for the radical of \(n\) : the product of the distinct primes dividing \(n\) .
Solution 1. We start with a series of straightforward deductions:
- From \(P(1,1)\) , we have \(f(1)^{2} = f(1)f(f(1))^{2}\) , so \(f(1) = f(f(1))^{2}\) .
- From \(P(1,f(1))\) , we have \(f(f(1))^{2} = f(1)f(f(f(1)))f(f(f(1))) = f(1)f(f(f(1)))^{2}\) ; combined with the previous equation \(f(1) = f(f(1))^{2}\) , this gives \(f(1) = f(1)f(f(f(1)))^{2}\) , so \(f(f(f(1))) = 1\) .
- From \(P(1,f(f(1)))\) , we have \(f(f(f(1)))^{2} = f(1)f(f(f(f(1))))^{2}\) ; using \(f(f(f(1))) = 1\) (so that \(f(f(f(f(1)))) = f(1)\) ), this simplifies to \(1 = f(1)^{3}\) , so \(f(1) = 1\) .
- From \(P(1,n)\) we deduce \(f(n) = f(f(n))\) for all \(n\) .
- From \(P(m,1)\) we deduce \(f(m) = f(m^{2})\) for all \(m\) .
- Simplifying \(P(m,n)\) , we have that
\[f(mn)^{2} = f(m)f(n)f(mf(n))\]
if and only if \(m\) and \(n\) are coprime; refer to this as \(Q(m,n)\) .
- From \(Q(m,f(n))\) , we have that \(f(mf(n)) = f(m)f(n)\) if and only if \(m\) and \(f(n)\) are coprime; refer to this as \(R(m,n)\) .
Claim. If \(f(a) = 1\) , then \(a = 1\) .
Proof. If \(a\neq 1\) , then \(Q(a,a)\) gives \(f(a)^{2}\neq f(a)^{2}f(af(a))\) . If \(f(a) = 1\) , then \(f(af(a)) = f(a) = 1\) , so both sides simplify to 1, a contradiction. \(\square\)
Claim. If \(n\neq 1\) then \(\gcd (n,f(n))\neq 1\) .
Proof. If \(\gcd (n,f(n)) = 1\) , then \(Q(f(n),n)\) gives \(f(nf(n))^{2} = f(n)^{3}\) , and \(Q(n,f(n))\) gives \(f(nf(n))^{2} = f(n)^{2}f(nf(n))\) , which together yield \(f(n) = 1\) for a contradiction. \(\square\)
Claim. For all \(n\) we have \(\operatorname {rad}(n)\mid f(n)\) .
Proof. For any prime \(p\mid n\) , write \(n = p^{v}n^{\prime}\) with \(p\nmid n^{\prime}\) . From \(Q(p^{v},n^{\prime})\) we have \(f(n)^{2} = f(p^{v})f(n^{\prime})f(p^{v}f(n^{\prime}))\) . Since \(\gcd (p^{v},f(p^{v}))\neq 1\) by the previous Claim, we have \(p\mid f(p^{v})\) , so \(p\) divides the right-hand side; hence \(p\mid f(n)^{2}\) , and so \(p\mid f(n)\) . As this holds for every prime \(p\mid n\) , we conclude \(\operatorname {rad}(n)\mid f(n)\) . \(\square\)
Claim. If \(n\) is coprime to \(f(k)\) , then \(f(n)\) is coprime to \(f(k)\) .
Proof. From \(Q(f(k),n)\) we have \(f(nf(k))^{2} = f(k)f(n)f(f(k)f(n))\) ; applying \(R(n,k)\) to the LHS, we conclude that \(f(k)f(n) = f(f(k)f(n))\) . Applying \(R(f(n),k)\) we deduce that \(f(n)\) is coprime to \(f(k)\) , as required. \(\square\)
Claim. If \(p\) is prime then \(f(p)\) is a power of \(p\) .
Proof. Suppose otherwise. We know that \(p \mid f(p)\) ; let \(q \neq p\) be another prime with \(q \mid f(p)\) .
If, for some positive integer \(N\) , we have \(p \nmid f(N)\) , then \(f(p)\) is coprime to \(f(N)\) , so \(q \nmid f(N)\) , so \(q \nmid N\) ; thus, if \(q \mid N\) , then \(p \mid f(N)\) (and in particular, \(p \mid f(q)\) , by taking \(N = q\) ).
Similarly, if \(q \nmid f(N)\) then \(f(q)\) is coprime to \(f(N)\) ; as \(p \mid f(q)\) , this means \(p \nmid f(N)\) , so \(p \nmid N\) . So if \(p \mid N\) , then \(q \mid f(N)\) .
Together with \(\operatorname {rad}(n) \mid f(n)\) , this means that for any \(n\) not coprime to \(pq\) , we have \(pq \mid f(n)\) . Let \(m = \min \{\nu_{p}(f(x)) \mid x\) is not coprime to \(pq\}\) , and let \(X\) be a positive integer not coprime to \(pq\) such that \(\nu_{p}(f(X)) = m\) . The argument above shows \(m \geqslant 1\) . We can write \(f(X) = p^{m}q^{y}X^{\prime}\) , where \(y \geqslant 1\) , \(p \nmid X^{\prime}\) and \(q \nmid X^{\prime}\) . Since \(f(f(X)) = f(X)\) we have \(f(p^{m}q^{y}X^{\prime}) = p^{m}q^{y}X^{\prime}\) . Applying \(Q(p^{m}, q^{y}X^{\prime})\) gives \((p^{m}q^{y}X^{\prime})^{2} = f(p^{m})f(q^{y}X^{\prime})f(p^{m}f(q^{y}X^{\prime}))\) . The RHS is divisible by \(p^{3m}\) but the LHS is only divisible by \(p^{2m}\) , yielding a contradiction. \(\square\)
Claim. For any integer \(n\) , \(\operatorname {rad}(f(n)) = \operatorname {rad}(n)\) .
Proof. We already have that \(\operatorname {rad}(n) \mid f(n)\) , so it remains only to show that no other primes divide \(f(n)\) . If \(p\) is prime and \(p \nmid n\) , the previous Claim shows that \(n\) is coprime to \(f(p)\) , and thus \(f(n)\) is coprime to \(f(p)\) ; that is, \(p \nmid f(n)\) . So exactly the same primes divide \(f(n)\) as divide \(n\) . \(\square\)
It remains only to exhibit functions that show all values of \(f(n)\) with \(\operatorname {rad}(f(n)) = \operatorname {rad}(n)\) are possible. Given \(e(p) \geqslant 1\) for each prime \(p\) , take
\[f(n) = \prod_{p \mid n} p^{e(p)}\]
and we verify by examining exponents of each prime that this satisfies the conditions of the problem.
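A numerical spot-check of this construction, with illustrative exponents \(e(2) = 3\) , \(e(3) = 2\) , and \(e(p) = 1\) for all other primes (any choice of exponents works):

```python
from math import gcd

E = {2: 3, 3: 2}  # sample exponents e(p); unspecified primes get e(p) = 1

def f(n):
    # f(n) = product over primes p | n of p^e(p)
    out, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            out *= d ** E.get(d, 1)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        out *= n ** E.get(n, 1)
    return out

# P(m, n) must hold exactly when m and n are coprime.
for m in range(1, 41):
    for n in range(1, 41):
        holds = f(m * n) ** 2 == f(m ** 2) * f(f(n)) * f(m * f(n))
        assert holds == (gcd(m, n) == 1)
```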
Comment. A quicker but less straightforward proof that \(f(1) = 1\) is to let \(M = f(n)\) be the least value attained by \(f\) ; then \(P(1, n)\) gives \(M^{2} = f(n)^{2} = f(1)f(f(n))^{2} \geqslant M \cdot M^{2} = M^{3}\) , so \(M = 1\) , and then \(1 = f(1)f(f(n))^{2}\) forces \(f(1) = 1\) .
Solution 2. As in Solution 1, we see that there are indeed functions \(f\) satisfying the given condition and producing all the given values of \(f(n)\) , and we follow Solution 1 to show the following facts:
\(\cdot f(1) = 1\)
\(\cdot f(m) = f(m^{2})\) for all \(m\)
\(\cdot f(n) = f(f(n))\) for all \(n\)
\(\cdot f(mn)^{2} = f(m)f(n)f(mf(n))\) if and only if \(m\) and \(n\) are coprime; refer to this as \(Q(m, n)\) .
Taking \(Q(m, n)\) together with \(Q(n, m)\) gives that \(f(mf(n)) = f(nf(m))\) if \(m\) and \(n\) are coprime.
Suppose now that \(m\) is coprime to both \(n\) and \(f(n)\) . We have \(f(mn)^{2} = f(m)f(n)f(mf(n))\) and squaring both sides gives
\[f(mn)^{4} = f(m)^{2}f(n)^{2}f(mf(n))^{2}\] \[\qquad = f(m)^{2}f(n)^{2}f(m)f(f(n))f(mf(f(n)))\] \[\qquad = f(m)^{3}f(n)^{3}f(mf(n)).\]
Thus \(f(mf(n)) = f(m)f(n)\) , so \(f(mn)^{2} = f(m)^{2}f(n)^{2}\) , so \(f(mn) = f(m)f(n) = f(mf(n)) = f(nf(m))\) .
If \(m\) is coprime to both \(n\) and \(f(n)\) but \(n\) is not coprime to \(f(m)\) , we have
\[f(n f(m))^{2} \neq f(n) f(f(m)) f(n f(f(m)))\] \[\qquad = f(n) f(m) f(n f(m))\] \[\qquad = f(n f(m))^{2},\]
a contradiction. Thus, given that \(m\) and \(n\) are coprime, we know that \(m\) is coprime to \(f(n)\) if and only if \(n\) is coprime to \(f(m)\) . In particular, if \(p\) and \(q\) are different primes, then \(p \mid f(q)\) if and only if \(q \mid f(p)\) , and likewise, for any positive integer \(k\) , \(p \mid f(q^{k})\) if and only if \(q \mid f(p)\) . More generally, if \(p \nmid n\) , then \(p \mid f(n)\) if and only if \(n\) is not coprime to \(f(p)\) .
Now form a graph whose vertices are the primes, and where there is an edge between primes \(p \neq q\) if and only if \(p \mid f(q)\) (and so \(q \mid f(p)\) ); every vertex has finite degree. For any integer \(n\) , the primes dividing \(f(n)\) are all the primes that are neighbours of any prime \(q \mid n\) , together possibly with some further primes \(p \mid n\) .
If \(p\) and \(q\) are different primes, we have \(f(p f(q)) = f(q f(p))\) . The LHS is divisible by all primes that (in the graph) are neighbours of \(p\) or neighbours of neighbours of \(q\) , and possibly also by \(p\) and by some primes that are neighbours of \(q\) , and a corresponding statement with \(p\) and \(q\) swapped applies to the RHS. Thus any prime that is a neighbour of a neighbour of \(q\) must be one of: \(p\) , \(q\) , distance 1 from \(q\) , or distance 1 or 2 from \(p\) . For any prime \(r\) that is distance 2 from \(q\) , there are only finitely many primes \(p\) that it is distance 2 or less from, so by choosing a suitable prime \(p\) (depending on \(q\) ) we conclude that every prime that is a neighbour of a neighbour of \(q\) is actually \(q\) itself or a neighbour of \(q\) .
So the connected components of the graph are (finite) complete graphs. If \(m\) is divisible only by primes in one component, and \(n\) is divisible only by primes in another component, then \(f(m n) = f(m)f(n)\) . If \(n\) is divisible by more than one prime from a component, considering the expression for \(f(m n)^{2}\) as applied with successive prime power divisors of \(n\) shows that \(f(n)\) is divisible by all the primes in that component. However, while \(f(p^{k})\) is divisible by all the primes in the component of \(p\) except possibly for \(p\) itself, we do not yet know that \(p \mid f(p^{k})\) . We now consider cases for the order of a component.
For any prime \(p\) , we cannot have \(f(p^{k}) = 1\) , because \(Q(p^{k}, p^{k})\) gives
\[f(p^{2k})^{2} \neq f(p^{k}) f(p^{k}) f(p^{k} f(p^{k})),\]
and simplifying using \(f(m^{2}) = f(m)\) and \(f(p^{k}) = 1\) results in the contradiction \(1 \neq 1\) . So for a component of order 1, \(f(p^{k})\) is a positive power of \(p\) , so has the same set of prime factors as \(p^{k}\) , as required.
Now consider a component of order at least 2. Since \(f(f(n)) = f(n)\) , if the component has order at least 3, then for any \(n \neq 1\) whose prime divisors are in that component, \(f(n)\) is divisible by all the primes in that component. If the component has order 2, we saw above that this is true except possibly for \(n = p^{k}\) . However, if the primes in the component are \(p\) and \(q\) , and \(f(p^{k}) = q^{\ell}\) , then \(f(q^{\ell}) = f(f(p^{k})) = f(p^{k}) = q^{\ell}\) , which contradicts \(p \mid f(q^{\ell})\) . So for any component of order at least 2, and any \(n \neq 1\) whose prime divisors are in that component, \(f(n)\) is divisible by all the primes in that component.
In a component of order at least 2, let \(m\) be the product of all the primes in that component, and let \(t\) be maximal such that \(m^{t} \mid f(n)\) for all \(n \neq 1\) whose prime divisors are in that component; we have seen that \(t \geqslant 1\) . If \(a\) and \(b\) are coprime numbers greater than 1, all of whose prime divisors are in that component, then \(Q(a, b)\) gives \(f(ab)^{2} = f(a)f(b)f(af(b))\) ; each factor on the right is divisible by \(m^{t}\) , so \(m^{3t} \mid f(ab)^{2}\) and thus \(m^{\lceil 3t/2 \rceil} \mid f(ab)\) . For any \(n^{\prime} \neq 1\) , all of whose prime divisors are in that component, \(f(n^{\prime})\) is divisible by all the primes in that component, so can be expressed as such a product \(ab\) , so \(m^{\lceil 3t/2 \rceil} \mid f(f(n^{\prime})) = f(n^{\prime})\) . But this means \(t \geqslant \lceil 3t/2 \rceil > t\) , a contradiction, so all components have order 1, and we are done.
|