I have an idea for how to test the intelligence of an AI. Ask it to create a visualization of a pathological mathematical object like the Alexander horned sphere, the Hopf fibration, or a spinor. The reason I like this test is that it is impossible to represent these objects exactly in 3D space projected onto a 2D screen, especially the latter two. A good visualization would require the AI both to understand the object at a mathematical level and to have enough theory of mind to grasp what humans would find intelligible. In other words, the AI would have to synthesize its understanding of the object, in and of itself, with the anthropocentric perspective of the beholder. So far, Grok and ChatGPT have been unable to produce compelling visualizations.
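To make the test concrete, here is a minimal sketch (assuming Python with NumPy and Matplotlib; the parameter choices are illustrative) of the kind of visualization a capable system might attempt for the Hopf fibration: each point of the 2-sphere is lifted to its fiber circle in the 3-sphere, and the circles are stereographically projected into R³, where the fibers over a circle of latitude trace out nested tori of linked circles.

```python
import numpy as np
import matplotlib.pyplot as plt

def hopf_fiber(theta, phi, n=200):
    """Return the Hopf fiber over the S^2 point (theta, phi),
    stereographically projected from S^3 into R^3."""
    t = np.linspace(0, 2 * np.pi, n)
    # Fiber in S^3 subset of C^2: z1 = cos(theta/2) e^{i(t+phi)}, z2 = sin(theta/2) e^{it}
    x1 = np.cos(theta / 2) * np.cos(t + phi)
    x2 = np.cos(theta / 2) * np.sin(t + phi)
    x3 = np.sin(theta / 2) * np.cos(t)
    x4 = np.sin(theta / 2) * np.sin(t)
    # Stereographic projection from the pole (0, 0, 0, 1)
    denom = 1 - x4
    return x1 / denom, x2 / denom, x3 / denom

fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(projection="3d")
# One fiber per base point; theta is kept away from the poles so no
# fiber passes too close to the projection point and blows up the scale.
for theta in np.linspace(0.4, 2.4, 6):
    for phi in np.linspace(0, 2 * np.pi, 12, endpoint=False):
        ax.plot(*hopf_fiber(theta, phi), lw=0.7)
ax.set_axis_off()
plt.show()
```

Even a picture like this is only one projection of the structure, which is part of why the test is hard: the AI has to choose which features of the object to sacrifice and which to make legible.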
Another experiment, once this is achieved, would be to feed the produced visualization back into another AI and ask that AI what the visual represents. If we can go from object to representation and then back to the original concept, then we can make a reasonable claim that AIs have at least a human level of intuitive understanding of that object, or some human-like shared corpus of sense-making. Maybe mirror neurons are the key. Maybe the AI would have to simulate a human mind within its own “mind”.
This could perhaps eventually lead us to an answer to Searle’s Chinese Room thought experiment.