Thanks for your comment. As an AI engineer, I'm optimistic about LLMs in many ways. The point of the article is that one should never outsource one's thinking to LLMs, as I see many people doing.
Yes, you can ask questions at the intersection of history, chemistry, and fencing. But make sure you do the work of fact-checking the response at every step of the explanation.
If you ever notice that an LLM got the answer and explanation right for something niche, there are three possibilities:
1. The question you asked may not be as novel as you think (Most likely)
2. The model might have stumbled upon the right answer coincidentally (possible but unlikely)
3. The model has enough conceptual understanding of the subjects to give the correct answer (Extremely unlikely)
Yes, LLMs' strengths lie in different places than humans'. That frees humans to spend more time working on the frontiers. It does not put LLMs there.
Tying back to the calculator metaphor: a calculator speeds up computing the answer, but it's still our job to formulate the equation and validate the result.