Even after machines beat humans at their own reading tests, computational linguists are still grappling with the question, "But do the machines really understand?"
Even with powerful pre-training, continual improvement, and careful fine-tuning, neural language models may still face a fundamental obstacle to genuine understanding. In this article, the author asks whether machines can ever perfectly model language in general: can comprehensively designed or carefully filtered training data sets capture all the edge cases and unforeseen inputs that humans effortlessly cope with when using natural language? Probably not. Read on for more...