I entered the #ComputationalLinguistics field in 2018 by enrolling for a Bachelor's degree.
Since then, a lot has changed. Almost everything we learned about, practised in code, and researched is now nearly irrelevant in our day-to-day work.
Everything is #LLMs now. Every paper, every course, every student project.
And the newly enrolled students changed, too. They're no longer language nerds, they're #AI bros.
I miss #CompLing before ChatGPT.
Remember when we programmed "conversational robots" (like Siri or Alexa) in a way that let us control exactly what was being parsed, understood, and output?
When we could manually decide that if the user asked for the weather, the device would consult a weather data API as an actual, vetted information source?
When we programmed them to be *useful* for *tasks* and *information retrieval*, rather than being amazed that they're "talking" in a *convincing* manner?
It's crazy!
If you told one of our robots "hey, please call Lucia but before that see whether her birthday collides with anything on my calendar", it would actually work!
And we could look into the state of the machine and see what it interpreted your intent to be.
Something like "Get date:'birthday' from contact:'Lucia' and check events @ calendar:date, ask user confirmation, then call contact:'Lucia'".
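For anyone who never worked with these systems: the kind of hand-written intent rule described above can be sketched roughly like this. This is a minimal toy, not any real assistant's code; the patterns, the `Intent` type, and the rule table are all made up for illustration.

```python
import re
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Intent:
    """A structured interpretation of the user's utterance."""
    action: str
    slots: dict = field(default_factory=dict)

# Hypothetical hand-written rules: each regex maps an utterance
# to a fully inspectable, structured intent.
RULES: list[tuple[re.Pattern, Callable[[re.Match], Intent]]] = [
    (re.compile(r"call (\w+)", re.I),
     lambda m: Intent("call", {"contact": m.group(1)})),
    (re.compile(r"weather in (\w+)", re.I),
     lambda m: Intent("get_weather", {"city": m.group(1)})),
]

def parse(utterance: str) -> Optional[Intent]:
    """Return the first matching intent, or None if nothing matches."""
    for pattern, build in RULES:
        match = pattern.search(utterance)
        if match:
            return build(match)
    return None  # -> "Sorry, I didn't understand that"

parse("Hey, please call Lucia")  # Intent(action='call', slots={'contact': 'Lucia'})
```

The point is that the machine's "understanding" is a plain data structure you can print, test, and debug, which is exactly the inspectable state the post above describes.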
It *had* understanding, context, knowledge. We threw that all out for a word generator!!
@lianna I thought similarly for a while, and I have always wondered why more advanced machine learning, or LLMs themselves, isn't used to improve on that concept. After all, that IS the number one problem with generative AI: it does not have concepts. And really, while I do not think the hard-coded Siri etc. were great ("Sorry, I did not understand that"), I think those concepts should have been iterated on and enhanced instead of replaced.
@wildrikku @lianna then again, people already bought stupid Siris and Alexas en masse, so I guess demand is sufficient. Why invest in a good product when you can sell a bad one...
@wildrikku In academia, we were already at a point where those "sorry, I didn't understand that" type situations rarely happened, and if they did, they would give you more helpful troubleshooting information and a less infuriating error response.
It's just that all of that progress was suddenly cut – sometimes literally, with funding for ongoing research being cut off – once ChatGPT came out.