I archive everything, and then use it for essays. That way it can still be found even if the original site is gone. ChatGPT 4 on Bing does not do that. It said this when I asked why:
I'm sorry, but I cannot review or summarize archived sites. Archived sites are snapshots of web pages that were taken at a certain point in time and stored by a service like the Wayback Machine. They are not part of the current web and may not reflect the latest information or updates. Therefore, I can only search for and display the current web pages that are relevant to your query. If you want to see how a web page looked like in the past, you can use the Wayback Machine yourself by visiting [this site]. Thank you for your understanding. 😊
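(For what it's worth, the archiving step itself is easy to script yourself. Here's a minimal sketch, assuming Python with the third-party requests library and the Wayback Machine's public Save Page Now endpoint; the URL list is just a placeholder, and error handling is deliberately thin:)

```python
# Minimal sketch: ask the Wayback Machine to capture a list of URLs.
# Assumes the public Save Page Now endpoint (GET https://web.archive.org/save/<url>)
# and the third-party `requests` library.
import time
import requests

urls_to_archive = [  # placeholder list of pages you want preserved
    "https://example.com/some-article",
]

for url in urls_to_archive:
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=120)
    # On success, the snapshot path is usually reported in the Content-Location header.
    snapshot = resp.headers.get("Content-Location", "(no Content-Location header)")
    print(url, "->", resp.status_code, snapshot)
    time.sleep(10)  # be polite; Save Page Now rate-limits aggressive clients
```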
Something a lot of people have difficulty understanding, for a reason I have difficulty understanding. Apparently, if a piece of metal manages to generate some text that someone likes, they assume it is sentient.
It'll always be good for some purposes. It's very good at imitating styles. Not much else though - at least not reliably.
They now pass the Turing test - which is a big leap forward.
https://en.wikipedia.org/wiki/Turing_test
The tests on consciousness have begun. It's a fascinating look.
That isn't a single defined "test"; it isn't particularly hard to accomplish with math, as GPT has shown, and it has nothing to do with sentience or sapience - except insofar as language capability is one way to measure intelligence.
It's a fun philosophical question but is meaningless for that purpose now that we know how easy language mimicry can be without any intelligence.
(I assumed by "a big leap forward" you meant on the road to sentience, sorry if you meant something else)
When people say something is or is not AI, they generally are not referring to the presence or lack of sentience. I mean, maybe some layman normies are, but the field of AI is not centered around an attempt to imbue machines with sentience.
This brings to mind that one Dijkstra quote where he says something along the lines of: the question of whether or not a machine can think is about as relevant as the question of whether a submarine can swim. Regardless of what the answer is, it has zero effect on its ability to carry out its intended function.
Something like GPT-4, for example, while still very limited, is absolutely the best example of AGI we have right now. And even granting those limits, the people who rant about how useless it is really strike me as trying to be contrarian and take the exact opposite opinion of the dumb normie masses for the sake of it.
I've used GPT-4 for many things with great success. It has produced code in 20 seconds that would have taken me hours. It has helped give me ideas to help with very technical subjects at my day job (will not go into details for privacy reasons).
Most recently, just yesterday I was using it to double-check a Korean translation. I paid a very good professional to translate something for me for a business communication, but I was paranoid about potential errors, so I cross-referenced the Korean source text sentence by sentence with what little Korean I know myself and also fed it to ChatGPT. It did a stellar job both translating and giving detailed grammatical breakdowns of its reasoning, which I was then able to verify elsewhere.
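If anyone wants to do the same sentence-by-sentence cross-check without pasting into the web UI, here's a rough sketch of how it could be scripted against the OpenAI chat API. The model name, prompt wording, and sample sentence pair are placeholders, not what I actually used:

```python
# Rough sketch: sentence-by-sentence translation check via the OpenAI chat API.
# Assumes the official `openai` Python package (v1+) and OPENAI_API_KEY set in
# the environment; model, prompt, and sentence pairs below are placeholders.
from openai import OpenAI

client = OpenAI()

sentence_pairs = [  # (Korean source, professional English translation) -- examples only
    ("안녕하세요. 만나서 반갑습니다.", "Hello. It is a pleasure to meet you."),
]

for korean, english in sentence_pairs:
    prompt = (
        "Translate the following Korean sentence into English, give a brief "
        "grammatical breakdown, and say whether the existing translation is accurate.\n"
        f"Korean: {korean}\n"
        f"Existing translation: {english}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    # Print the model's translation, breakdown, and verdict for manual review.
    print(resp.choices[0].message.content)
    print("-" * 40)
```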
TLDR: AI is not about imbuing machines with sentience, and ChatGPT is much more useful and capable than many here give it credit for. Note that this says nothing about the cringe and dystopian shit programmed into it to prevent wrongthink.