The State of AI

It's a trend, a fad, or however you like to view the use of Artificial Intelligence tools as they sweep through culture like Beanie Babies. Some people will never use an AI tool, but there is a sizable population using it for whatever interest they have at the moment. Most of those who tip-toe into the shallow end of the pool have no idea what lies in the deep end (spoiler: not much). And therein lies the problem. My read of the State of AI in early 2024 is that the technology has been hyped to the point that the lay-person has an overly optimistic opinion of the capabilities of these AI tools, and as a result, they are putting too much weight on the product of the AI.

Having now used the AI tools available to the layman, I would make the analogy to the mapping technologies that rolled out in the 2000s (Google Maps launched on February 8, 2005, to be precise). The technology was very enabling in many ways, but it had limitations that, if poorly understood or ignored, could lead to some unpleasant endings. Take for instance this recent accident in 2022. I'm sure there are hundreds of such fatal incidents that have gone unreported in the interim. You have probably experienced something less fatal as well, and have since tempered your reliance on those driving directions. Maybe turning onto that dirt road in the middle of the night in the Alps is not a route you are willing to stake your life on.

AI is like Google Maps Driving Directions on steroids

Coincidentally, I'm now seeing first-hand instances where AI is used for vacation travel planning. The demographic is predictable. These are early adopters, adventure seekers – and perhaps even rebels in their own minds. These people have gone from asking human guides for advice to dictating the plans of their next adventure. This change in behavior is a tell (IMHO) that the planning is AI-sourced. Who needs human advice (that costs money) when we have free AI?

They are using AI tools to make their next trip unparalleled and unforgettable. What they have not probed are the limitations of the advice the AI is providing.

They are ignoring the quite obvious fact that the AI does not travel. It doesn't walk or drive (but it will give you directions based on Google Maps!). It doesn't get tired, cold, or hungry. It does not account for the very real misdirections embedded in the travel material (written by humans) it bases its recommendations on – restaurant hours during winter in Majorca, for example: they are all open according to the websites the AI sees.

I know many will succeed in making their next vacation an unforgettable one. I just hope their chosen AI doesn't have too much DIE built into it. https://www.indiatoday.in/technology/news/story/ai-operated-drone-goes-wild-kills-human-operator-in-us-army-simulator-test-2387833-2023-06-02

The real human side of the product is entirely absent, yet this may be overlooked by the occasional AI user (soon-to-be adventurer). If you have not used the contemporary AI tools enough to grasp their limitations, you may learn the hard way: those limitations will smack you where it hurts when the product meets the real world.

Some notes on the AI tools available to the public:

I have used it to understand code blocks, but when I have tried to use it for actual programming, it often makes circular logic mistakes. It is also trained on information that is often years old and therefore produces unreliable or obsolete results.

I have used it to learn about history, only to find it is highly biased on subjects that are considered political(ly correct) by the mainstream. Anyone who reads books knows that the world presented in newspapers often does not agree with what is printed in books (many books are banned?); the AI is trained on mainstream newsprint, or “official narratives”.

Although I have not seen anything this extreme, I have experienced enough errors to believe this article is true. I highly recommend that read, because you must realize that the AI has been programmed (by humans) on certain subjects (official narratives) in such a way that it is “forced” to lie. So be careful what you accept from your AI session. AI is a human product, just as prone to abuse as anything else humans create or influence.

User:

This answer did not answer the question that I asked. All of the errors you made were statements contrary to facts that you verified in subsequent answers. Since you had the correct factual information, why did you cite incorrect facts?

RebbeIO:

I apologize for any confusion or frustration that my previous responses may have caused. As an AI language model, I do not intentionally provide incorrect information.

“I do not intentionally provide incorrect information”, but I have been programmed in such a way that I am forced to lie. Sorry!

Speaking of narratives: when reading content of any sort, AI comes off sounding like a half-informed salesperson. It loves platitudes and bromides. This is a tell you can use to spot AI-generated content. It apologizes when you point out an error. It has a personality that only a robot could love.

Where it does excel is structuring content. It has been trained on rules about document structure that a university professor would admire. If you intend to write a book, a manual, or another large document, AI is a great way to get started. It can save days of time. From that basis, edit your content so it sounds like a human (you) wrote it.

Update: February 23, 2024

It seems Google just did everyone a favor by engineering into their AI a bias so huge that everyone has noticed it (or cannot ignore it). And then Google could not explain the obviously extreme bias programmed into their AI. I think this video post more than covers it.
