Apples don't grow on pear trees.
Seems pretty obvious. But are the flowers in the next picture, taken in my yard, from an apple or a pear tree? Perhaps they are from one of the cherry trees? Maybe they are from another tree altogether.
How can you tell?
The right information.
And I am noticing more and more how many people are essentially playing the same guessing game as above, but with far more complicated and complex information, as they rely on generative AI to provide answers that, due to their own lack of experience, they have no way to judge for quality.
A horticulturalist would likely know which tree.
Because they have the experience required, or they can get pretty close, because they also have the experience to identify what tree it is not. But so much of the information coming to us isn't from our area of expertise. And because it lies outside of our wheelhouse, we aren't able to determine the quality of the information we have been given; we just have to trust it.
It is from an apple tree that sits outside my office window.
Trusting untrustworthy information isn't a new problem; it has been going on for decades, centuries, and probably millennia. We are social animals, built to pass on what we know through stories, but we have also learned that we can manipulate each other by giving incomplete information or misinformation, or by telling unverifiable stories - the way of religions.
For instance, a more recent example came from a family friend who rides horses at a stable where a teenage girl fell off and died. We were visiting and my friend was really upset, because with close proximity to the events and knowledge of the details, she recognized that the newspaper got the story quite wrong in many ways. This was seen as even more unforgivable because the newspaper headquarters were less than fifteen kilometers from the stables.
However, after reading us excerpts and pointing out the incorrect facts, she then turned the page and started talking about another article, about the war in Crimea (this was a decade ago), and how terrible all these things are, reading out facts from that article. At no point did she see the irony of her behavior; she just trusted. If the journalists couldn't get a simple story of a girl falling off a horse right, what are the chances of them representing complex geopolitical conflicts well?
But while the story of misinformation isn't new, the problem now is that we have so much information at our fingertips, brought to us through what is effectively a black box (even if links are provided, no one researches the credibility of all information), across complex topics with which we have little to no direct experience. Not only this, we are also served this information without even searching for it, with algorithms pushing it through our feeds, and being human, we tend to believe that it is vetted information.
On top of this, depending on our preferences, we are also pushed information in silos designed to evoke an emotion, belief, or action from us. For example, last weekend there was a stabbing attack in a shopping mall in Australia, and the information that came out said the perpetrator was a Muslim, a Jew, a person who was actually innocent, a spy for multiple sides... the list went on, with each version shared by siloed groups wanting to believe one story over another, because it suited their purpose, their agenda.
It was a person who suffered from schizophrenia.
Not exactly the face of evil that was being reported. It was a sick person. However, when someone stabs mostly women and a baby, it is easy to attribute motives that might not have been there at all. As terrible as the event is for the families and friends of the victims, what should really be considered is why so many people have mental health problems these days. Rather than looking at the outcomes, go upstream and look at the conditions that lead to them.
Conditions like the quality of information being spread, and the lack of verification.
What we have to consider these days is that the quality of the information we receive from AI programs is only as good as the information that is fed into them, and for the most part, we don't actually know what that is. Not only that, we don't know what the AI programs are doing with that information, how they are verifying it, and what kind of weighting is applied when there are conflicts. We have seen quite a few "bias" problems coming through the results of late.
Of course, AI could be applied in ways where the pool of information it draws on has already been verified. For example, if all the formal documentation of a company were used as the data set, the generative AI results would be quite solid, because the pool is contained and built on the expertise of the many people in the company. A rough sketch of that idea follows.
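To make that concrete, here is a minimal sketch, assuming a hypothetical in-house document store and a simple keyword-overlap retriever. Everything named below - the documents, the scoring, the prompt wording - is illustrative, not any real company's setup; a production system would use embeddings and an actual model call, which are omitted here. The point is only that the model is asked to answer from vetted text and nothing else.

```python
from collections import Counter

# Hypothetical verified documents, standing in for a company's formal docs.
VERIFIED_DOCS = {
    "leave-policy": "Employees accrue 20 days of annual leave per year...",
    "expense-policy": "Expenses above 500 EUR require manager approval...",
}

def score(query: str, text: str) -> int:
    """Count how often the query's words appear in the document."""
    words = Counter(text.lower().split())
    return sum(words[w] for w in query.lower().split())

def build_grounded_prompt(query: str, top_k: int = 1) -> str:
    """Pick the best-matching verified documents and wrap them in a
    prompt that tells the model to answer ONLY from those sources."""
    ranked = sorted(
        VERIFIED_DOCS.items(),
        key=lambda kv: score(query, kv[1]),
        reverse=True,
    )
    context = "\n\n".join(f"[{name}]\n{text}" for name, text in ranked[:top_k])
    return (
        "Answer using only the verified documents below. "
        "If the answer is not in them, say you do not know.\n\n"
        f"{context}\n\nQuestion: {query}"
    )

print(build_grounded_prompt("How many days of annual leave do I get?"))
```

Because the prompt is built only from the contained, verified pool, the quality of the answer is bounded by the quality of that pool rather than by whatever the open internet happens to contain.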
We are what we eat.
And AI results reflect what the AI is fed, too. Feed it shit, and it will generate shit. Feed it informed experience, and it will turn that into some form of wisdom and be able to give accurate insights. However, most of what we are seeing so far in the AI space, at the general level at least, is largely coming from unverified sources. And even if the results seem okay, experts looking at them will pick out errors, or at least potential conflicts and differing opinions. For us as laypeople on most of the topics we interact with, though, it all seems plausible.
Just like the picture from the pear tree outside my office window.
Taraz
[ Gen1: Hive ]
Posted Using InLeo Alpha