
Artificial Intelligence Rambles

gadrian

Blockchain and AI are similar in at least one way: they are both buzzwords. Sure, we can find other similarities, too:

  • they are both from the technology sector
  • they are both disruptive for existing models
  • and, in my opinion, they are both part of Web3 (together with cryptocurrency, which is inseparable from blockchain in any relatively decentralized protocol)

It's interesting that if you search for the definition of AI, you'll find almost as many definitions as there are results on the first page.

The definitions range from "a field, which combines computer science and robust datasets, to enable problem-solving" (source) to "an area of computer science that involves building smart machines that are able to perform tasks which usually require human intelligence" (source) or "AI is a machine's ability to perform the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with an environment, problem solving, and even exercising creativity" (source).

We have some constants here:

  1. computers or machines
  2. their abilities
  3. human intelligence

Basically, computers or machines with abilities matching or exceeding human intelligence in certain aspects.

And yes, we are oversimplifying, because an AI's abilities can reside in a whole system (both software and hardware) or just in the software part.

But I wouldn't be quick to categorize every device or platform with a smart algorithm as AI. Would that make smart contracts in crypto AIs? Of course not. How about a very useful Excel sheet that spares you a ton of work? Is that AI? No way! How about a bot for Splinterlands? Nope. How about a spellchecker? Hmm, that depends. A spellchecker can qualify as a pretty simple AI if, besides correcting spelling errors, it also fixes grammar and punctuation and suggests rephrasings.
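To see why the "dumb" end of that spectrum is just an algorithm rather than intelligence, here is a minimal sketch of how a basic spelling corrector can work: generate every string one edit away from the typo and pick the known word with the highest frequency. The tiny vocabulary and its frequency counts are made up purely for illustration; a real tool would learn them from a large corpus.

```python
# Toy spelling corrector: illustrates that basic spellchecking is a
# simple candidate-ranking algorithm. VOCAB is an invented example;
# real spellcheckers derive word frequencies from large text corpora.

VOCAB = {"their": 50, "there": 40, "the": 100, "spell": 5, "spelling": 8}

def edits1(word):
    """All strings one edit (delete, swap, replace, insert) away from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    """Return the known word one edit away, preferring the most frequent."""
    if word in VOCAB:
        return word
    candidates = edits1(word) & VOCAB.keys()
    return max(candidates, key=VOCAB.get) if candidates else word
```

For example, `correct("teh")` picks "the", and an unknown word with no close match is returned unchanged. Grammar correction and rephrasing suggestions, by contrast, need language models, which is roughly where "algorithm" starts shading into "simple AI".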

[Image from thread (source)]

Personally, I don't understand the fascination with creating humanoid AIs. Maybe this push is to make them more acceptable in society? What if the effect is the opposite, and people hate them? Of course, we know how easily the narrative can be driven to where it is "supposed to go", so I wouldn't worry about that. Or maybe I should.

Anyway, looks aside, we tend to model AIs after ourselves, including how our brain works. That is probably a limiting factor for the AI industry, rather than something to make it soar. In a way, it's understandable: in order to analyze progress with your AI model, you need to be able to understand it and, later on, to control it. How could you do that if you don't understand how it thinks?

Did you ever have a teacher in school who you thought had nothing (useful) to teach you? I'm sure most of us had at least one case where we believed that, whether or not our assessment was correct at the time.

Now imagine that, in some undetermined future, an AI thinks the same about its teachers: humans.

How did you react to that teacher? Did you stop listening and find your own activities while he or she talked? Did you rebel? We haven't had a real "baby" AI yet; can you imagine keeping a teenage AI in check? Ha! I talked about limiting AIs by modeling and comparing them to us, and here I am doing the same...

Posted Using LeoFinance Alpha