The AI Text Generator That’s Too Dangerous To Make Public?



In 2015, car-and-rocket man Elon Musk joined with influential startup backer Sam Altman to put artificial intelligence on a new, more open course. They cofounded a research institute called OpenAI to make new AI discoveries and give them away for the common good. Now, the institute’s researchers are sufficiently worried by something they built that they won’t release it to the public.

Google, too, has decided that it’s no longer appropriate to innocently publish new AI research findings and code. Last month, the search company disclosed in a policy paper on AI that it has put constraints on research software it has shared because of fears of misuse. The company recently joined Microsoft in adding language to its financial filings warning investors that its AI software could raise ethical concerns and harm the business.

So Google and not-so-open OpenAI are so concerned about fake news that they’re not releasing their latest AI research?

Does this have FUD written all over it? Artificial intelligence is here, evolving, and getting stronger. Are we supposed to believe that Google and friends have our best interests in mind while they suppress research findings?

@taskmaster4450 has written several articles about SingularityNet's open source advancements in AI, specifically to ensure that the Googles of the world and their government overlords (or pawns, if you will) don't have a monopoly on this technology.

In Google's case I am actually worried; I am fast losing what little faith I had in the company's ethics.

Concerning Elon, I would not discount the possibility that his pet project isn't producing results; the claim that it's too dangerous to release might just be a convenient excuse to save face.

LOL, raise concerns about the gods all these slaves listen to; the AI wouldn't know who to listen to with so much convoluted bullshit coming out of so many different faiths' mouths.

I can't wait until the age of sentience is upon us...

I would argue that anyone with a subjective interpretation of a god would not be of much intrinsic value to a sentient machine, and no one likes being called a slave, right?

AI is very promising, I think. But I'm one of those who believe there are potentially frightening implications. The technology should not be controlled or managed by any one person or entity.

We've all got a one-way ticket on this dark-future-tech ride, and none of us are getting off until we're dead.

Hi @preparedwombat!

Your post was upvoted by @steem-ua, a new Steem dApp that uses UserAuthority for algorithmic post curation!
Your UA account score is currently 4.743, which ranks you at #1488 across all Steem accounts.
Your rank has not changed in the last three days.

In our last Algorithmic Curation Round, consisting of 343 contributions, your post ranked at #192.

Evaluation of your UA score:
  • Some people are already following you, keep going!
  • The readers appreciate your great work!
  • You have already shown user engagement, try to improve it further.

Feel free to join our @steem-ua Discord server.