Is AI coming for us all?

in techai •  2 years ago 

The intelligence is artificial; the dangers are real, and some of them are already with us.

I'm Kayden, and I'd like to welcome you to Kush in Tech, where we discuss recent technological advancements. You've probably seen the doomsday headlines, but are machines really out to get us, or is the threat overblown?
About two weeks ago, hundreds of tech experts and academics warned governments that artificial intelligence, or AI, if left unchecked, could soon develop the capability and the desire to wipe out all of humanity. Among those sounding the alarm were some of the industry's biggest players, including AI developers like the Microsoft-backed OpenAI and Google's DeepMind, companies that continue to pour billions of Big Tech dollars into this technology. However, there is a lot of skepticism surrounding this story and AI.
Now, the problem is that most journalists do not fully understand the technology they're covering!
The puzzle that perplexes us all is how AI went so quickly from thrilling to ominous: from limitless possibilities to the fear that a technology we created might ultimately destroy us.

Some people believe that catastrophic risks are genuinely conceivable and that we should be aware of the potential dangers of AI and robotics. True, there could be a range of really bad outcomes, starting with destabilizing economies and going all the way to the kind of effects you might expect from large-scale nuclear war.
Personally, I believe that we don't know enough to feel very secure and that we should start working to prevent even the low-probability event of something catastrophic, since there are currently warnings that humanity may go extinct, perhaps in the relatively near future.

Many of the claims and arguments suggesting that artificial general intelligence (AGI) could potentially lead to the extinction of humanity are currently overblown, exaggerated, and highly speculative. The mechanisms through which AGI could pose such catastrophic risks are often not well-founded or based on solid evidence. It is important to approach these discussions with a balanced perspective and avoid undue alarmism.

When a new technology is introduced, some people are interested in promoting it and telling us that it will change the world in wonderful ways, while others are asking, "Wait a minute, where is this going, and where might it end up?"
That kind of extreme debate runs the risk of ignoring the present and the really useful developments that are taking place. Many AI experts will concur that the Doomsday scenarios don't merit the media attention they're receiving and should be put on the back burner.
Some claim that those scenarios and existential fears rest on AGI (artificial general intelligence), a type of AI that does not yet exist but could eventually be developed to think, reason, and act autonomously without human input. Still, that possibility generates alarmist news angles that attract clicks and divert viewers' attention from the actual harm that AI is already doing through technologically induced bias, discrimination, and misinformation.
Unconscious biases can enter machine learning models when humans select the data that artificial intelligence uses; these biases are then automated and reinforced.
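The mechanism described above can be illustrated with a minimal sketch, using entirely hypothetical data: a simple model "learns" approval rates from biased historical human decisions, and from then on applies that skew automatically to every new case.

```python
# Illustrative sketch with hypothetical data: how a bias in historical
# human decisions gets learned by a model and then automated.
from collections import defaultdict

# Hypothetical past loan decisions, skewed against group "B".
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def train(records):
    """'Learn' the approval rate per group from past human decisions."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

model = train(history)

def predict(group, threshold=0.5):
    """Automate the decision: approve only if the learned rate clears the bar."""
    return model[group] >= threshold

print(predict("A"))  # True  -- the historical skew is now automated
print(predict("B"))  # False -- and reinforced for every future applicant
```

The point of the sketch is that no one wrote "discriminate" anywhere in the code; the bias lives entirely in the data that humans selected, which is exactly why it is so easy to overlook.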

One of the issues I find concerning about the emphasis on doomsday scenarios is that it diverts valuable energy and resources away from addressing the real harms caused by AI systems today. For instance, studies conducted by the U.S. government have revealed that AI-driven facial recognition tools often misidentify individuals of color, resulting in wrongful arrests. These tools are currently impacting people's access to essential resources such as mortgages, healthcare, job opportunities, and fair compensation. It is important to prioritize addressing these tangible issues instead of solely focusing on distant and speculative future risks.

These are the kinds of worries we should be emphasizing in the discussion of artificial intelligence. A number of risks have been raised, and I'm particularly concerned that these tools could amplify the level of misinformation we already experience, especially if we don't take care to restrain them and stay away from autonomous systems that can make their own decisions.
One real worry about AI in newsrooms, a concern frequently raised out of self-interest, is the implication for reporters' jobs and livelihoods.
In journalism, artificial intelligence is not entirely new; automation tools have been producing news copy for years. In financial reporting, in particular, these tools can track market movements and quickly and effectively sift through earnings reports.
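The kind of automation described above can be sketched in a few lines. This is a hypothetical, simplified template filler with made-up figures, not any newsroom's actual system, but it shows why routine earnings copy was automatable long before generative AI.

```python
# Illustrative sketch with hypothetical figures: template-based
# automation of the sort long used for routine earnings copy.

def earnings_copy(company, quarter, revenue, prior_revenue, eps):
    """Turn raw earnings figures into a one-sentence news item."""
    change = (revenue - prior_revenue) / prior_revenue * 100
    direction = "rose" if change >= 0 else "fell"
    return (
        f"{company} reported {quarter} revenue of ${revenue:,.0f} million, "
        f"which {direction} {abs(change):.1f}% from a year earlier, "
        f"with earnings of ${eps:.2f} per share."
    )

# Hypothetical example figures, not real reported results.
print(earnings_copy("Acme Corp", "Q2", 1250, 1100, 1.42))
```

Note that a system like this only slots numbers into fixed sentences; it cannot interpret what the numbers mean, which is the distinction the rest of this piece turns on.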
The impact that AI is having on the news industry will grow with the introduction of new generative AI tools like ChatGPT and their capacity to produce original text and images, yet many journalists are no more knowledgeable about AI and its drawbacks and limitations than the audiences they report to.
Helping people understand the technology better, along with its strengths and weaknesses, will lessen their fear of it. Good media organizations are using it effectively to support their journalists, making them more effective, efficient, and connected to their audience, and freeing them up to produce the kind of human journalism that we sorely need.

It will also be simple to replace journalists and produce easily automated, banal journalism that you can't necessarily trust. Some people will do this for profit, to increase their page views, while others will do it to try and add value.
We are aware that it has shortcomings: it hasn't been trained on current data, isn't responsive to changing societal conditions, and isn't fundamentally reliable about facts.
It can effectively mimic the speech patterns of people, but it is unable to reveal the true meaning of the words that appear on the page. This is an important distinction to remember.
A generative AI system cannot take the place of journalists as a vital resource for society.

In a letter that was only one sentence long, some of the biggest names in the industry, including Google DeepMind's Demis Hassabis and OpenAI's Sam Altman, warned that humanity was on the verge of an AI-driven extinction. The letter was published late last month and stated that regulation should be a top global priority to limit any potential harm that their AI products might cause.
Analysts have noted that Sam Altman's prophecies of doom may serve to secure a seat for billionaire tech bros at the table where the rules and regulations for the future of AI will ultimately be drafted, even as those companies continue to invest heavily in AI research.

You can see this, for example, when Sam Altman visits Washington, DC, where he meets with a number of CEOs and senators to discuss the dangers.
What they are saying is that the technologies we are currently developing pose a threat to all of humanity, and that the only way to effectively mitigate these risks is to rely on the companies themselves. But keep in mind that these companies haven't shown a track record of responding to public concerns about the general impact of their systems.
Therefore, I believe it's crucial that we pay close attention to how market incentives encourage these companies to introduce systems into widespread use before they're ready.
The human being is the weak link in this situation: these systems reach the public with scant internal scrutiny, let alone sufficient testing from regulators.

Although I am optimistic about the potential for developing secure and incredibly helpful AI systems, it is important to recognize that even with such advancements, there is a risk that AI knowledge will be misused to create harmful systems that could endanger humanity. Understanding the motivations of those who use AI, as well as the social, political, and economic factors at play, is important to resolving this issue.

Participating in the discussion and decision-making processes surrounding AI is crucial if we are to minimize these risks. This can be accomplished by raising public awareness and educating people about AI technology, its advantages, and potential drawbacks. Platforms for open discussion that include experts, decision-makers, and the general public enable a shared understanding of the implications of AI and guarantee that development is in line with societal values and priorities.

To govern the ethical development and application of AI, rules and regulations must also be put in place. Companies should put ethics first and take responsibility for any risks that their AI systems may present. A comprehensive framework that supports the secure and beneficial use of AI technology can be developed through cooperation between business, academia, government, and civil society.

Alongside the advancement of the technology itself, the integration of interdisciplinary approaches, such as those from ethics and philosophy, is also essential. This broader viewpoint helps us foresee potential outcomes and risks and deal with them before they happen. By proactively evaluating the potential effects of technology, we can work toward a future in which human lives are not forfeited for the sake of technological efficiency.

In conclusion, important steps towards lowering the risks connected with AI include involving society, raising public awareness, establishing regulations, and adopting a multidisciplinary approach. By doing this, we can balance utilizing AI's potential advantages with protecting ourselves from potential drawbacks, ultimately putting human welfare and security first.
