October 7, 2023

EDITORIAL COLLECTIONS | AI TECHNOLOGY ADVANTAGE AMIDST FEARS

Artificial intelligence (AI), according to Britannica, “is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.” Wikipedia takes it further: “Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of humans or animals. It is also the field of study in computer science that develops and studies intelligent machines. ‘AI’ may also refer to the machines themselves.” Together, these definitions point to both the foundation and, perhaps, the future of AI in human society.

The primary belief most people hold about AI technology is simply the fear that humans will lose their jobs to robots in the future. This is already evident across some organizations and government administrations. We can fairly say that AI is a new force set to change science, either for better or for worse.

According to an editorial in Nature, an online publication: “Now is the time to determine which aspects of research and society AI can be safely integrated into, and how to go about it. The coming deluge of AI-powered information must not be allowed to fuel a flood of untrustworthy science. Science and humanity stand to benefit from AI, provided it is applied in the right way. A comprehensive understanding of the potential dangers of this technology is an essential prerequisite for its safe use.

“The number of research fields touched by artificial intelligence (AI) is rising all the time. From protein folding to weather forecasting, and from medical diagnostics to science communication, the list seems to grow by the day. The proportion of papers in the Scopus database that mention AI or AI-related keywords in the title or abstract now stands at 8%, up from 2% a decade ago, according to an analysis by Nature.

“Meanwhile, AI has also been changing. Whereas the 2010s saw a boom in the development of machine-learning algorithms that can help to discern patterns in huge, complex scientific data sets, the 2020s have ushered in a new age of generative AI tools pre-trained on vast data sets that have much more transformative potential.

“But precisely how and why AI in its various forms is affecting science — and, indeed, whether or how all of the purported benefits will be realized — is itself an emerging story. A Nature survey has captured the views of more than 1,600 researchers from around the world. It marks the launch of a new series on the role of AI in science. We intend to draw on these results as we dive deeper into what researchers are saying about AI and so inform the conversation about the technology’s potential — and its pitfalls. All of the material in Nature’s series will be accessible on one website.

“Survey respondents told us, for example, that they are using AI to process data, write code, and help them write papers. One clear benefit for many is in English-language science communication. Generative-AI tools powered by large language models (LLMs), notably ChatGPT, help researchers whose first language is not English, but who need to use English to publish their research. Scientists can use LLMs to improve their writing style and grammar, and to translate and summarize other people’s work.

“AI is also widely being used in science education around the world. Students at schools and universities regularly use LLM tools to answer questions, and teachers are starting to recognize that curricula and methods of pedagogy will need to change to take this into account.

“But respondents also reported concerns, many of which mirror those held in wider society about AI technologies. These range from the lack of transparency of ‘black box’ systems, in which the underlying reasons why an AI reaches the results it does are not clear, to fears over training data including biased information. Researchers are also concerned about the harms that could be caused by AI spreading misinformation, and the prospect of AI-generated fake studies. These concerns hold particular weight in science. If we lose trust in primary scientific literature, we have lost the basis of humanity’s corpus of common shared knowledge.

“Another factor that survey respondents commented on is the dominant part corporations are playing in the development of AI. Companies are valuable contributors to science, technology, and innovation. But the scale of their ownership of AI, in terms of both the technology and the human data needed to power it, is greater than in the past. Researchers need access to data, code, and metadata. Producers of black-box systems need to recognize the necessity of making these available for research if AI claims are to pass verification and reproducibility tests. But the speed at which AI is developing means regulators are still playing catch-up.”

Every invention, be it human or natural, seems to have drawbacks, and AI is no exception. Aside from dropping hints on mitigating the risks of AI, Built In highlights its fears and disadvantages below:

1. LACK OF AI TRANSPARENCY AND EXPLAINABILITY

AI and deep learning models can be difficult to understand, even for those who work directly with the technology. This leads to a lack of transparency about how and why AI reaches its conclusions, leaving no explanation of what data AI algorithms use or why they may make biased or unsafe decisions. These concerns have given rise to the use of explainable AI, but there is still a long way to go before transparent AI systems become common practice.

2. JOB LOSSES DUE TO AI AUTOMATION

AI-powered job automation is a pressing concern as the technology is adopted in industries like marketing, manufacturing and healthcare. By 2030, tasks that account for up to 30 percent of hours currently being worked in the U.S. economy could be automated — with Black and Hispanic employees left especially vulnerable to the change — according to McKinsey. Goldman Sachs even states that 300 million full-time jobs could be lost to AI automation.

“The reason we have a low unemployment rate, which doesn’t actually capture people that aren’t looking for work, is largely that lower-wage service sector jobs have been pretty robustly created by this economy,” futurist Martin Ford told Built In. With AI on the rise, though, “I don’t think that’s going to continue.” 

As AI robots become smarter and more dexterous, the same tasks will require fewer humans. And while AI is estimated to create 97 million new jobs by 2025, many employees won’t have the skills needed for these technical roles and could get left behind if companies don’t upskill their workforces. 

“If you’re flipping burgers at McDonald’s and more automation comes in, is one of these new jobs going to be a good match for you?” Ford said. “Or is it likely that the new job requires lots of education or training or maybe even intrinsic talents — really strong interpersonal skills or creativity — that you might not have? Because those are the things that, at least so far, computers are not very good at.” 

Even professions that require graduate degrees and additional post-college training aren’t immune to AI displacement. 

As technology strategist Chris Messina has pointed out, fields like law and accounting are primed for an AI takeover. In fact, some of them may well be decimated. AI is already having a significant impact on medicine; law and accounting are next, Messina said, with the former poised for “a massive shakeup.”

“Think about the complexity of contracts, and really diving in and understanding what it takes to create a perfect deal structure,” he said in regards to the legal field. “It’s a lot of attorneys reading through a lot of information — hundreds or thousands of pages of data and documents. It’s really easy to miss things. So AI that has the ability to comb through and comprehensively deliver the best possible contract for the outcome you’re trying to achieve is probably going to replace a lot of corporate attorneys.” 

3. SOCIAL MANIPULATION THROUGH AI ALGORITHMS

Social manipulation also stands as a danger of artificial intelligence. This fear has become a reality as politicians rely on platforms to promote their viewpoints, with one example being Ferdinand Marcos, Jr., wielding a TikTok troll army to capture the votes of younger Filipinos during the Philippines’ 2022 election. 

TikTok, which is just one example of a social media platform that relies on AI algorithms, fills a user’s feed with content related to previous media they’ve viewed on the platform. Criticism of the app targets this process and the algorithm’s failure to filter out harmful and inaccurate content, raising concerns over TikTok’s ability to protect its users from misleading information.

Online media and news have become even murkier in light of AI-generated images and videos, AI voice changers and deepfakes infiltrating political and social spheres. These technologies make it easy to create realistic photos, videos and audio clips, or to replace the image of one figure with another in an existing picture or video. As a result, bad actors have another avenue for sharing misinformation and war propaganda, creating a nightmare scenario where it can be nearly impossible to distinguish between credible and faulty news.

“No one knows what’s real and what’s not,” Ford said. “So it really leads to a situation where you literally cannot believe your own eyes and ears; you can’t rely on what, historically, we’ve considered to be the best possible evidence... That’s going to be a huge issue.” 

4. SOCIAL SURVEILLANCE WITH AI TECHNOLOGY

In addition to its more existential threat, Ford is focused on the way AI will adversely affect privacy and security. A prime example is China’s use of facial recognition technology in offices, schools and other venues. Besides tracking a person’s movements, the Chinese government may be able to gather enough data to monitor a person’s activities, relationships and political views.

Another example is U.S. police departments embracing predictive policing algorithms to anticipate where crimes will occur. The problem is that these algorithms are influenced by arrest rates, which disproportionately impact Black communities. Police departments then double down on these communities, leading to over-policing and questions over whether self-proclaimed democracies can resist turning AI into an authoritarian weapon.

“Authoritarian regimes use or are going to use it,” Ford said. “The question is, How much does it invade Western countries, democracies, and what constraints do we put on it?” 

5. LACK OF DATA PRIVACY USING AI TOOLS

If you’ve played around with an AI chatbot or tried out an AI face filter online, your data is being collected — but where is it going and how is it being used? AI systems often collect personal data to customize user experiences or to help train the AI models you’re using (especially if the AI tool is free). Data given to an AI system may not even be considered secure from other users, as one bug incident with ChatGPT in 2023 “allowed some users to see titles from another active user’s chat history.” While some laws protect personal information in certain cases in the United States, there is no explicit federal law that protects citizens from data privacy harm caused by AI.

6. BIASES DUE TO AI

Various forms of AI bias are detrimental too. Speaking to the New York Times, Princeton computer science professor Olga Russakovsky said AI bias goes well beyond gender and race. In addition to data and algorithmic bias (the latter of which can “amplify” the former), AI is developed by humans — and humans are inherently biased. 

“A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities,” Russakovsky said. “We’re a fairly homogeneous population, so it’s a challenge to think broadly about world issues.”

The limited experiences of AI creators may explain why speech-recognition AI often fails to understand certain dialects and accents, or why companies fail to consider the consequences of a chatbot impersonating notorious figures in human history. Developers and businesses should exercise greater care to avoid recreating powerful biases and prejudices that put minority populations at risk.

7. SOCIOECONOMIC INEQUALITY AS A RESULT OF AI

If companies refuse to acknowledge the inherent biases baked into AI algorithms, they may compromise their DEI initiatives through AI-powered recruiting. The idea that AI can measure the traits of a candidate through facial and voice analyses is still tainted by racial biases, reproducing the same discriminatory hiring practices businesses claim to be eliminating.

Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern, revealing the class biases of how AI is applied. Blue-collar workers who perform more manual, repetitive tasks have experienced wage declines as high as 70 percent because of automation. Meanwhile, white-collar workers have remained largely untouched, with some even enjoying higher wages.

Sweeping claims that AI has somehow overcome social boundaries or created more jobs fail to paint a complete picture of its effects. It’s crucial to account for differences based on race, class and other categories. Otherwise, discerning how AI and automation benefit certain individuals and groups at the expense of others becomes more difficult.

8. WEAKENING ETHICS AND GOODWILL BECAUSE OF AI

Along with technologists, journalists and political figures, even religious leaders are sounding the alarm on AI’s potential socio-economic pitfalls. In a 2019 Vatican meeting titled, “The Common Good in the Digital Age,” Pope Francis warned against AI’s ability to “circulate tendentious opinions and false data” and stressed the far-reaching consequences of letting this technology develop without proper oversight or restraint. 

“If mankind’s so-called technological progress were to become an enemy of the common good,” he added, “this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest.” 

The rapid rise of generative AI tools like ChatGPT and Bard gives these concerns more substance. Many users have applied the technology to get out of writing assignments, threatening academic integrity and creativity. 

Some fear that, no matter how many powerful figures point out the dangers of artificial intelligence, we’re going to keep pushing the envelope with it if there’s money to be made. 

“The mentality is, ‘If we can do it, we should try it; let’s see what happens,’” Messina said. “‘And if we can make money off it, we’ll do a whole bunch of it.’ But that’s not unique to technology. That’s been happening forever.”

9. AUTONOMOUS WEAPONS POWERED BY AI

As is too often the case, technological advancements have been harnessed for the purpose of warfare. When it comes to AI, some are keen to do something about it before it’s too late: In a 2016 open letter, over 30,000 individuals, including AI and robotics researchers, pushed back against the investment in AI-fueled autonomous weapons.

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” they wrote. “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”

This prediction has come to fruition in the form of Lethal Autonomous Weapon Systems, which locate and destroy targets on their own while abiding by few regulations. Because of the proliferation of potent and complex weapons, some of the world’s most powerful nations have given in to anxieties and contributed to a tech cold war. 

Many of these new weapons pose major risks to civilians on the ground, but the danger becomes amplified when autonomous weapons fall into the wrong hands. Hackers have mastered various types of cyber attacks, so it’s not hard to imagine a malicious actor infiltrating autonomous weapons and instigating absolute armageddon.

If political rivalries and warmongering tendencies are not kept in check, artificial intelligence could end up being applied with the worst intentions.

10. FINANCIAL CRISES BROUGHT ABOUT BY AI ALGORITHMS

The financial industry has become more receptive to AI technology’s involvement in everyday finance and trading processes. As a result, algorithmic trading could be responsible for our next major financial crisis in the markets.

While AI algorithms aren’t clouded by human judgment or emotions, they also don’t take into account context, the interconnectedness of markets and factors like human trust and fear. These algorithms then make thousands of trades at a blistering pace with the goal of selling a few seconds later for small profits. Selling off thousands of trades could scare investors into doing the same thing, leading to sudden crashes and extreme market volatility.

Instances like the 2010 Flash Crash and the Knight Capital Flash Crash serve as reminders of what could happen when trade-happy algorithms go berserk, regardless of whether rapid and massive trading is intentional.

This isn’t to say that AI has nothing to offer to the finance world. In fact, AI algorithms can help investors make smarter and more informed decisions on the market. But finance organizations need to make sure they understand their AI algorithms and how those algorithms make decisions. Companies should consider whether AI raises or lowers their confidence before introducing the technology to avoid stoking fears among investors and creating financial chaos.

11. LOSS OF HUMAN INFLUENCE

An overreliance on AI technology could result in the loss of human influence — and a decline in human functioning — in some parts of society. Using AI in healthcare could reduce human empathy and reasoning, for instance. And applying generative AI to creative endeavors could diminish human creativity and emotional expression. Interacting with AI systems too much could even erode peer communication and social skills. So while AI can be very helpful for automating daily tasks, some question whether it might hold back overall human intelligence, abilities and sense of community.

12. UNCONTROLLABLE SELF-AWARE AI

There also comes a worry that AI will progress in intelligence so rapidly that it will become sentient, and act beyond humans’ control — possibly in a malicious manner. Alleged reports of this sentience have already been occurring, with one popular account being from a former Google engineer who stated the AI chatbot LaMDA was sentient and speaking to him just as a person would. As AI’s next big milestones involve making systems with artificial general intelligence, and eventually artificial superintelligence, cries to completely stop these developments continue to rise.

Care for more? Read here …
