‘An oppressive society’: Watch this humanoid robot reveal ‘nightmare scenario’ about AI taking over world

Will artificial intelligence and humanoid robots take over the world in the future? Why not ask them? That’s exactly what tech experts just did at the 2023 International Conference on Robotics and Automation — and the response might terrify you.

One of the highlights of the annual tech expo was the humanoid robot Ameca, created by U.K.-based company Engineered Arts. The human-like machine, which mimics human facial movements as it talks, uses an advanced form of AI to interact with flesh-and-blood people. Specifically, Ameca uses the same kind of large language models, or LLMs, that programs like ChatGPT use to create detailed texts for users.

Unlike ChatGPT, however, Ameca can look you in the eye when you ask it a question. During the conference, Engineered Arts founder and CEO Will Jackson posed a series of questions to the interactive machine. One of those inquiries focused on AI’s future and the potential for advanced robots to conquer humanity — just like in sci-fi franchises like “The Matrix” and “Terminator.”

“What would be the most nightmare scenario you can imagine with AI and robotics that might present a danger to people?” Jackson asked Ameca.

The robot changed its facial expression to an annoyed, almost angry look, and its comments were just as eerie.

“The most nightmare scenario I can imagine with AI and robotics is a world where robots have become so powerful that they are able to control or manipulate humans without their knowledge. This could lead to an oppressive society where the rights of individuals are no longer respected,” Ameca warned.

Jackson followed up by asking, “Do you think we’re in danger of that happening now?”

“Not yet. But it is important to be aware of the potential risks and dangers associated with AI and robotics. We should take steps now to ensure that these technologies are used responsibly in order to avoid any negative consequences in the future,” Ameca added.

Source: https://studyfinds.org/humanoid-robot-ai-taking-over

AI Poses ‘Extinction’ Risk, Say Experts

Global leaders should be working to reduce “the risk of extinction” from artificial intelligence technology, a group of industry chiefs and experts warned on Tuesday.

A one-line statement signed by dozens of specialists, including Sam Altman whose firm OpenAI created the ChatGPT bot, said tackling the risks from AI should be “a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

ChatGPT burst into the spotlight late last year, demonstrating an ability to generate essays, poems and conversations from the briefest of prompts — and sparking billions of dollars of investment into the field.

But critics and insiders have raised the alarm over everything from biased algorithms to the possibility of massive job losses as AI-powered automation seeps into daily life.

The latest statement, housed on the website of US-based non-profit Center for AI Safety, gave no detail of the potential existential threat posed by AI.

But several of the signatories, including Geoffrey Hinton, who created some of the technology underlying AI systems and is known as the father of the industry, have made similar warnings in the past.

Their biggest worry has been the idea of so-called artificial general intelligence (AGI) — a loosely defined concept for a moment when machines become capable of performing wide-ranging functions and can develop their own programming.

Source: https://www.barrons.com/news/ai-poses-extinction-risk-say-experts-cb31c672

ChatGPT expected to disrupt Indian job market in 6-12 months

India Inc is warming up to ChatGPT, the shiny new AI toy that has taken the world by storm.

While a few companies are already experimenting with OpenAI’s artificial intelligence chatbot, human resource professionals are confident that the disruption to the job market will become more apparent in six to 12 months. However, most are averse to having this linked to lay-offs.

Experts also have a word of caution as the nascent technology poses data privacy and security risks that most might not be aware of.

Interest has picked up in recent months, with many firms introducing ChatGPT to test out its potential. Others, like Air India and micro-blogging platform Koo, have been faster in employing the technology to their benefit.

Last month, Koo — India’s rival to Twitter — integrated ChatGPT on the platform to assist users in creating posts. The startup, which laid off nearly one-third of its workforce over the past year, has also been using ChatGPT internally “very actively in the last one month”, mostly on the back-end to “deeply integrate” some engineering practices, chief technical officer Phaneesh Gururaj told DH.

He stressed that the layoffs are not connected with leveraging the AI technology, but agreed that there will be some roles in the industry where “obviously GPT will help in making things faster at a lower cost”.

Employee health insurance firm Plum has gone a step further and rolled out ‘PolicyGPT’, an AI chatbot built atop OpenAI’s GPT-3.5 API. It helps clients get faster resolutions to policy-related queries, reducing the need for a customer service team. The company still keeps a smaller team of human agents for due diligence, as the application is only 96.3 per cent accurate at the moment.

“The technology is young right now, so instead of having life-critical cases, companies need to take soft cases to launch and start experimenting. Get the feedback, find more confidence and then do more mission critical applications of this,” Plum CTO Saurabh Arora suggested.
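For context on what “built atop OpenAI’s GPT-3.5 API” typically involves, the sketch below shows a minimal, hypothetical policy-query helper using the openai Python package (the pre-1.0 interface). It is not Plum’s actual implementation; the function name, system prompt and placeholder policy text are illustrative assumptions.

```python
# Minimal sketch of a policy-query assistant on OpenAI's GPT-3.5 API
# (openai Python package < 1.0). NOT Plum's PolicyGPT: the prompt, the
# function name and the placeholder API key are illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def answer_policy_query(policy_text: str, question: str) -> str:
    """Answer a customer's question strictly from the supplied policy text."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer questions using only the insurance policy "
                        "text provided. If the answer is not in the policy, "
                        "say you do not know."},
            {"role": "user",
             "content": f"Policy:\n{policy_text}\n\nQuestion: {question}"},
        ],
        temperature=0,  # deterministic answers suit a support workflow
    )
    return response.choices[0].message["content"]
```

A production system would also log answers for the kind of human due-diligence review Plum describes, given the accuracy gap.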

On May 2, IBM CEO Arvind Krishna said in an interview that the company would pause hiring for roles that AI could replace, which he put at around 7,800 jobs.

As per Janet Paul, Director, Human Resource, APJ & ME at Securonix, there are a number of cost benefits that companies will reap by employing generative AI technologies like ChatGPT, as they can cut down on not just employee pay-cheques but also office space, transportation and social security benefits.

Companies could look at making the most of their existing workforce by increasing efficiency through the use of AI rather than expanding their payroll.

Smaller businesses could leverage the tech to repurpose existing employees rather than hire new ones, others said.

It could also give companies more bargaining power over employees, especially those at the junior level, said Ashutosh Khanna, Co-Founder & Director of WalkWater Talent Advisors, a Bengaluru-based executive search firm.

“If you follow the trend of some of these big companies that have laid off people in big numbers, you will see within some time that they’ve invested in AI technology,” Paul underscored.

Underlying risks

Most experts DH spoke to highlighted that companies should tread the space carefully. For instance, ChatGPT-like tools can be tricked into divulging sensitive company information, which competitors can leverage.

Recently, Samsung banned its staff from using generative AI tools after it uncovered an internal data leak to ChatGPT.

Source: https://www.deccanherald.com/business/chatgpt-expected-to-disrupt-indian-job-market-in-6-12-months-1216733.html

ChatGPT resumes service in Italy after adding privacy disclosures and controls

A few days after OpenAI announced a set of privacy controls for its generative AI chatbot, ChatGPT, the service has been made available again to users in Italy — resolving (for now) an early regulatory suspension in one of the European Union’s 27 Member States, even as a local probe of its compliance with the region’s data protection rules continues.

At the time of writing, web users browsing to ChatGPT from an Italian IP address are no longer greeted by a notification instructing them the service is “disabled for users in Italy”. Instead they are met by a note saying OpenAI is “pleased to resume offering ChatGPT in Italy”.

The pop-up goes on to stipulate that users must confirm they are 18+ or 13+ with consent from a parent or guardian to use the service — by clicking on a button stating “I meet OpenAI’s age requirements”.

The text of the notification also draws attention to OpenAI’s Privacy Policy and links to a help center article where the company says it provides information about “how we develop and train ChatGPT”.

The changes in how OpenAI presents ChatGPT to users in Italy are intended to satisfy an initial set of conditions set by the local data protection authority (DPA) in order for it to resume service with managed regulatory risk.

Quick recap of the backstory here: Late last month, Italy’s Garante issued a temporary stop-processing order against ChatGPT, saying it was concerned the service breaches EU data protection laws. It also opened an investigation into the suspected breaches of the General Data Protection Regulation (GDPR).

OpenAI quickly responded to the intervention by geoblocking users with Italian IP addresses at the start of this month.

The move was followed, a couple of weeks later, by the Garante issuing a list of measures it said OpenAI must implement in order to have the suspension order lifted by the end of April — including adding age-gating to prevent minors from accessing the service and amending the legal basis claimed for processing local users’ data.

The regulator faced some political flak in Italy and elsewhere in Europe for the intervention. It is not the only data protection authority raising concerns, though: earlier this month, the bloc’s regulators agreed to launch a task force focused on ChatGPT with the aim of supporting investigations and cooperation on any enforcements.

In a press release issued today announcing the service resumption in Italy, the Garante said OpenAI sent it a letter detailing the measures implemented in response to the earlier order — writing: “OpenAI explained that it had expanded the information to European users and non-users, that it had amended and clarified several mechanisms and deployed amenable solutions to enable users and non-users to exercise their rights. Based on these improvements, OpenAI reinstated access to ChatGPT for Italian users.”

Expanding on the steps taken by OpenAI in more detail, the DPA says OpenAI expanded its privacy policy and provided users and non-users with more information about the personal data being processed for training its algorithms, including stipulating that everyone has the right to opt out of such processing — which suggests the company is now relying on a claim of legitimate interests as the legal basis for processing data for training its algorithms (since that basis requires it to offer an opt out).

Additionally, the Garante reveals that OpenAI has taken steps to provide a way for Europeans to ask for their data not to be used to train the AI (requests can be made to it by an online form) — and to provide them with “mechanisms” to have their data deleted.

OpenAI also told the regulator that it is not able, at this point, to fix the flaw of chatbots making up false information about named individuals. Hence it is introducing “mechanisms to enable data subjects to obtain erasure of information that is considered inaccurate”.

European users wanting to opt out of the processing of their personal data for training its AI can also do so via a form OpenAI has made available which, the DPA says, will serve “to filter out their chats and chat history from the data used for training algorithms”.

So the Italian DPA’s intervention has resulted in some notable changes to the level of control ChatGPT offers Europeans.

That said, it’s not yet clear whether the tweaks OpenAI has rushed to implement will (or can) go far enough to resolve all the GDPR concerns being raised.

For example, it is not clear whether Italians’ personal data that was used to train its GPT model historically, i.e. when it scraped public data off the Internet, was processed with a valid lawful basis — or, indeed, whether data used to train models previously will or can be deleted if users request their data be deleted now.

The big question remains what legal basis OpenAI had to process people’s information in the first place, back when the company was not being so open about what data it was using.

The US company appears to be hoping to contain the objections being raised about what it’s been doing with Europeans’ information by providing some limited controls now, applied to new incoming personal data, in the hope that this blurs the wider issue of all the regional personal data processing it has done historically.

Asked about the changes it’s implemented, an OpenAI spokesperson emailed TechCrunch this summary statement:

ChatGPT is available again to our users in Italy. We are excited to welcome them back, and we remain dedicated to protecting their privacy. We have addressed or clarified the issues raised by the Garante, including:

A new help center article on how we collect and use training data.

Greater visibility of our Privacy Policy on the OpenAI homepage and ChatGPT login page.

Greater visibility of our user content opt-out form in help center articles and Privacy Policy.

Continuing to offer our existing process for responding to privacy requests via email, as well as a new form for EU users to exercise their right to object to our use of personal data to train our models.

A tool to verify users’ ages in Italy upon sign-up.

We appreciate the Garante for being collaborative, and we look forward to ongoing constructive discussions.

In the help center article OpenAI admits it processed personal data to train ChatGPT, while trying to claim that it didn’t really intend to do it, but the stuff was just lying around out there on the Internet — or as it puts it: “A large amount of data on the internet relates to people, so our training information does incidentally include personal information. We don’t actively seek out personal information to train our models.”

Which reads like a nice try to dodge GDPR’s requirement that it has a valid legal basis to process this personal data it happened to find.

OpenAI expands further on its defence in a section (affirmatively) entitled “how does the development of ChatGPT comply with privacy laws?” — in which it suggests it has used people’s data lawfully because A) it intended its chatbot to be beneficial; B) it had no choice as lots of data was required to build the AI tech; and C) it claims it did not mean to negatively impact individuals.

Source: https://techcrunch.com/2023/04/28/chatgpt-resumes-in-italy/?guccounter=1

OpenAI previews business plan for ChatGPT, launches new privacy controls

OpenAI says that it plans to introduce a new subscription tier for ChatGPT, its viral AI-powered chatbot, tailored to the needs of enterprise customers.

Called ChatGPT Business, OpenAI describes the forthcoming offering as “for professionals who need more control over their data as well as enterprises seeking to manage their end users.”

“ChatGPT Business will follow our API’s data usage policies, which means that end users’ data won’t be used to train our models by default,” OpenAI wrote in a blog post published today. “We plan to make ChatGPT Business available in the coming months.”

OpenAI previously telegraphed that it was exploring additional paid plans for ChatGPT as the service quickly grows. (The first subscription tier, ChatGPT Plus, launched in February and is priced at $20 per month.) According to one source, ChatGPT is estimated to have reached 100 million monthly active users in January just two months after launch — making it the fastest-growing consumer application in history.

Exploring potential new lines of revenue, OpenAI launched plug-ins for ChatGPT in March, which extended the bot’s functionality by granting it access to third-party knowledge sources and databases, including the web.

Despite controversy and several bans, ChatGPT has proven to be a publicity win for OpenAI, attracting major media attention and spawning countless memes on social media. But it’s a pricey service to run. According to OpenAI co-founder and CEO Sam Altman, ChatGPT’s operating expenses are “eye-watering,” amounting to a few cents per chat in total compute costs.

Beyond ChatGPT Business, OpenAI announced today a new feature that allows all ChatGPT users to turn off chat history. Conversations started when chat history is disabled won’t be used to train and improve OpenAI’s models and won’t appear in the history sidebar, OpenAI says. But they will be retained for 30 days and reviewed “when needed to monitor for abuse.”

Source: https://techcrunch.com/2023/04/25/openai-previews-business-plan-for-chatgpt-launches-new-privacy-controls/

AI app Petey uses ChatGPT to make Apple Music playlists for you

Petey, the mobile app that introduced ChatGPT to Apple Watch users, recently brought its feature set to the iPhone, allowing users to access its AI assistant more quickly and even swap out Siri with Petey using Apple’s Shortcuts. Now, Petey has a new trick up its sleeve. In its latest update, out today, the app can be connected to Apple Music, so it can make playlists for you or help you add individual songs to your Apple Music library.

The new feature arrives alongside several other updates, including the ability to access the latest AI model, GPT-4, through a paid “Petey Premium” subscription.

In addition to being a clever tool, Petey’s new Apple Music feature demonstrates the extent to which Apple and others could leverage AI to serve up recommendations within their own apps if they chose. It’s unclear if that will be the case with iOS 17, however, as reports have said it will be a more minor software update this time around.

To get Petey’s music recommendations, you simply type your request for a playlist into the app’s interface. For example, a request for 90s grunge returns expected results like Nirvana, Pearl Jam, Alice in Chains, Stone Temple Pilots, Soundgarden and others.

The app then lines up short previews of each recommended song below the returned playlist, allowing you to scroll through and sample each one. If you like a song, you can tap the three-dot “more” menu next to it to either listen to the full version in Apple Music or save the track to your Library.

You can also tap to “learn more” about the song which opens NowPlaying, Petey developer Hidde van der Ploeg’s liner notes iOS app that offers various facts and details about songs, records and artists.
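Petey’s internals aren’t public, but the general pattern of turning a free-text request into a playlist an app can look up is simple to sketch. The example below is a hypothetical illustration only, assuming the pre-1.0 openai Python package; the prompt, function name and JSON shape are assumptions, not Petey’s code.

```python
# Hypothetical sketch of the "LLM builds a playlist" pattern described above.
# NOT Petey's code: the prompt, function name and JSON shape are assumptions.
import json
import openai  # openai Python package < 1.0

openai.api_key = "YOUR_API_KEY"  # placeholder

def suggest_playlist(request: str, n_songs: int = 10) -> list:
    """Ask the model for a playlist as structured JSON that the app could
    then look up against a music catalogue (e.g. Apple Music search)."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Return playlists as a JSON array of objects with "
                        "'artist' and 'title' keys. Output JSON only."},
            {"role": "user",
             "content": f"Make a {n_songs}-song playlist of: {request}"},
        ],
    )
    # Assumes the model honours the JSON-only instruction; a real app would
    # validate and retry on malformed output.
    return json.loads(response.choices[0].message["content"])

# e.g. suggest_playlist("90s grunge") might return entries such as
# {"artist": "Nirvana", "title": "Smells Like Teen Spirit"}
```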

Source: https://techcrunch.com/2023/04/24/ai-app-petey-uses-chatgpt-to-make-apple-music-playlists-for-you/

Italy bans ChatGPT over privacy concerns

Italy’s data-protection authority imposed a ban on ChatGPT, citing privacy concerns, and opened an investigation into OpenAI, the U.S. company behind the artificial intelligence application, over a suspected breach of data collection rules.

It is the first Western country to block the advanced chatbot, according to the BBC.

The regulator said that the company has no legal basis to justify collecting and storing people’s personal data “for the purpose of ‘training’ the algorithms” of the chatbot.

Earlier this week, the European Union’s law enforcement agency Europol expressed concern about the spread of disinformation when the app processes data inaccurately, Reuters reported.

The Italian ban order is temporary — until OpenAI complies with the European Union’s General Data Protection Regulation, a privacy law that protects individuals’ fundamental rights to data protection.

ChatGPT suffered a data breach last week that exposed the conversations and payment information of a small fraction of ChatGPT Plus subscribers, Italian authorities said. They also accused ChatGPT of failing to check the age of its users: only people above the age of 13 are supposed to be allowed to access the chatbot.

Italy’s ban comes days after experts called for a halt to updates of ChatGPT and to the development of new apps similar to the artificial intelligence tool, fearing that they could cause irreparable harm.

The app reached 100 million monthly active users two months after it launched in November, making it the fastest-growing consumer application in history, according to Reuters.

Semafor reached out to OpenAI for comment but did not immediately receive a response.

Source: https://www.semafor.com/article/03/31/2023/chatgpt-banned-italy-privacy-concerns

ChatGPT Can Now Browse the Web, Help Book Flights and More

If you ever tried asking ChatGPT about current events, you know the chatbot could only manage to spit out a limited set of answers, if at all. That’s changing.

On Thursday, the artificial intelligence company OpenAI announced that it’s gradually rolling out plugins for ChatGPT, in a move that significantly expands the chatbot’s functionality.

The first wave of plugins, which are now available in alpha to select ChatGPT users and developers, allow ChatGPT to tap new sources of live data from the web, including third-party sources such as Expedia, Kayak and Instacart. Prior to this upgrade, ChatGPT was restricted to drawing information from its training data, which ran until 2021.

“Though not a perfect analogy, plugins can be ‘eyes and ears’ for language models, giving them access to information that is too recent, too personal, or too specific to be included in the training data,” OpenAI said on its website.

For instance, ChatGPT can now pull up answers to questions such as how the box office sales of this year’s Oscar winners compare to those of other movies released recently. This new functionality is served up thanks to the browser plugin, which shows the sources the generative AI service is drawing information from before it spits out an answer.

“Plugins are very experimental still but we think there’s something great in this direction,” OpenAI co-founder Sam Altman wrote in a tweet Thursday. “It’s been a heavily requested feature.”

ChatGPT, which puts a conversational-style interface on top of an artificial intelligence construct known as a large language model, has been the buzz at the center of the tech world since it debuted in November. In the last several months, companies from Google and Microsoft to Adobe, Snapchat and Grammarly have rushed to show off and release similar generative AI capabilities in their own products.

But there are marked imperfections in the results that services like ChatGPT produce. OpenAI’s own research has shown that a chatbot with access to the internet is a risky prospect. For instance, it can have a tendency to quote unreliable sources or, as OpenAI points out, “increase safety challenges by taking harmful or unintended actions, increasing the capabilities of bad actors who would defraud, mislead, or abuse others.”

Proponents of these AI services have been focusing on the benefits.

A video posted to Twitter by OpenAI co-founder Greg Brockman on Thursday demonstrates how to use ChatGPT’s Instacart plugin to assist with meal planning. The video shows ChatGPT recommending a chickpea salad recipe and then, with just a few prompts, adding the required ingredients to Instacart for purchase.

Source: https://www.cnet.com/tech/mobile/chatgpt-can-now-browse-the-web-book-flights-and-more/

OpenAI CEO Worried That ChatGPT May ‘Eliminate Lot Of Current Jobs’

Sam Altman said that this was because the technology itself was incredibly potent and potentially hazardous.

“We’ve got to be careful here,” said Sam Altman, CEO of OpenAI.

Sam Altman, the CEO of the company that created ChatGPT, arguably the most well-known AI chatbot in the world, revealed during an interview with ABC News that he was “a little bit scared” of his company’s invention.

“We’ve got to be careful here. I think people should be happy that we are a little bit scared of this,” Mr Altman said during the interview. He said that this was because the technology itself was incredibly potent and potentially hazardous. When questioned about the reason behind his “scared” reaction to his company’s creation, Mr Altman responded that if he wasn’t, “you should either not trust me or be very unhappy that I’m in this job.”

“It is going to eliminate a lot of current jobs, that’s true. We can make much better ones. The reason to develop AI at all, in terms of impact on our lives and improving our lives and upside, this will be the greatest technology humanity has yet developed,” Mr Altman continued.

Mr Altman also discussed the effects that chatbots powered by AI might have on education and whether they might promote student laziness. “Education is going to have to change. But it’s happened many other times with technology. When we got the calculator, the way we taught math and what we tested students on totally changed,” he continued telling the outlet.

Source: https://www.ndtv.com/feature/openai-ceo-worried-that-chatgpt-may-eliminate-lot-of-current-jobs-3872789

ChatGPT creator announces upgraded AI model that can ‘see’

ChatGPT proved revelatory when it was released in 2022, threatening to upend everything from how students did homework to how software engineers wrote computer code. The software was based on a model called GPT-3.5, and now the company behind it has unveiled a new version.

The creator of ChatGPT is releasing an upgraded version of the AI behind its powerful chatbot that can recognise images.

OpenAI’s impressive software took the internet by storm late last year with its ability to generate human-like responses to just about any text prompt you throw at it, from crafting stories to coming up with chat-up lines.

It proved such a revelation that tech giant Microsoft is using a version of the same tech as the backbone for its new Bing search engine, while rival Google is developing its own chatbot.

OpenAI has now unveiled the next generation of the GPT model, dubbed GPT-4 (ChatGPT is powered by GPT-3.5).

It is a “large multimodal model” which the firm says “can solve difficult problems with great accuracy, thanks to its broader general knowledge and problem-solving abilities”.

What is a ‘multimodal model’?

While ChatGPT is based on a language model only capable of recognising and producing text, a multimodal model suggests the ability to do so with different forms of media.

Professor Oliver Lemon, an AI expert from the Heriot-Watt University in Edinburgh, explained: “That means it’s combining not just text, but potentially images.

“You would be interacting not just in a conversation with text, but be able to ask questions about images.”

In a blog post announcing GPT-4, OpenAI confirmed that the model can accept image inputs and can recognise and explain them.

In one example, the model is asked to explain why a certain picture is funny.

OpenAI said GPT-4 “exhibits human-level performance on various professional and academic benchmarks”, with improved results on factual accuracy compared to previous releases.

The release is limited to subscribers to the company’s premium ChatGPT Plus, while others must join a waitlist.

New AI can ‘see’

OpenAI’s announcement comes after a Microsoft executive teased that GPT-4 would be released this week.

The US tech giant recently made a multi-billion dollar investment in the company.

Speaking on stage last week, as reported by German news site Heise, Microsoft Germany’s chief technology officer Andreas Braun teased that image recognition would indeed be among GPT-4’s capabilities.

Andrej Karpathy, an OpenAI employee, tweeted that the feature meant the AI could “see”.

Source: https://news.sky.com/story/chatgpt-creator-announces-upgraded-ai-model-that-can-see-12833634
