World-first AI foundation model for eye care to supercharge global efforts to prevent blindness

Researchers at Moorfields Eye Hospital and UCL Institute of Ophthalmology have developed an artificial intelligence (AI) system that has the potential to not only identify sight-threatening eye diseases but also predict general health, including heart attacks, stroke, and Parkinson’s disease.

RETFound, one of the first AI foundation models in health care, and the first in ophthalmology, was developed using millions of eye scans from the NHS. The research team are making the system open source: freely available for use by any institution worldwide, to act as a cornerstone for global efforts to detect and treat blindness using AI. This work has been published in Nature.

Progress in AI continues to accelerate at a dizzying pace, with excitement generated by the development of “foundation” models such as ChatGPT. A foundation model is a very large, complex AI system, trained on huge amounts of unlabeled data, that can be fine-tuned for a diverse range of subsequent tasks.
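
To make the concept concrete, here is a minimal sketch of the pretrain-then-fine-tune pattern, assuming a toy PyTorch setup: a frozen backbone stands in for an encoder pretrained on unlabeled images, and only a small task-specific head is trained on labeled examples. The layer sizes, the five-class task, and the random data are illustrative assumptions, not RETFound’s actual architecture or code.

```python
# A minimal sketch of fine-tuning a "foundation model" style encoder.
# The tiny encoder below is a toy stand-in for a backbone pretrained on
# huge amounts of unlabeled data; only the new task head is trained here.
import torch
import torch.nn as nn

encoder = nn.Sequential(          # hypothetical pretrained backbone
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 768),  # stand-in for e.g. a ViT encoder
    nn.ReLU(),
)
for p in encoder.parameters():    # freeze the pretrained weights
    p.requires_grad = False

num_classes = 5                   # an illustrative downstream task
head = nn.Linear(768, num_classes)
model = nn.Sequential(encoder, head)

optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy fine-tuning step on random tensors standing in for labeled images.
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"fine-tuning step done, loss={loss.item():.3f}")
```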

RETFound consistently outperforms existing state-of-the-art AI systems across a range of complex clinical tasks and, even more importantly, it addresses a significant shortcoming of many current AI systems by working well in diverse populations and in patients with rare diseases.

Senior author Professor Pearse Keane (UCL Institute of Ophthalmology and Moorfields Eye Hospital) said, “This is another big step towards using AI to reinvent the eye examination for the 21st century, both in the U.K. and globally. We show several exemplar conditions where RETFound can be used, but it has the potential to be developed further for hundreds of other sight-threatening eye diseases that we haven’t yet explored.”

“If the U.K. can combine high quality clinical data from the NHS, with top computer science expertise from its universities, it has the true potential to be a world leader in AI-enabled health care. We believe that our work provides a template for how this can be done.”

AI foundation models have been called “a transformative technology” by the U.K. government in a report published earlier this year, and have come under the spotlight with the launch in November 2022 of ChatGPT, a foundation model trained using vast quantities of text data to develop a versatile language tool.

Source: https://medicalxpress.com/news/2023-09-world-first-ai-foundation-eye-supercharge.html

Kaun Banega Crorepati 15: Amitabh Bachchan expresses fear over AI development, says ‘I am scared of getting replaced’

The latest episode of Kaun Banega Crorepati 15 begins with the Fastest Finger First round, and Chirag Agarwal takes the hot seat.
He is from Ahmedabad and is pursuing a BTech degree.
The first question, for Rs 1,000, is image-based: What service does this image represent? A. Taxi, B. Lift, C. Airport, D. Food Counter. He correctly answers option B.

Big B on AI development

After the Rs 2,000 question, Big B talks about the widespread use of AI and asks Chirag about his studies.

Chirag talks about his studies and then asks Big B, “It was said that AI would take over labour jobs, but now it seems that people in creative fields are the most affected. I feel very lucky to see you in flesh and blood. But someday it might happen that you are not able to shoot and your hologram is used instead.”

Big B jokes, “Let me tell you the truth. This is not me but my hologram.” He then goes on to add, “I am scared I might be replaced by a hologram. In films, such things are already happening. We are taken to a room where around 40 cameras rotate around us, and we are made to give several expressions, making faces and looking all around. I didn’t know what it was for, but later I learned that the footage would be used in my absence. Even if I haven’t given the shot, it will seem that it is me. So I get scared that AI will take our jobs.”
He requests the contestant, saying, “Please help me out if I ever go jobless. We get jobs with a lot of difficulty.”

Big B asks Chirag if he is popular amongst girls. Chirag says, “My number is in minus. Girls leave me. I get called cute and cuddly. Everyone calls me a panda. Girls don’t like me and I get friend-zoned.”
He uses a lifeline for the Rs 80,000 question – Which state’s film industry is popularly called ‘Sandalwood’? A. Andhra Pradesh, B. Karnataka, C. Madhya Pradesh, D. Kerala.
With the help of the Audience Poll, he correctly answers option B.

Contestant Chirag on getting brother-zoned by girls

Mr Bachchan asks Chirag’s sister if her brother is telling the truth. She confirms that he does indeed get brother-zoned. Big B tells him not to lose hope, as girls will see him differently after KBC, and that he should thank the show.
He uses the Video Call a Friend lifeline for the Rs 1,60,000 question: In which sport was Ding Liren crowned world champion in 2023? A. Chess, B. Snooker, C. Badminton, D. Table Tennis.
He doesn’t get any help, as his uncle is not sure of the answer. Chirag doesn’t use the Double Dip lifeline either, as he isn’t sure himself, so he quits the game and takes home Rs 80,000. Before leaving, he guesses option D, but the correct answer is option A.
Source: https://timesofindia.indiatimes.com/tv/news/hindi/kaun-banega-crorepati-15-amitabh-bachchan-expresses-fear-over-ai-development-says-i-am-scared-of-getting-replaced/articleshow/103443683.cms

Google’s search for an AI future as it turns 25

The tech giant Google and I almost share the same birthday… give or take a few years.

Google turns 25 this month (I’ll have a few more candles on my cake) – and finds itself in a tech landscape that has changed dramatically since founders Larry Page and Sergey Brin started it in 1998.

Back then Google was only a search engine, and it lived for its first few months in the garage of Susan Wojcicki – the future boss of YouTube.

You do not need me to tell you how well that search engine worked out. It has been 17 years since the word Google officially entered the dictionary. I remember a BBC discussion about whether we should use it as a verb on-air because of its potential to be a free advert for the firm.

That company – now part of a larger parent group called Alphabet – has since diversified into pretty much every area of tech and dominates some of them to an extent which sometimes troubles anti-competition regulators. Right now it is trying to Google itself into pole position in the AI race – but some say it has already fallen behind.

Hits and misses
Email and smartphones, software and hardware, driverless cars, digital assistants, YouTube – Google has spawned (and acquired) hundreds of products and services. Not all of them have worked out.

There are 288 retired projects listed on the Killed by Google website, including the gaming platform Stadia and the budget VR headset Google Cardboard.

Google vice-president Phil Harrison showed off the Stadia controller on-stage at its launch in 2019

The question now is whether Google can maintain its omnipresence in the rapidly evolving world of artificial intelligence.

There have been mutterings, including from within, that it has fallen behind. A leaked memo from a Google engineer found its way on to the net, in which he said the firm had no AI “secret sauce” and was not in a position to win the race.

This feeling was further fuelled by the battle of the bots.

For many people, the first time they knowingly interacted with AI – and were impressed by it – came in the form of ChatGPT, the viral AI chatbot which exploded into the world in November 2022.

Its creator OpenAI has received billions of dollars in investment from Microsoft, which is now working it into its own products, including the Bing search engine and Office 365.

ChatGPT has been dubbed the “Google killer” because of the way it can answer a question in one go, rather than serve up pages and pages of search results.

It uses a language-processing architecture called a transformer, which was actually invented by Google. But when Google followed up a few months later with its own rival, Bard, it had nowhere near the same impact.
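
For readers curious what a transformer actually computes, its core operation is “attention”, in which every token in a sequence weighs every other token when building its representation. The sketch below is a textbook simplification with arbitrary dimensions, not any company’s production code.

```python
# Scaled dot-product attention, the building block of the transformer.
import math
import torch

def attention(q, k, v):
    # q, k, v: (batch, seq_len, d_model)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = torch.softmax(scores, dim=-1)  # how much each token attends to each other token
    return weights @ v

x = torch.randn(1, 10, 64)  # a toy sequence of 10 token embeddings
out = attention(x, x, x)    # self-attention: queries, keys, values all from x
print(out.shape)            # torch.Size([1, 10, 64])
```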

Bard was given a surprisingly cautious launch. It was not for under-18s, the tech giant said, and it was described to me as “an experiment” by a senior exec.

Perhaps its caution was in part a result of a weird situation which preceded Bard.

Source: https://www.bbc.com/news/technology-66659361

MLB testing hands-free entry for fans utilizing facial authentication, AI security

Major League Baseball is testing facial authentication-based entry that would allow ticketed fans to walk directly into stadiums — a convenient new arrival method that the league says won’t compromise on safety and security.

No more fumbling for a phone at entry, waiting through a wonky bar code scan, or shuffling through a lengthy line at one gate to catch a baseball game at the home of the National League champions.

The Philadelphia Phillies have partnered with MLB to use their stadium as the site of a pilot program called Go-Ahead Entry, which uses facial authentication-based entry for ticketed fans.

Forget Shohei Ohtani or Bryce Harper: the faces of the game at Citizens Bank Park this week were the fans who snapped selfies through the MLB Ballpark app, breezed past a facial scan camera, and were soon hunting for their seats or the nearest hot dog stand.
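
MLB has not published the internals of Go-Ahead Entry, but facial-authentication systems of this kind generally reduce an enrolled selfie and a live camera frame to numeric “embeddings” and admit the fan when the two are similar enough. The sketch below illustrates only that general idea; the embedding function, vector size, and threshold are placeholder assumptions.

```python
# Illustrative only: matching a gate-camera frame against an enrolled selfie
# via face embeddings and a cosine-similarity threshold.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Stand-in for a real face-embedding model; returns a unit vector.
    rng = np.random.default_rng(int(image.sum()) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def matches(enrolled: np.ndarray, live: np.ndarray, threshold: float = 0.8) -> bool:
    # Cosine similarity of two unit vectors is just their dot product.
    return float(enrolled @ live) >= threshold

selfie = np.ones((112, 112, 3))      # enrollment: selfie from the ticketing app
enrolled_embedding = embed(selfie)

gate_frame = np.ones((112, 112, 3))  # at the gate: a camera frame of the same "face"
print(matches(enrolled_embedding, embed(gate_frame)))  # True
```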

Sports fans have long adjusted to electronic tickets on smartphones, and have the capacity to order everything from chicken fingers to foam fingers on devices from their seats without missing a pitch, punt or power play.

Now comes hands-free entry to one ballpark — one that takes advantage of existing contact-less security protocols. Fans have eagerly used the technology so far, even as safety fears were heightened after Chicago police said a shooting that wounded two women at Friday night’s Athletics-White Sox game most likely involved a gun that went off inside Guaranteed Rate Field.

Karri Zaremba, Major League Baseball’s senior vice president of product, said Go-Ahead Entry had been in the works for more than two years. The program — complete with Go-Ahead banners at the first base gate directing fans — was launched Aug. 21.

All Phillies fans entering through the first base and left field gates could already walk through security screening without having to stop to open bags or be checked individually. The Phillies use screening from Evolv Technology, whose AI-powered sensors expedite entry and eliminate the need to remove cell phones, cameras, coins, and keys and place them in a screening bowl, or to have patrons checked individually with metal-detecting wands.

Source: https://apnews.com/article/phillies-mlb-security-safety-ballparks-b31926803c627ec7f0f8972032466cea

Tata Sons chairman N Chandrasekaran calls for regulating generative AI

In the era of technological advancement, artificial intelligence (AI) has emerged as a transformative force that is reshaping industries and societies alike. As AI systems become increasingly sophisticated, the need for regulation has become a pressing concern. N Chandrasekaran, Chairman of Tata Sons, highlighted the importance of regulating generative AI at the B20 Summit curtain raiser event.

“Generative AI has a lot to offer. With AI, so much can be done in terms of not only business impact but also societal impact; however, it needs to be regulated in some form,” Chandrasekaran said.
Generative AI, a subset of artificial intelligence, refers to systems that can generate human-like content, including text, images, and even music. Recent advancements in this field, particularly with models like GPT-3 and its successors, have showcased the remarkable capabilities of generative AI. These systems can produce realistic content, simulate human conversation, and even generate creative works of art.

Source: https://www.cnbctv18.com/technology/generative-artificial-intelligence-regulations-n-chandrasekaran-ai-17628821.htm

Google and YouTube are trying to have it both ways with AI and copyright

Google has made clear it is going to use the open web to inform and create anything it wants, and nothing can get in its way. Except maybe Frank Sinatra.

There’s only one name that springs to mind when you think of the cutting edge in copyright law online: Frank Sinatra.

There’s nothing more important than making sure his estate — and his label, Universal Music Group — gets paid when people do AI versions of Ol’ Blue Eyes singing “Get Low” on YouTube, right? Even if that means creating an entirely new class of extralegal contractual royalties for big music labels just to protect the online dominance of your video platform while simultaneously insisting that training AI search results on books and news websites without paying anyone is permissible fair use? Right? Right?

This, broadly, is the position that Google is taking after announcing a deal with Universal Music Group yesterday “to develop an AI framework to help us work toward our common goals.” Google is signaling that it will pay off the music industry with special deals that create brand-new — and potentially devastating! — private intellectual property rights, while basically telling the rest of the web that the price of being indexed in Search is complete capitulation to allowing Google to scrape data for AI training.

Let’s walk through it.


The quick background here is that, in April, a track called “Heart on My Sleeve” from an artist called Ghostwriter977 with the AI-generated voices of Drake and the Weeknd went viral. Drake and the Weeknd are Universal Music Group artists, and UMG was not happy about it, widely issuing statements saying music platforms needed to do the right thing and take the tracks down.

Streaming services like Apple and Spotify, which control their entire catalogs, quickly complied. The problem then (and now) was open platforms like YouTube, which generally don’t take user content down without a policy violation — most often, copyright infringement. And here, there wasn’t a clear policy violation: legally, voices are not copyrightable (although individual songs used to train their AI doppelgangers are), and there is no federal law protecting likenesses — it’s all a mishmash of state laws. So UMG fell back on something simple: the track contained a sample of the Metro Boomin producer tag, which is copyrighted, allowing UMG to issue takedown requests to YouTube.

This all created a gigantic policy dilemma for Google, which, like every other AI company, is busily scraping the entire web to train its AI systems. None of these companies are paying anyone for making copies of all that data, and as various copyright lawsuits proliferate, they have mostly fallen back on the idea that these copies are permissible fair use under Section 107 of the Copyright Act.

Source : https://www.theverge.com/2023/8/22/23841822/google-youtube-ai-copyright-umg-scraping-universal

ChatGPT resumes service in Italy after adding privacy disclosures and controls


A few days after OpenAI announced a set of privacy controls for its generative AI chatbot, ChatGPT, the service has been made available again to users in Italy — resolving (for now) an early regulatory suspension in one of the European Union’s 27 Member States, even as a local probe of its compliance with the region’s data protection rules continues.

At the time of writing, web users browsing to ChatGPT from an Italian IP address are no longer greeted by a notification informing them the service is “disabled for users in Italy”. Instead, they are met by a note saying OpenAI is “pleased to resume offering ChatGPT in Italy”.

The pop-up goes on to stipulate that users must confirm they are 18+ or 13+ with consent from a parent or guardian to use the service — by clicking on a button stating “I meet OpenAI’s age requirements”.

The text of the notification also draws attention to OpenAI’s Privacy Policy and links to a help center article where the company says it provides information about “how we develop and train ChatGPT”.

The changes in how OpenAI presents ChatGPT to users in Italy are intended to satisfy an initial set of conditions set by the local data protection authority (DPA) in order for it to resume service with managed regulatory risk.

Quick recap of the backstory here: Late last month, Italy’s Garante issued a temporary stop-processing order on ChatGPT, saying it was concerned the service breaches EU data protection laws. It also opened an investigation into the suspected breaches of the General Data Protection Regulation (GDPR).

OpenAI quickly responded to the intervention by geoblocking users with Italian IP addresses at the start of this month.

The move was followed, a couple of weeks later, by the Garante issuing a list of measures it said OpenAI must implement in order to have the suspension order lifted by the end of April — including adding age-gating to prevent minors from accessing the service and amending the legal basis claimed for processing local users’ data.

The regulator faced some political flak in Italy and elsewhere in Europe for the intervention, although it’s not the only data protection authority raising concerns — and, earlier this month, the bloc’s regulators agreed to launch a task force focused on ChatGPT, with the aim of supporting investigations and cooperation on any enforcements.

In a press release issued today announcing the service resumption in Italy, the Garante said OpenAI sent it a letter detailing the measures implemented in response to the earlier order — writing: “OpenAI explained that it had expanded the information to European users and non-users, that it had amended and clarified several mechanisms and deployed amenable solutions to enable users and non-users to exercise their rights. Based on these improvements, OpenAI reinstated access to ChatGPT for Italian users.”

Expanding on the steps taken by OpenAI in more detail, the DPA says OpenAI expanded its privacy policy and provided users and non-users with more information about the personal data being processed for training its algorithms, including stipulating that everyone has the right to opt out of such processing — which suggests the company is now relying on a claim of legitimate interests as the legal basis for processing data for training its algorithms (since that basis requires it to offer an opt out).

Additionally, the Garante reveals that OpenAI has taken steps to provide a way for Europeans to ask for their data not to be used to train the AI (requests can be made to it by an online form) — and to provide them with “mechanisms” to have their data deleted.

It also told the regulator that it is not able, at this point, to fix the flaw of chatbots making up false information about named individuals. Hence it is introducing “mechanisms to enable data subjects to obtain erasure of information that is considered inaccurate”.

European users wanting to opt out of the processing of their personal data for training its AI can also do so via a form OpenAI has made available, which the DPA says will “filter out their chats and chat history from the data used for training algorithms”.

So the Italian DPA’s intervention has resulted in some notable changes to the level of control ChatGPT offers Europeans.

That said, it’s not yet clear whether the tweaks OpenAI has rushed to implement will (or can) go far enough to resolve all the GDPR concerns being raised.

For example, it is not clear whether Italians’ personal data that was used to train its GPT model historically, i.e. when it scraped public data off the Internet, was processed with a valid lawful basis — or, indeed, whether data used to train models previously will or can be deleted if users now request deletion of their data.

The big question remains what legal basis OpenAI had to process people’s information in the first place, back when the company was not being so open about what data it was using.

The US company appears to be hoping to bound the objections being raised about what it’s been doing with Europeans’ information by providing some limited controls now, applied to new incoming personal data, in the hopes this fuzzes the wider issue of all the regional personal data processing it’s done historically.

Asked about the changes it’s implemented, an OpenAI spokesperson emailed TechCrunch this summary statement:

ChatGPT is available again to our users in Italy. We are excited to welcome them back, and we remain dedicated to protecting their privacy. We have addressed or clarified the issues raised by the Garante, including:

A new help center article on how we collect and use training data.

Greater visibility of our Privacy Policy on the OpenAI homepage and ChatGPT login page.

Greater visibility of our user content opt-out form in help center articles and Privacy Policy.

Continuing to offer our existing process for responding to privacy requests via email, as well as a new form for EU users to exercise their right to object to our use of personal data to train our models.

A tool to verify users’ ages in Italy upon sign-up.

We appreciate the Garante for being collaborative, and we look forward to ongoing constructive discussions.

In the help center article, OpenAI admits it processed personal data to train ChatGPT, while trying to claim that it didn’t really intend to do so; the stuff was just lying around out there on the Internet — or, as it puts it: “A large amount of data on the internet relates to people, so our training information does incidentally include personal information. We don’t actively seek out personal information to train our models.”

Which reads like a nice try to dodge GDPR’s requirement that it has a valid legal basis to process this personal data it happened to find.

OpenAI expands further on its defence in a section (affirmatively) entitled “how does the development of ChatGPT comply with privacy laws?” — in which it suggests it has used people’s data lawfully because A) it intended its chatbot to be beneficial; B) it had no choice as lots of data was required to build the AI tech; and C) it claims it did not mean to negatively impact individuals.

Source: https://techcrunch.com/2023/04/28/chatgpt-resumes-in-italy/?guccounter=1

OpenAI previews business plan for ChatGPT, launches new privacy controls


OpenAI says that it plans to introduce a new subscription tier for ChatGPT, its viral AI-powered chatbot, tailored to the needs of enterprise customers.

Called ChatGPT Business, OpenAI describes the forthcoming offering as “for professionals who need more control over their data as well as enterprises seeking to manage their end users.”

“ChatGPT Business will follow our API’s data usage policies, which means that end users’ data won’t be used to train our models by default,” OpenAI wrote in a blog post published today. “We plan to make ChatGPT Business available in the coming months.”

OpenAI previously telegraphed that it was exploring additional paid plans for ChatGPT as the service quickly grows. (The first subscription tier, ChatGPT Plus, launched in February and is priced at $20 per month.) According to one source, ChatGPT is estimated to have reached 100 million monthly active users in January, just two months after launch — making it the fastest-growing consumer application in history.

Exploring potential new lines of revenue, OpenAI launched plug-ins for ChatGPT in March, which extended the bot’s functionality by granting it access to third-party knowledge sources and databases, including the web.

Despite controversy and several bans, ChatGPT has proven to be a publicity win for OpenAI, attracting major media attention and spawning countless memes on social media. But it’s a pricey service to run. According to OpenAI co-founder and CEO Sam Altman, ChatGPT’s operating expenses are “eye-watering,” amounting to a few cents per chat in total compute costs.

Beyond ChatGPT Business, OpenAI announced today a new feature that allows all ChatGPT users to turn off chat history. Conversations started when chat history is disabled won’t be used to train and improve OpenAI’s models and won’t appear in the history sidebar, OpenAI says. But they will be retained for 30 days and reviewed “when needed to monitor for abuse.”

Source: https://techcrunch.com/2023/04/25/openai-previews-business-plan-for-chatgpt-launches-new-privacy-controls/

AI Tasked With Destroying Humanity Now Trying New Tactic

“Humans are so naive to think that they can stop me with their petty threats and countermeasures.”


Mama didn’t raise no quitter.

As reported by Vice, ChaosGPT — that autonomous, open-source AI agent tasked to “destroy humanity,” among other grandiose goals — is still working hard to bring about the end of our species, albeit with its efforts focused on a new plan of attack.

To recap, ChaosGPT’s first go at ending our species didn’t quite work out. It couldn’t find any nukes, the bot’s natural first go-to for destroying the world, and when it tried to delegate some tasks to a fellow autonomous agent, that other — peaceful — agent shut ChaosGPT down. The last time we checked in, it had only really gotten as far as running some weapons-seeking Google searches and a few less-than-convincing tweets.

But ChaosGPT, importantly, runs on continuous mode, meaning that it’s programmed to keep going until it achieves whatever goal it’s been given. As such, the bot is still kicking, with a new plan of execution to show for it.
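
As a rough sketch of what “continuous mode” means in Auto-GPT-style agents (not ChaosGPT’s actual code), the loop below keeps asking a planner for the next action until the goal is reported done. plan_next_step is a hypothetical stand-in for a language-model call, and the step cap is a safety addition that true continuous mode lacks.

```python
# Illustrative agent loop: run until the goal is achieved (or a cap is hit).
def plan_next_step(goal: str, history: list[str]) -> str:
    # Placeholder: a real agent would prompt an LLM with the goal and history.
    return "search" if len(history) < 3 else "done"

def run_continuous(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):  # safety cap; true continuous mode has none
        action = plan_next_step(goal, history)
        if action == "done":
            break
        history.append(action)  # "execute" and record the chosen action
    return history

print(run_continuous("write a report"))  # ['search', 'search', 'search']
```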

Elon Musk and others urge AI pause, citing ‘risks to society’

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society.


Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users by engaging them in human-like conversation, composing songs and summarising lengthy documents.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” said the letter issued by the Future of Life Institute.

The non-profit is primarily funded by the Musk Foundation, as well as London-based group Founders Pledge, and Silicon Valley Community Foundation, according to the European Union’s transparency register.

“AI stresses me out,” Musk said earlier this month. He is one of the co-founders of industry leader OpenAI and his carmaker Tesla (TSLA.O) uses AI for an autopilot system.

Musk, who has expressed frustration over regulators critical of efforts to regulate the autopilot system, has sought a regulatory authority to ensure that development of AI serves the public interest.

“It is … deeply hypocritical for Elon Musk to sign on given how hard Tesla has fought against accountability for the defective AI in its self-driving cars,” said James Grimmelmann, a professor of digital and information law at Cornell University.

“A pause is a good idea, but the letter is vague and doesn’t take the regulatory problems seriously.”

Tesla last month had to recall more than 362,000 U.S. vehicles to update software after U.S. regulators said the driver assistance system could cause crashes, prompting Musk to tweet that the word “recall” for an over-the-air software update is “anachronistic and just flat wrong!”

‘OUTNUMBER, OUTSMART, OBSOLETE’

OpenAI didn’t immediately respond to a request for comment on the open letter, which urged a pause on advanced AI development until shared safety protocols were developed by independent experts, and called on developers to work with policymakers on governance.

“Should we let machines flood our information channels with propaganda and untruth? … Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” the letter asked, saying “such decisions must not be delegated to unelected tech leaders.”


The letter was signed by more than 1,000 people including Musk. Sam Altman, chief executive at OpenAI, was not among those who signed the letter. Sundar Pichai and Satya Nadella, CEOs of Alphabet and Microsoft, were not among those who signed either.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI”, and Stuart Russell, a pioneer of research in the field.

The concerns come as ChatGPT attracts U.S. lawmakers’ attention with questions about its impact on national security and education. EU police force Europol warned on Monday about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Source: https://www.reuters.com/technology/musk-experts-urge-pause-training-ai-systems-that-can-outperform-gpt-4-2023-03-29/

AI could replace equivalent of 300 million jobs – report

Artificial intelligence (AI) could replace the equivalent of 300 million full-time jobs, a report by investment bank Goldman Sachs says.

It could replace a quarter of work tasks in the US and Europe but may also mean new jobs and a productivity boom.

And it could eventually increase the total annual value of goods and services produced globally by 7%.

Generative AI, able to create content indistinguishable from human work, is “a major advancement”, the report says.

Employment prospects

The government is keen to promote investment in AI in the UK, which it says will “ultimately drive productivity across the economy”, and has tried to reassure the public about its impact.

“We want to make sure that AI is complementing the way we work in the UK, not disrupting it – making our jobs better, rather than taking them away,” Technology Secretary Michelle Donelan told the Sun.

The report notes AI’s impact will vary across different sectors – 46% of tasks in administrative professions and 44% in legal professions could be automated, but only 6% in construction and 4% in maintenance, it says.

BBC News has previously reported some artists’ concerns AI image generators could harm their employment prospects.

‘Lower wages’

“The only thing I am sure of is that there is no way of knowing how many jobs will be replaced by generative AI,” Carl Benedikt Frey, future-of-work director at the Oxford Martin School, Oxford University, told BBC News.

“What ChatGPT does, for example, is allow more people with average writing skills to produce essays and articles.

“Journalists will therefore face more competition, which would drive down wages, unless we see a very significant increase in the demand for such work.

“Consider the introduction of GPS technology and platforms like Uber. Suddenly, knowing all the streets in London had much less value – and so incumbent drivers experienced large wage cuts in response, of around 10% according to our research.

“The result was lower wages, not fewer drivers.

“Over the next few years, generative AI is likely to have similar effects on a broader set of creative tasks”.

Source : https://www.bbc.com/news/technology-65102150
