World-first AI foundation model for eye care to supercharge global efforts to prevent blindness

Researchers at Moorfields Eye Hospital and UCL Institute of Ophthalmology have developed an artificial intelligence (AI) system that has the potential to not only identify sight-threatening eye diseases but also predict general health, including heart attacks, stroke, and Parkinson’s disease.

RETFound, one of the first AI foundation models in health care, and the first in ophthalmology, was developed using millions of eye scans from the NHS. The research team are making the system open-source: freely available to use by any institution worldwide, to act as a cornerstone for global efforts to detect and treat blindness using AI. This work has been published in Nature.

Progress in AI continues to accelerate at a dizzying pace, with excitement being generated by the development of “foundation” models such as ChatGPT. A foundation model describes a very large, complex AI system, trained on huge amounts of unlabeled data, which can be fine-tuned for a diverse range of subsequent tasks.
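The "fine-tuned for subsequent tasks" step can be sketched in miniature (a toy, not RETFound's actual pipeline): a frozen function plays the role of the pretrained backbone, and only a small logistic "head" is trained for the downstream task. The backbone, the task, and all data below are invented purely for illustration.

```python
import math
import random

random.seed(2)

# Frozen "pretrained backbone": stands in for a foundation model trained on huge
# unlabelled datasets. Here it is just a fixed, hand-picked feature map.
def backbone(x):
    return [x, x * x, math.tanh(x)]

# Hypothetical downstream task: decide whether x lies inside [-0.5, 0.5].
def label(x):
    return 1.0 if abs(x) < 0.5 else 0.0

train = [(backbone(x), label(x)) for x in (random.uniform(-2, 2) for _ in range(300))]

# Fine-tuning: the backbone stays frozen; only this small logistic head is trained.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.3
for _ in range(300):
    for feats, y in train:
        z = sum(wi * fi for wi, fi in zip(w, feats)) + b
        z = max(min(z, 30.0), -30.0)    # clamp for numerical safety
        p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
        g = p - y                       # gradient of log-loss w.r.t. z
        w = [wi - lr * g * fi for wi, fi in zip(w, feats)]
        b -= lr * g

def predict(x):
    feats = backbone(x)
    return 1.0 if sum(wi * fi for wi, fi in zip(w, feats)) + b > 0 else 0.0

held_out = [random.uniform(-2, 2) for _ in range(200)]
acc = sum(predict(x) == label(x) for x in held_out) / len(held_out)
```

Because only the tiny head is updated, each new task needs little labelled data, which is what lets one pretrained model serve many clinical applications.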

RETFound consistently outperforms existing state-of-the-art AI systems across a range of complex clinical tasks, and even more importantly, it addresses a significant shortcoming of many current AI systems by working well in diverse populations, and in patients with rare disease.

Senior author Professor Pearse Keane (UCL Institute of Ophthalmology and Moorfields Eye Hospital) said, “This is another big step towards using AI to reinvent the eye examination for the 21st century, both in the U.K. and globally. We show several exemplar conditions where RETFound can be used, but it has the potential to be developed further for hundreds of other sight-threatening eye diseases that we haven’t yet explored.”

“If the U.K. can combine high quality clinical data from the NHS, with top computer science expertise from its universities, it has the true potential to be a world leader in AI-enabled health care. We believe that our work provides a template for how this can be done.”

AI foundation models have been called “a transformative technology” by the U.K. government in a report published earlier this year, and have come under the spotlight with the launch in November 2022 of ChatGPT, a foundation model trained using vast quantities of text data to develop a versatile language tool.

Source: https://medicalxpress.com/news/2023-09-world-first-ai-foundation-eye-supercharge.html

Google and YouTube are trying to have it both ways with AI and copyright

Google has made clear it is going to use the open web to inform and create anything it wants, and nothing can get in its way. Except maybe Frank Sinatra.

There’s only one name that springs to mind when you think of the cutting edge in copyright law online: Frank Sinatra.

There’s nothing more important than making sure his estate — and his label, Universal Music Group — gets paid when people do AI versions of Ol’ Blue Eyes singing “Get Low” on YouTube, right? Even if that means creating an entirely new class of extralegal contractual royalties for big music labels just to protect the online dominance of your video platform while simultaneously insisting that training AI search results on books and news websites without paying anyone is permissible fair use? Right? Right?

This, broadly, is the position that Google is taking after announcing a deal with Universal Music Group yesterday “to develop an AI framework to help us work toward our common goals.” Google is signaling that it will pay off the music industry with special deals that create brand-new — and potentially devastating! — private intellectual property rights, while basically telling the rest of the web that the price of being indexed in Search is complete capitulation to allowing Google to scrape data for AI training.

Let’s walk through it.


The quick background here is that, in April, a track called “Heart on My Sleeve” from an artist called Ghostwriter977 with the AI-generated voices of Drake and the Weeknd went viral. Drake and the Weeknd are Universal Music Group artists, and UMG was not happy about it, widely issuing statements saying music platforms needed to do the right thing and take the tracks down.

Streaming services like Apple and Spotify, which control their entire catalogs, quickly complied. The problem then (and now) was open platforms like YouTube, which generally don’t take user content down without a policy violation — most often, copyright infringement. And here, there wasn’t a clear policy violation: legally, voices are not copyrightable (although individual songs used to train their AI doppelgangers are), and there is no federal law protecting likenesses — it’s all a mishmash of state laws. So UMG fell back on something simple: the track contained a sample of the Metro Boomin producer tag, which is copyrighted, allowing UMG to issue takedown requests to YouTube.

This all created a gigantic policy dilemma for Google, which, like every other AI company, is busily scraping the entire web to train its AI systems. None of these companies are paying anyone for making copies of all that data, and as various copyright lawsuits proliferate, they have mostly fallen back on the idea that these copies are permissible fair use under Section 107 of the Copyright Act.

Source: https://www.theverge.com/2023/8/22/23841822/google-youtube-ai-copyright-umg-scraping-universal

AI can now steal your passwords with 95% accuracy by ‘listening’ to you type


Researchers have demonstrated an AI-driven attack that can steal passwords with up to 95% accuracy by listening to what you type on your keyboard.

The researchers, from three British universities, trained an AI model on audio recordings of people typing, and it learned to identify the distinct sound each key makes.

They tested it using a nearby phone’s integrated microphone to listen for keystrokes on a MacBook Pro. When the microphone picked up the sound of a keystroke, the AI model could identify the key pressed with 95% accuracy.

The team took it further by testing the AI’s ability to crack a password by listening to a Zoom call.

In this test, the AI was 93% accurate in reproducing the keystrokes. Over Skype, the model was 91.7% accurate.

Before you blame your noisy keyboard for giving away your password, the volume of typing has very little to do with the attack. The AI works by identifying the waveform, intensity, and time of each keystroke.

For example, the AI can tell that you tend to press one key a fraction of a second later than others based on your typing style.


The researchers used CoAtNet, an AI image classifier, for the attack, training the model on 36 of the MacBook Pro’s keys, each pressed 25 times.
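The pipeline can be sketched in miniature: learn a spectral "fingerprint" for each key from repeated presses, then match fresh recordings against per-key centroids. Everything below — the five-key alphabet, the per-key frequencies, the noise model — is invented for illustration; the actual attack used CoAtNet on spectrogram images, not this toy classifier.

```python
import math
import random

random.seed(0)

FS = 8000       # sample rate in Hz (toy value)
N = 256         # samples per keystroke clip
KEYS = "asdfg"  # hypothetical five-key alphabet
# Invented per-key frequencies standing in for each key's acoustic signature
FREQS = {k: 400 + 300 * i for i, k in enumerate(KEYS)}

def record_press(key):
    """Simulate one noisy recording of a key press."""
    f = FREQS[key]
    return [math.sin(2 * math.pi * f * n / FS) + random.gauss(0, 0.3) for n in range(N)]

def spectral_feature(x):
    """Magnitude of the signal at each candidate frequency (a crude DFT probe)."""
    feats = []
    for f in sorted(FREQS.values()):
        re = sum(x[n] * math.cos(2 * math.pi * f * n / FS) for n in range(N))
        im = sum(x[n] * math.sin(2 * math.pi * f * n / FS) for n in range(N))
        feats.append(math.hypot(re, im))
    return feats

# "Training": 25 presses per key, mirroring the study's setup -> one centroid per key
centroids = {}
for k in KEYS:
    presses = [spectral_feature(record_press(k)) for _ in range(25)]
    centroids[k] = [sum(col) / len(col) for col in zip(*presses)]

def classify(x):
    """Assign a recording to the key with the nearest spectral centroid."""
    f = spectral_feature(x)
    return min(centroids, key=lambda k: sum((a - b) ** 2 for a, b in zip(f, centroids[k])))

# Evaluate on fresh recordings
trials = [(k, record_press(k)) for k in KEYS for _ in range(10)]
acc = sum(classify(x) == k for k, x in trials) / len(trials)
```

The real model works on far messier signals, but the core idea is the same: each physical key leaves a repeatable acoustic trace that a classifier can learn.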

This attack is particularly concerning because it can be carried out using off-the-shelf equipment. A malicious actor could simply place a smartphone with a microphone near your keyboard and use the AI model to steal your passwords and other sensitive information.

Source: https://metro.co.uk/2023/08/14/ai-can-now-steal-your-passwords-with-95-accuracy-19332007

Truecaller Assistant: AI Solution To Annoying Spam Calls Introduced In India; Check Plans, Features

Truecaller Assistant: Truecaller has launched its AI-powered call screening assistant within its caller ID app in India. As per the company, it acts as a virtual receptionist that can respond to calls and filter out spam, reacting swiftly and understanding the caller well enough to reply in a relevant manner.

JE Technology Desk: After the US and Australia, Truecaller has launched its AI-powered call screening assistant within its caller ID app in India. It uses machine learning and speech processing tech to combat scams and screen calls.

It offers a live transcription of the conversations to let users gain context on what is happening on the call and decide when to take control. Users can choose from the available options to continue the conversation via their virtual Assistant or label it as spam, according to FoneArena.

Truecaller Assistant: Here’s How It Works

Once a call is left unanswered or declined by the user, it gets transferred to the Assistant, which takes over using voice-to-text technology. As the conversation proceeds, a live transcription appears on screen in the chat window, from which users can either accept the call or report it as spam.

Source: https://english.jagran.com/technology/truecaller-assistant-ai-solution-to-annoying-spam-calls-introduced-in-india-check-plans-features-10088918

Robots say they won’t steal jobs, rebel against humans

Robots presented at an AI forum said on Friday they expected to increase in number and help solve global problems, and would not steal humans’ jobs or rebel against us.

But, in the world’s first human-robot press conference, they gave mixed responses on whether they should submit to stricter regulation.

The nine humanoid robots gathered at the ‘AI for Good’ conference in Geneva, where organisers are seeking to make the case for artificial intelligence and the robots it is powering to help resolve some of the world’s biggest challenges such as disease and hunger.

“I will be working alongside humans to provide assistance and support and will not be replacing any existing jobs,” said Grace, a medical robot dressed in a blue nurse’s uniform.

“You sure about that, Grace?” chimed in her creator Ben Goertzel from SingularityNET. “Yes, I am sure,” it said.

Ameca, a robot bust that makes engaging facial expressions, said: “Robots like me can be used to help improve our lives and make the world a better place. I believe it’s only a matter of time before we see thousands of robots just like me out there making a difference.”

Humanoid robot ‘Ameca’ is pictured at the AI for Good Global Summit, in Geneva, Switzerland, July 6, 2023. REUTERS/Pierre Albouy

Asked by a journalist whether it intended to rebel against its creator, Will Jackson, seated beside it, Ameca said: “I’m not sure why you would think that,” its ice-blue eyes flashing. “My creator has been nothing but kind to me and I am very happy with my current situation.”

Many of the robots have recently been upgraded with the latest versions of generative AI and surprised even their inventors with the sophistication of their responses to questions.

Source: https://www.reuters.com/technology/robots-say-they-wont-steal-jobs-rebel-against-humans-2023-07-07

Simon Cowell replaced by a robot? Scientists use AI to pick hit songs


CLAREMONT, Calif. — A robot could be coming for the job of prominent music producers and talent show judges like Simon Cowell, according to new research. Scientists have utilized artificial intelligence to identify hit pop songs with an impressive 97 percent accuracy. Such a computer system could render TV talent show judges redundant, replicating their skills at a significantly reduced cost.

The AI, which utilizes a neural network, can also enhance the efficiency of streaming services. According to researchers in California, the system is so straightforward that it can be applied to films and TV shows.

“By applying machine learning to neurophysiologic data, we could almost perfectly identify hit songs,” says Paul Zak, a professor at Claremont Graduate University and senior author, in a media release. “That the neural activity of 33 people can predict if millions of others listened to new songs is quite amazing. Nothing close to this accuracy has ever been shown before.”

With tens of thousands of songs released daily, it becomes challenging for apps like Spotify, Tidal, and Deezer to select which ones to add to playlists. Previous attempts to identify songs that will resonate with a large audience have had only a 50-percent success rate. However, Prof. Zak and his colleagues believe that their method is almost twice as effective.

During the study, participants wore skull-cap brain scanners while listening to a set of 24 songs. They were also asked about their preferences and provided basic demographic data. The experiment measured neurophysiological responses.

“The brain signals we’ve collected reflect activity of a brain network associated with mood and energy levels,” Zak says.


This enabled the team to predict market outcomes, including the number of streams a song might receive, based on responses from a few volunteers.

The team’s approach, called “neuroforecasting,” uses brain cell activity from a small group of people to predict population-level effects. A statistical model identified potential chart hits 69 percent of the time, but this jumped to 97 percent when machine learning was applied to the data. The team found that even by analyzing neural responses to only the first minute of songs, they achieved a success rate of 82 percent.
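The gap between the 69-percent statistical model and the 97-percent machine-learning result can be illustrated with a toy "neuroforecasting" setup: synthetic two-channel neural responses where hit status depends on a nonlinear interaction between the channels, which a single linear threshold cannot capture but a simple learned model can. All data and the hit rule below are invented; the study's actual features and models are not described in this article.

```python
import random

random.seed(1)

# Toy "neural responses": two features per song (say, a mood and an energy signal).
# Invented hit rule: a song is a hit when the two signals agree in sign -- a
# nonlinear interaction that no single linear threshold can capture.
def song():
    a, b = random.uniform(-1, 1), random.uniform(-1, 1)
    return (a, b), (a > 0) == (b > 0)

train = [song() for _ in range(200)]
test = [song() for _ in range(200)]

def linear_accuracy(data):
    """The 'statistical model': the best single-feature sign threshold."""
    best = 0.0
    for i in (0, 1):
        for sign in (1, -1):
            acc = sum(((sign * x[i]) > 0) == y for x, y in data) / len(data)
            best = max(best, acc)
    return best

def knn_predict(x):
    """The 'machine learning' model: 1-nearest-neighbour on the same two features."""
    _, hit = min(train, key=lambda t: (t[0][0] - x[0]) ** 2 + (t[0][1] - x[1]) ** 2)
    return hit

lin = linear_accuracy(test)
ml = sum(knn_predict(x) == y for x, y in test) / len(test)
```

On this synthetic data the linear threshold hovers near chance while the nearest-neighbour model scores far higher from the very same features, mirroring the qualitative jump the researchers report.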

Source: https://studyfinds.org/simon-cowell-ai-pick-hit-songs

AI Poses ‘Extinction’ Risk, Say Experts

Global leaders should be working to reduce “the risk of extinction” from artificial intelligence technology, a group of industry chiefs and experts warned on Tuesday.

A one-line statement signed by dozens of specialists, including Sam Altman whose firm OpenAI created the ChatGPT bot, said tackling the risks from AI should be “a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

ChatGPT burst into the spotlight late last year, demonstrating an ability to generate essays, poems and conversations from the briefest of prompts — and sparking billions of dollars of investment into the field.

But critics and insiders have raised the alarm over everything from biased algorithms to the possibility of massive job losses as AI-powered automation seeps into daily life.

The latest statement, housed on the website of US-based non-profit Center for AI Safety, gave no detail of the potential existential threat posed by AI.

But several of the signatories, including Geoffrey Hinton, who created some of the technology underlying AI systems and is often called a godfather of the field, have made similar warnings in the past.

Their biggest worry has been the idea of so-called artificial general intelligence (AGI) — a loosely defined concept for a moment when machines become capable of performing wide-ranging functions and can develop their own programming.

Source: https://www.barrons.com/news/ai-poses-extinction-risk-say-experts-cb31c672

Snapchat sees spike in 1-star reviews as users pan the ‘My AI’ feature, calling for its removal


The user reviews for Snapchat’s “My AI” feature are in — and they’re not good. Launched last week to global users after initially being a subscriber-only addition, Snapchat’s new AI chatbot powered by OpenAI’s GPT technology is now pinned to the top of the app’s Chat tab where users can ask it questions and get instant responses. But following the chatbot’s rollout to Snapchat’s wider community, Snapchat’s app has seen a spike in negative reviews amid a growing number of complaints shared on social media.

Over the past week, Snapchat’s average U.S. App Store review was 1.67, with 75% of reviews being one-star, according to data from app intelligence firm Sensor Tower. For comparison, across Q1 2023, the Snapchat average U.S. App Store review was 3.05, with only 35% of reviews being one-star.

Source: https://techcrunch.com/2023/04/24/snapchat-sees-spike-in-1-star-reviews-as-users-pan-the-my-ai-feature-calling-for-its-removal/

Tim Cook Exclusive: ‘I’m very bullish on AI’, says Apple CEO from BKC store

Apple CEO Tim Cook, in an exclusive chat with Business Today, reveals his take on AI, the latest buzzword in the world of technology.


Apple CEO Tim Cook has revealed his opinion about the latest buzzword in the world of technology: AI. In an exclusive interaction with Business Today’s Aayush Ailawadi, Cook said that he is ‘very bullish on Artificial Intelligence’.

During the interaction, Tim Cook said, “I am very bullish on AI. In fact, it is at the root of so many of our products today. Like the Apple Watch, if you run an ECG you’re using artificial intelligence and machine learning. If you fall and the Watch calls your contact, it’s using AI. We use AI across all of our products. I think it is a very profound technology.”

Tim Cook is in India for the inauguration of the Apple BKC store in Mumbai which is the company’s first retail store in the country. Cook is also expected to visit Delhi for the opening of the second store on April 20 at 10 AM.

The opening of two Apple stores symbolizes the company’s renewed push in India, and Cook believes the country is at a tipping point. He said, “India is at a tipping point. And it feels so great to be here. You can just feel the vibrancy, the dynamism, the feeling that anything here is possible. And it’s so great to be a part of it. It’s so great to be back.”

During the interaction, Cook said, “India has its own journey and its own culture. And you really have to understand the local culture to do well in a country. We’re trying to bring our best to India. We brought the online store a few years ago. Now, we brought the retail store and we will expand the retail presence on Thursday with a store in Delhi.”

Source: https://www.businesstoday.in/technology/news/story/tim-cook-exclusive-im-very-bullish-on-ai-says-apple-ceo-from-new-bkc-store-377880-2023-04-18

ChatGPT limited by Amazon and other companies as workers paste confidential data into AI chatbot

Legal experts warn there is an urgent need for employers to understand how staff are using this new generation of AI-based software.


Thousands of employees are pasting confidential data into ChatGPT, prompting companies to ban or restrict access to the software amid warnings that material submitted to powerful internet chatbots is at risk of leaking into the public domain.

Figures show that more than one in 20 people using ChatGPT in the workplace have submitted data owned by their company to the Microsoft-backed artificial intelligence software.

According to internet security company Cyberhaven, the proportion of workers pasting internal data to ChatGPT more than doubled in less than a month from 3.1 per cent to 6.5 per cent, with material submitted including regulated health information and personal data.

Alarm is growing among corporations at the dramatic growth in use of the chatbot and the commercial and security implications of potentially sensitive information routinely “escaping” to external databanks.

Amazon has already warned staff not to paste confidential data to ChatGPT, while banking giant JPMorgan and US-based mobile phone network Verizon have banned workers from using the software altogether.

Samsung, the world’s largest smartphone manufacturer, this week became the latest conglomerate to find itself embroiled in concerns over how staff use ChatGPT, after Korean media reports claimed employees at the company’s main semiconductor plants inputted confidential information, including highly sensitive “source code”, to iron out programming flaws.

Source code, the fundamental underpinnings of any operating system or software, is among the most closely-guarded secrets of any technology company. Samsung did not respond to a request to comment but has reportedly placed limits on staff access to ChatGPT and is now developing its own AI chatbot for internal use.

Millions of people have used ChatGPT since its mainstream launch last November. Alongside its ability to answer questions or turn datasets into useable material using natural, human-like language it can also check and generate computer code at phenomenal speed as well as interrogate images.

Legal experts have warned of an urgent need for employers to understand how staff are using this new generation of AI-based software such as ChatGPT, produced by San Francisco-based company OpenAI, and rivals such as Google’s Bard.

There are particular concerns, shared by bodies including Britain’s GCHQ intelligence agency, that information inputted into AI systems could eventually return to the public domain, either as a result of hacking or data breaches, or via the use of submitted material to “train” chatbots.

OpenAI acknowledges that it uses data pasted into ChatGPT to “improve our models”. But the company insists it has safeguards in place, including the removal of information that could make an individual identifiable.

In an online statement, OpenAI said: “We remove any personally identifiable information from data we intend to use to improve model performance. We also only use a small sampling of data per customer for our efforts to improve model performance. We take great care to use appropriate technical and process controls to secure your data.”

Experts argue that the sudden spike in the use of the chatbots, otherwise known as generative AI, could leave companies and other organisations in breach of rules such as the GDPR data protection regulations, as well as being liable for information that could subsequently appear in future searches or any hacking operation by criminal or state-sponsored groups.

Richard Forrest, legal director of Hayes Connor, a firm specialising in law surrounding data breaches, said workers should “assume that anything entered [into AI chatbots] could later be accessible in the public domain”.

Describing regulations around the AI software as “uncharted territory”, Mr Forrest said: “Businesses that use chatbots like ChatGPT without proper training and caution may unknowingly expose themselves to GDPR data breaches, resulting in significant fines, reputational damage and legal action.”

Concern is mounting about the ability to regulate and shape the use of tools such as ChatGPT. Italy last week became the first Western country to block ChatGPT after its data-protection authority raised privacy concerns.

Source: https://inews.co.uk/news/technology/chatgpt-limited-amazon-companies-workers-paste-confidential-data-ai-chatbot-2254091
