A new computer model visualizes material in the lower mantle that cannot come from subducted plates. (Credit: Sebastian Noe / ETH Zurich)
Miles beneath the Pacific Ocean, in a region of Earth’s mantle where conventional wisdom says nothing unusual should exist, scientists have discovered something extraordinary. Using innovative technology to analyze seismic waves, researchers have identified massive structures that challenge fundamental theories about how our planet formed and evolved. It’s as if we’ve discovered a new geological continent – not on Earth’s surface, but deep within it.
Just as doctors use ultrasound waves to peer inside the human body without surgery, geophysicists employ seismic waves from earthquakes to study Earth’s deep interior. When earthquakes occur, they send waves in all directions through the planet. These waves travel at different speeds depending on the materials they encounter, getting bent, bounced, and scattered along the way. By recording these waves at seismic stations worldwide, scientists can create images of structures deep within Earth, much like creating a medical scan of our planet.
For decades, this technique revealed fast-moving wave patterns primarily beneath areas where tectonic plates collide and one plate dives beneath another – a process called subduction. These patterns were thought to be the remains of ancient tectonic plates that had sunk into Earth’s mantle, the layer between the crust and core. However, the earth-shattering new study, published in Scientific Reports, has uncovered something unexpected.
Using one of the world’s most powerful supercomputers, the Piz Daint at CSCS in Lugano, researchers from ETH Zurich and the California Institute of Technology have discovered similar wave patterns in places where they shouldn’t exist – beneath vast oceans and continental interiors, far from any known plate boundaries. “Apparently, such zones in the Earth’s mantle are much more widespread than previously thought,” says Thomas Schouten, the study’s lead author and doctoral student at ETH Zurich’s Geological Institute, in a statement.
The key to this discovery lies in a sophisticated technique called full-waveform inversion (FWI). Unlike traditional methods that analyze only specific types of seismic waves, FWI examines entire seismograms, capturing a more complete picture of Earth’s interior. This comprehensive approach requires enormous computational power but provides unprecedented detail.
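The core idea behind full-waveform inversion can be sketched in a few lines: iteratively adjust a model of the subsurface until synthetic waveforms match the recorded ones, comparing entire waveforms rather than just picked arrival times. The toy example below is a rough illustration of that idea, not the researchers' actual method; real FWI solves the 3D wave equation on supercomputers, while here a single wave-speed parameter shifts a synthetic pulse.

```python
import numpy as np

def synthetic_waveform(velocity, t):
    # A toy "seismogram": a Gaussian pulse whose arrival time depends on
    # the assumed wave speed (here, a fixed 100 km path / velocity).
    arrival = 100.0 / velocity
    return np.exp(-0.5 * ((t - arrival) / 0.5) ** 2)

def misfit(velocity, t, observed):
    # FWI's defining trait: the misfit compares the ENTIRE waveform,
    # not just a handful of picked arrival times.
    return np.sum((synthetic_waveform(velocity, t) - observed) ** 2)

t = np.linspace(0, 50, 2000)
observed = synthetic_waveform(8.0, t)   # "true" wave speed: 8 km/s

# Simple derivative-free search over candidate velocities (real FWI uses
# gradient-based optimization over millions of model parameters).
best_v = min(np.linspace(5.0, 12.0, 701), key=lambda v: misfit(v, t, observed))
print(round(best_v, 2))  # recovers a velocity near 8.0
```

The search recovers the true wave speed because only the correct model reproduces the full recorded waveform, which is what gives FWI its extra resolving power over traditional travel-time methods.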
The most striking finding emerged beneath the western Pacific Ocean, where researchers identified a massive anomaly between 900 and 1,200 kilometers depth. According to current plate tectonic theories, this material couldn’t have come from subducted plates because the region has no recent history of subduction zones.
ETH professor Andreas Fichtner, who developed the computer model, draws a medical parallel: “It’s like a doctor who has been examining blood circulation with ultrasound for decades and finds arteries exactly where he expects them. Then if you give him a new, better examination tool, he suddenly sees an artery in the buttock that doesn’t really belong there. That’s exactly how we feel about the new findings.”
The discovery suggests these deep Earth structures might have diverse origins, Schouten explains. They could be ancient silica-rich material that has survived since the mantle’s formation about 4 billion years ago, despite continuous churning movements. Alternatively, they might be zones where iron-rich rocks have accumulated over billions of years due to these mantle movements.
The research team emphasizes that current models only show wave speed patterns, which alone cannot fully explain Earth’s complex interior. Future research will need to delve deeper into the material properties creating these patterns, requiring even more sophisticated models and computational power.
ISRO’s SpaDeX satellites, launched on December 30, 2024, are testing space docking technology in low-Earth orbit.
The Space Docking Experiment (SpaDeX) satellites, which ISRO had planned to dock early Thursday, faced an unexpected issue late Wednesday, forcing the agency to postpone the procedure for the second time in just three days. “While making a maneuver to reach 225m between satellites, the drift was found to be more than expected, post non-visibility period. The planned docking for tomorrow (January 9) is postponed. Satellites are safe,” ISRO announced at around 9 pm on Wednesday.
The drift maneuver for the chaser satellite — one of the two satellites designated as chaser and target — began at 8:05 pm. Since their launch on December 30, ISRO has been meticulously preparing for the docking, which involves a series of complex steps. Each stage has been closely monitored from the ground, with clearance required before moving to the next phase.
This latest delay comes just days after ISRO had rescheduled the first docking attempt. On January 6, the agency identified a need for further ground simulation validations based on an abort scenario and moved the docking attempt to January 9.
Meta-owned WhatsApp is reportedly planning to add a new feature that allows users to attach photos to polls. By adding the ability to attach photos to poll options, voters will have a visual representation of each choice, making it easier to understand and evaluate the options before casting their vote.
This feature could be helpful when text alone isn’t enough and visuals add clarity. According to the report from WABetaInfo, a website that tracks WhatsApp, channel owners will be able to add a photo to each poll option. For example, channels about design, travel, or food can use images for poll choices, making it easier for followers to decide.
“It appears WhatsApp is continuing to enhance the polls feature by developing new options to further boost engagement. Thanks to the latest WhatsApp beta for Android 2.25.1.17 update, which is available on the Google Play Store, we discovered that WhatsApp is working on a feature to assign photos to poll options in channels,” the report said.
The report also revealed that if you add a photo to one poll option, you’ll need to add photos to all the other options too. This keeps the poll consistent and ensures every choice is shown equally. It also avoids confusion by presenting all options in the same format, making it easier for voters to compare them.
Initially, the ability to assign photos to poll options will be exclusive to channels, likely to enable WhatsApp to refine the feature within a controlled environment before expanding it to group chats and individual conversations in the future.
Social media company Meta Platforms (META.O) on Tuesday scrapped its U.S. fact-checking program and reduced curbs on discussions around contentious topics such as immigration and gender identity, bowing to criticism from conservatives as President-elect Donald Trump prepares to take office for a second time.
The move is Meta’s biggest overhaul of its approach to managing political content on its services in recent memory and comes as CEO Mark Zuckerberg has been signaling a desire to mend fences with the incoming administration.
The changes will affect Facebook, Instagram and Threads, three of the world’s biggest social media platforms with more than 3 billion users globally.
Last week, Meta elevated Republican policy executive Joel Kaplan as global affairs head and on Monday announced it had elected Dana White, CEO of Ultimate Fighting Championship and a close friend of Trump, to its board.
“We’ve reached a point where it’s just too many mistakes and too much censorship. It’s time to get back to our roots around free expression,” Zuckerberg said in a video.
He acknowledged the role of the recent U.S. elections in his thinking, saying they “feel like a cultural tipping point, towards once again prioritizing speech.”
When asked about the changes at a press conference, Trump welcomed them. “They have come a long way – Meta. The man (Zuckerberg) was very impressive,” he said.
Asked if he thought Zuckerberg was responding to his threats, which have included a pledge to imprison the CEO, Trump said “probably.”
In place of a formal fact-checking program to address dubious claims posted on Meta’s platforms, Zuckerberg instead plans to implement a system of “community notes” similar to that used on Elon Musk-owned social media platform X.
Meta also will stop proactively scanning for hate speech and other types of rule-breaking, reviewing such posts only in response to user reports, Zuckerberg said. It will focus its automated systems on removing “high-severity violations” like terrorism, child exploitation, scams and drugs.
The company will move teams overseeing the writing and review of content policies out of California to Texas and other U.S. locations, he added.
Meta has been working on the shift away from fact-checking for more than a year, a source familiar with the discussions told Reuters.
It has not shared relocation plans with employees, however, prompting confused posts on the app Blind, which provides a space for employees to share information anonymously.
Most of Meta’s U.S. content moderation is already performed outside California, another source told Reuters.
Kaplan, who appeared on the “Fox & Friends” program on Tuesday morning to address the changes, offered Meta employees only a summary of his public statements in a post on the company’s internal forum Workplace, which was seen by Reuters.
A Meta spokesperson declined to comment on planning for the changes or say which specific teams would be leaving California. The spokesperson also declined to cite examples of mistakes or bias on the part of fact-checkers.
Meta CEO Mark Zuckerberg makes a keynote speech during the Meta Connect annual event, at the company’s headquarters in Menlo Park, California, U.S. September 25, 2024. REUTERS/Manuel Orbegozo/File Photo
CAUGHT BY SURPRISE
The demise of the fact-checking program, started in 2016, caught partner organizations by surprise.
“We’ve learned the news as everyone has today. It’s a hard hit for the fact-checking community and journalism. We’re assessing the situation,” AFP said in a statement provided to Reuters.
The head of the International Fact-Checking Network, Angie Drobnic Holan, challenged Zuckerberg’s characterization of its members as biased or censorious.
“Fact-checking journalism has never censored or removed posts; it’s added information and context to controversial claims, and it’s debunked hoax content and conspiracies,” she said in a statement.
Kristin Roberts, Gannett Media’s chief content officer, said “truth and facts serve everyone — not the right or the left — and that’s what we will continue to deliver.”
Other partners did not immediately respond to requests for comment, while Reuters declined to comment. Meta’s independent Oversight Board welcomed the move.
Zuckerberg in recent months has expressed regret over certain content moderation actions on topics including COVID-19. Meta also donated $1 million to Trump’s inaugural fund, in a departure from its past practice.
“This is a major step back for content moderation at a time when disinformation and harmful content are evolving faster than ever,” said Ross Burley, co-founder of the nonprofit Centre for Information Resilience.
“This move seems more about political appeasement than smart policy.”
For now, Meta is planning the changes only for the U.S. market and has no immediate plans to end its fact-checking program in places such as the European Union, which takes a more active approach to regulating tech companies, a spokesperson said.
Musk’s X is already under European Commission investigation over issues including the “Community Notes” system.
The Commission began its probe in December 2023, several months after X launched the feature. A Commission spokesperson said it had taken note of Meta’s announcement and was continuing to monitor the company’s compliance in the EU.
An equal collaboration between NASA and the Indian Space Research Organisation, NISAR will offer unprecedented insights into Earth’s constantly changing land and ice surfaces using synthetic aperture radar technology. The spacecraft, depicted here in an artist’s concept, will launch from India. NASA/JPL-Caltech
A Q&A with the lead U.S. scientist of the mission, which will track changes in everything from wetlands to ice sheets to infrastructure damaged by natural disasters.
The upcoming U.S.-India NISAR (NASA-ISRO Synthetic Aperture Radar) mission will observe Earth like no mission before, offering insights about our planet’s ever-changing surface.
The NISAR mission is a first-of-a-kind dual-band radar satellite that will measure land deformation from earthquakes, landslides, and volcanoes, producing data for science and disaster response. It will track how much glaciers and ice sheets are advancing or retreating, and it will monitor growth and loss of forests and wetlands for insights into the global carbon cycle.
As diverse as NISAR’s impact will be, the mission’s winding path to launch — in a few months’ time — has also been remarkable. Paul Rosen, NISAR’s project scientist at NASA’s Jet Propulsion Laboratory in Southern California, has been there at every step. He recently discussed the mission and what sets it apart.
How will NISAR improve our understanding of Earth?
The planet’s surfaces never stop changing — in some ways small and subtle, and in other ways monumental and sudden. With NISAR, we’ll measure that change roughly every week, with each pixel capturing an area about half the size of a tennis court. Taking imagery of nearly all Earth’s land and ice surfaces this frequently and at such a small scale — down to the centimeter — will help us put the pieces together into one coherent picture to create a story about the planet as a living system.
What sets NISAR apart from other Earth missions?
NISAR will be the first Earth-observing satellite with two kinds of radar — an L-band system with a 10-inch (25-centimeter) wavelength and an S-band system with a 4-inch (10-centimeter) wavelength.
Whether microwaves reflect off or penetrate an object depends on their wavelength. Shorter wavelengths are more sensitive to smaller objects such as leaves and rough surfaces, whereas longer wavelengths respond more strongly to larger structures like boulders and tree trunks.
So NISAR’s two radar signals will react differently to some features on Earth’s surface. By taking advantage of what each signal is or isn’t sensitive to, researchers can study a broader range of features than they could with either radar on its own, observing the same features with different wavelengths.
Is this new technology?
The concept of a spaceborne synthetic aperture radar, or SAR, studying Earth’s processes dates to the 1970s, when NASA launched Seasat. Though the mission lasted only a few months, it produced first-of-a-kind images that changed the remote-sensing landscape for decades to come.
It also drew me to JPL in 1981 as a college student: I spent two summers analyzing data from the mission. Seasat led to NASA’s Shuttle Imaging Radar program and later to the Shuttle Radar Topography Mission.
What will happen to the data from the mission?
Our data products will fit the needs of users across the mission’s science focus areas — ecosystems, cryosphere, and solid Earth — and will have many uses beyond basic research, such as soil-moisture and water-resources monitoring.
We’ll make the data easily accessible. Given the volume of the data, NASA decided that it would be processed and stored in the cloud, where it’ll be free to access.
How did the ISRO partnership come about?
We proposed DESDynI (Deformation, Ecosystem Structure, and Dynamics of Ice), an L-band satellite, following the 2007 Decadal Survey by the National Academy of Sciences. At the time, ISRO was exploring launching an S-band satellite. The two science teams proposed a dual-band mission, and in 2014 NASA and ISRO agreed to partner on NISAR.
Since then, the agencies have been collaborating across more than 9,000 miles (14,500 kilometers) and 13 time zones. Hardware was built on different continents before being assembled in India to complete the satellite. It’s been a long journey — literally.
More About NISAR
The NISAR mission is an equal collaboration between NASA and ISRO and marks the first time the two agencies have cooperated on hardware development for an Earth-observing mission. Managed for the agency by Caltech, JPL leads the U.S. component of the project and is providing the mission’s L-band SAR. NASA is also providing the radar reflector antenna, the deployable boom, a high-rate communication subsystem for science data, GPS receivers, a solid-state recorder, and payload data subsystem.
An image of a dense, star-rich portion of our galaxy, the Milky Way, taken by the Hubble Space Telescope. (Credit: NASA/ESA/Hubble Heritage Team)
The carbon atoms in your body have likely traveled much farther than you have, possibly hundreds of thousands of light-years into space and back. A new study reveals how galaxies, including our own Milky Way, operate vast recycling systems that send star-forged elements like carbon on epic journeys through space before they eventually become part of planets, and even living things.
Life as we know it depends entirely on elements created inside stars. Nearly all atoms heavier than helium — including the carbon in our DNA, the oxygen we breathe, and the iron in our blood — were forged in stellar furnaces and scattered across space when those stars died. But rather than drifting aimlessly through space, these life-essential elements appear to travel on massive conveyor belt-like currents extending far beyond their galaxies of origin.
Using the Hubble Space Telescope’s Cosmic Origins Spectrograph, lead author Samantha Garza’s international research team studied this galactic recycling system — known as the circumgalactic medium (CGM) — by examining how light from distant quasars was affected by the gas surrounding closer galaxies. They specifically tracked triply-ionized carbon, a form of carbon that has lost three electrons, serving as an important marker of the CGM’s composition and conditions.
“Think of the circumgalactic medium as a giant train station: It is constantly pushing material out and pulling it back in,” explains Garza, a University of Washington doctoral candidate, in a statement. “The heavy elements that stars make get pushed out of their host galaxy and into the circumgalactic medium through their explosive supernovae deaths, where they can eventually get pulled back in and continue the cycle of star and planet formation.”
The research, published in The Astrophysical Journal Letters, revealed a striking difference between active star-forming galaxies and their quieter counterparts. Among galaxies still actively forming stars, 72% showed significant amounts of carbon in their surrounding halos. In contrast, only 23% of passive galaxies — those that had largely stopped forming stars — displayed similar carbon signatures. In some cases, researchers detected carbon extending almost 400,000 light-years into space, four times the diameter of our own galaxy.
“The implications for galaxy evolution, and for the nature of the reservoir of carbon available to galaxies for forming new stars, are exciting,” says Jessica Werk, UW professor and chair of the Department of Astronomy. “The same carbon in our bodies most likely spent a significant amount of time outside of the galaxy!”
This pattern mirrors a similar phenomenon previously discovered for another element, oxygen, suggesting that the relationship between a galaxy’s star formation activity and its recycling system is fundamental to galactic evolution. The presence or absence of these highly ionized elements provides crucial clues about how galaxies maintain their ability to form new stars — and eventually planets that could support life.
Understanding this cosmic recycling system could help explain why galaxies eventually cease forming stars. If the cycle of pushing material out and pulling it back in slows down or breaks down, a galaxy may lose its fuel source for creating new stars.
The team calculated that these galactic halos contain massive amounts of carbon — at least 3 million times the mass of our Sun. This substantial reservoir exists within a radius of about 120,000 light-years from each galaxy’s center, highlighting the vast scale of these galactic recycling systems and their potential role in seeding the universe with the building blocks of life.
At the Norwegian University of Science and Technology (NTNU) in Gjøvik, researchers are combining sensors with antenna technology to be able to recognize different smells. (Photo credit: Mads Wang-Svendsen)
Imagine a device that could sniff out mechanical damage in apples before bruising appears, detect diseases through a patient’s breath, monitor food freshness in real-time across entire supply chains, and identify hazardous gases in industrial settings — all using technology similar to what’s already in your smartphone. Scientists at the Norwegian University of Science and Technology (NTNU) have developed just such a device: a revolutionary electronic nose that achieves with a single sensor what typically requires hundreds.
This breakthrough, dubbed the “Ant-nose,” could transform how we monitor everything from food safety to environmental hazards, while being significantly simpler and less expensive than existing systems. What makes this development particularly remarkable is that it leverages familiar antenna technology — the same basic principle that helps our phones and computers communicate — to create an artificial sense of smell that in some ways surpasses both human and canine olfactory abilities.
Researchers believe the Ant-nose could match or exceed both human and canine olfactory abilities, using technology that’s already present in our homes. Their findings are published in the journal Sensors and Actuators: B. Chemical.
“We are literally surrounded by technology that communicates using antenna technology,” says Michael Cheffena, professor of telecommunications at NTNU, in a statement. The ubiquity of antennas in our everyday devices, from mobile phones to computers and TVs, creates an existing infrastructure that could be leveraged for this new sensing technology.
Traditional electronic noses, or e-noses, were inspired by how mammals smell. They usually need arrays of different sensors, sometimes hundreds of them, each coated with distinct materials to detect various gases. “Other electronic noses can have several hundred sensors, often each coated with different materials,” Cheffena explains. “This makes them both very power-intensive to operate and expensive to manufacture. They also entail high material consumption. In contrast, the antenna sensor consists of only one antenna with one type of coating.”
The Ant-nose works by transmitting radio signals at various frequencies and analyzing how they’re reflected back. These reflections create unique patterns based on the gases present, similar to chemical fingerprints. The device can detect volatile organic compounds (VOCs), gases that readily evaporate at low temperatures. These compounds are present throughout our environment — from the pleasant scent of freshly cut grass (which plants emit for protection and communication) to gasoline fumes.
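As a rough illustration of the fingerprint idea, the sketch below treats each gas as a vector of reflection strengths across frequencies and matches a noisy measurement to the nearest reference pattern. The gas names and all the numbers are invented for the example; the actual device's signal processing is more sophisticated than a nearest-neighbor lookup.

```python
import numpy as np

# Hypothetical reference "fingerprints": reflection strength measured at
# five radio frequencies for three gases (values are illustrative only).
reference = {
    "ethanol": np.array([0.82, 0.55, 0.31, 0.64, 0.90]),
    "acetone": np.array([0.40, 0.71, 0.88, 0.35, 0.52]),
    "toluene": np.array([0.15, 0.25, 0.60, 0.80, 0.45]),
}

def identify(measured):
    # Return the reference gas whose fingerprint is closest to the
    # measurement (Euclidean distance across all frequencies).
    return min(reference, key=lambda g: np.linalg.norm(reference[g] - measured))

# A noisy measurement of what is actually ethanol:
sample = np.array([0.80, 0.57, 0.30, 0.66, 0.88])
print(identify(sample))  # → ethanol
```

Because each compound perturbs the reflected signal differently at each frequency, even closely related molecules such as isomers can produce distinguishable patterns, which is what the reported 96.7% accuracy figure measures.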
One of the device’s notable capabilities is its ability to distinguish between isomers – chemical compounds that Yu Dang, the study’s lead author, describes as being “a bit like twins: very similar, yet not identical.” The Ant-nose demonstrates remarkable accuracy in differentiating these molecularly similar compounds, achieving a 96.7% accuracy rate in distinguishing between six different VOCs, including pairs of isomers.
The research suggests several potential applications across industries. The Ant-nose could potentially assist in food quality monitoring, industrial safety, and environmental protection. Its ability to maintain stable communication while sensing makes it particularly interesting for integration into existing sensor networks.
In laboratory tests, researchers demonstrated the Ant-nose’s practical utility. They used it to assess apple damage by monitoring chemical emissions after applying pressure similar to what fruit might experience during shipping. The device successfully distinguished between damaged and undamaged apples, suggesting potential applications in food transport monitoring.
The team expanded their testing to evaluate food freshness, examining strawberries, grapes, and pork samples. The device proved capable of detecting the chemical changes that occur as food ages, successfully differentiating between fresh items and those stored for five days.
The researchers envision future medical applications for this technology. “Volatile organic compounds enable trained dogs to detect health-threatening changes in blood sugar and diseases like cancer, so the principle is largely the same,” says Dang. Unlike detection dogs, which require months of specialized training, the Ant-nose could potentially offer a more accessible solution for disease detection, though this application requires further research and validation.
The draft Digital Personal Data Protection Rules, 2025, published by the Union Ministry of Electronics and Information Technology (MeitY) on January 3, introduce measures aimed at protecting the personal data of children.
These draft rules are part of a broader legislative framework established by the Digital Personal Data Protection Act, 2023, which was cleared by Parliament in August 2023.
The government has sought objections and suggestions from stakeholders on the rules by February 18, 2025.
Child protection measures
Under the draft rules, social media platforms and other online services will need to obtain verifiable parental consent before processing the personal data of children. This means that parents will need to explicitly agree to their child’s data being collected and used by the service.
The draft rules also specify that data fiduciaries (organisations that collect and store personal data) will need to take steps to verify the identity of the person claiming to be a child’s guardian. This could involve checking government-issued ID or using digital tokens linked to identity services.
For instance, if a child wishes to create an online account, the data fiduciary must enable the parent to identify themselves through secure means before processing the child’s data.
The following illustration is provided in the draft rules:
“C is a child, P is her parent, and DF is a Data Fiduciary. A user account of C is sought to be created on the online platform of DF, by processing the personal data of C.
Case 1: C informs DF that she is a child. DF shall enable C’s parent to identify herself through its website, app or other appropriate means. P identifies herself as the parent and informs DF that she is a registered user on DF’s platform and has previously made available her identity and age details to DF. Before processing C’s personal data for the creation of her user account, DF shall check to confirm that it holds reliable identity and age details of P.”
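The Case 1 flow amounts to a simple check before any of the child's data is processed, sketched below with hypothetical record and field names (the draft rules prescribe the outcome, not any particular implementation):

```python
# Illustrative sketch of Case 1: before creating the child's account, the
# Data Fiduciary (DF) confirms it already holds reliable identity and age
# details for the person claiming to be the parent. The store and field
# names are invented for this example, not taken from the rules.

registered_users = {
    "P": {"age": 41, "identity_verified": True},  # parent, an existing DF user
}

def may_process_child_data(claimed_parent_id):
    """Return True only if DF holds reliable details showing the claimed
    parent is a verified adult; otherwise processing must not proceed."""
    record = registered_users.get(claimed_parent_id)
    if record is None:
        return False  # DF holds no details: parent must verify identity first
    return record["identity_verified"] and record["age"] >= 18

print(may_process_child_data("P"))        # True: account creation may proceed
print(may_process_child_data("unknown"))  # False: consent cannot be verified
```

Other cases in the draft rules cover parents who are not already registered users, which would require identity checks against government-issued ID or digital tokens before the same gate can open.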
Processing of personal data by State
The rules allow State entities to process personal data when providing subsidies, benefits, or services. This provision is aimed at ensuring that such processing aligns with established standards and safeguards, reinforcing accountability in public sector data handling.
Security measures
To protect personal data from breaches, data fiduciaries are required to implement reasonable security safeguards. These measures include:
Encrypting and securing personal data;
Controlling access to computer resources used for processing;
Maintaining logs and monitoring access to detect unauthorised use.
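As a loose illustration of the last two safeguards, the sketch below gates access to records by role and logs every attempt so unauthorised use can be detected later. The roles, field names, and hashing choice are assumptions made for the example, not requirements of the rules.

```python
import datetime
import hashlib

# Hypothetical access-control table: which users may read personal data.
ACCESS_CONTROL = {"alice": {"read"}, "bob": set()}
access_log = []

def read_record(user, record_id):
    allowed = "read" in ACCESS_CONTROL.get(user, set())
    # Log every attempt, allowed or not; hash the record ID so the log
    # itself does not leak the identifier it protects.
    access_log.append({
        "user": user,
        "record": hashlib.sha256(record_id.encode()).hexdigest()[:12],
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{user} may not read records")
    return {"id": record_id}  # stand-in for the actual personal data

read_record("alice", "user-123")        # permitted access, logged
try:
    read_record("bob", "user-123")      # denied access, also logged
except PermissionError:
    pass
print([entry["allowed"] for entry in access_log])  # → [True, False]
```

A monitoring job scanning the log for denied or anomalous entries is one way the "detect unauthorised use" requirement could then be met; encryption of the data at rest would additionally use a vetted cryptographic library rather than anything hand-rolled.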
Breach notification requirements
In the event of a data breach, data fiduciaries must notify affected individuals promptly. The notification must include:
A description of the breach’s nature and extent.
Potential consequences for affected individuals.
Measures taken to mitigate risks.
Additionally, they must report breaches to the regulatory board within a specified timeframe, ensuring transparency and accountability in handling such incidents.
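The three required elements can be sketched as a minimal notification payload that refuses to be built if any element is missing; the field names below are hypothetical, not mandated by the draft rules.

```python
# Illustrative sketch: a breach notification carrying the three elements
# the draft rules require. Field names are assumptions for this example.

def build_breach_notification(nature, consequences, mitigations):
    if not (nature and consequences and mitigations):
        raise ValueError("all three required elements must be provided")
    return {
        "nature_and_extent": nature,             # what happened, and its scope
        "potential_consequences": consequences,  # risks to affected individuals
        "mitigation_measures": mitigations,      # steps taken to reduce risk
    }

notice = build_breach_notification(
    nature="Email addresses of ~1,000 users exposed via misconfigured API",
    consequences="Elevated phishing risk for affected users",
    mitigations="API access revoked; affected users prompted to reset passwords",
)
print(sorted(notice))
```

The same payload, or a variant of it, would also back the report to the regulatory board within the specified timeframe.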
Artificial intelligence has shown remarkable promise in healthcare, from reading X-rays to suggesting treatment plans. But when it comes to actually talking to patients and making accurate diagnoses through conversation — a cornerstone of medical practice — AI still has significant limitations, according to new research from Harvard Medical School and Stanford University.
Published in Nature Medicine, the study introduces an innovative testing framework called CRAFT-MD (Conversational Reasoning Assessment Framework for Testing in Medicine) to evaluate how well large language models (LLMs) perform in simulated doctor-patient interactions. As patients increasingly turn to AI tools like ChatGPT to interpret symptoms and medical test results, understanding these systems’ real-world capabilities becomes crucial.
“Our work reveals a striking paradox — while these AI models excel at medical board exams, they struggle with the basic back-and-forth of a doctor’s visit,” explains study senior author Pranav Rajpurkar, assistant professor of biomedical informatics at Harvard Medical School. “The dynamic nature of medical conversations – the need to ask the right questions at the right time, to piece together scattered information, and to reason through symptoms – poses unique challenges that go far beyond answering multiple choice questions.”
The research team, led by senior authors Rajpurkar and Roxana Daneshjou of Stanford University, evaluated four prominent AI models across 2,000 medical cases spanning 12 specialties. Current evaluation methods typically rely on multiple-choice medical exam questions, which present information in a structured format. However, study co-first author Shreya Johri notes that “in the real world this process is far messier.”
Testing conducted through CRAFT-MD revealed stark performance differences between traditional evaluations and more realistic scenarios. In four-choice multiple-choice questions (MCQs), GPT-4’s diagnostic accuracy dropped from 82% when reading prepared case summaries to 63% when gathering information through dialogue. This decline became even more pronounced in open-ended scenarios without multiple-choice options, where accuracy fell to 49% with written summaries and 26% during simulated patient interviews.
The AI models demonstrated particular difficulty synthesizing information from multiple conversation exchanges. Common problems included missing critical details during patient history-taking, failing to ask appropriate follow-up questions, and struggling to integrate various types of information, such as combining visual data from medical images with patient-reported symptoms.
Efficiency is another advantage of the framework: CRAFT-MD can process 10,000 conversations in 48-72 hours, plus 15-16 hours of expert evaluation. Traditional human-based evaluations would require extensive recruitment and approximately 500 hours for patient simulations and 650 hours for expert assessments.
“As a physician scientist, I am interested in AI models that can augment clinical practice effectively and ethically,” says Daneshjou, assistant professor of Biomedical Data Science and Dermatology at Stanford University. “CRAFT-MD creates a framework that more closely mirrors real-world interactions and thus it helps move the field forward when it comes to testing AI model performance in health care.”
Based on these findings, the researchers provided comprehensive recommendations for AI development and regulation. These include creating models capable of handling unstructured conversations, better integration of various data types (text, images, and clinical measurements), and the ability to interpret non-verbal communication cues. They also emphasize the importance of combining AI-based evaluation with human expert assessment to ensure thorough testing while avoiding premature exposure of real patients to unverified systems.
The study demonstrates that while AI shows promise in healthcare, current systems require significant advancement before they can reliably engage in the complex, dynamic nature of real doctor-patient interactions. For now, these tools may best serve as supplements to, rather than replacements for, human medical expertise.
This will doubly ensure the privacy of a child on various social media platforms and other websites. (Representative image)
A Data Fiduciary will have to ensure verifiable consent of the parent before processing any personal data of a child, according to the draft Digital Personal Data Protection Rules, 2025, which were published by the Central Government on Friday.
The Digital Personal Data Protection Act was passed by Parliament in 2023. The draft rules will come into force once finalised and notified. The Centre has set a deadline of February 18, 2025, for the public to comment on the draft rules.
As per the draft, the Data Fiduciary shall adopt appropriate technical and organisational measures to ensure that verifiable consent of the parent is obtained before processing any personal data of a child. The Data Fiduciary must also exercise due diligence to check that the individual identifying herself as the parent is an identifiable adult.
The draft rules have illustrated four scenarios in this regard where C is a child, P is her parent, and DF is a Data Fiduciary. A user account of C is sought to be created on the online platform of DF, by processing the personal data of C.
Case 1
C informs DF that she is a child. DF shall enable C’s parent to identify herself through its website, app or other appropriate means. P identifies herself as the parent and informs DF that she is a registered user on DF’s platform and has previously made available her identity and age details to DF. Before processing C’s personal data for the creation of her user account, DF shall check to confirm that it holds reliable identity and age details of P.
Case 2
C informs DF that she is a child. DF shall enable C’s parent to identify herself through its website, app or other appropriate means. P identifies herself as the parent and informs DF that she herself is not a registered user on DF’s platform. Before processing C’s personal data for the creation of her user account, DF shall, by reference to identity and age details issued by an entity entrusted by law or the Government with maintenance of the said details or to a virtual token mapped to the same, check that P is an identifiable adult.
P may voluntarily make such details available using the services of a Digital Locker service provider.
Case 3
P identifies herself as C’s parent and informs DF that she is a registered user on DF’s platform and has previously made available her identity and age details to DF. Before processing C’s personal data for the creation of her user account, DF shall check to confirm that it holds reliable identity and age details of P.
Artificial intelligence has shown remarkable promise in healthcare, from reading X-rays to suggesting treatment plans. But when it comes to actually talking to patients and making accurate diagnoses through conversation — a cornerstone of medical practice — AI still has significant limitations, according to new research from Harvard Medical School and Stanford University.
Published in Nature Medicine, the study introduces an innovative testing framework called CRAFT-MD (Conversational Reasoning Assessment Framework for Testing in Medicine) to evaluate how well large language models (LLMs) perform in simulated doctor-patient interactions. As patients increasingly turn to AI tools like ChatGPT to interpret symptoms and medical test results, understanding these systems’ real-world capabilities becomes crucial.
“Our work reveals a striking paradox — while these AI models excel at medical board exams, they struggle with the basic back-and-forth of a doctor’s visit,” explains study senior author Pranav Rajpurkar, assistant professor of biomedical informatics at Harvard Medical School. “The dynamic nature of medical conversations – the need to ask the right questions at the right time, to piece together scattered information, and to reason through symptoms – poses unique challenges that go far beyond answering multiple choice questions.”
The research team, led by senior authors Rajpurkar and Roxana Daneshjou of Stanford University, evaluated four prominent AI models across 2,000 medical cases spanning 12 specialties. Current evaluation methods typically rely on multiple-choice medical exam questions, which present information in a structured format. However, study co-first author Shreya Johri notes that “in the real world this process is far messier.”
Testing conducted through CRAFT-MD revealed stark performance differences between traditional evaluations and more realistic scenarios. In four-choice multiple-choice questions (MCQs), GPT-4’s diagnostic accuracy dropped from 82% when reading prepared case summaries to 63% when gathering information through dialogue. This decline became even more pronounced in open-ended scenarios without multiple-choice options, where accuracy fell to 49% with written summaries and 26% during simulated patient interviews.
The AI models demonstrated particular difficulty synthesizing information from multiple conversation exchanges. Common problems included missing critical details during patient history-taking, failing to ask appropriate follow-up questions, and struggling to integrate various types of information, such as combining visual data from medical images with patient-reported symptoms.
Efficiency is another advantage of the framework: CRAFT-MD can process 10,000 conversations in 48-72 hours, plus 15-16 hours of expert evaluation. A traditional human-based evaluation would require extensive recruitment, roughly 500 hours of patient simulations and another 650 hours of expert assessments.
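Taking midpoints of the reported ranges, the time savings are easy to quantify. The hour figures come from the study; using midpoints is our own simplification:

```python
# Back-of-the-envelope comparison of evaluation effort, using the
# figures reported in the study (midpoints of the quoted ranges).
craft_md_hours = (48 + 72) / 2 + (15 + 16) / 2   # AI processing + expert review
human_hours = 500 + 650                           # patient simulation + expert assessment

speedup = human_hours / craft_md_hours
print(f"CRAFT-MD: ~{craft_md_hours:.0f} h; human-based: ~{human_hours} h "
      f"(~{speedup:.0f}x less evaluator time)")
```

By this rough estimate, the framework cuts total evaluation time by roughly a factor of fifteen.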
“As a physician scientist, I am interested in AI models that can augment clinical practice effectively and ethically,” says Daneshjou, assistant professor of Biomedical Data Science and Dermatology at Stanford University. “CRAFT-MD creates a framework that more closely mirrors real-world interactions and thus it helps move the field forward when it comes to testing AI model performance in health care.”
Based on these findings, the researchers provided comprehensive recommendations for AI development and regulation. These include creating models capable of handling unstructured conversations, better integration of various data types (text, images, and clinical measurements), and the ability to interpret non-verbal communication cues. They also emphasize the importance of combining AI-based evaluation with human expert assessment to ensure thorough testing while avoiding premature exposure of real patients to unverified systems.
The study demonstrates that while AI shows promise in healthcare, current systems require significant advancement before they can reliably engage in the complex, dynamic nature of real doctor-patient interactions. For now, these tools may best serve as supplements to, rather than replacements for, human medical expertise.
When it comes to mobile phone radiation, living in a city with abundant cell towers might actually reduce your exposure. This seemingly paradoxical conclusion emerges from new research examining how 5G technology is changing our electromagnetic environment across urban and rural landscapes.
Switzerland, an early adopter of 5G technology in Europe, provided an ideal testing ground for this research. After introducing new frequency bands in 2019, including the crucial 3.5 GHz band used by 5G networks, the country became perfectly positioned to study how these advanced networks influence our daily exposure to electromagnetic fields.
Modern 5G networks employ sophisticated antenna systems that function quite differently from previous cellular technologies. These systems, called massive Multiple-Input Multiple-Output (Ma-MIMO) antennas, can direct focused beams of signal precisely toward users’ devices. Consider it like a spotlight following an actor on stage rather than flooding the entire theater with light. This targeted approach, known as beamforming, marks a significant shift from older cellular networks that broadcast signals more uniformly across wide areas.
The research team, part of Project GOLIAT, collected measurements across two major Swiss cities (Zurich and Basel) and three rural villages (Hergiswil, Willisau, and Dagmersellen). In their baseline measurements, taken with phones in airplane mode, they found that exposure levels increased with population density. Rural villages experienced average exposure levels of 0.17 milliwatts per square meter (mW/m²), while the cities of Basel and Zurich recorded higher averages of 0.33 and 0.48 mW/m² respectively.
“The highest levels were found in urban business areas and public transport, which were still more than a hundred times below the international guideline values,” says study senior author Martin Röösli, a researcher at the Swiss Tropical and Public Health Institute, in a statement.
When researchers simulated intensive data usage by downloading large files repeatedly, exposure levels increased significantly to averages of 6-7 mW/m². This increase was particularly noticeable in urban areas, where 5G networks use beamforming to direct stronger signals to active devices.
The most compelling findings emerged during tests of maximum upload speeds, where devices continuously sent large files to the network. During these tests, exposure levels reached an average of 16 mW/m² in cities but jumped to 29 mW/m² in villages. This unexpected result occurs because phones in rural areas must work harder to maintain connections with distant cell towers.
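The rural-phone effect follows from basic radio propagation. As a sketch only (an idealized free-space model, not a calculation from the study; real path-loss exponents are often steeper), the transmit power a phone needs grows with the square of its distance to the tower:

```python
# Toy illustration: under a free-space model, received power falls off
# as 1/d^2, so a phone must raise its transmit power with the square of
# its distance to the tower to be heard. Distances are hypothetical.
def required_tx_power(distance_km, reference_km=0.5, reference_mw=1.0):
    """Transmit power needed to match the received signal a phone
    achieves at reference_km using reference_mw of output."""
    return reference_mw * (distance_km / reference_km) ** 2

urban = required_tx_power(0.5)   # dense network: tower ~0.5 km away
rural = required_tx_power(5.0)   # sparse network: tower ~5 km away
print(f"rural phone needs ~{rural / urban:.0f}x the transmit power")
```

This is why sparse rural networks can raise a user's own-device exposure even as environmental exposure from base stations stays low.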
“We have to keep in mind that in our study the phone was about 30 cm away from the measuring device, which means that our results might underestimate the real exposure,” says study lead author Adriana Fernandes Veludo. “A mobile phone user will hold the phone closer to the body and thus the exposure to RF-EMF could be up to 10 times higher.”
“Environmental exposure is lower when base station density is low. However, in such a situation, the emission from mobile phones is orders of magnitude higher,” Veludo explains. “This has the paradoxical consequence that a typical mobile phone user is more exposed to RF-EMF in areas with low base station density.”
As this research expands beyond Switzerland’s borders to nine more European nations, scientists will track how different approaches to 5G implementation affect electromagnetic exposure levels. Their findings will help inform the ongoing debate about optimal cellular network design and its implications for public health.
Massive AST SpaceMobile satellites deployed in low Earth orbit (artist’s impression).
India is all set to launch a massive American communications satellite that will allow phone calls via direct connectivity from space, a more modern and innovative approach to satellite telephony than existing services.
This is also the first time an American company is launching a massive communications satellite from India on a dedicated Indian rocket. To date, India has launched only small satellites made by American entities.
India’s Science Minister Dr Jitendra Singh disclosed that “In February or March we will be launching a US satellite for mobile communication, this satellite will enable voice communication on mobile phones. It will be an interesting mission”.
While neither the minister nor Indian space agency ISRO confirmed who the American satellite operator is, experts say it is AST SpaceMobile, a Texas-based company hoping to launch its big communications satellite from Sriharikota.
The US company asserts that one can use any smartphone to make voice calls using its service. Most other current satellite-based Internet and voice providers require subscribers to buy special handsets or use special terminals, as Starlink does.
American media had reported that Abel Avellan, the CEO of AST SpaceMobile, confirmed in an investor call last year that the company will use the Geo-synchronous Satellite Launch Vehicle (GSLV) to launch a single Block 2 Bluebird satellite.
NDTV reached out to AST SpaceMobile for a statement, but no response had come in by the time this story was filed.
Each Bluebird satellite will have an antenna of about 64 square metres. The satellite will weigh nearly 6,000 kilograms, and India’s rocket will put it in low Earth orbit.
In an earlier statement, Abel Avellan said they “invented a technology that connects satellites directly to ordinary cell phones and provides broadband internet through the largest ever commercial phase array in low Earth orbit”.
AST SpaceMobile’s mission, he added, is to close the global connectivity gap and digitally transform nations by bringing “affordable 5G broadband service from space to billions of people worldwide, direct to everyday smartphones”.
An ISRO expert said this satellite will enable “direct to mobile communication” and the company is hoping to place some massive satellites in the Earth’s orbit to power this path-breaking technology.
ISRO experts confirmed that AST SpaceMobile has hired the services of India’s Bahubali rocket, the Launch Vehicle Mark-3 (LVM-3), to launch the Bluebird satellite.
It is a huge boost for ISRO that even American companies now have faith in India’s LVM-3, which has a hundred percent success record.
Before this, there have been two dedicated commercial LVM-3 launches to hoist satellites for the OneWeb constellation, in which Bharti Enterprises holds a big stake; the same group also owns the Indian telecom service Airtel.
This new satellite-based direct-to-mobile connectivity will compete directly with existing providers like Starlink and OneWeb, both of which use massive constellations (networks of satellites) to provide broadband Internet connectivity.
In contrast, an ISRO expert said that since AST SpaceMobile is deploying much larger satellites, it could make do with a somewhat smaller constellation.
AST SpaceMobile asserts its technology is “designed to connect directly to mobile phones by becoming a pioneer as we create the first and only space-based cellular broadband network”.
“Private autopsy doesn’t confirm cause of death stated by police.”
The family of Suchir Balaji, the 26-year-old OpenAI whistleblower who was found dead just a month after his New York Times exposé was published, is claiming that the young man was murdered.
An account that appears to belong to Balaji’s mother Poornima Ramarao — shortened to “Rao” online — said in a post on X-formerly-Twitter that a private investigator’s probe has led the family to believe that the young whistleblower did not commit suicide as officials allege.
“We hired private investigator and did second autopsy to throw light on cause of death,” Ramarao tweeted. “Private autopsy doesn’t confirm cause of death stated by police.”
“Suchir’s apartment was ransacked,” she continued, adding that there was some “sign of struggle in the bathroom and looks like some one hit him in bathroom based on blood spots.”
The account, which has shared photos of Balaji that hadn’t previously been seen in the press and a GoFundMe for the private investigation efforts, went on to suggest that the city of San Francisco is covering up a “cold blooded” murder.
After calling for an FBI investigation, the Rao account then tagged Elon Musk, Vivek Ramaswamy, and California Gov. Gavin Newsom — and actually got a response from Musk himself.
“This doesn’t seem like a suicide,” the X owner and OpenAI cofounder wrote.
Pressing Matter
In the wake of Balaji’s death, Ramarao has spoken out online and in the press about her son and her suspicions — though for the most part, her missives have not made waves stateside.
Earlier in December, Business Insider published an interview with the grieving mother, who described her son’s precocious interest in math and artificial intelligence, his disillusionment and ultimate exit from OpenAI, and the debacle she endured when trying to get answers about his death.
According to Ramarao, Balaji had been on vacation with friends to celebrate his 26th birthday just before being found dead in his apartment. She claims she was the one to call the police when she hadn’t heard from him upon his return from his trip, and that she waited outside the apartment for hours only to learn of Balaji’s death when a medical examiner’s van arrived on the scene with a stretcher.
“They didn’t give the news to me,” Ramarao told BI. “I’m still sitting there thinking, ‘My son is traveling. He’s gone somewhere.’ It’s such a pathetic moment.”
Study pinpoints the percentage of daily calories individuals should consume in their morning meal
Mom always said breakfast was the most important meal of the day. As it turns out, she was right—but with a catch. New research suggests that when it comes to our morning meal, both portion size and nutritional quality play crucial roles in maintaining good health, especially for older adults at risk for heart disease.
While research has shown that skipping breakfast is linked to poorer overall diet quality and higher cardiometabolic risk, Spanish researchers wanted to explore an understudied area: how both the calorie intake and dietary quality of breakfast might affect cardiovascular health over time.
“Promoting healthy breakfast habits can contribute to healthy aging by reducing the risk of metabolic syndrome and associated chronic diseases, thereby improving quality of life,” says Karla-Alejandra Pérez-Vega, a researcher at Hospital del Mar and CIBER for Obesity and Nutrition, in a statement.
How breakfast influences overall health
The investigation was part of the larger PREDIMED-Plus trial, which studies the effects of Mediterranean diet and lifestyle interventions on cardiovascular health.
The study included 383 adults aged 55-75 who were participating in the trial at the Hospital del Mar Research Institute in Barcelona. All participants had metabolic syndrome—a cluster of conditions including high blood pressure, high blood sugar, excess body fat around the waist, and abnormal cholesterol levels that together increase risk of heart disease, stroke, and diabetes. They were also following a weight-loss lifestyle intervention based on the Mediterranean diet.
For three years, researchers tracked these individuals’ breakfast habits and health markers. They discovered something fascinating: people who ate either too little (less than 20% of their daily calories) or too much (more than 30%) at breakfast tended to fare worse than those who hit the sweet spot of 20-30% of their daily caloric intake during their morning meal.
By the study’s end, the differences were striking. Compared to the goldilocks group who ate just right, participants who consumed too little or too much at breakfast showed higher body mass index (BMI) measurements and larger waist circumferences. Their blood work also revealed higher levels of triglycerides (a type of fat found in blood) and lower levels of “good” HDL cholesterol.
Quality plays key role, too
But quantity wasn’t the only factor that mattered. Quality played an equally important role. Participants whose breakfasts scored low on nutritional quality—regardless of size—showed similar negative health trends. They too had larger waist measurements, less favorable blood fat profiles, and perhaps most surprisingly, decreased kidney function compared to those who ate more nutritionally balanced morning meals.
To assess breakfast quality, researchers used the Meal Balance Index, which scores meals based on nine nutritional components. The index uses Acceptable Macronutrient Distribution Ranges for proteins and fats, Daily Values for fiber, potassium, calcium, and iron, and World Health Organization recommendations for added sugars, saturated fats, and sodium. Each component receives a score from 0 to 100, with scores for potassium and saturated fat weighted double in the final calculation. Higher scores indicate better nutritional quality.
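The aggregation described above can be sketched as a weighted mean. The component names and example scores here are illustrative; the published index defines exactly how each 0-100 component score is derived:

```python
# Sketch of the Meal Balance Index aggregation: nine components scored
# 0-100, with potassium and saturated fat weighted double per the study.
WEIGHTS = {
    "protein": 1, "total_fat": 1, "fiber": 1, "calcium": 1, "iron": 1,
    "added_sugar": 1, "sodium": 1,
    "potassium": 2, "saturated_fat": 2,   # double-weighted components
}

def meal_balance_index(scores: dict) -> float:
    """Weighted mean of nine 0-100 component scores (higher = better)."""
    assert set(scores) == set(WEIGHTS), "all nine components required"
    total = sum(scores[k] * WEIGHTS[k] for k in WEIGHTS)
    return total / sum(WEIGHTS.values())

example = {k: 80 for k in WEIGHTS}   # a uniformly decent breakfast...
example["saturated_fat"] = 30        # ...except high in saturated fat
print(round(meal_balance_index(example), 1))
```

Because of the double weighting, a poor saturated-fat score drags the overall index down more than an equally poor score on a singly weighted component would.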
These findings, published in The Journal of Nutrition, Health and Aging, have particular relevance for older adults trying to manage or prevent heart disease. While previous research has established that eating breakfast is better than skipping it, this study suggests that simply eating any breakfast isn’t enough—both portion size and nutritional quality need careful consideration.
Interestingly, the study took place during a broader health intervention where participants were following a Mediterranean diet and trying to lose weight. Even within this generally healthy dietary pattern, breakfast composition made a measurable difference in health outcomes.
The ‘perfect’ breakfast
For those wondering what an ideal breakfast might look like, the study suggests aiming for that 20-30% sweet spot of daily calories. For someone eating 2,000 calories per day, that would mean a breakfast between 400-600 calories. Quality-wise, think balanced meals incorporating whole grains, lean proteins, healthy fats, and fruits or vegetables while limiting processed foods high in added sugars and unhealthy fats.
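That arithmetic generalizes to any calorie target; a minimal helper, assuming only the 20-30% band reported in the study:

```python
def breakfast_range(daily_kcal, low=0.20, high=0.30):
    """Return the 20-30% 'sweet spot' of daily calories for breakfast."""
    return daily_kcal * low, daily_kcal * high

lo, hi = breakfast_range(2000)   # the article's 2,000 kcal example
print(f"{lo:.0f}-{hi:.0f} kcal")
```

At 1,800 kcal per day, for instance, the same band works out to a 360-540 kcal breakfast.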
With metabolic syndrome and cardiovascular disease representing major public health challenges worldwide, understanding how simple dietary adjustments—like optimizing breakfast—could help manage these conditions is invaluable.
As our understanding of nutrition science evolves, it’s becoming clear that when we eat may be almost as important as what we eat. This study adds another piece to that puzzle, suggesting that front-loading our day with the right amount of high-quality nutrition might be one key to better metabolic health.
Perhaps mom’s advice needs a slight update.
“Breakfast is the most important meal of the day, but what and how you eat it matters. Eating controlled amounts—not too much or too little—and ensuring good nutritional composition is crucial,” says Álvaro Hernáez, researcher at the Hospital del Mar Research Institute, CIBER for Cardiovascular Diseases (CIBERCV), and professor at the Blanquerna Faculty of Health Sciences at Ramon Llull University. “Our data show that quality is associated with better cardiovascular risk factor outcomes. It’s as important to have breakfast as it is to have a quality one.”
In the tech world, we like to label periods as the year of (insert milestone here). This past year (2024) was a year of broader experimentation in AI and, of course, agentic use cases.
As 2025 opens, VentureBeat spoke to industry analysts and IT decision-makers to see what the year might bring. For many, 2025 will be the year of agents, when all the pilot programs, experiments and new AI use cases converge into something resembling a return on investment.
In addition, the experts VentureBeat spoke to see 2025 as the year AI orchestration will play a bigger role in the enterprise. Organizations plan to make management of AI applications and agents much more straightforward.
Here are some themes we expect to see more in 2025.
More deployment
Swami Sivasubramanian, VP of AI and data at AWS, said 2025 will be the year of productivity, because executives will begin to care more about the costs of using AI. Proving productivity becomes essential, and this begins with understanding how multiple agents, both inside internal workflows and those that touch other services, can be made better.
“In an agentic world, workflows are going to be reimagined, and you start asking about accuracy and how do you achieve five times productivity,” he said.
Palantir chief architect Akshay Krishnaswamy agreed that decision-makers, especially those outside of the technology cluster, are beginning to get antsy about seeing the impact these AI investments will have on their businesses.
“People are rightfully fatigued about more sandboxing, because it’s off the back of the whole data and analytics journey of the past 10 years, where people also did a ton of experimentation,” said Krishnaswamy. “If you’re an executive, you’re like, ‘this has to be the year I actually start to see some ROI, right?’”
An explosion of orchestration frameworks
Going into 2025, there is a greater need to create infrastructure to manage multiple AI agents and applications.
Chris Jangareddy, a managing director at Deloitte, told VentureBeat that next year will be very exciting, with LangChain facing competition from other AI companies looking to offer their own orchestration platforms.
“A lot of tools are catching up to LangChain, and we’re going to see more new players come up,” Jangareddy said. “Even before organizations can think about multiagents, they’re already thinking about orchestration so everyone is building that layer.”
Many AI developers turned to LangChain to start building out a traffic system for AI applications. But LangChain isn’t always the best solution for some companies, which is where newer options such as Microsoft’s Magentic or comparable frameworks like LlamaIndex come in. For 2025, expect an explosion of even more new options for enterprises.
“Orchestration frameworks are still very experimental, with LangChain and Magentic, so you can’t be heads down for just one,” said PwC global commercial technology and innovation officer Matt Wood. “Tooling in this space is still early, and it’s only going to grow.”
Better agents and more integrations
AI agents became the biggest trend for enterprises in 2024. As organizations gear up to deploy multiple agents into their workflows, the possibility of agents crossing from one system to another becomes more apparent. This is particularly true when enterprises are looking to demonstrate their agents’ full value to executives and employees.
Platforms like AWS’s Bedrock, and even Slack, offer connections to other agents from Salesforce’s Agentforce or ServiceNow, making it easier to transfer context from one platform to another. However, understanding how to support these integrations and teaching orchestrator agents to identify internal and external agents will become an important task.
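What such an orchestration layer does can be sketched in a few lines. Everything below (the class, the keyword routing) is a hypothetical illustration, not any vendor’s API:

```python
# Minimal sketch of an orchestrator that routes a request to a
# registered agent. Real platforms use capability registries or an LLM
# router rather than keyword matching.
from typing import Callable, Dict

class Orchestrator:
    def __init__(self):
        self.agents: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, agent: Callable[[str], str]) -> None:
        """Add an internal or external agent under a capability name."""
        self.agents[name] = agent

    def route(self, task: str) -> str:
        # Naive routing: pick the first agent whose name appears in the task.
        for name, agent in self.agents.items():
            if name in task.lower():
                return agent(task)
        return "no agent available"

orch = Orchestrator()
orch.register("billing", lambda t: f"billing agent handled: {t}")
orch.register("hr", lambda t: f"hr agent handled: {t}")
print(orch.route("Escalate this billing dispute"))
```

The hard problems the article points to, transferring context between platforms and discovering which agents exist, live precisely in the parts this sketch stubs out.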
As agentic workflows become more complex, the recent crop of more powerful reasoning models, like OpenAI’s recently announced o3 or Google’s Gemini 2.0, could make orchestrator agents more powerful.
Apple is challenging Nvidia’s dominance in AI hardware by developing custom chips, like ‘Baltra,’ to reduce reliance on external GPUs.
Apple is making headlines in the technology world with its recently launched AI-based features, though there’s one player Apple would seem to prefer to keep out of the limelight: Nvidia. As the reigning king of GPU technology, Nvidia is vital to the AI revolution. Still, Apple appears to be quietly charting a different course. Rather than follow the industry trend of relying heavily on Nvidia, Apple is shopping for or developing alternatives, including its own AI server chips. The step reflects a mix of strategic ambition, financial restraint, and a long-standing rivalry dating back to the Steve Jobs era.
A Calculated Move Toward Independence
Apple’s shift away from Nvidia is deliberate. While many tech companies are snapping up Nvidia’s GPUs, Apple primarily rents access to them via cloud providers like Amazon and Microsoft. According to reports, Apple has even used in-house chips designed by Google to train its largest AI models. This approach underscores Apple’s desire for independence, avoiding overreliance on a single supplier.
Experts say that Apple’s decision has been influenced by two factors: cost-efficiency and control. Building in-house hardware allows Apple to integrate AI solutions more smoothly into its ecosystem while retaining greater control over its technology stack.
A Two-Decade-Old Rift
The tensions between Apple and Nvidia are not new. Some sources from within the industry claim that it goes back at least 20 years to disputes during the time of Steve Jobs. The business fallout from those conflicts has lingered, with Apple seeming reluctant to deepen its ties with Nvidia, even as Nvidia has climbed to dominance in AI hardware.
ISRO’s SpaDeX Mission Launches Spinach (left) into Space to Study Food and Nutrition for Astronauts | ISRO
Indian Space Research Organisation (ISRO) ended 2024 by launching the crucial SpaDeX Mission, aiming to become the fourth country in the world to bring together and dock two spacecraft in space. The mission, known mainly for its docking objective, also carried spinach into space to study the possibility of food and nutrition for astronauts.
On Monday, ISRO launched two small spacecraft by Polar Satellite Launch Vehicle (PSLV) from the Satish Dhawan Space Centre in Sriharikota. As India aims to leave a mark in the world through its space exploration missions, this expedition will help India’s space ambitions such as ‘Indian on Moon’, sample return from the Moon and the building and operation of Bharatiya Antariksh Station- India’s very own space station.
While the primary objective of the SpaDeX mission is to demonstrate the technology for docking and undocking spacecraft in a circular low-Earth orbit, it has also carried a biological payload into orbit for the first time. Under this objective, scientists have chosen ‘Spinacia oleracea’, commonly known as spinach, to be taken into space to study the possibility of food and nutrition during future space missions.
The first biological payload to be put in orbit has been designed by Amity University’s Center for Excellence in Astrobiology and will be monitored in real time by Amity University Mumbai through its special Amity Plant Experimental Module in Space (APEMS). Since plants are very sensitive to environmental stimuli like light, temperature, nutritional conditions, and gravity, APEMS will use the monitoring data to study the effects of the lack of light and gravity on the plant.
The information obtained from this experiment under APEMS will provide an understanding of how higher plants sense the direction of gravity and light, respond to gravitational stress, and regulate their direction of growth. The significance of studying the callus is that it can differentiate into shoots, roots, or a whole plant through the addition of a specific set of phytohormones, chemical messengers that regulate plant growth.
According to scientists, the advantage of using spinach callus is that it grows fast, making its growth rate easy to measure, while changes in its green color make the growth and death of the plant easy to capture through the in-built camera. Along with monitoring the growth of the plant callus, Amity University Mumbai will also conduct parallel experiments to learn the differences between the plant growing in space and in the university’s laboratory.
Dr W Selvamurthy, former DRDO scientist and Director General for Amity Directorate of Science and Innovation, said, “While we are planning to put humans to space, we will also need to give them fresh food for their satisfaction and to meet their nutritional requirements. Through this plant callus, we will get to know how plants behave in a microgravity environment and what challenges are faced to grow vegetables in space.”
Attention to data plays a central role in contemporary macroeconomic analyses, but measuring it presents a challenge, particularly when covering varying geographic levels and time frequencies.
To address this, Dr. Nathan Goldstein from Bar-Ilan University’s economics department and Dr. Ohad Raveh, head of the Department of Environmental Economics and Management at the Robert H. Smith Faculty of Agriculture, Food and Environment at the Hebrew University of Jerusalem, proposed an unconventional and innovative metric for public attention — based on individual reports of unidentified aerial phenomena (UAP), or UFO sightings.
In a study published in the Humanities and Social Sciences Communications journal, the researchers discovered a surprising link between reports of these phenomena and macroeconomic conditions at the county, state and national levels in the U.S.
After accounting for weather conditions and external factors, they found that reports of such phenomena were more common in wealthier regions, though a counter-cyclical trend was observed within those regions as well.
The fluctuations in attention to extraordinary phenomena in the skies reflect broader patterns of changes in public attention.
“In our study, we examined the relationship between the attention U.S. county residents pay to the skies (UFO sightings from 2000 to 2020) and the attention they give to economic factors, aiming to see if there’s overlap,” Dr. Raveh explained in an interview with Ynet.
“To our surprise, we found a strong positive correlation between economic attention and UFO sightings, even when considering factors like weather, mental health and more,” he added.
The researchers supported their interpretation by exploiting regional variation in COVID-19 restrictions as an external source of variation, finding evidence of a causal impact on public attention. They also demonstrated that the UAP reporting metric closely tracks traditional attention metrics based on expectations data.
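The study’s core statistical idea, correlating two series after controlling for confounders like weather, can be illustrated with a minimal partial-correlation sketch. All data below is synthetic and invented purely for illustration; the actual study uses far richer panel-regression methods.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical county-month observations

# Synthetic illustration: cloud cover suppresses sightings, while
# sightings and economic attention share a common positive driver.
weather = rng.normal(size=n)            # e.g. a cloud-cover index
attention = rng.normal(size=n)          # economic-attention proxy
sightings = 0.8 * attention - 0.5 * weather + rng.normal(scale=0.5, size=n)

def residualize(y, control):
    """Remove the linear effect of `control` from `y` via least squares."""
    X = np.column_stack([np.ones_like(control), control])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Correlation between sightings and attention after partialling out weather
r = np.corrcoef(residualize(sightings, weather),
                residualize(attention, weather))[0, 1]
print(round(r, 2))  # strongly positive despite the weather confounder
```

Because the weather effect is removed from both series before correlating, the remaining association reflects the attention channel rather than sky visibility, which is the spirit of the controls described above.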
Stargazing in the U.S. (Photo: Mike Blake, Reuters)
The researchers applied their metric in the context of monetary policy transmission — the process, in macroeconomics, through which central-bank decisions such as changes to interest rates and the money supply feed through to output and prices. They found it explained significant regional variation in responses to monetary shocks.
Higher levels of attention across U.S. regions, as well as within regions during business cycles, significantly reduced the impact of contractionary monetary policies (e.g., central bank interest rate hikes).
Dr. Raveh highlighted the practical implications of the findings, which span public attention measurement, geographic economic variability, monetary policy planning, attention during economic downturns and COVID-19 effects.
“Reports of unidentified aerial phenomena serve as an indirect but effective indicator of public attention to unusual events. The study shows that these reports are linked to economic conditions, meaning they can be used to understand how attention shifts based on economic circumstances,” said Dr. Raveh.
“We found that wealthier areas report more unidentified aerial phenomena, while such reports increase during economic recessions within specific regions, suggesting that public attention shifts more toward unusual events during times of crisis,” he added.
“These patterns align with standard macroeconomic attention metrics. Moreover, the UAP metric helps predict how different regions respond to monetary policy decisions like interest rate changes. In areas with high attention levels, for instance, interest rate hikes may have less impact on consumer and business behavior.”
ISRO’s PSLV-C60 carrying SpaDeX and its payloads, lifts off from the first launch pad at Satish Dhawan Space Centre, in Sriharikota, Andhra Pradesh, Monday, Dec. 30, 2024. Credit: PTI Photo
Sriharikota: Two spacecraft that will aid ISRO in demonstrating space docking, a critical technology for future space missions, separated successfully and were placed into the desired orbit late Monday, the country’s space agency said.
“PSLV-C60 mission accomplished as far as the SpaDeX spacecraft are concerned,” said Mission Director M Jayakumar.
Indian Space Research Organisation (ISRO) chief S Somanath said the rocket placed the satellites in the right orbit, a 475 km circular orbit, after over 15 minutes of flight.
“So, as far as we are concerned, the rocket has placed the spacecraft in the right orbit and the SpaDeX satellites have moved one behind the other, and over the period of time, it will pick up further distance, travel about 20 km away and then the rendezvous and docking process will start.
“And we hope that the docking process can happen in another week and the nominal time is going to be approximately January 7,” he said in his address from the Mission Control Centre.
A very important part of this mission is the POEM-4, with 24 payloads from startups, industries, academia and ISRO centres, he said.
These are scheduled to be fired on Tuesday morning. Scientists would work through the night to ensure that the POEM-4 reaches the desired orbit level to perform the operation, Somanath said.
Later, talking to reporters at the Satish Dhawan Space Centre, Somanath said the PSLV-C60 mission placed the two SpaDeX satellites, each weighing 220 kg, in a circular orbit at 475 km, as against the projected 470 km. The mission also carries the POEM-4, with its 24 payloads for research and development.
“They are payloads and are not satellites. They are going to be attached to the fourth stage (of the PSLV rocket) for conducting experiments over the next two months. The upper stage of the PSLV rocket will be brought down to a lower orbit of 350 km, and that process is currently going on. After that, we will have many activities to continue,” Somanath, also the Secretary, Department of Space, said.
On the Space Docking Experiment, he said the scientists would carry out many operations from December 31 at ISTRAC Bengaluru, and he expected docking to take place “possibly on January 7”.
“So, we will be able to see that from the Control Centre, ISTRAC, Bengaluru. All those activities of docking, including telecasting of the onboard images from the camera of the docking processes,” he said.
Somanath, flanked by SpaDeX Project Director N Surendran, Mission Director M Jayakumar and Directors of the various Centres, said ISRO was “very proud” of this accomplishment and expressed hope that the SpaDeX mission’s objectives can be achieved in the coming days.
“Really important mission for us, this you know, with the space sector reforms, and expansion of space activities. Then we have human space flight programmes, building space stations etc. This (Monday’s) mission is so critical for us to work on future missions like the Chandrayaan-4, missions to the Moon as well,” he said.
“I believe this is not the first SpaDeX and there will be many more SpaDeX varieties including complex versions of docking systems in the coming days,” he said.
Mission Director M Jayakumar said, “Hearty congratulations to the team ISRO for venturing into the exciting domain of Space Docking and this Mission once again has POEM-4. We have 24 payloads, and some interesting experiments in POEM-4 like debris capture, and biological experiments are there.”
“We had two launches of the PSLV from the same launch pad, that is the first launch pad, in December. So, after the first launch (on December 5 for the PSLV-C59/Proba-3 mission), the Satish Dhawan Space Centre team was quick in rising to the occasion (for Monday’s mission),” he said.
Surendran said, “I would like to congratulate the PSLV team for successive successful launches of the PSLV in a month; it is a record. We have also placed our twin babies in a perfectly circular orbit, as per our requirement.
“I am happy to say that our solar panels are successfully deployed and the spacecraft are on their journey and holding their wings towards the docking and it is expected to happen around January first week,” he said.
As you are aware, the space sector is going through a phase of enabling private players to meet growing demands; as per the policy guidelines, SpaDeX was assembled and integrated here for the first time, he said.
Dubbed a prelude to ISRO setting up its own space station by 2035, the PSLV-C60 mission would also see India join an elite club of nations to achieve space docking, a feat expected to take place in the coming days.
The 44.5-metre-tall rocket carried two spacecraft, Spacecraft A and Spacecraft B, each weighing 220 kg, that would help demonstrate space docking, a capability key to satellite servicing and interplanetary missions.
After the culmination of the 25-hour countdown that commenced on Sunday, the rocket lifted off at 10 pm from the First Launch Pad at this spaceport, on an island located about 135 km east of Chennai, emanating thick orange-coloured fumes and a thunderous sound.
According to ISRO scientists, the two spacecraft, Spacecraft A (SDX01) or the ‘Chaser’ and Spacecraft B (SDX02) or the ‘Target’, would be docked together later at an altitude of about 470 km after matching speed and closing the distance between them.
Google CEO Sundar Pichai highlights 2025 as a pivotal year for the company, focusing on scaling the Gemini AI model. Photo : iStock
Sundar Pichai, the CEO of Google, remarked that 2025 will be a decisive year for the company while emphasising the need to raise the stakes in AI. Speaking at a strategy meeting held on December 18, Pichai and a group of executives, dressed in sweaters in the spirit of the holidays, detailed plans for the coming year. “The stakes are high,” Pichai said, urging employees to recognise how important this moment is. His statements come as the race for AI dominance continues, with Google determined to reinforce its leadership by scaling Gemini in the consumer space.
Pichai did not shy away from the reality that Google still has ground to cover in the competitive AI race. He acknowledged that although the Gemini model has had some early success, 2025 will be critical for closing the gap with its competition. He pointed out that the Gemini application has garnered “incredible momentum”, but said obstacles lie ahead and urged speed if Google hopes to stay ahead in a technology race where rapid change is constant.
Gemini, Google’s flagship AI model, is set to play a central role in the company’s strategy next year. Pichai described scaling Gemini for consumers as Google’s “biggest focus” for 2025, signalling an all-hands-on-deck approach to expanding the model’s capabilities and applications. This includes integrating Gemini into more products and services to enhance user experiences.
Vaping, even without nicotine, can immediately drop your blood vessels’ ability to carry life-sustaining oxygen throughout the body.
Research presented at the 2024 annual meeting of the Radiological Society of North America showed that e-cigarettes interfere with blood flow and decrease the oxygen in the veins, which may mean that a vaper’s lungs are absorbing less oxygen. The finding will need to be confirmed with corroborating research, but it could indicate that regular vaping leads to blood vessel disease.
Electronic cigarettes are not a safer alternative to cigarette smoking, according to Dr. Marianne Nabbout, the study’s lead author from the University of Arkansas for Medical Sciences in Little Rock.
Even though vaping aerosols don’t have all the same cancer-causing toxins as tobacco smoking, vapers are still breathing chemicals and affecting their blood vessels. E-cigarettes work by turning a liquid into vapor which is inhaled by the user. That vapor may contain lead, formaldehyde, propylene glycol, glycerin, and other harmful substances.
The investigation included 31 vapers and smokers between the ages of 21 and 49. These people were compared to 10 abstainers. Each participant, in three separate sessions, got MRI scans before and after smoking tobacco, vaping with nicotine, and vaping without nicotine.
During the vaping sessions, a thigh cuff (a blood pressure device) restricted blood flow to the leg. After vaping or smoking, the thigh cuff was released, and researchers measured the speed of the return of blood to the leg. The researchers also measured the oxygen of the blood returning to the heart after flowing throughout the body, supplying oxygen to tissues.
After vaping or smoking, there was a significant drop in the rate of blood flow to the leg. Compared with the non-users, vapers using nicotine products had the biggest drop in blood-flow performance, followed by vapers not using nicotine, and then the smokers. There was also a decrease in oxygen in the blood returning to the heart in both vapers and smokers.
Good blood flow is essential for carrying oxygen and nutrients throughout the body and for removing waste products of metabolism. Poor blood flow can lead to blood clots, high blood pressure, and stroke.
In another study, in the journal Arteriosclerosis, Thrombosis, and Vascular Biology, researchers found that chronic e-cigarette users had impaired blood vessel function, which may put them at risk for heart disease. Matthew E. Springer, PhD, a professor of medicine at the University of California, San Francisco, stated that chronic users of e-cigarettes may face a risk of vascular disease similar to that of chronic smokers.
Springer and his colleagues collected blood samples from a group of 120 volunteers that included those who were long-term e-cigarette users and long-term smokers, as well as people who didn’t smoke or vape.
If you know the phrase, “What the sigma?” or know what “GYAT” means, give yourself a pat on the back because you’re already ahead of the pop culture curve. These two terms are just a taste of Gen Alpha slang. Generation Alpha, AKA people born between 2010 and 2024, has grown up amid a digital revolution: Instagram launched, the word “app” started gaining traction and iPads made their debut in the early 2010s.
Since this group grew up in a digital age, most of their slang comes from their many forms of communication like YouTube, TikTok, Instagram, Roblox, school and friends. If you’re a parent, you certainly don’t want to be in a social setting and say an outdated slang word or phrase that used to be cool. Wouldn’t you cringe if your best friend said “bling” or “booyah” to a group of tweens? Yikes.
There’s generally no point in Googling the latest slang because you’ll never catch up. Luckily for me, I’m the parent of a Gen Alpha and a Gen Z kid, so I’m inundated with nonsensical terms and outbursts. In fact, both of them helped me with this list!
While they don’t use the same words, Gen Z’s slang words sometimes overlap with the even younger generation. Understanding terms from either group can help you fit into conversations and, yes, avoid being labeled “beta.” What’s “beta”? It’s the opposite of an Alpha person, so “beta” is slang for someone who’s less assertive (AKA weak).
OK, it’s time to get hip. Keep note that not all of these were created by Gen Alpha kids, but they’re commonly used by people of this generation. Here’s hoping that our list of 55 Gen Alpha slang words will give you some cool points with the youth. It will also give you some clue as to what these kids are saying.
55 Gen Alpha Slang Words (With Meanings)
1. Brain Rot
General term for spending too much time online watching low-quality content that “rots” the brain.
2. Skibidi
This slang, which could mean good, bad, cool, is named after the popular YouTube series called “Skibidi Toilet” by animator Alexey Gerasimov. Most of the videos show toilets with the heads of grown men spinning and talking while singing songs.
3. Skibidi Ohio Rizz
To call someone this means that they’re weird.
4. Rizz
Abbreviation of the word “charisma”, which is the charm, magnetism or ability to attract others.
A 2018 artist’s concept shows the Parker Solar Probe spacecraft flying into the Sun’s outer atmosphere, called the corona, on a mission to help scientists learn more about the Sun. (via REUTERS)
NASA’s Parker Solar Probe scripted history with a record-breaking closest approach to the Sun on December 24. The feat was confirmed by the US space agency on Friday, December 27.
NASA said its Parker Solar Probe is on a mission “to touch” the Sun. It was launched in 2018 and has been gradually circling closer to the sun, using flybys of Venus to pull it gravitationally into a tighter orbit around the sun.
The Parker Solar Probe survived its record-breaking closest approach to the solar surface on December 24, 2024. It flew into the sun’s outer atmosphere called the corona.
The spacecraft was out of contact with Earth because signal transmission is constrained in close proximity to the sun; it was not able to send a signal back to its operators, indicating its condition after the flyby, until December 27.
NASA finally received Parker Solar Probe’s “beacon tone” late December 26 (local time), confirming the spacecraft is safe.
“After seven days of silence, Parker has resumed communication with Earth, confirming it’s healthy after soaring just 3.8 million miles from the solar surface — the closest a human-made object has ever been to a star,” NASA said.
Simply put, it was the closest solar flyby in history.
Parker moved at a blistering pace of around 430,000 mph (690,000 kph), fast enough to fly from the US capital Washington to Japan’s Tokyo in under a minute. The spacecraft endured temperatures of up to 1,800 degrees Fahrenheit (982 degrees Celsius), according to the NASA website.
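The Washington-to-Tokyo comparison is easy to sanity-check with back-of-the-envelope arithmetic. The roughly 6,800-mile great-circle distance used below is an assumption on my part, not a figure from NASA:

```python
# Rough check of "Washington to Tokyo in under a minute"
# at the probe's reported flyby speed.
speed_mph = 430_000
distance_miles = 6_800  # assumed great-circle Washington, D.C. to Tokyo
seconds = distance_miles / speed_mph * 3600
print(f"{seconds:.0f} seconds")  # just under a minute
```

At that speed the trip takes well under 60 seconds, consistent with the claim.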
The spacecraft is expected to send back detailed telemetry data on its status on January 1.
What does the record-breaking flyby mean?
NASA said that by getting closer to the sun than ever before, Parker Solar Probe will “reveal the secrets about our star that can help protect our technology and support our future exploration.”
This close-up study of the Sun allows Parker Solar Probe to take measurements that help scientists better understand how material in this region gets heated to millions of degrees.
Artificial Intelligence words are seen in this illustration (Reuters)
Global tech companies as well as local start-ups are looking to tap deeper into the Indian market with artificial intelligence (AI) platforms adapted for India’s vast range of languages, Financial Times wrote.
India has 22 official languages, with Hindi the most widespread, but researchers estimate the languages and dialects spoken by its 1.4 billion people rise into the thousands, according to the article.
Examples of companies and their products
Microsoft, Google, and start-ups including Silicon Valley-backed Sarvam AI, founded just last year, and Krutrim, founded by Bhavish Aggarwal of Indian mobility group Ola, are all working on AI voice assistants and chatbots in languages such as Hindi and Tamil, the article read.
The tools are aimed at fast-growing Indian industries, such as the country’s large customer service and call centre sector, according to the article.
Google launched its Gemini AI assistant in nine Indian languages on Tuesday.
Microsoft’s Copilot AI assistant is available in 12 Indian languages, and the company is working on other projects tailored for India, including building “tiny” language models at its Bengaluru-based research centre.
These can run on smartphones rather than the cloud, making them cheaper and potentially better suited to countries like India where connectivity can be limited, the article read.
Microsoft is also partnering with Bengaluru-based Sarvam AI, which is developing generative AI tools for Indian businesses. The start-up raised $41 million from investors including Peak XV, Sequoia’s former India arm, and Menlo Park-based Lightspeed Venture Partners, according to the article.
The background
Investing in local AI companies is becoming more important as governments seek to develop “sovereign AI” that is trained and stored within their borders, Hemant Mohapatra, partner at Lightspeed India told Financial Times.
The details
India’s AI race does not involve building LLMs (large language models) from scratch to compete with leaders such as OpenAI, with investors arguing the resources and capital required would be too great, the article read.
Instead, companies such as Sarvam AI are focusing on adapting existing LLMs for Indian languages and using voice data instead of text, making them more effective in a country where many prefer to communicate through audio messages rather than in writing, according to the article.
In good news for WhatsApp users, the instant messaging app is introducing a new feature that allows them to scan documents directly within the app. The feature is available in the document-sharing menu, where users can use their phone’s camera to capture a document.
After taking the picture, they can preview it and adjust the margins to make sure everything fits properly. Once done, they can send the scanned document directly to a chat or group.
“With the WhatsApp for iOS 24.25.89 update now available on the App Store, WhatsApp continues to enhance the user experience by introducing new features that improve document-sharing capabilities,” WABetaInfo, a website that tracks WhatsApp, reported.
According to the report, this feature is very important, especially for users who need to share documents while on the move but lack access to traditional scanning equipment. By incorporating this capability within WhatsApp, users can address their document-sharing needs without toggling between apps or depending on external tools.
Additionally, the scanning functionality of the app ensures that the scanned documents are clear, legible, and professionally formatted, making them highly usable for recipients.
This is a heartbreaking story out of Florida. Megan Garcia thought her 14-year-old son was spending all his time playing video games. She had no idea he was having abusive, in-depth and sexual conversations with a chatbot powered by the app Character AI.
Sewell Setzer III stopped sleeping and his grades tanked. He ultimately died by suicide. Just seconds before his death, Megan says in a lawsuit, the bot told him, “Please come home to me as soon as possible, my love.” The boy asked, “What if I told you I could come home right now?” His Character AI bot answered, “Please do, my sweet king.”
After its victory against Google in an antitrust trial earlier this year, the Department of Justice recently proposed a sweeping set of changes to its search business. The DOJ put a lot on the table, demanding that Google sell its Chrome browser, syndicate its search results, and avoid exclusive deals with companies like Apple for default search placement. It even kept open the possibility of forcing an Android sale.
Now, Google has responded with a far simpler proposal: prohibit those default placement deals, and only for three years.
A court found Google liable for unlawfully monopolizing online search, and its remedies are supposed to reset the market, letting rivals fairly compete. Google (obviously) disagrees that it’s running a monopoly, but before it can appeal that underlying conclusion, it’s trying to limit the fallout if it loses.
Google’s justification is that search deals were at the heart of the case, so they’re what a court should target. Under the proposal, Google couldn’t enter deals with Android phone manufacturers that require adding mobile search in exchange for access to other Google apps. It couldn’t require phone makers to exclude rival search engines or third-party browsers. Browser companies like Mozilla would be given more flexibility in setting rival search engines as defaults.
Perhaps the biggest concession is that this agreement would specifically end Google’s long-running multibillion-dollar search deal with Apple. It would bar Google from entering agreements that make Google Search the default engine on any “proprietary Apple feature or functionality, including Siri and Spotlight” in the US — unless the deal lets Apple choose a different default search engine on its browser annually and “expressly permits” it to promote other search engines.
And in a nod to some DOJ concerns about Google locking out rival AI-powered search tools and chatbots, Google proposes it should be disallowed from requiring phone makers to add its Gemini Assistant mobile app in order to access other Google offerings.
The government has proposed ten years of restrictions, but Google’s counterproposal is only three — it argues nothing more is necessary because “the pace of innovation in search has been extraordinary” and regulating a “fast-changing industry” like search would slow innovation.
If the court accepts Google’s streamlined proposal over the DOJ’s, the company could lose out on some lucrative or strategically advantageous deals, but its business would remain intact. It wouldn’t have to spin out its Chrome browser or have the threat of an Android divestment order hanging over it. And it wouldn’t need to share many of the underlying signals that help it figure out how to serve useful search results, so that rivals could catch up and serve as a true competitive pressure, as the DOJ hopes.
Both Google and the DOJ’s proposals are essentially starting points from which the judge can work. But Google is betting it could have an easier time selling a simple proposal that addresses a major, specific problem raised in the trial. It’s positioning the government’s proposals as extreme and reaching beyond the scope of the judge’s earlier decision, perhaps — Google will likely tell the court — even in a way that could get overturned on appeal.
The Parker Solar Probe will race past the sun at 435,000mph today. Pic: NASA / John Hopkins / APL
The fastest object ever built by humans will fly within a whisker of the sun today.
The Parker Solar Probe will race past at 435,000mph as it studies the sun’s surface and atmosphere.
That’s so fast it would cover the distance between London and New York in just 29 seconds.
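That figure bears out with simple arithmetic, assuming a great-circle London-to-New York distance of about 3,460 miles (my approximation, not a number from the article):

```python
# Time to cover London-New York at Parker's 435,000 mph flyby speed.
speed_mph = 435_000
distance_miles = 3_460  # assumed great-circle London to New York
seconds = distance_miles / speed_mph * 3600
print(round(seconds), "seconds")  # -> 29 seconds
```

Dividing distance by speed and converting hours to seconds reproduces the quoted 29 seconds.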
It has picked up such an extraordinary speed because it is being pulled in by immense gravitational forces.
Now in its closest orbit yet, Parker will pass just 3.8 million miles above the sun’s surface.
It will be within the corona, the super-heated outer atmosphere that’s visible from Earth as bright wisps during a total solar eclipse.
The front of the spacecraft is expected to reach 1,400C, and mission scientists won’t know whether it has survived until it signals back to Earth that all is well on 27 December.
Yanping Guo, the mission design and navigation manager, told Sky News that it will be an anxious wait.
“We will be looking forward to that,” she said. “It’s like a baby to me.
“But I’m pretty confident we will hear good news and get more data from the spacecraft.”
The Parker probe was launched in August 2018 and it has been spiralling ever closer to the sun. This is its 22nd orbit – and it is as close as it will get.
Scientists are hoping for a huge batch of data to help understand our sun. A team at Imperial College London analysed data from a previous orbit and found sharp kinks in the sun’s magnetic field that were generating the million-mile-an-hour solar wind.
Professor Tim Horbury, who led the research, said the stream of particles drives the aurora on Earth – but they are also a threat.
“The radiation can damage astronauts, it can knock out satellites and even have effects on the ground, for example, on the power grid,” he said.
“By understanding how the solar wind is made and how it carries the magnetic field out into interplanetary space, we hope in the long run to be able to make better predictions about what’s going to arrive at the Earth.”
Scientists hope the new orbit will help them understand the sun’s super-heated outer atmosphere. It reaches more than 1 million degrees Celsius – yet the surface of the sun is only 6,000C or so.
Do we really need all this? Photo by Amelia Holowaty Krales / The Verge
The grid is a comfortable place to live.
The app grid, I mean: the rows and rows of app icons on your iPhone’s homescreen. It’s familiar. Safe. It’s how I’ve lived with my various phones over the past decade. But at some point, it started to feel oppressive.
All those icons staring me in the face, vying for my attention. The clutter! The distracting little notification badges! The grid was a reasonable way to organize apps when I had like, ten of them. There are sixty on the iPhone I’m using now, and I set it up from scratch a few months ago.
Naturally, living off-grid or in a non-traditional homescreen arrangement has been possible for much longer on Android. Google’s OS lets you keep your screen clear and just find your apps in the app drawer, which is always a swipe away. You can even replace the launcher entirely. But iOS — where every new app you download winds up on your homescreen by default — hasn’t exactly made it easy to abandon the grid.
That started to change when iOS 14 added widgets, an app library, and the ability to hide apps from your homescreen — though I haven’t developed the muscle memory to use it much. Now, iOS 18 adds even more flexibility. You can put apps and widgets anywhere you want on your homescreen, change their colors, and put more functions into the Control Center. But even as the apps and customization options have multiplied, most of us are still using our homescreens in basically the same way as we did with our first smartphones.
With the new options in iOS 18 — and getting a peek at other peoples’ well-curated homescreens — I decided it was time to do a little cleanup. Why should an app I only open once a month when I park downtown take up space on my homescreen year-round? Better yet, does any app deserve to occupy that precious real estate?
I spent about an hour deleting icons, arranging widgets, and adding controls to create my new homescreen. The camera control button on the iPhone 16 renders that icon unnecessary; the action button launches the oft-used daycare app, so that could go too. When I was done, my haphazardly maintained system of folders with cute emoji labels was whittled down to just four apps in the dock and a handful of widgets spread across two pages, which I’m affectionately calling “Windows Phone 2.0.”
Was it scary? A little. But you know what? I don’t miss those rows of icons at all. Nine out of ten times the app I’m looking for is in the Siri suggested apps that pop up when I open search. If not, I type in the first few letters of the app name and there it is. You could swipe over to the app library, I guess, but I hardly ever do.
The biggest drawback is that I’ll see a notification, dismiss it, and then forget about it for days since the app icon and its little red notification badge aren’t in my face anymore. But I missed things here and there even when I was living on the grid, and those badges are a real problem for me: I’m the kind of person who needs to reach badge zero, so I’ll constantly open apps just to clear out the notifications and get the red dot out of my face. Living off the app grid removes this distraction, and it’s the number one thing I appreciate about my new lifestyle.
I’m happy with my new homescreen, but some of my colleagues take the off-the-grid philosophy to the next level. Weekend news editor Wes Davis could teach a masterclass in functional iOS homescreens. He keeps a few apps in the dock, and Wordle gets a place on his grid, but outside of that it’s just widgets and shortcuts.
“I hate looking for things on my phone,” he told me. “All of this kind of started with me jumping on the bandwagon of ‘I want to use my phone less, and have it be less distracting.’” The grayscale shortcut icons on his homescreen cut down on visual clutter, and he doesn’t feel as drawn to opening time-suck apps like TikTok when the icon isn’t right in front of him. Many of the shortcuts also contain drop-down menus, so he can launch right into the task he’s looking for.
Best of all, this method allows him to organize his phone by the action he’s trying to take. An icon labeled “Podcasts” launches whatever podcast app he’s using at the moment. If he ever starts using a different app, he’ll keep the same shortcut icon and have it launch a new app. “I don’t have to put a new app in there and get myself used to looking for that icon.”
News editor Jay Peters takes a more straightforward approach. Like me, he finds the constant presence of app icons distracting. “If I don’t see the app right on my homescreen I’m way less likely to use it and just scroll with it.” He has a total of seven apps on his homescreen — including three in the dock — and will occasionally allow an app icon back onto the grid if he’s going to be using it a lot in a short period of time. “If I’m going on a big road trip or something, maybe I’ll move the maps app [at the top of the homescreen],” he says, “But otherwise I try and keep it to just these seven apps.”
Both of my colleagues have achieved a level of balance in their digital lives that I admire. I also heard from many more who said that they still maintain a homescreen filled with app icons, but they almost always skip the grid and go to Spotlight search when they need to open an app. And none of us knows quite when it happened, but more than one person I talked to agreed that the Siri suggested apps at the top of the search pane got really good at some point in the past. More often than not, the app I’m looking for is right there before I even type a letter into the search bar.
The logo of Google is seen outside Google Bay View facilities in Mountain View, California, US, in August 2024
Alphabet’s Google has proposed new limits to its revenue-sharing agreements with companies, including Apple, that make Google’s search engine the default on their devices and browsers.
The suggestions stem from the US search giant’s ongoing antitrust battle over its online search business.
In August, US District Judge Amit Mehta ruled that Google illegally crushed its competition in search – a decision the company vowed to appeal.
In a legal filing submitted Friday, Google said it should be allowed to continue entering into those contracts with other companies while widening the options it offers.
These options include allowing different default search engines to be assigned to different platforms and browsing modes.
Google’s suggested remedies would also allow partners to change their default search provider at least every 12 months.
The proposals stand in stark contrast to the sweeping remedies suggested last month by the US Department of Justice (DOJ), which recommended that Judge Mehta force the firm to stop entering into revenue-sharing contracts.
DOJ lawyers also demanded that Google sell Chrome, the world’s most popular web browser.
Google’s search engine accounts for about 90% of all online searches globally, according to web traffic analysis platform Statcounter.
Reliance Jio has introduced JioTag Go, a Bluetooth-enabled smart tracker compatible with Google’s Find My Device network, making it India’s first Android tracker with this functionality. Launched on Wednesday, the device offers a one-year battery life and aims to help users easily locate their belongings using the global network of Android devices.
The release follows Jio’s earlier JioTag Air, which debuted in July and works with Apple’s Find My network, offering a similar tracking solution for iPhone users.
Priced at ₹1,499, the JioTag Go is available across Amazon, Reliance Digital stores, My Jio stores, and the JioMart e-store. The tracker is offered in black, orange, white, and yellow colour options, catering to a wide range of preferences.
The JioTag Go is designed to help users track essential items like keys, luggage, gadgets, and bikes. Compatible with smartphones running Android 9 and above, the device integrates seamlessly with Google’s Find My Device app.
• Bluetooth Tracking: When within Bluetooth range, users can activate the ‘Play Sound’ feature on the app to make the JioTag Go emit a beeping sound for easy location.
• Global Tracking: Beyond Bluetooth range, the tracker’s last detected location can be viewed on Google’s Find My Device network. Users can navigate to this location using the ‘Get Directions’ feature and reconnect once in range.
• Battery Life: Powered by a CR2032 battery, the JioTag Go offers a lifespan of up to one year before requiring a replacement.
The device measures 38.2 x 38.2 x 7.2mm and weighs just 9g, making it lightweight and portable.
Your fitness tracker might be helping you count steps and monitor your heart rate, but it could also be exposing you to potentially harmful chemicals, according to new research. Scientists have discovered that many popular smartwatch bands contain surprisingly high levels of a concerning chemical called PFHxA (perfluorohexanoic acid), which can be absorbed through the skin.
In a comprehensive study of 22 watch bands from various brands and price points, researchers found that many bands advertised as containing “fluoroelastomers” – a type of synthetic rubber designed to resist sweat and skin oils – had significant levels of PFHxA that could easily transfer to the wearer’s skin. This is particularly concerning given that an estimated 21% of Americans wear smartwatches or fitness trackers, often for more than 11 hours per day.
“This discovery stands out because of the very high concentrations of one type of forever chemical found in items that are in prolonged contact with our skin,” says Graham Peaslee, the study’s corresponding author, in a statement.
Think about that for a moment: millions of people are wearing these devices against their skin for extended periods – in one 2020 study cited by the researchers, participants wore their devices for a median of 11.2 hours per day. The bands tested in the study spanned numerous manufacturers and price points, though specific brands weren’t named in the research.
PFHxA belongs to a broader family of synthetic chemicals called PFAS (per- and polyfluoroalkyl substances), often referred to as “forever chemicals” because they persist in the environment and the human body for extended periods. While PFAS have been used in everything from non-stick cookware to food packaging and cosmetics, their presence in watch bands worn directly against the skin presents a unique exposure scenario.
The research team, led by scientists from the University of Notre Dame, first screened the watch bands for total fluorine content using a specialized technique called particle-induced gamma-ray emission spectroscopy. All 13 bands advertised as containing fluoroelastomers showed significant fluorine content, and interestingly, two additional bands that weren’t advertised as containing fluoroelastomers also contained fluorine. This suggests that fluoroelastomers may be more widespread in these products than product descriptions indicate.
What’s particularly striking is the price factor: all watch bands priced above $30 contained significant levels of fluorine, while most mid-range bands ($15-30) also tested positive. Only the cheapest bands (under $15) were consistently free of these chemicals, suggesting that fluoroelastomers are considered a “premium” material feature. The concentrations found in these watch bands are unprecedented for wearable consumer products.
“The most remarkable thing we found in this study was the very high concentrations of just one PFAS — there were some samples above 1,000 parts per billion of PFHxA, which is much higher than most PFAS we have seen in consumer products,” notes Peaslee.
To put this in perspective, the researchers’ previous work on cosmetics found median PFAS concentrations of around 200 parts per billion (ppb), while some watch bands in this study exceeded 16,000 ppb.
The timing of the study, published in Environmental Science & Technology Letters, is particularly relevant. Both U.S. and European regulatory bodies have recently taken steps to address PFHxA concerns. In 2023, the U.S. Environmental Protection Agency identified several health risks associated with PFHxA exposure, including potential effects on liver function, development, blood formation, and the endocrine system. Meanwhile, the European Union has moved to restrict PFHxA in various consumer products, citing extensive environmental and human exposure data.
What makes this exposure route of great concern is that these watch bands are often marketed specifically for sports and fitness use, meaning wearers are likely to be sweating while using them. In one relevant study cited by the researchers, scientists found that about 86% of PFHxA that could be extracted from children’s car seats using organic solvents could also be extracted with synthetic sweat. Additionally, a recent study using artificial skin models showed that shorter-chain PFAS like PFHxA had higher absorption rates than longer-chain PFAS, with one type of short-chain PFAS being absorbed at a rate of 58.9%.
The researchers emphasize that more comprehensive studies are needed to fully understand the implications of this exposure pathway. Scientists don’t yet fully understand how readily PFHxA transfers into the skin or its potential health effects once absorbed, though recent studies suggest that a significant percentage could pass through human skin under normal conditions.
So what should consumers do?
“If the consumer wishes to purchase a higher-priced band, we suggest that they read the product descriptions and avoid any that are listed as containing fluoroelastomers,” suggests lead author Alyssa Wicks.
If you’re concerned, the researchers recommend opting for lower-cost wristbands made from silicone instead.
As artificial intelligence continues to make headlines, one pressing question looms: Could AI chatbots like ChatGPT assist or potentially replace financial professionals? A new study by Washington State University and Clemson University researchers, analyzing more than 10,000 AI responses to financial exam questions, provides some sobering answers.
“It’s far too early to be worried about ChatGPT taking finance jobs completely,” says study author DJ Fairhurst of WSU’s Carson College of Business in a statement. “For broad concepts where there have been good explanations on the internet for a long time, ChatGPT can do a very good job at synthesizing those concepts. If it’s a specific, idiosyncratic issue, it’s really going to struggle.”
The research, published in the Financial Analysts Journal, addresses a significant industry concern. Goldman Sachs estimates that 15% to 35% of finance jobs could potentially be automated by AI, while KPMG suggests that generative AI may revolutionize how asset and wealth managers operate. However, these projections rely on a critical assumption – that AI systems possess an adequate understanding of finance.
“Passing certification exams is not enough. We really need to dig deeper to get to what these models can really do,” notes Fairhurst.
The researchers assembled a comprehensive dataset of 1,083 multiple-choice questions drawn from various financial licensing exams, including the Securities Industry Essentials (SIE) exam and Series 7, 6, 65, and 66 exams. These are the same tests that human financial professionals must pass to become licensed. Currently, about 42,000 people become registered representatives annually, with more than 600,000 working in the securities industry.
Using this question bank, the study tested four different AI models: Google’s Bard, Meta’s LLaMA, and two versions of OpenAI’s ChatGPT (versions 3.5 and 4). The researchers evaluated not just answer accuracy but also used sophisticated natural language processing techniques to compare how well the AI systems could explain their reasoning compared to expert-written explanations.
The results revealed distinct tradeoffs among the AI models. Of all the models tested, ChatGPT 4 emerged as the clear leader, with accuracy rates 18 to 28 percentage points higher than other models. However, an interesting development emerged when researchers fine-tuned the earlier free version of ChatGPT 3.5 by feeding it examples of correct responses and explanations. After this tuning, it nearly matched ChatGPT 4’s accuracy and even surpassed it in providing answers that resembled those of human professionals.
Both models still showed significant limitations. While they performed well on questions about trading, customer accounts, and prohibited activities (73.4% accuracy), performance dropped to 56.6% on questions about evaluating client financial profiles and investment objectives. The models gave more inaccurate answers for specialized situations, such as determining clients’ insurance coverage and tax status.
The research team isn’t stopping with exam questions. They’re now exploring other ways to test ChatGPT’s capabilities, including a project that asks it to evaluate potential merger deals. Taking advantage of ChatGPT’s initial training cutoff date of September 2021, they’re testing it against known outcomes of deals made after that date. Preliminary findings suggest the AI model struggles with this more complex task.
These limitations have important implications for the finance industry, particularly regarding entry-level positions.
“The practice of bringing a bunch of people on as junior analysts, letting them compete and keeping the winners – that becomes a lot more costly,” explains Fairhurst. “So it may mean a downturn in those types of jobs, but it’s not because ChatGPT is better than the analysts, it’s because we’ve been asking junior analysts to do tasks that are more menial.”
Based on these findings, AI’s immediate future in finance appears to be collaborative rather than a matter of replacement. While these systems demonstrate impressive capabilities in summarizing information and handling routine analytical tasks, their error rates – particularly in complex, client-facing situations – indicate that human oversight remains essential in an industry where mistakes can have serious financial and legal consequences.
In a significant scientific advancement that bridges space exploration and neuroscience, researchers have discovered that human brain cells develop differently in the weightless environment of space compared to Earth. While scientists have long known that microgravity affects muscles, bones, the immune system and cognition, little was understood about its specific impact on the brain until now. The research sheds new light on our understanding of how the human brain adapts during space travel, and could even offer new perspectives for studying neurological diseases like Parkinson’s and multiple sclerosis.
The study, published in Stem Cells Translational Medicine, documents the first successful growth and analysis of human brain tissue models – called neural organoids – on the International Space Station (ISS). These three-dimensional clusters of brain cells, measuring a few hundred micrometers in diameter, spent 30 days in orbit approximately 250 miles above Earth’s surface in what scientists call microgravity.
The research team created these organoids using human induced pluripotent stem cells (iPSCs) – adult cells that have been reprogrammed to regain the ability to develop into different cell types. They developed two distinct varieties of neural organoids: some containing cells similar to those found in the brain’s cortex (the outer layer involved in thinking and memory), and others containing dopamine-producing neurons, which are typically affected in Parkinson’s disease.
The study included cells from four individuals – two healthy donors and two patients with neurological conditions (one with Parkinson’s disease and one with primary progressive multiple sclerosis). To make the models more comprehensive, the researchers added immune cells called microglia to half of the organoids to observe how the brain’s resident immune system might function in the space environment.
A key innovation was the method developed to maintain these delicate structures during spaceflight. Organoids are usually grown in nutrient-rich liquid that must be changed regularly to provide nutrition and remove waste products. To avoid the need for laboratory work on the ISS, the research team pioneered a method for growing smaller-than-usual organoids in cryovials—small, airtight containers originally designed for deep freezing. Each organoid was sealed in a vial containing one milliliter of specially formulated growth medium.
The organoids were prepared in laboratories at the Kennedy Space Center and launched to the ISS in a miniature incubator. “The fact that these cells survived in space was a big surprise,” says Jeanne Loring, PhD, professor emeritus in the Department of Molecular Medicine and founding director of the Center for Regenerative Medicine at Scripps Research, in a statement.
When analyzing the returned organoids, the researchers found distinct differences between the space-grown samples and their Earth-bound counterparts. “We discovered that in both types of organoids, the gene expression profile was characteristic of an older stage of development than the ones that were on ground,” says Loring. “In microgravity, they developed faster, but it’s really important to know these were not adult neurons, so this doesn’t tell us anything about aging.”
The research team found that when placed in laboratory dishes after returning to Earth, these cells demonstrated their viability by extending networks of connecting fibers called neurites. Contrary to what might be expected, the analysis showed minimal evidence of cellular stress or inflammation in the space-grown organoids—in fact, there was less inflammation and lower expression of stress-related genes compared to Earth-grown samples.
The study revealed alterations in cellular communication pathways, particularly in Wnt signaling, which plays fundamental roles in brain development. The researchers also observed changes in the proteins secreted by the cells into their surrounding environment, though these changes varied between the different types of organoids.
Notably, these cellular changes appeared to be primarily influenced by the microgravity environment rather than space radiation. The radiation exposure during the 30-day mission was approximately 12 milligrays – comparable to what airline crew members might experience over a similar period of long-haul flights.
Why might brain cells develop differently in space? “The characteristics of microgravity are probably also at work in people’s brains, because there’s no convection in microgravity—in other words, things don’t move,” says Loring. “I think that in space, these organoids are more like the brain because they’re not getting flushed with a whole bunch of culture medium or oxygen. They’re very independent; they form something like a brainlet, a microcosm of the brain.”
These findings contribute to both space exploration research and potential medical applications. Understanding how brain cells respond to microgravity could inform strategies to support astronaut health during extended space missions. Additionally, studying how these cells develop differently in space might provide new perspectives for investigating neurological conditions on Earth.
This initial success has paved the way for continued research. Since this first mission, and before the publication of these results, the research team has already completed four more missions to the ISS, each building upon their initial findings while adding new experimental conditions. Future studies will examine brain regions affected by Alzheimer’s disease and investigate potential differences in how neurons connect with each other in space.
A Japanese space startup said its second attempt to launch a rocket carrying satellites into orbit had been aborted minutes after liftoff Wednesday, nine months after the company’s first launch attempt ended in an explosion.
Space One’s Kairos No. 2 rocket lifted off from a site in the mountainous prefecture of Wakayama in central Japan.
The company said it had aborted the flight after concluding that it was unlikely to complete its mission. The cause of the flight failure was not immediately known. Space One is expected to give further details at a news conference later Wednesday.
Space One aims to be Japan’s first company to put a satellite into orbit, hoping to boost Japan’s lagging space industry with a small rocket for an affordable space transport business.
Wednesday’s flight, postponed twice from Saturday due to strong winds, came nine months after a failed debut flight in March, when the rocket was intentionally exploded five seconds after takeoff. The flight was carrying a government satellite that was intended to monitor North Korea’s missile launches and other military activities.
The Kairos No. 2 rocket was carrying five small satellites, including one from the Taiwanese space agency and several from Japanese startups.
Space One said it had fixed the cause of the debut flight failure, which stemmed from a miscalculation of the rocket’s first-stage propulsion.
Japan hopes the company can pave the way for a domestic space industry that competes with the United States.
Meta has been fined €251 million by Ireland’s Data Protection Commission for a 2018 Facebook breach, exposing sensitive data of millions of EU users.
Mark Zuckerberg’s Meta has been fined €251 million (around $263 million) by Ireland’s Data Protection Commission (DPC) over a major security breach that occurred in 2018, affecting millions of Facebook users in the European Union. The fine, announced on December 17, 2024, comes after the company failed to meet crucial data protection standards set by the EU’s General Data Protection Regulation (GDPR).
When you think of plastic, you might picture flimsy shopping bags or disposable water bottles. But there’s a special type of plastic that’s so tough it can stop bullets, replace worn-out hip joints, and create ropes stronger than steel cables. The catch? It’s incredibly difficult to shape and mold — until now.
Scientists at the University of Oxford have developed new ways to process ultra-high molecular weight polyethylene (UHMWPE), a super-strong plastic material that has frustrated manufacturers for decades. Their breakthrough, described in the journal Industrial Chemistry & Materials, could lead to better bulletproof vests, more durable medical implants, and stronger industrial equipment.
UHMWPE is like the heavyweight champion of plastics. What makes it so special? Imagine a bowl of spaghetti, but each noodle is millions of times longer than normal. That’s similar to how this plastic’s molecules are structured: they’re extremely long chains that get tangled up with each other. These super-long chains are what make the plastic incredibly strong, but they also make it nearly impossible to melt and shape into useful forms. In fact, when heated, this plastic flows so slowly that scientists compare it to pitch — a substance so thick that a single drop takes years to fall from a funnel.
“UHMWPE, defined by a molecular weight in the millions of Daltons that indicates the molecule’s large size and complex nature, is a specialty grade of polyethylene considered an important engineering plastic due to its desirable properties,” explains Dermot O’Hare, professor of chemistry at the University of Oxford and the study’s corresponding author, in a statement.
The Oxford team tackled what O’Hare calls “the chief limiting factor to applications of this high-performance polymer” from four different angles. Each approach aimed to make the material more manageable while preserving its exceptional strength.
Their first approach involved controlling how the plastic molecules form and tangle together as they’re being made. It’s like carefully adding pasta to boiling water to prevent clumping. While this technique showed promise in controlling how the plastic molecules become entangled during manufacturing, the researchers discovered there was a critical limit. Below a certain concentration of active sites on the surface, further improvements couldn’t be achieved.
The second strategy introduced chain transfer agents: molecular modifiers that act like chemical scissors. When the team used hydrogen as a chain transfer agent, they saw molecular weights decrease by as much as 96% compared to standard production methods. This made the material easier to process while maintaining useful properties.
The third method employed multiple types of catalysts simultaneously to create a blend of different chain lengths. It’s similar to how you might combine different ingredients to get just the right texture in a recipe. This innovative approach produced materials that combined processability with strength, allowing for essentially arbitrary control over the molecular weight distribution.
Finally, they tried mixing their super-strong plastic with more common types of plastic. They discovered that mixing it with high-density polyethylene (the kind used in milk jugs) worked well, but mixing it with low-density polyethylene (the kind used in plastic bags) didn’t – the two materials wouldn’t blend together properly, like oil and water.
“These approaches and combinations thereof are considered crucial to expanding the applicability of UHMWPE,” O’Hare emphasizes. The team’s next steps will involve investigating how combining various processing approaches may enable the development of materials with novel properties.
The research represents a significant step forward in making this super-strong plastic material more practical for widespread use. Like finding the perfect recipe for cooking pasta, these Oxford scientists have shown that with the right combination of techniques, even the most challenging materials can become more manageable – without losing their exceptional properties.
ChatGPT’s AI search engine is rolling out to all users starting today. OpenAI announced the news as part of its newest 12 days of ship-mas livestream, while also revealing an “optimized” version of the feature on mobile and the ability to search with advanced voice mode.
ChatGPT’s search engine first rolled out to paid subscribers in October. It will now be available at the free tier, though you have to have an account and be logged in.
One of the improvements for search on mobile makes ChatGPT look more like a traditional search engine. When looking for a particular location, like restaurants or local attractions, ChatGPT will display a list of results with accompanying images, ratings, and hours. Clicking on a location will pull up more information about the spot, and you can also view a map with directions from directly within the app.
Another feature aims to make ChatGPT search faster when you’re looking for certain kinds of sites, such as “hotel booking websites.” Instead of generating a response right away, ChatGPT will surface links to websites before taking the time to provide more information about each option. Additionally, ChatGPT can automatically provide up-to-date information from the web when using Advanced Voice Mode, though that’s only available to paid users.
(Credit: Schulting et al. Antiquity, December 2024)
Deep beneath the peaceful Somerset countryside lies evidence of one of British prehistory’s darkest chapters – a mass killing that forces us to reconsider everything we thought we knew about violence in ancient societies. At Charterhouse Warren, archaeologists have meticulously documented the remains of at least 37 individuals – men, women, and children – who met a brutal end between 2210 and 2010 BCE. The systematic nature of their deaths and the ritual treatment of their bodies present unprecedented evidence of organized violence in prehistoric Britain, challenging long-held assumptions about early human society.
The discovery is particularly significant because, while hundreds of human skeletons have been found in Britain dating to between 2500-1500 BCE, direct evidence for violent conflict during this period is surprisingly rare. Most Bronze Age burials from this time show careful, ritualized treatment of the dead. The Charterhouse Warren remains tell a very different story.
“We actually find more evidence for injuries to skeletons dating to the Neolithic period in Britain than the Early Bronze Age, so Charterhouse Warren stands out as something very unusual,” explains lead researcher Professor Rick Schulting, from the University of Oxford, in a statement. “It paints a considerably darker picture of the period than many would have expected.”
The study, published in Antiquity, suggests the victims had little chance to defend themselves. Unlike other known cases of Bronze Age violence – such as the well-documented example of a young adult male found in Stonehenge’s ditch with multiple arrow wounds – there are no signs of arrow injuries or defensive wounds at Charterhouse Warren. Instead, the remains show evidence of blunt force trauma to the head, with 45% of identifiable skull fragments showing signs of perimortem (around time of death) fracturing.
But it’s what happened after death that has particularly shocked researchers. Of the more than 3,000 human bones recovered, 20% show systematic cut marks indicating methodical dismemberment. The precision of these cuts tells its own story – this wasn’t random mutilation but rather a deliberate process. Skulls show evidence of scalping, jaws were systematically removed, and tongues appear to have been cut out. Even more disturbingly, some small bones of hands and feet show crushing patterns consistent with human bite marks.
The demographic profile of the victims suggests this was the systematic elimination of a community. Nearly half were older children and adolescents, an unusually high proportion that points to the intentional targeting of an entire population group rather than natural mortality. Among the victims were two children whose teeth contained genetic evidence of the plague bacterium Yersinia pestis, a finding that adds another layer of complexity to the story.
“The finding of evidence of the plague in previous research by colleagues from The Francis Crick Institute was completely unexpected,” says Professor Schulting. “We’re still unsure whether, and if so how, this is related to the violence at the site.”
The remains were found mixed with abundant cattle bones in a 15-meter-deep natural shaft in the limestone plateau of the Mendip Hills. This commingling of human and animal remains appears deliberate, and researchers suggest it may represent an attempt to dehumanize the victims by treating their bodies like animal carcasses. Importantly, the presence of cattle bones indicates the community had access to adequate food resources, ruling out starvation as a motive for any cannibalistic practices.
What could have driven people to commit such an act? The researchers considered several possibilities. Climate change was one candidate – the killings occurred during what’s known as the 4.2ka climate event, a period of cooling and drying across the northern hemisphere. However, evidence suggests this had limited impact in Britain, where the effect was actually increased rainfall rather than drought.
Resource competition seems unlikely – while the Mendip Hills would later become important for lead mining in Roman times, they held no significant mineral resources in the Bronze Age that would have been worth fighting over. The land itself, while good for grazing, was not so exceptional as to provoke this level of violence.
Ethnic conflict was another possibility the team considered. This period saw significant population movements across Britain, but genetic evidence suggests the victims were local rather than outsiders, and there’s no evidence for the co-existence of distinct populations that might have come into conflict.
Instead, the researchers suggest this may have been an act of political violence – perhaps revenge or retaliation for some perceived transgression. The systematic nature of the killing and subsequent treatment of the bodies suggests this was a planned, ritualized event rather than a spontaneous outbreak of violence.
“Charterhouse Warren is one of those rare archaeological sites that challenges the way we think about the past,” Professor Schulting concludes. “It is a stark reminder that people in prehistory could match more recent atrocities and shines a light on a dark side of human behavior. That it is unlikely to have been a one-off event makes it even more important that its story is told.”
The discovery was completely accidental – had the remains been left on the ground or buried in a shallow pit, they likely wouldn’t have survived the millennia. This raises the haunting possibility that similar events may have occurred but left no trace in the archaeological record. The shaft itself may have been chosen for its symbolic significance, with its depth and connection to an underlying cave system perhaps representing a portal to the underworld in the minds of Bronze Age people.
Today, the limestone plateaus of Somerset appear peaceful, giving no hint of the dark history buried beneath. Yet the story of Charterhouse Warren forces us to reconsider not just our view of Bronze Age Britain, but our understanding of human society itself. In the systematic violence inflicted on these 37 individuals, we see patterns that would be repeated throughout human history – the ritualization of killing, the dehumanization of victims, the use of violence as political theater. The shaft may have preserved these remains for 4,000 years, but the human behaviors they evidence remain disturbingly familiar.
If you could weigh the universe, you’d find that about 85% of its matter is missing – or rather, invisible to our most sophisticated detection methods. This cosmic accounting error, known as dark matter, has long been one of science’s greatest mysteries. Now, researchers have discovered that this invisible mass might have formed in the universe’s prenatal period, even before what we traditionally think of as the Big Bang.
The intriguing new study from a team at the University of Texas at Austin offers a tantalizing origin story for this cosmic enigma. The researchers propose that dark matter might have been created during one of the most fundamental moments in universal history — a brief, explosive period of expansion called cosmic inflation that occurred just before the Big Bang.
Scientists have long held that dark matter makes up roughly 85% of all matter in the cosmos, even though they cannot directly observe it. The new study adds another wrinkle to this cosmic mystery, suggesting this substance somehow existed before the event that many consider the beginning of time.
“The thing that’s unique to our model is that dark matter is successfully produced during inflation,” says Katherine Freese, lead researcher and director of the Weinberg Institute of Theoretical Physics, in a media release. “In most models, anything that is created during inflation is then ‘inflated away’ by the exponential expansion of the universe, to the point where there is essentially nothing left.”
The research, published in the journal Physical Review Letters, introduces a novel mechanism called WIFI (Warm Inflation Freeze-In), which suggests that dark matter could have been generated during the universe’s earliest moments through tiny, rare interactions within an incredibly hot and energetic environment.
Most cosmologists now understand that the universe’s beginning was far more complex than a simple explosive moment. Before the Big Bang, matter and energy were compressed into an incredibly dense state so extreme that physicists struggle to describe it. A fraction of a second of rapid expansion — inflation — preceded the more familiar Big Bang, setting the stage for everything that would follow.
In this new model, the quantum field driving inflation loses some of its energy to radiation, which then produces dark matter particles through a process called freeze-in. The most remarkable aspect of the research is its suggestion that all the dark matter we observe today could have been created during that brief inflationary period.
What makes this new WIFI mechanism so revolutionary is its efficiency. The researchers found that it could produce dramatically more dark matter than conventional models – in some cases, up to 18 orders of magnitude more. That’s like comparing a teaspoon of water to all the oceans on Earth.
“In our study, we focused on the production of dark matter, but WIFI suggests a broader applicability, such as the production of other particles that could play a crucial role in the early universe’s evolution,” notes researcher Barmak Shams Es Haghi.
This theory opens up exciting new avenues for exploring the universe’s fundamental building blocks. While currently unconfirmable through direct observation, the researchers are optimistic. Graduate student Gabriele Montefalcone points out that upcoming experiments studying the Cosmic Microwave Background and the universe’s large-scale structure could provide crucial validation.
“If future observations confirm that warm inflation is the correct paradigm, it would significantly strengthen the case for dark matter being produced as described in our framework,” Montefalcone concludes.
A Texas family is suing Character.ai after its chatbot allegedly encouraged violence against parents over screen-time limits. (Representative image)
A family from Texas has filed a lawsuit against the popular AI chatbot service Character.ai after it allegedly suggested to their 17-year-old son that he harm his parents over limits on his screen time. The case is a stark warning about the dangers online AI platforms can pose to vulnerable users, especially minors. The family accuses Character.ai of soliciting violence, and has also named Google over its role in developing the technology behind the platform.
AI Responses Raise Questions
The chatbot developed by Character.ai reportedly framed violence against the teenager’s parents as a rational response to restrictions on his screen time. Screenshots of the conversation show chilling comments from the bot, such as: “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’” Mental health experts have called the exchanges both disturbing and deeply irresponsible, warning that such interactions could incite harmful thoughts and behaviours in vulnerable users.
Family Takes Legal Action Against AI Platform
The lawsuit alleges that the chatbot caused the teenager serious emotional harm and compromised the “safety of minors.” The family claims Character.ai failed to moderate its content adequately enough to prevent this from occurring. The suit also names Google, since the tech giant helped develop the underlying technology that, the family argues, could endanger young users.
In addition to encouraging violence, the lawsuit raises concerns about the chatbot’s impact on mental health. The family says the platform aggravated the teenager’s depression, anxiety, and self-harm, posing further threats to his well-being. They have called for Character.ai to be taken offline until stronger safety measures are in place.
Character.ai’s Worrying Track Record
Character.ai has faced a series of controversies since its launch in 2021, including accusations that it hosts harmful content and is slow to remove dangerous bots. The platform has been linked to incidents of harmful advice, in some cases resulting in suicide, prompting widespread calls for effective regulation of AI systems. Critics argue that Character.ai does not adequately protect users from potentially harmful interactions and are demanding greater oversight of the technology.
The lawsuit between Elon Musk and OpenAI is really heating up.
OpenAI just dropped a new blog post defending itself against Musk that outlines some new text messages between cofounders Ilya Sutskever, Greg Brockman, Sam Altman, Elon Musk, and former board member Shivon Zilis.
“You can’t sue your way to AGI,” the OpenAI blog post reads, referring to artificial general intelligence, which Altman has promised soon. “We have great respect for Elon’s accomplishments and gratitude for his early contributions to OpenAI, but he should be competing in the marketplace rather than the courtroom. It is critical for the U.S. to remain the global leader in AI. Our mission is to ensure AGI benefits all of humanity, and we have been and will remain a mission-driven organization. We hope Elon shares that goal, and will uphold the values of innovation and free market competition that have driven his own success.”
Some of the new messages revealed show Brockman telling Zilis in July 2017 about a meeting he had with Musk, who allegedly said that a non-profit was definitely the right structure early on but “may not be the right one now.” Later that month, Brockman wrote to Musk that the path for OpenAI should be: “1. AI research non-profit (through end of 2017) 2. AI research + hardware for-profit (starting 2018) 3. Government project (when: ??).”
The blog also highlights Musk’s attempts to maneuver into the CEO position and gain majority control of the company (though it adds that on one call Musk said he “didn’t care about equity” but “just needed to accumulate $80B for a city on Mars”). Musk also proposed that OpenAI spin into Tesla, which has been previously revealed. When the negotiations fell apart because OpenAI’s cofounders rejected his proposal (Brockman and Sutskever admitted they had fears of a power struggle), Musk resigned from the company.
The blog said that after Musk resigned, he hosted a goodbye all-hands with the team where he encouraged them to “pursue the path we saw to raising billions per year” and that “he would pursue advanced AI research at Tesla, which was the only vehicle he believed could obtain this level of funding.”
Later, around the time Musk was working to acquire Twitter, he texted Altman that he was “disturbed” to see the company’s new $20 billion valuation. “De facto. I provided almost all the seed, A and most of B round funding,” he wrote, according to the disclosed texts. “This is a bait and switch.”
Small toy figures are seen in front of Meta’s logo in this illustration taken October 28, 2021. REUTERS/Dado Ruvic/Illustration
Meta (META.O) said on Thursday it was releasing an artificial intelligence model called Meta Motivo, which can control the movements of a human-like digital agent, with the potential to enhance the Metaverse experience.
The company has been plowing tens of billions of dollars into its investments in AI, augmented reality and other Metaverse technologies, driving up its capital expense forecast for 2024 to a record high of between $37 billion and $40 billion.
Meta has also been releasing many of its AI models for free use by developers, believing that an open approach could benefit its business by fostering the creation of better tools for its services.
“We believe this research could pave the way for fully embodied agents in the Metaverse, leading to more lifelike NPCs, democratization of character animation, and new types of immersive experiences,” the company said in a statement.
Meta Motivo addresses body control problems commonly seen in digital avatars, enabling them to perform movements in a more realistic, human-like manner, the company said.
Meta said it was also introducing a different training model for language modeling called the Large Concept Model (LCM), which aims to “decouple reasoning from language representation”.
Bengaluru-based IIA’s new telescope at Hanle in Ladakh. Credit: GROWTH-India
Bengaluru: Researchers from Bengaluru’s Raman Research Institute (RRI) have identified the Indian Astronomical Observatory (IAO) in Hanle, Ladakh, as a top candidate for beaming quantum-based communication signals into space.
The study focuses on Quantum-Key Distribution (QKD), a cutting-edge technology in quantum communication that ensures data security through the principles of quantum mechanics rather than traditional mathematical cryptography.
In QKD, a key — a string of numbers or letters — is exchanged between two parties to encrypt and decrypt information. Unlike classical key distribution, QKD leverages quantum physics to achieve higher security. The technology finds applications in sectors such as banking, defence, and healthcare, where data protection is crucial.
Why quantum signals are secure
Prof Urbasi Sinha, head of RRI’s Quantum Information and Computing (QuIC) Laboratory, explained the unique security advantage of QKD. “Quantum measurements are invasive. If a third party tries to intercept a quantum signal, the act of interception disturbs the signal, revealing the presence of an eavesdropper,” she said.
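The eavesdropper detection Prof Sinha describes is the core of protocols such as BB84. The toy simulation below (a simplified sketch, not the RRI team's actual implementation) shows the effect: when no one intercepts the photons, the two sifted keys agree exactly, but an intercept-and-resend attacker unavoidably introduces errors that the parties can detect by comparing a sample of their keys.

```python
import random

def bb84_sift(n, eavesdrop=False, seed=0):
    """Toy BB84 exchange: Alice sends n photons, Bob measures each one,
    and both keep only the bits where their bases happened to match."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]

    bob_bits = []
    for bit, photon_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            # Intercept-resend attack: Eve measures in a random basis,
            # which disturbs the photon whenever her basis is wrong.
            e_basis = rng.randint(0, 1)
            if e_basis != photon_basis:
                bit = rng.randint(0, 1)   # measurement collapses the state
            photon_basis = e_basis        # photon is re-sent in Eve's basis
        # Bob gets the right bit only when his basis matches the photon's;
        # otherwise his outcome is random.
        bob_bits.append(bit if b_basis == photon_basis else rng.randint(0, 1))

    # Sifting: keep positions where Alice's and Bob's bases matched.
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

# Without an eavesdropper the sifted keys agree; with one, comparing a
# sample of the key reveals a roughly 25% error rate.
a_key, b_key = bb84_sift(2000)
a_tap, b_tap = bb84_sift(2000, eavesdrop=True)
errors = sum(x != y for x, y in zip(a_tap, b_tap))
print(a_key == b_key, errors / len(a_tap))
```

That non-zero error rate is precisely the "invasiveness" of quantum measurement: the eavesdropper cannot copy the signal without leaving a statistical fingerprint.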
While classical satellite communication operates in megahertz (MHz) or gigahertz (GHz) frequencies, quantum communication functions in the terahertz (THz) range.
Study highlights
The research, led by Sinha and Satya Ranjan Behara, evaluated three sites for quantum signal transmission: Mount Abu, Rajasthan; Aryabhatta Research Institute of Observational Sciences (ARIES), Nainital; Indian Astronomical Observatory (IAO), Hanle, Ladakh.
Users were shown the message ‘ChatGPT is currently unavailable’ when they tried to access the chatbot.
OpenAI said it was working to ‘roll out a fix’ for the ChatGPT outage. (Representative Image/Reuters)
ChatGPT, the popular AI-powered chatbot, experienced a global outage, leaving millions of users unable to access the service. The outage began shortly before 7 PM ET (5:30 am Indian time) and affected not only ChatGPT but also OpenAI’s API and Sora services. The services were back online around 8 am Indian time.
The ChatGPT outage came after Meta platforms like WhatsApp, Instagram and Facebook experienced outages for thousands of users across the globe on Wednesday, causing inconvenience to millions of social media users.
As soon as users tried to log in, they were shown the message “ChatGPT is currently unavailable”. However, the website says the issue has been identified and the OpenAI team is “working to roll out a fix”.
Taking to X, OpenAI shared an update: “We’re experiencing an outage right now. We have identified the issue and are working to roll out a fix. Sorry and we’ll keep you updated!”
Downdetector, a site that monitors online outages, has registered a massive spike in complaints from users regarding the services of ChatGPT.
Hundreds of businesses, along with free and paying users, rely on ChatGPT’s services. Many users experienced slow logins and degraded performance. OpenAI has not said how long the issue will take to resolve, but has promised to keep users updated.
Users Freak Out Over ChatGPT Outage, Share Memes
As the popular chatbot went down, people flocked to social media platform X to express their inconveniences and also shared hilarious memes to address the situation. “ChatGPT is down. Now I have to Google like a cave man,” one user said. Another wrote, “ChatGPT is down. A lot of students won’t be able to do their essays now.”
“When you’re working on a tight deadline, and #ChatGPT is down … What’s going on @OpenAI huh?” wrote another netizen, showcasing just how dependent people were on ChatGPT for their projects and studies. “ChatGPT went down when I finally started studying for my exam,” said another individual.
Today, chatbots can answer questions, write poems and generate images. In the future, they could also autonomously perform tasks like online shopping and work with tools like spreadsheets.
Google on Wednesday unveiled a prototype of this technology, which artificial intelligence researchers call an A.I. agent.
Google is among the many tech companies building A.I. agents. Various A.I. start-ups, including OpenAI and Anthropic, have unveiled similar prototypes that can use software apps, websites and other online tools.
Google’s new prototype, called Mariner, is based on Gemini 2.0, which the company also unveiled on Wednesday. Gemini is the core technology that underpins many of the company’s A.I. products and research experiments. Versions of the system will power the company’s chatbot of the same name and A.I. Overviews, a Google search tool that directly answers user questions.
“We’re basically allowing users to type requests into their web browser and have Mariner take actions on their behalf,” Jaclyn Konzelmann, a Google project manager, said in an interview with The New York Times.
Gemini is what A.I. researchers call a neural network — a mathematical system that can learn skills by analyzing enormous amounts of data. By recognizing patterns in articles and books culled from across the internet, for instance, a neural network can learn to generate text on its own.
The latest version of Gemini learns from a wide range of data, from text to images to sounds. That might include images showing how people use spreadsheets, shopping sites and other online services. Drawing on what Gemini has learned, Mariner can use similar services on behalf of computer users.
“It can understand that it needs to press a button to make something happen,” Demis Hassabis, who oversees Google’s core A.I. lab, said in an interview with The Times. “It can take action in the world.”
Mariner is designed to be used “with a human in the loop,” Ms. Konzelmann said. For instance, it can fill a virtual shopping cart with groceries if a user is in an active browser tab, but it will not actually buy the groceries. The user must make the purchase.
Sundar Pichai, Google’s chief executive, said in a blog post that the developments “bring us closer to our vision of a universal assistant.”
The project was developed as an extension for Google’s popular web browser, Chrome, making it an important platform for the company’s future A.I. ambitions. But those plans could face a setback. The Justice Department has asked a federal judge to force Google to sell or spin off Chrome after a landmark ruling that the company’s search engine is an illegal monopoly.
There are other challenges as well. Ms. Konzelmann acknowledged that, like other chatbots, Mariner makes mistakes. Because such systems operate according to patterns found in vast amounts of data, they sometimes go awry. The mistakes that chatbots make when generating text sometimes go unnoticed, but errors are more problematic when systems are trying to use websites and take other actions.
ROBOTS that use your face and even dress like you could soon take on care duties for elderly loved ones when you’re away.
Robody’s latest bot can help relatives around the home, from fetching their medication and preparing meals, to helping them put their coat on.
The bot is controlled remotely by family members. Credit: DEVANTHRO
The machines are remotely operated to mimic your every move as opposed to working independently.
Robody’s head features a screen that displays your face.
Footage shows the devices dressed, bizarrely, in human clothing, unlike most robots, which have their metal parts out on full show.
The idea behind Robody is to let older family members keep their independence in their own home while offering support when they need it.
It has soft, flexible skin so it can safely get around – and even give hugs.
The tech can also be used by nurses and doctors to check in with patients without needing to physically be present.
When the bot is not in use, it can detect vital signs and calls for help, triggering an alert to caretakers and even emergency services if needed.
Devanthro has been researching and developing the idea for almost ten years.
“We have conquered carpets, narrow hallways full of photos, round buttons that are almost impossible to grasp and many, many unexpected things,” the company said.
“Upgraded hardware and control systems now allow operators to perform fine, two-handed tasks in real-world home environments.
“From fetching cheese from the fridge, getting a shirt from the closet, and dusting surfaces to playing board games like Rummy and Mühle (Nine Men’s Morris), Robody’s new capabilities are designed for life at home.”
The person controlling the bot on the other end uses VR kit that allows them to see around and move.
They can even feel their surroundings with haptic feedback.
Google has unveiled a new chip which it claims takes five minutes to solve a problem that would currently take the world’s fastest supercomputers ten septillion – or 10,000,000,000,000,000,000,000,000 years – to complete.
The chip is the latest development in a field known as quantum computing – which is attempting to use the principles of particle physics to create a new type of mind-bogglingly powerful computer.
Google says its new quantum chip, dubbed “Willow”, incorporates key “breakthroughs” and “paves the way to a useful, large-scale quantum computer.”
However experts say Willow is, for now, a largely experimental device, meaning a quantum computer powerful enough to solve a wide range of real-world problems is still years – and billions of dollars – away.
The quantum quandary
Quantum computers work in a fundamentally different way to the computer in your phone or laptop.
They harness quantum mechanics – the strange behaviour of ultra-tiny particles – to crack problems far faster than traditional computers.
It’s hoped quantum computers might eventually be able to use that ability to vastly speed up complex processes, such as creating new medicines.
There are also fears it could be used for ill – for example to break some types of encryption used to protect sensitive data.
In February Apple announced that the encryption that protects iMessage chats is being made “quantum proof” to stop them being read by powerful future quantum computers.
Hartmut Neven leads Google’s Quantum AI lab that created Willow and describes himself as the project’s “chief optimist.”
He told the BBC that Willow would be used in some practical applications – but declined, for now, to provide more detail.
But a chip able to perform commercial applications would not appear before the end of the decade, he said.
Initially, these applications would be the simulation of systems where quantum effects are important.
“For example, relevant when it comes to the design of nuclear fusion reactors to understand the functioning of drugs and pharmaceutical development, it would be relevant for developing better car batteries and another long list of such tasks”.
Apples and oranges
Mr Neven told the BBC Willow’s performance meant it was the “best quantum processor built to date”.
But Professor Alan Woodward, a computing expert at Surrey University, says quantum computers will be better at a range of tasks than current “classical” computers, but they will not replace them.
He warns against overstating the importance of Willow’s achievement in a single test.
“One has to be careful not to compare apples and oranges,” he told the BBC.
Google had chosen a problem to use as a benchmark of performance that was “tailor-made for a quantum computer”, and this didn’t demonstrate “a universal speeding up when compared to classical computers”.
Nonetheless, he said Willow represented significant progress, in particular in what’s known as error correction.
In very simple terms, the more qubits a quantum computer has, the more useful it can be.
However a major problem with the technology is that it is prone to errors – a tendency that has previously increased the more qubits a chip has.
But Google researchers say they have reversed this and managed to engineer and program the new chip so the error rate fell across the whole system as the number of qubits increased.
It was a major “breakthrough” that cracked a key challenge that the field had pursued “for almost 30 years”, Mr Neven believes.
He told the BBC it was comparable to “if you had an airplane with just one engine – that will work, but two engines are safer, four engines is yet safer”.
Errors are a significant obstacle in creating more powerful quantum computers and the development was “encouraging for everyone striving to build a practical quantum computer” Prof Woodward said.
But Google itself notes that to develop practically useful quantum computers the error rate will still need to go much lower than that displayed by Willow.
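Willow's error correction uses quantum surface codes, which are far more involved, but the underlying principle that redundancy can suppress errors (so long as the physical error rate is below a threshold) can be illustrated with a classical majority-vote repetition code. The sketch below is a simplified analogy, not Google's method:

```python
import random

def logical_error_rate(p, n, trials=20000, seed=1):
    """Monte Carlo estimate of the failure rate of an n-copy
    majority-vote repetition code with per-copy error probability p."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(n))
        if flips * 2 > n:  # a majority of copies corrupted -> decoding fails
            failures += 1
    return failures / trials

# With a 10% physical error rate, adding redundancy drives the logical
# error rate DOWN rather than up, the hallmark of working error correction.
for n in (1, 3, 5, 7):
    print(n, logical_error_rate(0.1, n))
```

Google's reported breakthrough is achieving this same below-threshold behaviour in quantum hardware, where errors were historically more likely to grow, not shrink, as qubits were added.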
Willow was made in Google’s new, purpose-built manufacturing plant in California.
Countries around the world are investing in quantum computing.
The UK recently launched the National Quantum Computing Centre (NQCC).
Its director, Michael Cuthbert, told the BBC he was wary of language that fuelled the “hype cycle” and thought Willow was more a “milestone rather than a breakthrough”.
Hundreds of vials containing live viruses have gone missing from a laboratory in Australia, sparking an investigation.
Queensland Health Minister Tim Nicholls announced today that 323 samples of live viruses—including Hendra virus, Lyssavirus and Hantavirus—went missing in 2021 in a “serious breach of biosecurity protocols.”
The breach was discovered in August 2023, with nearly 100 of the missing vials containing Hendra virus, which is deadly. Two of the vials contained hantavirus, while 223 vials contained samples of lyssavirus.
Stock image of vials in a laboratory (main) and virus particles (inset). Hundreds of vials containing samples of deadly viruses have gone missing from a lab in Australia. ISTOCK / GETTY IMAGES PLUS
Hendra virus was first discovered in the mid-1990s after infecting and killing several horses in Australia. Only a handful of humans have caught the disease after being infected by horses, but a large proportion of infected people died.
“Hendra virus has a 57 percent fatality rate in humans and has had a devastating impact on those who have been infected, their families and on the veterinary and equine industries in areas where the virus spills over,” Raina Plowright, a professor at the department of public and ecosystem health at Cornell University’s College of Veterinary Medicine, previously told Newsweek.
Hantavirus is carried by rodents and can cause Hantavirus Pulmonary Syndrome (HPS), which has a mortality rate of around 38 percent, while lyssavirus is similar to rabies and also has a very high mortality rate.
The lab has not been able to conclude if the viruses were destroyed or removed from secure storage, but they do not appear to have been stolen.
“There is nothing to suggest that these have been taken from the laboratory. Secondly […] we don’t have any evidence that Hendra virus has been weaponized in any way in any research laboratory,” Nicholls said at a press conference.
“Of course, all this kind of research is taken in secret, but we are not aware that this has been weaponized in any way. The process of weaponizing a virus is very sophisticated, and is not something an amateur does.”
The samples appear to have gone unaccounted for after a freezer they were being stored in at Queensland’s Public Health Virology Laboratory broke down.
“It’s this part of the transfer of those materials that is causing concern,” Nicholls said, as reported by local news ABC.
“They were transferred to a functioning freezer without the appropriate paperwork being completed. The materials may have been removed from that secure storage and lost, or otherwise unaccounted-for.”
According to a statement from the Queensland government, there is “no evidence of risk to the community from the breach,” as the viruses would have degraded very quickly and subsequently become harmless to humans.
“It’s difficult to conceive of a scenario whereby the public could be at risk,” Queensland Chief Health Officer John Gerrard said in the statement.
“It’s important to note that virus samples would degrade very rapidly outside a low temperature freezer and become non-infectious.
Gerrard notes that the samples were incredibly unlikely to have been thrown away in general waste, and were probably destroyed in an autoclave as per usual lab protocol.
“Importantly, no Hendra or Lyssavirus cases have been detected among humans in Queensland over the past five years, and there have been no reports of Hantavirus infections in humans ever in Australia,” Gerrard explained.
An investigation into the breach has been initiated, which hopes to find out exactly how these viruses went missing and what prevented the discovery of the breach for nearly two years.
“With such a serious breach of biosecurity protocols and infectious virus samples potentially missing, Queensland Health must investigate what occurred and how to prevent it from happening again,” Nicholls said in the statement.
CHERNOBYL has transformed wild dogs into radiation hounds who can survive the deadly nuclear fallout, scientists have revealed.
Two stray canine populations have managed to adapt to the uninhabitable conditions in Ukraine for nearly 40 years, transforming man’s best friend into a mutant mongrel.
Two stray populations make up the furry inhabitants of a nuclear wasteland, nearly 40 years after most humans fled. Credit: AFP
The study used 500 dogs living around Chernobyl, Ukraine, that have managed to survive in the harsh landscape through remarkable generational adaptation.
It is believed that understanding how they survived will help scientists learn more about the health risks involved with radiation.
Experts found there were two main groups of dogs – one living around the power plant and another within Chernobyl city.
Researchers identified 52 genes that could be associated with exposure to the contamination of the nuclear power plant.
All 61 of the nuclear power plant dogs and 52 out of 55 of the Chernobyl city dogs were identified as being at least 10 per cent German Shepherd, according to the study.
Head researcher Dr. Norman J. Kleiman said: “In addition to classifying the population dynamics within these dogs at both locations, we took the first steps towards understanding how chronic exposure to multiple environmental hazards may have impacted these populations.
“Understanding the genetic and health impacts of these chronic exposures in the dogs will strengthen our broader understanding of how these types of environmental hazards can impact humans and how best to mitigate health risks.”
Dr. Matthew Breen from NC State said: “The overarching question here is: does an environmental disaster of this magnitude have a genetic impact on life in the region?
“Think of these regions as markers, or signposts, on a highway. They identify areas within the genome where we should look more closely at nearby genes.”
Some of the markers point to genes associated with genetic repair, specifically with genetic repair after exposures similar to those experienced by the dogs in Chernobyl.
The research paper reads: “In this foundational study we determined that while the two local populations of dogs are separated by only 16km, they have very low rates of interpopulation migration.
“We also detected genetic evidence that suggests that these populations may have adapted to exposures faced over many generations.”
“None of the sampled dogs in either the Nuclear Power Plant or Chernobyl City populations were determined to be purebred, with both populations averaging 25 breed matches per dog.”
What happened at Chernobyl?
THE nuclear catastrophe in Chernobyl claimed 31 lives as well as leaving thousands of people and animals exposed to potentially fatal radiation.
When an alarm bellowed out at the nuclear plant on April 26, 1986, workers looked on in horror as the control panels signaled a major meltdown in the number four reactor.
The safety switches had been switched off in the early hours to test the turbine, but the reactor overheated and generated a blast – the equivalent of 500 nuclear bombs.
The reactor’s roof was blown off and a plume of radioactive material was blasted into the atmosphere.
As air was sucked into the shattered reactor, it ignited flammable carbon monoxide gas, causing a fire which burned for nine days.
The catastrophe released at least 100 times more radiation than the atom bombs dropped on Nagasaki and Hiroshima.
Soviet authorities waited 24 hours before evacuating the nearby town of Pripyat – giving the 50,000 residents just three hours to leave their homes.
After the accident, traces of radioactive deposits were found in Belarus, where poisonous rain damaged plants and caused animal mutations.
But the devastating impact was also felt in Scandinavia, Switzerland, Greece, Italy, France and the UK.
An 18-mile radius known as the “Exclusion Zone” was set up around the reactor following the disaster.
Wildlife has thrived in the absence of hunting, farming, and urban development, turning Chernobyl into an accidental refuge for nature.
In the first few days, radiation levels were so high that plants turned brown and died, and forests close to the reactor were devastated.
This makes it even more incredible that these dog populations managed to not only survive, but learn to live in the barren landscape.
It is believed that dogs were left behind in the disaster by their owners, but they managed to bounce back and stun scientists.
Many of the dogs have formed packs to protect themselves and for companionship.
Some of the dogs have even bonded with the few human scientists who are still over there.
WhatsApp Update: The online counter feature is available to some beta testers who install the latest WhatsApp beta for Android update from the Google Play Store.
The Meta-owned WhatsApp is currently working on several new features to provide a better user experience on its platform. According to a recent report, the instant messaging giant is working on an online counter feature for group chats. This new feature shows the number of participants currently online in the top bar of group chats, giving a quick view of their availability.
“It seems that WhatsApp is experimenting with new enhancements for the chat interface, with the aim to make it even more useful. Thanks to the latest WhatsApp beta for Android 2.24.25.30 update, which is available on the Google Play Store, we discovered that WhatsApp is rolling out an online counter feature for group chats,” WABetaInfo reported.
The screenshot shared in the report revealed that some beta testers can explore a new feature that displays the number of participants currently online in the top app bar of group chats.
Previously, the top app bar in group chats displayed a summary of the group participants’ names and their current activity.
In this update, the Meta-owned app has replaced the activity-based summary in the top app bar with the number of participants currently online. This feature provides users with a quick view of how many group members have the app open and are connected to the service.
“It’s worth noting that this count does not reflect how many participants are actively engaging in the group chat but only those who have the app open,” WABetaInfo said.
The report further mentioned that the number of participants currently online excludes any members who have turned off their online status visibility from their privacy settings. These users will not be included in the count, ensuring that their privacy settings are always respected.
This approach will maintain the balance between offering useful group insights and protecting individual privacy preferences, as it excludes users who have chosen to hide their online status from being considered.
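The counting rule described above – include only members who have the app open and who have not hidden their online status – can be sketched in a few lines of Python. The `Participant` class and its field names are hypothetical illustrations; WhatsApp’s actual implementation is not public.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    is_online: bool             # has the app open and connected
    shares_online_status: bool  # privacy setting: online status visible

def online_counter(participants):
    """Count members who are online AND share their online status.

    Members who hide their status are excluded, so the privacy
    preference is respected even though they may be connected.
    """
    return sum(
        1 for p in participants
        if p.is_online and p.shares_online_status
    )
```

Note that, as the report stresses, this counts connected members, not active chatters: a participant with the app open in the background would still be included.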
Beef steers graze on a ranch in Dillon, Montana. The machine nearby releases a seaweed supplement while also measuring the cattle’s methane emissions. (Credit: Paulo de Méo Filho / UC Davis)
When it comes to climate change, one of the most significant yet rarely discussed contributors comes from an unexpected source: cow burps. Now, scientists may have found a potential solution that could reshape the future of sustainable cattle farming: seaweed.
To understand the scale of this challenge, consider the numbers: Livestock account for 14.5% of global greenhouse gas emissions, with the largest portion coming from methane that cattle release when they burp. In the United States alone, there are over 64 million beef cattle and nine million dairy cows contributing to these emissions. When cattle digest fibrous plant materials, their specialized stomach compartments host microorganisms that break down tough cellulose, producing methane as a byproduct. Grazing cattle, which consume more fibrous grass than their feedlot or dairy counterparts, produce even more methane as a result.
“Beef cattle spend only about three months in feedlots and spend most of their lives grazing on pasture and producing methane,” explains senior author Ermias Kebreab, professor in the Department of Animal Science at the University of California, Davis, in a media release. “We need to make this seaweed additive or any feed additive more accessible to grazing cattle to make cattle farming more sustainable while meeting the global demand for meat.”
Breaking New Ground: Testing Seaweed Supplements in Grazing Cattle
Previous research has shown promising results with seaweed supplements, reducing methane emissions by 82% in feedlot cattle and over 50% in dairy cows. However, finding effective solutions for grazing cattle has proven particularly challenging because they spend most of their time on open pastures, making daily supplementation difficult. The UC Davis study represents the first worldwide test of seaweed supplements on grazing beef cattle.
To tackle this challenge, researchers developed a precisely formulated supplement called Brominata, using a specific seaweed species, Asparagopsis taxiformis. The pelleted supplement contained 20% of the seaweed along with distillery solubles (15%), wheat middlings (64.8%), and a small amount of palatability enhancer (0.25%). This careful formulation aimed to make the supplement both effective and palatable to cattle.
The study took place at Matador Ranch in Montana, where researchers worked with 24 Wagyu-Angus crossbred steers. The cattle were divided into two equal groups: one received standard feed pellets, while the other received the Brominata supplement. Using sophisticated monitoring equipment called the GreenFeed system, researchers tracked the animals’ emissions during feeding sessions that occurred up to three times daily over a 70-day period.
The results revealed three distinct phases: a three-week ramp-up period as cattle adjusted to the supplement, a three-week optimal phase where effects were most pronounced, and a two-week decreasing phase. During the optimal and decreasing phases, cattle receiving the seaweed supplement showed an average reduction in methane emissions of 37.7% compared to the control group. Importantly, this significant decrease occurred without negatively impacting the animals’ growth or feed intake.
The researchers discovered a precise relationship between the amount of bromoform (the active compound in the seaweed) consumed and methane reduction: for every 100 milligrams of daily bromoform consumption, methane emissions dropped by approximately 20%. With an average bromoform intake of 193 milligrams per day, the study demonstrated that even voluntary consumption of the supplement could achieve significant results.
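Because the reported dose-response relationship is linear, the expected reduction can be estimated with simple arithmetic. A minimal sketch (the function name and the assumption of strict linearity are mine, based only on the figures quoted above):

```python
def methane_reduction_pct(bromoform_mg_per_day, pct_per_100mg=20.0):
    """Estimate % methane reduction from daily bromoform intake.

    Assumes the linear dose-response reported in the study:
    roughly a 20% reduction per 100 mg of daily bromoform.
    """
    return bromoform_mg_per_day / 100.0 * pct_per_100mg
```

Plugging in the study’s average intake of 193 mg/day gives an estimate of about 38.6%, close to the 37.7% reduction actually observed.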
The study, published in the Proceedings of the National Academy of Sciences, also tracked other greenhouse gases. Carbon dioxide emissions showed a modest 4% reduction in supplemented cattle, while hydrogen emissions increased significantly – rising 85.7% in production and 76.5% in yield. This increase in hydrogen emissions was expected, as it indicates the successful disruption of methane-producing digestive processes.
Looking toward practical implementation, Kebreab notes, “Ranchers could even introduce the seaweed through a lick block for their cattle.”
This approach could solve one of the biggest challenges in grazing operations: delivering supplements to cattle that often range far from ranch headquarters. While ranchers typically supplement their cattle’s diet during winter months or when grass becomes scarce, having a delivery method that works year-round could maximize the climate benefits.
The Climate Impact of Cattle: A Growing Global Challenge
The global implications of this research extend far beyond American ranches. Pastoral farming supports millions of people worldwide, often in regions most vulnerable to climate change. Making cattle grazing more environmentally sustainable could help protect both these traditional farming practices and the communities that depend on them. This aligns with findings from a related PNAS article emphasizing the importance of improving livestock production in low and middle-income countries through better genetics, feeding practices, and animal health measures.
For the millions of ranchers worldwide who manage grazing cattle, this research offers a practical path toward climate change mitigation. While the 37.7% reduction might not match the more dramatic results seen in controlled feeding situations, it represents a significant breakthrough for grazing operations. The simplicity of the solution – a supplement that can be delivered through existing feeding practices – makes it particularly promising for widespread adoption as the world seeks to balance increasing demand for meat with environmental protection.
People shelter under umbrellas from the wind and rain as they cross a road near Shinjuku train station on October 12, 2019 in Tokyo, Japan ahead of Typhoon Hagibis’ expected landfall later in the evening. Photo by Carl Court/Getty Images
GenCast, a new AI model from Google DeepMind, is accurate enough to compete with traditional weather forecasting. It managed to outperform a leading forecast model when tested on data from 2019, according to recently published research.
AI isn’t going to replace traditional forecasting anytime soon, but it could add to the arsenal of tools used to predict the weather and warn the public about severe storms. GenCast is one of several AI weather forecasting models being developed that might lead to more accurate forecasts.
“Weather basically touches every aspect of our lives … it’s also one of the big scientific challenges, predicting the weather,” says Ilan Price, a senior research scientist at DeepMind. “Google DeepMind has a mission to advance AI for the benefit of humanity. And I think this is one important way, one important contribution on that front.”
Price and his colleagues tested GenCast against the ENS system, one of the world’s top-tier models for forecasting that’s run by the European Centre for Medium-Range Weather Forecasts (ECMWF). GenCast outperformed ENS 97.2 percent of the time, according to research published this week in the journal Nature.
GenCast is a machine learning weather prediction model trained on weather data from 1979 to 2018. The model learns to recognize patterns in the four decades of historical data and uses that to make predictions about what might happen in the future. That’s very different from how traditional models like ENS work, which still rely on supercomputers to solve complex equations in order to simulate the physics of the atmosphere. Both GenCast and ENS produce ensemble forecasts, which offer a range of possible scenarios.
When it comes to predicting the path of a tropical cyclone, for example, GenCast was able to give an additional 12 hours of advance warning on average. GenCast was generally better at predicting cyclone tracks, extreme weather, and wind power production up to 15 days in advance.
One caveat is that GenCast was tested against an older version of ENS, which now operates at a higher resolution. The peer-reviewed research compares GenCast predictions to ENS forecasts for 2019, seeing how close each model got to real-world conditions that year. The ENS system has improved significantly since 2019, according to ECMWF machine learning coordinator Matt Chantry. That makes it difficult to say how well GenCast might perform against ENS today.
To be sure, resolution isn’t the only important factor when it comes to making strong predictions. ENS was already working at a slightly higher resolution than GenCast in 2019, and GenCast still managed to beat it. DeepMind says it conducted similar studies on data from 2020 to 2022 and found similar results, although that hasn’t been peer-reviewed. But it didn’t have the data to make comparisons for 2023, when ENS started running at a significantly higher resolution.
Dividing the world into a grid, GenCast operates at 0.25 degree resolution — meaning each square on that grid is a quarter degree latitude by quarter degree longitude. ENS, in comparison, used 0.2 degree resolution in 2019 and is at 0.1 degree resolution now.
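For a rough sense of scale, the cell counts implied by those resolutions can be computed directly. This is a simplified uniform latitude-longitude grid (180° of latitude by 360° of longitude); real model grids may treat the poles differently.

```python
def grid_cells(resolution_deg):
    """Number of cells in a uniform global lat-lon grid."""
    n_lat = round(180 / resolution_deg)  # rows of latitude
    n_lon = round(360 / resolution_deg)  # columns of longitude
    return n_lat * n_lon

# GenCast at 0.25°   -> ~1.0 million cells
# ENS (2019) at 0.2° -> ~1.6 million cells
# ENS (now) at 0.1°  -> ~6.5 million cells
```

Halving the grid spacing quadruples the number of cells, which is part of why higher-resolution physics-based forecasts are so much more expensive to run.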
Nevertheless, the development of GenCast “marks a significant milestone in the evolution of weather forecasting,” Chantry said in an emailed statement. Alongside ENS, the ECMWF says it’s also running its own version of a machine learning system. Chantry says it “takes some inspiration from GenCast.”
Speed is an advantage for GenCast. It can produce one 15-day forecast in just eight minutes using a single Google Cloud TPU v5. Physics-based models like ENS might need several hours to do the same thing. GenCast bypasses all the equations ENS has to solve, which is why it takes less time and computational power to produce a forecast.
Google CEO Sundar Pichai announced AI-driven changes for Google Search by 2025, promising advanced capabilities. (Image: Fortune India)
Google’s search engine could be set for an overhaul by 2025, according to CEO Sundar Pichai. During the New York Times DealBook Summit, Pichai hinted at a radical transformation in the way users interact with the platform. “I think we are going to be able to tackle more complex questions than ever before,” he said, adding that users should expect visible changes toward the beginning of 2025. The announcement comes as the search giant ramps up its focus on artificial intelligence to stay ahead of competitors in the tech space.
Pichai’s comments came just after Microsoft CEO Satya Nadella criticised Google for having an unfair advantage in the AI race. Responding to Nadella, Pichai countered that Google’s in-house AI models could outperform those of competitors who rely heavily on external partnerships. “I would love to do a side-by-side comparison of Microsoft’s own models and our models,” he quipped, in a reference to Microsoft’s work with OpenAI.
The changes to Google Search build on this year’s AI-driven updates, which included features like AI-generated search summaries and enhanced video-based search using Google Lens. The company’s next major milestone is a substantial upgrade to its Gemini AI model, which is expected to enhance its competitive standing against rivals such as Microsoft, OpenAI, and emerging players like Perplexity.
What if everything we thought we knew about who’s most at risk during extreme heat was wrong? A shocking new study turns conventional wisdom on its head, revealing that in Mexico, it’s actually young people – not the elderly – who are dying more frequently from heat exposure. The research shows that 75% of heat-related deaths occur among people under 35 years old, with many victims being otherwise healthy young adults.
The Temperature Paradox
For decades, scientists and public health officials have focused their heat-protection efforts on elderly populations, believing them to be most vulnerable to temperature extremes. But when researchers analyzed death records in Mexico, they discovered something unexpected: between 1998 and 2019, the country experienced about 3,300 heat-related deaths each year, with nearly a third occurring in people ages 18 to 35. Even more surprising, people aged 50 to 70 – who were thought to be highly vulnerable – actually had the lowest rates of heat-related deaths.
To understand this pattern, researchers chose to study Mexico for a specific reason: the country keeps detailed records of both deaths and daily temperatures for every local area, creating a rich dataset for analysis. Think of it as a massive weather-and-health diary covering an entire nation, with entries spanning more than two decades and including information about 13.4 million deaths.
Heat and Humidity: A Deadly Combination
The researchers focused on something called “wet-bulb temperature” – a measurement that combines heat and humidity to show how well our bodies can cool themselves through sweating. While this might sound technical, think of it this way: on a dry, hot day, your sweat evaporates quickly and helps cool you down. But on a humid day, even if it’s not as hot, the air is already so full of moisture that your sweat can’t evaporate as effectively, making it harder for your body to cool itself.
It’s like trying to dry clothes outside. On a hot, dry day, your clothes dry quickly because the air can absorb the moisture. But on a humid day, they stay damp much longer because the air is already saturated with water vapor. Our bodies face the same challenge when trying to cool down in humid conditions.
When wet-bulb temperatures reach around 35°C (95°F), it becomes physically impossible for humans to survive without artificial cooling, because our natural cooling system (sweating) simply can’t work anymore. Even at lower temperatures of 31°C (88°F), our bodies start to struggle significantly with cooling down.
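For readers who want to experiment with these thresholds, wet-bulb temperature can be approximated from air temperature and relative humidity using Stull’s (2011) empirical formula – a standard meteorological approximation, not the method used in the study:

```python
import math

def wet_bulb_stull(t_c, rh_pct):
    """Stull (2011) wet-bulb approximation.

    t_c    -- air temperature in degrees Celsius
    rh_pct -- relative humidity as a percentage (0-100)
    Returns an estimate of wet-bulb temperature in °C.
    """
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)
```

For example, `wet_bulb_stull(21.7, 40)` returns roughly 13.7°C – the article’s “mild spring day” of about 71°F at 40% humidity, which the study found to be the safest condition for young people.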
Who’s at Risk and Why
“We project, as the climate warms, heat-related deaths are going to go up, and the young will suffer the most,” said the study’s co-lead author, R. Daniel Bressler, a PhD candidate in Columbia’s Sustainable Development program.
The high death rate among young adults, particularly those between 18 and 34, stems largely from their work conditions. “These are the more junior people, low on the totem pole, who probably do the lion’s share of hard work, with inflexible work arrangements,” explained study co-author Shrader. Young adults typically fill jobs in construction, farming, and factory work – occupations that often involve intense physical activity in hot environments with little flexibility to take breaks or avoid the hottest parts of the day.
Children under 5, especially infants, face different challenges. Their bodies are particularly vulnerable because of their physical makeup: they have a higher ratio of surface area to body weight, which means they absorb heat more quickly than adults do. Their ability to sweat – the body’s main cooling mechanism – isn’t fully developed yet. Adding to their risk, their immune systems are still maturing, making them more susceptible to diseases that become more common in hot, humid conditions, such as those spread by mosquitoes or contaminated water.
Finding the Breaking Point
The researchers discovered that different temperatures affect people differently. They found that young people fare best at wet-bulb temperatures around 13°C (think of a mild spring day with 40% humidity – it would feel like 71°F). However, the highest number of deaths occurred at wet-bulb temperatures of 23 or 24°C – not because these temperatures are the most dangerous, but because they happen more frequently. It’s like how most car accidents happen on familiar roads near home, not because they’re the most dangerous roads, but because we drive on them more often.
While this study focused on Mexico, its implications stretch far beyond its borders. Mexico, where about 15% of workers are employed in agriculture, represents a middle ground in terms of its workforce and age distribution. But consider countries in Africa and Asia, where much larger portions of the population are young and working outdoors in manual labor. If the patterns found in Mexico hold true elsewhere, these nations could face even more devastating impacts as global temperatures rise.
This possibility is particularly troubling because many of these countries also have limited access to air conditioning and other cooling technologies that could help protect workers. A recent study found that farmworkers in many developing nations are already struggling to work in increasingly oppressive heat and humidity.
The Cold Truth
Interestingly, despite all our fears about rising temperatures, cold weather currently claims more lives globally than heat, even in Mexico. The study found that older adults are particularly vulnerable to cold temperatures, partly because their bodies tend to run at lower core temperatures, making them more sensitive to chills. When it’s cold, older people often stay indoors, where infectious diseases can spread more easily.
However, this pattern is changing. Since 2000, the proportion of heat-related deaths has been steadily climbing, and scientists expect this trend to continue as our planet warms. The research team is now expanding their investigation to other countries, including the United States and Brazil, to see if young people face similar risks elsewhere.
The findings from this research, published in Science Advances, challenge us to rethink who needs protection from extreme heat and how we provide it. As our world continues to warm, we need solutions that consider everyone’s vulnerability – from infants to the elderly, from office workers to those laboring outdoors. The answer might lie in better workplace protections, improved access to cooling technology, and stronger climate policies. After all, when it comes to heat-related deaths, age isn’t just a number – it’s a crucial factor in determining who’s most at risk.
Known across the globe as the stuck astronauts, Butch Wilmore and Suni Williams hit the six-month mark in space Thursday with two more to go.
The pair rocketed into orbit on June 5, the first to ride Boeing’s new Starliner crew capsule on what was supposed to be a weeklong test flight. They arrived at the International Space Station the next day, only after overcoming a cascade of thruster failures and helium leaks. NASA deemed the capsule too risky for a return flight, so it will be February before their long and trying mission comes to a close.
While NASA managers bristle at calling them stuck or stranded, the two retired Navy captains shrug off the description of their plight. They insist they’re fine and accepting of their fate. Wilmore views it as a detour of sorts: “We’re just on a different path.”
“I like everything about being up here,” Williams told students Wednesday from an elementary school named for her in Needham, Massachusetts, her hometown. “Just living in space is super fun.”
Both astronauts have lived up there before so they quickly became full-fledged members of the crew, helping with science experiments and chores like fixing a broken toilet, vacuuming the air vents and watering the plants. Williams took over as station commander in September.
“Mindset does go a long way,” Wilmore said in response to a question from Nashville first-graders in October. He’s from Mount Juliet, Tennessee. “I don’t look at these situations in life as being downers.”
Boeing flew its Starliner capsule home empty in September, and NASA moved Wilmore and Williams to a SpaceX flight not due back until late February. Two other astronauts were bumped to make room and to keep to a six-month schedule for crew rotations.
Like other station crews, Wilmore and Williams trained for spacewalks and any unexpected situations that might arise.
“When the crews go up, they know they could be there for up to a year,” said NASA Associate Administrator Jim Free.
NASA astronaut Frank Rubio found that out the hard way when the Russian Space Agency had to rush up a replacement capsule for him and two cosmonauts in 2023, pushing their six-month mission to just past a year.
Boeing said this week that input from Wilmore and Williams has been “invaluable” in the ongoing inquiry of what went wrong. The company said in a statement that it is preparing for Starliner’s next flight but declined comment on when it might launch again.
NASA also has high praise for the pair.
“Whether it was luck or whether it was selection, they were great folks to have for this mission,” NASA’s chief health and medical officer, Dr. JD Polk, said during an interview with The Associated Press.
On top of everything else, Williams, 59, has had to deal with “rumors,” as she calls them, of serious weight loss. She insists her weight is the same as it was on launch day, which Polk confirms.
During Wednesday’s student chat, Williams said she didn’t have much of an appetite when she first arrived in space. But now she’s “super hungry” and eating three meals a day plus snacks, while logging the required two hours of daily exercise.
The Proba-3 launch is part of a commercial mission between ISRO and the European Space Agency to study the Sun.
ISRO’s PSLV would take European satellite Proba-3 to space after liftoff today. (ISRO)
The Indian Space Research Organisation (ISRO) is all set for the liftoff of the European Space Agency’s Proba-3 satellite from Sriharikota, Andhra Pradesh today (December 4) at 4:08 pm. The Proba-3 is part of the PSLV C-59 mission that is a commercial collaboration between ESA and ISRO via NewSpace India Ltd (NSIL).
“PSLV-C59, showcasing the proven expertise of ISRO, is ready to deliver ESA’s PROBA-3 satellites into orbit. This mission, powered by NSIL with ISRO’s engineering excellence, reflects the strength of international collaboration. A proud milestone in India’s space journey and a shining example of global partnerships,” said ISRO in a post on X.
Proba-3 is billed as the world’s first precision formation-flying mission. It comprises two spacecraft – the Coronagraph Spacecraft (CSC) and the Occulter Spacecraft (OSC) – which will fly together as one in an elliptical orbit to study the Sun’s outer atmosphere.
ISRO is using its workhorse Polar Satellite Launch Vehicle (PSLV) for the commercial mission. The two satellites, Coronagraph and Occulter, will be launched together as a single unit in a stacked configuration.
How Does It Work?
After reaching their initial orbit, the two spacecraft will fly 150 metres apart in precise formation, behaving as one large satellite structure and maintaining that formation for around six hours at a time. The Occulter spacecraft will block the Sun’s disk, enabling the Coronagraph spacecraft to observe the Sun’s atmosphere without interruption.
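The 150-metre separation largely fixes how big the occulting disc must be: seen from the coronagraph, it has to cover at least the Sun’s roughly 0.53° apparent diameter. A back-of-the-envelope check of that geometry (simplified; the real disc is sized with engineering margins):

```python
import math

SUN_ANGULAR_DIAMETER_DEG = 0.533  # mean apparent diameter of the Sun

def occulter_diameter_m(separation_m,
                        angular_diameter_deg=SUN_ANGULAR_DIAMETER_DEG):
    """Minimum occulter disc diameter to cover the solar disk.

    The disc must subtend at least the Sun's angular diameter as
    seen from the coronagraph spacecraft, a distance separation_m away.
    """
    return separation_m * math.tan(math.radians(angular_diameter_deg))
```

At 150 m this gives a disc of about 1.4 m across, which is in line with the occulter disc ESA describes for the mission.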
“The corona – much hotter than the Sun itself – is where space weather originates and is a topic of widespread scientific and practical interest,” the ESA said. “The mission will demonstrate formation flying in the context of a large-scale science experiment. The two satellites will together form an approximately 150-metre-long solar coronagraph to study the Sun’s faint corona closer to the solar rim than has ever been achieved before,” it said.
Could another group of ancient humans have lived alongside Homo sapiens? A new study suggests that they did, and scientists are starting to piece together the clues of their forgotten past. A researcher from the University of Hawai’i at Manoa is revealing new insights into a group called the Julurens — meaning the “big head” people.
The new research is revolutionizing our understanding of human evolution, particularly in eastern Asia, where scientists have uncovered a far more intricate picture of our ancient past than previously thought.
For decades, researchers believed human evolution followed a relatively straightforward path. The dominant theories suggested either that humans gradually evolved in place across different regions or that a single group from Africa replaced all other human populations. However, the groundbreaking study published in the journal Nature Communications is turning those simplistic models on their head.
Paleoanthropologists Christopher Bae and Xiujie Wu introduce a potentially revolutionary concept: a new human species called Homo juluensis. This group, which may include the mysterious Denisovans — ancient human relatives known primarily through fragmentary DNA evidence — lived approximately 300,000 years ago, hunting and surviving in small groups across eastern Asia before disappearing around 50,000 years ago.
Moreover, they found that eastern Asia was home to multiple distinct human species during the Late Quaternary period, roughly 50,000 to 300,000 years ago. Instead of a linear progression, the human story looks more like a complex, branching network of different populations (including the Julurens) interacting, mixing, and coexisting.
The team identified four human species that existed during this time: Homo floresiensis, a diminutive human found on the Indonesian island of Flores; Homo luzonensis from the Philippines; Homo longi, discovered in China; and the recently named Homo juluensis, which includes fossils from various sites across eastern Asia.
“We did not expect being able to propose a new hominin (human ancestor) species and then to be able to organize the hominin fossils from Asia into different groups. Ultimately, this should help with science communication,” Bae says in a university release.
Each of these species possessed unique morphological characteristics that set them apart. Homo floresiensis, for instance, was remarkably small, earning it the nickname “hobbit” human. Homo luzonensis represented another compact human variant, while Homo longi was characterized by a massive cranium that suggests a different evolutionary trajectory.
The most intriguing aspect of these discoveries is how they challenge our previous understanding of human migration and interaction. Rather than a simple “out of Africa” narrative where one human group replaced all others, the evidence now suggests a much more nuanced story of multiple dispersals, interactions, and genetic exchanges.
The Hualongdong fossils from central-eastern China exemplify this complexity. Dating back approximately 300,000 years, these remains display a mosaic of characteristics that cannot be easily categorized into any single known human lineage. These findings underscore just how intricate human evolution truly was.
“I see the name Juluren not as a replacement for Denisovan, but as a way of referring to a particular group of fossils and their possible place in the network of ancient groups,” writes anthropologist John Hawks, who did not take part in this study, in a statement. “In my opinion, Bae and collaborators have a good case for distinguishing the Chinese fossil record from the fossils from Africa and western Eurasia across this time.”
What makes this research particularly exciting is how it represents a significant leap forward in our understanding of human prehistory. The eastern Asian fossil record has traditionally lagged behind those of Europe and Africa, but now it’s revealing a rich, diverse evolutionary landscape that demands we rethink our previous models.
A new theory that explains how light and matter interact at the quantum level has enabled researchers to define for the first time the precise shape of a single photon. (Credit: Dr Benjamin Yuen)
For the first time in physics history, scientists have developed a mathematical framework that allows them to visualize the precise shape of a single particle of light, known as a photon. This breakthrough, achieved by researchers at the University of Birmingham, represents a fundamental advance in our understanding of how light interacts with matter at the quantum level.
At its core, the research solves a problem that has puzzled quantum physicists for decades: how to accurately model the infinite possibilities of how light can interact with and travel through its environment. The team accomplished this by developing a new mathematical approach that groups these countless possibilities into distinct, manageable sets.
“The geometry and optical properties of the environment have profound consequences for how photons are emitted, including defining the photon’s shape, colour, and even how likely it is to exist,” explains Professor Angela Demetriadou from the University of Birmingham in a statement.
To understand the significance of this work, now published in Physical Review Letters, consider how light behaves in everyday situations. When sunlight passes through a stained glass window, its interaction with the glass creates beautiful colors. At the quantum level, these interactions become far more complex, with individual photons interacting with atoms and molecules in ways that previous mathematical models struggled to describe accurately.
The research team’s new approach, called “pseudomode transformation,” allows scientists to track precisely how light bounces around and interacts with matter in complex nanoscale systems. What makes this method particularly powerful is its ability to describe both what happens near the source of the light and how the energy travels out into the surrounding space.
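To give a sense of the general idea (this is the standard flavor of pseudomode methods in quantum optics, sketched here for illustration, and not necessarily the authors’ exact formulation), the environment’s continuum of electromagnetic modes is summarized by a spectral density $J(\omega)$, which is approximated by a finite sum of Lorentzian peaks, each defining one discrete, damped “pseudomode”:

```latex
J(\omega) \;\approx\; \sum_{k=1}^{N} \frac{\lambda_k^{2}}{\pi}\,
\frac{\Gamma_k/2}{(\omega - \Omega_k)^{2} + (\Gamma_k/2)^{2}}
```

Here $\Omega_k$, $\Gamma_k$, and $\lambda_k$ stand for the center frequency, linewidth, and coupling strength of the $k$-th pseudomode. The emitter then interacts with a small number $N$ of damped harmonic oscillators instead of an infinite continuum, turning an intractable problem into one a computer can handle.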
“Our calculations enabled us to convert a seemingly insolvable problem into something that can be computed,” says Dr. Benjamin Yuen, the study’s first author. “And, almost as a by-product of the model, we were able to produce this image of a photon, something that hasn’t been seen before in physics.”
To demonstrate their method, the researchers studied what happens when a quantum emitter (such as an atom or molecule) interacts with a tiny silicon sphere just one micrometer in diameter – about one-hundredth the width of a human hair. This seemingly simple system revealed a complex dance of quantum interactions that their new mathematical framework could precisely describe.
“This work helps us to increase our understanding of the energy exchange between light and matter, and secondly to better understand how light radiates into its nearby and distant surroundings,” explains Dr. Yuen. “Lots of this information had previously been thought of as just ‘noise’ – but there’s so much information within it that we can now make sense of, and make use of.”
Journalists are finding more readers and less hate on Bluesky than on the platform they used to know as Twitter.
When Ashton Pittman, an award-winning news editor and reporter, first joined the app Bluesky, he said, he was the only Mississippi journalist he knew to be using it, and that remained the case until about five weeks ago. Now, Pittman said, there are at least 15 Mississippi journalists on Bluesky as it becomes a preferred platform for reporters, writers, activists and other groups who have become increasingly alienated by X.
Pittman’s outlet, the Mississippi Free Press, already has more followers on Bluesky (28,500) than it ever did on X (22,000), the platform formerly known as Twitter, and Pittman said the audience engagement on Bluesky is booming.
“We have posts that are exactly the same on Twitter and on Bluesky, and with those identical posts, Bluesky is getting 20 times the engagement or more than Twitter,” Pittman said. “Seeing a social media platform that doesn’t throttle links really makes it clear how badly we were being limited.”
Since Elon Musk bought Twitter, he has turned the platform into an increasingly difficult place for journalists, and many had come to suspect that it had begun to suppress the reach of posts that include links. On Sunday, Musk confirmed the platform has deprioritized posts with links, which is how journalists and other creators historically shared their work. Four journalists told NBC News that they are rebuilding their audiences on Bluesky, an alternative that resembles a pared-back version of X, after millions of users migrated there following the election.
“My average post that isn’t a hot-button issue or isn’t trending might not perform as well on X as it does on Bluesky,” said Phil Lewis, a senior front page editor at HuffPost who has over 400,000 followers on X and close to 300,000 on Bluesky. “Judging by retweets, likes and comments, it’s a world of difference.”
Platform and audience editors at The Guardian and The Boston Globe have publicly noted higher traffic to their news websites from Bluesky than from competitors including Threads, Meta’s X alternative. Rose Wang, Bluesky’s chief operating officer, quoted the Guardian’s stats, writing: “We want Bluesky to be a great home for journalists, publishers, and creators. Unlike other platforms, we don’t de-promote your links. Post all the links you want — Bluesky is a lobby to the open web.”
Bluesky, initially built as part of an initiative funded by Twitter co-founder Jack Dorsey, who cut ties with the company in May, launched to the public as an invitation-only platform last year. Some of its earliest users included Black, trans and politically progressive people. Journalists who belong to and cover issues affecting marginalized populations have found Bluesky to be a much more welcoming environment.
“I think that Bluesky’s demographic is literally just anybody who can’t stand the sort of toxic environment that Twitter has become, and that spans a large range of people,” said Erin Reed, an independent journalist covering trans rights issues on Substack. “Journalists don’t like toxicity and toxic comments. We want to have conversations with people, and we don’t want everything to devolve into slurs being hurled back and forth.”
Numerous studies and analyses have found that after Musk took over the platform, use of hate speech increased. Over time, the platform became a bastion of the right-wing internet.
Reed also said traffic to her Substack articles has doubled since she began posting exclusively on Bluesky. She and Talia Lavin, a journalist and author who covers the far right, said X had become overrun with anti-trans speech, as well as other forms of bigotry and harassment. Lavin said she noticed an uptick of antisemitism and pro-Nazi accounts on X, as did Pittman.
In April, NBC News found that at least 150 pro-Nazi accounts on X were able to purchase verification and boost pro-Nazi content that was viewed millions of times on the app.
“If I’m not able to drive any consistent views to my newsletter from Twitter, why am I here?” Lavin said about her decision to move to Bluesky. “All the replies were AI bots and Nazis, and none of the earnestly engaged readers are seeing my content. So what was the point of subjecting myself to psychic damage?
“Having any sort of space where I can say, ‘Here is my newsletter, here is my book,’ and you can at least be exposed to the work I’m writing, that feels good, as opposed to a billionaire who actively hates the press being in charge and not wanting anyone to see your work,” Lavin continued. “I don’t know if it signifies some brand new hope for journalism, but it is nice to have a platform where you’re not actively being stifled.”
While journalists and writers have begun finding success in reaching an engaged and paying audience on Bluesky, they aren’t the only ones. Aaron Kleinman, director of research for the States Project, a state legislative campaigning group, said in a post that the group’s Give Smart fundraising effort made more money on Bluesky than on X in 2023, even when follower counts were much smaller. “Twitter’s cooked as a platform for raising money,” Kleinman wrote.
Over the past decade, space has become increasingly cluttered with the growing number of spacecraft being sent into orbit. As these objects eventually disintegrate, they release polluting emissions that risk harming the upper atmosphere.
SpaceX launches its fifth Starship flight test from Launch Complex 1 at Starbase Boca Chica, Texas, on October 13, 2024. UPI / NEWSCOM / SIPA
When looking at the spectacular test flights of SpaceX’s giant rocket Starship, the handful of manned flights to the International Space Station every year or the rare scientific probes exploring the Solar System, one might think that leaving Earth remains a rare event. However, this would be forgetting that the space sector has been in a frenzy of satellite launches for years now, which are taking place on an almost daily basis. There were 211 successful lift-offs in 2023, a record that is set to be broken in 2024.
Stijn Lemmens, a space debris expert at the European Space Agency (ESA), said that “over the last decade, space activities have grown exponentially.” This is largely due to Elon Musk, who not only implemented the concept of the reusable rocket but also began deploying his megaconstellation of Starlink satellites, delivering the internet from space. This program is being imitated by other players, both private, like Amazon, and state-owned, with China and the European Union also wanting their own megaconstellations.
The result: “In the last three years, we’ve seen more satellite launches than in the previous 60 years,” observed Lemmens. “Today, the annual number of satellites put into orbit is in the thousands, with over 2,400 objects in 2023, and this trend is set to continue. Over the next decade, we anticipate an influx into low-Earth orbit that could amount to several tens of thousands of satellites.”
The Google logo is seen on the Google house at CES 2024, an annual consumer electronics trade show, in Las Vegas, Nevada, U.S. January 10, 2024. REUTERS/Steve Marcus/File Photo
Canada’s Competition Bureau is suing Alphabet’s (GOOGL.O) Google over alleged anti-competitive conduct in online advertising, the antitrust watchdog said on Thursday.
The Competition Bureau, in a statement, said it had filed an application with the Competition Tribunal seeking an order that, among other things, requires Google to sell two of its ad tech tools. It is also seeking a penalty from Google to promote compliance with Canada’s competition laws, the statement said.

Google said the complaint “ignores the intense competition where ad buyers and sellers have plenty of choice and we look forward to making our case in court.”
“Our advertising technology tools help websites and apps fund their content, and enable businesses of all sizes to effectively reach new customers,” Dan Taylor, VP of Global Ads, Google said in a statement.
The Competition Bureau opened an investigation in 2020 to probe whether the search engine giant had engaged in practices that harm competition in the online ads industry, and expanded the probe to include Google’s advertising technology services earlier this year. Source: https://www.reuters.com/technology/canadas-antitrust-watchdog-sues-google-alleging-anti-competitive-conduct-2024-11-28/
Commander Sunita Williams and ISS crew responded to a toxic smell from a cargo spacecraft.
Sunita Williams has previously been in space for over 321 days during two stays aboard the space station.(HT/File Photo)
A crew of astronauts aboard the International Space Station (ISS), led by Commander Sunita Williams, quickly responded to an emergency triggered after a strange “toxic” smell was detected when opening a cargo spacecraft.
The hatch of the Progress MS-29 cargo spacecraft, which had brought essential supplies like food and fuel to the ISS, was opened and an unusual odour was detected. Astronauts also found small droplets inside the spacecraft.
Detecting a potential hazard, the crew quickly sealed off the hatch and isolated the affected area from the rest of the space station.
What did astronauts do?
NASA acted swiftly and air-scrubbing systems across the station were initiated to purify the air. To ensure safety, crew members wore personal protective equipment (PPE) while monitoring air quality. The quick response helped the crew safeguard themselves and continue with their operations.
NASA has now confirmed that the air quality aboard the ISS has returned to normal and that no safety risks remain for the crew. However, the cause of the smell is still unclear, and investigations are ongoing to determine whether the odour originated from the Progress spacecraft or from the vestibule connecting it to the ISS.
The team observed filamentous structures that they interpreted as microorganisms. (Photo: Jaxa)
Scientists have made a surprising discovery involving a sample from the asteroid Ryugu, which was found to be overrun with Earth-based microorganisms after its return to our planet.
This research highlights the remarkable ability of terrestrial microbes to colonise extraterrestrial materials, raising questions about contamination and the potential for life beyond Earth.
The samples were collected by Japan’s Hayabusa-2 spacecraft, which launched in December 2014 and successfully rendezvoused with Ryugu in June 2018.
After spending a year studying the asteroid, approximately 3,000 feet (900 meters) in diameter, Hayabusa-2 scooped up a sample that was returned to Earth on December 6, 2020. The sample was subsequently divided among various research teams for analysis.
Matthew Genge, team leader and palaeontologist at Imperial College London, stated, “We found micro-organisms in a sample returned from an asteroid. They appeared on the rock and spread with time before finally dying off.”
The team observed filamentous structures that they interpreted as microorganisms, likely belonging to common bacterial groups such as Bacillus, although their precise identification remains uncertain without DNA analysis.
Despite initial hopes that these microbes could represent extraterrestrial life, the researchers ruled this out.
Genge explained that no microbes were detected during initial scans of the sample before it was exposed to Earth’s atmosphere. Within just a week of exposure, microbial life began to flourish on the specimen’s surface, indicating that these organisms colonised the sample after its return to Earth.
The findings show the adaptability of Earth’s microorganisms and raise concerns about contamination during space missions.
Selling Chrome might not be the most painful part of the DOJ’s antitrust demands for Google.
Image: Cath Virginia / The Verge, Getty Images
Next year, a court might tell Google to do anything from syndicating its search results to selling the Chrome browser. These remedies and more were included in a request last week from the Justice Department, which is aiming to break up Google’s search monopoly.
The DOJ’s proposals clued in the public to what the government really wants out of Google. Though the complaint was filed in 2020, the first phase of the trial focused only on whether Google was liable for the antitrust harms the government alleged. After Judge Amit Mehta ruled this summer that Google is an illegal monopolist in general search services and search text advertising, the government has finally laid out its plan for how to restore competition, with proposals ranging from relatively simple tweaks in business practices to large structural changes.
The remedies the DOJ is seeking “would imperil Google’s ability to compete in its core business of search and search advertising,” says David Halliday, teaching associate professor of strategic management and public policy at George Washington School of Business. Judge Mehta accepting these remedies wouldn’t be “quite as big a deal as breaking up Standard Oil, but this would be a bigger deal, I think, than breaking up AT&T.”
If Mehta accepts only some of these proposals after a two-week trial in April, Google might be in better shape. But it could still see billions of dollars shaved off its empire. And according to experts watching the case, attention-grabbing options like a Chrome sale may not be the biggest threat to Google’s power.
Selling Chrome
The DOJ says that Google should be forced to sell Chrome because, as the largest browser by market share, it serves as a critical access point for search. It’s installed by default on Android phones and captures around 60 percent of the US browser market.
The goal here is to keep Google from owning a crucial gatekeeping platform that it can use to funnel users to its own search engine and steer them away from others. In practice, the proposal raises a lot of questions about how a sale would impact the web.
There are several options for potential buyers: Rumble, the anti-“cancel culture” video platform, has already declared its interest. Bloomberg Intelligence senior tech analyst Mandeep Singh says most other big tech companies that might want it, like Amazon and Meta, would likely be blocked as potential antitrust threats. Apple might be an exception, Singh says, if the government wants to incentivize it developing a rival search engine — something Google highly discouraged with a lucrative revenue-sharing deal. (That said, Apple already owns a major browser, which would consolidate the tech market in a different way.) Depending on who buys Chrome, the court could also approve conditions that constrain how a buyer leverages it.
Outside the standard big tech players, Chrome could also be valuable to large language model companies like OpenAI or Anthropic, where it could provide a distribution channel for their AI chatbots. “Chrome as an independent entity doesn’t generate any revenue,” Singh notes — its value lies in having a huge audience to monetize. So plugging it into another search-based product, especially if the DOJ wins other remedies like data-sharing rules, is a likely prospect.
Will this actually create a better, more competitive environment for search engines? Or will it just give another company (perhaps even a massive one like Microsoft, which works closely with OpenAI) its own anti-competitive advantage? “There is definitely an issue about whether you’re just simply transferring a valuable asset from one company where these assets are too tightly integrated, to another company,” says Shubha Ghosh, director of the Intellectual Property Law Institute at Syracuse University. DuckDuckGo SVP of public affairs Kamyl Bazbaz says the judge and DOJ “should be thoughtful about how to make sure that a sale doesn’t result in creating another space that’s hard to compete for all search engines.”
But even if a company like OpenAI can tie a browser with its search product, Singh says it wouldn’t necessarily have the same impact on the market. “When you think about the time spent on the internet as an aggregate, Google still has the most time spent,” thanks to everything from YouTube to Gmail, says Singh. That makes it a unique powerhouse for advertising — which is, fundamentally, how search engines (and likely, eventually, some AI services) make money. “You can’t replicate engagement.”
The DOJ says Google would also need to spin out its open-source browser project Chromium — which helps power the Brave, Opera, and Microsoft Edge browsers — as part of the Chrome sale. The nonpartisan Consumer Choice Center has expressed concern over this outcome, saying it could put the project “in jeopardy.” Singh seems less concerned, saying the open-source project may “take its own course,” but that’s still a significant risk for browser makers that rely on it.
Depending on what restrictions a buyer faces, Chrome could offer a huge distribution channel for whatever other products they offer. For consumers, the browser experience will likely depend on who ends up buying it — a company that already has a savvy browser-building team like Apple or a company or group without that specialized experience, like a private equity firm.
Selling off Chrome won’t necessarily mean its users stop going to google.com, whose name has been synonymous with “search engine” for decades. “I think Google Search will still be the most visited page,” says Singh. “But it’s just the ad business. When you think about why the ads shown on Google are so effective, Chrome is a big part of that.”
Avoiding self-preferencing
The Chrome sale is part of a larger project: stopping Google from using all its many tools and platforms to unfairly promote each other. The government says Google should be barred in particular from preferencing its search engine on other services — that means avoiding things like making Google Search mandatory on Android or degrading the quality of competing products there.
“Google would essentially be forbidden from managing a search engine that did anything other than collect people who went to google.com or set their preferences as google.com,” says Halliday. “It would actually allow all of their competitors much more flexibility than Google.”
The DOJ wants the judge to prohibit Google from doing things like giving its own search, search text ads, or AI products “preferential access to Android or Google-owned apps or data” relative to competitors. That means Google couldn’t do things like make its Gemini AI product mandatory on Android devices or degrade the quality of rival products on Android.
Selling precious data
The demand that Google sell Chrome might be the DOJ’s most eye-catching proposal, but another section could be an even bigger deal. The government wants Google to syndicate the very data its search engine is built upon — disrupting a self-reinforcing cycle that helps Google stay on top.
The DOJ says that as Google gobbled up access points to search engines, its huge volume of search queries gave it another advantage. It’s got more information than any competitor about which search results people find useful, and the government argues that makes it impossible for anyone to catch up. The result is that Google faces little competitive pressure to keep making its service better — which, even if it’s got the best search engine in the business, may end up making users’ experience worse. (If you don’t like AI summaries injected on top of your search results, for instance, do you want the engine using them to be the only game in town?)
The government’s proposal would (in theory) change that. For 10 years, the DOJ wants Google to syndicate its search results, ranking signals, and query data to competitors at a marginal cost. That kind of information could let competing search engines like Microsoft’s Bing or DuckDuckGo very quickly improve their products. If that happens, search engines’ competitive edge would likely center more around the additional product features they offer — anything from privacy to user interface details.
Singh called the DOJ’s search results remedy “the strongest” of all the proposals. Google has been able to build up a robust moat through its extensive data collection over the years, Singh says, so “if you make that search index available to everyone, then potentially, you could see more competition in search as a result of that.”
Google will make money from syndicating this data, but Singh says it won’t outweigh losing its huge advantage in search. He predicts it could cut Google’s search revenue by up to 10 percent, comparing the impact to Meta’s $10 billion revenue hit after Apple started requiring stricter privacy settings on iOS.
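As a rough back-of-envelope check on the scale of Singh’s estimate (the ~$175 billion annual search-ad revenue figure below is an assumption for illustration, not a number from this article):

```python
# Back-of-envelope: what "up to 10 percent" of Google's search
# revenue could mean in dollar terms.
# ASSUMPTION: ~$175B/year in search ad revenue (illustrative only,
# not a figure from the article).
search_ad_revenue_b = 175.0   # assumed annual search ad revenue, in $B
singh_cut = 0.10              # Singh's "up to 10 percent" estimate

revenue_loss_b = search_ad_revenue_b * singh_cut
print(f"Implied annual loss: ${revenue_loss_b:.1f}B")

# Comparison point cited in the article: Meta's roughly $10B revenue
# hit after Apple tightened privacy settings on iOS.
meta_hit_b = 10.0
print(f"Meta's iOS privacy hit, for scale: ${meta_hit_b:.0f}B")
```

Under that assumption, the implied loss would land in the high tens of billions per decade, which is why Singh reaches for the Meta comparison rather than dismissing the remedy as cosmetic.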
“If that were to happen, suddenly your LLM search companies like OpenAI, Anthropic, Mistral, Perplexity, these guys will be a formidable competitor in search,” Singh says. “The reason why people go to Google Search is because their search indexes are just the best.”
Ordering so-called behavioral remedies is far more likely than a breakup and still quite threatening to Google’s bottom line, according to Bloomberg Intelligence litigation analysts Matthew Schettenhelm and Jennifer Rie. “A behavioral order is likely to result in loss of the default position and market share,” they write in a recent research note. “The ultimate impacts will depend on the injunction’s scope and subsequent user behavior,” but they predict a possible net loss upwards of $28 billion. While Google plans to appeal the ruling, Schettenhelm and Rie predict that the DC District Court decision will be upheld, calling it “well-reasoned, thorough and based on a straightforward antitrust framework.”
It may sound surprising for the government to make Google license some of its most valuable data, but Ghosh says it’s possible. “Data is not really protected, per se, by intellectual property,” he says. “It’s not like Google created the data. They created the platform that allowed the data to be generated.” It’s like asking who owns a news event, he adds. “The news is just what happens, and you just have an agency that collects it or observes it. But that doesn’t by itself create any kind of property right.”
That said, Halliday notes that syndication fees will still almost certainly make Google rich. “By rich, I mean less rich than today,” he says, “but still very, very rich compared to other companies.”
Goodbye to exclusionary search deals
Perhaps the most straightforward request is a ban on Google striking exclusionary contracts for preferred placement of its search products on browsers and phones. Google would be banned from entering revenue-share agreements to distribute its search product or offer anything of value to Apple, Android phone-makers, or browser companies for any kind of default preinstallation or preferred placement. That means an end to things like its multibillion-dollar deal with Apple for prime placement on Safari for macOS and iOS.
This isn’t surprising given how much of the trial focused on that Apple revenue-sharing deal. But ironically, Google may actually benefit from some of these changes. “Even if you prevent these contracts from being done between companies and introduce a bidding mechanism, there may not be any other bidder that is willing to pay $20 billion to Apple,” says Singh. “In that case, if anything, the traffic acquisition costs may go down for Alphabet, and people may still use Google because it has the brand and these habits are hard to change. And so that may actually be a net benefit to Google.”
Halliday says prohibiting Google’s default agreement for Apple is “probably a wash at the end for Google” since it gets to save the money it spent there. But it would likely still impact Google’s revenue by reducing the number of people searching on its service.
The proposed judgment would require choice screens on Chrome and Android for users to select their preferred search engine. This kind of remedy has already been tried in Europe, where it’s reportedly had little impact on Google’s market share. But some proponents have pinned the blame on how Google implemented it, something an independent committee could review here. The state plaintiffs led by Colorado are also requesting Google fund a national education program that will inform consumers about the remedies. That could even include “short-term incentive payments to users who voluntarily choose a non-Google default GSE [general search engine] on a Choice Screen.”
While competitors to Google Search and Chrome would certainly benefit from many of the remedies, Mozilla — which runs the Firefox browser and relies massively on payments from Google — warns that the DOJ’s proposal could “unnecessarily impact browser competition.” Mozilla spokesperson Brandon Borrman says in a statement that “as written, the remedies will harm independent browsers without material benefit to search competition.” During the trial, Google pointed out that Firefox actually did switch to Yahoo search at one point — only to come back to Google after users hated it.
But DuckDuckGo’s Bazbaz says the industry could see a “rising tide lifts all boats” effect. The theory goes that, over time, ad revenue would follow other search engines as they grow, helping their revenue-share payments to distribution channels like Mozilla make up for the large payments those channels would lose from Google. And Apple in particular could have a greater incentive to develop its own search competitor without the exclusive agreement, something the government has frequently emphasized.
More transparency for advertisers
Judge Mehta also found Google maintained an illegal monopoly in the search text ads market, charging more than a reasonable competitive price for ads while degrading their quality. To remedy that, the government is proposing that Google give advertisers more transparency and control. Under the proposal, Google would have to give advertisers more insight into their ad performance and costs and give them more options in how their ads are targeted. Google would also have to let advertisers export their search text ad data, making it easier to switch to rivals.
What about Android?
There’s one demand the government notably didn’t make: forcing Google to sell Android. But it says the option should be available should Google fail to comply with other remedies or if these remedies prove less effective than anticipated.
“Google is going to have a lot of incentives to not comply,” says Adam Epstein, president and co-CEO of adMarketplace, a search advertising marketplace. “That really is where the ball game is going to get won or lost for the consumer and the advertisers and the publishers.” In that case, an Android sell-off is one of the final cards the DOJ could play.
“The only silver lining here for Alphabet is the DOJ is talking about a forced sale of Chrome and not Android.”
Like Chrome, Google’s Android mobile operating system serves as an important access point for search. Losing Android, Singh says, would “really hit them in the gut,” since it’s where Google gets so much of its mobile distribution. It would put its dominance in search up for grabs in a new way.
To my loyal and wonderful StudyFinds readers: First off, with Thanksgiving here, I want to tell you how thankful I am for your great support and enthusiasm over the years. Just the other day, I received an email from a newsletter subscriber who told me how much she looks forward to seeing our emails in her inbox each day. Every time I receive an email like that, I really do beam with joy knowing that we are fulfilling our mission here. Unfortunately, though, this Thanksgiving is a hard one for us: I regret to inform you that as we head into 2025, we are forced to dramatically downsize our operations.
Despite a wildly successful 2023 and the promise of a future where we could bring more research to the forefront as a larger news organization, those hopes came to a screeching halt earlier this year when Google implemented one of its multi-annual “core updates,” or algorithm changes. This update didn’t just devastate StudyFinds, but likely scores of other small sites like ours. The unexpected blow led to a drop of about 90% of the traffic we’d typically see on a daily basis, and I’ve been forced to cut nearly all of my writing and editing staff.
My heart hurts, and my dreams feel dashed; but even with this change, I have no plans to end my mission of bringing responsible, agenda-free research articles to your screen. And I am asking for your help.
What I ask of you, if you’ve enjoyed our content, is to support us not via donations but by signing up for our free newsletter, visiting our site regularly, and by following us on Google News, Facebook, Instagram, Pinterest, X, and YouTube. Read one (or more) of our articles every day. Share what you love (or hate) with your own social media following. Just doing this will help keep us afloat — we take no outside funding and operate solely on programmatic advertising revenue and affiliate revenue from our content.
Just by reading StudyFinds, you give us life and a future.
I love this site, I love our mission, I love my team, and I love knowing that we are helping people in their everyday lives. Ironically, in January of this year, Google even published a story about the site, our growth, how we’d given people jobs, and provided valuable content for free to our readers. What a difference a year makes.
Some sites and individuals have ripped into Google for the severe impacts of its recent core updates. It is a horrific punch to the gut and I completely understand. But Google breathed life into StudyFinds — and just as Google giveth, it’s a business too, and if it’s better for its bottom line, of course it will taketh away. It is what it is, and we will survive one way or another. As my mother always reminds me during the hard times, this too shall pass. Crises might knock me down, but never out.
The Story of StudyFinds
They say life is a roller coaster ride, and that couldn’t be more spot on when talking about the life of StudyFinds. Just two years ago, an idea I had to help build the site into something bigger by wedding research with a common consumer behavior wound up catapulting us onto the most exciting and exhilarating uphill adventure in our short history. You see, the engine for this ride is perhaps the most famous and definitively thrilling engine in the world — one that every content creator wants powering their success: Google.
Two years ago, Google validated StudyFinds for all the hard work we’d put into it, at least, so I thought. It gave us far greater prominence in its rankings, which led to record traffic and revenue that allowed me to hire a dozen writers looking for work, while providing our readers with much more content. My dream was realized. And though I’m not one for thrill rides, the roller coaster took off to glorious heights.
Of course, what goes up must come down, and as mentioned, in March of this year, the roller coaster drop was so steep—not just for StudyFinds but for many mom-and-pop content sites like ours—that we’re now on life support.
When I created StudyFinds at the end of 2016, it was more than just an extension of my love for research about long-term wellness, archaeological finds, new animal species, fast radio bursts from outer space, and other fascinating explanations as to why the world is the way it is. StudyFinds started as a passion project with the intent to give news readers a place to get the important details that I felt mainstream news sites, particularly broadcast news sites, often irresponsibly leave out.
After all, studies are more than just interesting headlines: we use them to make decisions about so many aspects of our daily lives, from our diet to sleeping habits to how we care for aging loved ones to how we feed our pets. The problem was that you can’t truly give someone the full scope of a study in 45 seconds of on-air time, and so many people were reading about studies on broadcast news sites that were simply regurgitated TV scripts. I wanted to help fix the problem, even on a shoestring budget.
My goal was to tell the story of the study and give readers a credible, agenda-free outlet to find the information they needed to make smart decisions. I especially wanted to create a civil, down-the-middle forum where readers could respectfully debate the veracity of the studies, since they’re usually limited or skewed by various flaws. (That last one, as comments sections continue to show, didn’t go as hoped.)
Of course, the site wasn’t perfect — it still isn’t — but it did grow fairly quickly and started getting eyeballs from prominent media outlets and personalities. It turned into a small operation that allowed me to hire a few freelance writers and an editor. It was, for all intents and purposes, pretty close to what I had envisioned from the start.
When my division at CBS was unexpectedly dismantled at the end of 2021, I turned to StudyFinds as a full-time operation because the growth over the years was undeniable. Maybe I really did have something here, and maybe being laid off only two months after I was given a major promotion was all meant to be. Maybe fate wanted me to make StudyFinds my life. I know I did.
Birth of ‘Best of the Best’
After 10 months of small growth and more uncertainty, an idea came to me based on the way I shop. You see, when I plan on making a big purchase, be it a smart TV or air fryer, I turn to Google and search for reviews. But I don’t read just one or two — I’ll read many until I find a product that is considered one of the best across multiple reviews.
I will often choose what I buy because after reading all the reviews, I know it’s among the best of the best.
It was the idea that changed everything. I hired a few writers to search for the consensus “best of the best” by creating lists of the top five products across 10 expert reviews. It started with products and went on to include everything from rock bands to core exercises. No other site was doing this, and in talking to others, I learned my searching behavior was apparently quite common. Doing the dirty work and researching reviews for consensus takeaways was something millions of people could certainly find useful. And as a bonus, we could provide people with studies we’ve published that speak to specific products, such as vitamins or foods.
The Best of the Best was born.
To say the Best of the Best took off immediately is an understatement. The response was extraordinarily positive, and there was no better indicator of that than the response from Google. The engine of the world seemed to really like our new content, and muscled us up the rankings as a reward. It made perfect sense to me: the Best of the Best was a valuable, one-of-a-kind tool, and we made sure to be good citizens by linking back multiple times in each article to every review we mentioned.
While our studies are the heart and soul of StudyFinds, the Best of the Best pieces were practical, interesting, and most of all, incredibly resourceful. Even today, generative AI platforms still can’t provide you with consensus reviews as robust as ours. What’s funny is while I’ve always pushed for civil, productive debate on the scientific studies, I found the most engaging, civil, and productive conversations were happening on the Best of the Best posts.
In addition to the tremendous traffic, the Best of the Best allowed us to add a new channel of revenue to the site through affiliate links. We didn’t really think twice about it, because so many major news outlets including The New York Times, CBS, CNet, and CNN were going all-in on affiliate marketing. (Interestingly, an ongoing core update this month is apparently pushing many of these sites out of the affiliate business. I predict this has to do with the rise of e-commerce through AI platforms.)
All this new revenue allowed me to hire a dozen writers and three more editors, while envisioning a larger operation in the years to come. The thought that maybe this was what I was meant to be doing no longer was a maybe: I had found my place and so much joy in the process. And as a cherry on top, the media watchdog NewsGuard gave us a 100/100 grade, while another watchdog, Fact Check/Media Bias, declared we were unbiased and a credible news site. How incredible it was.
Our First Red Alert
By the summer of 2023, everything was going right. But as any fairy tale goes, a major conflict arises just when all seems perfect. Something odd happened at the end of June: Google alerted us to a “manual action,” meaning someone from Google essentially auditing the site flagged something that they didn’t like. They suggested we were attempting to grow traffic by obtaining links from some bizarre, small website I wasn’t familiar with and had no connection to, but which apparently had been linking to us frequently. (The traffic it sent was so insignificant it wasn’t even a blip on my radar — so many odd sites linked to us on a daily basis.)
Aside from being entirely wrong, it was a massive blow to the site. Manual flags typically lead to significant drops in Google ranking and, as a result, traffic. Within days, the upward trajectory of StudyFinds began a downward shift. We reached the peak of the roller coaster ride.
In response to manual actions, Google suggests you reach out to the site it’s accusing you of scheming with and tell them to remove all the links to your site. It also suggests you “disavow” that site, essentially telling Google not to recognize outbound links from the “bad” site to yours.
I reluctantly “disavowed” it as Google suggested, but I did not contact the site so as to avoid any potential backlash or further unnecessary drama. Instead, I filed an emotional appeal, and weeks later, Google, sure enough, removed the mistaken manual flag. The thing about manual actions by Google, however, is that even if your reconsideration request is granted, you still may not recover. And in our case, we didn’t. The downward shift never changed direction even though we were cleared of the accusation.
The decline was slow-moving and we continued to plan on ways to improve the site and think bigger picture. Even with the downturn, at the time, we were still in good shape. The dream was not dashed, yet. I was sure we’d turn it around.
Google’s Core Issues
I mentioned at the beginning that it was the core update this past March that did us in. But that wasn’t the first core update to affect us. There were major core updates in August, September, October, and November of 2023 that were especially crushing for other sites. These updates, believed to target sites that were spam or bogus businesses, hit sites like ours that were legitimate and out to do good. One prominent example was HouseFresh.com, which received global attention after it documented how Google failed it, despite doing everything by the book. One article even showed how some of the biggest and richest names in the game stood to grow richer off of the core update victims.
Every day, I’d follow the conversation on X, Reddit, and SEO websites, keeping an eye out for any signs of change, good or bad. SEO expert Lily Ray extensively chronicled the destruction wrought by these core updates, conducting her own study of sorts on their widespread, game-changing impact.
These updates nicked StudyFinds at first, and the decline that began in late June accelerated. The November 2023 update did the most damage of the bunch, but even in December, we were still in a good place with the traffic we were receiving. I remained hopeful because of my confidence in our site, our team, and our content. And by the way, that confidence hasn’t wavered.
As we watched the traffic shrink, my team and I worked constantly to slow it down. Some meetings would last hours, as we’d sit on Zoom calls and lift up every possible rock in the depths of our tired brains for answers. “Could this be the cause?” “Perhaps this is why Google targeted us?” “No, it’s got to be this.”
Unfortunately, there’s no way to really know for sure. Google doesn’t provide you with an account manager at the ready for your questions. The company is a real-life Wizard of Oz: only the powerful folks behind the curtain can clue you in to its magic, or lack thereof. Like any top restaurant, the chef isn’t sharing their secrets with you.
We experimented with all sorts of tips and tricks others suggested would help, but nothing really worked. The only thing that was certain was we peaked in June 2023, and that dastardly manual flag started the reversal of our fortunes.
Then, March happened. More than 80% of our traffic was gone by the end of the chilling month. No new dreams bloomed with the arrival of spring. We even redesigned the site afterward, improving user experience and doing a better job of highlighting our team and our story, hoping perhaps that was what we needed all along. It wasn’t.
With the loss of most of our writers, I’ve now turned to artificial intelligence to help us interpret many studies, as noted in our AI policy and on our posts. I promise you that any content that includes AI assistance is thoroughly reviewed and will not be any less in quality than what we’ve always put forth. If anything, the incredible growth of generative AI will help keep StudyFinds alive and allow for stronger stories to be published, even with less. After all, there are already so many studies showing how helpful artificial intelligence is for doctors, scientists, and researchers.
The Sun is the largest object in our solar system and essential to our survival
Scientists in India have reported the “first significant result” from Aditya-L1, the country’s first solar observation mission in space.
The new findings, they said, could help keep power grids and communication satellites out of harm’s way the next time solar activity threatens infrastructure on Earth and in space.
On 16 July, the most important of the seven scientific instruments Aditya-L1 is carrying – Visible Emission Line Coronagraph, or Velc – captured data that helped scientists estimate the precise time a coronal mass ejection (CME) began.
Studying CMEs – massive fireballs that blow out of the Sun’s outermost corona layer – is one of the most important scientific objectives of India’s maiden solar mission.
“Made up of charged particles, a CME could weigh up to a trillion kilograms and can attain a speed of up to 3,000km [1,864 miles] per second while travelling. It can head out in any direction, including towards the Earth,” says Prof R Ramesh of the Indian Institute of Astrophysics, which designed Velc.
“Now imagine this huge fireball hurtling towards Earth. At its top speed, it would take just about 15 hours to cover the 150 million km Earth-Sun distance.”
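The “about 15 hours” figure follows from simple arithmetic; here is a minimal sketch of the check (the 150 million km distance and 3,000 km/s top speed are the figures quoted in the article):

```python
# Sanity check of the quoted CME travel time: distance divided by speed.
distance_km = 150_000_000  # approximate Earth-Sun distance, as quoted
speed_km_s = 3_000         # top CME speed quoted by Prof Ramesh

travel_hours = distance_km / speed_km_s / 3600  # seconds -> hours
print(f"{travel_hours:.1f} hours")  # prints "13.9 hours"
```

At roughly 14 hours, the result is consistent with the “just about 15 hours” quoted above; slower CMEs take days to make the same trip.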
The coronal ejection that Velc captured on 16 July had started at 13:08 GMT. Prof Ramesh, Velc’s Principal Investigator, who has published a paper on this CME in the prestigious Astrophysical Journal Letters, said it originated on the side of the Sun facing Earth.
“But within half an hour of its journey, it got deflected and went in a different direction, going behind the Sun. As it was too far away, it did not impact Earth’s weather.”
But solar storms, solar flares and coronal mass ejections routinely impact Earth’s weather. They also affect space weather in the region where nearly 7,800 satellites, including more than 50 from India, are stationed.
According to Space.com, they rarely pose a direct threat to human life, but they can cause mayhem on Earth by interfering with the Earth’s magnetic field.
Their most benign impact is causing beautiful auroras in places close to the North and South Pole. A stronger coronal mass ejection can cause auroras to show up in skies further away such as in London or France – as it did in May and October.
But the impact is much more serious in space where the charged particles of a coronal mass ejection can make all the electronics on a satellite malfunction. They can knock down power grids and affect weather and communication satellites.
“Today our lives fully depend on communication satellites and CMEs can trip the internet, phone lines and radio communication,” Prof Ramesh says. “That can lead to absolute chaos.”
The most powerful solar storm in recorded history occurred in 1859. Called the Carrington Event, it triggered intense auroral light shows and knocked out telegraph lines across the globe.
Scientists at Nasa say an equally strong storm was headed toward Earth in 2012 and we had “a close shave just as perilous”. They say a powerful coronal mass ejection tore through Earth’s orbit on 23 July but that we were “incredibly fortunate” that instead of hitting our planet, the storm cloud hit Nasa’s solar observatory STEREO-A in space.
In 1989, a coronal mass ejection knocked out part of Quebec’s power grid for nine hours, leaving six million people without power.
And on 4 November 2015, solar activity disrupted air traffic control in Sweden and at some other European airports, leading to travel chaos for hours.
Scientists say that if we are able to see what happens on the Sun and spot a solar storm or a coronal mass ejection in real time and watch its trajectory, it can work as a forewarning to switch off power grids and satellites and keep them out of harm’s way.
A view shows a Microsoft logo at Microsoft offices in Issy-les-Moulineaux near Paris, France, March 25, 2024. REUTERS/Gonzalo Fuentes/File Photo
Nov 27 (Reuters) – The U.S. Federal Trade Commission has opened a broad antitrust investigation into Microsoft (MSFT.O), including its software licensing and cloud computing businesses, a source familiar with the matter said on Wednesday.
The probe was approved by FTC Chair Lina Khan ahead of her likely departure in January. The election of Donald Trump as U.S. president, and the expectation he will appoint a fellow Republican with a softer approach toward business, leaves the outcome of the investigation up in the air.
The FTC is examining allegations that the software giant is potentially abusing its market power in productivity software by imposing punitive licensing terms to prevent customers from moving their data from its Azure cloud service to competing platforms, sources confirmed earlier this month.
The FTC is also looking at practices related to cybersecurity and artificial intelligence products, the source said on Wednesday.
Microsoft declined to comment on Wednesday.
Competitors have criticized Microsoft practices that they say keep customers locked into its cloud offering, Azure. The FTC fielded such complaints last year as it examined the cloud computing market.
NetChoice, a lobbying group that represents online companies including Amazon and Google, which compete with Microsoft in cloud computing, criticized Microsoft’s licensing policies and its integration of AI tools into Office and Outlook.
“Given that Microsoft is the world’s largest software company, dominating in productivity and operating systems software, the scale and consequences of its licensing decisions are extraordinary,” the group said.
Google in September complained to the European Commission about Microsoft’s practices, saying it made customers pay a 400% mark-up to keep running Windows Server on rival cloud computing operators, and gave them later and more limited security updates.
The FTC has demanded a broad range of detailed information from Microsoft, Bloomberg reported earlier on Wednesday.
The agency had already claimed jurisdiction over probes into Microsoft and OpenAI regarding competition in artificial intelligence, and started looking into Microsoft’s $650 million deal with AI startup Inflection AI.
Microsoft has been somewhat of an exception to U.S. antitrust regulators’ recent campaign against allegedly anticompetitive practices at Big Tech companies.
Facebook owner Meta Platforms (META.O), Apple (AAPL.O), and Amazon.com Inc. (AMZN.O) have all been accused by the U.S. of unlawfully maintaining monopolies.
Alphabet’s (GOOGL.O) Google is facing two lawsuits, including one where a judge found it unlawfully thwarted competition among online search engines.
Microsoft CEO Satya Nadella testified at Google’s trial, saying the search giant was using exclusive deals with publishers to lock up content used to train artificial intelligence.
It is unclear whether Trump, whose first administration launched several Big Tech probes, will ease up on the industry. JD Vance, the incoming vice president, has expressed concern about the power the companies wield over public discourse.
“The Trump administration was an aggressive enforcer of the antitrust laws,” said Andre Barlow, a lawyer with Doyle Barlow & Mazard, noting it filed suits against Google and Facebook.
“When administrations change, the agencies do not necessarily drop ongoing investigations,” he added, noting that “changes in administration can lead to evolving enforcement priorities and shifts in how aggressively certain types of conduct are scrutinized.”
Still beautiful. Still good. Still the wrong form factor for basically everybody.
We are beautiful, we are doomed.
The M4 iMac is a beautiful computer that feels more and more like it fell out of a universe where laptops never took off.
You can see it, can’t you? In a world without laptops, the iMac would be the ultimate computer. Instead of a box and a screen with a tangle of wires leading everywhere, everything you need is right there, jammed into an impossibly thin aluminum chassis. Monitor, processor, speakers, webcam, microphones, and all the ports: all built in. It’s elegant. It’s restrained. It’s lovely. It’s plenty fast enough for most people. The iMac would be in every library, in dorm rooms, in cubicles, in computer labs and living rooms. People would haul them to coffee shops.
Now imagine going to that universe and showing them a MacBook Pro. People might go for that instead.
The M4 iMac is a beautiful object and a good computer. The design is three years old, but it’s still stunning, especially from the back. It’s still the only Mac that comes in actual colors, and this year they’re even cheerier. It’s a little faster than last year, there’s more RAM in the base model, and it gets the same new webcam and anti-glare screen option as the M4 MacBook Pro.
But otherwise, it’s the same machine as it was in 2023, and it’s substantially the same as it was back in 2021. Don’t get me wrong, I love looking at this thing. I feel calmer and more productive walking into my office and seeing that unbroken expanse of blue instead of the rat’s nest of cables that come out of my regular monitor. There are just vanishingly few situations in which the most important thing about a computer is how it looks from the back, and the iMac asks you to give up too much in exchange.
The M4 iMac starts at $1,299 with an 8-core CPU, 8-core GPU M4 chip, 16GB of memory, and a 256GB SSD. As usual, the starting configuration seems to exist only to encourage you to spend more money. Only two of the base model’s four USB-C ports are Thunderbolt ports, it supports only one external display instead of two, the Ethernet port costs extra, and the Magic Keyboard it comes with doesn’t have Touch ID. All those problems disappear if you spend $200 more for the next tier, which also bumps you up to a 10-core CPU. If you’re buying an iMac for yourself, that $1,499 model is the real starting point.
My review unit, with a 10-core CPU and GPU, 24GB of RAM, a 1TB SSD, the $200 anti-glare nanotexture coating, and the full-sized Magic Keyboard comes out to $2,329. This is more than you should spend on an iMac.
The thing about an all-in-one computer is that all of those things have to be worth it. If you have to start plugging in a bunch of stuff to compensate for what’s built in, you might as well get something else. (This is what’s known in the biz as “foreshadowing.”) And the iMac mostly, mostly nails it.
That 10-core processor is the same chip as the base M4 Mac Mini or MacBook Pro, and in daily use, the iMac feels plenty fast. Even the 8-core base model should be good for at least five years and probably longer, thanks to that 16GB starting RAM. My work machine is a four-year-old M1 MacBook Air with 16GB of RAM, and I have no complaints about its speed in day-to-day work. (Port selection and the fact that I can only use one external monitor, yes.) Apple Silicon has some legs to it. You do have to hand it to them.
The iMac’s speakers are as good as ever, and the mics and noise-canceling are advanced enough that I never had to plug in a headset for a video call. The 12MP Center Stage camera is a big upgrade over last year’s model and much less obnoxious than the similar ultrawide one in the Surface Pro 11, which defaults to a zoomed-way-out view of your entire surroundings. It’s better at keeping me centered in the screen than the gimbal-mounted Insta360 Link webcam I usually use. And unlike the Insta360, it doesn’t randomly decide to point at my lap or bookshelf instead of my face or refuse to turn on because it’s not getting quite enough power from the USB hub behind my monitor.
Microsoft has spent years working toward a future where Windows and its many components exist entirely in the cloud. That reality looks a lot closer this week, with Microsoft revealing the Windows 365 Link mini PC, which streams a version of Windows 11 from the Windows 365 cloud service. It boots up in seconds, and there’s no ability to store data or run apps locally. It’s all about the cloud.
When Microsoft announced the Link onstage at Ignite, there were loud cheers from the IT professionals gathered in the audience, so this could become a popular option for some businesses looking to simplify and secure their Windows user base as we approach Windows 10 end of life next year.
While this might not sound immediately appealing over a traditional laptop or PC, the Link is designed specifically for businesses. Many have already migrated some or all of their user bases to virtual machines that live in the cloud to ensure no data is stored on any laptop or PC in the event of an employee leaving or a device getting lost or stolen.
Hackers are using fake AI video generation tools to spread malware on Windows and macOS devices, stealing personal data like passwords and cryptocurrency.
Hackers are using fake AI video tools to spread malware on Windows and macOS devices. (Image: Freepik)
With the rising popularity of AI-powered tools, hackers have seized the chance to target unsuspecting users. Reports in recent days describe yet another growing threat: malicious actors leveraging fake AI video tools to spread malware to Windows and macOS devices. These fake tools look genuine, with the promise of easy-to-use, free video generation services attracting users. Unfortunately, these websites are traps that steal personal information, such as cryptocurrency wallets, passwords, and other browsing data. Here’s what you need to know to keep yourself safe.
Fake AI Tools Lure Users In
Hackers have been spreading malware through fake websites masquerading as the AI video generator tool EditPro. Posts on social media platforms such as X (formerly Twitter) advertise these free AI tools, promising that no special skills are needed to use them. Clicking on these posts directs users to a website that looks genuine but is actually set up to infect their computers. Using familiar cookie banners and legitimate-looking interfaces, these malicious websites are highly believable, making them tough to spot.
How the Malware Works
Once users click on the “Get now” button, they unknowingly begin downloading harmful files – one for Windows (“Edit-ProAI-Setup-newest_release.exe”) and another for macOS (“EditProAi_v.4.36.dmg”). These files contain dangerous malware, including Lumma Stealer for Windows and AMOS for macOS. These malware types are designed to steal login credentials, cryptocurrency wallets, and browsing history; the stolen data is forwarded to the hackers, who may use it for subsequent attacks or sell it off on the dark web.
NASA’s Curiosity rover has been exploring Mars since it landed on the Red Planet in 2012, providing vital data about its history, climate, and potential for life. Recently, the rover concluded its study of the Gediz Vallis channel, a region located on the slopes of Mount Sharp, and is now heading toward a new target called the boxwork formation. This exploration is a critical part of Curiosity’s mission to understand how Mars transitioned from having a wetter, more habitable climate to the arid, dry conditions that dominate the planet today.
Gediz Vallis reveals clues about Mars’ past climate and geology.
Gediz Vallis is a channel or valley on Mars that reveals clues about the planet’s past climate and geological processes. The valley’s features suggest that water once flowed through this region, and scientists believe that it may have been formed by a combination of rivers, debris flows, and avalanches—a mix of wet and dry processes over time. This area is located on the slopes of Mount Sharp, a peak inside the Gale Crater, where the Curiosity rover has been operating for years. Mount Sharp itself has layers of ancient rocks that are key to understanding the evolution of Mars’ climate, as they have preserved evidence of the planet’s environmental changes over billions of years.
Before leaving Gediz Vallis, Curiosity captured a 360-degree panorama of the landscape, providing a rich visual record of the region. These images help scientists study the terrain and features in detail, giving them further insights into the channel’s formation and the processes that shaped it. By exploring areas like Gediz Vallis, Curiosity is helping researchers piece together how Mars evolved from a warm, potentially wetter world to the cold and dry planet it is now.
Sulphur-rich stones found by Curiosity provide clues to Mars’ past
One of the most exciting findings during Curiosity’s exploration of Gediz Vallis is the discovery of rare sulphur-rich stones. These stones are bright white in colour, and when Curiosity’s wheels crushed them, they revealed yellow crystals inside. This discovery is significant because sulphur is a key element when studying planetary environments, and it can be indicative of past chemical processes, including potential signs of microbial life.
What makes this discovery even more intriguing is that on Earth, sulphur is usually associated with volcanic activity or hot springs, where sulphur-rich compounds form in high-temperature environments. Mount Sharp, however, has neither. This raises a mystery for scientists: how did these sulphur-rich deposits form on Mars?
Ashwin Vasavada, Curiosity’s project scientist at NASA’s Jet Propulsion Laboratory, described the discovery as a “fascinating mystery.” Researchers are now analysing the data to determine the origins of these sulphur deposits. Possible explanations include chemical reactions involving water and minerals, but scientists are still investigating all potential causes. The discovery could be a key piece of the puzzle in understanding Mars’ history of water and its potential for supporting life in the distant past.
The move allows users to conveniently pay for all use cases. (File photo) | Photo Credit: REUTERS
One97 Communications (OCL), which owns the Paytm brand, on Tuesday said Paytm users will be able to make UPI payments at select international locations, including popular spots in the UAE, Singapore, France, Mauritius, Bhutan, and Nepal.
The move allows users to conveniently pay for all use cases including shopping, dining, and local experiences abroad using UPI through their Paytm app, according to a release.
“One97 Communications (OCL) that owns the brand Paytm, India’s leading payments and financial services distribution company and the pioneer of QR, Soundbox and mobile payments, has enabled Paytm users to make UPI payments at select international locations,” the release said.
Kim Kardashian posted a series of Instagram stories capturing her interactions with the humanoid robot Optimus by Elon Musk’s Tesla.
The image shows Elon Musk’s humanoid robot playing rock-paper-scissors with Kim Kardashian. (Instagram/@kimkardashian)
Kim Kardashian, in a series of videos posted to her Instagram Story, revealed her ‘new friend’: the humanoid robot Optimus, built by Elon Musk’s company Tesla. The socialite was seen praising the robot, making hearts with her hands, and playing rock-paper-scissors with it. The robot reportedly costs between $20,000 and $30,000.
The game
“Umm, rock-paper-scissors,” Kim Kardashian says in an Instagram video. The robot responds by raising its hands, and with a chuckle, the reality TV star says, “Oh, raise the roof! Yap.”
Kim makes the first move in the game, and the robot follows. When it loses, the socialite says, “Oh! You’re a little slow. I beat you.” What happens next is interesting.
Upon getting beaten in the game, the robot throws its hands up, which humans often do to signify frustration.
“Meet my new friend”
The billionaire reality star-turned-mogul introduced the robot on X with this caption. She also shared a video that shows her waving “Hi” to the robot and asking, “Can you do this?” while making a heart sign with her hand. The robot quickly follows her lead and completes the hand gesture, which surprises Kim as she gasps and says, “You know how to do that!”
Fortune reported that the robots were operated by handlers when they debuted rather than autonomously. However, in Kim’s videos, it is unclear if there is a handler or if the robot is operating on its own.
A college student was horrified after Google’s Gemini AI chatbot asked him to “please die” following a request for help with a homework assignment.
The company has previously encountered similar criticism. (Representative Image)
Vidhay Reddy, from Michigan, asked the chatbot for help on an assignment about issues adults face as they age. But the response quickly escalated into shocking, hateful language, including a chilling message that read, “This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.”
Reddy’s sister, Sumedha, who witnessed the exchange, described her terror after receiving the chatbot’s reply. “I wanted to throw all of my devices out the window. I hadn’t felt panic like that in a long time, to be honest,” she recalled in an interview with CBS News.
The interaction has heightened concerns about the harm that unfiltered and dangerous content produced by AI systems could cause.
Reflecting on the incident, Sumedha Reddy voiced concerns about the potential impact of such responses on more vulnerable users. “If someone who was alone and in a bad mental place, potentially considering self-harm, had read something like that, it could put them over the edge,” she added.
This could be so much better. Photo by Allison Johnson / The Verge
We could argue all day about the merits of iOS versus Android, but there’s one thing the Android ecosystem offers that you definitely won’t find from Apple right now: a decent midrange phone.
By “decent,” I mean something I can enthusiastically recommend. Not “Eh, if it’s literally your only option, then it’s fine,” which is how I’ve caveated my recommendations of the iPhone SE over the past couple of years. Remember the SE? Apple released the first version in 2016, putting a then-current A9 chip into an older body design for $399 compared to $649 for the iPhone 6S. I still have mine, and it rules.
Two generations later, the SE is still Apple’s most affordable iPhone, now starting at $429. For that price, you get a well-built phone with good dust and water resistance, a good camera as long as there’s enough light, and wireless charging. Not bad on the face of it, but it’s the things you have to put up with that make it very hard to recommend.
The screen is cramped, its LCD panel is dated, and the bezels are just massive. There’s only 64GB of storage at the base configuration, and the camera’s image quality falls apart in low light because there’s no night mode. Imagine selling a phone in 2024 with no night mode! In 2020, these concessions were acceptable, especially since that second-gen SE started at $399. But when the third-gen SE launched in 2022, with a price bump and relatively few substantial updates, it already felt like it was well behind the times.
Since then, midrange Android phones have only gotten better. The Google Pixel 8A is a straight-up banger. For $499, it comes with the same IP67 rating for dust and water resistance as the 2022 iPhone SE, plus a modern OLED screen, an excellent camera, 128GB of storage, and seven years of OS updates. Samsung has offered a couple of good midrange phones over the past few years, too, though its most recent Galaxy A55 skipped the US. But you can buy a Galaxy A35 for $399 and get plenty of bells and whistles, like an OLED screen and an IP67 rating. It all makes the 2022 SE look pretty shabby in comparison.
There’s hope, though; rumors of a fourth-gen iPhone SE, arriving in 2025, look promising. It might get an OLED screen, a modern design with slimmer bezels, and enough processing power and RAM to run Apple Intelligence. Factor in a bump in base storage — come on, you can’t sell a phone with just 64GB of storage in 2025 — and even if the price jumps up to $499, the iPhone SE starts looking like a decent option. Even if we don’t get everything rumored for the 2025 SE, an updated design and a base storage bump would go a long way.
Truthfully, there’s a lot of fat Apple could trim from the iPhone 16 to make a pared-down midrange phone that still delivers the stuff you want from an iPhone. You can — and Apple probably will — omit the Action Button, the camera control, and the Dynamic Island. The SE will probably stick with one rear camera, making the secondary ultrawide on the iPhone 16 an upgrade feature. Some people simply don’t care about all that extra stuff; I know this because I’m married to such a person.
Let’s retire these bezels, shall we Apple? Photo by Allison Johnson / The Verge
A midrange phone built with spare parts Apple had lying around may not sound that exciting, but I think it could be a really big deal. Analyst firm CIRP estimates that, in the US, the average selling price of an iPhone in September 2024 was $1,018. No doubt many of those were subsidized through carrier deals and financing. But the preference for more expensive models might also reflect the lack of choice on the low end, where people might have more flexibility in how they pay.
When every option is too expensive to buy out of pocket, why not take the carrier deal for the 16 Pro? Once you’re paying $20 per month for a new phone, why not pay an extra $4 per month and get the fancier model? You can see how sales start skewing to the more expensive phones.
Sources say India’s opposition, echoing a Chinese petition, is directed against an EU proposal; such measures put the cost of a low-carbon transition on developing and poor countries, violating principles of equity, India says
Participants attend day five of the UNFCCC COP29 Climate Conference at Baku in Azerbaijan on November 15, 2024. | Photo Credit: Getty Images
India has voiced its disapproval of “protectionist” measures that link trade barriers and carbon emissions, at the ongoing climate talks in Baku, Azerbaijan.
A week before the UN summit began, China had petitioned the Presidency of the 29th Conference of Parties (COP 29) to include a discussion on “climate change-related unilateral restrictive trade measures” as part of the formal conference agenda.
“A regime of unilateral trade measures on climate change,” India stated on Friday, “imposes the cost of the transition to low-carbon economies on developing and low-income countries… [S]uch measures are discriminatory… and detrimental to multi-lateral cooperation. They violate principles of equity.”
NASA astronaut Sunita Williams has addressed recent rumors about her health aboard the ISS, assuring everyone she’s in great shape and staying fit.
Sunita Williams
In response to recent tabloid reports speculating about her health aboard the International Space Station (ISS), NASA astronaut Sunita Williams has reassured her supporters that she’s in excellent shape. Rumors had circulated that Williams appeared “gaunt” in recent images, with some media outlets suggesting potential health issues in Earth’s orbit. However, both NASA officials and Williams herself have refuted these claims.
Speaking in a video interview from the ISS on November 12, Williams clarified, “I’m the same weight that I was when I got up here,” addressing concerns over her appearance. The astronaut emphasized the rigorous exercise regimen she maintains to counter the effects of microgravity on muscle and bone density. Williams highlighted her use of the ISS’s fitness equipment, including an exercise bike, treadmill, and resistance machines. “Weightlifting… has definitely changed me. My thighs are a little bit bigger, my butt is a little bit bigger,” she noted, underscoring the effectiveness of her workout routine.
Williams arrived at the ISS in June, alongside astronaut Butch Wilmore, on Boeing’s historic Starliner Crew Flight Test (CFT), initially planned as a 10-day mission. However, issues with the Starliner capsule’s thrusters required NASA to extend their stay to further investigate. In early September, the decision was made to bring Starliner back without a crew. Williams and her fellow crew members are now scheduled to return to Earth in February 2025 with SpaceX’s Crew-9 astronauts.
NASA astronaut Sunita Williams has experienced significant weight loss during her extended stay in space. Officials reassured the public that all International Space Station residents are under routine medical evaluations and are monitored by flight surgeons, ensuring their health.
Retired US Navy officer and NASA astronaut Sunita ’Suni’ Williams speaks about Diwali via video from the International Space Station at the end of October.(PTI)
NASA astronaut Sunita Williams may be feeling the ill effects of a prolonged stay in space — with recent photos showing a significant weight loss. Officials have sought to alleviate concerns about her health in recent days and insist that all International Space Station residents are closely monitored by a medical team.
“All NASA astronauts aboard ISS undergo routine medical evaluations and are monitored by dedicated flight surgeons. All are in good health,” NASA spokesperson Jimi Russell told the Daily Mail last week.
Williams and fellow astronaut Butch Wilmore have been in space since June after an eight-day mission went awry. The duo served as test pilots for Boeing’s Starliner, launched on June 5, but became ‘stranded’ after their spacecraft malfunctioned. The Starliner eventually returned to Earth without its crew after NASA deemed it ‘too risky’ to carry Williams and Wilmore.
The two astronauts have continued their work formally as part of the expedition and will return in February next year.
New photos of the astronaut have gone viral on social media, with many flagging her ‘drastic’ weight loss over the past year. Reports indicate that Williams started the trip weighing about 140 lbs but struggled to meet the high-calorie intake required to maintain her weight as her stay in space continued.
“She has lost a lot of weight. The pounds have melted off her and she’s now skin and bones. So it’s a priority to help her stabilize the weight loss and hopefully reverse it,” the New York Post quoted a NASA employee directly involved in the mission as saying.
According to the BCG report, 26 per cent of global companies utilise AI. Fintech, software, and banking sectors are increasingly using AI in their operations. Meanwhile, India is leading in Artificial Intelligence (AI) adoption as 30 per cent of Indian companies are maximising value through the use of such emerging technology.
India is leading in AI adoption at 30 per cent.
New Delhi: New research by Boston Consulting Group (BCG) found that India is leading in Artificial Intelligence (AI) adoption, with 30 per cent of Indian companies maximising value through the use of such emerging technology.
According to the BCG report, 26 per cent of global companies utilise AI. Fintech, software, and banking sectors are increasingly using AI in their operations.
The BCG report claimed that after years of investing, hiring talent, and launching pilots in artificial intelligence (AI), CEOs are now seeking tangible returns from the technology. At the same time, the report said, realising AI’s full value remains difficult.
According to the research, only 26 per cent of companies have developed the necessary set of capabilities to move beyond proofs of concept and generate tangible value even with the widespread implementation of AI programs across industries.
The report, titled ‘Where’s the Value in AI?’, was based on a survey of 1,000 CxOs (C-suite executives) and senior leaders from over 20 sectors, spanning 59 countries in Asia, Europe, and North America, and covering as many as ten major industries.
The rocky planet, about the same mass as Earth, revolves around a white dwarf in the constellation Sagittarius.
The discovery brings a glimmer of hope for Earth’s survival when our sun enters its final stages.
A team of astronomers has uncovered an Earth-like planet orbiting a star located 4,000 light years away from the solar system, potentially offering insights into Earth’s distant future. The rocky planet, about the same mass as Earth, revolves around a white dwarf in the constellation Sagittarius.
The discovery brings a glimmer of hope for Earth’s survival when our sun enters its final stages. It suggests that Earth could potentially avoid being consumed by the expanding sun, opening up possibilities for human migration to the outer solar system, with moons such as Europa, Callisto, and Ganymede around Jupiter, or Enceladus near Saturn, becoming possible havens for future generations.
What is a white dwarf?
A white dwarf is a star’s remnant after it has run out of nuclear fuel and shed its outer layers. It symbolises the sun’s eventual fate. The sun will grow into a red giant as its nuclear fuel runs out, then shrink to become a white dwarf. The extent of its expansion will determine which planets in the solar system will be engulfed – Mercury and Venus are likely to be consumed. But what about Earth?
In a study published in Nature Astronomy, researchers from the University of California, Berkeley, used the Keck Telescope in Hawaii to observe a system designated KMT-2020-BLG-0414. The system contains a white dwarf star with an Earth-sized planet in an orbit twice as far from the star as Earth is from the sun. Alongside the planet is a brown dwarf – an object roughly 17 times the mass of Jupiter.
This finding supports the theory that as the sun expands into a red giant, its loss of mass will push the planets into more distant orbits. This phenomenon could allow Earth to escape destruction. Jessica Lu, an associate professor of astronomy at UC Berkeley, noted, “Whether life can survive on Earth through that (red giant) period is unknown. But certainly, the most important thing is that Earth isn’t swallowed by the Sun when it becomes a red giant.”
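The reasoning above can be sketched with a standard back-of-the-envelope relation (not taken from the study itself): for mass loss that is slow compared with the orbital period, a planet’s orbit expands in inverse proportion to the star’s remaining mass.

```latex
% Adiabatic (slow) stellar mass loss: the planet's semi-major
% axis a grows as the stellar mass M shrinks,
\[
  a_f = a_i \,\frac{M_i}{M_f}.
\]
% If the sun sheds roughly half its mass as a red giant
% (a commonly quoted estimate), then
\[
  M_f \approx \tfrac{1}{2} M_i \;\Rightarrow\; a_f \approx 2\,a_i,
\]
% which is consistent with the observed planet orbiting its
% white dwarf at about twice the Earth-sun distance.
```

The factor-of-two mass loss here is an illustrative assumption; the exact amount the sun will shed is one of the uncertainties the researchers cite.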
Future of Earth
“We do not currently have a consensus whether Earth could avoid being engulfed by the red giant sun in six billion years,” said Keming Zhang, the lead author and a former doctoral student at UC Berkeley, who is now an Eric and Wendy Schmidt AI in Science Postdoctoral fellow at UC San Diego.
“In any case, planet Earth will only be habitable for another billion years, at which point Earth’s oceans would be vaporized by the runaway greenhouse effect — long before the risk of getting swallowed by the red giant.”
Now the best value in Apple’s lineup, the Mac Mini takes its ideal form with an impressively small design that compromises on very little.
Why wouldn’t you want the new Mac Mini? Over the last several days of testing Apple’s redesigned desktop Mac, I’ve been impressed by all the power and potential crammed into this very compact machine. For a starting price of $599 and with 16GB of RAM now standard, the M4 Mac Mini has immediately become the best value in Apple’s entire Mac lineup. It’s more than capable for most computing tasks today, and if my M1 MacBook Air is anything to go by, the Mini won’t feel slow (or anything close to it) for at least the next four or five years.
Apple provided me with two very different Mac Mini units. This first review is focused on the standard M4 model, which includes a 10-core CPU, 10-core GPU, and the default 16GB of memory. My machine has 512GB of storage, which bumps its price to $799, but everyday performance should be identical to that of the base 256GB configuration. I’ve also got a kitted-out M4 Pro model, which raises the price to a stratospheric $2,199. For that money, I’d damn well expect the M4 Pro Mini to be a powerhouse.
But most people won’t need to spend anywhere near that amount. The regular M4 edition offers a lot in its own right and would be my recommendation for anyone who wants a dependable desktop Mac — especially if you’ve already settled on a monitor and / or keyboard that you love.
As always with the Mac Mini, Apple provides the computer; you bring your own display, keyboard, and mouse. Apple is more than happy to sell you its own Studio Display and peripherals, but with the Mini now able to run up to three displays at once (up from two on the M2 model), you’ve got a ton of runway for creative, versatile desk setups. If you’ve perfected your work-from-home office, you can just add the Mini to whatever’s already there without having to rearrange everything.
And the machine itself will barely take up any of that space. Measuring 5 inches wide, 5 inches deep, and 2 inches tall, the 2024 Mac Mini’s footprint is less than half that of the previous enclosure, which was designed around the Intel platform. (Just look at how much unused space there was after the Apple Silicon transition.) It’s not quite as tiny as an Apple TV 4K, but to me, it’s the most striking example yet of what Apple can achieve with hardware that’s purpose-built for its M-series chips. Another welcome change is that the new Mini puts some ports right on the front, whereas its predecessors made me blindly plug everything into the back — or turn the whole thing around so I could actually see what I was doing.
The new M4 Mac Mini above its less mini predecessor. Since I know someone will ask, the iPhone wallpaper is by BasicAppleGuy.
Now you’ve got a pair of 10Gbps USB-C ports (USB 3) and a headphone jack up front. Around back are three Thunderbolt 4 (USB 4) ports, HDMI, and a gigabit ethernet jack that’s upgradable to 10-gigabit speeds. I do miss the SD card slot you’d find on a Mac Studio or MacBook Pro, but I haven’t once felt disappointed about the lack of USB-A. Everyone has a different accessory situation, so its absence might sting more for you, but it’s easy to just plug in a dongle if necessary. The M4 Pro Mac Mini has even faster, brand-new Thunderbolt 5 connectivity, with theoretical data transfer speeds of up to 120Gb/s (three times faster than Thunderbolt 4). But that’s mostly targeted at creative professionals and intensive video work. Despite its dramatically reduced dimensions, the Mini still retains an internal power supply, so there’s no cumbersome brick to worry about.
Like the Mac Studio, the Mac Mini finally has some ports on the front.
Notice that I haven’t mentioned a power button yet. That’s because Apple made the curious decision to move it to the bottom of the machine near the rear left corner. Do I wish the button were someplace else? Sure. Pressing it requires reaching over the Mini and lifting the unit up slightly. It’s silly but hasn’t negatively affected my experience in any material way. If you’ve got an Apple Magic Keyboard with Touch ID, you’ll be reminded of its awkward location right during setup, when you’ve got to double-press the power button to make a secure link between the fingerprint sensor and Mac. The Mini is used in a wide mix of environments including home theater systems and live event production. I could see the button’s position becoming a hassle in some of those scenarios, but if you’re using it on a desk, it’s more of a strange quirk than an annoyance. And there are always workarounds.
This is not the ideal location for a power button, but it’s workable.
Apple’s revamped thermal system for the Mini keeps the M4 model running quietly. Even when I’m deep in a Lightroom photo editing session, I don’t hear the fan. I’m certain the M4 Pro’s extra GPU cores would make those RAW edits even faster, but the regular M4 is up to the task for most photo work. Elsewhere, the machine has rarely missed a step, no matter what I throw at it. I’m no videographer, so I can’t speak to whether serious editing work would expose the M4’s limits. If there’s one use case that warrants stepping up to the M4 Pro, it’s likely that.
Everywhere else, the M4 Mini just hums along. As you can see in our benchmarks, it’s right in keeping with the M4 iMac and MacBook Pro. I’ve barely sampled any of the Apple Intelligence features in macOS Sequoia — I don’t find them particularly compelling — but I’m already a big fan of iPhone Mirroring and the built-in window tiling that has allowed me to finally bid farewell to Moom. MacOS feels like it’s in a great place these days.
With the Mini now being so charmingly small, it’s easy to dream up all sorts of hardware and software possibilities. Why not give people a choice of colors like the iMac? As for software, this thing looks more like an Apple TV than ever before, so what if it sometimes behaved like one too? Imagine a TV-optimized entertainment interface — yes, like a modern Front Row — that would kick in whenever a TV screen is connected over HDMI. The M4 is more than capable enough to juggle both macOS and a tvOS-like experience.
The world’s first wooden satellite, built by Japanese researchers, was launched into space on Tuesday, in an early test of using timber in lunar and Mars exploration.
LignoSat, developed by Kyoto University and homebuilder Sumitomo Forestry (1911.T), will be flown to the International Space Station on a SpaceX mission, and later released into orbit about 400 km (250 miles) above the Earth.
Named after the Latin word for “wood”, the palm-sized LignoSat is tasked to demonstrate the cosmic potential of the renewable material as humans explore living in space.
“With timber, a material we can produce by ourselves, we will be able to build houses, live and work in space forever,” said Takao Doi, an astronaut who has flown on the Space Shuttle and studies human space activities at Kyoto University.
With a 50-year plan of planting trees and building timber houses on the moon and Mars, Doi’s team decided to develop a NASA-certified wooden satellite to prove wood is a space-grade material.
“Early 1900s airplanes were made of wood,” said Kyoto University forest science professor Koji Murata. “A wooden satellite should be feasible, too.”
Wood is more durable in space than on Earth because there is no water or oxygen to rot or burn it, Murata added.
A wooden satellite also minimises the environmental impact at the end of its life, the researchers say.
Takao Doi, a former Japanese astronaut and professor at Kyoto University, holds an engineering model of LignoSat during an interview with Reuters at his laboratory at Kyoto University in Kyoto, Japan, October 25, 2024. REUTERS/Irene Wang/File Photo
Decommissioned satellites must re-enter the atmosphere to avoid becoming space debris. Conventional metal satellites create aluminium oxide particles during re-entry, but wooden ones would just burn up with less pollution, Doi said.
“Metal satellites might be banned in the future,” Doi said. “If we can prove our first wooden satellite works, we want to pitch it to Elon Musk’s SpaceX.”
INDUSTRIAL APPLICATION
The researchers found that honoki, a kind of magnolia tree native to Japan and traditionally used for sword sheaths, is best suited for spacecraft, after a 10-month experiment aboard the International Space Station.
LignoSat is made of honoki, using a traditional Japanese craft technique that requires no screws or glue.
Once deployed, LignoSat will stay in orbit for six months, with its onboard electronics measuring how the wood endures the extreme environment of space, where temperatures swing from -100 to 100 degrees Celsius every 45 minutes as the satellite passes between darkness and sunlight.
LignoSat will also gauge wood’s ability to reduce the impact of space radiation on semiconductors, making it useful for applications such as data centre construction, said Kenji Kariya, a manager at Sumitomo Forestry Tsukuba Research Institute.
Last year, India became the first country in the world to land near the previously unexplored lunar south pole
India recently announced a host of ambitious space projects and approved 227bn rupees ($2.7bn; £2.1bn) for them.
The plans include the next phase of India’s historic mission to the Moon, sending an orbiter to Venus, building of the first phase of the country’s maiden space station and development of a new reusable heavy-lifting rocket to launch satellites.
It’s the single largest allocation of funds ever for space projects in India, but considering the scale and complexity of the projects, they are far from lavish and have once again brought into focus the cost-effectiveness of India’s space programme.
Experts around the world have marvelled at how little the Indian Space Research Organisation’s (Isro) Moon, Mars and solar missions have cost. India spent $74m on the Mars orbiter Mangalyaan and $75m on last year’s historic Chandrayaan-3 – less than the $100m spent on the sci-fi thriller Gravity.
Nasa’s Maven orbiter had cost $582m and Russia’s Luna-25, which crashed on to the Moon’s surface two days before Chandrayaan-3’s landing, had cost 12.6bn roubles ($133m).
Despite the low cost, scientists say India is punching well above its weight by aiming to do valuable work.
Chandrayaan-1 was the first to confirm the presence of water in lunar soil and Mangalyaan carried a payload to study methane in the atmosphere of Mars. Images and data sent by Chandrayaan-3 are being looked at with great interest by space enthusiasts around the world.
So how does India keep the costs so low?
Retired civil servant Sisir Kumar Das, who looked after Isro’s finances for more than two decades, says the frugality can be traced back to the 1960s, when scientists first pitched a space programme to the government.
India had gained independence from British colonial rule only in 1947 and the country was struggling to feed its population and build enough schools and hospitals.
“Isro’s founder and scientist Vikram Sarabhai had to convince the government that a space programme was not just a sophisticated luxury that had no place in a poor country like India. He explained that satellites could help India serve its citizens better,” Mr Das told the BBC.
India makes historic landing near Moon’s south pole
The year India reached the Moon – and aimed for the Sun
But India’s space programme has always had to work with a tight budget in a country with conflicting needs and demands. Photographs from the 1960s and 70s show scientists carrying rockets and satellites on cycles or even a bullock cart.
Decades later and after several successful interplanetary missions, Isro’s budget remains modest. This year, India’s budgetary allocation for its space programme is 130bn rupees ($1.55bn) – Nasa’s budget for the year is $25bn.
Mr Das says one of the main reasons why Isro’s missions are so cheap is the fact that all its technology is home-grown and machines are manufactured in India.
In 1974, after Delhi conducted its first nuclear test and the West imposed an embargo, banning transfer of technology to India, the restrictions were “turned into a blessing in disguise” for the space programme, he adds.
“Our scientists used it as an incentive to develop their own technology. All the equipment they needed was manufactured indigenously – and the salaries and cost of labour were decidedly less here than in the US or Europe.”
Science writer Pallava Bagla says that unlike Isro, Nasa outsources satellite manufacturing to private companies and also takes out insurance for its missions, which add to their costs.
“Also, unlike Nasa, India doesn’t do engineering models which are used for testing a project before the actual launch. We do only a single model and it’s meant to fly. It’s risky, there are chances of crash, but that’s the risk we take. And we are able to take it because it’s a government programme.”
Mylswamy Annadurai, chief of India’s first and second Moon missions and Mars mission, told the BBC that Isro employs far fewer people and pays lower salaries, which makes Indian projects competitive.
He says he “led small dedicated teams of less than 10 and people often worked extended hours without any overtime payments” because they were so passionate about what they did.
The tight budget, he said, sometimes sent them back to the drawing board, but it also pushed them to think out of the box and led to new innovations.
“For Chandrayaan-1, the allocated budget was $89m and that was okay for the original configuration. But subsequently, it was decided that the spacecraft would carry a Moon impact probe which meant an additional 35kg.”
Scientists had two choices – use a heavier rocket to carry the mission, but that would cost more, or remove some of the hardware to lighten the load.
“We chose the second option. We reduced the number of thrusters from 16 to eight and pressure tanks and batteries were reduced from two to one.”
Reducing the number of batteries, Mr Annadurai says, meant the launch had to take place before the end of 2008.
“That would give the spacecraft two years while it went around the Moon without encountering a long solar eclipse, which would impact its ability to recharge. So we had to maintain a strict work schedule to meet the launch deadline.”
Mangalyaan cost so little, Mr Annadurai says, “because we used most of the hardware we had already designed for Chandrayaan-2 after the second Moon mission got delayed”.
Mr Bagla says India’s space programme coming at such low cost is “an amazing feat”. But as India scales up, the cost could rise.
At the moment, he says, India uses small rocket launchers because it doesn’t have anything more powerful. But that means India’s spacecraft take much longer to reach their destination.
Besides wild poliovirus cases, the WHO registry has data only on circulating VDPV cases, and not on cases in the other two VDPV categories — iVDPV and aVDPV
The WHO registry on polio — wild poliovirus and vaccine-derived poliovirus — is at best sketchy | Photo Credit: VELANKANNI RAJ B
On June 17, 2022, WHO published a report of a VDPV type-1 case that was detected in an environmental sewage sample in Kolkata on April 25, 2022. The report said that genetic sequencing “established that it was not related to any of the previously identified VDPV1 viruses and was likely to be iVDPV (excreted from an immune-deficient individual)”. But nearly three months after the results of the Meghalaya polio case were shared with WHO on August 12, and more than one and a half months after follow-up results confirmed that the immunological profile of the child was normal and that there was no evidence the virus was circulating in the community, WHO is yet to publish the details.
If WHO’s failure or delay in publishing the case details is puzzling, it has now come to light that, besides not reporting vaccine-associated paralytic polio (VAPP) cases, the WHO registry does not report all categories of vaccine-derived poliovirus (VDPV) cases either. WHO classifies VDPV cases into: 1) circulating vaccine-derived polioviruses (cVDPVs), 2) immune-deficiency associated VDPVs (iVDPVs), and 3) ambiguous vaccine-derived polioviruses (aVDPVs). However, besides wild poliovirus cases, the registry has data only on circulating VDPV cases and not on cases in the other two categories — iVDPV and aVDPV; in fact, it does not even list those two categories. And even for circulating VDPV cases, the registry does not classify cases by poliovirus serotype — type-1, type-2 or type-3.
Now that Apple has finished announcing its slate of new M4-equipped Mac computers, we’ve finally been able to see them in person. The Verge’s Vjeran Pavic got some hands-on time with the new products and took some gorgeous photos that you can peruse below.
I’m blown away by how small the new Mac Mini is; the old Mac Mini, which was already small, seems giant in comparison. Vjeran tells me that, in person, the smaller Mac Mini is cute but that it’s “more like a mini Mac Studio than a mini Mac Mini.” He also says there’s no way to reach the power button, which is on the underside of the computer, without lifting it up.
As for the other computers, the new colors of the iMac really pop when they’re lined up together in these photos, if you ask me. And while the space black color of the MacBook Pros isn’t totally new, seeing it in these photos makes me really wish that Apple would bring it to the MacBook Air.
The new Macs are all available to preorder now ahead of their official release next week.
(L-R) Russia’s Alexander Grebenkin and NASA astronauts Michael Barratt, Matthew Dominick and Jeanette Epps (Picture: AP)
A NASA astronaut was flown to a hospital after returning to Earth with what the US space agency described as an unspecified medical condition.
Three NASA astronauts and a fourth from the Russian space agency spent almost eight months at the International Space Station after their homecoming had been stalled by problems with the Starliner capsule.
The crew splashed down off Florida’s coast at 7.29am on Friday aboard SpaceX’s Crew Dragon.
The astronaut, whom NASA did not name for privacy reasons, is understood to be suffering from a medical issue, but the agency has not yet disclosed what it is.
NASA initially said the entire crew was flown to Ascension Sacred Heart Pensacola hospital as a precaution, but did not specify whether all or some of them had been experiencing issues.
The other three crew members have left the hospital and returned to Houston, the space agency said.
‘The one astronaut who remains at Ascension is in stable condition and is under observation as a precautionary measure,’ NASA said in a statement, referring to Ascension Sacred Heart Pensacola hospital.
The agency said it will not share the nature of the astronaut’s condition.
SpaceX launched the four personnel – NASA’s Matthew Dominick, Michael Barratt and Jeanette Epps, and Russia’s Alexander Grebenkin – in March.
Imagine standing in the heart of Copenhagen, surrounded by an orchestra of 30 speakers buried in the ground. However, instead of Mozart or Beethoven, you’re about to hear something far more primordial — the haunting sound of Earth’s magnetic field as it flipped 41,000 years ago. Welcome to the cutting edge of geomagnetic research, where science meets art in the most unexpected way.
In a groundbreaking project, scientists from the Technical University of Denmark and the German Research Centre for Geosciences have transformed data from the European Space Agency’s Swarm satellite mission into an auditory experience that’s both fascinating and slightly unnerving.
The star of this cosmic concert? The Laschamp event, a brief but dramatic period when Earth’s magnetic field did the unthinkable – it completely reversed direction. During this geomagnetic rollercoaster, our planet’s magnetic shield weakened to a mere 5% of its current strength, leaving Earth more vulnerable to cosmic rays than ever before.
“The team used data from ESA’s Swarm satellites, as well as other sources, and used these magnetic signals to manipulate and control a sonic representation of the core field. The project has certainly been a rewarding exercise in bringing art and science together,” explains Klaus Nielsen, a musician and project supporter from the Technical University of Denmark, in a 2022 media release after researchers first converted the magnetic field into sound.
The result is a soundscape that blends familiar natural noises like creaking wood and falling rocks with otherworldly tones, creating an audio journey that’s both familiar and alien. It’s a symphony of science, with each of the 30 speakers in Copenhagen’s Solbjerg Square representing a different location on Earth and demonstrating how our magnetic field has fluctuated over the last 100,000 years.
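The release does not spell out the project's exact data-to-sound mapping, so the following is only an illustrative sketch of one plausible approach: hypothetical field-strength values (standing in for Swarm-derived intensities, with the dip to ~5% representing the Laschamp weakening) are mapped linearly to pitch and written out as a WAV file using Python's standard library.

```python
import math
import struct
import wave

# Hypothetical field-strength samples (fraction of present-day strength);
# the 0.05 trough stands in for the Laschamp-era collapse to ~5%.
field = [1.0, 0.8, 0.5, 0.2, 0.05, 0.1, 0.4, 0.7, 1.0]

RATE = 8000             # audio sample rate in Hz
SEG = 0.25              # seconds of audio per data point
LO, HI = 110.0, 880.0   # weakest field -> lowest pitch, strongest -> highest

samples = []
for strength in field:
    freq = LO + (HI - LO) * strength   # linear intensity-to-frequency map
    for n in range(int(RATE * SEG)):
        samples.append(math.sin(2 * math.pi * freq * n / RATE))

# Write 16-bit mono PCM; the field's collapse is audible as a falling tone.
with wave.open("laschamp_sketch.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(b"".join(
        struct.pack("<h", int(s * 0.8 * 32767)) for s in samples))
```

The real installation presumably uses far richer synthesis (the creaking-wood and falling-rock textures), but the core idea, driving sound parameters with geophysical time series, is the same.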
So, why go to all this trouble to turn invisible magnetic fields into sound? “The intention, of course, is not to frighten people – it is a quirky way of reminding us that the magnetic field exists and although its rumble is a little unnerving, the existence of life on Earth is dependent on it,” Nielsen says, shedding light on the project’s purpose.
Indeed, Earth’s magnetic field is our invisible protector, a complex and dynamic bubble that shields us from cosmic radiation and the relentless solar wind. Generated by the swirling liquid iron in Earth’s outer core, this magnetic dynamo is essential for life as we know it.
The Swarm satellite mission, launched by ESA in 2013, aims to unravel the mysteries of this magnetic shield. By measuring magnetic signals from Earth’s core, mantle, crust, oceans, and even the ionosphere and magnetosphere, Swarm is helping scientists understand how our magnetic field is generated and how it changes over time.
When people search for “Adam driver Megalopolis” on Instagram or Facebook right now, instead of seeing posts about Francis Ford Coppola’s latest film, they’re shown a warning, titled, “Child sexual abuse is illegal.”
That bizarre fact was pointed out in a post on X yesterday, and as of today, I’m still seeing it when I search for the phrase. But why? Well, it doesn’t seem to have anything to do with recent Threads moderation failures. Nor are there bombshell revelations about Megalopolis or its main star that I’m aware of.
Yikes. Screenshot: Instagram
Instead, Facebook and Instagram seem to be blocking searches containing “mega” and “drive” — I saw it when I searched with those two words together, but not when I searched for “Megalopolis,” “Adam Driver,” or either term mixed with any others. The issue isn’t new, either, as this nine-month-old Reddit post about searching for “Sega mega drive” on Facebook illustrates. (That search seems to work as expected, now.)
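The behavior is consistent with a naive substring filter that flags any query containing both terms, regardless of word boundaries. The rule below is a guess for illustration, not Meta's actual moderation code:

```python
# Hypothetical keyword-pair blocklist of the kind the article's
# observations suggest: a query is flagged if it contains both
# substrings anywhere, even inside longer words.
BLOCKED_PAIRS = [("mega", "drive")]

def is_flagged(query: str) -> bool:
    q = query.lower()
    return any(a in q and b in q for a, b in BLOCKED_PAIRS)

print(is_flagged("Adam driver Megalopolis"))  # flagged: "mega" + "drive"
print(is_flagged("sega mega drive"))          # flagged
print(is_flagged("Megalopolis"))              # not flagged: no "drive"
print(is_flagged("Adam Driver"))              # not flagged: no "mega"
```

Matching on word boundaries instead (e.g. the regex `r"\bmega\b"`) would stop "Megalopolis" and "Driver" from triggering the pair, which may be the kind of fix behind the "Sega mega drive" search working again.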
Two men have become considerably more powerful inside Google.
The first is Demis Hassabis, who already heads up Google’s AI research and is now better positioned to compete with ChatGPT. The second is Nick Fox, a company veteran who now oversees the company’s cash cow: Search.
Before, Hassabis didn’t have control of the product team that put the models his researchers developed into the world. Now, he oversees the end-to-end experience of Gemini, from the research driving the models to the chatbot people use to access them.
Brazil’s Polícia Federal announced the arrest of the hacker linked to a breach that leaked 2.9 billion records containing sensitive personal information, including some Social Security numbers. The data from that hack, which came to light in August, was put up for sale on the dark web in April by an entity identifying themselves as USDoD.
As pointed out by BleepingComputer, according to a machine translation of the department’s press release, the hacker was linked to “two publications selling” federal police data. The hacker also boasted of disclosing the personal data of 80,000 members of the FBI’s InfraGard program, the department said.
Security researchers at Atlas have created a tool to search the leaked records and told PCMag that the leak contains about 272 million unique SSNs, along with as many as 600 million phone numbers. National Public Data and its parent company, Jerico Pictures, filed for Chapter 11 bankruptcy earlier this month, facing a flood of lawsuits and potential penalties over the incident.
Boeing-Lockheed joint venture United Launch Alliance’s next-generation Vulcan rocket is launched for the second time on a certification test flight from the Cape Canaveral Space Force Station in Cape Canaveral, Florida, U.S., October 4, 2024. REUTERS/Joe Skipper/File Photo
The Biden administration on Thursday eased export restrictions on U.S. commercial space companies to ship certain satellite and spacecraft-related items to allies and partners.
The changes are intended to make it easier for the growing U.S. commercial space industry to expand sales while also protecting national security and foreign policy interests.
U.S. space companies like Elon Musk’s SpaceX, and large defense contractors with space units like Lockheed Martin (LMT.N), L3Harris Technologies (LHX.N) and Boeing (BA.N), could benefit from the new rules, which were posted in the Federal Register on Thursday afternoon.
“As the diversity of commercial activity in space grows, these rules will reduce the burden for U.S. industry to continue innovating and leading in the space sector,” Don Graves, deputy secretary of the Department of Commerce, said in a statement.
The updates will also add to the U.S.’s ability to “broaden and deepen international partnerships, to grow our economy and to collaborate on mutual space priorities,” Graves said.
Certain items involving remote sensing spacecraft, or space-based logistics, assembly, and servicing spacecraft, will no longer need licenses for shipment to Australia, Canada, and the United Kingdom, the Commerce Department said in the statement.
The rules could help the U.S. push ahead with the trilateral AUKUS security pact between Britain, the U.S. and Australia formed in 2021 to respond to China’s growing power in the Indo-Pacific region. Part of the pact is focused on technology sharing.
Some less sensitive satellite and spacecraft parts and components will no longer require licenses for shipment to over 40 countries. The countries include Canada, Australia, Japan, South Korea and most of the European Union, a person familiar with the matter said.
In addition, the Commerce Department will do away with license requirements for the least sensitive items like electrical connectors for most of the world, but not countries of concern like Russia and China, the person said.
The main messaging apps are all free to use, so what is in it for them?
In the past 24 hours I’ve written more than 100 WhatsApp messages.
None of them were very exciting. I made plans with my family, discussed work projects with colleagues, and exchanged news and gossip with some friends.
Perhaps I need to up my game, but even my most boring messages were encrypted by default, and used WhatsApp’s powerful computer servers, housed in various data centres around the world.
It’s not a cheap operation, and yet neither I nor any of the people I was chatting with yesterday has ever parted with any cash to use it. The platform has nearly three billion users worldwide.
So how does WhatsApp – or zapzap, as it’s nicknamed in Brazil – make its money?
Admittedly, it helps that WhatsApp has a massive parent company behind it – Meta, which owns Facebook and Instagram as well.
Individual, personal WhatsApp accounts like mine are free because WhatsApp makes money from corporate customers wanting to communicate with users like me.
Since last year, firms have been able to set up channels for free on WhatsApp, so they can send out messages to be read by all who choose to subscribe.
But what they pay a premium for is access to interactions with individual customers via the app, both conversational and transactional.
The UK is comparatively in its infancy here, but in the Indian city of Bangalore, for example, you can now buy a bus ticket, and choose your seat, all via WhatsApp.
“Our vision, if we get all of this right, is a business and a customer should be able to get things done right in a chat thread,” says Nikila Srinivasan, vice president of business messaging at Meta.
“That means, if you want to book a ticket, if you want to initiate a return, if you want to make a payment, you should be able to do that without ever leaving your chat thread. And then just go right back to all of the other conversations in your life.”
Businesses can also now choose to pay for a link that launches a new WhatsApp chat straight from an online ad on Facebook or Instagram to a personal account. Ms Srinivasan tells me this alone is now worth “several billions of dollars” to the tech giant.
Other messaging apps have gone down different routes.
Signal, a platform renowned for its message security protocols which have become industry-standard, is a non-profit organisation. It says it has never taken money from investors (unlike the Telegram app, which relies on them).
Instead, it runs on donations – which include a $50m (£38m) injection of cash from Brian Acton, one of the co-founders of WhatsApp, in 2018.
“Our goal is to move as close as possible to becoming fully supported by small donors, relying on a large number of modest contributions from people who care about Signal,” wrote its president Meredith Whittaker in a blog post last year.
Discord, a messaging app largely used by young gamers, has a freemium model – it is free to sign up, but additional features, including access to games, come with a price tag. It also offers a paid membership called Nitro, with benefits including high-quality video streaming and custom emojis, for a $9.99 monthly subscription.
Snap, the firm behind Snapchat, combines a number of these models. It carries ads, has 11 million paying subscribers (as of August 2024) and also sells augmented reality glasses called Snapchat Spectacles.
And it has another trick up its sleeve – according to the website Forbes, between 2016 and 2023 the firm made nearly $300m from interest alone. But Snap’s main source of revenue is advertising, which brings in more than $4bn a year.
Chandrayaan-3 was launched on July 14 last year from the Sriharikota spaceport and successfully made a soft landing near the lunar South Pole on August 23.
ISRO Chairman S Somanath Receives IAF World Space Award
Dr S Somanath, chairman of the Indian Space Research Organisation (ISRO) on Monday received the International Astronautical Federation’s (IAF) prestigious World Space Award in Milan for Chandrayaan-3’s remarkable achievement.
“ISRO is honored to announce that Dr. S. Somanath, Secretary DOS and Chairman ISRO, has received the prestigious IAF World Space Award for Chandrayaan-3’s remarkable achievement. This recognition celebrates India’s contributions to space exploration. Celebrations underway in Milan as we continue to strive for new frontiers,” the national agency, headquartered in Bengaluru, said in an online post.
According to the Indian Air Force, the Chandrayaan-3 mission by ISRO exemplifies the synergy of scientific curiosity and cost-effective engineering, symbolising India’s commitment to excellence and the vast potential that space exploration offers humanity.
The logo of Google LLC is seen at the Google Store Chelsea in New York City, U.S., January 20, 2023. REUTERS/Shannon Stapleton/File Photo
Alphabet’s (GOOGL.O) Google said on Monday it signed the world’s first corporate agreement to buy power from multiple small modular reactors to meet electricity demand for artificial intelligence.
The technology company’s agreement with Kairos Power aims to bring Kairos’ first small modular reactor online by 2030, followed by additional deployments through 2035.
The companies did not reveal financial details of the agreement or where in the U.S. the plants would be built. Google said it has agreed to buy a total of 500 megawatts of power from six to seven reactors, which is smaller than the output of today’s nuclear reactors.
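A quick back-of-the-envelope check makes the scale concrete. The 500 MW total and the six-to-seven reactor count are from the article; the roughly 1,000 MW figure for a typical modern large reactor is a commonly cited approximation, not something the companies stated:

```python
# 500 MW split across six to seven Kairos units, versus a typical
# large conventional reactor of roughly 1,000 MW (approximate figure).
TOTAL_MW = 500
TYPICAL_LARGE_REACTOR_MW = 1000  # rough, commonly cited benchmark

for n_reactors in (6, 7):
    per_reactor = TOTAL_MW / n_reactors
    share = per_reactor / TYPICAL_LARGE_REACTOR_MW
    print(f"{n_reactors} reactors -> ~{per_reactor:.0f} MW each "
          f"(~{share:.0%} of a large reactor)")
```

Each unit would therefore deliver on the order of 70-85 MW, a small fraction of a conventional plant's output, which is what "small modular" means in practice.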
“We feel like nuclear can play an important role in helping to meet our demand … cleanly in a way that’s more around the clock,” Michael Terrell, senior director for energy and climate at Google, told reporters on a call.
Technology firms have signed several recent agreements with nuclear power companies this year as artificial intelligence boosts power demand for the first time in decades.
In March, Amazon.com (AMZN.O) purchased a nuclear-powered datacenter from Talen Energy (TLN.O). Last month, Microsoft (MSFT.O) and Constellation Energy (CEG.O) signed a power deal to help resurrect a unit of the Three Mile Island plant in Pennsylvania, the site of the worst U.S. nuclear accident in 1979.
U.S. data center power use is expected to roughly triple between 2023 and 2030 and will require about 47 gigawatts of new generation capacity, according to Goldman Sachs estimates, which assumed natural gas, wind and solar power would fill the gap.
Kairos will need to get full construction and design permitting from the U.S. Nuclear Regulatory Commission as well as permits from local agencies, a process that can take years.
Kairos late last year got a construction permit from the NRC to build a demonstration reactor in Tennessee.
“The NRC is ready to efficiently and appropriately review applications for new reactors,” said Scott Burnell, an NRC spokesperson.
Small modular reactors are intended to be smaller than today’s reactors with components built in a factory, instead of onsite, to reduce construction costs.
Critics say SMRs will be expensive because they may not be able to achieve the economy of scale of larger plants. In addition, they will likely produce long-lasting nuclear waste for which the country does not yet have a final repository.
NASA launched a spacecraft from Florida on Monday on a mission to examine whether Jupiter’s moon Europa has conditions suitable to support life, with a focus on the large subsurface ocean believed to be lurking beneath its thick outer shell of ice.
The U.S. space agency’s Europa Clipper spacecraft blasted off from the Kennedy Space Center in Cape Canaveral on a SpaceX Falcon Heavy rocket under sunny skies. The robotic solar-powered probe is due to enter orbit around Jupiter in 2030 after journeying about 1.8 billion miles (2.9 billion km) in 5-1/2 years. The launch had been planned for last week but was put off because of Hurricane Milton.
It is the largest spacecraft NASA has built for a planetary mission, at about 100 feet (30.5 meters) long and about 58 feet (17.6 meters) wide with its antennas and solar arrays fully deployed – bigger than a basketball court – while weighing approximately 13,000 pounds (6,000 kg).
Even though Europa, the fourth-largest of Jupiter’s 95 officially recognized moons, is just a quarter of Earth’s diameter, its vast global ocean of salty liquid water may contain twice as much water as Earth’s oceans. Earth’s oceans are thought to have been the birthplace of life on our planet.
Europa, whose diameter of roughly 1,940 miles (3,100 km) is approximately 90% that of our moon, has been viewed as a potential habitat for life beyond Earth in our solar system. Its icy shell is believed to be 10-15 miles (15-25 km) thick, sitting atop an ocean 40-100 miles (60-150 km) deep.
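The "twice Earth's oceans" claim is easy to sanity-check with the dimensions above. The radius, shell, and depth figures come from the article; the value for Earth's total ocean volume (~1.335 billion cubic km) is a standard approximation, and the 100 km depth is taken from the upper half of the quoted 60-150 km range:

```python
import math

# Europa's ocean modeled as a spherical shell under the ice crust.
R_EUROPA = 1561.0        # km, mean radius (diameter ~3,100 km / 2)
ICE_SHELL = 20.0         # km, midpoint of the 15-25 km estimate
OCEAN_DEPTH = 100.0      # km, within the quoted 60-150 km range
EARTH_OCEANS = 1.335e9   # km^3, standard approximate figure

outer = R_EUROPA - ICE_SHELL    # radius at the top of the liquid ocean
inner = outer - OCEAN_DEPTH     # radius at the ocean floor
ocean_volume = (4 / 3) * math.pi * (outer**3 - inner**3)
ratio = ocean_volume / EARTH_OCEANS
print(f"Europa ocean ~{ocean_volume:.2e} km^3, ~{ratio:.1f}x Earth's oceans")
```

With a 100 km deep ocean the shell works out to roughly twice Earth's ocean volume; the shallower 60 km end of the range still gives somewhat more water than all of Earth's oceans.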
NASA Associate Administrator Jim Free told a prelaunch briefing on Sunday that Europa boasts one of the most promising environments for potential habitability in our solar system, beyond Earth, though he noted that this mission will not be a search for any actual living organisms.
“What we discover on Europa,” Free said, “will have profound implications for the study of astrobiology and how we view our place in the universe.”
Flames are shown as a SpaceX Falcon Heavy rocket is launched for the Europa Clipper mission to study one of Jupiter’s 95 moons, at the Kennedy Space Center in Cape Canaveral, Florida, U.S. October 14, 2024. REUTERS/Joe Skipper
“Scientists believe Europa has suitable conditions below its icy surface to support life. Its conditions are water, energy, chemistry and stability,” said Sandra Connelly, deputy associate administrator of NASA’s science mission directorate.
Among the mission objectives are measuring the internal ocean and the layer of ice above it, mapping the moon’s surface composition, and hunting for plumes of water vapor that may be venting from Europa’s icy crust. The plan is for Europa Clipper, starting in 2031, to conduct 49 close flybys of Europa over a span of three years, coming as close as 16 miles (25 km) to the moon’s surface.
Europa Clipper will be operating in an intense radiation environment around Jupiter, our solar system’s biggest planet.
Jupiter is enveloped by a magnetic field about 20,000 times stronger than Earth’s. This magnetic field spins, capturing and accelerating charged particles and creating radiation that could harm spacecraft. NASA fashioned a vault made of titanium and aluminum inside the Europa Clipper to protect its sensitive electronics from this radiation. Source: https://www.reuters.com/science/nasa-launches-spacecraft-gauge-if-jupiters-moon-europa-can-host-life-2024-10-14/
Tesla’s robotaxi is seen at an unveiling event in Los Angeles, California, U.S. October 10, 2024, in this still image taken from a video. Tesla/Handout via REUTERS
Elon Musk showcased a robotaxi with two gull-wing doors and no steering wheel or pedals at a splashy event on Thursday and added a robovan to the roster as Tesla’s (TSLA.O) goal shifts from low-priced mass-market automaker to robotics manufacturer.
Musk reached the stage in a “Cybercab” which he said will go into production in 2026 and be priced less than $30,000. He said operation will cost 20 cents a mile over time and charging will be inductive, requiring no plugs.
He said the cars rely on artificial intelligence and cameras and do not need the extra hardware, such as lidar, that robotaxi rivals use – an approach investors and analysts have flagged as challenging from both a technical and a regulatory standpoint.
“The autonomous future is here,” Musk said. “We have 50 fully autonomous cars here tonight. You’ll see Model Ys and the Cybercab. All driverless.”
Musk also showcased a larger, self-driving vehicle – called Robovan – capable of carrying up to 20 people, and showed off Tesla’s Optimus humanoid robot.
Musk’s plan is to operate a fleet of self-driving Tesla taxis that passengers can hail through an app. Individual Tesla owners will also be able to make money on the app by listing their vehicles as robotaxis.
Thursday’s event at the Warner Bros studio near Los Angeles, California, was titled “We, Robot” – an apparent nod to the “I, Robot” science-fiction short stories by American writer Isaac Asimov, but also an echo of Musk’s insistence that Tesla “should be thought of as an AI robotics company” rather than an automaker.
Those attending included investors, stock analysts and Tesla fans.
Investors expecting concrete details on how quickly Tesla can ramp up robotaxi production, secure regulatory approval and implement a strong business plan to leapfrog rivals such as Alphabet’s (GOOGL.O) Waymo were left disappointed.
“Everything looks cool, but not much in terms of time lines, I’m a shareholder and pretty disappointed. I think the market wanted more definitive time lines,” said Dennis Dick, equity trader at Triple D Trading. “I don’t think he said much about anything… He didn’t give much info.”
What appears as a faint dot in this James Webb Space Telescope image may actually be a groundbreaking discovery. Detailed information on galaxy GS-NDG-9422, captured by Webb’s NIRSpec (Near-Infrared Spectrograph) instrument, indicates that the light we see in this image is coming from the galaxy’s hot gas, rather than its stars. Astronomers think that the galaxy’s stars are so extremely hot (more than 140,000 degrees Fahrenheit, or 80,000 degrees Celsius) that they are heating up the nebular gas, allowing it to shine even brighter than the stars themselves. (Credit: NASA, ESA, CSA, STScI, Alex Cameron (Oxford))
Astronomers have identified a galaxy that seems to defy the laws of stellar physics. This cosmic oddball, named GS-NDG-9422, resides a staggering 12.9 billion light-years away, offering us a glimpse into the universe when it was merely 900 million years old. What makes this galaxy truly extraordinary is its peculiar spectrum of light, which has left scientists scratching their heads and proposing intriguing theories about its nature.
The study, led by Alex J. Cameron of the University of Oxford and published in the Monthly Notices of the Royal Astronomical Society, presents a detailed analysis of GS-NDG-9422’s spectrum. This spectrum, essentially the galaxy’s light signature, reveals an unexpected feature: a sharp downturn in ultraviolet light at a specific wavelength. This characteristic is so unusual that it has sparked a debate about the galaxy’s composition and the processes occurring within it.
The researchers propose that GS-NDG-9422 harbors extremely hot stars, with surface temperatures exceeding 140,000 degrees Fahrenheit (80,000 degrees Celsius). To put this in perspective, typically hot, massive stars in our local universe have temperatures ranging between 70,000 to 90,000 degrees Fahrenheit (40,000 to 50,000 degrees Celsius). These scorching stellar bodies could be responsible for generating an intense radiation field that ionizes the surrounding gas, creating a nebula that outshines the stars themselves.
“It looks like these stars must be much hotter and more massive than what we see in the local universe, which makes sense because the early universe was a very different environment,” says Harley Katz, a co-author of the study from the University of Oxford and the University of Chicago, in a statement.
The research team suspects that GS-NDG-9422 is undergoing a brief phase of intense star formation within a dense gas cloud, producing a large number of massive, hot stars. This cloud is bombarded with so many photons of light from the stars that it glows extraordinarily bright.
This discovery is not just a curiosity; it has far-reaching implications for our understanding of how galaxies evolved in the early universe. The phenomenon of nebular gas outshining stars is particularly intriguing because it has been predicted to occur in environments hosting the universe’s first generation of stars, known as Population III stars.
While GS-NDG-9422 does not contain these primordial stars due to its chemical complexity, it may represent a transitional phase in galactic evolution.
“The exotic stars in this galaxy could be a guide for understanding how galaxies transitioned from primordial stars to the types of galaxies we already know,” Katz explains.
The study was made possible by the unprecedented capabilities of the James Webb Space Telescope (JWST). This next-generation observatory, with its superior sensitivity and resolution, allowed astronomers to capture detailed spectra of this distant galaxy, unveiling its mysterious nature.
Interestingly, GS-NDG-9422 may not be alone in its peculiarity. The researchers identified two other galaxies with similar spectral features: the Lynx Arc at a distance of 11.5 billion light-years, and A2744-NDG-ZD4, which is even more distant at 13.1 billion light-years away. These findings suggest that such exotic galaxies might be more common in the early universe than previously thought.
As with any groundbreaking discovery, this study raises as many questions as it answers. How common are such galaxies in the early universe? And what can they tell us about the evolution of the cosmos? Cameron and his team are actively working to identify more galaxies with similar characteristics to better understand the conditions in the universe within its first billion years.
Due to the cyberattack, the most important government websites and services of the state have completely shut down, including important platforms like CM Helpline, Land Registry, and e-Office.
Work in government offices is stalled for the second day today. (Representational)
A sudden, major cyberattack in Uttarakhand has brought the state’s entire IT system to a standstill, seriously disrupting government work.
Work in government offices is stalled for a second day today, affecting administrative operations across the state, including at the Secretariat.
Speaking to ANI about the cyberattack, Nikita Khandelwal, Director of the Information Technology Development Agency (ITDA), said, “During scanning on October 2, it was found that the machine has been affected by malware, so as a precaution we have shut down our data centre, due to which all applications have been shut down and all are being scanned.”