Evolution: What Will Humans Look Like in 50,000 Years?

Many people hold the view that evolution in modern humans has come to a halt. But while modern medicine and technology have changed the environment in which evolution operates, many scientists agree that the phenomenon is still occurring.

This evolution may be less about survival and more about reproductive success in our current environment. Changes in gene frequencies because of factors like cultural preferences, geographic migration and even random events continue to shape the human genome.

But what might humans look like in 50,000 years' time? Such a question is clearly speculative. Nevertheless, the experts Newsweek spoke to gave their predictions for how evolution might affect the appearance of our species in the future.

“Evolution is part deterministic—there are rules for how systems evolve—and part random—mutations and environmental changes are primarily unpredictable,” Thomas Mailund, an associate professor of bioinformatics at Aarhus University in Denmark, told Newsweek.

“In some rare cases, we can observe evolution in action, but over a time span of tens or hundreds of years, it is mostly guesswork. We can make somewhat qualified guesses, but the predictive power is low, so think of it as thought experiments more than anything else.”

Something we can say with certainty is that 50,000 years is more than enough time for several evolutionary changes to occur, albeit on a relatively minor scale, according to Mailund.

“Truly dramatic changes require a longer time, of course. We are not going to grow wings or gills in less than millions of years, and 50,000 years ago, we were anatomically modern humans.”

Jason Hodgson, an anthropologist and evolutionary geneticist at Anglia Ruskin University in the United Kingdom, told Newsweek that 50,000 years is an “extremely long time” in the course of human evolution, representing roughly 1,667 human generations given a 30-year generation time.
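
As a quick sanity check, the generation count follows from simple division (the 30-year generation time is the assumption quoted above):

```python
# Quick check of the generation count quoted above.
years = 50_000
generation_time = 30  # years per generation, as assumed by Hodgson

print(round(years / generation_time))  # 1667
```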

A 3D illustration of a facial recognition system. What will humans look like in 50,000 years? Design Cells/iStock/Getty Images Plus

“Within the past 50,000 years most of the variation that is seen among human populations evolved,” Hodgson said. “This includes all of the skin color variation seen across the globe, all of the stature variation, all of the hair color and texture variation, etc. In fact, most of the variation we are so familiar with evolved within the past 10,000 years.”

In the more immediate future, Hodgson predicts that global populations will become more homogenous and less structured when it comes to genetics and phenotype—an individual’s observable traits.

“Currently the phenotypes that we associate with geographic regions—for example, dark skin in Africans, light skin in Scandinavians, short stature in African pygmy hunter-gatherers, tall stature in Dutch, etc.—are maintained by assortative mating. People are much more likely to choose mates who are similar to themselves,” he said.

“Part of this is due to the human history of migration and culture which means people tend to live by and be exposed to people who are more similar to themselves with respect to global variation. And some of this is due to preference for similarity within local populations for reasons that we still do not really understand.

“However, admixture—mating between distantly related groups—is increasing, and this will result in less structure and a more homogenous global population. As an analogy, if you stick a bunch of poodles, rottweilers, chihuahuas and St. Bernards on an island and let them breed randomly, within a few generations everything would be a medium sized brown dog.”

When distinct populations mix, so do their traits. Some traits are determined by just a few gene variants. But many traits result from a combination of different genes, and these will blend together to some degree, according to Mailund.

“So there will be some changes, not caused by selection, but because previously isolated groups are now mixing,” he said.

It is still possible, though, that despite increasing homogeneity, not everyone will evolve in the same direction, according to Nick Longrich, a paleontologist and evolutionary biologist at the University of Bath in the United Kingdom.

“You could imagine that in distinct subpopulations you could get people evolving in different ways,” he said.

If there are strong, consistent pressures toward certain characteristics, our species could experience “very rapid evolution” in a matter of thousands—or possibly even hundreds—of years, Longrich said.

While we do not know what the selective pressures will be like going forward, Longrich said he expects a number of developments, extrapolating from past trends and current conditions.

For example, we might get taller because of sexual selection. And we might also become more attractive on average, since sexual selection plays more of a role than natural selection in modern society.

“Attractiveness is relative, so maybe we’d look like movie stars but if everyone looked that way, it wouldn’t be exceptional,” he said.

As time passes and technology evolves, it is also possible that humans will begin to direct their own evolution in a targeted fashion through gene-editing tools such as CRISPR—potentially aided by artificial intelligence.

“Applying genetic techniques to humans that alter phenotypes is highly controversial and ethically fraught. Indeed, 20th century eugenicists thought they could improve the human species by only allowing the ‘right’ people to breed,” Hodgson said.

Source : https://www.newsweek.com/evolution-what-will-humans-look-like-50000-years-2006894

The human brain processes thoughts 5,000,000 times slower than the average internet connection

The brain may not be as powerful as previously thought, according to the research (Picture: Getty Images)

People think many millions of times slower than the average internet connection, scientists have found.

The body’s sensory systems, including the eyes, ears, skin, and nose, gather data about our environments at a rate of a billion bits per second.

But the brain processes these signals at only about 10 bits per second, millions of times slower than the inputs, according to the research.

A bit is the unit of information in computing. A typical Wi-Fi connection processes about 50 million bits per second.

Despite the brain having over 85 billion neurons, researchers found that humans think at around 10 bits per second – a number they called ‘extremely low’.
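
The headline ratio follows directly from these figures. A minimal sketch of the arithmetic, using only the rates quoted above:

```python
# Ratios implied by the figures quoted above.
sensory_input_bps = 1_000_000_000  # ~1 billion bits/s gathered by the senses
thought_rate_bps = 10              # ~10 bits/s of conscious processing
typical_wifi_bps = 50_000_000      # ~50 million bits/s for typical Wi-Fi

print(f"{sensory_input_bps / thought_rate_bps:,.0f}x slower than our senses")  # 100,000,000x
print(f"{typical_wifi_bps / thought_rate_bps:,.0f}x slower than typical Wi-Fi")  # 5,000,000x
```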

Writing in the scientific journal Neuron, research co-author Markus Meister said: ‘Every moment, we are extracting just 10 bits from the trillion that our senses are taking in and using those 10 to perceive the world around us and make decisions.

‘This raises a paradox: What is the brain doing to filter all this information?’

Individual nerve cells in the brain are capable of transmitting over 10 bits per second.

However, the new findings suggest that this capacity does not translate into faster thinking: the brain as a whole still processes thoughts at only about 10 bits per second.

This makes humans relatively slow thinkers, who are unable to process many thoughts in parallel, the research suggests.

This rules out scenarios like a chess player envisioning a whole set of future moves at once; people can explore only one possible sequence at a time rather than several in parallel.

The discovery of this ‘speed limit’ paradox in the brain warrants further neuroscience research, scientists say.

They speculated that this speed limit likely emerged in the first animals with a nervous system.

These creatures likely used their brains primarily for navigation to move toward food and away from predators.

Since human brains evolved from these, it could be that we can only follow one ‘path’ of thought at a time, according to researchers.

Source : https://metro.co.uk/2024/12/27/human-brain-processes-thoughts-5-000-000-times-slower-average-internet-connection-22258645/

Young and restless: 37% of Gen Z skipping the gym, going straight to Ozempic

Overweight woman applying medicine injection (© Mauricio – stock.adobe.com)

CORONA DEL MAR, Calif. — Is your New Year’s resolution to lose some weight? A new poll finds many people may actually achieve their goals in 2025 — with a little help from their pharmacist. More than a quarter of Americans are planning to turn to GLP-1 medications like Ozempic and Wegovy to reach their 2025 weight loss goals.

According to researchers with Tebra, who surveyed over 1,000 Americans in November 2024, there’s now a growing acceptance of pharmaceutical interventions for weight management, particularly among younger people.

Specifically, Gen Z is skipping the gym and going straight to the pharmacy, with 37% planning to add these medications to their wellness strategy in the coming year. Women are leading the charge, with 30% intending to use GLP-1 drugs to reach their weight loss goals, compared to 20% of men. On average, women are setting more ambitious weight loss targets, aiming to shed 23 pounds in 2025, while men are looking to lose 19 pounds.

Despite the growing enthusiasm for weight loss shortcuts, the path to accessing these medications remains complicated. Nearly eight in 10 people believe GLP-1 weight loss medications are out of reach for the average person due to their skyrocketing cost. In fact, 64% of those interested in using these medications cite high costs as their main concern, followed by worries about potential side effects (59%).

For those who have already taken the plunge, the results appear to justify the costs. An overwhelming 86% of current GLP-1 users report that the health risks are worth the results they’re seeing. This satisfaction may explain why 66% of Americans now believe these medications are more effective than traditional weight loss routes like diet and exercise.

Baby boomers show the strongest confidence in these drugs’ effectiveness, with 72% believing they outperform traditional methods, followed by Gen X at 70%, millennials at 64%, and Gen Z at 58%. The gender gap is even more pronounced, with 75% of women believing in the superior effectiveness of GLP-1 medications compared to 53% of men.

Despite the growing trust in popular weight loss drugs, nearly one in four current users are taking these medications without a doctor’s oversight, raising questions about safety and proper usage. This statistic becomes particularly alarming when you consider that 41% of Americans are uncertain about the long-term effectiveness of these drugs, and 39% worry about developing an addiction to them.

The timing of this shift toward pharmaceutical weight loss solutions may not be coincidental. The survey reveals that nearly half (49%) of Americans have previously abandoned their New Year’s resolution wellness goals, with 31% giving up as early as February. This history of frustration with traditional approaches might explain the growing openness to medical shortcuts for weight loss.

Source : https://studyfinds.org/gen-z-ozempic/

 

The effects of ‘brain rot’: How junk content is damaging our minds

Recent research has found that internet use and abuse is associated with a decrease in gray matter in the prefrontal regions of the brain.
Photo: Basak Gurbuz Derman (Getty Images)

“Brain rot” was named the Oxford Word of the Year for 2024 after a public vote involving more than 37,000 people. Oxford University Press defines the concept as “the supposed deterioration of a person’s mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging.”

According to Oxford’s language experts, the term reflects growing concerns about “the impact of consuming excessive amounts of low-quality online content, especially on social media.” The term increased in usage frequency by 230% between 2023 and 2024.

But brain rot is not just a linguistic quirk. Over the past decade, scientific studies have shown that consuming excessive amounts of junk content — including sensationalist news, conspiracy theories and vacuous entertainment — can profoundly affect our brains. In other words, “rot” may not be that big of an exaggeration when it comes to describing the impact of low-quality online content.

Research from prestigious institutions such as Harvard Medical School, Oxford University, and King’s College London — cited by The Guardian — reveals that social media consumption can reduce grey matter, shorten attention spans, weaken memory, and distort core cognitive functions.

A 2023 study highlighted these effects, showing how internet addiction causes structural changes in the brain that influence behavior and cognitive abilities. Michoel Moshel, a researcher at Macquarie University and co-author of the study, explains that compulsive content consumption — popularly known as doomscrolling — “takes advantage of our brain’s natural tendency to seek out new things, especially when it comes to potentially harmful or alarming information, a trait that once helped us survive.”

Moshel explains that features like “infinite scrolling,” which are designed to keep users glued to their screens, can trap people — especially young individuals — in a cycle of content consumption for hours. “This can significantly impair attention and executive functions by overwhelming our focus and altering the way we perceive and respond to the world,” says the researcher.

Eduardo Fernández Jiménez, a clinical psychologist at Hospital La Paz in Madrid, explains that the brain activates different neural networks to manage various types of attention. He notes that excessive use of smartphones and the internet is causing issues with sustained attention, which “allows you to concentrate on the same task for a more or less extended period of time.” He adds: “It is the one that is linked to academic learning processes.”

The problem, says the researcher, is that social media users are constantly exposed to rapidly changing and variable stimuli — such as Instagram notifications, WhatsApp messages, or news alerts — that have addictive potential. This means users are constantly switching their focus, which undermines their ability to concentrate effectively.

The first warning came with email

Experts have been sounding the alarm about this issue since the turn of the century, when email became a common tool. In 2005, The Guardian ran the headline: “Emails ‘pose threat to IQ.’” The article reported that a team of scientists at the University of London investigated the impact of the constant influx of information on the brain. After conducting 80 clinical trials, they found that participants who used email and cellphones daily experienced an average IQ drop of 10 points. The researchers concluded that this constant demand for attention had a more detrimental effect than cannabis use.

This was before the rise of tweets, Instagram reels, TikTok challenges, and push notifications. The current situation, however, is even more concerning. Recent research has found that excessive internet use is linked to a decrease in grey matter in the prefrontal regions of the brain — areas responsible for problem-solving, emotional regulation, memory, and impulse control.

The research conducted by Moshel and his colleagues supports these findings. Their latest study, which reviewed 27 neuroimaging studies, revealed that excessive internet use is associated with a reduction in the volume of grey matter in brain regions involved in reward processing, impulse control, and decision-making. “These changes reflect patterns observed in substance addictions,” says Moshel, comparing them to the effects of methamphetamines and alcohol.

That’s not all. The research also found that “these neuroanatomical changes in adolescents coincide with disruptions in processes such as identity formation and social cognition — critical aspects of development during this stage.” This creates a kind of feedback loop, where the most vulnerable individuals are often the most affected. According to a study published in Nature in November, people with poorer mental health are more likely to engage with junk content, which further exacerbates their symptoms.

Source : https://english.elpais.com/technology/2024-12-26/the-effects-of-brain-rot-how-junk-content-is-damaging-our-minds.html

Which infectious disease is most likely to be biggest emerging problem in 2025?

(Credit: Melnikov Dmitriy/Shutterstock)

COVID emerged suddenly, spread rapidly and killed millions of people around the world. Since then, I think it’s fair to say that most people have been nervous about the emergence of the next big infectious disease – be that a virus, bacterium, fungus or parasite.

With COVID in retreat (thanks to highly effective vaccines), the three infectious diseases causing public health officials the greatest concern are malaria (a parasite), HIV (a virus) and tuberculosis (a bacterium). Between them, they kill around 2 million people each year.

And then there are the watchlists of priority pathogens – especially those that have become resistant to the drugs usually used to treat them, such as antibiotics and antivirals.

Scientists must also constantly scan the horizon for the next potential problem. While this could come in any form of pathogen, certain groups are more likely than others to cause swift outbreaks, and that includes influenza viruses.

One influenza virus is causing great concern right now and is teetering on the edge of being a serious problem in 2025. This is influenza A subtype H5N1, sometimes referred to as “bird flu.” This virus is widespread in both wild and domestic birds, such as poultry. Recently, it has also been infecting dairy cattle in several U.S. states and has been found in horses in Mongolia.

When influenza cases start increasing in animals such as birds, there is always a worry that the virus could jump to humans. Indeed, bird flu can infect humans, with 61 cases in the U.S. this year already, mostly resulting from farm workers coming into contact with infected cattle and from people drinking raw milk.

Compared with only two cases in the Americas in the previous two years, this is quite a large increase. Couple that with a 30% mortality rate from human infections, and bird flu is quickly jumping up the list of public health officials’ priorities.

Luckily, H5N1 bird flu doesn’t seem to transmit from person to person, which greatly reduces its likelihood of causing a pandemic in humans. Influenza viruses have to attach to molecular structures called sialic acid receptors on the outside of cells in order to get inside and start replicating.

Flu viruses that are highly adapted to humans recognise these sialic acid receptors very well, making it easy for them to get inside our cells, which contributes to their spread between humans. Bird flu, on the other hand, is highly adapted to bird sialic acid receptors and has some mismatches when “binding” (attaching) to human ones. So, in its current form, H5N1 can’t easily spread in humans.

However, a recent study showed that a single mutation in the flu genome could make H5N1 adept at spreading from human to human, which could jump-start a pandemic.

If this strain of bird flu makes that switch and can start transmitting between humans, governments must act quickly to control the spread. Centers for disease control around the world have drawn up pandemic preparedness plans for bird flu and other diseases that are on the horizon.

For example, the UK has bought 5 million doses of H5 vaccine that can protect against bird flu, in preparation for that risk in 2025.

Even if it never gains the ability to spread between humans, bird flu is likely to affect animal health even more in 2025. This not only has large animal welfare implications but also has the potential to disrupt the food supply and cause economic harm.

Source : https://studyfinds.org/which-infectious-disease-is-most-likely-to-be-biggest-emerging-problem-in-2025/

 

Why human civilization may be on the brink of a ‘planetary phase shift’

(Credit: © Aleksandr Zamuruev | Dreamstime.com)

Systems theorist suggests the ‘next giant leap in evolution’ is nearing, but authoritarian politics could get in the way
Picture a caterpillar transforming into a butterfly. At a certain point, the creature enters a critical phase where its old form breaks down before emerging as something entirely new. According to a thought-provoking paper by renowned systems theorist Dr. Nafeez Ahmed, human civilization may be approaching a similar transformative moment, or what researchers call a “planetary phase shift.” And while the potential for positive transformation is enormous, Ahmed warns that rising authoritarianism could derail this evolutionary leap.

Ahmed, founding director of the System Shift Lab, presents compelling evidence in the journal Foresight that we’re living through an unprecedented period of change. Multiple global crises — from climate change to economic instability to technological disruption — aren’t just separate problems, but symptoms of an entire civilization undergoing metamorphosis.

“An amazing new possibility space is emerging, where humanity could provide itself superabundant energy, transport, food and knowledge without hurting the earth,” Ahmed says in a statement. “This could be the next giant leap in human evolution.”

The paper synthesizes research across natural and social sciences to develop a new theory of how civilizations rise and fall. It introduces the concept of “adaptive cycles,” a pattern observed in everything from forest ecosystems to ancient civilizations. These cycles move through four phases: rapid growth, conservation (stability), release (creative destruction), and reorganization. Think of it like the seasons: spring growth, summer abundance, autumn release, and winter renewal.

According to Ahmed, industrial civilization is now entering the “release” phase, where old structures begin breaking down. This explains why we’re seeing simultaneous crises across multiple systems. The fossil fuel economy is faltering, evidenced by a global decrease in Energy Return on Investment (EROI) for oil, gas, and coal. Meanwhile, renewable energy technologies are experiencing exponentially improving EROI rates.

But here’s where it gets interesting: these breakdowns aren’t necessarily catastrophic. They’re creating space for radical new possibilities. The study points to major technological innovations expected between the 2030s and 2060s, including clean energy, cellular agriculture, electric vehicles, artificial intelligence, and 3D printing. When combined, these technologies could enable what the researcher calls “networked superabundance” — a world where clean energy, transportation, food, and knowledge become universally accessible at near-zero cost while protecting Earth’s systems.

“This planetary renewable energy system will potentially enable citizens everywhere to produce clean energy ‘superabundance’ at near-zero marginal costs for most times of the year. This huge energy surplus – as much as ten times what we produce today – could power a global ‘circular economy’ system in which materials are rigorously recycled, with the system overall requiring 300 times less materials by weight than the fossil fuel system,” Ahmed writes. “[C]ost and performance improvements in autonomous driving technology could enable a new model called transport-as-a-service, leading private car ownership to collapse by about 90% – replaced by fleets of privately or publicly-owned autonomous taxis and buses up to ten times cheaper than transport today – as early as the 2030s.”

However, Ahmed emphasizes that technology alone won’t determine our fate. The key challenge is whether we can evolve our “operating system” — our social, economic, and cultural structures — to harness these capabilities for the common good. There’s a growing gulf between the old “industrial operating system” and emerging new systems that are inherently distributed and decentralized. This mismatch is driving major political and cultural disruptions globally.

Source : https://studyfinds.org/human-civilization-planetary-phase-shift/

 

Are we moral blank slates at birth? New study offers intriguing clues

(Photo by Ana Tablas on Unsplash)

What does a baby know about right and wrong? A foundational finding in moral psychology suggested that even infants have a moral sense, preferring “helpers” over “hinderers” before uttering their first word. Now, nearly 20 years later, a study that tried to replicate these findings calls this result into question.

In the original study, Kiley Hamlin and her colleagues showed a puppet show to six- and ten-month-old babies. During the show, the babies would see a character — which was really just a shape with googly eyes — struggling to reach the top of a hill.

Next, a new character would either help the struggling individual reach the top (acting as a “helper”) or push the character back down to the bottom of the hill (acting as a “hinderer”).

By gauging babies’ behavior — specifically, watching how their eyes moved during the show and whether they preferred to hold a specific character after the show ended — it seemed that the infants had basic moral preferences. Indeed, in the first study, 88% of the ten-month-olds – and 100% of the six-month-olds – chose to reach for the helper.

But psychology, and developmental psychology in particular, is no stranger to replicability concerns (when it is difficult or impossible to reproduce the results of a scientific study). After all, the original study sampled only a few dozen infants.

This isn’t the fault of the researchers; it’s just really hard to collect data from babies. But what if it was possible to run the same study again — with say, hundreds or even thousands of babies? Would researchers find the same result?

This is the chief aim of ManyBabies, a consortium of developmental psychologists spread around the world. By combining resources across individual research labs, ManyBabies can robustly test findings in developmental science, like Hamlin’s original “helper-hinderer” effect. And as of last month, the results are in.

With a final sample of 567 babies, tested in 37 research labs across five continents, babies did not show evidence of an early-emerging moral sense. Across the ages tested, babies showed no preference for the helpful character.

Blank slate?

John Locke, an English philosopher, argued that the human mind is a “tabula rasa” or “blank slate.” Everything that we, as humans, know comes from our experiences in the world. So should people take the most recent ManyBabies result as evidence of this? My answer, however underwhelming, is “perhaps.”

This is not the first attempted replication of the helper-hinderer effect (nor is it the first “failure to replicate”). In fact, there have been a number of successful replications. It can be hard to know what underlies differences in results. For example, a previous “failure” seemed to come from the characters’ “googly eyes” not being oriented the right way.

The ManyBabies experiment also had an important change in how the “show” was presented to infants. Rather than a puppet show performed live for baby participants, researchers instead presented a video with digital versions of the characters. This approach has its strengths, such as ensuring that exactly the same presentation occurs across every trial, in every lab. But it could also shift how babies engage with the show and its characters.

Source : https://studyfinds.org/are-we-moral-blank-slates-at-birth-new-study-offers-intriguing-clues/

Make it personal: Customized gifts trigger this unique psychological response in recipients

Man and woman opening Christmas gifts (© luckybusiness – stock.adobe.com)

When Nike launched its customization platform NikeID, few could have predicted it would reveal profound insights about human psychology. Now, research spanning four countries shows that personalized products trigger a fascinating emotional phenomenon called “vicarious pride.” That is, recipients of customized gifts experience the same pride their friends felt while creating them.

The study, published in the journal Psychology & Marketing, explores the psychological dynamics at play when someone receives a personalized gift.

“Gift-giving is an age-old tradition, but in today’s world, personalization has become a powerful way to make gifts stand out,” explains Dr. Diletta Acuti, a marketing expert at the University of Bath School of Management, in a statement.

When someone receives a customized gift, such as a chocolate bar with personally selected flavors or a leather journal with their name inscribed, they don’t just appreciate the thought behind it.

“You don’t just appreciate the care and intention they put into crafting that gift; you feel them,” Dr. Acuti explains.

This emotional mirroring stems from a psychological concept called simulation theory, where people mentally recreate others’ experiences and emotions. It’s similar to how sports fans feel their team’s victories and defeats as if they were on the field themselves, or how parents beam with pride at their children’s achievements. When it comes to customized gifts, recipients essentially piggyback on the gift-giver’s sense of creative accomplishment.

Through four carefully designed studies, the researchers examined this phenomenon from different angles. In their first experiment with 74 participants, they studied how people responded to customized clothing gifts. To measure appreciation objectively, recipients were asked to indicate which items, if any, they would change – a novel approach to gauging satisfaction. Those who received customized gifts wanted to make fewer changes to their presents, suggesting higher appreciation.

The second study took a different approach, showing 134 participants videos of two different gift-selection processes: one showing the customization of a T-shirt, and another showing standard gift selection through website browsing. Even when controlling for the time spent selecting the gift, customized presents consistently generated more appreciation.

In the third and fourth studies, conducted online using a mug and wristwatch as gifts, the researchers confirmed that customization not only increased appreciation but also enhanced recipients’ self-esteem. This suggests that receiving a personalized gift makes people feel more valued and special.

Interestingly, the research revealed that the time and effort spent on customization didn’t significantly impact the recipient’s appreciation. Whether the giver spent considerable time or just a few minutes personalizing the gift, recipients experienced similar levels of vicarious pride. This finding challenges common assumptions about the relationship between time invested and gift appreciation.

The study also uncovered an important caveat: relationship anxiety can diminish these positive effects. When recipients feel insecure about their relationship with the gift-giver, the benefits of customization – including vicarious pride and enhanced self-esteem – may not materialize.

For businesses, these insights suggest new opportunities in the growing customized gift market, which is projected to reach $13 billion by 2027 according to Technavio. “Using ‘made by’ signals – such as including the giver’s name, a short message about the process or a visual representation of the customization – can make things even more impactful,” suggests Dr. Acuti. “These small additions reinforce the emotional connection between the giver and the recipient.”

The research also has implications for sustainability, as the study found that recipients tend to take better care of gifts they value more. This suggests that personalization might contribute to longer product lifespans and reduced waste.

“When choosing a gift, personalization can be a game-changer. But it’s not just about selecting a customizable option: you also need to communicate that effort to your recipient. Sharing why you chose elements of the gift or the thought that went into it will make the recipient appreciate it even more. Indeed, this additional effort helps them to connect with the pride you felt in your choices, making the gift even more meaningful,” Dr. Acuti advises.

Perhaps the true magic of customized gifts isn’t in the personalization itself, but in their ability to create invisible bridges between people – emotional connections forged through shared pride and mutual recognition. In a world increasingly mediated by screens and algorithms, these moments of genuine human connection might be the most valuable gift of all.

Source : https://studyfinds.org/customized-gifts-psychological/

A growth mindset protects mental health during hard times

(© Татьяна Макарова – stock.adobe.com)

When the world turned upside down during the COVID-19 pandemic, some people seemed to weather the storm better than others. Though many struggled with depression and loneliness during lockdowns, others maintained their mental well-being and even thrived. What made the difference? According to new research, one key factor may be something called a “growth mindset” – the belief that our abilities and attributes can change and develop over time.

This fascinating study, conducted by researchers at the University of California, Riverside and the University of the Pacific, followed 454 adults ages 19 to 89 over two years of the pandemic, from June 2020 to September 2022. Their findings suggest that people who believe their capabilities are malleable rather than fixed were better equipped to handle the psychological challenges of the pandemic.

Growth mindset represents a fundamental belief about human potential – that we can develop our abilities through effort, good strategies, and input from others. During the pandemic, this mindset appeared to help people view adversity as opportunities for adaptation and learning.

Looking at adults from diverse backgrounds in Southern California, the researchers examined how growth mindset related to three key aspects of mental health during the pandemic: depression levels, overall well-being, and how well people adjusted their daily routines to accommodate physical distancing requirements.

The results, published in PLOS Mental Health, were striking. People with stronger growth mindsets reported lower levels of depression and higher levels of well-being, even after accounting for various demographic factors like age, income, and education level. They were also more likely to successfully adapt their daily routines to pandemic restrictions.

The study included a unique group of older adults who had participated in a special learning intervention before the pandemic. These individuals had spent three months learning multiple new skills – from painting to using iPads to speaking Spanish. Not only did this group show increased growth mindset after their learning experience, but they also demonstrated better mental health outcomes during the pandemic compared to their peers who hadn’t participated in the intervention.

This finding suggests that actively engaging in learning new skills might help build mental resilience for challenging circumstances. The combination of growth mindset with actual learning experiences appeared to create stronger psychological benefits during the pandemic.

Age played a fascinating role in the results. While older adults generally showed more resilience in terms of emotional well-being and lower depression rates compared to younger participants, they were less likely to adjust their daily routines during the pandemic. This suggests that while age may bring emotional stability, it might also be associated with less behavioral flexibility.

Source : https://studyfinds.org/growth-mindset-protects-mental-health-during-hard-times/

What sleep paralysis feels like: Terrifying, like you’re trapped with a demon on your chest

The Nightmare, a 1781 oil painting by Swiss artist Henry Fuseli of a woman in deep sleep, arms flung over her head and an incubus, a male demon, on her belly, has been taken as a symbol of sleep paralysis. Photo by Detroit Institute of Arts.

This feature is part of a National Post series by health reporter Sharon Kirkey on what is keeping us up at night. In the series, Kirkey talks to sleep scientists and brain researchers to explore our obsession with sleep, the seeming lack of it and how we can rest easier.

Psychologist Brian Sharpless has been a horror movie buff since watching 1974’s It’s Alive! on HBO, a cult classic about a fanged and sharp-clawed mutant baby with a proclivity to kill whenever it got upset.

In his new book, Monsters on the Couch: The Real Psychological Disorders Behind Your Favorite Horror Movies, Sharpless devotes a full chapter to a surprisingly common human sleep experience that has been worked into so many movie plots “it now constitutes its own sub-genre of horror.”

Not full sleep, exactly, but rather a state stuck between sleep and wakefulness that follows a reliable pattern: People suddenly wake but cannot move because all major muscles are paralyzed.

The paralysis is often accompanied by the sensed presence of another, human or otherwise. The most eerie episodes involve striking hallucinations. Sharpless once hallucinated a “serpentine-necked monstrosity” lurking in the silvery moonlight seeping through the slats of his bedroom window blind.

Feeling pressure on the chest or a heavy weight on the ribs is also common. People feel as if they’re being smothered. They might also sweat, tremble or shake, but are “trapped,” unable to move their arms or legs, yell or scream. The experience can last seconds, or up to 20 minutes, “with a mean duration of six minutes,” Sharpless shared with non-sleep specialists in his doctor’s guide to sleep paralysis.

Sleep paralysis is a parasomnia, a sleep disorder that at least eight per cent of the general population will experience at least once in their lifetime. That figure, likely a low-ball estimate, is higher still among university students (28 per cent) and those with a psychiatric condition (32 per cent). It’s usually harmless, but the combination of a waking nightmare and temporary paralysis can make for a “very unpleasant experience,” Sharpless advised clinicians, “one that may not be easily understood by patients.”

“Patients may instead use other non-medical explanations to make sense of it,” such as, say, some kind of alien, spiritual or demonic attack.

Eight years into studying sleep paralysis and with hundreds of interviews with experiencers under his belt, Sharpless had never once experienced the phenomenon himself, until 2015, the year he published his first book, Sleep Paralysis, with Dr. Karl Doghramji, a professor of psychiatry at Thomas Jefferson University. Sharpless woke at 2 a.m. and saw shadows in the hallway mingling and melding into a snake-like form with a freakishly long neck and eyes that glowed red. When he attempted to lift his head to get a better look, “I came to the uncomfortable realization I couldn’t move,” Sharpless recounts in Monsters on the Couch. “Oh my God, you’re having sleep paralysis,” he remembers thinking when he began to think rationally again.

“It’s an unusual experience that a lot of folks have,” Sharpless said in an interview with the National Post. The hallucinatory elements “that tap into a lot of paranormal and supernatural beliefs” are partly what makes it so fascinating, he said. Several celebrities — supermodel Kendall Jenner, American singer-songwriter and Apple Music’s 2024 Artist of the Year Billie Eilish, English actor and Spider-Man star Tom Holland — have also been open about their sleep paralysis.

You’re seeing, smelling, hearing something that isn’t there but feels like it is

It has a role in culture and folklore as well. In Brazilian folklore, the “Pisadeira” is a long-finger-nailed crone “who lurks on rooftops at night” and tramples on people lying belly up. Newfoundlanders called sleep paralysis an attack of the “Old Hag.” Sleep paralysis has been recognized by scholars and doctors since the ancient Greeks, Sharpless said. Too much blood, different lunar phases, upset gastrointestinal tracts — all were thought to trigger bouts of sleep paralysis. Episodes have been described in the Salem Witch Trials in 1692. The Nightmare, a 1781 oil painting by Swiss artist Henry Fuseli of a woman in deep sleep, arms flung over her head and an incubus, a male demon, on her belly, has been taken as a symbol of sleep paralysis, among other interpretations. Sleep paralysis figures in numerous scary films and docu-horrors, including Shadow People, Dead Awake, Haunting of Hill House, Be Afraid, Slumber and The Nightmare.

The wildest story Sharpless has heard involved an undergrad student at Pennsylvania State University who was sleeping in her dormitory bunk bed when she woke suddenly, moved her eyes to the left and saw a child vampire with blood coming out of her mouth.

“The vampire girl ripped her covers off, grabbed her by the leg and started screaming, ‘I’m dragging you to hell, I’m dragging you to hell,’ pulling her out of the bed, all the while blood is coming out of her mouth,” Sharpless recalled the student telling him.

When she was able to move again, she found herself fully covered, her leg still under the blankets and not hanging off the ledge of the bunk bed as she imagined.

With sleep paralysis, hallucinations evaporate the moment movement returns, Sharpless said.

People are immobile in the first place because, during REM sleep, when dreams tend to be most vivid and emotion-rich, the muscles that move the eyes and those involved in breathing keep working, but most other muscles do not. This relaxed muscle tone keeps people from acting out their dreams and potentially injuring themselves or a bedmate.

“In REM, if you’re dreaming that you’re running or playing the piano, the brain is sending commands to your muscles as if you were awake,” said Antonio Zadra, a professor of psychology and a sleep scientist at Université de Montréal.

More than a decade ago, University of Toronto neuroscientists Patricia Brooks and John Peever found that two distinct brain chemicals worked together to switch off motor neurons communicating those brain messages to move. The result: muscle atonia or that REM sleep muscle paralysis. With REM sleep behaviour disorder, another parasomnia, the circuit isn’t switched off to inhibit muscle movement. People can act out their dreams, flailing, kicking, sitting up or even leaving the bed.

Normally, when people wake out of REM sleep, the paralysis that accompanies REM also stops. With sleep paralysis, the atonia carries over into wakefulness.

“You’re experiencing two aspects of REM sleep, namely, the paralysis and any dream activity, but now going on while the person is fully awake,” Sharpless said. People have normal waking consciousness. They think just like they can when fully awake. But they’re also experiencing “dreams” and because they’re awake, the dreams are hallucinations that feel just as real as anything in waking life.

Sleep paralysis tends to happen most often when people sleep in supine (on their back) positions. And while Sharpless and colleagues found that about 23 per cent of 172 adults with recurrent sleep paralysis reported always, mostly or sometimes pleasant experiences — some felt as if they were floating or flying — the hallucinations, like the beast Sharpless conjured up, are almost always threatening and bizarre.

Why so negative?

Evolution primed humans to be afraid of the dark and, in general, when we wake up, “It’s not usual for us to be paralyzed,” Sharpless said. “That’s an unusual experience from the get-go.” Sometimes people have catastrophizing thoughts like, “Oh my god, I’m having a stroke,” or they fear they’re going to die or be forever paralyzed.

“If you start having the hallucinatory REM sleep-dream activity going on, then it can get even worse,” Sharpless said.

Should people sense a presence in the room, the brain organizes that sensed presence into an actual shape or object, usually an intruder, attacker or something else scary, like an evil force. “If it goes on, you might actually make physical contact with the hallucination: You could feel that you’re being touched. You might smell it; you might hear it,” Sharpless said.

These aren’t nightmares. With nightmares, people aren’t aware of their bedroom surroundings and they certainly can’t move their eyes around the room.

What might explain that dense pressure on the chest, like you’re being suffocated or smothered? People are more likely to experience breathing disruptions when they’re sleeping on their backs. People with sleep apnea are also more likely to experience bouts of sleep paralysis because of disrupted oxygen levels, and being awake, temporarily paralyzed and in a not-so-positive state can itself affect respiration. Rates of sleep paralysis are higher in other sleep disorders as well, narcolepsy especially.

While sleep paralysis can be weird and seriously uncomfortable, Sharpless marvels in Monsters on the Couch at how often people have asked him how one might be able to induce sleep paralysis.

One way is to have messed up sleep. Anything that disrupts sleep seems to increase the odds, Sharpless said, like sleep deprivation, jet lag, erratic sleep schedules. Sleep paralysis has also been linked to “exploding head syndrome,” a sleep disorder Sharpless has published a good bit on. People experience auditory hallucinations — loud bangs or explosions that last a mere second — during sleep-wake transitions.

How can people snap out of sleep paralysis?

In a survey of 156 university students with sleep paralysis, some of the more effective “disruption techniques” involved trying to move smaller body parts like fingers or toes, and trying to calm down or relax in the moment.

One review of 42 studies linked a history of trauma, a higher body mass index and chronic pain with episodes of fearful sleep paralysis. Excessive daytime sleepiness, excessively short (fewer than six hours) or excessively long (longer than nine hours) sleep duration have also been implicated.

To reduce the risk, Sharpless recommends good sleep hygiene, including going to bed and waking up at the same time, not drinking alcohol or caffeine too close to bedtime and “taking care of any issues you’ve been avoiding,” especially anxiety, depression or trauma. One simple suggestion: try to sleep on your side. “If you have a partner, have them gently roll you over,” Sharpless said. Zadra, author, with Robert Stickgold, of When Brains Dream: Exploring the Science and Mystery of Sleep, recommends trying to move the tongue to disengage motor paralysis. “The tongue is not paralyzed in REM sleep. Technically, you can move it,” Zadra said. Even thinking about moving the tongue or toes can put people into a whole different mindset “rather than this feeling of panic and not being able to move at all,” said Zadra.

Source : https://nationalpost.com/longreads/sleep-paralysis-terrors

 

Weight loss drugs help with fat loss – but they cause bone and muscle loss too

Patient injecting themself in the stomach with an Ozempic (semaglutide) needle. (Photo by Douglas Cliff on Shutterstock)

For a long time, dieting and exercise were the only realistic options for many people who wanted to lose weight, but recent pharmaceutical advances have led to the development of weight loss drugs. These are based on natural hormones from the intestine that help control food intake, such as GLP-1 and GIP.

GLP-1-based drugs such as semaglutide (Wegovy and Ozempic) and tirzepatide (Mounjaro) work by helping people to feel less hungry. This results in them eating less – leading to weight loss.

Studies show that these drugs are very effective in helping people lose weight. In clinical trials of people with obesity, they led to a loss of up to 20% of body weight in some instances.

But it’s important to note that not all the weight lost is fat. Research shows that up to one-third of this weight loss is so-called “non-fat mass” – this includes muscle and bone mass. This also happens when someone goes on a diet, and after weight loss surgery.
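
To see what those two figures imply together, here is a minimal worked example; the 100 kg starting weight is a hypothetical value chosen for illustration, not a number from the trials:

```python
# Illustrative example combining the two figures above; the 100 kg
# starting weight is hypothetical, not a value from any trial.
start_weight_kg = 100
weight_loss_fraction = 0.20   # "up to 20% of body weight" in trials
non_fat_fraction = 1 / 3      # "up to one-third" of the loss is non-fat mass

total_loss_kg = start_weight_kg * weight_loss_fraction
non_fat_loss_kg = total_loss_kg * non_fat_fraction
print(f"total loss: {total_loss_kg:.1f} kg")            # 20.0 kg
print(f"muscle/bone share: {non_fat_loss_kg:.1f} kg")   # ~6.7 kg
```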

Muscle and bone play very important roles in our health. Muscle is important for a number of reasons, including that it helps us control our blood sugar. Blood sugar control isn’t as good in people who have lower levels of muscle mass.

High blood sugar levels are also linked to health conditions such as Type 2 diabetes, in which high blood sugar can lead to blindness, nerve damage, foot ulcers and infections, and circulation problems such as heart attacks and strokes.

We need our bones to be strong so that we can carry out our everyday activities. Losing bone mass can increase our risk of fractures.

Researchers aren’t completely sure why people lose fat-free mass during weight loss – though there are a couple of theories.

It’s thought that during weight loss, muscle proteins are broken down faster than they can be built. And, because there’s less stress on the bones due to the weight that has been lost, this might affect normal bone turnover – the process where old bone is removed and new bone is formed leading to less bone mass being manufactured than before weight loss.

Because GLP-1 drugs are so new, we don’t yet know the longer-term effects of weight loss achieved by using them. So, we can’t be completely sure how much non-fat mass someone will lose while using these drugs or why it happens.

It’s hard to say whether the loss of non-fat mass could cause problems in the longer term or if this would outweigh the many benefits that are associated with these drugs.

Maintaining muscle and bone

There are many things you can do while taking GLP-1 drugs for weight loss to maintain your muscle and bone mass.

Research tells us that eating enough protein and staying physically active can help reduce the amount of non-fat mass that is lost when losing weight. One of the best types of exercise is resistance training, or weight training. This will help preserve muscle mass, while protein helps us maintain and build muscle.

Source : https://studyfinds.org/weight-loss-drugs-bone-muscle/

Content overload: Streaming audiences plagued by far too many options

(Credit: DANIEL CONSTANTE/Shutterstock)

New survey finds the average viewer spends 110 hours each year just figuring out what to watch.

In an era of endless entertainment options, streaming subscribers are drowning in choices — and not in a good way. A new survey reveals a startling paradox: despite having more content at their fingertips than ever before, viewers are struggling to find something worth watching.

Commissioned by UserTesting and conducted by Talker Research, the survey exposes the growing frustration with the current streaming landscape. The research paints a vivid picture of entertainment exhaustion, revealing that the average person now spends a staggering 110 hours per year — nearly five full days — simply scrolling through streaming platforms in search of something to watch.
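
For scale, a quick sketch converting the survey's annual figure into daily terms (the "nearly five full days" above follows from the same arithmetic):

```python
# Converting the survey's annual scrolling figure into daily terms.
hours_per_year = 110

print(f"{hours_per_year / 24:.1f} full days per year")     # ~4.6 ("nearly five")
print(f"{hours_per_year * 60 / 365:.0f} minutes per day")  # ~18 minutes
```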

One in five subscribers believe finding something to watch is harder now than a decade ago, a sentiment rooted in the overwhelming abundance of content. Forty-one percent of respondents struggle with increasingly large content libraries, while 26% feel there’s an overproduction of original content.

“The streaming landscape has evolved from solving the problem of content access to creating a new challenge of content discovery,” says Bobby Meixner, Senior Director of Industry Solutions at UserTesting, in a statement.

This observation is backed by intriguing revelations that highlight the complexity of our modern entertainment landscape. While 75% appreciate streaming service algorithms for providing accurate recommendations, 51% simultaneously admit feeling overwhelmed by the sheer quantity of suggested content.

Traditional TV is rapidly transforming too

Researchers found that 48% of subscribers have already abandoned cable television. TV viewers have been drawn to streaming platforms for various reasons, including content variety (43%), access to shows not available on cable (34%), and the convenience of on-the-go viewing (29%). However, the audience’s satisfaction remains elusive. In fact, 51% of subscribers would welcome more streaming options, even if those options include advertisements.

When envisioning their ideal streaming service, subscribers prioritized some specific features. Four in 10 desired premium channels and networks at no additional cost, while 39% emphasized the importance of an easy-to-navigate interface. The average subscriber believes a comprehensive streaming service should cost no more than $46 per month, though 11% would be willing to pay over $100 for the right service.

Hidden fees and content availability present significant challenges to subscriber loyalty. Seventy-nine percent expressed frustration with streaming services requiring additional subscription fees for select content. When encountering these fees, viewers respond dramatically: 73% look for the content on another platform, 77% give up and watch something else, and 37% consider canceling their subscription altogether. One in five would even resort to signing up for a free trial just to watch a specific show.

What do loyal customers want?

The study also revealed the precarious nature of content loyalty. Two in three people have opened a streaming service only to find the show they signed up to watch had been removed from the platform. Forty-four percent would switch services to continue watching a favorite show, with 56% planning to cancel their subscription immediately after finishing that show. The cancellation process itself becomes another point of friction, with 23% of subscribers reporting difficulties, including challenges in finding the cancellation option (39%) and overly complicated multi-step processes (36%).

Source : https://studyfinds.org/content-overload-streaming/

Microplastics are invading our bodies — 5 ways to keep them out

Microplastics on the beach. (© David Pereiras – stock.adobe.com)

Most people know by now that microplastics are building up in our environment and within our bodies. However, according to Dr. Leonardo Trasande, director of environmental pediatrics at NYU School of Medicine, there are ways to reduce the influx of plastics into our bodies. It starts with avoiding canned foods.

Plastic is everywhere. It’s in our food packaging, our homes, and our clothing. You can’t avoid it completely. Much of it serves important purposes in everything from computers to cars, but it’s also overwhelming our environment.

It affects our health. Minute bits of plastic, called microplastics or nanoplastics, are shed from larger products. These particles have invaded our brains, glands, reproductive organs, and cardiovascular systems.

CNN Chief Medical Correspondent Dr. Sanjay Gupta spoke with Trasande about his last two decades studying environmental effects on our health. Trasande said that we eat a lot of plastic and also inhale it as dust. It’s even in cosmetics that we absorb through our skin.

The contamination concerns not just the plastic itself but also what is in it: chemicals that cause inflammation and irritation. Polyvinyl chloride, a plastic used in food packaging, contains added chemicals called phthalates, which make it softer.

Dr. Trasande worries about phthalates (an ingredient in personal care items and food packaging), bisphenols (which line aluminum cans and coat thermal paper receipts), and perfluoroalkyl and polyfluoroalkyl substances (PFAS) – called “forever chemicals” because they last for centuries in the environment.

Many of these added chemicals are especially concerning due to their effects on the endocrine system – glands and the hormones they secrete. The endocrine system controls many of our bodies’ functions, such as metabolism and reproduction. Hormones are signaling molecules, acting as expert conductors of the body’s communication within itself.

5 things you can do to avoid exposure

Avoid canned foods

While bisphenol A (BPA) — a chemical that was commonly used in the lining of many metal food and drink cans, lids, and caps — is no longer present in the packaging for most products (canned tuna, soda, and tomatoes), industry data shows that it is still used about 5% of the time, possibly more.

Also, it is unclear if BPA’s replacement is safer. One of the common substitutes, bisphenol S, is as toxic as BPA. It has seeped into our environment as well.

Keep plastic containers away from heat and harsh cleaners

The “microwave and dishwasher-safe” labeling on some plastics refers only to warping or gross misshaping of a plastic container. Examine a container microscopically, however, and you can see damage: bits of chemical additives and/or plastic are shed and absorbed into the food, which you then ingest.

If the plastic is etched, like a well-used plastic cutting board, it should be discarded. Etching increases the leaching of chemicals into your food.

Source: https://studyfinds.org/microplastics-5-ways-to-keep-them-out/

Heart tissue can regenerate — How Cold War nuclear tests led to major discovery

(ID 328527023 © Dmitry Buksha | Dreamstime.com)

Study reveals extraordinary self-healing potential in advanced heart failure patients
TUCSON, Ariz. — For decades, medical science has insisted that the human heart cannot repair itself in any meaningful way. This dogma, as fundamental to cardiology as a heartbeat itself, is now being challenged by game-changing research that reveals our hearts may possess an extraordinary power of regeneration—provided they’re given the right conditions to heal.

The study, published in Circulation, offers potential new directions for treating heart failure, a condition that affects nearly 7 million U.S. adults and accounts for 14% of deaths annually, according to the Centers for Disease Control and Prevention.

Traditionally, the medical community has viewed the human heart as having minimal regenerative capabilities. Unlike skeletal muscles that can heal after injury, cardiac muscle tissue has been thought to have very limited repair capacity.

“When a heart muscle is injured, it doesn’t grow back. We have nothing to reverse heart muscle loss,” says Dr. Hesham Sadek, director of the Sarver Heart Center at the University of Arizona College of Medicine – Tucson, in a statement.

However, this new research, conducted by an international team of scientists, demonstrates that hearts supported by mechanical assist devices can achieve cellular renewal rates significantly higher than previously observed. The study examined tissue samples from 52 patients with advanced heart failure, including 28 who received left ventricular assist devices (LVADs) – mechanical pumps surgically implanted to help weakened hearts pump blood more effectively.

The research methodology centered on an innovative approach to tracking cell renewal. Using a technique that measures carbon-14 levels in cellular DNA – taking advantage of elevated atmospheric levels from Cold War nuclear testing – researchers could effectively date when cardiac cells were created. This method provided unprecedented insight into the heart’s regenerative processes.
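To make the approach concrete, here is a toy sketch of the bomb-pulse logic (ours, not the study’s actual pipeline): a cell’s DNA locks in the atmospheric carbon-14 level of the year the cell was created, so matching a measured level against the atmospheric record estimates the cell’s birth year. The curve values below are rough placeholders for illustration, not research data.

# Toy illustration of bomb-pulse dating. DNA carries the atmospheric
# carbon-14 level of the year its cell was born; on the declining
# post-1963 curve, each level corresponds to roughly one year.
# Curve values are rough placeholders, not research data.
import numpy as np

years = np.array([1964, 1975, 1990, 2005, 2020])       # test-ban peak onward
excess_c14 = np.array([0.80, 0.35, 0.15, 0.06, 0.01])  # excess vs. pre-bomb baseline

def estimate_birth_year(measured_excess):
    """Interpolate the year whose atmospheric level matches the sample."""
    # np.interp needs ascending x values, so reverse the declining curve.
    return float(np.interp(measured_excess, excess_c14[::-1], years[::-1]))

print(estimate_birth_year(0.10))  # ~1998 with these placeholder values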

The findings revealed a stark contrast between different patient groups. In healthy hearts, cardiac muscle cells (cardiomyocytes) naturally renew at approximately 0.5% per year. However, in failing hearts, this renewal rate drops dramatically – to 0.03% in cases of non-ischemic cardiomyopathy (heart failure not caused by blocked arteries) and 0.01% in ischemic cardiomyopathy (heart failure from blocked arteries).
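To put those rates in perspective (our arithmetic, not a figure from the study), a steady 0.5% annual renewal compounds over a lifetime:

\[ 1 - (1 - 0.005)^{80} \approx 0.33 \]

That is, roughly a third of a healthy heart’s muscle cells would be replaced over 80 years, whereas the failing-heart rate of 0.03% per year would replace only about 2% in the same span.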

The most significant finding emerged from patients who responded positively to LVAD support. These “responders,” who showed improved cardiac function, demonstrated cardiomyocyte renewal rates more than six times higher than those seen in healthy hearts. This observation provides what Dr. Sadek calls “the strongest evidence we have, so far, that human heart muscle cells can actually regenerate.”

The study builds upon previous research, including Dr. Sadek’s 2011 publication in Science showing that heart muscle cells actively divide during fetal development but cease shortly after birth to focus solely on pumping blood. His 2014 research provided initial evidence of cell division in artificial heart patients, laying the groundwork for the current study.

The mechanism behind this increased regeneration may be linked to the unique way LVADs support heart function. These devices effectively provide cardiac muscle with periods of reduced workload by assisting with blood pumping, potentially creating conditions that enable regeneration. This observation aligns with established knowledge about how other tissues in the body heal and regenerate when given adequate rest.

The research team found that in failing hearts, most cellular DNA synthesis is directed toward making existing cells larger or more complex through processes called polyploidization and multinucleation, rather than creating new cells. However, in LVAD patients who showed improvement, a significant portion of DNA synthesis was dedicated to generating entirely new cardiac muscle cells – a more beneficial form of cardiac adaptation.

Approximately 25% of LVAD patients demonstrate this enhanced regenerative response, raising important questions about why some patients respond while others do not. Understanding these differences could be crucial for developing new therapeutic approaches. “The exciting part now is to determine how we can make everyone a responder,” says Sadek.

The implications of this research are particularly promising because LVADs are already an established treatment option. As Dr. Sadek points out, “The beauty of this is that a mechanical heart is not a therapy we hope to deliver to our patients in the future – these devices are tried and true, and we’ve been using them for years.”

Source: https://studyfinds.org/heart-muscle-regeneration-cold-war-tests/

The dark side of digital work: ‘Always on’ culture creating new type of anxiety for employees

(© Maridav – stock.adobe.com)

Think about the last time you checked your work email after hours. Do you find yourself having the urge to scan your inbox frequently while on vacation? A new study from the University of Nottingham suggests these digital intrusions may be taking a significant toll on employee wellbeing.

The research, published in Frontiers in Organizational Psychology, explores what researchers call the “dark side” of digital workplaces: the hidden psychological and physical costs that come with being constantly connected to work through technology. While digital tools have enabled greater flexibility and collaboration, they’ve also created new challenges that organizations need to address.

The researchers identified a phenomenon they term “Digital Workplace Technology Intensity” (DWTI). This is the mental and emotional effort required to navigate constant connectivity, handle information overload, deal with technical difficulties, and cope with the fear of missing important updates or connections in the digital workplace.

“Digital workplaces benefit both organizations and employees, for example, by enabling collaborative and flexible work,” explains Elizabeth Marsh, ESRC PhD student from the School of Psychology who led the qualitative study, in a statement. “However, what we have found in our research is that there is a potential dark side to digital working, where employees can feel fatigue and strain due to being overburdened by the demands and intensity of the digital work environment. A sense of pressure to be constantly connected and keeping up with messages can make it hard to psychologically detach from work.”

Rise of ‘productivity anxiety’

To understand these challenges, the research team conducted in-depth interviews with 14 employees across various roles and industries. The participants, aged 27 to 60, included store managers, software engineers, and other professionals, providing insights into how digital workplace demands affect different types of work.

The researchers identified five key themes that characterize the challenges of digital work. The first is “hyperconnectivity.” They define this as a state of constant connection to work through digital devices that erodes the boundaries between professional and personal life. As one participant explained: “You kind of feel like you have to be there all the time. You have to be a little green light.”

This always-on culture has given rise to what the study reveals as “productivity anxiety,” or workers’ fear of being perceived as unproductive when working remotely. One participant described this pressure directly: “It’s that pressure to respond […] I’ve received an e-mail, I’ve gotta do this quickly because if not, someone might think ‘What is she doing from home?’”

FOMO leading to workplace overload

The study also identified “techno-overwhelm,” where workers struggle with the sheer volume of digital communications and platforms they must manage. Participants described feeling bombarded by emails and overwhelmed by the proliferation of messages, applications, and meetings in the digital workplace.

Technical difficulties, which the researchers termed “digital workplace hassles,” emerged as another significant source of stress. The study found these challenges were particularly significant for older workers and those with disabilities, highlighting important accessibility concerns that organizations need to address.

The research also revealed an interesting pattern around the Fear of Missing Out (FoMO) in professional settings. While digital tools are meant to improve communication, many participants expressed anxiety about potentially missing important updates or opportunities for connection with colleagues.

“This research extends the Job Demands-Resources literature by clarifying digital workplace job demands including hyperconnectivity and overload,” says Dr. Alexa Spence, Professor of Psychology at Nottingham. “It also contributes a novel construct of digital workplace technology intensity which adds new insight on the causes of technostress in the digital workplace. In doing so, it highlights the potential health impacts, both mental and physical, of digital work.”

Disconnecting from the connected world

The study’s findings are particularly relevant in our post-pandemic era, where the boundaries between office and home have become increasingly blurred. As one participant noted: “[It’s] just more difficult to leave it behind when it’s all online and you can kind of jump on and do work at any time of the day or night.”

Source: https://studyfinds.org/the-dark-side-of-digital-work-productivity-anxiety/

80% of adults carry this virus — For some, it could trigger Alzheimer’s

The brain’s immune cells, or microglia (light blue/purple), are shown interacting with amyloid plaques (red) — harmful protein clumps linked to Alzheimer’s disease. The illustration highlights the microglia’s role in monitoring brain health and clearing debris. (Illustration by Jason Drees/Arizona State University)

In the gut of some Alzheimer’s patients lies an unexpected culprit: a common virus that may be silently contributing to their disease. While scientists have long suspected microbes might play a role in Alzheimer’s disease, new research has uncovered a surprising link between a virus that infects most humans and a distinct subtype of the devastating neurological condition.

The research suggests that human cytomegalovirus (HCMV) — a virus that infects 80% of adults over 80 — may play a more significant role in Alzheimer’s disease than previously thought, particularly when combined with specific immune system responses.

The study, led by researchers at Arizona State University and multiple collaborating institutions, focused on a specific type of brain cell, microglia, marked by a protein called CD83. These CD83-positive microglia were found in 47% of Alzheimer’s patients compared to 25% of unaffected individuals.

This study, published in the journal Alzheimer’s and Dementia, is particularly notable because it examines multiple body systems, including the gut, the vagus nerve (which connects the gut to the brain), and the brain itself. The researchers found that subjects with CD83-positive microglia in their brains were more likely to have both HCMV and increased levels of an antibody called immunoglobulin G4 (IgG4) in their colon, vagus nerve, and brain tissue.

“We think we found a biologically unique subtype of Alzheimer’s that may affect 25% to 45% of people with this disease,” says study co-author Dr. Ben Readhead, a research associate professor with ASU-Banner Neurodegenerative Disease Research Center, in a statement. “This subtype of Alzheimer’s includes the hallmark amyloid plaques and tau tangles—microscopic brain abnormalities used for diagnosis—and features a distinct biological profile of virus, antibodies and immune cells in the brain.”

For their research, the team examined tissue samples from multiple areas of the body in both Alzheimer’s patients and healthy controls.

“It was critically important for us to have access to different tissues from the same individuals. That allowed us to piece the research together,” says Readhead, who also serves as the Edson Endowed Professor of Dementia Research at the center.

To further investigate the potential impact of HCMV on brain cells, the team conducted experiments using cerebral organoids – simplified versions of human brain tissue grown in the laboratory. When these organoids were infected with HCMV, they showed accelerated development of two key markers of Alzheimer’s disease: amyloid beta-42 and phosphorylated Tau-212. The infected organoids also showed increased rates of neuronal death.

The researchers emphasize that while HCMV infection is common, only a subset of individuals shows evidence of intestinal HCMV infection, which appears to be the relevant factor in the virus reaching the brain.

Study authors suggest that in some individuals, HCMV infection might trigger a cascade of events involving the immune system that could contribute to the development or progression of Alzheimer’s disease. This is particularly interesting because it might help explain why some people develop Alzheimer’s while others don’t, despite HCMV being so common in the general population.

Looking ahead, the research team is developing a blood test to identify individuals with chronic intestinal HCMV infection. They hope to use this in combination with emerging Alzheimer’s blood tests to evaluate whether existing antiviral drugs could be beneficial for this subtype of Alzheimer’s disease.

“We are extremely grateful to our research participants, colleagues, and supporters for the chance to advance this research in a way that none of us could have done on our own,” notes Dr. Eric Reiman, Executive Director of Banner Alzheimer’s Institute and the study’s senior author. “We’re excited about the chance to have researchers test our findings in ways that make a difference in the study, subtyping, treatment and prevention of Alzheimer’s disease.”

With the development of a blood test to identify patients with chronic HCMV infection on the horizon, this research might not just explain why some people develop Alzheimer’s – it might also point the way toward preventing it. In the end, the key to understanding this devastating brain disease may have been hiding in our gut all along.

Source: https://studyfinds.org/gut-virus-trigger-alzheimers/

You shouldn’t have! Holiday shoppers spending $10.1 billion on gifts nobody wants

(Credit: Asier Romero/Shutterstock)

This holiday season, take a moment to ask yourself, “Does this person really want what I’m buying them?” A new survey finds the answer is likely no! Researchers have found that more than half of Americans (53%) will receive a gift they don’t want.

As Elon Musk and Vivek Ramaswamy go looking for waste in Washington, it turns out that everyday Americans are throwing away tons of money too. According to the new forecast from Finder, unwanted presents will reach an all-time high in both volume and cost this year, with an estimated $10.1 billion being spent on gifts headed for the regifting pile.

Overall, the annual holiday spending forecast finds that roughly 140 million Americans will receive at least one unwanted present in 2024. Shockingly, one in 20 people expect to receive at least five gifts they won’t want to keep. The average cost of these unwanted items is expected to rise to $72 this holiday season, up from $66 last year. That represents a billion-dollar surge in wasteful holiday spending.

Saying “you shouldn’t have…” might be a more truthful statement than ever when it comes to certain gift ideas. Clothing and accessories top 2024’s list of the most unwanted gifts people receive. Specifically, 43% hope to avoid these personal items. However, that number is actually down from the 49% who didn’t want clothes for Christmas in 2022. So, maybe some Americans need a new pair of socks this year.

Household items follow clothing as the least popular holiday gifts (33%), while cosmetics and fragrances round out the top three at 26%. Interestingly, technology gifts are skyrocketing in unpopularity. Since 2022, the dislike for tech gifts has risen by a whopping 10 percentage points, from 15% in 2022 to 25% this holiday season. So, maybe think twice before getting your friend their eighth pair of headphones.

The season of re-giving

So, what happens to all these well-intentioned but unwanted presents? The survey found that regifting is the most popular solution in 2024. Nearly four in 10 Americans (39%) plan to pass their unwanted gifts along to someone else. That’s the most popular option this year, surpassing the awkward choice of keeping a bad gift. Interestingly, a staggering 43% of Americans kept their unwanted presents in 2022, but that number has now fallen to 35%.

Another 32% take advantage of post-holiday exchange policies to swap their unwanted items for something more desirable. However, more and more people are just opting to sell their sub-par presents for cold hard cash. Over one in four (27%) plan to sell unwanted gifts after the holidays, up significantly from 17% in 2022.

So, if you’re still looking for last-minute gifts this holiday season, choose wisely. There’s a very good chance the person you’re buying for won’t like your choices anyway.

Source: https://studyfinds.org/holiday-shoppers-unwanted-gifts/

See how Google Gemini 2.0 Flash can perform hours of business analysis in minutes

Anyone who has had a job that required intensive amounts of analysis will tell you that any speed gain they can find is like getting an extra 30, 60, or 90 minutes back out of their day.

Automation tools in general, and AI tools specifically, can assist business analysts who need to crunch massive amounts of data and succinctly communicate it.

In fact, a recent Gartner analysis, “An AI-First Strategy Leads to Increasing Returns,” states that the most advanced enterprises rely on AI to increase the accuracy, speed, and scale of analytical work to fuel three core objectives — business growth, customer success, and cost efficiency — with competitive intelligence being core to each.

Google’s newly released Gemini 2.0 Flash provides business analysts with greater speed and flexibility in defining Python scripts for complex analysis, giving them more precise control over the results they generate.

Google claims that Gemini 2.0 Flash builds on the success of 1.5 Flash, its most adopted model yet for developers.

Gemini 2.0 Flash outperforms 1.5 Pro on key benchmarks, delivering twice the speed, according to Google. 2.0 Flash also supports multimodal inputs, including images, video, and audio, as well as multimodal output, including natively generated images mixed with text and steerable text-to-speech (TTS) multilingual audio. It can also natively call tools like Google Search, code execution, and third-party user-defined functions.

Taking Gemini 2.0 Flash for a test drive
VentureBeat gave Gemini 2.0 Flash a series of increasingly complex Python scripting requests to test its speed, accuracy, and precision in dealing with the nuances of the cybersecurity market.

Using Google AI Studio to access the model, VentureBeat started with simple scripting requests, working up to more complex ones centered on the cybersecurity market.
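VentureBeat worked through the AI Studio interface, but the same model can be called programmatically. Below is a minimal sketch, assuming the google-generativeai Python package and an AI Studio API key; the model identifier shown, gemini-2.0-flash-exp, is our assumption and may differ from the build VentureBeat tested.

# Minimal sketch: send a scripting prompt to Gemini 2.0 Flash and print
# the generated Python code. Assumes `pip install google-generativeai`
# and an API key from Google AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key
model = genai.GenerativeModel("gemini-2.0-flash-exp")  # assumed model ID

response = model.generate_content(
    "Write a Python script that builds a comparison table of XDR vendors."
)
print(response.text)  # the generated Python script, ready to run elsewhere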

What’s immediately noticeable about Python scripting with Gemini 2.0 Flash is how fast it is, producing working scripts in seconds. It’s noticeably faster than 1.5 Pro, Claude, and ChatGPT when handling increasingly complex prompts.

VentureBeat asked Gemini 2.0 Flash to perform a typical task that a business or market analyst might be asked to do: create a matrix comparing a series of vendors and analyze how AI is used across each company’s products.

Analysts often have to create tables quickly in response to sales, marketing, or strategic planning requests, and they usually need to include unique advantages or insights into each company. This can take hours and even days to get done manually, depending on an analyst’s experience and knowledge.

VentureBeat wanted to make the prompt request realistic by having the script encompass an analysis of 13 XDR vendors, also providing insights into how AI helps the listed vendors handle telemetry data. As is the case with many requests analysts receive, VentureBeat asked for the script to produce an Excel file of the results.

Here is the prompt we gave Gemini 2.0 Flash to execute:

Write a Python script to analyze the following cybersecurity vendors who have AI integrated into their XDR platform and build a table showing how they differ from each other in implementing AI. Have the first column be the company name, the second column the company’s products that have AI integrated into them, the third column being what makes them unique and the fourth column being how AI helps handle their XDR platforms’ telemetry data in detail with an example. Don’t web scrape. Produce an Excel file of the result and format the text in the Excel file so it is clear of any brackets ({}), quote marks (‘) and any HTML code to improve readability. Name the Excel file Gemini 2 flash test.
Cato Networks, Cisco, CrowdStrike, Elastic Security XDR, Fortinet, Google Cloud (Mandiant Advantage XDR), Microsoft (Microsoft 365 Defender XDR), Palo Alto Networks, SentinelOne, Sophos, Symantec, Trellix, VMware Carbon Black Cloud XDR

Using Google AI Studio, VentureBeat created the following AI-powered XDR Vendor Comparison Python scripting request, with Python code produced in seconds:
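The generated code appears in the original article as a screenshot and is not reproduced here. As a stand-in, here is a minimal sketch of the shape such a script might take, assuming pandas and openpyxl are installed; the vendor rows are abbreviated to two entries with placeholder text rather than Gemini’s actual output.

# Sketch of the kind of script the prompt could yield: build a vendor
# comparison table, clean up the text, and write it to an Excel file.
# Vendor details below are placeholders, not Gemini's actual output.
import re
import pandas as pd

vendors = [
    {
        "Company": "CrowdStrike",
        "AI-Integrated Products": "(placeholder description)",
        "What Makes Them Unique": "(placeholder description)",
        "How AI Handles XDR Telemetry": "(placeholder example)",
    },
    {
        "Company": "SentinelOne",
        "AI-Integrated Products": "(placeholder description)",
        "What Makes Them Unique": "(placeholder description)",
        "How AI Handles XDR Telemetry": "(placeholder example)",
    },
    # ...the remaining 11 vendors from the prompt would follow...
]

def clean(text):
    """Strip HTML tags, braces, and quote marks to improve readability."""
    text = re.sub(r"<[^>]+>", "", str(text))
    return text.translate(str.maketrans("", "", "{}'"))

df = pd.DataFrame(vendors)
for col in df.columns:
    df[col] = df[col].map(clean)

df.to_excel("Gemini_2_flash_test.xlsx", index=False)
print("Wrote Gemini_2_flash_test.xlsx")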

Next, VentureBeat saved the code and loaded it into Google Colab. The goal was to see how bug-free the Python code was outside of Google AI Studio and to measure how quickly it ran. The code ran flawlessly with no errors and produced the Microsoft Excel file Gemini_2_flash_test.xlsx.

The results speak for themselves
Within seconds, the script ran, and Colab signaled no errors. It also provided a message at the end of the script that the Excel file was done.

VentureBeat downloaded the Excel file and found the script had generated it in less than two seconds. The following is a formatted view of the Excel table that the Python script delivered.

The total time needed to get this table done was less than four minutes: submitting the prompt, getting the Python script, running it in Colab, downloading the Excel file, and doing some quick formatting.

Source: https://venturebeat.com/ai/google-gemini-2-0-flash-test-drive-reveals-why-every-analyst-needs-to-know-this-modelgoogle-gemini-2-0-flash-test-drive-why-every-analyst-needs-to-know-this-model/

‘Big Brother’ isn’t just watching — He’s changing how your brain works

Surveillance cameras are seemingly everywhere. (ID 192949897 © Aleksandr Koltyrin | Dreamstime.com)

Every time you walk down a city street, electronic eyes are watching. From security systems to traffic cameras, surveillance is ubiquitous in modern society. Yet these cameras might be doing more than just recording our movements: according to a new study that peers into the psychology of surveillance, they could be fundamentally altering how our brains process visual information.

While previous research has shown that surveillance cameras can modify our conscious behavior – making us less likely to steal or more inclined to follow rules – a new study published in Neuroscience of Consciousness suggests that being watched affects something far more fundamental: the unconscious way our brains perceive the world around us.

“We found direct evidence that being conspicuously monitored via CCTV markedly impacts a hardwired and involuntary function of human sensory perception – the ability to consciously detect a face,” explains Associate Professor Kiley Seymour, lead author of the study, in a statement.

Putting surveillance to the test
The research team at the University of Technology Sydney, led by Seymour, designed an ingenious experiment to test how surveillance affects our unconscious visual processing. They recruited 54 undergraduate students and split them into two groups: one group completed a visual task while being conspicuously monitored by multiple surveillance cameras, while the control group performed the same task without cameras present.

The monitored group was shown the surveillance setup beforehand, including a live feed of themselves from the adjacent room, and had to sign additional consent forms acknowledging they would be watched. To ensure participants felt the full weight of surveillance, cameras were positioned to capture their whole body, face, and even their hands as they performed the task.

The visual task itself employed a clever technique called continuous flash suppression (CFS), which temporarily prevents images shown to one eye from reaching conscious awareness while the brain still processes them unconsciously. Participants viewed different images through each eye: one eye saw rapidly changing colorful patterns, while the other saw faces that were either looking directly at them or away from them.

‘Ancient survival mechanisms’ turn on when being watched
The results were remarkable: “Our surveilled participants became hyper-aware of face stimuli almost a second faster than the control group. This perceptual enhancement also occurred without participants realizing it,” says Seymour. This held true whether the faces were looking directly at them or away, though both groups detected direct-gazing faces more quickly overall.

This heightened awareness appears to tap into ancient survival mechanisms. “It’s a mechanism that evolved for us to detect other agents and potential threats in our environment, such as predators and other humans, and it seems to be enhanced when we’re being watched on CCTV,” Seymour explains.

Importantly, this wasn’t simply due to participants trying harder or being more alert under surveillance. When the researchers ran the same experiment using simple geometric patterns instead of faces, there was no difference between the watched and unwatched groups. The enhancement was specific to social stimuli – faces – suggesting that surveillance taps into fundamental neural circuits evolved for processing social information.

Effects on mental health and consciousness
The findings have particular relevance for mental health. “We see hyper-sensitivity to eye gaze in mental health conditions like psychosis and social anxiety disorder where individuals hold irrational beliefs or preoccupations with the idea of being watched,” notes Seymour. This suggests that surveillance might interact with these conditions in ways we don’t yet fully understand.

Perhaps most unsettling was the disconnect between participants’ conscious experience and their brain’s response. “We had a surprising yet unsettling finding that despite participants reporting little concern or preoccupation with being monitored, its effects on basic social processing were marked, highly significant and imperceptible to the participants,” Seymour reveals.

These findings arrive at a crucial moment in human history, as we grapple with unprecedented levels of technological surveillance. From CCTV cameras and facial recognition systems to trackable devices and the “Internet of Things,” our activities are increasingly monitored and recorded. The study suggests that this constant observation may be affecting us on a deeper level than previously realized, modifying basic perceptual processes that normally operate outside our awareness.

The implications extend beyond individual privacy concerns to questions about public mental health and the subtle ways surveillance might be reshaping human cognition and social interaction. As surveillance technology continues to advance, including emerging neurotechnology that could potentially monitor our mental activity, understanding these unconscious effects becomes increasingly crucial.

Source: https://studyfinds.org/big-brother-watching-surveillance-changing-how-brain-works/

The Most Beautiful Mountains on Earth | International Mountain Day

Denali Mountain (Photo by Bryson Beaver on Unsplash)

From the snow-capped peaks of the Himalayas to the dramatic spires of Patagonia, Earth’s mountains stand as nature’s most awe-inspiring monuments. On International Mountain Day, we celebrate these colossal formations that have shaped cultures, inspired religions, and challenged adventurers throughout human history. These geological giants aren’t just spectacular viewpoints – they’re vital ecosystems that provide water, shelter diverse wildlife, and influence global weather patterns. In this visual journey, join us to explore the most beautiful mountains on our planet, each telling its own story of natural forces, cultural significance, and unparalleled beauty that continues to captivate millions of visitors and photographers from around the world.

Most Beautiful Mountains in the World, According to Experts
1. Mount Fuji in Japan
This active volcano on the island of Honshu is a sight to behold. A site of pilgrimage for centuries among Buddhists, followers of Shinto, and others, Mount Fuji is the highest peak in Japan. The last time it erupted was in the 18th century.

Mount Fuji soars to an impressive height of 12,389 feet (3,776 meters) and is particularly stunning when adorned with its signature snowy cap. As Hostelworld points out, while many visitors are eager to get up close to this legendary mountain, its true majesty is often best appreciated from a distance – though you’ll need some patience, as this shy giant has a habit of playing hide-and-seek behind the clouds.

The mountain’s significance runs far deeper than its physical beauty. According to Exoticca, Mount Fuji’s perfect conical shape has made it not just a national symbol, but a deeply spiritual place. Its slopes have long been intertwined with Shinto traditions, and by the early 12th century, followers of the Shugendō faith had even established a temple at its summit, marking its importance in Japanese religious life.

There’s a fascinating irony to Mount Fuji’s allure. Atlas & Boots shares a telling Japanese proverb: climbing it once makes you wise, but twice makes you a fool. While around 300,000 people make the trek annually, the immediate mountain environment is surprisingly stark. The real magic lies in viewing Fuji from afar, where its serene symmetry and majestic presence have rightfully earned it a place among the world’s most beautiful mountains.

2. Mount Kilimanjaro in Tanzania
As the highest freestanding mountain in the world, Kilimanjaro is also the highest mountain in Africa. It is made up of three dormant volcanic cones: Kibo, Mawenzi, and Shira.

Standing proudly at 19,341 feet (5,895 meters), Mount Kilimanjaro offers something you rarely find in a single mountain: an incredible variety of ecosystems stacked one above the other. As The Travel Enthusiast says, this African giant hosts everything from lush rainforests and moorlands to alpine deserts, culminating in an arctic summit that seems almost impossible for its location.

Those who venture to climb Kilimanjaro are treated to more than just stunning vistas. Veranda notes that the mountain provides spectacular views of the surrounding savanna, while the journey up its slopes takes you through an impressive sequence of distinct ecological zones. It’s like traveling from the equator to the poles in a matter of days.

The mountain’s surroundings are just as remarkable as its height. According to Travel Triangle, this legendary peak – one of Africa’s Seven Summits – is crowned with glaciers and an ice field, though both are slowly shrinking. The surrounding Kilimanjaro National Park is a haven for wildlife, where visitors might spot everything from elegant black and white colobus monkeys to elephants and even the occasional leopard prowling through the forest.

3. Matterhorn in Switzerland and Italy
The famously pyramid-shaped Matterhorn straddles the border of Italy and Switzerland in the Alps. Considered one of the deadliest peaks to climb in the world, it is also breathtakingly, unmistakably beautiful.

At 14,692 feet (4,478 meters), the Matterhorn might not be the Alps’ tallest peak, but it’s arguably the most mesmerizing. As Hostelworld notes, this pyramid-shaped giant earned its legendary status not just through its distinctive silhouette, but also through its dramatic history – including its first ascent in 1865 by British climber Edward Whymper.

As Exoticca points out, the mountain’s majesty is best appreciated from the charming Swiss town of Zermatt. This picturesque resort has become synonymous with the Matterhorn itself, offering visitors front-row seats to one of nature’s most impressive displays.

According to Earth & World, which also crowns it the world’s most beautiful mountain, the Matterhorn creates an unforgettable natural spectacle when its rocky peak catches the light, particularly when reflected in the nearby Stellisee Lake. The area around this “mountain of mountains” is also home to Europe’s highest summer skiing region, operating year-round as a paradise for winter sports enthusiasts.

4. Denali Peak in Alaska
Also known as Mount McKinley, Denali Peak is the crown jewel of Alaska’s Denali National Park and Preserve. The name is apt: Denali means “The High One,” and it is the tallest mountain in North America.

Rising to a staggering 20,310 feet (6,190 meters), Denali dominates the Alaskan landscape as one of the world’s most isolated and impressive peaks. As Beautiful World notes, this snow-crowned giant draws adventurers throughout the year, from mountaineers and backpackers in the warmer months to cross-country skiers who glide along its snow-blanketed paths in winter.

Among the world’s greatest climbing challenges, Denali stands as a formidable test of skill and endurance. Atlas & Boots ranks it as perhaps the most demanding of the Seven Summits after Everest, though its breathtaking beauty helps explain why climbers continue to be drawn to its unforgiving slopes.

The mountain’s appeal extends far beyond just climbing, according to Travel Triangle. Situated at the heart of the vast Denali National Park, this Alaskan masterpiece offers visitors a chance to experience nature in its most magnificent form. Its remarkable isolation and untamed character make it a perfect destination for those seeking to connect with the raw power of the natural world.

Source: https://studyfinds.org/most-beautiful-mountains/

Married Millennials Are Getting ‘Sleep Divorces’

Married millennials who are otherwise happy in their relationships are getting “sleep divorces,” a phenomenon in which mismatched sleeping habits make it impossible for the couple to continue sleeping in the same bed, or even in the same bedroom.

Watch an old episode of I Love Lucy and you’ll probably cock your head to the side like a confused dog when you see Lucy and Ricky’s sleeping arrangement: a married couple sleeping in the same bedroom but in two different beds, separated by a bedside table. That’s the way some couples used to sleep, and it’s the only way broadcast decency standards of the era allowed TV shows to depict couples in their bedrooms.

Today’s 30-something married couples are living life like Lucy and Ricky. Whether it’s snoring, restless movements, or one or both having to get up to pee, there are simply too many issues to deal with that can disturb a partner’s sleep.

According to sleep scientist and psychologist Wendy Troxel, up to 30 percent of a person’s sleep quality is influenced by their partner’s sleepytime behavior. Sure, your own thoughts and anxieties make falling and staying asleep a nightmare, but add your partner’s sleep idiosyncrasies into the mix and you have a recipe for insomnia.

A study from Ohio State University found that couples who are not getting adequate sleep are more likely to exhibit negative behaviors when discussing their marital issues. A study of 48 British couples showed that men move around in their sleep a lot more than women, with women reporting being disturbed by their male partner’s movements.

Interestingly, the same study showed that most couples prefer to sleep together rather than apart despite the downsides.

Source: https://www.vice.com/en/article/married-millennials-sleep-divorces/

Friendship after 50: Why social support becomes a matter of life and death

(© Rawpixel.com – stock.adobe.com)

For adults over 50, maintaining close friendships isn’t just about having someone to chat with over coffee – it could be integral to their health and well-being. A new study reveals a stark reality: while 75% of older adults say they have enough close friends, those saying they’re in poor mental or physical health are significantly less likely to maintain these vital social connections. The findings paint a concerning picture of how health challenges can create a cycle of social isolation, potentially making health problems worse.

The University of Michigan’s National Poll on Healthy Aging, conducted in August 2024, surveyed 3,486 adults between 50 and 94, offering an in-depth look at how friendships evolve in later life and their crucial role in supporting health and well-being. The results highlight a complex relationship between health status and social connections that many may not realize exists.

“With growing understanding of the importance of social connection for older adults, it’s important to explore the relationship between friendship and health, and identify those who might benefit most from efforts to support more interaction,” explains University of Michigan demographer Sarah Patterson, in a statement.

Patterson, a research assistant professor at the UM Institute for Social Research’s Survey Research Center, emphasizes the critical nature of understanding these social connections. A robust 90% of adults over 50 said they have at least one close friend, with 48% maintaining one to three close friendships and 42% enjoying the company of four or more close friends. However, these numbers drop dramatically for those facing health challenges.

Among individuals reporting fair or poor mental health, 20% have no close friends at all – double the overall rate. Similarly, 18% of those with fair or poor physical health report having no close friends, suggesting that health challenges can significantly impact social connections.

The gender divide in friendship maintenance is notable: men are more likely than women to report having no close friends. Age also plays a role, with those 50 to 64 years old more likely to report no close friendships than those 65 and older – a somewhat counterintuitive finding that challenges assumptions about social isolation increasing with age.

When it comes to staying in touch, modern technology has helped keep connections alive. In the month before the survey, 78% of older adults had in-person contact with close friends, while 73% connected over the phone, and 71% used text messages. This multi-channel approach to maintaining friendships suggests that older adults are adapting to new ways of staying connected.

The findings resonate particularly with AARP, one of the study’s supporters.

“This poll underscores the vital role friendships play in the health and well-being of older adults,” says Indira Venkat, Senior Vice President of Research at AARP. “Strong social connections can encourage healthier choices, provide emotional support, and help older adults navigate health challenges, particularly for those at greater risk of isolation.”

Perhaps most striking is the role that close friends play in supporting health and well-being. Among those with at least one close friend, 79% say they can “definitely count on these friends for emotional support in good times or bad,” and 70% feel confident turning to their friends to discuss health concerns. These aren’t just casual relationships – they’re vital support systems that can influence health behaviors and outcomes.

Consider this: 50% of older adults say that their close friends encouraged them to make healthy choices, such as exercising more or eating a healthier diet. Another 35% say friends motivated them to get concerning symptoms or health issues checked out by a healthcare provider, and 29% received encouragement to stop unhealthy behaviors like poor eating habits or excessive drinking.

The practical support is equally impressive: 32% had friends who helped them when sick or injured, 17% had friends pick up medications for them, and 15% had friends attend medical appointments with them. These statistics underscore how friendship networks can function as informal healthcare support systems.

However, the study reveals a challenging paradox: making and maintaining friendships becomes more difficult precisely when people might need them most. Among those reporting fair or poor mental health, 65% say making new friends is harder now than when they were younger, compared to 42% of the overall population. Similarly, 61% of those with fair or poor mental health find it harder to maintain existing friendships, compared to 34% of the general over-50 population.

A desire to form new friendships remains high, with 75% of older adults expressing interest in developing new friendships (14% very interested, 61% somewhat interested). This interest is particularly strong among those who live alone and those who report feeling lonely, suggesting a recognition of the importance of social connections.

The study also reveals an interesting trend among friendships between people from different age groups. Among those with at least one close friend, 46% have a friend from a different generation (defined as being at least 15 years older or younger). Of these, 52% have friends from both older and younger generations, while 35% have friends only from younger generations, and 13% have friends only from older generations. This diversity in friendship age ranges suggests that meaningful connections can transcend generational boundaries.

The implications of these findings extend beyond individual relationships. Healthcare providers are encouraged to recognize the vital role that friends play in their patients’ health journeys, from encouraging preventive care to supporting healthy behaviors. Community organizations are urged to create more opportunities for social connection, particularly those that are inclusive and accessible to people with varying health status.

“When health care providers see older adults, we should also ask about their social support network, including close friends, especially for those with more serious health conditions,” says Dr. Jeffrey Kullgren, the poll director and primary care physician at the VA Ann Arbor Healthcare System.

As one considers the cycle of health and friendship revealed in this study, it becomes clear that the old adage about friendship being the best medicine might have more truth to it than we realized. In an age where healthcare increasingly focuses on holistic well-being, perhaps it’s time to add “friendship prescription” to the standard of care.

Source: https://studyfinds.org/friendship-after-50-social-support/

If the universe is already infinite, what is it expanding into?

NASA’s James Webb Space Telescope has produced the deepest and sharpest infrared image of the distant universe to date. Known as Webb’s First Deep Field, this image of galaxy cluster SMACS 0723 is overflowing with detail. Thousands of galaxies – including the faintest objects ever observed in the infrared – have appeared in Webb’s view for the first time. (Credits: NASA, ESA, CSA, and STScI)

When you bake a loaf of bread or a batch of muffins, you put the dough into a pan. As the dough bakes in the oven, it expands into the baking pan. Any chocolate chips or blueberries in the muffin batter become farther away from each other as the muffin batter expands.

The expansion of the universe is, in some ways, similar. But this analogy gets one thing wrong – while the dough expands into the baking pan, the universe doesn’t have anything to expand into. It just expands into itself.

It can feel like a brain teaser, but the universe is, by definition, everything there is. In the expanding universe, there is no pan. Just dough. Even if there were a pan, it would be part of the universe, and it too would expand.

Even for me, a teaching professor in physics and astronomy who has studied the universe for years, these ideas are hard to grasp. You don’t experience anything like this in your daily life. It’s like asking what lies north of the North Pole.

Another way to think about the universe’s expansion is by thinking about how other galaxies are moving away from our galaxy, the Milky Way. Scientists know the universe is expanding because they can track other galaxies as they move away from ours. They define expansion using the rate that other galaxies move away from us. This definition allows them to imagine expansion without needing something to expand into.
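As a concrete illustration (ours, not from the article), that rate-based definition is captured by Hubble’s law, which relates a galaxy’s recession velocity to its distance from us:

\[ v = H_0 \, d, \qquad H_0 \approx 70 \ \mathrm{km\,s^{-1}\,Mpc^{-1}} \]

Under this relation, a galaxy 100 megaparsecs away recedes at roughly 7,000 kilometers per second, and nothing in the formula refers to anything outside the universe.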

The expanding universe
The universe started with the Big Bang 13.8 billion years ago. The Big Bang describes the origin of the universe as an extremely dense, hot singularity. This tiny point suddenly went through a rapid expansion called inflation, where every place in the universe expanded outward. But the name Big Bang is misleading. It wasn’t a giant explosion, as the name suggests, but a time when the universe expanded rapidly.

The universe then cooled as it continued expanding, and matter and light began to form. Eventually, it evolved into what we know today as our universe.

The idea that our universe was not static and could be expanding or contracting was first published by the physicist Alexander Friedmann in 1922. He showed mathematically that general relativity allows for an expanding universe.

While Friedmann showed that the universe could be expanding, it was Edwin Hubble who measured the expansion. Many other scientists had observed that galaxies are moving away from the Milky Way, and in 1929, Hubble published his famous paper confirming that the entire universe is expanding. Much later, in 1998, observations of distant supernovae showed that the rate of expansion is increasing.

This discovery continues to puzzle astrophysicists. What phenomenon allows the universe to overcome the gravity pulling its contents together, instead driving objects ever farther apart? And why, on top of that, is the expansion rate speeding up over time?

Many scientists use a visual called the expansion funnel to describe how the universe’s expansion has sped up since the Big Bang. Imagine a deep funnel with a wide brim. The left side of the funnel – the narrow end – represents the beginning of the universe. As you move toward the right, you are moving forward in time. The cone widening represents the universe’s expansion.

Scientists haven’t been able to detect the energy causing this accelerating expansion, measure it directly, or determine where it comes from. Because they can’t see or directly measure this type of energy, they call it dark energy.

According to researchers’ models, dark energy must be the most common form of energy in the universe, making up about 68% of the total energy in the universe. The energy from everyday matter, which makes up the Earth, the Sun and everything we can see, accounts for only about 5% of all energy.

Outside the expansion funnel
So, what is outside the expansion funnel?

Scientists don’t have evidence of anything beyond our known universe. However, some predict that there could be multiple universes. A model that includes multiple universes could fix some of the problems scientists encounter with the current models of our universe.

One major problem with our current physics is that researchers can’t reconcile quantum mechanics, which describes how physics works at very small scales, with gravity, which governs large-scale physics.

The rules for how matter behaves at the small scale depend on probability and quantized, or fixed, amounts of energy. At this scale, objects can come into and pop out of existence. Matter can behave as a wave. The quantum world is very different from how we see the world.

At large scales, described by what physicists call classical mechanics, objects behave how we expect them to behave on a day-to-day basis. Objects are not quantized and can have continuous amounts of energy. Objects do not pop in and out of existence.

The quantum world behaves kind of like a light switch, where energy has only an on-off option. The world we see and interact with behaves like a dimmer switch, allowing for all levels of energy.

But researchers run into problems when they try to study gravity at the quantum level. At the small scale, physicists would have to assume gravity is quantized. But the research many of them have conducted doesn’t support that idea.

Source: https://studyfinds.org/universe-infinite-still-expanding/

Human settlement of Mars isn’t as far off as we might think

Illustration of human colony on Mars. (© Anastasiya – stock.adobe.com)

Could humans expand out beyond their homeworld and establish settlements on the planet Mars? The idea of settling the Red Planet has been around for decades. However, it has been seen by skeptics as a delusion at best and mere bluster at worst.

Mars might seem superficially similar to Earth in a number of ways. But its atmosphere is thin and humans would need to live within pressurized habitats on the surface.

Yet in an era where space tourism has become possible, the Red Planet has emerged as a dreamland for rich eccentrics and techno-utopians. As is often the case with science communication, there’s a gulf between how close we are to this ultimate goal and where the general public understands it to be.

However, I believe there is a rationale for settling Mars and that this objective is not as far off as some would believe. There are actually a few good reasons to be optimistic about humanity’s future on the Red Planet.

First, Mars is reachable. During an optimal alignment between Earth and Mars as the two planets orbit the Sun, it’s possible to travel there in a spacecraft in six to eight months. Some very interesting new engine designs suggest that the trip could be done in two months. But based on technology that’s ready to go, it would take astronauts six months to travel to Mars and six months back to Earth.

Astronauts have already stayed for this long on the International Space Station (ISS) and on the Soviet orbiting lab Mir. We can get there safely and we have already shown that we can reliably land robots on the surface. There’s no technical reason why we couldn’t do the same with humans.

Second, Mars is abundant in the raw materials required for humans to “live off the land”; in other words, achieve a level of self-sufficiency. The Red Planet has plentiful carbon, nitrogen, hydrogen and oxygen which can be separated and isolated, using processes developed on Earth. Mars is interesting and useful in a multitude of ways that the moon isn’t. And we have technology on Earth to enable us to stay and settle Mars by making use of its materials.

A third reason for Mars optimism is the radical new technology that we can put to use on a crewed mission to the planet. For example, Moxie (Mars Oxygen In-Situ Resource Utilization Experiment) is a project developed by scientists at the California Institute of Technology (Caltech) that pulls in the Martian atmosphere and extracts oxygen from its carbon dioxide. Byproducts of the process – carbon monoxide, nitrogen and argon – can be vented.
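For reference (our addition, not from the article), the core reaction behind Moxie is solid-oxide electrolysis of carbon dioxide, which splits the CO2 that dominates the Martian atmosphere into oxygen and the carbon monoxide byproduct mentioned above:

\[ 2\,\mathrm{CO_2} \rightarrow 2\,\mathrm{CO} + \mathrm{O_2} \]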

When scaled up, similar machines would be able to produce breathable air, rocket fuel and water on site. This makes it easier to travel to the planet and live on the surface because it’s not necessary to bring these commodities from Earth – they can be made on Mars. Generating fuel on the surface would also make any future habitat less reliant on electric or solar-powered vehicles.

But how would we build the habitats for our Mars settlers? Space architect Melodie Yashar has developed ingenious plans for using robots to 3D print the habitats, landing pads and everything needed for human life on Mars. Using robots means that these could all be manufactured on Mars before humans landed. 3D-printed homes have already been demonstrated on Earth.

Volunteers have also spent time living in simulated Mars habitats here on Earth. These are known as Mars analogues. The emergency medicine doctor Beth Healey spent a year overwintering in Antarctica (which offers many parallels with living on another planet) for the European Space Agency (ESA) and communicates her experience regularly.

She is not alone, as each year sees new projects in caves, deserts and other extreme environments, where long term studies can explore the physical and psychological demands on humans living in such isolated environments.

Finally, the Mars Direct plan devised by Dr. Robert Zubrin has existed for more than 30 years, and has been modified to account for modern technology as the private sector has grown. The original plan was based on using a Saturn V rocket (used for the Apollo missions in the 1960s and 1970s) to launch people. However, this can now be accomplished using the SpaceX Falcon 9 rocket and a SpaceX Dragon capsule to carry crew members.

Several uncrewed launches from Earth could ferry necessary equipment to Mars. These could include a vehicle for crew members to return on. This means that everything could be ready for the first crew once they arrived.

Source: https://studyfinds.org/human-settlement-of-mars-closer/

It’s Friday the 13th. Why is this number feared worldwide?

(Credit: Prazis Images/Shutterstock)

Of all the days to stay in bed, Friday the 13th is surely the best. It’s the title of a popular (if increasingly corny) horror movie series; it’s associated with bad luck, and it’s generally thought to be a good time not to take any serious risks.

Even if you try to escape it, you might fail, as happened to New Yorker Daz Baxter. On Friday 13th in 1976, he decided to just stay in bed for the day, only to be killed when the floor of his apartment block collapsed under him. There’s even a term for the terror the day evokes: Paraskevidekatriaphobia was coined by the psychotherapist Donald Dossey, a specialist in phobias, to describe an intense and irrational fear of the date.

Unfortunately, there is always at least one Friday 13th in a year, and sometimes there are as many as three. Today is one of them. But no matter how many times the masked killer Jason Voorhees from Friday the 13th returns to haunt our screens, this fear has its basis in our own minds, not in science.

One study did show a small rise in accidents on that day for women drivers in Finland, but much of the problem was due to anxiety rather than general bad luck. Follow-up research found no consistent evidence of a rise in accidents on the day but suggested that if you’re superstitious, it might be better not to get behind the wheel of a car on it anyway.

The stigma against Friday 13th likely comes from a merging of two different superstitions. In the Christian tradition, the death of Jesus took place on a Friday, following the presence of 13 people at the Last Supper. In Teutonic legend, the god Loki appears at a dinner party seated for 12 gods, making him the outcast 13th at the table, leading to the death of another guest.

Elsewhere in the world, 13 is less unlucky. In Hinduism, people fast to worship Lord Shiva and Parvati on Trayodashi, the 13th day of the Hindu lunar fortnight. There are 13 Buddhas in the Shingon sect of Buddhism, and The Tibetan Book of the Great Liberation mentions 13 signs as lucky rather than unlucky.

In Italy, it is more likely to be “heptadecaphobia”, or fear of the number 17, that leads to a change of plans. In Greece, Spain, and Mexico, the “unlucky” day is not Friday 13th, but Tuesday 13th.

In China, the number four is considered significantly unlucky, as it is nearly homophonous with the word for “death”. In a multicultural country like Australia, you may find hotels and cinemas missing both their fourth and 13th floors, out of respect for the trepidation people can have about those numbers.

The lure of superstition

Superstitions were among the first elements of paranormal belief studied in the early 1900s. While many are now just social customs rather than genuine convictions, their persistence is remarkable.

If you cross your fingers, feel alarmed at breaking a mirror, find a “lucky” horseshoe, or throw spilled salt over your shoulder, you are engaging in long-held practices that can have a powerful impact on your emotions. Likewise, many students now heading toward their semester exams may bring lucky charms, such as a particular pen or favorite socks, into the lecture rooms.

In sports, baseball player Nomar Garciaparra is known for his elaborate batting ritual. Other athletes wear “lucky gear” or put on their gloves in a particular order. The great cricket umpire David Shepherd stood on one leg whenever the score reached 111. These sorts of superstitions are humorously depicted in the film Silver Linings Playbook. Interestingly, it is often the most successful athletes who hold these superstitions and stick to them.

Source : https://studyfinds.org/friday-the-13th-number-feared/

Scientists close to creating ‘simple pill’ that cures diabetes

Diabetes with insulin, syringe, vials, pills (© Sherry Young – stock.adobe.com)

Imagine a world where diabetes could be treated with a simple pill that essentially reprograms your body to produce insulin again. Researchers at Mount Sinai have taken a significant step toward making this possibility a reality, uncovering a groundbreaking approach that could potentially help over 500 million people worldwide living with diabetes.

Diabetes, affecting 537 million people globally, develops when cells in the pancreas known as beta cells become unable to produce insulin, a hormone essential for regulating blood sugar levels. In both Type 1 and Type 2 diabetes, patients experience a marked reduction in functional, insulin-producing beta cells. While current treatments help manage symptoms, researchers have been searching for ways to replenish these crucial cells.

The journey to this latest discovery began in 2015 when Mount Sinai researchers identified harmine, a drug belonging to a class called DYRK1A inhibitors, as the first compound capable of stimulating insulin-producing human beta cell regeneration. The research team continued to build on this foundation, reporting in 2019 and 2020 that harmine could work synergistically with other medications, including GLP-1 receptor agonists like semaglutide and exenatide, to enhance beta cell regeneration.

In July 2024, researchers reported remarkable results: harmine alone increased human beta cell mass by 300 percent in their studies, and when combined with a GLP-1 receptor agonist like Ozempic, that increase reached 700 percent.

However, there’s an even more exciting part of this discovery. These new cells might come from an unexpected source. Researchers discovered that alpha cells, another type of pancreatic cell that’s abundant in both Type 1 and Type 2 diabetes, could potentially be transformed into insulin-producing beta cells.

“This is an exciting finding that shows harmine-family drugs may be able to induce lineage conversion in human pancreatic islets,” says Dr. Esra Karakose, Assistant Professor of Medicine at Mount Sinai and the study’s corresponding author, in a statement. “It may mean that people with all forms of diabetes have a large potential ‘reservoir’ for future beta cells, just waiting to be activated by drugs like harmine.”

Using single-cell RNA sequencing technology, the researchers analyzed 109,881 individual cells from human pancreatic islets donated by four adults. This technique allowed them to study each cell’s genetic activity in detail, suggesting that “cycling alpha cells” may have the potential to transform into insulin-producing beta cells. Alpha cells, an abundant cell type in pancreatic islets, could potentially serve as an important source of new beta cells if this transformation process can be successfully controlled.

The Mount Sinai team is now moving these studies toward human trials.

“A simple pill, perhaps together with a GLP1RA like semaglutide, is affordable and scalable to the millions of people with diabetes,” says Dr. Andrew F. Stewart, director of the Mount Sinai Diabetes, Obesity, and Metabolism Institute.

While the research is still in its early stages, it offers hope to millions of people who currently manage diabetes through daily insulin injections or complex medication regimens. The possibility of a treatment that could essentially restart the body’s insulin production is nothing short of revolutionary.

The study, published in the journal Cell Reports Medicine, represents a significant step forward in diabetes research. By potentially turning one type of pancreatic cell into another, researchers may have found a way to essentially reprogram the body’s own cellular mechanisms to combat diabetes.

Source : https://studyfinds.org/simple-pill-cure-diabetes/

Polio was supposedly wiped out – Now the virus has been found in Europe’s wastewater

(Credit: Babul Hosen/Shutterstock)

In 1988, the World Health Organization (WHO) called for the global eradication of polio. Within a decade, one of the three poliovirus strains was already virtually eradicated — meaning a permanent reduction of the disease to zero new cases worldwide.

Polio, also known as poliomyelitis, is an extremely contagious disease caused by the poliovirus. It attacks the nervous system and can lead to full paralysis within hours. The virus enters through the mouth and multiplies in the intestine. Infected people shed poliovirus in their feces, and the virus spreads through the fecal-oral route.

About one in every 200 infections results in irreversible paralysis (usually affecting the legs). Of those who become paralyzed, 5–10% die due to immobilized breathing muscles.

Since 1988, the global number of poliovirus cases has decreased by over 99%. Today, only two countries — Pakistan and Afghanistan — are considered “endemic” for polio. This means that the disease is regularly transmitted in the country.

Yet in recent months, poliovirus has been detected in wastewater in Germany, Spain and Poland. This discovery does not confirm infections in the population, but it is a wake-up call for Europe, which was declared polio-free in 2002. Any gaps in vaccination coverage could see a resurgence of the disease.

Poliovirus strains originating from regions where the virus remained in circulation led to outbreaks among unvaccinated people in Tajikistan and Ukraine in 2021, and Israel in 2022. By contrast, in the UK — where poliovirus was detected in wastewater in 2022 — no cases of paralytic disease were recorded.

This highlights how much the consequences of a poliovirus detection can vary. Why? In areas with under-immunized populations, the virus can circulate widely and cause paralysis. But in communities with strong vaccination coverage, the virus often remains limited to symptomless (“asymptomatic”) infections or is detectable only in wastewater.

In this sense, the mere detection of the virus in the environment can serve as a canary in the coal mine. It warns public health officials to check vaccination coverage and take measures such as boosting vaccination campaigns, improving access to healthcare and enhancing disease surveillance to prevent outbreaks.

Rich source of information

Wastewater surveillance, an approach reinvigorated during the COVID pandemic, has proven invaluable for early detection of disease outbreaks. Wastewater is a rich source of information. It contains a blend of human excrement, including viruses, bacteria, fungi and chemical traces. Analysing this mixture offers valuable insights for public health officials.

Routine wastewater testing in the three countries revealed a specific vaccine-derived strain. No polio cases were reported in any of the three countries.

Vaccine-derived poliovirus strains emerge from the weakened live poliovirus contained in oral polio vaccines. If this weakened virus circulates long enough among under-immunized or unimmunized groups, or in people with weakened immune systems (such as transplant recipients or those undergoing chemotherapy), it can genetically revert to a form capable of causing disease.

In this case, it is possible that the virus was shed in the sewage by an infected asymptomatic person. But it is also possible that a person who was recently vaccinated with the oral vaccine (with the weakened virus) shed the virus in the wastewater, which subsequently evolved until re-acquiring the mutations that cause paralysis.

A different type of vaccine exists. The inactivated polio vaccine (IPV) cannot revert to a dangerous form. However, it is more expensive and harder to deliver, since it must be injected by trained health workers. This can limit the feasibility of deploying it in poor countries — often where the need to vaccinate is greatest.

This does not mean the oral polio vaccine is no good. On the contrary, it has been instrumental in eradicating certain poliovirus strains globally. The real issue arises when vaccination coverage is insufficient.

In 2023, polio immunization coverage in one-year-olds in Europe stood at around 95%. This is well above the 80% “herd immunity” threshold — the point at which enough people in a population are vaccinated that vulnerable groups are protected from the disease.

In Spain, Germany and Poland, coverage with three doses ranges from 85–93%, protecting most people from severe disease. Yet under-immunized groups and those with weakened immune systems remain at risk.

The massive progress in polio eradication that happened over the past three decades is the result of the global effort to fight the disease. But mounting humanitarian crises — sparked by conflict, natural disasters and climate change — are significantly disrupting vaccination programs essential for safeguarding public health.

Source : https://studyfinds.org/polio-found-in-europe-wastewater/

Universe expanding faster than physics can explain: Webb telescope confirms mysterious growth spurt

Primordial creation: The universe begins with the Big Bang, an extraordinary moment of immense energy, igniting formation of everything in existence. (© Alla – stock.adobe.com)

When two of humanity’s most powerful eyes on the cosmos agree something strange is happening, astronomers tend to pay attention. Now, the James Webb Space Telescope has backed up what Hubble has been telling us for years: the universe is expanding faster than our best physics can explain, and nobody knows why.

Scientists have long known that our universe is expanding, but exactly how fast it’s growing is an ongoing and fascinating debate in the astronomy world. The expansion rate, known as the “Hubble constant,” helps scientists map the universe’s structure and understand its state billions of years after the Big Bang. This latest discovery suggests we may need to rethink our understanding of the universe itself.

“The discrepancy between the observed expansion rate of the universe and the predictions of the standard model suggests that our understanding of the universe may be incomplete,” says Nobel laureate and lead author Adam Riess, a Bloomberg Distinguished Professor at Johns Hopkins University, in a statement. “With two NASA flagship telescopes now confirming each other’s findings, we must take this problem very seriously—it’s a challenge but also an incredible opportunity to learn more about our universe.”

This research, published in The Astrophysical Journal, builds on Riess’ Nobel Prize-winning discovery that the universe’s expansion is accelerating due to a mysterious “dark energy” that permeates the vast stretches of space between stars and galaxies. Think of this expanding universe like a loaf of raisin bread rising in the oven. As the dough expands, the raisins (representing galaxies) move farther apart from each other. While this force pushes galaxies apart, exactly how fast this is happening remains hotly debated.

For over a decade, scientists have used two different methods to measure this expansion rate. One method looks at ancient light from the early universe, like examining a baby photo to understand how someone grew. The other method, using telescopes to observe nearby galaxies, looks at more recent cosmic events. These two methods give significantly different answers about how fast the universe is expanding – and not just slightly different.

While theoretical models predict the universe should be expanding at about 67-68 kilometers per second per megaparsec (a unit of cosmic distance), telescope observations consistently show a faster rate of 70-76 kilometers per second per megaparsec, averaging around 73. This significant discrepancy is what scientists call the “Hubble tension.”
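
For a sense of scale, here is a quick back-of-envelope calculation using the article’s approximate figures (the midpoints chosen below are illustrative assumptions, not values from the study):

```python
# Rough arithmetic with the approximate expansion rates quoted above,
# in kilometers per second per megaparsec (km/s/Mpc).
predicted = 67.5   # midpoint of the 67-68 early-universe prediction
observed = 73.0    # typical telescope measurement from the 70-76 range

gap_percent = (observed - predicted) / predicted * 100
print(f"Hubble tension: roughly {gap_percent:.0f}%")  # about 8%, the gap discussed below
```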

To help resolve this mystery, researchers turned to the James Webb Space Telescope, the most powerful space observatory ever built. “The Webb data is like looking at the universe in high definition for the first time and really improves the signal-to-noise of the measurements,” says Siyang Li, a graduate student at Johns Hopkins University who worked on the study.

Webb’s super-sharp vision allowed it to examine these cosmic distances in unprecedented detail. The telescope looked at about one-third of the galaxies that Hubble had previously studied, using a nearby galaxy called NGC 4258 as a reference point – like using a well-known landmark to measure other distances.

The researchers used three different methods to measure these cosmic distances, each acting as an independent check on the others. First, they observed special pulsating stars called Cepheid variables, which astronomers consider the “gold standard” for measuring distances in space. These stars brighten and dim in a precise pattern that reveals their true brightness, making them reliable cosmic yardsticks. The team also looked at the brightest red giant stars in each galaxy and observed special carbon-rich stars, providing two additional ways to verify their measurements.

When they combined all these observations, they found something remarkable: All three methods pointed to nearly identical results, with Webb’s measurements matching Hubble’s almost exactly. The differences between measurements were less than 2% – far smaller than the roughly 8-9% discrepancy that creates the Hubble tension.

This agreement might seem like a simple confirmation, but it actually deepens one of astronomy’s biggest mysteries. Scientists now believe this discrepancy might point to missing pieces in our understanding of the cosmos. Recent research has revealed that mysterious components called dark matter and dark energy make up about 96% of the universe’s content and drive its accelerated expansion. Yet even these exotic components don’t fully explain the Hubble tension.

“One possible explanation for the Hubble tension would be if there was something missing in our understanding of the early universe, such as a new component of matter—early dark energy—that gave the universe an unexpected kick after the big bang,” explains Marc Kamionkowski, a Johns Hopkins cosmologist. “And there are other ideas, like funny dark matter properties, exotic particles, changing electron mass, or primordial magnetic fields that may do the trick. Theorists have license to get pretty creative.”

Whether this cosmic puzzle leads us to discover new forms of energy, exotic particles, or completely novel physics, one thing is certain: the universe is expanding our understanding just as surely as it’s expanding itself. And thanks to Webb and Hubble, we’re along for the ride.

Source : https://studyfinds.org/universe-expansion-rate-physics-webb-telescope/

AI Jesus can ‘listen’ to your confession, but here’s why it can’t absolve your sins

(Credit: New Africa/Shutterstock)

This autumn, a Swiss Catholic church installed an AI Jesus in a confessional to interact with visitors.

The installation was a two-month project in religion, technology, and art titled “Deus in Machina,” created at the University of Lucerne. The Latin title literally means “god from the machine”; it refers to a plot device used in Greek and Roman plays, introducing a god to resolve an impossible problem or conflict facing the characters.

This hologram of Jesus Christ on a screen was animated by an artificial intelligence program. The AI’s programming included theological texts, and visitors were invited to pose questions to the AI Jesus, viewed on a monitor behind a latticework screen. Users were advised not to disclose any personal information and to confirm that they knew they were engaging with the avatar at their own risk.

Some headlines stated that the AI Jesus was actually engaged in the ritual act of hearing people’s confessions of their sins, but this wasn’t the case. However, even though AI Jesus was not actually hearing confessions, as a specialist in the history of Christian worship, I was disturbed by the act of placing the AI project in a real confessional that parishioners would ordinarily use.

A confessional is a booth where Catholic priests hear parishioners’ confessions of their sins and grant them absolution, or forgiveness, in the name of God. Confession and repentance always take place within the human community that is the church: human believers confess their sins to human priests or bishops.

Early history

The New Testament scriptures clearly stress a human, communal context for admitting and repenting for sins.

In the Gospel of John, for example, Jesus speaks to his apostles, saying, “Whose sins you shall forgive, they are forgiven, and whose sins you shall retain they are retained.” And in the Epistle of James, Christians are urged to confess their sins to one another.

Churches in the earliest centuries encouraged public confession of more serious sins, such as fornication or idolatry. Church leaders, called bishops, absolved sinners and welcomed them back into the community.

From the third century on, the process of forgiving sins became more ritualized. Most confessions of sins remained private – one-on-one with a priest or bishop. Sinners would express their sorrow by doing penance individually through prayer and fasting.

However, some Christians guilty of certain major offenses, such as murder, idolatry, apostasy or sexual misconduct, would be treated very differently.

These sinners would do public penance as a group. Some were required to stand on the steps of the church and ask for prayers. Others might be admitted in for worship but were required to stand in the back or be dismissed before the scriptures were read. Penitents were expected to fast and pray, sometimes for years, before being ritually reconciled to the church community by the bishop.

Medieval developments

During the first centuries of the Middle Ages, public penance fell into disuse, and emphasis was increasingly placed on verbally confessing sins to an individual priest. After privately completing the penitential prayers or acts assigned by the confessor, the penitent would return for absolution.

The concept of Purgatory also became a widespread part of Western Christian spirituality. It was understood to be a stage of the afterlife in which the souls of those who died with minor sins unconfessed, or with penance uncompleted, would be cleansed by spiritual suffering before being admitted to heaven.

Living friends or family of the deceased were encouraged to offer prayers and undertake private penitential acts, such as giving alms – gifts of money or clothes – to the poor to reduce the time these souls would have to spend in this interim state.

Other developments took place in the later Middle Ages. Based on the work of the theologian Peter Lombard, penance was declared a sacrament, one of the major rites of the Catholic Church. In 1215, a new church document mandated that every Catholic go to confession and receive Holy Communion at least once a year.

Priests who revealed the identity of any penitent faced severe penalties. Guidebooks for priests, generally called Handbooks for Confessors, listed various types of sins and suggested appropriate penances for each.

The first confessionals

Until the 16th century, those wishing to confess their sins had to arrange meeting places with their clergy, sometimes just inside the local church when it was empty.

But the Catholic Council of Trent changed this. The 14th session in 1551 addressed penance and confession, stressing the importance of privately confessing to priests ordained to forgive in Christ’s name.

Soon after, Charles Borromeo, the cardinal archbishop of Milan, installed the first confessionals along the walls of his cathedral. These booths were designed with a physical barrier between priest and penitent to preserve anonymity and prevent other abuses, such as inappropriate sexual conduct.

Similar confessionals appeared in Catholic churches over the following centuries: The main element was a screen or veil between the priest confessor and the layperson, kneeling at his side. Later, curtains or doors were added to increase privacy and ensure confidentiality.

Rites of penance in contemporary times

In 1962, Pope John XXIII opened the Second Vatican Council. Its first document, issued in December 1963, set new norms for promoting and reforming Catholic liturgy.

Since 1975, Catholics have had three forms of the rite of penance and reconciliation. The first form structures private confession, while the second and third forms apply to groups of people in special liturgical rites. The second form, often used at set times during the year, offers those attending the opportunity to go to confession privately with one of the many priests present.

The third form can be used in special circumstances, when death threatens and there is no time for individual confession, such as during a natural disaster or pandemic. Those assembled are given general absolution, and survivors confess privately afterward.

In addition, these reforms prompted the development of a second location for confession: Instead of being restricted to the confessional booth, Catholics now had the option of confessing their sins face-to-face with the priest.

To facilitate this, some Catholic communities added a reconciliation room to their churches. Upon entering the room, the penitent could choose anonymity by using the kneeler in front of a traditional screen or walk around the screen to a chair set facing the priest.

Over the following decades, the Catholic experience of penance changed. Catholics went to confession less often or stopped altogether. Many confessionals remained empty or were used for storage. Many parishes began to schedule confessions by appointment only. Some priests might insist on face-to-face confession, and some penitents might prefer the anonymous form only. The anonymous form takes priority, since the confidentiality of the sacrament must be maintained.

Source : https://studyfinds.org/ai-jesus-cant-absolve-your-sins/

The 7 Best Ski Resorts In The World | From Colorado To Switzerland

From the powdery slopes of the French Alps to Japan’s legendary snowfields, the world’s elite ski resorts offer far more than pristine runs and breathtaking views. These winter wonderlands combine challenging terrain, luxurious amenities, and rich cultural experiences that make them bucket-list destinations for both serious athletes and leisure travelers. Whether you’re seeking champagne powder in Colorado, traditional Alpine charm in Switzerland, or the untouched backcountry of British Columbia, the best ski resorts define the pinnacle of winter sport destinations.

Pic: https://www.tripadvisor.in/

Best Ski Resorts in the World, According to Experts
1. Zermatt, Switzerland

Zermatt is one of the best-known ski resorts in the world, famed for its views of the Matterhorn. But besides the endless slopes, Zermatt is a skiing vacation heaven, with boutique shops, restaurants, skating rinks, and hotel rooms with a gorgeous view. Far and Wide says it all – this resort is legendary, practically trademarked by the iconic Matterhorn peak that graces Toblerone chocolate and even a Disneyland ride. But Zermatt’s true magic lies beyond the photo ops. Sure, snapping that perfect Matterhorn picture is a must, but it’s the vast, snow-covered slopes and charming car-free village that truly steal the show, keeping visitors coming back for more.

Serious skiers can descend all the way to Zermatt from the top of the Matterhorn Glacier Paradise lift, a drop of 2,200m, but there are lifts all over to take intermediate skiers to different sectors. Speaking of the Matterhorn, PureWow calls it one of the most recognizable ski mountains in the world, and for good reason! Towering at almost 15,000 feet, it’s a skier’s dream backdrop and even more awe-inspiring in person than on a chocolate bar. But Zermatt offers so much more than just epic views. PureWow also recommends taking a ride on the Gornergrat Bahn, a legendary train that takes you up the mountain for breathtaking panoramas. Feeling adventurous? Book a lesson with a SkiBro instructor or a mountain guide to learn the best lines down and conquer those slopes like a pro.

Oyster Worldwide knows what’s up – they simply can’t create a “best ski destinations” list without mentioning Zermatt. As the highest resort in the Alps, the views are unbeatable, with the Matterhorn stealing the show from practically any angle on the slopes. Plus, it boasts the greatest vertical drop in all of Switzerland, meaning long, exhilarating runs for all skill levels. And for those who crave a true adrenaline rush, Zermatt offers incredible off-piste terrain – a playground for powderhounds and adventure seekers. Whether you’re a seasoned skier or a first-timer yearning for snowy bliss, Zermatt has something for everyone. So ditch the crowds and ordinary slopes, Zermatt might just be your perfect winter escape!

2. Aspen Snowmass, United States

Aspen, Colorado is home to four different mountains: historic Aspen Mountain, lively Snowmass, uncrowded Aspen Highlands, and the beginner-friendly Buttermilk. Located about four hours from Denver, it can be a little harder to get to, but has been a hub for ski culture since the early 1900s. U.S. News calls it synonymous with North American skiing, and for good reason. Imagine – over 5,600 acres of pristine slopes to explore, all accessible with a single lift ticket – skier’s paradise, anyone?

Aspen is home to luxurious resorts where you can often find celebrities vacationing, as well as many other award-winning restaurants, bars, spas, and other amenities. But Aspen’s charm goes beyond the slopes. Qantas says the heart of the town throbs with designer stores and hidden consignment gems, chic bars buzzing with après-ski energy, and world-class restaurants like Matsuhisa and Element 47 serving up culinary delights. And for those seeking a cultural fix, the Shigeru Ban-designed art museum offers a stunning backdrop for a dose of inspiration.

On the Snow acknowledges Aspen’s reputation as a luxurious ski resort, but emphasizes the true star of the show – the four mountains themselves. Each peak boasts its own character, with Aspen Mountain, nicknamed “Ajax” by the locals, rising from the heart of the town. Steep runs, challenging bumps, powdery glades, and perfectly groomed trails – Ajax has it all, making it a haven for experienced skiers and snowboarders. So, whether you’re a seasoned pro or a first-timer seeking a luxurious winter wonderland, Aspen promises an unforgettable experience.

3. Whistler Blackcomb, Canada

Whistler is only a two-hour drive from Vancouver and offers versatile terrain that’s perfect for families. Hit the slopes or take part in activities at the full-service resort. Ever dreamed of a ski vacation with endless slopes and Olympic-worthy thrills? Then Whistler Blackcomb in British Columbia, Canada, might be your perfect match! The Culture Trip calls it Canada’s most famous ski resort, and for good reason – it hosted the 2010 Winter Olympics and welcomes over two million visitors a year. With a massive skiable area of over 8,000 acres, it’s one of the biggest on the planet. Plus, thanks to those Olympics, the facilities are state-of-the-art, ensuring a smooth and luxurious experience.

Whistler has great terrain for all levels of skiers and snowboarders, and even enough snow that you can ski year round in some areas. The views on the slopes stretch all the way to the Pacific Ocean, and there are over 200 runs serviced by over 30 lifts. PlanetWare says that the resort’s two incredible mountains, Whistler and Blackcomb, are practically begging to be explored. Imagine over 200 marked runs, a combined 8,171 acres of skiable terrain, and not just one, but three glaciers to conquer. An average snowfall of 465 inches a year practically guarantees pristine slopes throughout the season. And with so many lifts to whisk you up the mountain, you’ll spend less time waiting and more time carving epic turns. Don’t miss the legendary Peak 2 Peak Gondola, a must-do aerial experience that takes you between the two mountains for breathtaking panoramas.

The Independent spills the tea on Whistler’s après-ski scene, calling it legendary. After a day of conquering those slopes, unwind in style at a high-end bar, sip on fresh oysters, and soak in the vibrant atmosphere – pure bliss! And for those seeking ultimate relaxation, luxurious chalets with world-class spas await. Whether you’re a hardcore skier craving challenging terrain or a luxury lover seeking a glamorous winter escape, Whistler Blackcomb has something for everyone. So pack your warmest coat, your sense of adventure, and get ready to experience winter wonderland magic at its finest.

Source: https://studyfinds.org/best-ski-resorts-in-the-world/

61% of shoppers say the holiday season is financially terrifying

(© Paolese – stock.adobe.com)

Many people “shop ’til they drop” during the holidays — but a new survey finds that may not be such a great thing. Researchers find one in four people grapple with compulsive overspending during the holiday season.

The research, commissioned by Beyond Finance and conducted by Talker Research among 2,000 people who celebrate a winter holiday, paints a stark picture of financial vulnerability. An overwhelming 56% of respondents feel pressured to spend money during the holidays, with family emerging as the primary source of financial strain (71%).

However, the challenges run far deeper than simple spending pressures. More than three-quarters of respondents (76%) experience what researchers call “money wounds” — emotional difficulties stemming from financial challenges that cut to the core of personal well-being.

“In my weekly therapy sessions with clients burdened by credit card debt, I regularly hear about the same challenges and mental health struggles highlighted in these survey findings, especially as they intensify during the holiday season,” says Dr. Erika Rasure, chief financial wellness advisor at Beyond Finance, in a statement. “It’s crucial to remember you’re not alone. Acknowledging these struggles and seeking support are key steps toward managing financial stress and finding peace.”

The study reveals a complex landscape of financial trauma. Low self-esteem (26%), compulsive overspending (21%), shame from past financial mistakes (21%), and a scarcity mindset (20%) emerge as the most common “money wounds.” During the holiday season specifically, compulsive overspending becomes the most prominent financial issue, affecting 25% of respondents.

The financial stress takes a significant emotional toll. Sixty-eight percent of those experiencing money wounds report that these challenges hold them back from feeling fulfilled and successful. This year, more than six in 10 respondents (61%) say they are approaching their finances with anxiety, and with valid reason.

Shoppers’ coping mechanisms are equally telling. Fifty-four percent of those with money wounds admit to avoiding their financial troubles during the holidays. This avoidance manifests in various ways: 37% refrain from buying gifts, 33% decline party invitations, and 29% avoid checking their bank account balances.

Perhaps most heartbreaking is the social isolation that follows. Forty-two percent of respondents say they’ll become distant from others to avoid feeling “less than” or experiencing spending pressure. This distancing comes at an emotional cost, with participants reporting feelings of shame (38%), guilt (39%), and loneliness (40%).

There is a glimmer of hope. Sixty-one percent of respondents are actively trying to embrace the philosophy that “money and spending don’t equal happiness.” Some are taking concrete steps toward healing, with 27% discussing their financial stress with a therapist or mental health expert, and 26% working with financial professionals to improve their habits.

However, the road to recovery is long. On average, respondents believe it takes six years for a money wound to heal. More sobering still, 37% don’t believe financial trauma ever completely resolves.

As the holiday season approaches, the study serves as a powerful reminder of the emotional complexity behind financial stress, urging compassion, understanding, and support for those struggling with money-related challenges.

Source: https://studyfinds.org/holidays-financially-terrifying/

Why do we exist? Invisible particles passing through our bodies could solve greatest mystery

(Photo by Labutin Art on Shutterstock)

Every second, trillions of invisible particles are passing through your body at nearly the speed of light. These ghostly travelers, called neutrinos, might hold the key to some of science’s biggest questions – including why we exist at all. Now, a global team of scientists has mapped out an ambitious decade-long plan to unlock their secrets.

“It might not make a difference in your daily life, but we’re trying to understand why we’re here,” explains Alexandre Sousa, a physics professor at the University of Cincinnati and one of the white paper’s editors, in a statement. “Neutrinos seem to hold the key to answering these very deep questions.”

These mysterious particles are born in various cosmic cookpots: the nuclear fusion powering our sun, radioactive decay in Earth’s crust and nuclear reactors, and specialized particle accelerator laboratories. As they zoom through space, neutrinos can shape-shift between three different “flavors” – electron, muon, and tau neutrinos.
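
That flavor change is described by a compact textbook formula. As a rough illustration, here is a sketch of the simplified two-flavor case in Python; the mixing angle, mass splitting, and baseline below are illustrative assumptions, not figures from the researchers’ road map:

```python
import math

# Toy two-flavor neutrino oscillation probability (textbook approximation).
# Real neutrinos mix three flavors, so treat this as a sketch only.
def oscillation_probability(theta_rad, delta_m2_ev2, baseline_km, energy_gev):
    """P(flavor a -> flavor b) = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E)."""
    phase = 1.27 * delta_m2_ev2 * baseline_km / energy_gev
    return math.sin(2 * theta_rad) ** 2 * math.sin(phase) ** 2

# Example: a 1 GeV muon neutrino traveling 1,300 km, roughly a
# long-baseline accelerator setup (parameter values are illustrative).
p = oscillation_probability(theta_rad=0.74, delta_m2_ev2=2.5e-3,
                            baseline_km=1300, energy_gev=1.0)
print(f"Probability of arriving as a different flavor: {p:.2f}")  # about 0.69
```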

For over two decades, however, something strange has been happening in neutrino experiments, leaving physicists scratching their heads. Several major studies have observed patterns that don’t match our current understanding of how these particles should behave.

The most famous puzzle emerged from the Liquid Scintillator Neutrino Detector (LSND) experiment at Los Alamos National Laboratory, which detected more electron antineutrinos than their theories predicted. This unexpected excess was later supported by similar findings at Fermilab’s MiniBooNE experiment. Meanwhile, measurements of neutrinos from nuclear reactors and radioactive sources have consistently shown fewer electron antineutrinos than expected.

These anomalies have led scientists to propose an intriguing possibility: there might be a fourth type of neutrino, dubbed “sterile” because it appears immune to three of the four fundamental forces of nature.

“Theoretically, it interacts with gravity, but it has no interaction with the others, weak nuclear force, strong nuclear force or electromagnetic force,” Sousa explains.

However, fitting all the experimental data together into a coherent picture has proven challenging. Some results seem to conflict with others, and observations of the early universe place strict limits on additional neutrino types. This has pushed theorists to consider more exotic explanations, from unknown forces to particle decay to quantum effects we don’t yet understand.

To crack these mysteries, physicists are deploying an arsenal of sophisticated new experiments. One of the most ambitious is DUNE (Deep Underground Neutrino Experiment) at Fermilab. Teams have excavated caverns in a former gold mine 5,000 feet underground – so deep it takes 10 minutes just to reach by elevator – to house massive neutrino detectors shielded from cosmic rays and background radiation.

“With these two detector modules and the most powerful neutrino beam ever we can do a lot of science,” says Sousa. “DUNE coming online will be extremely exciting. It will be the best neutrino experiment ever.”

Another major project called Hyper-Kamiokande is under construction in Japan.

“That should hold very interesting results, especially when you put them together with DUNE,” Sousa notes. “The two experiments combined will advance our knowledge immensely.”

According to the research published in the Journal of Physics G Nuclear and Particle Physics, the stakes couldn’t be higher. Beyond potentially discovering new fundamental particles or forces, neutrino research might help explain one of the universe’s greatest mysteries: why there is more matter than antimatter when the Big Bang should have created equal amounts of both. This asymmetry is the reason galaxies, planets, and we ourselves exist.

The new roadmap for neutrino research represents an extraordinary collaborative effort, bringing together more than 170 scientists from 118 institutions worldwide. Their vision will help guide funding decisions for these ambitious projects through the U.S. government’s Particle Physics Project Prioritization Panel.

As researchers venture deeper into the coming decade of discovery, these ethereal particles continue to surprise and perplex us – much as they did when Wolfgang Pauli first proposed their existence in 1930. Perhaps soon, through the combined power of modern technology and global scientific cooperation, neutrinos will finally reveal their full nature and help us understand not just the smallest scales of physics but the greatest mysteries of our cosmic existence.

Source : https://studyfinds.org/invisible-particles-could-solve-mystery/

Arctic could be ‘ice-free’ by 2027 — Scientists warn we’re closer to disaster than we thought

Melting icebergs by the coast of Greenland. (Photo by muratart on Shutterstock)

The Arctic Ocean’s pristine white ice cap, a defining feature of our planet visible even from space, could undergo a historic transformation in the next few years. A new study reveals that while most projections show the first ice-free day occurring within nine to 20 years after 2023, there’s an unlikely but significant possibility this milestone could arrive as soon as 2026-2027.

While scientists have long studied when the Arctic might become ice-free during September (typically when sea ice reaches its annual minimum), this is the first research to examine when we might see the very first day without significant ice cover. The distinction is crucial – like the difference between a lake being ice-free for an entire month versus experiencing its first ice-free day during an unusually warm spell.

The study, led by researchers Céline Heuzé from the University of Gothenburg and Alexandra Jahn from the University of Colorado Boulder, defines “ice-free” as less than one million square kilometers of sea ice remaining. For perspective, that’s about four times the size of the United Kingdom – a small fraction of the Arctic Ocean’s typical ice coverage. The threshold mainly accounts for ice that tends to persist along northern coastlines even during extensive melting.

“The first ice-free day in the Arctic won’t change things dramatically,” says Jahn, an associate professor in the Department of Atmospheric and Oceanic Sciences and a fellow at CU Boulder’s Institute of Arctic and Alpine Research, in a statement. “But it will show that we’ve fundamentally altered one of the defining characteristics of the natural environment in the Arctic Ocean, which is that it is covered by sea ice and snow year-round, through greenhouse gas emissions.”

This transformation is already well underway. The National Snow and Ice Data Center reported that September 2023’s sea ice minimum – 4.28 million square kilometers – was one of the lowest measurements since satellite monitoring began in 1978. While this figure exceeded the record low set in September 2012, it represents a dramatic decline from the 1979-1992 average of 6.85 million square kilometers. Scientists have observed Arctic ice disappearing at an unprecedented rate of more than 12% each decade.
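
Those figures hang together, as a quick back-of-envelope check shows (the numbers are the article’s; the compounding calculation is purely illustrative):

```python
# Sanity-checking the sea ice figures quoted above, in millions of
# square kilometers of September sea ice extent.
baseline_1979_1992 = 6.85
september_2023 = 4.28

decline = (baseline_1979_1992 - september_2023) / baseline_1979_1992 * 100
print(f"Decline from the 1979-1992 average: about {decline:.0f}%")  # ~38%

# A loss of more than 12% per decade compounds quickly: four decades
# at that rate leaves roughly 60% of the original ice.
extent = baseline_1979_1992
for _ in range(4):
    extent *= 1 - 0.12
print(f"Four decades at -12% per decade: {extent:.2f} million sq km")  # ~4.11
```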

“Because the first ice-free day is likely to happen earlier than the first ice-free month, we want to be prepared,” says Heuzé. “It’s also important to know what events could lead to the melting of all sea ice in the Arctic Ocean.”

To understand when this threshold might be crossed, the researchers analyzed 366 simulations from 11 carefully selected climate models. These models were chosen based on their accuracy in reproducing historical Arctic conditions and seasonal patterns. The simulations explored various future scenarios, from optimistic cases with reduced emissions (SSP1-1.9) to pessimistic ones with continued high emissions (SSP5-8.5). Nine of these simulations suggested the possibility of an ice-free day occurring within just three to six years – an extreme but plausible scenario.

Recent events demonstrate how quickly Arctic conditions can change. In March 2022, parts of the Arctic experienced temperatures 50°F above average, with areas around the North Pole approaching melting point – an unprecedented warm spell that hints at the kind of extreme events that could accelerate ice loss. The researchers found that such warming events, particularly when they occur in sequence, could trigger rapid ice decline.

These rapid transitions typically follow a pattern: an unusually warm fall weakens the ice, followed by a warm winter and spring that prevent normal ice formation. When these conditions persist for three or more years, they create the perfect environment for an ice-free day to occur in late summer. As climate change progresses, these warm spells are expected to become more frequent and intense.

The loss of Arctic sea ice creates a troubling feedback loop. Ice and snow reflect most incoming sunlight back to space, while dark ocean water absorbs it. As more ice melts, more solar energy is absorbed, further warming the region and potentially accelerating ice loss. This process could have far-reaching effects on global weather patterns and ecosystems.

Source: https://studyfinds.org/arctic-ice-free-by-2027/

Your heart has a hidden brain, game-changing study discovers

(© natali_mis – stock.adobe.com)

Inside your chest lies not just a muscular pump, but a sophisticated neural network that scientists are just beginning to fully understand. A new study reveals that the heart’s internal nervous system – known as the intracardiac nervous system (IcNS) – is far more complex than previously thought, challenging long-held views about how our hearts maintain their life-sustaining rhythm.

For decades, scientists believed this system was simply a relay station, passing along signals from the brain to the heart muscle. This new research, led by scientists from Karolinska Institutet in Sweden and Columbia University in New York, demonstrates that the IcNS is more like a local control center, capable of processing information and even generating rhythmic patterns independently.

A Microscopic Marvel
The researchers made these discoveries using zebrafish, which might seem like an unusual choice for studying human heart function. However, zebrafish hearts share remarkable similarities with human hearts in terms of rate and electrical patterns, making them invaluable models for cardiovascular research. And although a zebrafish heart has just two chambers to our four, it requires the same precise coordination between various types of cells to maintain proper function.

Working with young adult zebrafish (8-12 weeks old), the research team focused on a crucial region called the sinoatrial plexus (SAP) – essentially the heart’s pacemaker region. Using a combination of cutting-edge techniques, including single-cell RNA sequencing, detailed imaging, and electrical recordings, they uncovered an unexpectedly diverse population of neurons.

The team found that these neurons use different chemical messengers, or neurotransmitters, to communicate. The majority – about 81% – are cholinergic neurons, using acetylcholine as their primary messenger. The remaining neurons use a variety of other neurotransmitters: 8% are glutamatergic, 6% use GABA, 5% are serotonergic, and 4.6% are catecholaminergic. This diversity suggests a level of local control and fine-tuning previously unrecognized in cardiac function.

‘Complex Nervous System Within The Heart’
Perhaps the most intriguing discovery was a subset of neurons that display “pacemaker-like” or “rhythmogenic” properties. These neurons can generate rhythmic patterns of activity similar to those found in central pattern generator networks – neural circuits in the brain and spinal cord that produce rhythmic behaviors like breathing or walking. This finding suggests that the heart’s nervous system might play a more active role in maintaining cardiac rhythm than previously thought.

“We were surprised to see how complex the nervous system within the heart is,” says Dr. Konstantinos Ampatzis, the study’s lead researcher, in a university release.

To understand how these neurons affect heart function, the researchers developed an innovative experimental approach. They created a preparation that allowed them to study the intact heart while recording from individual neurons, using a compound called blebbistatin to temporarily stop the heart’s muscular contractions while keeping the neurons active. This technique revealed four distinct types of neurons with different firing patterns, from single spikes to rhythmic bursts of activity.

When the researchers manipulated these neurons, they could directly influence the heart’s beating pattern. For example, when they triggered the release of neurotransmitters from these cells using a specific chemical solution, they observed changes in heart rate and rhythm. This demonstrated that the IcNS actively participates in controlling cardiac function, rather than simply relaying signals from the brain.

“This ‘little brain’ has a key role in maintaining and controlling the heartbeat, similar to how the brain regulates rhythmic functions such as locomotion and breathing,” Dr. Ampatzis explains.

In other words, the heart isn’t just a passive recipient of brain commands – it’s an active participant in its own functioning.

More Research Ahead
These findings open up intriguing possibilities for treating heart conditions. Many cardiac problems involve disruptions to the heart’s rhythm, and understanding this local neural network could potentially lead to new therapeutic approaches.

“We aim to identify new therapeutic targets by examining how disruptions in the heart’s neuronal network contribute to different heart disorders,” Dr. Ampatzis notes.

The study, published in Nature Communications, also raises fascinating questions about how organs regulate themselves. The presence of such a sophisticated neural network in the heart suggests that individual organs might have more autonomous control over their function than previously recognized, with the brain providing overall coordination rather than micromanaging every aspect of organ function. This could represent an efficient biological design, allowing for rapid local responses while maintaining central oversight.

Source: https://studyfinds.org/your-heart-has-a-hidden-brain/

What Is Body Roundness Index? Everything You Need To Know

Whether you’re finally getting to your annual physical with your primary care physician or seeing a specialist, you’ve likely learned where you fall on the body mass index (BMI), a calculated measurement of weight relative to height. But some researchers are paving the way for another measurement to join the conversation in assessing a person’s health risks.

What Is Body Roundness Index?

A recent study published in JAMA Network Open makes the argument that a newer tool – body roundness index (BRI) – may be more effective at measuring a person’s health. Whereas BMI is determined using one’s height and weight only, BRI also includes an individual’s waist circumference (or roundness) in the calculation. The result is a clearer picture of how body fat is distributed: the more fat in the middle of your body, the more at risk you are for some health conditions.

“Visceral fat protects organs, but too much can lead to health problems. Excess fat can also lead to chronic inflammation which is also linked to many diseases including cardiovascular disease, diabetes, arthritis, and cancer,” said Geralyn Plomitallo, registered dietitian and clinical nutrition manager at Stamford Health.

How Can I Calculate My BRI?

Introduced in 2013, the equation for BRI is complicated. It considers your height and weight, similar to BMI, but also your waist and hip measurements.

A healthy BRI generally falls below 10 on the index’s 1–20 scale. The higher the score, the rounder the body and the more at risk the person is for diseases and obesity.

If you have a measuring tape handy, you can get an idea of where you fall on the BRI scale. “Your waist measurement should be no more than 35 inches for women and no more than 40 inches for men,” said Plomitallo.
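
For the curious, the most widely used form of the BRI equation needs only height and waist circumference. Here is a minimal sketch of that published formula (from the 2013 paper that introduced the index), offered for illustration rather than as a clinical tool:

```python
import math

# Body Roundness Index, per the 2013 formula by Thomas et al. The body is
# modeled as an ellipse: height sets the vertical axis, and the waist
# circumference sets the horizontal radius. Rounder bodies score higher.
def body_roundness_index(height_m: float, waist_m: float) -> float:
    waist_radius = waist_m / (2 * math.pi)   # radius implied by the waist measurement
    eccentricity = math.sqrt(1 - waist_radius ** 2 / (0.5 * height_m) ** 2)
    return 364.2 - 365.5 * eccentricity

# Example: 1.75 m (5'9") tall with a 0.85 m (33.5-inch) waist scores about 3.1,
# comfortably below the healthy cutoff of 10 described above.
print(round(body_roundness_index(height_m=1.75, waist_m=0.85), 1))
```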

Or, in simpler terms, you can guesstimate by looking at your body shape. “Apple shape typically means you have more fat around the middle,” said Plomitallo. “With a pear shape, the fat is higher around the hips and less in the middle.”

Is BRI Better Than BMI?

BMI, while a helpful gauge of disease risk, has long been criticized as incomplete or even misleading. “It just uses your height and weight and could overestimate body fat in athletes,” explained Plomitallo. “If using the BMI alone, Arnold Schwarzenegger would be considered obese.” Conversely, a person with a low body weight but little muscle mass would fall into the healthy BMI zone yet could still be at high risk for disease.

Source : https://www.stamfordhealth.org/healthflash-blog/primary-care/bmi-versus-bri/

One simple meal swap may significantly boost your heart health

(Credit: Panji Dwi Risantoro/Shutterstock)

What if the key to protecting your heart was as simple as rethinking what’s on your dinner plate? A 30-year study by Harvard researchers suggests just that — finding that the secret to preventing cardiovascular disease may be as simple as swapping your sources of protein.

Specifically, scientists revealed how the balance between plant and animal proteins could significantly reduce your risk of heart disease. The study, tracking nearly 203,000 health professionals, uncovered a compelling nutritional strategy: the more you move toward plant-based sources of protein, the better your heart may fare.

Results published in The American Journal of Clinical Nutrition show participants who consumed a diet with the highest ratio of plant to animal protein saw a remarkable 19% lower risk of cardiovascular disease (CVD) and a 27% lower risk of coronary heart disease.

Currently, the average American diet features a 1:3 ratio of plant to animal protein. The new research recommends a dramatic shift.

“The average American eats a 1:3 plant to animal protein ratio. Our findings suggest a ratio of at least 1:2 is much more effective in preventing CVD,” says lead author Andrea Glenn in a media release.
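
To see what that shift means in practice, here is some purely illustrative arithmetic; the 90-gram daily total is a hypothetical round number, not a figure from the study:

```python
# What the plant:animal protein ratios above mean for someone eating a
# hypothetical 90 g of total protein per day.
total_g = 90

for label, plant, animal in [("1:3 (average American diet)", 1, 3),
                             ("1:2 (suggested minimum)", 1, 2)]:
    plant_g = total_g * plant / (plant + animal)
    print(f"{label}: {plant_g:.1f} g plant, {total_g - plant_g:.1f} g animal")
```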

The research isn’t just about cutting meat — it’s about strategic replacement. Swapping red and processed meats for protein-rich plant alternatives like nuts and legumes appears to be the sweet spot. These plant proteins come packed with additional health bonuses: fiber, antioxidant vitamins, minerals, and healthy fats that contribute to improved blood pressure and reduced inflammation.

The study’s most intriguing finding is that more plant protein continues to provide benefits, particularly for coronary heart disease prevention. While cardiovascular disease risk levels off around a 1:2 plant-to-animal protein ratio, heart disease risk keeps declining with even higher plant protein intake.

“Most of us need to begin shifting our diets toward plant-based proteins. We can do so by cutting down on meat, especially red and processed meats, and eating more legumes and nuts,” explains senior author Frank Hu.

Source : https://studyfinds.org/simple-meal-swap-heart-health/

Oceans may never have existed on Venus, says new research

Instead of condensing on the planet’s surface, any water in Venus’ atmosphere likely remained as steam, suggests the research from the University of Cambridge.

Venus’ northern hemisphere as seen by NASA’s Magellan spacecraft. Pic: NASA

Venus may never have hosted oceans on its surface, according to new research.

Despite a scientific debate raging for years over the history of Venus and whether it ever held liquid oceans, new research by astrochemists from the University of Cambridge suggests it has always been dry.

“Two very different histories of water on Venus have been proposed: one where Venus had a temperate climate for billions of years with surface liquid water and the other where a hot early Venus was never able to condense surface liquid water,” said the report’s authors Tereza Constantinou, Oliver Shorttle and Paul B. Rimmer.

Ms Constantinou and her colleagues modelled the current chemical makeup of Venus’ atmosphere and discovered “the planet has never been liquid-water habitable”.

“Venus today is a hellish world,” suggests NASA. It has an average surface temperature of around 465C (869F) and a pressure 90 times greater than Earth’s at sea level, as well as being permanently shrouded in thick, toxic clouds of sulfuric acid.

In their study, the scientists found the planet’s interior lacks hydrogen, which suggests it is much drier than Earth’s interior.

Instead of condensing on the planet’s surface, any water in Venus’ atmosphere likely remained as steam, suggests the research.

Back in 2016, a team of scientists working for NASA’s Goddard Institute for Space Studies (GISS) in New York suggested the planet may once have been habitable.

The team used a computer model similar to the type used to predict climate change on Earth.

“Many of the same tools we use to model climate change on Earth can be adapted to study climates on other planets, both past and present,” said Michael Way at the time, a researcher at GISS and the paper’s lead author.

Source : https://news.sky.com/story/oceans-may-never-have-existed-on-venus-says-new-research-13265362

High-dose vitamin C: Promising treatment may extend survival of pancreatic cancer patients

(Credit: Numstocker/Shutterstock)

A study published in the November issue of Redox Biology has found that adding intravenous, high-dose vitamin C to a chemotherapy regimen doubled the survival of patients with late-stage, metastatic pancreatic cancer from eight months to 16 months.

“This is a deadly disease with very poor outcomes for patients. The median survival is eight months with treatment, probably less without treatment, and the five-year survival is tiny. When we started the trial, we thought it would be a success if we got to 12 months survival, but we doubled overall survival to 16 months. The results were so strong in showing the benefit of this therapy for patient survival that we were able to stop the trial early,” explains Joseph J. Cullen, MD, FACS, a professor of Surgery and Radiation Oncology at the University of Iowa, in a statement to StudyFinds.

The study consisted of 34 patients with stage 4 pancreatic cancer who were randomized to two groups. One group received standard chemotherapy (gemcitabine and nab-paclitaxel). The other group received the same chemotherapy plus intravenous infusions of 75 grams of vitamin C three times a week.

The average survival for patients who received chemotherapy and vitamin C was 16 months. Patients who received only chemotherapy survived an average of just eight months.

“Not only does it increase overall survival, but the patients seem to feel better with the treatment. They have fewer side effects, and appear to be able to tolerate more treatment, and we’ve seen that in other trials, too,” Cullen says.

There is additional evidence of the benefit of intravenous high-dose vitamin C in cancer treatment. Bryan Allen, MD, PhD, a professor and chief of Radiation Oncology at the University of Iowa, and Cullen collaborated on a trial of high dose vitamin C with chemotherapy and radiation for glioblastoma, a deadly brain cancer. These patients also showed a significant increase in survival.

Cullen, Allen, and their colleagues have been conducting research on the anti-cancer effect of high-dose, IV vitamin C for two decades. They demonstrated that IV vitamin C produces high levels in the blood that cannot be achieved by taking vitamin C orally. The high concentration results in changes in cancer cells that make them more vulnerable to chemotherapy and radiation. Cullen describes the results of their innovation and perseverance as highly encouraging.

Source : https://studyfinds.org/vitamin-c-cancer-survival/

3 reasons why kids stick toys up their nose

(Credit: zeljkodan/Shutterstock)

Children, especially toddlers and preschoolers, have an uncanny ability to surprise adults. And one of the more alarming discoveries parents can make is that their child has stuck a small object, such as a Lego piece, up their nose.

Queensland Children’s Hospital recently reported more than 1,650 children with foreign objects up their nose had presented to its emergency department over the past decade. Lego, beads, balls, batteries, buttons, and crayons were among the most common objects.

With the Christmas season approaching, it’s likely more of these small objects will be brought into our homes as toys, gifts or novelty items.

But why do children stick things like these up their nose? Here’s how natural curiosity, developing motor skills, and a limited understanding of risk can be a dangerous combination.

1. Kids are curious creatures

Toddlers are naturally curious creatures. During the toddler and preschool years, children explore their environment by using their senses. They touch, taste, smell, listen to and look at everything around them. It’s a natural part of their development and a big part of how they learn about the world.

Researchers call this “curiosity-based learning”. They say children are more likely to explore objects that are unfamiliar or that they don’t completely understand. This may explain why toddlers tend to gravitate towards new or unfamiliar objects at home.

Unfortunately, this healthy developmental curiosity sometimes leads to them putting things in places they shouldn’t, such as their nose.

2. Kids are great mimics

Young children often mimic what they see. Studies that tracked the same group of children over time confirm imitation plays a vital role in a child’s development. This activates certain critical neural pathways in the brain. Imitation is particularly important when learning to use and understand language and when learning motor skills such as walking, clapping, catching a ball, waving, and writing.

Put simply, when a child imitates, it strengthens brain connections and helps them learn new skills faster. Anecdotally, parents of toddlers will relate to seeing their younger children copying older siblings’ phrases or gestures.

Inserting items into their nose is no different. Toddlers see older children and adults placing items near their face – when they blow their nose, put on makeup or eat – and decide to try it themselves.

3. Kids don’t yet understand risk

Toddlers might be curious. But they don’t have the cognitive capacity or reasoning ability to comprehend the consequences of placing items in their nose or mouth. This can be a dangerous combination. So, supervising your toddler is essential.

Small, bright-colored objects, items with interesting textures, or items that resemble food are especially tempting for little ones.

What can I do?

Sometimes, it’s obvious when a child has put something up their nose, but not always. Your child might have pain or itchiness around the nose, discharge or bleeding from the nose, and be upset or uncomfortable.

If your child has difficulty breathing, or you suspect your child has inserted a sharp object or button battery, seek immediate medical care. Button batteries can burn and damage tissues in as little as 15 minutes, which can lead to infection and injury.

If your child inserts an object where they shouldn’t:

  • Stay calm: your child will react to your emotions, so try to remain calm and reassuring.
  • Assess the situation: can you see the object? Is your child in distress?
  • Encourage your child to blow their nose gently. This may help dislodge the object.
  • Take your child outside into the sun: brief exposure for a minute or two might prompt a “sun sneeze”, which may dislodge the object. But avoid sniffing, which may cause the object to travel further into the airways and into the lungs.
  • Never try to remove the object yourself using tweezers, cotton swabs or other tools. This can push the object further into the nose, causing more damage.

If these methods don’t dislodge the item, your child is not distressed, and you don’t suspect a sharp object or button battery, go to your GP. They may be able to see and remove the item.

Source : https://studyfinds.org/why-kids-stick-toys-up-nose/

American Nightmare: Only 31% think they’ve financially ‘made it’

(© alfa27 – stock.adobe.com)

In the supposed land of opportunity, the American Dream remains frustratingly out of reach for most Americans, with a mere 31% believing they’ve financially “made it” in life. The surprising twist? Millennials are leading the pack in financial confidence, with 34% claiming they’ve achieved financial success – the highest percentage among all generations.

The comprehensive survey of 2,000 employed Americans, conducted by Talker Research for BOK Financial, reveals a complex landscape where traditional markers of success are evolving, and external factors weigh heavily on financial aspirations. For those still climbing the corporate ladder, there’s hope: 54% believe they’re well on their way to financial success in their lifetime.

However, the picture becomes less optimistic with age. Only 27% of baby boomers feel they’ve reached financial success, and among those who haven’t, just one-third believe they ever will. The survey found that Americans consider their path to financial success threatened by various external factors, including presidential elections (46%), interest rate changes (45%), and the job market (42%).

What exactly does it mean to ‘make it’ financially in today’s America?

The goalposts have shifted significantly, with 79% of respondents saying their definition has evolved over time. The magic number appears to be around $234,000 in net worth – though reaching that milestone faces modern obstacles like the high cost of living (42%) and inflation (26%), with some citing their own spending habits (7%) as a barrier.

“The uncertainty around the economy, politics and other external factors can weigh heavily on people — and are right now,” says Jessica Jones with BOK Financial Advisors, an affiliate of BOK Financial, in a statement. “And financial headwinds like high inflation and interest rates can make it feel like it’s harder to get ahead, but baby steps are key. If someone is struggling to see success in their financial future, it’s important to just get started, even with a small savings account.”

Nearly half of baby boomers (48%) and Gen X respondents (47%) point to higher cost of living as a major obstacle, compared to just 34% of Gen Z. Meanwhile, younger generations – Gen Z (28%) and millennials (30%) – are more likely to cite inflation as their primary concern.

The markers of financial success have also undergone a dramatic shift. Today’s Americans consider owning a home (78%) and a vehicle (64%) as necessary indicators of financial success, while traditional milestones like having children (40%) or getting married (34%) – which were crucial for their parents’ generation – have become less significant. Modern indicators now include having an established long-standing career (48%) and earning a college degree (30%).

When it comes to spending, Gen Z (27%) and millennials (31%) direct the largest portion of their money toward family expenses, while Gen X (43%) and baby boomers (50%) prioritize retirement savings. Younger generations are planning ahead too, with Gen Z expecting to start retirement planning at around age 41, and millennials at age 46.

Interestingly, Gen Z shows both practical and personal financial priorities. While they’re the most confident about planning their financial future without professional help (70%), they also lead in prioritizing purchases that make them happy (20%). In contrast, baby boomers express the least confidence in their financial future during retirement (33%) and their ability to plan without professional assistance (49%).

Source : https://studyfinds.org/financially-made-it/

1.5 million years ago, two human species shared the same morning commute

A footprint hypothesized to have been created by a Paranthropus boisei individual. (Photo credit: Kevin Hatala/Chatham University)

Study of ancient footprints first ever to show that early ancestors coexisted in shared space

In the arid landscapes of northern Kenya, a remarkable discovery is reshaping our understanding of human evolution. Scientists have uncovered 1.5-million-year-old footprints that provide the first direct evidence that two different early human species likely encountered one another, potentially sharing the same territories and resources.

The exciting research, published in Science, centers on a series of fossilized footprints found at a site called ET-2022-103-FE22 (abbreviated as FE22) near Lake Turkana. What makes these tracks extraordinary isn’t just their age, but what they reveal about our ancient relatives’ coexistence and movement patterns.

The site preserves a continuous trackway made by one individual and three isolated footprints from different individuals, all pressed into what was once wet, muddy ground near an ancient lakeshore. Alongside the human footprints are tracks from various animals, including massive bird prints likely left by ancient marabou storks, as well as tracks from bovids (antelope-like animals) and equids (horse family members).

But the real story lies in the distinct differences between the footprints. The research team, led by Kevin Hatala from Chatham University, found two clearly different walking patterns preserved in these ancient tracks. One set of prints shows characteristics very similar to modern human footprints, while the other set reveals a notably different way of walking.

“Fossil footprints are exciting because they provide vivid snapshots that bring our fossil relatives to life,” says Kevin Hatala, the study’s first author and an associate professor of biology at Chatham University, in a statement. “With these kinds of data, we can see how living individuals, millions of years ago, were moving around their environments and potentially interacting with each other, or even with other animals. That’s something that we can’t really get from bones or stone tools.”

Through careful analysis using advanced 3D imaging technologies, the research team identified two distinct patterns of movement in the human footprints. The continuous trackway shows evidence of someone walking at a brisk pace of 1.81 meters per second, but with notably different foot mechanics than modern humans. These tracks are flatter and show signs of a more mobile big toe. In contrast, the isolated footprints more closely match the arch patterns and toe alignment seen in modern human feet.
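
The paper’s exact biomechanical workflow isn’t described here, but as a flavor of how trackway speeds are typically estimated, a classic rule of thumb in ichnology is Alexander’s (1976) formula, which converts stride length and hip height into a speed estimate. The sketch below uses hypothetical measurements (not figures from the study) to show how a pace of about 1.8 meters per second can be recovered from footprints alone.

```python
# Rough sketch: estimating walking speed from a fossil trackway using
# Alexander's (1976) stride-length formula, a standard ichnology tool.
# NOTE: this is not necessarily the method the Science paper used; the
# stride length and hip height below are hypothetical, chosen only to
# show how a ~1.8 m/s estimate can fall out of trackway measurements.
G = 9.81  # gravitational acceleration, m/s^2

def alexander_speed(stride_m: float, hip_height_m: float) -> float:
    """Speed estimate v = 0.25 * g^0.5 * stride^1.67 * hip_height^-1.17."""
    return 0.25 * G**0.5 * stride_m**1.67 * hip_height_m**-1.17

# Hypothetical trackway measurements for a Homo erectus-sized walker.
print(f"{alexander_speed(stride_m=1.53, hip_height_m=0.90):.2f} m/s")
# -> roughly 1.8 m/s, a brisk walk, matching the pace reported above
```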

“In biological anthropology, we’re always interested in finding new ways to extract behavior from the fossil record, and this is a great example,” says Rebecca Ferrell, a program director at the National Science Foundation. “The team used cutting-edge 3D imaging technologies to create an entirely new way to look at footprints, which helps us understand human evolution and the roles of cooperation and competition in shaping our evolutionary journey.”

This distinction is crucial because it suggests these tracks were made by two different early human species: Homo erectus and Paranthropus boisei. H. erectus is often considered our direct ancestor and is thought to have walked very similarly to modern humans. P. boisei, meanwhile, was a more robust species with a distinctly different body build and, as these footprints suggest, a different way of walking.

The lake margin environment where these tracks were preserved offers a rare snapshot of ancient life, frozen in time. The footprints were made within hours or days of each other, suggesting these two species weren’t just living in the same general region, but were actively using the same spaces at nearly the same time.

What’s particularly intriguing is that this pattern of coexistence shows up repeatedly in the fossil record of this region between 1.4 and 1.6 million years ago. Multiple sites preserve evidence of these two distinct walking styles, indicating this wasn’t a one-time encounter but rather a sustained pattern of shared habitat use.

“This proves beyond any question that not only one, but two different hominins were walking on the same surface, literally within hours of each other,” says co-author Craig Feibel, a professor in the Department of Earth and Planetary Sciences and Department of Anthropology in the Rutgers School of Arts and Sciences. “The idea that they lived contemporaneously may not be a surprise. But this is the first time demonstrating it. I think that’s really huge.”

Interestingly, Feibel notes that Homo erectus survived for about 1 million years longer than Paranthropus boisei. Why the latter went extinct so much sooner remains a mystery.

This discovery challenges previous limitations in studying ancient human coexistence. While fossilized bones can tell us different species lived in the same general area over thousands of years, footprints provide a much more intimate window into their daily lives and interactions. These tracks show that different human species weren’t just inhabiting the same general region – they were walking the same paths, perhaps even encountering one another face to face.

The findings suggest that despite their differences, these two species found ways to share resources without excessive competition. The lake margin environments where they left their tracks would have provided various food sources and other resources that could have supported both species’ needs. This peaceful coexistence might help explain how multiple human species managed to survive alongside each other for hundreds of thousands of years.

The implications of this research extend beyond just understanding ancient human behavior. It provides insights into how species adapt to share environments and resources, a topic that remains relevant today as we grapple with questions of human impact on other species and their habitats.

While we may never know if these different species exchanged greetings or avoided each other’s gaze, their footprints tell us something profound: Long before we built cities or drew maps, different kinds of humans were already figuring out how to share their world. Perhaps that’s the most human trait of all.

Source : https://studyfinds.org/ancient-footprints-two-human-species/

Do crabs feel pain? Study shells out answer to burning question

(Credit: TasfotoNL/Shutterstock)

For the first time, scientists have directly observed pain signals being transmitted to the brains of shore crabs, providing the strongest evidence yet that these creatures can sense and process pain. This discovery, made by researchers at the University of Gothenburg, could revolutionize how we treat crustaceans – from seafood restaurants to research laboratories.

The study, published in the journal Biology, represents the first time scientists have used EEG-style measurements to record pain responses directly from a crab’s brain.

Shore crabs, those small greenish-brown crustaceans you might spot scuttling along beaches, were the focus of the investigation. The study examined whether these creatures possess what scientists call “nociceptors” – specialized sensory neurons that detect potentially harmful stimuli and send warning signals to the brain. Think of nociceptors as your body’s built-in alarm system: when you touch something too hot or sharp, these neurons quickly fire off signals saying, “Danger! Pull away!”

“We could see that the crab has some kind of pain receptors in its soft tissues, because we recorded an increase in brain activity when we applied a potentially painful chemical, a form of vinegar, to the crab’s soft tissues. The same happened when we applied external pressure to several of the crab’s body parts,” explains lead author Eleftherios Kasiouras, a PhD student at the University of Gothenburg, in a statement.

Unlike previous research that only observed how crustaceans behave when exposed to harmful stimuli, this study directly measured their neural responses – similar to how doctors use an EEG to monitor human brain activity. The research team examined 20 shore crabs, focusing on how their nervous systems responded to both physical pressure and chemical irritants.

The research team used sophisticated equipment to record electrical activity in different parts of the crabs’ nervous systems. They tested various body parts, including the eyes, antennae, claws, and leg joints, applying either gentle pressure with fine instruments or small amounts of acetic acid (similar to vinegar).

The results revealed fascinating differences in how crabs respond to different types of potentially harmful stimuli. When touched with pressure-testing instruments, their nervous systems produced short, intense bursts of activity. However, when exposed to acetic acid, the response was more prolonged but less intense – suggesting crabs can distinguish between different types of threats.

Particularly striking was the discovery that different body parts showed varying levels of sensitivity. The eyes and soft tissues between leg joints were incredibly responsive to touch, detecting pressure as light as 0.008 grams – about 75 times more sensitive than human skin. Meanwhile, their antennae and antennules appeared specialized for detecting chemical threats rather than physical pressure.

The antennae and antennules (smaller antenna-like structures) showed a fascinating specialization: they responded strongly to chemical stimuli but showed no response to mechanical pressure. This suggests these appendages may be specifically tuned to detect harmful chemicals in their environment, similar to how our nose can alert us to dangerous fumes.

“It is a given that all animals need some kind of pain system to cope by avoiding danger. I don’t think we need to test all species of crustaceans, as they have a similar structure and therefore similar nervous systems. We can assume that shrimps, crayfish and lobsters can also send external signals about painful stimuli to their brain which will process this information,” says Kasiouras.

The findings have significant implications for animal welfare practices. Currently, crustaceans aren’t protected under European Union animal welfare legislation, meaning they can legally be cut up while still alive – a practice that would be unthinkable with mammals. As researcher Lynne Sneddon notes, “We need to find less painful ways to kill shellfish if we are to continue eating them. Because now we have scientific evidence that they both experience and react to pain.”

The study builds on previous research showing that crustaceans exhibit protective behaviors when injured, such as rubbing affected areas or avoiding situations that previously caused them harm. However, this is the first time scientists have directly observed the neural signals that drive these behaviors.

Previous studies relied mainly on observing how crustaceans reacted to various stimuli – including mechanical impacts, electric shocks, and acids applied to soft tissues like their antennae. While these crustaceans showed defensive behaviors like touching the affected areas or trying to avoid the threatening stimulus, scientists couldn’t definitively say these responses indicated pain sensation until now.

Whether this research will change how we treat crustaceans remains to be seen, but one thing’s clear: these sideways-walking creatures might deserve a second look – and perhaps a more humane perspective.

Source : https://studyfinds.org/do-crabs-feel-pain/

Why people routinely dismiss (and miss) life’s meaningful moments

(© deagreez – stock.adobe.com)

Imagine skipping Thanksgiving dinner with your family or passing up a chance to write a heartfelt thank-you note, believing these moments aren’t worth your time. Think again. A researcher from the University of Florida finds that people consistently underestimate the profound emotional impact of life’s seemingly mundane experiences.

Dr. Erin Westgate, an assistant professor of psychology leading the research, has uncovered a curious human tendency: we’re remarkably bad at predicting how meaningful our experiences will be.

“We don’t make sense of events until they actually happen,” Westgate explains in a university release. “We don’t process events until we need to, when they actually happen and not before.”

The research began with a simple yet provocative question during Westgate’s graduate school days: Do people accurately anticipate the emotional significance of future events? Her initial study with University of Virginia undergraduates provided a surprising answer. Students consistently misjudged how meaningful their Thanksgiving holiday would be, underestimating the emotional depth of the experience.

Intrigued by these initial findings, Westgate expanded her research during the pandemic, replicating the study with a larger group of University of Florida students. The results were consistent: people systematically fail to recognize the potential meaning in upcoming experiences.

This isn’t just about holiday gatherings. The three-year National Science Foundation-funded study will explore how this psychological blind spot affects major life decisions — from career choices and volunteer work to personal milestones like starting a family. Perhaps most intriguingly, the research will examine how people might avoid potentially transformative experiences that involve discomfort, missing out on opportunities for personal growth and resilience.

“We want to live meaningful lives, we want to do meaningful things,” Westgate notes. “If we are not realizing that an experience is going to be meaningful, we may be less likely to do it and miss out on these potential sources of meaning in our own lives.”

Source: https://studyfinds.org/shouldnt-dismiss-meaningful-moments/

Breaking the scale: Study finds 208 million Americans are now overweight or obese

Obesity problem in United States (© andriano_cz – stock.adobe.com)

Nearly half of adolescents and three-quarters of adults in the U.S. were classified as being clinically overweight or obese in 2021. The rates have more than doubled compared with 1990.

Without urgent intervention, our study forecasts that more than 80% of adults and close to 60% of adolescents will be classified as overweight or obese by 2050. These are the key findings of our recent study, published in the journal The Lancet.

Synthesizing body mass index data from 132 unique sources in the U.S., including national and state-representative surveys, we examined the historical trend of obesity and the condition of being overweight from 1990 to 2021 and forecast estimates through 2050.

For people 18 and older, the condition health researchers refer to as “overweight” was defined as having a body mass index, or BMI, of 25 kilograms per square meter (kg/m²) to less than 30 kg/m² and obesity as a BMI of 30 kg/m² or higher. For those younger than 18, we based definitions on the International Obesity Task Force criteria.
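
For readers who want those cutoffs in concrete terms, here is a minimal sketch of the adult classification just described; the example weight and height are arbitrary, and the age- and sex-specific under-18 IOTF criteria are not reproduced.

```python
# Minimal sketch of the adult BMI cutoffs described above
# (BMI = weight in kg / height in m, squared). Under-18 classification
# uses the age- and sex-specific IOTF criteria, not shown here.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m**2

def adult_category(bmi_value: float) -> str:
    if bmi_value >= 30:
        return "obese"        # BMI >= 30 kg/m^2
    if bmi_value >= 25:
        return "overweight"   # 25 <= BMI < 30 kg/m^2
    return "not overweight"

b = bmi(weight_kg=85, height_m=1.75)          # arbitrary example figures
print(f"BMI {b:.1f} -> {adult_category(b)}")  # BMI 27.8 -> overweight
```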

This study was conducted by the Global Burden of Disease Study 2021 U.S. Obesity Forecasting Collaborator Group, which comprises over 300 experts and researchers specializing in obesity.

Why it matters

The U.S. already has one of the highest rates of obesity and people who are overweight globally. Our study estimated that in 2021, a total of 208 million people in the U.S. were medically classified as overweight or obese.

Obesity has slowed health improvements and life expectancy in the U.S. compared with other high-income nations. Previous research showed that obesity accounted for 335,000 deaths in 2021 alone and is one of the most dominant and fastest-growing risk factors for poor health and early death. Obesity increases the risk of diabetes, heart attack, stroke, cancer and mental health disorders.

The economic implications of obesity are also profound. A report by Republican members of the Joint Economic Committee of the U.S. Congress, published in 2024, predicted that obesity-related health care costs will rise to US$9.1 trillion over the next decade.

The rise in childhood and adolescent obesity is particularly concerning, with the rate of obesity more than doubling among adolescents ages 15 to 24 since 1990. Data from the National Health and Nutrition Examination Survey revealed that nearly 20% of children and adolescents in the U.S. ages 2 to 19 live with obesity.

By 2050, our forecast results suggest that 1 in 5 children and 1 in 3 adolescents will experience obesity. The increase in obesity among children and adolescents not only triggers the early onset of chronic diseases but also negatively affects mental health, social interactions and physical functioning.

What other research is being done

Our research highlighted substantial geographical disparities in overweight and obesity prevalence across states, with southern U.S. states recording some of the highest rates.

Other studies on obesity in the United States have also underscored significant socioeconomic, racial and ethnic disparities. Previous studies suggest that Black and Hispanic populations exhibit higher obesity rates compared with their white counterparts. These disparities are further exacerbated by systemic barriers, including discrimination, unequal access to education, health care and economic inequities.

Another active area of research involves identifying effective obesity interventions, including a recent study in Seattle demonstrating that taxation on sweetened beverages reduced average body mass index among children. Various community-based studies also investigated initiatives aimed at increasing access to physical activity and healthy foods, particularly in underserved areas.

Clinical research has been actively exploring new anti-obesity medications and continuously monitoring the effectiveness and safety of current medications.

Furthermore, there is a growing body of research examining technology-driven behavioral interventions, such as mobile health apps, to support weight management. However, whether many of these programs are scalable and sustainable is not yet clear. This gap hinders the broader adoption and adaptation of effective interventions, limiting their potential impact at the population level.

Source : https://studyfinds.org/208-million-americans-obese/

Why do diets fail? Study discovers your fat cells remember being fat

(Photo by Towfiqu Barbhuiya on Unsplash)

If you’ve ever lost weight only to watch the pounds creep back on, you’re not alone. Now, scientists have uncovered a biological explanation for this frustrating phenomenon known as the “yo-yo effect” – and it turns out our fat cells have a surprisingly long memory.

Researchers at ETH Zurich have discovered that being overweight leaves a lasting imprint on our fat cells through a process called epigenetics – chemical markers that act like tiny switches controlling which genes are turned on or off in our cells. These markers can persist for years, making it easier for the body to regain weight even after successful dieting.

“The fat cells remember the overweight state and can return to this state more easily,” explains Professor Ferdinand von Meyenn, who led the study published in Nature.

To reach this conclusion, the research team first studied mice, examining fat cells from both overweight mice and those that had successfully lost weight through dieting. They found that obesity created distinctive epigenetic “stamps” on the fat cells that stubbornly remained even after weight loss. When these mice were later given access to high-fat foods, they regained weight more quickly than mice without these cellular memories.

The findings weren’t limited to mice. The team also analyzed fat tissue samples from formerly overweight people who had undergone weight loss surgery, using samples from medical centers in Sweden and Germany. While they looked at different cellular markers in the human samples, the results aligned with their mouse studies, suggesting that human fat cells also “remember” their previous size. Perhaps most striking is how long this cellular memory might last.

“Fat cells are long-lived cells. On average, they live for ten years before our body replaces them with new cells,” says Laura Hinte, a doctoral student involved in the research.

Currently, there’s no way to erase these cellular memories with medication, though that could change in the future. For now, the researchers emphasize that prevention is key, particularly for young people.

“It’s precisely because of this memory effect that it’s so important to avoid being overweight in the first place. Because that’s the simplest way to combat the yo-yo phenomenon,” von Meyenn notes.

The team is now investigating whether other types of cells, such as those in the brain or blood vessels, might also harbor memories of previous weight gain. If so, this could help explain why maintaining weight loss is such a complex challenge for so many people.

This breakthrough research not only helps explain a frustrating aspect of weight loss that millions have experienced but also underscores the importance of preventing weight gain in the first place – our cells, it seems, never quite forget.

Source : https://studyfinds.org/fat-cells-remember-being-fat/

Climate change after the Ice Age: How CO2 and ‘reverse tsunamis’ created a ‘slushy’ Earth

The beginning of the ice age on the Ob River with snow and ice hummocks off the coast. Berdsk, Novosibirsk region, Western Siberia of Russia. (Photo by Starover Sibiriak on Shutterstock)

The Earth underwent a complete makeover after the last Ice Age, turning from a frozen wasteland to a slushy planet surrounded by oceans. In a new study, researchers looked at how it was possible for the once snowball Earth to rapidly melt and enter its “plumeworld ocean” era.

The surface ocean remained deeply frozen for several million years during the Ice Ages, which occurred about 635 to 650 million years ago. Scientists believe global temperatures dropped, causing the polar ice caps to spread around the hemispheres. More ice meant more sunlight reflected away from the Earth, contributing further to the frigid temperatures.

In a new study published in the Proceedings of the National Academy of Sciences journal, researchers show the first geochemical evidence of Earth setting conditions for the climate to change, with carbon dioxide from the sky eventually thawing out the ice.

“Our results have important implications for understanding how Earth’s climate and ocean chemistry changed after the extreme conditions of the last global ice age,” says Tian Gan, a former Virginia Tech postdoctoral researcher and lead author of the study, in a press release.

Along with sunlight being reflected off the polar ice caps, low carbon dioxide levels kept a quarter of the ocean deeply frozen. The frozen ocean stopped several chain reactions. The water cycle locked up, preventing evaporation, rain, and snow. With no water available, chemical weathering declined. This carbon dioxide-consuming process involves rocks breaking down as they interact with environmental chemicals. A lack of weathering and erosion causes carbon dioxide to build up in the atmosphere, trapping heat.

“It was just a matter of time until the carbon dioxide levels were high enough to break the pattern of ice,” says Shuhai Xiao, a geologist at Virginia Tech and study coauthor. “When it ended, it probably ended catastrophically.”

Over time, the accumulating carbon dioxide trapped more and more heat in the atmosphere. The ice caps melted, and Earth’s climate turned from frozen to slushy. Over 10 million years, average global temperatures climbed from -50 to 120 degrees Fahrenheit.

In the current study, researchers analyzed lithium isotopes from carbonate rocks formed after the Ice Age ended. The rocks’ geochemical signatures would give researchers a better idea of what the climate was like after the Ice Age.

Source: https://studyfinds.org/climate-change-after-ice-age-slushy-earth/

Surprising study claims being ‘fat but fit’ is a real thing

CHARLOTTESVILLE, Va. — Forget everything you thought you knew about weight and health. A head-turning study suggests that being physically fit might matter more than how much you weigh when it comes to your risk of dying from heart disease or other causes.

A team led by researchers from the University of Virginia has turned traditional wisdom about health on its head, finding that people who are overweight or obese but physically fit have essentially the same risk of death as those at a “normal” weight.

The real killer? Being unfit, regardless of body size.

This comprehensive analysis, published in the British Journal of Sports Medicine, examined nearly 400,000 individuals and found that people who were out of shape faced a dramatically higher risk of death – being roughly two to three times more likely to die from cardiovascular disease or other causes compared to their physically fit counterparts.

“Fitness, it turns out, is far more important than fatness when it comes to mortality risk,” says Siddhartha Angadi, an associate professor of exercise physiology at the University of Virginia School of Education and Human Development, in a media release.

“Exercise is more than just a way to expend calories. It is excellent ‘medicine’ to optimize overall health and can largely reduce the risk of cardiovascular disease and all-cause death for people of all sizes.”

The study tracked participants across multiple groups, with an average age range of 42 to 64 years. Importantly, the research included a more diverse group than previous studies, with 33% of participants being women – a significant improvement over earlier research that was dominated by male participants.

Participants were categorized into groups based on two key measurements: body mass index (BMI) and cardiorespiratory fitness. BMI is a standard measure that uses height and weight to estimate body fat, while cardiorespiratory fitness measures how efficiently your body can transport and use oxygen during exercise.

Remarkably, the researchers discovered that being “fit” appeared to neutralize the traditionally understood health risks associated with being overweight or obese. Individuals who were overweight or obese but maintained good fitness levels showed no statistically significant increase in mortality risk compared to those at a normal weight.

“The largest reduction in all-cause and cardiovascular disease mortality risk occurs when completely sedentary individuals increase their physical activity modestly,” Angadi reports. “This could be achieved with activities such as brisk walking several times per week with the goal of accumulating approximately 30 minutes per day.”

Conversely, individuals who were unfit – regardless of their weight – faced substantially higher risks. The unfit group, across all weight categories, showed a two to three-fold increase in the likelihood of dying from all causes, including heart disease.

This doesn’t mean weight doesn’t matter at all. Instead, the study suggests that physical activity and fitness might be more critical to long-term health than previously understood. The researchers propose a radical shift in approach: instead of focusing exclusively on weight loss, public health strategies should emphasize improving physical fitness.

Source : https://studyfinds.org/fat-but-fit-is-a-real-thing/

Breathe deeply: Research finds you can absorb nutrients and vitamins from fresh air

(Photo by Rido on Shutterstock)

You know that feeling you get when you take a breath of fresh air in nature? There may be more to it than a simple lack of pollution.

When we think of nutrients, we think of things we obtain from our diet. But a careful look at the scientific literature shows there is strong evidence humans can also absorb some nutrients from the air.

In a new perspective article published in Advances in Nutrition, we call these inhaled nutrients “aeronutrients” – to differentiate them from the “gastronutrients” that are absorbed by the gut.

We propose that breathing supplements our diet with essential nutrients such as iodine, zinc, manganese, and some vitamins. This idea is strongly supported by published data. So, why haven’t you heard about this until now?

Breathing is constant
We breathe in about 9,000 liters of air a day and 438 million liters in a lifetime. Unlike eating, breathing never stops. Our exposure to the components of air, even in very small concentrations, adds up over time.

To date, much of the research around the health effects of air has been centered on pollution. The focus is on filtering out what’s bad rather than what could be beneficial. Also, because a single breath contains minuscule quantities of nutrients, it hasn’t seemed meaningful.

For millennia, different cultures have valued nature and fresh air as healthful. Our concept of aeronutrients shows these views are underpinned by science. Oxygen, for example, is technically a nutrient – a chemical substance “required by the body to sustain basic functions”.

We just don’t tend to refer to it that way because we breathe it rather than eat it.

How do aeronutrients work, then?
Aeronutrients enter our body by being absorbed through networks of tiny blood vessels in the nose, lungs, olfactory epithelium (the area where smell is detected), and the oropharynx (the back of the throat).

The lungs can absorb far larger molecules than the gut – 260 times larger, to be exact. These molecules are absorbed intact into the bloodstream and brain.

Drugs that can be inhaled (such as cocaine, nicotine, and anesthetics, to name a few) will enter the body within seconds. They are effective at far lower concentrations than would be needed if they were being consumed by mouth.

In comparison, the gut breaks substances down into their smallest parts with enzymes and acids. Once these enter the bloodstream, they are metabolized and detoxified by the liver.

The gut is great at taking up starches, sugars, and amino acids, but it’s not so great at taking up certain classes of drugs. In fact, scientists are continuously working to improve medicines so we can effectively take them by mouth.

The evidence has been around for decades
Many of the scientific ideas that are obvious in retrospect have been beneath our noses all along. Research from the 1960s found that laundry workers exposed to iodine in the air had higher iodine levels in their blood and urine.

More recently, researchers in Ireland studied schoolchildren living near seaweed-rich coastal areas, where atmospheric iodine gas levels were much higher. These children had significantly more iodine in their urine and were less likely to be iodine-deficient than those living in lower-seaweed coastal areas or rural areas. There were no differences in iodine in their diet.

This suggests that airborne iodine – especially in places with lots of seaweed – could help supplement dietary iodine. That makes it an aeronutrient our bodies might absorb through breathing.

Source: https://studyfinds.org/nutrients-and-vitamins-from-air/

The egg came before the chicken! Billion-year-old clue answers epic question

A cell of the ichthyosporean C. perkinsii showing distinct signs of polarity, with clear cortical localization of the nucleus before the first cleavage. Microtubules are shown in magenta, DNA in blue, and the nuclear envelope in yellow. © DudinLab

Which came first, the chicken or the egg? It’s been a puzzle that has stumped humanity for ages. However, an ancient cellular clue may have finally answered this timeless question!

A team in Switzerland says that long before chickens clucked or embryos developed, a microscopic marine creature was rehearsing the intricate dance of cellular division. This served as a billion-year-old preview of life’s most fundamental magic.

Specifically, scientists at the University of Geneva discovered something extraordinary in Chromosphaera perkinsii, a single-celled organism that seems to preview animal embryonic development. It turns out that the genetic machinery for creating eggs — the fundamental starting point of complex life — existed over a billion years before animals emerged.

“It’s fascinating, a species discovered very recently allows us to go back in time more than a billion years,” says Marine Olivetta, the study’s first author, in a university release.

In other words, the “egg” came before the “chicken” — but not in the way you might think. The cellular processes that allow an egg to develop into a complex organism were already developing in simple, single-celled life forms. This tiny organism shows that the blueprint for creating life — the ability to divide, specialize, and develop — predates animals by hundreds of millions of years. This research is published in the journal Nature.

The organism undergoes a process called palintomy — synchronized cell divisions without growth — creating multicellular colonies that bear a striking resemblance to early embryonic stages. These colonies persist for about a third of the organism’s life cycle and contain at least two distinct cell types, an unprecedented complexity for a single-celled creature.

Intriguingly, when C. perkinsii reaches its maximum size, it divides into three types of free-living cells: flagellates, amoeboflagellates, and dividing cells. Like a microscopic dress rehearsal for animal life, these cells activate different genes in successive waves, mimicking early embryonic development.

“Although C. perkinsii is a unicellular species, this behavior shows that multicellular coordination and differentiation processes are already present in the species, well before the first animals appeared on Earth,” explains lead researcher Omaya Dudin.

The discovery doesn’t just solve this age-old scientific puzzle — it challenges our understanding of life’s complexity. It also suggests that the genetic tools for creating sophisticated organisms existed far earlier than previously thought, waiting in the wings of evolutionary history.

Who knew the secret to understanding life’s grand performance was hiding in a tiny marine organism, patiently waiting to tell its story?

Source: https://studyfinds.org/egg-came-before-chicken/

How aging men can maintain a satisfying love life, according to a doctor

An older couple in bed (© pikselstock – stock.adobe.com)

For men, sex in their 40s or 60s is different from their 20s. However, it can still be healthy and enjoyable. Sex isn’t just for young men. Seniors can enjoy sex into their 80s and beyond. Moreover, it’s good for their physical health and self-esteem.

Although sex can be healthy for adults of all ages, there are some changes to take note of as men get older:

  • Lower sex drive
  • Erection changes
  • Ejaculation changes (premature or delayed)
  • Discomfort or pain
  • Body, hair, and genital changes
  • Less stamina or strength
  • Depression or stress
  • Lower fertility
  • Fatigue
  • Changes in your partner’s ability or sexual desire

With these challenges in mind, working with your body is key for ongoing and fulfilling sexual enjoyment.

What health problems can disrupt sexual ability?
Age-related changes, long-term health conditions, and drugs can affect you sexually. Blood pressure drugs, antidepressants, antihistamines, and acid-blocking drugs can also affect sexual functioning. So can heart disease, diabetes, cancer, and prostate problems.

However, these don’t have to end sexual functioning. There are different ways to be intimate. Start by talking with your primary healthcare provider. Often, medication dosages can be modified, or different drugs that cause fewer side effects can be substituted.

Arthritis
Different sexual positions may relieve pain or discomfort during intimacy. Try using heat to lessen joint pain before or after sex. Sexual partners dealing with arthritis should focus on what works rather than what doesn’t work.

Heart disease
Following a heart attack or heart disease diagnosis, talk with your healthcare provider about your concerns regarding sexual activity and how to engage in intimacy safely.

Emotional issues
Feelings affect sex at any age. Being older can actually work in your favor. There may be fewer distractions, more privacy, more time, and fewer concerns about pregnancy. Many older couples say their sex life is the best it’s ever been.

Other couples feel stressed by health conditions, money troubles, or other lifestyle changes. If either of you feel depressed, consult your healthcare provider.

Sex tips for seniors
Talk with your partner. Talking about sex is difficult for many people. Being vulnerable can be uncomfortable. Remember, however, that your partner is probably feeling vulnerable, too. You need to discuss your and your partner’s needs, wants, and worries. If necessary, include a sex therapist in your discussions.

Talk with your healthcare provider. By age 65, you’ll probably be seeing your provider about every six months. They manage your chronic health conditions and medications. Erection problems may be your earliest sign of heart disease. Your doctor can check your testosterone level. Tell them about any smoking, alcohol misuse, or illicit drug use.

Change your routine. Try sex in the morning, when you are fully rested, and testosterone is usually at its peak.

Expand how you define sex. Intercourse is not the only way to have sex. Oral contact and touching in intimate ways can be satisfying too.

Consult a sex therapist. Your healthcare professional can provide a referral. A therapist can educate you about sexuality, suggest new behaviors, recommend reading material and devices, and address your personal concerns. If your partner refuses to see a sex therapist, going by yourself can still be enlightening.

Laugh together. A sense of humor, especially about the foibles of senior sex, can ease the counterproductive stress that can inhibit functioning.

Reignite romance. Find a way to romance your partner. There are plenty of books with ideas. If you lose your partner, begin socializing. People never lose their need for emotional closeness and intimacy. If you have a new partner, use a condom. Sexually transmitted infections have skyrocketed in older men and women.

Source: https://studyfinds.org/maintain-satisfying-sex-men-age/

Add 11 years to your life? Science says it’s as simple as a daily walk

(© RawPixel.com – stock.adobe.com)

In what might be the best return on investment since Bitcoin’s early days, scientists have discovered that every hour of walking could yield up to six hours of additional life. Unlike cryptocurrency, however, this investment is guaranteed by the laws of human biology. An exciting modeling study reveals that if every American over the age of 40 was as physically active as the most active quarter of the population, they could expect to live an extra five years on average.

While scientists have long known that physical inactivity increases the risk of diseases like heart disease and stroke, this study is the first to quantify exactly how many years of life Americans might be losing due to insufficient physical activity. The findings suggest that the impact of physical inactivity on life expectancy may be substantially larger than previously estimated.

The study, led by researchers from Griffith University in Australia and various institutions worldwide, challenges previous estimates of physical activity’s benefits, which were largely based on self-reported data. By using more accurate device-based measurements, the researchers found that the relationship between physical activity and mortality is about twice as strong as earlier studies suggested.

Consider this: The most active 25% of Americans over age 40 engage in physical activity equivalent to about 160 minutes of normal-paced walking (at 3 miles per hour) every day. If all Americans over 40 matched this level of activity, it would boost the national life expectancy at birth from 78.6 years to nearly 84 years.

For the least active quarter of the population to match the most active group, they would need to add about 111 minutes of daily walking (or equivalent activity) to their routine. While this might sound challenging, the potential reward is significant: nearly 11 additional years of life expectancy.

To put this in perspective, that’s roughly equivalent to eliminating half the life expectancy gap between the U.S. and countries with the highest life expectancy globally.

Not All Talk: How Scientists Walked The Walk
Study authors analyzed data from the National Health and Nutritional Examination Survey (NHANES), focusing on Americans aged 40 and older who wore activity monitors for at least four days. Unlike previous studies that relied on participants’ memory and honesty about their activity levels, these monitors provided objective measurements of every movement throughout the day.

The results showed a striking “diminishing returns” effect in the relationship between physical activity and longevity. The greatest benefits were seen among the least active individuals: moving from the lowest activity quarter to the second-lowest required just 28.5 minutes of additional walking per day but could add 6.3 years to life expectancy. That means every hour of walking for this group translated to an extra 6.3 hours of life — an impressive return on investment.
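
As a rough sanity check on that “hours gained per hour walked” figure, the back-of-envelope sketch below reproduces a ratio near 6.3 under one loudly stated assumption of ours, not the study’s: that the extra 28.5 minutes per day is kept up over roughly a 50-year horizon.

```python
# Back-of-envelope check of the "6.3 hours of life per hour walked" claim.
# Assumptions (ours, not the paper's): the extra 28.5 min/day is sustained
# over a multi-decade horizon; the 6.3-year gain is treated as fixed.
EXTRA_MIN_PER_DAY = 28.5
YEARS_GAINED = 6.3
HORIZON_YEARS = 50          # hypothetical: daily walking from age ~40 on

hours_walked = EXTRA_MIN_PER_DAY / 60 * 365 * HORIZON_YEARS
hours_gained = YEARS_GAINED * 365 * 24
print(f"{hours_gained / hours_walked:.1f} hours gained per hour walked")
# -> ~6.4 with a 50-year horizon; shorter horizons push the ratio higher
```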

As people became more active, the additional benefits per hour of activity decreased but remained significant. For those in the second-lowest activity quarter, reaching the activity level of the most active group would require about 83 additional minutes of walking per day and could add 4.6 years to their life expectancy.

Never Too Late

These findings, published in the British Journal of Sports Medicine, have important implications for public health policy and urban planning. Creating walkable neighborhoods, maintaining safe parks and green spaces, and designing cities that encourage active transportation could help populations achieve these higher activity levels naturally. The researchers emphasize that increasing physical activity at the population level requires a comprehensive approach that considers social determinants and addresses inequalities in access to activity-promoting environments.

The study also highlighted significant disparities in physical activity levels across socioeconomic groups. In 2020, only 16.2% of men and 9.9% of women in the lowest income group met the guidelines for aerobic and muscle-strengthening activities, compared to 32.4% and 25.9% in the highest-income group, respectively. This suggests that initiatives to promote physical activity could help reduce health inequalities.

Source: https://studyfinds.org/add-11-years-to-your-life/

Yes, the universe began with the Big Bang – Here’s how scientists know for sure

NASA’s Goddard Space Flight Center/CI Lab

How did everything begin? It’s a question that humans have pondered for thousands of years. Over the last century or so, science has homed in on an answer: the Big Bang.

This describes how the Universe was born in a cataclysmic explosion almost 14 billion years ago. In a tiny fraction of a second, the observable universe grew by the equivalent of a bacterium expanding to the size of the Milky Way. The early universe was extraordinarily hot and extremely dense. But how do we know this happened?

Let’s look first at the evidence. In 1929, the American astronomer Edwin Hubble discovered that distant galaxies are moving away from each other, leading to the realization that the universe is expanding. If we were to wind the clock back to the birth of the cosmos, the expansion would reverse and the galaxies would fall on top of each other 14 billion years ago. This age agrees nicely with the ages of the oldest astronomical objects we observe.
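
That “wind the clock back” argument can be made concrete with Hubble’s law, v = H0 × d: every pair of galaxies meets again at a time d / v = 1 / H0 in the past, so a single division yields the age estimate. The sketch below uses a commonly quoted round value of H0 = 70 km/s per megaparsec (the precise figure is still debated) and ignores how the expansion rate has changed over time.

```python
# The "wind the clock back" estimate: with Hubble's law v = H0 * d,
# every galaxy pair meets t = d / v = 1 / H0 ago (ignoring that the
# expansion rate has changed over cosmic history).
H0 = 70.0                      # km/s/Mpc -- a commonly quoted round value
KM_PER_MPC = 3.0857e19         # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

hubble_time_s = KM_PER_MPC / H0          # 1/H0, in seconds
print(f"{hubble_time_s / SECONDS_PER_YEAR / 1e9:.1f} billion years")
# -> ~14.0 billion years, matching the age quoted above
```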

The idea was initially met with skepticism – and it was actually a skeptic, the English astronomer Fred Hoyle, who coined the name. Hoyle sarcastically dismissed the hypothesis as a “Big Bang” during an interview with BBC radio on March 28, 1949.

Then, in 1964, Arno Penzias and Robert Wilson detected a particular type of radiation that fills all of space. This became known as the cosmic microwave background (CMB) radiation. It is a kind of afterglow of the Big Bang explosion, released when the cosmos was a mere 380,000 years old.

The CMB provides a window into the hot, dense conditions at the beginning of the universe. Penzias and Wilson were awarded the 1978 Nobel Prize in Physics for their discovery.

More recently, experiments at particle accelerators like the Large Hadron Collider (LHC) have shed light on conditions even closer to the time of the Big Bang. Our understanding of physics at these high energies suggests that, in the very first moments after the Big Bang, the four fundamental forces of physics that exist today were initially combined in a single force.

The present day four forces are gravity, electromagnetism, the strong nuclear force and the weak nuclear force. As the universe expanded and cooled down, a series of dramatic changes, called phase transitions (like the boiling or freezing of water), separated these forces.

Experiments at particle accelerators suggest that a few billionths of a second after the Big Bang, the latest of these phase transitions took place. This was the breakdown of electroweak unification, when electromagnetism and the weak nuclear force ceased to be combined. This is when all the matter in the Universe assumed its mass.

Source: https://studyfinds.org/universe-began-big-bang/

Breakthrough stem cell surgery restores vision among several human patients

(Photo of an eye by Vanessa Bumbeers on Unsplash)

In a remarkable medical breakthrough, doctors have successfully used stem cells to treat a debilitating eye condition that can lead to vision loss. The world-first procedure, which involves transplanting lab-grown corneal cells derived from human stem cells, has the potential to restore sight for those suffering from a condition called limbal stem cell deficiency (LSCD).

LSCD is a devastating disorder that occurs when the stem cells responsible for maintaining the cornea’s outer layer are damaged or depleted. This can lead to the growth of fibrous tissue over the cornea, clouding vision and causing pain, inflammation, and even blindness. Until now, treatments have been limited, often involving complex surgeries or risky immunosuppressant drugs.

However, the new research published in the medical journal The Lancet shows remarkable success with a novel approach using induced pluripotent stem cells (iPSCs) — adult cells that have been reprogrammed to behave like embryonic stem cells. Researchers in Japan were able to generate corneal epithelial cell sheets from iPSCs and successfully transplant them into the eyes of four patients with LSCD.

These patients — three men and one woman — ranged in age from 39 to 72. All had been diagnosed with LSCD stemming from various causes, including chemical burns, immune disorders, and a rare skin condition. After undergoing a procedure to remove the clouded corneal tissue, the research team carefully transplanted the lab-grown stem cell-derived corneal cell sheets onto the patients’ eyes.

Slit-lamp microscopy images of the treated eyes (Credit: The Lancet)

Remarkably, the transplanted cells were able to successfully integrate and restore the corneal surface in all four patients, with no serious side-effects reported over a two-year follow-up period. Three of the patients experienced significant improvements in visual acuity, corneal clarity, and overall eye health. Even the fourth patient, who had the most severe condition, showed some improvement initially, though this was not sustained long-term.

The researchers hypothesize that the transplanted cells either directly regenerate the corneal epithelium or prompt the patient’s own conjunctival cells to take on a corneal-like function, a process called “conjunctival transdifferentiation.” Further research will still be necessary to fully understand the underlying mechanisms of this vision-saving process.

Source : https://studyfinds.org/stem-cell-surgery-restores-vision

12,000-year-old discovery forces experts to spin new story about wheel’s origins

(Credit: Marijana Batinic/Shutterstock)

While most of us learned that the wheel was invented around 3500 BCE for transportation, a groundbreaking discovery in Israel suggests we need to roll back our understanding of rotational technology by several thousand years. Researchers have unearthed over 100 perforated stone discs from a 12,000-year-old village that may represent humanity’s first experiments with wheel-like objects – not for moving carts or chariots, but for spinning thread.

The archaeological site of Nahal Ein-Gev II, located near the Sea of Galilee in Israel, has yielded an extraordinary collection of 113 limestone pebbles, each carefully drilled through the center. While such perforated stones are not uncommon in ancient sites, this collection is special because of its age, quantity, and the careful way the holes were made. These weren’t just random rocks with holes – they appeared to be carefully selected and modified tools that served a specific purpose.

Think of them as prehistoric fidget spinners but with a practical application. The researchers believe these perforated stones served as spindle whorls – weighted discs that, when attached to a wooden stick, helped transform plant or animal fibers into thread through spinning. It’s similar to how a modern spinning wheel works, just more primitive and portable.

The research team, led by Talia Yashuv and Leore Grosman from the Hebrew University of Jerusalem, used cutting-edge 3D scanning technology to analyze these ancient artifacts in unprecedented detail. They discovered that despite their seemingly simple appearance, these tools showed remarkable sophistication in their design and creation.

The stones weren’t just randomly selected – most were made from soft limestone, weighed between 1 and 34 grams (with most falling between 2 and 15 grams), and had holes drilled precisely through their centers. This central positioning was crucial for the spinning process to work effectively, much like how a modern fidget spinner needs perfect balance to rotate smoothly.

What’s particularly fascinating is how these holes were created. In 95% of the stones, the holes were drilled from both sides to meet in the middle – a more complex but more effective technique than drilling straight through. This bi-directional drilling created a distinctive hourglass-shaped hole that, as experimental archaeology would later prove, actually helped secure the wooden spindle in place.

Spinning methods: (a) manual thigh-spinning; (b) spindle-and-whorl “supported spinning”; (c) “drop spinning”; (d) the experimental spindles and whorls, with 3D scans of the pebbles and their negative perforations. The bottom pictures show Yonit Kristal experimenting with spinning fibers using replicas of the perforated pebbles, via supported spinning and drop spinning techniques (photographed by Talia Yashuv). (Credit: Yashuv & Grosman, 2024, PLOS ONE, CC-BY 4.0)

To test their theory about these objects being spindle whorls, the researchers created replicas and enlisted the help of a traditional craft expert, Yonit Kristal. Using these reconstructed tools, they successfully spun both wool and flax into thread, though flax proved more effective. The experiments showed that while these ancient tools weren’t as efficient as modern spinning wheels, they represented a significant technological advancement over hand-spinning techniques.

This study, published in PLOS ONE, challenges our understanding of when humans first began experimenting with rotational technology. While the wheel-and-axle system is commonly associated with transportation in the Bronze Age (around 5,000 years ago), these spindle whorls show that humans were already manipulating rotational motion for practical purposes thousands of years earlier.

Source : https://studyfinds.org/12000-year-old-discovery-wheels

Frozen sabre-toothed kitten reveals ‘significant differences’ with modern lion cub

The cub’s mummified remains, including its head, front arms and paws, and part of its chest, were found well-preserved in Yakutia, Russia, in 2020.

The frozen sabre-toothed cub. Pic: A V Lopatin/Scientific Reports

The frozen remains of a sabre-toothed cat thought to be about 31,800 years old have been studied for the first time, according to a new study.

The cub’s mummified remains, including its head, front arms and paws, and part of its chest, were found well-preserved in Arctic permafrost on the banks of the Badyarikha River in Yakutia, in Russia’s Siberia region, in 2020.

“Findings of frozen mummified remains of the Late Pleistocene mammals are very rare,” the researchers explained, referring to the period in which it lived.

They added: “For the first time in the history of palaeontology, the appearance of an extinct mammal that has no analogues in the modern fauna has been studied.”

A modern day lion cub. Pic: A V Lopatin/Scientific Reports

When the remains were compared with those of a modern lion cub of a similar age, there were “significant differences”, said the experts.

The kitten, which was about three weeks old, has wider paws, their width almost the same as their length.

It also lacks carpal pads (which act as shock absorbers) – an absence thought to be an adaptation to low temperatures and walking on snow.

‘Large mouth, small ears and massive neck’

The prehistoric animal also has a “large mouth opening”, small ears and a “very massive neck region” along with elongated forelimbs.

Pics A, B and C are of the prehistoric animal. Pic D shows the modern lion cub, including 1 – the first digital pad, and 2 – the carpal pad. Pic: A V Lopatin/Scientific Reports

Its neck is “longer and more than twice as thick” as the modern cub’s, and the mouth opening is about 11% to 19% bigger.

“The difference in (neck) thickness is explained by the large volume of muscles, which is visually observed at the site of separation of the skin from the mummified flesh,” said the study, which was carried out by A V Lopatin of the Russian Academy of Sciences in Moscow and colleagues.

Source : https://news.sky.com/story/frozen-sabre-toothed-kitten-reveals-significant-differences-with-modern-lion-cub-13254961

New Therapeutic Vaccine Gives Hope Against Super-Aggressive Triple-Negative Breast Cancer

A new therapeutic vaccine offers fresh hope for women battling super-aggressive triple-negative breast cancer. The vaccine teaches patients’ immune systems to identify and attack cancer cells. Sixteen of the 18 patients in an early trial remained cancer-free three years after receiving the vaccine. The researchers from Washington University School of Medicine now want larger clinical trials to prove the vaccine’s effectiveness.

An experimental vaccine could be the best hope for women who are battling an aggressive and hard-to-treat breast cancer, a new study has revealed. The shots, according to experts, are safe and remarkably effective against triple-negative breast cancer – a kind that cannot be treated with hormone therapy.

Triple-negative breast cancer is characterized by cancer cells that lack or have low levels of estrogen receptor, progesterone receptor, and human epidermal growth factor receptor 2.

The new vaccine trained the immune system to kill cancer cells

In the latest trial for the vaccine, 16 of the 18 patients remained cancer-free three years after receiving it, their immune systems having destroyed the remaining cancer cells, according to results published Nov. 13 in the journal Genome Medicine.

For comparison, historical data suggest that only about half of patients treated with traditional surgery for this type of breast cancer remain cancer-free after three years. “These results were better than we expected,” said Dr. William Gillanders, senior researcher and professor of surgery at Washington University School of Medicine in St. Louis.

How was the trial conducted? 

An early clinical trial for the vaccine was conducted with 18 patients with triple-negative breast cancer that had not spread elsewhere in the body, scientists said. Around 10 to 15 percent of the breast cancers that occur in the United States are triple-negative, according to the National Breast Cancer Foundation.

To date, triple-negative breast cancer has no targeted therapies. It must be treated with traditional approaches like surgery, chemotherapy, and radiation therapy, researchers said in background notes.

Source: https://www.timesnownews.com/health/new-therapeutic-vaccine-gives-hope-against-super-aggressive-triple-negative-breast-cancer-article-115319099

Scientists discover life in the most uninhabitable place on Earth

(Credit: PositiveTravelArt/Shutterstock)

In a remarkable discovery, researchers have found evidence of living microbes thriving in one of the most inhospitable environments on Earth – the Atacama Desert of Chile. This vast, arid landscape is often described as the driest place on the planet, making it seemingly impossible for any life to exist. Yet, the new study reveals a diverse microbial community actively colonizing this extreme wasteland.

The key to this breakthrough was a novel technique developed by an international team of scientists led by geomicrobiologist Dirk Wagner, Ph.D., from the GFZ German Research Centre for Geosciences. Their method allows researchers to separate the genetic material of living microbes from the fragments of dead cells, providing a clearer picture of the active microbial community.

“Microbes are the pioneers colonizing this kind of environment and preparing the ground for the next succession of life,” explains Dr. Wagner in a media release.

This newfound understanding could have implications far beyond the Atacama, as similar processes may occur in other extreme environments, such as areas affected by natural disasters or even on other planets.

The researchers, who published their findings in the journal Applied and Environmental Microbiology, collected soil samples from across the Atacama Desert, from the Pacific coast to the foothills of the Andes mountains. Using their innovative separation technique, they were able to identify a diverse array of living and potentially active microbes, including Actinobacteria and Proteobacteria, in even the most arid regions.

Location of the study sites and bacterial abundances. (A) Study sites along the Atacama Desert moisture gradient: Coastal Sand (CS), Alluvial Fan (AL), Red Sands (RS), Yungay (YU), and two hyperarid reference sites, Maria Elena (ME) and Lomas Bayas (LB). (B) Bacterial abundance based on 16S rRNA gene copy numbers of the extracellular and intracellular DNA pools, and phospholipid fatty acids (PLFAs), at the investigation sites along the Atacama transect. (Credit: Applied and Environmental Microbiology)

Interestingly, the team found that in the shallow soil samples (less than 5 centimeters deep), Chloroflexota bacteria dominated the pool of intracellular DNA from living cells. This suggests that these microbes are the most active members of the community, constantly replenishing the pool of genetic material.

“If a community is really active, then a constant turnover is taking place, and that means the 2 pools should be more similar to each other,” Wagner notes.

The researchers plan to further investigate the active microbial processes in the Atacama Desert through metagenomic sequencing of the intracellular DNA. This approach, they believe, will provide deeper insights into the microbes thriving in this extreme environment, paving the way for a better understanding of life’s resilience in the most inhospitable corners of our planet.

Source: https://studyfinds.org/life-most-uninhabitable-place/

How your 11-year-old brain predicts your 80-year-old mind

(© areebarbar – stock.adobe.com)

In 1932, while the rest of the world was grappling with the Great Depression, Scotland was busy giving intelligence tests to nearly every 11-year-old in the country. Little did those children know they were kickstarting one of the most illuminating studies ever conducted on how our brains age – or that their test scores would still be making scientific waves nearly a century later.

The Lothian Birth Cohorts study, led by researchers at the University of Edinburgh, has followed hundreds of people born in 1921 and 1936, tracking their cognitive abilities from age 11 into their 70s, 80s, and 90s. After 25 years of research, the scientists have unveiled some surprising discoveries about aging minds, including the fact that about half of our intelligence in old age is determined by how bright we were as children.

Think of it like a cognitive savings account – we start with a certain balance in childhood, and while life experiences can grow or diminish that initial deposit, our early cognitive capabilities have an outsized influence on our mental acuity decades later. The study found that someone who scored well on intelligence tests at age 11 was likely to perform well on similar tests in their 70s and beyond.

But what about the other half of the equation? The researchers discovered that various factors influence how well we maintain our mental edge as we age. Some are within our control, like staying physically active and socially engaged, while others aren’t, such as our genetic makeup. However, the effects of any single factor tend to be small – there’s no magic bullet for keeping our brains young.

The study’s setup was remarkably comprehensive. In the original 1932 and 1947 testing days, almost every 11-year-old child in Scotland (over 87,000 in 1932 and 70,000 in 1947) took the same intelligence test. Decades later, researchers tracked down hundreds of these individuals living in the Edinburgh area and convinced them to participate in detailed follow-up studies.

The participants, now in their later years, underwent regular testing every few years. They completed cognitive assessments, physical examinations, and brain scans. They provided blood samples for genetic analysis and other biological markers. Some even agreed to donate their brains after death for further research.

One of the study’s most striking findings was the dramatic variation in how people’s brains age. Brain scans of participants at age 73 revealed that some had brains that looked decades younger than others of the exact same age. This variation helped explain why some people maintain their mental sharpness while others experience more significant cognitive decline.

“What’s particularly fascinating is that even after seven decades, we found correlations of about 0.7 between childhood and older-age cognitive scores,” explains study co-author Ian Deary, a professor at Edinburgh, in a statement. “This means that just under half of the variance in intelligence in older age was already present at age 11.”
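
The arithmetic behind that quote is worth making explicit: the share of variance explained by a correlation is its square, so a correlation of 0.7 corresponds to about 49% of the variance. A one-line check:

```python
# The proportion of variance explained by a correlation is r squared.
r = 0.7
print(f"r^2 = {r ** 2:.2f}")  # r^2 = 0.49 -> "just under half" of the variance
```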

The research also challenged some common assumptions. For instance, many factors that seem to affect cognitive ability in old age – like physical fitness, social engagement, and inflammation levels – turned out to be partially explained by childhood intelligence. In other words, smarter kids were more likely to maintain healthy lifestyles and engage in mentally stimulating activities throughout their lives.

But it’s not all predetermined by childhood ability. The researchers identified several factors that appear to help maintain cognitive function, including education, physical activity, and not smoking. However, each factor’s individual effect tends to be modest – suggesting that the best approach to cognitive aging is to make multiple small positive changes rather than seeking a single solution.

The study has also contributed to our understanding of how genes influence cognitive aging. While genetic factors play a role, their effects are complex and often tiny. One exception is the APOE e4 gene variant, which was found to be associated with lower cognitive performance in old age but, interestingly, showed no effect on childhood cognitive ability.

The researchers made another fascinating discovery about DNA methylation – chemical modifications to our DNA that can change with age. They found that these age-related DNA changes could predict how long people would live, offering a new way to measure biological aging at the molecular level.

Perhaps most encouragingly, the study showed that cognitive decline isn’t inevitable or uniform. While some participants experienced significant decreases in mental ability, others maintained sharp minds well into their 80s and 90s. This suggests that while we can’t completely prevent cognitive aging, we may be able to influence its trajectory through lifestyle choices and environmental factors.

Source: https://studyfinds.org/11-year-old-brain-80-year-old-mind/

Scientists propose shocking new theory for origin of the Moon

(Credit: muratart/Shutterstock)

For nearly 40 years, scientists have generally agreed that Earth’s Moon formed from debris after a Mars-sized object slammed into our young planet. However, new research suggests a different possibility: our Moon may have been captured from space, originally paired with another rocky object before Earth’s gravity pulled it into orbit.

This new theory helps address some puzzling aspects of the Moon’s orbit that are difficult to explain with the traditional collision theory. Moreover, the collision theory doesn’t account for certain chemical signatures found in Moon rocks brought back by Apollo astronauts.

The study, published in The Planetary Science Journal by Penn State researchers Darren Williams and Michael Zugger, demonstrates that Earth could have acquired its Moon through a process called binary-exchange capture – the same mechanism thought to explain how Neptune captured its largest moon, Triton.

“The Moon is more in line with the sun than it is with the Earth’s equator,” Williams explains in a media release.

This observation doesn’t align well with the collision theory, which suggests the Moon should orbit above Earth’s equator.

The process of binary-exchange capture occurs when a planet encounters two objects orbiting each other. During this cosmic encounter, the planet’s gravity can separate the pair, capturing one object as a satellite while ejecting the other into space. This mechanism has already been demonstrated for larger planets in our solar system, but this new research shows it could work for Earth-sized planets as well.

Through mathematical modeling and computer simulations, the researchers found that Earth could potentially capture satellites ranging from 1% to 10% of its mass through this process. Our Moon, at 1.2% of Earth’s mass, falls comfortably within this range. The study showed that specific conditions would need to be met for successful capture. First, the approaching speed would need to be less than three kilometers per second, or about 6,700 miles per hour. While that might sound fast, it’s actually quite leisurely by solar system standards. Second, the binary pair needed to pass within about 20 Earth radii of our planet (about 80,000 miles) for Earth’s gravity to work its magic.
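
Those figures are straightforward to sanity-check. The sketch below uses standard reference values for the Earth and Moon masses and the Earth’s radius – assumed values for illustration, not numbers taken from the paper:

```python
# Sanity-check the capture conditions quoted above. The masses and radius
# below are standard reference values, assumed for this illustration.
EARTH_MASS_KG = 5.972e24
MOON_MASS_KG = 7.346e22
EARTH_RADIUS_KM = 6371
KM_TO_MILES = 0.621371

print(f"Moon/Earth mass ratio: {MOON_MASS_KG / EARTH_MASS_KG:.1%}")         # ~1.2%
print(f"3 km/s = {3 * 3600 * KM_TO_MILES:,.0f} mph")                        # ~6,700 mph
print(f"20 Earth radii = {20 * EARTH_RADIUS_KM * KM_TO_MILES:,.0f} miles")  # ~79,000 miles
```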

But catching a Moon is only half the story. The researchers also had to explain how a captured Moon would settle into its current well-behaved circular orbit. When first captured, the Moon would have followed a highly elliptical path, swooping very close to Earth at its nearest approach and far away at its most distant point. This is where the power of tides comes into play.

The team’s calculations showed that tidal forces between Earth and its newly captured Moon would have gradually civilized this wild orbit. Over time, these gravitational interactions would have pulled the Moon into a more circular path and slowed its rotation until it always showed the same face to Earth – exactly what we observe today.

“Today, the Earth tide is ahead of the Moon,” Williams explains in a statement. “High tide accelerates the orbit. It gives it a pulse, a little bit of boost. Over time, the Moon drifts a bit farther away.”

This ongoing process continues today, with the Moon moving approximately three centimeters farther from Earth each year.

This new theory could help explain some lunar mysteries that have puzzled scientists. For instance, it might account for why Moon rocks show chemical similarities to Earth (because the Moon formed in the same region of the solar system) while also having some distinct differences (because it originally formed as part of a separate object).

The researchers acknowledge that their scenario requires some fortunate circumstances. Not only would Earth need to encounter a binary pair of objects, but one member of that pair would need to be just the right size to become our Moon. However, they point out that binary objects were likely common in the early solar system – we still see many of them today in the asteroid belt and Kuiper belt beyond Neptune.

Perhaps the most intriguing aspect of this research is that it suggests similar captures could occur around planets in other solar systems. This raises the possibility that large moons might be more common around rocky planets than previously thought, with potential implications for habitability and the emergence of life.

Of course, proving this theory will be challenging, as the events in question occurred over 4.5 billion years ago. However, the mathematical modeling shows it’s physically possible, adding an intriguing new chapter to the ongoing debate about our Moon’s origins.

“No one knows how the Moon was formed. For the last four decades, we have had one possibility for how it got there. Now, we have two. This opens a treasure trove of new questions and opportunities for further study,” Williams states.

Source: https://studyfinds.org/new-theory-origin-of-moon/

Mpox clade Ia has evolved to jump from humans to humans: new study

Researchers have found a surge in the prevalence of mutations that can be attributed to a protein family in the human body called APOBEC

Extracellular, brick-shaped mpox virions (colorised pink). Backlighting shows surface membranes of the virions and the outlines of nucleocapsids. | Photo Credit: NIAID

Since the world eradicated smallpox in 1980, scientists have known that the battle against poxviruses was far from over. Of the multiple types that exist, scientists have been wary of one in particular: mpox. In fact, one of the points in the World Health Assembly’s post-eradication policies was the “continuation of monkeypox surveillance in West and Central Africa, at least until 1985”.

In 2022-2023, the World Health Organisation (WHO) declared the then-global outbreak of mpox a ‘public health emergency of international concern’. In August this year, the WHO declared mpox to be a public health emergency for the second time in two years.

Source: https://www.thehindu.com/sci-tech/science/mpox-clade-1a-human-to-human-transmission-virologica-study/article68851982.ece

Plastic-eating insect discovered in Kenya

Lesser mealworm larvae chew through polystyrene. (Photo courtesy Fathiya Khamis)

There’s been an exciting new discovery in the fight against plastic pollution: mealworm larvae that are capable of consuming polystyrene. They join a small group of insects found to be capable of breaking the polluting plastic down, though this is the first time an insect species native to Africa has been shown to do so.

Polystyrene, commonly known as Styrofoam, is a plastic material that’s widely used in food, electronic and industrial packaging. It’s durable and therefore difficult to break down. Traditional recycling methods – like chemical and thermal processing – are expensive and can create pollutants. This was one of the reasons we wanted to explore biological methods of managing this persistent waste.

I am part of a team of scientists from the International Centre of Insect Physiology and Ecology who have found that the larvae of the Kenyan lesser mealworm can chew through polystyrene and host bacteria in their guts that help break down the material.

The lesser mealworm is the larval form of the Alphitobius darkling beetle. The larval period lasts between 8 and 10 weeks. Lesser mealworms are mostly found in poultry-rearing houses, which are warm and can offer a constant food supply – ideal conditions for them to grow and reproduce.

Though lesser mealworms are thought to have originated in Africa, they can be found in many countries around the world. The species we identified in our study, however, could be a new subspecies within the Alphitobius genus. We are conducting further investigation to confirm this possibility.

Our study also examined the insect’s gut bacteria. We wanted to identify the bacterial communities that may support the plastic degradation process.

Plastic pollution levels are at critically high levels in some African countries. Though plastic waste is a major environmental issue globally, Africa faces a particular challenge due to high importation of plastic products, low re-use and a lack of recycling of these products.

By studying these natural “plastic-eaters,” we hope to create new tools that help get rid of plastic waste faster and more efficiently. Instead of releasing a huge number of these insects into trash sites (which isn’t practical), we can use the microbes and enzymes they produce in factories, landfills and cleanup sites. This means plastic waste can be tackled in a way that’s easier to manage at a large scale.

Key findings
We carried out a trial lasting over a month. The larvae were fed either polystyrene alone, bran (a nutrient-dense food) alone, or a combination of polystyrene and bran.

We found that mealworms on the polystyrene-bran diet survived at higher rates and consumed polystyrene more efficiently than those fed polystyrene alone. This highlights the benefits of ensuring the insects still had a nutrient-dense diet.

While the polystyrene-only diet did support the mealworms’ survival, it didn’t provide enough nutrition for them to break down polystyrene efficiently. This finding reinforced the importance of a balanced diet for the insects to optimally consume and degrade plastic. The insects could be eating the polystyrene because it’s mostly made up of carbon and hydrogen, which may provide them with an energy source.

The mealworms on the polystyrene-bran diet were able to break down approximately 11.7% of the total polystyrene over the trial period.

Gut bacteria
The analysis of the mealworm gut revealed significant shifts in the bacterial composition depending on the diet. Understanding these shifts in bacterial composition is crucial because it reveals which microbes are actively involved in breaking down plastic. This will help us to isolate the specific bacteria and enzymes that can be harnessed for plastic degradation efforts.

The guts of polystyrene-fed larvae were found to contain higher levels of Proteobacteria and Firmicutes, bacteria that can adapt to various environments and break down a wide range of complex substances. Bacteria such as Kluyvera, Lactococcus, Citrobacter and Klebsiella were also particularly abundant and are known to produce enzymes capable of digesting synthetic plastics. The bacteria won’t be harmful to the insect or to the environment when used at scale.

The abundance of these bacteria indicates that they play a crucial role in breaking down the plastic. This may mean that mealworms do not naturally have the ability to eat plastic; instead, when they start eating it, the bacteria in their guts might change to help break it down. Thus, the microbes in the mealworms’ stomachs can adjust to unusual diets, like plastic.

These findings support our hypothesis that the gut of certain insects can enable plastic degradation. This is likely because the bacteria in their gut can produce enzymes that break down plastic polymers.

This raises the possibility of isolating these bacteria, and the enzymes produced, to create microbial solutions that will address plastic waste on a larger scale.

Source: https://studyfinds.org/plastic-eating-insect-discovered-in-kenya/

Bombshell study rewrites history of Jupiter’s iconic ‘Great Red Spot’

Jupiter’s Great Red Spot, as seen from a Juno flyby in 2018. The Red Spot we see today is likely not the same one famously observed by Cassini in the 1600s, according to a new Geophysical Research Letters paper. Credit: Enhanced Image by Gerald Eichstadt and Sean Doran (CC BY-NC-SA) based on images provided Courtesy of NASA/JPL-Caltech/SwRI/MSSS

For nearly 400 years, the swirling red vortex known as Jupiter’s Great Red Spot has captured the attention and imagination of astronomers. But surprising new research suggests this planetary icon is not the same one described by Italian astronomer Giovanni Domenico Cassini. A detailed analysis of historical observations dating back to the 1600s reveals that what we know today as Jupiter’s Great Red Spot may have formed in the 19th century and is far smaller than its predecessor.

Back in 1665, Cassini noted a large dark oval feature near the location of today’s Great Red Spot. This blemish, dubbed the “Permanent Spot,” was observed intermittently until 1713. Many assumed this was an early sighting of the Great Red Spot, implying the massive storm had been churning for centuries.

However, in a new paper published in Geophysical Research Letters, a team of astronomers scrutinized measurements of both the Permanent Spot and the Great Red Spot over time. To their surprise, the two spots don’t match up. In the late 1800s, the Great Red Spot was two to three times bigger than the Permanent Spot had been. There are also no recorded observations of either spot for over 100 years between 1713 and 1830. This gap strongly suggests the Permanent Spot had disappeared, and the Great Red Spot is a more recent storm that first flared up in 1831.

(a) In this 1711 painting by Donato Creti, a red spot is shown prominently on Jupiter—likely influenced by communications with astronomers Cassini or Manfredi. Two late-1800s drawings (b,c, Trouvelot and Elger, respectively) show elongated spots on the planet. Researchers used these illustrations and others to track Jupiter’s red spots through time. They determined today’s Great Red Spot is a different one than that which Cassini first observed.

What’s more, the Great Red Spot has been steadily shrinking since it was first definitively observed. The researchers found that its length has been decreasing by about 210 kilometers (roughly 130 miles) per year. At this rate, if the Great Red Spot and Permanent Spot were the same storm, the Permanent Spot would have had to grow dramatically in size between 1713 and 1830 to reach the dimensions of the Great Red Spot in the 1800s. There are no observations supporting this, and such growth would be uncharacteristic of Jupiter’s vortices.
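
As a quick check on the quoted figures, the kilometers-to-miles conversion of the shrink rate works out as follows (a plain unit conversion, not an additional result from the paper):

```python
# Convert the Great Red Spot's reported shrink rate from km/yr to miles/yr.
KM_TO_MILES = 0.621371
rate_km_per_year = 210
print(f"~{rate_km_per_year * KM_TO_MILES:.0f} miles per year")  # ~130 miles per year
```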

“From the measurements of sizes and movements, we deduced that it is highly unlikely that the current Great Red Spot was the ‘Permanent Spot’ observed by Cassini,” adds research leader Agustín Sánchez-Lavega, a planetary scientist at the University of the Basque Country, in a media release. “The ‘Permanent Spot’ probably disappeared sometime between the mid-18th and 19th centuries, in which case we can now say that the longevity of the Red Spot exceeds 190 years.”

Methodology: How Did Scientists Tell These Storms Apart?
To reach these conclusions, the researchers carefully combed through centuries of historical records, notes, and drawings from past astronomers. They compiled as many measurements as possible of the size and position of both the Permanent Spot and the Great Red Spot.

For the earliest records of the Permanent Spot between 1665 and 1713, the researchers analyzed drawings from Giovanni Cassini and others. Precise measurements were difficult due to the limitations of early telescopes, but they were able to approximate the spot’s dimensions. They then compared these to more detailed records of the Great Red Spot starting in the 1800s, including drawings, photographs starting in 1879, and modern digital images.

For each observation, the researchers carefully measured the east-west length and north-south width of the spots. When photographs weren’t available, they analyzed astronomers’ drawings, accounting for potential inaccuracies. They also tracked the longitude of the spots over time to calculate their drift rate and velocity. By compiling data points across 350+ years, they could chart long-term trends in the spots’ size and motion.

Source: https://studyfinds.org/history-jupiter-great-red-spot/

The shocking facts about Jupiter — A giant planet that has no surface

A photo of Jupiter taken by NASA’s Juno spacecraft in September 2023. NASA/JPL-Caltech/SwRI/MSSS, image processing by Tanya Oleksuik

The planet Jupiter has no solid ground – no surface, like the grass or dirt you tread here on Earth. There’s nothing to walk on, and no place to land a spaceship.

But how can that be? If Jupiter doesn’t have a surface, what does it have? How can it hold together?

Even as a professor of physics who studies all kinds of unusual phenomena, I realize the concept of a world without a surface is difficult to fathom. Yet much about Jupiter remains a mystery, even as NASA’s robotic probe Juno begins its ninth year orbiting this strange planet.

First, some facts
Jupiter, the fifth planet from the Sun, is between Mars and Saturn. It’s the largest planet in the solar system, big enough for more than 1,000 Earths to fit inside, with room to spare.

While the four inner planets of the solar system – Mercury, Venus, Earth and Mars – are all made of solid, rocky material, Jupiter is a gas giant with a composition similar to the Sun; it’s a roiling, stormy, wildly turbulent ball of gas. Some places on Jupiter have winds of more than 400 mph (about 640 kilometers per hour), about three times faster than a Category 5 hurricane on Earth.

Searching for solid ground
Start from the top of Earth’s atmosphere, go down about 60 miles (roughly 100 kilometers), and the air pressure continuously increases. Ultimately you hit Earth’s surface, either land or water.

Compare that with Jupiter: Start near the top of its mostly hydrogen and helium atmosphere, and like on Earth, the pressure increases the deeper you go. But on Jupiter, the pressure is immense.

As the layers of gas above you push down more and more, it’s like being at the bottom of the ocean – but instead of water, you’re surrounded by gas. The pressure becomes so intense that the human body would implode; you would be squashed.

Go down 1,000 miles (1,600 kilometers), and the hot, dense gas begins to behave strangely. Eventually, the gas turns into a form of liquid hydrogen, creating what can be thought of as the largest ocean in the solar system, albeit an ocean without water.

Go down another 20,000 miles (about 32,000 kilometers), and the hydrogen becomes more like flowing liquid metal, a material so exotic that only recently, and with great difficulty, have scientists reproduced it in the laboratory. The atoms in this liquid metallic hydrogen are squeezed so tightly that its electrons are free to roam.

Keep in mind that these layer transitions are gradual, not abrupt; the transition from normal hydrogen gas to liquid hydrogen and then to metallic hydrogen happens slowly and smoothly. At no point is there a sharp boundary, solid material, or surface.

Source: https://studyfinds.org/jupiter-has-no-surface/

Transplanting brain cells could be the revolutionary treatment to cure neurological disorders

The loss and degeneration of astrocytes are present in many neurodegenerative conditions. (S. Chierzi), CC BY

Astrocytes — named for their star-like shape — are a type of brain cell as abundant as neurons in the central nervous system, but little is known about their role in brain health and disease.

Many neurological diseases are caused by or result in the loss of cells in the central nervous system. Some diseases are a result of the loss of specific cells, such as the loss of motor neurons in amyotrophic lateral sclerosis (ALS), the loss of dopaminergic neurons in Parkinson’s disease, and the loss of GABAergic neurons in Huntington’s disease.

For other neurodegenerative conditions, like Alzheimer’s disease, a key hallmark is the general loss of cells in brain regions responsible for memory formation.

Although many brain diseases are marked by the loss of specific cells, a common link among these diseases is the loss of astrocytes. Interestingly, in some animal studies involving cases such as ALS, introducing disease-causing mutations selectively in astrocytes alone produces ALS symptoms and disease progression.

Transplantation therapy
Emerging evidence indicates that astrocytes take part in major functions of the brain, including homeostasis and neural network modulation that are essential to everyday cognition. A functioning brain requires healthy astrocytes, and finding strategies to heal or replace damaged astrocytes could help in the treatment of neurological diseases.

Cell replacement therapy involves transplanting functional cells into patients. In recent years there have been exciting developments in this area with respect to astrocyte transplantation in animal disease models, with one approach even moving to an early clinical trial in ALS patients. While there have been some promising outcomes, treatment success varies from one study to the next.

Our recent study, published in The Journal of Neuroscience, examines how transplanted astrocytes integrate into the recipient central nervous system. We studied the types of transplanted astrocytes, the timing of treatment and the routes of transplantation.

Preparing astrocytes
First, we prepared astrocyte cultures in petri dishes by extracting immature astrocytes from the cerebral cortex of newborn mice and expanding the cell population. To track the development of the transplanted astrocytes following their delivery to recipient mice, we used astrocytes from genetically modified mice in which astrocytes glow red, and transplanted them into the brains of mice whose astrocytes glow green.

We found that the transplanted astrocytes could survive for up to one year after transplantation, developing normally and integrating into the recipient brain just like the native astrocytes, with just minor differences.

Astrocytes depend on their capability to sense signals and exchange materials within the brain environment through molecules such as receptors and ion channels located on their cell surface. Transplanted astrocytes displayed comparable numbers of such receptors and channels and possessed similar sizes and complexity when compared to native astrocytes.

Transplanted astrocytes do, however, appear to take some time to catch up to and fully match native astrocytes in the recipient mice in terms of the production of these receptors and ion channels.

Source: https://studyfinds.org/transplanting-brain-cells/

Your body forms memories too — Just like your brain!

concept of a human brain full of memories (© Studio_East – stock.adobe.com)

In a remarkable discovery that challenges what scientists have long believed, a new study finds that the ability to learn and form memories is not exclusive to the brain, but is in fact a fundamental property shared by cells throughout the human body.

The study, led by Nikolay V. Kukushkin of New York University and published in the prestigious journal Nature Communications, reveals that non-brain cells, including those from nerve and kidney tissues, can detect patterns in their environment and respond by activating a “memory gene” – the same gene that brain cells use to restructure their connections and form memories.

“Learning and memory are generally associated with brains and brain cells alone, but our study shows that other cells in the body can learn and form memories, too,” explains Kukushkin, a clinical associate professor at NYU, in a media release.

To make this unexpected discovery, the researchers exposed two types of non-brain human cells to different patterns of chemical signals, mimicking the way brain cells respond to neurotransmitters during the learning process. By engineering the cells to produce a glowing protein when the memory gene was activated, the team was able to monitor the cells’ learning and memory capabilities.

The striking results reveal that the non-brain cells were able to distinguish between continuous and spaced-out patterns of the chemical signals, just as neurons in the brain can recognize the difference between cramming information and learning through repeated exposure over time.

“This reflects the massed-spaced effect in action,” says Kukushkin, referring to the well-established neurological principle that we retain information better when it is studied in spaced intervals rather than all at once.

Specifically, the researchers found that when the chemical pulses were delivered in spaced-out intervals, the non-brain cells turned on the memory gene more strongly and for a longer duration than when the same treatment was delivered continuously.

“It shows that the ability to learn from spaced repetition isn’t unique to brain cells, but, in fact, might be a fundamental property of all cells,” Kukushkin observes.

This discovery not only challenges our understanding of memory, but also opens up new avenues for enhancing learning and treating memory-related disorders. Kukushkin suggests that in the future, we may need to consider what other cells in the body “remember” in order to maintain healthy function.

Source: https://studyfinds.org/your-body-forms-memories-too-just-like-your-brain/

Just 5 extra minutes of exercise may save you from high blood pressure

(© M. Business – stock.adobe.com)

New research shows that a small amount of daily exercise can dramatically improve your heart health.

SYDNEY — For most of us, finding time to exercise is a constant struggle. Between work, family, and the countless other demands on our schedules, it’s all too easy to let physical activity fall by the wayside. But what if I told you that just five minutes of exercise per day could make a significant difference in lowering your blood pressure?

That’s the remarkable finding from a new study published in the journal Circulation. Researchers from the University of Sydney and University College London analyzed data from over 14,000 volunteers across five countries. What they discovered is that simple activities like stair-climbing and brisk walking can have a big impact on cardiovascular health.

“High blood pressure is one of the biggest health issues globally, but unlike some major causes of cardiovascular mortality there may be relatively accessible ways to tackle the problem in addition to medication,” explains Professor Emmanuel Stamatakis, Director of the ProPASS Consortium, which conducted the study, in a media release.

The key is replacing sedentary behavior – things like sitting, lounging, or inactivity – with just 20 to 27 minutes of exercise per day. This could include jogging, cycling, or even power-walking up hills. The researchers estimate that this small amount of movement could reduce your risk of cardiovascular disease by up to 28%.

“Our findings suggest that, for most people, exercise is key to reducing blood pressure, rather than less strenuous forms of movement such as walking,” says first author Dr. Jo Blodgett from UCL. “The good news is that whatever your physical ability, it doesn’t take long to have a positive effect on blood pressure.”

Even if you’re not a fan of traditional workouts, the researchers say incidental exercise like taking the stairs or walking briskly to the store can make a big difference.

“What’s unique about our exercise variable is that it includes all exercise-like activities, from running for a bus or a short cycling errand, many of which can be integrated into daily routines,” Blodgett notes.

So, the next time you’re tempted to take the elevator or drive to the corner store, consider squeezing in a quick burst of activity instead. Your heart will thank you.

Source: https://studyfinds.org/5-minutes-exercise-blood-pressure/

7 Best Countries To Leave America For: Consensus List Ranked By Reviewers

A woman in Porto, Portugal (margouillat photo/Shutterstock)

Looking to leave the United States of America? With costs of living rising and remote work now a reality, many Americans dream of living abroad. The allure? A better work-life balance, cheaper housing, and a taste of adventure. But before making your dream a reality, it’s essential to research and consider important factors like healthcare, safety, and the country’s economy. Some countries roll out the welcome mat for expats, while others present challenges like cultural adjustments and language barriers. Ready to pack your bags? We have curated a list of the best countries for Americans to move to if you are seeking a fresh start abroad based on expert insights across nine websites. A few places on the list may surprise you! If we’ve missed a country that tops your list, let us know in the comments below.

Top 7 Countries for Americans to Move to

1. Portugal

Portugal, highlighted by Travel + Leisure as a top expat destination, offers a low cost of living and a business-friendly environment. While Lisbon is a favorite, Porto boasts vibrant culture and scenic views. Explore Braga’s Baroque architecture or the sunny Algarve, and take advantage of Portugal’s digital nomad visas for a welcoming new home.

In addition to its affordability, Portugal’s warm climate and high safety ranking are a few more reasons why many Americans love living there. English is widely spoken, and international schools offer American curriculums, making it easy for families with children to settle in. Greenback Tax Services cites these factors as key contributors to Portugal’s growing popularity.

Additionally, Portugal boasts top-notch higher education, with 14 universities ranked worldwide and degrees recognized globally. According to Immigrant Invest, expats also have the opportunity to apply for citizenship after just five years.

2. Spain

Spain, which shares a border with Portugal, ranks second on our list since it’s a bit more expensive, though housing and groceries are still affordable. Americans are drawn to its “relaxed lifestyle, delectable cuisine, and vibrant culture,” says Visitors Coverage. Jobs are also abundant in many different industries.

The Spanish government rolls out the red carpet for expats with its Golden Visa program, making it easy to invest in the country and eventually seek permanent residency or even citizenship. According to Greenback Tax Services, this program is a major draw for those looking to make Spain their new home, offering a streamlined path to living in one of Europe’s most vibrant countries.

One downside to living in Spain is the language barrier, as only about 12 percent of the population speaks English. However, the country makes up for it with top-notch education and healthcare options, both public and private. Additionally, Immigrant Invest says Spain offers a digital nomad visa valid for one year, making it attractive to remote workers from around the globe.

3. Canada

Our neighbor to the north is also a haven for expats who like cooler weather. Canada is known for its high quality of life, safety, and stable political landscape. Key benefits include impressive expat salaries averaging $111,000 and a world-famous universal healthcare system. However, William Russell notes the high cost of living in cities like Toronto and Vancouver, and some provinces have restrictions on private healthcare.

Another plus for Canada? It’s easy to get to from the U.S. If you want a more affordable city, Travel + Leisure recommends Calgary with its trendy neighborhoods, or Montreal and Quebec City which offer a European flair.

If you move to Canada, you won’t feel like an outsider. According to Visitors Coverage, Canada welcomes immigrants and makes expats feel accepted and valued thanks to its strong sense of community.

4. Costa Rica

This Central American gem, nestled between the Pacific and Caribbean coasts, charms visitors beyond belief. Costa Rica boasts stunning volcanoes, cloud forests, and diverse wildlife, including sloths and toucans. Embracing the “Pura Vida” (Pure Life) lifestyle, Travel + Leisure says the country offers a straightforward residency program, affordable healthcare, and a stable democracy.

According to Worldpackers, this country offers affordable rentals as well. It also has a low crime rate. In addition, you’ll find plenty of opportunities to live and work whether you’re interested in permaculture projects, NGOs, eco-lodges, or retreats.

If you love warm weather, Costa Rica is ideal due to its proximity to the equator. Whether you’re drawn to the ocean, the mountains, or the rich culinary scene, the country has it all. Beyond the States says you’ll never tire of the national parks, offering endless outdoor adventures.

5. South Korea

South Korea might be a surprise on our list, but it has plenty to offer and many networking opportunities. Seoul boasts amazing restaurants, shopping, and entertainment. But if you’re a beach lover, you’ll want to check out Busan. To move there, you’ll need a work visa, according to Travel + Leisure.

Despite the higher cost of housing, South Korea offers a low cost of living. While the country boasts some of the fastest internet speeds in the world, you may want to invest in a VPN due to online censorship. Overall, Greenback Tax Services believes South Korea is an excellent choice for younger, tech-savvy expats eager to enjoy the good life abroad.

With many people in South Korea speaking English, getting around is easy. The incredible food options, from hot buns to Korean BBQ, ensure you won’t go hungry. Plus, Beyond the States guarantees you’ll enjoy all the modern conveniences of home and have the chance to learn a new language.

Source: https://studyfinds.org/best-countries-for-americans-to-live-in/

Millennials spend 60+ hours a week sitting down — significantly speeding up their biological clocks

Researchers say even the fittest of young adults can’t escape the dangers of desk jobs and couch potato life. (Photo by Studio Republic on Unsplash)

The research focused on participants between the ages of 28 and 49, with an average age of 33. On average, these individuals reported sitting almost nine hours daily, with some participants sitting up to 16 hours. They averaged between 80 and 160 minutes of moderate physical activity weekly and less than 135 minutes of vigorous exercise – numbers that researchers believe are actually better than national averages due to Colorado’s active lifestyle.

The study measured two key indicators of cardiovascular and metabolic health: Body Mass Index (BMI) and the ratio of total cholesterol to high-density lipoprotein cholesterol (TC/HDL), also known as the Cardiac Risk Ratio. These measurements provide important insights into heart health and metabolic function, with higher numbers generally indicating increased health risks.
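
The second of those markers is simply a quotient of two standard blood-panel numbers. Below is a minimal sketch of the calculation, using hypothetical values for illustration rather than figures from the study:

```python
# Cardiac Risk Ratio: total cholesterol divided by HDL cholesterol.
# The example values below are hypothetical, not data from the study.
def cardiac_risk_ratio(total_cholesterol_mg_dl: float, hdl_mg_dl: float) -> float:
    """Return the TC/HDL ratio; higher values generally indicate greater risk."""
    return total_cholesterol_mg_dl / hdl_mg_dl

print(cardiac_risk_ratio(total_cholesterol_mg_dl=200, hdl_mg_dl=50))  # 4.0
```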

The findings were striking: those who sat for longer periods showed higher TC/HDL ratios and BMIs, even when meeting the minimum recommended physical activity guidelines of about 20 minutes per day of moderate exercise. Simply put, the more someone sat, the “older” their body appeared in terms of these health markers.

However, the research did identify one potential buffer against the effects of prolonged sitting: vigorous exercise. Participants who engaged in 30 minutes of vigorous daily activity – such as running or intense cycling – showed health markers similar to those of people five to 10 years younger who sat the same amount but didn’t exercise vigorously. Yet even this level of intense activity couldn’t completely neutralize the negative impacts of extended sitting.

The study’s use of twin participants proved particularly valuable in understanding these relationships. Because identical twins share 100% of their genes, comparing twins with different activity levels allowed researchers to isolate the specific effects of lifestyle choices on health outcomes. When examining twins with different sitting and exercise habits, the researchers discovered that replacing sitting time with exercise appeared more beneficial for cholesterol levels than simply adding exercise to a day full of sitting.

“Our research suggests that sitting less throughout the day, getting more vigorous exercise, or a combination of both may be necessary to reduce the risk of premature aging in early adulthood,” says senior author Chandra Reynolds, a professor in the Department of Psychology and Neuroscience and the Institute for Behavioral Genetics.

What exactly counts as “vigorous” exercise? Activities that really get your heart pumping – think running, fast cycling, high-intensity interval training, or vigorous swimming. These activities typically require more effort than moderate exercises like brisk walking or casual cycling.

The findings serve as a crucial wake-up call for young adults and suggest that current physical activity guidelines may need revision to account for our increasingly sedentary lifestyles. As Reynolds advises, “This is the time to build habits that will benefit health over the long term.”

Source: https://studyfinds.org/millennials-sitting-biological-clock/

Doctors find key to turning controversial COVID drug into a promising cancer treatment

(© Sherry Young – stock.adobe.com)

Hydroxychloroquine went from a relatively unknown malaria drug just a few years ago to a highly controversial treatment for COVID-19 during the pandemic. Now, doctors are uncovering surprising ways in which this repurposed medication may help treat cancer.

Although cancer cells can become resistant to hydroxychloroquine, the new findings open the door for more effective combination treatments. Simply put, researchers have discovered how to team this versatile drug with other treatments that cover for the weaknesses hydroxychloroquine may have.

As scientists race to find new weapons in the war on cancer, some are taking a fresh look at old drugs that may have untapped cancer-fighting potential. One such drug is hydroxychloroquine, which has shown promise in attacking cancer cells by disrupting their ability to recycle resources.

Despite hydroxychloroquine’s effectiveness at cutting off this vital lifeline for cancer, clinical trials have been disappointing, with cancer cells often finding ways to overcome the drug’s effects. Now, researchers at the Medical University of South Carolina’s Hollings Cancer Center believe they’ve uncovered the key to this resistance – and it isn’t what they expected.

“We thought the main interaction of hydroxychloroquine with cancer was this process of autophagy, but it appears instead that processes unrelated to autophagy may be the most important for cancer cells to survive this therapy,” explains Joe Delaney, Ph.D., who led the study published in the journal Cell Cycle.

To be clear, autophagy is the cellular recycling process. This surprising finding opens up new possibilities for pairing hydroxychloroquine with other drugs that target these newly identified resistance mechanisms, potentially making the treatment more effective and longer-lasting.

A Two-Pronged Approach to Uncover Resistance

To understand how cancer cells were evading hydroxychloroquine, Delaney and his team took a comprehensive approach, using two different whole-genome screening methods to observe how cells adapted when continuously exposed to the drug.

“Targeting single proteins can be extremely effective to treat cancer,” Delaney notes. “However, the more specific the treatment becomes, the more likely resistance is to occur.”

“By using two completely different methods, we were able to home in on the true biological players in the system,” the researcher continues.

Rather than simply looking at which genes were turned on or off, the researchers were able to see the cascading changes happening across entire cellular pathways. This revealed that cancer cells weren’t modifying their recycling processes at all – instead, they were altering their cell division, metabolism, and export mechanisms to survive the hydroxychloroquine onslaught.

Paving the Way for Combination Treatments
These findings set the stage for developing new combination treatments that could boost the power of hydroxychloroquine. By pairing it with drugs targeting the cell division, metabolism, or export pathways that cancer cells rely on, researchers hope to prevent resistance from developing.

“Our study has identified the potential mechanisms that we will need to target with a second drug to prevent resistance against hydroxychloroquine,” says Delaney.

Source: https://studyfinds.org/hydroxychloroquine-cancer-treatment/

Trouble sleeping could cause your brain to age faster than normal

(© svetazi – stock.adobe.com)

Trouble sleeping could be a red flag for poor brain health in middle age. Researchers with the American Academy of Neurology found issues with falling asleep and staying asleep are both associated with signs of brain aging.

“Sleep problems have been linked in previous research to poor thinking and memory skills later in life, putting people at higher risk for dementia,” says Dr. Clémence Cavaillès, a researcher at the University of California San Francisco and co-author of the study, in a media release. “Our study, which used brain scans to determine participants’ brain age, suggests that poor sleep is linked to nearly three years of additional brain aging as early as middle age.”

People in the middle group, who reported two to three sleep issues, showed signs of accelerated brain aging: their brains were an average of 1.6 years older than those of people in the low group, who reported no more than one issue. People who reported more than three sleep issues had brains an average of 2.6 years older.

Trouble falling and staying asleep, poor sleep quality, and early morning awakenings were all associated with accelerated brain aging, especially when people consistently experienced these sleep issues over the five years between questionnaires.

The study, published in Neurology, followed 589 people with an average age of 40. Participants completed sleep questionnaires at the start of the study and again five years later, and brain scans were taken 15 years after the study began.

The sleep survey asked questions such as, “Do you usually have trouble falling asleep?” “Do you usually wake up several times at night?” and “Do you usually wake up far too early?” People also filled in their sleep behavior, including the six indicators of poor sleep: short sleep duration, low sleep quality, trouble falling asleep, issues staying asleep, early morning awakening, and daytime sleepiness.

People were divided into three groups. The low group comprised people with no more than one poor sleep characteristic. People in the middle group had two to three poor sleep characteristics, while the high group had more than three. Seventy percent of participants were in the low group, 22% were in the middle group, and 8% were in the high group. Fifteen years after the study started, the researchers used the participants’ brain scans to measure each person’s brain age, comparing the level of brain shrinkage expected at that age with what they actually saw on the scans.
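
As described, group assignment reduces to counting indicators. Here is a hypothetical sketch of that rule; the field names and example data are invented, and this is not the researchers’ code.

```python
# Hypothetical sketch of the grouping rule described above: count a
# participant's poor-sleep indicators (out of the six listed) and assign
# them to the low, middle, or high group.

INDICATORS = [
    "short_sleep_duration", "low_sleep_quality", "trouble_falling_asleep",
    "trouble_staying_asleep", "early_morning_awakening", "daytime_sleepiness",
]

def sleep_group(responses: dict) -> str:
    """Return 'low' (0-1 indicators), 'middle' (2-3), or 'high' (>3)."""
    count = sum(bool(responses.get(name)) for name in INDICATORS)
    if count <= 1:
        return "low"     # 70% of participants
    if count <= 3:
        return "middle"  # 22% of participants
    return "high"        # 8% of participants

# Example: three reported issues put a participant in the middle group.
example = {"low_sleep_quality": True, "trouble_falling_asleep": True,
           "daytime_sleepiness": True}
print(sleep_group(example))  # -> "middle"
```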

“Our findings highlight the importance of addressing sleep problems earlier in life to preserve brain health, including maintaining a consistent sleep schedule, exercising, avoiding caffeine and alcohol before going to bed and using relaxation techniques,” says Dr. Kristine Yaffe, a researcher at the University of California San Francisco and a member of the American Academy of Neurology. “Future research should focus on finding new ways to improve sleep quality and investigating the long-term impact of sleep on brain health in younger people.”

Source: https://studyfinds.org/trouble-sleeping-brain-aging/

Novel way to beat dengue: Deaf mosquitoes stop having sex

Getty Images

Scientists believe they have found a quirky way to fight mosquito-spread diseases such as dengue, yellow fever and Zika – by turning male insects deaf so they struggle to mate and breed.

Mosquitoes have sex while flying in mid-air, and the males rely on hearing to chase down a female, homing in on the sound of her wingbeats.

In their experiment, the researchers altered a genetic pathway that male mosquitoes use for this hearing. The result: the males made no physical contact with females, even after three days in the same cage.

Female mosquitoes are the ones that spread diseases to people, so preventing them from having offspring would help reduce overall numbers.

The team from the University of California, Santa Barbara studied Aedes aegypti mosquitoes, which spread viruses to around 400 million people a year.

They closely observed the insects’ aerial mating habits – which can last from a few seconds to just under a minute – and then figured out how to disrupt them using genetics.

They targeted a protein called trpVa that appears to be essential for hearing.

In the mutated mosquitoes, neurons normally involved in detecting sound showed no response to the flight tones or wingbeats of potential mates.

The alluring noise fell on deaf ears.

In contrast, wild (non-mutant) males were quick to copulate, multiple times, and fertilised nearly all the females in their cage.

The researchers, who have published their work in the journal PNAS, said the effect of the gene knock-out was “absolute”, as mating by deaf males was entirely eliminated.

Dr Joerg Albert, an expert on mosquito mating from the University of Oldenburg in Germany, said attacking the sense of sound was a promising route for mosquito control, but that it would need to be carefully studied and managed.

Source: https://www.bbc.com/news/articles/c207gvrn65do

Global study of 600,000 people finds loneliness sends risk of dementia skyrocketing by 31%

(© De Visu – stock.adobe.com)

The largest study of its kind has discovered a troubling link between loneliness and dementia. Researchers at Florida State University found that people who experience feelings of loneliness are over 30% more likely to develop dementia than those who don’t.

The research, published in Nature Mental Health, analyzed data from more than 600,000 individuals worldwide, combining results from 21 long-term studies to paint a comprehensive picture of how social isolation affects our cognitive health.

“These results are not surprising, given the mounting evidence that links loneliness to poor health,” says Assistant Professor Martina Luchetti, who led the study, in a university release.

Her team’s work takes on special significance in the aftermath of the COVID-19 pandemic, which forced both the World Health Organization and U.S. Surgeon General to declare loneliness a public health crisis.

People who experience feelings of loneliness are over 30% more likely to develop dementia than those who don’t. (© soupstock – stock.adobe.com)

The study’s findings suggest that loneliness isn’t just about feeling sad or isolated – it could have serious implications for brain health. The researchers found that feeling dissatisfied with social relationships affects cognitive function regardless of a person’s age or gender. This impact extends beyond general cognitive decline to include specific forms of dementia, including Alzheimer’s disease.

“Dementia is a spectrum, with neuropathological changes that start decades before clinical onset,” Luchetti explains. “It is important to continue studying the link of loneliness with different cognitive outcomes or symptoms across this spectrum. Loneliness – the dissatisfaction with social relationships – may impact how you are functioning cognitively, and in daily life.”

The research team conducted a meta-analysis, combining and analyzing data from multiple studies to identify patterns and trends. This approach allowed them to draw conclusions from a massive dataset of over half a million participants, though the researchers note that most data came from wealthy Western nations.
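
To make the pooling step concrete, here is a toy sketch of inverse-variance weighting, the textbook way to combine effect estimates from multiple cohorts. The numbers are invented and this is not the study’s actual statistical model; it only illustrates how a pooled hazard ratio such as 1.31 (read as “31% more likely”) can emerge from many smaller studies.

```python
import math

# Toy fixed-effect meta-analysis: pool hazard ratios (HRs) from several
# cohorts by inverse-variance weighting on the log scale. All numbers
# below are hypothetical, for illustration only.
studies = [
    (1.25, 0.10),  # (hazard ratio, standard error of log HR)
    (1.40, 0.15),
    (1.30, 0.08),
]

weights = [1 / se ** 2 for _, se in studies]   # precision weights
log_hrs = [math.log(hr) for hr, _ in studies]
pooled_log_hr = sum(w * lhr for w, lhr in zip(weights, log_hrs)) / sum(weights)
pooled_hr = math.exp(pooled_log_hr)

# A pooled HR of 1.31 would correspond to the "31% more likely" headline.
print(f"pooled hazard ratio: {pooled_hr:.2f}")  # ~1.30 for these toy inputs
```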

“We know there are rising cases of dementia in low-income countries,” Luchetti says. “Future studies need to gather more data from those countries to evaluate what are the effects of loneliness in different national and cultural contexts.”

The finding of a 31% increase in dementia risk could help shape future strategies for preventing severe cognitive decline.

“Now that there is solid evidence of an association, it is critical to identify the sources of loneliness to both prevent or manage loneliness and support the well-being and cognitive health of aging adults,” Luchetti concludes.

Source: https://studyfinds.org/loneliness-risk-of-dementia/

Keto diet could help women restart stopped periods, study explains

(Credit: New Africa/Shutterstock)

Could the right diet actually kickstart reproductive health? For women struggling with irregular or absent menstrual cycles, the popular ketogenic diet might offer an unexpected solution, according to researchers from The Ohio State University.

In a striking discovery published in PLoS ONE, researchers found that women who followed a keto diet or took ketone supplements experienced significant changes in their menstrual cycles – including the restart of periods that had been absent for over a year. Even more remarkably, one 33-year-old participant experienced her first-ever period after just five days on the diet.

“There were six women who hadn’t had a period in over a year – who felt like their typical cycles were over. And their periods actually restarted on the diet,” says Madison Kackley, the study’s lead author and research scientist at Ohio State, in a media release.

What is a keto diet?
The ketogenic diet, often called “keto” for short, is a low-carbohydrate, high-fat eating plan that changes how the body fuels itself. Instead of using glucose from carbohydrates as its primary energy source, the body switches to burning fat and producing ketones – a state known as “nutritional ketosis.”

The small but significant study followed 19 healthy but overweight women with an average age of 34. Researchers divided them into three groups: some followed a ketogenic diet alone, others combined the diet with ketone supplements, and a control group followed a low-fat diet. To ensure accuracy, the research team provided all meals throughout the six-week study period.

The results were remarkable: 11 out of 13 women who achieved nutritional ketosis reported changes in their menstrual cycles, either in frequency or intensity. This effect appeared to be independent of weight loss, as women in all groups lost similar amounts of body fat. Notably, the only participants who didn’t experience menstrual changes were taking oral contraceptives.

While the study’s sample size was modest, its findings could go on to help countless women worldwide. An estimated 5-7% of American women of reproductive age go without periods for three months or longer each year, a condition that can be both distressing and medically concerning.

“This research is incredibly important because there are so many unanswered questions for women,” Kackley says. “We’re trying to change things for women and give them some control – something we historically haven’t had over our reproductive status.”

The researchers theorize that ketones might play a crucial role in regulating women’s hormonal health, going beyond their traditional role as an energy source. This could open new doors for treating various women’s health conditions, including polycystic ovary syndrome, perimenopause, and postpartum depression – all areas that Kackley’s lab is currently investigating.

Looking ahead, Kackley’s team is working to understand exactly how ketones influence menstrual cycles. They’re conducting comprehensive monitoring of women’s bodies throughout their cycles, tracking everything from muscle strength and fat composition to hormone levels and body temperature – data that, surprisingly, has never been collected in this way before.

While more research is necessary to fully understand the connection between ketones and menstrual health, this initial study suggests that the ketogenic diet might offer more benefits than just weight loss. For women struggling with irregular or absent periods, it could provide a potential dietary approach to regaining control over their reproductive health.

However, as with any significant dietary change, women should consult with their healthcare providers before starting a ketogenic diet, particularly if they have underlying health conditions or are taking medications.

Source: https://studyfinds.org/keto-diet-women-periods/

Mahjong: The ultimate brain booster you didn’t know you needed

A group of older people playing Mahjong

Looking for a game that exercises your mind, feeds your social life, and brings a unique twist to game night? American Mahjong could be the perfect fit! Combining strategy, pattern recognition, and an element of luck, Mahjong has a long history and a fresh, modern appeal.

Originally rooted in Chinese culture, Mahjong became a cherished pastime in Jewish-American communities in the 1920s and is now gaining popularity across diverse backgrounds and generations.

The Mental Benefits of Mahjong

An American Mahjong game board (Photo by Amy Chodroff)

Unlike many games that rely on pure chance, Mahjong requires strategic thinking, memory, and adaptability. A recent scoping review published in The Journal of Prevention of Alzheimer’s Disease in 2024 confirms that playing Mahjong offers substantial cognitive and social benefits for older adults. Analyzing data from multiple studies, the review found that regular Mahjong players demonstrate improved memory, cognitive flexibility, and slower rates of cognitive decline. The findings suggest that Mahjong, especially when played consistently, could be protective against cognitive impairments like dementia. Additionally, the social aspect of the game offers further mental health benefits by reducing feelings of loneliness.

Mahjong has also been shown to reduce symptoms of depression. A recent study by researchers at the University of Georgia linked regular Mahjong play to reduced depression rates among middle-aged and older Chinese adults. Based on these findings, playing Mahjong is a great way to keep your brain engaged, strong, and happy.

A Social Game with Heart
Mahjong’s value extends beyond individual mental gains; it also builds community. Unlike solo puzzle games or brain teasers, Mahjong is best enjoyed in a group, providing the perfect setting for laughter, conversation, and bonding. This unique social dynamic allows players to reconnect with friends or make new ones, deepening relationships that play a crucial role in overall well-being. Many regular Mahjong groups evolve into close-knit communities, offering an ongoing support network that extends beyond game time.

Melissa Serpa, an empty nester from The Colony, Texas, is a prime example of the social power of Mahjong. For her, it’s more than just a game; it’s a way to stay connected with friends and enjoy quality time in a busy world.

“I love playing to spend time with friends, catch up, and just have a good time together,” says Serpa.

Melissa’s journey with Mahjong began two years ago with a challenge. Learning the game was no small feat, but thanks to a patient neighbor and some online tutorials, she’s now an avid player who looks forward to each weekly game.

American vs. Traditional

While Mahjong originated in China, the American version has evolved with unique twists that make it especially engaging. Traditional Chinese Mahjong uses 144 tiles and is usually played without the jokers and specific winning combinations characteristic of the American version.

The American game includes 152 tiles, adding joker tiles and an annual card from the National Mah Jongg League (NMJL) that sets winning hand patterns for the year. This annual update brings a fresh challenge each season, allowing players to learn and adapt to new strategies and combinations. The variation keeps American Mahjong dynamic and appealing, encouraging both seasoned players and beginners to hone their skills with each new card.

Getting Started with American Mahjong

If you’re new to Mahjong, don’t worry – it’s easier than it sounds!

The NMJL card might look complex initially, but beginner guides and online tutorials can quickly make it second nature. The NMJL website is a great place to start, offering the annual card each March, available in regular or large print for convenience.

Melissa’s advice for beginners? “Go for the large print card; it’s easier on the eyes, especially when you’re learning!” she says with a laugh.

Many long-time players agree this tip can make gameplay more enjoyable, especially during those intense, late-night rounds.

To truly enjoy Mahjong, you don’t need to rush into advanced strategy right away. Start by learning basic tile combinations and patterns, and take time to study the new card each year. You’ll find yourself picking up the nuances of the game with practice, gaining confidence as you play.

Source: https://studyfinds.org/mahjong-brain-booster/

Lifestyle counts: Experts say about 40% of cancer cases are preventable

American Cancer Society study finds about half of cancer deaths are preventable, with smoking the leading factor and excess weight the second; ‘More than 80% of lung cancer cases are related to smoking’

A new study by the American Cancer Society found that about 40% of cancer cases among people age 30 and over – and almost half of the deaths – are attributed to preventable risk factors. That is, these cases can be prevented through lifestyle changes.

According to the study, smoking remains the biggest risk factor, causing almost one in five cancer cases and almost one-third of cancer deaths. The other factors behind the roughly 40% of cases and almost half of deaths among people over 30 include excess weight, alcohol consumption, physical inactivity, diet and infections. In other words, a substantial share of cancer cases can be prevented through lifestyle changes.

Source: https://www.ynetnews.com/health_science/article/hyuzhepeke

World Stroke Day 2024: Follow These Tips For Better Brain Health & Stroke Prevention

World Stroke Day 2024: We recognise this day by understanding key prevention factors that can reduce stroke risk.

World Stroke Day 2024: High blood pressure, or hypertension, is one of the leading causes of strokes

World Stroke Day is observed annually on October 29 to raise awareness about strokes, their causes, and the importance of prevention, treatment, and rehabilitation. It was established by the World Stroke Organisation to draw attention to the high prevalence and impact of strokes worldwide, as well as to educate people about risk factors and preventive measures. The day serves as a reminder of the importance of acting fast in the event of a stroke and emphasises the FAST (Face, Arms, Speech, Time) warning signs that can help identify a stroke and seek immediate help. By encouraging lifestyle changes and preventive practices, World Stroke Day aims to reduce the global stroke burden and enhance the quality of life for stroke survivors. Let’s recognise this day by understanding key prevention factors that can reduce stroke risk.

Tips for better brain health & to reduce stroke risk

1. Follow a heart-healthy diet

Eating foods that are rich in antioxidants, healthy fats, and fibre, like leafy greens, berries, nuts, and fatty fish, supports cardiovascular health and reduces stroke risk. These foods improve blood flow to the brain, nourish brain cells, and help lower cholesterol and blood pressure.

2. Stay physically active

Regular exercise, such as walking, cycling, or swimming, strengthens the heart and improves blood circulation, which reduces the risk of stroke. Exercise also stimulates brain plasticity, improves mood, and can help prevent cognitive decline.

3. Maintain a healthy weight

Carrying excess weight raises blood pressure, cholesterol and the risk of diabetes – all major stroke risk factors – so keeping your weight within a healthy range helps protect both the heart and the brain.

The road not taken: What do Americans regret most in life?

(Credit: karen roach/Shutterstock)

A new poll finds that Americans are more concerned about the road not taken in their lives. When it comes to regrets, people are more likely to dwell on things they didn’t do than the things they have done.

That’s according to a survey of 2,000 U.S. adults split evenly by generation, which found that only 11% of Americans don’t have any regrets. Between not speaking up (40%), not visiting family or friends enough (36%), and not pursuing their dreams (35%), those missed opportunities add up.

In their lifetime, Americans average three missed chances to take a once-in-a-lifetime trip, four lost opportunities to ask their crush out on a date, and six instances of not having the perfect comeback in an argument. On the other hand, the top actions Americans regret include making a purchase they later wished they hadn’t (49%), fighting with friends or family (43%), and making an unnecessary comment (36%).

Over the years, Americans also regret an average of five angry text messages and two break-ups. In fact, nearly one-third (32%) of baby boomers have a regret that spans three decades and still crosses their minds an average of three times per month. While millennials’ oldest regret is only about 11 years old, they fret about it on average almost once per week, more than any other generation.

Conducted by Talker Research on behalf of Mucinex, the survey also revealed that Americans are almost twice as likely to make bad decisions at night (41%) as in the morning (22%). Moreover, Americans are more likely to feel regret at night (43%).

Nighttime decisions such as not going to bed at a decent time (47%), eating too many snacks or too much food (36%), and arguing with a loved one (35%) are the most likely to negatively impact Americans the next morning. For Gen Zers, failing to do their nighttime routine (29%) or forgetting to turn on their alarm (22%) will almost always ensure morning distress.

These poor choices not only cause regret but also put Americans in a bad mood (39%), leave them unable to tackle the day (29%), or even inhibit them from fulfilling the day’s responsibilities (20%).

So, what factors are contributing to these bad decisions? According to the results, being tired (40%), sick or desperate for relief (20%), or enduring a long night out (15%) are the most likely culprits.

“We don’t make the best decisions when we’re sick or tired, especially at night,” says Albert So, Marketing Director, Upper Respiratory at Reckitt, in a statement. “And while no one is going to get it right every single time, it’s important to have products you can rely on to help you make better decisions, so you don’t wake up with regrets.”

For all the bad decisions made and opportunities missed, 48% of Americans still agree with the common saying, “Never regret anything because, at one moment, it was exactly what you wanted.” This may be because almost two-thirds (64%) believe that their decision-making has gotten better as they’ve gotten older.

Source: https://studyfinds.org/the-road-not-taken-what-do-americans-regret-the-most/

The 100 Best TV Episodes of All Time

Inside every great show is a 22- to 60-minute story that stays with you forever

Photo Illustration by Matthew Cooley. Images used in Illustration: Ursula Coyote/AMC; Russ Martin/FX; Amazon Studios; Disney Entertainment/Getty Images; Michael Yarish/CBS/Getty Images; Fox; AMC; Guy D’Alema/FX

The thing that has always distinguished TV storytelling from its big-screen counterpart is the existence of individual episodes. We consume our series — even the ones that we binge — in distinct chunks, and the medium is at its best when it embraces this. The joy of watching an ongoing series comes as much from the separate steps on the journey as it does from the destination, if not more. Few pop-culture experiences are more satisfying than when your favorite show knocks it out of the park with a single chapter, whether it’s an episode that wildly deviates from the series’ norm, or just an incredibly well-executed version of the familiar formula.

Still, that episodic nature makes TV fundamentally inconsistent. The greatest drama ever made, The Sopranos, was occasionally capable of duds like the Columbus Day episode. And even mediocre shows can churn out a single episode at the level of much stronger overall series.

For this Rolling Stone list of the 100 greatest episodes of all time, we looked at both the peak installments of classic series, as well as examples of lesser shows that managed to briefly punch way above their weight class. We have episodes from the Fifties all the way through this year. We stuck with narrative dramas and comedies only — so, no news, no reality TV, no sketch comedy, talk shows, etc. In a few cases, there are two-part episodes, but we mostly picked solo entries. And while it’s largely made up of American shows (as watched by our American staff), a handful of international entries made the final cut.

100 Fargo, “Bisquik” (Season 5, Episode 10)
Our list of classic episodes starts with its most recent entry, from a January 2024 installment of the great FX anthology drama inspired by the work of the Coen brothers. Fargo Season Five dealt with the growing sense of polarization in America, and the debts — both literal and figurative — that everyone feels they’re owed from everyone else. It all culminates in a long, surprising, utterly gorgeous scene where our firecracker of a heroine, Dot Lyon (Juno Temple), finds herself face-to-face with immortal sin-eater Ole Munch (Sam Spruell), who has come for a rematch of their clash in the season premiere. With her husband and daughter in the house with her, Dot declines to fight this terrifying man, and instead explains, patiently and with palpable kindness, that perhaps Ole Munch might prefer a world focused less on resentment and more on love. —Alan Sepinwall

99 The Cosby Show, “Theo’s Holiday” (Season 2, Episode 22)
There’s a temptation with these lists to immediately disqualify anything associated with true monsters like Bill Cosby. But his crimes shouldn’t erase from the history books the wonderful work of everyone else involved in “Theo’s Holiday,” in which the Huxtables get together for an elaborate role-playing exercise to teach Theo (Malcolm-Jamal Warner) a lesson about the economics of life in, as he puts it, “the real world.” All the actors throw themselves into these larger-than-life characters, like Clair (Phylicia Rashad) as a cheery restaurant owner as well as a fast-talking furniture saleslady, or little Rudy (Keshia Knight Pulliam) as a powerful businesswoman. The idea of the whole clan teaming up to both mock Theo and help him out is so intoxicating that even his best friend Cockroach (Carl Anthony Payne II) admits, “I wish they did this kind of stuff at my house!” —A.S.

98 South Park, “Scott Tenorman Must Die” (Season 5, Episode 4)

A show that features an anthropomorphized turd in a Christmas hat and at least one projectile vomit scene per episode, South Park has never been known as highbrow. Yet there are elements of “Scott Tenorman Must Die,” a Season Five episode focused on Cartman’s elaborate revenge plot against a high schooler who scammed him by selling his pubes, that are nothing less than virtuosic. There’s the plot itself, a retelling of Shakespeare’s Titus Andronicus, which culminates (spoiler alert, I guess) with the protagonist forcing a woman to unwittingly eat her own children. There’s the exquisite cameo appearance by Radiohead, the culmination of Scott Tenorman’s debasement. And there’s Cartman’s classic taunt, “Charade you are, Scott Tenorman,” a reference to an obscure track on Pink Floyd’s Animals. Co-creators Matt Stone and Trey Parker have often referred to “Scott Tenorman Must Die” as the apex of Cartman’s villainy, marking the character’s transition from obnoxious troll to next-level sociopath. But really, the episode marks another transition entirely: that of Stone and Parker from poop joke purveyors to dark-comedy masters. —EJ Dickson

97 You’re the Worst, “There Is Not Currently a Problem” (Season 2, Episode 7)
Here’s an odd but welcome trend: FX not only has an excellent track record with extremely niche half-hour comedies (some of which you’ll find higher on this list), but many of them manage to weave thoughtful, even dramatic, material about mental health issues into their usual humor. The hip-hop comedy Dave did it with a terrific episode where we learn that Lil Dicky’s hype man GaTa struggles with bipolar disorder. The final Reservation Dogs season revolved around a character who’d spent much of his life institutionalized. And You’re the Worst — a romantic comedy about two selfish, immature people who would be horrified to learn they were the main characters in a romantic comedy — found a new level with an episode revealing that Gretchen (Aya Cash) suffers from clinical depression. Much of “There Is Not Currently a Problem” is fairly comedic: a bottle episode where the gang is stuck together with Gretchen and Jimmy (Chris Geere) because a local marathon has caused a traffic jam in their neighborhood. But this forced closeness comes while Gretchen is trapped in her latest depressive episode, with no choice but to finally reveal her condition to Jimmy — and to admit that she’s less worried that he’ll reject her for it than that he’ll become the latest man convinced he can “fix” her. Cash conveys every bit of the pain and fear Gretchen is experiencing, in a way that enriches the laughter rather than undercutting it. —A.S.

96 In Treatment, “Alex: Week Eight” (Season 1, Episode 37)
Most episodes of this drama were presented as real-time therapy sessions between Dr. Paul Weston (Gabriel Byrne) and one of his patients, or Paul visiting his own shrink. Occasionally, though, outsiders found their way into Paul’s office, like Alex Prince, Sr. (Glynn Turman), the father of one of Paul’s patients, seeking answers as to why his son committed suicide. Alex Jr. had spent most of his sessions to that point painting his dad as such a monster, it should have been impossible for any actor to both live up to those stories and not seem like a cartoon. Turman, in one of the best dramatic performances you will ever see on television, somehow did it, channeling both the bogeyman and the grieving father, in a riveting two-hander with Byrne. —A.S.

95 Bob’s Burgers, “Tina-rannosaurus Wrecks” (Season 3, Episode 7)
Bob’s Burgers loves puns, but “Tina-rannosaurus Wrecks” is a groaner of a title even for them. No matter, because the episode so expertly combines many of the series’ hallmarks into one tight, funny, awkward package. Once again, a well-meaning parenting gesture by Bob (H. Jon Benjamin) goes awry, when he lets Tina (Dan Mintz) drive the family station wagon in a nearly empty parking lot, and she somehow crashes into the only other car there. Once again, the Belchers find themselves on the verge of financial calamity, when the other car turns out to belong to Bob’s ruthless rival, Jimmy Pesto (Jay Johnston). Once again, the family gets mixed up in the plans of a lunatic, when insurance adjuster Chase (Bob Odenkirk) forces them to aid him in an insurance fraud scheme in order to get out of the mess with Jimmy. And, once again, Bob’s lovable but terrible children somehow prove surprisingly useful, when Tina uses her brother’s Casio keyboard to get incriminating evidence that frees them from Chase’s clutches. All’s well that ends… not necessarily well, but at least not substantially worse than usual. —A.S.

94 Enlightened, “Consider Helen” (Season 1, Episode 9)

Today, it seems almost obligatory for cable and streaming shows to devote one or two episodes a season to presenting the POV of a minor character. When future White Lotus creator Mike White did it with his first HBO series, Enlightened, it was still relatively rare. And in this case, the shifts in perspective came as a welcome, even necessary, relief from all the time spent in the head of the show’s fascinating but maddening main character, Amy Jellicoe (Laura Dern), a toxically narcissistic former executive trying to rebuild her life after a nervous breakdown. With “Consider Helen,” White moved the focus to Amy’s mother Helen (played by Dern’s real-life mom, the great Diane Ladd), to present a day in her life, to show what a chore it is to have to deal with such a pathologically needy child, and to make clear that Enlightened itself understood exactly how its audience would respond to Amy. —A.S.

93 Maude, “Maude’s Dilemma” (Season 1, Episodes 9 & 10)
This two-parter, in which Maude (Bea Arthur) is shocked to discover that she’s pregnant again at 47, and has to decide whether she wants to get an abortion, was so ahead of its time that even the original Supreme Court ruling on Roe v. Wade was still two months away. Well after Maude decided to end her pregnancy, the rest of television shied away from the subject, often having pregnant characters suffer conveniently timed miscarriages before they could make up their minds and potentially alienate viewers and sponsors. But “Maude’s Dilemma,” with a teleplay by future Golden Girls creator Susan Harris, ran toward the thorny subject, and handled it with both humor and grace. —A.S.

92 Scrubs, “My Screw Up” (Season 3, Episode 14)
There are plenty of shows we call dramedies, even though they’re really just half-hour dramas, as well as lots of alleged comedies that aren’t particularly interested in making the audience laugh. The hospital show Scrubs, though, was remarkably comfortable at balancing silliness and sadness throughout its run, especially in “My Screw Up.” Brendan Fraser reprises his role as Ben, wisecracking brother-in-law to John C. McGinley’s bitterly sarcastic Dr. Cox. Ben’s leukemia appeared to be in remission when last we saw him, so there’s room for him to relentlessly tease J.D. (Zach Braff) about having made out with both of Ben’s sisters, as well as a lighthearted subplot where Turk (Donald Faison) tries to convince Carla (Judy Reyes) to take his name when they’re married, in exchange for having a mole she hates removed. But things also get plausibly serious, even before we get to the Sixth Sense-style twist: Ben was the patient whose death earlier in the episode caused a rift between Cox and J.D., and Cox has been in denial about it ever since. Even the revelation that Cox has been imagining conversations with his dead friend is reflective of the show’s juggling of comedy and drama — it’s the dark mirror of how Scrubs generates so much humor from taking us inside the highly distractible mind of J.D. —A.S.

91 Watchmen, “This Extraordinary Being” (Episode 6)

Even for a series as sophisticated and layered as Watchmen, this episode is an acrobatic feat. In the most dramatic departure from the show’s source material, the 1980s comic of the same name, “This Extraordinary Being” tells the origin story of one of this world’s seminal vigilante superheroes, Hooded Justice (a man lionized in a modern-day TV show-within-the-show that kicks off the episode). Told almost entirely in black and white, it sees our current-day heroine Angela Abar (Regina King) — herself a vigilante who goes by Sister Night, when she’s not working her day job as a cop — sucked into the memories of her grandfather, Will Reeves, after swallowing a bottle of his “nostalgia pills.” Transported to 1930s New York, we watch Will (played as a young man by Jovan Adepo), and sometimes Angela-as-Will, join the NYPD, where he encounters racism so virulent, his fellow cops stage a near-lynching, covering him with a hood and briefly hanging him from a tree as a warning to stand down. The message he takes away, though, is that there is plenty of evil to fight in the world, even in his own precinct. He just has to do it undercover — appropriating for his costume the very hood and noose that had been used to terrorize him. With balletic camerawork, a period soundtrack of big band standards, and visceral performances from King and Adepo, the episode is a sweeping achievement that inverts a fundamental truth of the series’ world — this revered hero that everyone assumed was white is Black — and underscores one about ours: Justice often comes at a steep price. —Maria Fontoura

90 The Golden Girls, “Mrs. George Devereaux” (Season 6, Episode 9)
The Golden Girls experienced so many adventures together, as Dorothy (Bea Arthur), Rose (Betty White), Blanche (Rue McClanahan), and Sophia (Estelle Getty) lived together as pals and confidantes. But “Mrs. George Devereaux” is a truly touching treatment of grief and loss. Blanche, the most frivolous of the Girls (and the funniest), opens the door and beholds a strange sight: her late husband George, telling her that he faked his death and now wants her back. The episode explores how all the characters live with their different kinds of grief — and how that grief is what brought them here together in the first place. It has the most emotional resonance of any Golden Girls episode, but it’s also the funniest in terms of pure farcical comedy, as Dorothy gets swept up in a bizarre love triangle with two 1970s heartthrobs, guest stars Sonny Bono and Lyle Waggoner. As usual, Blanche gets the best line, when she confronts Cher’s ex-husband with the command, “Sonny Bono, get off my lanai!” —Rob Sheffield

89 SpongeBob SquarePants, “Pizza Delivery” (Season 1, Episode 5)

The absurdist humor that made SpongeBob SquarePants beloved across multiple generations is already at full strength in this early episode. At the end of another shift at the Krusty Krab, a customer calls in to order a pizza to be delivered to his home. Never mind that the restaurant doesn’t make pizzas: Mr. Krabs (Clancy Brown) sees a few bucks to be earned, and somehow turns a Krabby Patty burger into a pizza, complete with box, then orders SpongeBob (Tom Kenny) and Squidward (Rodger Bumpass) to take it to its destination. Instead, SpongeBob’s usual difficulty with driving strands the odd couple far from Bikini Bottom, trying various bizarre methods to get home — all of them borrowed from the “pioneers,” like the idea of riding on giant rocks. In the end, we get one last, great punchline: The customer lives right next door to the Krusty Krab, and they could have just walked the pizza over to him. —A.S.

88 Roseanne, “War and Peace” (Season 5, Episode 14)

Both in its Nineties heyday and its modern reinvention as The Conners, Roseanne had a real knack for blending domestic comedy with candid material about poverty, addiction, sexuality, and more. In this terrific conclusion of a two-part story, Dan (John Goodman) gets hauled off to jail after beating up Fisher, the abusive boyfriend of Jackie (Laurie Metcalf), while Roseanne tends to her sister, and Darlene (Sara Gilbert) gets to briefly relish the sight of her disciplinarian father behind bars. “War and Peace” doesn’t hide from the horror of Jackie’s experience, but even its dark moments are flavored with sass, like when Roseanne warns Fisher, “If you ever come near her again, you’re gonna have to deal with me, and I am way more dangerous than Dan. I got a loose-meat restaurant. I know what to do with the body!” —A.S.

87 The Dick Van Dyke Show, “Never Bathe on Saturday” (Season 4, Episode 27)
Somehow, the best showcase for Dick Van Dyke and Mary Tyler Moore as one of TV’s all-time couples is in an episode where Moore is frequently off-camera. A romantic getaway for Rob and Laura goes horribly awry when Laura’s big toe gets stuck in a hotel bathtub faucet, the bathroom door gets locked, and Rob makes the ill-timed decision to draw a fake mustache on his upper lip that he can’t wipe off — leading every hotel worker who arrives to help assuming he’s up to no good. Written by Dick Van Dyke Show creator Carl Reiner, this installment keeps finding new and amusing ways to escalate the sticky situation, and to push the outer edge of the envelope of censorship circa 1965, with a story about the risk of other people seeing Laura naked. By this point in the series’ run, Reiner knew exactly how to use his leading man’s fluency with physical comedy, and how his leading lady’s voice on the other side of that locked door was all that was needed to sell Laura’s dismay at being trapped in such an embarrassing position. —A.S.

86 Black Mirror, “San Junipero” (Season 3, Episode 4)
What would your ideal afterlife look like? Black Mirror — the British dystopian anthology series with a nihilistic approach to rapidly developing technology — is known for being a show that doesn’t only answer questions about the future but depicts the worst possible alternative you’ve never even considered. Maybe that’s why, when fans were introduced to the couple at the heart of “San Junipero,” and found the answer of the ideal afterlife to be an Eighties beach town party that never ends, they responded so fondly. Yorkie (Mackenzie Davis) and Kelly (Gugu Mbatha-Raw) meet on a night out and quickly fall into a romantic entanglement. But what begins as a love story about two lesbians finding each other in a heaven on earth is quickly revealed to be a virtual reality — one where the elderly and those who have died can be uploaded and then live on forever as their younger selves. The two — both dying in real life — must deal with whether or not the love they’ve found in pixels is enough for both of their forevers. It’s a touching love story that embodies Black Mirror at its very best. —CT Jones

85 Sex and the City, “My Motherboard, My Self” (Season 4, Episode 8)

Family is, arguably, everywhere in Sex and the City — from those the core four start with their partners to the ones they marry into (have there ever been more terrifying mothers-in-law than Frances Sternhagen or Anne Meara?) and the one they build just among themselves. But when it comes to the blood relations of Carrie (Sarah Jessica Parker), Charlotte (Kristin Davis), Miranda (Cynthia Nixon), and Samantha (Kim Cattrall), the show is surprisingly thin, which is what makes “My Motherboard, My Self” stand out so much. It’s not that the other subplots aren’t memorable — the endless physical comedy of Samantha losing her orgasm; Carrie’s Macintosh meltdown and trip to Manhattan 1990s mainstay Tekserve (R.I.P.), where technician Dmitri (a brilliantly dry Aasif Mandvi) rags on her for not “backing up” — but Miranda’s turn here feels different. As she attends her mother’s funeral in Philadelphia (where she is, apparently, from, and where she has, apparently, multiple siblings), we see a more human side of a character who until this point has largely maintained her station as “the analytical one.” (Though it’s notable that the most intimate moment she has in the City of Brotherly Love isn’t with a direct relation, but the fitting room attendant trying to sell her a bra.) While the show has been criticized for celebrating solipsistic behavior, this episode is a prime example of the four women grappling with their ability to be vulnerable. —Elisabeth Garber-Paul

84 Broad City, “Knockoffs” (Season 2, Episode 4)
Both stories in the stoner comedy’s most laugh-out-loud installment involve imitation products. In one, Ilana (Ilana Glazer) and her mother Bobbi (Susie Essman) travel into the sewers of Manhattan to obtain counterfeit designer purses. In the other, Abbi (Abbi Jacobson) is shocked when her boyfriend Jeremy (Stephen Schneider) asks her to peg him with a strap-on — a development that so thrills Ilana, she does an upside-down twerk on her friend’s behalf — then has to scramble to find a reasonable facsimile after her dishwasher melts Jeremy’s custom-made dildo. In the end, the replacements prove shoddier than the real thing, but “Knockoffs” is so perfectly constructed, and so memorable, that when the friends met Hillary Clinton in a later episode, among the first things a flustered Abbi can think to tell her is, “I pegged!” —A.S.

83 The Fresh Prince of Bel-Air, “Papa’s Got a Brand New Excuse” (Season 4, Episode 24)

When The Fresh Prince of Bel-Air went on the air in 1990, Will Smith was such an inexperienced actor that he literally mouthed the lines of his co-stars while they spoke. But it didn’t take long for Smith to learn his craft and land roles in dramatic movies like Six Degrees of Separation. That’s why the creative team behind this series knew he was ready for a Season Four episode where Will reunites with his father (played by Ben Vereen) 14 years after he walked out on the family, only to see him leave once again after they reconciled. “I’ll be a better father than he ever was, and I sure as hell don’t need him for that, ’cause ain’t a damn thing he could ever teach me about how to love my kids!” Smith roars, before breaking down in the arms of Uncle Phil. “How come he don’t want me, man?” For anyone who grew up without a father, the moment cut deep. “I shed a tear til this day every time I see this episode,” LeBron James wrote on Instagram in 2015. “This hit home for me growing up and I couldn’t hold my tears in. Til this day they still coming out when this episode come on.” —Andy Greene

82 Doctor Who, “Blink” (Season 3, Episode 10)

The scariest, cleverest episode of the British sci-fi institution Doctor Who features monsters who are elegant in their simplicity: the Weeping Angels, predatory aliens who resemble stone statues of angels, and who can only move when you’re not looking at them. Writer Steven Moffat places these disturbing creatures in service of a story that barely features the Doctor (David Tennant) and his then-companion Martha Jones (Freema Agyeman), instead focusing on a young Carey Mulligan as Sally Sparrow, a woman who keeps running afoul of the Weeping Angels. Her only hope of surviving the ordeal comes in the form of a DVD Easter Egg that creates the illusion of the Doctor having a conversation with her, and even the Time Lord himself struggles to adequately explain all the seeming paradoxes contained within Moffat’s tale. “People assume that time is a strict progression of cause to effect,” he tells Sally, “but actually from a non-linear, non-subjective viewpoint, it’s more like a big ball of wibbly-wobbly, timey-wimey stuff.” Yet it all makes exciting sense by the end. —A.S.

81 Alias, “Truth Be Told” (Season 1, Episode 1)
Throughout his career, J.J. Abrams has struggled with endings, as anyone who sat through The Rise of Skywalker can tell you. Few, though, are better at beginnings, and the pilot episode of his spy drama Alias is so fantastic that it bought years of goodwill from viewers, no matter how nonsensical the plots grew as the show went along. While undercover agent Sydney Bristow (Jennifer Garner) is in Taiwan being interrogated by a torture expert, we flash back through the events that led her here, starting with her double life as a grad student by day, CIA agent by night. This turns out to be a triple life when Sydney discovers that she’s been tricked into working for a terrorist organization called SD-6, and that her father, Jack (Victor Garber), is secretly her co-worker. Oh, and Sydney’s fiancé gets murdered on the order of SD-6 boss Arvin Sloane (Ron Rifkin), plus a half-dozen other characters have to be introduced, Sydney has to try on multiple hair colors and accents, and more. Between the fractured timeline and the multiple lies Sydney has to live at once, “Truth Be Told” should be absolute gibberish. But Abrams, in one of his earliest efforts as director as well as writer, keeps everything coherent and thrilling in an episode that made him into a star just as much as it did Jennifer Garner. —A.S.

79 Grey’s Anatomy, “It’s the End of the World/As We Know It” (Season 2, Episodes 16 & 17)
Hearing main character Meredith Grey (Ellen Pompeo) refuse to get out of bed for fear that she’ll die at work should have been a clue that it wouldn’t be a good week. But viewers were still terrified when the series seemingly tried its hardest to make every main character (plus guest stars Christina Ricci and Kyle Chandler) have near-death experiences in this two-parter, which began airing after Super Bowl XL. Bailey (Chandra Wilson) is in labor at the hospital waiting for her husband, who won’t answer his phone. Derek (Patrick Dempsey) can’t concentrate on saving his patient’s life while the man’s cell keeps going off (put two and two together here). And when a newbie paramedic shoves her hands into the chest cavity of a patient who’s bleeding out, it’s Meredith who learns that what’s currently killing him is unexploded ammunition that could go off at any minute, taking her and the entire O.R. with it. The bomb squad evacuates the floor, but if Derek leaves, Bailey’s husband dies. Meredith steps in for the paramedic, who’s had a panic attack, so now, if Meredith moves, she and Derek and Bailey’s husband die. Richard (James Pickens, Jr.) has a heart attack from the stress of the evacuation. Izzie (Katherine Heigl) and Alex (Justin Chambers) are off hooking up in a closet, which is also life-threatening if you consider Alex’s numerous confirmed STDs. And if Bailey, who is refusing to push without her husband being present, doesn’t give birth, she and the baby will die. It’s an all-in, melodramatic pivot for a series that has since become known for putting its main characters in life-threatening situations. And yet, in the midst of these increasingly heightened stakes, the standout scene remains George’s (T.R. Knight) gentle cajoling that finally convinces Bailey to push — and to name her son after him. “You’re Doctor Bailey,” he says, in a scene that remains one of the most tender of the entire series. “You don’t hide from a fight.” —CTJ

78 Girls, “American Bitch” (Season 6, Episode 3)

If ever Hannah Horvath was a voice of a generation, this was it. Airing just a few months before the #MeToo movement exploded in 2017, this quiet cri de coeur — in which famous author Chuck Palmer (Matthew Rhys, nimble as ever) confronts Hannah (Lena Dunham) about a blog post she wrote slamming his alleged misconduct with several college girls — taps into every conversation we’re still having about power and consent. Chuck summons Hannah to his stately apartment, where she attempts to explain why taking advantage of his literary stature to hook up with young women is predatory, while he hurls every trick in the Bad Men Handbook at her: flattery (“You’re very bright”); faux honesty (“I’m a horny motherfucker with the impulse control of a toddler”); defensiveness (“These girls throw themselves at me!”); casual intimacy (“You’re more to me than just a pretty face”). With astonishing precision and economy, Dunham turns the tables such that by the end of the episode — that is, by the time Chuck and Hannah are lying clothed atop his bed, and he takes out his dick and flops it onto her thigh — Hannah has fallen prey to the very manipulations she was calling out. A hallmark moment in a show that will only age better with time. —M.F.

Source: https://www.rollingstone.com/tv-movies/tv-movie-lists/best-tv-episodes-of-all-time-1235090945/girls-american-bitch-2-1235091122/

8-question test reveals if you have what it takes to succeed

(© Cherries – stock.adobe.com)

In classrooms, boardrooms, and athletic fields across the world, people with a “growth mindset” consistently outperform their peers. Now, for the first time, scientists at the Norwegian University of Science and Technology (NTNU) have developed a reliable way to measure this crucial trait — and their findings challenge conventional wisdom about who possesses it.

The study, published in New Ideas in Psychology, introduces an eight-question scale that effectively measures growth mindset — the belief that personal abilities can be developed through effort, learning, and persistence. Testing the scale on 723 participants ranging from ages 16 to 85, the researchers found it to be reliable across age groups and more comprehensive than previous measurement tools.

A growth mindset has become an increasingly important concept in psychology, education, and personal development. People with a growth mindset believe their talents and abilities can be developed through effort, good teaching, and persistence. In contrast, those with a “fixed mindset” tend to believe their basic qualities, like intelligence or talent, are static traits that can’t be changed significantly.

The research team, led by Professor Hermundur Sigmundsson from NTNU’s Department of Psychology and Professor Monika Haga from the Department of Teacher Education, developed their new scale to measure growth mindset more broadly than existing tools. Their scale asks people to rate how much they agree with statements like “I know that with effort I can improve my skills and knowledge” and “I see learning as my goal.”

These statements assess various aspects of a growth mindset, from belief in the power of effort to openness to challenges and commitment to learning. Beyond its practical applications, the scale showed stronger connections to other important psychological factors like passion and grit compared to previous measurement tools.
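
For a concrete sense of how a questionnaire like this is typically scored, here is a minimal sketch in Python. The 1-to-5 agreement scale, the simple-mean scoring rule, and the example ratings are illustrative assumptions, not the published NTNU instrument, whose exact items and scoring may differ.

# Minimal sketch of scoring an eight-item growth mindset questionnaire.
# Hypothetical: assumes a 1-5 Likert agreement scale and a simple mean;
# the published NTNU scale may weight or score its items differently.
def growth_mindset_score(ratings):
    """Average eight 1-5 agreement ratings into a single score."""
    if len(ratings) != 8:
        raise ValueError("Expected ratings for exactly 8 statements")
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("Each rating must be on the 1-5 Likert scale")
    return sum(ratings) / len(ratings)

# Example: a respondent who mostly agrees with statements like
# "I know that with effort I can improve my skills and knowledge".
print(growth_mindset_score([5, 4, 4, 5, 3, 4, 5, 4]))  # -> 4.25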

Source: https://studyfinds.org/8-question-test-succeed/

Late parenthood linked to greater wealth, study shows

(Credit: Rawpixel.com/Shutterstock)

Want to build wealth? Your family transitions might matter more than you think. A groundbreaking study from Norway reveals that the timing of major family events – like becoming a parent, experiencing the death of your parents, or welcoming grandchildren – can significantly impact your wealth accumulation over decades.

The research, published in the journal Social Forces, followed nearly 48,000 Norwegians born in 1953 from age 40 to 64, tracking how their wealth changed alongside key family milestones. The findings challenge conventional wisdom about wealth building, suggesting that it’s not just about how much you earn or save but also about when certain family events occur in your life.

Perhaps most surprisingly, people who had children later in life or remained childless generally accumulated more wealth than those who became parents earlier. Those who experienced the death of their parents later in life also tended to build more wealth than those whose parents died earlier, particularly after age 55.

The study, led by scientists from the Max Planck Institute for Demographic Research, identified six distinct patterns of family life courses. At one end of the spectrum were childless individuals whose parents died either early (around age 45) or late (around age 59). At the other end were those who became parents and grandparents early, creating what researchers called “four-generation families” – situations where great-grandparents, grandparents, parents, and children were all alive simultaneously for about 15 years.

The wealth differences between these groups were substantial. By 2017, the gap between the highest and lowest wealth groups translated to about $32,600 in gross wealth (before subtracting debts) and nearly $36,000 in net wealth (after subtracting debts).

Interestingly, childless individuals started with the lowest wealth positions at age 40 but showed the strongest increase over time, eventually catching up to or surpassing many parent groups by their late 50s. This might be because they had fewer financial obligations and more opportunities to invest and save.

The study also found that those who became parents and grandparents later in life consistently maintained higher wealth positions. This supports the idea that delaying parenthood might allow for better financial foundation-building in early adulthood – a critical time for major investments like housing.

Source: https://studyfinds.org/late-parenthood-linked-to-greater-wealth-study-shows/

Music may be the best medicine after surgery

(Credit: Trzykropy/Shutterstock)

Have you ever felt the stress melt away while listening to your favorite song? Well, it turns out that music might do more than just lift your spirits – it could actually help you recover faster after surgery.

A new study presented at the American College of Surgeons Clinical Congress 2024 in San Francisco suggests that simply pressing play on your favorite playlist could be a game-changer for post-surgery recovery. Researchers from California Northstate University College of Medicine dove into the world of music and medicine, analyzing a whopping 3,736 studies before narrowing it down to 35 solid research papers. What they found might just make you want to pack your headphones for your next hospital stay.

It turns out that patients who listened to music after surgery experienced some pretty impressive benefits. Let’s break down the results:

Less Pain, More Gain
Remember that awful post-surgery pain that makes you want to curl up and hide? Well, music might be your new best friend. Patients who tuned in reported feeling significantly less pain the day after surgery. We’re talking about a 19% reduction on one pain scale and a 7% reduction on another. That’s nothing to scoff at when you’re trying to get back on your feet!

Anxiety? Not Today!
Hospitals can be scary places, especially when you’re recovering from surgery. Here’s where music works its magic again. Patients who listened to tunes reported feeling less anxious overall. While a 3% reduction in anxiety might not sound like much, every little bit helps when you’re trying to heal.

Opioid Use Takes a Nosedive
On the first day after surgery, patients who listened to music used less than half the amount of morphine compared to those who didn’t. That’s a big deal, especially considering the ongoing concerns about opioid use in healthcare.

Heart Rate Harmony
Your ticker gets in on the action too. Music listeners experienced about 4.5 fewer heartbeats per minute compared to non-listeners. Why does this matter? A steady, calm heart rate helps your body circulate oxygen and nutrients more effectively, which is crucial for healing. Plus, it reduces the risk of scary complications like abnormal heart rhythms.

“When patients wake up after surgery, sometimes they feel really scared and don’t know where they are. Music can help ease the transition from the waking up stage to a return to normalcy and may help reduce stress around that transition,” explains Dr. Eldo Frezza, a professor of surgery and senior author of the study, in a media release.

What makes music such a powerful healing tool?
For starters, it’s incredibly easy to use. Unlike other therapies that might require concentration or movement, listening to music is a passive experience. You don’t need special training or equipment – just pop in your earbuds or turn on a speaker, and you’re good to go.

“Although we can’t specifically say they’re in less pain, the studies revealed that patients perceive they are in less pain, and we think that is just as important. When listening to music, you can disassociate and relax. In that way, there’s not much you have to do or focus on, and you can calm yourself down,” says Shehzaib Raees, the study’s first author and a medical student.

The science behind this musical medicine isn’t fully understood yet, but researchers think it might have something to do with cortisol levels. Cortisol is often called the “stress hormone,” and music might help keep those levels in check, easing your body’s stress response and helping you heal.

Now, before you start curating the perfect post-surgery playlist, keep in mind that this study looked at existing research and couldn’t control for all variables, like how long patients listened to music. But don’t let that stop you from giving it a try.

Source: https://studyfinds.org/music-best-medicine-surgery/

65 is the new 25: The training technique that’s turning back the clock for older adults

(Photo by PeopleImages.com – Yuri A on Shutterstock)

Attention, retirees: It’s time to dust off those sneakers and sharpen those pencils. Scientists have cooked up a recipe for staying sharp and fit that combines the best of both worlds – and it’s not prune juice and power walking. A groundbreaking new study suggests that combining brain training with physical exercise could be the key to staying fit and mentally sharp as we grow older.

Researchers from the University of Extremadura in Spain and the University of Birmingham in the U.K. have found that a novel training approach called Brain Endurance Training (BET) can significantly enhance both cognitive and physical performance in older adults. Published in the journal Psychology of Sport & Exercise, the study shows that BET not only improves performance when participants are fresh but also helps them maintain high-performance levels even when fatigued.

For the research, the study authors turned to 24 healthy, sedentary women between the ages of 65 and 78. These women were randomly divided into three groups: one group underwent BET, another group did only physical exercise training, and a control group did no training at all.

The BET and exercise-only groups followed the same physical training regimen: three 45-minute sessions per week for eight weeks. Each session included 20 minutes of resistance exercises (like squats and bicep curls) and 25 minutes of walking. The key difference was that the BET group also performed a 20-minute cognitive task before each exercise session.

To test the effectiveness of the training, the researchers assessed participants’ cognitive and physical performance at four points: before training began, halfway through the eight-week program, immediately after the program ended, and four weeks after the program finished.

The cognitive tests included a psychomotor vigilance task, which measures reaction time and alertness, and a Stroop test, which assesses the ability to override automatic responses – a key aspect of cognitive control. Physical tests included a six-minute walk test, a 30-second chair stand test (repeatedly standing up and sitting down), and a 30-second arm curl test.

Importantly, these tests were performed twice during each assessment: once when participants were “fresh” and again after they had completed a mentally fatiguing 30-minute cognitive task. This allowed researchers to evaluate how well the different training approaches helped participants maintain their performance even when mentally tired.

The results revealed that both the BET and exercise-only groups experienced improvements in cognitive and physical performance compared to the control group. However, the BET group consistently outperformed the exercise-only group, especially when participants were in a fatigued state.

For instance, from the beginning to the end of the study, the BET group improved their performance on the chair stand test by a whopping 59.4% when fatigued, compared to a 47.5% improvement in the exercise-only group. In cognitive tasks, the BET group showed a 12.1% improvement in accuracy on the Stroop test when fatigued, while the exercise-only group improved by 6.9%.
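
As a side note on the arithmetic, figures like these are conventionally computed as relative change from a group’s own baseline. The sketch below illustrates that definition with made-up chair-stand counts chosen only to reproduce the reported percentages; the study’s actual baseline and follow-up values are not given here.

# Hypothetical illustration of a percent-improvement calculation,
# assuming change is measured relative to each group's own baseline.
def percent_improvement(baseline, followup):
    """Relative change from baseline, expressed as a percentage."""
    return (followup - baseline) / baseline * 100

# Made-up chair-stand counts (repetitions in 30 seconds):
print(percent_improvement(10.1, 16.1))  # ~59.4, like the BET group
print(percent_improvement(10.1, 14.9))  # ~47.5, like the exercise-only group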

“We have shown that BET could be an effective intervention to improve cognitive and physical performance in older adults, even when fatigued,” says corresponding author Chris Ring from the University of Birmingham in a statement. “This could have significant implications for improving healthspan in this population, including reducing the risk of falls and accidents.”

Source: https://studyfinds.org/training-turning-back-the-clock/

Sugar shock: Average person consumes 80 pounds of sugar each year

(© Wayhome Studio – stock.adobe.com)

We all get the occasional sugar craving, but a new survey finds that most Americans have gone completely overboard when it comes to sweet treats and drinks. In a startling revelation, the average American consumes an astonishing 36,000 grams of sugar per year — equivalent to nearly 80 pounds.

This eye-opening statistic emerges from a poll of 2,000 Americans conducted by Talker Research on behalf of Hint Water, shedding light on the nation’s sugar habits and their surprising impact on hydration levels. The study found that the typical American ingests 99 grams of sugar daily, surpassing the sugar content of two 12-ounce cans of soda. This excessive consumption comes despite 85% of respondents actively working to reduce their sugar intake.
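
Those two figures are consistent with each other, as a quick sanity check using nothing beyond standard unit conversions shows:

# Sanity-check the survey's arithmetic: 99 g of sugar per day,
# scaled to a year and converted to pounds (1 lb = 453.592 g).
GRAMS_PER_DAY = 99
GRAMS_PER_POUND = 453.592

grams_per_year = GRAMS_PER_DAY * 365                 # 36,135 g, roughly 36,000 g
pounds_per_year = grams_per_year / GRAMS_PER_POUND   # about 79.7 lb

print(f"{grams_per_year:,} g/year is about {pounds_per_year:.1f} lb/year")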

The survey also reveals that for 34% of respondents, the majority of their daily sugar intake in beverages comes from their morning coffee. Another 28% say that soda makes up most of their liquid consumption.

Interestingly, more than half of the participants (51%) believe that their sugar cravings might actually be a sign of dehydration. This insight gains significance when considering that respondents reported drinking only 48 ounces of water on a typical day, far below recommended levels.

“The study revealed that, on a regular day, the average respondent consumes more than twice the amount of sugar recommended by the American Heart Association and significantly less water than is recommended by the U.S. National Academy of Medicine. And while it’s important to showcase how much room we have for improvement, it’s also important to understand why this is the case,” says Amy Calhoun Robb, chief marketing officer at Hint Water, in a statement.

The survey explored the emotional triggers behind sugar cravings, with stress (39%), boredom (36%), fatigue (24%), and loneliness (17%) topping the list. When experiencing these cravings, respondents reported feeling anxious (23%), irritable (22%), impatient (20%), and unproductive (20%).

Common situations that often bring on sweet cravings include watching a movie (31%), finishing a meal (31%), needing a midday energy boost (30%), and having a bad day at work (19%). The survey also identified 3:12 p.m. as the peak time for sugar cravings to strike.

Perhaps most alarmingly, the average person can only resist a sugar craving for 13 minutes before giving in. Some go to extreme lengths to satisfy their sweet tooth, with 12% of respondents admitting they’ll make time in their busy schedules — no matter what — to obtain a sweet treat.

The aftermath of a sugar binge isn’t pretty either. Respondents reported experiencing a “crash” just 33 minutes after indulging, often feeling fatigued (42%), regretful (25%), and unfocused (21%).

As Americans grapple with their sugar addiction, the survey highlights a growing awareness of the importance of hydration. Over half of the respondents (51%) are actively working to drink more water specifically to prevent sugar cravings, making improved hydration their number one health goal.

Source: https://studyfinds.org/80-pounds-of-sugar-each-year/

An asteroid wiped out the dinosaurs — but it invented something amazing for ants

A lower-fungus-farming worker of the rare fungus-farming ant species Mycetophylax asper, collected in Santa Catarina, Brazil, in 2014, on its fungus garden. (Credit: Don Parsons)

The asteroid that wiped out the dinosaurs also brought opportunities for new life. Scientists have found that after the asteroid wiped out many plants, ants started farming fungi to help them survive and get the food they needed in tough times.

The meteor impact 66 million years ago created a low-light environment in which fungi that fed on organic matter could survive, even as many plants and animals died. The dust in the skies also made it difficult for plants to undergo photosynthesis — converting light energy to make food. Researchers found that the resulting spread of fungi allowed fungus-farming ants to thrive in these dark times. The findings trace the start of the mutualistic relationship shared between several fungal species and ants.

“The origin of fungus-farming ants was relatively well understood, but a more precise timeline for these microorganisms was lacking. The work provides the smallest margin of error to date for the emergence of these fungal strains, which were previously thought to be more recent,” says study co-author André Rodrigues, a professor at the Institute of Biosciences of São Paulo State University (IB-UNESP) in Brazil, in a media release. The study is published in the journal Science.

Researchers studied the genetic remains of 475 fungal species cultivated by ants from all over the Americas. They narrowed their focus to ultra-conserved elements of the fungal genomes. These regions persist in the genome throughout a group’s evolution, providing genetic evidence that links back to the most ancient ancestors.

“In this case, we were interested in the regions close to these elements. They show the most recent differences between species and allow us to trace a fairly accurate evolutionary line,” says study co-author Pepijn Wilhelmus Kooij, a researcher at IB-UNESP supported by FAPESP.

The genetic evidence on these fungal species allowed researchers to track two distinct fungal lineages, cultivated by the ancestors of present-day leafcutter ants, back to 66 million years ago. The study also showed the emergence of the ancestor of the coral fungi, which ants began cultivating 21 million years ago.

In the current study, the researchers suggest the ancestor of the leafcutter ants lived in close proximity to fungi. The fungi may have been present inside ant colonies or occasionally collected for food.

Mutualism — a relationship in which both parties benefit — was forced on several fungal species and the ancestor of leafcutter ants. Researchers explain that the asteroid impact made the relationship necessary for survival: the fungi needed ants for food and reproduction, and the ants used the fungi as a significant food source.

Nowadays, four different ant groups cultivate four types of fungi. Some insects even influence how the fungi grow so they can produce certain nutrients.

“When we cultivate them in the lab, the fungi take the expected form of hyphae. However, inside the colony, one of these hyphae types becomes swollen and forms structures similar to grape clusters, rich in sugars. We still don’t know how the ants do this,” Kooij explains.

The authors suggest that cultivating fungi was likely a way for ants to adapt to the nutritional shortage they faced after the asteroid’s impact. Fungi, in turn, benefited from being cultivated by ants, creating a mutualistic relationship. The process works like this: the fungus breaks down organic matter the ants carry in, and the ants then eat the products of the fungus, nutrients that could not be found in other food sources at the time.

Another major event affected the future of fungus-farming ants. Ants previously lived in humid forests. However, 27 million years ago, environmental shifts transformed the terrain into more savanna-like territory. These dry, vast areas gave fungus-farming ants more places to roam, and they eventually diversified into today’s leafcutter ants.

The diversification of ants also provided an opportunity for fungi to diversify. This made them better at making food for the ants and decomposing organic matter. The way fungi evolved to decompose organic matter efficiently is now being studied as a potential way to decompose other materials like plastics.

Source: https://studyfinds.org/asteroid-dinosaurs-ants/

Ancient DNA suggests our love for carbs goes back 800,000+ years

(Credit: Johnny Rizk from Pexels)

Our craving for bread, pasta, and potatoes may be more than just a cultural preference – it could be encoded in our DNA. Research reveals that our ability to digest starchy foods has much deeper roots than previously thought, potentially explaining why so many of us find carbohydrates irresistible.

Published in Science, this eye-opening research delves into the evolutionary history of a gene called AMY1, which produces salivary amylase – the enzyme that starts breaking down starches as soon as they hit our mouths. For years, scientists have known that humans carry multiple copies of this gene, but pinpointing when and how these copies multiplied has been as tricky as resisting a warm, crusty baguette.

“The idea is that the more amylase genes you have, the more amylase you can produce and the more starch you can digest effectively,” says the study’s corresponding author, Omer Gokcumen, PhD, a professor in the Department of Biological Sciences at the University at Buffalo, in a media release.

A team of researchers from the University at Buffalo and the Jackson Laboratory decided to take a fresh bite out of this mystery. Armed with cutting-edge genomic tools like optical genome mapping and long-read sequencing, they set out to map the AMY1 gene region in a level of detail that would make a master chef proud.

The genetic feast they uncovered was more varied and complex than anyone had imagined. Among the 98 individuals studied from around the world, the team identified 52 distinct amylase haplotypes – think of these as different recipes for the AMY1 gene. Thirty of these stood out as particularly well-supported findings, suggesting that this gene region has been simmering with change throughout human history.

But the real showstopper came when the researchers peered into our evolutionary past. By examining ancient DNA from Neanderthals, Denisovans, and early modern humans, they found evidence of AMY1 gene duplications that push back the origin of our starch-digesting prowess to over 800,000 years ago. That’s long before our ancestors even dreamed of agriculture!

“This suggests that the AMY1 gene may have first duplicated more than 800,000 years ago, well before humans split from Neanderthals and much further back than previously thought,” says Kwondo Kim, one of the lead authors on this study from the Lee Lab at JAX.

This ancient genetic prep work didn’t go to waste. When the team analyzed 68 ancient human genomes, including one from a 45,000-year-old individual found in Siberia, they discovered that even these ancient hunter-gatherers were carrying multiple copies of the AMY1 gene. It seems our ancestors were genetically equipped for a carb-heavy diet long before they started cultivating grains.

The plot thickens like a rich risotto when we look at more recent history. Over the last 4,000 years, European farmers experienced a surge in high-copy AMY1 haplotypes. As agriculture spread, so did genetic variations allowing for even more efficient starch processing. It’s as if our genes and our growing appetite for grains were evolving in perfect harmony.

So how did all this genetic diversification happen? The researchers identified several mechanisms, but one stands out: non-allelic homologous recombination, or NAHR. Think of it as nature’s way of accidentally duplicating recipe cards – sometimes you end up with extra copies, sometimes fewer. This genetic lottery explains why some of us hit the jackpot with extra AMY1 copies, potentially making us carb-digesting champions.

Interestingly, while the number of AMY1 copies can vary widely between individuals, the actual protein-coding sequences remain remarkably stable. It’s as if evolution is saying, “Feel free to make more copies of this recipe, but don’t change the ingredients!”

This research isn’t just food for thought about our past – it has real implications for our present and future health. Understanding how our genes have adapted to dietary changes could shed light on modern issues related to starch consumption and digestion. It might even explain why some of us find it harder to resist that second helping of mashed potatoes.

Source: https://studyfinds.org/ancient-dna-love-for-carbs/

PFAS problem: Forever chemicals found in 99% of bottled water samples worldwide

(Photo by Towfiqu Ahamed Barbhuiya on Shutterstock)

The purity of our drinking water is being challenged by the presence of synthetic “forever chemicals,” according to a new international study. Researchers have detected per- and polyfluoroalkyl substances (PFAS) in water samples from taps and bottles across different countries, raising concerns about potential health risks associated with long-term exposure.

PFAS are a group of synthetic chemicals used in various industrial applications and consumer products due to their water and stain-repellent properties. Their persistence in the environment and potential adverse health effects have made them a subject of increasing scientific and regulatory scrutiny.

The study, conducted by researchers from the University of Birmingham, Southern University of Science and Technology, and Hainan University, analyzed 112 glass and plastic bottled water samples (87 brands) from 15 countries, and 55 tap water samples from the UK and China. Their findings, published in the journal ACS ES&T Water, paint a picture of widespread contamination and highlight the need for increased monitoring and regulation of these chemicals.

The researchers focused on ten specific PFAS compounds, finding that two of the most well-known PFAS – perfluorooctanoic acid (PFOA) and perfluorooctane sulfonate (PFOS) – were detected in over 99% of bottled water samples.

One of the most striking findings was the difference in the levels of forever chemicals between tap water in the UK and China. Chinese tap water contained significantly higher concentrations of these chemicals, with some samples exceeding the latest health guidelines set by the U.S. Environmental Protection Agency (EPA). This disparity may be partly due to differences in environmental regulations and industrial practices between the two countries.

The researchers also investigated bottled water from 15 different countries, discovering that even this supposedly “pure” source of hydration is not immune to PFAS contamination. Natural mineral water sourced from groundwater typically contained higher concentrations of forever chemicals compared to purified water. However, the study found no significant differences in PFAS levels between glass and plastic bottles or between still and sparkling water.

The finding debunks the common perception that “natural” always equates to “cleaner” or “safer” when it comes to drinking water.

The researchers didn’t stop at simply measuring forever chemical levels; they also explored potential methods for reducing exposure to these chemicals. Their experiments showed that common household water treatment methods, such as boiling and activated carbon filtration, can significantly reduce PFAS concentrations in drinking water. This information provides a practical approach for concerned individuals to minimize their exposure to these persistent pollutants.

“Our findings highlight the widespread presence of PFAS in drinking water and the effectiveness of simple treatment methods to reduce their levels,” says co-author Professor Stuart Harrad from the University of Birmingham, in a statement. “Either using a simple water filtration jug or boiling the water removes a substantial proportion of these substances.”

The study’s results underscore the need for ongoing monitoring and regulation of forever chemicals in drinking water sources. “Increased awareness about the presence of PFAS in both tap and bottled water can lead to more informed choices by consumers, encouraging the use of water purification methods,” says Professor Yi Zheng from Southern University of Science and Technology.

The discovery of widespread PFAS contamination in drinking water sources worldwide will undoubtedly spark further research, policy debates, and technological innovations. Armed with this knowledge, we are better equipped to face the challenge of ensuring clean, safe drinking water for generations to come.

Source: https://studyfinds.org/pfas-problem-forever-chemicals-bottled-water/

Too many toys can be bad for kids

Nostalgic toys (Photo by Jane Slack-Smith on Unsplash)

Does your home look like there was an explosion in a toy store? With the holidays coming, are you going to add to that chaos? Take heart: there are good reasons to stop the madness. Giving kids fewer toys results in healthier play and deeper cognitive development.

According to a study in the journal Infant Behavior and Development, having fewer toys can lead a child to engage in more creative play. Researchers at the University of Toledo in Ohio reported that an abundance of toys reduced the quality of toddlers’ play.

The scientists studied 36 toddlers, from 18 to 30 months of age, in free-play sessions. The children were given either four toys or 16 toys and were observed during sustained play sessions involving various forms of play.

The toddlers with four toys showed a higher quality of play. They interacted with their toys 1.5 times longer than the children with 16 toys did. Children with fewer toys also played with them in more varied ways.

The researchers in Toledo noted that all the young participants played with either four or 16 toys on different days, and in random order. That ensured that the differences in results could be attributed to the environment, not any variability among the children.

What are the downsides of having too many toys?

Just as a messy office or desk can distract adults, an overabundance of toys is distracting to children. And the toy glut has other downsides as well.

What are the advantages of having fewer toys?
Cutting back on toys actually encourages creativity. With fewer toys, kids create their own stories and invent new games and activities. They develop critical thinking skills.

Kids also develop resourcefulness. They will repurpose toys and combine them innovatively.

Playing with fewer toys also avoids overstimulation. With fewer toys, kids focus more effectively. A calmer environment promotes concentration and mindfulness.

Having fewer toys can promote appreciation. Kids will value their toys, as well as learn responsibility. They learn to take care of their possessions.

For parents and caregivers, it saves money. Buying fewer toys and avoiding impulse buying means allocating resources more effectively. It also lessens spending on storage for a mountain of different toys.

Dealing with fewer toys develops organization skills. Less clutter promotes appreciation for organization.

Kids are more likely to engage in collaborative play, promoting their social skills. They learn negotiation, communication, and conflict resolution skills. Fewer toys also foster empathy and an awareness of multiple perspectives.

Cutting down on the toy bulk encourages quality time. Having fewer toys can be conducive to quality family time built around shared experiences that create lasting memories.

Finally, fewer toys mean a smaller environmental impact. Many toys are made of plastic or other non-biodegradable materials. Choosing fewer, higher-quality toys made from sustainable materials reduces a family’s environmental footprint.

Source: https://studyfinds.org/too-many-toys-bad-for-kids/

Most space rocks crashing into Earth likely come from this one source

(© IgorZh – stock.adobe.com)

The sight of a fireball streaking across the sky brings wonder and excitement to children and adults alike. It’s a reminder that Earth is part of a much larger and incredibly dynamic system.

Each year, roughly 17,000 of these fireballs not only enter Earth’s atmosphere, but survive the perilous journey to the surface. This gives scientists a valuable chance to study these rocky visitors from outer space.

Scientists know that while some of these meteorites come from the Moon and Mars, the majority come from asteroids. But two separate studies published in Nature today have gone a step further. The research was led by Miroslav Brož from Charles University in the Czech Republic, and Michaël Marsset from the European Southern Observatory in Chile.

The papers trace the origin of most meteorites to just a handful of asteroid breakup events – and possibly even individual asteroids. In turn, they build our understanding of the events that shaped the history of the Earth – and the entire solar system.

What is a meteorite?
Only when a fireball reaches Earth’s surface is it called a meteorite. Meteorites are commonly designated as three types: stony meteorites, iron meteorites, and stony-iron meteorites.

Stony meteorites come in two types.

The most common are the chondrites, which have round objects inside that appear to have formed as melt droplets. These comprise 85% of all meteorites found on Earth.

Most are known as “ordinary chondrites”. They are then divided into three broad classes – H, L and LL – based on the iron content of the meteorites and the distribution of iron and magnesium in the major minerals olivine and pyroxene. These silicate minerals are the mineral building blocks of our solar system and are common on Earth, being present in basalt.

“Carbonaceous chondrites” are a distinct group. They contain high amounts of water in clay minerals, and organic materials such as amino acids. Chondrites have never been melted and are direct samples of the dust that originally formed the solar system.

The less common of the two types of stony meteorites are the so-called “achondrites”. These do not have the distinctive round particles of chondrites, because they experienced melting on planetary bodies.
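
Laid out as a nested structure, the taxonomy described above is easier to take in at a glance. The sketch below simply restates the classification from this article; it is an illustration, not a formal meteoritics database.

# Illustrative summary of the meteorite taxonomy described in this article.
METEORITE_TYPES = {
    "stony": {
        "chondrites": {  # never melted; ~85% of all meteorites found on Earth
            "ordinary": ["H", "L", "LL"],  # classed by iron content and the
                                           # iron/magnesium distribution in
                                           # olivine and pyroxene
            "carbonaceous": [],  # water-rich clay minerals, organics such as amino acids
        },
        "achondrites": [],  # melted on planetary bodies; no round melt droplets
    },
    "iron": {},
    "stony-iron": {},
}

# Example lookup: the broad classes of ordinary chondrites.
print(METEORITE_TYPES["stony"]["chondrites"]["ordinary"])  # ['H', 'L', 'LL']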

The asteroid belt
Asteroids are the primary sources of meteorites.

Most asteroids reside in a dense belt between Mars and Jupiter. The asteroid belt itself consists of millions of asteroids swept around and marshalled by the gravitational force of Jupiter.

The interactions with Jupiter can perturb asteroid orbits and cause collisions. This results in debris, which can aggregate into rubble pile asteroids. These then take on lives of their own.

It is asteroids of this type that the recent Hayabusa and OSIRIS-REx missions visited and returned samples from. These missions established the connection between distinct asteroid types and the meteorites that fall to Earth.

S-class asteroids (akin to stony meteorites) are found in the inner regions of the belt, while C-class carbonaceous asteroids (akin to carbonaceous chondrites) are more commonly found in the outer regions of the belt.

But, as the two Nature studies show, we can relate a specific meteorite type to its specific source asteroid in the main belt.

One family of asteroids
The two new studies place the sources of ordinary chondrite types into specific asteroid families – and most likely specific asteroids. This work requires painstaking back-tracking of meteoroid trajectories, observations of individual asteroids, and detailed modelling of the orbital evolution of parent bodies.

The study led by Miroslav Brož reports that ordinary chondrites originate from collisions between asteroids larger than 30 kilometers in diameter that occurred less than 30 million years ago.

The Koronis and Massalia asteroid families provide appropriate body sizes and are in a position that leads to material falling to Earth, based on detailed computer modelling. Of these families, asteroids Koronis and Karin are likely the dominant sources of H chondrites. Massalia (L) and Flora (LL) families are by far the main sources of L- and LL-like meteorites.

The study led by Michaël Marsset further documents the origin of L chondrite meteorites from Massalia.

It compiled spectroscopic data – that is, characteristic light intensities which can be fingerprints of different molecules – of asteroids in the belt between Mars and Jupiter. This showed that the composition of L chondrite meteorites on Earth is very similar to that of the Massalia family of asteroids.

The scientists then used computer modelling to show an asteroid collision that occurred roughly 470 million years ago formed the Massalia family. Serendipitously, this collision also resulted in abundant fossil meteorites in Ordovician limestones in Sweden.

In determining the source asteroid body, these reports provide the foundations for missions to visit the asteroids responsible for the most common outer space visitors to Earth. In understanding these source asteroids, we can view the events that shaped our planetary system.

Source: https://studyfinds.org/space-rocks-crashing-into-earth/

Paternal plaque attack: The hidden heart risk fathers pass to daughters

(Credit: Creativa Images/Shutterstock)

Recent scientific research has uncovered a surprising connection between a father’s diet and his daughter’s risk of heart disease. A study conducted by researchers at the University of California-Riverside has shown that male mice consuming a high-cholesterol diet can significantly increase the risk of cardiovascular disease in their female offspring, even when those offspring maintain a healthy diet throughout their lives.

Cardiovascular disease (CVD) is the leading cause of death globally, encompassing a range of disorders affecting the heart and blood vessels. In the United States alone, heart disease claimed the lives of nearly 703,000 people in 2022, accounting for one in every five deaths. While many factors contributing to CVD risk are well-known, this study sheds light on a previously unexplored aspect: the role of a father’s diet on their child’s health.

The research, published in the journal JCI Insight, focused on atherosclerosis, a chronic inflammatory condition that is the primary cause of CVD. Atherosclerosis occurs when plaque, composed of cholesterol, fat, and other substances, accumulates in artery walls, narrowing them and restricting blood flow to vital organs.

To investigate the impact of paternal diet on offspring heart health, the research team, led by Changcheng Zhou, a professor of biomedical sciences at UC Riverside, used mice lacking the LDL receptor (LDLR-deficient mice). These mice are prone to developing high cholesterol and atherosclerosis, making them an ideal model for studying heart disease.

Male LDLR-deficient mice were fed either a normal diet or a high-cholesterol diet for eight weeks before mating with females on a normal diet. The resulting offspring were then raised on a normal diet and examined for signs of atherosclerosis at 19 weeks of age.

The results were striking: female offspring of fathers who had consumed a high-cholesterol diet developed significantly larger arterial plaques compared to those whose fathers ate a normal diet. Surprisingly, male offspring showed no such difference, indicating that the effect is sex-specific.

To understand the mechanism behind this intergenerational effect, the researchers examined gene expression in the inner lining of the arteries (the intima) of the offspring. They discovered that female offspring of high-cholesterol diet fathers had increased expression of genes associated with inflammation and immune responses – key factors in the development of atherosclerosis.

Two proteins, CCN1 and CCN2, were found to be particularly elevated in the arterial plaques of female offspring from high-cholesterol diet fathers. These proteins can promote inflammation and the accumulation of immune cells in artery walls, potentially explaining the increased plaque formation.

The study also investigated how information about a father’s diet might be passed on to his offspring. Using an advanced sequencing technique called PANDORA-Seq, developed at UC Riverside, the researchers examined small RNA molecules in the fathers’ sperm. They found that a high-cholesterol diet altered the profile of these small RNAs, which can influence gene expression.

“It had been previously thought that sperm contribute only their genome during fertilization,” Zhou says in a media release. “However, recent studies by us and others have demonstrated that environmental exposures, including unhealthy diet, environmental toxicants, and stress, can alter the RNA in sperm to mediate intergenerational inheritance.”

The implications of this study are significant, suggesting that a man’s diet in the months before conception could have a lasting impact on his daughters’ heart health, even if those daughters maintain a healthy lifestyle themselves.

Source: https://studyfinds.org/dad-diet-daughters-heart-disease/

Medical marvel: How a man lived 78 years unaware he had 3 penises

Male reproductive system model. (Photo by NMK-Studio on Shutterstock)

Do you really know what you look like on the inside? Most people do not, and usually it takes surgery or medical imaging to take a look while we are still alive.

A case study published last week reports the rare finding of a man with “triphallia.” Most people would say the man had three penises. But anatomists, like myself, who teach health professionals about the structure of the human body, prefer the term penes (plural of penis).

This finding emerged from the dissection of the body of a 78-year-old man who had donated his body to science. It is a case that has left many anatomists scratching their heads, and ignited discussions about typical human anatomy and anatomical variation.

I too have an extra organ – an extra spleen – plus other anatomical variations regarding two muscles. It is highly likely you might also have anatomical variations, and not necessarily know.

Back to this case
According to the latest study, only one penis was externally visible. But when his body was dissected, there were two extra, smaller penises inside the scrotum.

The main penis was 77mm long and 24mm wide, with the smaller ones about half the size. However, the images provided in the study don’t seem to match the written descriptions in all places. So the study does need clarification.

Intriguingly, researchers identified a single urethra – the hollow tube from the bladder that allows urine (and sperm from the testes) to leave the body. This urethra traveled from the bladder through part of one of the smaller penises and along the length of the main penis, leaving out the third penis entirely.

Was there a misunderstanding in identifying these anatomical structures? Could the second penis simply be a misidentified part of the main one? Is this actually a case of diphallia – two penises? In either case, the man’s anatomy was different to what you’d typically see in anatomy textbooks.

The study suggests all three penises contained erectile tissue capable of engorgement. But it remains unclear whether they worked independently or together. Unfortunately, the authors did not confirm structures by examining them under the microscope, or report tracing the nerves or blood vessels, to shed more light.

There was an earlier case in a baby

A separate case of someone with three penises, which was documented in 2020, involved a three-month-old infant.

In this instance, the main penis was in its typical position, but you could see the extra ones on the perineum (between the anus and the scrotum in males).

Neither of the extra penises had a urethra, making them incapable of functioning typically. Ultimately, these non-functional penises were safely removed.

Such cases are rare, with only these two examples reported in medical databases.

So how does this happen? The answer may lie in how embryos develop.

Early in development
The penis begins to develop early in the first trimester of a 40-week pregnancy, a time when a woman may not know she’s pregnant.

During this critical period, the embryo may be exposed to various influences. These include toxins passed through the bloodstream if the mother falls ill, takes certain drugs while pregnant or is exposed to certain chemicals. There are also genetic factors that shape how organs develop.

By the fifth week of pregnancy, cells migrate to the midline of the embryo, where they help form the precursor to the penis.

Problems in this migratory process, abnormalities in a developmental gene (called “sonic hedgehog”), or fluctuations in testosterone levels or receptors during early fetal development, could potentially lead to the formation of additional penises.

Source: https://studyfinds.org/man-with-3-penises-triphallia/

300-year terror: How the printing press fueled witch hunts, misinformation in Europe

Three women executed as witches in Derneburg, Germany, in October 1555. Europeans began prosecuting suspected witches in the 14th century. 16th-century woodcut with modern watercolor. (Photo by Everett Collection on Shutterstock)

The invention of the printing press in 1450 revolutionized how people communicated. Books and pamphlets could be easily printed and sent across towns in hours, allowing people to spread ideas and knowledge. Though the printing press represented an intellectual milestone for humanity, a new study finds it is also the reason behind the mass hysteria and eventual deaths of so many “witches.”

One type of publication that became extremely popular across Europe starting in 1487 was the witch-hunting manual. The Malleus Maleficarum was a fan favorite, with copies spreading across cities and fueling the hunt for demonic witches. People read the manual’s descriptions of how to spot a witch, along with published news of witch trials in other towns. According to the authors, seeing what neighboring towns were doing to deal with witches influenced whether a town would adopt witch trials of its own.

“Cities weren’t making these decisions in isolation,” says lead author Kerice Doten-Snitker, a Complexity Postdoctoral Fellow at the Santa Fe Institute, in a statement. “They were watching what their neighbors were doing and learning from those examples. The combination of new ideas from books and the influence of nearby trials created the perfect conditions for these persecutions to spread.”

The belief in witchcraft was not something that sprang up one day. Europeans had believed in witches for centuries, but the subject was discussed only within small circles, such as religious scholars and local inquisitors. However, printed witch-hunting manuals like the Malleus Maleficarum put a greater spotlight on witches, providing a guide for finding, questioning, and prosecuting them. Over the 300 years of persecution and trials, roughly 90,000 people were accused, and nearly half of them were sentenced to death.

The new study, published in Theory and Society, built on previous research examining factors influencing the spread of witchcraft. These works emphasized economic and environmental factors, but the authors focused this time on social and trade networks and how they influenced people’s behaviors.

Researchers compared the timing of witch-hunting manual publications between 1400 and 1679 with the timing of witch trials in 553 cities. They found that with every new edition of Malleus Maleficarum, there was an increase in witch trials.

Along with the printing of witch manuals, the authors noticed that neighboring cities influenced whether a city would host witch trials. When one city adopted the practices in Malleus Maleficarum, others copied its behavior. This is known as ideational diffusion: the process by which ideas spread through a population. It took years for people to learn and accept the new ideas surrounding witchcraft. Once they did, however, it led to unprecedented persecution of witches.

Source: https://studyfinds.org/printing-presses-fueled-misinformation-witch-hunts-europe/

Amazing 30-year experiment captures evolution happening in real time

Two ecotypes of Littorina saxatilis marine snails, adapted to different environments. The Crab ecotype (left) is larger and wary of predators. The Wave ecotype (right) is smaller and has bold behavior. © David Carmelet

Normally, scientists have believed that it takes countless centuries for evolution to produce major changes in any species. However, a new study has witnessed this amazing process unfold in a figurative blink of an eye.

A team of researchers from the Institute of Science and Technology Austria (ISTA) and Norway’s Nord University have observed marine snails evolve to closely resemble their predecessors over just 30 years – which is a tiny fraction of time in evolutionary terms.

The story begins in 1988 when a toxic algal bloom wiped out populations of marine snails from small rocky outcrops, known as skerries, in the Koster archipelago near the Swedish-Norwegian border. While this environmental disaster might have seemed insignificant to most, for marine ecologist Kerstin Johannesson from the University of Gothenburg, it presented a unique opportunity to study evolution in action.

Four years after the algal bloom, in 1992, Johannesson decided to reintroduce snails to one of these now-empty skerries. Here’s the twist: instead of bringing back the same type of snails that previously lived there, she introduced a distinctly different population of the same species, Littorina saxatilis.

These marine snails, commonly found along North Atlantic shores, have evolved different traits to suit their specific environments. The two main types are known as “Wave snails” and “Crab snails.” Wave snails, which originally inhabited the skerries, are small with thin shells, large, rounded openings, and bold behavior – adaptations that help them survive in wave-battered environments. Crab snails, on the other hand, are larger with thicker shells, smaller elongated openings, and more cautious behavior – traits that protect them from crab predators in calmer waters.

Johannesson’s experiment involved introducing Crab snails to the skerry that had previously been home to Wave snails. The question was: How would these Crab snails adapt to their new wave-exposed environment?

The results published in the journal Science Advances were nothing short of remarkable. Within just a few generations – snails reproduce once or twice a year – scientists began to see evidence of adaptation. Over the course of 30 years, the transplanted Crab snails evolved to closely resemble the Wave snails that had inhabited the skerry before the algal bloom.

“Over the experiment’s 30 years, we were able to predict robustly what the snails will look like and which genetic regions will be implicated. The transformation was both rapid and dramatic,” says Diego Garcia Castillo, a graduate student at ISTA and one of the study’s lead authors, in a media release.

What makes this study particularly fascinating is that the snails didn’t evolve these new traits from scratch. Instead, they tapped into genetic diversity that was already present in their population, albeit at low levels. This existing genetic variation, combined with possible gene flow from neighboring Wave snail populations, allowed for rapid adaptation to the new environment.

The implications of this study extend far beyond the world of snails. In an era of rapid environmental change, understanding how species can adapt quickly is crucial.

“This work allows us to have a closer look at repeated evolution and predict how a population could develop traits that have evolved separately in the past under similar conditions,” explains Garcia Castillo.

Anja Marie Westram, a researcher at Nord University and co-corresponding author of the study, emphasizes the importance of genetic diversity in adaptation.

“Not all species have access to large gene pools and evolving new traits from scratch is tediously slow. Adaptation is very complex and our planet is also facing complex changes with episodes of weather extremes, rapidly advancing climate change, pollution, and new parasites,” says Westram. “Perhaps this research helps convince people to protect a range of natural habitats so that species do not lose their genetic variation.”

As our planet faces complex changes, including extreme weather events, climate change, pollution, and new parasites, the ability of species to adapt quickly could be the key to their survival. This study provides a glimpse into how evolution can work on relatively short timescales, offering hope for species facing rapid environmental changes.

Source: https://studyfinds.org/snail-evolution-in-real-time/

Recent mass extinction event never actually happened, shock study reveals

Aerial image of farmland for dairy cattle next to a surviving forest patch (Credit: Dawson White)

In a jaw-dropping study, researchers say that a recent mass extinction event in South America never took place! For 40 years, there has been a long-standing belief about a large-scale plant extinction event in a tropical cloud forest in Ecuador. Now, however, an international team of botanists is questioning if it actually happened like history records it did.

The study, published in Nature Plants and led by Dawson M. White of Harvard University, challenges the concept of “Centinelan extinction” — the idea that deforestation can cause the immediate extinction of plant species known only from a single location.

The story begins in the 1980s when botanists Calaway Dodson and Alwyn Gentry reported that the Centinela ridge in western Ecuador harbored around 90 plant species found nowhere else on Earth. They also claimed that these unique plants had likely gone extinct due to widespread deforestation in the area. This dramatic scenario, dubbed “Centinelan extinction,” became a cautionary tale in conservation biology, highlighting the potential for rapid biodiversity loss in tropical regions.

The new research tells a very different story

By meticulously combing through herbarium records and databases and conducting extensive fieldwork, White and his colleagues found that 99% of the supposedly extinct plants have actually been discovered elsewhere. Only one species, a tiny orchid called Bifrenaria integrilabia, remains known solely from Centinela.

“It’s a miracle,” says White, a postdoctoral researcher in the Department of Organismic and Evolutionary Biology at Harvard, in a media release. “Many of Centinela’s plants are still on the brink of extinction, but fortunately the reports of their demise were exaggerated. There’s still time to save them and turn this story around.”

This finding doesn’t diminish the threat of deforestation to biodiversity. The study reveals that many of these plants are still rare and endangered, with over 150 species qualifying as globally threatened. What it does show is the importance of continued botanical exploration and the resilience of some plant species in the face of habitat loss.

“Understanding which plants are growing in a given Andean cloud forest is a monumental task because you will undoubtedly find new species,” White concludes. “What our investigation highlights is that it takes decades of work from taxonomic experts to describe new species in such forests. And only once we have names for these species that are then noted in our scientific networks can we begin to understand where else these plants grow and their risk of extinction.”

The researchers also made surprising discoveries during their fieldwork. Despite reports of complete deforestation, they found numerous small remnants of the original forest and thousands of mature trees left standing in pastures and ravines. These patches, while fragmented, continue to harbor many of the rare plant species once thought extinct.

Perhaps most excitingly, the team discovered at least eight new plant species during their recent surveys. This highlights that even well-studied tropical areas can still yield botanical surprises and emphasizes the need for ongoing research in these biodiversity hotspots.

“One of our most astonishing discoveries is a totally new species of canopy tree in the Cotton family,” says study co-author Andrea Fernández of Northwestern University and the Chicago Botanic Garden. “It’s one of the tallest trees we have encountered, but it’s extremely rare; there could be only 15 individuals alive in Centinela. It’s now being actively targeted by local loggers, so we are rushing to describe this new tree species and get its seeds growing in botanic gardens.”

The Centinela case serves as a powerful reminder of the complexities involved in understanding and conserving biodiversity. It underscores the value of persistent scientific investigations and the danger of jumping to conclusions based on limited data. While the immediate extinction threat may have been overstated, the study reinforces the urgent need for conservation efforts in tropical cloud forests, which remain severely threatened by human activities.

This research also highlights the critical role of herbaria — collections of preserved plant specimens — in biodiversity research. By allowing scientists to track plant distributions over time and space, these “libraries of life” provide invaluable data for understanding and protecting Earth’s plant diversity.

Source: https://studyfinds.org/mass-extinction-never-happened/

The secret to beating cancer may be sitting in beer yeast

Photo by Alexa on Pixabay

In an unexpected twist, the humble yeast used to brew your favorite beer might hold the key to revolutionizing cancer treatment. Scientists at the University of Virginia School of Medicine, collaborating with researchers at EMBL in Germany, have uncovered a surprising survival strategy in yeast cells that could unlock new ways to combat cancer.

The study, published in Nature Communications, reveals how a common brewing yeast, Schizosaccharomyces pombe (S. pombe), can essentially hibernate when faced with nutrient shortages. This ability to “go dormant” bears a striking resemblance to how cancer cells survive in nutrient-deprived environments, making this discovery potentially game-changing for cancer research.

“Cells can take a break when things get tough by going into deep sleep in order to stay alive, then at a later point they seemingly just come back. That’s why we need to understand the basics of adaptation to starvation and how these cells become dormant to stay alive and avoid death,” explains Dr. Ahmad Jomaa, a researcher from UVA’s Department of Molecular Physiology and Biological Physics, in a media release.

Why study beer yeast to understand cancer?
S. pombe has been a brewer’s friend for centuries, but it’s also a scientist’s best pal. This yeast shares remarkable similarities with human cells, making it an invaluable research tool for understanding cellular processes in both healthy and cancerous cells.

Using cutting-edge imaging techniques called cryo-electron microscopy and tomography – think of it as a super-powerful 3D microscope – the research team made a startling discovery. When yeast cells face starvation, they wrap their cellular batteries, known as mitochondria, in an unexpected blanket. This blanket is made up of deactivated ribosomes, which are usually responsible for producing proteins in the cell.

“We knew that cells will try to save energy and shut down their ribosomes, but we were not expecting them to attach in an upside-down state on the mitochondria,” says Maciej Gluc, a graduate student involved in the study.

This peculiar “upside-down” attachment had never been observed before and could be the key to understanding how cells enter and exit dormancy. While the exact reason for this unusual behavior remains a mystery, the researchers have some theories.

“There could be different explanations. A starved cell will eventually start digesting itself, so the ribosomes might be coating the mitochondria to protect them. They might also attach to trigger a signaling cascade inside the mitochondria,” suggests Dr. Simone Mattei from EMBL.

How does this relate to cancer?
Cancer cells, in their relentless growth, often face nutrient shortages. To survive, they can slip into a dormant state, becoming “invisible” to our immune system and resistant to treatments. Understanding how cells enter and exit this dormant state could lead to new strategies for targeting cancer cells, potentially improving patient outcomes and preventing relapses.

Dr. Jomaa and his team are now setting their sights on the next big question: how do cells wake up from this deep sleep? They plan to continue their work with yeast while also investigating the process in cultured cancer cells, though Jomaa admits this is “not an easy task.” The ultimate goal? To discover new markers that can track dormant cancer cells.

“These cells are not easily detected in diagnostic settings,” Jomaa explains, “but we are hopeful that our research will generate more interest in helping us reach our goal.”

This groundbreaking research was conducted at the UVA Cancer Center, one of only 57 “comprehensive” cancer centers recognized by the National Cancer Institute for excellence in cancer research and treatment.

While we’re still a long way from seeing these findings translated into new cancer treatments, this study offers a fascinating glimpse into the unexpected connections between the ancient art of brewing and cutting-edge cancer research. It’s a reminder that in science, breakthrough insights can come from the most unlikely places – even the bottom of your beer glass.

Source: https://studyfinds.org/beating-cancer-beer-yeast/

Coffee during pregnancy likely won’t harm baby’s brain, study says

(Photo by Yan Krukov from Pexels)

For many, that morning cup of coffee is a non-negotiable ritual. For expectant mothers, however, the decision to indulge in a daily caffeine fix can be fraught with anxiety. Now, a new study from Norway offers reassuring news for coffee-loving moms-to-be.

Researchers from several institutions, including the University of Queensland and the University of Oslo, set out to investigate whether maternal coffee consumption during pregnancy affects children’s neurodevelopment. Their findings, published in the journal Psychological Medicine, suggest that moderate coffee intake during pregnancy is unlikely to significantly impact a child’s brain development.

The study, one of the largest of its kind, analyzed data from over 71,000 Norwegian families participating in the Norwegian Mother, Father and Child Cohort Study (MoBa). This massive dataset allowed researchers to examine the relationship between mothers’ coffee habits during pregnancy and their children’s developmental outcomes up to eight years of age.

Initially, the results seemed to confirm what many expectant mothers fear: higher maternal coffee consumption was associated with various neurodevelopmental difficulties in children, including problems with social communication, attention, and hyperactivity. However, when researchers dug deeper and accounted for other factors like smoking, alcohol use, education, and income, most of these associations disappeared.
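
To see why adjusting for confounders matters, here is a toy sketch of the before-and-after comparison involved. It is illustrative only: the variables, numbers, and model below are hypothetical stand-ins, not the study’s actual data or analysis.

```python
# Toy illustration of confounding, not the study's data or variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 10_000

# A confounder (here, smoking) influences both coffee intake and the outcome.
smoking = rng.binomial(1, 0.3, n)
coffee_cups = rng.poisson(1 + 2 * smoking)          # smokers drink more coffee
difficulty = 0.5 * smoking + rng.normal(0, 1, n)    # coffee itself has no effect

df = pd.DataFrame({"coffee_cups": coffee_cups, "smoking": smoking,
                   "difficulty": difficulty})

raw = smf.ols("difficulty ~ coffee_cups", data=df).fit()
adjusted = smf.ols("difficulty ~ coffee_cups + smoking", data=df).fit()

# The raw coefficient is positive, but it shrinks toward zero once the
# confounder enters the model -- mirroring what the study observed.
print(round(raw.params["coffee_cups"], 3),
      round(adjusted.params["coffee_cups"], 3))
```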

To further investigate any potential causal relationships, the team employed a sophisticated genetic technique called Mendelian randomization. This method uses genetic variants associated with coffee consumption to estimate the effect of coffee intake on child development, helping to overcome some limitations of traditional observational studies.
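
For the statistically curious, the simplest version of this idea is the so-called Wald ratio, combined across genetic variants by inverse-variance weighting. The sketch below uses made-up numbers purely for intuition; the study’s actual pipeline is far more elaborate.

```python
# Minimal Wald-ratio Mendelian randomization, for intuition only.
# All numbers are invented; none come from the study itself.
import numpy as np

# Per-variant association estimates from two (hypothetical) studies:
beta_gx = np.array([0.12, 0.08, 0.15])    # variant -> coffee intake
beta_gy = np.array([0.006, 0.002, 0.009]) # same variant -> child outcome
se_gy = np.array([0.004, 0.003, 0.005])   # standard errors for beta_gy

# Wald ratio per variant: implied effect of the exposure on the outcome.
wald = beta_gy / beta_gx

# Inverse-variance-weighted (IVW) combination across variants.
weights = (beta_gx / se_gy) ** 2
ivw_estimate = np.sum(wald * weights) / np.sum(weights)

# An estimate near zero would suggest no causal effect of coffee.
print(round(ivw_estimate, 4))
```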

The genetic analysis found little evidence that maternal coffee consumption during pregnancy causes most neurodevelopmental difficulties in children. While there was a hint of an association with social communication difficulties at age eight, further investigation suggested this link might be due to other factors rather than coffee itself.

“We used a method called Mendelian randomization which uses genetic variants that predict coffee drinking behavior and can separate out the effect of different factors during pregnancy,” says co-lead author Dr. Gunn-Helen Moen, from the University of Queensland’s Institute for Molecular Bioscience, in a statement. “It mimics a randomized controlled trial without subjecting pregnant mothers and their babies to any ill effects. The benefit of this method is that the effects of caffeine, alcohol, cigarettes, and diet can be separated in the data, so we can look solely at the impact of caffeine on the pregnancy.”

The study’s results align with current health guidelines, which typically allow for moderate caffeine intake during pregnancy. The American College of Obstetricians and Gynecologists, for instance, states that consuming less than 200 milligrams of caffeine per day (about one 12-ounce cup of coffee) is not linked to an increased risk of miscarriage or preterm birth.

“Scandinavians are some of the biggest coffee consumers in the world, drinking at least 4 cups a day, with little stigma about drinking coffee during pregnancy,” adds Dr. Moen. “Our analysis found no link between coffee consumption during pregnancy and children’s neurodevelopmental difficulties.”

It’s important to note that while this study focused on coffee, caffeine can be found in various foods and beverages, including tea, chocolate, and some soft drinks. Pregnant women should be aware of their total caffeine intake from all sources.

The research team acknowledges that more studies are needed to fully understand the effects of maternal coffee consumption on child development. However, this large-scale study provides valuable evidence that can help inform both medical advice and personal decisions for expectant mothers.

Source: https://studyfinds.org/coffee-pregnancy-babys-brain/?nab=0

Rabbit hole of despair: The more content you consume, the worse mental health gets

(Photo by SB Arts Media on Shutterstock)

Whether it’s internet rabbit holes or endless social media interactions, a new poll finds the more time you spend obsessing over what’s happening on your screen, the worse your mental health gets.

According to the survey, the average American feels like they lose three days per month while consuming online content. The poll of 2,000 Americans revealed that 36 days of our year are lost to scrolling, streaming, and bingeing content. It’s even worse for younger people. Gen Z Americans feel like they lose closer to five days per month.

The comprehensive study on media consumption trends by Talker Research also revealed that excessive content consumption can result in feelings of guilt, with the average respondent having three pangs of guilt per month. On average, Americans consume about six hours of content per day, with Gen Z Americans consuming closer to seven.

In honor of World Mental Health Day, researchers split respondents based on their self-reported mental health and found an uncomfortable connection between poor mental health and heavy media consumption.

Those with “very poor” mental health lose nearly six days per month to content consumption, while 19% of those who say their mental health is “very poor” estimate that they lose 15 or more days per month. In comparison, those with excellent and good mental health lose the fewest days (2.7).

This also aligned with feelings of guilt. Those with “very poor” mental health feel guilty most often — roughly seven times per month. More than two in five of those polled (42%) admit they feel like they consume “too much” media, and 36% say their mood is “often” negatively affected by something they see on social media.

Gen Z Americans were the most likely to feel like they consumed too much media, with 66% agreeing with that sentiment. Interestingly, those with “very poor” mental health were also found to be the most likely to use TikTok regularly (38%) and the most likely to report being “very likely” to be on their phone while watching something on television (46%).

Dr. Sham Singh, a Harbor UCLA-trained psychiatrist at Winit Clinic, offered three helpful tips for managing screen time to alleviate feelings of guilt.

  • Implement a “Tech-Free” Zone: “Creating designated areas where technology is off-limits in your home can significantly impact your daily habits,” Singh says in a statement. “For instance, by making your bedroom a tech-free zone, you promote better sleep hygiene and relaxation, free from the distractions of notifications and screens. Similarly, establishing a tech-free dining room encourages meaningful conversations and family bonding during meals. These intentional spaces reduce screen time, foster healthier interactions, and leave room to enjoy quiet moments.”
  • Set Phone-Free Intentions: “Before you reach for your phone, I advise you to take a moment to set a clear intention for its use,” says Singh. “Ask yourself what you need to accomplish—whether it’s checking messages, researching a topic, or responding to an email. This practice encourages a more mindful approach to technology, helping you avoid the trap of mindless scrolling. Having a defined purpose lets you stay focused on your task and minimize the likelihood of getting sidetracked by social media or other distractions.”
  • Reflect on Content Consumption: “Keeping a journal of your experiences with various types of content can be an enlightening practice,” Singh suggests. “After consuming media—be it social media, news articles, or videos—take a moment to jot down your feelings and thoughts. Did you feel inspired, informed, or drained? This reflection helps you discern which content enriches your life and which might feel like a time-waster. Over time, you’ll develop a clearer picture of your media consumption patterns, enabling you to make more informed choices about what to engage with in the future.”

Indulging your sweet tooth could lead to depression, surprising study reveals

(© Drobot Dean – stock.adobe.com)

Do you find yourself reaching for that extra cookie or unable to resist a sugary soda with your lunch? Your sweet tooth might be doing more than just adding a few extra inches to your waistline. A new study by a team at the University of Surrey has revealed a strong connection between our love of sweet treats and serious diseases, including depression and diabetes.

Researchers, publishing their work in the Journal of Translational Medicine, dug into the food preferences of a whopping 180,000 volunteers from the UK Biobank. Using artificial intelligence, they sorted everyone into three main groups based on what they like to eat. Think of it as your food personality type.

There are the health nuts who are all about those fruits and veggies, passing on the sugary stuff and animal products. Then there are the “everything bagels” who enjoy a bit of everything – meats, fish, some veggies, and yeah, they’ll have that dessert too. Finally, we have the sweet tooths, for whom sugar is king. They’re all about those sweet treats and sugary drinks, often leaving the healthier options on the side.
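
The study’s exact “AI” method isn’t described here, but k-means clustering is one common way to sort people into taste profiles like these. The sketch below uses simulated preference scores and hypothetical food categories purely to show the general idea.

```python
# Illustrative stand-in for the study's grouping step, using k-means.
# All scores and categories are simulated, not taken from UK Biobank.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Simulate liking scores (roughly 1-9) for six hypothetical categories:
# [fruit, veg, meat, fish, sweets, sugary drinks]
health   = rng.normal([8, 8, 3, 4, 2, 1], 1.0, size=(300, 6))
omnivore = rng.normal([6, 5, 7, 6, 5, 4], 1.0, size=(300, 6))
sweet    = rng.normal([4, 3, 4, 3, 8, 8], 1.0, size=(300, 6))
preferences = np.vstack([health, omnivore, sweet])

# Ask for three clusters, mirroring the three "food personality" types.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(preferences)

# Each cluster center is that group's "average palate"; the cluster with
# high values in the last two columns is the sweet-tooth group.
print(km.cluster_centers_.round(1))
```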

Now, here’s where it gets really interesting. The researchers didn’t just stop at sorting people into groups. They took a deep dive into the volunteers’ blood samples, looking at nearly 3,000 proteins and 168 metabolites.

For reference, proteins are like the body’s multitool. They do everything from fighting off nasty infections to helping you flex those muscles and even powering your thoughts. Metabolites, on the other hand, are tiny molecules that pop up during digestion and other chemical processes in your body. Think of them as little biological clues that can tell us a lot about how well your body is running. By comparing these biological markers between the different food preference groups, the researchers uncovered some eye-opening results.

“The foods that you like or dislike seem to directly link to your health. If your favorite foods are cakes, sweets, and sugary drinks, then our study’s results suggest that this may have negative effects on your health,” says Professor Nophar Geifman, who led the study, in a media release.

The study found that people in the sweet tooth group were 31% more likely to have depression. That’s not all – these individuals also had higher rates of diabetes and heart problems compared to the other groups.

Digging deeper into the blood work, the researchers found more cause for concern.

“In the sweet tooth group, they had higher levels of C-reactive protein, which is a marker for inflammation. Their blood results also show higher levels of glucose and poor lipid profiles, which are strong warning signs for diabetes and heart disease,” Professor Geifman explains.

It’s not all doom and gloom, though. The health-conscious eaters, who tended to have more fiber in their diets, showed lower risks for heart failure, chronic kidney diseases, and stroke. The omnivores, our “everything bagels,” fell somewhere in the middle with moderate health risks.

Now, you might be thinking, “But I thought a little sugar was okay!” Well, you’re not wrong. The British Nutrition Foundation notes that, on average, adults in the U.K. get between 9% and 12.5% of their daily calories from “free sugars” – that’s the kind added to food and drinks, not the natural sugars found in whole fruits and vegetables. The biggest culprits? Biscuits, buns, cakes, pastries, and fruit pies top the list for adults. Once sugary soft drinks and alcoholic beverages are added in, however, that accounts for most of our added sugar intake.

“Processed sugar is a key factor in the diet of many, and these results are yet more evidence that, as a society, we should do all that we can to think before we eat,” Prof. Geifman concludes, stressing that the goal is not to dictate behavior. “No one wants to tell people what to do; our job is just informing people.”

So, what’s the takeaway here? It’s not about swearing off sweets forever or feeling guilty about every cookie. Instead, it’s about being aware of how our food choices might be impacting our health in ways we hadn’t considered before. Maybe next time you’re faced with a choice between an apple and a candy bar, you might think twice. Your body – and your future self – just might thank you for it.

Source: https://studyfinds.org/sweet-tooth-depression/?nab=0

‘Absolutely wild’: Showerheads and toothbrushes found to be covered in viruses

(Credit: ezps/Shutterstock)

Researchers are giving us an unnerving look at the hidden world of microbes living in our bathrooms. Specifically, the study finds our showerheads and toothbrushes are teeming with viruses.

The study, published in Frontiers in Microbiomes, reveals that the microbial populations found on showerheads and toothbrushes are surprisingly distinct, despite both being located in bathrooms and regularly exposed to water. This finding challenges the notion that all bathroom microbes are created equal.

Researchers at Northwestern University analyzed 92 showerhead and 34 toothbrush samples from across the United States, using advanced DNA sequencing techniques to identify the bacteria and viruses present. The results paint a fascinating picture of the microbial world that surrounds us in our most private spaces.

“The number of viruses that we found is absolutely wild,” says Northwestern’s Erica M. Hartmann, who led the study, in a university release. “We found many viruses that we know very little about and many others that we have never seen before. It’s amazing how much untapped biodiversity is all around us. And you don’t even have to go far to find it; it’s right under our noses.”

One of the most intriguing discoveries was the presence of bacteriophages — viruses that infect bacteria — in both showerheads and toothbrushes. These tiny viral predators play a crucial role in shaping bacterial communities and may even influence our health in ways we don’t yet fully understand.

Interestingly, the study also found that toothbrushes harbor a more diverse range of microbes compared to showerheads. This difference is likely due to the variety of inputs toothbrushes receive, including bacteria from our mouths, food particles, and environmental microbes. Showerheads, on the other hand, are primarily exposed to tap water and thus host a more limited microbial community.

“We saw basically no overlap in virus types between showerheads and toothbrushes,” Hartmann explains. “We also saw very little overlap between any two samples at all. Each showerhead and each toothbrush is like its own little island. It just underscores the incredible diversity of viruses out there.”
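
The paper’s exact metrics aren’t given here, but microbiome studies typically quantify “diversity” with indices such as Shannon’s, and “overlap” with measures such as the Jaccard score. A toy sketch of both, using made-up abundance counts:

```python
# Hedged sketch of two standard microbiome metrics; numbers are invented.
import math

def shannon_diversity(counts):
    """Shannon index H = -sum(p_i * ln(p_i)) over taxon abundance counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def jaccard_overlap(a, b):
    """Fraction of taxa shared between two samples (sets of taxon names)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# More taxa, spread more evenly -> higher diversity (like the toothbrushes):
toothbrush = [40, 25, 15, 10, 5, 3, 2]
showerhead = [80, 15, 5]
print(shannon_diversity(toothbrush), shannon_diversity(showerhead))

# "Basically no overlap" corresponds to a Jaccard score near zero:
print(jaccard_overlap({"phage_A", "phage_B"}, {"phage_C"}))  # -> 0.0
```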

The research team also identified several bacterial families that were common to both showerheads and toothbrushes, including Burkholderiaceae, Caulobacteraceae, and Sphingomonadaceae. These bacterial groups seem to thrive in both environments, suggesting they may be particularly well-adapted to bathroom conditions.

Source: https://studyfinds.org/showerheads-toothbrushes-viruses/?nab=0
