Screen Time in Bed Raises Insomnia Risk by 59% Per Hour

(© Point of view – stock.adobe.com)

Using a smartphone or tablet for just one hour after going to bed raises the risk of insomnia by 59%, according to new research. This finding comes from one of the largest studies conducted on screen use and sleep among university students, highlighting how our nightly digital habits may be robbing us of crucial rest.

Researchers from the Norwegian Institute of Public Health examined data from over 45,000 university students and found that each additional hour spent using screens after going to bed not only significantly increased insomnia risk but also cut sleep duration by about 24 minutes. What’s particularly notable is how consistent this effect appears to be—regardless of whether students were scrolling through social media, watching movies, or gaming.

The Digital Bedtime Crisis

Sleep problems have reached concerning levels among university students globally. The study reports that about 30% of Norwegian students sleep less than the recommended 7-9 hours per night. Even more troubling, over 20% of male students and 34% of female students report sleep issues meeting clinical criteria for insomnia, numbers that have been rising in recent years.

Smartphones have transformed our bedrooms into entertainment centers. Previous research shows that over 95% of students use screens in bed, spending an average of 46 minutes on them after going to bed. Some studies have even found that 12% of young adults engage with their smartphones during periods they’ve self-reported as sleep time.

Many sleep experts have speculated that social media might be especially harmful for sleep compared to more passive activities like watching television. The reasoning seems logical – social media platforms are designed to keep users engaged through interactions, notifications, and endless scrolling features that make it difficult to disconnect. Plus, the social obligations and fear of missing out associated with platforms like Instagram and TikTok might make users more reluctant to put their devices away at bedtime.

Surprising Findings Challenge Common Beliefs

Researchers divided participants into three groups: those who exclusively used social media in bed (about 15% of the sample), those who used social media combined with other screen activities (69%), and those who engaged in non-social media screen activities only (15%).

Contrary to expectations, students who exclusively used social media in bed reported fewer insomnia symptoms and longer sleep duration compared to the other groups. The non-social media group experienced the highest rates of insomnia and shortest sleep duration, while those mixing social media with other activities were intermediate.

This unexpected outcome challenges the notion that social media is uniquely harmful to sleep. Instead, the research points to the total time spent on screens in bed, regardless of the specific activity, as the strongest predictor of sleep problems. Each additional hour of screen time after going to bed was consistently associated with poorer sleep outcomes across all three groups.
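To get a feel for how a 59% per-hour increase in odds adds up, here is a rough back-of-the-envelope sketch. The study reports the per-hour odds ratio and the per-hour sleep loss; the compounded figures below are illustrative extrapolations under a multiplicative assumption, not numbers from the paper:

```python
# Illustrative arithmetic based on the study's reported per-hour figures:
# each extra hour of in-bed screen time was associated with 59% higher
# odds of insomnia and roughly 24 minutes less sleep.
PER_HOUR_ODDS_RATIO = 1.59
SLEEP_LOSS_MIN_PER_HOUR = 24

def cumulative_odds_ratio(hours: float) -> float:
    """Odds ratio relative to no in-bed screen time,
    assuming the per-hour effect compounds multiplicatively."""
    return PER_HOUR_ODDS_RATIO ** hours

def expected_sleep_loss(hours: float) -> float:
    """Approximate sleep lost in minutes, assuming a linear association."""
    return SLEEP_LOSS_MIN_PER_HOUR * hours

for h in (1, 2, 3):
    print(f"{h} h of screen time: odds ratio {cumulative_odds_ratio(h):.2f}, "
          f"~{expected_sleep_loss(h):.0f} min less sleep")
```

Under these assumptions, two hours of in-bed screen time would correspond to roughly 2.5 times the odds of insomnia symptoms and about 48 minutes less sleep.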

Why might social media-only users sleep better? Researchers propose that exclusively using social media might reflect a preference for socializing and maintaining connections with others, which generally protects against sleep problems. Being socially engaged has been linked to better sleep in numerous studies.

Alternatively, those experiencing the most sleep difficulties might deliberately avoid social media before bed, instead turning to activities like watching movies or listening to music as sleep aids. Many people with insomnia use screen-based activities to distract themselves from negative thoughts or anxiety that prevent sleep.

What This Means For Your Sleep

The study, published in Frontiers in Psychiatry, reveals how screens affect sleep through several pathways: direct displacement (screen time replacing sleep time), light exposure (suppressing melatonin production), increased mental arousal (making it harder to fall asleep), and sleep interruption (notifications disturbing sleep).

The findings from this study largely support the displacement hypothesis. If increased arousal from interactive content were the main factor, we would expect to see different associations between sleep and various screen activities. Instead, the consistent relationship between screen time and sleep problems across activity types indicates that simply spending time on screens—time that could otherwise be spent sleeping—may be the most important factor.

For university students already struggling with academic pressure, social adjustment, and mental health challenges, poor sleep represents an additional burden with potentially serious consequences. Sleep deprivation impairs attention, memory, and other cognitive functions crucial for academic success.

Non-screen users had 24% lower odds of reporting insomnia symptoms, suggesting that keeping devices out of the bedroom is a worthwhile sleep hygiene practice. Even if it’s not a complete solution to sleep difficulties, it represents a behavior that could meaningfully improve sleep for many young adults.

“If you struggle with sleep and suspect that screen time may be a factor, try to reduce screen use in bed, ideally stopping at least 30–60 minutes before sleep,” says lead author Dr. Gunnhild Johnsen Hjetland in a statement. “If you do use screens, consider disabling notifications to minimize disruptions during the night.”

The next time you’re tempted to bring your phone to bed “just to check a few things,” remember the Norwegian students’ experience: each hour spent on that screen, regardless of what you’re doing, might cost you 24 minutes of precious sleep and significantly increase your chances of developing insomnia. Given what we know about the essential role of sleep in physical and mental health, learning, and overall wellbeing, that’s a trade-off worth reconsidering.

Source: https://studyfinds.org/screen-time-bed-insomnia-risk/

Webb Telescope Catches Earliest Evidence of the Universe Turning On Its Lights

A close view on one of the most distant galaxies known: On the left are some 10,000 galaxies at all distances, observed with the James Webb Space Telescope. The zoom-in on the right shows, in the center as a red dot, the galaxy JADES-GS-z13-1. Its light was emitted 330 million years after the Big Bang and traveled for almost 13.5 billion years before reaching Webb’s golden mirror. Credit: ESA/Webb, NASA & CSA, JADES Collaboration, J. Witstok, P. Jakobsen, A. Pagan (STScI), M. Zamani (ESA/Webb).

At a time when light couldn’t easily travel through space due to a thick fog of neutral hydrogen, one galaxy managed to carve out its own bubble of clear space, allowing us to detect a specific light signal that should have been completely absorbed. This cosmic lighthouse from more than 13 billion years ago gives us our earliest direct glimpse of how the universe transitioned from darkness to light.

The galaxy, cataloged as JADES-GS-z13-1-LA, was observed at what scientists call a redshift of 13. While that technical term might not mean much to most of us, it represents an incredible distance in both space and time. When we look at this galaxy, we see light that has traveled for over 13 billion years to reach us.

This study, published in Nature, used the James Webb Space Telescope to observe this early galaxy. The scientists detected Lyman-alpha emission, a specific wavelength of light that’s easily absorbed by neutral hydrogen, the gas that filled the early universe. Finding this emission suggests this galaxy was actively clearing the cosmic fog around it, like turning on a light in a dark room.
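A redshift of 13 means every wavelength is stretched by a factor of 1 + z = 14 on its journey to us, which is why this ultraviolet emission line shows up in Webb’s infrared instruments. A quick check of the numbers (the rest-frame Lyman-alpha wavelength of 121.567 nm is a standard physical constant, not a figure from this article):

```python
# Lyman-alpha is emitted in the ultraviolet at 121.567 nm (rest frame).
REST_LYMAN_ALPHA_NM = 121.567

def observed_wavelength_nm(rest_nm: float, z: float) -> float:
    """Observed wavelength after cosmological redshift:
    lambda_obs = lambda_rest * (1 + z)."""
    return rest_nm * (1 + z)

obs = observed_wavelength_nm(REST_LYMAN_ALPHA_NM, 13)
print(f"Lyman-alpha at z=13 is observed near {obs / 1000:.2f} micrometers")
```

The line lands near 1.7 micrometers, in the near-infrared range that Webb was built to observe.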

From Cosmic Dark Ages to First Light

Recent observations with the James Webb Space Telescope have already revealed surprisingly bright galaxies existed earlier than astronomers expected. But this new finding provides something more concrete: direct evidence of reionization, the cosmic transformation that brought the universe out of darkness.

For context, in the first few hundred thousand years after the Big Bang, the universe expanded and cooled enough for protons and electrons to combine into neutral hydrogen atoms. This created a cosmic fog that blocked most light from traveling freely for hundreds of millions of years, a period astronomers call the cosmic “dark ages.”

Eventually, the first stars and galaxies began forming and producing ultraviolet radiation that started breaking apart these neutral hydrogen atoms. This gradually made the universe transparent to light (reionization).

Breaking Through the Cosmic Fog

The research team analyzed this distant galaxy using imaging and spectroscopy from JWST’s powerful instruments. The data revealed not just the usual signs of light being blocked by early-universe hydrogen, but also a surprisingly bright signal of light breaking through. Such strong emissions had previously only been seen in much younger galaxies when more of the universe had already been cleared of neutral hydrogen.

Astronomers also saw what they call an “extremely blue ultraviolet continuum” (essentially meaning this galaxy appears very blue in color). The fact that we could even see the Lyman-alpha emission means the galaxy was incredibly good at making and releasing powerful radiation, strong enough to break apart the hydrogen gas around it.

“We know from our theories and computer simulations, as well as from observations at later epochs, that the most energetic UV light from the galaxies ‘fries’ the surrounding neutral gas, creating bubbles of ionized, transparent gas around them,” says study author Joris Witstok from the University of Copenhagen, in a statement. “These bubbles percolate the Universe, and after around a billion years, they eventually overlap, completing the epoch of reionization. We believe that we have discovered one of the first such bubbles.”

What could produce such powerful radiation in this ancient galaxy? One explanation involves extremely massive, hot stars that are much more efficient at producing ionizing radiation than typical stars today. These cosmic giants could be heating surrounding gas to temperatures exceeding 100,000 Kelvin, far hotter than our Sun’s surface at about 5,800 Kelvin.

Another possibility is that this galaxy contains an active supermassive black hole. The intense radiation from material falling into such a black hole could efficiently ionize nearby gas. Supporting this idea, the researchers found the galaxy appears extremely compact, smaller than 114 light-years across, which is more compact than most galaxies seen at similar distances.

“Most galaxies are known to host a central, supermassive black hole. As these monsters engulf surrounding gas, the gas is heated to millions of degrees, making it shine brightly in X-rays and UV before disappearing forever,” says Witstok.

The researchers also considered whether this might be one of the universe’s very first generation of stars, called Population III stars, formed from pristine gas containing only hydrogen and helium. These stars would be substantially more massive and hotter than later stars. However, the galaxy seems slightly too bright to fit this explanation perfectly.

Rewriting the Timeline of Cosmic Dawn

Whatever is powering this ancient light source, its discovery reshapes our understanding of how the universe transitioned from darkness to light. Until recently, the consensus among astronomers was that reionization did not begin until the Universe was around half a billion years old and finished roughly half a billion years after that. But this study pushes the beginning of reionization significantly earlier than previously thought.

The finding also provides evidence for an important physical process called Wouthuysen-Field coupling, where Lyman-alpha photons affect the spin temperature of hydrogen atoms. Scientists hope to detect this with radio telescopes searching for signals from the early universe.

“We knew that we would find some of the most distant galaxies when we built Webb,” says study author Peter Jakobsen from the University of Copenhagen. “But we could only dream of one day being able to probe them in such detail that we can now see directly how they affect the whole Universe.”

The universe’s first light didn’t switch on all at once; it started with galaxies like this one, each creating its own bubble of clear space that eventually merged with others to transform the entire cosmos. By pushing back the timeline of this process and showing it began with ordinary galaxies rather than exceptional ones, this discovery connects the dots between the universe’s first few hundred million years and the transparent cosmos that would eventually allow for our existence.

Source: https://studyfinds.org/webb-telescope-universe-turning-on-its-lights/

How Being Bilingual May Help the Brain Resist Alzheimer’s Damage

Knowing two languages could preserve your brain for longer, even with Alzheimer’s. (stoatphoto/Shutterstock)

Learning a second language offers benefits beyond ordering food on vacation or reading foreign literature. Recent research from Concordia University suggests bilingualism might actually help protect the brain against some devastating effects of Alzheimer’s disease.

Scientists have long observed that some people maintain their thinking abilities despite significant brain damage. This disconnect, where brain deterioration doesn’t necessarily cause expected cognitive problems, has prompted researchers to develop ideas like “brain reserve,” “cognitive reserve,” and “brain maintenance” to explain this resilience. This study, published in Bilingualism: Language and Cognition, found evidence that speaking two or more languages might boost this resilience, especially through brain maintenance in people with Alzheimer’s.

Alzheimer’s accounts for about two-thirds of dementia cases worldwide and typically progresses from subjective cognitive decline (SCD) to mild cognitive impairment (MCI) before developing into full Alzheimer’s. This progression usually comes with brain shrinkage, particularly in the medial temporal lobe, which includes the hippocampus, a structure essential for forming new memories.

Earlier studies suggested bilingual individuals might experience a 4-to-5-year delay in Alzheimer’s symptom onset compared to those who speak just one language. But how exactly bilingualism might shield against cognitive decline hasn’t been fully understood. This new research examines the structural brain differences between bilinguals and monolinguals (people who only speak one language) across various stages of Alzheimer’s progression.

The research team analyzed data from 364 participants from two major Canadian studies. Participants ranged from cognitively healthy individuals to those with subjective cognitive decline, mild cognitive impairment, and Alzheimer’s disease.

Using brain imaging, researchers measured the thickness and volume of specific brain regions involved in language processing and areas typically affected by Alzheimer’s. They wanted to see if bilinguals showed signs of greater “brain reserve” (more neural tissue in language-related regions) or “cognitive reserve” (maintaining cognitive function despite significant brain deterioration).

Unlike some previous studies, bilinguals didn’t show greater brain reserve in language-related regions compared to monolinguals. However, a difference emerged when looking at the hippocampus, one of the first areas damaged by Alzheimer’s.

Older monolinguals with Alzheimer’s showed substantial reduction in hippocampal volume compared to those with milder impairment, following the expected pattern of brain degeneration. But bilinguals with Alzheimer’s showed a different pattern: their hippocampal volumes weren’t significantly smaller than those of bilinguals with milder cognitive issues.

While monolingual brains showed progressive shrinkage as the disease worsened, bilingual brains seemed to maintain their hippocampal volume despite disease progression. This points to what researchers call “brain maintenance,” preserving brain structure over time despite aging or disease.

The hippocampus is vital for forming new memories, and its deterioration closely connects with the memory loss so characteristic of Alzheimer’s. If bilingualism helps preserve hippocampal volume, it could explain why some studies have found delayed symptom onset in bilingual Alzheimer’s patients.

The bilingual participants came from diverse backgrounds, with about 38% reporting English as their first language, 39% reporting French, and the rest reporting various other languages. About 68% knew two languages, 22% knew three, and some participants reported knowing up to seven languages. Interestingly, many bilingual participants could be described as “late bilinguals,” those who learned their second language after age 5, with moderate self-reported second language ability.

The potential brain benefits of bilingualism might not be limited to those who grew up speaking multiple languages or who are highly fluent in their second language. Even learning a second language later in life and achieving moderate skill might contribute to cognitive resilience.

What does this mean for ordinary people? While the study doesn’t suggest that learning a second language will prevent Alzheimer’s, it adds to growing evidence that certain lifestyle factors, including language learning, may help build resilience against cognitive decline.

The benefits of learning a second language extend far beyond communication skills. The mental demands of managing multiple languages may help build a more resilient brain, one better equipped to withstand the challenges of aging and disease. While learning a second language is no cure, it could help maintain thinking abilities for longer despite underlying brain damage.

Source: https://studyfinds.org/being-bilingual-resist-alzheimers-damage/

Average American Spends 138 Minutes Mired in Worrisome Thoughts Every Day

Photo by Road Trip with Raj on Unsplash

Anxiety has become an unwelcome companion for many, creeping into everyday life with relentless persistence. But for a growing number of young Americans, worry is no longer an uncontrolled intruder—it’s being managed, contained, and strategically addressed.

A recent survey of 2,000 adults across all generations by Talker Research uncovers a surprising trend: one in 10 young Americans deliberately carves out dedicated “worry time” in their daily routines. This approach stands in sharp contrast to older generations, with just 3% of Gen X and baby boomers adopting similar strategies.

A Generation Wrestling with Anxiety

The most striking revelation is the pervasive nature of worry among younger Americans. An overwhelming 62% of Gen Z and millennial respondents report feeling constantly anxious, compared to 38% of older generations. On average, people spend two hours and 18 minutes each day caught in the grip of worrisome thoughts—a significant chunk of time that could otherwise fuel productivity, creativity, or personal growth.

The timing of these worry periods reveals interesting patterns. A third of respondents find themselves most anxious when alone, while 30% are plagued by worries as they prepare to fall asleep. Another 17% are tormented by anxious thoughts upon waking, and 12% experience peak worry while getting ready for bed.

The Weight of Worry

When it comes to specific concerns, finances top the list. More than half (53%) of respondents cite money as their primary source of anxiety. Family worries follow closely, with 42% expressing deep concern about their loved ones. The same percentage fret about pending tasks and to-do lists.

Health concerns (37%), sleep anxiety (22%), and political uncertainties (19%) round out the top worries. For parents, the concerns extend far beyond personal anxieties. Seventy-seven percent express profound worry about the world their children are inheriting, with 34% specifically calling out climate change as a significant concern.

One parent’s raw emotion captures this generational anxiety: “Honestly, I worry that there won’t be a world for my child to grow up in.” Another wondered whether their children would experience the same opportunities they once enjoyed.

Strategic Approach to Mental Health?

The concept of scheduled worry time might seem counterintuitive, but mental health experts suggest it’s a deliberate approach to managing anxiety. By allocating a specific time to process and acknowledge worries, individuals can potentially reduce the overall impact of anxiety on their daily lives.

“Worry doesn’t just cloud our thoughts — it can seriously disrupt our sleep,” says Brooke Witt, Vice President of Marketing at Avocado Green Mattress, which commissioned the study. “When our minds are consumed by finances, family, or endless to-do lists, falling and staying asleep becomes a challenge, which directly impacts how rested we feel the next day.”

The survey suggests more than just a coping mechanism—it reveals a generational approach to mental health that is proactive, intentional, and self-aware. Younger Americans are not simply experiencing anxiety; they’re developing structured methods to understand, limit, and manage it.

The 10% who schedule dedicated worry time represent a potentially transformative approach to mental wellness. By containing their anxieties within a specific timeframe, they may be finding a way to prevent worry from consuming their entire day.

“There’s always something brewing in our minds — whether it’s work, family, or future concerns,” notes Amy Sieman, an affiliate manager with Avocado Green Mattress. “This research reveals how these everyday worries can follow us to bed, affecting both our sleep and our overall quality of life.”

Source: https://studyfinds.org/americans-worry-time-anxiety/

Could Intermittent Fasting Refuel an Aging Libido?

(© pikselstock – stock.adobe.com)

Male sexual desire tends to decline with age—it’s a biological fact that many men face as the years pass. By age 70, about a quarter of men report a noticeable drop in sexual drive. But what if there were a relatively simple dietary approach that could help maintain libido well into later years?

A fascinating study published in Cell Metabolism reveals that intermittent fasting significantly boosts sexual behavior in male mice by altering brain chemistry in ways that enhance sexual motivation. The research suggests that brain chemistry might matter more than physical reproductive metrics when it comes to maintaining sexual function with age.

Scientists from the German Center for Neurodegenerative Diseases and University of Health and Rehabilitation Sciences in China discovered that mice subjected to intermittent fasting—alternating 24-hour periods of eating and not eating—maintained much higher reproductive success rates in old age compared to their continuously-fed counterparts. While only 38% of aged mice with unlimited food access successfully reproduced, a remarkable 83% of intermittently fasted mice remained fertile.

What makes this finding truly surprising isn’t just the striking difference in reproductive success, but the mechanism behind it. The fasting regimen didn’t improve traditional markers of reproductive health like testosterone levels, sperm count, or sperm quality. In fact, the fasting mice actually showed greater testis weight reduction than continuously-fed mice. The secret to their reproductive success lay entirely in behavior—the fasting mice simply showed more interest in mating.

The research team, led by Kan Xie, Yu Zhou, and Dan Ehninger, identified a clear chemical pathway for this behavioral change. Aging typically raises levels of serotonin in the brain, which acts as a sexual inhibitor. Intermittent fasting prevented this age-related serotonin increase by reducing the amount of its precursor, the amino acid tryptophan, available to the brain.

Study authors explain that this mechanism works through a unique metabolic pathway. When mice fast and then refeed, their skeletal muscles draw more tryptophan from the bloodstream. With less tryptophan circulating in the blood, less crosses into the brain, resulting in lower serotonin production and consequently less inhibition of sexual behavior.

To confirm their findings, the researchers administered 5-HTP—a direct precursor to serotonin that bypasses the rate-limiting step in serotonin synthesis—to fasting mice. This promptly reversed the behavioral benefits, with the treated mice showing decreased sexual interest. This confirmed that reduced brain serotonin was indeed responsible for the enhanced sexual behavior in fasting mice.

While the study was conducted in mice, the core biochemical pathways involved function similarly in humans. Tryptophan metabolism and serotonin synthesis operate through comparable mechanisms across mammalian species, suggesting the potential for similar effects in humans.

The intermittent fasting regimen used in the study wasn’t extreme. The mice alternated between 24 hours of unlimited food access and 24 hours of fasting. During feeding days, they ate more than usual, compensating for fasting days. Overall, they consumed only about 13% fewer calories than continuously-fed mice. This modest reduction in calorie intake, combined with the cyclical fasting/feeding pattern, produced significant effects on brain chemistry.

It’s worth noting that the benefits weren’t immediate—a brief six-week intervention didn’t improve sexual behavior. The changes required longer-term adaptation, suggesting that lasting modifications to brain chemistry take time to develop.

For men concerned about age-related decline in sexual interest, this research offers food for thought. While human studies are needed to confirm similar effects, the fundamental biological mechanisms are plausible. Before making any changes to your dietary routine, it’s important to speak with your doctor first.

From an evolutionary perspective, these findings challenge the notion that dietary restriction necessarily suppresses reproduction. While many theories suggest organisms redirect resources from reproduction to survival during food scarcity, this research indicates that certain patterns of food availability might actually enhance reproductive behavior, at least in males.

It’s something many of us probably haven’t thought of before, but perhaps what happens in the kitchen might influence what happens in the bedroom. While results from animal studies don’t automatically transfer to humans, the fundamental mechanisms involved are similar enough to warrant further investigation. After all, when it comes to maintaining quality of life with age, few aspects matter more than preserving the capacity for romantic connection.

Source: https://studyfinds.org/could-intermittent-fasting-refuel-an-aging-libido/


“Taking in the good”: A simple way to offset your brain’s negativity bias

Credit: paul / Adobe Stock

Imagine lounging in a hammock on a sunny beach, palm trees swaying in the breeze, the bright turquoise of the sea barely dimmed by your sunglasses. You glance up and down the beach: not a soul in sight. It’s the first day of your holiday, and your whole body feels so relaxed; you could dissolve into the sand and be swept out to sea. You take a lazy sip of your pina colada and take it all in. Out of nowhere, a voice whispers into your ear: “No, really, take it in.”

That inner voice? It’s echoing a simple but often overlooked idea: Good experiences don’t always stick unless we make an effort to let them. That’s the premise behind Hardwiring Happiness, a book by psychologist Rick Hanson, who explores how consciously lingering on positive moments can help counterbalance the brain’s built-in negativity bias. That bias might have served a useful evolutionary purpose ages ago when our survival was more frequently at stake, but in a relatively stable 21st-century environment, it often traps us in cycles of rumination.

Hanson’s approach isn’t about forced optimism — it’s grounded in the idea of neuroplasticity, the brain’s capacity to change over time through repeated experience. Drawing on psychological theory and early research suggesting that “deliberately taking in the good” may help build resilience and emotional well-being, Hanson developed the HEAL method:

  • Have a good experience
  • Enrich it
  • Absorb it
  • Link it to negative material, letting the positive experience soothe and gradually replace it (an optional step).

While Hanson’s HEAL method draws on established neuroscience concepts, it remains a clinical and contemplative approach rather than a rigorously validated scientific intervention. In a small exploratory study using pre-post self-report measures, Hanson and colleagues assessed the effects of this intervention on 21 healthy subjects and found statistically significant self-reported improvements in measures like savoring and self-compassion, though the small sample size and lack of a control group limit the strength of the conclusions. The participants also reported statistically borderline improvements in self-esteem, positive rumination (self-focus), pride, happiness, and satisfaction with life. Many of these effects persisted after two months.

Change the mind, change the brain?

Can you really rewire your brain this way — simply by changing your mind? That’s the idea behind neuroplasticity: the brain’s ability to adapt and reorganize in response to experience. Researchers investigate it through a combination of brain imaging and behavioral assessments. For example, if someone is able to learn a new skill more quickly following an intervention, scientists can correlate this with changes in brain activity, using what’s called “task-based fMRI.” But the details of cause-and-effect are far from simple, and the research methods far from perfect. Although there is considerable evidence for neuroplasticity as a phenomenon related to health and well-being, skeptics warn of “neuroplasticity hype,” and the specific claims of positive neuroplasticity have yet to be directly corroborated by human neuroscience.

Still, Hanson says, we’ve come a long way toward understanding the relationship between mind and brain.

“As science has progressed in the last hundred to a hundred and fifty years with the study of the nervous system,” he told Big Think, “the correlations have become increasingly well understood and tight between ongoing mental activity — hearing, seeing, loving, hating, wanting, remembering — and the underlying neural activity that is their physical basis.”

A number of brain imaging studies suggest that certain mental practices, such as mindfulness meditation, are associated with structural and functional brain changes, though questions remain about causality and long-term effects.

In the 1960s, researchers began using electroencephalogram (EEG) to study neural activity during meditation. In the 1970s came magnetic resonance imaging (MRI), and by the 1990s — the so-called “decade of the brain” — scientists were increasingly able to associate specific mental experiences with distinct patterns of neural activity. For instance, one seminal study of nuns praying inside fMRI machines showed that their brains’ reward centers lit up in ways similar to people using cocaine.

“It doesn’t mean connecting with Christ consciousness is the same as taking cocaine,” Hanson notes, “but they were starting to find underlying neural correlates.”

A growing body of research shows that meditation and other contemplative practices can promote neuroplasticity, encouraging the brain to form new connections and adapt over time. In the mid-2000s, as Hanson and his colleagues began combing through the research literature, they wondered whether they could flip things around and harness what scientists had gathered about the brain to use in contemplative and clinical practice, an investigation which ultimately became the basis for HEAL.

Could they deliberately activate the brain to induce certain mental activities that would lead to lasting changes in the brain and, ultimately, support the development of optimal traits like a more positive outlook on life? As Hanson put it: “Could we use our mind to stimulate and change our brain to benefit our mind?”

If so, harnessing brain science could, in theory, motivate people who wouldn’t otherwise think to take up a “mental hygiene” regime such as meditation.

“When people realize this airy-fairy, woo-woo stuff is actually helping their own brain, they get much more motivated,” he said. Ruminating over the state of the world may not be helpful, but “when you slow down, take a moment to feel close to your friend or partner, and let that really land inside, that’s changing your brain for the better.”

While the precise mechanisms remain uncertain, Hanson points to the role of the autonomic nervous system — particularly how social connection and safety cues can downregulate stress responses — as one pathway through which positive experiences may shape long-term well-being.

“If I want to calm myself down, it’s important to touch my partner, or my dog, because that social engagement is going to ripple down and calm my heart,” he said.

Source : https://bigthink.com/neuropsych/rick-hanson-heal-method/

The More Partners the Merrier? Non-Monogamous Relationships Just as Satisfying, Study Shows

(Credit: Casimiro – stock.adobe.com )

For decades, we’ve been fed a consistent message: monogamous relationships represent the gold standard of romantic fulfillment. This belief runs so deep that researchers have now given it a name—the “monogamy-superiority myth.” It’s a belief that has shaped personal choices, public policies, and professional practices, despite remarkably little evidence supporting the claim.

A new review published in The Journal of Sex Research directly challenges this assumption with data from nearly 25,000 individuals. The findings? When it comes to both relationship and sexual satisfaction, there’s virtually no difference between people in monogamous relationships and those in consensually non-monogamous arrangements.

This extensive review, led by Joel R. Anderson of La Trobe University, represents the first comprehensive analysis comparing satisfaction levels across different relationship structures. The findings effectively challenge the notion that non-monogamous relationships are somehow lacking or less fulfilling than monogamous ones.

The Persistence of Monogamy as the ‘Ideal’

Western society has long operated under the assumption that monogamy is not just normal, but optimal. This belief has been reinforced through cultural messages, religious teachings, and even healthcare practices. People in non-monogamous relationships often face judgment, discrimination, and the assumption that their relationship choices indicate personal problems or instability.

The research team identified several reasons these beliefs persist. For many, monogamy is seen as a moral choice guided by religion or cultural norms. It’s often viewed as “normal” and beneficial because it allows people to avoid stigma. Monogamous relationships are frequently assumed to result in better health outcomes, greater stability, and even better intimacy—assumptions the new research directly contradicts.

‘Monogamish’ Relationships Are Better?

The researchers examined studies conducted between 2007 and 2024, mostly in Western countries like the United States, Canada, and Australia. This body of research included diverse participants across sexuality and gender identity, though most samples were predominantly white.

Non-monogamy in these studies covered various relationship structures, including:

  • Polyamory: maintaining several loving relationships at once
  • Open relationships: agreements allowing sex outside the primary relationship
  • Swinging: partners engaging in outside sexual activities together, often at organized events
  • “Monogamish” arrangements: mostly monogamous relationships with occasional agreed-upon exceptions

Across these diverse relationship structures, the analysis found that monogamous and non-monogamous people reported basically identical levels of both relationship and sexual satisfaction. This pattern held true regardless of participants’ sexuality, with both straight and LGBTQ+ samples showing no significant differences.

Some interesting details emerged when researchers looked at specific types of non-monogamous arrangements. People in “monogamish” relationships reported slightly higher relationship satisfaction than those in strictly monogamous relationships. Similarly, polyamorous individuals and swingers reported somewhat higher sexual satisfaction than their monogamous counterparts.

Another surprising finding emerged when researchers examined different aspects of relationship satisfaction. Non-monogamous individuals actually rated trust higher than monogamous individuals, while scoring equally on commitment, intimacy, and passion. This challenges the common assumption that non-monogamous relationships necessarily involve less trust or commitment.

Study authors suggest that non-monogamous relationships might actually strengthen certain relationship skills. The nature of managing multiple relationships might encourage people to put more effort into communication, openness, and understanding—all key components of trust.

Changing Norms?

Despite the stigma and discrimination that non-monogamous people often face, their reported satisfaction matched or sometimes exceeded that of monogamous individuals.

The research team offers several explanations for these findings. Non-monogamous relationships may allow people to experience more variety and freedom. These structures let people have different needs met by different partners, whereas monogamous individuals must find all their needs satisfied by one person. Research also indicates that non-monogamy can encourage personal growth and independence, which may boost relationship and sexual satisfaction.

These findings matter for therapists, counselors, and other healthcare professionals who work with non-monogamous clients. Previous studies have shown that healthcare practitioners sometimes view non-monogamy as a problem or sign of trouble, making assumptions that can damage the therapeutic relationship.

For the roughly 5% of adults currently in non-monogamous relationships—and the approximately 20% who have tried consensual non-monogamy at some point—these findings validate that their relationship choices can lead to satisfying, fulfilling partnerships.

It’s worth noting that while these results show equal satisfaction across relationship structures, they don’t suggest any particular relationship style is right for everyone. Personal preferences, values, and needs remain most important in determining the best relationship arrangement for each person.

Ultimately, these findings don’t just validate non-monogamous relationships—they invite us to question assumptions about relationships that we may have never examined. Perhaps satisfaction has less to do with relationship structure than with how well any relationship meets the unique needs of the people involved.

Source : https://studyfinds.org/non-monogamous-relationships-just-as-satisfying/

Goodbye, Breakfast? This Science-Backed Eating Window Burns More Fat Than Exercise Alone

(© RasaBasa- stock.adobe.com)

There’s promising news for fitness enthusiasts looking to optimize their body composition: combining a time-restricted eating regimen (a form of intermittent fasting) with your exercise routine may help reduce body fat while preserving muscle mass.

Researchers have discovered that coordinating when you eat with your exercise routine might significantly improve body composition results, according to a comprehensive study published in the International Journal of Obesity. The new meta-analysis by scientists at the University of Mississippi, along with colleagues from Texas Tech University, reveals an intriguing fitness strategy that doesn’t involve fancy equipment, expensive supplements, or complicated diet plans.

The secret to a truly fit body may be all about timing your meals and your workout in concert with one another.

The Power of Time-Restricted Eating with Exercise

Time-restricted eating (TRE) involves limiting all food consumption to a specific window—typically 4-12 hours daily—while fasting for the remaining hours. Unlike other dietary approaches that dictate specific foods or calorie counts, TRE focuses simply on when you eat.

The research team analyzed 15 studies involving 338 participants who followed TRE protocols while engaging in structured exercise programs. They compared these individuals to control groups who performed identical exercises but ate without time restrictions.

The results were clear: people who combined TRE with exercise lost approximately 1.3 kg (2.9 pounds) more fat and reduced their body fat percentage by an additional 1.3% compared to those who exercised without meal timing restrictions. Perhaps most importantly, muscle mass wasn’t significantly affected, indicating that TRE doesn’t compromise muscle preservation during exercise programs.

Most studies used a 16:8 schedule—16 hours fasting, 8 hours eating—with feeding windows typically between noon and 8 P.M. Importantly, exercise was performed during feeding hours, not while fasting, which likely helped preserve muscle mass and optimize performance.

“[T]ime-restricted eating appears to induce a small decrease in fat mass and body fat percentage while conserving fat-free mass in adults adhering to a structured exercise regimen, as opposed to exercise-matched controls without temporal eating restrictions,” the authors write.

Why This Combination Works

Several mechanisms might explain why restricting your eating window enhances fat loss beyond exercise alone.

For many people, TRE naturally reduces caloric intake by limiting opportunities to eat. However, benefits persisted even in studies that controlled for calories, indicating that timing itself matters regardless of how much you eat.

Eating during daylight hours may better align with our body’s natural biological rhythms—the internal clocks that regulate numerous physiological processes. This alignment could optimize metabolic function compared to the typical modern pattern of eating from early morning until late night.

TRE may also trigger beneficial hormonal changes, including increased levels of compounds that enhance fat burning (adiponectin, noradrenaline, growth hormone) while decreasing stress hormones like cortisol. Additionally, fasting periods activate metabolic pathways that promote fat oxidation, potentially amplifying exercise’s effects.

The research examined multiple exercise approaches, including aerobic training (running, cycling), resistance training (weightlifting), and concurrent training (combining both). The benefits held across these different exercise modes, indicating the TRE plus exercise formula works regardless of your preferred workout style.

What This Means for Your Fitness Routine

Before rushing to adopt this approach, however, several factors deserve consideration. The benefits, while statistically significant, were moderate in size. Individual responses likely vary considerably based on factors not fully captured in current research. And since most studies were short-term (typically 4-8 weeks), the long-term sustainability and effects remain largely unknown.

It’s also worth noting that most participants were already experienced exercisers in good metabolic health, with relatively few studies including those with obesity. Whether the same benefits apply to beginners or those with significant metabolic challenges remains unclear.

For active individuals looking to fine-tune their approach to body composition, this research provides preliminary support for a simple yet potentially effective strategy: timing meals alongside exercise may help optimize fat loss while preserving valuable muscle tissue.

As always, talk to your doctor before making any changes to your diet or daily health regimen.

Source : https://studyfinds.org/goodbye-breakfast-this-science-backed-eating-window-burns-more-fat-than-exercise-alone/

Why Women’s Pain Has Been Misunderstood For Decades

Treating female pain may require a different approach than pain management for men. (My Ocean Production/Shutterstock)

For decades, women suffering from chronic pain have been told “it’s all in your head” when treatments that work for men fail them. Now, research from the University of Calgary reveals that women’s pain actually operates through entirely different biological pathways than men’s. Scientists have discovered that the same protein triggers pain in both sexes, but through completely different immune cells and chemical signals.

A new study published in Neuron reveals that a protein called pannexin-1 (or Panx1 for short) works differently in males and females. This helps to explain why women are more likely to develop chronic pain conditions and why they often don’t respond as well to certain treatments.

The Divide in Pain Research

Most pain research has historically focused on male subjects, even though women make up the majority of chronic pain patients. This study aims to fix that imbalance by looking at both male and female animals to understand why pain works differently between sexes.

While both sexes use the Panx1 protein in their immune cells to create pain signals, they use completely different cells and chemical messengers to do it.

In males, Panx1 works through cells called microglia (the immune cells of the spinal cord and brain) to release a protein called VEGF, which increases pain sensitivity. In females, however, Panx1 works through CD8+ T cells (a type of white blood cell) to release leptin, a hormone best known for its role in hunger and metabolism. This may help explain why treatments that target microglia cells work well for reducing pain in males but often fail in females.

Crossing Biological Boundaries

When researchers took microglia cells from male animals, activated them, and transferred them into females, the females developed pain. Similarly, when they transferred activated female T cells into males, the males also developed pain. This confirmed that these sex-specific mechanisms weren’t just correlations; they actually caused pain.

The researchers also created mice that lacked the Panx1 protein specifically in microglia cells. Male mice without this protein showed much less pain after nerve injury, while female mice still developed normal pain sensitivity.

When they analyzed fluid from the spinal cord, they found that after nerve injury, males had higher levels of VEGF while females had higher levels of leptin. Blocking VEGF reduced pain in males but not females, while neutralizing leptin reduced pain in females but not males.

Hope for Better Pain Treatments

This could lead to better pain treatments for everyone. Current treatments for neuropathic pain (pain caused by nerve damage) include anticonvulsants, antidepressants, and opioids. These treatments tend to work less effectively in women and often cause worse side effects.

With this new knowledge, doctors might eventually be able to prescribe treatments targeted specifically to each sex, like VEGF blockers for men and leptin blockers for women.

This discovery is particularly important for conditions like fibromyalgia, which affects women much more often than men. Previous studies have shown that leptin levels can predict pain severity in women with fibromyalgia. This research now provides a possible explanation for that connection.

Panx1 could be a treatment target that works for both sexes. Although the way the body reacts to this protein is different for men and women, medications targeting it could help both, potentially transforming pain treatment.

For women who have struggled to have their pain taken seriously or effectively treated, this research provides solid evidence that female pain may indeed warrant dedicated research and targeted treatments. Doctors may eventually move beyond one-size-fits-all approaches to develop treatments tailored to each person’s unique biology.

Source : https://studyfinds.org/womens-pain-misunderstood-medication/

Who Is Liable When AI Makes a Medical Mistake?

Who is held accountable when AI systems make mistakes in medicine? (© BiancoBlue | Dreamstime.com)

Doctors are increasingly being asked to use AI systems to help diagnose patients, but when mistakes happen, they take the blame. New research shows physicians are caught in an impossible trap: use AI to avoid mistakes, but shoulder all responsibility when that same AI fails. This “superhuman dilemma” is the healthcare crisis nobody’s talking about.

The Doctor’s Burden: Caught Between AI and Accountability

New research published in JAMA Health Forum explains how the rapid deployment of artificial intelligence in healthcare is creating an impossible situation for doctors. While AI promises to reduce medical errors and physician burnout, it may be worsening both problems by placing an unrealistic burden on physicians.

Researchers from the University of Texas at Austin found that healthcare organizations are adopting AI technologies much faster than regulations and legal standards can keep pace. This regulatory gap forces physicians to shoulder an extraordinary burden: they must rely on AI to minimize errors while simultaneously bearing full responsibility for determining when these systems might be wrong.

Studies reveal that the average person assigns greater moral responsibility to physicians when they’re advised by AI than when guided by human colleagues. Even when there’s clear evidence that the AI system produced wrong information, people still blame the human doctor.

Physicians are often viewed as superhuman. They are expected to have exceptional mental, physical, and moral abilities, expectations that go far beyond what is reasonable for any human being.

When Two Decision-Making Systems Collide

Physicians face a complex challenge when working with AI systems. They must navigate between “false positives” (putting too much trust in wrong AI guidance) and “false negatives” (not trusting correct AI recommendations). This balancing act occurs amid competing pressures.

Healthcare organizations often promote evidence-based decision-making, encouraging physicians to view AI systems as objective data interpreters. This can lead to overreliance on flawed tools. Meanwhile, physicians also feel pressure to trust their own experience and judgment, even when AI systems may perform better in certain tasks.

Adding to the complexity is the “black box” problem. Many AI systems provide recommendations without explaining their reasoning. Even when systems are made more transparent, physicians and AI approach decisions differently. AI identifies statistical patterns from large datasets, while physicians rely on reasoning, experience, and intuition, often focusing on patient-specific contexts.

The Hidden Costs of Superhuman Expectations

The consequences of these expectations affect both patient care and physician wellbeing. Research from other high-pressure fields shows that employees burdened with unrealistic expectations often hesitate to act, fearing criticism. Similarly, physicians might become overly cautious, only trusting AI when its recommendations align with established care standards.

This defensive approach creates problems of its own. As AI systems improve, excessive caution becomes harder to justify, especially when rejecting sound AI recommendations leads to worse patient outcomes. Physicians may second-guess themselves more frequently, potentially increasing medical errors.

Beyond patient care, these expectations take a psychological toll. Research shows that even highly motivated professionals struggle to maintain engagement under sustained unrealistic pressures. This can undermine both quality of care and physicians’ sense of purpose.

Source: https://studyfinds.org/ai-medical-mistake/

Could Salty Foods Be Fueling Depression Rates?

(© atipong – stock.adobe.com)

Too much salt has long been blamed for heart problems, but new research suggests it might harm our minds too. Scientists from Nanjing Medical University have discovered a surprising connection between high-salt diets and depression-like behaviors in mice, potentially explaining why depression rates continue rising alongside our consumption of processed foods.

The research team found that excessive salt intake triggers specific immune responses in the brain that can lead to behaviors resembling depression. Their findings, published in The Journal of Immunology, offer a biological explanation for previously observed connections between processed food consumption and mood disorders.

Depression affects millions worldwide, with lifetime prevalence reaching 15-18% in many populations. Modern Western diets, especially fast food, contain dramatically more sodium than home-cooked meals—sometimes exceeding homemade options by 100-fold.

The Salt-Depression Connection

In the study, mice fed high-salt diets showed behaviors remarkably similar to those experiencing chronic stress. They explored less, displayed heightened anxiety, and spent more time motionless during tests measuring “behavioral despair”—patterns that parallel human depression symptoms.

The researchers investigated the biological mechanisms behind these behavioral changes. High-salt diets significantly increased production of Interleukin-17A (IL-17A), an immune signaling molecule, particularly in specialized immune cells called gamma delta T cells (γδT cells).

Previous research had linked elevated IL-17A to depression, but this study reveals a direct pathway from dietary salt to increased IL-17A production to depression-like symptoms.

To confirm this connection, the team tested mice genetically modified to lack the ability to produce IL-17A. These mice showed no signs of depression despite consuming high-salt diets. Even more convincingly, when researchers removed the specific immune cells that produce IL-17A, the animals no longer developed depression-like behaviors on high-salt diets.

What This Means for Humans

While conducted in mice, the research has compelling implications for human health. Population studies have already shown links between high-salt diets and increased depression rates. This study offers a potential explanation for those observations.

The average American diet contains about 3,400 mg of sodium daily—far exceeding the American Heart Association’s recommended maximum of 2,300 mg. Fast food meals often deliver an entire day’s worth of recommended sodium in a single sitting.
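
To put the sodium figures in perspective, here is a quick sanity check on the numbers above (simple arithmetic, not an analysis from the study):

```python
# Quick arithmetic on the sodium figures cited above.
average_intake_mg = 3400   # approximate average American daily sodium intake
recommended_max_mg = 2300  # American Heart Association recommended daily maximum

excess_mg = average_intake_mg - recommended_max_mg
percent_over = excess_mg / recommended_max_mg * 100

print(excess_mg)            # 1100
print(round(percent_over))  # 48
```

In other words, average intake runs roughly 48% above the recommended ceiling, which is why a single high-sodium fast food meal can use up an entire day’s allowance.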

This isn’t the first research connecting diet and mental health. Mediterranean diets rich in fruits, vegetables, whole grains, olive oil, and lean proteins correlate with lower depression rates. Conversely, diets heavy in processed foods, sugars, and unhealthy fats tend to increase depression risk.

The distinctive aspect of this study is identifying a specific biological pathway connecting diet directly to depression-like behaviors. This precision opens doors to potential new treatment approaches targeting the immune system rather than just brain chemistry.

Simple Ways To Reduce Salt Intake

Current depression treatments typically focus on neurotransmitter imbalances using medications like SSRIs or on changing thought patterns through therapy. The discovery that dietary factors might contribute to depression through immune pathways represents an important shift in how we might approach mental health care.

Applying these findings doesn’t necessarily require waiting for new pharmaceutical treatments. Simple dietary changes are accessible to most people:

  • Reducing processed food intake
  • Eating more home-cooked meals
  • Checking food labels for sodium content
  • Using herbs and spices instead of salt for flavoring

Some health professionals already recommend the DASH diet (Dietary Approaches to Stop Hypertension) for patients with high blood pressure. The diet emphasizes fruits, vegetables, whole grains, lean proteins, and reduced sodium, and this new research hints that such approaches might benefit mental health too.

Beyond individual choices, these findings could influence public health policies around sodium reduction in processed foods. Some countries have already implemented such regulations: the United Kingdom’s salt reduction program has achieved a 15% decrease in average salt intake since implementation.

While more research is needed before definitive conclusions can be drawn about salt reduction as a depression treatment in humans, this study adds to mounting evidence that what we eat affects both body and mind. For those struggling with depression, these findings don’t suggest dietary changes should replace established treatments like therapy and medication, but they highlight diet as an important complementary factor in mental health care.

Source: https://studyfinds.org/salty-food-depression-sodium/

Sibling Study: Longer Breastfeeding Linked to Better Brain Development

(Photo by PeopleImages.com – Yuri A on Shutterstock)

Children who are breastfed for longer periods during infancy experience fewer developmental delays and a reduced risk of neurodevelopmental conditions such as autism and ADHD, according to new research. The study, led by scientists at the KI Research Institute in Israel, supports what many parents might hope to hear: breastfeeding babies for at least six months appears to boost their developmental outcomes.

While health organizations have recommended breastfeeding for the first six months of life for years, this study offers particularly strong evidence by addressing problems that weakened earlier research on the topic.

Published in JAMA Network Open, the study involved health data from 570,532 Israeli children, including nearly 38,000 sibling pairs. It ranks among the largest investigations into breastfeeding and development ever conducted.

Led by Dr. Inbal Goldshtein and Dr. Yair Sadaka, the research team used an innovative approach to ensure their findings were reliable. The study uniquely combined routine developmental checkup records from Israel’s maternal-child health clinics with national insurance disability data, allowing researchers to track both developmental milestone achievement and diagnosed conditions.

They compared siblings within the same families who had different breastfeeding experiences but shared genes and home environment. This clever design controlled for family factors like parental intelligence and involvement that often confound results in other studies.

Children exclusively breastfed for at least six months had 27% lower odds of developmental delays compared to those breastfed for shorter periods. Even children who received both breast milk and formula for six months or more showed a 14% reduction. When examining siblings with different breastfeeding histories, those who breastfed longer had 9% lower odds of milestone delays and 27% lower odds of neurodevelopmental conditions compared to siblings who breastfed for shorter periods or not at all.
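
One caveat worth keeping in mind: the “lower odds” figures above are odds ratios, which are not the same as an equivalent drop in risk. As a rough illustration (the 10% baseline rate here is a hypothetical assumption, not a number from the study), a 27% reduction in odds converts back to probabilities like this:

```python
# Illustrative odds-vs-risk arithmetic; the 10% baseline is hypothetical.

def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1 + o)

baseline_p = 0.10                             # assumed baseline probability of delay
reduced_odds = odds(baseline_p) * (1 - 0.27)  # odds after a 27% reduction
reduced_p = prob(reduced_odds)

print(round(reduced_p, 3))  # 0.075
```

Under this assumption, 27% lower odds corresponds to the rate falling from 10% to about 7.5%, a roughly 25% reduction in risk; the two measures diverge further as baseline rates rise.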

The benefits remained clear even after accounting for numerous factors, including pregnancy duration, birth weight, maternal education, family income, and postpartum depression.

The advantages appeared most notable in language and social development—crucial areas for school success and forming friendships. Motor skills improved too, though less dramatically. Premature babies, who typically face higher developmental risks, seemed to benefit even more from extended breastfeeding than full-term infants.

For parents struggling with breastfeeding choices, there’s reassuring news. When researchers specifically examined siblings who both breastfed for at least six months—one exclusively on breast milk and one receiving some formula—exclusive breastfeeding didn’t show a meaningful additional advantage. This indicates that maintaining some breastfeeding for longer might matter more than avoiding formula completely.

The study’s authors believe that their findings should inform public health policies and support systems rather than pressure individual families. Their goal remains helping children reach their potential, not creating guilt among parents facing breastfeeding challenges.

Researchers emphasize that while breastfeeding is linked to better development, it’s just one of many factors that shape a child’s growth. They noted that identifying changeable factors like nutrition is essential to helping each child reach their potential.

Despite expert recommendations, actual breastfeeding rates often fall below targets. Many mothers struggle to balance breastfeeding with work demands, inadequate parental leave, and aggressive formula marketing.

Formula companies spend around $55 billion yearly promoting their products, sometimes undermining women’s confidence in their ability to breastfeed. The authors advocate for stronger supportive policies, including better parental leave and limits on formula marketing practices.

The biological mechanism for these benefits may relate to breast milk’s effects on brain development. Earlier research has shown differences in brain structure between breastfed and formula-fed babies. Some scientists believe these benefits might work through effects on the infant’s gut microbiome, which connects to brain development through what’s known as the gut-brain axis.

As the researchers conclude, these results may help guide not only parents but also public health initiatives aimed at giving children the best developmental start possible. When every advantage counts for our children, supporting breastfeeding appears to be a worthwhile investment.

Source : https://studyfinds.org/breastfeeding-for-six-months-boosts-child-developmen/

Ginseng’s Secret Anti-Aging Weapon: How Compound K is Changing Skincare Science

Ginseng roots. (© mnimage – stock.adobe.com)

For thousands of years, ginseng has been treasured in Eastern medicine for its health-promoting properties. Now, modern science is uncovering the remarkable potential of one specific component within this ancient herb – Compound K, a rare metabolite formed when certain ginsenosides from ginseng are broken down in the gut. This substance is becoming a focal point in skin aging research, offering new possibilities for combating wrinkles, skin laxity, and other visible signs of aging.

Research published in the Journal of Dermatologic Science and Cosmetic Technology reveals that Compound K (CK) fights skin aging through multiple biological pathways, targeting different aspects of the aging process simultaneously. The study was conducted by scientists at Yunnan University and Guangdong Industry Polytechnic University.

How Skin Ages and Why Compound K Matters

Skin aging happens because of internal factors like genetics and metabolism, along with external forces such as ultraviolet radiation and pollution. These elements combine to create thinning skin, reduced elasticity, wrinkles, and uneven color. The research reveals Compound K tackles these issues through several different mechanisms at once.

One key way Compound K benefits aging skin is by strengthening its protective barrier. The research shows that CK boosts levels of desmosome adhesive protein 1 (DSC1) while reducing harmful enzymes that can compromise skin integrity. In everyday terms, this means skin treated with this ginseng compound retains moisture better and has improved defense against environmental damage.

Collagen breakdown is a major culprit behind skin aging. UV radiation triggers enzymes called matrix metalloproteinases (MMPs), which degrade collagen and lead to wrinkles and sagging. Studies demonstrate that Compound K effectively blocks these collagen-destroying enzymes in skin cells exposed to UV light, helping maintain the skin’s structural framework.

Beyond just preventing damage, Compound K actively promotes repair by stimulating collagen production. It also increases hyaluronic acid in the skin by enhancing the gene responsible for producing this moisture-binding molecule that naturally decreases as we age.

Beyond Surface-Level Benefits: Cellular and Genetic Effects

Particularly interesting is Compound K’s effect on cellular “housekeeping” – the process where cells clean out damaged components (known scientifically as autophagy). This natural maintenance system slows with age, contributing to cellular dysfunction. Research indicates that CK regulates this cleaning process, helping cells function optimally for longer periods.

The compound’s anti-inflammatory benefits are substantial too. Low-grade chronic inflammation, sometimes called “inflammaging,” increasingly appears to drive various age-related conditions, including skin aging. Through several pathways, Compound K reduces inflammation and resulting cellular damage.

At the genetic level, Compound K activates SIRT1, often referred to as a longevity gene because of its role in cellular health. Studies reveal that UV exposure significantly reduces SIRT1 expression, speeding up aging, while CK counteracts this effect in a dose-dependent manner.

For those concerned about cellular energy decline – a hallmark of aging – research points to Compound K improving the function of mitochondria, our cells’ power plants. Studies show it promotes mitochondrial health, maintains proper dynamics, and increases energy production. Since mitochondrial dysfunction characterizes aging cells, this benefit could significantly improve skin health and appearance.

From Lab to Skincare: The Practical Applications

Getting active ingredients through the skin barrier presents a major challenge in skincare. Fortunately, Compound K’s relatively small molecular weight allows it to penetrate skin layers more effectively than many other ingredients. Research using artificial skin models confirms CK can move through skin layers, making it a viable option for topical applications.

Remarkably, studies suggest that when applied to skin, other ginsenosides in skincare products can transform into Compound K within the skin itself, potentially boosting the effectiveness of ginseng-based products. This conversion process in skin mirrors what happens in the digestive system when ginsenosides are consumed orally.

While typical anti-aging ingredients often target just one aspect of aging, Compound K’s wide-ranging approach gives it unique value. It simultaneously improves skin barrier function, collagen production, moisture retention, inflammation control, and cellular energy – addressing virtually every major contributor to visible aging.

This research coincides with growing consumer preference for plant-based skincare with scientific backing. The natural cosmetics market continues expanding rapidly as consumers seek evidence-based natural alternatives to synthetic compounds. Ginseng extracts rich in Compound K could meet both the demand for natural ingredients and the expectation for proven results.

Is Ginseng the Future of Anti-Aging Research?

Skincare developers now face the task of creating stable delivery systems that maximize Compound K’s benefits. The compound’s multifaceted effects suggest it could enhance products targeting various signs of aging, from fine lines to skin firmness and radiance.

For consumers, the study shows that products containing Compound K or its precursors might offer broader anti-aging benefits than single-action ingredients. However, concentration matters – many studies used relatively high amounts of the compound, which may not be present in all commercial products claiming ginseng benefits.

Meanwhile, more studies like this one could completely change the future of the skin aging industry. Simple moisturizers claiming miraculous anti-aging benefits are being replaced by ingredients like Compound K that work through specific cellular pathways, genetic expression, and metabolic processes.

While Compound K isn’t a magical fountain of youth, it represents a scientifically validated approach to supporting skin’s natural functions and resilience. That resilience – supporting the skin rather than fighting the inevitable – may be the key to aging well.

Source: https://studyfinds.org/ginsengs-secret-anti-aging-weapon-how-compound-k-is-changing-skincare-science/


Children Glued to Phones More Likely to Become High-Strung, Depressed Teens

(Credit: Andrea Piacquadio from Pexels)

In case you needed another reason to hold off on buying your child a phone, research shows a troubling connection between childhood screen habits and teenage mental well-being. The eight-year study, which tracked children from elementary school into adolescence, found that kids who racked up more screen time—especially on mobile devices—showed higher levels of stress and depressive symptoms as teenagers.

The study adds to the large body of research that should make parents think twice about unlimited device access, especially as more children experience mental health struggles at an early age. Between one-quarter and one-third of adolescents worldwide experience mental health problems, with symptoms typically first appearing during the teenage years. Researchers now have more concrete evidence about lifestyle factors that might help prevent psychological distress before it takes root.

Digital Habits and Mental Health: What the Research Shows

Study authors used data from the Physical Activity and Nutrition in Children (PANIC) study, which followed 187 Finnish children over eight years, from ages 6-9 into their mid-teens. Researchers regularly checked in on their physical activity, screen time, sleep patterns, and eating habits. When these children reached adolescence (average age 15.8), the researchers assessed their mental health using standardized measures of stress and depression.

The data painted a clear picture: teenagers who had accumulated more total screen time and mobile device use throughout childhood showed significantly higher levels of stress and depressive symptoms. The connection between mobile device use and depression was particularly strong, showing a “moderate effect size”—substantial in behavioral research terms.

The team found that adolescents spent nearly five hours daily on screens, with over two hours on mobile devices alone. Many parents might find these numbers unsurprising, but the mental health correlations deserve attention.

Physical activity told the opposite story. Teens who maintained higher activity levels during childhood, especially in supervised settings like sports or structured exercise programs, showed better mental health outcomes. This protective effect remained significant even after researchers accounted for factors like parental education, body composition, and puberty status.

Gender differences added another dimension to the findings: physical activity showed a stronger protective effect against stress for boys than for girls.

Surprisingly, neither diet quality nor sleep duration showed strong relationships with teen mental health in this study. This doesn’t mean these factors aren’t important for overall health—just that screen time and physical activity may have more direct impacts on adolescent mental wellbeing.

More Screen Time Should Mean More Physical Activity

For parents struggling with screen time battles, this research provides compelling evidence for setting reasonable limits. The findings highlight that mobile device use specifically—more than television or computer time—warrants special attention. With smartphones and tablets becoming increasingly central to education and social connections, creating healthy boundaries becomes more challenging but potentially more important.

The study, published in JAMA Network Open, also emphasizes the value of supervised physical activities. Children who participated in more structured exercise from ages 6-15 showed fewer mental health problems in adolescence. It’s all the more reason schools and community programs aimed at promoting youth mental health should find more ways to get children moving.

Most revealing was the finding that teenagers with both low physical activity and high screen time had the worst mental health outcomes. This demonstrates that addressing either factor alone might not be as effective as a balanced approach that both limits screen time and increases physical activity.

Creating Healthier Digital Habits for Children

While conducted in Finland, the study’s findings likely apply to children in other developed countries with similar technology access patterns. As smartphone use continues rising globally, understanding its potential psychological impact grows increasingly urgent.

For families navigating the complex digital landscape, this research offers practical guidance: limit screen time (especially on mobile devices), encourage regular physical activity (particularly supervised activities like sports), and remember that these choices may affect not just current behavior but long-term mental health.

Mental health professionals and pediatricians may want to include screen time discussions in their preventive care conversations. Creating balanced digital environments and promoting consistent physical activity within supportive social contexts could become key strategies for protecting youth mental health.

Incorporating technology into children’s lives at younger ages is understandably commonplace these days. But here we have another study showing why childhood habits matter. How we balance screens and physical activity today may shape the psychological landscape our children navigate tomorrow.

Source: https://studyfinds.org/children-glued-to-phones-stressed-depressed-teens/


Night Owls Are More Likely to Have Depression

Apparently, if you’re a night owl, you’re more prone to developing depression.

Night owls tend to get a bad rap. They’re often told they’re less productive and lazier than early risers, merely because they sleep more during daylight—you know, when the world is expected to be most active.

Now, according to recent research, they’re also apparently more likely to experience depression.

“Depression affects daily functioning and can impact a person’s work and education,” Simon Evans, PhD, a neuroscience lecturer and researcher in the School of Psychology of the University of Surrey in the U.K., told Medical News Today. “It also increases the risk of going on to develop other serious health conditions, including heart disease and stroke, so it’s important for us to study ways to reduce depression.”

Obviously, if there was a simple way to decrease your risk of developing depression, most of us would take it. In this case, that might mean getting to sleep earlier in the night rather than staying up until the early morning hours. However, unfortunately, some of us don’t have the luxury to change our sleeping hours.

Does that mean those who work night shifts or lead lifestyles that require them to be active at night are doomed to be depressed?

The study, published in the journal PLOS One, found that “evening-types had significantly higher levels of depression symptoms, poorer sleep quality, and lower levels of ‘acting with awareness’ and ‘describing,’ as well as higher rumination and alcohol consumption.”

With so many young adults self-identifying as “night owls” (or evening-types, as the study refers to them), it’s concerning to note this negative link between their sleep patterns and mental health.

“A large proportion (around 50%) of young adults are ‘night owls,’ and depression rates among young adults are higher than ever,” said Evans, lead author of the study. “Studying the link is therefore important.”

“More important is the finding that the link between chronotype and depression was fully mediated by certain aspects of mindfulness—‘acting with awareness’ in particular—sleep quality, and alcohol consumption,” Evans continued. “This means that these factors seem to explain why night owls report more depression symptoms.”

Source: https://www.vice.com/en/article/night-owls-are-more-likely-to-have-depression/

The Science of Falling Out of Love: Study Identifies ‘Point of No Return’ in Dying Relationships

(Photo by Prostock-Studio on Shutterstock)

Most of us believe relationship endings happen in messy, unpredictable ways—a betrayal discovered, a fight that goes too far, or a slow drift apart. But what if breakups actually follow a mathematical pattern? What if the end of your relationship is as predictable as the phases of the moon?

New research published in the Journal of Personality and Social Psychology reveals exactly that. Scientists have discovered that failing relationships don’t just randomly deteriorate—they follow a specific two-phase decline that can be measured, tracked, and even predicted with surprising accuracy.

Researchers Janina Bühler from Johannes Gutenberg University Mainz and Ulrich Orth from the University of Bern analyzed data from four major longitudinal studies across different countries. They found that couples who eventually break up typically experience a mild decline in happiness for years, followed by a dramatic drop in the final months or years before separation.

The Countdown to Breakup

Scientists call this phenomenon “terminal decline,” borrowing a concept previously used to describe how cognitive abilities and happiness deteriorate before death. The research reveals that our romantic relationships follow similar predictable patterns before they end.

The study found that “time-to-separation was a much better predictor of change than time-since-beginning.” While we often think about relationships in terms of how long couples have been together, this research shows that the time remaining until separation tells us more about relationship health.

Perhaps most fascinating is how differently breakup initiators and recipients experience this decline. People who eventually initiate breakups start becoming dissatisfied much earlier—about a year before the actual split. Meanwhile, their partners often remain relatively happy until just months before the end, when their satisfaction plummets dramatically.

Many people intuitively sense when their relationship is heading downhill. This research confirms these feelings aren’t just subjective impressions—they reflect a scientific trajectory toward separation that looks remarkably similar across cultures, age groups, and relationship types.

Exploring The Phases of Decline

In the study, researchers tracked thousands of couples over time, measuring their relationship satisfaction annually. They compared people who eventually separated with similar people who stayed together.

The pattern emerged consistently across all four datasets. “The decline prior to separation was divided into a preterminal phase, characterized by a smaller decline, and a terminal phase, characterized by a sharp decline,” the authors write. The major shift between these phases—what researchers call the “transition point”—occurred anywhere from 7 months to 2.3 years before the actual breakup, depending on the study.

The researchers also examined whether overall life satisfaction followed the same trajectory. They found that “terminal decline was less visible in life satisfaction than in relationship satisfaction.” This suggests that even as people recognize their relationships are deteriorating, their overall wellbeing holds up better, perhaps because they are already preparing emotionally for life after the relationship.

If most relationships fade according to this pattern instead of a dramatic, sudden event or spat, is there any hope for relationships already in this spiral? In many cases, the relationship is effectively over long before the actual separation occurs—couples are just living through the terminal phase.

For couples therapists and relationship counselors, these findings could transform how they evaluate troubled relationships. By identifying whether a couple is in the early “preterminal” phase versus the steep “terminal” decline, professionals might better determine which relationships can be saved and which have likely passed the point of no return.

Demographic factors influenced these patterns in interesting ways. The researchers found that “age at separation and marital status explained variance in the effect sizes.” Younger adults showed less dramatic terminal declines than older adults, possibly because younger people expect more relationship transitions.

The study also revealed that “individuals who were the recipients of the separation (in contrast to individuals who initiated the separation) entered the terminal phase later but then decreased more strongly.” This explains why breakups often feel so asymmetrical, with one partner seemingly more prepared than the other.

What This Means For Your Relationship

Many of us stay in declining relationships hoping things will improve. The study, unfortunately, indicates there might be a point of no return—a transition into terminal decline—after which recovery becomes highly unlikely.

For those currently in relationships, the findings offer both caution and hope. On one hand, recognizing the signs of terminal decline might help people make more informed decisions about when to seek help or when to move on. On the other hand, understanding that the steepest decline typically happens only after crossing a specific threshold might encourage couples to address problems before reaching that critical transition point.

The researchers frame it this way: “If unsatisfied couple members are still in the preterminal phase and have not yet reached the transition point, efforts to improve the relationship may be more effective, potentially preventing the onset of the terminal phase and the eventual dissolution of the relationship.”

The study also brings some comfort to those blindsided by breakups. If you’ve ever been shocked when a partner suddenly announced they wanted to separate, the science explains why: they likely crossed into terminal decline months or even years before you did. By the time you recognized the severity of the problems, they had already been mentally preparing for the end.

Like many aspects of human behavior, from birth to cognitive development to aging, romantic relationships appear to follow predictable patterns that can be scientifically observed and mapped. The terminal decline of relationship satisfaction isn’t just a feeling—it’s a measurable phenomenon that operates according to consistent rules across different cultures and contexts.

The study’s authors emphasize couples in rocky relationships should seek help before hitting the point of no return. “It is important to be aware of these relationship patterns,” says Bühler, who works as a couple therapist in addition to being a professor. “Initiating measures in the preterminal phase of a relationship, i.e., before it begins to go rapidly downhill, may thus be more effective and even contribute to preserving the relationship.”

Source: https://studyfinds.org/falling-out-of-love-point-of-no-return-in-dying-relationships/

Cannabis users under 50 are 6 times more likely to have a heart attack, new study shows

A new study shows that young people who consume marijuana are six times more likely to experience a heart attack than non-users.

Research published in the Journal of the American College of Cardiology (JACC) documents that people under the age of 50 who consume marijuana are about 6.2 times more likely to experience a myocardial infarction, commonly known as a heart attack, than non-marijuana users. Young marijuana users are also 4.3 times more likely to experience an ischemic stroke and twice as likely to experience heart failure, the study shows.

Researchers analyzed data on over 4.6 million people under the age of 50, of whom 4.5 million did not use marijuana and about 93,000 did. All participants were free of health conditions commonly associated with cardiovascular risks, like hypertension, coronary artery disease, diabetes, and a history of myocardial infarctions. The study also excluded people who use tobacco to eliminate another potential risk factor.

Ahmed Mahmoud, lead researcher and clinical instructor at Boston University, told USA TODAY that though the numbers appear significant, researchers’ biggest concern right now is studying more data, as research on marijuana’s effects on the cardiovascular system remains limited.

“Until we have more solid data, I advise users to try to somehow put some regulation in the using of cannabis,” Mahmoud said. “We are not sure if it’s totally, 100% safe for your heart by any amount or any duration of exposure.”

How does marijuana affect the heart?

As studies remain inconclusive and few and far between, scientists and doctors are still unclear how marijuana affects the cardiovascular system. But generally, researchers understand that marijuana can make the heart beat faster and raise blood pressure, as reported by the Centers for Disease Control and Prevention.

Mahmoud said researchers believe marijuana may make small defects in the coronary arteries’ lining, the thin layer of cells that forms the inner surface of blood vessels and hollow organs.

“Because cannabis increases the blood pressure and makes the blood run very fast and make some defects in the lining to the coronary arteries, this somehow could make a thrombosis (formation of a blood clot) or a temporary thrombosis in these arteries, which makes a cardiac ischemic (stroke) or the heart muscle is not getting enough oxygen to function,” Mahmoud said. “This is what makes the heart injured and this is a myocardial infarction or heart attack.”

Stanton Glantz, a retired professor from the University of California, San Francisco School of Medicine, co-authored a study published in the Journal of the American Heart Association last year that also addresses marijuana’s effects on the cardiovascular system.

Glantz, who also founded the Center for Tobacco Control Research and Education, told USA TODAY he believes smoking marijuana has the same effects on the cardiovascular system as smoking tobacco.

When smoking a cigarette, the blood that is distributed through the body becomes contaminated with the cigarette smoke’s chemicals, which can damage the heart and blood vessels, the CDC reports. This damage can result in coronary heart disease, hypertension, heart attack, stroke, aneurysms and peripheral artery disease.

Changes in blood chemistry from cigarette smoke can also cause plaque in the body’s arteries, resulting in a disease called atherosclerosis, according to the CDC. When arteries become full of plaque, it’s harder for blood to move throughout the body. This can create blood clots and ultimately lead to a heart attack, stroke or death.

How does the new study correspond with previous research?

The recently published study aligns with previous research in the field.

The Journal of the American Heart Association study, which surveyed more than 434,000 people between the ages of 18 and 74, found that marijuana affects the cardiovascular system. The study also singled out marijuana users who didn’t use tobacco.

The 2024 study found that people who consume − specifically inhale − marijuana are more likely to experience coronary heart disease, myocardial infarction, and stroke. There is a “statistically significant increase in risk,” Glantz said.

The main difference between the new study, co-authored by Mahmoud, and the 2024 study, is the populations studied, Glantz said.

The 2024 study analyzed data from the Behavioral Risk Factor Surveillance Survey, a CDC-operated telephone survey that includes responses from across the country. The new study analyzed data from 53 healthcare organizations using the TriNetX health research network.

Source: https://www.usatoday.com/story/news/health/2025/03/21/cannabis-users-heart-attack-risk/82574623007/

Why Can’t We Remember the First Few Years of Life?

Why don’t we remember being a baby? (Miramiska/Shutterstock)

Have you ever wondered why you can’t remember being a baby? This blank space in our memory, known as “infantile amnesia,” has puzzled scientists for years. Most of us can’t recall anything before age three or four. Until recently, researchers thought baby brains simply couldn’t form memories yet, that the memory-making part of our brain (the hippocampus) wasn’t developed enough.

But it turns out babies might remember more than we thought. Research just published in the journal Science shows that babies as young as one year old can actually form memories in their hippocampus. The study, led by researchers at various American universities, suggests our earliest memories aren’t missing; we just can’t access them later.

How Do You Study Memory in Babies Who Can’t Talk?

You can’t exactly ask a baby, “Do you remember this?” The researchers came up with a clever solution. They showed 26 babies (ages 4 months to 2 years) pictures of faces, objects, and scenes while scanning their brains. Later, they showed each baby two pictures side by side, one they’d seen before and one new one, and tracked where the babies looked.

“When babies have seen something just once before, we expect them to look at it more when they see it again,” says lead study author Nick Turk-Browne from Yale University, in a statement. “So in this task, if an infant stares at the previously seen image more than the new one next to it, that can be interpreted as the baby recognizing it as familiar.”

Getting babies to lie still in a brain scanner is no small feat. The research team has spent years developing special techniques to make this possible. They made the babies comfortable and only scanned them when they were naturally awake and content.

The Big One-Year Memory Milestone

The brain scans showed that when a baby’s hippocampus was more active while seeing a picture for the first time, they were more likely to stare at that same picture later, showing they may have remembered it.

This ability to remember showed a clear age pattern. Babies younger than 12 months didn’t show consistent memory signals in their brains, but the older babies did. And the specific part of the hippocampus that lit up, the back portion, is the same area adults use for episodic memories.

The researchers had previously discovered that even younger babies (as young as three months) can do a different kind of memory called “statistical learning.” This is basically spotting patterns across experiences rather than remembering specific events.

Source: https://studyfinds.org/cant-remember-first-years-of-life/

This Smartphone App Helps Seniors in Assisted Living Fight Cognitive Decline

Providing seniors with an app that boosts brain health solves many accessibility challenges in assisted living facilities. (Pressmaster/Shutterstock)

Let’s face it, we’re all worried about memory loss as we age. But what if the same device you use for calling grandkids could actually strengthen your mind? A new study revealed that a smartphone app improved thinking abilities in older adults living in assisted living facilities.

Residents of assisted living often feel isolated and might not have easy access to specialized brain health services. That’s why an app would make perfect sense, giving residents access to brain training at their fingertips. Scientists from the University of Utah, Texas A&M, and a company called Silvia Health tested an app called the “Silvia Program” with older folks in assisted living.

Research published in Public Health in Practice shows the promising potential of this app’s capabilities for fighting cognitive decline. Instead of just including memory games like many brain apps, this one took a kitchen-sink approach, mixing brain training with exercise routines, food tracking, and other lifestyle stuff all in one app.

While seniors who didn’t use the app actually lost some brain function over the 12 weeks (yikes), the app users saw their scores improve. That’s kind of a big deal for anyone with parents or grandparents in assisted living who worry about their mental sharpness.

The idea behind the app’s design is actually pretty simple. Instead of just doing one thing for your brain, it mixes several approaches together. It’s like cross-training for your mind rather than repeating a single exercise.

Earlier studies already showed this mix-and-match approach helps fight memory loss. But getting regular in-person brain training can be tough, especially if you live in a facility with limited transportation options. That’s why putting these tools on a smartphone could be such a great approach. It brings brain health right to where seniors already are.

The Silvia Program isn’t your run-of-the-mill brain games app. It bundles five different tools:

  • Daily goals to keep you motivated
  • Brain exercises targeting different thinking skills
  • Trackers for food/exercise/sleep habits
  • Workout routines you can do sitting in your living room
  • A talking AI that tests your thinking and adjusts the difficulty

The app also provides personalized coaching with a clinical psychologist, along with cognitive exercises, tailored activity suggestions, and a voice analysis tool capable of identifying early signs of dementia. It engages in interactive conversations to assess the user’s needs and adjusts its functions accordingly.

The Science Behind Silvia

For the study, the researchers recruited 20 folks living in an assisted living facility in Indiana who were experiencing mild cognitive impairment but didn’t have dementia or serious depression. They split them into two groups of 10. One group used the Silvia app for about an hour twice a week for three months. The other group just kept doing whatever they normally did.

They used a test called the MoCA to measure brain function. Doctors use this test to check for early signs of dementia.

Now, 20 people isn’t exactly huge for a study, but what they found still raised some eyebrows. The app seemed to help with visual thinking, language, memory recall, and knowing the time and place.

Why does this matter? Many people in assisted living start feeling cut off from the world after moving in. They might not see family as often, can’t always get to brain health specialists, and sometimes feel like they’re just waiting around. That’s exactly when memory tends to nosedive.

Two things make this app approach especially practical. First, it’s right there on a device many seniors already use. Second, it adapts to each person. You can dial the brain games up or down in difficulty so they’re not too easy or impossibly hard. The exercise instructions show pictures of each move, so you’re not left wondering what “lateral arm raise” means. The chatty AI keeps tabs on how you’re doing, then adjusts everything accordingly, like having a personal trainer for your brain who lives in your pocket.

That said, there are limitations to consider. As noted, the study sample was small, and it only ran for 12 weeks. We have no idea if the brain boosts last longer than that or if they’d show up in different groups of people. Most participants were white women, which doesn’t tell us how the app might work for men or people from different backgrounds. Oddly enough, the app users had more years of education than the non-users, which might have affected the results.

What This Means for Aging and Memory Care

With baby boomers hitting their 70s and 80s, we’re staring down a tsunami of potential memory problems. The old-school fix? Regular visits with specialists, which means transportation hassles, scheduling headaches, and hefty bills. Phone apps skip all that. You just tap and train whenever it’s convenient.

Still, this isn’t the first hint that digital tools might help aging brains. Other studies have already shown that brain games and regular exercise each help slow mental decline. This research suggests bundling them together in one easy-to-use app might pack an even bigger punch.

Nursing homes and assisted living centers should also take note. Their staff is always stretched thin. Apps that residents can use independently might supplement care without breaking budgets or requiring extra personnel. One iPad and a handful of good apps could potentially benefit dozens of residents.

Phones and tablets often get a bad rap for making us dumber, shortening attention spans, and replacing memory with Google searches. But this study flips that narrative. The same devices blamed for digital brain drain might actually build brain power when loaded with the right software.

Source: https://studyfinds.org/app-helping-seniors-fight-cognitive-decline-assisted-living/

Can a daily nap do more harm than good? A sleep researcher explains

Woman listening to music (© Prostock-studio – stock.adobe.com)

It’s the middle of the afternoon: your eyelids are heavy and your focus is slipping. You close your eyes for half an hour and wake up feeling recharged. But later that night, you’re tossing and turning in bed, wondering why you can’t drift off. That midday snooze, which felt so refreshing at the time, might be the reason.

Naps have long been praised as a tool for boosting alertness, enhancing mood, strengthening memory, and improving productivity. Yet for some, they can sabotage nighttime sleep.

Napping is a double-edged sword. Done right, it’s a powerful way to recharge the brain, improve concentration, and support mental and physical health. Done wrong, it can leave you groggy, disoriented, and struggling to fall asleep later. The key lies in understanding how the body regulates sleep and wakefulness.

Most people experience a natural dip in alertness in the early afternoon, typically between 1 p.m. and 4 p.m. This isn’t just due to a heavy lunch – our internal body clock, or circadian rhythm, creates cycles of wakefulness and tiredness throughout the day. The early afternoon lull is part of this rhythm, which is why so many people feel drowsy at that time.

Studies suggest that a short nap during this period – ideally followed by bright light exposure – can help counteract fatigue, boost alertness, and improve cognitive function without interfering with nighttime sleep. These “power naps” allow the brain to rest without slipping into deep sleep, making it easier to wake up feeling refreshed.

But there’s a catch: napping too long may result in waking up feeling worse than before. This is due to “sleep inertia” – the grogginess and disorientation that comes from waking up during deeper sleep stages.

Once a nap extends beyond 30 minutes, the brain transitions into slow-wave sleep, making it much harder to wake up. Studies show that waking from deep sleep can leave people feeling sluggish for up to an hour. This can have serious implications if they then try to perform safety-critical tasks, make important decisions or operate machinery, for example. And if a nap is taken too late in the day, it can eat into the “sleep pressure build-up” – the body’s natural drive for sleep – making it harder to fall asleep at night.

When napping is essential

For some, napping is essential. Shift workers often struggle with fragmented sleep due to irregular schedules, and a well-timed nap before a night shift can boost alertness and reduce the risk of errors and accidents. Similarly, people who regularly struggle to get enough sleep at night – whether due to work, parenting or other demands – may benefit from naps to bank extra hours of sleep that compensate for their sleep loss.

Nonetheless, relying on naps instead of improving nighttime sleep is a short-term fix rather than a sustainable solution. People with chronic insomnia are often advised to avoid naps entirely, as daytime sleep can weaken their drive to sleep at night.

Certain groups use strategic napping as a performance-enhancing tool. Athletes incorporate napping into their training schedules to speed up muscle recovery and improve sports-related parameters such as reaction times and endurance. Research also suggests that people in high-focus jobs, such as healthcare workers and flight crews, benefit from brief planned naps to maintain concentration and reduce fatigue-related mistakes. NASA has found that a 26-minute nap can improve the performance of long-haul flight crews by 34% and their alertness by 54%.

How to nap well

To nap effectively, timing and environment matter. Keeping naps between 10 and 20 minutes prevents grogginess. The ideal time is before 2 p.m. – napping too late can push back the body’s natural sleep schedule.

The best naps happen in a cool, dark, and quiet environment, similar to nighttime sleep conditions. Eye masks and noise-canceling headphones can help, particularly for those who nap in bright or noisy settings.

Despite the benefits, napping isn’t for everyone. Age, lifestyle and underlying sleep patterns all influence whether naps help or hinder. A good nap is all about strategy – knowing when, how, and if one should nap at all.

For some it’s a life hack, improving focus and energy. For others, it’s a slippery slope into sleep disruption. The key is to experiment and observe how naps affect your overall sleep quality.

Source: https://studyfinds.org/daily-nap-more-harm-than-good/

How social media expectations are destroying teenage friendships

Social media adds a whole new layer of stress to teen friendships. (SpeedKingz/Shutterstock)

Today’s teens face a challenge that their parents never did: the pressure to be constantly available to their friends online. New research from the University of Padua in Italy reveals how this digital pressure is creating stress that leads to real-world friendship conflicts for teenagers.

The study, published in Frontiers in Digital Health, tracked 1,185 teenagers over six months to understand how social media affects their friendships. What they found paints a concerning picture of modern teen relationships.

When Friends Don’t Text Back

“We show that adolescents’ perceptions of social media norms and perceptions of unique features of social media contribute to digital stress, which in turn increases friendship conflicts,” says lead study author Federica Angelini from the University of Padua, in a statement.

The researchers identified two main types of digital stress that teens experience. The first, entrapment, refers to the pressure teens feel to always be available and responsive to their friends online. The second, disappointment, arises when friends don’t respond as quickly or as often as expected, leading to negative feelings. Both types of stress play significant roles in the challenges teens face in their digital friendships.

Surprisingly, it’s not the pressure to be available that causes most problems; it’s the disappointment when friends aren’t available to them.

“Disappointment from unmet expectations on social media—such as when friends do not respond or engage as expected—is a stronger predictor of friendship conflict than the pressure to be constantly available,” explains Angelini.

In other words, teens aren’t fighting because they feel burdened by needing to respond to every message, but because they feel upset when their friends don’t respond to them.

The Problem with Pictures and Videos

When examining different features of social media, the researchers found that the visual nature of content (photos, videos, stories) was most connected to creating disappointment and conflict.

“Visual content makes it easier for teens to see what their friends are doing at any given time. If teens notice that their friends are active online or spend time with others while ignoring their messages, they may feel excluded, jealous, or rejected,” Angelini explained.

We’ve all had that moment: seeing a friend post a fun story while they still haven’t answered the message you sent hours ago. For teens, these visual cues can trigger strong emotional responses that lead to real-world arguments.

The good news is that parents and educators can help teens develop healthier social media habits. Teaching teens strategies to protect their mental health online is crucial as parents navigate the uncharted territory of raising a generation growing up with social media.

“One such habit for teenagers could be setting boundaries, for example scheduling ‘offline’ times or managing notifications. When done in discussion with friends this can also help reduce misunderstandings,” says Angelini.

The researchers also recommend helping teens understand that not every message needs an immediate response. Learning this can reduce stress while maintaining healthy friendships.

Boys and girls experience these pressures slightly differently. Boys who perceived strong expectations of constant online availability actually reported feeling less trapped by social media than girls did, possibly because different friend groups hold different expectations around response times.

The study followed the same teens over six months, which allowed researchers to see how digital stress actually caused more conflicts over time, rather than just being connected to them.

Understanding these pressures is key to helping teens build healthy, sustainable friendships in the digital age. By recognizing the emotional impact of unmet digital expectations, parents and educators can guide teenagers toward more balanced social connections both online and offline.

Source: https://studyfinds.org/teen-friendships-pressure-social-media-expectations/

End Of Headphones? New ‘Audible Enclaves’ Deliver Sound Only to Your Ears

(© Julia – stock.adobe.com)

Ever been annoyed by someone else’s music in a shared space? Or struggled to have a private conversation in a busy office? Researchers at Penn State University might have just solved these everyday acoustic headaches with a breakthrough that creates “sound bubbles” only the intended listener can hear.

These localized audio spots, which the researchers dubbed “audible enclaves,” can be placed with pinpoint accuracy—even behind obstacles like human heads—while remaining silent to everyone else in the room.

“We essentially created a virtual headset,” said Jia-Xin “Jay” Zhong, a postdoctoral scholar in acoustics at Penn State. “Someone within an audible enclave can hear something meant only for them — enabling sound and quiet zones.”

How Audible Enclaves Work

Published in the Proceedings of the National Academy of Sciences, the research tackles a challenge in acoustics that has long frustrated audio engineers. Sound waves naturally spread out as they travel, making it nearly impossible to contain them without physical barriers. This is why conversations carry across rooms and why traditional speakers fill entire spaces with sound.

“We use two ultrasound transducers paired with an acoustic metasurface, which emit self-bending beams that intersect at a certain point,” said corresponding author Yun Jing, professor of acoustics in the Penn State College of Engineering. “The person standing at that point can hear sound, while anyone standing nearby would not. This creates a privacy barrier between people for private listening.”

The system works by sending out two beams of ultrasonic sound—frequencies too high for humans to hear—that travel along curved paths and meet at a specific target location. Using 3D-printed structures called metasurfaces, they shape these ultrasonic beams to bend around obstacles like a person’s head.

Positioned in front of the two transducers, the metasurfaces send the ultrasonic waves at two slightly different frequencies along crescent-shaped trajectories until they intersect. The metasurfaces were 3D printed by co-author Xiaoxing Xia, staff scientist at Lawrence Livermore National Laboratory.

Neither beam is audible on its own; it is the intersection of the two beams that creates a local nonlinear interaction, which generates audible sound. The beams can bypass obstacles, such as human heads, to reach a designated point of intersection.
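A rough way to see why two inaudible beams can produce an audible tone is to model the nonlinear mixing numerically. The sketch below uses a simple quadratic nonlinearity and illustrative carrier frequencies of 40 kHz and 40.5 kHz (assumptions for the demo, not figures from the paper): where the beams overlap, a component appears at the difference frequency, well inside the audible band.

```python
import numpy as np

# Two ultrasonic carriers (illustrative frequencies, not from the paper)
f1, f2 = 40_000.0, 40_500.0   # Hz; both far above human hearing
fs = 400_000                   # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)

beam1 = np.sin(2 * np.pi * f1 * t)
beam2 = np.sin(2 * np.pi * f2 * t)

# A quadratic nonlinearity (the simplest model of nonlinear mixing in air)
# produces sum and difference frequencies where the beams overlap.
mixed = (beam1 + beam2) ** 2

# Find the strongest spectral component below 20 kHz (the audible band).
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
audible = freqs < 20_000
peak = freqs[audible][np.argmax(spectrum[audible][1:]) + 1]  # skip the DC bin
print(f"Audible difference tone near {peak:.0f} Hz")  # f2 - f1 = 500 Hz
```

In real air the interaction is far weaker and more complicated than a square law, but the appearance of the difference frequency f2 − f1 only where the beams cross is the same mechanism the researchers exploit.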

Breaking Sound Barriers

Most audio technologies work within narrow frequency ranges, but this system demonstrated effectiveness across an impressive spectrum from 125 Hz to 4 kHz. This range covers most frequencies needed for speech and music reproduction, making it practical for real-world applications.

The approach differs fundamentally from existing directional sound technologies. Previous attempts to create focused audio have required massive speaker arrays and complex processing, especially for lower frequencies with longer wavelengths. Commercial “sound beam” products exist but can’t bend around obstacles or create such sharply defined listening spots.

Perhaps most impressive is the system’s compact size. The researchers achieved their results using a source aperture measuring just 0.16 meters—tiny compared to conventional approaches that would require much larger equipment to direct low-frequency sounds.

To verify the technology works with actual content rather than just test tones, the team conducted rigorous testing. “We used a simulated head and torso dummy with microphones inside its ears to mimic what a human being hears at points along the ultrasonic beam trajectory, as well as a third microphone to scan the area of intersection,” said Zhong. “We confirmed that sound was not audible except at the point of intersection, which creates what we call an enclave.”

The researchers tested the system in a common room with normal reverberations, meaning it could work in various environments like classrooms, vehicles, or even outdoors.

Where Will We See Audible Enclaves?

This technology opens up fascinating possibilities. Museums could deliver exhibit narration to visitors in specific spots without creating audio overlap. Office workers could receive private notifications without disrupting colleagues. Cars could create individual sound zones for each passenger, letting the driver hear navigation instructions while rear passengers enjoy different music.

The applications extend beyond convenience. The same approach could create targeted quiet zones by delivering precisely placed noise-cancellation signals. Hospitals could maintain quiet areas while allowing necessary communication in adjacent spaces—something traditional noise control systems struggle to accomplish.

For now, the researchers can remotely transfer sound about a meter away from the intended target, at a volume of about 60 decibels – roughly conversational level. However, they say both the distance and the volume could be increased by raising the ultrasound intensity.

The current system requires high-intensity ultrasound to produce moderate audio levels due to conversion inefficiency. While the levels used fall within safety guidelines, this aspect needs further refinement.

Audio quality presents another hurdle. The interaction introduces some distortion, which could affect complex audio content. However, the team believes signal processing techniques could compensate for these effects in future versions.

Audible enclaves certainly offer a compelling and exciting solution to a long-standing problem, creating bubbles of sound that exist only where wanted and nowhere else. By focusing sound with laser-like precision, this technology could transform our relationship with audio in shared spaces, making private listening truly private without isolating listeners from their surroundings.

Source: https://studyfinds.org/audible-enclaves-sound-waves-penn-state/

Why ‘fake it till you make it’ at work may be draining your mental health

In the sales industry, “fake it till you make it” isn’t just a saying; it’s often a job requirement. Behind those seemingly genuine smiles and enthusiastic pitches, salespeople are performing complex emotional gymnastics that researchers call emotional labor. According to new international research, this emotional performance is seriously impacting employee mental health and job satisfaction.

Faking your emotions at work may lead to employee burnout and stress. (© Prostock-studio – stock.adobe.com)

A recent study published in Industrial Marketing Management explores how salespeople’s moral character influences how they manage their emotions at work and how this ultimately affects their well-being. Poor employee well-being costs U.S. companies an estimated $500 billion and results in 550 million lost workdays annually, so this is a big deal for both businesses and individuals.

Reports show that about 63% of salespeople struggle with mental health issues, and sales jobs are known for their intense pressure. This has only gotten worse since the pandemic, with salespeople facing new challenges and changing customer expectations.

“We are all under a lot of pressure, a lot of deadlines at work, right?” says study co-author Khashayar Afshar Bakeshloo (Kash) from the University of Mississippi, in a statement. “We wanted to look at the different factors that threaten employees’ mental health and lead to emotional exhaustion. One such factor that is very interesting to us was emotional labor.”

The Hidden Cost of Putting on a Happy Face

Emotional labor is the work of managing one’s emotions to meet job requirements. It comes in two main forms: surface acting and deep acting.

Surface acting is basically putting on a mask and showing emotions you don’t actually feel, like forcing a smile during a tough customer meeting. Deep acting goes further, where you actually try to generate the required emotions internally, like really trying to feel excited about a product you’re selling.

The researchers wanted to know how a salesperson’s moral character affects which approach they use, and how these approaches impact both customer behavior and the salesperson’s well-being.

They surveyed 313 B2B salespeople across various industries in the United States, representing different company sizes and offering various products and services. Most people in the study (72.5%) were men, which is typical in B2B sales.

When Values and Job Requirements Collide

Salespeople who deeply value moral traits as part of who they are (what researchers call “moral identity internalization”) are more likely to try genuinely feeling the emotions their job requires, rather than just faking them.

On the other hand, salespeople who focus more on publicly showing their morality (called “moral identity symbolization”) tend to use both approaches depending on the situation—sometimes genuinely trying to feel the emotions, other times just putting on a show.

Customers can often tell when a salesperson is being fake, and they frequently respond by treating the salesperson poorly or disrespectfully. This negative customer behavior then makes salespeople less satisfied with their jobs, creating a harmful cycle.

“Managing emotions to meet job demands can lead to exhaustion, dissatisfaction, and negative customer reactions,” says study co-author Omar Itani from Lebanese American University. “Job satisfaction is essential for overall well-being, emphasizing the need for supportive workplace cultures.”

In sales roles, where rejection is common, the pressure to perform can lead to significant emotional strain. More than 70% of people working in sales reported struggling with mental health in the 2024 State of Mental Health in Sales report.

“Salespeople are expensive employees,” explains Afshar. “They bring in money for the organization. So, if they miss an opportunity, it means that there’s no money coming in. When a salesperson burns out, it’s not just a loss of the person, but it’s also everything they bring to the company.”

Creating Healthier Work Environments

So, what can employees and employers do? Aligning personal values with job expectations can help salespeople manage emotional labor more effectively. Those in roles that require frequent emotional acting should consider workplaces that support authenticity, mental health resources, and ethical leadership to reduce burnout. Sales managers can work to foster environments like these.

“Communication is the key here,” adds Afshar. “When employees can communicate their problems, they aren’t dealing with problems alone. When they feel safe talking to their managers, their colleagues, it tends to remove some of that burden.”

Source: https://studyfinds.org/fake-it-till-you-make-it-work-mental-health/

Log out or lean in? The way you use social media matters more than how long you scroll

Using social media with more intention can help to protect your mental health. (PeopleImages.com – Yuri A/Shutterstock)

Every few months, another headline warns us about social media’s toxic effects on mental health, followed by calls to digital detox. Yet for many of us, completely unplugging isn’t super realistic. Now, new research from the University of British Columbia suggests we might not have to choose between staying connected and staying mentally healthy; there’s a middle path that could deliver the best of both worlds.

The study, published in the Journal of Experimental Psychology: General, challenges the popular belief that we must cut back on social media to protect our mental health. Instead, learning to use social media differently, focusing on meaningful connections rather than mindless scrolling or comparing ourselves to others, might be just as helpful for our emotional well-being.

“There’s a lot of talk about how damaging social media can be, but our team wanted to see if this was really the full picture or if the way people engage with social media might make a difference,” says lead study author Amori Mikami, a psychology professor from the University of British Columbia, in a statement.

The Love-Hate Relationship With Social Media

For most young adults, social media is a mixed bag. On one hand, platforms like Instagram and Facebook make it easy to stay in touch with friends, find communities of like-minded people, and get emotional support when needed. On the other hand, these same platforms can increase anxiety, depression, and loneliness when we find ourselves constantly comparing our regular lives to others’ highlight reels or feeling like we’re missing out on what everyone else is doing.

The research team recruited 393 social media users between the ages of 17 and 29 who reported some negative impacts from social media and had some symptoms of mental health concerns. They split these participants into three groups:

  1. A tutorial group that learned healthier ways to use social media
  2. An abstinence group that was asked to stop using social media entirely
  3. A control group that continued their usual social media habits

Over six weeks, researchers tracked participants’ social media use with phone screen time apps and self-reports. They also measured various aspects of mental well-being, including loneliness, anxiety, depression, and fear of missing out (FOMO).

Two Different Paths to Better Mental Health

As you might expect, people in the abstinence group drastically reduced their time on social media. But the tutorial group also cut back on their social media use compared to the control group, even though they were never specifically told to do so. Just becoming more mindful about social media naturally led them to be more selective about their usage.

Both the tutorial and abstinence groups made fewer social comparisons and did less passive scrolling. While the abstinence group showed the biggest changes, the tutorial group also improved significantly compared to the control group.

When it came to mental health benefits, each approach seemed to help with different things. The tutorial approach was especially good at reducing FOMO and feelings of loneliness. The abstinence approach, meanwhile, was particularly effective at lowering symptoms of depression and anxiety but did not improve loneliness, possibly due to reduced social connections.

“Cutting off social media might reduce some of the pressures young adults feel around presenting a curated image of themselves online. But stopping social media might also deprive young adults of social connections with friends and family, leading to feelings of isolation,” explains Mikami.

Creating a Healthier Social Media Experience

The tutorial approach taught participants how to use social media in ways that boost genuine connection while reducing the stress of constant comparison. Participants learned to:

  • Reflect on when social media made them feel good versus bad
  • Recognize that most posts are carefully curated and don’t reflect real life
  • Unfollow or mute accounts that triggered negative feelings about themselves
  • Actively engage with friends through comments or messages instead of just passively scrolling

Completely stopping social media reduced activity on friends’ pages, which actually predicted greater loneliness. It seems that commenting on friends’ content provides a valuable social connection. However, reducing engagement with celebrity or influencer content predicted lower loneliness and fewer symptoms of depression and anxiety—showing that not all social media activity affects us the same way.

“Social media is here to stay,” says Mikami. “And for many people, quitting isn’t a realistic option. But with the right guidance, young adults can curate a more positive experience, using social media to support their mental health instead of detracting from it.”

Mikami believes these findings could help develop mental health programs and school workshops where young people learn to use social media as a tool for strengthening relationships rather than as a source of stress and comparison.

Don’t beach and booze: Why alcohol makes it easier to get a sunburn

Drinking in the sun can make you unaware that you are getting sunburnt. (STEKLO/Shutterstock)

BOCA RATON, Fla. — When was the last time you got a sunburn? If you’re like nearly a third of American adults who were toasted by the sun at least once last year, you might want to pay attention to a revealing new study about skin cancer risk. Researchers from Florida Atlantic University have found some eye-opening patterns in how Americans think about cancer risk and protect their skin—or don’t.

Your beach cocktail might be making your sunburn worse. Research published in the American Journal of Lifestyle Medicine reveals that more than one in five people who got sunburned were drinking alcohol at the time. In other words, there seems to be a real connection between having drinks and getting burned.

The Skin Cancer Problem You Need to Know About

Skin cancer tops the charts as America’s most common cancer. Millions of cases are diagnosed every year, costing the healthcare system nearly $9 billion annually. While most of us have heard of melanoma (the deadliest type), basal cell carcinoma and squamous cell carcinoma are actually more common.

Despite how common skin cancer is, the study found most Americans aren’t particularly worried about getting it. Only about 10% of people said they were “extremely worried,” while most were just “somewhat” (28.3%) or “slightly” (27.3%) concerned.

Sunburns significantly raise your cancer risk. According to dermatologists, getting just five blistering sunburns between ages 15 and 20 increases your melanoma risk by a whopping 80%. That’s a massive jump from something many people experience regularly.

Who Gets Burned? The Surprising Patterns

The research team surveyed over 6,000 American adults about their sun habits and sunburn experiences. Rich people get more sunburns. Yes, you read that correctly. People earning $200,000+ per year were four times more likely to report sunburns than those in the lowest income bracket. This completely flips what you might expect: wouldn’t wealthier people be more informed and have better access to sun protection?

Education doesn’t help either. College graduates and those with advanced degrees reported more sunburns than people with a high school diploma or less.

Other patterns:

  • Young adults (18-39) burn more often than older folks
  • Men get more sunburns than women
  • White Americans report more sunburns than Black or Hispanic Americans

“While Hispanics and Black Americans generally report lower rates of sunburn, Hispanics often perceive greater benefits of UV exposure, which increases their risk,” says study author Lea Sacca, in a statement.

Why might wealthy, educated people get more sunburns? They probably spend more time on outdoor vacations or leisure activities. Think about it: boating, skiing, beach vacations, and outdoor sports are all activities more accessible to those with higher incomes and more flexible work schedules.

Source: https://studyfinds.org/alcohol-easier-to-get-a-sunburn/

Is your money gone before it arrives? The sad reality of American paychecks

Most working Americans have already spent more than half their paycheck before they even get it. This financial balancing act, revealed in a recent survey, shows how millions of workers may be finding themselves counting money they haven’t yet received just to keep up with basic expenses.

A survey of 2,000 employed Americans making less than $75,000 annually shows what happens to the modern paycheck—where it goes, how fast it disappears, and how many people need to plan carefully just to make it through each month.

The poll, conducted by Talker Research and commissioned by EarnIn, found that 59% of Americans map out which bills to pay first while waiting for payday, with 51% of their money already earmarked before it hits their account. This happens mainly because living costs don’t match what people earn (44%) and bill due dates are scattered throughout the month (31%).

Past-due bills are another big reason people spend their paychecks before they arrive, making up 38% of pre-spent funds. Only 40% of those surveyed keep up with all their bills, while 55% typically juggle between one and four overdue bills every month.

When payday finally arrives, people know exactly where the money needs to go. Housing costs like rent or mortgage payments come first for 56% of respondents, then necessities like food and medicine (51%). Utility bills follow at 38%, with catching up on overdue bills in fourth place at 29%.

Three Days to Empty

The money that does arrive disappears quickly. Americans spend about 43% of their paycheck within just three days of getting it. When you add this to the 51% that’s already spoken for before arrival, very little remains for the rest of the pay period.

This quick drain creates a cycle of stress that most Americans find themselves stuck in. Only 20% of respondents said they don’t run out of money or need to tighten their belt before their next check comes—meaning 80% feel the squeeze as payday approaches.

For those caught short at the end of each pay cycle, the effects hit home: 62% struggle to buy groceries, 30% have trouble paying major bills, another 30% can’t cover smaller bills, and 16% find it hard to afford medicine and make loan payments.

Budget Advice vs. Real Life

The survey compared Americans’ actual spending with the popular 50/30/20 budget rule—which suggests putting 50% toward needs, 30% toward wants, and 20% into savings. The results show the gap between this advice and what people actually face.

On average, respondents put 64% of their money toward basic needs like food, bills, and housing—far more than the recommended 50%. Meanwhile, “wants” or personal spending gets just 16% of their income, and savings also account for only 16% of the average paycheck.
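To make that gap concrete, here is a minimal sketch comparing the 50/30/20 rule with the survey’s reported 64/16/16 averages. The $2,000 monthly take-home figure is an assumption for illustration, not a number from the survey.

```python
def split(paycheck, needs, wants, savings):
    """Divide a paycheck into dollar amounts by percentage shares."""
    assert needs + wants + savings <= 100
    return {label: round(paycheck * pct / 100, 2)
            for label, pct in (("needs", needs),
                               ("wants", wants),
                               ("savings", savings))}

paycheck = 2_000.00                      # illustrative monthly take-home pay
rule     = split(paycheck, 50, 30, 20)   # the 50/30/20 guideline
actual   = split(paycheck, 64, 16, 16)   # survey averages from the article

gap = actual["needs"] - rule["needs"]
print(rule)    # {'needs': 1000.0, 'wants': 600.0, 'savings': 400.0}
print(actual)  # {'needs': 1280.0, 'wants': 320.0, 'savings': 320.0}
print(gap)     # 280.0 more going to basics than the rule recommends
```

On this assumed paycheck, an extra $280 a month flows to necessities, squeezing wants and savings to well below the guideline.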

The savings picture looks even worse on closer inspection. More than half (56%) of those surveyed said less than 10% of their money goes into savings, while 23% couldn’t remember when they last saved 20% as the budget rule suggests.

When money runs low before the next check arrives, Americans use various tactics to get by. Nearly 39% pick up side hustles for extra cash, while 31% ask family for help and 28% turn to credit cards.

Worryingly, 14% of respondents said they have nowhere to turn when they need more money—showing a group of people living with extreme money troubles and no safety net.

Banking on Help
Banks, which might seem like obvious helpers in this situation, offer few solutions. Only 5% of respondents can get their paycheck early through their bank, and even fewer (4%) can access early pay through their job.

“In today’s world, employees shouldn’t have to wait days to access the money they’ve already earned,” said an EarnIn spokesperson. “People deserve financial solutions that provide faster access to their pay—regardless of where they bank—so they can manage their money on their own terms, not their bank’s schedule.”

Despite limited help from banks, Americans stay loyal to them for years. The average person has used the same bank for nine years, with 14% reporting relationships lasting between 19 and 20 years.

This loyalty seems based more on habit than benefits. More than half (57%) stay with their bank simply because it feels familiar. Only 20% said they stay because their bank lets them get their money sooner.

Getting Out of the Paycheck-to-Paycheck Life
The survey asked how getting paychecks a bit earlier might ease financial pressure. If Americans could get paid up to two days earlier than usual, 34% said they could pay bills on time, and 29% thought they would worry less about money.

Additionally, 19% said earlier access would help them pay rent on time, while 15% could save more. Overall, 56% felt that getting their paycheck up to two days earlier would make them feel more secure about their finances.

For many, the standard two-week or monthly pay cycle creates roadblocks to financial stability, forcing even careful people to make tough choices about which necessities get paid first. This mismatch between when money is earned and when bills come due adds to financial worry.

The gap between budget advice and real spending patterns further shows the money pressures facing working Americans. When nearly two-thirds of income must cover just the basics, building savings becomes much harder.

The findings also raise questions about how employers and banks might either help reduce or accidentally increase these pressures. With so few workers able to access early pay options, there’s room for new approaches in payroll and banking that better fit people’s actual financial lives.

Financial advice often focuses on budgeting skills and personal habits, but this survey suggests that timing issues like pay frequency and bill due dates matter just as much. Solutions that fix these broader issues may work better than putting all the burden on individual choices.

Source: https://studyfinds.org/sad-reality-american-paycheck/

The burned-out generation: Americans feeling peak stress earlier than ever

(© RawPixel.com – stock.adobe.com)

“I’m completely burned out”—once a phrase associated with decades of career advancement and family responsibilities—is now commonly heard from professionals in their twenties. According to a new survey, 25% of Americans experience burnout before age 30, challenging traditional assumptions about when life’s pressures reach their peak and raising important questions about how modern stressors affect different generations.

The poll of 2,000 adults from Talker Research examined how the cumulative stress of the past decade has affected Americans across generations. While the average American experiences peak burnout at approximately 42 years old, the picture looks dramatically different for younger adults. Gen Z and millennial respondents, currently aged 18 to 44, reported reaching their highest point of stress at an average age of just 25—a finding that suggests fundamental changes in how modern life impacts mental well-being across age groups.

The finding that a quarter of Americans experience burnout before age 30 represents a significant shift from traditional life course expectations. Historically, peak stress periods were often associated with mid-life challenges such as simultaneously managing career advancement, child-rearing, and caring for aging parents. The early burnout phenomenon suggests that younger generations may be facing an accelerated or compressed experience of life stressors.

The state of American stress
Currently, the average person reports operating at half their stress capacity—already a concerning level for overall well-being. Even more troubling, 42% of respondents indicated feeling even more stressed than this baseline, with a notable generational divide emerging in the data. Gen Z and millennial participants reported significantly higher current stress levels (51%) compared to their Gen X and older counterparts (37%).

Ehab Youssef, a licensed clinical psychologist, mental health researcher and writer at Mentalyc, provided insight into why stress is peaking earlier than ever.

“As a psychologist, I’ve worked with clients across different generations, and I can tell you stress doesn’t look the same for everyone,” Youssef told Talker Research. “It’s fascinating — and a little concerning — to see how younger Americans are experiencing peak stress earlier than ever before. I see it in my practice all the time: twenty-somethings already feeling completely burned out, something I never used to see at that age.

“I often hear from my younger clients, ‘Why does life feel so overwhelming already?’ They’re not just talking about work stress; they’re feeling pressure from every direction — career, finances, relationships, even social media expectations. Compare this to my older clients, who often describe their peak stress happening later in life — maybe in their 40s or 50s, when financial or family responsibilities became heavier. The shift is real, and it’s taking a toll.”

The primary drivers of burnout
When asked to identify the primary causes of their burnout, financial concerns topped the list, with 30% of respondents ranking money matters as their number one stressor. This was followed closely by politics (26%), work-related pressures (25%), and physical health concerns (23%).

The data reveals interesting generational differences in what’s causing the most stress. For younger Americans (Gen Z and millennials), work represents the greatest source of stress (33%), followed by finances (27%) and mental health (24%). In contrast, older generations (Gen X, baby boomers, and the silent generation) identified politics as their most significant concern (27%), with physical health following as a close second (24%).

Relationships of all kinds are also contributing significantly to American stress levels. One in six respondents who identified either their love life or family relationships as stressors ranked these areas as their top source of burnout (18% each).

Source: https://studyfinds.org/the-burnt-out-generation-americans-feeling-peak-stress-earlier-than-ever/

100,000-year-old cultural melting pot discovered in Israeli cave may rewrite early human history

Tinshemet Cave during the excavations. (Credit: Yossi Zaidner)

In a limestone cave in Israel, archaeologists have uncovered evidence of what might be the oldest case of cultural sharing between different human species. The discovery reveals that around 100,000 years ago, early Homo sapiens and their Neanderthal-like neighbors weren’t just occasionally bumping into each other—they were participating in a shared cultural world, complete with identical toolmaking traditions, hunting practices, and even burial rituals. This finding turns the traditional story of human evolution on its head, suggesting that cultural exchange between different human species was the rule, not the exception, in our ancient past.

The findings at Tinshemet Cave, published in Nature Human Behaviour, provide a rare glimpse into a pivotal period when multiple human species coexisted in the Middle East. The site has yielded fully articulated human skeletons carefully positioned in burial positions, thousands of ochre fragments transported from distant sources, stone tools made with consistent manufacturing techniques, and animal bones that reveal specific hunting preferences—all dating to what scientists call the mid-Middle Paleolithic period (130,000-80,000 years ago).

“Our data show that human connections and population interactions have been fundamental in driving cultural and technological innovations throughout history,” says lead researcher Prof. Yossi Zaidner of the Hebrew University in Jerusalem, in a statement.

The discovery is especially significant because the Levant region (modern-day Israel, Lebanon, Syria, and Jordan) served as a crossroads where different human populations met. Previous discoveries in the region had uncovered fossils with mixed physical characteristics, suggesting that interbreeding occurred between Homo sapiens migrating out of Africa and local Neanderthal-like populations.

What makes the Tinshemet Cave findings transformative is that they demonstrate these different-looking humans weren’t just meeting and mating—they were sharing their unique cultural behaviors and traditions across population boundaries.

Located just 10 kilometers from another significant archaeological site called Nesher Ramla (where Neanderthal-like fossils were previously discovered), Tinshemet Cave preserves evidence of sustained human occupation over thousands of years. The research team excavated multiple layers of sediments inside the cave and on its terrace, uncovering a wealth of artifacts that tell a cohesive story of sophisticated human activity.

Among the most striking discoveries are the human burials. The excavations revealed at least five individuals, including two complete articulated skeletons—one adult and one child. The bodies were deliberately placed in a fetal position on their sides with bent limbs, a burial position remarkably similar to contemporaneous burials found at other Middle Paleolithic sites in the region, including the famous Qafzeh and Skhul caves.

These burials are among the earliest known examples of intentional human burial anywhere in the world, predating similar practices in Europe and Africa by tens of thousands of years. More importantly, they show that diverse human populations were treating their dead with similar ceremonial care, suggesting shared symbolic behaviors and possibly shared beliefs.

Another fascinating discovery was the abundant presence of ochre—a naturally occurring mineral pigment that produces red, yellow, and purple hues. The research team recovered more than 7,500 ochre fragments throughout the site, with the highest concentrations found in layers containing human burials. Chemical analysis revealed that these ochre materials came from at least four different sources, some located as far as 60-80 kilometers away in Galilee, and others possibly from the central Negev, more than 100 kilometers to the south.

The significant effort invested in obtaining these pigments from distant sources suggests their importance in the lives of these ancient people. The presence of large chunks of ochre near human remains—including a 4-5 cm piece found between the legs of one buried individual—hints at their ritual significance. Evidence of heat treatment to enhance the red color of some ochre pieces further reveals sophisticated knowledge and intentional manipulation of these materials.

Stone tool production at Tinshemet Cave demonstrates another dimension of cultural uniformity. The researchers analyzed nearly 2,800 stone artifacts and found that a specific flint-knapping technique known as the centripetal Levallois method dominated tool production. This method, which involves careful preparation of a stone core to produce standardized flakes, appears consistently across mid-Middle Paleolithic sites in the region.

This technological consistency is particularly remarkable because it differs significantly from both earlier and later stone tool traditions in the Levant. Earlier Middle Paleolithic populations (around 250,000-140,000 years ago) primarily used methods to produce blade-like tools, while later populations (after 80,000 years ago) employed a more diverse set of techniques. The dominance of the centripetal Levallois method during this middle period represents a distinct technological tradition shared across populations.

Analysis of animal bones from the site reveals a third element of behavioral uniformity: a focus on hunting large game animals. Unlike earlier and later periods, when smaller prey like gazelles dominated the diet, the mid-Middle Paleolithic hunters at Tinshemet and similar sites showed a clear preference for larger ungulates, particularly aurochs (wild cattle) and equids (horse-like animals). This pattern suggests either a shift in hunting strategies or different approaches to transporting animal resources, possibly connected to changes in settlement patterns.

To establish the age of the findings, the research team employed multiple dating techniques, including thermoluminescence dating of burnt flint, optically stimulated luminescence dating of quartz grains in the sediments, and uranium-series dating of snail shells and flowstones. These methods consistently dated the main human occupation layers to approximately 97,000-106,000 years ago, placing them firmly within the mid-Middle Paleolithic period.

The timing corresponds to a warm interglacial period known as Marine Isotope Stage 5, when climatic conditions in the Levant were relatively favorable. Pollen analysis from the lowest layers of the cave indicates a Mediterranean open forest environment with wide-spaced trees, small shrubs, and herbs dominated by evergreen oak.

Perhaps most intriguing about the Tinshemet Cave discovery is what it suggests about interactions between different human populations. “These findings paint a picture of dynamic interactions shaped by both cooperation and competition,” says co-lead author Dr. Marion Prévost.

Scientists have long associated specific behaviors or technologies exclusively with particular human species. Now we have strong evidence that points to a landscape of interaction, where cultural innovations spread across population boundaries through social learning and exchange.

As excavations at Tinshemet Cave continue, researchers hope to uncover additional evidence about the lives and interactions of these ancient people. The site has already yielded remarkable insights into a crucial chapter of human prehistory—a time when different human populations met, exchanged ideas, and created shared traditions despite their physical differences. What began as a simple archaeological survey has evolved into a profound reconsideration of what it means to be human, showing that cultural connections can transcend biological boundaries.

Source: https://studyfinds.org/100000-year-old-cultural-melting-pot-discovered-israeli-cave-early-human-history/

America is becoming a nation of homebodies

(Photo by Dragana Gordic on Shutterstock)

In his February 2025 cover story for The Atlantic, journalist Derek Thompson dubbed our current era “the anti-social century.” He isn’t wrong. According to our recent research, the U.S. is becoming a nation of homebodies.

Using data from the American Time Use Survey, we studied how people in the U.S. spent their time before, during and after the pandemic.

The COVID-19 pandemic did spur more Americans to stay home. But this trend didn’t start or end with the pandemic. We found that Americans were already spending more and more time at home and less and less time engaged in activities away from home stretching all the way back to at least 2003.

And if you thought the end of lockdowns and the spread of vaccines led to a revival of partying and playing sports and dining out, you would be mistaken. The pandemic, it turns out, mostly accelerated ongoing trends.

All of this has major implications for traffic, public transit, real estate, the workplace, socializing and mental health.

Life inside

The trend of staying home is not new. There was a steady decline in out-of-home activities in the two decades leading up to the pandemic.

Compared with 2003, Americans in 2019 spent nearly 30 minutes less per day on out-of-home activities and eight fewer minutes a day traveling. There could be any number of reasons for this shift, but advances in technology, whether it’s smartphones, streaming services or social media, are likely culprits. You can video chat with a friend rather than meeting them for coffee; order groceries through an app instead of venturing to the supermarket; and stream a movie instead of seeing it in a theater.

Of course, there was a sharp decline in out-of-home activities during the pandemic, which dramatically accelerated many of these stay-at-home trends.

Outside of travel, time spent on out-of-home activities fell by over an hour per day, on average, from 332 minutes in 2019 to 271 minutes in 2021. Travel, excluding air travel, fell from 69 to 54 minutes per day over the same period.

But even after the pandemic lockdowns were lifted, out-of-home activities and travel through 2023 remained substantially depressed, far below 2019 levels. There was a dramatic increase in remote work, online shopping, time spent using digital entertainment, such as streaming and gaming, and even time spent sleeping.

Time spent outside of the home has rebounded since the pandemic, but only slightly. There was hardly any recovery of out-of-home activities from 2022 to 2023, meaning 2023 out-of-home activities and travel were still far below 2019 levels. On the whole, Americans are spending nearly 1.5 hours less outside their homes in 2023 than they did in 2003.

While hours worked from home in 2022 were less than half of what they were in 2021, they’re still about five times what they were ahead of the pandemic. Despite this, only about one-quarter of the overall travel time reduction is due to less commuting. The rest reflects other kinds of travel, for activities such as shopping and socializing.

Ripple effects

This shift has already had consequences.

With Americans spending more time working, playing and shopping from home, demand for office and retail space has fallen. While there have been some calls by major employers for workers to spend more time in the office, research suggests that working from home in the U.S. held steady between early 2023 and early 2025 at about 25% of paid work days. As a result, surplus office space may need to be repurposed as housing and for other uses.

There are advantages to working and playing at home, such as avoiding travel stress and expenses. But it has also boosted demand for extra space in apartments and houses, as people spend more time under their own roof. It has changed travel during the traditional morning – and, especially, afternoon – peak periods, spreading traffic more evenly throughout the day but contributing to significant public transit ridership losses. Meanwhile, more package and food delivery drivers are competing with parked cars and bus and bike lanes for curb space.

Perhaps most importantly, spending less time out and about in the world has sobering implications for Americans well beyond real estate and transportation systems.

Research we’re currently conducting suggests that more time spent at home has dovetailed with more time spent alone. Suffice it to say, this makes loneliness, which stems from a lack of meaningful connections, a more common occurrence. Loneliness and social isolation are associated with increased risk for early mortality.

Because hunkering down appears to be the new norm, we think it’s all the more important for policymakers and everyday people to find ways to cultivate connections and community in the shrinking time they do spend outside of the home.

Source: https://studyfinds.org/america-becoming-nation-of-homebodies/

Happy husband or wife really could be the key to a stress-free life

Your significant other’s positive emotions can be contagious, especially in older couples. (Darren Baker/Shutterstock)

When your spouse is in a good mood, you might feel happier too, but according to new research, their emotional state could be affecting you on a much deeper level. Scientists have discovered that when your partner experiences positive emotions, it might actually lower your cortisol levels, the primary stress hormone in your body, regardless of how you yourself are feeling. This biological connection between older couples adds a whole new dimension to what it means to be in a relationship.

“Having positive emotions with your relationship partner can act as a social resource,” says lead study author Tomiko Yoneda, an assistant professor of psychology at the University of California, Davis, in a statement.

The Aging Body and Stress Management

Study results, published in Psychoneuroendocrinology, are especially telling for older adults in committed relationships. As we get older, our bodies become worse at regulating stress responses, making us more vulnerable to the harmful effects of high cortisol. But a partner who maintains positive emotions might act as a biological buffer against stress.

The research team analyzed data from 321 older couples from Canada and Germany. These weren’t new relationships; the average couple had been together for about 44 years. Each participant, aged between 56 and 87, completed surveys multiple times daily for a week, reporting their emotions while also providing saliva samples to measure cortisol. Partners completed surveys at the same time but separately, so they couldn’t influence each other’s responses.

Your Mood, My Body

When people reported feeling more positive than usual, their cortisol levels were lower. But when someone’s partner reported more positive emotions than usual, that person’s cortisol was also lower, regardless of how they themselves were feeling. In simple terms, your partner’s good mood might be doing your body good, even if you’re not sharing their happiness.

This connection extended beyond moment-to-moment measurements to total daily cortisol output. When someone’s partner reported higher positive emotions than usual throughout the day, that person showed lower overall cortisol for the day. This link was stronger for older participants and those who reported being happier in their relationships. In some cases, the effect of a partner’s emotions on cortisol was even stronger than the effect of one’s own emotions.

While a partner’s positive emotions were linked to lower cortisol, the researchers didn’t find any connection between a partner’s negative emotions and cortisol levels. Yoneda explained that this makes sense because older adults often develop ways to shield their partners from the physiological effects of negative emotions.

Quality Relationships Make a Difference

The emotional climate of your relationship may be an overlooked factor in your physical health. When your partner tends toward happiness, interest, or relaxation, their emotional state could be protecting your stress physiology.

This doesn’t mean you should pressure your partner to be constantly happy. Rather, these findings point to potential health benefits that come from fostering positive emotional experiences together. Creating opportunities for shared good times might be more than just relationship maintenance; it could be a mutual health boost.

“Relationships provide an ideal source of support, especially when those are high-quality relationships,” says Yoneda. “These dynamics may be particularly important in older adulthood.”

The association between a partner’s positive emotions and lower cortisol was most pronounced for people who reported higher relationship satisfaction. In happy relationships, partners may be more tuned in to each other’s emotional states.

Yoneda noted that these results fit with psychological theories suggesting positive emotions help us act more fluidly in the moment. These experiences can create positive feedback loops that enhance this capability over time. People in relationships can share these benefits when they experience positive emotions together.

Your partner’s happiness might be doing more than lighting up the room. It could be helping regulate your stress physiology in ways that boost your long-term health. In long-term relationships, emotions truly become a shared resource. What’s yours really is mine, right down to the hormonal level. So perhaps the age-old advice to “choose a happy partner” carries more biological wisdom than we ever realized.

Source: https://studyfinds.org/lower-stress-as-you-age-happy-partner/


Your clothes could soon charge your phone: New thermoelectric yarn makes it possible

Conceptual image of a man walking on the street with his smartphone being charged by his hoodie. (AI-generated image created by StudyFinds)

Forget to bring your charger with you on vacation? What if your clothing could generate electricity from the heat your body naturally produces? This futuristic concept is now approaching reality thanks to scientists at Chalmers University of Technology in Sweden and Linköping University.

Researchers say the remarkable new textile technology converts body heat into electricity through thermoelectric effects, potentially powering wearable devices from your clothing. The innovation, described in an Advanced Science paper, centers on a newly developed polymer called poly(benzodifurandione), or PBFDO, which serves as a coating for ordinary silk yarn.

“The polymers that we use are bendable, lightweight and are easy to use in both liquid and solid form. They are also non-toxic,” says study first author Mariavittoria Craighero, a doctoral student at the Department of Chemistry and Chemical Engineering at Chalmers, in a statement.

Unlike previous attempts at creating thermoelectric textiles, this breakthrough addresses a critical barrier that has long hampered progress: the lack of air-stable n-type polymers. These materials are characterized by their ability to move negative charges and are essential counterparts to the more common p-type polymers in creating efficient thermoelectric devices.

“We found the missing piece of the puzzle to make an optimal thread – a type of polymer that had recently been discovered. It has outstanding performance stability in contact with air, while at the same time having a very good ability to conduct electricity. By using polymers, we don’t need any rare earth metals, which are common in electronics,” explains Craighero.

How Thermoelectric Textiles Work

Thermoelectric generators work by converting temperature differences into electrical energy. When one side of a thermoelectric material is warmer than the other, electrons move from the hot side to the cold side, generating an electrical current. The human body continuously generates heat, creating natural temperature gradients between the skin and the surrounding environment.

For efficient thermoelectric generation, both p-type (positive) and n-type (negative) materials must work together. While p-type materials have been well-established in previous research, creating stable n-type materials has been a persistent challenge. Most n-type organic materials degrade rapidly when exposed to oxygen in the air, often becoming ineffective within days.

What makes this development particularly exciting is the remarkable stability of PBFDO-coated silk. Unlike similar materials that degrade within days when exposed to air, these new thermoelectric yarns maintain their performance for over 14 months under normal conditions without any protective coating. The researchers project a half-life of 3.2 years for these materials – an unprecedented achievement for this type of organic conductor.

Beyond electrical performance, the mechanical properties of the PBFDO-coated silk are equally impressive. The coated yarn can stretch up to 14% before breaking and, more importantly for everyday use, it can withstand machine washing.

“After seven washes, the thread retained two-thirds of its conducting properties. This is a very good result, although it needs to be improved significantly before it becomes commercially interesting,” states Craighero.

The material also demonstrates remarkable temperature resilience. During testing, the researchers found that PBFDO remains flexible even when cooled with liquid nitrogen to extremely low temperatures. This exceptional mechanical stability allows the material to withstand various environmental conditions and physical stresses that would be encountered in real-world use.

The Future of Daily Wear?

To showcase the technology’s potential, the research team created two different thermoelectric textile devices: a thermoelectric button and a larger textile generator with multiple thermoelectric legs.

The thermoelectric button demonstrated an output of about 6 millivolts at a temperature difference of 30 degrees Celsius. Meanwhile, the larger textile generator achieved an open-circuit voltage of 17 millivolts at a temperature difference of 70 degrees Celsius.

With a voltage converter, this could help power ultra-low-energy devices, such as certain types of sensors. However, the current power output—0.67 microwatts at a 70-degree temperature difference—is far below what would be required for USB charging of standard electronics.
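A rough back-of-envelope sketch helps put those lab figures in everyday terms. The numbers below are the reported 17 millivolts and 0.67 microwatts at a 70-degree difference; the 5-degree skin-to-air gradient and the quadratic power scaling (for a fixed device, power goes roughly as voltage squared, hence as the square of the temperature difference) are my assumptions, not figures from the paper.

```python
# Illustrative estimate only: scale the reported lab output of the textile
# generator down to a plausible body-heat temperature gradient.

V_OC = 17e-3      # reported open-circuit voltage (V) at dT = 70 C
P_LAB = 0.67e-6   # reported power output (W) at dT = 70 C
DT_LAB = 70.0     # lab temperature difference (C)
DT_BODY = 5.0     # assumed skin-to-air gradient through clothing (C)

volts_per_degree = V_OC / DT_LAB          # effective voltage per degree
v_body = volts_per_degree * DT_BODY       # voltage at body-heat gradient

# For a fixed device, P ~ V^2 / R, so power scales roughly with dT^2.
p_body = P_LAB * (DT_BODY / DT_LAB) ** 2

print(f"~{volts_per_degree * 1e3:.2f} mV per degree C")
print(f"~{v_body * 1e3:.2f} mV at a {DT_BODY:.0f} C gradient")
print(f"~{p_body * 1e9:.1f} nW at body-heat conditions (illustrative)")
```

Under these assumptions the output lands in the low-nanowatt, single-millivolt range, which is why the article stresses that real-world performance would fall well short of the laboratory numbers.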

While these power outputs mark a major step forward in thermoelectric textiles, it’s important to note that the temperature differences used in lab tests—up to 70 degrees Celsius—are significantly higher than what would typically be experienced in everyday clothing. This means real-world performance may be lower than laboratory results suggest.

Potential Uses in Healthcare and Wearable Tech

Despite current limitations in power output, the technology shows particular promise for healthcare applications. Small sensors that monitor vital signs like heart rate, body temperature, or movement patterns could potentially operate using this technology, eliminating the need for battery changes or recharging.

For patients with chronic conditions requiring continuous monitoring, self-powered sensors embedded in clothing could provide valuable data without the hassle of managing battery life. Similarly, fitness enthusiasts could benefit from wearables that never need charging, seamlessly tracking performance metrics during activities.

Beyond health monitoring, the technology could eventually support other low-power functions in smart clothing, such as environmental sensing, location tracking, or simple LED indicators. As power conversion efficiency improves, applications could expand to include more power-hungry features.

The Challenges Ahead

Currently, the production process is time-intensive and not suitable for commercial manufacturing, with the demonstrated fabric requiring four days of manual needlework to produce.

“We have now shown that it is possible to produce conductive organic materials that can meet the functions and properties that these textiles require. This is an important step forward. There are fantastic opportunities in thermoelectric textiles and this research can be of great benefit to society,” says Christian Müller, Professor at the Department of Chemistry and Chemical Engineering at Chalmers University of Technology and research leader of the study.

One key challenge identified through computer simulations is the electrical contact resistance between components. Reducing this resistance could potentially increase power output by three times or more. The researchers also investigated how factors like thermoelectric leg length and thread count affect performance, providing valuable insights for future designs.

Interest in these types of conducting polymers has grown significantly in recent years. They have a chemical structure that allows them to conduct electricity similar to silicon while maintaining the physical properties of plastic materials, making them flexible. Research on conducting polymers is ongoing in many areas such as solar cells, Internet of Things devices, augmented reality, robotics, and various types of portable electronics.

Looking Forward

What’s clear is that there is a viable pathway toward practical thermoelectric textiles that can function reliably in everyday conditions. By addressing both the electrical and mechanical requirements for textile integration, this work bridges the gap between laboratory demonstrations and potential real-world applications.

The development of these polymers also aligns with sustainability goals by eliminating the need for rare earth metals commonly used in electronics. With further refinement and scaling of the manufacturing process, this technology could eventually lead to clothing that powers our devices using nothing but our body heat.

For widespread adoption, researchers will need to develop automated production methods that can efficiently coat and assemble the thermoelectric textiles at scale. Additionally, improving power output while maintaining stability remains a critical goal for future research.

Source : https://studyfinds.org/your-clothes-could-soon-charge-your-phone-new-thermoelectric-yarn/

Tesla vs. BYD: A look inside their cutting-edge EV batteries

Which electric vehicle giant has a better battery? (gguy/Shutterstock)

In the race to dominate the electric vehicle market, two companies stand above the rest: Tesla and China’s BYD. While Tesla pioneered the use of lithium-ion batteries and leads EV sales in North America and Europe, BYD began as a battery manufacturer before expanding into vehicles, surpassing Tesla in global EV sales in 2024. New research from multiple German universities gives us a look at the battery technology powering these automotive giants by directly comparing Tesla’s 4680 cylindrical cell with BYD’s Blade prismatic cell.

The research, published in Cell Reports Physical Science, reveals rare insights into the design, performance, and manufacturing processes of these cutting-edge batteries. By dismantling and analyzing both cell types, the researchers found major differences in energy density, thermal efficiency, and material composition that show the distinct design philosophies of each manufacturer.

“There is very limited in-depth data and analysis available on state-of-the-art batteries for automotive applications,” says lead study author Jonas Gorsch from RWTH Aachen University, in a statement.

For the average consumer, these differences translate into real-world impacts on driving range, charging speed, vehicle cost, and safety. The study offers a window into how battery technology, the heart of any electric vehicle, is evolving through different approaches to solve the same fundamental challenge: how to store more energy safely and efficiently while reducing costs.

The Tale of Two Battery Designs

Tesla’s 4680 cell (named for its dimensions: 46mm in diameter and 80mm in height) represents the company’s latest innovation in battery design. It’s significantly larger than previous cells used in the Model 3, allowing for higher energy density and reduced production costs. The “tabless” design further cuts costs by eliminating the need for certain manufacturing steps.

BYD’s Blade cell takes a completely different approach, using a rectangular prism shape with dimensions of 965mm in length, 90mm in height, and 14mm in thickness. This long, thin design prioritizes safety and cost-effectiveness while offering surprisingly competitive performance metrics despite using different materials.

The most striking difference between the cells is their chemistry. Tesla opts for NMC811 (a nickel-manganese-cobalt blend with high nickel content), delivering impressive energy density of 241 Wh/kg and 643 Wh/l. In simpler terms, Tesla packs more energy into the same weight and volume. BYD uses LFP (lithium iron phosphate), which achieves a more modest 160 Wh/kg and 355 Wh/l. This choice reflects BYD’s focus on cost-effectiveness and longevity over maximum range.
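
Those density figures can be turned into rough pack-level numbers. The sketch below is a back-of-envelope calculation only, using the cell-level densities reported in the study and a hypothetical 75 kWh pack; it ignores packaging, cooling, and structural overhead:

```python
# Back-of-envelope comparison: cell mass and volume needed to store a given
# amount of energy, from the cell-level energy densities reported in the study.
def cell_mass_kg(energy_wh, wh_per_kg):
    return energy_wh / wh_per_kg

def cell_volume_l(energy_wh, wh_per_l):
    return energy_wh / wh_per_l

pack_wh = 75_000  # hypothetical 75 kWh pack

tesla = (cell_mass_kg(pack_wh, 241), cell_volume_l(pack_wh, 643))  # NMC811 chemistry
byd = (cell_mass_kg(pack_wh, 160), cell_volume_l(pack_wh, 355))    # LFP chemistry

print(f"Tesla 4680 cells: {tesla[0]:.0f} kg, {tesla[1]:.0f} L")
print(f"BYD Blade cells:  {byd[0]:.0f} kg, {byd[1]:.0f} L")
```

The same stored energy requires roughly 50% more cell mass and nearly twice the cell volume with LFP, which is why BYD's chemistry choice trades range for cost and longevity.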

When examining heat management, the researchers found that the Tesla 4680 cell generates twice the heat per volume compared to the BYD Blade cell at the same charging rate. This difference impacts the cooling systems needed for fast charging and has implications for battery longevity and safety. Overall, the BYD cell’s lower heat generation makes its temperature easier to manage.

Looking Inside: Construction and Materials

When researchers took apart the batteries, they found some major differences in how Tesla and BYD build their cells. Inside BYD’s Blade battery, the key components – the positive and negative layers (cathodes and anodes) – are stacked in a Z-folded pattern with many thin layers in between. This design makes the battery safer and more durable, but it also means that electricity has to travel a longer path through the battery, which can reduce efficiency. To keep everything securely in place, BYD uses a special lamination method, sealing the edges of the separator (the thin layer that prevents short circuits between the positive and negative sides).

Tesla takes a different approach with its 4680 battery, using a “jelly roll” design, sort of like rolling up a long strip of paper. This setup helps electricity flow more directly, improving performance. One noticeable feature is a small empty space in the center, which likely helps with manufacturing and connecting the battery’s internal parts.

Unlike many other battery manufacturers that use ultrasonic welding, both Tesla and BYD rely on laser welding to connect their thin electrode foils. Despite the BYD cell being significantly larger than Tesla’s, both batteries have a similar proportion of non-active components, such as current collectors, housing, and busbars.

Source: https://studyfinds.org/tesla-vs-byd-ev-batteries/

An ugly truth? Attractive workers earn $20K more annually than ‘unattractive’ colleagues, survey shows

(Photo by PeopleImages.com – Yuri A)

We all know the saying “Don’t judge a book by its cover,” but a new survey suggests that in the workplace, your “cover” might matter more than you think — especially when it comes to income. A recent survey asked 1,050 Americans about “pretty privilege” – the idea that better-looking people get more advantages in life – and found that a whopping 81.3% believe it exists at work.

The results show how our appearance might be influencing everything from who gets hired to who gets that next big promotion.

Pretty privilege isn’t just limited to modeling or acting jobs. Eight in ten people surveyed believe attractive coworkers are more likely to be promoted, hired, or given raises. Even more telling, 66.9% of people have actually seen someone treated unfairly or talked about negatively because of how they look.

The survey, conducted by Standout CV, shows that the pressure to look good at work is real. About 64.2% of people feel pushed to change their natural features – like straightening their hair or wearing makeup – just to fit in at the office. And 83.4% think colleagues who put more effort into their appearance are seen as more capable professionals.

How We See Ourselves

When asked to rate their own workplace attractiveness on a scale of 1 to 10, the average person gave themselves a 7.7. Men seemed more confident about their looks, with 37.5% rating themselves a 9 or perfect 10, compared to only 27.4% of women.

These self-ratings revealed a lot about career experiences. Nearly half (46%) of people who rated themselves as unattractive (scoring 1-3) said their looks had hurt their careers – roughly six times the overall average of 7.6%.

On the flip side, those who considered themselves good-looking (rating above 7) were likely to say their appearance helped them professionally (60.7%). This number jumped to 66.8% for those who gave themselves a 9 or 10.

People who saw themselves as average lookers (rating 4-6) were most likely to say their appearance had no impact on their work life (38% compared to just 16.2% overall).

Interestingly, one in five people said their looks affected their careers both positively and negatively. This stayed consistent regardless of how attractive they thought they were. This might happen when someone benefits from good looks but also faces issues like not being taken seriously.

In fact, 55.7% of people admitted to downplaying their appearance to be taken more seriously at work. This number rose to 68.7% among those who considered themselves very attractive.

Source: https://studyfinds.org/attractive-workers-earn-more-pretty-privilege/

How tattoo ink travels through the body, raising risks of skin cancer and lymphoma

(Photo by Getty Images in collaboration with Unsplash+)

Tattoos have become a mainstream form of self-expression, adorning the skin of millions worldwide. But a new study from Danish researchers uncovers concerning connections between tattoo ink exposure and increased risks of both skin cancer and lymphoma.

Approximately one in four adults in many Western countries now sport tattoos, with prevalence nearly twice as high among younger generations. The study, published in BMC Public Health, adds to growing evidence that the popular form of body art may carry long-term health consequences previously unrecognized.

The study’s lead author, Signe Bedsted Clemmensen, along with colleagues at the University of Southern Denmark, analyzed data from two complementary twin studies – a case-control study of 316 twins and a cohort study of 2,367 randomly selected twins born between 1960 and 1996. The team created a specialized “Danish Twin Tattoo Cohort” that allowed them to control for genetic and environmental factors when examining cancer outcomes among tattooed and non-tattooed individuals.

When comparing twins where one had cancer and one didn’t, researchers found that the tattooed twin was more likely to be the one with cancer. In the case-control study, tattooed individuals had a 62% higher rate of skin cancer compared to non-tattooed people. The cohort study showed even stronger associations, with tattooed individuals having a nearly fourfold higher rate of skin cancer and a 2.83 times higher rate of basal cell carcinoma.

Size appears to matter significantly. Large tattoos (bigger than the palm of a hand) were associated with substantially higher lymphoma and skin cancer risks than smaller tattoos, potentially due to higher exposure levels or longer exposure time. This dose-response relationship strengthens the case for causality rather than mere correlation.

“This suggests that the bigger the tattoo and the longer it has been there, the more ink accumulates in the lymph nodes. The extent of the impact on the immune system should be further investigated so that we can better understand the mechanisms at play,” says Clemmensen, an assistant professor of biostatistics, in a statement.

The Journey of Tattoo Ink Through the Body

Scientists have long known that tattoo ink doesn’t simply stay put in the skin. Particles from tattoo pigments migrate through the bloodstream and accumulate in lymph nodes and potentially other organs. The researchers proposed an “ink deposit conjecture” – suggesting that tattoo pigments trigger inflammation at deposit sites, potentially leading to chronic inflammation and increased risk of abnormal cell growth.

Black ink, the most commonly used tattoo color, has been a particular focus of concern. It typically contains soot products like carbon black, which the International Agency for Research on Cancer (IARC) has listed as possibly cancer-causing to humans. Through incomplete burning during carbon black production, harmful compounds form as byproducts, including benzo[a]pyrene, which IARC classifies as cancer-causing to humans.

“We can see that ink particles accumulate in the lymph nodes, and we suspect that the body perceives them as foreign substances,” explains study co-author Henrik Frederiksen, a consultant in hematology at Odense University Hospital and clinical professor at the university. “This may mean that the immune system is constantly trying to respond to the ink, and we do not yet know whether this persistent strain could weaken the function of the lymph nodes or have other health consequences.”

Colored inks pose their own problems. Red ink – often associated with allergic reactions – contains compounds that may release harmful substances when exposed to sunlight or during laser tattoo removal.

“We do not see a clear link between cancer occurrence and specific ink colors, but this does not mean that color is irrelevant,” notes Clemmensen. “We know from other studies that ink can contain potentially harmful substances, and for example, red ink more often causes allergic reactions. This is an area we would like to explore further.”

The researchers suggest that with tattoo prevalence rising dramatically, especially among younger people, public awareness campaigns might be needed to educate about potential risks.

“We are concerned that tattoo ink has severe public health consequences since tattooing is abundant among the younger generation,” they write in their conclusion. The team recommends further studies to pinpoint the exact biological mechanisms through which tattoo ink might induce cancer.

A Growing Body Of Research

This isn’t the first research to raise alarms about tattoo safety. Previous studies have documented cases of skin conditions and tumors occurring within tattoo areas. However, this large-scale study provides some of the strongest evidence yet for a relationship between tattoos and cancer.

For those already sporting tattoos, the research doesn’t suggest panic – but awareness. The time between tattoo exposure and cancer diagnosis in the study was substantial – a median of 8 years for lymphoma and 14 years for skin cancer. This suggests that cancers develop gradually over time, and monitoring for any changes in tattooed areas might be prudent.

The rise in popularity of tattoo removal services presents its own concerns. The researchers specifically highlight that laser tattoo removal breaks down pigments into smaller fragments that may be more mobile within the body, potentially increasing migration to lymph nodes and other organs.

As with many health studies, this research doesn’t definitively prove causation, but it adds significant weight to growing evidence of long-term risks. The researchers point out that even with new European restrictions on harmful compounds in tattoo inks, the body’s immune response to foreign substances might be problematic regardless of specific ink components.

Balancing Expression and Health

As tattoo culture continues to thrive globally, balancing personal expression through body art with health considerations becomes increasingly important.

With tattoos now firmly embedded in mainstream culture, this research doesn’t aim to stigmatize body art but rather to inform safer practices. Whether this means developing safer inks, improving tattoo application techniques, or simply making more informed choices about tattoo size and placement, understanding the biological impact of tattoo ink is essential for public health.

As the researchers conclude, further studies that pinpoint the biological mechanisms of tattoo ink-induced cancer are needed. Until then, those considering getting inked might want to weigh the aesthetic benefits against potential long-term health considerations – a balance that, like the perfect tattoo design, will be uniquely personal.

Source : https://studyfinds.org/tattoo-ink-skin-cancer-lymphoma/

How the pursuit of happiness ends up sending people on a path to misery

(Photo by Erce on Shutterstock)

We live in a happiness-obsessed world. Self-help gurus promise paths to bliss, Instagram influencers peddle happiness as a lifestyle, and corporations build marketing campaigns around the pursuit of positive emotions. But new research suggests a surprising twist: trying too hard to be happy might actually be making us miserable.

Researchers from the University of Toronto Scarborough and the University of Sydney found that actively pursuing happiness drains our mental energy – the same energy we need for self-control. Their study, published in Applied Psychology: Health and Well-Being, challenges what many of us believe about happiness.

“The pursuit of happiness is a bit like a snowball effect. You decide to try making yourself feel happier, but then that effort depletes your ability to do the kinds of things that make you happier,” says Sam Maglio, marketing professor at the University of Toronto Scarborough and the Rotman School of Management, in a statement.

This might sound familiar: You wake up determined to have a great day. You plan mood-boosting activities and work hard to stay positive. But by evening, you’re ordering takeout instead of cooking, mindlessly scrolling social media, and snapping at your partner. Why? Your happiness pursuit itself might be the problem.

Maglio puts it bluntly: “The more mentally rundown we are, the more tempted we’ll be to skip cleaning the house and instead scroll social media.”

Testing the Happiness Drain

The research team ran four studies that gradually built their case.

First, they surveyed 532 adults about how much they valued and pursued happiness, then measured their self-reported self-control. The results showed a clear pattern: people who placed higher value on seeking happiness reported worse self-control abilities.

For their second study, they moved beyond self-reports to actual behavior. They had 369 participants complete a series of consumer choice rankings and measured how long they persisted at the task. Those with stronger tendencies to pursue happiness showed less persistence, suggesting their mental resources were already running low.

From Happiness Ads to Chocolate Cravings

For their third study, the researchers got clever. They intercepted 36 people at a university library and showed them either an advertisement that prominently featured the word “happiness” or a neutral ad without any happiness messaging. Then they offered participants chocolate candies, telling them to eat as many as they wanted while rating the taste.

“The story here is that the pursuit of happiness costs mental resources,” Maglio explains. “Instead of just going with the flow, you are trying to make yourself feel differently.”

The results were striking: people exposed to the happiness ad ate nearly twice as many chocolates (2.94 vs. 1.56 on average) – a classic sign of decreased self-control. This raises questions about happiness-themed marketing campaigns – they might actually be draining our willpower and setting us up to make choices we later regret.

Not All Goals Drain You the Same

For their final experiment, the researchers tackled an important question: Is happiness-seeking uniquely depleting, or does pursuing any goal require mental energy?

They had 188 participants make 25 choices between pairs of everyday products (like choosing between an iced latte and green tea). One group was told to choose options that would “improve their happiness,” while the other group chose based on what would “improve their accurate judgment.” Then everyone worked on a challenging anagram puzzle where they could quit whenever they wanted.

The happiness group quit much sooner – lasting only 444 seconds on average compared to 574 seconds for the accuracy group. This significant difference suggested that pursuing happiness specifically drains mental energy more than other types of goals.

This wasn’t Maglio’s first investigation into happiness backfiring. In a 2018 study with Kim, he found that people actively seeking happiness tend to feel like they’re running short on time, creating stress that ultimately makes them unhappier.

The Pressure To Feel Even Better

The self-improvement industry rakes in over $10 billion largely by promising to boost happiness. Bestsellers like “The Happiness Project,” “The Art of Happiness,” and “The Happiness Advantage” sell millions of copies with strategies for maximizing positive emotions. But this research suggests many of these approaches might be working against themselves.

The researchers note that the self-help industry puts “a lot of pressure and responsibility on the self.” Many people now treat happiness like money – “something we can and should gather and hoard as much as we can.” This commodification of happiness may be part of the problem, creating a mindset where we’re constantly striving for more rather than appreciating what we have.

Why This Happens

Think of self-control like a gas tank that gets emptied throughout the day. Psychologist Roy Baumeister’s research shows that every act of self-control – resisting temptation, controlling emotions, making decisions – uses fuel from the same tank.

Seeking happiness burns through this fuel quickly because it requires managing your actions, monitoring your thoughts, and actively changing your emotions. When your tank runs low, you’re more likely to make poor choices like overeating, overspending, or being short with others – creating a cycle that ultimately makes you less happy.

The Real Secret To Happiness

So should we abandon the pursuit of well-being? Not exactly. But the research suggests a more balanced approach might work better.

Maglio suggests we think of happiness like sand at the beach: “You can cling to a fistful of sand and try to control it, but the harder you hold, the more your hand will cramp. Eventually, you’ll have to let go.”

His advice cuts through the complexity with refreshing simplicity: “Just chill. Don’t try to be super happy all the time,” says Maglio, whose work is supported by a grant from the Social Sciences and Humanities Research Council of Canada. “Instead of trying to get more stuff you want, look at what you already have and just accept it as something that gives you happiness.”

When we ease up on constantly trying to maximize happiness and accept a wider range of emotions, we might actually preserve the mental energy needed to make better decisions – and ultimately feel better.

Source : https://studyfinds.org/the-happiness-paradox-chasing-joy-backfires/

How financial stress can sabotage job satisfaction by fueling workplace burnout

Being stressed about your finances can lead to burnout at work. (PeopleImages.com – Yuri A/Shutterstock)

In today’s world, the boundaries between our personal and professional lives often blur. Many of us try to keep financial worries separate from our work life, but a new study from the University of Georgia suggests this separation may be wishful thinking. Research reveals that our financial well-being significantly impacts our job satisfaction, with workplace burnout playing a key role.

The study, published in the Journal of Workplace Behavioral Health, shows that when employees experience financial stress, it follows them to work, affecting their performance and satisfaction through increased burnout.

The Hidden Cost of Financial Stress at Work

The U.S. Surgeon General recognized this connection in 2024 by naming workplace well-being one of the top public health priorities. Yet remarkably, 60% of employers don’t consider employee well-being a top 10 initiative. This disconnect is costly: dissatisfied employees reportedly cost the U.S. economy around $1.9 trillion in lost productivity in 2023 alone.

“Stress from work can often leave people feeling tired and overwhelmed. Anxiety in other parts of life could make this even worse,” says lead author Camden Cusumano from the University of Georgia, in a statement. “Just as injury in one part of the body could lead to pain in another, personal financial stress can manifest in someone’s work performance.”

While previous research has examined connections between compensation and job satisfaction, this study takes a more holistic approach. Rather than focusing merely on salary figures, researchers investigated how employees’ overall assessment of their financial health impacts their workplace experience.

When Money Worries Follow You to Work

Their research distinguishes between two dimensions of financial well-being: current money management stress (present concerns) and expected future financial security (future outlook). Both of these affect job satisfaction in different ways.

“We call them different life domains. There’s the work domain, there might be the family domain, things like that,” says Cusumano. “But sometimes there’s spillover from one to the other. My finances might impact the way I’m feeling about the stress in my family, or if I’m working long hours, that might cause some conflict with my family as well.”

The researchers used the Conservation of Resources theory as their framework. This theory suggests people experience stress when they lose resources, face threats to their resources, or fail to gain new resources despite their efforts. In this context, financial well-being represents a crucial resource: a sense of security and control regarding one’s finances.

Burnout Beyond the Workplace

For the study, the researchers surveyed 217 full-time U.S. employees who earned at least $50,000 annually. This sample was deliberately chosen to focus on workers not predisposed to financial insecurity due to low income.

Burnout shows up in three main ways: feeling detached from yourself or others, feeling constantly tired, and feeling like your accomplishments don’t matter. All three combine to make employees tired and disengaged from their work.

Current money management stress didn’t directly affect job satisfaction but operated through increased burnout. In contrast, expected future financial security had a direct positive association with job satisfaction that wasn’t mediated by burnout.

These findings highlight that financial stress doesn’t just create problems at home; it fundamentally alters how employees experience their work. People feeling stressed about making ends meet today are more likely to experience burnout, which in turn reduces their job satisfaction. Meanwhile, those who feel secure about their financial future tend to be more satisfied with their jobs, regardless of burnout levels.

Future financial concerns may also play a role in job satisfaction. If a worker is feeling stressed about their current position, believing their financial situation may improve could enhance their views on their job.

Creating Better Workplace Support Programs

Employers often focus on compensation as the primary financial factor affecting employee satisfaction. However, if an employee’s financial struggles are leading to burnout and job dissatisfaction, addressing work-related factors alone won’t fully resolve the problem.

This research highlights the importance of developing personal financial management skills alongside professional development for employees. Building financial resilience may not only improve the quality of life at home but could also enhance workplace experience and career success, especially in today’s workforce where remote and hybrid work have further blurred the boundaries between work and personal life.

“Some companies are actually providing financial counseling to some of their employees,” says Cusumano. “They’re paying attention to how finances can really permeate different areas of life.”

Organizations could benefit from broadening their wellness initiatives to include financial well-being resources. Providing tools and support to help employees manage current financial stress and build future security could yield significant returns through improved job satisfaction and reduced burnout.

In the end, money might not buy happiness, but financial stress certainly seems capable of diminishing workplace satisfaction. By understanding these connections, both organizations and individuals can develop more effective strategies for navigating the complex relationship between financial health and workplace well-being.

Source : https://studyfinds.org/financial-stress-sabotaging-job-satisfaction-workplace-burnout/

What’s the shape of the universe?

(© Vector Tradition – stock.adobe.com)

Mathematicians use topology to study the shape of the world and everything in it

When you look at your surrounding environment, it might seem like you’re living on a flat plane. After all, this is why you can navigate a new city using a map: a flat piece of paper that represents all the places around you. This is likely why some people in the past believed the Earth to be flat. But most people now know that is far from the truth.

You live on the surface of a giant sphere, like a beach ball the size of the Earth with a few bumps added. The surface of the sphere and the plane are two possible 2D spaces, meaning you can move in two independent directions: north-south or east-west.

What other possible spaces might you be living on? That is, what other spaces around you are 2D? For example, the surface of a giant doughnut is another 2D space.

Through a field called geometric topology, mathematicians like me study all possible spaces in all dimensions. Whether trying to design secure sensor networks, mine data or use origami to deploy satellites, the underlying language and ideas are likely to be those of topology.

The shape of the universe
When you look around the universe you live in, it looks like a 3D space, just like the surface of the Earth looks like a 2D space. However, just like the Earth, if you were to look at the universe as a whole, it could be a more complicated space, like a giant 3D version of the 2D beach ball surface or something even more exotic than that.

While you don’t need topology to determine that you are living on something like a giant beach ball, knowing all the possible 2D spaces can be useful. Over a century ago, mathematicians figured out all the possible 2D spaces and many of their properties.

In the past several decades, mathematicians have learned a lot about all of the possible 3D spaces. While we do not have a complete understanding like we do for 2D spaces, we do know a lot. With this knowledge, physicists and astronomers can try to determine what 3D space people actually live in.

While the answer is not completely known, there are many intriguing and surprising possibilities. The options become even more complicated if you consider time as a dimension.

To see how this might work, note that to describe the location of something in space – say a comet – you need four numbers: three to describe its position and one to describe the time it is in that position. These four numbers are what make up a 4D space.

Now, you can consider what 4D spaces are possible and in which of those spaces you live.

Topology in higher dimensions
At this point, it may seem like there is no reason to consider spaces that have dimensions larger than four, since four is the largest number of dimensions needed to describe the universe as we experience it. But a branch of physics called string theory suggests that the universe has many more dimensions than four.

There are also practical applications of thinking about higher dimensional spaces, such as robot motion planning. Suppose you are trying to understand the motion of three robots moving around a warehouse floor. You can put a grid on the floor and describe the position of each robot by its x and y coordinates on the grid. Since each of the three robots requires two coordinates, you will need six numbers to describe all of the possible positions of the robots. You can interpret the possible positions of the robots as a 6D space.
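
The configuration-space idea behind the robot example can be made concrete in a few lines of code. This is an illustrative sketch (the robot positions are invented): the joint state of n planar robots is a single point in a 2n-dimensional space.

```python
# Sketch of a "configuration space": flatten the positions of several planar
# robots into one point whose dimension is 2 * (number of robots).
def configuration(positions):
    """Flatten a list of (x, y) robot positions into a single tuple."""
    return tuple(coord for xy in positions for coord in xy)

robots = [(1.0, 2.0), (4.0, 0.5), (3.0, 3.0)]  # three robots on the grid
point = configuration(robots)

print(point)       # (1.0, 2.0, 4.0, 0.5, 3.0, 3.0)
print(len(point))  # 6 -> one point in a 6D space
```

A motion plan for all three robots is then a path traced by this single point through the 6D space, with collisions and obstacles carving out forbidden regions of that space.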

As the number of robots increases, the dimension of the space increases. Factoring in other useful information, such as the locations of obstacles, makes the space even more complicated. In order to study this problem, you need to study high-dimensional spaces.

There are countless other scientific problems where high-dimensional spaces appear, from modeling the motion of planets and spacecraft to trying to understand the “shape” of large datasets.

Tied up in knots
Another type of problem topologists study is how one space can sit inside another.

For example, if you hold a knotted loop of string, you have a 1D space (the loop of string) inside a 3D space (your room). Such loops are called mathematical knots.

The study of knots first grew out of physics but has become a central area of topology. Knots are essential to how scientists understand 3D and 4D spaces, and they have a delightful and subtle structure that researchers are still trying to understand.

Source: https://studyfinds.org/whats-the-shape-of-the-universe/

I Cut Out Sugar for a Month—Here’s What It Did for My Mental Health

All good things come in moderation

d3sign / Getty Images

I’ve never been one to turn down something sweet. A bar of chocolate to reward myself for a successful grocery shop, some dessert after dinner—since I only indulged a few times a week, I thought it was pretty harmless.

But after noticing how sluggish, irritable, and foggy I felt after sugar-heavy days, I started wondering: could my sugar intake be affecting my mental health?

With that question in mind, I decided to cut out added sugar for an entire month. No packets of jelly beans, no sweetened boba teas, and no honey in my morning oats. The goal wasn’t just to see how my body felt, but to observe whether eliminating sugar had any impact on my mood, energy levels, and mental clarity.

The result? Let’s just say it wasn’t what I expected.

Why I Decided to Cut Out Sugar
I don’t eat added sugar every day. Instead, I tend to indulge in a (very) sweet treat twice a week or so. I usually justify it by saying that I “deserve” a treat—to reward myself for a work victory, to celebrate a special occasion, or to comfort myself after a hard day.

There’s nothing wrong with treating yourself. But I eventually noticed that my sugar binges led to some uncomfortable symptoms, particularly brain fog, poor sleep, and mood swings.

“Excess sugar intake, especially from refined sources, can cause rapid spikes and crashes in blood sugar levels, which can lead to irritability, fatigue, and difficulty concentrating,” says dietitian Jessica M. Kelly, MS, RDN, LDN, the founder and owner of Nutrition That Heals. “Over time, frequent blood sugar fluctuations can contribute to increased anxiety.”

“Over time, a high-sugar diet may increase the risk of depression by causing inflammation and disrupting brain chemicals like serotonin and dopamine,” adds Marjorie Nolan Cohn, MS, RD, LDN, CEDS-S, the clinical director of Berry Street. “These ups and downs make it harder to manage emotions, making mood swings more frequent.”

A 2017 study, which looked at data collected from 23,245 people, found that higher sugar intake is associated with depression, particularly in men. Participants with the highest level of sugar consumption were 23% more likely to have a diagnosed mental illness than those with the lowest level of sugar consumption.[1]

Other research, like a 2024 study, also suggests a link between depression and sugar consumption, but the authors point out this connection might be because mental distress can lead to emotional eating and make it harder to control cravings.[2]

For the purpose of my experiment, I needed to set some ground rules about the sugars I would and wouldn’t cut out.

According to Kelly and Nolan Cohn, not all sugars affect mental health in the same way. “Natural sugars found in, for example, fruit and dairy, accompany fiber, vitamins, and antioxidants that are health-promoting and slow glucose absorption,” Kelly explains. “Refined sugars, like those in sodas and candy, can cause rapid blood sugar spikes and crashes which can lead to mood swings and brain fog.”

Excited to see the results, I began my experiment!

Week 1: The “Oh Wow, Does That Really Contain Sugar?” Phase
During my first week, I didn’t experience changes in my mood, but rather in my behavior and mindset.

This experiment required me to pick up a new habit: reading nutrition labels and ingredient lists. Although giving up sugar itself was easy for the first few days, building this label-reading habit was pretty hard.

I was surprised to learn that sugar is in a lot of things. Most of my favorite savory treats contained sugar. Even my usual “healthy” post-gym treat—a protein bar—was off-limits.

Surprisingly, I didn’t really have any sugar withdrawals, which can be common among people who typically consume a lot of sugar.

“Cutting out sugar can trigger strong cravings since it affects the brain’s reward system; this can lead to withdrawal-like urges, and for some, it can feel very intense,” says Nolan Cohn. Sugar withdrawal symptoms often include headaches, fatigue, and mood swings.

On day four, I had my first major challenge—I realized I could no longer grab some milk chocolate on the way out of the grocery store. Talking myself out of this was harder than I’d like to admit.

The biggest challenge for week one? Choosing what to eat in a restaurant. Most menus don’t specify which dishes contain sugar, and there’s a surprising amount of sugar in savory dishes, like tomato-based curries and wraps filled with sugary salad dressings.

By the end of week one, I felt like giving up. Although I didn’t have any major cravings, constantly checking food labels was annoying, and there were no notable benefits—at least, not yet.

Week 2: A Shift in Mood and Energy
Around the 10-day mark, things started changing for the better.

Even if I don’t eat a lot of sugar in my day-to-day diet and my home-cooked meals, I tend to treat myself—a lot. Food is a go-to source of comfort for me, often to my detriment. My mindset is often along the lines of, “Oh, who cares? It’s just a treat. It’s a special occasion!”

Because I wanted to stick to the experiment, I had to pause my “treat yo’self” mindset. As I was more mindful of sugar, I planned my snacks better, avoided getting takeout, and practiced more self-control while shopping for groceries.

More importantly, I had to actually engage with my feelings instead of eating them away.

On my therapist’s recommendation, I paid attention to the uncomfortable feelings that’d usually lead me to eat, and I journaled about them instead.

I also noticed some changes in my mood—finally! Because I wasn’t eating a lot of sugar and then crashing twice a week, my energy levels felt a bit more stable. This meant that my mood also felt more stable.

Week 3: Mental Clarity and Emotional Balance
By week three, I was genuinely surprised by how good I felt.

Not only were my energy and mood a little steadier, but I was also really chuffed with myself for managing to avoid sugar for such a long time.

 

Source: https://www.verywellmind.com/does-sugar-affect-mental-health-11683665

Morning blue light therapy can greatly improve sleep quality for older adults

Researchers say blue light exposure in the morning may be a healthier alternative to taking sleep medications. (amenic181/Shutterstock)

Getting older brings many changes, and unfortunately, worse sleep is often one of them. Many seniors struggle with falling asleep, waking up frequently during the night, and generally feeling less rested. But what if something as simple as changing your light exposure could help?

A new study from the University of Surrey has found that the right light, at the right time, might make a significant difference in older adults’ sleep and daily activity patterns. This research, published in GeroScience, reveals that morning exposure to blue-enriched light can be beneficial, while that same light in the evening can actually make sleep problems worse.

“Our research shows that carefully timed light intervention can be a powerful tool for improving sleep and day-to-day activity in healthy older adults,” explains study author Daan Van Der Veen from the University of Surrey, in a statement. “By focusing on morning blue light and maximizing daylight exposure, we can help older adults achieve more restful sleep and maintain a healthier, more active lifestyle.”

Why light timing matters

So why do older adults have more sleep troubles in the first place? Part of the problem lies in the aging eye. As we get older, our eyes undergo natural changes—the lens yellows, pupils get smaller, and we have fewer photoreceptor cells. All these changes mean less light reaches the brain’s master clock, located in a tiny region of the hypothalamus called the suprachiasmatic nuclei (SCN).

That yellowing lens is particularly problematic because it filters out blue light wavelengths specifically. It’s like wearing subtle yellow sunglasses all the time. This matters because blue light (wavelengths between 420 and 480 nanometers) is especially powerful at regulating our body clocks. With less blue light reaching their brains, older adults’ internal clocks can become weaker and more prone to disruption.

Many seniors also spend less time outdoors and have fewer social engagements, further reducing their exposure to bright natural light. Meanwhile, they might be getting too much artificial light at night, which can confuse the body’s natural rhythms.

The Surrey researchers wanted to see if they could improve sleep for older adults living independently at home by tweaking their light exposure. They recruited 36 people aged 60 and over who reported having sleep problems. None were in full-time employment, and all were free from eye disorders or other conditions that might complicate the study.

Over an 11-week period during fall and winter (when natural daylight is limited in the UK), participants followed a carefully designed protocol. They spent one week establishing baseline measurements, followed by three weeks using either blue-enriched white light (17,000 K) or standard white light (4,000 K) for two hours each morning and evening. After a two-week break, they switched to the other light condition for three weeks, followed by another two-week washout period.

Participants used desktop light boxes while going about normal activities like reading or watching TV. They wore activity monitors on their wrists around the clock and light sensors around their necks during the day. They kept sleep diaries and collected urine samples to measure melatonin metabolites, markers indicating how their internal clocks were functioning.

Morning light helps, evening light hurts

The results were telling. Longer morning exposure to the blue-enriched light significantly improved the stability of participants’ daily activity patterns and reduced sleep fragmentation. By contrast, evening exposure to that same light made it harder to fall asleep and reduced overall sleep quality.

Another key discovery was that participants who spent more time in bright light (above 2,500 lux, roughly the brightness you’d experience outdoors on a cloudy day) had more active days, stronger daily rhythms, and tended to go to bed earlier. This finding reinforces long-standing advice from sleep experts: getting outside during the day is really important for good sleep.

Morning people (early birds) naturally started their morning light sessions earlier than night owls. However, most participants used their evening light sessions at similar times, suggesting that social habits might influence evening routines more than biological clocks.

The women in the study showed more variable activity patterns throughout the day than men, and those who took more daytime naps had less stable daily rhythms and were generally less active.

Practical tips

By the end of the study, participants reported meaningful improvements in their sleep quality. This suggests light therapy could be an alternative to sleep medications, which often come with side effects.

“We believe that this is one of the first studies that have looked into the effects of self-administered light therapy on healthy older adults living independently to aid their sleep and daily activity,” says study author Débora Constantino, a postgraduate research student. “It highlights the potential for accessible and affordable light-based therapies to address age-related sleep issues without the need for medication.”

For older adults seeking better rest, the advice is clear:

  • Get bright, blue-enriched light in the morning: Use a light box or spend time outdoors after waking up.
  • Dim the lights in the evening: Reduce exposure to phones, tablets, and bright overhead lights.
  • Stay consistent: Establishing regular morning and evening routines can further support healthy sleep patterns.

This approach isn’t just for people in care homes or those with cognitive impairments; it can also benefit healthy, independent older adults. With an aging population worldwide, finding simple and effective strategies to improve sleep has never been more important. The right light at the right time might be a key part of aging well.

Source: https://studyfinds.org/morning-blue-light-therapy-boosts-sleep-quality-older-adults/

Belly fat can boost brain health? Yes — but to a point, study shows

(© sun_apple – stock.adobe.com)

Age-related cognitive decline sneaks up on millions of people worldwide. It begins with those frustrating “senior moments” in middle age and can progress to more serious memory and thinking problems later in life. While scientists have traditionally focused their attention directly on the brain to understand these changes, new research out of Toho University in Japan points to an unexpected contributor: your belly fat.

A study published in the journal GeroScience reveals that visceral fat—the deep fat surrounding your internal organs—plays a role in maintaining brain health through a chemical messaging system. You might have heard of BDNF (brain-derived neurotrophic factor)—think of it as brain fertilizer. It helps brain cells grow, survive, and form new connections. The more BDNF you have, the better your brain functions. But as you age, your BDNF levels naturally drop, and that’s when memory problems can start.

Here’s where belly fat comes in. This new study found that CX3CL1, a protein made by visceral fat, plays a big role in maintaining healthy BDNF levels. In younger mice, their belly fat produced plenty of CX3CL1, keeping their brain function strong. But as the mice aged, both their belly fat and their brain’s BDNF levels took a nosedive. When scientists artificially lowered CX3CL1 in young mice, their BDNF levels dropped too, mimicking the effects of aging. But when they gave older mice an extra dose of CX3CL1, their brain’s BDNF bounced back.

These findings flip conventional wisdom about belly fat on its head. While excess visceral fat is still harmful and linked to many health problems, this research suggests that healthy amounts of visceral fat early on serve an important purpose by producing signaling molecules that support brain health.

The research tracked male mice at different ages—5, 10, and 18 months old (roughly equivalent to young adult, middle-aged, and elderly humans). The 5-month-old and 10-month-old mice had similar levels of BDNF in their hippocampus, but by 18 months, these levels had dropped by about a third. This pattern matches the typical trajectory of cognitive aging, where significant decline often doesn’t begin until later in life.

Similarly, CX3CL1 production in visceral fat remained stable in younger mice but declined significantly in older animals, supporting a link between the two proteins.

Stress Hormones and the Fat-Brain Connection

To dig deeper, the researchers asked: What causes the drop in fat-derived CX3CL1 in the first place? The answer involved stress hormones like cortisol (in humans) and corticosterone (in mice).

“Glucocorticoids boost CX3CL1 production. An enzyme in belly fat called 11β-HSD1 reactivates inactive forms of glucocorticoids and keeps them active in cells, promoting glucocorticoid-dependent expression of CX3CL1,” study co-author Dr. Yoshinori Takei tells StudyFinds. “11β-HSD1 is essential for belly fat to respond to circulating glucocorticoids properly.”

But as we age, the amount of this enzyme declines; with less active 11β-HSD1, the whole signaling chain weakens, lowering CX3CL1 and BDNF levels and potentially contributing to memory loss.

The paper notes that while lower 11β-HSD1 in aging is problematic for CX3CL1 production and brain health, excessive 11β-HSD1 expression is linked to obesity-related diseases. High 11β-HSD1 levels are associated with metabolic syndrome, which is a known risk factor for cognitive decline.

Rethinking Belly Fat

The connection between belly fat and brain health highlights how intertwined our body systems really are. Our brains don’t operate in isolation but depend on signals from throughout the body—including, surprisingly, our fat tissue.

Before you start thinking about packing on belly fat for the sake of your brain, don’t! The researchers stress that balance is key. Too little belly fat and you lose the brain-protecting effects, but too much can cause serious health problems.

The best way to maintain brain health as you age is to focus on proven strategies: staying active, eating a balanced diet, managing stress, and keeping your mind engaged.

While this research is still in its early stages and was conducted in mice, it opens up fascinating possibilities for understanding how our bodies and brains are connected. Scientists may one day find ways to tap into this fat-brain communication system to slow cognitive decline and keep our minds sharper for longer.

The next time you pinch an inch around your middle, remember: there’s a conversation happening between your belly and your brain that science is just beginning to understand.

Paper Summary

How the Study Worked

The researchers used male mice of three different ages: 5 months (young adult), 10 months (middle-aged), and 18 months (elderly). They measured BDNF protein levels in the hippocampus using a test called ELISA that can detect specific proteins in tissue samples. They also measured CX3CL1 levels in visceral fat tissue using two methods: one that detects the RNA instructions for making the protein and another that detects the protein itself. To determine whether fat-derived CX3CL1 directly affects brain BDNF, they used a technique called RNA interference to reduce CX3CL1 production specifically in the belly fat of younger mice, then checked what happened to brain BDNF levels. They also injected CX3CL1 into older mice to see if it would restore their brain BDNF levels. To understand what regulates CX3CL1 production, they treated fat cells grown in the lab with different stress hormones. Finally, they measured levels and activity of the enzyme 11β-HSD1 in fat tissue from younger and older mice, and used RNA interference to reduce this enzyme in younger mice to see how it affected the fat-brain signaling system.

Results

The study uncovered several key findings. First, hippocampal BDNF levels were similar in 5-month-old and 10-month-old mice (about 300 pg BDNF/mg protein) but dropped by about one-third in 18-month-old mice (about 200 pg BDNF/mg protein). CX3CL1 levels in visceral fat showed a similar pattern, decreasing significantly in the oldest mice. When the researchers reduced CX3CL1 production in the belly fat of younger mice, their brain BDNF levels fell within days, similar to levels seen in naturally aged mice. On the flip side, a single injection of CX3CL1 into the abdominal cavity of older mice boosted their brain BDNF back up, confirming the connection between these proteins. The researchers also found that natural stress hormones (corticosterone in mice, cortisol in humans) increased CX3CL1 production in fat cells, while the enzyme 11β-HSD1 that activates these hormones was much less abundant in the fat tissue of older mice. When they reduced this enzyme in younger mice, both fat CX3CL1 and brain BDNF levels decreased, revealing another link in the signaling chain. Together, these results mapped out a communication pathway from belly fat to brain that becomes disrupted with age.

Limitations

While the study presents intriguing findings, several limitations should be kept in mind. The research used only male mice to avoid complications from female hormonal cycles, so we don’t know if the same patterns exist in females. The sample sizes were small, with most tests using just three mice per group. While this is common in basic science research, larger studies would strengthen confidence in the results. The researchers demonstrated connections between fat tissue signals and brain BDNF levels but didn’t directly test whether these changes affected the mice’s memory or cognitive abilities, though their previous work had shown that CX3CL1 injections improved recognition memory in aged mice. The study was also limited to specific ages in mice, and we don’t yet know how these findings might translate to humans across our much longer lifespan. Finally, the researchers used artificial RNA interference techniques to reduce CX3CL1 and enzyme levels for short periods—different from the gradual changes that occur during natural aging—which might affect how the results apply to real-world aging.

Discussion and Takeaways

This research reveals a previously unknown communication system between belly fat and the brain. Under normal conditions, stress hormones in the blood are activated by the enzyme 11β-HSD1 in visceral fat, which then produces CX3CL1. This fat-derived CX3CL1 signals through immune cells and the vagus nerve (a major nerve connecting internal organs to the brain) to maintain healthy BDNF levels in the hippocampus. As we age, reduced 11β-HSD1 in belly fat disrupts this signaling chain, contributing to lower brain BDNF and potentially to age-related memory problems. This discovery changes how we think about visceral fat, suggesting that while excess belly fat is harmful, healthy amounts serve important functions in supporting brain health. The findings also hint at future therapeutic possibilities—perhaps treatments could target components of this pathway to maintain brain function in aging. The researchers note that a careful balance is needed, as both too little 11β-HSD1 (associated with cognitive decline) and too much (linked to obesity and metabolic problems) appear harmful. For the average person concerned about brain health, this research underscores that the body works as an interconnected whole, with tissues we don’t typically associate with thinking—like fat—playing important roles in maintaining our cognitive abilities.

Funding and Disclosures

The study was supported by grants from the Japan Society for the Promotion of Science (JSPS KAKENHI). The lead researcher, Yoshinori Takei, and two colleagues received research funding through grants numbered 23K10878, 23K06148, and 24K14786. The researchers declared no competing interests, meaning they didn’t have financial or other relationships that might have influenced their research or how they reported it.

Publication Information

The paper “Adipose chemokine ligand CX3CL1 contributes to maintaining the hippocampal BDNF level, and the effect is attenuated in advanced age” was written by Yoshinori Takei, Yoko Amagase, Ai Goto, Ryuichi Kambayashi, Hiroko Izumi-Nakaseko, Akira Hirasawa, and Atsushi Sugiyama from Toho University and other Japanese institutions. It appeared in the journal GeroScience in February 2025, after being submitted in October 2024 and accepted for publication in January 2025. The paper can be accessed online using the identifier https://doi.org/10.1007/s11357-025-01546-4


Source: https://studyfinds.org/belly-fat-brain-health/


Menopause starting earlier? Half of women in their 30s reporting symptoms

A woman experiencing hot flashes due to menopause (Photo by Pheelings media on Shutterstock)

Perimenopause—the transitional phase leading up to menopause—has long been considered a mid-life experience, typically affecting women in their late 40s. However, new research reveals that a significant number of women in their 30s are already experiencing perimenopausal symptoms severe enough to seek medical attention.

In a survey of 4,432 U.S. women, researchers from Flo Health and the University of Virginia found that more than half of those in the 30-35 age bracket reported moderate to severe menopause symptoms using the validated Menopause Rating Scale (MRS). Among those who consulted medical professionals about their symptoms, a quarter were diagnosed as perimenopausal. This challenges the assumption that perimenopause is primarily a concern for women approaching 50.

The findings, published in the journal npj Women’s Health, highlight a significant gap in healthcare awareness and support for women experiencing early-onset perimenopause.

Unrecognized Symptoms and Healthcare Gaps

“Physical and emotional symptoms associated with perimenopause are understudied and often dismissed by physicians. This research is important in order to more fully understand how common these symptoms are, their impact on women, and to raise awareness amongst physicians as well as the general public,” says study co-author Dr. Jennifer Payne, MD, an expert in reproductive psychiatry at UVA Health and the University of Virginia School of Medicine, in a statement.

Despite medical definitions being well established, public understanding remains muddled. Many people use “menopause” as a catch-all term for both perimenopause and post-menopause. This confusion contributes to women feeling unprepared and unsupported during this transition.

The journey through perimenopause varies. Some women experience a smooth 5-7 year transition with manageable symptoms, while others face a decade-long struggle with physical and psychological challenges that impact daily life.

Early vs. Late Perimenopause

“Perimenopause can be broadly split into early and late stages,” the researchers explained. Early perimenopause typically involves occasional missed periods or cycle irregularity, while late perimenopause features greater menstrual irregularity with longer periods without menstruation, ranging from 60 days to one year.

The study identified eight symptoms significantly associated with perimenopause:

  • Absence of periods for 12 months
  • Absence of periods for 60 days
  • Hot flashes
  • Vaginal dryness
  • Pain during sexual intercourse
  • Recent cycle length irregularity
  • Heart palpitations
  • Frequent urination

While symptom severity generally increased with age, women in their 30s and early 40s still experienced significant symptom burden. Among 30-35-year-olds, 55.4% reported moderate or severe symptoms, increasing to 64.3% in women aged 36-40.

“We had a significant number of women who are typically thought to be too young for perimenopause tell us that they have high levels of perimenopause-related symptoms,” said Liudmila Zhaunova, PhD, director of science at Flo. “It’s important that we keep doing research to understand better what is happening with these women so that they can get the care they need.”

Psychological vs. Physical Symptoms With Menopause

The study revealed patterns in symptom presentation across different perimenopause stages. Psychological symptoms—such as anxiety, depression, and irritability—tend to appear first, peaking among women ages 41-45 before declining. Physical problems, including sexual dysfunction, bladder issues, and vaginal dryness, peaked in women 51 and older. Classic menopause symptoms like hot flashes and night sweats were most prevalent between ages 51-55 and were least common among younger women.

These findings suggest that perimenopause follows a predictable symptom progression, with mood changes and cognitive issues appearing first, followed by more recognized physical symptoms in later stages.

Delayed Medical Attention

Despite high symptom burden, younger women are far less likely to seek medical help for perimenopause. The study found that while 51.5% of women over 56 consulted a doctor, only 4.3% of 30-35-year-olds did. However, among those who sought medical advice, over a quarter of 30-35-year-olds and 40% of 36-40-year-olds were diagnosed as perimenopausal.

The study used the Menopause Rating Scale (MRS), a validated tool that measures symptom severity across three domains: psychological symptoms, somato-vegetative symptoms (including hot flashes and sleep problems), and urogenital symptoms. While MRS scores were highest in the 51-55 age group, younger women still reported a significant symptom burden.

Implications for Healthcare and Awareness

“This study is important because it plots a trajectory of perimenopausal symptoms that tells us what symptoms we can expect when and alerts us to the fact that women are experiencing perimenopausal symptoms earlier than we expected,” Payne said.

These findings underscore the need for earlier education and support. Women in their 30s and early 40s may not recognize symptoms like irregular cycles, mood changes, and sleep disturbances as signs of perimenopause, leading to misdiagnosis or missed opportunities for treatment. This research calls for healthcare providers to adopt a more age-inclusive approach when evaluating these symptoms.

Additionally, the variability of perimenopause means a one-size-fits-all approach to management is inadequate. Psychological symptoms may dominate early perimenopause, while vasomotor and urogenital symptoms become more pronounced in later stages. Understanding these transitions can help tailor treatment strategies for individual needs.

Source: https://studyfinds.org/perimenopause-early-symptoms-women/

How one sleepless night upends the immune system, fueling inflammation

(© Andrii Lysenko – stock.adobe.com)

When you toss and turn all night, your immune system takes notice – and not in a good way. New research reveals that sleep deprivation doesn’t just leave you groggy and irritable; it actually transforms specific immune cells in your bloodstream, potentially fueling chronic inflammation throughout your body.

The study, published in The Journal of Immunology, finds a direct link between poor sleep quality and significant changes in specialized immune cells called monocytes. These altered cells appear to drive widespread inflammation – the same type of inflammation associated with obesity and numerous chronic diseases.

The research, conducted by scientists at Kuwait’s Dasman Diabetes Institute, demonstrates how sleep deprivation triggers an increase in inflammatory “nonclassical monocytes” (NCMs) – immune cells that amplify inflammation. More remarkably, these changes occurred regardless of a person’s weight, suggesting that even lean, healthy individuals may face inflammatory consequences from poor sleep.

Study authors examined three factors increasingly recognized as critical determinants of overall health: sleep, body weight, and inflammation. Though previous research established connections between obesity and poor sleep, this study goes further by identifying specific immune mechanisms that may explain how sleep disruption contributes to chronic inflammatory conditions.

“Our findings underscore a growing public health challenge. Advancements in technology, prolonged screen time, and shifting societal norms are increasingly disruptive to regular sleeping hours. This disruption in sleep has profound implications for immune health and overall well-being,” said Dr. Fatema Al-Rashed, who led the study, in a statement.

How the study worked

The research team recruited 237 healthy Kuwaiti adults across a spectrum of body weights and carefully monitored their sleep patterns using advanced wearable activity trackers. Participants were fitted with ActiGraph GT3X+ devices for seven consecutive days, providing objective data on sleep efficiency, duration, and disruptions. Meanwhile, blood samples revealed striking differences in immune cell populations and inflammatory markers across weight categories.

Obese participants demonstrated significantly lower sleep quality compared to their lean counterparts, along with elevated levels of inflammatory markers. Most notably, researchers observed marked differences in monocyte subpopulations across weight categories. Obese individuals showed decreased levels of “classical” monocytes (which primarily perform routine surveillance) and increased levels of “nonclassical” monocytes – cells known to secrete inflammatory compounds.

The study’s most compelling finding emerged when researchers discovered that poor sleep quality correlated with increased nonclassical monocytes regardless of body weight. Even lean participants who experienced sleep disruption showed elevated NCM levels, suggesting that sleep deprivation itself – independent of obesity – may trigger inflammatory responses.

To further test this hypothesis, researchers conducted a controlled experiment with five lean, healthy individuals who underwent 24 hours of complete sleep deprivation. The results were striking: after just one night without sleep, participants showed significant increases in inflammatory nonclassical monocytes. These changes mirrored the immune profiles seen in obese participants, supporting the role of sleep health in modulating inflammation. Even more remarkably, these alterations reversed when participants resumed normal sleep patterns, demonstrating the body’s ability to recover from short-term sleep disruption.

‘Sleep quality matters as much as quantity’

These findings highlight sleep’s crucial role in immune regulation and suggest that chronic sleep deprivation may contribute to inflammation-driven health problems even in individuals without obesity. The research points to a potential vicious cycle: obesity disrupts sleep, sleep disruption alters immune function, and altered immune function exacerbates inflammation associated with obesity and related conditions.

Modern life often treats sleep as a luxury rather than a necessity. We sacrifice rest for productivity, entertainment, or simply because our environments and schedules make quality sleep difficult to achieve. This study adds to mounting evidence that such trade-offs may have serious long-term health consequences.

For most adults, the National Sleep Foundation recommends 7-9 hours of sleep per night. Study participants averaged approximately 7.8 hours (466.7 minutes) of sleep nightly, but importantly, the research suggests that sleep quality matters as much as quantity. Disruptions, awakenings, and reduced sleep efficiency all appeared to influence immune function, even when total sleep duration seemed adequate.

Sleep efficiency – the percentage of time in bed actually spent sleeping – averaged 91.4% among study participants but was significantly lower in obese individuals. Those with higher body weights also experienced more “wake after sleep onset” (WASO) periods, indicating fragmented sleep patterns that may contribute to immune dysregulation.
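As a rough illustration of how these actigraphy metrics are derived (a generic sketch using made-up per-minute data, not the study's actual analysis code):

```python
# Sketch: computing sleep efficiency and WASO from per-minute actigraphy
# epochs. The data and numbers below are hypothetical, not from the study.

def sleep_metrics(epochs):
    """epochs: per-minute flags recorded between bedtime and final wake-up,
    1 = asleep, 0 = awake."""
    time_in_bed = len(epochs)                       # minutes spent in bed
    total_sleep = sum(epochs)                       # minutes actually asleep
    efficiency = 100.0 * total_sleep / time_in_bed  # % of bed time asleep

    # Wake After Sleep Onset: awake minutes occurring after the first
    # sleep epoch (i.e., awakenings that fragment the night).
    first_sleep = epochs.index(1)
    waso = sum(1 for flag in epochs[first_sleep:] if flag == 0)
    return efficiency, waso

# Example night: 480 minutes in bed with two awakenings after sleep onset.
night = [0] * 10 + [1] * 200 + [0] * 15 + [1] * 240 + [0] * 5 + [1] * 10
eff, waso = sleep_metrics(night)  # eff = 93.75% efficiency, waso = 20 min
```

In this hypothetical night, 450 of 480 minutes in bed were spent asleep (93.75% efficiency), with 20 minutes of wake after sleep onset; the study's obese participants showed lower efficiency and more WASO than this.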

How sleep impacts inflammation

The study also revealed intriguing connections between specific inflammatory markers and monocyte subpopulations. Nonclassical monocytes showed positive correlations with multiple inflammatory compounds, including TNF-α and MCP-1 – molecules previously linked to sleep regulation. This suggests that sleep disruption may initiate a cascade of inflammatory signals throughout the body, potentially contributing to various health problems.

While obesity emerged as a significant factor in driving inflammation, mediation analyses revealed that sleep disruption independently contributes to inflammation regardless of weight status. This finding challenges simplistic views of obesity as the primary driver of inflammation and highlights sleep’s importance as a modifiable risk factor for inflammatory conditions.

The implications extend beyond obesity-related concerns. Sleep disruption has been associated with numerous health problems, including cardiovascular disease, diabetes, and mental health disorders. This research provides potential mechanisms explaining these connections and suggests that improving sleep quality could reduce inflammation and associated risks.

Monocytes, crucial components of the innate immune system, patrol the bloodstream looking for signs of trouble. They differentiate into three main types: classical monocytes (which primarily perform surveillance), intermediate monocytes (which excel at presenting antigens and activating other immune cells), and nonclassical monocytes (which specialize in patrolling blood vessels and producing inflammatory compounds).

In healthy individuals, these monocyte populations maintain a careful balance. Sleep disruption appears to tip this balance toward inflammatory nonclassical monocytes, potentially contributing to a state of chronic low-grade inflammation throughout the body.

Is lack of quality sleep becoming a public health crisis?

This research provides compelling evidence that sleep quality deserves serious attention as a public health concern. The study suggests that even temporary sleep disruption can alter immune function, while chronic sleep problems may contribute to persistent inflammation – a condition increasingly recognized as a driver of numerous diseases.

For individuals struggling with obesity or inflammatory conditions, addressing sleep quality may provide additional benefits beyond traditional interventions focused on diet and exercise. The research also highlights potential concerns for shift workers, parents of young children, and others who regularly experience disrupted sleep patterns.

Healthcare providers may need to consider sleep quality as a critical factor when evaluating and treating patients with inflammatory conditions. Similarly, public health initiatives addressing obesity and related disorders might benefit from incorporating sleep improvement strategies alongside dietary and exercise recommendations.

The researchers are now planning to explore in greater detail the mechanisms linking sleep deprivation to immune changes. They also want to investigate whether interventions such as structured sleep therapies or technology-use guidelines can reverse these immune alterations.

“In the long term, we aim for this research to drive policies and strategies that recognize the critical role of sleep in public health,” said Dr. Al-Rashed. “We envision workplace reforms and educational campaigns promoting better sleep practices, particularly for populations at risk of sleep disruption due to technological and occupational demands. Ultimately, this could help mitigate the burden of inflammatory diseases like obesity, diabetes, and cardiovascular diseases.”

Source : https://studyfinds.org/sleep-deprivation-immune-system-inflammation/

How grapes could help preserve muscle health as you age

(Photo by J Yeo on Shutterstock)

Could adding grapes to your daily diet help maintain muscle strength and health as you age? A new mouse model study suggests these antioxidant-rich fruits might help reshape muscle composition, particularly in females, as they enter their later years.

Published in the journal Foods, this investigation — partially funded by the California Table Grape Commission — tracked 480 mice over two and a half years, examining how grape consumption affects muscle gene expression at a fundamental level. The findings highlight how something as simple as adding grapes to our daily diet might help support muscle health during aging.

Muscle loss affects millions of older adults worldwide, with 10-16% of elderly individuals experiencing sarcopenia—the progressive deterioration of muscle mass and function that comes with age. Women often face greater challenges maintaining muscle mass, particularly after menopause, making this research especially relevant for aging females.

Researchers from several U.S. universities discovered that consuming an amount of grapes equivalent to two human servings daily led to notable changes in muscle-related gene expression. While both males and females showed genetic shifts, the effects were particularly pronounced in females, whose gene activity patterns began shifting toward those typically observed in males.

This convergence occurred at the genetic level, where researchers identified 25 key genes affected by grape consumption. Some genes associated with lean muscle mass increased their activity, while others linked to muscle degeneration showed decreased expression.

What makes grapes so special? The fruit contains over 1,600 natural compounds that work together in complex ways. Rather than any single component being responsible for the benefits, it’s likely the combination of these compounds that produces such significant effects.

“This study provides compelling evidence that grapes have the potential to enhance muscle health at the genetic level,” says Dr. John Pezzuto, senior investigator of the study and professor and dean of pharmacy and health sciences at Western New England University, in a statement. “Given their safety profile and widespread availability, it will be exciting to explore how quickly these changes can be observed in human trials.”

Proper muscle function plays a crucial role in everyday activities, from maintaining balance to supporting bone health and regulating metabolism. The potential to help maintain muscle health through dietary intervention could significantly impact quality of life for aging adults.

The research adds to a growing body of evidence supporting grapes’ health benefits. Previous studies have shown positive effects on heart health, kidney function, skin protection, vision, and digestive health. This new understanding of grapes’ influence on muscle gene expression opens another avenue for potential therapeutic applications.

While the physical appearance and weight of muscles didn’t change significantly between groups, the underlying genetic activity showed marked differences. This suggests that grapes might influence muscle health at a fundamental cellular level, even before measurable functional changes occur—though further research is needed to confirm these effects.

For older adults concerned about maintaining their strength and independence, these findings suggest that a daily bowl of grapes, alongside regular exercise, might offer an additional tool in the healthy aging toolkit. However, the researchers emphasize that human studies are still needed to confirm these effects.

Source : https://studyfinds.org/grapes-muscle-strength/

Why some people remember their dreams (and others don’t)

About a fourth of people don’t remember their dreams. (Roman Samborskyi/Shutterstock)

What were you dreaming about last night? For roughly one in four people, that question draws a blank. For others, the answer comes easily, complete with vivid details about flying through clouds or showing up unprepared for an exam. This stark contrast in dream recall ability has baffled researchers for decades, but a new study reveals there’s more to remembering dreams than pure chance.

From March 2020 to March 2024, scientists from multiple Italian research institutions conducted a sweeping investigation to uncover what determines dream recall. Published in Communications Psychology, their research surpassed typical dream studies by combining detailed sleep monitoring, cognitive testing, and brain activity measurements. The study involved 217 healthy adults between ages 18 and 70, who did far more than simply keep dream journals; they underwent brain tests, wore sleep-tracking wristbands, and some even had their brain activity monitored throughout the night.

Understanding dream recall has long puzzled researchers. Early studies in the 1950s focused mainly on REM sleep, the sleep stage characterized by rapid eye movements and vivid dreams. Scientists initially thought they had solved the mystery of dreaming by linking it exclusively to REM sleep. However, later research revealed that people also dream during non-REM sleep stages, though these dreams tend to be less vivid and harder to remember.

According to researchers at the IMT School for Advanced Studies Lucca, three main factors emerged as strong predictors of dream recall: a person’s general attitude toward dreaming, their tendency to let their mind wander during waking hours, and their typical sleep patterns.

To measure attitudes about dreaming, participants completed a questionnaire rating how strongly they agreed or disagreed with statements like “dreams are a good way of learning about my true feelings” versus “dreams are random nonsense from the brain.” People who viewed dreams as meaningful and worthy of attention were more likely to remember them compared to those who dismissed dreams as meaningless brain static.

Mind wandering proved to be another crucial factor. Using a standardized questionnaire that measures how often people’s thoughts drift away from their current task, researchers found that participants who frequently caught themselves daydreaming or engaging in spontaneous thoughts during the day were more likely to recall their dreams. This connection makes sense considering both daydreaming and dreaming involve similar brain networks, particularly regions associated with self-reflection and creating internal mental experiences.

The relationship between daydreaming and dream recall points to an intriguing possibility: people who spend more time engaged in spontaneous mental activity during the day may be better equipped to generate and remember dreams at night. Both activities involve creating mental experiences disconnected from the immediate external environment.

People who typically had longer periods of lighter sleep with less deep sleep (technically called N3 sleep) were better at remembering their dreams. During deep sleep, the brain produces large, slow waves that help consolidate memories but may make it harder to generate or remember dreams. In contrast, lighter sleep stages maintain brain activity patterns more similar to wakefulness, potentially making it easier to form and store dream memories.

Age was also a factor in dream recall. While younger participants were generally better at remembering specific dream content, older individuals more frequently reported “white dreams,” those frustrating experiences where you wake up knowing you definitely had a dream but can’t remember anything specific about it. This age-related pattern suggests that the way our brains process and store dream memories may change as we get older.

The researchers also discovered that dream recall fluctuates seasonally, with people remembering fewer dreams during winter months compared to spring and autumn. While the exact reason remains unclear, this pattern wasn’t explained by changes in sleep habits across seasons. One possibility is that seasonal variations in light exposure affect brain chemistry in ways that influence dream formation or recall.

Rather than relying on written dream journals, participants used voice recorders each morning to describe everything that was going through their minds just before waking up. This approach reduced the effort required to record dreams and minimized the chance that the act of recording would interfere with the memory of the dream itself.

Throughout the study period, participants wore wristwatch-like devices called actigraphs that track movement patterns to measure sleep quality, duration, and timing. A subset of 50 participants also wore special headbands equipped with electrodes to record their brain activity during sleep. This comprehensive approach allowed researchers to connect dream recall with objective measures of how people were actually sleeping, not just how they thought they slept.

“Our findings suggest that dream recall is not just a matter of chance but a reflection of how personal attitudes, cognitive traits, and sleep dynamics interact,” says lead author Giulio Bernardi, professor in general psychology at the IMT School, in a statement. “These insights not only deepen our understanding of the mechanisms behind dreaming but also have implications for exploring dreams’ role in mental health and in the study of human consciousness.”

The study authors plan to use these findings as a reference for future research, particularly in clinical settings. Further investigations could explore the diagnostic and prognostic value of dream patterns, potentially improving our understanding of how dreams relate to mental health and neurological conditions.

Understanding dream recall could provide insights into how the brain processes and stores memories during sleep. Dreams appear to draw upon our previous experiences and memories while potentially playing a role in emotional processing and memory consolidation. Changes in dream patterns or recall ability might serve as early indicators of neurological or psychiatric conditions.

Source : https://studyfinds.org/why-some-people-remember-their-dreams-others-dont/

This one change to your phone can reverse age-related cognitive issues by 10 years

(Photo by Alliance Images on Shutterstock)

New research reveals a surprisingly simple way to improve mental health and focus: turn off your phone’s internet. A month-long study found that blocking mobile internet access for just two weeks led to measurable improvements in well-being, mental health, and attention, with effects comparable to cognitive behavioral therapy and to reversing a decade of age-related cognitive decline.

Researchers from multiple universities across the U.S. and Canada worked with 467 iPhone users (average age 32) to test how removing constant internet access would affect their daily lives. Instead of asking people to give up their phones completely, the study took a more practical approach. Participants installed an app that blocked mobile internet while still allowing calls and texts. This way, phones remained useful for basic communication but lost their ability to provide endless scrolling, social media, and constant online access.

The average smartphone user now spends nearly 5 hours each day on their device. More than half of Americans with smartphones worry they use them too much, and this jumps to 80% for people under 30. Despite these concerns, few studies have actually tested what happens when people cut back.

The results were significant. After two weeks without mobile internet, participants showed clear improvements in multiple areas. They reported feeling happier and more satisfied with their lives, and their mental health improved—an effect size that was greater than what is typically seen with antidepressant medications in clinical trials. They also performed better on attention tests, showing improvements comparable to reversing 10 years of age-related cognitive decline.

To measure attention, participants completed a computer task that tested their ability to stay focused over time. The improvements were meaningful—similar in size to the difference between an average adult and someone with mild attention difficulties. This suggests that constant mobile internet access may impair our natural ability to focus.

The study design was particularly strong because it included a swap halfway through. After the first two weeks, the groups switched roles—people who had blocked mobile internet got access back, while the other group had to block their internet. This strengthened the evidence that the improvements were caused by reduced mobile internet access rather than other factors.

“Smartphones have drastically changed our lives and behaviors over the past 15 years, but our basic human psychology remains the same,” says lead author Adrian Ward, an associate professor of marketing at the University of Texas at Austin, in a statement. “Our big question was, are we adapted to deal with constant connection to everything all the time? The data suggest that we are not.”

An impressive 91% of participants improved in at least one area. Without the ability to check their phones constantly, people spent more time socializing in person, exercising, and being outdoors—activities known to boost mental health and cognitive function.

Throughout the study, researchers checked in with participants via text messages to track their moods. Those who blocked mobile internet reported feeling progressively better over the two weeks. Even after regaining internet access, many retained some of their improvements, suggesting the break helped reshape their digital habits.

Interestingly, the benefits weren’t just from less screen time. While phone use dropped significantly during the study (from over 5 hours to under 3 hours daily), the improvements appeared linked specifically to breaking the habit of constant online connection. Even after getting internet access back, many participants kept their usage lower and continued feeling better.

One surprising finding involved people who started the study with a high “fear of missing out” (FOMO). Rather than making their anxiety worse, disconnecting from mobile internet led to the biggest improvements in their well-being. This suggests that constant access to social media and online updates may fuel digital anxiety rather than relieve it.

Blocking mobile internet also helped participants feel more in control of their behavior and improved their sleep. Without instant access to endless entertainment and social media, people reported having better control over their attention and averaged about 17 more minutes of sleep per night.

However, sticking to the program was difficult—only about 25% of participants kept their mobile internet blocked for the full two weeks. This highlights how dependent many of us have become on constant connectivity. Still, even those who didn’t fully adhere to the program showed improvements, suggesting that simply reducing mobile internet use can be beneficial.

The researchers noted that a less extreme approach might work better for most people. Instead of blocking all mobile internet, limiting access during certain times or restricting specific apps could provide similar benefits while being easier to maintain.

The takeaway is simple: reducing mobile internet access—even temporarily—can help improve well-being, mental health, and focus. While not everyone is ready to disconnect completely, finding ways to limit our online exposure could make us happier, healthier, and more present in our daily lives.

Source : https://studyfinds.org/digital-detox-keeping-phone-internet-off-wellbeing-focus-sleep/

Why morning people are more likely to conquer challenges

(© Anatoliy Karlyuk – stock.adobe.com)

It’s no surprise that our mental acuity and mood wax and wane during the day, but it may be surprising that most of us seem to be morning people.

In a study at University College London, researchers analyzed data collected from a dozen surveys of 49,218 respondents between March 2020 and March 2022. According to the report, published recently in the journal BMJ Mental Health, the data showed a trend of people reporting better mental health and wellbeing early in the day. They reported greater life satisfaction, increased happiness, less severe depressive symptoms, and a greater sense of self-worth earlier in the day. People felt worst around midnight. Mental health and mood were more variable on weekends, while loneliness was more stable throughout the week.

Dr. Feifei Bu, principal research fellow in statistics and epidemiology at University College London, said in an email to CNN, “Our study suggests that people’s mental health and wellbeing could fluctuate over time of day. On average people seem to feel best early in the day and worst late at night.”

Research Limitations

Even though a correlation was discovered between morning, better mood, life satisfaction, and self-worth, there may be factors affecting the results not apparent in the research, Dr. Bu says.

How people were feeling may have affected when they filled out the surveys. As with most research, the findings need to be replicated. Studies need to be designed to adjust for or eliminate confounding variables, isolating specific questions as much as possible.

In addition, although mental health and well-being are associated, they are not the same thing. Well-being is a complex medley of mental, emotional, physical, cognitive, psychological, and spiritual factors. According to the World Health Organization, well-being is a positive state determined by social, economic, and environmental conditions that include quality of life and a sense of meaning and purpose.

Mental health is a significant contributor to well-being, but they don’t entirely overlap. Many people with mental health issues also enjoy what they describe as a good quality of life.

Also, while many reported feeling better in the morning, better is relative. When someone feels better in the morning, that doesn’t necessarily mean that they feel good.

In addition, mood is a temporary state; mental health and well-being are more stable conditions.

Do hard work when it’s best for you

Do these results mean you should confront problems or do your hardest work first thing in the morning? Or that you shouldn’t problem-solve in the evening, and should instead just go to bed and tackle your issues in the morning? Not all research agrees, but more evidence points to late morning as the most productive time for problem-solving. Studies suggest that mood is more stable in the late morning, making it easier to confront demanding matters with a cool head and less emotional influence.

Cortisol, an important body-regulating hormone that your adrenal glands produce and release, has a daily rhythm of highs and lows. It can also be secreted in bursts in response to stress. Cortisol tends to be lower in the midafternoon. This time is also associated with dips in mood and “decision fatigue.”

Source : https://studyfinds.org/why-morning-people-conquer-challenges/


Why intermittent fasting could be harmful for teens

(© anaumenko – stock.adobe.com)

Intermittent fasting has become one of the most popular eating patterns of the past decade. The practice, which involves cycling between periods of eating and fasting, has been praised for its potential health benefits. But a new mouse model study suggests that age plays a crucial role in how the body responds to fasting — and for young individuals, it might do more harm than good.

A team of German researchers recently discovered that while intermittent fasting improved health markers in older mice, it actually impaired important cellular development in younger ones. Their findings, published in Cell Reports, raise important questions about who should (and shouldn’t) try this trending eating pattern.

Inside our bodies, specialized cells in the pancreas produce insulin, a hormone that helps control blood sugar levels. These cells, called beta cells, are particularly important during youth when the body is still developing. The researchers found that in young mice, long-term intermittent fasting disrupted how these cells grew and functioned.

“Our study confirms that intermittent fasting is beneficial for adults, but it might come with risks for children and teenagers,” says Stephan Herzig, a professor at Technical University of Munich and director of the Institute for Diabetes and Cancer at Helmholtz Munich, in a statement.

The study looked at three groups of mice: young (equivalent to adolescence in humans), middle-aged (adult), and elderly. Each group followed an eating pattern where they fasted for 24 hours, followed by 48 hours of normal eating. The researchers tracked how this affected their bodies over both short periods (5 weeks) and longer periods (10 weeks).

At first, all age groups showed improvements in how their bodies handled sugar, which, of course, is a positive sign. But after extended periods of intermittent fasting, significant differences emerged between age groups. While older and middle-aged mice continued to show benefits, the young mice began showing troubling changes.

The pancreatic cells in young mice became less effective at producing insulin, and they weren’t maturing properly. Even more concerning, these cellular changes resembled patterns typically seen in Type 1 diabetes, a condition that usually develops in childhood or adolescence.

“Intermittent fasting is usually thought to benefit beta cells, so we were surprised to find that young mice produced less insulin after the extended fasting,” explains co-lead author Leonardo Matta, from Helmholtz Munich.

The older mice, however, actually benefited from the extended fasting periods. Their insulin-producing cells worked better, and they showed improved blood sugar control. Middle-aged mice maintained stable function, suggesting that mature bodies handle fasting periods differently than developing ones.

This age-dependent response challenges the common belief that intermittent fasting is suitable for everyone. The research suggests that while mature adults might benefit from this eating pattern, young people could be putting themselves at risk, particularly if they maintain the practice for extended periods.

The findings are especially relevant given how popular intermittent fasting has become among young people looking to manage their weight. While short-term fasting appeared safe across all age groups, the long-term effects on young practitioners could be significant.

“The next step is digging deeper into the molecular mechanisms underlying these observations,” says Herzig. “If we better understand how to promote healthy beta cell development, it will open new avenues for treating diabetes by restoring insulin production.”

Despite the attention they receive from athletes and wellness influencers, popular dietary trends aren’t one-size-fits-all. What works for adults might not be appropriate for growing bodies, making it all the more important to understand these age-related differences.

Source : https://studyfinds.org/intermittent-fasting-harmful-teens/

Brake dust could be more harmful to health than diesel exhaust

(© kichigin19 – stock.adobe.com)

As cities worldwide crack down on diesel vehicle emissions, a more insidious form of air pollution has been quietly growing alongside increased traffic – brake dust. Research concludes that the particles released when vehicles brake may actually be more harmful to human lung cells than diesel exhaust, with copper-rich brake pads emerging as a particular concern.

This finding comes at a critical time, as the shift toward heavier electric vehicles means more brake wear and potentially higher exposure to these harmful particles. While governments have made substantial progress in reducing exhaust emissions, brake dust remains largely unregulated despite contributing up to 55% of all traffic-related fine particles in urban areas.

Researchers at the University of Southampton and their collaborators examined how tiny particles from different types of brake pads affected human lung cells, focusing on the delicate air sacs where oxygen enters our bloodstream. They compared brake dust from four common types of brake pads against diesel exhaust particles. Much like comparing different recipes to see which ingredients might cause problems, they tested low-metallic, semi-metallic, non-asbestos organic (NAO), and ceramic brake pads.

Their findings, published in Particle and Fibre Toxicology, painted a concerning picture: brake dust from copper-enriched NAO and ceramic brake pads caused significantly more cellular stress and inflammation than both other brake pad types and diesel exhaust. These copper-rich particles triggered inflammatory responses and altered cell metabolism in ways that could potentially lead to disease.

Modern brake pads contain a complex mixture of materials that help vehicles stop safely. NAO brake pads, the most common type in the U.S. due to their low cost and good performance, were developed to replace asbestos-containing pads. However, manufacturers added copper fibers to maintain heat conductivity – a role previously filled by asbestos. This copper content turned out to be problematic.

When researchers exposed lung cells to NAO brake dust, copper accumulated inside the cells steadily as exposure increased. Using specialized molecules that bind to specific metals – like a magnet that only attracts one type of metal – they confirmed that copper was driving the harmful effects.

Perhaps most concerning was the discovery that copper-rich brake dust triggered a cellular response called “pseudohypoxic HIF signaling.” In simple terms, this means the cells behaved as if they were starving for oxygen even though plenty was available – similar to a false alarm that keeps cells in an unnecessary state of emergency. This same mechanism has been linked to various diseases, including certain cancers and scarring of lung tissue.

Some U.S. states, including California and Washington, have already begun restricting copper in brake pads – but these rules were originally created to protect fish and aquatic life from copper washing off roads into waterways, not to address human health concerns. This study suggests these restrictions may have the unexpected benefit of protecting human health as well.

Source: https://studyfinds.org/brake-dust-more-harmful-than-diesel-exhaust/

Eating yogurt may offer protection against hard-to-detect colon cancer

Yogurt has many health benefits. Now, new research shows it might be effective against certain colorectal cancers. (Photo by Vicky Ng on Unsplash)

For years, experts have praised yogurt’s potential benefits for digestive health, but that’s not the only punch it packs. New research suggests its cancer-fighting properties might be more nuanced than previously thought. A new study reveals that yogurt consumption may help prevent certain types of colorectal cancer, specifically those containing higher levels of beneficial bacteria called Bifidobacterium.

Colorectal cancer ranks as the third most common cancer worldwide, affecting both men and women. Prevention strategies have become increasingly important as rates rise, particularly among younger adults. While regular screening through colonoscopy remains the gold standard for early detection, researchers continue searching for dietary and lifestyle factors that might reduce cancer risk.

Research teams from Mass General Brigham and Harvard Medical School analyzed data from over 132,000 health professionals spanning multiple decades. Their study, published in Gut Microbes, reveals a surprising link between yogurt consumption patterns and subsequent colorectal cancer diagnoses.

“Our study provides unique evidence about the potential benefit of yogurt,” says Dr. Shuji Ogino, chief of the Program in Molecular Pathological Epidemiology at Brigham and Women’s Hospital, in a statement. “My lab’s approach is to try to link long-term diets and other exposures to a possible key difference in tissue, such as the presence or absence of a particular species of bacteria. This kind of detective work can increase the strength of evidence connecting diet to health outcomes.”

Through two major studies, the Nurses’ Health Study and the Health Professionals Follow-up Study, researchers tracked more than 100,000 female nurses since 1976 and 51,000 male health professionals since 1986. Every two years, participants answered detailed questions about their health, lifestyle, and medical history. Every four years, they provided specific information about their diets, including how much plain and flavored yogurt they consumed.

This long-term tracking allowed researchers to understand not just occasional yogurt consumption but established eating patterns over decades. When participants developed colorectal cancer, researchers analyzed tumor samples for the presence of Bifidobacterium, a type of beneficial bacteria naturally present in the human gut and commonly added to yogurt products.

Among 3,079 documented colorectal cancer cases, researchers examined 1,121 for Bifidobacterium content. The findings revealed that this beneficial bacterium was quite common: 31% of cases were Bifidobacterium-positive, while 69% were negative. For participants who ate two or more servings of yogurt per week, researchers observed a 20% lower rate of Bifidobacterium-positive tumors compared to those who ate yogurt less than once per month.

Most notably, this protective effect appeared strongest in the proximal colon, also known as the right side of the colon. Located near where the small intestine connects to the large intestine, the proximal colon poses unique challenges for cancer detection and treatment. Cancers in this area often grow with fewer obvious symptoms and are harder to spot during routine colonoscopy procedures. Research has shown that patients with proximal colon cancer typically face worse survival outcomes than those with cancers in other parts of the colon.

“It has long been believed that yogurt and other fermented milk products are beneficial for gastrointestinal health,” says co-senior author Dr. Tomotaka Ugai. “Our new findings suggest that this protective effect may be specific for Bifidobacterium-positive tumors.”

Bifidobacterium, a beneficial gut bacterium often found in yogurt, plays a role in digesting dietary fiber, maintaining gut barrier integrity, and regulating immune responses—all factors linked to colorectal cancer risk. The study’s authors hypothesize that yogurt consumption may contribute to a healthier gut microbiome, which in turn could influence cancer risk, particularly in the proximal colon.

However, because different yogurt products contain varying levels and strains of probiotics, more research is needed to determine whether specific types of yogurt provide greater protective benefits than others. Future studies may explore how dietary patterns interact with individual gut microbiomes to influence cancer risk, potentially leading to more personalized dietary recommendations for colorectal cancer prevention, though this remains an emerging area of research.

Regular yogurt consumers in the study demonstrated other healthy habits as well. They typically exercised more, smoked less, and maintained better overall dietary patterns than those who rarely ate yogurt. However, even after accounting for these factors, the association between yogurt consumption and reduced risk of Bifidobacterium-positive proximal colon cancer remained significant.

“This paper adds to the growing evidence that illustrates the connection between diet, the gut microbiome, and risk of colorectal cancer,” says Dr. Andrew Chan, chief of the Clinical and Translational Epidemiology Unit at Massachusetts General Hospital.

Beyond the general recommendation to consume yogurt, this research raises questions about which products might offer the most benefit. Not all yogurts contain the same bacterial strains or concentrations. While many products include Bifidobacterium, the amounts can vary significantly. Future research may help determine whether certain formulations provide better protection against colorectal cancer.

Different subtypes of colorectal cancer may respond differently to preventive measures, suggesting that a one-size-fits-all approach to prevention might not be optimal. This understanding could eventually lead to more personalized prevention strategies based on individual risk factors and gut bacterial composition.

Source: https://studyfinds.org/eating-yogurt-colon-cancer/

Is AI making us stupider? Maybe, according to one of the world’s biggest AI companies

Deferring to machines to make our decisions can have disastrous consequences when it comes to human lives. (Credit: © Jakub Jirsak | Dreamstime.com)

There is only so much thinking most of us can do in our heads. Try dividing 16,951 by 67 without reaching for a pen and paper. Or a calculator. Try doing the weekly shopping without a list on the back of last week’s receipt. Or on your phone.

By relying on these devices to help make our lives easier, are we making ourselves smarter or dumber? Have we traded efficiency gains for inching ever closer to idiocy as a species?

This question is especially important to consider with regard to generative artificial intelligence (AI) technology such as ChatGPT, an AI chatbot owned by tech company OpenAI, which at the time of writing is used by 300 million people each week.

According to a recent paper by a team of researchers from Microsoft and Carnegie Mellon University in the United States, the answer might be yes. But there’s more to the story.

Thinking well
The researchers assessed how users perceive the effect of generative AI on their own critical thinking.

Generally speaking, critical thinking has to do with thinking well.

One way we do this is by judging our own thinking processes against established norms and methods of good reasoning. These norms include values such as precision, clarity, accuracy, breadth, depth, relevance, significance and cogency of arguments.

Other factors that can affect quality of thinking include the influence of our existing world views, cognitive biases, and reliance on incomplete or inaccurate mental models.

The authors of the recent study adopt a definition of critical thinking developed by American educational psychologist Benjamin Bloom and colleagues in 1956. It’s not really a definition at all. Rather, it’s a hierarchical way to categorize cognitive skills, including recall of information, comprehension, application, analysis, synthesis and evaluation.

The authors state they prefer this categorization, also known as a “taxonomy”, because it’s simple and easy to apply. However, since it was devised it has fallen out of favor and has been discredited by Robert Marzano and indeed by Bloom himself.

In particular, it assumes there is a hierarchy of cognitive skills in which so-called “higher-order” skills are built upon “lower-order” skills. This does not hold on logical or evidence-based grounds. For example, evaluation, usually seen as a culminating or higher-order process, can be the beginning of inquiry or very easy to perform in some contexts. It is more the context than the cognition that determines the sophistication of thinking.

An issue with using this taxonomy in the study is that many generative AI products also seem to use it to guide their own output. So you could interpret this study as testing whether generative AI, by the way it’s designed, is effective at framing how users think about critical thinking.

Also missing from Bloom’s taxonomy is a fundamental aspect of critical thinking: the fact that the critical thinker not only performs these and many other cognitive skills, but performs them well. They do this because they have an overarching concern for the truth, which is something AI systems do not have.

Higher confidence in AI equals less critical thinking
Research published earlier this year revealed “a significant negative correlation between frequent AI tool usage and critical thinking abilities”.

The new study further explores this idea. It surveyed 319 knowledge workers such as healthcare practitioners, educators and engineers, who discussed 936 tasks they conducted with the help of generative AI. Interestingly, the study found users consider themselves to use critical thinking less in the execution of the task than in providing oversight at the verification and editing stages.

In high-stakes work environments, the desire to produce high-quality work, combined with the fear of reprisals, serves as a powerful motivator for users to engage their critical thinking in reviewing the outputs of AI.

But overall, participants believe the increases in efficiency more than compensate for the effort expended in providing such oversight.

The study found people who had higher confidence in AI generally displayed less critical thinking, while people with higher confidence in themselves tended to display more critical thinking.

This suggests generative AI does not harm one’s critical thinking – provided one has it to begin with.

Problematically, the study relied too much on self-reporting, which can be subject to a range of biases and interpretation issues. Putting this aside, critical thinking was defined by users as “setting clear goals, refining prompts, and assessing generated content to meet specific criteria and standards”.

Source: https://studyfinds.org/is-ai-making-us-stupider-maybe-according-to-one-of-the-worlds-biggest-ai-companies/

What’s the best time for taking a nap?

(© fizkes – stock.adobe.com)

If you’ve ever wondered about the best time to take a nap, researchers have found your answer: 1:42 p.m. This oddly specific time emerged from a new nationwide study that looked at how Americans nap and what makes some people better nappers than others.

The survey, conducted by Talker Research and commissioned by Avocado Green Mattress, found that most people aim for a 51-minute nap, which would have them waking up at 2:33 p.m. But there’s a catch – napping too long can leave you feeling worse than before you closed your eyes.

“As a psychologist, I see firsthand how sleep — especially napping — affects mood, focus and overall well-being. So many people nap the wrong way and then wonder why they feel groggy instead of refreshed,” says Nick Bach, who holds a doctorate in psychology, in a statement.

When Does a Nap Become Too Long?

The study found that naps lasting longer than an hour and 26 minutes – about 35 minutes past the “perfect” length – enter what researchers call the “danger zone.” At this point, you might feel groggy and disoriented instead of refreshed. And if you’re still sleeping after an extra hour and 44 minutes? That’s no longer a nap – you’ve drifted into a full sleep session.

But even the ideal 51-minute nap might be too long for most people. Bach warns, “I always tell people that if they nap too long, they risk entering deep sleep, which makes waking up harder. A quick 20-minute nap is perfect for a recharge without the dreaded sleep inertia.”

The Great Debate: TV vs. Silence

While sleep experts often recommend quiet, dark rooms for napping, many Americans have different ideas. The study found that 44% of people like having some background noise during their naps – similar to the 50% who prefer noise while sleeping at night. Nearly half of these nappers (47%) fall asleep with the TV on, while only 7% use a white noise machine.

Bach suggests a middle ground: “I always recommend napping in a quiet, dark and cool space. If total silence isn’t an option, using white noise or soft music can help.”

When it comes to where people nap, there’s another split between expert advice and real-world habits. While 53% follow the traditional route and nap in bed, 38% prefer catching their midday rest on the couch. As Bach notes, “Napping on the couch can work, but a bed with good support is usually better.”

Are Nappers More Successful?

Here’s where the research gets interesting: people who regularly take naps might have better social lives. The study found that 48% of self-described “nappers” report having a “thriving” social life, compared to 34% of non-nappers. The pattern continues in their love lives too, with 50% of nappers reporting satisfaction versus 39% of non-nappers.

While both groups were equally likely to be happy (74% of nappers versus 73% of non-nappers), nappers had a slight edge in feeling successful – 39% compared to 32% of non-nappers. They’re also more likely to care about making sustainable choices, with 74% of nappers considering environmental impact in their decisions versus 68% of non-nappers.

Getting the Timing Right

The study’s finding that 1:42 p.m. is the perfect nap time isn’t just a random number – it fits right into expert recommendations. “I think one of the biggest mistakes people make is napping too late,” Bach explains. “If you nap in the late afternoon or evening, it can mess with your nighttime sleep. Ideally, napping before 3 p.m. keeps your sleep schedule on track.”

The benefits of a well-timed nap are clear: 55% of people in the study said they felt more productive right after waking up from a nap. However, there’s a concerning trend – the Americans surveyed only felt well-rested for about half of an average week, suggesting that many might be using naps to make up for poor nighttime sleep.

Source: https://studyfinds.org/best-time-nap/

Why smart people cheat — even when there’s nothing to gain

Man crossing his fingers behind his back (© Bits and Splits – stock.adobe.com)

Study shows uncertainty might be the key to breaking self-deceptive behaviors

A fitness tracker mysteriously logs extra steps. A calorie-counting app somehow shows lower numbers. An online quiz score seems surprisingly high. While these scenarios might seem like harmless self-improvement tools, new research reveals they represent a fascinating psychological phenomenon: we often cheat unconsciously simply to feel better about ourselves, even when there’s nothing tangible to gain.

“I found that people do cheat when there are no extrinsic incentives like money or prizes but intrinsic rewards, like feeling better about yourself,” explains Sara Dommer, assistant professor of marketing at Penn State and lead researcher of a groundbreaking study published in the Journal of the Association for Consumer Research. “For this to work, it has to happen via diagnostic self-deception, meaning that I have to convince myself that I am actually not cheating. Doing so allows me to feel smarter, more accomplished or healthier.”

This phenomenon, which researchers call “diagnostic self-deception,” helps explain behaviors that traditional theories about cheating cannot. While previous research focused on cheating for material gain, Dommer’s work examines why people cheat even when the only reward is an enhanced self-image.

Inside the Self-Deception Experiments

Through four carefully designed studies, Dommer and her team revealed how this self-deceptive behavior works in everyday situations.

Calorie Counting Study

One of the most illuminating experiments tackled everyday calorie tracking. Researchers presented 288 undergraduate students with a three-day food diary scenario, including restaurant meals like pancakes, sandwiches, and pasta dishes. Some students received exact calorie counts from restaurant websites (e.g., “450 calories for a short stack of buttermilk pancakes”), while others only saw multiple options ranging from 300 to 560 calories.

The results showed that when students lacked specific caloric information, they consistently chose lower calorie estimates. Importantly, the study was designed so that averaging the provided calorie options would match the true caloric value. Instead, participants routinely selected lower numbers, effectively deceiving themselves about their food choices.
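A small sketch makes this design concrete: when the answer options are symmetric around the truth, averaging them recovers the real total, while systematically choosing the lowest option produces the kind of under-reporting the study observed. The menu items, calorie figures, and spread below are hypothetical, not the study's actual stimuli.

```python
# Hypothetical menu with true calorie counts (illustrative values, not the
# study's actual restaurant items).
items = {"pancakes": 450, "sandwich": 520, "pasta": 640}

def options(true_cal, spread=110):
    # Multiple-choice options built symmetrically around the truth, so that
    # their average equals the true value, mirroring the study's design.
    return [true_cal - spread, true_cal, true_cal + spread]

# A participant who averages the options recovers the true daily total...
honest = sum(sum(options(c)) / 3 for c in items.values())     # 1610.0
# ...while one who always picks the lowest option under-reports.
lowball = sum(min(options(c)) for c in items.values())        # 1280
```

The gap between the two totals is pure self-deception: nothing about the options forces a low estimate, yet always taking the flattering one biases the diary downward, much like the roughly 244-calorie daily shortfall the researchers measured.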

IQ Test Study

Another study examined intelligence self-deception using a cleverly designed IQ test given to 195 Amazon Mechanical Turk workers in multiple-choice form. Half the participants saw the correct answers highlighted after a few seconds, allowing them to cheat if they wished. The other half took the test normally.

Not only did the group with access to answers score significantly higher, but they also predicted they would perform better on a future test where cheating wouldn’t be possible. Even more telling, when offered a monetary bonus for accurate predictions of their future performance, they still maintained these inflated expectations. This suggests they truly believed their enhanced scores reflected their intelligence rather than their ability to see the answers.

Anagram Study

A third study used word scrambles to measure intelligence, presenting participants with jumbled words like “konreb” (broken) and “eoshu” (house). Some participants had to type their answers immediately, while others saw the correct answers after three minutes and were asked to self-report how many they had solved. Those who could self-report their scores claimed to have solved significantly more anagrams than those who had to prove their answers in real time.

Financial Literacy Study

The final study tackled financial literacy with an interesting twist. Before taking a financial knowledge test, some participants read the statement: “MOST Americans rate themselves highly on financial knowledge, but two-thirds of American adults CANNOT pass a basic financial literacy test.” This simple reminder of uncertainty significantly reduced cheating behavior, suggesting that when people question their capabilities in an area, they become more interested in accurate self-assessment than self-enhancement.

The Results: What It All Means

These studies revealed a consistent pattern: when people could cheat without obvious external rewards, they did—but only if they could maintain the belief that their performance reflected real ability. In the calorie-tracking study, participants entered about 244 fewer calories per day when they could choose from multiple options. In the IQ test, those who could see answers scored an average of 8.82 out of 10, compared to 5.36 for the control group.

“Participants in the cheat group engaged in diagnostic self-deception and attributed their performance to themselves,” Dommer said. “The thinking goes, ‘I’m performing well because I’m smart, not because the task allowed me to cheat.’”

Importantly, this wasn’t just about inflating numbers. Participants genuinely seemed to believe in their enhanced performance. They predicted similar high scores on future tests where cheating wouldn’t be possible, rated the assessments as legitimate measures of ability, and showed increased confidence in their capabilities afterward.

This pattern only broke down when participants’ certainty about their abilities was shaken. When reminded about widespread overconfidence in financial literacy, participants’ cheating decreased significantly, and their self-assessments became more modest.

“I don’t think there’s a good cheating or a bad cheating,” Dommer said. “I just think it’s interesting that not all cheating has to be conscious, explicit and intentional. That said, these illusory self-beliefs can still be harmful, especially when assessing your financial or physical health.”

These findings give us a new understanding of why people might fudge their step counts or peek at answers during online assessments. It’s not just about hitting arbitrary goals or earning meaningless badges—it’s about maintaining and enhancing beliefs about our capabilities, even if we have to deceive ourselves to do it.

Even this seemingly harmless form of cheating comes with consequences. When people convince themselves they’re naturally gifted rather than acknowledging their shortcuts, they might avoid seeking necessary help or purchasing beneficial products and services.

“These illusory self-beliefs can be harmful, especially when assessing your financial or physical health,” Dommer warns.

The research suggests a potential solution: “How do we stop people from engaging in diagnostic self-deception and get a more accurate representation of who they are? One way is to draw their attention to uncertainty around the trait itself. This seems to mitigate the effect,” explains Dommer.

Final Takeaway: How to Avoid Self-Deception

So what’s the big takeaway, especially if you believe you might be guilty of such behavior? While self-deception can provide temporary emotional comfort, it’s worth examining our own tendencies toward unconscious cheating.

Take note when you round down calories, peek at answers, or inflate self-assessments. The goal isn’t to eliminate these behaviors entirely — they’re deeply human — but to recognize when uncertainty about our abilities might actually serve us better than false confidence.

As Dommer’s research shows, acknowledging our limitations often leads to more accurate self-assessment and, ultimately, genuine self-improvement. Companies offering self-assessment tools might consider building in reality checks or uncertainty cues to help users maintain more accurate perceptions of their abilities. After all, real growth starts with honest self-awareness, not comfortable self-deception.

Source: https://studyfinds.org/why-smart-people-cheat/

Devoted nap-takers explain the benefits of sleeping on the job

AP Illustration/Annie Ng

They snooze in parking garages, on side streets before the afternoon school run, in nap pods rented by the hour or stretched out in bed while working from home.

People who make a habit of sleeping on the job comprise a secret society of sorts within the U.S. labor force. Inspired by famous power nappers Winston Churchill and Albert Einstein, today’s committed nap-takers often sneak in short rest breaks because they believe the practice improves their cognitive performance, even though it still carries a stigma.

Multiple studies have extolled the benefits of napping, such as enhanced memory and focus. A mid-afternoon siesta is the norm in parts of Spain and Italy. In China and Japan, nodding off is encouraged since working to the point of exhaustion is seen as a display of dedication, according to a study in the journal Sleep.

Yet it’s hard to catch a few z’s during regular business hours in the United States, where people who nap can be viewed as lazy. The federal government even bans sleeping in its buildings while at work, except in rare circumstances.

Individuals who are willing and able to challenge the status quo are becoming less hesitant to describe the payoffs of taking a dose of microsleep. Marvin Stockwell, the founder of PR firm Champion the Cause, takes short naps several times a week.

“They rejuvenate me in a way that I’m exponentially more useful and constructive and creative on the other side of a nap than I am when I’m forcing myself to gut through being tired,” Stockwell said.

The art of napping

Sleep is as important to good health as diet and exercise, but too many people don’t get enough of it, according to James Rowley, program director of the Sleep Medicine Fellowship at Rush University Medical Center.

“A lot of it has to do with electronics. It used to be TVs, but now cellphones are probably the biggest culprit. People just take them to bed with them and watch,” Rowley said.

Napping isn’t common in academia, where there’s constant pressure to publish, but University of Southern California lecturer Julianna Kirschner fits in daytime naps when she can. Kirschner studies social media, which she says is designed to deliver a dopamine rush to the brain. Viewers lose track of time on the platforms, interrupting sleep. Kirschner says she isn’t immune to this problem — hence, her occasional need to nap.

The key to effective napping is to keep the snooze sessions short, Rowley said. Short naps can be restorative and leave you more alert, he said.

“Most people don’t realize naps should be in the 15- to 20-minute range,” Rowley said. “Anything longer, and you can have problems with sleep inertia, difficulty waking up, and you’re groggy.”

Individuals who find themselves consistently relying on naps to make up for inadequate sleep should probably also examine their bedtime habits, he said.

A matter of timing

Mid-afternoon is the ideal time for a nap because it coincides with a natural circadian dip, while napping after 6 p.m. may interfere with nocturnal sleep for those who work during daylight hours, said Michael Chee, director of the Centre for Sleep and Cognition at the National University of Singapore.

“Any duration of nap, you will feel recharged. It’s a relief valve. There are clear cognitive benefits,” Chee said.

A review of napping studies suggests that 30 minutes is the optimal nap length in terms of practicality and benefits, said Ruth Leong, a research fellow at the Singapore center.

“When people nap for too long, it may not be a sustainable practice, and also, really long naps that cross the two-hour mark affect nighttime sleep,” Leong said.

Experts recommend setting an alarm for 20 to 30 minutes, which gives nappers a few minutes to fall asleep.

But even a six-minute nap can be restorative and improve learning, said Valentin Dragoi, scientific director of the Center for Neural Systems Restoration, a research and treatment facility run by Houston Methodist hospital and Rice University.

 

Neuroscience mystery solved? How our brains use experiences to make sense of time

Your brain learns patterns through your experiences to create timelines. (McCarony/Shutterstock)

Time flows as a constant stream of moments, but your brain sees patterns in this flow. Now, scientists have discovered exactly how individual neurons learn to recognize and predict these patterns, providing the first direct evidence of how our brains map out the structure of time.

The study, published in Nature, was conducted by researchers at UCLA Health. It required recording the activity of individual neurons in patients who had electrodes implanted in their brains for epilepsy treatment. These recordings offer a rare glimpse into how individual brain cells behave during learning and memory formation—something that’s impossible to observe with standard brain imaging techniques.

“Recognizing patterns from experiences over time is crucial for the human brain to form memory, predict potential future outcomes, and guide behaviors,” says Dr. Itzhak Fried, director of epilepsy surgery at UCLA Health, in a statement. “But how this process is carried out in the brain at the cellular level had remained unknown – until now.”

Prior to the main experiment, researchers needed to identify which images would trigger strong neural responses in each participant. They showed participants about 120 different pictures over 40 minutes, including images of celebrities, landmarks, and other subjects chosen partly based on each person’s interests. Based on how brain cells responded, researchers selected six specific images for each participant to use in the main experiment.

The main study had three phases. In the first phase, images appeared in random order while participants performed simple tasks, like identifying whether the person shown was male or female. During the middle phase, images appeared in sequences that followed specific rules, though participants weren’t told about these rules. Instead, they focused on a new task: determining whether each image was shown normally or in a mirror image. The final phase returned to random sequences and the original gender identification task.

The sequence rules were based on what researchers called a pyramid graph. Six points were arranged in a triangle shape, with each point representing one of the selected images. Lines connected certain points, indicating which images could appear after others. Some images were directly connected, like neighboring points on the graph. Others required taking an indirect path through multiple points to get from one to another.

What makes this study particularly fascinating is that it revealed how individual neurons adapted as participants became familiar with these sequences. At first, a neuron would respond strongly to just one specific image. But over time, these same neurons began responding to images that frequently appeared close together in the sequence, essentially mapping out the temporal relationships between different images.

The brain’s ability to encode these temporal patterns shares remarkable similarities with how it represents physical space. Previous research discovered that certain neurons act as “place cells,” firing when an animal reaches specific locations, while others function as “grid cells” that help measure distances. The new study shows the brain uses comparable mechanisms to map out sequences of events and experiences.

This research also builds on earlier discoveries about “concept cells,” neurons that respond to specific individuals, places, or objects. These specialized brain cells appear to be fundamental building blocks of memory. The new findings show how these neurons work together to create structured representations of our experiences through time.

The researchers discovered that this neural mapping created what they call a “successor representation,” a predictive map that considers not just immediate connections but likely future events. Rather than simply linking one moment to the next, your brain builds a broader model of likely future possibilities based on learned patterns.
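In reinforcement-learning terms, a successor representation weights each possible future state by how soon and how often it tends to follow the current one: M = Σ γᵗTᵗ, where T is a one-step transition matrix and γ < 1 discounts more distant futures. A minimal sketch, using a made-up three-state cycle rather than anything from the study:

```python
# Successor representation M = sum over t of gamma^t * T^t,
# computed by truncated summation in pure Python on a toy
# transition matrix (all values are illustrative).

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def successor_representation(T, gamma=0.5, horizon=50):
    n = len(T)
    # term holds gamma^t * T^t, starting at T^0 = identity
    term = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    M = [[0.0] * n for _ in range(n)]
    for _ in range(horizon):
        for i in range(n):
            for j in range(n):
                M[i][j] += term[i][j]
        term = mat_mul(term, T)            # T^t -> T^(t+1)
        for row in term:                   # fold in one factor of gamma
            for j in range(n):
                row[j] *= gamma
    return M
```

A neuron whose firing came to reflect a row of such a matrix would respond not only to one image but to the images likely to follow it, which is the kind of predictive broadening the recordings revealed.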

“This study shows us for the first time how the brain uses analogous mechanisms to represent what are seemingly very different types of information: space and time,” explains Fried. “We have demonstrated at the neuronal level how these representations of object trajectories in time are incorporated by the human hippocampal-entorhinal system.”

During breaks between testing phases, researchers observed “replay” events, moments when neurons would rapidly rehearse the learned sequences in a compressed timeframe. This neural replay happened in milliseconds, suggesting a mechanism for consolidating learned patterns into memory.

Understanding how the brain encodes temporal patterns goes beyond basic science. The findings could help develop new treatments for memory disorders and advance the design of brain-computer interfaces. They may also inform artificial intelligence systems that aim to process sequential information in ways that mirror human cognition.

Source : https://studyfinds.org/brain-experiences-sense-of-time/

9 predictions for the biggest research breakthroughs of 2025

(Photo by Nan_Got on Shutterstock)

From personalized medicine to wearable technology to hair loss innovations, this year could provide no shortage of ways for humans to live healthier
Remember when science fiction promised us flying cars and robot butlers? Well, 2025’s actual breakthroughs might not help you commute through the clouds, but they’re poised to transform something far more important: how we understand and care for our human bodies and minds. From reversing hair loss to regenerating teeth, from predicting mental health patterns to personalized genetic treatments, we’re standing on the edge of discoveries that would have seemed like science fiction just a few years ago.

We asked a panel of nine experts to provide us with their predictions for this year’s biggest research breakthroughs. If there’s one thing we can say, it’s that we’re looking forward to a world where gauging and improving our health might be easier than ever before.

What makes these predictions (or should we really call them expectations?) especially fascinating is how they’re all connected by two powerful threads: the rise of personalized medicine and the integration of artificial intelligence. Gone are the days of one-size-fits-all healthcare – whether we’re talking about stress management, dental care, or treating obesity, researchers are uncovering ways to tailor treatments to each person’s unique genetic makeup, gut microbiome, and lifestyle patterns.

But perhaps the most exciting shift isn’t just in what these breakthroughs might achieve, but in how they’re changing our entire approach to healthcare. Instead of waiting for problems to occur and then treating them, 2025’s innovations are all about prevention and early intervention. Imagine a world where your smartwatch can predict a mental health dip before you feel it, where your genes can be edited to prevent diseases before they start, or where your teeth could actually repair themselves. That world isn’t just science fiction anymore – it’s right around the corner.

Advancements in Aging and Mental Health Research

As a geriatric psychiatrist and someone deeply immersed in caregiving and aging issues, I predict 2025 will bring significant advancements in research focused on aging, mental health, and caregiver support. One of the most exciting areas is the use of AI-driven health technologies to detect and manage age-related conditions earlier. For example, wearable devices are becoming smarter at identifying early signs of cognitive decline or physical frailty. I anticipate new breakthroughs in how these tools deliver actionable insights, empowering families and caregivers to intervene before major health events occur.

Another area I’m watching closely is personalized medicine for mental health. Research into biomarkers and genetic testing is advancing quickly, and I believe we’ll soon see targeted treatments for depression, anxiety, and cognitive disorders that are more effective and have fewer side effects. This could be life-changing for older adults who struggle with medication tolerance or for caregivers managing their own stress.

Finally, I predict a surge in studies exploring the psychosocial aspects of caregiving. Researchers are diving deeper into the mental health impacts of caregiving and testing interventions — like mindfulness programs, virtual support groups, and even VR therapy — that help caregivers cope with stress and maintain their well-being. These innovations are essential as caregiving responsibilities grow more common and complex.

What excites me most is the focus on holistic approaches that integrate mental, emotional, and physical health. Whether it’s smarter tech, personalized care, or emotional resilience tools, I believe these breakthroughs will make life better for aging adults and their caregivers too — helping us all age with more grace, dignity, and support.

Mind-Body Connection and Stress Management

2025 will be the year of the mind-body connection — specifically, in understanding how chronic stress physically impacts our bodies. Eighty percent of our nervous system carries information from the body to the brain, not the other way around – yet our approach to mental health targets the mind first.

We’ve already seen at NEUROFIT that our average active user reports a 54% reduction in stress after just one week of mind-body practices — more studies will show how physical interventions can be more effective than traditional cognitive approaches for managing stress and mental health.

Measurement technology can help lead this trend. With wearables becoming more sophisticated, I anticipate studies showing how real-time biometric data can predict stress-related health issues before they become severe. Our own research analyzing millions of stress data points shows that certain physiological patterns consistently precede burnout. Given that chronic stress leads to $1T+ in healthcare expenses each year, I expect to see major studies validating these early warning signals, potentially revolutionizing preventive healthcare.

Another exciting area is the intersection of behavioral science and technology. Studies are currently exploring how brief, targeted interventions can create lasting changes in stress response patterns. We’ve found that 95% of our users experience immediate stress relief within five minutes of specific somatic exercises. I predict we’ll see research showing how short, consistent practices can rewire the nervous system more effectively than longer, sporadic interventions.

Finally, I think we’ll see breakthrough research on social connection’s role in nervous system regulation. Our data shows that prioritizing social play can improve emotional balance by up to 26%. I expect studies in 2025 will further validate how structured social interactions can significantly impact stress resilience and overall mental health outcomes.

These developments could radically change how we approach stress management and mental health care, moving from reactive treatment to proactive regulation and prevention.

Regenerative Medicine in Dentistry

As a dentist, I’m particularly excited about advancements in regenerative medicine and biomaterials for dentistry. Researchers are exploring ways to grow dental tissues or repair teeth using stem cells, which could revolutionize how we treat tooth decay and damage. Imagine being able to regenerate lost enamel or even replace a missing tooth without needing implants. These breakthroughs could lead to less invasive and more natural dental solutions for patients.

In the broader medical field, wearable technology and AI-driven diagnostics are also advancing quickly. Devices that monitor health metrics like glucose levels, heart rate, and oral health indicators in real time could become more accurate and accessible. These tools could improve preventive care by catching potential health issues early, leading to better outcomes for patients. I believe 2025 will bring us closer to more personalized and proactive healthcare.

AI-Driven Personalized Medicine

By 2025, I foresee significant advancements in AI-driven personalized medicine, wherein the integration of genomics, patient data analytics, and AI will result in much more precise and targeted treatments. There is a growing interest among researchers in developing AI-powered algorithms that could forecast disease progression based on a person’s genetic makeup, lifestyle, and environmental exposure. This would facilitate more proactive and personalized interventions, especially in chronic disease management, oncology, and neurology.

Another field that I predict breakthroughs in is the integration of AI and wearables for real-time health monitoring. Various studies are underway that test wearable technologies that gather continuous physiological data, to be analyzed by AI to spot early signs of impending heart attacks, strokes, or complications arising from diabetes, even before symptoms begin to appear. This will change healthcare from being a reactive practice to proactive care and ensure timely intervention for patients.

Finally, I foresee a rapid increase in research on regenerative medicine, specifically stem cell therapies and tissue engineering. With technological advancement may come the ability to regenerate tissues and organs damaged by trauma, disease, and conditions that, until today, could not be cured, such as heart disease, spinal cord injury, and neurodegenerative diseases. This space will no doubt interconnect with AI and machine learning to improve results and speed the effectiveness of treatments.

Genetics and Personalized Preventative Medicine

As a recruiter working in the life sciences industry, I have insider knowledge of the hiring shifts promising to transform medicine in the coming years.

Right now, it’s all about genetics. Personalized preventative medicine is what everyone wants. In other words, why treat a disease if you can avoid it? Tailored care takes into account a patient’s predispositions on a genetic level and neutralizes the threat before it manifests. It’s more possible than ever before, and I’m placing top talent in the sector daily. These candidates range from analysts looking at large data samples to patient-facing counselors focusing on a single profile, but by far, genetic therapy is the most exciting area. With CRISPR technology, we’re on the cusp of being able to rework genetic abnormalities to our advantage, instead of simply waiting for them to be expressed. This has the potential to disrupt our understanding of the entire human body.

Progress in Hair Regeneration Research

In 2025, I anticipate significant progress in hair regeneration research, particularly in stem cell therapy and gene editing technologies. These studies aim to revolutionize treatments for hair loss by targeting the root causes at the cellular level. For example, researchers are exploring ways to reactivate dormant hair follicles or create lab-grown hair that matches the individual’s natural growth patterns.

Additionally, advancements in understanding the scalp’s microbiome could lead to personalized solutions for conditions like dandruff and inflammation, which impact hair health. These breakthroughs can potentially make treatments more effective, less invasive, and tailored to the unique needs of every individual. It’s an exciting time for the field of hair health.

Personalized Nutrition Through Microbiome Research

In 2025, ongoing research on the gut microbiome will begin to have a direct influence on health. Scientists are unpacking how unique gut bacterial profiles influence everything from nutrient absorption to immune function, opening the door to personalized nutrition solutions.

Several ongoing studies aim to develop microbiome-based tools to treat chronic ailments such as obesity, IBS, and diabetes. These advances promise to enable nutrition plans tailored to an individual’s gut composition.

Wearable technology for monitoring gut health, combined with microbiome research, will allow individuals to take proactive, data-driven approaches to wellness.

Revolutionizing Obesity Treatment with GLP-1

The year 2025 could mark a significant milestone in the treatment of obesity, with continued advances in GLP-1 receptor agonist drugs. These drugs, Ozempic in particular, are being investigated for benefits beyond weight loss, including improving patients’ metabolic profiles and reducing the likelihood of chronic diseases such as diabetes and cardiovascular disease.

Research is not confined to obesity and weight management; it also examines the effects of GLP-1 on the brain, appetite control, and inflammatory cytokines. Early findings suggest GLP-1 may help prevent neurodegenerative diseases and is associated with enhanced cognitive performance.

AI-based personalized medicine is expected to enhance these developments. By drawing on genetic and metabolic data, clinicians may be able to determine the best course of treatment for each patient, achieving better outcomes.

Source : https://studyfinds.org/9-predictions-biggest-research-breakthroughs-202/

Teens spend 90+ minutes on their phones during typical school day

(Photo by BearFotos on Shutterstock)

As schools nationwide grapple with smartphone policies, new research provides unprecedented and shocking insight into how teenagers use their phones during school hours. Using sophisticated tracking technology, researchers discovered that students spend an average of 92 minutes on their smartphones during a typical school day, with a quarter of students exceeding 2 hours of use.

Moving beyond simple screen time measurements, researchers deployed passive sensing technology to paint a detailed picture of how and when adolescents use their phones during the school day. Their findings raise important questions about the role of smartphones in modern education and their potential impact on learning.

Research led by Dr. Dimitri A. Christakis at Seattle Children’s Research Institute found that this school-day phone use accounts for approximately 27% of students’ total daily phone usage, which averages 5.59 hours. More revealing than the raw numbers is how students spend their phone time during school hours.

Social media and messaging dominate school-hour phone use, with Instagram leading social platforms. Instagram users in the study spent an average of about 25 minutes on the platform during school hours alone. Messaging and chat applications averaged 19.5 minutes of use during school hours, while video streaming services claimed about 17 minutes.

Looking at demographic patterns, older teens (ages 16-18) logged significantly more phone time during school hours compared to younger teens (ages 13-15), spending about 33 more minutes on their devices. Female students showed higher usage rates than male students, using their phones approximately 29 minutes more during school hours.

Parental attempts to limit screen time appeared to have little impact on school-hour phone use. Students with parental limits on screen time showed similar usage patterns to those without restrictions, suggesting that school-based interventions might be more effective than home-based rules.

Educational background of parents emerged as a significant factor. Students whose parents held bachelor’s degrees spent about 32 minutes less time on their phones during school hours compared to peers whose parents did not have college degrees. This correlation raises important questions about the role of family educational culture in shaping student technology habits.

The study, published in JAMA Pediatrics, also revealed interesting patterns among different demographic groups. Hispanic students showed significantly higher social media use during school hours compared to their white peers, spending about 25 more minutes on social platforms. Meanwhile, students identifying as LGBTQIA+ showed similar usage patterns to their non-LGBTQIA+ peers, with no statistically significant differences in overall phone use.

While smartphones offer potential benefits for learning and communication, these findings suggest their primary use during school hours may be misaligned with educational goals. More schools are expected to implement phone restrictions in the coming years, with research like this providing valuable data to inform those policy decisions.

Source : https://studyfinds.org/teens-spend-90-minutes-phones-during-school/

From A to Zzzs: The science behind a better night’s sleep

It’s no secret that a good night’s sleep plays a vital role in mental and physical health and well-being. The way you feel during your waking hours depends greatly on how you are sleeping, say sleep experts.

A pattern of getting inadequate or unsatisfying sleep over time can raise the risk for chronic health problems and can affect how well we think, react, work, learn and get along with others.

According to the National Heart, Lung and Blood Institute, an estimated 50 to 70 million Americans have sleep disorders, and one in three adults does not regularly get the recommended amount of uninterrupted sleep needed to protect their health.

Many factors play a role in preparing the body to fall asleep and wake up, according to the National Institutes of Health. Our internal “body clock” manages the sleep and waking cycles and runs on a 24-hour repeating rhythm, called the circadian rhythm. This rhythm is controlled both by the amount of a sleep-inducing compound called adenosine in our system and by cues in our environment, such as light and darkness. This is why sleep experts suggest keeping your bedroom dark during your preferred sleeping hours.

Sleep is also controlled by two main hormones, melatonin and cortisol, which our bodies release in a daily rhythm that is controlled by the body clock.

Exposure to bright artificial light—such as from television, computer and phone screens—late in the evening can disrupt this process, making it hard to fall asleep, explained Sanjay Patel, director of the UPMC Comprehensive Sleep Disorders Clinical Program and a professor of medicine and epidemiology at the University of Pittsburgh.

Keeping our body clock and hormone levels more-or-less regulated is the best way to consistently achieve good sleep, Patel said. He encouraged people with sleeping struggles to focus more on behavioral changes than on quick fixes, such as over-the-counter sleep supplements like melatonin or upping alcohol intake to feel drowsy.

Patel said there’s not much clinical evidence that melatonin supplements work very well, and that “a lot of the clinical trials of melatonin haven’t shown consistent evidence that it helps with insomnia.”

He did point out that the supplement isn’t particularly harmful either, except when “people start increasing and increasing the dose. And in particular, we worry about the high doses that a lot of children are being given by their parents, where it really can cause problems,” he said. Taking any more than three to five milligrams doesn’t increase the sedative effects, “and yet, we see people showing up to clinic all the time taking 20 milligrams.”

Sleeping potions

Many have suggested that warm milk, chamomile tea or tart cherry juice can induce a somniferous effect. While Patel said there’s no evidence they work, he did point out that they’re preferable to a nightcap.

“Alcohol is really bad for your sleep long term, for a number of reasons,” Patel said. First, alcohol can relax the throat muscles and can make sleep apnea and snoring worse for sufferers. Secondly, the body metabolizes alcohol rather quickly so its sedation effects do not last throughout the night.

“So while it may put you to sleep, what happens is, three or four hours later, the alcohol has been metabolized, and now you will wake up from not having alcohol in your system,” he said.

Evening libations can also increase acid reflux and long-term drinking can cause “changes in your brain chemistry and is a big cause of insomnia,” he said. Heavy drinkers who suffer from insomnia will often increase their intake of alcohol in an effort to fall asleep, thus creating a dangerous cycle that could lead to alcohol use disorder.

Cannabis is not much better, Patel said.

While a handful of pot users—specifically those who use it to treat anxiety—may see some sleep benefits, for the most part cannabis does not help chronic insomnia and may even make it worse.

“They actually see a lot of people whose sleep gets better when they stop using (cannabis),” Patel said.

Instead of turning to sleep aids—natural or otherwise—Patel said developing a bedtime routine that promotes relaxation and unwinding is a much better route to a good night’s rest.

Whether it’s taking a hot bath, reading a book, meditating or even tuning into the nightly news, the brain will associate an oft-repeated bedtime ritual with the relaxation required to fall asleep, he explained.

You can watch television, but stay off social media, he said. “The algorithms on social media are designed to keep us engaged and end up contributing to people not closing their eyes until much later than they planned.”

Other common reasons that sleep can be unsatisfying or elusive are stress, worry and the simple fact that many people don’t give themselves enough time for rest.

“We see all the time that people plan to go to bed at a certain time, but then once they get into bed, they do other things and keep their mind active,” such as responding to emails, paying bills or scrolling on social platforms.

Aging influence

The rhythm and timing of the body clock changes with age, Patel said.

People need more sleep early in life when they’re growing and developing. For example, newborns may sleep more than 16 hours a day, and preschool-age children need to take naps.

In the teen years, the internal clock shifts so that teens fall asleep later in the night but then want to sleep in late. This is troublesome because “they need to be up for school at 6:30 a.m. and so that’s causing lots of problems,” Patel said.

Some school districts in the region, including Pittsburgh Public in 2023, have shifted to later start times with this in mind.

For adults, sleep during middle age can be tricky with young children in the home who disrupt parents’ sleeping patterns. This is also a time of life when stress and worry are heightened, he said.

Older adults tend to go to bed earlier and wake up earlier, but they’ve got their own unique challenges, Patel said.

“A lot of physical problems mean that people are often waking up more in the night as they age. They have to get up to go to the bathroom. They have chronic aches and pains that wake them up. They’re often taking medications that … have side effects that affect your sleep,” he said.

Source : https://medicalxpress.com/news/2025-02-zzzs-science-night.html


Vacation days are the key to well-being? Study explains important link

(© Monkey Business – stock.adobe.com)

If you’re like many Americans, you probably didn’t take all your vacation time this past year. Even if you did, chances are you didn’t fully unplug while away from the office. But according to new research from the University of Georgia, those vacation days aren’t just a nice perk—they’re crucial for your well-being.

The research, published in the Journal of Applied Psychology, analyzed 32 different studies across nine countries. Researchers discovered something surprising: vacation benefits last much longer than previously believed. While we’ve long known that vacations can improve well-being, this comprehensive review found these positive effects persist well after returning to work, challenging earlier beliefs that vacation benefits quickly disappear.

“We think working more is better, but we actually perform better by taking care of ourselves,” explains lead author Ryan Grant, a doctoral student in psychology at UGA’s Franklin College of Arts and Sciences, in a statement. “We need to break up these intense periods of work with intense periods of rest and recuperation.”

The catch? How you spend your vacation matters significantly. The research team found that truly disconnecting from work produced the greatest benefits. This means avoiding work emails, skipping those “quick check-ins” with the office, and genuinely allowing yourself to mentally detach from workplace responsibilities.

“If you’re not at work but you’re thinking about work on vacation, you might as well be at the office,” says Grant. “Vacations are one of the few opportunities we get to fully just disconnect from work.”

Physical activity emerged as another key factor in maximizing vacation benefits. But don’t worry, this doesn’t mean you need to run marathons during your beach trip.

“Basically anything that gets your heart rate up is a good option,” explains Grant. “Plus, a lot of physical activities you’re doing on vacation, like snorkeling, for example, are physical. So they’re giving you the physiological and mental health benefits. But they’re also unique opportunities for these really positive experiences that you probably don’t get in your everyday life.”

The length of your vacation also plays a crucial role. The study found that longer vacations generally led to greater improvements in well-being, though these effects also tended to decline more quickly upon return. The researchers recommend building in buffer days both before and after your trip. Taking time to pack and prepare reduces pre-vacation stress while having a day or two to readjust after returning can ease the transition back to work life.

Cultural differences revealed interesting patterns, too. In countries where work achievement and success are highly valued, people experience more dramatic benefits from vacation time, likely because they really need the break. However, they also show steeper declines in well-being when returning to work. Workers in countries with more mandatory vacation days tended to get more out of their time off, possibly because taking vacations is more normalized and accepted.

These findings arrive at a critical moment, as vacation usage has declined in recent decades. In 2018 alone, American workers left 768 million vacation days unused, surrendering approximately $65 billion in benefits. This trend persists despite mounting evidence that prolonged work without adequate breaks can lead to burnout, anxiety, depression, and even physical health problems.

Maybe we should all rethink how we view vacations. Rather than seeing them as optional luxuries, we should recognize them as essential tools for maintaining well-being and long-term productivity. Whether it’s a two-week adventure or a long weekend getaway, the key is to fully disconnect and engage in activities that provide both physical and mental benefits.

Source : https://studyfinds.org/vacation-days-long-term-health/

The secret to career success? It might be hidden in your free time

(© Drobot Dean – stock.adobe.com)

In an age of endless productivity hacks and work-life balance tips, new research offers a refreshing perspective: what if you could advance your career while actually enjoying your leisure time? A study suggests this elusive goal might be more achievable than previously thought, introducing a concept called “leisure-work synergizing” that could revolutionize how we think about professional development.

Conventional wisdom has long suggested that work and leisure should remain separate. Clock out, go home, and leave work behind. But researchers Kate Zipay from Purdue University and Jessica Rodell from the University of Georgia have uncovered evidence that thoughtfully blending certain work-related elements into leisure activities might actually enhance both professional growth and personal enjoyment.

The concept, published in Organization Science, goes beyond simply answering emails after hours or catching up on work during weekends.

“We found that employees who intentionally integrate professional growth into their free time – like listening to leadership podcasts, watching TED Talks or reading engaging business books – report feeling more confident, motivated and capable at work,” explains Zipay. This innovative approach allows people to develop professionally without sacrificing the fundamental pleasure of leisure time.

The Science Behind the Strategy

The research team tracked 89 professionals over five weeks, examining how their leisure choices influenced their work performance and emotional state. Participants completed surveys about their activities and experiences during evenings and weekends, followed by assessments of their workplace mindset and performance the next day.

What emerged was a clear pattern: when people engaged in leisure activities that had some connection to professional growth, they reported significantly higher levels of self-assurance, feeling more confident and capable at work. This boost in confidence translated into better overall workplace performance and satisfaction.

However, the research revealed an important caveat: personality matters. Not everyone benefits equally from blending work and leisure. The study identified two distinct types of people: “integrators” who naturally prefer fluid boundaries between work and personal life, and “segmenters” who thrive on keeping these domains separate.

“Employees who prefer a clear separation between work and personal life might struggle with this approach,” notes Zipay, “highlighting the importance of tailoring the practice to individual preferences.”

For integrators, leisure-work synergizing proved particularly beneficial, actually reducing fatigue rather than adding to it. Meanwhile, segmenters showed less positive results from the practice, suggesting that forcing this approach when it doesn’t align with personal preferences could be counterproductive.

‘Done right, it’s a game-changer’

This research arrives at a crucial moment when traditional boundaries between work and personal life continue to blur, especially in the wake of remote work trends. Rather than fighting against this evolution, the study suggests we might benefit from being more strategic about it.

“This isn’t about making your free time feel like work,” emphasizes Zipay. “It’s about leveraging activities you already love in a way that fuels your professional growth. Done right, it’s a game-changer for employees and employers alike.”

Look for those natural overlaps where professional growth can occur alongside genuine enjoyment. For instance, the explosive growth of platforms like MasterClass and the surging popularity of business and personal development podcasts suggest many people already naturally gravitate toward this kind of enriching leisure activity.

For organizations and employees alike, these findings open up new possibilities for professional development. Instead of relying solely on traditional training programs or expecting employees to sacrifice personal time for growth, companies might benefit from supporting more flexible and integrated approaches to skill development.

Rather than choosing between career advancement and personal enjoyment, careful integration of the two might offer the best of both worlds, proving that sometimes you really can have your cake and eat it too.

Source: https://studyfinds.org/secret-to-career-success-free-time/

Why being a ‘bingo night’ regular could buy your brain an extra 5 years

(© Monkey Business – stock.adobe.com)

Going out to restaurants, playing bingo, visiting friends, or attending religious services could give you extra years of healthy brain function, according to new research from Rush University Medical Center. Their study found that older adults who stayed socially active typically developed dementia five years later than those who were less social. It’s a difference that could both extend life and save hundreds of thousands of dollars in healthcare costs.

“This study shows that social activity is related to less cognitive decline in older adults,” said Bryan James, PhD, associate professor of internal medicine at Rush, in a statement. “The least socially active older adults developed dementia an average of five years before the most socially active.”

The research team followed 1,923 older adults who were initially dementia-free, checking in with them yearly to track their social activities and cognitive health. They looked at six everyday social activities: dining out, attending sporting events or playing bingo, taking trips, doing volunteer work, visiting relatives or friends, participating in groups, and attending religious services.

Over nearly seven years of follow-up, 545 participants developed dementia, while 695 developed mild cognitive impairment (MCI), which often precedes dementia. After accounting for factors like age, education, gender, and marital status, the researchers found that each one-point increase in social activity score was linked to a 38% lower chance of developing dementia.

Being social seems to help the brain in several ways. When we engage socially, we exercise the parts of our brain involved in memory and thinking. “Social activity challenges older adults to participate in complex interpersonal exchanges, which could promote or maintain efficient neural networks in a case of ‘use it or lose it,’” explains James.

The benefits of social activity appear to work independently of other social factors, like how many friends someone has or how supported they feel. This suggests that simply getting out and doing things with others could be more important than the size of your social circle.

The research takes on new urgency following the COVID-19 pandemic, which left many older adults isolated. The findings suggest that communities might benefit from creating more opportunities for older adults to engage socially, whether through organized activities, volunteer programs, or regular social gatherings.

Source: https://studyfinds.org/social-seniors-five-years-dementia/

The bitter truth: Science reveals why coffee tastes different to everyone

What affects coffee’s bitterness more: roasting techniques or your genetic predisposition? (Photo by Mix and Match Studio on Shutterstock)

Next time you take a sip of coffee and scrunch your nose at its bitter taste, your DNA might be to blame. New research from scientists in Germany has uncovered fascinating insights into why Arabica coffee’s signature bitterness varies from person to person, and it’s not just about how dark the roast is.

The study, published in Food Chemistry, was conducted at the Technical University of Munich’s Leibniz Institute for Food Systems Biology. The researchers identified a new group of bitter compounds formed during coffee roasting.

“Indeed, previous studies have identified various compound classes that contribute to bitterness. During my doctoral thesis, I have now identified and thoroughly analyzed another class of previously unknown roasting substances,” says study author Coline Bichlmaier, a doctoral student, in a statement.

While caffeine has long been known as coffee’s primary bitter component, even decaffeinated coffee tastes bitter, indicating other compounds are at work. At the heart of this bitter business is a compound called mozambioside, found naturally in raw coffee beans. It’s about ten times more bitter than caffeine and particularly abundant in naturally caffeine-free coffee varieties. However, this may not be at the root of that bitter taste.

“Our investigations showed that the concentration of mozambioside decreases significantly during roasting so that it only makes a small contribution to the bitterness of coffee,” says principal investigator Roman Lang.

Through detailed chemical analysis, researchers tracked mozambioside as coffee beans roasted. They found it breaks down into seven specific compounds, each contributing its own bitter properties. Using ultra-high-performance liquid chromatography and mass spectrometry, essentially very precise chemical detection methods, they measured exactly how much of each compound forms during roasting and transfers into your cup.

When studying Colombian Arabica coffee specifically, they found that not everyone experiences these bitter compounds the same way. A specific gene called TAS2R43, which codes for one of our approximately 25 bitter taste receptors, plays a crucial role. About 20% of Europeans have a deletion in this gene, meaning they’re missing that particular bitter taste receptor entirely.

In standardized taste tests with 11 volunteers, researchers analyzed each participant’s DNA using saliva samples to determine their TAS2R43 gene status. Their genetic test revealed that two participants had both copies of the TAS2R43 gene variant defective, seven had one intact and one defective variant, and only two people had both copies fully intact.

The results revealed striking differences in bitter perception based on genetics. When combining mozambioside with its roasting products in a sample, eight out of eleven test subjects perceived a bitter taste, one found it astringent, and two didn’t notice any particular taste.

During roasting experiments at different temperatures, researchers discovered that some bitter compounds peaked at 240°C, while others continued increasing up to 260°C. These findings join our existing knowledge about other bitter-tasting substances formed during roasting, including compounds called caffeoylquinides (from chlorogenic acids), diketopiperazines (from coffee proteins), and oligomers of 4-vinylcatechols (from caffeic acids).

Bitter taste receptors aren’t only found in our mouths. They exist throughout the body in various organs and tissues. Studies indicate they help fight pathogens in our respiratory tract, assist with defense mechanisms in our intestines and blood cells, and may play a role in metabolism regulation.

“The new findings deepen our understanding of how the roasting process influences the flavor of coffee and open up new possibilities for developing coffee varieties with coordinated flavor profiles,” says Lang. “They are also an important milestone in flavor research, but also in health research. Bitter substances and their receptors have further physiological functions in the body, most of which are still unknown.”

With global production reaching 102.2 million 60-kilo bags of Arabica coffee in 2023/24, understanding these bitter compounds and how people perceive them matters enormously. For coffee lovers and producers alike, this research provides scientific validation for something many have long suspected: we really do experience coffee differently from one another, and it’s written in our genes.

Source: https://studyfinds.org/why-coffee-tastes-different-to-everyone/

Mice created from two biological fathers are first to live into adulthood

Lab mouse unrelated to study. (© filin174 – stock.adobe.com)

The idea of same-sex biological reproduction in mammals has long been thought impossible, like trying to build a house with only half the blueprint. But researchers in China have achieved what many believed couldn’t be done: they’ve created viable mice that lived until adulthood using genetic material from two fathers, unlocking new possibilities in reproductive science.

This landmark achievement, published in Cell Stem Cell, represents a significant advance in reproductive biology and opens new possibilities in regenerative medicine.

The team led by researchers at the Chinese Academy of Sciences (CAS) in Beijing successfully modified specific genetic regions in mouse embryonic stem cells to overcome what scientists have long considered a fundamental barrier to same-sex reproduction in mammals. Previous attempts to create bi-paternal mice had failed, with embryos stalling at early developmental stages. However, this new approach, targeting 20 key genetic locations, enabled the first-ever successful development of bi-paternal mice to adulthood.

“The unique characteristics of imprinting genes have led scientists to believe that they are a fundamental barrier to unisexual reproduction in mammals,” explains Qi Zhou, a co-corresponding author from CAS. “Even when constructing bi-maternal or bi-paternal embryos artificially, they fail to develop properly, and they stall at some point during development due to these genes.”

Previous scientists tried a different strategy to create mice with two fathers. They first attempted to create egg cells in the lab using special cells from male mice that can transform into any type of cell, like blank building blocks that can become whatever the body needs. The idea was to then fertilize these lab-created eggs with sperm from another male mouse. However, this approach didn’t work because when genetic material from two males was combined in this way, it created problems in how genes functioned.

The new research team took a completely different approach. Instead of trying to create eggs, they focused on carefully editing specific parts of the genetic code. They used various techniques to change, remove, or adjust specific genes that control how genetic material from parents normally works together. This new method not only allowed them to successfully create mice with two fathers, but it also resulted in especially versatile stem cells.

“This work will help to address a number of limitations in stem cell and regenerative medicine research,” says Wei Li, the study’s corresponding author from CAS. The implications extend beyond reproductive biology, potentially advancing our understanding of cellular development and regenerative medicine applications.

One of the most fascinating aspects of this research was the creation of functional placentas from bi-paternal embryos. The placenta, an organ crucial for mammalian development, typically requires precise genetic contributions from both parents. The team’s success in creating functional bi-paternal placentas represents a significant breakthrough in understanding reproductive biology.

The surviving bi-paternal mice showed intriguing characteristics. They grew faster than normal mice and displayed reduced anxiety-like behaviors in behavioral tests. However, they also had shorter lifespans, living only about 60% as long as typical mice. These differences provide valuable insights into how parental genes influence development and aging.

“These findings provide strong evidence that imprinting abnormalities are the main barrier to mammalian unisexual reproduction,” notes Guan-Zheng Luo of Sun Yat-sen University in Guangzhou. “This approach can significantly improve the developmental outcomes of embryonic stem cells and cloned animals, paving a promising path for the advancement of regenerative medicine.”

The research team plans to extend their experimental approaches to larger animals, including monkeys, though they acknowledge this will require considerable effort due to different imprinting gene combinations across species.

The potential application to human medicine remains unclear, particularly given current ethical guidelines. The International Society for Stem Cell Research explicitly prohibits heritable genome editing for reproductive purposes and the use of human stem cell-derived gametes for reproduction, citing safety concerns.

Source: https://studyfinds.org/scientists-create-healthy-mice-with-two-biological-fathers/

Gen Z Are More Anxious Than Any Other Generation

Gen Z students are experiencing poor mental health and a lack of hope for the future. (To be honest, I think most generations are).

According to professors who teach Gen Zers, the generation appears even more anxious than their Millennial counterparts and has completely lost hope in the American Dream. Gen Z also reports the poorest mental health of any generation, and only 44 percent of Gen Zers say they feel prepared for the future.

“The biggest change that I’ve seen is they have this fear of failure or making the wrong decision, and I think it’s because they just don’t want to go through more mental anguish,” Matt Prince, an adjunct professor at Chapman University, told Fortune.

Prince added that Gen Z seems to have a “huge weight on their shoulders.”

Millennials have a long-held reputation for being lazy complainers who spend too much money on avocado toast. For a while, that’s why we couldn’t afford homes… it had nothing to do with the insane housing market, of course. Gen Zers tend to wear similar labels, receiving criticisms for things like being “chronically online” and “easily offended.”

Gen Z doesn’t believe in the American Dream

But let’s be honest: all that time spent on social media (and from such a young age) can’t be healthy for your brain.

Alyssa Mancao, a Los Angeles therapist with a Gen Z client base, told Axios that because this generation grew up with social media, they tend to compare themselves more to others, which oftentimes naturally leads to feelings of inadequacy.

Plus, given the state of the world right now, I empathize with Gen Zers who are trying to make a name for themselves in this economy.

It’s also not shocking that so many Gen Zers are losing faith in the American Dream. I mean, it’s hard to imagine living a comfortable life filled with love, family, and freedom when many of us work long hours and still can’t afford groceries or a home.

“I think there is just an overarching fear of failure or making mistakes or making the wrong turn in their career trajectory that would emotionally or physically set them back years,” Prince told Fortune. “And so I think that anguish is just an anchor that’s holding them back.”

Another California-based therapist, Erica Basso, pointed out that there’s a ton of uncertainty plaguing Gen Zers today.

Source: https://www.vice.com/en/article/gen-z-are-more-anxious-than-any-other-generation/

Beauty bias? Attractive people land better jobs, higher salaries

(© deagreez – stock.adobe.com)

Think your next promotion depends purely on your skills and experience? A recent study suggests your appearance might matter more than you’d expect. Research looking at over 43,000 business school graduates found that attractive professionals earn thousands more each year than their equally qualified colleagues — and this advantage grows stronger over time.

The study, conducted by researchers at the University of Southern California and Carnegie Mellon University, tracked MBA graduates for 15 years after they left business school. What they discovered was eye-opening: People rated as attractive were 52.4% more likely to land prestigious positions, leading to an average bump in salary of $2,508 per year. For the most attractive individuals — those in the top 10% — that yearly advantage jumped to over $5,500.

This advantage, which researchers call the “attractiveness premium,” shows up across different industries but not always in the same way. Fields involving lots of face-time with clients and colleagues, like management and consulting, showed the biggest benefits for attractive individuals. Meanwhile, technical roles like IT and engineering, where work often happens behind the scenes, showed much smaller effects.

This disparity may explain why attractive professionals tend to gravitate away from technical fields and toward management positions, a phenomenon the researchers termed “horizontal sorting.”

Even more remarkable was the “extreme attractiveness premium.” Individuals in the top 10% of the attractiveness scale enjoyed an 11% advantage in career outcomes compared to those in the bottom 10%.

What makes these findings particularly noteworthy is that the benefits of being attractive don’t fade over time, even after people have proven their abilities. Each year, attractive professionals gained a small but consistent advantage over their peers, which added up significantly over the course of their careers. For perspective, the salary difference linked to attractiveness was about one-third the size of the gender pay gap among the same group of graduates.
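To put that “small but consistent” advantage in perspective, here is a quick back-of-the-envelope calculation. It assumes, purely for illustration, a flat yearly premium at the averages reported above ($2,508 overall, $5,500 for the top decile) over the study’s 15-year window:

```python
# Back-of-the-envelope illustration (not from the study itself): how a
# modest yearly salary premium accumulates over the 15 years the
# researchers tracked. A flat premium is an assumption for simplicity.
AVERAGE_PREMIUM = 2508     # reported average yearly advantage
TOP_DECILE_PREMIUM = 5500  # reported yearly advantage for the top 10%
YEARS = 15

print(f"Average cumulative premium:    ${AVERAGE_PREMIUM * YEARS:,}")
print(f"Top-decile cumulative premium: ${TOP_DECILE_PREMIUM * YEARS:,}")
# → Average cumulative premium:    $37,620
# → Top-decile cumulative premium: $82,500
```

Even without raises or compounding, the gap runs well into five figures over a career’s first 15 years.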

“This study shows how appearance shapes not just the start of a career, but its trajectory over decades,” explains Nikhil Malik, who led the study at USC, in a statement. “These findings reveal a persistent and compounding effect of beauty in professional settings.”

To reach these conclusions, the researchers used advanced computer programs to analyze professional profile pictures and career progression data. Unlike previous studies that only looked at short-term effects or specific jobs, this research followed real careers across many industries and positions over 15 years.

The attractiveness premium was particularly evident among graduates from top-tier MBA programs, where competition for advancement is especially intense. In these high-stakes environments, where candidates already possess strong qualifications, appearance appeared to play a notable role in determining who reached senior leadership positions.

“It’s a stark reminder that success is influenced not just by skills and qualifications but also by societal perceptions of beauty,” observes Kannan Srinivasan, another researcher from Carnegie Mellon University.

The findings raise important questions about fairness in the workplace. While many companies now offer training to address unconscious bias related to gender and race, appearance-based advantages may be harder to tackle. These biases often operate through subtle social preferences rather than obvious discrimination.

“This research underscores how biases tied to physical appearance persist in shaping career outcomes, even for highly educated professionals,” notes Param Vir Singh, one of the study’s co-authors from Carnegie Mellon University.

Creating more equitable workplaces has been a top priority for corporations in recent years, yet these findings suggest that appearance-based advantages may require new approaches to workplace policy and practice. The persistent nature of the attractiveness premium indicates that simple awareness or training programs may be insufficient to address this form of bias.

Source: https://studyfinds.org/beauty-bias-attractive-people-better-jobs-salary/

Aging ‘hot spot’: Where the brain first starts showing signs of getting older

(© vegefox.com – stock.adobe.com)

What if we could pinpoint exactly where aging begins in the brain? Scientists at the Allen Institute have done just that, creating the first detailed cellular atlas of brain aging by analyzing millions of individual cells and identifying key regions where age-related changes first emerge.

The brain is like a massive city with thousands of different neighborhoods, each populated by unique types of cells performing specific jobs. Until now, researchers haven’t had a detailed “census” showing how each neighborhood changes as the city ages. This study, published in Nature, provides exactly that, examining cells from young adult mice (2 months old) and aged mice (18 months old). While mice age differently than humans, this comparison roughly mirrors the differences between young adult and older adult human brains.

Researchers analyzed 16 different brain regions, covering about 35% of the mouse brain’s total volume. They identified 847 distinct types of cells and discovered that certain cell populations, particularly support cells called glia, were especially sensitive to aging. They found significant changes around the third ventricle in the hypothalamus, which is the brain’s master control center that regulates essential functions like hunger, body temperature, sleep, and hormone production.

As the brain ages, it shows increased immune activity across various cell types. The researchers observed this, particularly in microglia, which are specialized cells that act as the brain’s maintenance and immune defense system. They also found this in border-associated macrophages, another type of immune cell. These cells showed signs of increased inflammatory activity in aged mice, suggesting they were working harder to maintain brain health.

The research team discovered fascinating changes in specialized cells called tanycytes and ependymal cells that line fluid-filled chambers in the brain, particularly around the third ventricle.

“Our hypothesis is that those cell types are getting less efficient at integrating signals from our environment or from things that we’re consuming,” says lead author Kelly Jin, Ph.D., in a statement. This inefficiency might contribute to broader aging effects throughout the body.

The study revealed changes in cells that produce myelin, the crucial insulating material around nerve fibers. Like the protective coating around electrical wires, myelin helps neurons communicate effectively. The researchers found that aging affects these insulator-producing cells, which could impact how well brain circuits function.

Most intriguingly, the researchers identified specific groups of neurons in the hypothalamus that showed dramatic changes with age. These neurons, which help control appetite, metabolism, and energy use throughout the body, showed signs of both decreased function and increased immune activity. This finding aligns with previous research suggesting that dietary factors, like intermittent fasting or calorie restriction, might influence lifespan.

“Aging is the most important risk factor for Alzheimer’s disease and many other devastating brain disorders. These results provide a highly detailed map for which brain cells may be most affected by aging,” says Dr. Richard J. Hodes, director of NIH’s National Institute on Aging.

While this research was conducted in mice, the findings provide a crucial roadmap for understanding human brain aging. The identification of specific vulnerable cell types and regions gives scientists clear targets for future development of therapies to maintain brain health throughout life.

Source: https://studyfinds.org/where-your-brain-first-starts-aging/

Smartphone use leads to hallucinations, detachment from reality, aggression in teens as young as 13: Study

Smartphones are making teenagers more aggressive and detached from reality, and are even causing them to hallucinate, according to new research.

After surveying 10,500 teens aged 13 to 17 in both the US and India for the study, scientists at Sapien Labs concluded that the younger a person starts using a phone, the more likely they are to be crippled by a whole host of psychological ills.

“People don’t fully appreciate that hyper-real and hyper-immersive screen experiences can blur reality at key stages of development,” addiction psychologist Dr. Nicholas Kardaras, who was not part of the team who did the study, told The Post.

More than a third of 13-year-olds surveyed said they feel aggression, while a fifth experience hallucinations, the survey by Sapien Labs showed.

“Their digital world can compromise their ability to distinguish between what’s real and what’s not. A hallucination by any other name.”

“Screen time essentially acts as a toxin that stunts both brain development and social development,” Kardaras explained. “The younger a kid is when given a device, the higher the likelihood of mental health issues later on.”

The teens surveyed for “The Youth Mind: Rising Aggression and Anger” were significantly worse off than older Gen Zers in Sapien Labs’ database, and the youngest respondents were the most likely to suffer aggression, anger and hallucinations compared to their older counterparts.

A staggering 37% of 13-year-olds reported experiencing aggression, compared with 27% of 17-year-olds.

Frighteningly, 20% of 13-year-olds say they suffer from hallucinations, compared to 12% of 17-year-olds.

“Whereas today’s 17-year-olds typically got a phone at age 11 or 12, today’s 13-year-olds got their phones at age 10,” the report noted.

Respondents also reported being a danger to themselves: 42% of American girls and 27% of boys aged 13 to 17 admitted to struggling with suicidal thoughts.

The majority of teens polled said they had feelings of hopelessness, guilt, anxiety, and unwanted strange thoughts. More than 40% reported a sense of detachment from reality, mood swings, withdrawal, and traumatic flashbacks.

The researchers also warned phones are making kids withdraw from society.

“Once you have a phone, you spend a lot less time with in-person interaction, and the less you have in-person interaction, the less integrated you are into the real social fabric,” Sapien Labs chief scientist Tara Thiagarajan told The Post.

“You’re no longer connected in the way humans have been wired for hundreds of thousands of years.”

Kardaras, author of “Glow Kids”, also wasn’t surprised aggression was associated with phone use.

He runs Omega Recovery tech addiction recovery center in Austin, where teens are often admitted after violently attacking their parents for taking their phones away.

Kids around the country have also been assaulting their teachers at school after having their devices confiscated, with one Tennessee teacher even pepper-sprayed by a female student after he took her cell phone.

The CDC also warned in 2023 that teen girls are at risk of increased violence, often at the hands of one another. Sapien Labs likewise flagged that the uptick in aggression is disproportionately occurring among females, according to their research.

“There’s a fairly rapid rise now in kids experiencing actual violence in school, and kids are fearing for their safety,” Thiagarajan said. “That is something that everyone should sit up and take note of.”

She pointed to a December school shooting in Wisconsin that, anomalously, was carried out by a teen girl. It had been 45 years since a female juvenile perpetrated a school shooting.

That shooter, Natalie “Samantha” Rupnow, 15, was known to have spent a great deal of her life online and had exhibited extremist views on the internet, but authorities are still looking for a motive for her shooting, after which she turned her gun on herself.

Source: https://nypost.com/2025/01/23/lifestyle/smartphone-use-leads-to-hallucinations-aggression-in-teens-study/

Adults with ADHD die 7 to 9 years sooner, alarming study shows

(© Rainer Hendla | Dreamstime.com)

Seven years. That’s how much sooner men with ADHD are dying compared to their neurotypical peers, and for women, the outlook is even bleaker at nearly nine years. These sobering numbers emerge from a new study examining life expectancy in adults with ADHD, painting a picture far more serious than the familiar narrative of forgotten appointments and misplaced keys.

The research, published in The British Journal of Psychiatry, analyzed data from nearly 10 million people across UK general practices, identifying over 30,000 adults with diagnosed ADHD. This represents just one in nine of the estimated ADHD population, as most cases remain undiagnosed.

“It is deeply concerning that some adults with diagnosed ADHD are living shorter lives than they should,” says Professor Josh Stott, senior author from University College London Psychology & Language Sciences, in a statement. “People with ADHD have many strengths and can thrive with the right support and treatment. However, they often lack support and are more likely to experience stressful life events and social exclusion, negatively impacting their health and self-esteem.”

Living with ADHD extends far beyond difficulties with focus and organization. People with the condition often experience differences in how they focus their attention. While they may possess high energy and an ability to focus intensely on their interests, they frequently struggle with mundane tasks. This can lead to challenges with impulsiveness, restlessness, and differences in planning and time management, potentially impacting success at school and work.

The study revealed that adults with ADHD were more likely to develop physical health conditions like diabetes, heart disease, chronic respiratory problems, and epilepsy. Mental health challenges were particularly prevalent: anxiety, depression, and self-harm occurred at notably higher rates than in the general population.

Treatment access remains a critical issue. A national survey found that while a third of adults with ADHD traits received medication or counseling for mental health issues (compared to 11% without ADHD), nearly 8% reported being denied requested mental health treatment, a rate eight times higher than among those without ADHD.

“Only a small percentage of adults with ADHD have been diagnosed, meaning this study covers just a segment of the entire community,” explains lead author Dr. Liz O’Nions. “More of those who are diagnosed may have additional health problems compared to the average person with ADHD. Therefore, our research may over-estimate the life expectancy gap for people with ADHD overall, though more community-based research is needed to test whether this is the case.”

The research carries particular weight because it drew from the UK’s primary care system, where almost everyone is registered. This comprehensive dataset allowed researchers to track real health outcomes rather than relying on self-reported information or smaller samples.

The gender disparity, with women losing even more years of life than men, raises important questions about how ADHD manifests and is treated across genders. Historically, the condition has been better recognized in males, potentially leaving many women undiagnosed until later in life, if at all.

“Although many people with ADHD live long and healthy lives,” Dr. O’Nions notes, “our finding that on average they are living shorter lives than they should indicates unmet support needs. It is crucial that we find out the reasons behind premature deaths so we can develop strategies to prevent these in future.”

These findings demand immediate attention from healthcare providers and policymakers. Treatment and support for ADHD is associated with better outcomes, including reduced mental health problems and substance use.

The numbers speak for themselves: 7 years, 9 years, 3% of adults affected. But behind these statistics are real lives being cut short by a condition that’s often dismissed as a simple attention problem. This study doesn’t just highlight a gap in life expectancy, it exposes a gap in our understanding of what ADHD truly means for those living with it.

Source : https://studyfinds.org/adults-with-adhd-die-7-to-9-years-sooner-alarming-study-shows/

Why camel’s milk will be the next big immune-boosting dairy alternative

Camel milk may be better for our immune health than cow’s milk. (Leo Morgan/Shutterstock)

Move over almond milk. There’s a new dairy alternative in town, and it comes from camels. While that might sound strange to Western ears, new research from Edith Cowan University (ECU) in Australia suggests camel milk could offer some impressive health benefits, especially for our immune systems.

The study, published in Food Chemistry, presents an in-depth analysis comparing cow and camel milk, focusing particularly on proteins that affect immune function and digestion. While cow’s milk dominates global dairy production at over 81%, camel milk currently accounts for just 0.4% of global milk production. Unlike cow’s milk, it contains distinctive proteins that could make it especially valuable for immune system support and gut health.

When examining the cream portion of both milk types, scientists identified 1,143 proteins in camel milk compared to 851 in cow’s milk. The cream fraction proved particularly rich in immune system-supporting proteins and bioactive peptides that can help fight harmful bacteria and potentially protect against certain diseases. However, researchers emphasize that further testing is needed to confirm their potency.

“These bioactive peptides can selectively inhibit certain pathogens, and by doing so, create a healthy gut environment and also has the potential to decrease the risk of developing cardiovascular disease in future,” explains study researcher Manujaya Jayamanna Mohittige, a Ph.D. student at ECU, in a statement.

For people who struggle with dairy sensitivities, the study confirms that camel milk naturally lacks beta-lactoglobulin, the primary protein that triggers allergic reactions to cow’s milk. Additionally, camel milk contains lower lactose levels than cow’s milk, potentially making it easier to digest for some individuals.

Composition-wise, camel milk is slightly different from cow’s milk. Cow’s milk typically contains 85-87% water, with 3.8-5.5% fat, 2.9-3.5% protein, and 4.6% lactose. Camel milk, meanwhile, consists of 87-90% water, with protein content varying from 2.15-4.90%, fat ranging from 1.2-4.5%, and lactose levels between 3.5-4.5%.

Camel milk production currently ranks fifth globally behind cow, buffalo, goat, and sheep milk. Given Australia’s semi-arid climate and existing camel population, expanding production is an increasingly viable option.

“Camel milk is gaining global attention, in part because of environmental conditions. Arid or semi-arid areas can be challenging for traditional cattle farming, but perfect for camels,” adds Mohittige.

However, there are practical challenges to overcome. While dairy cows can produce up to 28 liters of milk daily, camels typically yield only about 5 liters. Several camel dairies already operate in Australia, but their production volumes remain relatively low.

This doesn’t mean everyone should rush out and switch to camel milk. It’s still relatively hard to find in many places and typically costs more than cow’s milk. But for people looking for alternatives to traditional dairy, especially those with certain milk sensitivities, camel milk might offer an interesting option.

While camel milk may not appear in your local supermarket just yet, this research reveals why it deserves attention beyond its novelty value. Its unique protein profile and immune-supporting properties may help explain why this unconventional dairy source has persisted for millennia in cultures worldwide.

Source : https://studyfinds.org/camel-milk-immune-boosting-alternative/

Gender shock: Study reveals men, not women, make more emotional money choices

(Credit: © Yuri Arcurs | Dreamstime.com)

When it comes to making financial decisions, conventional wisdom suggests keeping emotions out of the equation. But new research reveals that men, contrary to traditional gender stereotypes, may be significantly more susceptible to letting emotions influence their financial choices than women.

A study led by the University of Essex challenges long-held assumptions about gender and emotional decision-making. The research explores how emotions generated in one context can influence decisions in completely unrelated situations – a phenomenon known as the emotional carryover effect.

“These results challenge the long-held stereotype that women are more emotional and open new avenues for understanding how emotions influence decision-making across genders,” explains lead researcher Dr. Nikhil Masters from Essex’s Department of Economics.

Working with colleagues from the Universities of Bournemouth and Nottingham, Masters designed an innovative experiment comparing how different types of emotional stimuli affect people’s willingness to take financial risks. They contrasted a traditional laboratory approach targeting a single emotion (fear) with a more naturalistic stimulus based on real-world events that could trigger multiple emotional responses.

The researchers recruited 186 university students (100 women and 86 men) and randomly assigned them to one of three groups. One group watched a neutral nature documentary about the Great Barrier Reef. Another group viewed a classic fear-inducing clip from the movie “The Shining,” showing a boy searching for his mother in an empty corridor with tense background music. The third group watched actual news footage about the BSE crisis (commonly known as “mad cow disease”) from the 1990s, a real food safety scare that generated widespread public anxiety.

After watching their assigned videos, participants completed decision-making tasks involving both risky and ambiguous financial choices using real money. In the risky scenario, they had to decide between taking guaranteed amounts of money or gambling on a lottery with known 50-50 odds. The ambiguous scenario was similar, but participants weren’t told the odds of winning.

The results revealed striking gender differences. Men who watched either the horror movie clip or the BSE footage subsequently made more conservative financial choices compared to those who watched the neutral nature video. This effect was particularly pronounced for those who saw the BSE news footage, and even stronger when the odds were ambiguous rather than clearly defined.

Perhaps most surprisingly, women’s financial decisions remained remarkably consistent regardless of which video they watched. The researchers found that while women reported experiencing similar emotional responses to the videos as men did, these emotions didn’t carry over to influence their subsequent financial choices.

The study challenges previous assumptions about how specific emotions like fear influence risk-taking behavior. While earlier studies suggested that fear directly leads to more cautious decision-making, this new research indicates the relationship may be more complex. Even when the horror movie clip successfully induced fear in participants, individual variations in reported fear levels didn’t correlate with their financial choices.

Instead, the researchers discovered that changes in positive emotions may play a more important role than previously thought. When positive emotions decreased after watching either the horror clip or BSE footage, male participants became more risk-averse in their financial decisions.

The study also demonstrated that emotional effects on decision-making can be even stronger when using realistic stimuli that generate multiple emotions simultaneously, compared to artificial laboratory conditions designed to induce a single emotion. This suggests that real-world emotional experiences may have more powerful influences on our financial choices than controlled laboratory studies have indicated.

The research team is now investigating why only men appear to be affected by these carryover effects. “Previous research has shown that emotional intelligence helps people to manage their emotions more effectively. Since women generally score higher on emotional intelligence tests, this could explain the big differences we see between men and women,” explains Dr. Masters.

These findings could have significant implications for understanding how major news events or crises might affect financial markets differently across gender lines. They also suggest the potential value of implementing “cooling-off” periods for important financial decisions, particularly after exposure to emotionally charged events or information.

“We don’t make choices in a vacuum and a cooling-off period might be crucial after encountering emotionally charged situations,” says Dr. Masters, “especially for life-changing financial commitments like buying a home or large investments.”

Source : https://studyfinds.org/study-men-not-women-make-more-emotional-money-choices/

Having a bigger waist could help some diabetes patients live longer

(© spaskov – stock.adobe.com)

Most health professionals would likely raise an eyebrow at the suggestion that a larger waist circumference might benefit some diabetes patients. Yet that’s exactly what researchers discovered when they analyzed survival rates among more than 6,600 American adults with diabetes, finding that the relationship between waist size and mortality follows unexpected patterns that vary significantly between men and women.

Medical professionals have long preached the dangers of excess belly fat, particularly for people with diabetes. However, the new analysis shows that the relationship between waist size and mortality follows distinct U-shaped and J-shaped patterns for women and men respectively, suggesting that both too little and too much belly fat could be problematic.

Researchers from Northern Jiangsu People’s Hospital in China analyzed data from the National Health and Nutrition Examination Survey (NHANES), a massive health study of Americans conducted between 2003 and 2018. They tracked the survival outcomes of 3,151 women and 3,473 men with diabetes, following them for roughly six to seven years on average.

The findings challenge conventional wisdom: women with diabetes actually showed the lowest mortality risk when their waist circumference hit 107 centimeters (about 42 inches), well above what’s typically considered healthy. For men, the sweet spot was 89 centimeters (around 35 inches), closer to traditional recommendations but still surprising in its implications.

The relationship manifested differently between the sexes. For women, the association between waist size and mortality formed a U-shaped curve – meaning death rates were higher among those with both smaller and larger waists than the optimal point. Men showed a J-shaped pattern, with mortality risk rising more steeply as waist sizes increased beyond the optimal point.

This phenomenon, dubbed the “obesity paradox,” isn’t entirely new to medical research. Similar patterns have been observed with body mass index (BMI) in various populations. However, this study is among the first to demonstrate it specifically with waist circumference in people with diabetes.

The findings were consistent across different causes of death. Whether looking at overall mortality or deaths specifically from cardiovascular disease, the patterns held steady. For every centimeter increase in waist size below the optimal point, women saw their mortality risk decrease by 3%, while men saw a 6% reduction. Above these thresholds, each additional centimeter increased mortality risk by 4% in women and 3% in men.
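Taken at face value, the reported per-centimeter figures can be sketched as a simple piecewise risk multiplier. The optimal waist sizes (107 cm for women, 89 cm for men) and the 3%/6% and 4%/3% per-centimeter changes come from the article; combining them multiplicatively into a relative-risk curve is purely an illustrative assumption, not the study’s actual statistical model:

```python
# Illustrative sketch only: builds a relative-mortality multiplier from the
# per-centimeter percentages reported in the article. The multiplicative
# combination is an assumption, not the study's statistical model.

OPTIMAL_CM = {"women": 107, "men": 89}    # waist with lowest mortality risk
BELOW_PCT = {"women": 0.03, "men": 0.06}  # risk drop per cm gained below optimum
ABOVE_PCT = {"women": 0.04, "men": 0.03}  # risk rise per cm gained above optimum

def relative_risk(sex: str, waist_cm: float) -> float:
    """Mortality risk relative to the optimal waist size (1.0 = minimum)."""
    optimum = OPTIMAL_CM[sex]
    if waist_cm < optimum:
        # Below the optimum, each added centimeter lowers risk by 3% (women)
        # or 6% (men), so risk grows the further the waist falls below it.
        return (1 - BELOW_PCT[sex]) ** (waist_cm - optimum)
    # Above the optimum, each added centimeter raises risk by 4% (women)
    # or 3% (men).
    return (1 + ABOVE_PCT[sex]) ** (waist_cm - optimum)
```

Under these toy assumptions, a woman with a 100 cm waist sits at roughly 1.24 times the minimum risk, tracing the U-shape the researchers describe.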

What makes these findings particularly intriguing is their persistence even after researchers accounted for numerous other factors that could influence survival, including age, education, ethnicity, smoking status, drinking habits, physical activity, and various health conditions.

While these findings might seem counterintuitive, they align with a growing body of research suggesting that optimal health parameters might vary more widely between individuals than previously thought. For diabetes patients and their healthcare providers, this study offers compelling evidence that when it comes to waist circumference, the relationship with survival is more complex than simply “less is more.”

Source : https://studyfinds.org/bigger-waist-helps-diabetes-patients-live-longer/

Ancient tooth enamel shatters long-held beliefs about early human diet

Model depicting Australopithecus afarensis. (Credit: © Procyab | Dreamstime.com)

Breaking new ground in our understanding of early human diet and evolution, scientists have discovered that our ancient relatives may not have been the avid meat-eaters previously believed. Research reveals that Australopithecus, one of humanity’s earliest ancestors who lived in South Africa between 3.7 and 3.3 million years ago, primarily maintained a plant-based diet rather than regularly consuming meat.

Scientists have long debated when our ancestors began regularly consuming meat, as this dietary shift has been linked to several crucial evolutionary developments, including increased brain size and reduced gut size. Many researchers believed meat-eating began with early human ancestors like Australopithecus, partly because stone tools and cut marks on animal bones have been found dating back to this period.

“Tooth enamel is the hardest tissue of the mammalian body and can preserve the isotopic fingerprint of an animal’s diet for millions of years,” says geochemist Tina Lüdecke, the study’s lead author, in a statement. As head of the Emmy-Noether Junior Research Group for Hominin Meat Consumption at the Max Planck Institute for Chemistry and Honorary Research Fellow at the University of the Witwatersrand, Lüdecke regularly travels to Africa to collect fossilized teeth samples for analysis.

When living things digest food and process nutrients, they create a kind of chemical signature involving different forms of nitrogen. Think of it like leaving footprints in sand. Herbivores leave one type of print, while meat-eaters leave another. By examining these ancient chemical footprints preserved in tooth enamel, scientists can determine what kinds of foods an animal ate. Meat-eaters consistently show higher levels of a specific form of nitrogen compared to plant-eaters.

The research, published in Science, focused on specimens from the Sterkfontein cave near Johannesburg, part of South Africa’s “Cradle of Humankind,” an area renowned for its abundant early hominin fossils. Using innovative chemical analysis techniques, researchers examined fossilized teeth from seven Australopithecus specimens, comparing them with teeth from other animals that lived alongside them, including ancient relatives of antelopes, cats, dogs, and hyenas.

Source : https://studyfinds.org/ancient-tooth-enamel-early-human-diet-meat

Love bacon? Just one slice is all it takes to raise your risk of dementia

(© Boris Ryzhkov | Dreamstime.com)

If you could see inside your brain after eating processed meats, you might think twice about that morning bacon ritual. An eye-opening new study has revealed that even modest consumption of processed red meat could be aging your brain faster than normal.

Doctors from Brigham and Women’s Hospital and the Harvard T.H. Chan School of Public Health followed over 133,000 healthcare professionals for up to 43 years, finding that people who ate just a quarter serving or more of processed red meat per day had a 13% higher risk of developing dementia compared to those who consumed minimal amounts. For perspective, a serving of red meat is about three ounces – roughly the size of a deck of cards.

Most previous studies exploring the connection between red meat consumption and brain health have been relatively small or short-term, making this extensive research particularly noteworthy. The study, published in Neurology, carefully defined its terms: processed red meat included products like bacon, hot dogs, sausages, salami and bologna, while unprocessed red meat encompassed beef, pork, lamb and hamburger.

While both types of red meat have been previously linked to conditions like Type 2 diabetes and cardiovascular disease, processed meats carry additional risks due to their high levels of sodium, nitrites, and other potentially harmful compounds. These substances can trigger inflammation, oxidative stress, and vascular problems that may contribute to cognitive decline.

Participants were divided into three consumption groups for processed meat: those eating fewer than 0.10 servings per day (low), between 0.10 and 0.24 servings daily (medium), and 0.25 or more servings per day (high).

Beyond just tracking dementia diagnoses, researchers also assessed participants’ cognitive function through telephone interviews and questionnaires. Those who regularly consumed processed red meat showed signs of accelerated brain aging – approximately 1.6 years of additional cognitive aging for each daily serving. In practical terms, this means their brain function declined as if they were over a year and a half older than their actual age.

To assess cognitive decline from multiple angles, the researchers examined both subjective and objective measures. A group of nearly 44,000 participants with an average age of 78 completed surveys rating their own memory and thinking skills. This self-reported assessment revealed that those consuming 0.25 or more servings of processed meat daily had a 14% higher risk of subjective cognitive decline compared to minimal consumers.

Intriguingly, the study found that replacing processed red meat with healthier protein sources could help protect brain health. Swapping out that daily serving of bacon or hot dogs for nuts and legumes was associated with a 19% lower risk of dementia. Fish proved even more beneficial, with a 28% reduction in dementia risk when substituted for processed meat.

The research team focused on two large cohorts of health professionals: the Nurses’ Health Study and the Health Professionals Follow-Up Study. These groups were ideal for long-term research as they were already completing detailed dietary questionnaires every 2-4 years and had high rates of follow-up participation. The participants’ professional backgrounds also meant they were likely to provide accurate health information.

Women made up about two-thirds of the study population, with an average starting age of 49 years. By following participants for several decades, researchers could observe how dietary patterns in middle age influenced cognitive health later in life. This long-term perspective is crucial, as cognitive decline often begins subtly, years before noticeable symptoms appear.

“Dietary guidelines tend to focus on reducing risks of chronic conditions like heart disease and diabetes, while cognitive health is less frequently discussed, despite being linked to these diseases,” said corresponding author Dr. Daniel Wang, of the Channing Division of Network Medicine at Brigham and Women’s Hospital, in a statement. “Reducing how much red meat a person eats and replacing it with other protein sources and plant-based options could be included in dietary guidelines to promote cognitive health.”

Having that hot dog at the baseball game or bacon at Sunday brunch are certainly delicious traditions in the American diet. With dementia rates expected to soar in the next 30 years, it seems that developing the devastating condition could eventually be a tradition too. Taking the right steps to protect your brain can rewrite that fate.

Source : https://studyfinds.org/love-bacon-just-one-slice-dementia-risk/

Obesity redefined: Why doctors are ditching BMI for these key health markers

(© Feng Yu – stock.adobe.com)

When the issue is obesity, the questions are many, and the routes to answers anything but straight. What is abundantly clear is a need for consensus on two foundational matters:

  • What is a useful definition for obesity?
  • Is obesity a disease?

To answer these questions and standardize the concepts, a group of 58 experts representing multiple medical specialties and countries convened a commission and participated in a consensus development process. They were careful to include people with lived experience of obesity to ensure consideration of patients’ perspectives. The commission’s report was just published in The Lancet Diabetes & Endocrinology.

The commission recognized that the current measure of obesity, which is body-mass index (BMI), can both overestimate and underestimate adiposity (how much of the body is fat). The global commission determined that to reduce misclassification, it is necessary to use other measures of body fat. Some of these included waist circumference, waist-to-hip ratio, waist-to-height ratio, direct fat measurement, and signs and symptoms of poor health that could be attributed to excess adiposity.

The experts proposed two distinct types of obesity:

  • Clinical obesity: A systemic chronic illness directly and specifically caused by excess adiposity
  • Preclinical obesity: Excess adiposity with preserved tissue and organ function, accompanied by an increased risk of progression to clinical or other noncommunicable disease

The commission’s leader, Dr. Francesco Rubino of King’s College London, explained the importance of the distinction between these new definitions of disease. The group acknowledges the subtleties of obesity and supports timely access to treatment for patients diagnosed with clinical obesity, as is appropriate for people with any chronic disease. For people with preclinical obesity, the definition points to risk-reduction strategies.

Clinical vs. preclinical obesity

Under the proposed definition, clinical obesity is a state of chronic illness: some tissues or organs show reduced function attributed to excess fat, and daily activities are affected. These manifestations include breathlessness; joint pain or reduced mobility, often in the knees and hips; metabolic dysfunction; and impaired function of organ systems.

Applying the proposed definition, the diagnosis of clinical obesity requires two main criteria: confirmation of excess adiposity plus chronic organ dysfunction and/or limitations on mobility or daily living.

Confirming a diagnosis of clinical obesity in those with excess body fat requires that a healthcare provider evaluate the individual’s medical history, conduct a physical exam, order the usual laboratory tests, and pursue additional diagnostic tests as indicated.

The commission authors stated that, “A diagnosis of clinical obesity should have the same implications as other chronic disease diagnoses. Patients diagnosed with clinical obesity should, therefore, have timely and equitable access to comprehensive care and evidence-based treatments.”

Preclinical obesity is more of a spectrum of risk. Excess fat is confirmed, but these individuals don’t have ongoing illness attributed to adiposity. They can perform daily activities and have no or mild organ dysfunction. These patients are at higher risk for diseases like clinical obesity, cardiovascular disease, some cancers, Type 2 diabetes, and other illnesses.

“Preclinical obesity is different from metabolically healthy obesity because it is defined by the preserved function of all organs potentially affected by obesity,” the authors write, “not only those involved in metabolic regulation.”

What these changes mean for you, if you have excess fat, is that your condition is treated like any other medical condition. It isn’t something you just “get over” with diet and exercise. The effects of your fat are clearly identified, including the consequences without intervention. Specific fat-mediated dysfunctions have specific protocols for intervention. You work side-by-side with your healthcare provider to manage risk and consequences, hopefully even reducing risks and possibly reversing some consequences.

Source : https://studyfinds.org/obesity-redefined-why-doctors-are-ditching-bmi-for-these-key-health-markers/

Yes, parents really do have a ‘favorite’ child. Study reveals how to tell if it’s you

(Photo by New Africa on Shutterstock)

Ever wondered if your parents really did have a favorite child? That nagging suspicion might not be all in your head. A study analyzing data from over 19,400 participants concludes that parents do indeed treat their children differently, and the way they choose their “favorites” is more systematic than you might think.

“For decades, researchers have known that differential treatment from parents can have lasting consequences for children,” said lead author Alexander Jensen, PhD, an associate professor at Brigham Young University, in a statement. “This study helps us understand which children are more likely to be on the receiving end of favoritism, which can be both positive and negative.”

So what makes a child more likely to receive the coveted “favorite” status? The research team discovered several fascinating patterns. First, contrary to what many might expect, both mothers and fathers tend to favor daughters. Children who demonstrate responsibility and organization in their daily lives, from completing homework on time to keeping their rooms tidy, also typically receive more favorable treatment from their parents.

The study, published in Psychological Bulletin, examined five key areas of parent-child interaction: overall treatment, positive interactions (such as displays of affection or praise), negative interactions (like conflicts or criticism), resource allocation (including time spent with each child and material resources), and behavioral control (rules and expectations).

Birth order influences how parents interact with their children, particularly regarding independence and rules. Parents tend to grant older siblings more autonomy, such as later curfews or more decision-making freedom. However, the researchers note this may reflect appropriate developmental adjustments rather than favoritism.

Personality characteristics emerged as significant predictors of parental treatment. Children who demonstrate conscientiousness, showing responsibility through behaviors like completing chores without reminders or planning ahead for school assignments, typically experience more positive interactions and fewer conflicts with parents.

Similarly, agreeable children who show cooperation and consideration in family life often receive more positive parental responses.

One particularly noteworthy finding involves the disconnect between parents’ and children’s perceptions. While parents acknowledged treating daughters more favorably, children themselves didn’t report noticing significant gender-based differences in treatment. This suggests that some aspects of parental favoritism operate so subtly that children may not consciously recognize them.

Research has shown that children who receive less favorable treatment may face increased challenges with mental health and family relationships. “Understanding these nuances can help parents and clinicians recognize potentially damaging family patterns,” Jensen explained. “It is crucial to ensure all children feel loved and supported.”

The researchers emphasize that their findings show correlation rather than causation. “It is important to note that this research is correlational, so it doesn’t tell us why parents favor certain children,” Jensen said. “However, it does highlight potential areas where parents may need to be more mindful of their interactions with their children.”

For families navigating these dynamics, Jensen offers this perspective: “The next time you’re left wondering whether your sibling is the golden child, remember there is likely more going on behind the scenes than just a preference for the eldest or youngest. It might be about responsibility, temperament or just how easy or hard you are to deal with.”

Source : https://studyfinds.org/parents-really-do-have-favorite-child/

Nightmare: Your dreams are for sale — and companies are already buying

(Image by Shutterstock AI Generator)

Shocking new survey reveals 54% of young Americans report ads infiltrating their dreams

Remember when sleep offered an escape from endless advertising? That era may be ending. While U.S. citizens already face up to 4,000 advertisements daily in their waking hours, research suggests that even our dreams are no longer safe from commercial messaging. A new study reveals that 54% of young Americans report experiencing dreams influenced by ads—and some companies might be doing it intentionally.

The findings come at a critical time, as the American Marketing Association previously reported that 77% of companies surveyed in 2021 expressed intentions to experiment with “dream ads” by this year. What was once considered science fiction may now be becoming reality, with major implications for consumer protection and marketing ethics.

According to The Media Image’s newly released consumer survey focusing on Gen Z and Millennials, 54% of Americans aged 18-35 report having experienced dreams that appeared to be influenced by advertisements or contained ad-like content. Even more striking, 61% of those respondents report having such dreams within the past year, with 38% experiencing them regularly—ranging from daily occurrences to monthly episodes.

Conducted by SurveyMonkey on behalf of The Media Image between January 2nd and 3rd, 2025, the research included a representative sample of 1,101 American respondents aged 18-35. While the sample skewed slightly female (62%), the findings are considered reflective of broader perspectives within this age group.

The data shows a striking pattern: 22% of respondents experience ad-like content in their dreams between once a week to daily, while another 17% report such occurrences between once a month to every couple of months.

The phenomenon isn’t merely passive. The survey reveals that these dream-based advertisements may be influencing consumer behavior in tangible ways. While two-thirds of consumers (66%) report resistance to making purchases based on their dreams, the other third admit that their dreams have encouraged them to buy products or services over the past year—a conversion rate that rivals or exceeds many traditional advertising campaigns.

The presence of major brands in dreams appears to be particularly prevalent, with 48% of young Americans reporting encounters with well-known companies such as Coca-Cola, Apple, or McDonald’s during their sleep. Harvard experts suggest this may be due to memory “reactivation” during sleep, where frequent exposure to brands in daily life increases their likelihood of appearing in dreams.

Perhaps most troubling is the apparent willingness of many consumers to accept this new frontier of advertising. The survey found that 41% of respondents would be open to seeing ads in their dreams if it meant receiving discounts on products or services. This raises serious ethical questions about the commercialization of human consciousness and the potential exploitation of vulnerable mental states for marketing purposes.

Despite these concerns, there appears to be limited interest in protecting dreams from commercial influence. Over two-thirds of respondents (68%) indicated they would not be willing to pay to keep their dreams ad-free, even if such technology existed. However, a significant minority (32%) expressed interest in a hypothetical “dream-ad blocker,” suggesting growing awareness and concern about this issue among some consumers.

The research comes in the wake of dream researchers issuing an open letter warning the public about corporate attempts to infiltrate dreams with advertisements, sparked by Coors Light’s experimental campaign that achieved notable success. This confluence of corporate interest and technological capability raises serious questions about the future of personal privacy and mental autonomy.

The potential manipulation of dreams for advertising purposes raises serious concerns about psychological well-being and the need for protective regulations. As companies explore ways to influence our subconscious minds, the lack of existing safeguards becomes increasingly problematic.

These results emerge against a backdrop of increasing advertising saturation in daily life. Current estimates suggest that U.S. citizens are exposed to up to 4,000 advertisements daily, making sleep one of the last remaining refuges from commercial messaging. The potential erosion of this final sanctuary raises important questions about consumer rights and mental well-being in an increasingly commercialized world.

The research presents a clear warning: without immediate attention to the ethical and regulatory challenges of dream-based advertising, we risk losing the last advertisement-free space in modern life. As companies develop new technologies to influence our dreams, the choice between consumer protection and commercial interests becomes increasingly pressing.

Source : https://studyfinds.org/your-dreams-are-for-sale-and-companies-are-already-buying/

How smoking cigarettes could sabotage your career and income

(Photo credit: © Alem Bradic | Dreamstime.com)

Most people know smoking is bad for their health, but a new study suggests it could also be bad for their wealth. Research from Finland reveals that smoking in early adulthood can significantly impact your career trajectory and earning potential, with effects that ripple through decades of working life.

Living in an age where smoking rates have declined significantly since the 1990s, you might wonder why this matters. Despite the downward trend, smoking remains surprisingly prevalent in high-income countries, with 18% of women and 27% of men still lighting up as of 2019. While most smokers are aware of the health risks, they might not realize how their habit could be affecting their professional lives and financial future.

The study, published in Nicotine and Tobacco Research, analyzed data from nearly 2,000 Finnish adults to explore how smoking habits in early adulthood influenced their long-term success in the job market. What they found was striking: for each pack-year of smoking (equivalent to smoking one pack of cigarettes daily for a year), people experienced an average 1.8% decrease in earnings and were employed for 0.5% fewer years over the study period.

“Smoking in early adulthood is closely linked to long-term earnings and employment, with lower-educated individuals experiencing the most severe consequences,” said the paper’s lead author, Jutta Viinikainen, from the University of Jyväskylä, in a statement. “These findings highlight the need for policies that address smoking’s hidden economic costs and promote healthier behaviors.”

Research from the Cardiovascular Risk in Young Finns Study tracked participants’ smoking habits and career trajectories from 2001 to 2019, providing a long-term look at how tobacco use influences professional success over time. The study focused on adults who were between 24 and 39 years old at the start of the study period. Beyond just counting cigarettes, researchers calculated “pack-years” – a measure that considers both how much and how long someone has smoked – to understand the cumulative impact of smoking on career outcomes.

Particularly interesting was how smoking’s impact varied across different demographic groups. Young smokers with lower education levels faced the steepest penalties in terms of reduced earnings, while older smokers in this educational bracket saw the most significant drops in employment years. This pattern suggests that smoking’s effects on career success evolve differently across age groups and education levels.

For younger workers, smoking appeared to create immediate barriers to earning potential, possibly due to reduced productivity or unconscious bias from employers. Meanwhile, older workers faced growing challenges maintaining steady employment as the long-term health effects of smoking began to manifest, particularly in physically demanding jobs that are more common among those with less formal education.

Consider this: reducing smoking by just five pack-years (equivalent to smoking one pack daily for five years) could potentially boost earnings by 9%. That’s a substantial difference in earning power that could compound significantly over a career span, affecting everything from lifestyle choices to retirement savings.
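That five-pack-year arithmetic can be checked in a few lines of Python. This is an illustrative linear extrapolation of the study's reported average effect, not the authors' statistical model:

```python
def pack_years(packs_per_day: float, years: float) -> float:
    """Pack-years: average packs smoked per day times years of smoking."""
    return packs_per_day * years

def earnings_change_pct(pack_years_avoided: float,
                        effect_per_pack_year: float = 1.8) -> float:
    """Illustrative linear estimate using the reported average
    1.8% earnings decrease per pack-year."""
    return pack_years_avoided * effect_per_pack_year

five_years = pack_years(1, 5)            # one pack daily for five years
print(earnings_change_pct(five_years))   # the roughly 9% figure cited
```

Five pack-years times the 1.8% per-pack-year effect gives the 9% earnings difference mentioned above.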

Of particular concern is how these effects might create a potential feedback loop of disadvantage. While the study found that those with lower education levels appeared to face greater economic consequences from smoking, it’s important to note that this relationship is complex and influenced by many factors. This suggests that smoking could be amplifying existing socioeconomic disparities, making it harder for people to climb the economic ladder.

Smoking’s impact on physical fitness and performance may explain part of this effect, particularly in jobs requiring manual labor or physical stamina. When you’re constantly short of breath or taking more frequent breaks for cigarettes, it’s harder to maintain the same level of productivity as non-smoking colleagues. Over time, these small differences in daily performance can translate into significant gaps in career advancement and earning potential.

Perhaps most encouraging was the finding that quitting smoking could help mitigate these negative effects, particularly regarding employment stability among less-educated workers. This suggests it’s never too late to improve your career prospects by putting out that last cigarette.

In a world where career success increasingly depends on maintaining peak performance and adaptability, smoking may be more than just a health risk – it could be a career liability that many can’t afford to ignore. As the costs of smoking continue to mount, both in terms of health and wealth, the message becomes clear: your wallet, not just your lungs, might breathe easier if you quit.

Source : https://studyfinds.org/smoking-cigarettes-career-income/

Glass of milk a day keeps colorectal cancer away, massive study reveals

(© Goran – stock.adobe.com)

What if reducing your cancer risk was as simple as adding a glass of milk to your daily diet? A study of over half a million women concludes that dairy products, particularly those rich in calcium, may help protect against colorectal cancer, while alcohol and processed meats continue to pose significant risks. The massive research project tracked the eating habits and health outcomes of 542,778 British women for over 16 years, identifying key foods and nutrients that could help prevent one of the world’s most common cancers.

Colorectal cancer shows striking differences between regions, with higher rates in wealthy nations like the United States, European countries, and Japan, compared to lower rates in much of Africa and South Asia. However, when people move to countries with higher rates, their risk begins matching that of their new home within about a decade, suggesting that lifestyle factors, particularly diet, play a crucial role.

The international research team analyzed 97 different dietary components, from specific foods to nutrients. During the study period, 12,251 women developed colorectal cancer, allowing scientists to identify clear patterns between eating habits and cancer risk.

Among the strongest protective factors was calcium intake. Women who consumed more calcium-rich foods showed a significantly lower risk of developing colorectal cancer. The benefit appeared consistent whether the calcium came from dairy products or other sources.

Dairy milk emerged as another powerful player in cancer prevention. Regular milk drinkers showed notably lower cancer risk, and other dairy products like yogurt demonstrated similar protective effects. Several nutrients commonly found in dairy — including riboflavin, magnesium, phosphorus, and potassium — also showed benefits.

On the flip side, alcohol consumption stood out as the strongest risk factor. Drinking about two additional standard drinks per day was linked to a 15% higher risk of developing colorectal cancer. The risk appeared particularly pronounced for rectal cancer compared to colon cancer.

Red and processed meats maintained their concerning reputation. Each additional daily serving, about the size of a slice of ham, was associated with an 8% higher risk. This finding supports previous research that led international health organizations to classify processed meat as cancer-causing and red meat as probably cancer-causing in humans.

The researchers took an innovative approach to confirm dairy’s protective effects by examining genetic differences that affect how well people can digest milk products. This analysis provided additional evidence that dairy foods help protect against colorectal cancer, as people who were genetically better able to digest dairy had lower cancer rates.

While breakfast cereals, fruits, whole grains, and high-fiber foods showed some protective effects, these benefits became less pronounced when researchers accounted for overall lifestyle habits. This suggests that people who eat these foods might have generally healthier lifestyles that contribute to lower cancer risk.

Scientists believe calcium helps prevent cancer in several ways: by binding to harmful substances in the digestive system, promoting healthy cell development in the colon, and reducing inflammation. While dairy products aren’t suitable for everyone, particularly those with lactose intolerance or milk allergies, the research suggests that for many people, including more dairy in their diet might help reduce their cancer risk.

These findings provide compelling evidence that simple dietary changes, like having more dairy products while limiting alcohol and processed meats, could help reduce the risk of developing one of the world’s most common cancers. However, no single food acts as a magic bullet: it’s the overall pattern of dietary choices that matters most for cancer prevention.

Source : https://studyfinds.org/dairy-milk-keeps-colorectal-cancer-away/

Could AI replace politicians? A philosopher maps out three possible futures

(© jon – stock.adobe.com)

From business and public administration to daily life, artificial intelligence is reshaping the world – and politics may be next. While the idea of AI politicians might make some people uneasy, survey results tell a different story. A poll conducted by my university in 2021, during the early surge of AI advancements, found broad public support for integrating AI into politics across many countries and regions.

A majority of Europeans said they would like to see at least some of their politicians replaced by AI. Chinese respondents were even more bullish about AI agents making public policy, while normally innovation-friendly Americans were more circumspect.

As a philosopher who researches the moral and political questions raised by AI, I see three main pathways for integrating AI into politics, each with its own mixture of promises and pitfalls.

While some of these proposals are more outlandish than others, weighing them up makes one thing certain: AI’s involvement in politics will force us to reckon with the value of human participation in politics, and with the nature of democracy itself.

Chatbots running for office?

Prior to ChatGPT’s explosive arrival in 2022, efforts to replace politicians with chatbots were already well underway in several countries. As far back as 2017, a chatbot named Alisa challenged Vladimir Putin for the Russian presidency, while a chatbot named Sam ran for office in New Zealand. Denmark and Japan have also experimented with chatbot-led political initiatives.

These efforts, while experimental, reflect a longstanding curiosity about AI’s role in governance across diverse cultural contexts.

The appeal of replacing flesh and blood politicians with chatbots is, on some levels, quite clear. Chatbots lack many of the problems and limitations typically associated with human politics. They are not easily tempted by desires for money, power, or glory. They don’t need rest, can engage virtually with everyone at once, and offer encyclopedic knowledge along with superhuman analytic abilities.

However, chatbot politicians also inherit the flaws of today’s AI systems. These chatbots, powered by large language models, are often black boxes, limiting our insight into their reasoning. They frequently generate inaccurate or fabricated responses, known as hallucinations. They face cybersecurity risks, require vast computational resources, and need constant network access. They are also shaped by biases derived from training data, societal inequalities, and programmers’ assumptions.

Additionally, chatbot politicians would be ill-suited to what we expect from elected officials. Our institutions were designed for human politicians, with human bodies and moral agency. We expect our politicians to do more than answer prompts – we also expect them to supervise staff, negotiate with colleagues, show genuine concern for their constituents, and take responsibility for their choices and actions.

Without major improvements in the technology, or a more radical reimagining of politics itself, chatbot politicians remain an uncertain prospect.

AI-powered direct democracy

Another approach seeks to completely do away with politicians, at least as we know them. Physicist César Hidalgo believes that politicians are troublesome middlemen that AI finally allows us to cut out. Instead of electing politicians, Hidalgo wants each citizen to be able to program an AI agent with their own political preferences. These agents can then negotiate with each other automatically to find common ground, resolve disagreements, and write legislation.

Hidalgo hopes that this proposal can unleash direct democracy, giving citizens more direct input into politics while overcoming the traditional barriers of time commitment and legislative expertise. The proposal seems especially attractive in light of widespread dissatisfaction with conventional representative institutions.

However, eliminating representation may be more difficult than it seems. In Hidalgo’s “avatar democracy,” the de facto kingmakers would be the experts who design the algorithms. Since the only way to legitimately authorize their power would likely be through voting, we might merely replace one form of representation with another.

The specter of algocracy

One even more radical idea involves eliminating humans from politics altogether. The logic is simple enough: if AI technology advances to the point where it makes reliably better decisions than humans, what would be the point of human input?

An algocracy is a political regime run by algorithms. While few have argued outright for a total handover of political power to machines (and the technology for doing so is still far off), the specter of algocracy forces us to think critically about why human participation in politics matters. What values – such as autonomy, responsibility, or deliberation – must we preserve in an age of automation, and how?

Source : https://studyfinds.org/could-ai-replace-politicians/

Obesity label is medically flawed, says global report

People with excess body fat can still be active and healthy, experts say

Calling people obese is medically “flawed” – and the definition should be split into two, a report from global experts says.

The term “clinical obesity” should be used for patients with a medical condition caused by their weight, while “pre-clinically obese” should be applied to those who remain “fat but fit”, though still at risk of disease.

This is better for patients than relying only on body mass index (BMI) – which measures whether they are a healthy weight for their height – to determine obesity.

More than a billion people are estimated to be living with obesity worldwide and prescription weight-loss drugs are in high demand.

The report, published in The Lancet Diabetes & Endocrinology journal, is supported by more than 50 medical experts around the world.

“Some individuals with obesity can maintain normal organ function and overall health, even long term, whereas others display signs and symptoms of severe illness here and now,” Prof Francesco Rubino, from King’s College London, who chaired the expert group, said.

“Obesity is a spectrum,” he added.

The current, blanket definition means too many people are being diagnosed as obese but not receiving the most appropriate care, the report says.

Natalie, from Crewe, goes to the gym four times a week and has a healthy diet, but is still overweight.

“I would consider myself on the larger side, but I’m fit,” she told the BBC 5 Live phone-in with Nicky Campbell.

“If you look at my BMI I’m obese, but if I speak to my doctor they say that I’m fit, healthy and there’s nothing wrong with me.

“I’m doing everything I can to stay fit and have a long healthy life,” she said.

Richard, from Falmouth, said there is a lot of confusion around BMI.

“When they did my test, it took me to a level of borderline obesity, but my body fat was only 4.9% – the problem is I had a lot of muscle mass,” he says.

In Mike’s opinion, you cannot be fat and fit – he says it is all down to diet.

“All these skinny jabs make me laugh, if you want to lose weight stop eating – it’s easy.”

Currently, in many countries, obesity is defined as having a BMI over 30 – a measurement that estimates body fat based on height and weight.

How is BMI calculated?

It is calculated by dividing an adult’s weight in kilograms by their height in metres squared.

For example, if they are 70kg (about 11 stone) and 1.70m (about 5ft 7in):

- square their height in metres: 1.70 x 1.70 = 2.89
- divide their weight in kilograms by this amount: 70 ÷ 2.89 = 24.22
- display the result to one decimal place: 24.2
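The steps above translate directly into a short Python function (a sketch for illustration, using the article's 70kg / 1.70m example):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared,
    rounded to one decimal place as in the worked example."""
    return round(weight_kg / (height_m ** 2), 1)

print(bmi(70, 1.70))  # 24.2 -- within the healthy 18.5-24.9 range
```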

But BMI has limitations.

It measures whether someone is carrying too much weight – but not too much fat.

So very muscular people, such as athletes, tend to have a high BMI but not much fat.

The report says BMI is useful on a large scale, to work out the proportion of a population who are a healthy weight, overweight or obese.

But it reveals nothing about an individual patient’s overall health, whether they have heart problems or other illnesses, for example, and fails to distinguish between different types of body fat or measure the more dangerous fat around the waist and organs.

Measuring a patient’s waist or the amount of fat in their body, along with a detailed medical history, can give a much clearer picture than BMI, the report says.

Source: https://www.bbc.com/news/articles/c79dz14d30ro

Keeping the thermostat between these temperatures is best for seniors’ brains

(Credit: © Lopolo | Dreamstime.com)

That perfect thermostat setting might be more important than you think, especially at grandma and grandpa’s house. A new study finds that indoor temperature significantly affects older adults’ ability to concentrate, even in their own homes where they control the climate. The research suggests that as climate change brings more extreme temperatures, elderly individuals may face increased cognitive challenges unless their indoor environments are properly regulated.

Researchers at the Hinda and Arthur Marcus Institute for Aging Research, the research arm of Hebrew SeniorLife affiliated with Harvard Medical School, conducted a year-long study monitoring 47 community-dwelling adults aged 65 and older. The study tracked both their home temperatures and their self-reported ability to maintain attention throughout the day. What they discovered was a clear U-shaped relationship between room temperature and cognitive function. In other words, attention spans were optimal within a specific temperature range and declined when rooms became either too hot or too cold.

The sweet spot for cognitive function appeared to be between 20-24°C (68-75°F). When temperatures deviated from this range by just 4°C (7°F) in either direction, participants were twice as likely to report difficulty maintaining attention on tasks. This finding is particularly concerning given that many older adults live on fixed incomes and may struggle to maintain optimal indoor temperatures, especially during extreme weather events.
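The Celsius and Fahrenheit figures above line up once rounding is accounted for: 24°C is 75.2°F (quoted as 75°F), and a 4°C deviation is a 7.2°F difference (quoted as 7°F). A minimal conversion sketch:

```python
def c_to_f(celsius: float) -> float:
    """Convert a temperature reading from Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

def c_delta_to_f(delta_c: float) -> float:
    """Convert a temperature *difference* (no +32 offset applies)."""
    return delta_c * 9 / 5

print(c_to_f(20), c_to_f(24))  # the 68-75.2 F comfort band
print(c_delta_to_f(4))         # the 7.2 F deviation
```

Note that a temperature difference converts without the +32 offset, which is why 4°C of deviation maps to about 7°F rather than 39°F.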

Many previous studies have examined temperature’s effects on cognition in controlled laboratory settings, but this research breaks new ground by studying people in their natural home environments over an extended period. The research team used smart sensors placed in participants’ primary living spaces to continuously monitor temperature and humidity levels, while participants completed twice-daily smartphone surveys about their thermal comfort and attention levels.

The study’s findings revealed an interesting asymmetry in how people responded to temperature variations. While both hot and cold conditions impaired attention, participants seemed particularly sensitive to cold temperatures. When reporting feeling cold, they showed greater cognitive difficulties across a wider range of actual temperatures compared to when they felt hot. This suggests that maintaining adequate heating may be especially crucial for preserving cognitive function in older adults during winter months.

“Our findings underscore the importance of understanding how environmental factors, like indoor temperature, impact cognitive health in aging populations,” said lead author Dr. Amir Baniassadi, an assistant scientist at the Marcus Institute, in a statement. “This research highlights the need for public health interventions and housing policies that prioritize climate resilience for older adults. As global temperatures rise, ensuring access to temperature-controlled environments will be crucial for protecting their cognitive well-being.”

This study follows a 2023 investigation measuring how temperature affected older adults’ sleep and cognitive ability, building a growing body of evidence that climate change impacts extend beyond physical health. While much attention has been paid to the direct health impacts of heat waves and cold snaps, this research suggests that even moderate temperature variations inside homes could affect older adults’ daily cognitive functioning.

The participant group, while relatively small, was carefully monitored. With an average age of 79 years, the cohort completed over 17,000 surveys during the study period. Most participants lived in private, market-rate housing (34 participants) rather than subsidized housing (13 participants), suggesting they had reasonable control over their home environments. This makes the findings particularly striking: if even relatively advantaged older adults experience cognitive effects from temperature variations, more vulnerable populations may face even greater challenges.

The connection between temperature and cognition isn’t entirely surprising. As we age, our bodies become less efficient at regulating temperature, a problem often compounded by chronic conditions like diabetes or medications that affect thermoregulation. What’s novel about this research is its demonstration that these physiological vulnerabilities may extend to cognitive function in real-world settings.

As winter gives way to spring and thermostats across the country get adjusted, this research suggests we might want to pay closer attention to those settings — especially in homes where older adults reside. The cognitive sweet spot of 68-75°F might just be the temperature range where wisdom flourishes.

Source : https://studyfinds.org/cold-homes-linked-to-attention-problems-in-older-adults/

Process this: 50,000 grocery products reveal shocking truth about America’s food supply

(Credit: © Photopal604 | Dreamstime.com)

Minimally processed foods make up just a small percentage of what’s available in U.S. supermarkets

Next time you walk down the aisles of your local grocery store, take a closer look at what’s actually available on those shelves. A stunning report reveals the majority of food products sold at major U.S. grocery chains are highly processed, with most of them priced significantly cheaper than less processed alternatives.

In what may be the most comprehensive analysis of food processing in American grocery stores to date, researchers examined over 50,000 food items sold at Walmart, Target, and Whole Foods to understand just how processed our food supply really is. Using sophisticated machine learning techniques, they developed a database called GroceryDB that scores foods based on their degree of processing.

What exactly makes a food “processed”? While nearly all foods undergo some form of processing (like washing and packaging), ultra-processed foods are industrial formulations made mostly from substances extracted from foods or synthesized in laboratories. Think instant soups, packaged snacks, and soft drinks – products that often contain additives like preservatives, emulsifiers, and artificial colors.

Research has suggested that diets high in ultra-processed foods can contribute to health issues like obesity, diabetes and heart disease. Over-processing can also strip foods of beneficial nutrients. Despite these risks, there has been no easy way for consumers to identify what foods are processed, highly processed, or ultra-processed.

“There are a lot of mixed messages about what a person should eat. Our work aims to create a sort of translator to help people look at food information in a more digestible way,” explains Giulia Menichetti, PhD, an investigator in the Channing Division of Network Medicine at Brigham and Women’s Hospital and the study’s corresponding author, in a statement.

The findings paint a concerning picture of American food retail. Across all three stores, minimally processed products made up a relatively small fraction of available items, while ultra-processed foods dominated the shelves. Even more troubling, the researchers found that for every 10% increase in processing scores, the price per calorie dropped by 8.7% on average. This means highly processed foods tend to be substantially cheaper than their less processed counterparts.
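To make the 10%-for-8.7% relationship concrete, here is a hypothetical illustration in Python. Treating the association as compounding per 10% step, and the 1.0-cent baseline, are assumptions made for the sake of the example, not the study's model:

```python
def est_price_per_calorie(base_price: float, processing_increase_pct: float,
                          drop_per_10pct: float = 8.7) -> float:
    """Illustrative estimate: apply the reported ~8.7% average drop in
    price per calorie for every 10% increase in processing score,
    compounded per 10% step (an assumption, not the study's model)."""
    steps = processing_increase_pct / 10
    return base_price * (1 - drop_per_10pct / 100) ** steps

# A product scoring 30% higher on the processing scale, from a
# hypothetical baseline of 1.0 cent per calorie:
print(round(est_price_per_calorie(1.0, 30), 3))
```

Under these assumptions, a product 30% higher on the processing scale would cost roughly a quarter less per calorie, which matches the article's broader point that heavier processing tracks with cheaper calories.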

However, the degree of processing varied significantly between stores. Whole Foods offered more minimally processed options and fewer ultra-processed products compared to Walmart and Target. The researchers also found major differences between food categories. Some categories, like jerky, popcorn, chips, bread, and mac and cheese, showed little variation in processing levels – meaning consumers have limited choices if they want less processed versions of these foods. Other categories, like cereals, milk alternatives, pasta, and snack bars, displayed wider ranges of processing levels.

Looking at specific examples helps illustrate these differences. When examining breads, researchers found that Manna Organics multi-grain bread from Whole Foods scored low on the processing scale since it’s made primarily from whole wheat kernels and basic ingredients. In contrast, certain breads from Walmart and Target scored much higher due to added ingredients like resistant corn starch, soluble corn fiber, and various additives.

The research team also developed a novel way to analyze individual ingredients’ contributions to food processing. They found that certain oils, like brain octane oil, flaxseed oil, and olive oil, contributed less to ultra-processing compared to palm oil, vegetable oil, and soybean oil. This granular analysis helps explain why seemingly similar products can have very different processing scores.

Study authors have made their findings publicly accessible through a website called TrueFood.tech, where consumers can look up specific products and find less processed alternatives within the same category.

“When people hear about the dangers of ultra-processed foods, they ask, ‘OK, what are the rules? How can we apply this knowledge?’” Menichetti notes. “We are building tools to help people implement changes to their diet based on information currently available about food processing. Given the challenging task of transforming eating behaviors, we want to nudge them to eat something that is within what they currently want but a less-processed option.”

As Americans increasingly rely on grocery stores for their food — with over 60% of U.S. food consumption coming from retail establishments — understanding what’s actually available on store shelves becomes crucial for public health. While this research doesn’t definitively prove that ultra-processed foods are harmful, it does demonstrate that avoiding them may require both conscious effort and deeper pockets.

Source : https://studyfinds.org/ultra-processed-foods-america-grocery-stores-target-walmart/


Age 13 rule isn’t working — Most pre-teens already deep in social media

(Credit: Child Social Media © Andrii Iemelianenko | Dreamstime.com)

Ages 11 and 12 represent a pivotal transition from childhood to adolescence — a time traditionally marked by first crushes, growing independence, and deepening friendships. But according to new research, this age group is also marked by something more troubling: widespread social media addiction. The study of over 10,000 American youth reveals that most pre-teens are active on platforms they’re technically too young to use.

As the U.S. Supreme Court prepares to hear arguments against Congress’ TikTok ban, the research pulls back the curtain on what many parents have long suspected: nearly 64% of pre-teens have at least one social media account, flouting minimum age requirements and raising concerns about online safety and mental health impacts.

Drawing from a diverse sample of adolescents aged 11 to 15, researchers found that TikTok reigns supreme among young users, with 67% of social media-using teens maintaining an account on the short-form video platform. YouTube and Instagram followed closely behind at around 65% and 66% respectively.

“Policymakers need to look at TikTok as a systemic social media issue and create effective measures that protect children online,” said Dr. Jason Nagata, a pediatrician at UCSF Benioff Children’s Hospitals and the lead author of the study, in a statement. “TikTok is the most popular social media platform for children, yet kids reported having more than three different social media accounts, including Instagram and Snapchat.”

Notable gender differences emerged in platform preferences. Female adolescents gravitated toward TikTok, Snapchat, Instagram, and Pinterest, while their male counterparts showed stronger affinity for YouTube and Reddit. This digital divide hints at how social media may be shaping different aspects of adolescent development and socialization between genders.

Among the study’s more concerning findings was that 6.3% of young social media users admitted to maintaining “secret” accounts hidden from parental oversight. These covert profiles, sometimes dubbed “Finstas” (fake Instagram accounts), represent a digital double life that could put vulnerable youth at risk while hampering parents’ ability to protect their children online.

Signs of problematic use and potential addiction emerged as significant concerns. Twenty-five percent of children with social media accounts reported often thinking about social media apps, and another 25% said they use the apps to forget about their problems. Moreover, 17% of users tried to reduce their social media use but couldn’t, while 11% reported that excessive use had negatively impacted their schoolwork.

“Our study revealed a quarter of children reported elements of addiction while using social media, with some as young as eleven years old,” Nagata explained. “The research shows underage social media use is linked with greater symptoms of depression, eating disorders, ADHD, and disruptive behaviors. When talking about social media usage and policies, we need to prioritize the health and safety of our children.”

Recent legislative efforts, including the federal Protecting Kids on Social Media Act and various state-level initiatives, aim to strengthen safeguards around youth social media use. The U.S. Surgeon General has called for more robust age verification systems and warning labels on social media platforms, highlighting the growing recognition of this issue as a public health concern.

To address these challenges, medical professionals recommend structured approaches to managing screen time. The American Academy of Pediatrics has developed the Family Media Plan, providing families with tools to schedule both online and offline activities effectively.

“Every parent and family should have a family media plan to ensure children and adults stay safe online and develop a healthy relationship with screens and social media,” said Nagata, who practices this approach with his own children. “Parents can create strong relationships with their children by starting open conversations and modeling good behaviors.”

As social media continues evolving at breakneck speed, this research, published in Academic Pediatrics, provides a crucial snapshot of how the youngest generation navigates the digital landscape. The timing proves particularly relevant as the Supreme Court prepares to hear arguments about Congress’ TikTok ban, set to take effect January 19th. While the case primarily centers on national security concerns, the study’s findings suggest that children’s welfare should be an equally important consideration in platform regulation.

Source : https://studyfinds.org/most-pre-teens-already-deep-in-social-media/

Warning: Your pooch’s smooches really could make you quite sick

(Credit: © Natalia Skripnikova | Dreamstime.com)

39% of Salmonella strains found in dogs carry a gene that lets seemingly healthy animals keep spreading the bacteria, researchers warn
UNIVERSITY PARK, Pa. — Next time your furry friend gives you those irresistible puppy dog eyes, you might want to think twice before sharing your snack. That’s because scientists say that household dogs could be silent carriers of dangerous antibiotic-resistant Salmonella bacteria, potentially putting their human families at risk.

Most pet owners know to wash their hands after handling raw pet food or cleaning up after their dogs, but researchers at Pennsylvania State University have uncovered a concerning trend: household dogs can carry and spread drug-resistant strains of Salmonella even when they appear perfectly healthy. This finding is particularly worrisome because these resistant bacteria can make treating infections much more challenging in both animals and humans.

The research takes on added significance considering that over half of U.S. homes include dogs. “We have this close bond with companion animals in general, and we have a really close interface with dogs,” explains Sophia Kenney, the study’s lead author and doctoral candidate at Penn State, in a statement. “We don’t let cows sleep in our beds or lick our faces, but we do dogs.”

To investigate this concerning possibility, the research team employed a clever detective-like approach. They first tapped into an existing network of veterinary laboratories that regularly test animals for various diseases. They identified 87 cases where dogs had tested positive for Salmonella between May 2017 and March 2023. These weren’t just random samples: they came from real cases where veterinarians had submitted samples for testing, whether the dogs showed symptoms or not.

The scientists then did something akin to matching fingerprints. For each dog case they found, they searched a national database of human Salmonella infections, looking for cases that occurred in the same geographic areas around the same times. This database, maintained by the National Institutes of Health, is like a library of bacterial information collected from patients across the country. Through this matching process, they identified 77 human cases that could potentially be connected to the dog infections.

The research team then used advanced DNA sequencing technology to analyze each bacterial sample. This allowed them to not only identify different varieties of Salmonella but also determine how closely related the bacteria from dogs were to those found in humans. They specifically looked for two key things: genes that make the bacteria resistant to antibiotics, and genes that help the bacteria cause disease.

What they found was eye-opening. Among the dog samples, they discovered 82 cases of the same type of Salmonella that commonly causes human illness. More concerning was that many of these bacterial strains carried genes making them resistant to important antibiotics, the same medicines doctors rely on to treat serious infections.

In particular, 16 of the human cases were found to be very closely related to six different dog-associated strains. While this doesn’t definitively prove the infections spread from dogs to humans, it’s like finding matching puzzle pieces that suggest a connection. The researchers also discovered that 39% of the dog samples contained a special gene called shdA, which allows the bacteria to survive longer in the dog’s intestines. This means infected dogs could potentially spread the bacteria through their waste for extended periods without appearing sick themselves.

The bacteria showed impressive diversity, with researchers identifying 31 different varieties in dogs alone. Some common types found in both dogs and humans included strains known as Newport, Typhimurium, and Enteritidis — names that might not mean much to the average person but are well-known to health officials for causing human illness.

The findings carry real-world implications. Study co-author Nkuchia M’ikanatha, lead epidemiologist for the Pennsylvania Department of Health, points to a recent outbreak where pig ear pet treats sickened 154 people across 34 states with multidrug-resistant Salmonella. “This reminds us that simple hygiene practices such as hand washing are needed to protect both our furry friends and ourselves — our dogs are family but even the healthiest pup can carry Salmonella,” he notes.

The historical context adds another layer to the findings. According to researchers, Salmonella has been intertwined with human history since agriculture began, potentially shadowing humanity for around 10,000 years alongside animal domestication.

While the study reveals concerning patterns about antibiotic resistance and disease transmission, lead researcher Erika Ganda emphasizes that not all bacteria are harmful. “Bacteria are never entirely ‘bad’ or ‘good’ — their role depends on the context,” she explains. “While some bacteria, like Salmonella, can pose serious health risks, others are essential for maintaining our health and the health of our pets.”

Of course, this doesn’t mean we should reconsider having dogs as pets. Instead, scientists say just be smart, and maybe try not to let your pooch kiss you on the lips.

“Several studies highlight the significant physical and mental health benefits of owning a dog, including reduced stress and increased physical activity,” Ganda notes. “Our goal is not to discourage pet ownership but to ensure that people are aware of potential risks and take simple steps, like practicing good hygiene, to keep both their families and their furry companions safe.”

Source : https://studyfinds.org/dogs-drug-resistant-salmonella/

‘Super Scoopers’ dumping ocean water on the Los Angeles fires: Why using saltwater is typically a last resort

A Croatian Air Force CL-415 Super Scooper firefighting aircraft in flight. (Photo by crordx on Shutterstock)

Firefighters battling the deadly wildfires that raced through the Los Angeles area in January 2025 have been hampered by a limited supply of freshwater. So, when the winds are calm enough, skilled pilots flying planes aptly named Super Scoopers are skimming off 1,500 gallons of seawater at a time and dumping it with high precision on the fires.

Using seawater to fight fires can sound like a simple solution – the Pacific Ocean has a seemingly endless supply of water. In emergencies like the one Southern California is facing, it’s often the only quick solution, though the operation can be risky amid ocean swells.

But seawater also has downsides.

Saltwater corrodes firefighting equipment and may harm ecosystems, especially those like the chaparral shrublands around Los Angeles that aren’t normally exposed to seawater. Gardeners know that small amounts of salt – added, say, as fertilizer – do not harm plants, but excessive salts can stress and kill plants.

While the consequences of adding seawater to ecosystems are not yet well understood, we can gain insights on what to expect by considering the effects of sea-level rise.

A seawater experiment in a coastal forest

As an ecosystem ecologist at the Smithsonian Environmental Research Center, I lead a novel experiment called TEMPEST that was designed to understand how and why historically salt-free coastal forests react to their first exposures to salty water.

Sea level has risen by an average of about 8 inches globally over the past century, and that water has pushed salty water into U.S. forests, farms and neighborhoods that had previously known only freshwater. As the rate of sea-level rise accelerates, storms push seawater ever farther onto the dry land, eventually killing trees and creating ghost forests, a result of climate change that is widespread in the U.S. and globally.

In our TEMPEST test plots, we pump salty water from the nearby Chesapeake Bay into tanks, then sprinkle it on the forest soil surface fast enough to saturate the soil for about 10 hours at a time. This simulates a surge of salty water during a big storm.

Our coastal forest showed little effect from the first 10-hour exposure to salty water in June 2022 and grew normally for the rest of the year. We increased the exposure to 20 hours in June 2023, and the forest still appeared mostly unfazed, although the tulip poplar trees were drawing water from the soil more slowly, which may be an early warning signal.

Things changed after a 30-hour exposure in June 2024. The leaves of tulip poplar in the forests started to brown in mid-August, several weeks earlier than normal. By mid-September the forest canopy was bare, as if winter had set in. These changes did not occur in a nearby plot that we treated the same way, but with freshwater rather than seawater.

The initial resilience of our forest can be explained in part by the relatively low amount of salt in the water in this estuary, where water from freshwater rivers and a salty ocean mix. Rain that fell after the experiments in 2022 and 2023 washed salts out of the soil.

But a major drought followed the 2024 experiment, so salts lingered in the soil then. The trees’ longer exposure to salty soils after our 2024 experiment may have exceeded their ability to tolerate these conditions.

Seawater being dumped on the Southern California fires is full-strength, salty ocean water. And conditions there have been very dry, particularly compared with our East Coast forest plot.

Changes evident in the ground

Our research group is still trying to understand all the factors that limit the forest’s tolerance to salty water, and how our results apply to other ecosystems such as those in the Los Angeles area.

Tree leaves turning from green to brown well before fall was a surprise, but there were other surprises hidden in the soil below our feet.

Rainwater percolating through the soil is normally clear, but about a month after the first and only 10-hour exposure to salty water in 2022, the soil water turned brown and stayed that way for two years. The brown color comes from carbon-based compounds leached from dead plant material. It’s a process similar to making tea.

Our lab experiments suggest that salt was causing clay and other particles to disperse and move about in the soil. Such changes in soil chemistry and structure can persist for many years.

Source : https://studyfinds.org/super-scoopers-dumping-ocean-water-los-angeles-fires/

An eye for an eye: People agree about the values of body parts across cultures and eras

(Credit: © Kateryna Chyzhevska | Dreamstime.com)

The Bible’s lex talionis – “Eye for eye, tooth for tooth, hand for hand, foot for foot” (Exodus 21:24-27) – has captured the human imagination for millennia. This idea of fairness has been a model for ensuring justice when bodily harm is inflicted.

Thanks to the work of linguists, historians, archaeologists and anthropologists, researchers know a lot about how different body parts are appraised in societies both small and large, from ancient times to the present day.

But where did such laws originate?

According to one school of thought, laws are cultural constructions – meaning they vary across cultures and historical periods, adapting to local customs and social practices. By this logic, laws about bodily damage would differ substantially between cultures.

Our new study explored a different possibility – that laws about bodily damage are rooted in something universal about human nature: shared intuitions about the value of body parts.

Do people across cultures and throughout history agree on which body parts are more or less valuable? Until now, no one had systematically tested whether body parts are valued similarly across space, time and levels of legal expertise – that is, among laypeople versus lawmakers.

We are psychologists who study evaluative processes and social interactions. In previous research, we have identified regularities in how people evaluate different wrongful actions, personal characteristics, friends and foods. The body is perhaps a person’s most valuable asset, and in this study we analyzed how people value its different parts. We investigated links between intuitions about the value of body parts and laws about bodily damage.

How critical is a body part or its function?

We began with a simple observation: Different body parts and functions have different effects on the odds that a person will survive and thrive. Life without a toe is a nuisance. But life without a head is impossible. Might people intuitively understand that different body parts have different values?

Knowing the value of body parts gives you an edge. For example, if you or a loved one has suffered multiple injuries, you could treat the most valuable body part first, or allocate a greater share of limited resources to its treatment.

This knowledge could also play a role in negotiations when one person has injured another. When person A injures person B, B or B’s family can claim compensation from A or A’s family. This practice appears around the world: among the Mesopotamians, the Chinese during the Tang dynasty, the Enga of Papua New Guinea, the Nuer of Sudan, the Montenegrins and many others. The Anglo-Saxon word “wergild,” meaning “man price,” now refers generally to the practice of paying compensation for body parts.

But how much compensation is fair? Claiming too little leads to loss, while claiming too much risks retaliation. To walk the fine line between the two, victims would claim compensation in Goldilocks fashion: just right, based on the consensus value that victims, offenders and third parties in the community attach to the body part in question.

This Goldilocks principle is readily apparent in the exact proportionality of the lex talionis – “eye for eye, tooth for tooth.” Other legal codes dictate precise values of different body parts but do so in money or other goods. For example, the Code of Ur-Nammu, written 4,100 years ago in ancient Nippur, present-day Iraq, states that a man must pay 40 shekels of silver if he cuts off another man’s nose, but only 2 shekels if he knocks out another man’s tooth.

Testing the idea across cultures and time

If people have intuitive knowledge of the values of different body parts, might this knowledge underpin laws about bodily damage across cultures and historical eras?

To test this hypothesis, we conducted a study involving 614 people from the United States and India. The participants read descriptions of various body parts, such as “one arm,” “one foot,” “the nose,” “one eye” and “one molar tooth.” We chose these body parts because they were featured in legal codes from five different cultures and historical periods that we studied: the Law of Æthelberht from Kent, England, in 600 C.E., the Guta lag from Gotland, Sweden, in 1220 C.E., and modern workers’ compensation laws from the United States, South Korea and the United Arab Emirates.

Participants answered one question about each body part they were shown. We asked some how difficult it would be for them to function in daily life if they lost various body parts in an accident. Others we asked to imagine themselves as lawmakers and determine how much compensation an employee should receive if that person lost various body parts in a workplace accident. Still others we asked to estimate how angry another person would feel if the participant damaged various parts of the other’s body. While these questions differ, they all rely on assessing the value of different body parts.

To determine whether untutored intuitions underpin laws, we didn’t include people who had college training in medicine or law.

Then we analyzed whether the participants’ intuitions matched the compensations established by law.

Our findings were striking. The values placed on body parts by both laypeople and lawmakers were largely consistent. The more highly American laypeople valued a given body part, the more valuable that part also appeared to Indian laypeople; to American, Korean and Emirati lawmakers; to King Æthelberht; and to the authors of the Guta lag. For example, laypeople and lawmakers across cultures and over centuries generally agree that the index finger is more valuable than the ring finger, and that one eye is more valuable than one ear.

But do people value body parts accurately, in a way that corresponds with their actual functionality? There are some hints that, yes, they do. For example, laypeople and lawmakers regard the loss of a single part as less severe than the loss of multiples of that part. In addition, laypeople and lawmakers regard the loss of a part as less severe than the loss of the whole; the loss of a thumb is less severe than the loss of a hand, and the loss of a hand is less severe than the loss of an arm.

Additional evidence of accuracy can be gleaned from ancient laws. For example, linguist Lisi Oliver notes that in Barbarian Europe, “wounds that may cause permanent incapacitation or disability are fined higher than those which may eventually heal.”

Although people generally agree in valuing some body parts more than others, some sensible differences may arise. For instance, sight would be more important for someone making a living as a hunter than as a shaman. The local environment and culture might also play a role. For example, upper body strength could be particularly important in violent areas, where one needs to defend oneself against attacks. These differences remain to be investigated.

Source : https://studyfinds.org/values-of-body-parts-across-cultures-and-eras/

One juice, three benefits: How elderberry could transform metabolism in just 7 days

(Photo credit: © Anna Komisarenko | Dreamstime.com)

Small study demonstrates the enormous fat-burning and gut-boosting powers of an ‘underappreciated’ berry

In an era where 74% of Americans are considered overweight and 40% have obesity, scientists have discovered that an ancient berry might offer modern solutions. Research from Washington State University reveals that elderberry juice could help regulate blood sugar levels and improve the body’s ability to burn fat, while also promoting beneficial gut bacteria.

Elderberries have long been used in traditional medicine, but this new research provides scientific evidence for their metabolic benefits. The study, published in the journal Nutrients, demonstrates that consuming elderberry juice for just one week led to significant improvements in how the body processes sugar and burns fat.

“Elderberry is an underappreciated berry, commercially and nutritionally,” says Patrick Solverson, an assistant professor in WSU’s Department of Nutrition and Exercise Physiology, in a statement. “We’re now starting to recognize its value for human health, and the results are very exciting.”

Solverson and his team recruited 18 overweight but otherwise healthy adults for this carefully controlled experiment. Most participants were women, with an average age of 40 years and an average body mass index (BMI) of 29.12, placing them in the overweight category.

This wasn’t your typical “drink this and tell us how you feel” study. Instead, the researchers implemented a sophisticated crossover design where participants served as their own control group. Each person completed two one-week periods: one drinking elderberry juice and another drinking a placebo beverage that looked and tasted similar but lacked the active compounds. A three-week “washout” period separated these phases to ensure no carryover effects.

During the study, participants consumed 355 grams (about 12 ounces) of either elderberry juice or placebo daily, split between morning and evening doses. The elderberry juice provided approximately 720 milligrams of beneficial compounds called anthocyanins, which give the berries their deep purple color.

Perhaps most remarkably, after just one week of elderberry juice consumption, participants showed a 24% reduction in blood glucose response following a high-carbohydrate meal challenge. This suggests that elderberry juice might help the body better regulate blood sugar levels, a crucial factor in metabolic health and weight management.

The study also revealed that participants burned more fat both while resting and during exercise when consuming elderberry juice. Using specialized equipment to measure breath gases, researchers found that those drinking elderberry juice burned 27% more fat compared to when they drank the placebo. This increased fat-burning occurred not only during rest but also persisted during a 30-minute moderate-intensity walking test.

But the benefits didn’t stop there. The research team also examined participants’ gut bacteria through stool samples and found that elderberry juice promoted the growth of beneficial bacterial species while reducing less desirable ones. Specifically, it increased levels of bacteria known for producing beneficial compounds called short-chain fatty acids, which play essential roles in metabolism and gut health.

What makes elderberry particularly special is its exceptionally high concentration of anthocyanins. According to Solverson, a person would need to consume four cups of blackberries to match the anthocyanin content found in just 6 ounces of elderberry juice. These compounds are believed to be responsible for the berry’s anti-inflammatory, anti-diabetic, and antimicrobial effects.
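The figures quoted in the article allow a quick back-of-envelope check of that comparison. The sketch below uses only numbers stated above (a 12-ounce daily dose containing roughly 720 mg of anthocyanins, and the claim that 6 ounces of juice matches four cups of blackberries); the per-cup blackberry value it derives is an implication of those quoted figures, not an independently sourced measurement.

```python
# Back-of-envelope arithmetic using only the figures quoted in the article.
DAILY_JUICE_FL_OZ = 12        # "about 12 ounces" of juice per day
DAILY_ANTHOCYANINS_MG = 720   # approximate anthocyanins in the daily dose

# Anthocyanin density of the juice implied by the daily dose:
mg_per_fl_oz = DAILY_ANTHOCYANINS_MG / DAILY_JUICE_FL_OZ  # 60 mg per fl oz

# The article equates 6 oz of juice with four cups of blackberries:
six_oz_dose_mg = 6 * mg_per_fl_oz                  # anthocyanins in 6 oz of juice
implied_mg_per_cup_blackberries = six_oz_dose_mg / 4

print(mg_per_fl_oz, six_oz_dose_mg, implied_mg_per_cup_blackberries)
```

On these numbers, 6 ounces of juice carries about 360 mg of anthocyanins, implying roughly 90 mg per cup of blackberries, which is consistent with the "four cups to match" claim.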

While further research is needed to confirm these effects over longer periods and in larger populations, this study suggests that elderberry juice might offer a practical dietary strategy for supporting metabolic health. It’s worth noting that participants reported no adverse effects from consuming the juice, suggesting it’s both safe and well-tolerated.

The timing of this research coincides with growing consumer interest in elderberry products. While these purple berries have long been popular in European markets, demand in the United States surged during the COVID-19 pandemic and continues to rise. This increasing market presence could make it easier for consumers to access elderberry products if further research continues to support their health benefits.

“Food is medicine, and science is catching up to that popular wisdom,” Solverson notes. “This study contributes to a growing body of evidence that elderberry, which has been used as a folk remedy for centuries, has numerous benefits for metabolic as well as prebiotic health.”

The research team isn’t stopping here. With an additional $600,000 in funding from the U.S. Department of Agriculture, they plan to investigate whether elderberry juice might help people maintain their weight after discontinuing weight loss medications. This could provide a natural solution for one of the most challenging aspects of weight management – maintaining weight loss over time.

As obesity rates continue to climb and are projected to reach 48-55% of American adults by 2050, finding natural, food-based approaches to support metabolic health becomes increasingly important. While elderberry juice shouldn’t be viewed as a magic bullet, this research suggests it might be a valuable addition to a healthy diet and lifestyle approach for managing weight and metabolic health.

Source : https://studyfinds.org/how-elderberry-might-transform-metabolism-in-just-7-days/

From first breath: Male and female brains really do differ at birth

(Credit: © Katrina Trninich | Dreamstime.com)

The age-old debate about differences between male and female brains has taken a dramatic turn with new evidence suggesting these variations begin before a baby’s first cry. In the largest study of its kind, researchers at Cambridge University’s Autism Research Centre have discovered that structural brain differences between the sexes don’t gradually emerge through childhood — they’re already established at birth.

Brain development during the first few weeks of life occurs at a remarkably rapid pace, making this period particularly crucial for understanding how sex differences in the brain emerge and evolve. Previous research has primarily focused on older infants, children, and adults, leaving a significant gap in our understanding of the earliest stages of brain development.

The research team analyzed brain scans of 514 newborns (236 females and 278 males) aged 0-28 days using data from the developing Human Connectome Project. The study, published in the journal Biology of Sex Differences, represents one of the largest and most comprehensive investigations of sex differences in neonatal brain structure to date, addressing a common limitation of past research: small sample sizes.

Male newborns showed larger overall brain volumes compared to females, even after accounting for differences in birth weight. This finding was particularly significant because the research team carefully controlled for body size differences between sexes, a factor that has complicated previous studies in this field.

When controlling for total brain volume, female babies exhibited greater amounts of gray matter — the outer brain tissue containing nerve cell bodies and dendrites responsible for processing and interpreting information, such as sensation, perception, learning, speech, and cognition. Meanwhile, male infants had higher volumes of white matter, which consists of long nerve fibers (axons) that connect different brain regions together.

“Our study settles an age-old question of whether male and female brains differ at birth,” says lead author Yumnah Khan, a PhD student at the Autism Research Centre, in a statement. “We know there are differences in the brains of older children and adults, but our findings show that they are already present in the earliest days of life.”

Several specific brain regions showed notable differences between males and females. Female newborns had larger volumes in areas related to memory and emotional regulation, while male infants showed greater volume in regions involved in sensory processing and motor control.

Dr. Alex Tsompanidis, who supervised the study, emphasizes its methodological rigor: “This is the largest such study to date, and we took additional factors into account, such as birth weight, to ensure that these differences are specific to the brain and not due to general size differences between the sexes.”

The research team is now investigating potential prenatal factors that might contribute to these differences. “To understand why males and females show differences in their relative grey and white matter volume, we are now studying the conditions of the prenatal environment, using population birth records, as well as in vitro cellular models of the developing brain,” explains Dr. Tsompanidis.

Importantly, the researchers stress that these findings represent group averages rather than individual characteristics.

“The differences we see do not apply to all males or all females, but are only seen when you compare groups of males and females together,” says Dr. Carrie Allison, Deputy Director of the Autism Research Centre. “There is a lot of variation within, and a lot of overlap between, each group.”

These findings mark a significant step forward in understanding early brain development, while raising new questions about the role of prenatal factors in shaping neurological differences. The research team’s ongoing investigations into prenatal conditions and cellular models may soon provide even more insights into how these sex-based variations emerge.

“These differences do not imply the brains of males and females are better or worse. It’s just one example of neurodiversity,” says Professor Simon Baron-Cohen, Director of the Autism Research Centre. “This research may be helpful in understanding other kinds of neurodiversity, such as the brain in children who are later diagnosed as autistic, since this is diagnosed more often in males.”

Source : https://studyfinds.org/how-male-and-female-brains-differ-at-birth/

Gender shock: Study reveals men, not women, make more emotional money choices

(Credit: © Yuri Arcurs | Dreamstime.com)

When it comes to making financial decisions, conventional wisdom suggests keeping emotions out of the equation. But new research reveals that men, contrary to traditional gender stereotypes, may be significantly more susceptible to letting emotions influence their financial choices than women.

A study led by the University of Essex challenges long-held assumptions about gender and emotional decision-making. The research explores how emotions generated in one context can influence decisions in completely unrelated situations – a phenomenon known as the emotional carryover effect.

“These results challenge the long-held stereotype that women are more emotional and open new avenues for understanding how emotions influence decision-making across genders,” explains lead researcher Dr. Nikhil Masters from Essex’s Department of Economics.

Working with colleagues from the Universities of Bournemouth and Nottingham, Masters designed an innovative experiment comparing how different types of emotional stimuli affect people’s willingness to take financial risks. They contrasted a traditional laboratory approach targeting a single emotion (fear) with a more naturalistic stimulus based on real-world events that could trigger multiple emotional responses.

The researchers recruited 186 university students (100 women and 86 men) and randomly assigned them to one of three groups. One group watched a neutral nature documentary about the Great Barrier Reef. Another group viewed a classic fear-inducing clip from the movie “The Shining,” showing a boy searching for his mother in an empty corridor with tense background music. The third group watched actual news footage about the BSE crisis (commonly known as “mad cow disease”) from the 1990s, a real food safety scare that generated widespread public anxiety.

After watching their assigned videos, participants completed decision-making tasks involving both risky and ambiguous financial choices using real money. In the risky scenario, they had to decide between taking guaranteed amounts of money or gambling on a lottery with known 50-50 odds. The ambiguous scenario was similar, but participants weren’t told the odds of winning.

The results revealed striking gender differences. Men who watched either the horror movie clip or the BSE footage subsequently made more conservative financial choices compared to those who watched the neutral nature video. This effect was particularly pronounced for those who saw the BSE news footage, and even stronger when the odds were ambiguous rather than clearly defined.

Perhaps most surprisingly, women’s financial decisions remained remarkably consistent regardless of which video they watched. The researchers found that while women reported experiencing similar emotional responses to the videos as men did, these emotions didn’t carry over to influence their subsequent financial choices.

The study challenges previous assumptions about how specific emotions like fear influence risk-taking behavior. While earlier studies suggested that fear directly leads to more cautious decision-making, this new research indicates the relationship may be more complex. Even when the horror movie clip successfully induced fear in participants, individual variations in reported fear levels didn’t correlate with their financial choices.

Instead, the researchers discovered that changes in positive emotions may play a more important role than previously thought. When positive emotions decreased after watching either the horror clip or BSE footage, male participants became more risk-averse in their financial decisions.

The study also demonstrated that emotional effects on decision-making can be even stronger when using realistic stimuli that generate multiple emotions simultaneously, compared to artificial laboratory conditions designed to induce a single emotion. This suggests that real-world emotional experiences may have more powerful influences on our financial choices than controlled laboratory studies have indicated.

The research team is now investigating why only men appear to be affected by these carryover effects. “Previous research has shown that emotional intelligence helps people to manage their emotions more effectively. Since women generally score higher on emotional intelligence tests, this could explain the big differences we see between men and women,” explains Dr. Masters.

These findings could have significant implications for understanding how major news events or crises might affect financial markets differently across gender lines. They also suggest the potential value of implementing “cooling-off” periods for important financial decisions, particularly after exposure to emotionally charged events or information.

“We don’t make choices in a vacuum and a cooling-off period might be crucial after encountering emotionally charged situations,” says Dr. Masters, “especially for life-changing financial commitments like buying a home or large investments.”

Source : https://studyfinds.org/study-men-not-women-make-more-emotional-money-choices/

Danger in drinking water? Fluoride linked to lower IQ scores in children

(Photo by Tatevosian Yana on Shutterstock)

In a discovery that could reshape how we think about water fluoridation, researchers have uncovered a troubling pattern across 10 countries and nearly 21,000 children: higher fluoride exposure consistently correlates with lower IQ scores. The meta-analysis raises critical questions about the balance between preventing tooth decay and protecting cognitive development.

While fluoride has long been added to public drinking water systems to prevent tooth decay, this research suggests the need to carefully weigh the dental health benefits against potential developmental risks. In the United States, the recommended fluoride concentration for community water systems is 0.7 mg/L, with regulatory limits set at 4.0 mg/L by the Environmental Protection Agency (EPA).

The research team, led by scientists from the National Institute of Environmental Health Sciences, examined studies from ten different countries, though notably none from the United States. The majority of the research (45 studies) came from China, with others from Canada, Denmark, India, Iran, Mexico, New Zealand, Pakistan, Spain, and Taiwan.

Published in JAMA Pediatrics, the findings paint a consistent picture across different types of analyses. When comparing groups with higher versus lower fluoride exposure, children in the higher exposure groups showed significantly lower IQ scores. For every 1 mg/L increase in urinary fluoride levels, researchers observed an average decrease of 1.63 IQ points.

This effect size might seem small, but population-level impacts can be substantial. The researchers note that a five-point decrease in population IQ would nearly double the number of people classified as intellectually disabled, highlighting the potential public health significance of their findings.
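The "nearly double" claim follows from basic properties of the normal distribution, which IQ scores are scaled to (mean 100, standard deviation 15). A quick check of the arithmetic, using Python's standard library (this is illustrative back-of-the-envelope math, not the study's model):

```python
from statistics import NormalDist

# IQ is normed to Normal(100, 15). Shifting the mean down by 5 points
# changes the share of the population below the common intellectual-
# disability cutoff of 70.
def share_below_70(mean_iq, sd=15):
    return NormalDist(mean_iq, sd).cdf(70)

baseline = share_below_70(100)   # about 2.3%
shifted = share_below_70(95)     # about 4.8% after a 5-point drop
print(shifted / baseline)        # ratio is roughly 2.1, i.e. nearly double
```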

The study employed three different analytical approaches to examine the relationship between fluoride and IQ. First, they compared mean IQ scores between groups with different exposure levels. Second, they analyzed dose-response relationships to understand how IQ scores changed with increasing fluoride concentrations. Finally, they examined individual-level data to calculate precise estimates of IQ changes per unit increase in fluoride exposure.

Of particular concern, the inverse relationship between fluoride exposure and IQ remained significant even at relatively low exposure levels. When researchers restricted their analysis to studies with fluoride concentrations below 2 mg/L (closer to levels found in fluoridated water systems), they still found evidence of cognitive impacts.

The implications of these findings are especially relevant for the United States, where fluoridated water serves about 75% of people using community water systems. While no U.S. studies were included in this analysis, the researchers note that significant inequalities exist in American water fluoride levels, particularly affecting Hispanic and Latino communities.

The study’s findings arrive at a crucial moment in public health policy. While water fluoridation has been hailed as one of the great public health achievements of the 20th century for its role in preventing tooth decay, this research suggests the need for a careful reassessment of fluoride exposure guidelines, particularly for vulnerable populations like pregnant women and young children.

Source : https://studyfinds.org/danger-in-drinking-water-flouride-linked-to-lower-iq-scores-in-children/

The disturbing trend discovered in 166,534 movies over past 50 years

(Credit: Prostock-studio on Shutterstock)

Movies are getting deadlier – at least in terms of their dialogue. A new study analyzing over 160,000 English-language films has revealed a disturbing trend: characters are talking about murder and killing more frequently than ever before, even in movies that aren’t focused on crime.

Researchers from the University of Maryland, University of Pennsylvania, and The Ohio State University examined movie subtitles spanning five decades, from 1970 to 2020, to track how often characters used words related to murder and killing. What they found was a clear upward trajectory that mirrors previous findings about increasing visual violence in films.

“Characters in noncrime movies are also talking more about killing and murdering today than they did 50 years ago,” says Brad Bushman, corresponding author of the study and professor of communication at The Ohio State University, in a statement. “Not as much as characters in crime movies, and the increase hasn’t been as steep. But it is still happening. We found increases in violence across all genres.”

By applying sophisticated natural language processing techniques, the team calculated the percentage of “murderous verbs” – variations of words like “kill” and “murder” – compared to the total number of verbs used in movie dialogue. They deliberately took a conservative approach, excluding passive phrases like “he was killed,” negations such as “she didn’t kill,” and questions like “did he murder someone?” to focus solely on characters actively discussing committing violent acts.
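A heavily simplified sketch of that filtering idea is below. The study used full natural language processing over subtitle files and normalized by total verb count; this toy version just flags dialogue lines containing an active murderous verb, and the regex patterns and sample lines are invented for illustration:

```python
import re

# Simplified stand-in for the study's method: count active uses of
# "kill"/"murder", excluding questions, negations, and passive voice.
MURDER_VERBS = re.compile(r"\b(kill(s|ed|ing)?|murder(s|ed|ing)?)\b", re.I)
NEGATION = re.compile(r"(n't|\bnot\b|\bnever\b)\s+(kill|murder)", re.I)
PASSIVE = re.compile(r"\b(was|were|been|be|is|are)\s+(kill|murder)\w*", re.I)

def murderous_verb_share(lines):
    """Fraction of dialogue lines containing an active murderous verb."""
    if not lines:
        return 0.0
    hits = 0
    for line in lines:
        if line.strip().endswith("?"):                      # skip questions
            continue
        if NEGATION.search(line) or PASSIVE.search(line):   # skip negations/passives
            continue
        if MURDER_VERBS.search(line):
            hits += 1
    return hits / len(lines)

dialogue = [
    "I'll kill him myself.",      # counted: active verb
    "He was killed last night.",  # excluded: passive
    "She didn't kill anyone.",    # excluded: negation
    "Did he murder someone?",     # excluded: question
    "Let's get some dinner.",
]
print(murderous_verb_share(dialogue))  # 0.2
```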

“Our findings suggest that references to killing and murder in movie dialogue not only occur far more frequently than in real life but are also increasing over time,” explains Babak Fotouhi, lead author of the study and adjunct assistant research professor in the College of Information at the University of Maryland.

“We focused exclusively on murderous verbs in our analysis to establish a lower bound in our reporting,” notes Amir Tohidi, a postdoctoral researcher at the University of Pennsylvania. “Including less extreme forms of violence would result in a higher overall count of violence.”

Nearly 7% of all movies analyzed contained these murderous verbs in their dialogue. The findings demonstrate a steady increase in such language over time, particularly in crime-focused films. Male characters showed the strongest upward trend in violent dialogue, though female characters also demonstrated a significant increase in non-crime movies.

This rising tide of violent speech wasn’t confined to obvious genres like action or thriller films. Even movies not centered on crime showed a measurable uptick in murder-related dialogue over the 50-year period studied. This suggests that casual discussion of lethal violence has become more normalized across all types of movies, potentially contributing to what researchers call “mean world syndrome” – where heavy media consumption leads people to view the world as more dangerous and threatening than it actually is.

The findings align with previous research showing that gun violence in top movies has more than doubled since 1950, and more than tripled in PG-13 films since that rating was introduced in 1985. What makes this new study particularly noteworthy is its massive scale – examining dialogue from more than 166,000 films provides a much more comprehensive picture than earlier studies that looked at smaller samples.

Movie studios operate in an intensely competitive market where they must fight for audience attention. “Movies are trying to compete for the audience’s attention and research shows that violence is one of the elements that most effectively hooks audiences,” Fotouhi explains.

“The evidence suggests that it is highly unlikely we’ve reached a tipping point,” Bushman warns. Decades of research have demonstrated that exposure to media violence can influence aggressive behavior and mental health in both adults and children. This can manifest in various ways, from direct imitation of observed violent acts to a general desensitization toward violence and decreased empathy for others.

As content platforms continue to multiply and screen time increases, particularly among young people, these findings raise important questions about the cumulative impact of exposure to violent dialogue in entertainment media. The researchers emphasize that their results highlight the crucial need for promoting mindful consumption and media literacy, especially among vulnerable populations like children.

Source : https://studyfinds.org/movie-violence-dialogue-disturbing-trend/

Even small diet tweaks can lead to sustainable weight loss – here’s how

Woman stepping on scale (© Siam – stock.adobe.com)

It’s a well-known fact that to lose weight, you either need to eat less or move more. But how many calories do you really need to cut out of your diet each day to lose weight? It may be less than you think.

To determine how much energy (calories) your body requires, you need to calculate your total daily energy expenditure (TDEE). It comprises your basal metabolic rate (BMR) – the energy needed to sustain your body’s metabolic processes at rest – and the energy you burn through physical activity. Many online calculators can help determine your daily calorie needs.

If you reduce your energy intake (or increase the amount you burn through exercise) by 500-1,000 calories per day, you’ll see a weekly weight loss of around one to two pounds (0.45-0.9kg).
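The arithmetic behind those numbers can be sketched in a few lines. This uses the widely cited Mifflin-St Jeor equation for BMR and the rough rule of thumb that a pound of body fat stores about 3,500 kcal; the figures are illustrative estimates, not medical advice, and real energy expenditure adapts over time:

```python
# Rough TDEE estimate: Mifflin-St Jeor BMR times an activity multiplier,
# then projecting weekly weight change from a daily calorie deficit using
# the common ~3,500 kcal-per-pound rule of thumb.
ACTIVITY = {"sedentary": 1.2, "light": 1.375, "moderate": 1.55, "active": 1.725}

def bmr_mifflin(weight_kg, height_cm, age, sex):
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base + (5 if sex == "male" else -161)

def tdee(weight_kg, height_cm, age, sex, activity="moderate"):
    return bmr_mifflin(weight_kg, height_cm, age, sex) * ACTIVITY[activity]

def weekly_loss_lb(daily_deficit_kcal):
    return daily_deficit_kcal * 7 / 3500  # ~3,500 kcal per pound of fat

print(round(tdee(70, 175, 30, "male")))  # estimated maintenance kcal/day
print(weekly_loss_lb(500))               # 1.0 lb/week
print(weekly_loss_lb(150))               # 0.3 lb/week from a small deficit
```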

But studies show that even small calorie deficits (of 100-200 calories daily) can lead to long-term, sustainable weight-loss success. And although you might not lose as much weight in the short-term by only decreasing calories slightly each day, these gradual reductions are more effective than drastic cuts as they tend to be easier to stick with.

Hormonal changes

When you decrease your calorie intake, the body’s BMR often decreases. This phenomenon is known as adaptive thermogenesis. This adaptation slows down weight loss so the body can conserve energy in response to what it perceives as starvation. This can lead to a weight-loss plateau – even when calorie intake remains reduced.

Caloric restriction can also lead to hormonal changes that influence metabolism and appetite. For instance, thyroid hormones, which regulate metabolism, can decrease – leading to a slower metabolic rate. Additionally, leptin levels drop, reducing satiety, increasing hunger and decreasing metabolic rate.

Ghrelin, known as the “hunger hormone,” also increases when caloric intake is reduced, signaling the brain to stimulate appetite and increase food intake. Higher ghrelin levels make it challenging to maintain a reduced-calorie diet, as the body constantly feels hungrier.

Insulin, which helps regulate blood sugar levels and fat storage, can become more effective – improved insulin sensitivity – when we reduce calorie intake. But insulin levels can also fall, affecting metabolism and leading to a reduction in daily energy expenditure. Cortisol, the stress hormone, can also spike, especially during a significant caloric deficit. This may break down muscle tissue and promote fat retention, particularly around the abdomen.

Lastly, hormones such as peptide YY and cholecystokinin, which make us feel full when we’ve eaten, can decrease when we lower calorie intake. This may make us feel hungrier.

Fortunately, there are many things we can do to address these metabolic adaptations so we can continue losing weight.

Weight loss strategies

Maintaining muscle mass (either through resistance training or eating plenty of protein) is essential to counteract the physiological adaptations that slow weight loss down. This is because muscle burns more calories at rest compared to fat tissue – which may help mitigate decreased metabolic rate.

Gradual caloric restriction (reducing daily calories by only around 200-300 a day), focusing on nutrient-dense foods (particularly those high in protein and fibre), and eating regular meals can all also help to mitigate these hormonal challenges.

Source : https://studyfinds.org/small-diet-tweaks-sustainable-weight-loss/

 

A proven way to stay younger longer — and all it takes is an hour each week

(© New Africa – stock.adobe.com)

Could you find an hour a week to devote to slowing your biological aging? The benefits go beyond adding years to your life – they add life to your years. That hour can also create a sense of purpose, improve mental health, give you a psychological lift, boost your social connectedness, and leave you knowing you’re making the world a better place. All you have to do is volunteer. And if you can find a few hours a week, the benefits are even greater.

A study published in this month’s issue of Social Science & Medicine found that volunteering for as little as an hour a week is linked to slower biological aging.

Biologic age

Biologic age refers to the age of a body’s cells and tissues – and how quickly they are aging – as distinct from the body’s chronologic age. The most common way to assess it, called epigenetic testing, examines how your behaviors and environment have changed the expression of your DNA.

Why volunteering is associated with slower aging

Experts explain that volunteering’s significant effect on biologic aging is multifactorial, with physical, social, and psychological benefits.

Volunteering often includes physical activity, like walking. Social connections are vital; we’re programmed for connectedness. Social connections decrease stress and improve cognitive function. According to the study authors, volunteering can also create a sense of purpose, improve mental health, and buffer any loss of important roles, like spouse or parent, as we age.

Family Volunteering

When my son was six, we volunteered at a soup kitchen in a less-affluent part of Detroit. On the Saturday after Thanksgiving, he was right in the thick of making gallons of turkey soup and hundreds of cheese or peanut butter and jelly sandwiches. Finally, he grabbed his own PB&J and munched out with our guests. It’s one of my favorite memories.

Family volunteering (whatever “family” means to you) is a win for everyone. It strengthens families and communities. When family members unite for a worthy cause, their collective power is greater than just adding together the strengths of individuals.

Children will develop compassion and tolerance. They may acquire new skills. More importantly, volunteering provides models from which children learn to respect and serve others. They discover the gratitude that flows only from giving. Children who volunteer are more likely to volunteer as adults and, later on in life, create their own traditions with their children.

Parents get to spend more time with their kids, instilling important values with action; those values run deeper than words could ever reach. Include your kids in planning. You may discover what’s truly important to them.

Nonprofit agencies, understaffed and overstressed, can do little without volunteers. Virtually everyone can find a nonprofit that matches their passion.

Getting started

To decide if volunteering is right for your family, consider:

  • About what issues are you passionate?
  • What are your children’s ages?
  • Who would you like to help?
  • What does your family enjoy doing together?
  • How frequently can you volunteer?
  • What skills and talents can your family offer?
  • What do you want your family to learn from the experience?

There are innumerable causes in which you can make a difference. About 3.5 million people a year will experience homelessness; about 40 percent are kids. Since 1989, the number of beds available in shelters has tripled. Collect toiletries. Give art and school supplies. Provide clothing and transportation.

Every day, 10% of Americans are hungry. Have a canned food drive. Make bag lunches for kids in a homeless shelter. Have a party – with an entrance fee of a can of food.

The elderly often need help the most. Adopt a grandparent. Deliver food – drive for Meals on Wheels. Look at photos and listen to stories. Give manicures and pedicures. Do seasonal yard work, rake leaves or shovel snow. Write letters. Play board games. Read books or newspapers. Bring your pet to visit. Write life stories. Provide transportation for medical appointments. Run errands. Make small home repairs.

I had elderly neighbors next door. When I cleared snow and ice (which was plentiful) from my car, I’d clear their car as well. Mrs. Neighbor watched through the living room window. Sometime later, she told me that she had a remote device to start and clear her car from inside her home! What can you do but laugh?

Source : https://studyfinds.org/volunteering-proven-way-stay-younger-longer/

‘Simple nasal swab’ could revolutionize childhood asthma treatment

(Credit: © Alena Stalmashonak | Dreamstime.com)

A novel diagnostic test using just a nasal swab could transform how doctors diagnose and treat childhood asthma. Researchers at the University of Pittsburgh have developed this non-invasive approach that, for the first time, allows physicians to precisely identify different subtypes of asthma in children without requiring invasive procedures.

Until now, determining the specific type of asthma a child has typically required bronchoscopy, an invasive procedure performed under general anesthesia to collect lung tissue samples. This limitation has forced doctors to rely on less accurate methods like blood tests and allergy screenings, potentially leading to suboptimal treatment choices.

“Because asthma is a highly variable disease with different endotypes, which are driven by different immune cells and respond differently to treatments, the first step toward better therapies is accurate diagnosis of endotype,” says senior author Dr. Juan Celedón, a professor of pediatrics at the University of Pittsburgh and chief of pulmonary medicine at UPMC Children’s Hospital of Pittsburgh, in a statement.

3 subtypes of asthma

The new nasal swab test analyzes the activity of eight specific genes associated with different types of immune responses in the airways. This genetic analysis reveals which of three distinct asthma subtypes, or endotypes, a patient has: T2-high (involving allergic inflammation), T17-high (showing a different type of inflammatory response), or low-low (exhibiting minimal inflammation of either type).
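The classification logic can be pictured as comparing aggregate expression scores for the two inflammatory signatures against a cutoff. The sketch below is purely illustrative: the scores, threshold, and decision rule are invented, and the actual eight-gene panel and classification method are described in the JAMA paper.

```python
# Purely illustrative sketch of the three-way endotype logic described above.
# Hypothetical inputs: aggregate expression scores for T2- and T17-associated
# gene panels; the threshold is invented for illustration.
def classify_endotype(t2_score, t17_score, threshold=1.0):
    if t2_score >= threshold and t2_score >= t17_score:
        return "T2-high"   # allergic inflammation dominant
    if t17_score >= threshold:
        return "T17-high"  # alternative inflammatory response dominant
    return "low-low"       # minimal inflammation of either type

print(classify_endotype(1.8, 0.4))  # T2-high
print(classify_endotype(0.3, 1.5))  # T17-high
print(classify_endotype(0.2, 0.1))  # low-low
```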

The research team validated their approach across three separate studies involving 459 young people with asthma, focusing particularly on Puerto Rican and African American youth, populations that experience disproportionately higher rates of asthma-related emergency room visits and complications. According to the researchers, Puerto Rican children have emergency department and urgent care visit rates of 23.5% for asthma, while Black children have rates of 26.6% — both significantly higher than the 12.1% rate among non-Hispanic white youth.

The findings, published in JAMA, challenge long-held assumptions about childhood asthma. While doctors have traditionally believed that most cases were T2-high, the nasal swab analysis revealed this type appears in only 23-29% of participants. Instead, T17-high asthma accounted for 35-47% of cases, while the low-low type represented 30-38% of participants.

“These tests allow us to presume whether a child has T2-high disease or not,” explained Celedón. “But they are not 100% accurate, and they cannot tell us whether a child has T17-high or low-low disease. There is no clinical marker for these two subtypes. This gap motivated us to develop better approaches to improve the accuracy of asthma endotype diagnosis.”

Precision medicine for patients

This breakthrough carries significant implications for treatment. Currently, powerful biological medications exist for T2-high asthma, but no available treatments specifically target T17-high or low-low types. The availability of this new diagnostic test could accelerate research into treatments for these previously understudied forms of asthma.

“We have better treatments for T2-high disease, in part, because better markers have propelled research on this endotype,” said Celedón. “But now that we have a simple nasal swab test to detect other endotypes, we can start to move the needle on developing biologics for T17-high and low-low disease.”

The test could also help researchers understand how asthma evolves throughout childhood and adolescence. Celedón noted that one of the “million-dollar questions in asthma” involves understanding why the condition affects children differently as they age.

“Before puberty, asthma is more common in boys, but the incidence of asthma goes up in females in adulthood. Is this related to endotype? Does endotype change over time or in response to treatments? We don’t know,” he says. “But now that we can easily measure endotype, we can start to answer these questions.”

Dr. Gustavo Matute-Bello, acting director of the Division of Lung Diseases at the National Heart, Lung, and Blood Institute, emphasizes the potential impact of this diagnostic advancement. “Having tools to test which biological pathways have a major role in asthma in children, especially those who have a disproportionate burden of disease, may help achieve our goal of improving asthma outcomes,” he says. “This research has the potential to pave the way for more personalized treatments, particularly in minority communities.”

Source : https://studyfinds.org/simple-nasal-swab-could-revolutionize-childhood-asthma-treatment/

Do You Believe in Life After Death? These Scientists Study It.

In an otherwise nondescript office in downtown Charlottesville, Va., a small leather chest sits atop a filing cabinet. Within it lies a combination lock, unopened for more than 50 years. The man who set it is dead.

On its own, the lock is unremarkable — the kind you might use at the gym. The code, a mnemonic of a six-letter word converted into numbers, was known only to the psychiatrist Dr. Ian Stevenson, who set it long before he died, and years before he retired as director of the Division of Perceptual Studies, or DOPS, a parapsychology research unit he founded in 1967 within the University of Virginia’s school of medicine.

Dr. Stevenson called this experiment the Combination Lock Test for Survival. He reasoned that if he could transmit the code to someone from the grave, it might help answer the questions that had consumed him in life: Is communication from the “beyond” possible? Can the personality survive bodily death? Or, simply: Is reincarnation real?

This last conundrum — the survival of consciousness after death — continues to be at the forefront of the division’s research. The team has logged hundreds of cases, from every continent except Antarctica, of children who claim to remember past lives. “And that’s only because we haven’t looked for cases there,” said Dr. Jim Tucker, who has been investigating claims of past lives for more than two decades. He recently retired after serving as the director of DOPS since 2015.

It was an unexpected career path to begin with.

“As far as reincarnation itself goes, I never had any particular interest in it,” said Dr. Tucker, who set out to solely become a child psychiatrist and was, at one point, the head of U.Va.’s Child and Family Psychiatry Clinic. “Even when I was training, it never occurred to me that I’d end up doing this work.”

Now, at 64 years old, after traveling the world to record cases of possible past life recollections, and with books and papers of his own on the subject of past lives, he has left the position.

“There’s a level of stress in medicine, and in academics,” he reflected. “There are always things you should be doing, papers you should be writing, prescriptions you should be giving. I enjoyed my day to day work, both in the clinic and at DOPS, but you reach a point where you’re ready not to have so many responsibilities and demands.”

According to a job listing issued by the medical school, on top of their academic reputation, the ideal candidate to replace Dr. Tucker must have “a track record of rigorous investigation of extraordinary human experiences, such as the mind’s relationship to the body and the possibility of consciousness surviving physical death.”

None of the eight principal team members have the required academic status to undertake the role, making it necessary to find someone externally.

“I think there’s a feeling that it would be rejuvenating for the group to have an outside person come in,” said Dr. Jennifer Payne, vice-chair of research at the department of psychiatry, who leads the selection committee.

Scientists Who Have Strayed From the Usual Path

Dr. Tucker was running a busy practice when he first learned about DOPS. It was 1996 and a local newspaper, The Daily Progress in Charlottesville, had profiled Dr. Stevenson after he received funding to interview individuals about their near-death experiences. Entranced by the pioneering work, Dr. Tucker began volunteering at the division before joining as a permanent researcher.

Each of the division’s researchers has committed their career — and, to some extent, risked their professional reputation — to the study of the so-called paranormal. This includes near-death and out-of-body experiences, altered states of consciousness, and past lives research, which all come under the umbrella of “parapsychology.” They are scientists who have strayed from the usual path.

DOPS is a curious institution. There are only a few other labs in the world undertaking similar lines of research — the Koestler Parapsychology Unit at the University of Edinburgh, for instance — with DOPS being by far the most prominent. The only other major parapsychology unit in the United States was Princeton’s Engineering Anomalies Research Laboratory, or PEAR, which focused on telekinesis and extrasensory perception. That unit was shuttered in 2007.

While it is technically part of U.Va., DOPS occupies four spacious would-be condominiums inside a residential building. It is notably distanced from the university’s leafy main campus, and at least a couple of miles from the medical school.

“Nobody knows we’re here,” said Dr. Bruce Greyson, 78, a former director of DOPS and a professor emeritus of psychiatry and neurobehavioral sciences at U.Va., who started working with Dr. Stevenson in the late 1970s. “Ian was very cautious about that, because he had faced a lot of prejudice,” Dr. Greyson said. “He kept a very low profile.”

Dr. Greyson received a lot of pushback before joining DOPS. He had worked at the University of Michigan for eight years early in his career, but his interest in near-death experiences began to ruffle feathers, much like it had for Dr. Stevenson.

“They told me, point blank, that I wouldn’t have a future there if I did near-death research, because you can’t measure that in a test tube,” he said. “Unless I could quantify it by a biological measure, they didn’t want to hear about it.” He left Michigan for the University of Connecticut, where he spent 11 years, and then found his way to DOPS.

The atmosphere within DOPS is one of studious calm. There are only a few signs of the team’s activities. In the basement laboratory one finds a copper-lined Faraday cage used to assess out-of-body experience subjects, and foam mannequin heads sporting electroencephalogram (EEG) caps. Upstairs, running the full length of the wall in the Ian Stevenson Memorial Library, which boasts over 5,000 books and papers pertaining to past lives research, is a glass display case containing a collection of knives, swords and mallets — weapons described by children who recalled a violent end in their previous life.

“It’s not the actual weapon, but the kind of weapon used,” explained Dr. Tucker. Each object is labeled with intricate, sometimes gory, detail. One display told the story of a young girl from Burma, Ma Myint Thein, who was born with deformities of her fingers and birthmarks across her back and neck. “According to villagers,” the label reads, “the man whose life she remembered being had been murdered, his fingers chopped off and his throat slashed by a sword.” It is accompanied by a photograph of the girl’s hands, her right missing two fingers.

That children who claim to remember past lives are most frequently found in South Asia, where reincarnation is a core tenet of many religious beliefs, has been used by critics to debunk the studies. After all, surely it’s all too easy to find corroborative evidence in places with a pre-existing belief in reincarnation.

The question of life after death has been an existential preoccupation for humans throughout time, however, and reincarnation is a central tenet of belief in many cultures. Buddhism, where there is thought to be a 49-day journey between death and rebirth; Hinduism, with its concept of samsara, the endless cycle; and Native American and West African nations, all share similar core concepts of the soul or spirit moving from one life to the next. Meanwhile, a 2023 Pew Research survey found that a quarter of Americans believe it is “definitely or probably true” that people who have died can be reincarnated.

When it comes to past life claims, the DOPS team works on cases that almost always have come directly from parents.

Common features in children who claim to have led a previous life include verbal precocity and mannerisms at odds with those of the rest of the family. Unexplained phobias or aversions are also thought to carry over from a past existence. In some cases, the memories are strikingly clear: the names, professions and quirks of a different set of relatives, the particularities of the streets they used to live on, and sometimes even obscure historical events — details the child couldn’t possibly have known about.

One of the most famous cases the team worked on was that of James Leininger, an American boy who remembered being a World War II fighter pilot shot down near Japan. The case drew a great deal of attention to DOPS, but also brought with it numerous detractors.

Ben Radford, the deputy editor of Skeptical Inquirer, a magazine dedicated to scientific research, believes that wishful thinking and general death anxiety have fueled an increased interest in reincarnation, and he finds flaws in the DOPS research methodology, which he often dissects in his blog. He said, “The fact is, no matter how sincere the person is, often recovered memories are false.”

‘The Evidence Is Not Flawless’

Remembered by many as a dignified man with a penchant for three-piece suits, Dr. Stevenson lived for his research. He almost never took time off. “I had to swing by the office once on New Year’s Eve and there was one car in the lot, and it was his,” Dr. Tucker recalled.

Born in 1918, Dr. Stevenson was a Canadian who graduated from St. Andrews with a degree in history before studying biochemistry and psychiatry at McGill University. He served as chair of the department of psychiatry at U.Va. for 10 years, until 1967.

By the early 1960s he had become disillusioned by conventional medicine. In an interview with The New York Times in 1999, he said that he had been drawn to studying past lives through his “discontent with other explanations of human personality. I wasn’t satisfied with psychoanalysis or behaviorism or, for that matter, neuroscience. Something seemed to be missing.”

And so he began recording potential cases of reincarnation, which he would come to call “cases of the reincarnation type,” or CORT. It was one of his initial CORT research papers, from a 1966 trip to India, that caught the attention of Chester F. Carlson, the inventor of the technology behind Xerox photocopying machines. It was Mr. Carlson’s generous financial assistance that enabled Dr. Stevenson to leave his role at the medical school and focus full-time on past lives research.

The dean of the medical school at the time, Kenneth Crispell, didn’t approve of this foray into the paranormal. He was happy to see Dr. Stevenson resign from his spot in the department of psychiatry, and, believing in academic freedom, agreed to the formation of a small research division. However, any hope Dr. Crispell had that Dr. Stevenson and his unorthodox ideas would disappear into the academic shadows was quickly dashed: Mr. Carlson died of a heart attack in 1968 and in his will he bequeathed $1 million to Dr. Stevenson’s endeavor.

While not all of the attention was positive in the division’s early years, some individuals in the science community were intrigued. “Either Dr. Stevenson is making a colossal mistake, or he will be known as the Galileo of the 20th century,” the psychiatrist Harold Lief wrote in a 1977 article for the Journal of Nervous and Mental Disease.

To this day, DOPS is financed entirely by private donations. In October it was announced that the division had received the first installment of a $1 million estate gift from The Philip B. Rothenberg Legacy Fund, which will be used to finance early-career researchers. Other supporters have included the Bonner sisters, Priscilla Bonner-Woolfan and Margerie Bonner-Lowry — silent screen actresses of the 1920s, whose endowment continues to fund the DOPS directorship. Another unlikely supporter is the actor John Cleese, who first encountered the division at the Esalen Institute, a retreat and intentional community located in Big Sur, Calif.

“These people are behaving like good scientists,” Mr. Cleese said in a phone interview. “Good scientists are after the truth: they don’t just want to be right. I think it is absolutely astonishing and quite disgraceful, the way that orthodox contemporary, materialistic reductionist theory treats all the things — and there are so many of them — that they can’t begin to explain.”

In the early years of the department, Dr. Stevenson traveled the world extensively, recording more than 2,500 cases of children recalling past lives. In this pre-internet time, discovering so many similar accounts and trends served to strengthen his thesis. The findings from these excursions, collected in Dr. Stevenson’s neat handwriting, are stored by country in filing cabinets and are slowly being digitized.

From this database, researchers have drawn findings they consider noteworthy. The strongest cases, according to the DOPS researchers, have been found in children under the age of 10, and the majority of remembrances tend to occur between the ages of 2 and 6, after which they appear to fade. The median time between death and rebirth is about 16 months, a period the researchers see as a form of intermission. Very often, the child has memories that match up to the life of a deceased relative.

And yet for all of this meticulous work, Dr. Stevenson was aware of the limitations of past lives research. “The evidence is not flawless and it certainly does not compel such a belief,” he explained in a lecture at The University of Southwestern Louisiana (now the University of Louisiana at Lafayette) in 1989. “Even the best of it is open to alternative interpretations, and one can only censure those who say there is no evidence whatsoever.”

“Ian thought reincarnation was the best explanation, but he wasn’t positive,” said Dr. Greyson. “He thought a lot of the cases may be something else. It might be a kind of possession, it might even be delusion. There are lots of different possibilities. It may be clairvoyance, or picking up the information from some other sources that you’re not aware of.”

After spending more than half his life studying past lives, Dr. Stevenson retired from DOPS in 2002, handing the directorial baton to Dr. Greyson. Though he kept a watchful eye on proceedings from afar, offering guidance when solicited, he never set foot in the division again. He died of pneumonia five years later, at 88 years old.

‘Many of the Memories Are Difficult’

Each year DOPS receives more than 100 emails from parents regarding something their child has said. Reaching out to the division is often an attempt at clarity, but the researchers never promise answers. Their only promise is to take these claims seriously, “but as far as the case having enough to investigate, enough to potentially verify that it matches with a past life, those are very few,” said Dr. Tucker.

This summer, Dr. Tucker drove to the rural town of Amherst, Va., to visit a case of possible past life remembrance. He was joined by his colleagues Marieta Pehlivanova and Philip Cozzolino, who would be taking over his research in the new year.

Ms. Pehlivanova, 43, who specializes in near-death experiences and children who remember past lives, has been at DOPS for seven years and is launching a study of women who’ve had near-death experiences during childbirth. When she tells people what she does, they find the subject matter both fascinating and disturbing. “We’ve had emails from people saying we’re doing the work of the devil,” she said.

Upon arrival at the family’s home, the team was shown into the kitchen. The child, a three-year-old who is the youngest of four home-schooled siblings, peeked from behind her mother’s legs, looking up shyly. She wore a baggy Minnie Mouse shirt and went to perch between her grandparents on a banquette, watching everyone take their seats around the dining table.

“Let’s start from the very beginning,” Dr. Tucker said after the paperwork had been signed by Misty, the child’s 28-year-old mother. “It all began with the puzzle piece?”

A few months earlier, mother and child had been looking at a wooden puzzle of the United States, with each state represented by a cartoon of a person or object. Misty’s daughter pointed excitedly at the jagged piece representing Illinois, which had an abstract illustration of Abraham Lincoln.

“That’s Pom,” her daughter exclaimed. “He doesn’t have his hat on.”

This was indeed a drawing of Abraham Lincoln without his hat, but more important, there was no name under the image indicating who he was. Following weeks of endless talk about “Pom” bleeding out after being hurt and being carried to a too-small bed — which the family had started to think could be related to Lincoln’s assassination — they began to consider that their daughter had been present for the historical moment. This was despite the family having no prior belief in reincarnation, nor any particular interest in Lincoln.

On the drive to Amherst, Dr. Tucker confessed his hesitation in taking on this particular case — or any case connected to a famous individual. “If you say your child was Babe Ruth, for example, there would be lots of information online,” he said. “When we get those cases, usually it’s that the parents are into it. Still, it’s all a little strange to be coming out of a three-year-old’s mouth. Now if she had said her daughter was Lincoln, I probably wouldn’t have made the trip.”

Lately, Dr. Tucker has been giving the children picture tests. “Where we think we know the person they’re talking about, we’ll show them a picture from that life, and then show them another picture — a dummy picture — from somewhere else, to see if they can pick out the right one,” he said. “You have to have a few pictures for it to mean anything. I had one where the kid remembered dying in Vietnam. I showed him eight pairs of pictures and a couple of them he didn’t make any choice on, but the others he was six out of six. So, you know, that makes you think. But this girl is so young, that I don’t think we can do that.”

On this occasion, the little girl decided not to engage, and pretended to be asleep. Then she actually fell asleep.

“She’ll come around to it soon,” Misty assured the researchers. As the minutes ticked by, Dr. Tucker decided the picture test would be best left for another time. The child was still asleep when the researchers returned to their car.

After the first meeting, the only course of action is to do nothing and wait, to see if the memories develop into something more concrete. Since past lives research focuses on spontaneous recollections, the team is largely unconvinced by the concept of hypnotic regression. “People will be hypnotized and told to go back to their past lives and all that, which we’re quite skeptical about,” said Dr. Tucker. “You can also make up a lot of stuff, even if you’re talking about memories from this life.”

DOPS rarely takes accounts from adults into consideration. “They’re not our primary interest, partly because, as an adult, you’ve been exposed to a lot,” Dr. Tucker explained. “You may think that you don’t know things from history, but you may well have been exposed to it. But also, the phenomenon typically happens in young kids. It’s as if they carry the memories with them, and they are typically very young when they start talking.”

There is also the concern that parents are looking for attention. “There are people who say, ‘Well, the parents are just doing it to have their 15 minutes of fame or whatever,” said Dr. Tucker. “But most of them have no interest in anyone knowing about it, you know, because it’s kind of embarrassing, or they worry people will think their kid is weird.”

For a child, recalling a past life can be trying. “They might be missing people, or have a sense of unfinished business,” he said. After a silence, he continued, his voice contemplative. “Frankly it’s probably better for the child that they don’t have these memories, because so many of the memories are difficult. The majority of kids who remember how they died perished in some kind of violent, unnatural death.”

Source : https://dnyuz.com/2025/01/03/do-you-believe-in-life-after-death-these-scientists-study-it/

Why your couch could be killing you: Sedentary lifestyle linked to 19 chronic conditions

(Credit: © Tracy King | Dreamstime.com)

In an era where many of us spend our days hunched over computers or scrolling through phones, mounting evidence suggests our sedentary lifestyles may be quietly damaging our health. A new study from the University of Iowa reveals that physically inactive individuals face significantly higher risks for up to 19 different chronic health conditions, ranging from obesity and diabetes to depression and heart problems.

Medical researchers have long known that regular physical activity helps prevent disease and promotes longevity. However, this comprehensive study, which analyzed electronic medical records from over 40,000 patients at a major Midwestern hospital system, provides some of the most detailed evidence yet about just how extensively physical inactivity can impact overall health.

Leading the study, now published in the journal Preventing Chronic Disease, was a team of researchers from various departments at the University of Iowa, including pharmacy practice, family medicine, and human physiology. Their mission was to examine whether screening patients for physical inactivity during routine medical visits could help identify those at higher risk for developing chronic diseases.

The simple 30-second exercise survey

When patients at the University of Iowa Health Care Medical Center arrived for their annual wellness visits, they received a tablet during the standard check-in process. Researchers implemented the Exercise Vital Sign (EVS), which asks two straightforward questions: how many days per week they engaged in moderate to vigorous exercise (like a brisk walk) and for how many minutes per session. Based on their responses, patients were categorized into three groups: inactive (0 minutes per week), insufficiently active (1-149 minutes per week), or active (150+ minutes per week).
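The EVS categorization described above amounts to a simple threshold rule. Here is a minimal sketch in Python, assuming (as the study does not specify the exact scoring formula) that weekly minutes are estimated by multiplying the two survey answers; the function name and structure are illustrative, not taken from the study:

```python
def classify_evs(days_per_week: int, minutes_per_session: int) -> str:
    """Bucket a patient by the Exercise Vital Sign (EVS) thresholds.

    Weekly minutes of moderate-to-vigorous activity are estimated as
    days/week multiplied by minutes/session, then mapped to the three
    categories reported in the study.
    """
    weekly_minutes = days_per_week * minutes_per_session
    if weekly_minutes == 0:
        return "inactive"                # 0 minutes per week
    if weekly_minutes < 150:
        return "insufficiently active"   # 1-149 minutes per week
    return "active"                      # 150+ minutes per week


# Example responses from the two-question survey:
print(classify_evs(0, 0))    # no activity at all
print(classify_evs(3, 30))   # 90 minutes per week
print(classify_evs(5, 30))   # 150 minutes per week
```

Under this reading, a patient walking briskly for 30 minutes on 5 days reaches exactly the 150-minute guideline and lands in the “active” group.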

“This two-question survey typically takes fewer than 30 seconds for a patient to complete, so it doesn’t interfere with their visit. But it can tell us a whole lot about that patient’s overall health,” says Lucas Carr, associate professor in the Department of Health and Human Physiology and the study’s corresponding author, in a statement.

Study authors discovered clear patterns when they analyzed responses from 7,261 screened patients. About 60% met the recommended guidelines by exercising moderately for 150 or more minutes per week. However, 36% fell short of these guidelines, exercising less than 150 minutes weekly, and 4% reported no physical activity whatsoever. When the team examined the health records of these groups, they found remarkable differences in health outcomes.

Consequences of a sedentary lifestyle

The data painted a compelling picture of how physical activity influences overall health. Active patients showed significantly lower rates of depression (15% compared to 26% in inactive patients), obesity (12% versus 21%), and hypertension (20% versus 35%). Their cardiovascular health markers were also notably better, including lower resting pulse rates and more favorable cholesterol profiles.

Perhaps most revealing was the relationship between activity levels and chronic disease burden. Patients reporting no physical activity carried a median of 2.16 chronic conditions. This number dropped to 1.49 conditions among insufficiently active patients and fell further to just 1.17 conditions among those meeting exercise guidelines. This clear progression suggests that even small increases in physical activity might help reduce disease risk.

To provide context for their findings, the researchers compared the screened group against 33,445 unscreened patients from other areas of the hospital. This comparison revealed an important pattern: patients who completed the survey tended to be younger and healthier than the general patient population. As Carr notes, “We believe this finding is a result of those patients who take the time to come in for annual wellness exams also are taking more time to engage in healthy behaviors, such as being physically active.”

Based on the study’s findings, physical inactivity was associated with higher rates of:

  1. Obesity
  2. Liver disease
  3. Psychoses
  4. Chronic lung disease
  5. Neurological seizures
  6. Coagulopathy (blood clotting disorders)
  7. Depression
  8. Weight loss issues
  9. Uncontrolled hypertension (high blood pressure)
  10. Controlled hypertension
  11. Uncontrolled diabetes
  12. Deficiency anemia
  13. Neurological disorders affecting movement
  14. Peripheral vascular disease
  15. Autoimmune disease
  16. Drug abuse
  17. Hypothyroidism
  18. Congestive heart failure
  19. Valvular disease (heart valve problems)

Need for better exercise counseling

The findings highlight a crucial gap in healthcare delivery that needs addressing. “In our healthcare environment, there’s no easy pathway for a doctor to be reimbursed for helping patients become more physically active,” Carr explains. “And so, for these patients, many of whom report insufficient activity, we need options to easily connect them with supportive services like exercise prescriptions and/or community health specialists.”

However, there’s encouraging news about the financial feasibility of exercise counseling. A related study by Carr’s team found that when healthcare providers billed for exercise counseling services, insurance companies reimbursed these claims nearly 95% of the time. This suggests that expanding physical activity screening and counseling services could be both beneficial for patients and financially viable for healthcare providers.

Source : https://studyfinds.org/couch-potato-sedentary-lifestyle-chronic-diseases/

Science confirms: ‘Know-it-alls’ typically know less than they think

(Credit: © Robert Byron | Dreamstime.com)

The next time you find yourself in a heated argument, absolutely certain of your position, consider this: researchers have discovered that the more confident you feel about your stance, the more likely you are to be working with incomplete information. It’s a psychological quirk that might explain everything from family disagreements to international conflicts.

We’ve all been there: stuck in traffic, grumbling about the “idiot” driving too slowly in front of us or the “maniac” who just zoomed past. But what if that slow driver is carefully transporting a wedding cake, or the speeding car is rushing someone to the hospital? A fascinating new study published in PLOS ONE suggests that these snap judgments stem from what researchers call “the illusion of information adequacy” — our tendency to believe we have enough information to make sound decisions, even when we’re missing crucial details.

“We found that, in general, people don’t stop to think whether there might be more information that would help them make a more informed decision,” explains study co-author Angus Fletcher, a professor of English at The Ohio State University and member of the university’s Project Narrative, in a statement. “If you give people a few pieces of information that seems to line up, most will say ‘that sounds about right’ and go with that.”

In today’s polarized world, where debates rage over everything from vaccines to climate change, understanding why people maintain opposing viewpoints despite access to the same information has never been more critical. This research, conducted by Fletcher, Hunter Gehlbach of Johns Hopkins University, and Carly Robinson of Stanford University, reveals that we rarely pause to consider what information we might be missing before making judgments.

The researchers conducted an experiment with 1,261 American participants recruited through the online platform Prolific. The study centered around a hypothetical scenario about a school facing a critical decision: whether to merge with another school due to a drying aquifer threatening their water supply.

The participants were divided into three groups. One group received complete information about the situation, including arguments both for and against the merger. The other two groups only received partial information – either pro-merger or pro-separation arguments. The remarkable finding? Those who received partial information felt just as competent to make decisions as those who had the full picture.

“Those with only half the information were actually more confident in their decision to merge or remain separate than those who had the complete story,” Fletcher notes. “They were quite sure that their decision was the right one, even though they didn’t have all the information.”

Social media users might recognize this pattern in their own behavior: confidently sharing or commenting on articles after reading only headlines or snippets, feeling fully informed despite missing crucial context. It’s a bit like trying to review a movie after watching only the first half, yet feeling qualified to give it a definitive rating.

The study revealed an interesting finding regarding the influence of new information. When participants who initially received only one side of the story were later presented with opposing arguments, about 55% maintained their original position on the merger decision. That rate is comparable to that of the control group, which had received all information from the start.

Fletcher notes that this openness to new information might not apply to deeply entrenched ideological issues, where people may either distrust new information or try to reframe it to fit their existing beliefs. “But most interpersonal conflicts aren’t about ideology,” he points out. “They are just misunderstandings in the course of daily life.”

Beyond personal relationships, this finding has profound implications for how we navigate complex social and political issues. When people engage in debates about controversial topics, each side might feel fully informed while missing critical pieces of the puzzle. It’s like two people arguing about a painting while looking at it from different angles: each sees only their perspective but assumes they’re seeing the whole picture.

Fletcher, who studies how people are influenced by the power of stories, emphasizes the importance of seeking complete information before taking a stand. “Your first move when you disagree with someone should be to think, ‘Is there something that I’m missing that would help me see their perspective and understand their position better?’ That’s the way to fight this illusion of information adequacy.”

Source : https://studyfinds.org/science-confirms-know-it-alls-typically-know-less-than-they-think/

Ants smarter than humans? Watch as tiny insects outperform grown adults in solving puzzle

Longhorn Crazy Ants (Paratrechina longicornis) swarming and attacking a much larger ant. They are harmless to humans and found in the world’s tropical regions. (Credit: © Brett Hondow | Dreamstime.com)

Scientists have long been fascinated by collective intelligence, the idea that groups can solve problems better than individuals. Now, an interesting new study reveals some unexpected findings about group problem-solving abilities across species, specifically comparing how ants and humans tackle complex spatial challenges.

Researchers at the Weizmann Institute of Science designed an ingenious experiment pitting groups of longhorn crazy ants against groups of humans in solving the same geometric puzzle at different scales. The puzzle, known as a “piano-movers’ problem,” required moving a T-shaped load through a series of tight spaces and around corners. Imagine trying to maneuver a couch through a narrow doorway, but with more mathematical precision involved.

What makes this study, published in PNAS, particularly fascinating is that both ants and humans are among the few species known to cooperatively transport large objects in nature. In fact, of the approximately 15,000 ant species on Earth, only about 1% engage in cooperative transport of heavy loads, making this shared behavior between humans and ants especially remarkable.

The species chosen for this evolutionary competition was Paratrechina longicornis, commonly known as “crazy ants” due to their erratic movement patterns. These black ants, measuring just 3 millimeters in length, are widespread globally but particularly prevalent along Israel’s coast and southern regions. Their name derives from their distinctive long antennae, though their frenetic behavior earned them their more colorful nickname.

Recruiting participants for the study presented different challenges across species. While human volunteers readily joined when asked, likely motivated by the competitive aspect, the ants required a bit of deception. Researchers had to trick them into thinking the T-shaped load was food that needed to be transported to their nest.

In experiments spanning three years and involving over 1,250 human participants and multiple ant colonies, researchers tested different group sizes tackling scaled versions of the same puzzle. For the ants, they used both individual ants and small groups of about 7 ants, as well as larger groups averaging 80 ants. Human participants were divided into single solvers and groups of 6-9 or 16-26 people.

Perhaps most intriguingly, the researchers found that while larger groups of ants performed significantly better than smaller groups or individuals, the opposite was true for humans when their communication was restricted. When human groups were not allowed to speak or use gestures and had to wear masks and sunglasses, their performance actually deteriorated compared to individuals working alone.

This counterintuitive finding speaks to fundamental differences in how ants and humans approach collective problem-solving. Individual ants cannot grasp the global nature of the puzzle, but their collective motion translates into emergent cognitive abilities; in other words, they develop new problem-solving skills simply by working together. The large ant groups showed impressive persistence and coordination, maintaining their direction even after colliding with walls and efficiently scanning their environment until finding openings.

The study highlights a crucial distinction between ant and human societies. “An ant colony is actually a family. All the ants in the nest are sisters, and they have common interests. It’s a tightly knit society in which cooperation greatly outweighs competition,” explains study co-author Prof. Ofer Feinerman in a statement. “That’s why an ant colony is sometimes referred to as a super-organism, sort of a living body composed of multiple ‘cells’ that cooperate with one another.”

This familial structure appears to enhance the ants’ collective problem-solving abilities. The findings validated this “super-organism” vision, demonstrating that ants acting as a group are indeed smarter, with the whole being greater than the sum of its parts. In contrast, human groups showed no such enhancement of cognitive abilities, challenging popular notions about the “wisdom of crowds” in the social media age.

Source : https://studyfinds.org/ants-smarter-than-humans/

5 consumer myths to ditch in 2025

(© Ivan Kruk – stock.adobe.com)

Over the past year, books like Less by Patrick Grant and documentaries like Buy Now: The Shopping Conspiracy have encouraged consumers to rethink their internalized beliefs that more consumption equals better living.

As we enter a new year, it’s the perfect time to reflect on and leave behind some consumer myths that are detrimental to ourselves and to the planet.

Myth 1: Buying more is better for consumers and society

Retail therapy is a common practice for coping with negative emotions and might seem easier than actual therapy. However, research has consistently shown that materialistic consumption leads to lower individual and societal well-being. In fact, emerging studies are pointing out that low-consumption lifestyles might bring greater personal satisfaction and higher environmental benefits.

Some might argue that buying more stimulates the economy, creates jobs and supports public services through taxes. However, the positive impact on local communities is often overstated due to globalized supply chains and corporate tax avoidance.

To ensure that your spending really does support your community and does not contribute to economic inequalities, it is helpful to learn more about the story behind the labels and the businesses you support with your money.

Myth 2: New is always better

While certain cutting-edge tech may indeed offer improvements over older versions, for most items new might not always be better. As Grant argues in his book Less, product quality has declined over the past few decades as manufacturers prioritize affordability and engage in planned obsolescence practices. That is, they purposely design products that will break after a certain number of uses to keep the cycle of consumption going and hit their sales targets.

But older products were often built to last, so choosing secondhand or repairing older items can save you money and actually secure you better-quality products.

Myth 3: Being sustainable is expensive

It’s true that some brands have used the term “sustainable” to justify premium prices. However, adopting sustainable consumer practices can often be free or even bring in some extra cash if you sell or donate the things you no longer need.

Instead of “buying new,” consider swapping unused items with others by hosting a “swapping party” for things like toys or clothes with your friends, family, or neighbours. Decluttering your home could free up space, bring you some joy, and could also help you to connect with others by exchanging items.

Myth 4: Buying experiences is better than buying material things

Previous research has found that spending money on experiences brings more happiness primarily because these purchases are better at bringing people together. But material purchases that help you to connect with others, such as a board game, could bring as much joy as an experience.

My research has shown that when spending money, the key is to understand whether the purchase will help you to connect with others, learn new things or help your community. It’s not about whether we spend our money on material items or experiences.

It is also worth remembering that there are plenty of activities that can help you to achieve those goals with no spending required. So, instead of instinctively reaching for our wallets, perhaps in the new year we could think about whether a non-consumer activity like a winter hike or doing some volunteering could bring us closer to intrinsic goals like personal growth or developing relationships. These goals have been consistently linked to better well-being.

Source : https://studyfinds.org/5-consumer-myths-to-ditch-in-2025/

The rise of the intention economy: How AI could turn your thoughts into currency

(Image by Shutterstock AI Generator)

Imagine scrolling through your social media feed when your AI assistant chimes in: “I notice you’ve been feeling down lately. Should we book that beach vacation you’ve been thinking about?” The eerie part isn’t that it knows you’re sad — it’s that it predicted your desire for a beach vacation before you consciously formed the thought yourself. Welcome to what some experts believe will be known as the “intention economy,” a way of life for consumers in the not-too-distant future.

A new paper by researchers at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence warns that large language models (LLMs) like ChatGPT aren’t just changing how we interact with technology, they’re laying the groundwork for a new marketplace where our intentions could become commodities to be bought and sold.

“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve,” says co-author Dr. Yaqub Chaudhary, a visiting scholar at the Centre, in a statement.

For decades, tech companies have profited from what’s known as the attention economy, where our eyeballs and clicks are the currency. Social media platforms and websites compete for our limited attention spans, serving up endless streams of content and ads. But according to researchers Chaudhary and Dr. Jonnie Penn, we’re witnessing early signs of something potentially more invasive: an economic system that could treat our motivations and plans as valuable data to be captured and traded.

What makes this potential new economy particularly concerning is its intimate nature. “What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions,” Chaudhary explains.

Early signs of this emerging marketplace are already visible. Apple’s new “App Intents” developer framework for Siri includes protocols to “predict actions someone might take in future” and suggest apps based on these predictions. OpenAI has openly called for “data that expresses human intention… across any language, topic, and format.” Meanwhile, Meta has been researching “Intentonomy,” developing datasets for understanding human intent.

Consider Meta’s AI system CICERO, which achieved human-level performance in the strategy game Diplomacy by predicting players’ intentions and engaging in persuasive dialogue. While currently limited to gaming, this technology demonstrates the potential for AI systems to understand and influence human intentions through natural conversation.

Major tech companies are positioning themselves for this potential future. Microsoft has partnered with OpenAI in what the researchers describe as “the largest infrastructure buildout that humanity has ever seen,” investing over $50 billion annually from 2024 onward. The researchers suggest that future AI assistants could have unprecedented access to psychological and behavioral data, often collected through casual conversation.

The researchers warn that unless regulated, this developing intention economy “will treat your motivations as the new currency” in what amounts to “a gold rush for those who target, steer, and sell human intentions.” This isn’t just about selling products: it could have implications for democracy itself, potentially affecting everything from consumer choices to voting behavior.

Source: https://studyfinds.org/rise-of-intention-economy-ai-assistant/

Farewell, 2024: You were just a so-so year for most Americans

(ID 327257589 | 2024 © Penchan Pumila | Dreamstime.com)

Americans may be divided on many issues, but when it comes to rating 2024, they’ve reached a surprising consensus: it was decidedly average. In a nationwide survey of 2,000 people, the year earned a 6.1 out of 10—though beneath this seemingly tepid score lies a heartening discovery about what truly matters to Americans: personal connections topped the list of memorable moments.

The comprehensive study, conducted by Talker Research, surveyed 2,000 Americans about their experiences throughout the year. Perhaps most touching was the discovery that the most memorable moment for many Americans wasn’t a grand achievement or milestone, but rather the simple joy of reconnecting with old friends and family members, with 17% of respondents citing this as their standout experience.

Overall, a notable 30% of Americans rated their year as exceptional, scoring it eight or higher on the ten-point scale.

Personal development emerged as a dominant theme in 2024, with an overwhelming 67% of Americans reporting some form of growth over the past year. This growth manifested in various aspects of their lives: more than half (52%) saw improvements in their personal relationships, while 38% experienced positive changes in their mental and emotional well-being. Physical health gains were reported by 29% of respondents, and a quarter celebrated advances in their financial situation.

The year proved transformative for many Americans in unexpected ways. Tied for second place among memorable experiences were three distinct life changes: creative and personal growth, welcoming a new pet, and mastering a new skill or hobby, each cited by 12% of respondents. Close behind, 11% found meaning in volunteering or contributing to causes they care about.

The survey revealed that 17% of respondents rated the year a seven out of ten, matched by another 17% giving it a five, while 16% scored it an eight. At the extremes, 8% of Americans had a fantastic year worthy of a perfect ten, while 5% rated it a disappointing one out of ten.
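The reported breakdown covers only 63% of respondents, yet it already constrains the average. As a quick sanity check, the sketch below (a back-of-the-envelope calculation, not from the survey itself) bounds the possible mean by assuming the unreported 37% all gave the lowest or highest of the unreported scores (2 through 9, excluding the values already listed):

```python
# Bound the possible mean rating using only the shares the article reports.
# The scores assigned to the unreported 37% of respondents are assumptions.
reported = {10: 0.08, 8: 0.16, 7: 0.17, 5: 0.17, 1: 0.05}

known_mass = sum(reported.values())                      # 0.63 of respondents
known_contrib = sum(s * p for s, p in reported.items())  # their contribution to the mean

remaining = 1.0 - known_mass  # the 37% who gave some other score
lo = known_contrib + remaining * 2  # worst case: all remaining rated 2
hi = known_contrib + remaining * 9  # best case: all remaining rated 9

print(f"mean must lie between {lo:.2f} and {hi:.2f}")
```

The reported 6.1 average falls comfortably inside that range, so the published distribution and the headline score are at least mutually consistent.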

The survey highlighted how Americans found joy and achievement in various pursuits, from visiting new places (10%) to overcoming major health challenges (9%). Some celebrated financial victories, with 8% paying off significant debts and 7% reaching important savings goals. Others embraced adventure, with 6% embarking on dream vacations or relocating to new homes.

Source: https://studyfinds.org/americans-rate-2024-six-out-of-ten/
