How tattoo ink travels through the body, raising risks of skin cancer and lymphoma

(Photo by Getty Images in collaboration with Unsplash+)

Tattoos have become a mainstream form of self-expression, adorning the skin of millions worldwide. But a new study from Danish researchers uncovers concerning connections between tattoo ink exposure and increased risks of both skin cancer and lymphoma.

Approximately one in four adults in many Western countries now sport tattoos, with prevalence nearly twice as high among younger generations. The study, published in BMC Public Health, adds to growing evidence that the popular form of body art may carry long-term health consequences previously unrecognized.

The study’s lead author, Signe Bedsted Clemmensen, along with colleagues at the University of Southern Denmark, analyzed data from two complementary twin studies – a case-control study of 316 twins and a cohort study of 2,367 randomly selected twins born between 1960 and 1996. The team created a specialized “Danish Twin Tattoo Cohort” that allowed them to control for genetic and environmental factors when examining cancer outcomes among tattooed and non-tattooed individuals.

When comparing twins where one had cancer and one didn’t, researchers found that the tattooed twin was more likely to be the one with cancer. In the case-control study, tattooed individuals had a 62% higher rate of skin cancer compared to non-tattooed people. The cohort study showed even stronger associations, with tattooed individuals having a nearly four-times higher rate of skin cancer and a 2.83-times higher rate of basal cell carcinoma.

Size appears to matter significantly. Large tattoos (bigger than the palm of a hand) were associated with substantially higher lymphoma and skin cancer risks than smaller tattoos, potentially due to higher exposure levels or longer exposure time. This dose-response relationship strengthens the case for causality rather than mere correlation.

“This suggests that the bigger the tattoo and the longer it has been there, the more ink accumulates in the lymph nodes. The extent of the impact on the immune system should be further investigated so that we can better understand the mechanisms at play,” says Clemmensen, an assistant professor of biostatistics, in a statement.

The Journey of Tattoo Ink Through the Body

Scientists have long known that tattoo ink doesn’t simply stay put in the skin. Particles from tattoo pigments migrate through the bloodstream and accumulate in lymph nodes and potentially other organs. The researchers proposed an “ink deposit conjecture” – suggesting that tattoo pigments trigger inflammation at deposit sites, potentially leading to chronic inflammation and increased risk of abnormal cell growth.

Black ink, the most commonly used tattoo color, has been a particular focus of concern. It typically contains soot products like carbon black, which the International Agency for Research on Cancer (IARC) has listed as possibly cancer-causing to humans. Through incomplete burning during carbon black production, harmful compounds form as byproducts, including benzo[a]pyrene, which IARC classifies as cancer-causing to humans.

“We can see that ink particles accumulate in the lymph nodes, and we suspect that the body perceives them as foreign substances,” explains study co-author Henrik Frederiksen, a consultant in hematology at Odense University Hospital and clinical professor at the university. “This may mean that the immune system is constantly trying to respond to the ink, and we do not yet know whether this persistent strain could weaken the function of the lymph nodes or have other health consequences.”

Colored inks pose their own problems. Red ink – often associated with allergic reactions – contains compounds that may release harmful substances when exposed to sunlight or during laser tattoo removal.

“We do not see a clear link between cancer occurrence and specific ink colors, but this does not mean that color is irrelevant,” notes Clemmensen. “We know from other studies that ink can contain potentially harmful substances, and for example, red ink more often causes allergic reactions. This is an area we would like to explore further.”

The researchers suggest that with tattoo prevalence rising dramatically, especially among younger people, public awareness campaigns might be needed to educate about potential risks.

“We are concerned that tattoo ink has severe public health consequences since tattooing is abundant among the younger generation,” they write in their conclusion. The team recommends further studies to pinpoint the exact biological mechanisms through which tattoo ink might induce cancer.

A Growing Body Of Research

This isn’t the first research to raise alarms about tattoo safety. Previous studies have documented cases of skin conditions and tumors occurring within tattoo areas. However, this large-scale study provides some of the strongest evidence yet for a relationship between tattoos and cancer.

For those already sporting tattoos, the research doesn’t suggest panic – but awareness. The time between tattoo exposure and cancer diagnosis in the study was substantial – a median of 8 years for lymphoma and 14 years for skin cancer. This suggests that cancers develop gradually over time, and monitoring for any changes in tattooed areas might be prudent.

The rise in popularity of tattoo removal services presents its own concerns. The researchers specifically highlight that laser tattoo removal breaks down pigments into smaller fragments that may be more mobile within the body, potentially increasing migration to lymph nodes and other organs.

As with many health studies, this research doesn’t definitively prove causation, but it adds significant weight to growing evidence of long-term risks. The researchers point out that even with new European restrictions on harmful compounds in tattoo inks, the body’s immune response to foreign substances might be problematic regardless of specific ink components.

Balancing Expression and Health

As tattoo culture continues to thrive globally, balancing personal expression through body art with health considerations becomes increasingly important.

With tattoos now firmly embedded in mainstream culture, this research doesn’t aim to stigmatize body art but rather to inform safer practices. Whether this means developing safer inks, improving tattoo application techniques, or simply making more informed choices about tattoo size and placement, understanding the biological impact of tattoo ink is essential for public health.

As the researchers conclude, further studies that pinpoint the biological mechanisms of tattoo ink-induced cancer are needed. Until then, those considering getting inked might want to weigh the aesthetic benefits against potential long-term health considerations – a balance that, like the perfect tattoo design, will be uniquely personal.

Source: https://studyfinds.org/tattoo-ink-skin-cancer-lymphoma/

How the pursuit of happiness ends up sending people on a path to misery

(Photo by Erce on Shutterstock)

We live in a happiness-obsessed world. Self-help gurus promise paths to bliss, Instagram influencers peddle happiness as a lifestyle, and corporations build marketing campaigns around the pursuit of positive emotions. But new research suggests a surprising twist: trying too hard to be happy might actually be making us miserable.

Researchers from the University of Toronto Scarborough and the University of Sydney found that actively pursuing happiness drains our mental energy – the same energy we need for self-control. Their study, published in Applied Psychology: Health and Well-Being, challenges what many of us believe about happiness.

“The pursuit of happiness is a bit like a snowball effect. You decide to try making yourself feel happier, but then that effort depletes your ability to do the kinds of things that make you happier,” says Sam Maglio, marketing professor at the University of Toronto Scarborough and the Rotman School of Management, in a statement.

This might sound familiar: You wake up determined to have a great day. You plan mood-boosting activities and work hard to stay positive. But by evening, you’re ordering takeout instead of cooking, mindlessly scrolling social media, and snapping at your partner. Why? Your happiness pursuit itself might be the problem.

Maglio puts it bluntly: “The more mentally rundown we are, the more tempted we’ll be to skip cleaning the house and instead scroll social media.”

Testing the Happiness Drain

The research team ran four studies that gradually built their case.

First, they surveyed 532 adults about how much they valued and pursued happiness, then measured their self-reported self-control. The results showed a clear pattern: people who placed higher value on seeking happiness reported worse self-control abilities.

For their second study, they moved beyond self-reports to actual behavior. They had 369 participants complete a series of consumer choice rankings and measured how long they persisted at the task. Those with stronger tendencies to pursue happiness showed less persistence, suggesting their mental resources were already running low.

From Happiness Ads to Chocolate Cravings

For their third study, the researchers got clever. They intercepted 36 people at a university library and showed them either an advertisement that prominently featured the word “happiness” or a neutral ad without any happiness messaging. Then they offered participants chocolate candies, telling them to eat as many as they wanted while rating the taste.

“The story here is that the pursuit of happiness costs mental resources,” Maglio explains. “Instead of just going with the flow, you are trying to make yourself feel differently.”

The results were striking: people exposed to the happiness ad ate nearly twice as many chocolates (2.94 vs. 1.56 on average) – a classic sign of decreased self-control. This raises questions about happiness-themed marketing campaigns – they might actually be draining our willpower and setting us up to make choices we later regret.

Not All Goals Drain You the Same

For their final experiment, the researchers tackled an important question: Is happiness-seeking uniquely depleting, or does pursuing any goal require mental energy?

They had 188 participants make 25 choices between pairs of everyday products (like choosing between an iced latte and green tea). One group was told to choose options that would “improve their happiness,” while the other group chose based on what would “improve their accurate judgment.” Then everyone worked on a challenging anagram puzzle where they could quit whenever they wanted.

The happiness group quit much sooner – lasting only 444 seconds on average compared to 574 seconds for the accuracy group. This significant difference suggested that pursuing happiness specifically drains mental energy more than other types of goals.

This wasn’t Maglio’s first investigation into happiness backfiring. In a 2018 study, he and co-author Kim found that people actively seeking happiness tend to feel like they’re running short on time, creating stress that ultimately makes them unhappier.

The Pressure To Feel Even Better

The self-improvement industry rakes in over $10 billion largely by promising to boost happiness. Bestsellers like “The Happiness Project,” “The Art of Happiness,” and “The Happiness Advantage” sell millions of copies with strategies for maximizing positive emotions. But this research suggests many of these approaches might be working against themselves.

The researchers note that the self-help industry puts “a lot of pressure and responsibility on the self.” Many people now treat happiness like money – “something we can and should gather and hoard as much as we can.” This commodification of happiness may be part of the problem, creating a mindset where we’re constantly striving for more rather than appreciating what we have.

Why This Happens

Think of self-control like a gas tank that gets emptied throughout the day. Psychologist Roy Baumeister’s research shows that every act of self-control – resisting temptation, controlling emotions, making decisions – uses fuel from the same tank.

Seeking happiness burns through this fuel quickly because it requires managing your actions, monitoring your thoughts, and actively changing your emotions. When your tank runs low, you’re more likely to make poor choices like overeating, overspending, or being short with others – creating a cycle that ultimately makes you less happy.

The Real Secret To Happiness

So should we abandon the pursuit of well-being? Not exactly. But the research suggests a more balanced approach might work better.

Maglio suggests we think of happiness like sand at the beach: “You can cling to a fistful of sand and try to control it, but the harder you hold, the more your hand will cramp. Eventually, you’ll have to let go.”

His advice cuts through the complexity with refreshing simplicity: “Just chill. Don’t try to be super happy all the time,” says Maglio, whose work is supported by a grant from the Social Sciences and Humanities Research Council of Canada. “Instead of trying to get more stuff you want, look at what you already have and just accept it as something that gives you happiness.”

When we ease up on constantly trying to maximize happiness and accept a wider range of emotions, we might actually preserve the mental energy needed to make better decisions – and ultimately feel better.

Source: https://studyfinds.org/the-happiness-paradox-chasing-joy-backfires/

How financial stress can sabotage job satisfaction by fueling workplace burnout

Being stressed about your finances can lead to burnout at work. (PeopleImages.com – Yuri A/Shutterstock)

In today’s world, the boundaries between our personal and professional lives often blur. Many of us try to keep financial worries separate from our work life, but a new study from the University of Georgia suggests this separation may be wishful thinking. Research reveals that our financial well-being significantly impacts our job satisfaction, with workplace burnout playing a key role.

The study, published in the Journal of Workplace Behavioral Health, shows that when employees experience financial stress, it follows them to work, affecting their performance and satisfaction through increased burnout.

The Hidden Cost of Financial Stress at Work

The U.S. Surgeon General recognized this connection in 2024 by naming workplace well-being one of the top public health priorities. Yet remarkably, 60% of employers don’t consider employee well-being a top 10 initiative. This disconnect is costly, with dissatisfied employees reportedly costing the U.S. economy around $1.9 trillion in lost productivity in 2023 alone.

“Stress from work can often leave people feeling tired and overwhelmed. Anxiety in other parts of life could make this even worse,” says lead author Camden Cusumano from the University of Georgia, in a statement. “Just as injury in one part of the body could lead to pain in another, personal financial stress can manifest in someone’s work performance.”

While previous research has examined connections between compensation and job satisfaction, this study takes a more holistic approach. Rather than focusing merely on salary figures, researchers investigated how employees’ overall assessment of their financial health impacts their workplace experience.

When Money Worries Follow You to Work

Their research distinguishes between two dimensions of financial well-being: current money management stress (present concerns) and expected future financial security (future outlook). Both of these affect job satisfaction in different ways.

“We call them different life domains. There’s the work domain, there might be the family domain, things like that,” says Cusumano. “But sometimes there’s spillover from one to the other. My finances might impact the way I’m feeling about the stress in my family, or if I’m working long hours, that might cause some conflict with my family as well.”

The researchers used the Conservation of Resources theory as their framework. This theory suggests people experience stress when they lose resources, face threats to their resources, or fail to gain new resources despite their efforts. In this context, financial well-being represents a crucial resource: a sense of security and control regarding one’s finances.

Burnout Beyond the Workplace

For the study, the researchers surveyed 217 full-time U.S. employees who earned at least $50,000 annually. This sample was deliberately chosen to focus on workers not predisposed to financial insecurity due to low income.

Burnout shows up in three main ways: feeling detached from yourself or others, feeling constantly tired, and feeling like your accomplishments don’t matter. All three combine to make employees tired and disengaged from their work.

Current money management stress didn’t directly affect job satisfaction but operated through increased burnout. In contrast, expected future financial security had a direct positive association with job satisfaction that wasn’t mediated by burnout.

These findings highlight that financial stress doesn’t just create problems at home; it fundamentally alters how employees experience their work. People feeling stressed about making ends meet today are more likely to experience burnout, which in turn reduces their job satisfaction. Meanwhile, those who feel secure about their financial future tend to be more satisfied with their jobs, regardless of burnout levels.

Future financial concerns may also play a role in job satisfaction. If a worker is feeling stressed about their current position, believing their financial situation may improve could enhance their views on their job.

Creating Better Workplace Support Programs

Employers often focus on compensation as the primary financial factor affecting employee satisfaction. However, if an employee’s financial struggles are leading to burnout and job dissatisfaction, addressing work-related factors alone won’t fully resolve the problem.

This research highlights the importance of developing personal financial management skills alongside professional development for employees. Building financial resilience may not only improve the quality of life at home but could also enhance workplace experience and career success, especially in today’s workforce where remote and hybrid work have further blurred the boundaries between work and personal life.

“Some companies are actually providing financial counseling to some of their employees,” says Cusumano. “They’re paying attention to how finances can really permeate different areas of life.”

Organizations could benefit from broadening their wellness initiatives to include financial well-being resources. Providing tools and support to help employees manage current financial stress and build future security could yield significant returns through improved job satisfaction and reduced burnout.

In the end, money might not buy happiness, but financial stress certainly seems capable of diminishing workplace satisfaction. By understanding these connections, both organizations and individuals can develop more effective strategies for navigating the complex relationship between financial health and workplace well-being.

Source: https://studyfinds.org/financial-stress-sabotaging-job-satisfaction-workplace-burnout/

What’s the shape of the universe?

(© Vector Tradition – stock.adobe.com)

Mathematicians use topology to study the shape of the world and everything in it
When you look at your surrounding environment, it might seem like you’re living on a flat plane. After all, this is why you can navigate a new city using a map: a flat piece of paper that represents all the places around you. This is likely why some people in the past believed the Earth to be flat. But most people now know that is far from the truth.

You live on the surface of a giant sphere, like a beach ball the size of the Earth with a few bumps added. The surface of the sphere and the plane are two possible 2D spaces, meaning you can walk in two directions: north and south or east and west.

What other possible spaces might you be living on? That is, what other spaces around you are 2D? For example, the surface of a giant doughnut is another 2D space.

Through a field called geometric topology, mathematicians like me study all possible spaces in all dimensions. Whether trying to design secure sensor networks, mine data or use origami to deploy satellites, the underlying language and ideas are likely to be that of topology.

The shape of the universe
When you look around the universe you live in, it looks like a 3D space, just like the surface of the Earth looks like a 2D space. However, just like the Earth, if you were to look at the universe as a whole, it could be a more complicated space, like a giant 3D version of the 2D beach ball surface or something even more exotic than that.

While you don’t need topology to determine that you are living on something like a giant beach ball, knowing all the possible 2D spaces can be useful. Over a century ago, mathematicians figured out all the possible 2D spaces and many of their properties.

In the past several decades, mathematicians have learned a lot about all of the possible 3D spaces. While we do not have a complete understanding like we do for 2D spaces, we do know a lot. With this knowledge, physicists and astronomers can try to determine what 3D space people actually live in.

While the answer is not completely known, there are many intriguing and surprising possibilities. The options become even more complicated if you consider time as a dimension.

To see how this might work, note that to describe the location of something in space – say a comet – you need four numbers: three to describe its position and one to describe the time it is in that position. These four numbers are what make up a 4D space.

Now, you can consider which 4D spaces are possible and in which of those spaces you live.

Topology in higher dimensions
At this point, it may seem like there is no reason to consider spaces that have dimensions larger than four, since that is the highest imaginable dimension that might describe our universe. But a branch of physics called string theory suggests that the universe has many more dimensions than four.

There are also practical applications of thinking about higher dimensional spaces, such as robot motion planning. Suppose you are trying to understand the motion of three robots moving around a warehouse floor. You can put a grid on the floor and describe the position of each robot by its x and y coordinates on the grid. Since each of the three robots requires two coordinates, you will need six numbers to describe all of the possible positions of the robots. You can interpret the possible positions of the robots as a 6D space.
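This counting argument can be made concrete with a minimal sketch. The function and variable names below are invented for illustration; the point is only that the dimension of the configuration space grows with the number of robots:

```python
def configuration_dim(n_robots: int, axes: int = 2) -> int:
    """Dimension of the configuration space: each robot contributes
    one coordinate per axis it can move along."""
    return n_robots * axes

# Three robots on a 2D floor -> a 6-dimensional configuration space.
assert configuration_dim(3) == 6

# One point in that space is a single snapshot of all three robots:
# (x1, y1, x2, y2, x3, y3)
snapshot = (0.0, 1.0, 2.5, 3.0, 4.0, 0.5)
assert len(snapshot) == configuration_dim(3)
```

Adding more robots, or letting each robot move in three dimensions instead of two, simply multiplies the dimension accordingly.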

As the number of robots increases, the dimension of the space increases. Factoring in other useful information, such as the locations of obstacles, makes the space even more complicated. In order to study this problem, you need to study high-dimensional spaces.

There are countless other scientific problems where high-dimensional spaces appear, from modeling the motion of planets and spacecraft to trying to understand the “shape” of large datasets.

Tied up in knots
Another type of problem topologists study is how one space can sit inside another.

For example, if you hold a knotted loop of string, you have a 1D space (the loop of string) inside a 3D space (your room). Such loops are called mathematical knots.

The study of knots first grew out of physics but has become a central area of topology. They are essential to how scientists understand 3D and 4D spaces and have a delightful and subtle structure that researchers are still trying to understand.
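To make a mathematical knot concrete, here is a minimal sketch of the standard parametrization of the trefoil, the simplest nontrivial knot. The formula is a well-known one; the code itself is just an illustration of a closed 1D curve sitting inside 3D space:

```python
import math

def trefoil(t: float) -> tuple[float, float, float]:
    """Point on the trefoil knot at parameter t in [0, 2*pi]."""
    x = math.sin(t) + 2 * math.sin(2 * t)
    y = math.cos(t) - 2 * math.cos(2 * t)
    z = -math.sin(3 * t)
    return (x, y, z)

# The curve is closed: after one full loop it returns to its start,
# which is what makes it a loop of string rather than an open strand.
start, end = trefoil(0.0), trefoil(2 * math.pi)
assert all(abs(a - b) < 1e-9 for a, b in zip(start, end))
```

Deforming the loop without cutting it never changes which knot it is, which is exactly the kind of question topology asks.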

Source: https://studyfinds.org/whats-the-shape-of-the-universe/

I Cut Out Sugar for a Month—Here’s What It Did for My Mental Health

All good things come in moderation

d3sign / Getty Images

I’ve never been one to turn down something sweet. A bar of chocolate to reward myself for a successful grocery shop, some dessert after dinner—since I only indulged a few times a week, I thought it was pretty harmless.

But after noticing how sluggish, irritable, and foggy I felt after sugar-heavy days, I started wondering: could my sugar intake be affecting my mental health?

With that question in mind, I decided to cut out added sugar for an entire month. No packets of jelly beans, no sweetened boba teas, and no honey in my morning oats. The goal wasn’t just to see how my body felt, but to observe whether eliminating sugar had any impact on my mood, energy levels, and mental clarity.

The result? Let’s just say it wasn’t what I expected.

Why I Decided to Cut Out Sugar
I don’t eat added sugar every day. Instead, I tend to indulge in a (very) sweet treat twice a week or so. I usually justify it by saying that I “deserve” a treat—to reward myself for a work victory, to celebrate a special occasion, or to comfort myself after a hard day.

There’s nothing wrong with treating yourself. But I eventually noticed that my sugar binge led to some uncomfortable symptoms, particularly brain fog, poor sleep, and mood swings.

“Excess sugar intake, especially from refined sources, can cause rapid spikes and crashes in blood sugar levels, which can lead to irritability, fatigue, and difficulty concentrating,” says dietitian Jessica M. Kelly, MS, RDN, LDN, the founder and owner of Nutrition That Heals. “Over time, frequent blood sugar fluctuations can contribute to increased anxiety.”

“Over time, a high-sugar diet may increase the risk of depression by causing inflammation and disrupting brain chemicals like serotonin and dopamine,” adds Marjorie Nolan Cohn, MS, RD, LDN, CEDS-S, the clinical director of Berry Street. “These ups and downs make it harder to manage emotions, making mood swings more frequent.”

A 2017 study, which looked at data collected from 23,245 people, found that higher sugar intake is associated with depression, particularly in men. Participants with the highest level of sugar consumption were 23% more likely to have a diagnosed mental illness than those with the lowest level of sugar consumption.1

Other research, like this 2024 study, also suggests a link between depression and sugar consumption—but the authors point out this connection might be because mental distress can lead to emotional eating and make it harder to control cravings.2

For the purpose of my experiment, I needed to set some ground rules about the sugars I would and wouldn’t cut out.

According to Kelly and Nolan Cohn, not all sugars affect mental health in the same way. “Natural sugars found in, for example, fruit and dairy, accompany fiber, vitamins, and antioxidants that are health-promoting and slow glucose absorption,” Kelly explains. “Refined sugars, like those in sodas and candy, can cause rapid blood sugar spikes and crashes which can lead to mood swings and brain fog.”

Excited to see the results, I began my experiment!

Week 1: The “Oh Wow, Does That Really Contain Sugar?” Phase
During my first week, I didn’t experience changes in my mood, but rather in my behavior and mindset.

This experiment required me to pick up a new habit: reading nutritional labels and ingredient lists. Although giving up sugar was easy for the first few days, this habit was pretty hard.

I was surprised to learn that sugar is in a lot of things. Most of my favorite savory treats contained sugar. Even my usual “healthy” post-gym treat—a protein bar—was off-limits.

Surprisingly, I didn’t really have any sugar withdrawals, which can be common among people who typically consume a lot of sugar.

“Cutting out sugar can trigger strong cravings since it affects the brain’s reward system. This can lead to withdrawal-like urges, and for some, it can feel very intense,” says Nolan Cohn. Sugar withdrawal symptoms often include headaches, fatigue, and mood swings.

On day four, I had my first major challenge—I realized I could no longer grab some milk chocolate on the way out of the grocery store. Talking myself out of this was harder than I’d like to admit.

The biggest challenge for week one? Choosing what to eat in a restaurant. Most menus don’t specify which dishes contain sugar, and there’s a surprising amount of sugar in savory dishes, like tomato-based curries and wraps filled with sugary salad dressings.

By the end of week one, I felt like giving up. Although I didn’t have any major cravings, constantly checking food labels was annoying, and there were no notable benefits—at least, not yet.

Week 2: A Shift in Mood and Energy
Around the 10-day mark, things started changing for the better.

Even if I don’t eat a lot of sugar in my day-to-day diet and my home-cooked meals, I tend to treat myself—a lot. Food is a go-to source of comfort for me, often to my detriment. My mindset is often along the lines of, “Oh, who cares? It’s just a treat. It’s a special occasion!”

Because I wanted to stick to the experiment, I had to pause my “treat yo’self” mindset. As I was more mindful of sugar, I planned my snacks better, avoided getting takeout, and practiced more self-control while shopping for groceries.

More importantly, I had to actually engage with my feelings instead of eating them away.

On my therapist’s recommendation, I paid attention to the uncomfortable feelings that’d usually lead me to eat, and I journaled about them instead.

I also noticed some changes in my mood—finally! Because I wasn’t eating a lot of sugar and then crashing twice a week, my energy levels felt a bit more stable. This meant that my mood also felt more stable.

Week 3: Mental Clarity and Emotional Balance
By week three, I was genuinely surprised by how good I felt.

Not only were my energy and mood a little calmer, but I was also really chuffed with myself for managing to avoid sugar for such a long time.


Source: https://www.verywellmind.com/does-sugar-affect-mental-health-11683665

Morning blue light therapy can greatly improve sleep quality for older adults

Researchers say blue light exposure in the morning may be a healthier alternative to taking sleep medications. (amenic181/Shutterstock)

Getting older brings many changes, and unfortunately, worse sleep is often one of them. Many seniors struggle with falling asleep, waking up frequently during the night, and generally feeling less rested. But what if something as simple as changing your light exposure could help?

A new study from the University of Surrey has found that the right light, at the right time, might make a significant difference in older adults’ sleep and daily activity patterns. This research, published in GeroScience, reveals that morning exposure to blue-enriched light can be beneficial, while that same light in the evening can actually make sleep problems worse.

“Our research shows that carefully timed light intervention can be a powerful tool for improving sleep and day-to-day activity in healthy older adults,” explains study author Daan Van Der Veen from the University of Surrey, in a statement. “By focusing on morning blue light and maximizing daylight exposure, we can help older adults achieve more restful sleep and maintain a healthier, more active lifestyle.”

Why light timing matters

So why do older adults have more sleep troubles in the first place? Part of the problem lies in the aging eye. As we get older, our eyes undergo natural changes—the lens yellows, pupils get smaller, and we have fewer photoreceptor cells. All these changes mean less light reaches the brain’s master clock, located in a tiny region called the hypothalamic suprachiasmatic nuclei (SCN).

That yellowing lens is particularly problematic because it filters out blue light wavelengths specifically. It’s like wearing subtle yellow sunglasses all the time. This matters because blue light (wavelengths between 420 and 480 nanometers) is especially powerful at regulating our body clocks. With less blue light reaching their brains, older adults’ internal clocks can become weaker and more prone to disruption.

Many seniors also spend less time outdoors and have fewer social engagements, further reducing their exposure to bright natural light. Meanwhile, they might be getting too much artificial light at night, which can confuse the body’s natural rhythms.

The Surrey researchers wanted to see if they could improve sleep for older adults living independently at home by tweaking their light exposure. They recruited 36 people aged 60 and over who reported having sleep problems. None were in full-time employment, and all were free from eye disorders or other conditions that might complicate the study.

Over an 11-week period during fall and winter (when natural daylight is limited in the UK), participants followed a carefully designed protocol. They spent one week establishing baseline measurements, followed by three weeks using either blue-enriched white light (17,000 K) or standard white light (4,000 K) for two hours each morning and evening. After a two-week break, they switched to the other light condition for three weeks, followed by another two-week washout period.
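For readers keeping track of the timeline, the crossover schedule described above can be sketched as a quick sanity check that the phases add up to the stated 11 weeks (the phase labels are paraphrased from this article, not taken from the paper):

```python
# Illustrative sketch of the crossover design described above.
# Phase names and ordering are paraphrased from the article summary.
phases = [
    ("baseline", 1),
    ("light condition A (blue-enriched 17,000 K or standard 4,000 K)", 3),
    ("washout", 2),
    ("light condition B (the other lamp)", 3),
    ("washout", 2),
]

total_weeks = sum(weeks for _, weeks in phases)
print(total_weeks)  # 11
```

Each participant served as their own control, which is why the two light conditions are separated by washout periods.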

Participants used desktop light boxes while going about normal activities like reading or watching TV. They wore activity monitors on their wrists around the clock and light sensors around their necks during the day. They kept sleep diaries and collected urine samples to measure melatonin metabolites, markers indicating how their internal clocks were functioning.

Morning light helps, evening light hurts

The results were telling. Longer morning exposure to the blue-enriched light significantly improved the stability of participants’ daily activity patterns and reduced sleep fragmentation. By contrast, evening exposure to that same light made it harder to fall asleep and reduced overall sleep quality.

Another key discovery was that participants who spent more time in bright light (above 2,500 lux, roughly the brightness you’d experience outdoors on a cloudy day) had more active days, stronger daily rhythms, and tended to go to bed earlier. This finding reinforces long-standing advice from sleep experts: getting outside during the day is really important for good sleep.

Morning people (early birds) naturally started their morning light sessions earlier than night owls. However, most participants used their evening light sessions at similar times, suggesting that social habits might influence evening routines more than biological clocks.

The women in the study showed more variable activity patterns throughout the day than men, and those who took more daytime naps had less stable daily rhythms and were generally less active.

Practical tips

By the end of the study, participants reported meaningful improvements in their sleep quality. This means light therapy could be a potential alternative to sleep medications, which often come with side effects.

“We believe that this is one of the first studies that have looked into the effects of self-administered light therapy on healthy older adults living independently to aid their sleep and daily activity,” says study author Débora Constantino, a postgraduate research student. “It highlights the potential for accessible and affordable light-based therapies to address age-related sleep issues without the need for medication.”

For older adults seeking better rest, the advice is clear:

  • Get bright, blue-enriched light in the morning: Use a light box or spend time outdoors after waking up.
  • Dim the lights in the evening: Reduce exposure to phones, tablets, and bright overhead lights.
  • Stay consistent: Establishing regular morning and evening routines can further support healthy sleep patterns.

This approach isn’t just for people in care homes or those with cognitive impairments; it can also benefit healthy, independent older adults. With an aging population worldwide, finding simple and effective strategies to improve sleep has never been more important. The right light at the right time might be a key part of aging well.

Source: https://studyfinds.org/morning-blue-light-therapy-boosts-sleep-quality-older-adults/

Belly fat can boost brain health? Yes — but to a point, study shows

(© sun_apple – stock.adobe.com)

Age-related cognitive decline sneaks up on millions of people worldwide. It begins with those frustrating “senior moments” in middle age and can progress to more serious memory and thinking problems later in life. While scientists have traditionally focused their attention directly on the brain to understand these changes, new research out of Toho University in Japan points to an unexpected contributor: your belly fat.

A study published in the journal GeroScience reveals that visceral fat—the deep fat surrounding your internal organs—plays a role in maintaining brain health through a chemical messaging system. You might have heard of BDNF (brain-derived neurotrophic factor)—think of it as brain fertilizer. It helps brain cells grow, survive, and form new connections. The more BDNF you have, the better your brain functions. But as you age, your BDNF levels naturally drop, and that’s when memory problems can start.

Here’s where belly fat comes in. This new study found that CX3CL1, a protein made by visceral fat, plays a big role in maintaining healthy BDNF levels. In younger mice, their belly fat produced plenty of CX3CL1, keeping their brain function strong. But as the mice aged, both their belly fat and their brain’s BDNF levels took a nosedive. When scientists artificially lowered CX3CL1 in young mice, their BDNF levels dropped too, mimicking the effects of aging. But when they gave older mice an extra dose of CX3CL1, their brain’s BDNF bounced back.

These findings flip conventional wisdom about belly fat on its head. While excess visceral fat is still harmful and linked to many health problems, this research suggests that healthy amounts of visceral fat early on serve an important purpose by producing signaling molecules that support brain health.

The research tracked male mice at different ages—5, 10, and 18 months old (roughly equivalent to young adult, middle-aged, and elderly humans). The 5-month-old and 10-month-old mice had similar levels of BDNF in their hippocampus, but by 18 months, these levels had dropped by about a third. This pattern matches the typical trajectory of cognitive aging, where significant decline often doesn’t begin until later in life.

Similarly, CX3CL1 production in visceral fat remained stable in younger mice but declined significantly in older animals, supporting a link between the two proteins.

Stress Hormones and the Fat-Brain Connection

To dig deeper, the researchers asked: What causes the drop in fat-derived CX3CL1 in the first place? The answer involved stress hormones like cortisol (in humans) and corticosterone (in mice).

“Glucocorticoids boost CX3CL1 production. An enzyme in belly fat called 11β-HSD1 reactivates inactive forms of glucocorticoids and keeps them active in cells, promoting glucocorticoid-dependent expression of CX3CL1,” study co-author Dr. Yoshinori Takei tells StudyFinds. “11β-HSD1 is essential for belly fat to respond to circulating glucocorticoids properly.”

But as we age, the amount of this enzyme declines, leading to lower CX3CL1 and BDNF levels. When 11β-HSD1 decreases with age, this entire system weakens, potentially contributing to memory loss.

The paper notes that while lower 11β-HSD1 in aging is problematic for CX3CL1 production and brain health, excessive 11β-HSD1 expression is linked to obesity-related diseases. High 11β-HSD1 levels are associated with metabolic syndrome, which is a known risk factor for cognitive decline.

Rethinking Belly Fat

The connection between belly fat and brain health highlights how intertwined our body systems really are. Our brains don’t operate in isolation but depend on signals from throughout the body—including, surprisingly, our fat tissue.

Before you start thinking about packing on belly fat for the sake of your brain, don’t! The researchers stress that balance is key. Too little belly fat and you lose the brain-protecting effects, but too much can cause serious health problems.

The best way to maintain brain health as you age is to focus on proven strategies: staying active, eating a balanced diet, managing stress, and keeping your mind engaged.

While this research is still in its early stages and was conducted in mice, it opens up fascinating possibilities for understanding how our bodies and brains are connected. Scientists may one day find ways to tap into this fat-brain communication system to slow cognitive decline and keep our minds sharper for longer.

The next time you pinch an inch around your middle, remember: there’s a conversation happening between your belly and your brain that science is just beginning to understand.

Paper Summary

How the Study Worked

The researchers used male mice of three different ages: 5 months (young adult), 10 months (middle-aged), and 18 months (elderly). They measured BDNF protein levels in the hippocampus using a test called ELISA that can detect specific proteins in tissue samples. They also measured CX3CL1 levels in visceral fat tissue using two methods: one that detects the RNA instructions for making the protein and another that detects the protein itself. To determine whether fat-derived CX3CL1 directly affects brain BDNF, they used a technique called RNA interference to reduce CX3CL1 production specifically in the belly fat of younger mice, then checked what happened to brain BDNF levels. They also injected CX3CL1 into older mice to see if it would restore their brain BDNF levels. To understand what regulates CX3CL1 production, they treated fat cells grown in the lab with different stress hormones. Finally, they measured levels and activity of the enzyme 11β-HSD1 in fat tissue from younger and older mice, and used RNA interference to reduce this enzyme in younger mice to see how it affected the fat-brain signaling system.

Results

The study uncovered several key findings. First, hippocampal BDNF levels were similar in 5-month-old and 10-month-old mice (about 300 pg BDNF/mg protein) but dropped by about one-third in 18-month-old mice (about 200 pg BDNF/mg protein). CX3CL1 levels in visceral fat showed a similar pattern, decreasing significantly in the oldest mice. When the researchers reduced CX3CL1 production in the belly fat of younger mice, their brain BDNF levels fell within days, similar to levels seen in naturally aged mice. On the flip side, a single injection of CX3CL1 into the abdominal cavity of older mice boosted their brain BDNF back up, confirming the connection between these proteins. The researchers also found that natural stress hormones (corticosterone in mice, cortisol in humans) increased CX3CL1 production in fat cells, while the enzyme 11β-HSD1 that activates these hormones was much less abundant in the fat tissue of older mice. When they reduced this enzyme in younger mice, both fat CX3CL1 and brain BDNF levels decreased, revealing another link in the signaling chain. Together, these results mapped out a communication pathway from belly fat to brain that becomes disrupted with age.

Limitations

While the study presents intriguing findings, several limitations should be kept in mind. The research used only male mice to avoid complications from female hormonal cycles, so we don’t know if the same patterns exist in females. The sample sizes were small, with most tests using just three mice per group. While this is common in basic science research, larger studies would strengthen confidence in the results. The researchers demonstrated connections between fat tissue signals and brain BDNF levels but didn’t directly test whether these changes affected the mice’s memory or cognitive abilities, though their previous work had shown that CX3CL1 injections improved recognition memory in aged mice. The study was also limited to specific ages in mice, and we don’t yet know how these findings might translate to humans across our much longer lifespan. Finally, the researchers used artificial RNA interference techniques to reduce CX3CL1 and enzyme levels for short periods—different from the gradual changes that occur during natural aging—which might affect how the results apply to real-world aging.

Discussion and Takeaways

This research reveals a previously unknown communication system between belly fat and the brain. Under normal conditions, stress hormones in the blood are activated by the enzyme 11β-HSD1 in visceral fat, which then produces CX3CL1. This fat-derived CX3CL1 signals through immune cells and the vagus nerve (a major nerve connecting internal organs to the brain) to maintain healthy BDNF levels in the hippocampus. As we age, reduced 11β-HSD1 in belly fat disrupts this signaling chain, contributing to lower brain BDNF and potentially to age-related memory problems. This discovery changes how we think about visceral fat, suggesting that while excess belly fat is harmful, healthy amounts serve important functions in supporting brain health. The findings also hint at future therapeutic possibilities—perhaps treatments could target components of this pathway to maintain brain function in aging. The researchers note that a careful balance is needed, as both too little 11β-HSD1 (associated with cognitive decline) and too much (linked to obesity and metabolic problems) appear harmful. For the average person concerned about brain health, this research underscores that the body works as an interconnected whole, with tissues we don’t typically associate with thinking—like fat—playing important roles in maintaining our cognitive abilities.

Funding and Disclosures

The study was supported by grants from the Japan Society for the Promotion of Science (JSPS KAKENHI). The lead researcher, Yoshinori Takei, and two colleagues received research funding through grants numbered 23K10878, 23K06148, and 24K14786. The researchers declared no competing interests, meaning they didn’t have financial or other relationships that might have influenced their research or how they reported it.

Publication Information

The paper “Adipose chemokine ligand CX3CL1 contributes to maintaining the hippocampal BDNF level, and the effect is attenuated in advanced age” was written by Yoshinori Takei, Yoko Amagase, Ai Goto, Ryuichi Kambayashi, Hiroko Izumi-Nakaseko, Akira Hirasawa, and Atsushi Sugiyama from Toho University and other Japanese institutions. It appeared in the journal GeroScience in February 2025, after being submitted in October 2024 and accepted for publication in January 2025. The paper can be accessed online using the identifier https://doi.org/10.1007/s11357-025-01546-4

Source: https://studyfinds.org/belly-fat-brain-health/

Menopause starting earlier? Half of women in their 30s reporting symptoms

A woman experiencing hot flashes due to menopause (Photo by Pheelings media on Shutterstock)

Perimenopause—the transitional phase leading up to menopause—has long been considered a mid-life experience, typically affecting women in their late 40s. However, new research reveals that a significant number of women in their 30s are already experiencing perimenopausal symptoms severe enough to seek medical attention.

In a survey of 4,432 U.S. women, researchers from Flo Health and the University of Virginia found that more than half of those in the 30-35 age bracket reported moderate to severe menopause symptoms using the validated Menopause Rating Scale (MRS). Among those who consulted medical professionals about their symptoms, a quarter were diagnosed as perimenopausal. This challenges the assumption that perimenopause is primarily a concern for women approaching 50.

The findings, published in the journal npj Women’s Health, highlight a significant gap in healthcare awareness and support for women experiencing early-onset perimenopause.

Unrecognized Symptoms and Healthcare Gaps

“Physical and emotional symptoms associated with perimenopause are understudied and often dismissed by physicians. This research is important in order to more fully understand how common these symptoms are, their impact on women, and to raise awareness amongst physicians as well as the general public,” says study co-author Dr. Jennifer Payne, MD, an expert in reproductive psychiatry at UVA Health and the University of Virginia School of Medicine, in a statement.

Despite medical definitions being well established, public understanding remains muddled. Many people use “menopause” as a catch-all term for both perimenopause and post-menopause. This confusion contributes to women feeling unprepared and unsupported during this transition.

The journey through perimenopause varies. Some women experience a smooth 5-7 year transition with manageable symptoms, while others face a decade-long struggle with physical and psychological challenges that impact daily life.

Early vs. Late Perimenopause

“Perimenopause can be broadly split into early and late stages,” the researchers explained. Early perimenopause typically involves occasional missed periods or cycle irregularity, while late perimenopause features greater menstrual irregularity with longer periods without menstruation, ranging from 60 days to one year.

The study identified the following symptoms as significantly associated with perimenopause:

  • Absence of periods for 60 days to 12 months
  • Hot flashes
  • Vaginal dryness
  • Pain during sexual intercourse
  • Recent cycle length irregularity
  • Heart palpitations
  • Frequent urination

While symptom severity generally increased with age, women in their 30s and early 40s still experienced significant symptom burden. Among 30-35-year-olds, 55.4% reported moderate or severe symptoms, increasing to 64.3% in women aged 36-40.

“We had a significant number of women who are typically thought to be too young for perimenopause tell us that they have high levels of perimenopause-related symptoms,” said Liudmila Zhaunova, PhD, director of science at Flo. “It’s important that we keep doing research to understand better what is happening with these women so that they can get the care they need.”

Psychological vs. Physical Symptoms With Menopause

The study revealed patterns in symptom presentation across different perimenopause stages. Psychological symptoms—such as anxiety, depression, and irritability—tend to appear first, peaking among women ages 41-45 before declining. Physical problems, including sexual dysfunction, bladder issues, and vaginal dryness, peaked in women 51 and older. Classic menopause symptoms like hot flashes and night sweats were most prevalent between ages 51-55 and were least common among younger women.

These findings suggest that perimenopause follows a predictable symptom progression, with mood changes and cognitive issues appearing first, followed by more recognized physical symptoms in later stages.

Delayed Medical Attention

Despite high symptom burden, younger women are far less likely to seek medical help for perimenopause. The study found that while 51.5% of women over 56 consulted a doctor, only 4.3% of 30-35-year-olds did. However, among those who sought medical advice, over a quarter of 30-35-year-olds and 40% of 36-40-year-olds were diagnosed as perimenopausal.

The study used the Menopause Rating Scale (MRS), a validated tool that measures symptom severity across three domains: psychological symptoms, somato-vegetative symptoms (including hot flashes and sleep problems), and urogenital symptoms. While MRS scores were highest in the 51-55 age group, younger women still reported a significant symptom burden.
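As a rough illustration of how an MRS questionnaire produces the domain scores mentioned above: the standard instrument has 11 self-rated items, each scored 0 (none) to 4 (very severe), grouped into the three domains. The item-to-domain mapping below follows the commonly published MRS structure rather than anything stated in this study, so treat it as an assumption:

```python
# Hedged sketch of Menopause Rating Scale (MRS) scoring.
# Each of the 11 items is self-rated 0 (none) to 4 (very severe).
# Domain groupings follow the commonly published MRS structure (assumption).
SOMATO_VEGETATIVE = ["hot_flushes", "heart_discomfort", "sleep_problems", "joint_muscle"]
PSYCHOLOGICAL = ["depressive_mood", "irritability", "anxiety", "exhaustion"]
UROGENITAL = ["sexual_problems", "bladder_problems", "vaginal_dryness"]

def mrs_scores(ratings: dict) -> dict:
    """Return the three domain subscores and the total score (range 0-44).

    Items missing from `ratings` are counted as 0 (symptom absent).
    """
    scores = {
        "somato_vegetative": sum(ratings.get(item, 0) for item in SOMATO_VEGETATIVE),
        "psychological": sum(ratings.get(item, 0) for item in PSYCHOLOGICAL),
        "urogenital": sum(ratings.get(item, 0) for item in UROGENITAL),
    }
    scores["total"] = sum(scores.values())
    return scores

# Hypothetical respondent with a handful of mild-to-moderate symptoms:
example = {"hot_flushes": 2, "sleep_problems": 3, "anxiety": 2, "irritability": 1}
print(mrs_scores(example))
```

Higher subscores in one domain versus another is exactly the kind of pattern the researchers used to show that psychological symptoms peak earlier than urogenital ones.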

Implications for Healthcare and Awareness

“This study is important because it plots a trajectory of perimenopausal symptoms that tells us what symptoms we can expect when and alerts us to the fact that women are experiencing perimenopausal symptoms earlier than we expected,” Payne said.

These findings underscore the need for earlier education and support. Women in their 30s and early 40s may not recognize symptoms like irregular cycles, mood changes, and sleep disturbances as signs of perimenopause, leading to misdiagnosis or missed opportunities for treatment. This research calls for healthcare providers to adopt a more age-inclusive approach when evaluating these symptoms.

Additionally, the variability of perimenopause means a one-size-fits-all approach to management is inadequate. Psychological symptoms may dominate early perimenopause, while vasomotor and urogenital symptoms become more pronounced in later stages. Understanding these transitions can help tailor treatment strategies for individual needs.

Source: https://studyfinds.org/perimenopause-early-symptoms-women/

How one sleepless night upends the immune system, fueling inflammation

(© Andrii Lysenko – stock.adobe.com)

When you toss and turn all night, your immune system takes notice – and not in a good way. New research reveals that sleep deprivation doesn’t just leave you groggy and irritable; it actually transforms specific immune cells in your bloodstream, potentially fueling chronic inflammation throughout your body.

The study, published in The Journal of Immunology, finds a direct link between poor sleep quality and significant changes in specialized immune cells called monocytes. These altered cells appear to drive widespread inflammation – the same type of inflammation associated with obesity and numerous chronic diseases.

The research, conducted by scientists at Kuwait’s Dasman Diabetes Institute, demonstrates how sleep deprivation triggers an increase in inflammatory “nonclassical monocytes” (NCMs) – immune cells that amplify inflammation. More remarkably, these changes occurred regardless of a person’s weight, suggesting that even lean, healthy individuals may face inflammatory consequences from poor sleep.

Study authors examined three factors increasingly recognized as critical determinants of overall health: sleep, body weight, and inflammation. Though previous research established connections between obesity and poor sleep, this study goes further by identifying specific immune mechanisms that may explain how sleep disruption contributes to chronic inflammatory conditions.

“Our findings underscore a growing public health challenge. Advancements in technology, prolonged screen time, and shifting societal norms are increasingly disruptive to regular sleeping hours. This disruption in sleep has profound implications for immune health and overall well-being,” said Dr. Fatema Al-Rashed, who led the study, in a statement.

How the study worked

The research team recruited 237 healthy Kuwaiti adults across a spectrum of body weights and carefully monitored their sleep patterns using advanced wearable activity trackers. Participants were fitted with ActiGraph GT3X+ devices for seven consecutive days, providing objective data on sleep efficiency, duration, and disruptions. Meanwhile, blood samples revealed striking differences in immune cell populations and inflammatory markers across weight categories.

Obese participants demonstrated significantly lower sleep quality compared to their lean counterparts, along with elevated levels of inflammatory markers. Most notably, researchers observed marked differences in monocyte subpopulations across weight categories. Obese individuals showed decreased levels of “classical” monocytes (which primarily perform routine surveillance) and increased levels of “nonclassical” monocytes – cells known to secrete inflammatory compounds.

The study’s most compelling finding emerged when researchers discovered that poor sleep quality correlated with increased nonclassical monocytes regardless of body weight. Even lean participants who experienced sleep disruption showed elevated NCM levels, suggesting that sleep deprivation itself – independent of obesity – may trigger inflammatory responses.

To further test this hypothesis, researchers conducted a controlled experiment with five lean, healthy individuals who underwent 24 hours of complete sleep deprivation. The results were striking: after just one night without sleep, participants showed significant increases in inflammatory nonclassical monocytes. These changes mirrored the immune profiles seen in obese participants, supporting the role of sleep health in modulating inflammation. Even more remarkably, these alterations reversed when participants resumed normal sleep patterns, demonstrating the body’s ability to recover from short-term sleep disruption.

‘Sleep quality matters as much as quantity’

These findings highlight sleep’s crucial role in immune regulation and suggest that chronic sleep deprivation may contribute to inflammation-driven health problems even in individuals without obesity. The research points to a potential vicious cycle: obesity disrupts sleep, sleep disruption alters immune function, and altered immune function exacerbates inflammation associated with obesity and related conditions.

Modern life often treats sleep as a luxury rather than a necessity. We sacrifice rest for productivity, entertainment, or simply because our environments and schedules make quality sleep difficult to achieve. This study adds to mounting evidence that such trade-offs may have serious long-term health consequences.

For most adults, the National Sleep Foundation recommends 7-9 hours of sleep per night. Study participants averaged approximately 7.8 hours (466.7 minutes) of sleep nightly, but importantly, the research suggests that sleep quality matters as much as quantity. Disruptions, awakenings, and reduced sleep efficiency all appeared to influence immune function, even when total sleep duration seemed adequate.

Sleep efficiency – the percentage of time in bed actually spent sleeping – averaged 91.4% among study participants but was significantly lower in obese individuals. Those with higher body weights also experienced more “wake after sleep onset” (WASO) periods, indicating fragmented sleep patterns that may contribute to immune dysregulation.
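The two actigraphy metrics mentioned here are straightforward to compute once you have time in bed, total sleep time, and how long it took to fall asleep. A minimal sketch (much simplified relative to the epoch-by-epoch scoring that ActiGraph software actually performs; the 510-minute time-in-bed and 15-minute onset latency in the example are illustrative assumptions, not study values):

```python
# Illustrative calculation of sleep efficiency and WASO.
# Real actigraphy software scores sleep/wake epoch by epoch; this is simplified.
def sleep_metrics(time_in_bed_min: float, total_sleep_min: float,
                  sleep_onset_latency_min: float) -> dict:
    """Sleep efficiency = sleep time as a % of time in bed.
    WASO (wake after sleep onset) = time in bed minus onset latency minus sleep."""
    efficiency = 100 * total_sleep_min / time_in_bed_min
    waso = time_in_bed_min - sleep_onset_latency_min - total_sleep_min
    return {"efficiency_pct": round(efficiency, 1), "waso_min": round(waso, 1)}

# Roughly matching the study averages: ~7.8 h (466.7 min) asleep, ~91% efficiency.
print(sleep_metrics(time_in_bed_min=510, total_sleep_min=466.7,
                    sleep_onset_latency_min=15))
```

Fragmented sleep shows up as a higher WASO even when efficiency and total duration look acceptable, which is why the researchers tracked both.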

How sleep impacts inflammation

The study also revealed intriguing connections between specific inflammatory markers and monocyte subpopulations. Nonclassical monocytes showed positive correlations with multiple inflammatory compounds, including TNF-α and MCP-1 – molecules previously linked to sleep regulation. This suggests that sleep disruption may initiate a cascade of inflammatory signals throughout the body, potentially contributing to various health problems.

While obesity emerged as a significant factor in driving inflammation, mediation analyses revealed that sleep disruption independently contributes to inflammation regardless of weight status. This finding challenges simplistic views of obesity as the primary driver of inflammation and highlights sleep’s importance as a modifiable risk factor for inflammatory conditions.

The implications extend beyond obesity-related concerns. Sleep disruption has been associated with numerous health problems, including cardiovascular disease, diabetes, and mental health disorders. This research provides potential mechanisms explaining these connections and suggests that improving sleep quality could reduce inflammation and associated risks.

Monocytes, crucial components of the innate immune system, patrol the bloodstream looking for signs of trouble. They differentiate into three main types: classical monocytes (which primarily perform surveillance), intermediate monocytes (which excel at presenting antigens and activating other immune cells), and nonclassical monocytes (which specialize in patrolling blood vessels and producing inflammatory compounds).

In healthy individuals, these monocyte populations maintain a careful balance. Sleep disruption appears to tip this balance toward inflammatory nonclassical monocytes, potentially contributing to a state of chronic low-grade inflammation throughout the body.

Is lack of quality sleep becoming a public health crisis?

This research provides compelling evidence that sleep quality deserves serious attention as a public health concern. The study suggests that even temporary sleep disruption can alter immune function, while chronic sleep problems may contribute to persistent inflammation – a condition increasingly recognized as a driver of numerous diseases.

For individuals struggling with obesity or inflammatory conditions, addressing sleep quality may provide additional benefits beyond traditional interventions focused on diet and exercise. The research also highlights potential concerns for shift workers, parents of young children, and others who regularly experience disrupted sleep patterns.

Healthcare providers may need to consider sleep quality as a critical factor when evaluating and treating patients with inflammatory conditions. Similarly, public health initiatives addressing obesity and related disorders might benefit from incorporating sleep improvement strategies alongside dietary and exercise recommendations.

The researchers are now planning to explore in greater detail the mechanisms linking sleep deprivation to immune changes. They also want to investigate whether interventions such as structured sleep therapies or technology-use guidelines can reverse these immune alterations.

“In the long term, we aim for this research to drive policies and strategies that recognize the critical role of sleep in public health,” said Dr. Al-Rashed. “We envision workplace reforms and educational campaigns promoting better sleep practices, particularly for populations at risk of sleep disruption due to technological and occupational demands. Ultimately, this could help mitigate the burden of inflammatory diseases like obesity, diabetes, and cardiovascular diseases.”

Source: https://studyfinds.org/sleep-deprivation-immune-system-inflammation/

How grapes could help preserve muscle health as you age

(Photo by J Yeo on Shutterstock)

Could adding grapes to your daily diet help maintain muscle strength and health as you age? A new mouse model study suggests these antioxidant-rich fruits might help reshape muscle composition, particularly in women, as they enter their later years.

Published in the journal Foods, this investigation — partially funded by the California Table Grape Commission — tracked 480 mice over two and a half years, examining how grape consumption affects muscle gene expression at a fundamental level. The findings highlight how something as simple as adding grapes to our daily diet might help support muscle health during aging.

Muscle loss affects millions of older adults worldwide, with 10-16% of elderly individuals experiencing sarcopenia—the progressive deterioration of muscle mass and function that comes with age. Women often face greater challenges maintaining muscle mass, particularly after menopause, making this research especially relevant for older women.

Researchers from several U.S. universities discovered that consuming an amount of grapes equivalent to two human servings daily led to notable changes in muscle-related gene expression. While both males and females showed genetic shifts, the effects were particularly pronounced in females, whose gene activity patterns began shifting toward those typically observed in males.

This convergence occurred at the genetic level, where researchers identified 25 key genes affected by grape consumption. Some genes associated with lean muscle mass increased their activity, while others linked to muscle degeneration showed decreased expression.

What makes grapes so special? The fruit contains over 1,600 natural compounds that work together in complex ways. Rather than any single component being responsible for the benefits, it’s likely the combination of these compounds that produces such significant effects.

“This study provides compelling evidence that grapes have the potential to enhance muscle health at the genetic level,” says Dr. John Pezzuto, senior investigator of the study and professor and dean of pharmacy and health sciences at Western New England University, in a statement. “Given their safety profile and widespread availability, it will be exciting to explore how quickly these changes can be observed in human trials.”

Proper muscle function plays a crucial role in everyday activities, from maintaining balance to supporting bone health and regulating metabolism. The potential to help maintain muscle health through dietary intervention could significantly impact quality of life for aging adults.

The research adds to a growing body of evidence supporting grapes’ health benefits. Previous studies have shown positive effects on heart health, kidney function, skin protection, vision, and digestive health. This new understanding of grapes’ influence on muscle gene expression opens another avenue for potential therapeutic applications.

While the physical appearance and weight of muscles didn’t change significantly between groups, the underlying genetic activity showed marked differences. This suggests that grapes might influence muscle health at a fundamental cellular level, even before measurable functional changes occur—though further research is needed to confirm these effects.

For older adults concerned about maintaining their strength and independence, these findings suggest that a daily bowl of grapes, alongside regular exercise, might offer another tool in the healthy aging toolkit. However, the researchers emphasize that human studies are still needed to confirm these effects.

Source: https://studyfinds.org/grapes-muscle-strength/

Why some people remember their dreams (and others don’t)

About a fourth of people don’t remember their dreams. (Roman Samborskyi/Shutterstock)

What were you dreaming about last night? For roughly one in four people, that question draws a blank. For others, the answer comes easily, complete with vivid details about flying through clouds or showing up unprepared for an exam. This stark contrast in dream recall ability has baffled researchers for decades, but a new study reveals there’s more to remembering dreams than pure chance.

From March 2020 to March 2024, scientists from multiple Italian research institutions conducted a sweeping investigation to uncover what determines dream recall. Published in Communications Psychology, their research surpassed typical dream studies by combining detailed sleep monitoring, cognitive testing, and brain activity measurements. The study involved 217 healthy adults between ages 18 and 70, who did far more than simply keep dream journals; they underwent brain tests, wore sleep-tracking wristbands, and some even had their brain activity monitored throughout the night.

Understanding dream recall has long puzzled researchers. Early studies in the 1950s focused mainly on REM sleep, the sleep stage characterized by rapid eye movements and vivid dreams. Scientists initially thought they had solved the mystery of dreaming by linking it exclusively to REM sleep. However, later research revealed that people also dream during non-REM sleep stages, though these dreams tend to be less vivid and harder to remember.

According to researchers at the IMT School for Advanced Studies Lucca, three main factors emerged as strong predictors of dream recall: a person’s general attitude toward dreaming, their tendency to let their mind wander during waking hours, and their typical sleep patterns.

To measure attitudes about dreaming, participants completed a questionnaire rating how strongly they agreed or disagreed with statements like “dreams are a good way of learning about my true feelings” versus “dreams are random nonsense from the brain.” People who viewed dreams as meaningful and worthy of attention were more likely to remember them compared to those who dismissed dreams as meaningless brain static.

Mind wandering proved to be another crucial factor. Using a standardized questionnaire that measures how often people’s thoughts drift away from their current task, researchers found that participants who frequently caught themselves daydreaming or engaging in spontaneous thoughts during the day were more likely to recall their dreams. This connection makes sense considering both daydreaming and dreaming involve similar brain networks, particularly regions associated with self-reflection and creating internal mental experiences.

The relationship between daydreaming and dream recall points to an intriguing possibility: people who spend more time engaged in spontaneous mental activity during the day may be better equipped to generate and remember dreams at night. Both activities involve creating mental experiences disconnected from the immediate external environment.

People who typically had longer periods of lighter sleep with less deep sleep (technically called N3 sleep) were better at remembering their dreams. During deep sleep, the brain produces large, slow waves that help consolidate memories but may make it harder to generate or remember dreams. In contrast, lighter sleep stages maintain brain activity patterns more similar to wakefulness, potentially making it easier to form and store dream memories.

Age was also a factor in dream recall. While younger participants were generally better at remembering specific dream content, older individuals more frequently reported “white dreams,” those frustrating experiences where you wake up knowing you definitely had a dream but can’t remember anything specific about it. This age-related pattern suggests that the way our brains process and store dream memories may change as we get older.

The researchers also discovered that dream recall fluctuates seasonally, with people remembering fewer dreams during winter months compared to spring and autumn. While the exact reason remains unclear, this pattern wasn’t explained by changes in sleep habits across seasons. One possibility is that seasonal variations in light exposure affect brain chemistry in ways that influence dream formation or recall.

Rather than relying on written dream journals, participants used voice recorders each morning to describe everything that was going through their minds just before waking up. This approach reduced the effort required to record dreams and minimized the chance that the act of recording would interfere with the memory of the dream itself.

Throughout the study period, participants wore wristwatch-like devices called actigraphs that track movement patterns to measure sleep quality, duration, and timing. A subset of 50 participants also wore special headbands equipped with electrodes to record their brain activity during sleep. This comprehensive approach allowed researchers to connect dream recall with objective measures of how people were actually sleeping, not just how they thought they slept.

“Our findings suggest that dream recall is not just a matter of chance but a reflection of how personal attitudes, cognitive traits, and sleep dynamics interact,” says lead author Giulio Bernardi, professor in general psychology at the IMT School, in a statement. “These insights not only deepen our understanding of the mechanisms behind dreaming but also have implications for exploring dreams’ role in mental health and in the study of human consciousness.”

The study authors plan to use these findings as a reference for future research, particularly in clinical settings. Further investigations could explore the diagnostic and prognostic value of dream patterns, potentially improving our understanding of how dreams relate to mental health and neurological conditions.

Understanding dream recall could provide insights into how the brain processes and stores memories during sleep. Dreams appear to draw upon our previous experiences and memories while potentially playing a role in emotional processing and memory consolidation. Changes in dream patterns or recall ability might serve as early indicators of neurological or psychiatric conditions.

Source: https://studyfinds.org/why-some-people-remember-their-dreams-others-dont/

This one change to your phone can reverse age-related cognitive issues by 10 years

(Photo by Alliance Images on Shutterstock)

New research reveals a surprisingly simple way to improve mental health and focus: turn off your phone’s internet. A month-long study found that blocking mobile internet access for just two weeks led to measurable improvements in well-being, mental health, and attention, with effects comparable to cognitive behavioral therapy and to reversing a decade of age-related cognitive decline.

Researchers from multiple universities across the U.S. and Canada worked with 467 iPhone users (average age 32) to test how removing constant internet access would affect their daily lives. Instead of asking people to give up their phones completely, the study took a more practical approach. Participants installed an app that blocked mobile internet while still allowing calls and texts. This way, phones remained useful for basic communication but lost their ability to provide endless scrolling, social media, and constant online access.

The average smartphone user now spends nearly 5 hours each day on their device. More than half of Americans with smartphones worry they use them too much, and this jumps to 80% for people under 30. Despite these concerns, few studies have actually tested what happens when people cut back.

The results were significant. After two weeks without mobile internet, participants showed clear improvements in multiple areas. They reported feeling happier and more satisfied with their lives, and their mental health improved—an effect size that was greater than what is typically seen with antidepressant medications in clinical trials. They also performed better on attention tests, showing improvements comparable to reversing 10 years of age-related cognitive decline.

To measure attention, participants completed a computer task that tested their ability to stay focused over time. The improvements were meaningful—similar in size to the difference between an average adult and someone with mild attention difficulties. This suggests that constant mobile internet access may impair our natural ability to focus.

The study design was particularly strong because it included a swap halfway through. After the first two weeks, the groups switched roles—people who had blocked mobile internet got access back, while the other group had to block their internet. This strengthened the evidence that the improvements were caused by reduced mobile internet access rather than other factors.
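The logic of that halfway swap can be sketched with simulated data. The numbers below are invented for illustration only (the study’s actual scores and effect size are not reported here): each participant experiences both a blocked and an unblocked period, so the within-person difference estimates the effect of blocking mobile internet while individual differences cancel out.

```python
import random
import statistics

random.seed(0)

# Hypothetical well-being scores for 100 participants (invented numbers).
N = 100
baseline = [random.gauss(50, 10) for _ in range(N)]
effect = 3.0  # assumed benefit of blocking mobile internet, for illustration

a_base, b_base = baseline[:N // 2], baseline[N // 2:]

# Period 1: group A blocks mobile internet (gets the effect), group B does not.
a_p1 = [b + effect + random.gauss(0, 2) for b in a_base]
b_p1 = [b + random.gauss(0, 2) for b in b_base]

# Period 2: the groups swap roles (the crossover).
a_p2 = [b + random.gauss(0, 2) for b in a_base]
b_p2 = [b + effect + random.gauss(0, 2) for b in b_base]

# Within-person contrast: each participant's blocked score minus unblocked score.
diffs = [blocked - unblocked
         for blocked, unblocked in zip(a_p1 + b_p2, a_p2 + b_p1)]
print(f"estimated effect: {statistics.mean(diffs):.1f}")
```

Because every participant serves as their own control, the estimate recovers the simulated effect regardless of how the two groups differed at baseline, which is what makes the crossover design persuasive.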

“Smartphones have drastically changed our lives and behaviors over the past 15 years, but our basic human psychology remains the same,” says lead author Adrian Ward, an associate professor of marketing at the University of Texas at Austin, in a statement. “Our big question was, are we adapted to deal with constant connection to everything all the time? The data suggest that we are not.”

An impressive 91% of participants improved in at least one area. Without the ability to check their phones constantly, people spent more time socializing in person, exercising, and being outdoors—activities known to boost mental health and cognitive function.

Throughout the study, researchers checked in with participants via text messages to track their moods. Those who blocked mobile internet reported feeling progressively better over the two weeks. Even after regaining internet access, many retained some of their improvements, suggesting the break helped reshape their digital habits.

Interestingly, the benefits weren’t just from less screen time. While phone use dropped significantly during the study (from over 5 hours to under 3 hours daily), the improvements appeared linked specifically to breaking the habit of constant online connection. Even after getting internet access back, many participants kept their usage lower and continued feeling better.

One surprising finding involved people who started the study with a high “fear of missing out” (FOMO). Rather than making their anxiety worse, disconnecting from mobile internet led to the biggest improvements in their well-being. This suggests that constant access to social media and online updates may fuel digital anxiety rather than relieve it.

Blocking mobile internet also helped participants feel more in control of their behavior and improved their sleep. Without instant access to endless entertainment and social media, people reported having better control over their attention and averaged about 17 more minutes of sleep per night.

However, sticking to the program was difficult—only about 25% of participants kept their mobile internet blocked for the full two weeks. This highlights how dependent many of us have become on constant connectivity. Still, even those who didn’t fully adhere to the program showed improvements, suggesting that simply reducing mobile internet use can be beneficial.

The researchers noted that a less extreme approach might work better for most people. Instead of blocking all mobile internet, limiting access during certain times or restricting specific apps could provide similar benefits while being easier to maintain.

The takeaway is simple: reducing mobile internet access—even temporarily—can help improve well-being, mental health, and focus. While not everyone is ready to disconnect completely, finding ways to limit our online exposure could make us happier, healthier, and more present in our daily lives.

Source: https://studyfinds.org/digital-detox-keeping-phone-internet-off-wellbeing-focus-sleep/

Why morning people are more likely to conquer challenges

(© Anatoliy Karlyuk – stock.adobe.com)

It’s no surprise that our mental acuity and mood wax and wane during the day, but it may be surprising that most of us seem to be morning people.

In a study at University College London, researchers analyzed data from a dozen surveys of 49,218 respondents collected between March 2020 and March 2022. According to the report published recently in BMJ Mental Health, the data showed a trend of people reporting better mental health and wellbeing early in the day: greater life satisfaction, increased happiness, less severe depressive symptoms, and a greater sense of self-worth. People felt worst around midnight. Mental health and mood were more variable on weekends, while loneliness was more stable throughout the week.

Dr. Feifei Bu, principal research fellow in statistics and epidemiology at University College London, said in an email to CNN, “Our study suggests that people’s mental health and wellbeing could fluctuate over time of day. On average people seem to feel best early in the day and worst late at night.”

Research Limitations

Even though the research found a correlation between morning hours and better mood, life satisfaction, and self-worth, there may be factors affecting the results that are not apparent in the data, Dr. Bu says.

How people were feeling may have affected when they filled out the surveys. As with most research, the findings need to be replicated. Studies need to be designed to adjust for or eliminate confounding variables, isolating specific questions as much as possible.

In addition, although mental health and well-being are associated, they are not the same thing. Well-being is a complex medley of mental, emotional, physical, cognitive, psychological, and spiritual factors. According to the World Health Organization, well-being is a positive state determined by social, economic, and environmental conditions that include quality of life and a sense of meaning and purpose.

Mental health is a significant contributor to well-being, but they don’t entirely overlap. Many people with mental health issues also enjoy what they describe as a good quality of life.

Also, while many reported feeling better in the morning, better is relative. When someone feels better in the morning, that doesn’t necessarily mean that they feel good.

In addition, mood is a temporary state; mental health and well-being are more stable conditions.

Do hard work when it’s best for you

Do these results mean you should confront problems or do your hardest work first thing in the morning? Or that you shouldn’t problem-solve in the evening, and should instead go to bed and tackle your issues in the morning? Not all research agrees, but more evidence points to late morning as the most productive time for problem-solving. Studies suggest that mood is more stable in the late morning, making it easier to confront demanding matters with a cool head and less emotional influence.

Cortisol, an important body-regulating hormone that your adrenal glands produce and release, has a daily rhythm of highs and lows. It can also be secreted in bursts in response to stress. Cortisol tends to be lower in the midafternoon. This time is also associated with dips in mood and “decision fatigue.”

Source: https://studyfinds.org/why-morning-people-conquer-challenges/

 

Why intermittent fasting could be harmful for teens

(© anaumenko – stock.adobe.com)

Intermittent fasting has become one of the most popular eating patterns of the past decade. The practice, which involves cycling between periods of eating and fasting, has been praised for its potential health benefits. But a new mouse model study suggests that age plays a crucial role in how the body responds to fasting — and for young individuals, it might do more harm than good.

A team of German researchers recently discovered that while intermittent fasting improved health markers in older mice, it actually impaired important cellular development in younger ones. Their findings, published in Cell Reports, raise important questions about who should (and shouldn’t) try this trending eating pattern.

Inside our bodies, specialized cells in the pancreas produce insulin, a hormone that helps control blood sugar levels. These cells, called beta cells, are particularly important during youth when the body is still developing. The researchers found that in young mice, long-term intermittent fasting disrupted how these cells grew and functioned.

“Our study confirms that intermittent fasting is beneficial for adults, but it might come with risks for children and teenagers,” says Stephan Herzig, a professor at Technical University of Munich and director of the Institute for Diabetes and Cancer at Helmholtz Munich, in a statement.

The study looked at three groups of mice: young (equivalent to adolescence in humans), middle-aged (adult), and elderly. Each group followed an eating pattern where they fasted for 24 hours, followed by 48 hours of normal eating. The researchers tracked how this affected their bodies over both short periods (5 weeks) and longer periods (10 weeks).
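A rough count of how many fasting cycles each arm involved, assuming the 24-hour fast and 48-hour eating periods ran back to back (an assumption for illustration; the paper’s exact scheduling may differ):

```python
# One cycle = 1 day of fasting + 2 days of normal eating = 3 days.
cycle_days = 1 + 2

short_term_cycles = (5 * 7) // cycle_days    # 5-week arm
long_term_cycles = (10 * 7) // cycle_days    # 10-week arm
print(short_term_cycles, long_term_cycles)   # 11 23
```

Under that assumption, the 10-week mice went through roughly twice as many fasting cycles as the 5-week mice, which is the "extended period" contrast the age-group differences emerged under.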

At first, all age groups showed improvements in how their bodies handled sugar, which, of course, is a positive sign. But after extended periods of intermittent fasting, significant differences emerged between age groups. While older and middle-aged mice continued to show benefits, the young mice began showing troubling changes.

The pancreatic cells in young mice became less effective at producing insulin, and they weren’t maturing properly. Even more concerning, these cellular changes resembled patterns typically seen in Type 1 diabetes, a condition that usually develops in childhood or adolescence.

“Intermittent fasting is usually thought to benefit beta cells, so we were surprised to find that young mice produced less insulin after the extended fasting,” explains co-lead author Leonardo Matta, from Helmholtz Munich.

The older mice, however, actually benefited from the extended fasting periods. Their insulin-producing cells worked better, and they showed improved blood sugar control. Middle-aged mice maintained stable function, suggesting that mature bodies handle fasting periods differently than developing ones.

This age-dependent response challenges the common belief that intermittent fasting is suitable for everyone. The research suggests that while mature adults might benefit from this eating pattern, young people could be putting themselves at risk, particularly if they maintain the practice for extended periods.

The findings are especially relevant given how popular intermittent fasting has become among young people looking to manage their weight. While short-term fasting appeared safe across all age groups, the long-term effects on young practitioners could be significant.

“The next step is digging deeper into the molecular mechanisms underlying these observations,” says Herzig. “If we better understand how to promote healthy beta cell development, it will open new avenues for treating diabetes by restoring insulin production.”

Despite the attention they receive from athletes and wellness influencers, popular dietary trends aren’t one-size-fits-all. What works for adults might not be appropriate for growing bodies — all the more reason that understanding these age-related differences becomes increasingly important.

Source: https://studyfinds.org/intermittent-fasting-harmful-teens/

Brake dust could be more harmful to health than diesel exhaust

(© kichigin19 – stock.adobe.com)

As cities worldwide crack down on diesel vehicle emissions, a more insidious form of air pollution has been quietly growing alongside increased traffic – brake dust. Research concludes that the particles released when vehicles brake may actually be more harmful to human lung cells than diesel exhaust, with copper-rich brake pads emerging as a particular concern.

This finding comes at a critical time, as the shift toward heavier electric vehicles means more brake wear and potentially higher exposure to these harmful particles. While governments have made substantial progress in reducing exhaust emissions, brake dust remains largely unregulated despite contributing up to 55% of all traffic-related fine particles in urban areas.

Researchers at the University of Southampton and their collaborators examined how tiny particles from different types of brake pads affected human lung cells, focusing on the delicate air sacs where oxygen enters our bloodstream. They compared brake dust from four common types of brake pads against diesel exhaust particles. Much like comparing different recipes to see which ingredients might cause problems, they tested low-metallic, semi-metallic, non-asbestos organic (NAO), and ceramic brake pads.

Their findings, published in Particle and Fibre Toxicology, painted a concerning picture: brake dust from copper-enriched NAO and ceramic brake pads caused significantly more cellular stress and inflammation than both other brake pad types and diesel exhaust. These copper-rich particles triggered inflammatory responses and altered cell metabolism in ways that could potentially lead to disease.

Modern brake pads contain a complex mixture of materials that help vehicles stop safely. NAO brake pads, the most common type in the U.S. due to their low cost and good performance, were developed to replace asbestos-containing pads. However, manufacturers added copper fibers to maintain heat conductivity – a role previously filled by asbestos. This copper content turned out to be problematic.

When researchers exposed lung cells to NAO brake dust, copper accumulated inside the cells steadily as exposure increased. Using specialized molecules that bind to specific metals – like a magnet that only attracts one type of metal – they confirmed that copper was driving the harmful effects.

Perhaps most concerning was the discovery that copper-rich brake dust triggered a cellular response called “pseudohypoxic HIF signaling.” In simple terms, this means the cells behaved as if they were starving for oxygen even though plenty was available – similar to a false alarm that keeps cells in an unnecessary state of emergency. This same mechanism has been linked to various diseases, including certain cancers and scarring of lung tissue.

Some U.S. states, including California and Washington, have already begun restricting copper in brake pads – but these rules were originally created to protect fish and aquatic life from copper washing off roads into waterways, not to address human health concerns. This study suggests these restrictions may have the unexpected benefit of protecting human health as well.

Source: https://studyfinds.org/brake-dust-more-harmful-than-diesel-exhaust/

Eating yogurt may offer protection against hard-to-detect colon cancer

Yogurt has many health benefits. Now, new research shows it might be effective against certain colorectal cancers. (Photo by Vicky Ng on Unsplash)

For years, experts have praised yogurt’s potential benefits for digestive health, but that’s not the only punch it packs. New research suggests its cancer-fighting properties might be more nuanced than previously thought. A new study reveals that yogurt consumption may help prevent certain types of colorectal cancer, specifically those containing higher levels of beneficial bacteria called Bifidobacterium.

Colorectal cancer ranks as the third most common cancer worldwide, affecting both men and women. Prevention strategies have become increasingly important as rates rise, particularly among younger adults. While regular screening through colonoscopy remains the gold standard for early detection, researchers continue searching for dietary and lifestyle factors that might reduce cancer risk.

Researchers from Mass General Brigham and Harvard Medical School analyzed data from over 132,000 health professionals spanning multiple decades. Their study, published in Gut Microbes, reveals a surprising link between yogurt consumption patterns and subsequent colorectal cancer diagnoses.

“Our study provides unique evidence about the potential benefit of yogurt,” says Dr. Shuji Ogino, chief of the Program in Molecular Pathological Epidemiology at Brigham and Women’s Hospital, in a statement. “My lab’s approach is to try to link long-term diets and other exposures to a possible key difference in tissue, such as the presence or absence of a particular species of bacteria. This kind of detective work can increase the strength of evidence connecting diet to health outcomes.”

Through two major studies, the Nurses’ Health Study and the Health Professionals Follow-up Study, researchers tracked more than 100,000 female nurses since 1976 and 51,000 male health professionals since 1986. Every two years, participants answered detailed questions about their health, lifestyle, and medical history. Every four years, they provided specific information about their diets, including how much plain and flavored yogurt they consumed.

This long-term tracking allowed researchers to understand not just occasional yogurt consumption but established eating patterns over decades. When participants developed colorectal cancer, researchers analyzed tumor samples for the presence of Bifidobacterium, a type of beneficial bacteria naturally present in the human gut and commonly added to yogurt products.

Among 3,079 documented colorectal cancer cases, researchers examined 1,121 for Bifidobacterium content. The findings revealed that this beneficial bacterium was quite common: 31% of cases were Bifidobacterium-positive, while 69% were negative. For participants who ate two or more servings of yogurt per week, researchers observed a 20% lower rate of Bifidobacterium-positive tumors compared to those who ate yogurt less than once per month.

Most notably, this protective effect appeared strongest in the proximal colon, also known as the right side of the colon. Located near where the small intestine connects to the large intestine, the proximal colon poses unique challenges for cancer detection and treatment. Cancers in this area often grow with fewer obvious symptoms and are harder to spot during routine colonoscopy procedures. Research has shown that patients with proximal colon cancer typically face worse survival outcomes than those with cancers in other parts of the colon.

“It has long been believed that yogurt and other fermented milk products are beneficial for gastrointestinal health,” says co-senior author Dr. Tomotaka Ugai. “Our new findings suggest that this protective effect may be specific for Bifidobacterium-positive tumors.”

Bifidobacterium, a beneficial gut bacterium often found in yogurt, plays a role in digesting dietary fiber, maintaining gut barrier integrity, and regulating immune responses—all factors linked to colorectal cancer risk. The study’s authors hypothesize that yogurt consumption may contribute to a healthier gut microbiome, which in turn could influence cancer risk, particularly in the proximal colon.

However, because different yogurt products contain varying levels and strains of probiotics, more research is needed to determine whether specific types of yogurt provide greater protective benefits than others. Future studies may explore how dietary patterns interact with individual gut microbiomes to influence cancer risk, potentially leading to more personalized dietary recommendations for colorectal cancer prevention, though this remains an emerging area of research.

Regular yogurt consumers in the study demonstrated other healthy habits as well. They typically exercised more, smoked less, and maintained better overall dietary patterns than those who rarely ate yogurt. However, even after accounting for these factors, the association between yogurt consumption and reduced risk of Bifidobacterium-positive proximal colon cancer remained significant.

“This paper adds to the growing evidence that illustrates the connection between diet, the gut microbiome, and risk of colorectal cancer,” says Dr. Andrew Chan, chief of the Clinical and Translational Epidemiology Unit at Massachusetts General Hospital.

Beyond the general recommendation to consume yogurt, this research raises questions about which products might offer the most benefit. Not all yogurts contain the same bacterial strains or concentrations. While many products include Bifidobacterium, the amounts can vary significantly. Future research may help determine whether certain formulations provide better protection against colorectal cancer.

Different subtypes of colorectal cancer may respond differently to preventive measures, suggesting that a one-size-fits-all approach to prevention might not be optimal. This understanding could eventually lead to more personalized prevention strategies based on individual risk factors and gut bacterial composition.

Source: https://studyfinds.org/eating-yogurt-colon-cancer/

Is AI making us stupider? Maybe, according to one of the world’s biggest AI companies

Deferring to machines to make our decisions can have disastrous consequences when it comes to human lives. (Credit: © Jakub Jirsak | Dreamstime.com)

There is only so much thinking most of us can do in our heads. Try dividing 16,951 by 67 without reaching for a pen and paper. Or a calculator. Try doing the weekly shopping without a list on the back of last week’s receipt. Or on your phone.

By relying on these devices to help make our lives easier, are we making ourselves smarter or dumber? Have we traded efficiency gains for inching ever closer to idiocy as a species?

This question is especially important to consider with regard to generative artificial intelligence (AI) technology such as ChatGPT, an AI chatbot owned by tech company OpenAI, which at the time of writing is used by 300 million people each week.

According to a recent paper by a team of researchers from Microsoft and Carnegie Mellon University in the United States, the answer might be yes. But there’s more to the story.

Thinking well
The researchers assessed how users perceive the effect of generative AI on their own critical thinking.

Generally speaking, critical thinking has to do with thinking well.

One way we do this is by judging our own thinking processes against established norms and methods of good reasoning. These norms include values such as precision, clarity, accuracy, breadth, depth, relevance, significance and cogency of arguments.

Other factors that can affect quality of thinking include the influence of our existing world views, cognitive biases, and reliance on incomplete or inaccurate mental models.

The authors of the recent study adopt a definition of critical thinking developed by American educational psychologist Benjamin Bloom and colleagues in 1956. It’s not really a definition at all. Rather, it’s a hierarchical way to categorize cognitive skills, including recall of information, comprehension, application, analysis, synthesis and evaluation.

The authors state they prefer this categorization, also known as a “taxonomy,” because it’s simple and easy to apply. However, since it was devised, it has fallen out of favor and has been discredited by Robert Marzano and indeed by Bloom himself.

In particular, it assumes there is a hierarchy of cognitive skills in which so-called “higher-order” skills are built upon “lower-order” skills. This does not hold on logical or evidence-based grounds. For example, evaluation, usually seen as a culminating or higher-order process, can be the beginning of inquiry or very easy to perform in some contexts. It is more the context than the cognition that determines the sophistication of thinking.

An issue with using this taxonomy in the study is that many generative AI products also seem to use it to guide their own output. So you could interpret this study as testing whether generative AI, by the way it’s designed, is effective at framing how users think about critical thinking.

Also missing from Bloom’s taxonomy is a fundamental aspect of critical thinking: the fact that the critical thinker not only performs these and many other cognitive skills, but performs them well. They do this because they have an overarching concern for the truth, which is something AI systems do not have.

Higher confidence in AI equals less critical thinking
Research published earlier this year revealed “a significant negative correlation between frequent AI tool usage and critical thinking abilities”.

The new study further explores this idea. It surveyed 319 knowledge workers such as healthcare practitioners, educators and engineers who discussed 936 tasks they conducted with the help of generative AI. Interestingly, the study found users consider themselves to use critical thinking less in the execution of the task than in providing oversight at the verification and editing stages.

In high-stakes work environments, the desire to produce high-quality work combined with the fear of reprisals serves as a powerful motivator for users to engage their critical thinking in reviewing the outputs of AI.

But overall, participants believe the increases in efficiency more than compensate for the effort expended in providing such oversight.

The study found people who had higher confidence in AI generally displayed less critical thinking, while people with higher confidence in themselves tended to display more critical thinking.

This suggests generative AI does not harm one’s critical thinking – provided one has it to begin with.

Problematically, the study relied too much on self-reporting, which can be subject to a range of biases and interpretation issues. Putting this aside, critical thinking was defined by users as “setting clear goals, refining prompts, and assessing generated content to meet specific criteria and standards”.

Source: https://studyfinds.org/is-ai-making-us-stupider-maybe-according-to-one-of-the-worlds-biggest-ai-companies/

What’s the best time for taking a nap?

(© fizkes – stock.adobe.com)

If you’ve ever wondered about the best time to take a nap, researchers have found your answer: 1:42 p.m. This oddly specific time emerged from a new nationwide study that looked at how Americans nap and what makes some people better nappers than others.

The survey, conducted by Talker Research and commissioned by Avocado Green Mattress, found that most people aim for a 51-minute nap, which would have them waking up at 2:33 p.m. But there’s a catch – napping too long can leave you feeling worse than before you closed your eyes.

“As a psychologist, I see firsthand how sleep — especially napping — affects mood, focus and overall well-being. So many people nap the wrong way and then wonder why they feel groggy instead of refreshed,” says Nick Bach, who holds a doctorate in psychology, in a statement.

When Does a Nap Become Too Long?

The study found that naps lasting longer than an hour and 26 minutes – about 35 minutes past the “perfect” length – enter what researchers call the “danger zone.” At this point, you might feel groggy and disoriented instead of refreshed. And if you’re still sleeping after an extra hour and 44 minutes? That’s no longer a nap – you’ve drifted into a full sleep session.

But even the ideal 51-minute nap might be too long for most people. Bach warns, “I always tell people that if they nap too long, they risk entering deep sleep, which makes waking up harder. A quick 20-minute nap is perfect for a recharge without the dreaded sleep inertia.”

The Great Debate: TV vs. Silence

While sleep experts often recommend quiet, dark rooms for napping, many Americans have different ideas. The study found that 44% of people like having some background noise during their naps – similar to the 50% who prefer noise while sleeping at night. Nearly half of these nappers (47%) fall asleep with the TV on, while only 7% use a white noise machine.

Bach suggests a middle ground: “I always recommend napping in a quiet, dark and cool space. If total silence isn’t an option, using white noise or soft music can help.”

When it comes to where people nap, there’s another split between expert advice and real-world habits. While 53% follow the traditional route and nap in bed, 38% prefer catching their midday rest on the couch. As Bach notes, “Napping on the couch can work, but a bed with good support is usually better.”

Are Nappers More Successful?

Here’s where the research gets interesting: people who regularly take naps might have better social lives. The study found that 48% of self-described “nappers” report having a “thriving” social life, compared to 34% of non-nappers. The pattern continues in their love lives too, with 50% of nappers reporting satisfaction versus 39% of non-nappers.

While both groups were equally likely to be happy (74% of nappers versus 73% of non-nappers), nappers had a slight edge in feeling successful – 39% compared to 32% of non-nappers. They’re also more likely to care about making sustainable choices, with 74% of nappers considering environmental impact in their decisions versus 68% of non-nappers.

Getting the Timing Right

The study’s finding that 1:42 p.m. is the perfect nap time isn’t just a random number – it fits right into expert recommendations. “I think one of the biggest mistakes people make is napping too late,” Bach explains. “If you nap in the late afternoon or evening, it can mess with your nighttime sleep. Ideally, napping before 3 p.m. keeps your sleep schedule on track.”

The benefits of a well-timed nap are clear: 55% of people in the study said they felt more productive right after waking up from a nap. However, there’s a concerning trend – the Americans surveyed only felt well-rested for about half of an average week, suggesting that many might be using naps to make up for poor nighttime sleep.

Source: https://studyfinds.org/best-time-nap/

Why smart people cheat — even when there’s nothing to gain

Man crossing his fingers behind his back (© Bits and Splits – stock.adobe.com)

Study shows uncertainty might be the key to breaking self-deceptive behaviors

A fitness tracker mysteriously logs extra steps. A calorie-counting app somehow shows lower numbers. An online quiz score seems surprisingly high. While these scenarios might seem like harmless self-improvement tools, new research reveals they represent a fascinating psychological phenomenon: we often cheat unconsciously simply to feel better about ourselves, even when there’s nothing tangible to gain.

“I found that people do cheat when there are no extrinsic incentives like money or prizes but intrinsic rewards, like feeling better about yourself,” explains Sara Dommer, assistant professor of marketing at Penn State and lead researcher of a groundbreaking study published in the Journal of the Association for Consumer Research. “For this to work, it has to happen via diagnostic self-deception, meaning that I have to convince myself that I am actually not cheating. Doing so allows me to feel smarter, more accomplished or healthier.”

This phenomenon, which researchers call “diagnostic self-deception,” helps explain behaviors that traditional theories about cheating cannot. While previous research focused on cheating for material gain, Dommer’s work examines why people cheat even when the only reward is an enhanced self-image.

Inside the Self-Deception Experiments

Through four carefully designed studies, Dommer and her team revealed how this self-deceptive behavior works in everyday situations.

Calorie Counting Study

One of the most illuminating experiments tackled everyday calorie tracking. Researchers presented 288 undergraduate students with a three-day food diary scenario, including restaurant meals like pancakes, sandwiches, and pasta dishes. Some students received exact calorie counts from restaurant websites (e.g., “450 calories for a short stack of buttermilk pancakes”), while others only saw multiple options ranging from 300 to 560 calories.

The results showed that when students lacked specific caloric information, they consistently chose lower calorie estimates. Importantly, the study was designed so that averaging the provided calorie options would match the true caloric value. Instead, participants routinely selected lower numbers, effectively deceiving themselves about their food choices.

IQ Test Study

Another study examined intelligence self-deception using a cleverly designed multiple-choice IQ test taken by 195 Amazon Mechanical Turk workers. Half the participants saw the correct answers highlighted after a few seconds, allowing them to cheat if they wished. The other half took the test normally.

Not only did the group with access to answers score significantly higher, but they also predicted they would perform better on a future test where cheating wouldn’t be possible. Even more telling, when offered a monetary bonus for accurate predictions of their future performance, they still maintained these inflated expectations. This suggests they truly believed their enhanced scores reflected their intelligence rather than their ability to see the answers.

Anagram Study

A third study used word scrambles to measure intelligence, presenting participants with jumbled words like “konreb” (broken) and “eoshu” (house). Some participants had to type their answers immediately, while others saw the correct answers after three minutes and were asked to self-report how many they had solved. Those who could self-report their scores claimed to have solved significantly more anagrams than those who had to prove their answers in real time.

Financial Literacy Study

The final study tackled financial literacy with an interesting twist. Before taking a financial knowledge test, some participants read the statement: “MOST Americans rate themselves highly on financial knowledge, but two-thirds of American adults CANNOT pass a basic financial literacy test.” This simple reminder of uncertainty significantly reduced cheating behavior, suggesting that when people question their capabilities in an area, they become more interested in accurate self-assessment than self-enhancement.

The Results: What It All Means

These studies revealed a consistent pattern: when people could cheat without obvious external rewards, they did—but only if they could maintain the belief that their performance reflected real ability. In the calorie-tracking study, participants entered about 244 fewer calories per day when they could choose from multiple options. In the IQ test, those who could see answers scored an average of 8.82 out of 10, compared to 5.36 for the control group.

“Participants in the cheat group engaged in diagnostic self-deception and attributed their performance to themselves,” Dommer said. “The thinking goes, ‘I’m performing well because I’m smart, not because the task allowed me to cheat.’”

Importantly, this wasn’t just about inflating numbers. Participants genuinely seemed to believe in their enhanced performance. They predicted similar high scores on future tests where cheating wouldn’t be possible, rated the assessments as legitimate measures of ability, and showed increased confidence in their capabilities afterward.

This pattern only broke down when participants’ certainty about their abilities was shaken. When reminded about widespread overconfidence in financial literacy, participants’ cheating decreased significantly, and their self-assessments became more modest.

“I don’t think there’s a good cheating or a bad cheating,” Dommer said. “I just think it’s interesting that not all cheating has to be conscious, explicit and intentional. That said, these illusory self-beliefs can still be harmful, especially when assessing your financial or physical health.”

These findings give us a new understanding of why people might fudge their step counts or peek at answers during online assessments. It’s not just about hitting arbitrary goals or earning meaningless badges—it’s about maintaining and enhancing beliefs about our capabilities, even if we have to deceive ourselves to do it.

Even this seemingly harmless form of cheating comes with consequences. When people convince themselves they’re naturally gifted rather than acknowledging their shortcuts, they might avoid seeking necessary help or purchasing beneficial products and services.

“These illusory self-beliefs can be harmful, especially when assessing your financial or physical health,” Dommer warns.

The research suggests a potential solution: “How do we stop people from engaging in diagnostic self-deception and get a more accurate representation of who they are? One way is to draw their attention to uncertainty around the trait itself. This seems to mitigate the effect,” explains Dommer.

Final Takeaway: How to Avoid Self-Deception

So what’s the big takeaway, especially if you believe you might be guilty of such behavior? While self-deception can provide temporary emotional comfort, it’s worth examining our own tendencies toward unconscious cheating.

Take note when you round down calories, peek at answers, or inflate self-assessments. The goal isn’t to eliminate these behaviors entirely — they’re deeply human — but to recognize when uncertainty about our abilities might actually serve us better than false confidence.

As Dommer’s research shows, acknowledging our limitations often leads to more accurate self-assessment and, ultimately, genuine self-improvement. Companies offering self-assessment tools might consider building in reality checks or uncertainty cues to help users maintain more accurate perceptions of their abilities. After all, real growth starts with honest self-awareness, not comfortable self-deception.

Source: https://studyfinds.org/why-smart-people-cheat/

Devoted nap-takers explain the benefits of sleeping on the job

AP Illustration/Annie Ng

They snooze in parking garages, on side streets before the afternoon school run, in nap pods rented by the hour or stretched out in bed while working from home.

People who make a habit of sleeping on the job comprise a secret society of sorts within the U.S. labor force. Inspired by famous power nappers Winston Churchill and Albert Einstein, today’s committed nap-takers often sneak in short rest breaks because they believe the practice improves their cognitive performance, even though it still carries a stigma.

Multiple studies have extolled the benefits of napping, such as enhanced memory and focus. A mid-afternoon siesta is the norm in parts of Spain and Italy. In China and Japan, nodding off is encouraged since working to the point of exhaustion is seen as a display of dedication, according to a study in the journal Sleep.

Yet it’s hard to catch a few z’s during regular business hours in the United States, where people who nap can be viewed as lazy. The federal government even bans sleeping in its buildings while at work, except in rare circumstances.

Individuals who are willing and able to challenge the status quo are becoming less hesitant to describe the payoffs of taking a dose of microsleep. Marvin Stockwell, the founder of PR firm Champion the Cause, takes short naps several times a week.

“They rejuvenate me in a way that I’m exponentially more useful and constructive and creative on the other side of a nap than I am when I’m forcing myself to gut through being tired,” Stockwell said.

The art of napping

Sleep is as important to good health as diet and exercise, but too many people don’t get enough of it, according to James Rowley, program director of the Sleep Medicine Fellowship at Rush University Medical Center.

“A lot of it has to do with electronics. It used to be TVs, but now cellphones are probably the biggest culprit. People just take them to bed with them and watch,” Rowley said.

Napping isn’t common in academia, where there’s constant pressure to publish, but University of Southern California lecturer Julianna Kirschner fits in daytime naps when she can. Kirschner studies social media, which she says is designed to deliver a dopamine rush to the brain. Viewers lose track of time on the platforms, interrupting sleep. Kirschner says she isn’t immune to this problem — hence, her occasional need to nap.

The key to effective napping is to keep the snooze sessions short, Rowley said. Short naps can be restorative and are more likely to leave you alert, he said.

“Most people don’t realize naps should be in the 15- to 20-minute range,” Rowley said. “Anything longer, and you can have problems with sleep inertia, difficulty waking up, and you’re groggy.”

Individuals who find themselves consistently relying on naps to make up for inadequate sleep should probably also examine their bedtime habits, he said.

A matter of timing

Mid-afternoon is the ideal time for a nap because it coincides with a natural circadian dip, while napping after 6 p.m. may interfere with nocturnal sleep for those who work during daylight hours, said Michael Chee, director of the Centre for Sleep and Cognition at the National University of Singapore.

“Any duration of nap, you will feel recharged. It’s a relief valve. There are clear cognitive benefits,” Chee said.

A review of napping studies suggests that 30 minutes is the optimal nap length in terms of practicality and benefits, said Ruth Leong, a research fellow at the Singapore center.

“When people nap for too long, it may not be a sustainable practice, and also, really long naps that cross the two-hour mark affect nighttime sleep,” Leong said.

Experts recommend setting an alarm for 20 to 30 minutes, which gives nappers a few minutes to fall asleep.

But even a six-minute nap can be restorative and improve learning, said Valentin Dragoi, scientific director of the Center for Neural Systems Restoration, a research and treatment facility run by Houston Methodist hospital and Rice University.

 

Neuroscience mystery solved? How our brains use experiences to make sense of time

Your brain learns patterns through your experiences to create timelines. (McCarony/Shutterstock)

Time flows as a constant stream of moments, but your brain sees patterns in this flow. Now, scientists have discovered exactly how individual neurons learn to recognize and predict these patterns, providing the first direct evidence of how our brains map out the structure of time.

The study, published in Nature, was conducted by researchers at UCLA Health. It required recording the activity of individual neurons in patients who had electrodes implanted in their brains for epilepsy treatment. These recordings offer a rare glimpse into how individual brain cells behave during learning and memory formation—something that’s impossible to observe with standard brain imaging techniques.

“Recognizing patterns from experiences over time is crucial for the human brain to form memory, predict potential future outcomes, and guide behaviors,” says Dr. Itzhak Fried, director of epilepsy surgery at UCLA Health, in a statement. “But how this process is carried out in the brain at the cellular level had remained unknown – until now.”

Prior to the main experiment, researchers needed to identify which images would trigger strong neural responses in each participant. They showed participants about 120 different pictures over 40 minutes, including images of celebrities, landmarks, and other subjects chosen partly based on each person’s interests. Based on how brain cells responded, researchers selected six specific images for each participant to use in the main experiment.

The main study had three phases. In the first phase, images appeared in random order while participants performed simple tasks, like identifying whether the person shown was male or female. During the middle phase, images appeared in sequences that followed specific rules, though participants weren’t told about these rules. Instead, they focused on a new task: determining whether each image was shown normally or in a mirror image. The final phase returned to random sequences and the original gender identification task.

The sequence rules were based on what researchers called a pyramid graph. Six points were arranged in a triangle shape, with each point representing one of the selected images. Lines connected certain points, indicating which images could appear after others. Some images were directly connected, like neighboring points on the graph. Others required taking an indirect path through multiple points to get from one to another.
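The study’s exact edge set isn’t given here, so the distinction between direct and indirect connections can be sketched with a hypothetical six-node graph and a breadth-first search (the layout and edges below are illustrative assumptions, not the researchers’ actual pyramid):

```python
from collections import deque

# Hypothetical six-image pyramid: letters stand in for the selected images.
# The edges are an assumption chosen only to illustrate the idea.
edges = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "E"],
    "D": ["B", "E", "F"],
    "E": ["C", "D", "F"],
    "F": ["D", "E"],
}

def hops(start, goal):
    """Fewest edges between two images, via breadth-first search."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))

print(hops("A", "B"))  # direct neighbors: one hop
print(hops("A", "F"))  # indirect: reachable only through intermediate images
```

In this toy layout, neighboring images sit one hop apart, while others can only be reached through intermediate points, which is the distinction the sequence rules encoded.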

What makes this study particularly fascinating is that it revealed how individual neurons adapted as participants became familiar with these sequences. At first, a neuron would respond strongly to just one specific image. But over time, these same neurons began responding to images that frequently appeared close together in the sequence, essentially mapping out the temporal relationships between different images.

The brain’s ability to encode these temporal patterns shares remarkable similarities with how it represents physical space. Previous research discovered that certain neurons act as “place cells,” firing when an animal reaches specific locations, while others function as “grid cells” that help measure distances. The new study shows the brain uses comparable mechanisms to map out sequences of events and experiences.

This research also builds on earlier discoveries about “concept cells,” neurons that respond to specific individuals, places, or objects. These specialized brain cells appear to be fundamental building blocks of memory. The new findings show how these neurons work together to create structured representations of our experiences through time.

The researchers discovered that this neural mapping created what they call a “successor representation,” a predictive map that considers not just immediate connections but likely future events. Rather than simply linking one moment to the next, your brain builds a broader model of likely future possibilities based on learned patterns.
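In computational terms, a successor representation can be written as the discounted sum of expected future state visits. A minimal sketch, assuming a toy three-image cycle rather than the study’s actual stimuli:

```python
import numpy as np

# Toy transition probabilities for a three-image cycle: A -> B -> C -> A.
# These states and probabilities are illustrative assumptions.
T = np.array([
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0],
])

gamma = 0.5  # discount factor: how far into the future the map looks

# Successor representation: M = I + gamma*T + gamma^2*T^2 + ...,
# which has the closed form M = (I - gamma*T)^{-1}.
M = np.linalg.inv(np.eye(3) - gamma * T)

# M[i, j] is large when image j tends to follow image i soon.
print(np.round(M, 3))
```

Each row of M weights upcoming images by how soon they tend to follow the current one, so the map encodes likely futures rather than just the next step.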

“This study shows us for the first time how the brain uses analogous mechanisms to represent what are seemingly very different types of information: space and time,” explains Fried. “We have demonstrated at the neuronal level how these representations of object trajectories in time are incorporated by the human hippocampal-entorhinal system.”

During breaks between testing phases, researchers observed “replay” events, moments when neurons would rapidly rehearse the learned sequences in a compressed timeframe. This neural replay happened in milliseconds, suggesting a mechanism for consolidating learned patterns into memory.

Understanding how the brain encodes temporal patterns goes beyond basic science. The findings could help develop new treatments for memory disorders and advance the design of brain-computer interfaces. They may also inform artificial intelligence systems that aim to process sequential information in ways that mirror human cognition.

Source: https://studyfinds.org/brain-experiences-sense-of-time/

9 predictions for the biggest research breakthroughs of 2025

(Photo by Nan_Got on Shutterstock)

From personalized medicine to wearable technology to hair loss innovations, this year could provide no shortage of ways for humans to live healthier

Remember when science fiction promised us flying cars and robot butlers? Well, 2025’s actual breakthroughs might not help you commute through the clouds, but they’re poised to transform something far more important: how we understand and care for our human bodies and minds. From reversing hair loss to regenerating teeth, from predicting mental health patterns to personalized genetic treatments, we’re standing on the edge of discoveries that would have seemed like science fiction just a few years ago.

We asked a panel of nine experts to provide us with their predictions for this year’s biggest research breakthroughs. If there’s one thing we can say, it’s that we’re looking forward to a world where gauging and improving our health might be easier than ever before.

What makes these predictions (or should we really call them expectations?) especially fascinating is how they’re all connected by two powerful threads: the rise of personalized medicine and the integration of artificial intelligence. Gone are the days of one-size-fits-all healthcare – whether we’re talking about stress management, dental care, or treating obesity, researchers are uncovering ways to tailor treatments to each person’s unique genetic makeup, gut microbiome, and lifestyle patterns.

But perhaps the most exciting shift isn’t just in what these breakthroughs might achieve, but in how they’re changing our entire approach to healthcare. Instead of waiting for problems to occur and then treating them, 2025’s innovations are all about prevention and early intervention. Imagine a world where your smartwatch can predict a mental health dip before you feel it, where your genes can be edited to prevent diseases before they start, or where your teeth could actually repair themselves. That world isn’t just science fiction anymore – it’s right around the corner.

Advancements in Aging and Mental Health Research

As a geriatric psychiatrist and someone deeply immersed in caregiving and aging issues, I predict 2025 will bring significant advancements in research focused on aging, mental health, and caregiver support. One of the most exciting areas is the use of AI-driven health technologies to detect and manage age-related conditions earlier. For example, wearable devices are becoming smarter at identifying early signs of cognitive decline or physical frailty. I anticipate new breakthroughs in how these tools deliver actionable insights, empowering families and caregivers to intervene before major health events occur.

Another area I’m watching closely is personalized medicine for mental health. Research into biomarkers and genetic testing is advancing quickly, and I believe we’ll soon see targeted treatments for depression, anxiety, and cognitive disorders that are more effective and have fewer side effects. This could be life-changing for older adults who struggle with medication tolerance or for caregivers managing their own stress.

Finally, I predict a surge in studies exploring the psychosocial aspects of caregiving. Researchers are diving deeper into the mental health impacts of caregiving and testing interventions — like mindfulness programs, virtual support groups, and even VR therapy — that help caregivers cope with stress and maintain their well-being. These innovations are essential as caregiving responsibilities grow more common and complex.

What excites me most is the focus on holistic approaches that integrate mental, emotional, and physical health. Whether it’s smarter tech, personalized care, or emotional resilience tools, I believe these breakthroughs will make life better for aging adults and their caregivers too — helping us all age with more grace, dignity, and support.

Mind-Body Connection and Stress Management

2025 will be the year of the mind-body connection — specifically, in understanding how chronic stress physically impacts our bodies. Eighty percent of our nervous system carries information from the body to the brain, not the other way around – yet our approach to mental health targets the mind first.

We’ve already seen at NEUROFIT that our average active user reports a 54% reduction in stress after just one week of mind-body practices — more studies will show how physical interventions can be more effective than traditional cognitive approaches for managing stress and mental health.

Measurement technology can help lead this trend. With wearables becoming more sophisticated, I anticipate studies showing how real-time biometric data can predict stress-related health issues before they become severe. Our own research analyzing millions of stress data points shows that certain physiological patterns consistently precede burnout. Given that chronic stress leads to $1T+ in healthcare expenses each year, I expect to see major studies validating these early warning signals, potentially revolutionizing preventive healthcare.

Another exciting area is the intersection of behavioral science and technology. Studies are currently exploring how brief, targeted interventions can create lasting changes in stress response patterns. We’ve found that 95% of our users experience immediate stress relief within five minutes of specific somatic exercises. I predict we’ll see research showing how short, consistent practices can rewire the nervous system more effectively than longer, sporadic interventions.

Finally, I think we’ll see breakthrough research on social connection’s role in nervous system regulation. Our data shows that prioritizing social play can improve emotional balance by up to 26%. I expect studies in 2025 will further validate how structured social interactions can significantly impact stress resilience and overall mental health outcomes.

These developments could radically change how we approach stress management and mental health care, moving from reactive treatment to proactive regulation and prevention.

Regenerative Medicine in Dentistry

As a dentist, I’m particularly excited about advancements in regenerative medicine and biomaterials for dentistry. Researchers are exploring ways to grow dental tissues or repair teeth using stem cells, which could revolutionize how we treat tooth decay and damage. Imagine being able to regenerate lost enamel or even replace a missing tooth without needing implants. These breakthroughs could lead to less invasive and more natural dental solutions for patients.

In the broader medical field, wearable technology and AI-driven diagnostics are also advancing quickly. Devices that monitor health metrics like glucose levels, heart rate, and oral health indicators in real time could become more accurate and accessible. These tools could improve preventive care by catching potential health issues early, leading to better outcomes for patients. I believe 2025 will bring us closer to more personalized and proactive healthcare.

AI-Driven Personalized Medicine

By 2025, I foresee significant advancements in AI-driven personalized medicine, wherein the integration of genomics, patient data analytics, and AI will result in much more precise and targeted treatments. There is a growing interest among researchers in developing AI-powered algorithms that could forecast disease progression based on a person’s genetic makeup, lifestyle, and environmental exposure. This would facilitate more proactive and personalized interventions, especially in chronic disease management, oncology, and neurology.

Another field that I predict breakthroughs in is the integration of AI and wearables for real-time health monitoring. Various studies are underway that test wearable technologies that gather continuous physiological data, to be analyzed by AI to spot early signs of impending heart attacks, strokes, or complications arising from diabetes, even before symptoms begin to appear. This will change healthcare from being a reactive practice to proactive care and ensure timely intervention for patients.

Finally, I foresee a rapid increase in research on regenerative medicine, specifically stem cell therapies and tissue engineering. With technological advancement may come the ability to regenerate tissues and organs damaged by trauma, disease, and conditions that, until today, could not be cured, such as heart disease, spinal cord injury, and neurodegenerative diseases. This space will no doubt intersect with AI and machine learning to improve outcomes and speed the effectiveness of treatments.

Genetics and Personalized Preventative Medicine

As a recruiter working in the life sciences industry, I have insider knowledge of the hiring shifts promising to transform medicine in the coming years.

Right now, it’s all about genetics. Personalized preventative medicine is what everyone wants. In other words, why treat a disease if you can avoid it? Tailored care takes into account a patient’s predispositions on a genetic level and neutralizes the threat before it manifests. It’s more possible than ever before, and I’m placing top talent in the sector daily. These candidates range from analysts looking at large data samples to patient-facing counselors focusing on a single profile, but by far, gene therapy is the most exciting area. With CRISPR technology, we’re on the cusp of being able to rework genetic abnormalities to our advantage, instead of simply waiting for them to be expressed. This has the potential to disrupt our understanding of the entire human body.

Progress in Hair Regeneration Research

In 2025, I anticipate significant progress in hair regeneration research, particularly in stem cell therapy and gene editing technologies. These studies aim to revolutionize treatments for hair loss by targeting the root causes at the cellular level. For example, researchers are exploring ways to reactivate dormant hair follicles or create lab-grown hair that matches the individual’s natural growth patterns.

Additionally, advancements in understanding the scalp’s microbiome could lead to personalized solutions for conditions like dandruff and inflammation, which impact hair health. These breakthroughs can potentially make treatments more effective, less invasive, and tailored to the unique needs of every individual. It’s an exciting time for the field of hair health.

Personalized Nutrition Through Microbiome Research

In 2025, ongoing research into the gut microbiome will start to have a direct influence on health. Scientists are unpacking how unique gut bacterial profiles influence everything from nutrient absorption to immune function, opening the door to personalized nutrition solutions.

Several ongoing studies aim to develop microbiome-based tools to treat chronic ailments like obesity, IBS, and diabetes. These advances promise to enable personalized nutrition based on an individual’s gut composition.

Supporting this shift, wearable technology that monitors gut health will be combined with microbiome research, allowing individuals to take a proactive, data-driven approach to wellness.

Revolutionizing Obesity Treatment with GLP-1

The year 2025 could mark a significant milestone in the treatment of obesity, thanks to advancements in GLP-1 receptor agonist drugs. These drugs, Ozempic in particular, are being investigated for benefits beyond weight loss, including improving patients’ metabolic profiles and decreasing the likelihood of chronic diseases such as diabetes and cardiovascular disease.

Research is not confined to obesity and weight management; it also examines the effects of GLP-1 on the brain, appetite control, and inflammatory cytokines. Early findings suggest these drugs may help prevent neurodegenerative diseases and are associated with enhanced cognitive performance.

These developments are expected to be amplified when GLP-1 therapies are combined with AI-based personalized medicine. Using genetic and metabolic data, clinicians may be able to determine the best course of treatment for each patient, achieving better outcomes.

Source : https://studyfinds.org/9-predictions-biggest-research-breakthroughs-202/

Teens spend 90+ minutes on their phones during typical school day

(Photo by BearFotos on Shutterstock)

As schools nationwide grapple with smartphone policies, new research provides unprecedented and shocking insight into how teenagers use their phones during school hours. Using sophisticated tracking technology, researchers discovered that students spend an average of 92 minutes on their smartphones during a typical school day, with a quarter of students exceeding 2 hours of use.

Moving beyond simple screen time measurements, researchers deployed passive sensing technology to paint a detailed picture of how and when adolescents use their phones during the school day. Their findings raise important questions about the role of smartphones in modern education and their potential impact on learning.

Research led by Dr. Dimitri A. Christakis at Seattle Children’s Research Institute found that this school-day phone use accounts for approximately 27% of students’ total daily phone usage, which averages 5.59 hours. More revealing than the raw numbers is how students spend their phone time during school hours.
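Those reported figures are internally consistent; a quick back-of-the-envelope check (a minimal Python sketch, using only the numbers quoted above) confirms the proportion:

```python
# Sanity check: 92 minutes of school-day phone use against the
# 5.59-hour total daily average reported in the study.
total_daily_minutes = 5.59 * 60           # = 335.4 minutes per day
school_day_minutes = 92
share = school_day_minutes / total_daily_minutes
print(f"School-day share: {share:.0%}")   # matches the ~27% reported
```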

Social media and messaging dominate school-hour phone use, with Instagram leading social platforms. Instagram users in the study spent an average of about 25 minutes on the platform during school hours alone. Messaging and chat applications averaged 19.5 minutes of use during school hours, while video streaming services claimed about 17 minutes.

Looking at demographic patterns, older teens (ages 16-18) logged significantly more phone time during school hours compared to younger teens (ages 13-15), spending about 33 more minutes on their devices. Female students showed higher usage rates than male students, using their phones approximately 29 minutes more during school hours.

Parental attempts to limit screen time appeared to have little impact on school-hour phone use. Students with parental limits on screen time showed similar usage patterns to those without restrictions, suggesting that school-based interventions might be more effective than home-based rules.

Educational background of parents emerged as a significant factor. Students whose parents held bachelor’s degrees spent about 32 minutes less time on their phones during school hours compared to peers whose parents did not have college degrees. This correlation raises important questions about the role of family educational culture in shaping student technology habits.

The study, published in JAMA Pediatrics, also revealed interesting patterns among different demographic groups. Hispanic students showed significantly higher social media use during school hours compared to their white peers, spending about 25 more minutes on social platforms. Meanwhile, students identifying as LGBTQIA+ showed similar usage patterns to their non-LGBTQIA+ peers, with no statistically significant differences in overall phone use.

While smartphones offer potential benefits for learning and communication, these findings suggest their primary use during school hours may be misaligned with educational goals. More schools are expected to implement phone restrictions in the coming years, with research like this providing valuable data to inform those policy decisions.

Source : https://studyfinds.org/teens-spend-90-minutes-phones-during-school/

From A to Zzzs: The science behind a better night’s sleep

It’s no secret that a good night’s sleep plays a vital role in mental and physical health and well-being. The way you feel during your waking hours depends greatly on how you are sleeping, say sleep experts.

A pattern of getting inadequate or unsatisfying sleep over time can raise the risk for chronic health problems and can affect how well we think, react, work, learn and get along with others.

According to the National Heart, Lung and Blood Institute, an estimated 50 to 70 million Americans have sleep disorders, and one in three adults does not regularly get the recommended amount of uninterrupted sleep needed to protect their health.

Many factors play a role in preparing the body to fall asleep and wake up, according to the National Institutes of Health. Our internal “body clock” manages the sleep and waking cycles and runs on a 24-hour repeating rhythm, called the circadian rhythm. This rhythm is controlled both by the amount of a sleep-inducing compound called adenosine in our system and cues in our environment, such as light and darkness. This is why sleep experts suggest keeping your bedroom dark during your preferred sleeping hours.

Sleep is also controlled by two main hormones, melatonin and cortisol, which our bodies release in a daily rhythm that is controlled by the body clock.

Exposure to bright artificial light—such as from television, computer and phone screens—late in the evening can disrupt this process, making it hard to fall asleep, explained Sanjay Patel, director of the UPMC Comprehensive Sleep Disorders Clinical Program and a professor of medicine and epidemiology at the University of Pittsburgh.

Keeping our body clock and hormone levels more-or-less regulated are the best ways to consistently achieve good sleep, Patel said. He encouraged people with sleeping struggles to focus more on behavioral changes than seeking quick fixes, such as with over-the-counter sleep supplements like melatonin or by upping alcohol intake to feel drowsy.

Patel said there’s not much clinical evidence that melatonin supplements work very well, and that “a lot of the clinical trials of melatonin haven’t shown consistent evidence that it helps with insomnia.”

He did point out that the supplement isn’t particularly harmful either, except when “people start increasing and increasing the dose. And in particular, we worry about the high doses that a lot of children are being given by their parents, where it really can cause problems,” he said. Taking any more than three to five milligrams doesn’t increase the sedative effects, “and yet, we see people showing up to clinic all the time taking 20 milligrams.”

Sleeping potions

Many have suggested that warm milk, chamomile tea or tart cherry juice can induce a somniferous effect. While Patel said there’s no evidence they work, he did point out that they’re preferable to a nightcap.

“Alcohol is really bad for your sleep long term, for a number of reasons,” Patel said. First, alcohol can relax the throat muscles and can make sleep apnea and snoring worse for sufferers. Secondly, the body metabolizes alcohol rather quickly so its sedation effects do not last throughout the night.

“So while it may put you to sleep, what happens is, three or four hours later, the alcohol has been metabolized, and now you will wake up from not having alcohol in your system,” he said.

Evening libations can also increase acid reflux and long-term drinking can cause “changes in your brain chemistry and is a big cause of insomnia,” he said. Heavy drinkers who suffer from insomnia will often increase their intake of alcohol in an effort to fall asleep, thus creating a dangerous cycle that could lead to alcohol use disorder.

Cannabis is not much better, Patel said.

While a handful of pot users—specifically those who use it to treat anxiety—may see some sleep benefits, for the most part cannabis does not help chronic insomnia and can even make it worse.

“They actually see a lot of people whose sleep gets better when they stop using (cannabis),” Patel said.

Instead of turning to sleep aids—natural or otherwise—Patel said developing a bedtime routine that promotes relaxation and unwinding is a much better route to a good night’s rest.

Whether it’s taking a hot bath, reading a book, meditating or even tuning into the nightly news, the brain will associate an oft-repeated bedtime ritual with the relaxation required to fall asleep, he explained.

You can watch television, but stay off social media, he said. “The algorithms on social media are designed to keep us engaged and end up contributing to people not closing their eyes until much later than they planned.”

Other common reasons that sleep can be unsatisfying or elusive are stress, worry and the simple fact that many people don’t give themselves enough time for rest.

“We see all the time that people plan to go to bed at a certain time, but then once they get into bed, they do other things and keep their mind active,” such as responding to emails, paying bills or scrolling on social platforms.

Aging influence

The rhythm and timing of the body clock changes with age, Patel said.

People need more sleep early in life when they’re growing and developing. For example, newborns may sleep more than 16 hours a day, and preschool-age children need to take naps.

In the teen years, the internal clock shifts so that they fall asleep later in the night, but then want to sleep in late. This is troublesome for teens because “they need to be up for school at 6:30 a.m. and so that’s causing lots of problems,” Patel said.

Some school districts in the region, including Pittsburgh Public in 2023, have shifted to later start times with this in mind.

For adults, sleep during middle age can be tricky with young children in the home who disrupt parents’ sleeping patterns. This is also a time of life when stress and worry are heightened, he said.

Older adults tend to go to bed earlier and wake up earlier, but they’ve got their own unique challenges, Patel said.

“A lot of physical problems mean that people are often waking up more in the night as they age. They have to get up to go to the bathroom. They have chronic aches and pains that wake them up. They’re often taking medications that … have side effects that affect your sleep,” he said.

Source : https://medicalxpress.com/news/2025-02-zzzs-science-night.html

Vacation days are the key to well-being? Study explains important link

(© Monkey Business – stock.adobe.com)

If you’re like many Americans, you probably didn’t take all your vacation time this past year. Even if you did, chances are you didn’t fully unplug while away from the office. But according to new research from the University of Georgia, those vacation days aren’t just a nice perk—they’re crucial for your well-being.

The research, published in the Journal of Applied Psychology, analyzed 32 different studies across nine countries. Researchers discovered something surprising: vacation benefits last much longer than previously believed. While we’ve long known that vacations can improve well-being, this comprehensive review found these positive effects persist well after returning to work, challenging earlier beliefs that vacation benefits quickly disappear.

“We think working more is better, but we actually perform better by taking care of ourselves,” explains lead author Ryan Grant, a doctoral student in psychology at UGA’s Franklin College of Arts and Sciences, in a statement. “We need to break up these intense periods of work with intense periods of rest and recuperation.”

The catch? How you spend your vacation matters significantly. The research team found that truly disconnecting from work produced the greatest benefits. This means avoiding work emails, skipping those “quick check-ins” with the office, and genuinely allowing yourself to mentally detach from workplace responsibilities.

“If you’re not at work but you’re thinking about work on vacation, you might as well be at the office,” says Grant. “Vacations are one of the few opportunities we get to fully just disconnect from work.”

Physical activity emerged as another key factor in maximizing vacation benefits. But don’t worry, this doesn’t mean you need to run marathons during your beach trip.

“Basically anything that gets your heart rate up is a good option,” explains Grant. “Plus, a lot of physical activities you’re doing on vacation, like snorkeling, for example, are physical. So they’re giving you the physiological and mental health benefits. But they’re also unique opportunities for these really positive experiences that you probably don’t get in your everyday life.”

The length of your vacation also plays a crucial role. The study found that longer vacations generally led to greater improvements in well-being, though these effects also tended to decline more quickly upon return. The researchers recommend building in buffer days both before and after your trip. Taking time to pack and prepare reduces pre-vacation stress while having a day or two to readjust after returning can ease the transition back to work life.

Cultural differences revealed interesting patterns, too. In countries where work achievement and success are highly valued, people experience more dramatic benefits from vacation time, likely because they really need the break. However, they also show steeper declines in well-being when returning to work. Workers in countries with more mandatory vacation days tended to get more out of their time off, possibly because taking vacations is more normalized and accepted.

These findings arrive at a critical moment, as vacation usage has declined in recent decades. In 2018 alone, American workers left 768 million vacation days unused, surrendering approximately $65 billion in benefits. This trend persists despite mounting evidence that prolonged work without adequate breaks can lead to burnout, anxiety, depression, and even physical health problems.

Maybe we should all rethink how we view vacations. Rather than seeing them as optional luxuries, we should recognize them as essential tools for maintaining well-being and long-term productivity. Whether it’s a two-week adventure or a long weekend getaway, the key is to fully disconnect and engage in activities that provide both physical and mental benefits.

Source : https://studyfinds.org/vacation-days-long-term-health/

The secret to career success? It might be hidden in your free time

(© Drobot Dean – stock.adobe.com)

In an age of endless productivity hacks and work-life balance tips, new research offers a refreshing perspective: what if you could advance your career while actually enjoying your leisure time? A study suggests this elusive goal might be more achievable than previously thought, introducing a concept called “leisure-work synergizing” that could revolutionize how we think about professional development.

Conventional wisdom has long suggested that work and leisure should remain separate. Clock out, go home, and leave work behind. But researchers Kate Zipay from Purdue University and Jessica Rodell from the University of Georgia have uncovered evidence that thoughtfully blending certain work-related elements into leisure activities might actually enhance both professional growth and personal enjoyment.

The concept, published in Organization Science, goes beyond simply answering emails after hours or catching up on work during weekends.

“We found that employees who intentionally integrate professional growth into their free time – like listening to leadership podcasts, watching TED Talks or reading engaging business books – report feeling more confident, motivated and capable at work,” explains Zipay. This innovative approach allows people to develop professionally without sacrificing the fundamental pleasure of leisure time.

The Science Behind the Strategy

The research team tracked 89 professionals over five weeks, examining how their leisure choices influenced their work performance and emotional state. Participants completed surveys about their activities and experiences during evenings and weekends, followed by assessments of their workplace mindset and performance the next day.

What emerged was a clear pattern: when people engaged in leisure activities that had some connection to professional growth, they reported significantly higher levels of self-assurance, feeling more confident and capable at work. This boost in confidence translated into better overall workplace performance and satisfaction.

However, the research revealed an important caveat: personality matters. Not everyone benefits equally from blending work and leisure. The study identified two distinct types of people: “integrators” who naturally prefer fluid boundaries between work and personal life, and “segmenters” who thrive on keeping these domains separate.

“Employees who prefer a clear separation between work and personal life might struggle with this approach,” notes Zipay, “highlighting the importance of tailoring the practice to individual preferences.”

For integrators, leisure-work synergizing proved particularly beneficial, actually reducing fatigue rather than adding to it. Meanwhile, segmenters showed less positive results from the practice, suggesting that forcing this approach when it doesn’t align with personal preferences could be counterproductive.

‘Done right, it’s a game-changer’

This research arrives at a crucial moment when traditional boundaries between work and personal life continue to blur, especially in the wake of remote work trends. Rather than fighting against this evolution, the study suggests we might benefit from being more strategic about it.

“This isn’t about making your free time feel like work,” emphasizes Zipay. “It’s about leveraging activities you already love in a way that fuels your professional growth. Done right, it’s a game-changer for employees and employers alike.”

Look for those natural overlaps where professional growth can occur alongside genuine enjoyment. For instance, the explosive growth of platforms like MasterClass and the surging popularity of business and personal development podcasts suggest many people already naturally gravitate toward this kind of enriching leisure activity.

For organizations and employees alike, these findings open up new possibilities for professional development. Instead of relying solely on traditional training programs or expecting employees to sacrifice personal time for growth, companies might benefit from supporting more flexible and integrated approaches to skill development.

Rather than choosing between career advancement and personal enjoyment, careful integration of the two might offer the best of both worlds, proving that sometimes you really can have your cake and eat it too.

Source : https://studyfinds.org/secret-to-career-success-free-time/

Why being a ‘bingo night’ regular could buy your brain an extra 5 years

(© Monkey Business – stock.adobe.com)

Going out to restaurants, playing bingo, visiting friends, or attending religious services could give you extra years of healthy brain function, according to new research from Rush University Medical Center. Their study found that older adults who stayed socially active typically developed dementia five years later than those who were less social. It’s a difference that could both extend life and save hundreds of thousands in healthcare costs.

“This study shows that social activity is related to less cognitive decline in older adults,” said Bryan James, PhD, associate professor of internal medicine at Rush, in a statement. “The least socially active older adults developed dementia an average of five years before the most socially active.”

The research team followed 1,923 older adults who were initially dementia-free, checking in with them yearly to track their social activities and cognitive health. They looked at six everyday social activities: going out to restaurants, sporting events or bingo games; taking trips; doing volunteer work; visiting relatives or friends; participating in groups; and attending religious services.

Over nearly seven years of follow-up, 545 participants developed dementia, while 695 developed mild cognitive impairment (MCI), which often precedes dementia. After accounting for factors like age, education, gender, and marital status, the researchers found that each one-point increase in social activity score was linked to a 38% lower chance of developing dementia.

Being social seems to help the brain in several ways. When we engage socially, we exercise the parts of our brain involved in memory and thinking. “Social activity challenges older adults to participate in complex interpersonal exchanges, which could promote or maintain efficient neural networks in a case of ‘use it or lose it,’” explains James.

The benefits of social activity appear to work independently of other social factors, like how many friends someone has or how supported they feel. This suggests that simply getting out and doing things with others could be more important than the size of your social circle.

The research takes on new urgency following the COVID-19 pandemic, which left many older adults isolated. The findings suggest that communities might benefit from creating more opportunities for older adults to engage socially, whether through organized activities, volunteer programs, or regular social gatherings.

Source: https://studyfinds.org/social-seniors-five-years-dementia/

The bitter truth: Science reveals why coffee tastes different to everyone

What affects coffee’s bitterness more: roasting techniques or your predisposed genetics? (Photo by Mix and Match Studio on Shutterstock)

Next time you take a sip of coffee and scrunch your nose at its bitter taste, your DNA might be to blame. New research from scientists in Germany has uncovered fascinating insights into why Arabica coffee’s signature bitterness varies from person to person, and it’s not just about how dark the roast is.

The study, published in Food Chemistry, was conducted at the Technical University of Munich’s Leibniz Institute for Food Systems Biology. Researchers have identified a new group of bitter compounds formed during coffee roasting.

“Indeed, previous studies have identified various compound classes that contribute to bitterness. During my doctoral thesis, I have now identified and thoroughly analyzed another class of previously unknown roasting substances,” says study author Coline Bichlmaier, a doctoral student, in a statement.

While caffeine has long been known as coffee’s primary bitter component, even decaffeinated coffee tastes bitter, indicating other compounds are at work. At the heart of this bitter business is a compound called mozambioside, found naturally in raw coffee beans. It’s about ten times more bitter than caffeine and particularly abundant in naturally caffeine-free coffee varieties. However, this may not be at the root of that bitter taste.

“Our investigations showed that the concentration of mozambioside decreases significantly during roasting so that it only makes a small contribution to the bitterness of coffee,” says principal investigator Roman Lang.

Through detailed chemical analysis, researchers tracked mozambioside as coffee beans roasted. They found it breaks down into seven specific compounds, each contributing its own bitter properties. Using ultra-high-performance liquid chromatography and mass spectrometry, essentially very precise chemical detection methods, they measured exactly how much of each compound forms during roasting and transfers into your cup.

When studying Colombian Arabica coffee specifically, they found that not everyone experiences these bitter compounds the same way. A specific gene called TAS2R43, which codes for one of our approximately 25 bitter taste receptors, plays a crucial role. About 20% of Europeans have a deletion in this gene, meaning they’re missing that particular bitter taste receptor entirely.

In standardized taste tests with 11 volunteers, researchers analyzed each participant’s DNA using saliva samples to determine their TAS2R43 gene status. Their genetic test revealed that two participants had both copies of the TAS2R43 gene variant defective, seven had one intact and one defective variant, and only two people had both copies fully intact.

The results revealed striking differences in bitter perception based on genetics. When combining mozambioside with its roasting products in a sample, eight out of eleven test subjects perceived a bitter taste, one found it astringent, and two didn’t notice any particular taste.

During roasting experiments at different temperatures, researchers discovered that some bitter compounds peaked at 240°C, while others continued increasing up to 260°C. These findings join our existing knowledge about other bitter-tasting substances formed during roasting, including compounds called caffeoylquinides (from chlorogenic acids), diketopiperazines (from coffee proteins), and oligomers of 4-vinylcatechols (from caffeic acids).

Bitter taste receptors aren’t only found in our mouths. They exist throughout the body in various organs and tissues. Studies indicate they help fight pathogens in our respiratory tract, assist with defense mechanisms in our intestines and blood cells, and may play a role in metabolism regulation.

“The new findings deepen our understanding of how the roasting process influences the flavor of coffee and open up new possibilities for developing coffee varieties with coordinated flavor profiles,” says Lang. “They are also an important milestone in flavor research, but also in health research. Bitter substances and their receptors have further physiological functions in the body, most of which are still unknown.”

With global production reaching 102.2 million 60-kilogram bags of Arabica coffee in 2023/24, understanding these bitter compounds and how we perceive them carries real weight. For coffee lovers and producers alike, this research provides scientific validation for something many have long suspected: we really do experience coffee differently from one another, and it’s written in our genes.

Source : https://studyfinds.org/why-coffee-tastes-different-to-everyone/

Mice created from two biological fathers are first to live into adulthood

Lab mouse unrelated to study. (© filin174 – stock.adobe.com)

The idea of same-sex biological reproduction in mammals has long been thought impossible, like trying to build a house with only half the blueprint. But researchers in China have achieved what many believed couldn’t be done: they’ve created viable mice that lived until adulthood using genetic material from two fathers, unlocking new possibilities in reproductive science.

This landmark achievement, published in Cell Stem Cell, represents a significant advance in reproductive biology and opens new possibilities in regenerative medicine.

The team led by researchers at the Chinese Academy of Sciences (CAS) in Beijing successfully modified specific genetic regions in mouse embryonic stem cells to overcome what scientists have long considered a fundamental barrier to same-sex reproduction in mammals. Previous attempts to create bi-paternal mice had failed, with embryos stalling at early developmental stages. However, this new approach, targeting 20 key genetic locations, enabled the first-ever successful development of bi-paternal mice to adulthood.

“The unique characteristics of imprinting genes have led scientists to believe that they are a fundamental barrier to unisexual reproduction in mammals,” explains Qi Zhou, a co-corresponding author from CAS. “Even when constructing bi-maternal or bi-paternal embryos artificially, they fail to develop properly, and they stall at some point during development due to these genes.”

Previous scientists tried a different strategy to create mice with two fathers. They first attempted to create egg cells in the lab using special cells from male mice that can transform into any type of cell, like blank building blocks that can become whatever the body needs. The idea was to then fertilize these lab-created eggs with sperm from another male mouse. However, this approach didn’t work because when genetic material from two males was combined in this way, it created problems in how genes functioned.

The new research team took a completely different approach. Instead of trying to create eggs, they focused on carefully editing specific parts of the genetic code. They used various techniques to change, remove, or adjust specific genes that control how genetic material from parents normally works together. This new method not only allowed them to successfully create mice with two fathers, but it also resulted in especially versatile stem cells.

“This work will help to address a number of limitations in stem cell and regenerative medicine research,” says Wei Li, the study’s corresponding author from CAS. The implications extend beyond reproductive biology, potentially advancing our understanding of cellular development and regenerative medicine applications.

One of the most fascinating aspects of this research was the creation of functional placentas from bi-paternal embryos. The placenta, an organ crucial for mammalian development, typically requires precise genetic contributions from both parents. The team’s success in creating functional bi-paternal placentas represents a significant breakthrough in understanding reproductive biology.

The surviving bi-paternal mice showed intriguing characteristics. They grew faster than normal mice and displayed reduced anxiety-like behaviors in behavioral tests. However, they also had shorter lifespans, living only about 60% as long as typical mice. These differences provide valuable insights into how parental genes influence development and aging.

“These findings provide strong evidence that imprinting abnormalities are the main barrier to mammalian unisexual reproduction,” notes Guan-Zheng Luo of Sun Yat-sen University in Guangzhou. “This approach can significantly improve the developmental outcomes of embryonic stem cells and cloned animals, paving a promising path for the advancement of regenerative medicine.”

The research team plans to extend their experimental approaches to larger animals, including monkeys, though they acknowledge this will require considerable effort due to different imprinting gene combinations across species.

The potential application to human medicine remains unclear, particularly given current ethical guidelines. The International Society for Stem Cell Research explicitly prohibits heritable genome editing for reproductive purposes and the use of human stem cell-derived gametes for reproduction, citing safety concerns.

Source : https://studyfinds.org/scientists-create-healthy-mice-with-two-biological-fathers/

Gen Z Are More Anxious Than Any Other Generation

Gen Z students are experiencing poor mental health and a lack of hope for the future. (To be honest, I think most generations are).

According to professors who teach Gen Zers, the generation appears even more anxious than their Millennial counterparts and has completely lost hope in the American Dream. Gen Z also reports the poorest mental health of any generation, and only 44 percent of Gen Zers say they feel prepared for the future.

“The biggest change that I’ve seen is they have this fear of failure or making the wrong decision, and I think it’s because they just don’t want to go through more mental anguish,” Matt Prince, an adjunct professor at Chapman University, told Fortune.

Prince added that Gen Z seems to have a “huge weight on their shoulders.”

Millennials have a long-held reputation for being lazy complainers who spend too much money on avocado toast. For a while, that’s why we couldn’t afford homes… it had nothing to do with the insane housing market, of course. Gen Zers tend to wear similar labels, receiving criticism for things like being “chronically online” and “easily offended.”

Gen Z doesn’t believe in the American Dream

But let’s be honest: all that time spent on social media (and from such a young age) can’t be healthy for your brain.

Alyssa Mancao, a Los Angeles therapist with a Gen Z client base, told Axios that because this generation grew up with social media, they tend to compare themselves more to others, which oftentimes naturally leads to feelings of inadequacy.

Plus, given the state of the world right now, I empathize with Gen Zers who are trying to make a name for themselves in this economy.

It’s also not shocking that so many Gen Zers are losing faith in the American Dream. I mean, it’s hard to imagine living a comfortable life filled with love, family, and freedom when many of us work long hours and still can’t afford groceries or a home.

“I think there is just an overarching fear of failure or making mistakes or making the wrong turn in their career trajectory that would emotionally or physically set them back years,” Prince told Fortune. “And so I think that anguish is just an anchor that’s holding them back.”

Another California-based therapist, Erica Basso, pointed out that there’s a ton of uncertainty plaguing Gen Zers today.

Source : https://www.vice.com/en/article/gen-z-are-more-anxious-than-any-other-generation/

Beauty bias? Attractive people land better jobs, higher salaries

(© deagreez – stock.adobe.com)

Think your next promotion depends purely on your skills and experience? A recent study suggests your appearance might matter more than you’d expect. Research looking at over 43,000 business school graduates found that attractive professionals earn thousands more each year than their equally qualified colleagues — and this advantage grows stronger over time.

The study, conducted by researchers at the University of Southern California and Carnegie Mellon University, tracked MBA graduates for 15 years after they left business school. What they discovered was eye-opening: People rated as attractive were 52.4% more likely to land prestigious positions, leading to an average bump in salary of $2,508 per year. For the most attractive individuals — those in the top 10% — that yearly advantage jumped to over $5,500.

This advantage, which researchers call the “attractiveness premium,” shows up across different industries but not always in the same way. Fields involving lots of face-time with clients and colleagues, like management and consulting, showed the biggest benefits for attractive individuals. Meanwhile, technical roles like IT and engineering, where work often happens behind the scenes, showed much smaller effects.

This disparity may explain why attractive professionals tend to gravitate away from technical fields and toward management positions, a phenomenon the researchers termed “horizontal sorting.”

Even more remarkable was the “extreme attractiveness premium.” Individuals in the top 10% of the attractiveness scale enjoyed an 11% advantage in career outcomes compared to those in the bottom 10%.

What makes these findings particularly noteworthy is that the benefits of being attractive don’t fade over time, even after people have proven their abilities. Each year, attractive professionals gained a small but consistent advantage over their peers, which added up significantly over the course of their careers. For perspective, the salary difference linked to attractiveness was about one-third the size of the gender pay gap among the same group of graduates.

“This study shows how appearance shapes not just the start of a career, but its trajectory over decades,” explains Nikhil Malik, who led the study at USC, in a statement. “These findings reveal a persistent and compounding effect of beauty in professional settings.”

To reach these conclusions, the researchers used advanced computer programs to analyze professional profile pictures and career progression data. Unlike previous studies that only looked at short-term effects or specific jobs, this research followed real careers across many industries and positions over 15 years.

The attractiveness premium was particularly evident among graduates from top-tier MBA programs, where competition for advancement is especially intense. In these high-stakes environments, where candidates already possess strong qualifications, appearance appeared to play a notable role in determining who reached senior leadership positions.

“It’s a stark reminder that success is influenced not just by skills and qualifications but also by societal perceptions of beauty,” observes Kannan Srinivasan, another researcher from Carnegie Mellon University.

The findings raise important questions about fairness in the workplace. While many companies now offer training to address unconscious bias related to gender and race, appearance-based advantages may be harder to tackle. These biases often operate through subtle social preferences rather than obvious discrimination.

“This research underscores how biases tied to physical appearance persist in shaping career outcomes, even for highly educated professionals,” notes Param Vir Singh, one of the study’s co-authors from Carnegie Mellon University.

Creating more equitable workplaces has been a top priority for corporations in recent years, yet these findings suggest that appearance-based advantages may require new approaches to workplace policy and practice. The persistent nature of the attractiveness premium indicates that simple awareness or training programs may be insufficient to address this form of bias.

Source : https://studyfinds.org/beauty-bias-attractive-people-better-jobs-salary/

Aging ‘hot spot’: Where the brain first starts showing signs of getting older

(© vegefox.com – stock.adobe.com)

What if we could pinpoint exactly where aging begins in the brain? Scientists at the Allen Institute have done just that, creating the first detailed cellular atlas of brain aging by analyzing millions of individual cells and identifying key regions where age-related changes first emerge.

The brain is like a massive city with thousands of different neighborhoods, each populated by unique types of cells performing specific jobs. Until now, researchers haven’t had a detailed “census” showing how each neighborhood changes as the city ages. This study, published in Nature, provides exactly that, examining cells from young adult mice (2 months old) and aged mice (18 months old). While mice age differently than humans, this comparison roughly mirrors the differences between young adult and older adult human brains.

Researchers analyzed 16 different brain regions, covering about 35% of the mouse brain’s total volume. They identified 847 distinct types of cells and discovered that certain cell populations, particularly support cells called glia, were especially sensitive to aging. They found significant changes around the third ventricle in the hypothalamus, which is the brain’s master control center that regulates essential functions like hunger, body temperature, sleep, and hormone production.

As the brain ages, it shows increased immune activity across various cell types. The researchers observed this particularly in microglia, specialized cells that act as the brain’s maintenance and immune defense system, and in border-associated macrophages, another type of immune cell. These cells showed signs of increased inflammatory activity in aged mice, suggesting they were working harder to maintain brain health.

The research team discovered fascinating changes in specialized cells called tanycytes and ependymal cells that line fluid-filled chambers in the brain, particularly around the third ventricle.

“Our hypothesis is that those cell types are getting less efficient at integrating signals from our environment or from things that we’re consuming,” says lead author Kelly Jin, Ph.D., in a statement. This inefficiency might contribute to broader aging effects throughout the body.

The study revealed changes in cells that produce myelin, the crucial insulating material around nerve fibers. Like the protective coating around electrical wires, myelin helps neurons communicate effectively. The researchers found that aging affects these insulator-producing cells, which could impact how well brain circuits function.

Most intriguingly, the researchers identified specific groups of neurons in the hypothalamus that showed dramatic changes with age. These neurons, which help control appetite, metabolism, and energy use throughout the body, showed signs of both decreased function and increased immune activity. This finding aligns with previous research suggesting that dietary factors, like intermittent fasting or calorie restriction, might influence lifespan.

“Aging is the most important risk factor for Alzheimer’s disease and many other devastating brain disorders. These results provide a highly detailed map for which brain cells may be most affected by aging,” says Dr. Richard J. Hodes, director of NIH’s National Institute on Aging.

While this research was conducted in mice, the findings provide a crucial roadmap for understanding human brain aging. The identification of specific vulnerable cell types and regions gives scientists clear targets for future development of therapies to maintain brain health throughout life.

Source : https://studyfinds.org/where-your-brain-first-starts-aging/

Smartphone use leads to hallucinations, detachment from reality, aggression in teens as young as 13: Study

Smartphones are making teenagers more aggressive, detached from reality and causing them to hallucinate, according to new research.

After surveying 10,500 teens between 13 and 17 in both the US and India, the scientists behind the Sapien Labs study concluded that the younger a person starts using a phone, the more likely they are to be crippled by a whole host of psychological ills.

“People don’t fully appreciate that hyper-real and hyper-immersive screen experiences can blur reality at key stages of development,” addiction psychologist Dr. Nicholas Kardaras, who was not part of the team who did the study, told The Post.

More than a third of 13-year-olds surveyed said they feel aggression, while a fifth experience hallucinations, the survey by Sapien Labs showed.

“Their digital world can compromise their ability to distinguish between what’s real and what’s not. A hallucination by any other name.

“Screen time essentially acts as a toxin that stunts both brain development and social development,” Kardaras explained. “The younger a kid is when given a device, the higher the likelihood of mental health issues later on.”

The teens surveyed for “The Youth Mind: Rising Aggression and Anger” were significantly worse off than older Gen Zers in Sapien Labs’ database, and the youngest respondents were more likely to suffer aggression, anger, and hallucinations than their older counterparts.

A staggering 37% of 13-year-olds reported experiencing aggression, compared with 27% of 17-year-olds.

Frighteningly, 20% of 13-year-olds say they suffer from hallucinations, compared to 12% of 17-year-olds.

“Whereas today’s 17-year-olds typically got a phone at age 11 or 12, today’s 13-year-olds got their phones at age 10,” the report noted.

Respondents also reported that they could pose a danger to themselves: 42% of American girls and 27% of boys aged 13 to 17 admitted to struggling with suicidal thoughts.

The majority of teens polled said they had feelings of hopelessness, guilt, anxiety, and unwanted strange thoughts. More than 40% reported a sense of detachment from reality, mood swings, withdrawal, and traumatic flashbacks.

The researchers also warned phones are making kids withdraw from society.

“Once you have a phone, you spend a lot less time with in-person interaction, and the less you have in-person interaction, the less integrated you are into the real social fabric,” Sapien Labs chief scientist Tara Thiagarajan told The Post.

“You’re no longer connected in the way humans have been wired for hundreds of thousands of years.”

Kardaras, author of “Glow Kids”, also wasn’t surprised aggression was associated with phone use.

He runs Omega Recovery tech addiction recovery center in Austin, where teens are often admitted after violently attacking their parents for taking their phones away.

Kids around the country have also been assaulting their teachers at school after having their devices confiscated, with one Tennessee teacher even pepper-sprayed by a female student after he took her cell phone.

The CDC also warned in 2023 that teen girls are at risk of increased violence, often at the hands of one another. Sapien Labs also flagged that the uptick in aggression is disproportionately occurring among females, according to their research.

“There’s a fairly rapid rise now in kids experiencing actual violence in school, and kids are fearing for their safety,” Thiagarajan said. “That is something that everyone should sit up and take note of.”

She pointed to a December school shooting in Wisconsin that was anomalously carried out by a teen girl. It had been 45 years since a female juvenile had perpetrated a school shooting.

That shooter, Natalie “Samantha” Rupnow, 15, was known to have spent a great deal of her life online and had exhibited extremist views on the internet, but authorities are still looking for a motive for her shooting, after which she turned her gun on herself.

Source : https://nypost.com/2025/01/23/lifestyle/smartphone-use-leads-to-hallucinations-aggression-in-teens-study/

Adults with ADHD die 7 to 9 years sooner, alarming study shows

(© Rainer Hendla | Dreamstime.com)

Seven years. That’s how much sooner men with ADHD are dying compared to their neurotypical peers, and for women, the outlook is even bleaker at nearly nine years. These sobering numbers emerge from a new study examining life expectancy in adults with ADHD, painting a picture far more serious than the familiar narrative of forgotten appointments and misplaced keys.

The research, published in The British Journal of Psychiatry, analyzed data from nearly 10 million people across UK general practices, identifying over 30,000 adults with diagnosed ADHD. This represents just one in nine of the estimated ADHD population, as most cases remain undiagnosed.

“It is deeply concerning that some adults with diagnosed ADHD are living shorter lives than they should,” says Professor Josh Stott, senior author from University College London Psychology & Language Sciences, in a statement. “People with ADHD have many strengths and can thrive with the right support and treatment. However, they often lack support and are more likely to experience stressful life events and social exclusion, negatively impacting their health and self-esteem.”

Living with ADHD extends far beyond difficulties with focus and organization. People with the condition often experience differences in how they focus their attention. While they may possess high energy and an ability to focus intensely on their interests, they frequently struggle with mundane tasks. This can lead to challenges with impulsiveness, restlessness, and differences in planning and time management, potentially impacting success at school and work.

The study revealed that adults with ADHD were more likely to develop physical health conditions like diabetes, heart disease, chronic respiratory problems, and epilepsy. Mental health challenges were particularly prevalent: anxiety, depression, and self-harm occurred at notably higher rates than in the general population.

Treatment access remains a critical issue. A national survey found that while a third of adults with ADHD traits received medication or counseling for mental health issues (compared to 11% without ADHD), nearly 8% reported being denied requested mental health treatment. That rate is eight times higher than for those without ADHD.

“Only a small percentage of adults with ADHD have been diagnosed, meaning this study covers just a segment of the entire community,” explains lead author Dr. Liz O’Nions. “More of those who are diagnosed may have additional health problems compared to the average person with ADHD. Therefore, our research may over-estimate the life expectancy gap for people with ADHD overall, though more community-based research is needed to test whether this is the case.”

The research carries particular weight because it drew from the UK’s primary care system, where almost everyone is registered. This comprehensive dataset allowed researchers to track real health outcomes rather than relying on self-reported information or smaller samples.

The gender disparity, with women losing even more years of life than men, raises important questions about how ADHD manifests and is treated across genders. Historically, the condition has been better recognized in males, potentially leaving many women undiagnosed until later in life, if at all.

“Although many people with ADHD live long and healthy lives,” Dr. O’Nions notes, “our finding that on average they are living shorter lives than they should indicates unmet support needs. It is crucial that we find out the reasons behind premature deaths so we can develop strategies to prevent these in future.”

These findings demand immediate attention from healthcare providers and policymakers. Treatment and support for ADHD is associated with better outcomes, including reduced mental health problems and substance use.

The numbers speak for themselves: 7 years, 9 years, 3% of adults affected. But behind these statistics are real lives being cut short by a condition that’s often dismissed as a simple attention problem. This study doesn’t just highlight a gap in life expectancy, it exposes a gap in our understanding of what ADHD truly means for those living with it.

Source : https://studyfinds.org/adults-with-adhd-die-7-to-9-years-sooner-alarming-study-shows/

Why camel’s milk will be the next big immune-boosting dairy alternative

Camel milk may be better for our immune health than cow’s milk. (Leo Morgan/Shutterstock)

Move over almond milk. There’s a new dairy alternative in town, and it comes from camels. While that might sound strange to Western ears, new research from Edith Cowan University (ECU) in Australia suggests camel milk could offer some impressive health benefits, especially for our immune systems.

The study, published in Food Chemistry, offers an in-depth comparison of cow and camel milk, focusing particularly on proteins that affect immune function and digestion. While cow’s milk dominates global dairy production at over 81%, camel’s milk currently accounts for just 0.4% of global milk production. Unlike cow’s milk, it contains distinctive proteins that could make it especially valuable for immune system support and gut health.

When examining the cream portion of both milk types, scientists identified 1,143 proteins in camel milk compared to 851 in cow’s milk. The cream fraction proved particularly rich in immune system-supporting proteins and bioactive peptides that can help fight harmful bacteria and potentially protect against certain diseases. However, researchers emphasize that further testing is needed to confirm their potency.

“These bioactive peptides can selectively inhibit certain pathogens, and by doing so, create a healthy gut environment and also has the potential to decrease the risk of developing cardiovascular disease in future,” explains study researcher Manujaya Jayamanna Mohittige, a Ph.D. student at ECU, in a statement.

For people who struggle with dairy sensitivities, the study confirms that camel milk naturally lacks beta-lactoglobulin, the primary protein that triggers allergic reactions to cow’s milk. Additionally, camel milk contains lower lactose levels than cow’s milk, potentially making it easier to digest for some individuals.

Composition-wise, camel milk is slightly different from cow’s milk. Cow’s milk typically contains 85-87% water, with 3.8-5.5% fat, 2.9-3.5% protein, and 4.6% lactose. Camel milk, meanwhile, consists of 87-90% water, with protein content varying from 2.15-4.90%, fat ranging from 1.2-4.5%, and lactose levels between 3.5-4.5%.

Camel milk production currently ranks fifth globally behind cow, buffalo, goat, and sheep milk. Given Australia’s semi-arid climate and existing camel population, scaling up production is an increasingly viable option.

“Camel milk is gaining global attention, in part because of environmental conditions. Arid or semi-arid areas can be challenging for traditional cattle farming, but perfect for camels,” adds Mohittige.

However, there are practical challenges to overcome. While dairy cows can produce up to 28 liters of milk daily, camels typically yield only about 5 liters. Several camel dairies already operate in Australia, but their production volumes remain relatively low.

This doesn’t mean everyone should rush out and switch to camel milk. It’s still relatively hard to find in many places and typically costs more than cow’s milk. But for people looking for alternatives to traditional dairy, especially those with certain milk sensitivities, camel milk might offer an interesting option.

While camel milk may not appear in your local supermarket just yet, this research reveals why it deserves attention beyond its novelty value. Its unique protein profile and immune-supporting properties may help explain why this unconventional dairy source has persisted for millennia in cultures worldwide.

Source : https://studyfinds.org/camel-milk-immune-boosting-alternative/

Gender shock: Study reveals men, not women, make more emotional money choices

(Credit: © Yuri Arcurs | Dreamstime.com)

When it comes to making financial decisions, conventional wisdom suggests keeping emotions out of the equation. But new research reveals that men, contrary to traditional gender stereotypes, may be significantly more susceptible to letting emotions influence their financial choices than women.

A study led by the University of Essex challenges long-held assumptions about gender and emotional decision-making. The research explores how emotions generated in one context can influence decisions in completely unrelated situations – a phenomenon known as the emotional carryover effect.

“These results challenge the long-held stereotype that women are more emotional and open new avenues for understanding how emotions influence decision-making across genders,” explains lead researcher Dr. Nikhil Masters from Essex’s Department of Economics.

Working with colleagues from the Universities of Bournemouth and Nottingham, Masters designed an innovative experiment comparing how different types of emotional stimuli affect people’s willingness to take financial risks. They contrasted a traditional laboratory approach targeting a single emotion (fear) with a more naturalistic stimulus based on real-world events that could trigger multiple emotional responses.

The researchers recruited 186 university students (100 women and 86 men) and randomly assigned them to one of three groups. One group watched a neutral nature documentary about the Great Barrier Reef. Another group viewed a classic fear-inducing clip from the movie “The Shining,” showing a boy searching for his mother in an empty corridor with tense background music. The third group watched actual news footage about the BSE crisis (commonly known as “mad cow disease”) from the 1990s, a real food safety scare that generated widespread public anxiety.

After watching their assigned videos, participants completed decision-making tasks involving both risky and ambiguous financial choices using real money. In the risky scenario, they had to decide between taking guaranteed amounts of money or gambling on a lottery with known 50-50 odds. The ambiguous scenario was similar, but participants weren’t told the odds of winning.

The results revealed striking gender differences. Men who watched either the horror movie clip or the BSE footage subsequently made more conservative financial choices compared to those who watched the neutral nature video. This effect was particularly pronounced for those who saw the BSE news footage, and even stronger when the odds were ambiguous rather than clearly defined.

Perhaps most surprisingly, women’s financial decisions remained remarkably consistent regardless of which video they watched. The researchers found that while women reported experiencing similar emotional responses to the videos as men did, these emotions didn’t carry over to influence their subsequent financial choices.

The study challenges previous assumptions about how specific emotions like fear influence risk-taking behavior. While earlier studies suggested that fear directly leads to more cautious decision-making, this new research indicates the relationship may be more complex. Even when the horror movie clip successfully induced fear in participants, individual variations in reported fear levels didn’t correlate with their financial choices.

Instead, the researchers discovered that changes in positive emotions may play a more important role than previously thought. When positive emotions decreased after watching either the horror clip or BSE footage, male participants became more risk-averse in their financial decisions.

The study also demonstrated that emotional effects on decision-making can be even stronger when using realistic stimuli that generate multiple emotions simultaneously, compared to artificial laboratory conditions designed to induce a single emotion. This suggests that real-world emotional experiences may have more powerful influences on our financial choices than controlled laboratory studies have indicated.

The research team is now investigating why only men appear to be affected by these carryover effects. “Previous research has shown that emotional intelligence helps people to manage their emotions more effectively. Since women generally score higher on emotional intelligence tests, this could explain the big differences we see between men and women,” explains Dr. Masters.

These findings could have significant implications for understanding how major news events or crises might affect financial markets differently across gender lines. They also suggest the potential value of implementing “cooling-off” periods for important financial decisions, particularly after exposure to emotionally charged events or information.

“We don’t make choices in a vacuum and a cooling-off period might be crucial after encountering emotionally charged situations,” says Dr. Masters, “especially for life-changing financial commitments like buying a home or large investments.”

Source : https://studyfinds.org/study-men-not-women-make-more-emotional-money-choices/

Having a bigger waist could help some diabetes patients live longer

(© spaskov – stock.adobe.com)

Most health professionals would likely raise an eyebrow at the suggestion that a larger waist circumference might benefit some diabetes patients. Yet that’s exactly what researchers discovered when they analyzed survival rates among more than 6,600 American adults with diabetes, finding that the relationship between waist size and mortality follows unexpected patterns that vary significantly between men and women.

Medical professionals have long preached the dangers of excess belly fat, particularly for people with diabetes. However, the new analysis shows that the relationship between waist circumference and mortality follows distinct U-shaped and J-shaped patterns for women and men, respectively, suggesting that both too little and too much belly fat could be problematic.

Researchers from Northern Jiangsu People’s Hospital in China analyzed data from the National Health and Nutrition Examination Survey (NHANES), a massive health study of Americans conducted between 2003 and 2018. They tracked the survival outcomes of 3,151 women and 3,473 men with diabetes, following them for roughly six to seven years on average.

The findings challenge conventional wisdom: women with diabetes actually showed the lowest mortality risk when their waist circumference hit 107 centimeters (about 42 inches), well above what’s typically considered healthy. For men, the sweet spot was 89 centimeters (around 35 inches), closer to traditional recommendations but still surprising in its implications.

The relationship manifested differently between the sexes. For women, the association between waist size and mortality formed a U-shaped curve – meaning death rates were higher among those with both smaller and larger waists than the optimal point. Men showed a J-shaped pattern, with mortality risk rising more steeply as waist sizes increased beyond the optimal point.

This phenomenon, dubbed the “obesity paradox,” isn’t entirely new to medical research. Similar patterns have been observed with body mass index (BMI) in various populations. However, this study is among the first to demonstrate it specifically with waist circumference in people with diabetes.

The findings were consistent across different causes of death. Whether looking at overall mortality or deaths specifically from cardiovascular disease, the patterns held steady. For every centimeter increase in waist size below the optimal point, women saw their mortality risk decrease by 3%, while men saw a 6% reduction. Above these thresholds, each additional centimeter increased mortality risk by 4% in women and 3% in men.
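The per-centimeter figures above can be turned into a rough back-of-the-envelope model. The sketch below is illustrative only: it compounds the article's reported effect sizes (3% and 4% per centimeter for women, 6% and 3% for men, around optima of 107 cm and 89 cm) as simple multiplicative factors; the function name and piecewise form are my own, not the study's fitted curves.

```python
# Illustrative piecewise model built from the effect sizes quoted above.
# NOT the authors' statistical model -- just compounding their per-cm figures.

def relative_mortality_risk(waist_cm: float, sex: str) -> float:
    """Approximate mortality risk relative to the optimal waist size."""
    if sex == "female":
        optimum, below, above = 107.0, 0.03, 0.04  # cm, per-cm effects
    elif sex == "male":
        optimum, below, above = 89.0, 0.06, 0.03
    else:
        raise ValueError("sex must be 'female' or 'male'")
    diff = waist_cm - optimum
    if diff < 0:
        # Below the optimum, each extra cm *lowered* risk by `below`,
        # so being diff cm under the optimum compounds risk upward.
        return (1 - below) ** diff
    # Above the optimum, each additional cm raised risk by `above`.
    return (1 + above) ** diff

print(relative_mortality_risk(107.0, "female"))  # 1.0 at the optimum
print(relative_mortality_risk(97.0, "female"))   # elevated risk below the optimum
```

Risk is 1.0 at the optimum for either sex and rises in both directions, steeper on the low side for men, matching the J-shape described above.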

What makes these findings particularly intriguing is their persistence even after researchers accounted for numerous other factors that could influence survival, including age, education, ethnicity, smoking status, drinking habits, physical activity, and various health conditions.

While these findings might seem counterintuitive, they align with a growing body of research suggesting that optimal health parameters might vary more widely between individuals than previously thought. For diabetes patients and their healthcare providers, this study offers compelling evidence that when it comes to waist circumference, the relationship with survival is more complex than simply “less is more.”

Source : https://studyfinds.org/bigger-waist-helps-diabetes-patients-live-longer/

Ancient tooth enamel shatters long-held beliefs about early human diet

Model depicting Australopithecus afarensis. (Credit: © Procyab | Dreamstime.com)

Breaking new ground in our understanding of early human diet and evolution, scientists have discovered that our ancient relatives may not have been the avid meat-eaters previously believed. Research reveals that Australopithecus, one of humanity’s earliest ancestors who lived in South Africa between 3.7 and 3.3 million years ago, primarily maintained a plant-based diet rather than regularly consuming meat.

Scientists have long debated when our ancestors began regularly consuming meat, as this dietary shift has been linked to several crucial evolutionary developments, including increased brain size and reduced gut size. Many researchers believed meat-eating began with early human ancestors like Australopithecus, partly because stone tools and cut marks on animal bones have been found dating back to this period.

“Tooth enamel is the hardest tissue of the mammalian body and can preserve the isotopic fingerprint of an animal’s diet for millions of years,” says geochemist Tina Lüdecke, the study’s lead author, in a statement. As head of the Emmy-Noether Junior Research Group for Hominin Meat Consumption at the Max Planck Institute for Chemistry and Honorary Research Fellow at the University of the Witwatersrand, Lüdecke regularly travels to Africa to collect fossilized teeth samples for analysis.

When living things digest food and process nutrients, they create a kind of chemical signature involving different forms of nitrogen. Think of it like leaving footprints in sand. Herbivores leave one type of print, while meat-eaters leave another. By examining these ancient chemical footprints preserved in tooth enamel, scientists can determine what kinds of foods an animal ate. Meat-eaters consistently show higher levels of a specific form of nitrogen compared to plant-eaters.

The research, published in Science, focused on specimens from the Sterkfontein cave near Johannesburg, part of South Africa’s “Cradle of Humankind,” an area renowned for its abundant early hominin fossils. Using innovative chemical analysis techniques, researchers examined fossilized teeth from seven Australopithecus specimens, comparing them with teeth from other animals that lived alongside them, including ancient relatives of antelopes, cats, dogs, and hyenas.

Source : https://studyfinds.org/ancient-tooth-enamel-early-human-diet-meat

Love bacon? Just one slice is all it takes to raise your risk of dementia

(© Boris Ryzhkov | Dreamstime.com)

If you could see inside your brain after eating processed meats, you might think twice about that morning bacon ritual. An eye-opening new study has revealed that even modest consumption of processed red meat could be aging your brain faster than normal.

Doctors from Brigham and Women’s Hospital and the Harvard T.H. Chan School of Public Health followed over 133,000 healthcare professionals for up to 43 years, finding that people who ate just a quarter serving or more of processed red meat per day had a 13% higher risk of developing dementia compared to those who consumed minimal amounts. For perspective, a serving of red meat is about three ounces – roughly the size of a deck of cards.

Most previous studies exploring the connection between red meat consumption and brain health have been relatively small or short-term, making this extensive research particularly noteworthy. The study, published in Neurology, carefully defined its terms: processed red meat included products like bacon, hot dogs, sausages, salami and bologna, while unprocessed red meat encompassed beef, pork, lamb and hamburger.

While both types of red meat have been previously linked to conditions like Type 2 diabetes and cardiovascular disease, processed meats carry additional risks due to their high levels of sodium, nitrites, and other potentially harmful compounds. These substances can trigger inflammation, oxidative stress, and vascular problems that may contribute to cognitive decline.

Participants were divided into three consumption groups for processed meat: those eating fewer than 0.10 servings per day (low), between 0.10 and 0.24 servings daily (medium), and 0.25 or more servings per day (high).

Beyond just tracking dementia diagnoses, researchers also assessed participants’ cognitive function through telephone interviews and questionnaires. Those who regularly consumed processed red meat showed signs of accelerated brain aging – approximately 1.6 years of additional cognitive aging for each daily serving. In practical terms, this means their brain function declined as if they were over a year and a half older than their actual age.
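Assuming the linear relationship the paragraph above describes (roughly 1.6 years of additional cognitive aging per daily serving), the arithmetic is a one-liner. This is an illustrative sketch with a hypothetical function name, not the study's regression model.

```python
# Back-of-the-envelope sketch of the effect size quoted above:
# ~1.6 years of extra cognitive aging per daily serving of processed red meat.
# Assumes a linear dose-response; illustrative only.

EXTRA_AGING_YEARS_PER_DAILY_SERVING = 1.6

def extra_cognitive_aging(daily_servings: float) -> float:
    """Estimated additional years of cognitive aging for a given intake."""
    return daily_servings * EXTRA_AGING_YEARS_PER_DAILY_SERVING

# Half a serving per day (e.g., a full serving every other day) -> ~0.8 years
print(extra_cognitive_aging(0.5))
```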

To assess cognitive decline from multiple angles, the researchers examined both subjective and objective measures. A group of nearly 44,000 participants with an average age of 78 completed surveys rating their own memory and thinking skills. This self-reported assessment revealed that those consuming 0.25 or more servings of processed meat daily had a 14% higher risk of subjective cognitive decline compared to minimal consumers.

Intriguingly, the study found that replacing processed red meat with healthier protein sources could help protect brain health. Swapping out that daily serving of bacon or hot dogs for nuts and legumes was associated with a 19% lower risk of dementia. Fish proved even more beneficial, with a 28% reduction in dementia risk when substituted for processed meat.

The research team focused on two large cohorts of health professionals: the Nurses’ Health Study and the Health Professionals Follow-Up Study. These groups were ideal for long-term research as they were already completing detailed dietary questionnaires every 2-4 years and had high rates of follow-up participation. The participants’ professional backgrounds also meant they were likely to provide accurate health information.

Women made up about two-thirds of the study population, with an average starting age of 49 years. By following participants for several decades, researchers could observe how dietary patterns in middle age influenced cognitive health later in life. This long-term perspective is crucial, as cognitive decline often begins subtly, years before noticeable symptoms appear.

“Dietary guidelines tend to focus on reducing risks of chronic conditions like heart disease and diabetes, while cognitive health is less frequently discussed, despite being linked to these diseases,” said corresponding author Dr. Daniel Wang, of the Channing Division of Network Medicine at Brigham and Women’s Hospital, in a statement. “Reducing how much red meat a person eats and replacing it with other protein sources and plant-based options could be included in dietary guidelines to promote cognitive health.”

Having a hot dog at the baseball game or bacon at Sunday brunch are certainly delicious traditions in the American diet. With dementia rates expected to soar over the next 30 years, however, the devastating condition risks becoming a tradition of its own. Taking the right steps to protect your brain can rewrite that fate.

Source : https://studyfinds.org/love-bacon-just-one-slice-dementia-risk/

Obesity redefined: Why doctors are ditching BMI for these key health markers

(© Feng Yu – stock.adobe.com)

When the issue is obesity, the questions are many, and the routes to answers anything but straight. What is abundantly clear is a need for consensus on two foundational matters:

  • What is a useful definition for obesity?
  • Is obesity a disease?

To answer these questions and standardize the concepts, a group of 58 experts, representing multiple medical specialties and countries, convened a commission and participated in a consensus development process. They were careful to include people who have experienced obesity to ensure that patients’ perspectives were considered. The commission’s report was just published in The Lancet Diabetes & Endocrinology.

The commission recognized that the current measure of obesity, which is body-mass index (BMI), can both overestimate and underestimate adiposity (how much of the body is fat). The global commission determined that to reduce misclassification, it is necessary to use other measures of body fat. Some of these included waist circumference, waist-to-hip ratio, waist-to-height ratio, direct fat measurement, and signs and symptoms of poor health that could be attributed to excess adiposity.

The experts proposed two distinct types of obesity:

  • Clinical obesity: A systemic chronic illness directly and specifically caused by excess adiposity
  • Preclinical obesity: Excess adiposity with preserved tissue and organ function, accompanied by an increased risk of progression to clinical or other noncommunicable disease

The commission’s leader, Dr. Francesco Rubino, of King’s College, London, explained the importance of the distinction drawn by these new definitions. The group acknowledges the subtleties of obesity and supports timely access to treatment for patients diagnosed with clinical obesity, as is appropriate for people with any chronic disease. For people with preclinical obesity, the definition points instead to risk-reduction strategies.

Clinical vs. preclinical obesity

Under the proposed definition, clinical obesity is a state of chronic illness in which some tissues or organs show reduced function attributable to excess fat, affecting daily activities. Signs include breathlessness; joint pain or reduced mobility, often in the knees and hips; metabolic dysfunction; and impaired function of other organ systems.

Applying the proposed definition, the diagnosis of clinical obesity requires two main criteria: confirmation of excess adiposity plus chronic organ dysfunction and/or limitations on mobility or daily living.

To confirm a diagnosis of clinical obesity in those with excess body fat, a healthcare provider must evaluate the individual’s medical history, conduct a physical exam, and order standard laboratory tests, plus additional diagnostic tests as indicated.

The commission authors stated that, “A diagnosis of clinical obesity should have the same implications as other chronic disease diagnoses. Patients diagnosed with clinical obesity should, therefore, have timely and equitable access to comprehensive care and evidence-based treatments.”

Preclinical obesity is more of a spectrum of risk. Excess fat is confirmed, but these individuals don’t have ongoing illness attributed to adiposity. They can perform daily activities and have no or mild organ dysfunction. These patients are at higher risk of progressing to clinical obesity and of developing cardiovascular disease, some cancers, Type 2 diabetes, and other illnesses.

“Preclinical obesity is different from metabolically healthy obesity because it is defined by the preserved function of all organs potentially affected by obesity,” the authors write, “not only those involved in metabolic regulation.”

What these changes mean for you, if you have excess fat, is that your condition is treated like any other medical condition. It isn’t something you just “get over” with diet and exercise. The effects of your fat are clearly identified, including the consequences without intervention. Specific fat-mediated dysfunctions have specific protocols for intervention. You work side-by-side with your healthcare provider to manage risk and consequences, hopefully even reducing risks and possibly reversing some consequences.

Source : https://studyfinds.org/obesity-redefined-why-doctors-are-ditching-bmi-for-these-key-health-markers/

Yes, parents really do have a ‘favorite’ child. Study reveals how to tell if it’s you

(Photo by New Africa on Shutterstock)

Ever wondered if your parents really did have a favorite child? That nagging suspicion might not be all in your head. A study analyzing data from over 19,400 participants concludes that parents do indeed treat their children differently, and the way they choose their “favorites” is more systematic than you might think.

“For decades, researchers have known that differential treatment from parents can have lasting consequences for children,” said lead author Alexander Jensen, PhD, an associate professor at Brigham Young University, in a statement. “This study helps us understand which children are more likely to be on the receiving end of favoritism, which can be both positive and negative.”

So what makes a child more likely to receive the coveted “favorite” status? The research team discovered several fascinating patterns. First, contrary to what many might expect, both mothers and fathers tend to favor daughters. Children who demonstrate responsibility and organization in their daily lives, from completing homework on time to keeping their rooms tidy, also typically receive more favorable treatment from their parents.

The study, published in Psychological Bulletin, examined five key areas of parent-child interaction: overall treatment, positive interactions (such as displays of affection or praise), negative interactions (like conflicts or criticism), resource allocation (including time spent with each child and material resources), and behavioral control (rules and expectations).

Birth order influences how parents interact with their children, particularly regarding independence and rules. Parents tend to grant older siblings more autonomy, such as later curfews or more decision-making freedom. However, the researchers note this may reflect appropriate developmental adjustments rather than favoritism.

Personality characteristics emerged as significant predictors of parental treatment. Children who demonstrate conscientiousness – showing responsibility through behaviors like completing chores without reminders or planning ahead for school assignments – typically experience more positive interactions and fewer conflicts with parents.

Similarly, agreeable children who show cooperation and consideration in family life often receive more positive parental responses.

One particularly noteworthy finding involves the disconnect between parents’ and children’s perceptions. While parents acknowledged treating daughters more favorably, children themselves didn’t report noticing significant gender-based differences in treatment. This suggests that some aspects of parental favoritism operate so subtly that children may not consciously recognize them.

Research has shown that children who receive less favorable treatment may face increased challenges with mental health and family relationships. “Understanding these nuances can help parents and clinicians recognize potentially damaging family patterns,” Jensen explained. “It is crucial to ensure all children feel loved and supported.”

The researchers emphasize that their findings show correlation rather than causation. “It is important to note that this research is correlational, so it doesn’t tell us why parents favor certain children,” Jensen said. “However, it does highlight potential areas where parents may need to be more mindful of their interactions with their children.”

For families navigating these dynamics, Jensen offers this perspective: “The next time you’re left wondering whether your sibling is the golden child, remember there is likely more going on behind the scenes than just a preference for the eldest or youngest. It might be about responsibility, temperament or just how easy or hard you are to deal with.”

Source : https://studyfinds.org/parents-really-do-have-favorite-child/

Nightmare: Your dreams are for sale — and companies are already buying

(Image by Shutterstock AI Generator)

Shocking new survey reveals 54% of young Americans report ads infiltrating their dreams

Remember when sleep offered an escape from endless advertising? That era may be ending. While U.S. citizens already face up to 4,000 advertisements daily in their waking hours, research suggests that even our dreams are no longer safe from commercial messaging. A new study reveals that 54% of young Americans report experiencing dreams influenced by ads—and some companies might be doing it intentionally.

The findings come at a critical time, as the American Marketing Association previously reported that 77% of companies surveyed in 2021 expressed intentions to experiment with “dream ads” by this year. What was once considered science fiction may now be becoming reality, with major implications for consumer protection and marketing ethics.

According to The Media Image’s newly released consumer survey focusing on Gen Z and Millennials, 54% of Americans aged 18-35 report having experienced dreams that appeared to be influenced by advertisements or contained ad-like content. Even more striking, 61% of respondents report having such dreams within the past year, with 38% experiencing them regularly—ranging from daily occurrences to monthly episodes.

Conducted by Survey Monkey on behalf of The Media Image between January 2nd and 3rd, 2025, the research included a representative sample of 1,101 American respondents aged 18-35. While the sample skewed slightly female (62%), the findings are considered reflective of broader perspectives within this age group.

The data shows a striking pattern: 22% of respondents experience ad-like content in their dreams between once a week to daily, while another 17% report such occurrences between once a month to every couple of months.

The phenomenon isn’t merely passive. The survey reveals that these dream-based advertisements may be influencing consumer behavior in tangible ways. While two-thirds of consumers (66%) report resistance to making purchases based on their dreams, the other third admit that their dreams have encouraged them to buy products or services over the past year—a conversion rate that rivals or exceeds many traditional advertising campaigns.

The presence of major brands in dreams appears to be particularly prevalent, with 48% of young Americans reporting encounters with well-known companies such as Coca-Cola, Apple, or McDonald’s during their sleep. Harvard experts suggest this may be due to memory “reactivation” during sleep, where frequent exposure to brands in daily life increases their likelihood of appearing in dreams.

Perhaps most troubling is the apparent willingness of many consumers to accept this new frontier of advertising. The survey found that 41% of respondents would be open to seeing ads in their dreams if it meant receiving discounts on products or services. This raises serious ethical questions about the commercialization of human consciousness and the potential exploitation of vulnerable mental states for marketing purposes.

Despite these concerns, there appears to be limited interest in protecting dreams from commercial influence. Over two-thirds of respondents (68%) indicated they would not be willing to pay to keep their dreams ad-free, even if such technology existed. However, a significant minority (32%) expressed interest in a hypothetical “dream-ad blocker,” suggesting growing awareness and concern about this issue among some consumers.

The research comes in the wake of dream researchers issuing an open letter warning the public about corporate attempts to infiltrate dreams with advertisements, sparked by Coors Light’s experimental campaign that achieved notable success. This confluence of corporate interest and technological capability raises serious questions about the future of personal privacy and mental autonomy.

The potential manipulation of dreams for advertising purposes raises serious concerns about psychological well-being and the need for protective regulations. As companies explore ways to influence our subconscious minds, the lack of existing safeguards becomes increasingly problematic.

These results emerge against a backdrop of increasing advertising saturation in daily life. Current estimates suggest that U.S. citizens are exposed to up to 4,000 advertisements daily, making sleep one of the last remaining refuges from commercial messaging. The potential erosion of this final sanctuary raises important questions about consumer rights and mental well-being in an increasingly commercialized world.

The research presents a clear warning: without immediate attention to the ethical and regulatory challenges of dream-based advertising, we risk losing the last advertisement-free space in modern life. As companies develop new technologies to influence our dreams, the choice between consumer protection and commercial interests becomes increasingly pressing.

Source : https://studyfinds.org/your-dreams-are-for-sale-and-companies-are-already-buying/

How smoking cigarettes could sabotage your career and income

(Photo credit: © Alem Bradic | Dreamstime.com)

Most people know smoking is bad for their health, but a new study suggests it could also be bad for their wealth. Research from Finland reveals that smoking in early adulthood can significantly impact your career trajectory and earning potential, with effects that ripple through decades of working life.

Living in an age where smoking rates have declined significantly since the 1990s, you might wonder why this matters. Despite the downward trend, smoking remains surprisingly prevalent in high-income countries, with 18% of women and 27% of men still lighting up as of 2019. While most smokers are aware of the health risks, they might not realize how their habit could be affecting their professional lives and financial future.

The study, published in Nicotine and Tobacco Research, analyzed data from nearly 2,000 Finnish adults to explore how smoking habits in early adulthood influenced their long-term success in the job market. What they found was striking: for each pack-year of smoking (equivalent to smoking one pack of cigarettes daily for a year), people experienced an average 1.8% decrease in earnings and were employed for 0.5% fewer years over the study period.

“Smoking in early adulthood is closely linked to long-term earnings and employment, with lower-educated individuals experiencing the most severe consequences,” said the paper’s lead author, Jutta Viinikainen, from the University of Jyväskylä, in a statement. “These findings highlight the need for policies that address smoking’s hidden economic costs and promote healthier behaviors.”

Research from the Cardiovascular Risk in Young Finns Study tracked participants’ smoking habits and career trajectories from 2001 to 2019, providing a long-term look at how tobacco use influences professional success over time. The study focused on adults who were between 24 and 39 years old at the start of the study period. Beyond just counting cigarettes, researchers calculated “pack-years” – a measure that considers both how much and how long someone has smoked – to understand the cumulative impact of smoking on career outcomes.

Particularly interesting was how smoking’s impact varied across different demographic groups. Young smokers with lower education levels faced the steepest penalties in terms of reduced earnings, while older smokers in this educational bracket saw the most significant drops in employment years. This pattern suggests that smoking’s effects on career success evolve differently across age groups and education levels.

For younger workers, smoking appeared to create immediate barriers to earning potential, possibly due to reduced productivity or unconscious bias from employers. Meanwhile, older workers faced growing challenges maintaining steady employment as the long-term health effects of smoking began to manifest, particularly in physically demanding jobs that are more common among those with less formal education.

Consider this: reducing smoking by just five pack-years (equivalent to smoking one pack daily for five years) could potentially boost earnings by 9%. That’s a substantial difference in earning power that could compound significantly over a career span, affecting everything from lifestyle choices to retirement savings.
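The pack-year arithmetic above is easy to verify. The sketch below uses the article's reported effect size of roughly a 1.8% earnings decrease per pack-year; the function names and linear form are my own illustrative assumptions, not the authors' regression model.

```python
# Worked arithmetic for the figures above. Assumes a linear effect of
# ~1.8% lower earnings per pack-year, per the article; illustrative only.

EARNINGS_DECREASE_PER_PACK_YEAR = 0.018

def pack_years(packs_per_day: float, years_smoked: float) -> float:
    """Pack-years: packs smoked per day multiplied by years of smoking."""
    return packs_per_day * years_smoked

def estimated_earnings_change(py: float) -> float:
    """Approximate fractional change in earnings for a given pack-year count."""
    return -EARNINGS_DECREASE_PER_PACK_YEAR * py

# One pack a day for five years = 5 pack-years,
# matching the article's ~9% figure (5 x 1.8%).
py = pack_years(1.0, 5.0)
print(f"{py} pack-years -> {estimated_earnings_change(py):.1%} earnings change")
```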

Of particular concern is how these effects might create a potential feedback loop of disadvantage. While the study found that those with lower education levels appeared to face greater economic consequences from smoking, it’s important to note that this relationship is complex and influenced by many factors. This suggests that smoking could be amplifying existing socioeconomic disparities, making it harder for people to climb the economic ladder.

Smoking’s impact on physical fitness and performance may explain part of this effect, particularly in jobs requiring manual labor or physical stamina. When you’re constantly short of breath or taking more frequent breaks for cigarettes, it’s harder to maintain the same level of productivity as non-smoking colleagues. Over time, these small differences in daily performance can translate into significant gaps in career advancement and earning potential.

Perhaps most encouraging was the finding that quitting smoking could help mitigate these negative effects, particularly regarding employment stability among less-educated workers. This suggests it’s never too late to improve your career prospects by putting out that last cigarette.

In a world where career success increasingly depends on maintaining peak performance and adaptability, smoking may be more than just a health risk – it could be a career liability that many can’t afford to ignore. As the costs of smoking continue to mount, both in terms of health and wealth, the message becomes clear: your wallet, not just your lungs, might breathe easier if you quit.

Source : https://studyfinds.org/smoking-cigarettes-career-income/

Glass of milk a day keeps colorectal cancer away, massive study reveals

(© Goran – stock.adobe.com)

What if reducing your cancer risk was as simple as adding a glass of milk to your daily diet? A study of over half a million women concludes that dairy products, particularly those rich in calcium, may help protect against colorectal cancer, while alcohol and processed meats continue to pose significant risks. The massive research project tracked the eating habits and health outcomes of 542,778 British women for over 16 years, identifying key foods and nutrients that could help prevent one of the world’s most common cancers.

Colorectal cancer shows striking differences between regions, with higher rates in wealthy nations like the United States, European countries, and Japan, compared to lower rates in much of Africa and South Asia. However, when people move to countries with higher rates, their risk begins matching that of their new home within about a decade, suggesting that lifestyle factors, particularly diet, play a crucial role.

The international research team analyzed 97 different dietary components, from specific foods to nutrients. During the study period, 12,251 women developed colorectal cancer, allowing scientists to identify clear patterns between eating habits and cancer risk.

Among the strongest protective factors was calcium intake. Women who consumed more calcium-rich foods showed a significantly lower risk of developing colorectal cancer. The benefit appeared consistent whether the calcium came from dairy products or other sources.

Dairy milk emerged as another powerful player in cancer prevention. Regular milk drinkers showed notably lower cancer risk, and other dairy products like yogurt demonstrated similar protective effects. Several nutrients commonly found in dairy — including riboflavin, magnesium, phosphorus, and potassium — also showed benefits.

On the flip side, alcohol consumption stood out as the strongest risk factor. Having about two standard drinks more per day was linked to a 15% higher risk of developing colorectal cancer. The risk appeared particularly pronounced for rectal cancer compared to colon cancer.

Red and processed meats maintained their concerning reputation. Each additional daily serving, about the size of a slice of ham, was associated with an 8% higher risk. This finding supports previous research that led international health organizations to classify processed meat as cancer-causing and red meat as probably cancer-causing in humans.

The researchers took an innovative approach to confirm dairy’s protective effects by examining genetic differences that affect how well people can digest milk products. This analysis provided additional evidence that dairy foods help protect against colorectal cancer, as people who were genetically better able to digest dairy had lower cancer rates.

While breakfast cereals, fruits, whole grains, and high-fiber foods showed some protective effects, these benefits became less pronounced when researchers accounted for overall lifestyle habits. This suggests that people who eat these foods might have generally healthier lifestyles that contribute to lower cancer risk.

Scientists believe calcium helps prevent cancer in several ways: by binding to harmful substances in the digestive system, promoting healthy cell development in the colon, and reducing inflammation. While dairy products aren’t suitable for everyone, particularly those with lactose intolerance or milk allergies, the research suggests that for many people, including more dairy in their diet might help reduce their cancer risk.

These findings provide compelling evidence that simple dietary changes, like having more dairy products while limiting alcohol and processed meats, could help reduce the risk of developing one of the world’s most common cancers. However, no single food acts as a magic bullet: it’s the overall pattern of dietary choices that matters most for cancer prevention.

Source: https://studyfinds.org/dairy-milk-keeps-colorectal-cancer-away/

Could AI replace politicians? A philosopher maps out three possible futures

(© jon – stock.adobe.com)

From business and public administration to daily life, artificial intelligence is reshaping the world – and politics may be next. While the idea of AI politicians might make some people uneasy, survey results tell a different story. A poll conducted by my university in 2021, during the early surge of AI advancements, found broad public support for integrating AI into politics across many countries and regions.

A majority of Europeans said they would like to see at least some of their politicians replaced by AI. Chinese respondents were even more bullish about AI agents making public policy, while normally innovation-friendly Americans were more circumspect.

As a philosopher who researches the moral and political questions raised by AI, I see three main pathways for integrating AI into politics, each with its own mixture of promises and pitfalls.

While some of these proposals are more outlandish than others, weighing them up makes one thing certain: AI’s involvement in politics will force us to reckon with the value of human participation in politics, and with the nature of democracy itself.

Chatbots running for office?

Prior to ChatGPT’s explosive arrival in 2022, efforts to replace politicians with chatbots were already well underway in several countries. As far back as 2017, a chatbot named Alisa challenged Vladimir Putin for the Russian presidency, while a chatbot named Sam ran for office in New Zealand. Denmark and Japan have also experimented with chatbot-led political initiatives.

These efforts, while experimental, reflect a longstanding curiosity about AI’s role in governance across diverse cultural contexts.

The appeal of replacing flesh and blood politicians with chatbots is, on some levels, quite clear. Chatbots lack many of the problems and limitations typically associated with human politics. They are not easily tempted by desires for money, power, or glory. They don’t need rest, can engage virtually with everyone at once, and offer encyclopedic knowledge along with superhuman analytic abilities.

However, chatbot politicians also inherit the flaws of today’s AI systems. These chatbots, powered by large language models, are often black boxes, limiting our insight into their reasoning. They frequently generate inaccurate or fabricated responses, known as hallucinations. They face cybersecurity risks, require vast computational resources, and need constant network access. They are also shaped by biases derived from training data, societal inequalities, and programmers’ assumptions.

Additionally, chatbot politicians would be ill-suited to what we expect from elected officials. Our institutions were designed for human politicians, with human bodies and moral agency. We expect our politicians to do more than answer prompts – we also expect them to supervise staff, negotiate with colleagues, show genuine concern for their constituents, and take responsibility for their choices and actions.

Without major improvements in the technology, or a more radical reimagining of politics itself, chatbot politicians remain an uncertain prospect.

AI-powered direct democracy

Another approach seeks to completely do away with politicians, at least as we know them. Physicist César Hidalgo believes that politicians are troublesome middlemen that AI finally allows us to cut out. Instead of electing politicians, Hidalgo wants each citizen to be able to program an AI agent with their own political preferences. These agents can then negotiate with each other automatically to find common ground, resolve disagreements, and write legislation.

Hidalgo hopes that this proposal can unleash direct democracy, giving citizens more direct input into politics while overcoming the traditional barriers of time commitment and legislative expertise. The proposal seems especially attractive in light of widespread dissatisfaction with conventional representative institutions.

However, eliminating representation may be more difficult than it seems. In Hidalgo’s “avatar democracy,” the de facto kingmakers would be the experts who design the algorithms. Since the only way to legitimately authorize their power would likely be through voting, we might merely replace one form of representation with another.

The specter of algocracy

One even more radical idea involves eliminating humans from politics altogether. The logic is simple enough: if AI technology advances to the point where it makes reliably better decisions than humans, what would be the point of human input?

An algocracy is a political regime run by algorithms. While few have argued outright for a total handover of political power to machines (and the technology for doing so is still far off), the specter of algocracy forces us to think critically about why human participation in politics matters. What values – such as autonomy, responsibility, or deliberation – must we preserve in an age of automation, and how?

Source: https://studyfinds.org/could-ai-replace-politicians/

Obesity label is medically flawed, says global report

People with excess body fat can still be active and healthy, experts say

Calling people obese is medically “flawed” – and the definition should be split into two, a report from global experts says.

The term “clinical obesity” should be used for patients with a medical condition caused by their weight, while “pre-clinically obese” should apply to those who remain fat but fit, though still at risk of disease.

This approach, the report argues, serves patients better than relying only on body mass index (BMI) – which measures whether they are a healthy weight for their height – to determine obesity.

More than a billion people are estimated to be living with obesity worldwide and prescription weight-loss drugs are in high demand.

The report, published in The Lancet Diabetes & Endocrinology journal, is supported by more than 50 medical experts around the world.

“Some individuals with obesity can maintain normal organ function and overall health, even long term, whereas others display signs and symptoms of severe illness here and now,” Prof Francesco Rubino, from King’s College London, who chaired the expert group, said.

“Obesity is a spectrum,” he added.

The current, blanket definition means too many people are being diagnosed as obese but not receiving the most appropriate care, the report says.

Natalie, from Crewe, goes to the gym four times a week and has a healthy diet, but is still overweight.

“I would consider myself on the larger side, but I’m fit,” she told the BBC 5 Live phone-in with Nicky Campbell.

“If you look at my BMI I’m obese, but if I speak to my doctor they say that I’m fit, healthy and there’s nothing wrong with me.

“I’m doing everything I can to stay fit and have a long healthy life,” she said.

Richard, from Falmouth, said there is a lot of confusion around BMI.

“When they did my test, it took me to a level of borderline obesity, but my body fat was only 4.9% – the problem is I had a lot of muscle mass,” he says.

In Mike’s opinion, you cannot be fat and fit – he says it is all down to diet.

“All these skinny jabs make me laugh, if you want to lose weight stop eating – it’s easy.”

Currently, in many countries, obesity is defined as having a BMI over 30 – a measurement that estimates body fat based on height and weight.

How is BMI calculated?
It is calculated by dividing an adult’s weight in kilograms by their height in metres squared.

For example, if they are 70kg (about 11 stone) and 1.70m (about 5ft 7in):

- square their height in metres: 1.70 x 1.70 = 2.89
- divide their weight in kilograms by this amount: 70 ÷ 2.89 = 24.22
- display the result to one decimal place: 24.2
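The calculation above can be sketched in a few lines of Python (an illustrative helper, not part of the NHS guidance; the function name is ours):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared, to one decimal place."""
    return round(weight_kg / height_m ** 2, 1)

print(bmi(70, 1.70))  # 24.2 — the worked example above
```

By the threshold mentioned later in the article, a result of 30 or above would be classed as obese under the current definition.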

But BMI has limitations.

It measures whether someone is carrying too much weight – but not too much fat.

So very muscular people, such as athletes, tend to have a high BMI but not much fat.

The report says BMI is useful on a large scale, to work out the proportion of a population who are a healthy weight, overweight or obese.

But it reveals nothing about an individual patient’s overall health, whether they have heart problems or other illnesses, for example, and fails to distinguish between different types of body fat or measure the more dangerous fat around the waist and organs.

Measuring a patient’s waist or the amount of fat in their body, along with a detailed medical history, can give a much clearer picture than BMI, the report says.

Source: https://www.bbc.com/news/articles/c79dz14d30ro

Keeping the thermostat between these temperatures is best for seniors’ brains

(Credit: © Lopolo | Dreamstime.com)

That perfect thermostat setting might be more important than you think, especially at grandma and grandpa’s house. A new study finds that indoor temperature significantly affects older adults’ ability to concentrate, even in their own homes where they control the climate. The research suggests that as climate change brings more extreme temperatures, elderly individuals may face increased cognitive challenges unless their indoor environments are properly regulated.

Researchers at the Hinda and Arthur Marcus Institute for Aging Research, the research arm of Hebrew SeniorLife affiliated with Harvard Medical School, conducted a year-long study monitoring 47 community-dwelling adults aged 65 and older. The study tracked both their home temperatures and their self-reported ability to maintain attention throughout the day. What they discovered was a clear U-shaped relationship between room temperature and cognitive function. In other words, attention spans were optimal within a specific temperature range and declined when rooms became either too hot or too cold.

The sweet spot for cognitive function appeared to be between 20 and 24°C (68-75°F). When temperatures deviated from this range by just 4°C (7°F) in either direction, participants were twice as likely to report difficulty maintaining attention on tasks. This finding is particularly concerning given that many older adults live on fixed incomes and may struggle to maintain optimal indoor temperatures, especially during extreme weather events.
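As a rough illustration of that dose-response pattern, the reported doubling of attention difficulty per 4°C of deviation can be written as a simple function (a hypothetical sketch for intuition only, not the study's fitted statistical model):

```python
def attention_difficulty_odds(temp_c: float, baseline_odds: float = 1.0) -> float:
    """Hypothetical sketch: odds of self-reported attention difficulty
    double for every 4 degrees C outside the 20-24 C comfort band."""
    low, high, doubling_step = 20.0, 24.0, 4.0
    # Deviation is zero inside the band, positive outside it in either direction.
    deviation = max(low - temp_c, temp_c - high, 0.0)
    return baseline_odds * 2 ** (deviation / doubling_step)

print(attention_difficulty_odds(22))  # 1.0 (inside the sweet spot)
print(attention_difficulty_odds(16))  # 2.0 (4 degrees too cold -> twice the odds)
```

The U-shape falls out of the `max(...)` term: odds rise symmetrically whether the room drifts hot or cold.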

Many previous studies have examined temperature’s effects on cognition in controlled laboratory settings, but this research breaks new ground by studying people in their natural home environments over an extended period. The research team used smart sensors placed in participants’ primary living spaces to continuously monitor temperature and humidity levels, while participants completed twice-daily smartphone surveys about their thermal comfort and attention levels.

The study’s findings revealed an interesting asymmetry in how people responded to temperature variations. While both hot and cold conditions impaired attention, participants seemed particularly sensitive to cold temperatures. When reporting feeling cold, they showed greater cognitive difficulties across a wider range of actual temperatures compared to when they felt hot. This suggests that maintaining adequate heating may be especially crucial for preserving cognitive function in older adults during winter months.

“Our findings underscore the importance of understanding how environmental factors, like indoor temperature, impact cognitive health in aging populations,” said lead author Dr. Amir Baniassadi, an assistant scientist at the Marcus Institute, in a statement. “This research highlights the need for public health interventions and housing policies that prioritize climate resilience for older adults. As global temperatures rise, ensuring access to temperature-controlled environments will be crucial for protecting their cognitive well-being.”

This study follows a 2023 investigation measuring how temperature affected older adults’ sleep and cognitive ability, adding to a growing body of evidence that climate change impacts extend beyond physical health. While much attention has been paid to the direct health impacts of heat waves and cold snaps, this research suggests that even moderate temperature variations inside homes could affect older adults’ daily cognitive functioning.

The participant group, while relatively small, was carefully monitored. With an average age of 79 years, the cohort completed over 17,000 surveys during the study period. Most participants lived in private, market-rate housing (34 participants) rather than subsidized housing (13 participants), suggesting they had reasonable control over their home environments. This makes the findings particularly striking: if even relatively advantaged older adults experience cognitive effects from temperature variations, more vulnerable populations may face even greater challenges.

The connection between temperature and cognition isn’t entirely surprising. As we age, our bodies become less efficient at regulating temperature, a problem often compounded by chronic conditions like diabetes or medications that affect thermoregulation. What’s novel about this research is its demonstration that these physiological vulnerabilities may extend to cognitive function in real-world settings.

As winter gives way to spring and thermostats across the country get adjusted, this research suggests we might want to pay closer attention to those settings — especially in homes where older adults reside. The cognitive sweet spot of 68-75°F might just be the temperature range where wisdom flourishes.

Source: https://studyfinds.org/cold-homes-linked-to-attention-problems-in-older-adults/

Process this: 50,000 grocery products reveal shocking truth about America’s food supply

(Credit: © Photopal604 | Dreamstime.com)

Minimally processed foods make up just a small percentage of what’s available in U.S. supermarkets

Next time you walk down the aisles of your local grocery store, take a closer look at what’s actually available on those shelves. A stunning report reveals the majority of food products sold at major U.S. grocery chains are highly processed, with most of them priced significantly cheaper than less processed alternatives.

In what may be the most comprehensive analysis of food processing in American grocery stores to date, researchers examined over 50,000 food items sold at Walmart, Target, and Whole Foods to understand just how processed our food supply really is. Using sophisticated machine learning techniques, they developed a database called GroceryDB that scores foods based on their degree of processing.

What exactly makes a food “processed”? While nearly all foods undergo some form of processing (like washing and packaging), ultra-processed foods are industrial formulations made mostly from substances extracted from foods or synthesized in laboratories. Think instant soups, packaged snacks, and soft drinks – products that often contain additives like preservatives, emulsifiers, and artificial colors.

Research has suggested that diets high in ultra-processed foods can contribute to health issues like obesity, diabetes and heart disease. Over-processing can also strip foods of beneficial nutrients. Despite these risks, there has been no easy way for consumers to identify what foods are processed, highly processed, or ultra-processed.

“There are a lot of mixed messages about what a person should eat. Our work aims to create a sort of translator to help people look at food information in a more digestible way,” explains Giulia Menichetti, PhD, an investigator in the Channing Division of Network Medicine at Brigham and Women’s Hospital and the study’s corresponding author, in a statement.

The findings paint a concerning picture of American food retail. Across all three stores, minimally processed products made up a relatively small fraction of available items, while ultra-processed foods dominated the shelves. Even more troubling, the researchers found that for every 10% increase in processing scores, the price per calorie dropped by 8.7% on average. This means highly processed foods tend to be substantially cheaper than their less processed counterparts.
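To make that trend concrete, here is a hypothetical sketch: the 8.7%-per-10% figure comes from the study, but the function, its name, and the example prices are invented for illustration:

```python
def price_per_calorie(base_price: float, extra_processing_pct: float) -> float:
    """Hypothetical sketch of the reported trend: price per calorie falls
    about 8.7% for each additional 10% of processing score."""
    steps = extra_processing_pct / 10.0
    return base_price * (1 - 0.087) ** steps

# An invented $0.010-per-calorie item, 30 percentage points more processed:
print(round(price_per_calorie(0.010, 30), 4))  # 0.0076
```

Compounded over a 30-point gap, the discount is roughly a quarter of the price per calorie, which helps explain why heavily processed items dominate budget shopping.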

However, the degree of processing varied significantly between stores. Whole Foods offered more minimally processed options and fewer ultra-processed products compared to Walmart and Target. The researchers also found major differences between food categories. Some categories, like jerky, popcorn, chips, bread, and mac and cheese, showed little variation in processing levels – meaning consumers have limited choices if they want less processed versions of these foods. Other categories, like cereals, milk alternatives, pasta, and snack bars, displayed wider ranges of processing levels.

Looking at specific examples helps illustrate these differences. When examining breads, researchers found that Manna Organics multi-grain bread from Whole Foods scored low on the processing scale since it’s made primarily from whole wheat kernels and basic ingredients. In contrast, certain breads from Walmart and Target scored much higher due to added ingredients like resistant corn starch, soluble corn fiber, and various additives.

The research team also developed a novel way to analyze individual ingredients’ contributions to food processing. They found that certain oils, like brain octane oil, flaxseed oil, and olive oil, contributed less to ultra-processing compared to palm oil, vegetable oil, and soybean oil. This granular analysis helps explain why seemingly similar products can have very different processing scores.

Study authors have made their findings publicly accessible through a website called TrueFood.tech, where consumers can look up specific products and find less processed alternatives within the same category.

“When people hear about the dangers of ultra-processed foods, they ask, ‘OK, what are the rules? How can we apply this knowledge?’” Menichetti notes. “We are building tools to help people implement changes to their diet based on information currently available about food processing. Given the challenging task of transforming eating behaviors, we want to nudge them to eat something that is within what they currently want but a less-processed option.”

As Americans increasingly rely on grocery stores for their food — with over 60% of U.S. food consumption coming from retail establishments — understanding what’s actually available on store shelves becomes crucial for public health. While this research doesn’t definitively prove that ultra-processed foods are harmful, it does demonstrate that avoiding them may require both conscious effort and deeper pockets.

Source: https://studyfinds.org/ultra-processed-foods-america-grocery-stores-target-walmart/


Age 13 rule isn’t working — Most pre-teens already deep in social media

(Credit: Child Social Media © Andrii Iemelianenko | Dreamstime.com)

Ages 11 and 12 represent a pivotal transition from childhood to adolescence — a time traditionally marked by first crushes, growing independence, and deepening friendships. But according to new research, this age group is also marked by something more troubling: widespread social media addiction. The study of over 10,000 American youth reveals that most pre-teens are active on platforms they’re technically too young to use.

As the U.S. Supreme Court prepares to hear arguments against Congress’ TikTok ban, the research pulls back the curtain on what many parents have long suspected: nearly 64% of pre-teens have at least one social media account, flouting minimum age requirements and raising concerns about online safety and mental health impacts.

Drawing from a diverse sample of adolescents aged 11 to 15, researchers found that TikTok reigns supreme among young users, with 67% of social media-using teens maintaining an account on the short-form video platform. YouTube and Instagram followed closely behind at around 65% and 66% respectively.

“Policymakers need to look at TikTok as a systemic social media issue and create effective measures that protect children online,” said Dr. Jason Nagata, a pediatrician at UCSF Benioff Children’s Hospitals and the lead author of the study, in a statement. “TikTok is the most popular social media platform for children, yet kids reported having more than three different social media accounts, including Instagram and Snapchat.”

Notable gender differences emerged in platform preferences. Female adolescents gravitated toward TikTok, Snapchat, Instagram, and Pinterest, while their male counterparts showed stronger affinity for YouTube and Reddit. This digital divide hints at how social media may be shaping different aspects of adolescent development and socialization between genders.

Among the study’s more concerning findings was that 6.3% of young social media users admitted to maintaining “secret” accounts hidden from parental oversight. These covert profiles, sometimes dubbed “Finstas” (fake Instagram accounts), represent a digital double life that could put vulnerable youth at risk while hampering parents’ ability to protect their children online.

Signs of problematic use and potential addiction emerged as significant concerns. Twenty-five percent of children with social media accounts reported often thinking about social media apps, and another 25% said they use the apps to forget about their problems. Moreover, 17% of users tried to reduce their social media use but couldn’t, while 11% reported that excessive use had negatively impacted their schoolwork.

“Our study revealed a quarter of children reported elements of addiction while using social media, with some as young as eleven years old,” Nagata explained. “The research shows underage social media use is linked with greater symptoms of depression, eating disorders, ADHD, and disruptive behaviors. When talking about social media usage and policies, we need to prioritize the health and safety of our children.”

Recent legislative efforts, including the federal Protecting Kids on Social Media Act and various state-level initiatives, aim to strengthen safeguards around youth social media use. The U.S. Surgeon General has called for more robust age verification systems and warning labels on social media platforms, highlighting the growing recognition of this issue as a public health concern.

To address these challenges, medical professionals recommend structured approaches to managing screen time. The American Academy of Pediatrics has developed the Family Media Plan, providing families with tools to schedule both online and offline activities effectively.

“Every parent and family should have a family media plan to ensure children and adults stay safe online and develop a healthy relationship with screens and social media,” said Nagata, who practices this approach with his own children. “Parents can create strong relationships with their children by starting open conversations and modeling good behaviors.”

As social media continues evolving at breakneck speed, this research, published in Academic Pediatrics, provides a crucial snapshot of how the youngest generation navigates the digital landscape. The timing proves particularly relevant as the Supreme Court prepares to hear arguments about Congress’ TikTok ban, set to take effect January 19th. While the case primarily centers on national security concerns, the study’s findings suggest that children’s welfare should be an equally important consideration in platform regulation.

Source: https://studyfinds.org/most-pre-teens-already-deep-in-social-media/

Warning: Your pooch’s smooches really could make you quite sick

(Credit: © Natalia Skripnikova | Dreamstime.com)

39% of healthy dogs may silently carry dangerous Salmonella strains, researchers warn

UNIVERSITY PARK, Pa. — Next time your furry friend gives you those irresistible puppy dog eyes, you might want to think twice before sharing your snack. That’s because scientists say that household dogs could be silent carriers of dangerous antibiotic-resistant Salmonella bacteria, potentially putting their human families at risk.

Most pet owners know to wash their hands after handling raw pet food or cleaning up after their dogs, but researchers at Pennsylvania State University have uncovered a concerning trend: household dogs can carry and spread drug-resistant strains of Salmonella even when they appear perfectly healthy. This finding is particularly worrisome because these resistant bacteria can make treating infections much more challenging in both animals and humans.

The research takes on added significance considering that over half of U.S. homes include dogs. “We have this close bond with companion animals in general, and we have a really close interface with dogs,” explains Sophia Kenney, the study’s lead author and doctoral candidate at Penn State, in a statement. “We don’t let cows sleep in our beds or lick our faces, but we do dogs.”

To investigate this concerning possibility, the research team employed a clever detective-like approach. They first tapped into an existing network of veterinary laboratories that regularly test animals for various diseases. They identified 87 cases where dogs had tested positive for Salmonella between May 2017 and March 2023. These weren’t just random samples: they came from real cases where veterinarians had submitted samples for testing, whether the dogs showed symptoms or not.

The scientists then did something akin to matching fingerprints. For each dog case they found, they searched a national database of human Salmonella infections, looking for cases that occurred in the same geographic areas around the same times. This database, maintained by the National Institutes of Health, is like a library of bacterial information collected from patients across the country. Through this matching process, they identified 77 human cases that could potentially be connected to the dog infections.

The research team then used advanced DNA sequencing technology to analyze each bacterial sample. This allowed them to not only identify different varieties of Salmonella but also determine how closely related the bacteria from dogs were to those found in humans. They specifically looked for two key things: genes that make the bacteria resistant to antibiotics, and genes that help the bacteria cause disease.

What they found was eye-opening. Among the dog samples, they discovered 82 cases of the same type of Salmonella that commonly causes human illness. More concerning was that many of these bacterial strains carried genes making them resistant to important antibiotics, the same medicines doctors rely on to treat serious infections.

In particular, 16 of the human cases were found to be very closely related to six different dog-associated strains. While this doesn’t definitively prove the infections spread from dogs to humans, it’s like finding matching puzzle pieces that suggest a connection. The researchers also discovered that 39% of the dog samples contained a special gene called shdA, which allows the bacteria to survive longer in the dog’s intestines. This means infected dogs could potentially spread the bacteria through their waste for extended periods without appearing sick themselves.

The bacteria showed impressive diversity, with researchers identifying 31 different varieties in dogs alone. Some common types found in both dogs and humans included strains known as Newport, Typhimurium, and Enteritidis — names that might not mean much to the average person but are well-known to health officials for causing human illness.

The research has highlighted real-world implications. Study co-author Nkuchia M’ikanatha, lead epidemiologist for the Pennsylvania Department of Health, points to a recent outbreak where pig ear pet treats sickened 154 people across 34 states with multidrug-resistant Salmonella. “This reminds us that simple hygiene practices such as hand washing are needed to protect both our furry friends and ourselves — our dogs are family but even the healthiest pup can carry Salmonella,” he notes.

The historical context adds another layer to the findings. According to researchers, Salmonella has been intertwined with human history since agriculture began, potentially shadowing humanity for around 10,000 years alongside animal domestication.

While the study reveals concerning patterns about antibiotic resistance and disease transmission, lead researcher Erika Ganda emphasizes that not all bacteria are harmful. “Bacteria are never entirely ‘bad’ or ‘good’ — their role depends on the context,” she explains. “While some bacteria, like Salmonella, can pose serious health risks, others are essential for maintaining our health and the health of our pets.”

Of course, this doesn’t mean we should reconsider having dogs as pets. Instead, scientists say just be smart, and maybe try not to let your pooch kiss you on the lips.

“Several studies highlight the significant physical and mental health benefits of owning a dog, including reduced stress and increased physical activity,” Ganda notes. “Our goal is not to discourage pet ownership but to ensure that people are aware of potential risks and take simple steps, like practicing good hygiene, to keep both their families and their furry companions safe.”

Source: https://studyfinds.org/dogs-drug-resistant-salmonella/

‘Super Scoopers’ dumping ocean water on the Los Angeles fires: Why using saltwater is typically a last resort

A Croatian Air Force CL-415 Super Scooper firefighting aircraft in flight. (Photo by crordx on Shutterstock)

Firefighters battling the deadly wildfires that raced through the Los Angeles area in January 2025 have been hampered by a limited supply of freshwater. So, when the winds are calm enough, skilled pilots flying planes aptly named Super Scoopers are skimming off 1,500 gallons of seawater at a time and dumping it with high precision on the fires.

Using seawater to fight fires can sound like a simple solution – the Pacific Ocean has a seemingly endless supply of water. In emergencies like the one Southern California is facing, it’s often the only quick solution, though the operation can be risky amid ocean swells.

But seawater also has downsides.

Saltwater corrodes firefighting equipment and may harm ecosystems, especially those like the chaparral shrublands around Los Angeles that aren’t normally exposed to seawater. Gardeners know that small amounts of salt – added, say, as fertilizer – do not harm plants, but excessive salt can stress and kill them.

While the consequences of adding seawater to ecosystems are not yet well understood, we can gain insights on what to expect by considering the effects of sea-level rise.

A seawater experiment in a coastal forest

As an ecosystem ecologist at the Smithsonian Environmental Research Center, I lead a novel experiment called TEMPEST that was designed to understand how and why historically salt-free coastal forests react to their first exposures to salty water.

Sea level has risen by an average of about 8 inches globally over the past century, and that water has pushed salty water into U.S. forests, farms and neighborhoods that had previously known only freshwater. As the rate of sea-level rise accelerates, storms push seawater ever farther onto the dry land, eventually killing trees and creating ghost forests, a result of climate change that is widespread in the U.S. and globally.

In our TEMPEST test plots, we pump salty water from the nearby Chesapeake Bay into tanks, then sprinkle it on the forest soil surface fast enough to saturate the soil for about 10 hours at a time. This simulates a surge of salty water during a big storm.

Our coastal forest showed little effect from the first 10-hour exposure to salty water in June 2022 and grew normally for the rest of the year. We increased the exposure to 20 hours in June 2023, and the forest still appeared mostly unfazed, although the tulip poplar trees were drawing water from the soil more slowly, which may be an early warning signal.

Things changed after a 30-hour exposure in June 2024. The leaves of tulip poplar in the forests started to brown in mid-August, several weeks earlier than normal. By mid-September the forest canopy was bare, as if winter had set in. These changes did not occur in a nearby plot that we treated the same way, but with freshwater rather than seawater.

The initial resilience of our forest can be explained in part by the relatively low amount of salt in the water in this estuary, where water from freshwater rivers and a salty ocean mix. Rain that fell after the experiments in 2022 and 2023 washed salts out of the soil.

But a major drought followed the 2024 experiment, so salts lingered in the soil then. The trees’ longer exposure to salty soils after our 2024 experiment may have exceeded their ability to tolerate these conditions.

Seawater being dumped on the Southern California fires is full-strength, salty ocean water. And conditions there have been very dry, particularly compared with our East Coast forest plot.

Changes evident in the ground

Our research group is still trying to understand all the factors that limit the forest’s tolerance to salty water, and how our results apply to other ecosystems such as those in the Los Angeles area.

Tree leaves turning from green to brown well before fall was a surprise, but there were other surprises hidden in the soil below our feet.

Rainwater percolating through the soil is normally clear, but about a month after the first and only 10-hour exposure to salty water in 2022, the soil water turned brown and stayed that way for two years. The brown color comes from carbon-based compounds leached from dead plant material. It’s a process similar to making tea.

Our lab experiments suggest that salt was causing clay and other particles to disperse and move about in the soil. Such changes in soil chemistry and structure can persist for many years.

Source : https://studyfinds.org/super-scoopers-dumping-ocean-water-los-angeles-fires/

An eye for an eye: People agree about the values of body parts across cultures and eras

(Credit: © Kateryna Chyzhevska | Dreamstime.com)

The Bible’s lex talionis – “Eye for eye, tooth for tooth, hand for hand, foot for foot” (Exodus 21:24-27) – has captured the human imagination for millennia. This idea of fairness has been a model for ensuring justice when bodily harm is inflicted.

Thanks to the work of linguists, historians, archaeologists and anthropologists, researchers know a lot about how different body parts are appraised in societies both small and large, from ancient times to the present day.

But where did such laws originate?

According to one school of thought, laws are cultural constructions – meaning they vary across cultures and historical periods, adapting to local customs and social practices. By this logic, laws about bodily damage would differ substantially between cultures.

Our new study explored a different possibility – that laws about bodily damage are rooted in something universal about human nature: shared intuitions about the value of body parts.

Do people across cultures and throughout history agree on which body parts are more or less valuable? Until now, no one had systematically tested whether body parts are valued similarly across space, time and levels of legal expertise – that is, among laypeople versus lawmakers.

We are psychologists who study evaluative processes and social interactions. In previous research, we have identified regularities in how people evaluate different wrongful actions, personal characteristics, friends and foods. The body is perhaps a person’s most valuable asset, and in this study we analyzed how people value its different parts. We investigated links between intuitions about the value of body parts and laws about bodily damage.

How critical is a body part or its function?

We began with a simple observation: Different body parts and functions have different effects on the odds that a person will survive and thrive. Life without a toe is a nuisance. But life without a head is impossible. Might people intuitively understand that different body parts have different values?

Knowing the value of body parts gives you an edge. For example, if you or a loved one has suffered multiple injuries, you could treat the most valuable body part first, or allocate a greater share of limited resources to its treatment.

This knowledge could also play a role in negotiations when one person has injured another. When person A injures person B, B or B’s family can claim compensation from A or A’s family. This practice appears around the world: among the Mesopotamians, the Chinese during the Tang dynasty, the Enga of Papua New Guinea, the Nuer of Sudan, the Montenegrins and many others. The Anglo-Saxon word “wergild,” meaning “man price,” is now used generally to designate the practice of paying compensation for body parts.

But how much compensation is fair? Claiming too little leads to loss, while claiming too much risks retaliation. To walk the fine line between the two, victims would claim compensation in Goldilocks fashion: just right, based on the consensus value that victims, offenders and third parties in the community attach to the body part in question.

This Goldilocks principle is readily apparent in the exact proportionality of the lex talionis – “eye for eye, tooth for tooth.” Other legal codes dictate precise values of different body parts but do so in money or other goods. For example, the Code of Ur-Nammu, written 4,100 years ago in ancient Nippur, present-day Iraq, states that a man must pay 40 shekels of silver if he cuts off another man’s nose, but only 2 shekels if he knocks out another man’s tooth.

Testing the idea across cultures and time

If people have intuitive knowledge of the values of different body parts, might this knowledge underpin laws about bodily damage across cultures and historical eras?

To test this hypothesis, we conducted a study involving 614 people from the United States and India. The participants read descriptions of various body parts, such as “one arm,” “one foot,” “the nose,” “one eye” and “one molar tooth.” We chose these body parts because they were featured in legal codes from five different cultures and historical periods that we studied: the Law of Æthelberht from Kent, England, in 600 C.E., the Guta lag from Gotland, Sweden, in 1220 C.E., and modern workers’ compensation laws from the United States, South Korea and the United Arab Emirates.

Participants answered one question about each body part they were shown. We asked some how difficult it would be for them to function in daily life if they lost various body parts in an accident. Others we asked to imagine themselves as lawmakers and determine how much compensation an employee should receive if that person lost various body parts in a workplace accident. Still others we asked to estimate how angry another person would feel if the participant damaged various parts of the other’s body. While these questions differ, they all rely on assessing the value of different body parts.

To determine whether untutored intuitions underpin laws, we didn’t include people who had college training in medicine or law.

Then we analyzed whether the participants’ intuitions matched the compensations established by law.

Our findings were striking. The values placed on body parts by both laypeople and lawmakers were largely consistent. The more highly American laypeople tended to value a given body part, the more valuable it also seemed to Indian laypeople, to American, Korean and Emirati lawmakers, to King Æthelberht and to the authors of the Guta lag. For example, laypeople and lawmakers across cultures and over centuries generally agree that the index finger is more valuable than the ring finger, and that one eye is more valuable than one ear.
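Agreement of this kind is usually quantified with a rank correlation: if two groups order the same body parts by severity in roughly the same way, the correlation is close to 1. A minimal sketch follows – the severity ratings are invented purely for illustration, and the study’s actual data and statistical methods differ:

```python
def rank(values):
    """Ranks (1 = smallest). Ties are not handled, which is fine for distinct toy data."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for position, i in enumerate(order, start=1):
        r[i] = position
    return r

def spearman(a, b):
    """Spearman rank correlation for distinct values: 1 - 6*sum(d^2) / (n*(n^2-1))."""
    ra, rb = rank(a), rank(b)
    n = len(ra)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical severity ratings (0-100) for the same five body parts,
# as judged by two different groups -- illustrative numbers only.
parts = ["eye", "hand", "index finger", "ring finger", "molar tooth"]
group_a = [90, 80, 40, 30, 15]
group_b = [85, 75, 35, 45, 20]  # mostly agrees, but swaps two fingers

rho = spearman(group_a, group_b)
print(f"rank agreement: {rho:.2f}")
```

Even with the two fingers swapped, the overall orderings agree closely, so the correlation stays high – the kind of cross-group consistency the study reports.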

But do people value body parts accurately, in a way that corresponds with their actual functionality? There are some hints that, yes, they do. For example, laypeople and lawmakers regard the loss of a single part as less severe than the loss of multiples of that part. In addition, laypeople and lawmakers regard the loss of a part as less severe than the loss of the whole; the loss of a thumb is less severe than the loss of a hand, and the loss of a hand is less severe than the loss of an arm.

Additional evidence of accuracy can be gleaned from ancient laws. For example, linguist Lisi Oliver notes that in Barbarian Europe, “wounds that may cause permanent incapacitation or disability are fined higher than those which may eventually heal.”

Although people generally agree in valuing some body parts more than others, some sensible differences may arise. For instance, sight would be more important for someone making a living as a hunter than as a shaman. The local environment and culture might also play a role. For example, upper body strength could be particularly important in violent areas, where one needs to defend oneself against attacks. These differences remain to be investigated.

Source : https://studyfinds.org/values-of-body-parts-across-cultures-and-eras/

One juice, three benefits: How elderberry could transform metabolism in just 7 days

(Photo credit: © Anna Komisarenko | Dreamstime.com)

Small study demonstrates the enormous fat-burning and gut-boosting powers of an ‘underappreciated’ berry

In an era where 74% of Americans are considered overweight and 40% have obesity, scientists have discovered that an ancient berry might offer modern solutions. Research from Washington State University reveals that elderberry juice could help regulate blood sugar levels and improve the body’s ability to burn fat, while also promoting beneficial gut bacteria.

Elderberries have long been used in traditional medicine, but this new research provides scientific evidence for their metabolic benefits. The study, published in the journal Nutrients, demonstrates that consuming elderberry juice for just one week led to significant improvements in how the body processes sugar and burns fat.

“Elderberry is an underappreciated berry, commercially and nutritionally,” says Patrick Solverson, an assistant professor in WSU’s Department of Nutrition and Exercise Physiology, in a statement. “We’re now starting to recognize its value for human health, and the results are very exciting.”

Solverson and his team recruited 18 overweight but otherwise healthy adults for this carefully controlled experiment. Most participants were women, with an average age of 40 years and an average body mass index (BMI) of 29.12, placing them in the overweight category.

This wasn’t your typical “drink this and tell us how you feel” study. Instead, the researchers implemented a sophisticated crossover design where participants served as their own control group. Each person completed two one-week periods: one drinking elderberry juice and another drinking a placebo beverage that looked and tasted similar but lacked the active compounds. A three-week “washout” period separated these phases to ensure no carryover effects.

During the study, participants consumed 355 grams (about 12 ounces) of either elderberry juice or placebo daily, split between morning and evening doses. The elderberry juice provided approximately 720 milligrams of beneficial compounds called anthocyanins, which give the berries their deep purple color.

Perhaps most remarkably, after just one week of elderberry juice consumption, participants showed a 24% reduction in blood glucose response following a high-carbohydrate meal challenge. This suggests that elderberry juice might help the body better regulate blood sugar levels, a crucial factor in metabolic health and weight management.

The study also revealed that participants burned more fat both while resting and during exercise when consuming elderberry juice. Using specialized equipment to measure breath gases, researchers found that those drinking elderberry juice burned 27% more fat compared to when they drank the placebo. This increased fat-burning occurred not only during rest but also persisted during a 30-minute moderate-intensity walking test.

But the benefits didn’t stop there. The research team also examined participants’ gut bacteria through stool samples and found that elderberry juice promoted the growth of beneficial bacterial species while reducing less desirable ones. Specifically, it increased levels of bacteria known for producing beneficial compounds called short-chain fatty acids, which play essential roles in metabolism and gut health.

What makes elderberry particularly special is its exceptionally high concentration of anthocyanins. According to Solverson, a person would need to consume four cups of blackberries to match the anthocyanin content found in just 6 ounces of elderberry juice. These compounds are believed to be responsible for the berry’s anti-inflammatory, anti-diabetic, and antimicrobial effects.

While further research is needed to confirm these effects over longer periods and in larger populations, this study suggests that elderberry juice might offer a practical dietary strategy for supporting metabolic health. It’s worth noting that participants reported no adverse effects from consuming the juice, suggesting it’s both safe and well-tolerated.

The timing of this research coincides with growing consumer interest in elderberry products. While these purple berries have long been popular in European markets, demand in the United States surged during the COVID-19 pandemic and continues to rise. This increasing market presence could make it easier for consumers to access elderberry products if further research continues to support their health benefits.

“Food is medicine, and science is catching up to that popular wisdom,” Solverson notes. “This study contributes to a growing body of evidence that elderberry, which has been used as a folk remedy for centuries, has numerous benefits for metabolic as well as prebiotic health.”

The research team isn’t stopping here. With an additional $600,000 in funding from the U.S. Department of Agriculture, they plan to investigate whether elderberry juice might help people maintain their weight after discontinuing weight loss medications. This could provide a natural solution for one of the most challenging aspects of weight management – maintaining weight loss over time.

As obesity rates continue to climb and are projected to reach 48-55% of American adults by 2050, finding natural, food-based approaches to support metabolic health becomes increasingly important. While elderberry juice shouldn’t be viewed as a magic bullet, this research suggests it might be a valuable addition to a healthy diet and lifestyle approach for managing weight and metabolic health.

Source : https://studyfinds.org/how-elderberry-might-transform-metabolism-in-just-7-days/

From first breath: Male and female brains really do differ at birth

(Credit: © Katrina Trninich | Dreamstime.com)

The age-old debate about differences between male and female brains has taken a dramatic turn with new evidence suggesting these variations begin before a baby’s first cry. In the largest study of its kind, researchers at Cambridge University’s Autism Research Centre have discovered that structural brain differences between the sexes don’t gradually emerge through childhood — they’re already established at birth.

Brain development during the first few weeks of life occurs at a remarkably rapid pace, making this period particularly crucial for understanding how sex differences in the brain emerge and evolve. Previous research has primarily focused on older infants, children, and adults, leaving a significant gap in our understanding of the earliest stages of brain development.

The research team analyzed brain scans of 514 newborns (236 females and 278 males) aged 0-28 days using data from the developing Human Connectome Project. The study, published in the journal Biology of Sex Differences, represents one of the largest and most comprehensive investigations of sex differences in neonatal brain structure to date, addressing a common limitation of past research: small sample sizes.

Male newborns showed larger overall brain volumes compared to females, even after accounting for differences in birth weight. This finding was particularly significant because the research team carefully controlled for body size differences between sexes, a factor that has complicated previous studies in this field.

When controlling for total brain volume, female babies exhibited greater amounts of gray matter — the outer brain tissue containing nerve cell bodies and dendrites responsible for processing and interpreting information, such as sensation, perception, learning, speech, and cognition. Meanwhile, male infants had higher volumes of white matter, which consists of long nerve fibers (axons) that connect different brain regions together.

“Our study settles an age-old question of whether male and female brains differ at birth,” says lead author Yumnah Khan, a PhD student at the Autism Research Centre, in a statement. “We know there are differences in the brains of older children and adults, but our findings show that they are already present in the earliest days of life.”

Several specific brain regions showed notable differences between males and females. Female newborns had larger volumes in areas related to memory and emotional regulation, while male infants showed greater volume in regions involved in sensory processing and motor control.

Dr. Alex Tsompanidis, who supervised the study, emphasizes its methodological rigor: “This is the largest such study to date, and we took additional factors into account, such as birth weight, to ensure that these differences are specific to the brain and not due to general size differences between the sexes.”

The research team is now investigating potential prenatal factors that might contribute to these differences. “To understand why males and females show differences in their relative grey and white matter volume, we are now studying the conditions of the prenatal environment, using population birth records, as well as in vitro cellular models of the developing brain,” explains Dr. Tsompanidis.

Importantly, the researchers stress that these findings represent group averages rather than individual characteristics.

“The differences we see do not apply to all males or all females, but are only seen when you compare groups of males and females together,” says Dr. Carrie Allison, Deputy Director of the Autism Research Centre. “There is a lot of variation within, and a lot of overlap between, each group.”

These findings mark a significant step forward in understanding early brain development, while raising new questions about the role of prenatal factors in shaping neurological differences. The research team’s ongoing investigations into prenatal conditions and cellular models may soon provide even more insights into how these sex-based variations emerge.

“These differences do not imply the brains of males and females are better or worse. It’s just one example of neurodiversity,” says Professor Simon Baron-Cohen, Director of the Autism Research Centre. “This research may be helpful in understanding other kinds of neurodiversity, such as the brain in children who are later diagnosed as autistic, since this is diagnosed more often in males.”

Source : https://studyfinds.org/how-male-and-female-brains-differ-at-birth/

Gender shock: Study reveals men, not women, make more emotional money choices

(Credit: © Yuri Arcurs | Dreamstime.com)

When it comes to making financial decisions, conventional wisdom suggests keeping emotions out of the equation. But new research reveals that men, contrary to traditional gender stereotypes, may be significantly more susceptible to letting emotions influence their financial choices than women.

A study led by the University of Essex challenges long-held assumptions about gender and emotional decision-making. The research explores how emotions generated in one context can influence decisions in completely unrelated situations – a phenomenon known as the emotional carryover effect.

“These results challenge the long-held stereotype that women are more emotional and open new avenues for understanding how emotions influence decision-making across genders,” explains lead researcher Dr. Nikhil Masters from Essex’s Department of Economics.

Working with colleagues from the Universities of Bournemouth and Nottingham, Masters designed an innovative experiment comparing how different types of emotional stimuli affect people’s willingness to take financial risks. They contrasted a traditional laboratory approach targeting a single emotion (fear) with a more naturalistic stimulus based on real-world events that could trigger multiple emotional responses.

The researchers recruited 186 university students (100 women and 86 men) and randomly assigned them to one of three groups. One group watched a neutral nature documentary about the Great Barrier Reef. Another group viewed a classic fear-inducing clip from the movie “The Shining,” showing a boy searching for his mother in an empty corridor with tense background music. The third group watched actual news footage about the BSE crisis (commonly known as “mad cow disease”) from the 1990s, a real food safety scare that generated widespread public anxiety.

After watching their assigned videos, participants completed decision-making tasks involving both risky and ambiguous financial choices using real money. In the risky scenario, they had to decide between taking guaranteed amounts of money or gambling on a lottery with known 50-50 odds. The ambiguous scenario was similar, but participants weren’t told the odds of winning.

The results revealed striking gender differences. Men who watched either the horror movie clip or the BSE footage subsequently made more conservative financial choices compared to those who watched the neutral nature video. This effect was particularly pronounced for those who saw the BSE news footage, and even stronger when the odds were ambiguous rather than clearly defined.

Perhaps most surprisingly, women’s financial decisions remained remarkably consistent regardless of which video they watched. The researchers found that while women reported experiencing similar emotional responses to the videos as men did, these emotions didn’t carry over to influence their subsequent financial choices.

The study challenges previous assumptions about how specific emotions like fear influence risk-taking behavior. While earlier studies suggested that fear directly leads to more cautious decision-making, this new research indicates the relationship may be more complex. Even when the horror movie clip successfully induced fear in participants, individual variations in reported fear levels didn’t correlate with their financial choices.

Instead, the researchers discovered that changes in positive emotions may play a more important role than previously thought. When positive emotions decreased after watching either the horror clip or BSE footage, male participants became more risk-averse in their financial decisions.

The study also demonstrated that emotional effects on decision-making can be even stronger when using realistic stimuli that generate multiple emotions simultaneously, compared to artificial laboratory conditions designed to induce a single emotion. This suggests that real-world emotional experiences may have more powerful influences on our financial choices than controlled laboratory studies have indicated.

The research team is now investigating why only men appear to be affected by these carryover effects. “Previous research has shown that emotional intelligence helps people to manage their emotions more effectively. Since women generally score higher on emotional intelligence tests, this could explain the big differences we see between men and women,” explains Dr. Masters.

These findings could have significant implications for understanding how major news events or crises might affect financial markets differently across gender lines. They also suggest the potential value of implementing “cooling-off” periods for important financial decisions, particularly after exposure to emotionally charged events or information.

“We don’t make choices in a vacuum and a cooling-off period might be crucial after encountering emotionally charged situations,” says Dr. Masters, “especially for life-changing financial commitments like buying a home or large investments.”

Source : https://studyfinds.org/study-men-not-women-make-more-emotional-money-choices/

Danger in drinking water? Fluoride linked to lower IQ scores in children

(Photo by Tatevosian Yana on Shutterstock)

In a discovery that could reshape how we think about water fluoridation, researchers have uncovered a troubling pattern across 10 countries and nearly 21,000 children: higher fluoride exposure consistently correlates with lower IQ scores. The meta-analysis raises critical questions about the balance between preventing tooth decay and protecting cognitive development.

While fluoride has long been added to public drinking water systems to prevent tooth decay, this research suggests the need to carefully weigh the dental health benefits against potential developmental risks. In the United States, the recommended fluoride concentration for community water systems is 0.7 mg/L, with regulatory limits set at 4.0 mg/L by the Environmental Protection Agency (EPA).

The research team, led by scientists from the National Institute of Environmental Health Sciences, examined studies from ten different countries, though notably none from the United States. The majority of the research (45 studies) came from China, with others from Canada, Denmark, India, Iran, Mexico, New Zealand, Pakistan, Spain, and Taiwan.

Published in JAMA Pediatrics, the findings paint a consistent picture across different types of analyses. When comparing groups with higher versus lower fluoride exposure, children in the higher exposure groups showed significantly lower IQ scores. For every 1 mg/L increase in urinary fluoride levels, researchers observed an average decrease of 1.63 IQ points.

This effect size might seem small, but population-level impacts can be substantial. The researchers note that a five-point decrease in population IQ would nearly double the number of people classified as intellectually disabled, highlighting the potential public health significance of their findings.
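The "nearly double" figure follows from how a shift in the average of a bell curve inflates its lower tail. A back-of-the-envelope sketch (assuming IQ is normally distributed with mean 100 and standard deviation 15, and using the conventional disability threshold of 70 – standard conventions, not figures from the study):

```python
from math import erf, sqrt

def norm_cdf(x, mean=100.0, sd=15.0):
    """Cumulative probability of a normal distribution, via the error function."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

threshold = 70.0  # conventional IQ cutoff for intellectual disability

before = norm_cdf(threshold, mean=100.0)  # share below threshold originally
after = norm_cdf(threshold, mean=95.0)    # share after a 5-point population shift

print(f"before: {before:.4f}, after: {after:.4f}, ratio: {after / before:.2f}")
```

Roughly 2.3% of the population falls below 70 at a mean of 100; after a five-point downward shift, about 4.8% does – close to twice as many, matching the researchers’ point.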

The study employed three different analytical approaches to examine the relationship between fluoride and IQ. First, they compared mean IQ scores between groups with different exposure levels. Second, they analyzed dose-response relationships to understand how IQ scores changed with increasing fluoride concentrations. Finally, they examined individual-level data to calculate precise estimates of IQ changes per unit increase in fluoride exposure.

Of particular concern, the inverse relationship between fluoride exposure and IQ remained significant even at relatively low exposure levels. When researchers restricted their analysis to studies with fluoride concentrations below 2 mg/L (closer to levels found in fluoridated water systems), they still found evidence of cognitive impacts.

The implications of these findings are especially relevant for the United States, where fluoridated water serves about 75% of people using community water systems. While no U.S. studies were included in this analysis, the researchers note that significant inequalities exist in American water fluoride levels, particularly affecting Hispanic and Latino communities.

The study’s findings arrive at a crucial moment in public health policy. While water fluoridation has been hailed as one of the great public health achievements of the 20th century for its role in preventing tooth decay, this research suggests the need for a careful reassessment of fluoride exposure guidelines, particularly for vulnerable populations like pregnant women and young children.

Source : https://studyfinds.org/danger-in-drinking-water-flouride-linked-to-lower-iq-scores-in-children/

The disturbing trend discovered in 166,534 movies over past 50 years

(Credit: Prostock-studio on Shutterstock)

Movies are getting deadlier – at least in terms of their dialogue. A new study analyzing over 160,000 English-language films has revealed a disturbing trend: characters are talking about murder and killing more frequently than ever before, even in movies that aren’t focused on crime.

Researchers from the University of Maryland, University of Pennsylvania, and The Ohio State University examined movie subtitles spanning five decades, from 1970 to 2020, to track how often characters used words related to murder and killing. What they found was a clear upward trajectory that mirrors previous findings about increasing visual violence in films.

“Characters in noncrime movies are also talking more about killing and murdering today than they did 50 years ago,” says Brad Bushman, corresponding author of the study and professor of communication at The Ohio State University, in a statement. “Not as much as characters in crime movies, and the increase hasn’t been as steep. But it is still happening. We found increases in violence across all genres.”

By applying sophisticated natural language processing techniques, the team calculated the percentage of “murderous verbs” – variations of words like “kill” and “murder” – compared to the total number of verbs used in movie dialogue. They deliberately took a conservative approach, excluding passive phrases like “he was killed,” negations such as “she didn’t kill,” and questions like “did he murder someone?” to focus solely on characters actively discussing committing violent acts.
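The exclusion rules can be illustrated with a toy filter. The study used proper natural language processing on parsed dialogue; the regex-based sketch below is a deliberately crude stand-in, with illustrative word lists and heuristics that are assumptions, not the authors’ pipeline:

```python
import re

# Illustrative verb pattern -- the real study identified verbs via NLP, not regex.
MURDEROUS = r"(?:kill(?:s|ed|ing)?|murder(?:s|ed|ing)?)"

def is_active_murderous(line: str) -> bool:
    """Crude filter mimicking the study's exclusions: skip questions,
    passives ('was killed'), and negations ('didn't kill')."""
    text = line.lower().strip()
    if text.endswith("?"):                                        # questions excluded
        return False
    if re.search(r"\b(?:was|were|been|being|got)\s+" + MURDEROUS, text):
        return False                                              # passive voice excluded
    if re.search(r"(?:\bnot\b|n't|\bnever\b)\s+" + MURDEROUS, text):
        return False                                              # negations excluded
    return re.search(r"\b" + MURDEROUS + r"\b", text) is not None

lines = [
    "I'll kill him myself.",      # active -> counted
    "He was killed last night.",  # passive -> excluded
    "She didn't kill anyone.",    # negation -> excluded
    "Did he murder someone?",     # question -> excluded
]
counted = [l for l in lines if is_active_murderous(l)]
```

Only the first line survives the filter – characters actively talking about committing violence – which is the conservative counting the researchers describe before dividing by the total number of verbs.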

“Our findings suggest that references to killing and murder in movie dialogue not only occur far more frequently than in real life but are also increasing over time,” explains Babak Fotouhi, lead author of the study and adjunct assistant research professor in the College of Information at the University of Maryland.

“We focused exclusively on murderous verbs in our analysis to establish a lower bound in our reporting,” notes Amir Tohidi, a postdoctoral researcher at the University of Pennsylvania. “Including less extreme forms of violence would result in a higher overall count of violence.”

Nearly 7% of all movies analyzed contained these murderous verbs in their dialogue. The findings demonstrate a steady increase in such language over time, particularly in crime-focused films. Male characters showed the strongest upward trend in violent dialogue, though female characters also demonstrated a significant increase in non-crime movies.

This rising tide of violent speech wasn’t confined to obvious genres like action or thriller films. Even movies not centered on crime showed a measurable uptick in murder-related dialogue over the 50-year period studied. This suggests that casual discussion of lethal violence has become more normalized across all types of movies, potentially contributing to what researchers call “mean world syndrome” – where heavy media consumption leads people to view the world as more dangerous and threatening than it actually is.

The findings align with previous research showing that gun violence in top movies has more than doubled since 1950, and more than tripled in PG-13 films since that rating was introduced in 1985. What makes this new study particularly noteworthy is its massive scale – examining dialogue from more than 166,000 films provides a much more comprehensive picture than earlier studies that looked at smaller samples.

Movie studios operate in an intensely competitive market where they must fight for audience attention. “Movies are trying to compete for the audience’s attention and research shows that violence is one of the elements that most effectively hooks audiences,” Fotouhi explains.

“The evidence suggests that it is highly unlikely we’ve reached a tipping point,” Bushman warns. Decades of research have demonstrated that exposure to media violence can influence aggressive behavior and mental health in both adults and children. This can manifest in various ways, from direct imitation of observed violent acts to a general desensitization toward violence and decreased empathy for others.

As content platforms continue to multiply and screen time increases, particularly among young people, these findings raise important questions about the cumulative impact of exposure to violent dialogue in entertainment media. The researchers emphasize that their results highlight the crucial need for promoting mindful consumption and media literacy, especially among vulnerable populations like children.

Source : https://studyfinds.org/movie-violence-dialogue-disturbing-trend/

Even small diet tweaks can lead to sustainable weight loss – here’s how

Woman stepping on scale (© Siam – stock.adobe.com)

It’s a well-known fact that to lose weight, you either need to eat less or move more. But how many calories do you really need to cut out of your diet each day to lose weight? It may be less than you think.

To determine how much energy (calories) your body requires, you need to calculate your total daily energy expenditure (TDEE). This comprises your basal metabolic rate (BMR) – the energy needed to sustain your body’s metabolic processes at rest – and your physical activity level. Many online calculators can help determine your daily calorie needs.

If you reduce your energy intake (or increase the amount you burn through exercise) by 500-1,000 calories per day, you’ll see a weekly weight loss of around one to two pounds (0.45-0.9kg).
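As a rough illustration of the arithmetic behind such calculators, here is a sketch using the Mifflin-St Jeor equation for BMR (a formula many online calculators use) and the common approximation that a cumulative deficit of about 3,500 calories equals one pound of fat. The activity factor and function names are illustrative assumptions, and the projection deliberately ignores the adaptive thermogenesis discussed below.

```python
# Sketch of the arithmetic behind online TDEE calculators (assumptions:
# Mifflin-St Jeor for BMR; ~3,500 kcal deficit per pound of fat).

def bmr_mifflin_st_jeor(weight_kg: float, height_cm: float,
                        age: int, male: bool) -> float:
    """Basal metabolic rate in kcal/day."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base + 5 if male else base - 161

def tdee(bmr: float, activity_factor: float = 1.375) -> float:
    """Total daily energy expenditure; 1.375 roughly means light activity."""
    return bmr * activity_factor

def weekly_loss_lb(daily_deficit_kcal: float) -> float:
    """Projected weekly weight loss, ignoring metabolic adaptation."""
    return daily_deficit_kcal * 7 / 3500
```

For example, a 500-calorie daily deficit projects to `weekly_loss_lb(500)` = 1.0 pound per week, while the article’s gentler 100-200 calorie deficits project to roughly 0.2-0.4 pounds per week.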

But studies show that even small calorie deficits (of 100-200 calories daily) can lead to long-term, sustainable weight-loss success. And although you might not lose as much weight in the short term by only decreasing calories slightly each day, these gradual reductions are more effective than drastic cuts as they tend to be easier to stick with.

Hormonal changes

When you decrease your calorie intake, the body’s BMR often decreases. This phenomenon is known as adaptive thermogenesis. This adaptation slows down weight loss so the body can conserve energy in response to what it perceives as starvation. This can lead to a weight-loss plateau – even when calorie intake remains reduced.

Caloric restriction can also lead to hormonal changes that influence metabolism and appetite. For instance, thyroid hormones, which regulate metabolism, can decrease – leading to a slower metabolic rate. Additionally, leptin levels drop, reducing satiety, increasing hunger and decreasing metabolic rate.

Ghrelin, known as the “hunger hormone,” also increases when caloric intake is reduced, signaling the brain to stimulate appetite and increase food intake. Higher ghrelin levels make it challenging to maintain a reduced-calorie diet, as the body constantly feels hungrier.

Insulin, which helps regulate blood sugar levels and fat storage, can improve in sensitivity when we reduce calorie intake. But sometimes, insulin levels decrease instead, affecting metabolism and leading to a reduction in daily energy expenditure. Cortisol, the stress hormone, can also spike – especially when we’re in a significant caloric deficit. This may break down muscle and promote fat retention, particularly around the abdomen.

Lastly, hormones such as peptide YY and cholecystokinin, which make us feel full when we’ve eaten, can decrease when we lower calorie intake. This may make us feel hungrier.

Fortunately, there are many things we can do to address these metabolic adaptations so we can continue losing weight.

Weight loss strategies

Maintaining muscle mass (either through resistance training or eating plenty of protein) is essential to counteract the physiological adaptations that slow weight loss down. This is because muscle burns more calories at rest compared to fat tissue – which may help mitigate decreased metabolic rate.

Gradual caloric restriction (reducing daily calories by only around 200-300 a day), focusing on nutrient-dense foods (particularly those high in protein and fibre), and eating regular meals can all also help to mitigate these hormonal challenges.

Source : https://studyfinds.org/small-diet-tweaks-sustainable-weight-loss/

 

A proven way to stay younger longer — and all it takes is an hour each week

(© New Africa – stock.adobe.com)

Could you find an hour a week to devote to slowing your biological aging? You’ll get other benefits as well – adding not just more years to your life but more life to your years. That hour can also create a sense of purpose, improve mental health, give you a psychological lift, boost your social connectedness, and let you know you’re making the world a better place. All you have to do is volunteer. If you can find a few hours a week, the benefits are even greater.

A study published in this month’s issue of Social Science and Medicine found that volunteering for as little as an hour a week is linked to slower biological aging.

Biologic age

Biologic age refers to how old a body’s cells and tissues are, and how quickly they are aging, compared with the body’s chronologic age. The most common way to assess biological age, called epigenetic testing, examines how your behaviors and environment change the expression of your DNA.

Why volunteering is associated with slower aging

Experts explain that volunteering’s significant effect on biologic aging is multifactorial, with physical, social, and psychological benefits.

Volunteering often includes physical activity, like walking. Social connections are vital; we’re programmed for connectedness. Social connections decrease stress and improve cognitive function. According to the study authors, volunteering can also create a sense of purpose, improve mental health, and buffer any loss of important roles, like spouse or parent, as we age.

Family Volunteering

When my son was six, we volunteered at a soup kitchen in a less-affluent part of Detroit. On the Saturday after Thanksgiving, he was right in the thick of making gallons of turkey soup and hundreds of cheese or peanut butter and jelly sandwiches. Finally, he grabbed his own PB&J and munched out with our guests. It’s one of my favorite memories.

Family volunteering (whatever “family” means to you) is a win for everyone. It strengthens families and communities. When family members unite for a worthy cause, their collective power is greater than just adding together the strengths of individuals.

Children will develop compassion and tolerance. They may acquire new skills. More importantly, volunteering provides models from which children learn to respect and serve others. They discover the gratitude that flows only from giving. Children who volunteer are more likely to volunteer as adults and, later on in life, create their own traditions with their children.

Parents get to spend more time with their kids, instilling important values with action; those values run deeper than words could ever reach. Include your kids in planning. You may discover what’s truly important to them.

Nonprofit agencies, understaffed and overstressed, can do little without volunteers. Virtually everyone can find a nonprofit that matches their passion.

Getting started

To decide if volunteering is right for your family, consider:

  • About what issues are you passionate?
  • What are your children’s ages?
  • Who would you like to help?
  • What does your family enjoy doing together?
  • How frequently can you volunteer?
  • What skills and talents can your family offer?
  • What do you want your family to learn from the experience?

There are innumerable causes where you can make a difference. About 3.5 million people a year will experience homelessness; about 40 percent are kids. Since 1989, the number of beds available in shelters has tripled. Collect toiletries. Give art and school supplies. Provide clothing and transportation.

Every day, 10% of Americans are hungry. Have a canned food drive. Make bag lunches for kids in a homeless shelter. Have a party – with an entrance fee of a can of food.

The elderly often need help the most. Adopt a grandparent. Deliver food – drive for Meals on Wheels. Look at photos and listen to stories. Give manicures and pedicures. Do seasonal yard work, rake leaves or shovel snow. Write letters. Play board games. Read books or newspapers. Bring your pet to visit. Write life stories. Provide transportation for medical appointments. Run errands. Make small home repairs.

I had elderly neighbors next door. When I cleared snow and ice (which was plentiful) from my car, I’d clear their car as well. Mrs. Neighbor watched through the living room window. Sometime later, she told me that she had a remote device to start and clear her car from inside her home! What can you do but laugh?

Source : https://studyfinds.org/volunteering-proven-way-stay-younger-longer/

‘Simple nasal swab’ could revolutionize childhood asthma treatment

(Credit: © Alena Stalmashonak | Dreamstime.com)

A novel diagnostic test using just a nasal swab could transform how doctors diagnose and treat childhood asthma. Researchers at the University of Pittsburgh have developed this non-invasive approach that, for the first time, allows physicians to precisely identify different subtypes of asthma in children without requiring invasive procedures.

Until now, determining the specific type of asthma a child has typically required bronchoscopy, an invasive procedure performed under general anesthesia to collect lung tissue samples. This limitation has forced doctors to rely on less accurate methods like blood tests and allergy screenings, potentially leading to suboptimal treatment choices.

“Because asthma is a highly variable disease with different endotypes, which are driven by different immune cells and respond differently to treatments, the first step toward better therapies is accurate diagnosis of endotype,” says senior author Dr. Juan Celedón, a professor of pediatrics at the University of Pittsburgh and chief of pulmonary medicine at UPMC Children’s Hospital of Pittsburgh, in a statement.

3 subtypes of asthma

The new nasal swab test analyzes the activity of eight specific genes associated with different types of immune responses in the airways. This genetic analysis reveals which of three distinct asthma subtypes, or endotypes, a patient has: T2-high (involving allergic inflammation), T17-high (showing a different type of inflammatory response), or low-low (exhibiting minimal inflammation of either type).

The research team validated their approach across three separate studies involving 459 young people with asthma, focusing particularly on Puerto Rican and African American youth, populations that experience disproportionately higher rates of asthma-related emergency room visits and complications. According to the researchers, Puerto Rican children have emergency department and urgent care visit rates of 23.5% for asthma, while Black children have rates of 26.6% — both significantly higher than the 12.1% rate among non-Hispanic white youth.

The findings, published in JAMA, challenge long-held assumptions about childhood asthma. While doctors have traditionally believed that most cases were T2-high, the nasal swab analysis revealed this type appears in only 23-29% of participants. Instead, T17-high asthma accounted for 35-47% of cases, while the low-low type represented 30-38% of participants.

“These tests allow us to presume whether a child has T2-high disease or not,” explained Celedón. “But they are not 100% accurate, and they cannot tell us whether a child has T17-high or low-low disease. There is no clinical marker for these two subtypes. This gap motivated us to develop better approaches to improve the accuracy of asthma endotype diagnosis.”

Precision medicine for patients

This breakthrough carries significant implications for treatment. Currently, powerful biological medications exist for T2-high asthma, but no available treatments specifically target T17-high or low-low types. The availability of this new diagnostic test could accelerate research into treatments for these previously understudied forms of asthma.

“We have better treatments for T2-high disease, in part, because better markers have propelled research on this endotype,” said Celedón. “But now that we have a simple nasal swab test to detect other endotypes, we can start to move the needle on developing biologics for T17-high and low-low disease.”

The test could also help researchers understand how asthma evolves throughout childhood and adolescence. Celedón noted that one of the “million-dollar questions in asthma” involves understanding why the condition affects children differently as they age.

“Before puberty, asthma is more common in boys, but the incidence of asthma goes up in females in adulthood. Is this related to endotype? Does endotype change over time or in response to treatments? We don’t know,” he says. “But now that we can easily measure endotype, we can start to answer these questions.”

Dr. Gustavo Matute-Bello, acting director of the Division of Lung Diseases at the National Heart, Lung, and Blood Institute, emphasizes the potential impact of this diagnostic advancement. “Having tools to test which biological pathways have a major role in asthma in children, especially those who have a disproportionate burden of disease, may help achieve our goal of improving asthma outcomes,” he says. “This research has the potential to pave the way for more personalized treatments, particularly in minority communities.”

Source : https://studyfinds.org/simple-nasal-swab-could-revolutionize-childhood-asthma-treatment/

Do You Believe in Life After Death? These Scientists Study It.

In an otherwise nondescript office in downtown Charlottesville, Va., a small leather chest sits atop a filing cabinet. Within it lies a combination lock, unopened for more than 50 years. The man who set it is dead.

On its own, the lock is unremarkable — the kind you might use at the gym. The code, a mnemonic of a six-letter word converted into numbers, was known only to the psychiatrist Dr. Ian Stevenson, who set it long before he died, and years before he retired as director of the Division of Perceptual Studies, or DOPS, a parapsychology research unit he founded in 1967 within the University of Virginia’s school of medicine.

Dr. Stevenson called this experiment the Combination Lock Test for Survival. He reasoned that if he could transmit the code to someone from the grave, it might help answer the questions that had consumed him in life: Is communication from the “beyond” possible? Can the personality survive bodily death? Or, simply: Is reincarnation real?

This last conundrum — the survival of consciousness after death — continues to be at the forefront of the division’s research. The team has logged hundreds of cases, from every continent except Antarctica, of children who claim to remember past lives. “And that’s only because we haven’t looked for cases there,” said Dr. Jim Tucker, who has been investigating claims of past lives for more than two decades. He recently retired after serving as director of DOPS since 2015.

It was an unexpected career path to begin with.

“As far as reincarnation itself goes, I never had any particular interest in it,” said Dr. Tucker, who set out to solely become a child psychiatrist and was, at one point, the head of U.Va.’s Child and Family Psychiatry Clinic. “Even when I was training, it never occurred to me that I’d end up doing this work.”

Now, at 64 years old, after traveling the world to record cases of possible past life recollections, and with books and papers of his own on the subject of past lives, he has left the position.

“There’s a level of stress in medicine, and in academics,” he reflected. “There are always things you should be doing, papers you should be writing, prescriptions you should be giving. I enjoyed my day to day work, both in the clinic and at DOPS, but you reach a point where you’re ready not to have so many responsibilities and demands.”

According to a job listing issued by the medical school, on top of their academic reputation, the ideal candidate to replace Dr. Tucker must have “a track record of rigorous investigation of extraordinary human experiences, such as the mind’s relationship to the body and the possibility of consciousness surviving physical death.”

None of the eight principal team members have the required academic status to undertake the role, making it necessary to find someone externally.

“I think there’s a feeling that it would be rejuvenating for the group to have an outside person come in,” said Dr. Jennifer Payne, vice-chair of research at the department of psychiatry, who leads the selection committee.

Scientists Who Have Strayed From the Usual Path

Dr. Tucker was running a busy practice when he first learned about DOPS. It was 1996 and a local newspaper, The Daily Progress in Charlottesville, had profiled Dr. Stevenson after he received funding to interview individuals about their near-death experiences. Entranced by the pioneering work, Dr. Tucker began volunteering at the division before joining as a permanent researcher.

Each of the division’s researchers has committed their career — and, to some extent, risked their professional reputation — to the study of the so-called paranormal. This includes near-death and out-of-body experiences, altered states of consciousness, and past lives research, which all come under the portmanteau of “parapsychology.” They are scientists who have strayed from the usual path.

DOPS is a curious institution. There are only a few other labs in the world undertaking similar lines of research — the Koestler Parapsychology Unit at the University of Edinburgh, for instance — with DOPS being by far the most prominent. The only other major parapsychology unit in the United States was Princeton’s Engineering Anomalies Research Laboratory, or PEAR, which focused on telekinesis and extrasensory perception. That unit was shuttered in 2007.

While it is technically part of U.Va., DOPS occupies four spacious would-be condominiums inside a residential building. It is notably distanced from the university’s leafy main campus, and at least a couple of miles from the medical school.

“Nobody knows we’re here,” said Dr. Bruce Greyson, 78, a former director of DOPS and a professor emeritus of psychiatry and neurobehavioral sciences at U.Va., who started working with Dr. Stevenson in the late 1970s. “Ian was very cautious about that, because he had faced a lot of prejudice,” Dr. Greyson said. “He kept a very low profile.”

Dr. Greyson received a lot of pushback before joining DOPS. He had worked at the University of Michigan for eight years early in his career, but his interest in near-death experiences began to ruffle feathers, much like it had for Dr. Stevenson.

“They told me, point blank, that I wouldn’t have a future there if I did near-death research, because you can’t measure that in a test tube,” he said. “Unless I could quantify it by a biological measure, they didn’t want to hear about it.” He left Michigan for the University of Connecticut, where he spent 11 years, and then found his way to DOPS.

The atmosphere within DOPS is one of studious calm. There are only a few signs of the team’s activities. In the basement laboratory one finds a copper-lined Faraday cage used to assess out-of-body experience subjects, and foam mannequin heads sporting electroencephalogram (EEG) caps. Upstairs, running the full length of the wall in the Ian Stevenson Memorial Library, which boasts over 5,000 books and papers pertaining to past lives research, is a glass display case containing a collection of knives, swords and mallets — weapons described by children who recalled a violent end in their previous life.

“It’s not the actual weapon, but the kind of weapon used,” explained Dr. Tucker. Each object is labeled with intricate, sometimes gory, detail. One display told the story of a young girl from Burma, Ma Myint Thein, who was born with deformities of her fingers and birthmarks across her back and neck. “According to villagers,” the label reads, “the man whose life she remembered being had been murdered, his fingers chopped off and his throat slashed by a sword.” It is accompanied by a photograph of the girl’s hands, her right missing two fingers.

That children who claim to remember past lives are most frequently found in South Asia, where reincarnation is a core tenet of many religious beliefs, has been used by critics to debunk the studies. After all, surely it’s all too easy to find corroborative evidence in places with a pre-existing belief in reincarnation.

The question of life after death has been an existential preoccupation for humans throughout time, however, and reincarnation is a central tenet of belief in many cultures. Buddhism, where there is thought to be a 49-day journey between death and rebirth; Hinduism, with its concept of samsara, the endless cycle; and Native American and West African nations, all share similar core concepts of the soul or spirit moving from one life to the next. Meanwhile, a 2023 Pew Research survey found that a quarter of Americans believe it is “definitely or probably true” that people who have died can be reincarnated.

When it comes to past life claims, the DOPS team works on cases that almost always have come directly from parents.

Common features in children who claim to have led a previous life include verbal precocity and mannerisms at odds with those of the rest of the family. Unexplained phobias or aversions are also thought to have been carried over from a past existence. In some cases, the remembrances come with extreme clarity: the names, professions and quirks of a different set of relatives, the particularities of the streets they used to live on, sometimes even obscure historical events — details the child couldn’t possibly have known about.

One of the most famous cases the team worked on was that of James Leininger, an American boy who remembered being a World War II fighter pilot shot down near Japan. The case drew a great deal of attention to DOPS, but also brought with it numerous detractors.

Ben Radford, the deputy editor of Skeptical Inquirer, a magazine dedicated to scientific inquiry, believes that wishful thinking and general death anxiety have fueled an increased interest in reincarnation, and finds flaws in the DOPS research methodology, which he often dissects in his blog. He said, “The fact is, no matter how sincere the person is, often recovered memories are false.”

‘The Evidence Is Not Flawless’

Remembered by many as a dignified man with a penchant for three-piece suits, Dr. Stevenson lived for his research. He almost never took time off. “I had to swing by the office once on New Year’s Eve and there was one car in the lot, and it was his,” Dr. Tucker recalled.

Born in 1918, Dr. Stevenson, who was Canadian and graduated from St. Andrews with a degree in history before studying biochemistry and psychiatry at McGill University, had served as chair of the department of psychiatry at U.Va. for 10 years until 1967.

By the early 1960s he had become disillusioned by conventional medicine. In an interview with The New York Times in 1999, he said that he had been drawn to studying past lives through his “discontent with other explanations of human personality. I wasn’t satisfied with psychoanalysis or behaviorism or, for that matter, neuroscience. Something seemed to be missing.”

And so he began recording potential cases of reincarnation, which he would come to call “cases of the reincarnation type,” or CORT. It was one of his initial CORT research papers, from a 1966 trip to India, that caught the attention of Chester F. Carlson, the inventor of the technology behind Xerox photocopying machines. It was Mr. Carlson’s generous financial assistance that enabled Dr. Stevenson to leave his role at the medical school and focus full-time on past lives research.

The dean of the medical school at the time, Kenneth Crispell, didn’t approve of this foray into the paranormal. He was happy to see Dr. Stevenson resign from his spot in the department of psychiatry, and, believing in academic freedom, agreed to the formation of a small research division. However, any hope Dr. Crispell had that Dr. Stevenson and his unorthodox ideas would disappear into the academic shadows was quickly dashed: Mr. Carlson died of a heart attack in 1968 and in his will he bequeathed $1 million to Dr. Stevenson’s endeavor.

While not all of the attention was positive in the division’s early years, some individuals in the science community were intrigued. “Either Dr. Stevenson is making a colossal mistake, or he will be known as the Galileo of the 20th century,” the psychiatrist Harold Lief wrote in a 1977 article for the Journal of Nervous and Mental Disease.

To this day, DOPS is still financed entirely by private donations. In October it was announced that the division had received the first installment of a $1 million estate gift from The Philip B. Rothenberg Legacy Fund, which will be used to finance early-career researchers. Other supporters have included the Bonner sisters, Priscilla Bonner-Woolfan and Margerie Bonner-Lowry — silent screen actresses of the 1920s, whose endowment continues to fund the DOPS directorship. Another unlikely supporter is the actor John Cleese, who first encountered the division at the Esalen Institute, a retreat and intentional community located in Big Sur, Calif.

“These people are behaving like good scientists,” Mr. Cleese said in a phone interview. “Good scientists are after the truth: they don’t just want to be right. I think it is absolutely astonishing and quite disgraceful, the way that orthodox contemporary, materialistic reductionist theory treats all the things — and there are so many of them — that they can’t begin to explain.”

In the early years of the department, Dr. Stevenson traveled the world extensively, recording more than 2,500 cases of children recalling past lives. In this pre-internet time, discovering so many similar accounts and trends served to strengthen his thesis. The findings from these excursions, collected in Dr. Stevenson’s neat handwriting, are stored by country in filing cabinets and are in the slow process of being digitized.

From this database, researchers have drawn findings they believe are interesting. The strongest cases, according to the DOPS researchers, have been found in children under the age of 10, and the majority of remembrances tend to occur between the ages of 2 and 6, after which they appear to fade. The median time between death and rebirth is about 16 months, a period the researchers see as a form of intermission. Very often, the child has memories that match up to the life of a deceased relative.

And yet for all of this meticulous work, Dr. Stevenson was aware of the limitations of past lives research. “The evidence is not flawless and it certainly does not compel such a belief,” he explained in a lecture at The University of Southwestern Louisiana (now the University of Louisiana at Lafayette) in 1989. “Even the best of it is open to alternative interpretations, and one can only censure those who say there is no evidence whatsoever.”

“Ian thought reincarnation was the best explanation, but he wasn’t positive,” said Dr. Greyson. “He thought a lot of the cases may be something else. It might be a kind of possession, it might even be delusion. There are lots of different possibilities. It may be clairvoyance, or picking up the information from some other sources that you’re not aware of.”

After spending more than half his life studying past lives, Dr. Stevenson retired from DOPS in 2002, handing the directorial baton to Dr. Greyson. Though he kept a watchful eye on proceedings from afar, offering guidance when solicited, he never set foot in the division again. He died of pneumonia five years later, at 88 years old.

‘Many of the Memories Are Difficult’

Each year DOPS receives more than 100 emails from parents regarding something their child has said. Reaching out to the division is often an attempt at clarity, but the researchers never promise answers. Their only promise is to take these claims seriously, “but as far as the case having enough to investigate, enough to potentially verify that it matches with a past life, those are very few,” said Dr. Tucker.

This summer, Dr. Tucker drove to the rural town of Amherst, Va., to visit a case of possible past life remembrance. He was joined by his colleagues Marieta Pehlivanova and Philip Cozzolino, who would be taking over his research in the new year.

Ms. Pehlivanova, 43, who specializes in near death experience and children who remember past lives, has been at DOPS for seven years and is launching a study into women who’ve had near death experiences during childbirth. When she tells people what she does, they find the subject matter both fascinating and disturbing. “We’ve had emails from people saying we’re doing the work of the devil,” she said.

Upon arrival at the family’s home, the team was shown into the kitchen. The child, a three-year-old and the youngest of four home-schooled siblings, peeked from behind her mother’s legs, looking up shyly. She wore a baggy Minnie Mouse shirt and went to perch between her grandparents on a banquette, watching everyone take their seats around the dining table.

“Let’s start from the very beginning,” Dr. Tucker said after the paperwork had been signed by Misty, the child’s 28-year-old mother. “It all began with the puzzle piece?”

A few months earlier, mother and child had been looking at a wooden puzzle of the United States, with each state represented by a cartoon of a person or object. Misty’s daughter pointed excitedly at the jagged piece representing Illinois, which had an abstract illustration of Abraham Lincoln.

“That’s Pom,” her daughter exclaimed. “He doesn’t have his hat on.”

This was indeed a drawing of Abraham Lincoln without his hat, but more important, there was no name under the image indicating who he was. Following weeks of endless talk about “Pom” bleeding out after being hurt and being carried to a too-small bed — which the family had started to think could be related to Lincoln’s assassination — they began to consider that their daughter had been present for the historical moment. This was despite the family having no prior belief in reincarnation, nor any particular interest in Lincoln.

On the drive to Amherst, Dr. Tucker confessed his hesitation in taking on this particular case — or any case connected to a famous individual. “If you say your child was Babe Ruth, for example, there would be lots of information online,” he said. “When we get those cases, usually it’s that the parents are into it. Still, it’s all a little strange to be coming out of a three-year-old’s mouth. Now if she had said her daughter was Lincoln, I probably wouldn’t have made the trip.”

Lately, Dr. Tucker has been giving the children picture tests. “Where we think we know the person they’re talking about, we’ll show them a picture from that life, and then show them another picture — a dummy picture — from somewhere else, to see if they can pick out the right one,” he said. “You have to have a few pictures for it to mean anything. I had one where the kid remembered dying in Vietnam. I showed him eight pairs of pictures and a couple of them he didn’t make any choice on, but the others he was six out of six. So, you know, that makes you think. But this girl is so young that I don’t think we can do that.”

On this occasion, the little girl decided not to engage, and pretended to be asleep. Then she actually fell asleep.

“She’ll come around to it soon,” Misty assured the researchers. As the minutes ticked by, Dr. Tucker decided the picture test would be best left for another time. The child was still asleep when the researchers returned to their car.

After the first meeting, the only course of action is to do nothing and wait to see if the memories develop into something more concrete. Since past-lives research focuses on spontaneous recollections, the team is largely unconvinced by the concept of hypnotic regression. “People will be hypnotized and told to go back to their past lives and all that, which we’re quite skeptical about,” said Dr. Tucker. “You can also make up a lot of stuff, even if you’re talking about memories from this life.”

DOPS rarely takes accounts from adults into consideration. “They’re not our primary interest, partly because, as an adult, you’ve been exposed to a lot,” Dr. Tucker explained. “You may think that you don’t know things from history, but you may well have been exposed to it. But also, the phenomenon typically happens in young kids. It’s as if they carry the memories with them, and they are typically very young when they start talking.”

There is also the concern that parents are looking for attention. “There are people who say, ‘Well, the parents are just doing it to have their 15 minutes of fame or whatever,’” said Dr. Tucker. “But most of them have no interest in anyone knowing about it, you know, because it’s kind of embarrassing, or they worry people will think their kid is weird.”

For a child, recalling a past life can be trying. “They might be missing people, or have a sense of unfinished business,” he said. After a silence, he continued, his voice contemplative. “Frankly it’s probably better for the child that they don’t have these memories, because so many of the memories are difficult. The majority of kids who remember how they died perished in some kind of violent, unnatural death.”

Source : https://dnyuz.com/2025/01/03/do-you-believe-in-life-after-death-these-scientists-study-it/

Why your couch could be killing you: Sedentary lifestyle linked to 19 chronic conditions

(Credit: © Tracy King | Dreamstime.com)

In an era where many of us spend our days hunched over computers or scrolling through phones, mounting evidence suggests our sedentary lifestyles may be quietly damaging our health. A new study from the University of Iowa reveals that physically inactive individuals face significantly higher risks for up to 19 different chronic health conditions, ranging from obesity and diabetes to depression and heart problems.

Medical researchers have long known that regular physical activity helps prevent disease and promotes longevity. However, this comprehensive study, which analyzed electronic medical records from over 40,000 patients at a major Midwestern hospital system, provides some of the most detailed evidence yet about just how extensively physical inactivity can impact overall health.

Leading the study, now published in the journal Preventing Chronic Disease, was a team of researchers from various departments at the University of Iowa, including pharmacy practice, family medicine, and human physiology. Their mission was to examine whether screening patients for physical inactivity during routine medical visits could help identify those at higher risk for developing chronic diseases.

The simple 30-second exercise survey

When patients at the University of Iowa Health Care Medical Center arrived for their annual wellness visits, they received a tablet during the standard check-in process. Researchers implemented the Exercise Vital Sign (EVS), which asks two straightforward questions: how many days per week they engaged in moderate to vigorous exercise (like a brisk walk) and for how many minutes per session. Based on their responses, patients were categorized into three groups: inactive (0 minutes per week), insufficiently active (1-149 minutes per week), or active (150+ minutes per week).
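The grouping rule above is simple enough to sketch in code. The function name and the weekly-minutes computation below are illustrative assumptions, not details taken from the study, but the thresholds match the three groups it describes:

```python
def classify_evs(days_per_week: int, minutes_per_session: int) -> str:
    """Categorize a patient from the two Exercise Vital Sign answers.

    Groups follow the study's cutoffs: inactive (0 minutes/week),
    insufficiently active (1-149 minutes/week), active (150+ minutes/week).
    """
    weekly_minutes = days_per_week * minutes_per_session
    if weekly_minutes == 0:
        return "inactive"
    if weekly_minutes < 150:
        return "insufficiently active"
    return "active"
```

For example, three 30-minute sessions a week (90 minutes total) would land a patient in the "insufficiently active" group, while five would cross the 150-minute threshold into "active."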

“This two-question survey typically takes fewer than 30 seconds for a patient to complete, so it doesn’t interfere with their visit. But it can tell us a whole lot about that patient’s overall health,” says Lucas Carr, associate professor in the Department of Health and Human Physiology and the study’s corresponding author, in a statement.

Study authors discovered clear patterns when they analyzed responses from 7,261 screened patients. About 60% met the recommended guidelines by exercising moderately for 150 or more minutes per week. However, 36% fell short of these guidelines, exercising less than 150 minutes weekly, and 4% reported no physical activity whatsoever. When the team examined the health records of these groups, they found remarkable differences in health outcomes.

Consequences of a sedentary lifestyle

The data painted a compelling picture of how physical activity influences overall health. Active patients showed significantly lower rates of depression (15% compared to 26% in inactive patients), obesity (12% versus 21%), and hypertension (20% versus 35%). Their cardiovascular health markers were also notably better, including lower resting pulse rates and more favorable cholesterol profiles.

Perhaps most revealing was the relationship between activity levels and chronic disease burden. Patients reporting no physical activity carried a median of 2.16 chronic conditions. This number dropped to 1.49 conditions among insufficiently active patients and fell further to just 1.17 conditions among those meeting exercise guidelines. This clear progression suggests that even small increases in physical activity might help reduce disease risk.

To provide context for their findings, the researchers compared the screened group against 33,445 unscreened patients from other areas of the hospital. This comparison revealed an important pattern: patients who completed the survey tended to be younger and healthier than the general patient population. As Carr notes, “We believe this finding is a result of those patients who take the time to come in for annual wellness exams also are taking more time to engage in healthy behaviors, such as being physically active.”

Based on the study’s findings, physical inactivity was associated with higher rates of:

  1. Obesity
  2. Liver disease
  3. Psychoses
  4. Chronic lung disease
  5. Neurological seizures
  6. Coagulopathy (blood clotting disorders)
  7. Depression
  8. Weight loss issues
  9. Uncontrolled hypertension (high blood pressure)
  10. Controlled hypertension
  11. Uncontrolled diabetes
  12. Deficiency anemia
  13. Neurological disorders affecting movement
  14. Peripheral vascular disease
  15. Autoimmune disease
  16. Drug abuse
  17. Hypothyroidism
  18. Congestive heart failure
  19. Valvular disease (heart valve problems)

Need for better exercise counseling

The findings highlight a crucial gap in healthcare delivery that needs addressing. “In our healthcare environment, there’s no easy pathway for a doctor to be reimbursed for helping patients become more physically active,” Carr explains. “And so, for these patients, many of whom report insufficient activity, we need options to easily connect them with supportive services like exercise prescriptions and/or community health specialists.”

However, there’s encouraging news about the financial feasibility of exercise counseling. A related study by Carr’s team found that when healthcare providers billed for exercise counseling services, insurance companies reimbursed these claims nearly 95% of the time. This suggests that expanding physical activity screening and counseling services could be both beneficial for patients and financially viable for healthcare providers.

Source : https://studyfinds.org/couch-potato-sedentary-lifestyle-chronic-diseases/

Science confirms: ‘Know-it-alls’ typically know less than they think

(Credit: © Robert Byron | Dreamstime.com)

The next time you find yourself in a heated argument, absolutely certain of your position, consider this: researchers have discovered that the more confident you feel about your stance, the more likely you are to be working with incomplete information. It’s a psychological quirk that might explain everything from family disagreements to international conflicts.

We’ve all been there: stuck in traffic, grumbling about the “idiot” driving too slowly in front of us or the “maniac” who just zoomed past. But what if that slow driver is carefully transporting a wedding cake, or the speeding car is rushing someone to the hospital? The fascinating new study published in PLOS ONE suggests that these snap judgments stem from what researchers call “the illusion of information adequacy” — our tendency to believe we have enough information to make sound decisions, even when we’re missing crucial details.

“We found that, in general, people don’t stop to think whether there might be more information that would help them make a more informed decision,” explains study co-author Angus Fletcher, a professor of English at The Ohio State University and member of the university’s Project Narrative, in a statement. “If you give people a few pieces of information that seems to line up, most will say ‘that sounds about right’ and go with that.”

In today’s polarized world, where debates rage over everything from vaccines to climate change, understanding why people maintain opposing viewpoints despite access to the same information has never been more critical. This research, conducted by Fletcher, Hunter Gehlbach of Johns Hopkins University, and Carly Robinson of Stanford University, reveals that we rarely pause to consider what information we might be missing before making judgments.

The researchers conducted an experiment with 1,261 American participants recruited through the online platform Prolific. The study centered around a hypothetical scenario about a school facing a critical decision: whether to merge with another school due to a drying aquifer threatening their water supply.

The participants were divided into three groups. One group received complete information about the situation, including arguments both for and against the merger. The other two groups only received partial information – either pro-merger or pro-separation arguments. The remarkable finding? Those who received partial information felt just as competent to make decisions as those who had the full picture.

“Those with only half the information were actually more confident in their decision to merge or remain separate than those who had the complete story,” Fletcher notes. “They were quite sure that their decision was the right one, even though they didn’t have all the information.”

Social media users might recognize this pattern in their own behavior: confidently sharing or commenting on articles after reading only headlines or snippets, feeling fully informed despite missing crucial context. It’s a bit like trying to review a movie after watching only the first half, yet feeling qualified to give it a definitive rating.

The study revealed an interesting finding regarding the influence of new information. When participants who initially received only one side of the story were later presented with opposing arguments, about 55% maintained their original position on the merger decision. That rate is comparable to that of the control group, which had received all information from the start.

Fletcher notes that this openness to new information might not apply to deeply entrenched ideological issues, where people may either distrust new information or try to reframe it to fit their existing beliefs. “But most interpersonal conflicts aren’t about ideology,” he points out. “They are just misunderstandings in the course of daily life.”

Beyond personal relationships, this finding has profound implications for how we navigate complex social and political issues. When people engage in debates about controversial topics, each side might feel fully informed while missing critical pieces of the puzzle. It’s like two people arguing about a painting while looking at it from different angles: each sees only their perspective but assumes they’re seeing the whole picture.

Fletcher, who studies how people are influenced by the power of stories, emphasizes the importance of seeking complete information before taking a stand. “Your first move when you disagree with someone should be to think, ‘Is there something that I’m missing that would help me see their perspective and understand their position better?’ That’s the way to fight this illusion of information adequacy.”

Source : https://studyfinds.org/science-confirms-know-it-alls-typically-know-less-than-they-think/

Ants smarter than humans? Watch as tiny insects outperform grown adults in solving puzzle

Longhorn Crazy Ants (Paratrechina longicornis) swarming and attacking a much larger ant. They are harmless to humans and found in the world’s tropical regions. (Credit: © Brett Hondow | Dreamstime.com)

Scientists have long been fascinated by collective intelligence, the idea that groups can solve problems better than individuals. Now, an interesting new study reveals some unexpected findings about group problem-solving abilities across species, specifically comparing how ants and humans tackle complex spatial challenges.

Researchers at the Weizmann Institute of Science designed an ingenious experiment pitting groups of longhorn crazy ants against groups of humans in solving the same geometric puzzle at different scales. The puzzle, known as a “piano-movers’ problem,” required moving a T-shaped load through a series of tight spaces and around corners. Imagine trying to maneuver a couch through a narrow doorway, but with more mathematical precision involved.

What makes this study, published in PNAS, particularly fascinating is that both ants and humans are among the few species known to cooperatively transport large objects in nature. In fact, of the approximately 15,000 ant species on Earth, only about 1% engage in cooperative transport of heavy loads, making this shared behavior between humans and ants especially remarkable.

The species chosen for this evolutionary competition was Paratrechina longicornis, commonly known as “crazy ants” due to their erratic movement patterns. These black ants, measuring just 3 millimeters in length, are widespread globally but particularly prevalent along Israel’s coast and southern regions. Their name derives from their distinctive long antennae, though their frenetic behavior earned them their more colorful nickname.

Recruiting participants for the study presented different challenges across species. While human volunteers readily joined when asked, likely motivated by the competitive aspect, the ants required a bit of deception. Researchers had to trick them into thinking the T-shaped load was food that needed to be transported to their nest.

In experiments spanning three years and involving over 1,250 human participants and multiple ant colonies, researchers tested different group sizes tackling scaled versions of the same puzzle. For the ants, they used both individual ants and small groups of about 7 ants, as well as larger groups averaging 80 ants. Human participants were divided into single solvers and groups of 6-9 or 16-26 people.

Perhaps most intriguingly, the researchers found that while larger groups of ants performed significantly better than smaller groups or individuals, the opposite was true for humans when their communication was restricted. When human groups were not allowed to speak or use gestures and had to wear masks and sunglasses, their performance actually deteriorated compared to individuals working alone.

This counterintuitive finding speaks to fundamental differences in how ants and humans approach collective problem-solving. Individual ants cannot grasp the global nature of the puzzle, but their collective motion translates into emergent cognitive abilities; in other words, they develop new problem-solving skills simply by working together. The large ant groups showed impressive persistence and coordination, maintaining their direction even after colliding with walls and efficiently scanning their environment until finding openings.

The study highlights a crucial distinction between ant and human societies. “An ant colony is actually a family. All the ants in the nest are sisters, and they have common interests. It’s a tightly knit society in which cooperation greatly outweighs competition,” explains study co-author Prof. Ofer Feinerman in a statement. “That’s why an ant colony is sometimes referred to as a super-organism, sort of a living body composed of multiple ‘cells’ that cooperate with one another.”

This familial structure appears to enhance the ants’ collective problem-solving abilities. Their findings validated this “super-organism” vision, demonstrating that ants acting as a group are indeed smarter, with the whole being greater than the sum of its parts. In contrast, human groups showed no such enhancement of cognitive abilities, challenging popular notions about the “wisdom of crowds” in the social media age.

Source : https://studyfinds.org/ants-smarter-than-humans/

5 consumer myths to ditch in 2025

(© Ivan Kruk – stock.adobe.com)

Over the past year, books like Less by Patrick Grant and documentaries like Buy Now: The Shopping Conspiracy have encouraged consumers to rethink their internalized beliefs that more consumption equals better living.

As we enter a new year, it’s the perfect time to reflect on and leave behind some consumer myths that are detrimental to ourselves and to the planet.

Myth 1: Buying more is better for consumers and society

Retail therapy is a common practice for coping with negative emotions and might seem easier than actual therapy. However, research has consistently shown that materialistic consumption leads to lower individual and societal well-being. In fact, emerging studies are pointing out that low-consumption lifestyles might bring greater personal satisfaction and higher environmental benefits.

Some might argue that buying more stimulates the economy, creates jobs and supports public services through taxes. However, the positive impact on local communities is often overstated due to globalized supply chains and corporate tax avoidance.

To ensure that your spending really does support your community and does not contribute to economic inequalities, it is helpful to learn more about the story behind the labels and the businesses you support with your money.

Myth 2: New is always better

While certain cutting-edge tech may indeed offer improvements over older versions, for most items new might not always be better. As Grant argues in his book Less, product quality has declined over the past few decades as manufacturers prioritize affordability and engage in planned obsolescence practices. That is, they purposely design products that will break after a certain number of uses to keep the cycle of consumption going and hit their sales targets.

But older products were often built to last, so choosing secondhand or repairing older items can save you money and actually secure you better-quality products.

Myth 3: Being sustainable is expensive

It’s true that some brands have used the term “sustainable” to justify premium prices. However, adopting sustainable consumer practices can often be free or even bring in some extra cash if you sell or donate the things you no longer need.

Instead of “buying new,” consider swapping unused items with others by hosting a “swapping party” for things like toys or clothes with your friends, family, or neighbours. Decluttering your home could free up space, bring you some joy, and could also help you to connect with others by exchanging items.

Myth 4: Buying experiences is better than buying material things

Previous research has found that spending money on experiences brings more happiness primarily because these purchases are better at bringing people together. But material purchases that help you to connect with others, such as a board game, could bring as much joy as an experience.

My research has shown that when spending money, the key is to understand whether the purchase will help you to connect with others, learn new things, or help your community. It’s not about whether we spend our money on material items or experiences.

It is also worth remembering that there are plenty of activities that can help you achieve those goals with no spending required. So, instead of instinctively reaching for our wallets, perhaps in the new year we could think about whether a non-consumer activity like a winter hike or some volunteering could bring us closer to intrinsic goals like personal growth or developing relationships. These goals have been consistently linked to better well-being.

Source : https://studyfinds.org/5-consumer-myths-to-ditch-in-2025/

The rise of the intention economy: How AI could turn your thoughts into currency

(Image by Shutterstock AI Generator)

Imagine scrolling through your social media feed when your AI assistant chimes in: “I notice you’ve been feeling down lately. Should we book that beach vacation you’ve been thinking about?” The eerie part isn’t that it knows you’re sad — it’s that it predicted your desire for a beach vacation before you consciously formed the thought yourself. Welcome to what some experts believe will be known as the “intention economy,” a way of life for consumers in the not-too-distant future.

A new paper by researchers at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence warns that large language models (LLMs) like ChatGPT aren’t just changing how we interact with technology, they’re laying the groundwork for a new marketplace where our intentions could become commodities to be bought and sold.

“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve,” says co-author Dr. Yaqub Chaudhary, a visiting scholar at the Centre, in a statement.

For decades, tech companies have profited from what’s known as the attention economy, where our eyeballs and clicks are the currency. Social media platforms and websites compete for our limited attention spans, serving up endless streams of content and ads. But according to researchers Chaudhary and Dr. Jonnie Penn, we’re witnessing early signs of something potentially more invasive: an economic system that could treat our motivations and plans as valuable data to be captured and traded.

What makes this potential new economy particularly concerning is its intimate nature. “What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions,” Chaudhary explains.

Early signs of this emerging marketplace are already visible. Apple’s new “App Intents” developer framework for Siri includes protocols to “predict actions someone might take in future” and suggest apps based on these predictions. OpenAI has openly called for “data that expresses human intention… across any language, topic, and format.” Meanwhile, Meta has been researching “Intentonomy,” developing datasets for understanding human intent.

Consider Meta’s AI system CICERO, which achieved human-level performance in the strategy game Diplomacy by predicting players’ intentions and engaging in persuasive dialogue. While currently limited to gaming, this technology demonstrates the potential for AI systems to understand and influence human intentions through natural conversation.

Major tech companies are positioning themselves for this potential future. Microsoft has partnered with OpenAI in what the researchers describe as “the largest infrastructure buildout that humanity has ever seen,” investing over $50 billion annually from 2024 onward. The researchers suggest that future AI assistants could have unprecedented access to psychological and behavioral data, often collected through casual conversation.

The researchers warn that unless regulated, this developing intention economy “will treat your motivations as the new currency” in what amounts to “a gold rush for those who target, steer, and sell human intentions.” This isn’t just about selling products; it could have implications for democracy itself, potentially affecting everything from consumer choices to voting behavior.

Source: https://studyfinds.org/rise-of-intention-economy-ai-assistant/

Farewell, 2024: You were just a so-so year for most Americans

(ID 327257589 | 2024 © Penchan Pumila | Dreamstime.com)

Americans may be divided on many issues, but when it comes to rating 2024, they’ve reached a surprising consensus: it was decidedly average. In a nationwide survey of 2,000 people, the year earned a 6.1 out of 10—though beneath this seemingly tepid score lies a heartening discovery about what truly matters to Americans: personal connections topped the list of memorable moments.

The comprehensive study, conducted by Talker Research, surveyed 2,000 Americans about their experiences throughout the year. Perhaps most touching was the discovery that the most memorable moment for many Americans wasn’t a grand achievement or milestone, but rather the simple joy of reconnecting with old friends and family members, with 17% of respondents citing this as their standout experience.

Overall, a notable 30% of Americans rated their year as exceptional, scoring it eight or higher on the ten-point scale.

Personal development emerged as a dominant theme in 2024, with an overwhelming 67% of Americans reporting some form of growth over the past year. This growth manifested in various aspects of their lives: more than half (52%) saw improvements in their personal relationships, while 38% experienced positive changes in their mental and emotional well-being. Physical health gains were reported by 29% of respondents, and a quarter celebrated advances in their financial situation.

The year proved transformative for many Americans in unexpected ways. Tied for second place among memorable experiences were three distinct life changes: creative and personal growth, welcoming a new pet, and mastering a new skill or hobby, each cited by 12% of respondents. Close behind, 11% found meaning in volunteering or contributing to causes they care about.

The survey revealed that 17% of respondents rated the year a seven out of ten, matched by another 17% giving it a five, while 16% scored it an eight. At the extremes, 8% of Americans had a fantastic year worthy of a perfect ten, while 5% rated it a disappointing one out of ten.

The survey highlighted how Americans found joy and achievement in various pursuits, from visiting new places (10%) to overcoming major health challenges (9%). Some celebrated financial victories, with 8% paying off significant debts and 7% reaching important savings goals. Others embraced adventure, with 6% embarking on dream vacations or relocating to new homes.

Source: https://studyfinds.org/americans-rate-2024-six-out-of-ten/

Unlock the Power of Manifestation: How to Achieve What You Truly Desire


Manifestation is the art of turning your dreams into reality by aligning your thoughts, beliefs, and actions toward achieving them. It’s a combination of positive thinking and purposeful action. Here’s a comprehensive guide on how to manifest your aspirations with clarity and confidence.

1. Understand Manifestation and How It Works

Manifestation is rooted in the idea that your thoughts and energy can influence your reality. It’s driven by two powerful principles:

The Power of Positive Thinking

Your mindset shapes your outcomes. Positive thinking helps you:
• Overcome fears and doubts.
• Channel your energy toward your goals.
• Take actions that bring you closer to success.

When you believe in your ability to achieve something, you’re more likely to focus your efforts and persist through challenges.

The Law of Attraction

The law of attraction states that what you focus on is what you attract. By immersing yourself in your interests and goals:
• You gain knowledge and expertise in the area.
• You build networks with like-minded individuals.
• Opportunities naturally come your way, making success more attainable.

2. Key Techniques to Manifest Your Goals

Practice Visualization

Visualization is a powerful tool to make your dreams feel real. Spend a few minutes daily imagining your goals and the steps you’ll take to achieve them.
• Morning visualization can motivate you for the day.
• Evening visualization allows you to reflect on your progress.

Create a Vision Board

A vision board is a physical or digital collage of images and notes representing your goals.
• For instance, if your dream is a perfect home, include pictures of the decor, layout, or neighborhood.
• Seeing your vision board daily reinforces your commitment to your goals.

Maintain a Future Box

A future box (or manifestation box) holds items that represent your goals.
• Collect objects or notes related to your dreams, such as travel accessories for a future vacation or a letter to your future self.
• This tangible collection keeps your aspirations alive and close.

Use the 3-6-9 Method

Write down or repeat your goal:
• 3 times in the morning,
• 6 times in the afternoon,
• 9 times in the evening.

This repetition focuses your thoughts and reinforces your intent.

Try the 777 Method

Write your goal seven times in the morning and evening for seven days.
• This method is particularly effective for short-term objectives.
• It keeps your mind engaged with your aspirations consistently.

Make a 10-10-10 Worksheet

List out:
• 10 things you desire.
• 10 things you’re grateful for.
• 10 things you enjoy doing.

This worksheet offers a holistic view of your goals, strengths, and passions, helping you stay positive and self-aware.

Keep a Journal

Document your dreams, fears, and progress in a journal.
• Journaling helps identify obstacles and find solutions.
• Regular updates keep your journey organized and inspiring.

3. Strategies for Effective Manifestation

Be Clear About What You Want

Clarity is essential. Define your goals in detail to create a focused path toward achieving them.

Make Positive Affirmations

Speak positively about your goals. Examples include:
• “I am capable and deserving of this promotion.”
• “I am grateful for my growing success and the abundance it brings.”
• “I will live in my dream home within five years.”

Take Action Toward Your Goal

Manifestation requires action. Dedicate time to your goals every day or week.
• Example: If you want a new job, apply to at least one opening weekly.

Step Out of Your Comfort Zone

Growth often involves discomfort. Start small, like sharing your work with friends, then gradually take on bigger challenges.

Build Your Confidence

Confidence is key to success. Begin your day with affirmations such as, “I am strong, capable, and ready to succeed.”

Practice Gratitude

Gratitude fosters positivity. Appreciate what you have while working toward your dreams.

4. The Bottom Line: Manifest Your Best Life

Manifestation isn’t magic; it’s a combination of belief, focus, and consistent action. By practicing these techniques and strategies, you can align your thoughts and energy with your goals and transform them into reality.

Your journey begins with a single thought. Dream big, believe in yourself, and take the steps needed to turn your vision into life-changing success.

 

Scientists crack the code of how gold reaches Earth’s surface

(Credit: Aleksandrkozak/Shutterstock)

In a breakthrough that reads like alchemy, scientists at the University of Geneva have solved a long-standing mystery about how gold travels through the Earth’s crust to form valuable deposits of this precious metal. Their discovery reveals that a particular form of sulfur acts as nature’s gold courier, challenging previous theories about how precious metal deposits form.

The journey of gold from deep within the Earth to mineable deposits has long puzzled geologists. Now, researchers have identified that bisulphide, a specific form of sulfur, plays a crucial role in transporting gold through superhot fluids released by magma – the molten rock that eventually becomes the volcanic formations we see at the surface.

“Due to the drop in pressure, magmas rising towards the Earth’s surface saturate a water-rich fluid, which is then released as magmatic fluid bubbles, leaving a silicate melt behind,” explains Stefan Farsang, lead author of the study published in Nature Geoscience.

The team’s approach allowed the scientists to observe something previous researchers couldn’t: the exact chemical form of sulfur present in these magmatic fluids. Using laser analysis techniques, they discovered that bisulphide, along with hydrogen sulfide and sulfur dioxide, are the main forms of sulfur present at these extreme temperatures.

The findings overturn a 2011 study that had suggested different sulfur compounds were responsible for gold transport.

“By carefully choosing our laser wavelengths, we also showed that in previous studies, the amount of sulfur radicals in geologic fluids was severely overestimated and that the results of the 2011 study were in fact based on a measurement artifact,” says Farsang, effectively settling a decade-long debate in the geological community.

Since much of the world’s gold and copper comes from deposits formed by these magma-derived fluids, understanding exactly how they form could also aid in future mineral exploration efforts.

Think of it as understanding nature’s own delivery system: just as a postal service needs specific vehicles and routes to deliver packages, gold needs specific chemical compounds and conditions to move through Earth’s crust. By identifying bisulphide as the primary “delivery vehicle,” scientists have mapped out one of nature’s most valuable transportation networks.

The study emerged from the complex interaction between tectonic plates – the massive sections of Earth’s crust that slowly move against each other. When one plate slides beneath another, it generates magma rich in volatile elements like water, sulphur, and chlorine. As this magma rises toward the surface, it releases fluids that carry dissolved metals with them – a process that ultimately leads to the formation of the gold deposits that humans have prized throughout history.

This new understanding of gold’s journey through the Earth not only helps explain how existing deposits formed but could also guide future exploration efforts, potentially making gold mining more efficient and targeted.

Source : https://studyfinds.org/how-gold-reaches-earths-surface/

Single cigarette takes 20 minutes off life expectancy, study finds

The study found having a single cigarette reduces life expectancy by 17 minutes in men and 22 minutes in women. Photograph: Yui Mok/PA

Smokers are being urged to kick the habit for 2025 after a fresh assessment of the harms of cigarettes found they shorten life expectancy even more than doctors thought.

Researchers at University College London found that on average a single cigarette takes about 20 minutes off a person’s life, meaning that a typical pack of 20 cigarettes can shorten a person’s life by nearly seven hours.

According to the analysis, if a smoker on 10 cigarettes a day quits on 1 January, they could prevent the loss of a full day of life by 8 January. They could save a full week of life by staying smoke-free until 20 February, and a whole month by 5 August. By the end of the year, they could have avoided losing 50 days of life, the assessment found.
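These milestones follow from simple arithmetic. A short sketch (assuming, as in the analysis, 20 minutes of life lost per cigarette and a 10-a-day smoker; the quit date is illustrative):

```python
from datetime import date, timedelta

MIN_PER_CIG = 20       # average minutes of life lost per cigarette (study estimate)
CIGS_PER_DAY = 10      # the example smoker in the analysis
saved_per_day = MIN_PER_CIG * CIGS_PER_DAY   # 200 minutes of life saved per smoke-free day

def milestone(minutes_of_life, quit_day=date(2025, 1, 1)):
    """Date by which a quitter has 'banked' the given amount of life."""
    days_needed = minutes_of_life / saved_per_day
    return quit_day + timedelta(days=round(days_needed))

print(milestone(24 * 60))       # one day of life banked  -> 8 January
print(milestone(7 * 24 * 60))   # one week                -> 20 February
print(milestone(30 * 24 * 60))  # one month               -> 5 August
```

At 200 minutes saved per smoke-free day, banking a full day of life (1,440 minutes) takes about a week, which matches the 8 January figure.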

“People generally know that smoking is harmful but tend to underestimate just how much,” said Dr Sarah Jackson, a principal research fellow at UCL’s alcohol and tobacco research group. “On average, smokers who don’t quit lose around a decade of life. That’s 10 years of precious time, life moments, and milestones with loved ones.”

Smoking is one of the world’s leading preventable causes of disease and death, killing up to two-thirds of long-term users. It causes about 80,000 deaths a year in the UK and a quarter of all cancer deaths in England.

The study, commissioned by the Department of Health and Social Care, draws on the latest data from the British Doctors Study, which began in 1951 as one of the world’s first large studies into the effects of smoking, and the Million Women Study, which has tracked women’s health since 1996.

While an earlier assessment in the BMJ in 2000 found that on average a single cigarette reduced life expectancy by about 11 minutes, the latest analysis, published in the journal Addiction, nearly doubles the figure to 20 minutes – 17 minutes for men and 22 minutes for women.

“Some people might think they don’t mind missing out on a few years of life, given that old age is often marked by chronic illness or disability. But smoking doesn’t cut short the unhealthy period at the end of life,” Jackson told the Guardian. “It primarily eats into the relatively healthy years in midlife, bringing forward the onset of ill-health. This means a 60-year-old smoker will typically have the health profile of a 70-year-old non-smoker.”

Although some smokers live long lives, others develop smoking-related diseases and even die from them in their 40s. The variation is driven by differences in smoking habits such as the type of cigarette used, the number of puffs taken and how deeply smokers inhale. People also differ in how susceptible they are to the toxic substances in cigarette smoke.

The authors stress that smokers must quit completely to get the full benefits to health and life expectancy. Previous work has shown there is no safe level of smoking: the risk of heart disease and stroke is only about 50% lower for people who smoke one cigarette a day compared with those who smoke 20 a day. “Stopping smoking at every age is beneficial, but the sooner smokers get off this escalator of death the longer and healthier they can expect their lives to be,” they write.

Source : https://www.theguardian.com/society/2024/dec/30/single-cigarette-takes-20-minutes-off-life-expectancy-study

Tea bags release shocking number of plastic particles into your drink

Photo by Charlotte May from Pexels

In a concerning discovery for tea lovers everywhere, scientists have found that a simple cup of tea might come with an unwanted extra ingredient: billions of microscopic plastic particles. A new study reveals that common tea bags can release substantial amounts of micro and nanoplastics (MNPLs) into your brew during the steeping process.

The research, conducted by a team of scientists from Spain, Egypt, and Germany, and published in Chemosphere, examined three different types of commercial tea bags: those made from nylon-6, polypropylene, and cellulose. What they found was startling. A single tea bag can release anywhere from 8 million to 1.2 billion nanoplastic particles into your cup, with polypropylene bags being the worst offenders.

These plastic particles are incredibly tiny – most are smaller than a human hair’s width – and can be readily absorbed by the cells in our digestive system. The researchers discovered that different types of intestinal cells interact with these particles in varying ways, with some cells taking up more particles than others. Of particular concern was the finding that these nanoplastics can interact with cell nuclei, where our genetic material is stored.

“We have managed to innovatively characterize these pollutants with a set of cutting-edge techniques, which is a very important tool to advance research on their possible impacts on human health,” says Universitat Autònoma de Barcelona researcher Alba Garcia in a media release.

The study’s findings add to growing concerns about our daily exposure to microplastics through food and beverages. While plastic tea bags have become increasingly popular due to their durability and convenience, this research suggests we might be paying an unexpected health price for this modern convenience.

When examining the tea bags under powerful microscopes, the researchers found various surface irregularities, including scales, spheres, and irregular particles. These imperfections, which can appear during the manufacturing process, may contribute to the release of plastic particles during steeping.

The study raises particular concerns about how these particles interact with our digestive system. The researchers tested three different types of human intestinal cells, including ones that produce protective mucus similar to our gut lining. Interestingly, cells that produced more mucus tended to accumulate more plastic particles, suggesting that our body’s natural defensive barriers might actually trap these unwanted materials.

While the immediate health implications of consuming these particles remain unclear, the research highlights an important source of plastic exposure that many people might not be aware of. With tea being one of the world’s most popular beverages, the cumulative exposure to these particles could be significant for regular tea drinkers.

“As the use of plastic in food packaging continues to increase, it is vital to address MNPLs contamination to ensure food safety and protect public health,” the researchers conclude.

Source : https://studyfinds.org/tea-bags-plastic-particles/

150 years under the sea? Whale lifespans are much longer than we thought

Animals with long lifespans tend to reproduce extremely slowly. Els Vermeulen

Southern right whales have lifespans that reach well past 100 years, and 10% may live past 130 years, according to our new research published in the journal Science Advances. Some of these whales may live to 150. This lifespan is almost double the 70-80 years they are conventionally believed to live.

North Atlantic right whales were also thought to have a maximum lifespan of about 70 years. We found, however, that this critically endangered species’ current average lifespan is only 22 years, and they rarely live past 50.

These two species are very closely related – only 25 years ago they were considered to be one species – so we’d expect them to have similarly long lifespans. We attribute the stark difference in longevity in North Atlantic right whales to human-caused mortality, mostly from entanglements in fishing gear and ship strikes.

We made these new age estimates using photo identification of individual female whales over several decades. Individual whales can be recognized year after year from photographs. When they die, they stop being photographically “resighted” and disappear. Using these photos, we developed what scientists call “survivorship curves” by estimating the probability whales would disappear from the photographic record as they aged. From these survivorship curves, we could estimate maximum potential lifespans.
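The survivorship-curve idea can be illustrated with a toy example. Assuming, purely for illustration, that every disappearance from the photo record is a death (the published analysis is more careful, modeling the probability that a living whale is simply missed by photographers), a discrete survivorship curve is just the fraction of individuals still alive past each age. The ages below are hypothetical:

```python
def survivorship_curve(ages_at_disappearance, max_age=None):
    """Fraction of individuals surviving past each age, from last-sighting ages.

    Toy version: treats every disappearance as a death, which overstates
    mortality relative to the resighting-probability models used in the study.
    """
    n = len(ages_at_disappearance)
    if max_age is None:
        max_age = max(ages_at_disappearance)
    return {age: sum(a > age for a in ages_at_disappearance) / n
            for age in range(0, max_age + 1, 10)}

# Hypothetical last-sighting ages for a small cohort of female whales
ages = [35, 62, 74, 88, 95, 101, 110, 118, 127, 134]
curve = survivorship_curve(ages)
print(curve[100])  # fraction of this toy cohort surviving past age 100 -> 0.5
```

Extrapolating the tail of such a curve is what lets researchers estimate a maximum potential lifespan even when no single individual has been followed from birth to death.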

Twenty-five years ago, scientists working with Indigenous whale hunters in the Arctic showed that bowhead whales could live up to and even over 200 years. Their evidence included finding stone harpoon points that hadn’t been used since the mid-1800s embedded in the blubber of whales recently killed by traditional whalers. Analysis of proteins from the eyes of hunted whales provided further evidence of their long lifespan. Like right whales, before that analysis, researchers thought bowhead whales lived to about 80 years, and that humans were the mammals that lived the longest.

In the years following that report, scientists tried to figure out what was unique about bowhead whales that allowed them to live so long. But our new analysis of the longevity of two close relatives of bowheads shows that other whale species also have potentially extremely long lives.

Why it matters

Understanding how long wild animals live has major implications for how to best protect them. Animals that have very long lifespans usually reproduce extremely slowly and can go many years between births. Baleen whales’ life history – particularly the age when females start breeding and the interval between calves – is strongly influenced by their potential lifespan. Conservation and management strategies that do not plan accordingly will have a higher chance of failure. This is especially important given the expected impacts of climate disruption.

What still isn’t known

There are many other large whales, including blue, fin, sei, humpback, gray and sperm whales. Like bowhead and right whales, these were also almost wiped out by whaling. Scientists currently assume they live about 80 or 90 years, but that’s what we believed about bowhead and right whales until data proved they can live much longer.

How long can these other whale species live? Industrial whaling, which ended only in the 1960s, removed old whales from the world’s whale populations. Though many whale populations are recovering in number, there hasn’t been enough time for whales born after the end of industrial whaling to become old.

It’s possible, even likely, that many other whale species will also prove to have long lifespans.

What other research is being done

Other research finds the loss of older individuals from populations is a phenomenon occurring across most large animal species. It diminishes the reproductive potential of many species. Researchers also argue this represents a real loss of culture and wisdom in animals that degrades their potential for survival in the face of changing conditions.

Source : https://studyfinds.org/whale-lifespans-much-longer/

Evolution: What Will Humans Look Like in 50,000 Years?

Many people hold the view that evolution in modern humans has come to a halt. But while modern medicine and technology have changed the environment in which evolution operates, many scientists agree that the process is still occurring.

This evolution may be less about survival and more about reproductive success in our current environment. Changes in gene frequencies because of factors like cultural preferences, geographic migration and even random events continue to shape the human genome.

But what might humans look like in 50,000 years’ time? Such a question is clearly speculative in nature. Nevertheless, experts that Newsweek spoke to gave their predictions for how evolution might affect the appearance of our species in the future.

“Evolution is part deterministic—there are rules for how systems evolve—and part random—mutations and environmental changes are primarily unpredictable,” Thomas Mailund, an associate professor of bioinformatics at Aarhus University in Denmark, told Newsweek.

“In some rare cases, we can observe evolution in action, but over a time span of tens or hundreds of years, it is mostly guesswork. We can make somewhat qualified guesses, but the predictive power is low, so think of it as thought experiments more than anything else.”

Something we can say with certainty is that 50,000 years is more than enough time for several evolutionary changes to occur, albeit on a relatively minor scale, according to Mailund.

“Truly dramatic changes require a longer time, of course. We are not going to grow wings or gills in less than millions of years, and 50,000 years ago, we were anatomically modern humans.”

Jason Hodgson, an anthropologist and evolutionary geneticist at Anglia Ruskin University in the United Kingdom, told Newsweek that 50,000 years is an “extremely long time” in the course of human evolution, representing about 1,667 human generations given a 30-year generation time.

A 3D illustration of a facial recognition system. What will humans look like in 50,000 years? Design Cells/iStock/Getty Images Plus

“Within the past 50,000 years most of the variation that is seen among human populations evolved,” Hodgson said. “This includes all of the skin color variation seen across the globe, all of the stature variation, all of the hair color and texture variation, etc. In fact, most of the variation we are so familiar with evolved within the past 10,000 years.”

In the more immediate future, Hodgson predicts that global populations will become more homogenous and less structured when it comes to genetics and phenotype—an individual’s observable traits.

“Currently the phenotypes that we associate with geographic regions—for example, dark skin in Africans, light skin in Scandinavians, short stature in African pygmy hunter-gatherers, tall stature in Dutch, etc.—is maintained by assortative mating. People are much more likely to choose mates who are similar to themselves,” he said.

“Part of this is due to the human history of migration and culture which means people tend to live by and be exposed to people who are more similar to themselves with respect to global variation. And some of this is due to preference for similarity within local populations for reasons that we still do not really understand.

“However, admixture—mating between distantly related groups—is increasing, and this will result in less structure and a more homogenous global population. As an analogy, if you stick a bunch of poodles, rottweilers, chihuahuas and St. Bernards on an island and let them breed randomly, within a few generations everything would be a medium sized brown dog.”

When distinct populations mix, so do their traits. Some traits are determined by a few gene variants. But many traits result from a combination of many different genes, and these will blend together to some degree, according to Mailund.

“So there will be some changes, not caused by selection, but because previously isolated groups are now mixing,” he said.

It is still possible though that despite increasing homogeneity, not everyone would evolve in the same direction, according to Nick Longrich, a paleontologist and evolutionary biologist at the University of Bath in the United Kingdom.

“You could imagine that in distinct subpopulations you could get people evolving in different ways,” he said.

If there are strong, consistent pressures toward certain characteristics, our species could experience “very rapid evolution” in a matter of thousands—or possibly even hundreds—of years, Longrich said.

While we do not know what the selective pressures will be like going forward, Longrich said he expects a number of developments, extrapolating from past trends and current conditions.

For example, we might get taller, because of sexual selection. And we might also become more attractive on average, since sexual selection plays more of a role in modern society than natural selection.

“Attractiveness is relative, so maybe we’d look like movie stars but if everyone looked that way, it wouldn’t be exceptional,” he said.

As time passes and technology evolves, it is also possible that humans will begin to direct our own evolution in a targeted fashion through gene editing tools such as CRISPR—potentially aided by artificial intelligence.

“Applying genetic techniques to humans that alter phenotypes is highly controversial and ethically fraught. Indeed, 20th century eugenicists thought they could improve the human species by only allowing the ‘right’ people to breed,” Hodgson said.

Source : https://www.newsweek.com/evolution-what-will-humans-look-like-50000-years-2006894

The human brain processes thoughts 5,000,000 times slower than the average internet connection

The brain may not be as powerful as previously thought, according to the research (Picture: Getty Images)

Humans process thoughts millions of times more slowly than an average internet connection transfers data, scientists have found.

The body’s sensory systems, including the eyes, ears, skin, and nose, gather data about our environments at a rate of a billion bits per second.

But the brain processes these signals at only about 10 bits per second, millions of times slower than the inputs, according to study co-author Markus Meister.

A bit is the unit of information in computing. A typical Wi-Fi connection processes about 50 million bits per second.

Despite the brain having over 85 billion neurons, researchers found that humans think at around 10 bits per second – a number they called ‘extremely low’.
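The headline figure is simple arithmetic: dividing the cited connection and sensory rates by the brain’s estimated 10 bits per second of conscious thought gives the ratios in question.

```python
wifi_bps = 50_000_000        # typical Wi-Fi throughput cited in the article, bits/s
senses_bps = 1_000_000_000   # sensory input rate cited in the study, bits/s
thought_bps = 10             # estimated rate of conscious thought, bits/s

print(wifi_bps // thought_bps)    # 5,000,000 — the headline's "5 million times slower"
print(senses_bps // thought_bps)  # 100,000,000 — the brain's sensory-to-thought compression
```

In other words, the brain discards all but roughly one bit in every hundred million of raw sensory input before it reaches conscious processing.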

Writing in the scientific journal Neuron, research co-author Markus Meister said: ‘Every moment, we are extracting just 10 bits from the trillion that our senses are taking in and using those 10 to perceive the world around us and make decisions.

‘This raises a paradox: What is the brain doing to filter all this information?’

Individual nerve cells in the brain are capable of transmitting over 10 bits per second.

However, the new findings suggest they don’t help process thoughts at such high speeds.

This makes humans relatively slow thinkers, who are unable to process many thoughts in parallel, the research suggests.

This rules out scenarios such as a chess player envisioning an entire tree of future moves at once; instead, people can explore only one possible sequence at a time rather than several in parallel.

The discovery of this ‘speed limit’ paradox in the brain warrants further neuroscience research, scientists say.

They speculated that this speed limit likely emerged in the first animals with a nervous system.

These creatures likely used their brains primarily for navigation to move toward food and away from predators.

Since human brains evolved from these, it could be that we can only follow one ‘path’ of thought at a time, according to researchers.

Source : https://metro.co.uk/2024/12/27/human-brain-processes-thoughts-5-000-000-times-slower-average-internet-connection-22258645/

Young and restless: 37% of Gen Z skipping the gym, going straight to Ozempic

Overweight woman applying medicine injection (© Mauricio – stock.adobe.com)

CORONA DEL MAR, Calif. — Is your New Year’s resolution to lose some weight? A new poll finds many people may actually achieve their goals in 2025 — with a little help from their pharmacist. More than a quarter of Americans are planning to turn to GLP-1 medications like Ozempic and Wegovy to reach their 2025 weight loss goals.

According to researchers with Tebra, who surveyed over 1,000 Americans in November 2024, there’s now a growing acceptance of pharmaceutical interventions for weight management, particularly among younger people.

Specifically, Gen Z is skipping the gym and going straight to the pharmacy, with 37% planning to add these medications to their wellness strategy in the coming year. Women are leading the charge, with 30% intending to use GLP-1 drugs to reach their weight loss goals, compared to 20% of men. On average, women are setting more ambitious weight loss targets, aiming to shed 23 pounds in 2025, while men are looking to lose 19 pounds.

Despite the growing enthusiasm for weight loss shortcuts, the path to accessing these medications remains complicated. Nearly eight in 10 people believe GLP-1 weight loss medications are out of reach for the average person due to their skyrocketing cost. In fact, 64% of those interested in using these medications cite high costs as their main concern, followed by worries about potential side-effects (59%).

For those who have already taken the plunge, the results appear to justify the costs. An overwhelming 86% of current GLP-1 users report that the health risks are worth the results they’re seeing. This satisfaction may explain why 66% of Americans now believe these medications are more effective than traditional weight loss routes like diet and exercise.

Baby boomers show the strongest confidence in these drugs’ effectiveness, with 72% believing they outperform traditional methods, followed by Gen X at 70%, millennials at 64%, and Gen Z at 58%. The gender gap is even more pronounced, with 75% of women believing in the superior effectiveness of GLP-1 medications compared to 53% of men.

Despite the growing trust in popular weight loss drugs, nearly one in four current users are taking these medications without a doctor’s oversight, raising questions about safety and proper usage. This statistic becomes particularly alarming when you consider that 41% of Americans are uncertain about the long-term effectiveness of these drugs, and 39% worry about developing an addiction to them.

The timing of this shift toward pharmaceutical weight loss solutions may not be coincidental. The survey reveals that nearly half (49%) of Americans have previously abandoned their New Year’s resolution wellness goals, with 31% giving up as early as February. This history of frustration with traditional approaches might explain the growing openness to medical shortcuts for weight loss.

Source : https://studyfinds.org/gen-z-ozempic/

 

The effects of ‘brain rot’: How junk content is damaging our minds

Recent research has found that Internet use and abuse is associated with a decrease in gray matter in the prefrontal regions of the brain.
Photographer, Basak Gurbuz Derman (Getty Images)

“Brain rot” was named the Oxford Word of the Year for 2024 after a public vote involving more than 37,000 people. Oxford University Press defines the concept as “the supposed deterioration of a person’s mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging.”

According to Oxford’s language experts, the term reflects growing concerns about “the impact of consuming excessive amounts of low-quality online content, especially on social media.” The term increased in usage frequency by 230% between 2023 and 2024.

But brain rot is not just a linguistic quirk. Over the past decade, scientific studies have shown that consuming excessive amounts of junk content — including sensationalist news, conspiracy theories and vacuous entertainment — can profoundly affect our brains. In other words, “rot” may not be that big of an exaggeration when it comes to describing the impact of low-quality online content.

Research from prestigious institutions such as Harvard Medical School, Oxford University, and King’s College London — cited by The Guardian — reveals that social media consumption can reduce grey matter, shorten attention spans, weaken memory, and distort core cognitive functions.

A 2023 study highlighted these effects, showing how internet addiction causes structural changes in the brain that influence behavior and cognitive abilities. Michoel Moshel, a researcher at Macquarie University and co-author of the study, explains that compulsive content consumption — popularly known as doomscrolling — “takes advantage of our brain’s natural tendency to seek out new things, especially when it comes to potentially harmful or alarming information, a trait that once helped us survive.”

Moshel explains that features like “infinite scrolling,” which are designed to keep users glued to their screens, can trap people — especially young individuals — in a cycle of content consumption for hours. “This can significantly impair attention and executive functions by overwhelming our focus and altering the way we perceive and respond to the world,” says the researcher.

Eduardo Fernández Jiménez, a clinical psychologist at Hospital La Paz in Madrid, explains that the brain activates different neural networks to manage various types of attention. He notes that excessive use of smartphones and the internet is causing issues with sustained attention, which “allows you to concentrate on the same task for a more or less extended period of time.” He adds: “It is the one that is linked to academic learning processes.”

The problem, says the researcher, is that social media users are constantly exposed to rapidly changing and variable stimuli — such as Instagram notifications, WhatsApp messages, or news alerts — that have addictive potential. This means users are constantly switching their focus, which undermines their ability to concentrate effectively.

The first warning came with email

Experts have been sounding the alarm about this issue since the turn of the century, when email became a common tool. In 2005, The Guardian ran the headline: “Emails ‘pose threat to IQ’.” The article reported that a team of scientists at the University of London investigated the impact of the constant influx of information on the brain. After conducting 80 clinical trials, they found that participants who used email and cellphones daily experienced an average IQ drop of 10 points. The researchers concluded that this constant demand for attention had a more detrimental effect than cannabis use.

This was before the rise of tweets, Instagram reels, TikTok challenges, and push notifications. The current situation, however, is even more concerning. Recent research has found that excessive internet use is linked to a decrease in grey matter in the prefrontal regions of the brain — areas responsible for problem-solving, emotional regulation, memory, and impulse control.

The research conducted by Moshel and his colleagues supports these findings. Their latest study, which reviewed 27 neuroimaging studies, revealed that excessive internet use is associated with a reduction in the volume of grey matter in brain regions involved in reward processing, impulse control, and decision-making. “These changes reflect patterns observed in substance addictions,” says Moshel, comparing them to the effects of methamphetamines and alcohol.

That’s not all. The research also found that “these neuroanatomical changes in adolescents coincide with disruptions in processes such as identity formation and social cognition — critical aspects of development during this stage.” This creates a kind of feedback loop, where the most vulnerable individuals are often the most affected. According to a study published in Nature in November, people with poorer mental health are more likely to engage with junk content, which further exacerbates their symptoms.

Source : https://english.elpais.com/technology/2024-12-26/the-effects-of-brain-rot-how-junk-content-is-damaging-our-minds.html

Which infectious disease is most likely to be the biggest emerging problem in 2025?

(Credit: Melnikov Dmitriy/Shutterstock)

COVID emerged suddenly, spread rapidly and killed millions of people around the world. Since then, I think it’s fair to say that most people have been nervous about the emergence of the next big infectious disease – be that a virus, bacterium, fungus or parasite.

With COVID in retreat (thanks to highly effective vaccines), the three infectious diseases causing public health officials the greatest concern are malaria (a parasite), HIV (a virus) and tuberculosis (a bacterium). Between them, they kill around 2 million people each year.

And then there are the watchlists of priority pathogens – especially those that have become resistant to the drugs usually used to treat them, such as antibiotics and antivirals.

Scientists must also constantly scan the horizon for the next potential problem. While this could come in any form of pathogen, certain groups are more likely than others to cause swift outbreaks, and that includes influenza viruses.

One influenza virus is causing great concern right now and is teetering on the edge of being a serious problem in 2025. This is influenza A subtype H5N1, sometimes referred to as “bird flu.” This virus is widespread in both wild and domestic birds, such as poultry. Recently, it has also been infecting dairy cattle in several U.S. states and has been found in horses in Mongolia.

When influenza cases start increasing in animals such as birds, there is always a worry that the virus could jump to humans. Indeed, bird flu can infect humans: there have been 61 cases in the U.S. this year already, mostly resulting from farm workers coming into contact with infected cattle and from people drinking raw milk.

Compared with only two cases in the Americas over the previous two years, this is quite a large increase. Coupled with a 30% mortality rate from human infections, this is quickly pushing bird flu up the list of public health officials’ priorities.

Luckily, H5N1 bird flu doesn’t seem to transmit from person to person, which greatly reduces its likelihood of causing a pandemic in humans. Influenza viruses have to attach to molecular structures called sialic acid receptors on the outside of cells in order to get inside and start replicating.

Flu viruses that are highly adapted to humans recognise these sialic acid receptors very well, making it easy for them to get inside our cells, which contributes to their spread between humans. Bird flu, on the other hand, is highly adapted to bird sialic acid receptors and has some mismatches when “binding” (attaching) to human ones. So, in its current form, H5N1 can’t easily spread in humans.

However, a recent study showed that a single mutation in the flu genome could make H5N1 adept at spreading from human to human, which could jump-start a pandemic.

If this strain of bird flu makes that switch and can start transmitting between humans, governments must act quickly to control the spread. Centers for disease control around the world have drawn up pandemic preparedness plans for bird flu and other diseases that are on the horizon.

For example, the UK has bought 5 million doses of H5 vaccine that can protect against bird flu, in preparation for that risk in 2025.

Even if it never gains the ability to spread between humans, bird flu is likely to affect animal health even more in 2025. This not only has large animal welfare implications but also has the potential to disrupt food supplies and cause wider economic damage.

Source : https://studyfinds.org/which-infectious-disease-is-most-likely-to-be-biggest-emerging-problem-in-2025/

 

Why human civilization may be on the brink of a ‘planetary phase shift’

(Credit: © Aleksandr Zamuruev | Dreamstime.com)

Systems theorist suggests the ‘next giant leap in evolution’ is nearing, but authoritarian politics could get in the way
Picture a caterpillar transforming into a butterfly. At a certain point, the creature enters a critical phase where its old form breaks down before emerging as something entirely new. According to a thought-provoking paper by renowned systems theorist Dr. Nafeez Ahmed, human civilization may be approaching a similar transformative moment, or what researchers call a “planetary phase shift.” And while the potential for positive transformation is enormous, Ahmed warns that rising authoritarianism could derail this evolutionary leap.

Ahmed, founding director of the System Shift Lab, presents compelling evidence in the journal Foresight that we’re living through an unprecedented period of change. Multiple global crises — from climate change to economic instability to technological disruption — aren’t just separate problems, but symptoms of an entire civilization undergoing metamorphosis.

“An amazing new possibility space is emerging, where humanity could provide itself superabundant energy, transport, food and knowledge without hurting the earth,” Ahmed says in a statement. “This could be the next giant leap in human evolution.”

The paper synthesizes research across natural and social sciences to develop a new theory of how civilizations rise and fall. It introduces the concept of “adaptive cycles,” a pattern observed in everything from forest ecosystems to ancient civilizations. These cycles move through four phases: rapid growth, conservation (stability), release (creative destruction), and reorganization. Think of it like the seasons: spring growth, summer abundance, autumn release, and winter renewal.

According to Ahmed, industrial civilization is now entering the “release” phase, where old structures begin breaking down. This explains why we’re seeing simultaneous crises across multiple systems. The fossil fuel economy is faltering, evidenced by a global decrease in Energy Return on Investment (EROI) for oil, gas, and coal. Meanwhile, renewable energy technologies are experiencing exponentially improving EROI rates.

But here’s where it gets interesting: these breakdowns aren’t necessarily catastrophic. They’re creating space for radical new possibilities. The study points to major technological innovations expected between the 2030s and 2060s, including clean energy, cellular agriculture, electric vehicles, artificial intelligence, and 3D printing. When combined, these technologies could enable what the researcher calls “networked superabundance” — a world where clean energy, transportation, food, and knowledge become universally accessible at near-zero cost while protecting Earth’s systems.

“This planetary renewable energy system will potentially enable citizens everywhere to produce clean energy ‘superabundance’ at near-zero marginal costs for most times of the year. This huge energy surplus – as much as ten times what we produce today – could power a global ‘circular economy’ system in which materials are rigorously recycled, with the system overall requiring 300 times less materials by weight than the fossil fuel system,” Ahmed writes. “[C]ost and performance improvements in autonomous driving technology could enable a new model called transport-as-a-service, leading private car ownership to collapse by about 90% – replaced by fleets of privately or publicly-owned autonomous taxis and buses up to ten times cheaper than transport today – as early as the 2030s.”

However, Ahmed emphasizes that technology alone won’t determine our fate. The key challenge is whether we can evolve our “operating system” — our social, economic, and cultural structures — to harness these capabilities for the common good. There’s a growing gulf between the old “industrial operating system” and emerging new systems that are inherently distributed and decentralized. This mismatch is driving major political and cultural disruptions globally.

Source : https://studyfinds.org/human-civilization-planetary-phase-shift/

 

Are we moral blank slates at birth? New study offers intriguing clues

(Photo by Ana Tablas on Unsplash)

What does a baby know about right and wrong? A foundational finding in moral psychology suggested that even infants have a moral sense, preferring “helpers” over “hinderers” before uttering their first word. Now, nearly 20 years later, a study that tried to replicate these findings calls this result into question.

In the original study, Kiley Hamlin and her colleagues showed a puppet show to six- and ten-month-old babies. During the show, the babies would see a character — which was really just a shape with googly eyes — struggling to reach the top of a hill.

Next, a new character would either help the struggling individual reach the top (acting as a “helper”) or push the character back down to the bottom of the hill (acting as a “hinderer”).

By gauging babies’ behavior — specifically, watching how their eyes moved during the show and whether they preferred to hold a specific character after the show ended — it seemed that the infants had basic moral preferences. Indeed, in the first study, 88% of the ten-month-olds – and 100% of the six-month-olds – chose to reach for the helper.

But psychology – and developmental psychology in particular – is no stranger to replicability concerns (where it is difficult or impossible to reproduce the results of a scientific study). After all, the original study sampled only a few dozen infants.

This isn’t the fault of the researchers; it’s just really hard to collect data from babies. But what if it were possible to run the same study again – with, say, hundreds or even thousands of babies? Would researchers find the same result?
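The difference sample size makes here can be sketched with a quick back-of-the-envelope calculation. The numbers below are illustrative assumptions (16 stands in for a small single-lab sample; 567 is the replication’s final sample): at chance level, a normal-approximation 95% confidence interval for a preference rate is several times wider with a few dozen babies than with several hundred.

```python
import math

def ci_half_width(p_hat, n, z=1.96):
    """Half-width of a normal-approximation 95% confidence interval for a proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Illustrative sample sizes: a small single-lab study vs. a multi-lab replication.
for n in (16, 567):
    half = ci_half_width(0.5, n)  # p = 0.5 is the chance level babies are tested against
    print(f"n = {n:3d}: 95% CI is 50% +/- {100 * half:.1f} percentage points")
```

With n = 16 the interval spans roughly 25 percentage points either side of chance, so even a strong observed preference is statistically fragile; at n = 567 it shrinks to about 4 points, which is why a large multi-lab null result carries weight.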

This is the chief aim of ManyBabies, a consortium of developmental psychologists spread around the world. By combining resources across individual research labs, ManyBabies can robustly test findings in developmental science, like Hamlin’s original “helper-hinderer” effect. And as of last month, the results are in.

With a final sample of 567 babies, tested in 37 research labs across five continents, babies did not show evidence of an early-emerging moral sense. Across the ages tested, babies showed no preference for the helpful character.

Blank slate?

John Locke, an English philosopher, argued that the human mind is a “tabula rasa” or “blank slate.” Everything that we, as humans, know comes from our experiences in the world. So should people take the most recent ManyBabies result as evidence of this? My answer, however underwhelming, is “perhaps.”

This is not the first attempted replication of the helper-hinderer effect (nor is it the first “failure to replicate”). In fact, there have been a number of successful replications. It can be hard to know what underlies differences in results. For example, a previous “failure” seemed to come from the characters’ “googly eyes” not being oriented the right way.

The ManyBabies experiment also made an important change in how the “show” was presented to infants. Rather than a puppet show performed live for baby participants, researchers presented a video with digital versions of the characters. This approach has its strengths – for example, it ensures that exactly the same presentation occurs across every trial, in every lab. But it could also shift how babies engage with the show and its characters.

Source : https://studyfinds.org/are-we-moral-blank-slates-at-birth-new-study-offers-intriguing-clues/

Make it personal: Customized gifts trigger this unique psychological response in recipients

Man and woman opening Christmas gifts (© luckybusiness – stock.adobe.com)

When Nike launched its customization platform NikeID, few could have predicted it would reveal profound insights about human psychology. Now, research spanning four countries shows that personalized products trigger a fascinating emotional phenomenon called “vicarious pride.” That is, recipients of customized gifts experience the same pride their friends felt while creating them.

The study, published in the journal Psychology & Marketing, explores the psychological dynamics at play when someone receives a personalized gift.

“Gift-giving is an age-old tradition, but in today’s world, personalization has become a powerful way to make gifts stand out,” explains Dr. Diletta Acuti, a marketing expert at the University of Bath School of Management, in a statement.

When someone receives a customized gift, such as a chocolate bar with personally selected flavors or a leather journal with their name inscribed, they don’t just appreciate the thought behind it.

“You don’t just appreciate the care and intention they put into crafting that gift; you feel them,” Dr. Acuti explains.

This emotional mirroring stems from a psychological concept called simulation theory, where people mentally recreate others’ experiences and emotions. It’s similar to how sports fans feel their team’s victories and defeats as if they were on the field themselves, or how parents beam with pride at their children’s achievements. When it comes to customized gifts, recipients essentially piggyback on the gift-giver’s sense of creative accomplishment.

Through four carefully designed studies, the researchers examined this phenomenon from different angles. In their first experiment with 74 participants, they studied how people responded to customized clothing gifts. To measure appreciation objectively, recipients were asked to indicate which items, if any, they would change – a novel approach to gauging satisfaction. Those who received customized gifts wanted to make fewer changes to their presents, suggesting higher appreciation.

The second study took a different approach, showing 134 participants videos of two different gift-selection processes: one showing the customization of a T-shirt, and another showing standard gift selection through website browsing. Even when controlling for the time spent selecting the gift, customized presents consistently generated more appreciation.

In the third and fourth studies, conducted online using a mug and wristwatch as gifts, the researchers confirmed that customization not only increased appreciation but also enhanced recipients’ self-esteem. This suggests that receiving a personalized gift makes people feel more valued and special.

Interestingly, the research revealed that the time and effort spent on customization didn’t significantly impact the recipient’s appreciation. Whether the giver spent considerable time or just a few minutes personalizing the gift, recipients experienced similar levels of vicarious pride. This finding challenges common assumptions about the relationship between time invested and gift appreciation.

The study also uncovered an important caveat: relationship anxiety can diminish these positive effects. When recipients feel insecure about their relationship with the gift-giver, the benefits of customization – including vicarious pride and enhanced self-esteem – may not materialize.

For businesses, these insights suggest new opportunities in the growing customized gift market, which is projected to reach $13 billion by 2027 according to Technavio. “Using ‘made by’ signals – such as including the giver’s name, a short message about the process or a visual representation of the customization – can make things even more impactful,” suggests Dr. Acuti. “These small additions reinforce the emotional connection between the giver and the recipient.”

The research also has implications for sustainability, as the study found that recipients tend to take better care of gifts they value more. This suggests that personalization might contribute to longer product lifespans and reduced waste.

“When choosing a gift, personalization can be a game-changer. But it’s not just about selecting a customizable option: you also need to communicate that effort to your recipient. Sharing why you chose elements of the gift or the thought that went into it will make the recipient appreciate it even more. Indeed, this additional effort helps them to connect with the pride you felt in your choices, making the gift even more meaningful,” Dr. Acuti advises.

Perhaps the true magic of customized gifts isn’t in the personalization itself, but in their ability to create invisible bridges between people – emotional connections forged through shared pride and mutual recognition. In a world increasingly mediated by screens and algorithms, these moments of genuine human connection might be the most valuable gift of all.

Source : https://studyfinds.org/customized-gifts-psychological/

A growth mindset protects mental health during hard times

(© Татьяна Макарова – stock.adobe.com)

When the world turned upside down during the COVID-19 pandemic, some people seemed to weather the storm better than others. Though many struggled with depression and loneliness during lockdowns, others maintained their mental well-being and even thrived. What made the difference? According to new research, one key factor may be something called a “growth mindset” – the belief that our abilities and attributes can change and develop over time.

This fascinating study, conducted by researchers at the University of California, Riverside and the University of the Pacific, followed 454 adults ages 19 to 89 over two years of the pandemic, from June 2020 to September 2022. Their findings suggest that people who believe their capabilities are malleable rather than fixed were better equipped to handle the psychological challenges of the pandemic.

Growth mindset represents a fundamental belief about human potential – that we can develop our abilities through effort, good strategies, and input from others. During the pandemic, this mindset appeared to help people view adversity as opportunities for adaptation and learning.

Looking at adults from diverse backgrounds in Southern California, the researchers examined how growth mindset related to three key aspects of mental health during the pandemic: depression levels, overall well-being, and how well people adjusted their daily routines to accommodate physical distancing requirements.

The results, published in PLOS Mental Health, were striking. People with stronger growth mindsets reported lower levels of depression and higher levels of well-being, even after accounting for various demographic factors like age, income, and education level. They were also more likely to successfully adapt their daily routines to pandemic restrictions.

The study included a unique group of older adults who had participated in a special learning intervention before the pandemic. These individuals had spent three months learning multiple new skills – from painting to using iPads to speaking Spanish. Not only did this group show increased growth mindset after their learning experience, but they also demonstrated better mental health outcomes during the pandemic compared to their peers who hadn’t participated in the intervention.

This finding suggests that actively engaging in learning new skills might help build mental resilience for challenging circumstances. The combination of growth mindset with actual learning experiences appeared to create stronger psychological benefits during the pandemic.

Age played a fascinating role in the results. While older adults generally showed more resilience in terms of emotional well-being and lower depression rates compared to younger participants, they were less likely to adjust their daily routines during the pandemic. This suggests that while age may bring emotional stability, it might also be associated with less behavioral flexibility.

Source : https://studyfinds.org/growth-mindset-protects-mental-health-during-hard-times/

What sleep paralysis feels like: Terrifying, like you’re trapped with a demon on your chest

The Nightmare, a 1781 oil painting by Swiss artist Henry Fuseli of a woman in deep sleep, arms flung over her head and an incubus, a male demon, on her belly, has been taken as a symbol of sleep paralysis. Photo by Detroit Institute of Arts.

This feature is part of a National Post series by health reporter Sharon Kirkey on what is keeping us up at night. In the series, Kirkey talks to sleep scientists and brain researchers to explore our obsession with sleep, the seeming lack of it and how we can rest easier.

Psychologist Brian Sharpless has been a horror movie buff since watching 1974’s It’s Alive! on HBO, a cult classic about a fanged and sharp-clawed mutant baby with a proclivity to kill whenever it got upset.

In his new book, Monsters on the Couch: The Real Psychological Disorders Behind Your Favorite Horror Movies, Sharpless devotes a full chapter to a surprisingly common human sleep experience that has been worked into so many movie plots “it now constitutes its own sub-genre of horror.”

Not full sleep, exactly, but rather a state stuck between sleep and wakefulness that follows a reliable pattern: People suddenly wake but cannot move because all major muscles are paralyzed.

The paralysis is often accompanied by the sensed presence of another, human or otherwise. The most eerie episodes involve striking hallucinations. Sharpless once hallucinated a “serpentine-necked monstrosity” lurking in the silvery moonlight seeping through the slats of his bedroom window blind.

Feeling pressure on the chest or a heavy weight on the ribs is also common. People feel as if they’re being smothered. They might also sweat, tremble or shake, but are “trapped,” unable to move their arms or legs, yell or scream. The experience can last seconds, or up to 20 minutes, “with a mean duration of six minutes,” Sharpless shared with non-sleep specialists in his doctor’s guide to sleep paralysis.

Sleep paralysis is a parasomnia, a sleep disorder that at least eight per cent of the general population will experience at least once in their lifetime. That low-ball estimate rises to 28 per cent among university students and to 32 per cent among people with a psychiatric condition. It’s usually harmless, but the combination of a waking nightmare and temporary paralysis can make for a “very unpleasant experience,” Sharpless advised clinicians, “one that may not be easily understood by patients.”

“Patients may instead use other non-medical explanations to make sense of it,” such as, say, some kind of alien, spiritual or demonic attack.

Eight years into studying sleep paralysis and with hundreds of interviews with experiencers under his belt, Sharpless had never once experienced the phenomenon himself, until 2015, the year he published his first book, Sleep Paralysis, with Dr. Karl Doghramji, a professor of psychiatry at Thomas Jefferson University. Sharpless woke at 2 a.m. and saw shadows in the hallway mingling and melding into a snake-like form with a freakishly long neck and eyes that glowed red. When he attempted to lift his head to get a better look, “I came to the uncomfortable realization I couldn’t move,” Sharpless recounts in Monsters on the Couch. “Oh my God, you’re having sleep paralysis,” he remembers thinking when he began to think rationally again.

“It’s an unusual experience that a lot of folks have,” Sharpless said in an interview with the National Post. The hallucinatory elements “that tap into a lot of paranormal and supernatural beliefs” is partly what makes it so fascinating, he said. Several celebrities — supermodel Kendall Jenner, American singer-songwriter and Apple Music’s 2024 Artist of the Year Billie Eilish, English actor and Spider-Man star Tom Holland — have also been open about their sleep paralysis.

You’re seeing, smelling, hearing something that isn’t there but feels like it is

It has a role in culture and folklore as well. In Brazilian folklore, the “Pisadeira” is a long-finger-nailed crone “who lurks on rooftops at night” and tramples on people lying belly up. Newfoundlanders called sleep paralysis an attack of the “Old Hag.” Sleep paralysis has been recognized by scholars and doctors since the ancient Greeks, Sharpless said. Too much blood, different lunar phases, upset gastrointestinal tracts — all were thought to trigger bouts of sleep paralysis. Episodes were described during the Salem Witch Trials of 1692. The Nightmare, a 1781 oil painting by Swiss artist Henry Fuseli of a woman in deep sleep, arms flung over her head and an incubus, a male demon, on her belly, has been taken as a symbol of sleep paralysis, among other interpretations. Sleep paralysis figures in numerous scary films and docu-horrors, including Shadow People, Dead Awake, Haunting of Hill House, Be Afraid, Slumber and The Nightmare.

The wildest story Sharpless has heard involved an undergrad student at Pennsylvania State University who was sleeping in her dormitory bunk bed when she woke suddenly, moved her eyes to the left and saw a child vampire with blood coming out of her mouth.

“The vampire girl ripped her covers off, grabbed her by the leg and started screaming, ‘I’m dragging you to hell, I’m dragging you to hell,’ pulling her out of the bed, all the while blood is coming out of her mouth,” Sharpless recalled the student telling him.

When she was able to move again, she found herself fully covered, her leg still under the blankets and not hanging off the ledge of the bunk bed as she imagined.

With sleep paralysis, hallucinations evaporate the moment movement returns, Sharpless said.

People are immobile in the first place because, during REM sleep, when dreams tend to be the most vivid and emotion-rich, muscles that move the eyes and involve breathing keep moving but most other muscles do not. The relaxed muscle tone keeps people from acting out their dreams and potentially injuring themselves or a bedmate.

“In REM, if you’re dreaming that you’re running or playing the piano, the brain is sending commands to your muscles as if you were awake,” said Antonio Zadra, a professor of psychology and a sleep scientist at Université de Montréal.

More than a decade ago, University of Toronto neuroscientists Patricia Brooks and John Peever found that two distinct brain chemicals worked together to switch off motor neurons communicating those brain messages to move. The result: muscle atonia or that REM sleep muscle paralysis. With REM sleep behaviour disorder, another parasomnia, the circuit isn’t switched off to inhibit muscle movement. People can act out their dreams, flailing, kicking, sitting up or even leaving the bed.

Normally, when people wake out of REM sleep, the paralysis that accompanies REM also stops. With sleep paralysis, the atonia carries over into wakefulness.

“You’re experiencing two aspects of REM sleep, namely, the paralysis and any dream activity, but now going on while the person is fully awake,” Sharpless said. People retain normal waking consciousness and can think just as they do when fully awake. But they’re also experiencing “dreams,” and because they’re awake, those dreams are hallucinations that feel just as real as anything in waking life.

Sleep paralysis tends to happen most often when people sleep in supine (on their back) positions and, while Sharpless and colleagues found that about 23 per cent of 172 adults with recurrent sleep paralysis surveyed reported always, mostly or sometimes pleasant experiences — some felt as if they were floating or flying — the hallucinations, like the beast Sharpless conjured up, are almost always threatening and bizarre.

Why so negative?

Evolution primed humans to be afraid of the dark and, in general, when we wake up, “It’s not usual for us to be paralyzed,” Sharpless said. “That’s an unusual experience from the get-go.” Sometimes people have catastrophizing thoughts like, “Oh my god, I’m having a stroke,” or they fear they’re going to die or be forever paralyzed.

“If you start having the hallucinatory REM sleep-dream activity going on, then it can get even worse,” Sharpless said.

Should people sense a presence in the room, the brain organizes that sensed presence into an actual shape or object, usually an intruder, attacker or something else scary, like an evil force. “If it goes on, you might actually make physical contact with the hallucination: You could feel that you’re being touched. You might smell it; you might hear it,” Sharpless said.

These aren’t nightmares. With nightmares, people aren’t aware of their bedroom surroundings and they certainly can’t move their eyes around the room.

What might explain that dense pressure on the chest, like you’re being suffocated or smothered? People are more likely to experience breathing disruptions when they’re sleeping on their backs. People with sleep apnea are also more likely to experience bouts of sleep paralysis because of disrupted oxygen levels, and being awake, temporarily paralyzed and in a distressed state can itself affect respiration. Rates of sleep paralysis are higher in other sleep disorders as well, narcolepsy especially.

While sleep paralysis can be weird and seriously uncomfortable, Sharpless marvels in Monsters on the Couch at how often people have asked him how one might be able to induce sleep paralysis.

One way is to have messed up sleep. Anything that disrupts sleep seems to increase the odds, Sharpless said, like sleep deprivation, jet lag, erratic sleep schedules. Sleep paralysis has also been linked to “exploding head syndrome,” a sleep disorder Sharpless has published a good bit on. People experience auditory hallucinations — loud bangs or explosions that last a mere second — during sleep-wake transitions.

How can people snap out of sleep paralysis?

In a survey of 156 university students with sleep paralysis, some of the more effective “disruption techniques” involved trying to move smaller body parts like fingers or toes, and trying to calm down or relax in the moment.

One review of 42 studies linked a history of trauma, a higher body mass index and chronic pain with episodes of fearful sleep paralysis. Excessive daytime sleepiness, excessively short (fewer than six hours) or excessively long (longer than nine hours) sleep duration have also been implicated.

To reduce the risk, Sharpless recommends good sleep hygiene, including going to bed and waking up at the same time, not drinking alcohol or caffeine too close to bedtime and “taking care of any issues you’ve been avoiding,” especially anxiety, depression or trauma. One simple suggestion: try to sleep on your side. “If you have a partner, have them gently roll you over,” Sharpless said. Zadra, author, with Robert Stickgold, of When Brains Dream: Exploring the Science and Mystery of Sleep, recommends trying to move the tongue to disengage motor paralysis. “The tongue is not paralyzed in REM sleep. Technically, you can move it,” Zadra said. Even thinking about moving the tongue or toes can put people into a whole different mindset “rather than this feeling of panic and not being able to move at all,” said Zadra.

Source : https://nationalpost.com/longreads/sleep-paralysis-terrors

 

Weight loss drugs help with fat loss – but they cause bone and muscle loss too

Patient injecting themself in the stomach with an Ozempic (semaglutide) needle. (Photo by Douglas Cliff on Shutterstock)

For a long time, dieting and exercise were the only realistic options for many people who wanted to lose weight, but recent pharmaceutical advances have led to the development of weight loss drugs. These are based on natural hormones from the intestine that help control food intake, such as GLP-1 and GIP.

GLP-1-based drugs such as semaglutide (Wegovy and Ozempic) and tirzepatide (Mounjaro) work by helping people to feel less hungry. This results in them eating less – leading to weight loss.

Studies show that these drugs are very effective in helping people lose weight. In clinical trials involving people with obesity, they led to losses of up to 20% of body weight in some instances.

But it’s important to note that not all the weight lost is fat. Research shows that up to one-third of this weight loss is so-called “non-fat mass” – this includes muscle and bone mass. This also happens when someone goes on a diet, and after weight loss surgery.
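A quick worked example makes the scale of this concrete. The 20% loss and one-third non-fat fraction are the upper-end figures reported above; the 100 kg starting weight is a hypothetical assumption.

```python
# Hypothetical starting weight; the 20% loss and one-third non-fat
# fraction are the upper-end figures reported in the article.
start_kg = 100.0
total_lost = start_kg * 0.20       # 20.0 kg of body weight lost
non_fat_lost = total_lost / 3.0    # up to ~6.7 kg of muscle and bone
fat_lost = total_lost - non_fat_lost

print(f"Total weight lost: {total_lost:.1f} kg")
print(f"Of which non-fat (muscle and bone): up to {non_fat_lost:.1f} kg")
print(f"Fat lost: at least {fat_lost:.1f} kg")
```

In other words, a person at the upper end of these ranges could shed several kilograms of muscle and bone alongside the fat, which is why the sections below stress protein intake and resistance exercise.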

Muscle and bone play very important roles in our health. Muscle matters for several reasons, including helping us control our blood sugar. Blood sugar control isn’t as good in people who have lower levels of muscle mass.

High blood sugar levels are also linked to health conditions such as Type 2 diabetes – where having high blood sugar levels can lead to blindness, nerve damage, foot ulcers and infections, and circulation problems such as heart attacks and strokes.

We need our bones to be strong so that we can carry out our everyday activities. Losing bone mass can increase our risk of fractures.

Researchers aren’t completely sure why people lose fat-free mass during weight loss – though there are a couple of theories.

It’s thought that during weight loss, muscle proteins are broken down faster than they can be built. And because there’s less stress on the bones after the weight is lost, normal bone turnover – the process where old bone is removed and new bone is formed – may be disrupted, so less new bone is made than before weight loss.

Because GLP-1 drugs are so new, we don’t yet know the longer-term effects of weight loss achieved by using them. So, we can’t be completely sure how much non-fat mass someone will lose while using these drugs or why it happens.

It’s hard to say whether the loss of non-fat mass could cause problems in the longer term or if this would outweigh the many benefits that are associated with these drugs.

Maintaining muscle and bone

There are many things you can do while taking GLP-1 drugs for weight loss to maintain your muscle and bone mass.

Research tells us that eating enough protein and staying physically active can be helpful in reducing the amount of non-fat mass that is lost when losing weight. One of the best types of exercise is doing resistance training or weight training. This will help to preserve muscle mass, and protein will help us maintain and build muscle.

Source : https://studyfinds.org/weight-loss-drugs-bone-muscle/

Content overload: Streaming audiences plagued by far too many options

(Credit: DANIEL CONSTANTE/Shutterstock)

New survey finds the average viewer spends 110 hours each year just figuring out what to watch.

In an era of endless entertainment options, streaming subscribers are drowning in choices — and not in a good way. A new survey reveals a startling paradox: despite having more content at their fingertips than ever before, viewers are struggling to find something worth watching.

Commissioned by UserTesting and conducted by Talker Research, the survey exposes the growing frustration with the current streaming landscape. The research paints a vivid picture of entertainment exhaustion, revealing that the average person now spends a staggering 110 hours per year — nearly five full days — simply scrolling through streaming platforms in search of something to watch.

One in five subscribers believe finding something to watch is harder now than a decade ago, a sentiment rooted in the overwhelming abundance of content. Forty-one percent of respondents struggle with increasingly large content libraries, while 26% feel there’s an overproduction of original content.

“The streaming landscape has evolved from solving the problem of content access to creating a new challenge of content discovery,” says Bobby Meixner, Senior Director of Industry Solutions at UserTesting, in a statement.

This observation is backed by intriguing revelations that highlight the complexity of our modern entertainment landscape. While 75% appreciate streaming service algorithms for providing accurate recommendations, 51% simultaneously admit feeling overwhelmed by the sheer quantity of suggested content.

Traditional TV is rapidly transforming too

Researchers found that 48% of subscribers have already abandoned cable television. TV viewers have been drawn to streaming platforms for various reasons, including content variety (43%), access to shows not available on cable (34%), and the convenience of on-the-go viewing (29%). However, the audience’s satisfaction remains elusive. In fact, 51% of subscribers would welcome more streaming options, even if those options include advertisements.

When envisioning their ideal streaming service, subscribers prioritized some specific features. Four in 10 desired premium channels and networks at no additional cost, while 39% emphasized the importance of an easy-to-navigate interface. The average subscriber believes a comprehensive streaming service should cost no more than $46 per month, though 11% would be willing to pay over $100 for the right service.

Hidden fees and content availability present significant challenges to subscriber loyalty. Seventy-nine percent expressed frustration with streaming services requiring additional subscription fees for select content. When encountering these fees, viewers respond dramatically: 73% look for the content on another platform, 77% give up and watch something else, and 37% consider canceling their subscription altogether. One in five would even resort to signing up for a free trial just to watch a specific show.

What do loyal customers want?

The study also revealed the precarious nature of content loyalty. Two in three people have opened a streaming service only to find the show they signed up to watch had been removed from the platform. Forty-four percent would switch services to continue watching a favorite show, with 56% planning to cancel their subscription immediately after finishing that show. The cancellation process itself becomes another point of friction, with 23% of subscribers reporting difficulties, including challenges in finding the cancellation option (39%) and overly complicated multi-step processes (36%).

Source : https://studyfinds.org/content-overload-streaming/

Microplastics are invading our bodies — 5 ways to keep them out

Microplastics on the beach. (© David Pereiras – stock.adobe.com)

Most people know by now that microplastics are building up in our environment and within our bodies. However, according to Dr. Leonardo Trasande, director of environmental pediatrics at NYU School of Medicine, there are ways to reduce the influx of plastics into our bodies. It starts with avoiding canned foods.

Plastic is everywhere. It’s in our food packaging, our homes, and our clothing. You can’t avoid it completely. Much of it serves important purposes in everything from computers to cars, but it’s also overwhelming our environment.

It affects our health. Minute bits of plastic, called microplastics or nanoplastics, are shed from larger products. These particles have invaded our brains, glands, reproductive organs, and cardiovascular systems.

CNN Chief Medical Correspondent Dr. Sanjay Gupta spoke with Trasande about his two decades of research into how the environment affects our health. Trasande said that we eat a lot of plastic and also inhale it as dust. It’s even in cosmetics we absorb through our skin.

The concern isn’t only the plastic itself but also what’s in it: added chemicals that can cause inflammation and irritation. Polyvinyl chloride, a plastic used in food packaging, contains added chemicals called phthalates, which make it softer.

Dr. Trasande worries about phthalates (an ingredient in personal care items and food packaging), bisphenols (lining aluminum cans and thermal paper receipts), and perfluoroalkyl and polyfluoroalkyl substances (PFAS) – called “forever chemicals” because they last for centuries in the environment.

Many of these added chemicals are especially concerning due to their effects on the endocrine system – glands and the hormones they secrete. The endocrine system controls many of our bodies’ functions, such as metabolism and reproduction. Hormones are signaling molecules, acting as expert conductors of the body’s communication within itself.

5 things you can do to avoid exposure

Avoid canned foods

While bisphenol A (BPA) — a chemical that was commonly used in the lining of many metal food and drink cans, lids, and caps — is no longer present in the packaging for most products (canned tuna, soda, and tomatoes), industry data shows that it is still used about 5% of the time, possibly more.

Also, it is unclear if BPA’s replacement is safer. One of the common substitutes, bisphenol S, is as toxic as BPA. It has seeped into our environment as well.

Keep plastic containers away from heat and harsh cleaners

The “microwave and dishwasher-safe” labeling on some plastics refers only to the warping or gross misshaping of a plastic container. If, however, you examine the container microscopically, you can see damage. Bits of chemical additives and/or plastic are shed and absorbed into the food, which you then ingest.

If the plastic is etched, like a well-used plastic cutting board, it should be discarded. Etching increases the leaching of chemicals into your food.

Source : https://studyfinds.org/microplastics-5-ways-to-keep-them-out/

Heart tissue can regenerate — How Cold War nuclear tests led to major discovery

(ID 328527023 © Dmitry Buksha | Dreamstime.com)

Study reveals extraordinary self-healing potential in advanced heart failure patients

TUCSON, Ariz. — For decades, medical science has insisted that the human heart cannot repair itself in any meaningful way. This dogma, as fundamental to cardiology as a heartbeat itself, is now being challenged by game-changing research that reveals our hearts may possess an extraordinary power of regeneration—provided they’re given the right conditions to heal.

The study, published in Circulation, offers potential new directions for treating heart failure, a condition that affects nearly 7 million U.S. adults and accounts for 14% of deaths annually, according to the Centers for Disease Control and Prevention.

Traditionally, the medical community has viewed the human heart as having minimal regenerative capabilities. Unlike skeletal muscles that can heal after injury, cardiac muscle tissue has been thought to have very limited repair capacity.

“When a heart muscle is injured, it doesn’t grow back. We have nothing to reverse heart muscle loss,” says Dr. Hesham Sadek, director of the Sarver Heart Center at the University of Arizona College of Medicine – Tucson, in a statement.

However, this new research, conducted by an international team of scientists, demonstrates that hearts supported by mechanical assist devices can achieve cellular renewal rates significantly higher than previously observed. The study examined tissue samples from 52 patients with advanced heart failure, including 28 who received left ventricular assist devices (LVADs) – mechanical pumps surgically implanted to help weakened hearts pump blood more effectively.

The research methodology centered on an innovative approach to tracking cell renewal. Using a technique that measures carbon-14 levels in cellular DNA – taking advantage of elevated atmospheric levels from Cold War nuclear testing – researchers could effectively date when cardiac cells were created. This method provided unprecedented insight into the heart’s regenerative processes.

The findings revealed a stark contrast between different patient groups. In healthy hearts, cardiac muscle cells (cardiomyocytes) naturally renew at approximately 0.5% per year. However, in failing hearts, this renewal rate drops dramatically – to 0.03% in cases of non-ischemic cardiomyopathy (heart failure not caused by blocked arteries) and 0.01% in ischemic cardiomyopathy (heart failure from blocked arteries).

The most significant finding emerged from patients who responded positively to LVAD support. These “responders,” who showed improved cardiac function, demonstrated cardiomyocyte renewal rates more than six times higher than those seen in healthy hearts. This observation provides what Dr. Sadek calls “the strongest evidence we have, so far, that human heart muscle cells can actually regenerate.”

The study builds upon previous research, including Dr. Sadek’s 2011 publication in Science showing that heart muscle cells actively divide during fetal development but cease shortly after birth to focus solely on pumping blood. His 2014 research provided initial evidence of cell division in artificial heart patients, laying the groundwork for the current study.

The mechanism behind this increased regeneration may be linked to the unique way LVADs support heart function. These devices effectively provide cardiac muscle with periods of reduced workload by assisting with blood pumping, potentially creating conditions that enable regeneration. This observation aligns with established knowledge about how other tissues in the body heal and regenerate when given adequate rest.

The research team found that in failing hearts, most cellular DNA synthesis is directed toward making existing cells larger or more complex through processes called polyploidization and multinucleation, rather than creating new cells. However, in LVAD patients who showed improvement, a significant portion of DNA synthesis was dedicated to generating entirely new cardiac muscle cells – a more beneficial form of cardiac adaptation.

Approximately 25% of LVAD patients demonstrate this enhanced regenerative response, raising important questions about why some patients respond while others do not. Understanding these differences could be crucial for developing new therapeutic approaches. “The exciting part now is to determine how we can make everyone a responder,” says Sadek.

The implications of this research are particularly promising because LVADs are already an established treatment option. As Dr. Sadek points out, “The beauty of this is that a mechanical heart is not a therapy we hope to deliver to our patients in the future – these devices are tried and true, and we’ve been using them for years.”

Source: https://studyfinds.org/heart-muscle-regeneration-cold-war-tests/

The dark side of digital work: ‘Always on’ culture creating new type of anxiety for employees

(© Maridav – stock.adobe.com)

Think about the last time you checked your work email after hours. Do you find yourself having the urge to scan your inbox frequently while on vacation? A new study from the University of Nottingham suggests these digital intrusions may be taking a significant toll on employee wellbeing.

The research, published in Frontiers in Organizational Psychology, explores what researchers call the “dark side” of digital workplaces: the hidden psychological and physical costs that come with being constantly connected to work through technology. While digital tools have enabled greater flexibility and collaboration, they’ve also created new challenges that organizations need to address.

The researchers identified a phenomenon they term “Digital Workplace Technology Intensity” (DWTI). This is the mental and emotional effort required to navigate constant connectivity, handle information overload, deal with technical difficulties, and cope with the fear of missing important updates or connections in the digital workplace.

“Digital workplaces benefit both organizations and employees, for example, by enabling collaborative and flexible work,” explains Elizabeth Marsh, ESRC PhD student from the School of Psychology who led the qualitative study, in a statement. “However, what we have found in our research is that there is a potential dark side to digital working, where employees can feel fatigue and strain due to being overburdened by the demands and intensity of the digital work environment. A sense of pressure to be constantly connected and keeping up with messages can make it hard to psychologically detach from work.”

Rise of ‘productivity anxiety’

To understand these challenges, the research team conducted in-depth interviews with 14 employees across various roles and industries. The participants, aged 27 to 60, included store managers, software engineers, and other professionals, providing insights into how digital workplace demands affect different types of work.

The researchers identified five key themes that characterize the challenges of digital work. The first is “hyperconnectivity.” They define this as a state of constant connection to work through digital devices that erodes the boundaries between professional and personal life. As one participant explained: “You kind of feel like you have to be there all the time. You have to be a little green light.”

This always-on culture has given rise to what the study reveals as “productivity anxiety,” or workers’ fear of being perceived as unproductive when working remotely. One participant described this pressure directly: “It’s that pressure to respond […] I’ve received an e-mail, I’ve gotta do this quickly because if not, someone might think ‘What is she doing from home?’”

FOMO leading to workplace overload

The study also identified “techno-overwhelm,” where workers struggle with the sheer volume of digital communications and platforms they must manage. Participants described feeling bombarded by emails and overwhelmed by the proliferation of messages, applications, and meetings in the digital workplace.

Technical difficulties, which the researchers termed “digital workplace hassles,” emerged as another significant source of stress. The study found these challenges were particularly significant for older workers and those with disabilities, highlighting important accessibility concerns that organizations need to address.

The research also revealed an interesting pattern around the Fear of Missing Out (FoMO) in professional settings. While digital tools are meant to improve communication, many participants expressed anxiety about potentially missing important updates or opportunities for connection with colleagues.

“This research extends the Job Demands-Resources literature by clarifying digital workplace job demands including hyperconnectivity and overload,” says Dr. Alexa Spence, Professor of Psychology at Nottingham. “It also contributes a novel construct of digital workplace technology intensity which adds new insight on the causes of technostress in the digital workplace. In doing so, it highlights the potential health impacts, both mental and physical, of digital work.”

Disconnecting from the connected world

The study’s findings are particularly relevant in our post-pandemic era, where the boundaries between office and home have become increasingly blurred. As one participant noted: “[It’s] just more difficult to leave it behind when it’s all online and you can kind of jump on and do work at any time of the day or night.”

Source : https://studyfinds.org/the-dark-side-of-digital-work-productivity-anxiety/

80% of adults carry this virus — For some, it could trigger Alzheimer’s

The brain’s immune cells, or microglia (light blue/purple), are shown interacting with amyloid plaques (red) — harmful protein clumps linked to Alzheimer’s disease. The illustration highlights the microglia’s role in monitoring brain health and clearing debris. (Illustration by Jason Drees/Arizona State University)

In the gut of some Alzheimer’s patients lies an unexpected culprit: a common virus that may be silently contributing to their disease. While scientists have long suspected microbes might play a role in Alzheimer’s disease, new research has uncovered a surprising link between a virus that infects most humans and a distinct subtype of the devastating neurological condition.

The research suggests that human cytomegalovirus (HCMV) — a virus that infects roughly 80% of adults — may play a more significant role in Alzheimer’s disease than previously thought, particularly when combined with specific immune system responses.

The study, led by researchers at Arizona State University and multiple collaborating institutions, focused on a specific type of brain cell called microglia marked by a protein called CD83. These CD83-positive microglia were found in 47% of Alzheimer’s patients compared to 25% of unaffected individuals.

This study, published in the journal Alzheimer’s &amp; Dementia, is particularly notable because it examines multiple body systems, including the gut, the vagus nerve (which connects the gut to the brain), and the brain itself. The researchers found that subjects with CD83-positive microglia in their brains were more likely to have both HCMV and increased levels of an antibody called immunoglobulin G4 (IgG4) in their colon, vagus nerve, and brain tissue.

“We think we found a biologically unique subtype of Alzheimer’s that may affect 25% to 45% of people with this disease,” says study co-author Dr. Ben Readhead, a research associate professor with ASU-Banner Neurodegenerative Disease Research Center, in a statement. “This subtype of Alzheimer’s includes the hallmark amyloid plaques and tau tangles—microscopic brain abnormalities used for diagnosis—and features a distinct biological profile of virus, antibodies and immune cells in the brain.”

For their research, the team examined tissue samples from multiple areas of the body in both Alzheimer’s patients and healthy controls. They found that patients with CD83-positive microglia in their brains were significantly more likely to have both HCMV and elevated IgG4 levels in their colon, vagus nerve, and brain tissue.

“It was critically important for us to have access to different tissues from the same individuals. That allowed us to piece the research together,” says Readhead, who also serves as the Edson Endowed Professor of Dementia Research at the center.

To further investigate the potential impact of HCMV on brain cells, the team conducted experiments using cerebral organoids – simplified versions of human brain tissue grown in the laboratory. When these organoids were infected with HCMV, they showed accelerated development of two key markers of Alzheimer’s disease: amyloid beta-42 and phosphorylated Tau-212. The infected organoids also showed increased rates of neuronal death.

The researchers emphasize that while HCMV infection is common, only a subset of individuals showed evidence of intestinal HCMV infection, which appears to be the relevant factor in the virus’s presence in the brain.

Study authors suggest that in some individuals, HCMV infection might trigger a cascade of events involving the immune system that could contribute to the development or progression of Alzheimer’s disease. This is particularly interesting because it might help explain why some people develop Alzheimer’s while others don’t, despite HCMV being so common in the general population.

Looking ahead, the research team is developing a blood test to identify individuals with chronic intestinal HCMV infection. They hope to use this in combination with emerging Alzheimer’s blood tests to evaluate whether existing antiviral drugs could be beneficial for this subtype of Alzheimer’s disease.

“We are extremely grateful to our research participants, colleagues, and supporters for the chance to advance this research in a way that none of us could have done on our own,” notes Dr. Eric Reiman, Executive Director of Banner Alzheimer’s Institute and the study’s senior author. “We’re excited about the chance to have researchers test our findings in ways that make a difference in the study, subtyping, treatment and prevention of Alzheimer’s disease.”

With the development of a blood test to identify patients with chronic HCMV infection on the horizon, this research might not just explain why some people develop Alzheimer’s – it might also point the way toward preventing it. In the end, the key to understanding this devastating brain disease may have been hiding in our gut all along.

Source : https://studyfinds.org/gut-virus-trigger-alzheimers/

You shouldn’t have! Holiday shoppers spending $10.1 billion on gifts nobody wants

(Credit: Asier Romero/Shutterstock)

This holiday season, take a moment to ask yourself, “Does this person really want what I’m buying them?” A new survey finds the answer is likely no! Researchers have found that more than half of Americans (53%) will receive a gift they don’t want.

As Elon Musk and Vivek Ramaswamy go looking for waste in Washington, it turns out that everyday Americans are throwing away tons of money too. According to the new forecast from Finder, unwanted presents will reach an all-time high in both volume and cost this year, with an estimated $10.1 billion being spent on gifts headed for the regifting pile.

Overall, the annual holiday spending forecast finds that roughly 140 million Americans will receive at least one unwanted present in 2024. Shockingly, one in 20 people expect to receive at least five gifts they won’t want to keep. The average cost of these unwanted items is expected to rise to $72 this holiday season, up from $66 last year. That represents a billion-dollar surge in wasteful holiday spending.

Saying “you shouldn’t have…” might be a more truthful statement than ever when it comes to certain gift ideas. Clothing and accessories top 2024’s list of the most unwanted gifts people receive. Specifically, 43% hope to avoid these personal items. However, that number is actually down from the 49% who didn’t want clothes for Christmas in 2022. So, maybe some Americans need a new pair of socks this year.

Household items follow clothing as the least popular holiday gifts (33%), while cosmetics and fragrances round out the top three at 26%. Interestingly, technology gifts are skyrocketing in unpopularity. Since 2022, the dislike for tech gifts has risen by a whopping 10%, going from 15% in 2022 to 25% this holiday season. So, maybe think twice before getting your friend their eighth pair of headphones.

The season of re-giving

So, what happens to all these well-intentioned but unwanted presents? The survey found that regifting is the most popular solution in 2024. Nearly four in 10 Americans (39%) plan to pass their unwanted gifts along to someone else. That’s the most popular option this year, surpassing the awkward choice of keeping a bad gift. Interestingly, a staggering 43% of Americans kept their unwanted presents in 2022, but that number has now fallen to 35%.

Another 32% take advantage of post-holiday exchange policies to swap their unwanted items for something more desirable. However, more and more people are just opting to sell their sub-par presents for cold hard cash. Over one in four (27%) plan to sell unwanted gifts after the holidays, up significantly from 17% in 2022.

So, if you’re still looking for last-minute gifts this holiday season, choose wisely. There’s a very good chance the person you’re buying for won’t like your choices anyway.

Source : https://studyfinds.org/holiday-shoppers-unwanted-gifts/

See how Google Gemini 2.0 Flash can perform hours of business analysis in minutes

Anyone who has had a job that required intensive amounts of analysis will tell you that any speed gain they can find is like getting an extra 30, 60, or 90 minutes back out of their day.

Automation tools in general, and AI tools specifically, can assist business analysts who need to crunch massive amounts of data and succinctly communicate it.

In fact, a recent Gartner analysis, “An AI-First Strategy Leads to Increasing Returns,” states that the most advanced enterprises rely on AI to increase the accuracy, speed, and scale of analytical work to fuel three core objectives — business growth, customer success, and cost efficiency — with competitive intelligence being core to each.

Google’s newly released Gemini 2.0 Flash provides business analysts with greater speed and flexibility in defining Python scripts for complex analysis, giving analysts more precise control over the results they generate.

Google claims that Gemini 2.0 Flash builds on the success of 1.5 Flash, its most adopted model yet for developers.

Gemini 2.0 Flash outperforms 1.5 Pro on key benchmarks, delivering twice the speed, according to Google. 2.0 Flash also supports multimodal inputs, including images, video, and audio, as well as multimodal output, including natively generated images mixed with text and steerable text-to-speech (TTS) multilingual audio. It can also natively call tools like Google Search, code execution, and third-party user-defined functions.

Taking Gemini 2.0 Flash for a test drive

VentureBeat gave Gemini 2.0 Flash a series of increasingly complex Python scripting requests to test its speed, accuracy, and precision in dealing with the nuances of the cybersecurity market.

Using Google AI Studio to access the model, VentureBeat started with simple scripting requests, working up to more complex ones centered on the cybersecurity market.

What’s immediately noticeable about Python scripting with Gemini 2.0 Flash is how fast it is — nearly instantaneous, in fact — at providing Python scripts, generating them in seconds. It’s noticeably faster than 1.5 Pro, Claude, and ChatGPT when handling increasingly complex prompts.

VentureBeat asked Gemini 2.0 Flash to perform a typical task that a business or market analyst would be requested to do: Create a matrix comparing a series of vendors and analyze how AI is used across each company’s products.

Analysts often have to create tables quickly in response to sales, marketing, or strategic planning requests, and they usually need to include unique advantages or insights into each company. This can take hours and even days to get done manually, depending on an analyst’s experience and knowledge.

VentureBeat wanted to make the prompt request realistic by having the script encompass an analysis of 13 XDR vendors, also providing insights into how AI helps the listed vendors handle telemetry data. As is the case with many requests analysts receive, VentureBeat asked for the Python script to produce an Excel file of the results.

Here is the prompt we gave Gemini 2.0 Flash to execute:

Write a Python script to analyze the following cybersecurity vendors who have AI integrated into their XDR platform and build a table showing how they differ from each other in implementing AI. Have the first column be the company name, the second column the company’s products that have AI integrated into them, the third column being what makes them unique and the fourth column being how AI helps handle their XDR platforms’ telemetry data in detail with an example. Don’t web scrape. Produce an Excel file of the result and format the text in the Excel file so it is clear of any brackets ({}), quote marks (‘) and any HTML code to improve readability. Name the Excel file Gemini 2 flash test.
Cato Networks, Cisco, CrowdStrike, Elastic Security XDR, Fortinet, Google Cloud (Mandiant Advantage XDR), Microsoft (Microsoft 365 Defender XDR), Palo Alto Networks, SentinelOne, Sophos, Symantec, Trellix, VMware Carbon Black Cloud XDR

Using Google AI Studio, VentureBeat created the following AI-powered XDR Vendor Comparison Python scripting request, with Python code produced in seconds:
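The generated code itself isn’t reproduced in this text version of the article. As a rough illustration only, a script answering that prompt would typically assemble the four-column table with pandas and write it out to a spreadsheet; the sketch below follows that shape, with placeholder entries for two of the vendors rather than Gemini’s actual output (the real script wrote an .xlsx file; CSV is used here so the sketch runs without an Excel engine installed):

```python
# Hypothetical sketch of the kind of script the prompt asks for.
# Vendor entries are illustrative placeholders, NOT Gemini's actual output.
import pandas as pd

rows = [
    {
        "Company Name": "CrowdStrike",
        "AI-Integrated Products": "Falcon platform",
        "What Makes Them Unique": "Single lightweight agent feeding a large threat graph",
        "How AI Handles XDR Telemetry": "Models correlate endpoint telemetry into attack paths",
    },
    {
        "Company Name": "SentinelOne",
        "AI-Integrated Products": "Singularity XDR",
        "What Makes Them Unique": "Autonomous endpoint detection and response",
        "How AI Handles XDR Telemetry": "Behavioral models link process events into storylines",
    },
]

# Build the four-column comparison table described in the prompt.
df = pd.DataFrame(rows)

# The article's script produced an Excel workbook, i.e. something like:
#   df.to_excel("Gemini_2_flash_test.xlsx", index=False)
# CSV is written here to keep the sketch dependency-free.
df.to_csv("Gemini_2_flash_test.csv", index=False)
print(f"Wrote {len(df)} vendor rows across {len(df.columns)} columns")
```

A full version would simply carry one dictionary per vendor for all 13 companies listed in the prompt.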

Next, VentureBeat saved the code and loaded it into Google Colab. The goal in doing this was to see how bug-free the Python code was outside of Google AI Studio and also measure its speed of being compiled. The code ran flawlessly with no errors and produced the Microsoft Excel file Gemini_2_flash_test.xlsx.

The results speak for themselves

Within seconds, the script ran and Colab signaled no errors. It also printed a message at the end confirming the Excel file was done.

VentureBeat downloaded the Excel file and found it had been finished in less than two seconds. The following is a formatted view of the Excel table that the Python script delivered.

The total time needed to get this table done was less than four minutes, from submitting the prompt to getting the Python script, running it in Colab, downloading the Excel file, and doing some quick formatting.

Source: https://venturebeat.com/ai/google-gemini-2-0-flash-test-drive-reveals-why-every-analyst-needs-to-know-this-modelgoogle-gemini-2-0-flash-test-drive-why-every-analyst-needs-to-know-this-model/

‘Big Brother’ isn’t just watching — He’s changing how your brain works

Surveillance cameras are seemingly everywhere. (ID 192949897 © Aleksandr Koltyrin | Dreamstime.com)

Every time you walk down a city street, electronic eyes are watching. From security systems to traffic cameras, surveillance is ubiquitous in modern society. Yet these cameras might be doing more than just recording our movements: according to a new study that peers into the psychology of surveillance, they could be fundamentally altering how our brains process visual information.

While previous research has shown that surveillance cameras can modify our conscious behavior – making us less likely to steal or more inclined to follow rules – a new study published in Neuroscience of Consciousness suggests that being watched affects something far more fundamental: the unconscious way our brains perceive the world around us.

“We found direct evidence that being conspicuously monitored via CCTV markedly impacts a hardwired and involuntary function of human sensory perception – the ability to consciously detect a face,” explains Associate Professor Kiley Seymour, lead author of the study, in a statement.

Putting surveillance to the test
The research team at the University of Technology Sydney, led by Seymour, designed an ingenious experiment to test how surveillance affects our unconscious visual processing. They recruited 54 undergraduate students and split them into two groups: one group completed a visual task while being conspicuously monitored by multiple surveillance cameras, while the control group performed the same task without cameras present.

The monitored group was shown the surveillance setup beforehand, including a live feed of themselves from the adjacent room, and had to sign additional consent forms acknowledging they would be watched. To ensure participants felt the full weight of surveillance, cameras were positioned to capture their whole body, face, and even their hands as they performed the task.

The visual task itself employed a clever technique called continuous flash suppression (CFS), which temporarily prevents images shown to one eye from reaching conscious awareness while the brain still processes them unconsciously. Participants viewed different images through each eye: one eye saw rapidly changing colorful patterns, while the other saw faces that were either looking directly at them or away from them.

‘Ancient survival mechanisms’ turn on when being watched
The results were remarkable: “Our surveilled participants became hyper-aware of face stimuli almost a second faster than the control group. This perceptual enhancement also occurred without participants realizing it,” says Seymour. This held true whether the faces were looking directly at them or away, though both groups detected direct-gazing faces more quickly overall.

This heightened awareness appears to tap into ancient survival mechanisms. “It’s a mechanism that evolved for us to detect other agents and potential threats in our environment, such as predators and other humans, and it seems to be enhanced when we’re being watched on CCTV,” Seymour explains.

Importantly, this wasn’t simply due to participants trying harder or being more alert under surveillance. When the researchers ran the same experiment using simple geometric patterns instead of faces, there was no difference between the watched and unwatched groups. The enhancement was specific to social stimuli – faces – suggesting that surveillance taps into fundamental neural circuits evolved for processing social information.

Effects on mental health and consciousness
The findings have particular relevance for mental health. “We see hyper-sensitivity to eye gaze in mental health conditions like psychosis and social anxiety disorder where individuals hold irrational beliefs or preoccupations with the idea of being watched,” notes Seymour. This suggests that surveillance might interact with these conditions in ways we don’t yet fully understand.

Perhaps most unsettling was the disconnect between participants’ conscious experience and their brain’s response. “We had a surprising yet unsettling finding that despite participants reporting little concern or preoccupation with being monitored, its effects on basic social processing were marked, highly significant and imperceptible to the participants,” Seymour reveals.

These findings arrive at a crucial moment in human history, as we grapple with unprecedented levels of technological surveillance. From CCTV cameras and facial recognition systems to trackable devices and the “Internet of Things,” our activities are increasingly monitored and recorded. The study suggests that this constant observation may be affecting us on a deeper level than previously realized, modifying basic perceptual processes that normally operate outside our awareness.

The implications extend beyond individual privacy concerns to questions about public mental health and the subtle ways surveillance might be reshaping human cognition and social interaction. As surveillance technology continues to advance, including emerging neurotechnology that could potentially monitor our mental activity, understanding these unconscious effects becomes increasingly crucial.

Source: https://studyfinds.org/big-brother-watching-surveillance-changing-how-brain-works/


The Most Beautiful Mountains on Earth | International Mountain Day

Denali Mountain (Photo by Bryson Beaver on Unsplash)

From the snow-capped peaks of the Himalayas to the dramatic spires of Patagonia, Earth’s mountains stand as nature’s most awe-inspiring monuments. On International Mountain Day, we celebrate these colossal formations that have shaped cultures, inspired religions, and challenged adventurers throughout human history. These geological giants aren’t just spectacular viewpoints – they’re vital ecosystems that provide water, shelter diverse wildlife, and influence global weather patterns. In this visual journey, join us to explore the most beautiful mountains on our planet, each telling its own story of natural forces, cultural significance, and unparalleled beauty that continues to captivate millions of visitors and photographers from around the world.

Most Beautiful Mountains in the World, According to Experts
1. Mount Fuji in Japan
This active volcano on the island of Honshu is a sight to behold. A site of pilgrimage for centuries among Buddhists, Shintoists, and others, Mount Fuji is the highest peak in Japan. The last time it erupted was in the 18th century.

Mount Fuji soars to an impressive height of 12,389 feet (3,775 meters) and is particularly stunning when adorned with its signature snowy cap. As Hostelworld points out, while many visitors are eager to get up close to this legendary mountain, its true majesty is often best appreciated from a distance – though you’ll need some patience, as this shy giant has a habit of playing hide-and-seek behind the clouds.

The mountain’s significance runs far deeper than its physical beauty. According to Exoticca, Mount Fuji’s perfect conical shape has made it not just a national symbol, but a deeply spiritual place. Its slopes have long been intertwined with Shinto traditions, and by the early 12th century, followers of the Shugendō faith had even established a temple at its summit, marking its importance in Japanese religious life.

There’s a fascinating irony to Mount Fuji’s allure. Atlas & Boots shares a telling Japanese proverb: climbing it once makes you wise, but twice makes you a fool. While around 300,000 people make the trek annually, the immediate mountain environment is surprisingly stark. The real magic lies in viewing Fuji from afar, where its serene symmetry and majestic presence have rightfully earned it a place among the world’s most beautiful mountains.

2. Mount Kilimanjaro in Tanzania
As the highest freestanding mountain in the world, Kilimanjaro is also the highest mountain in Africa. It is made up of three dormant volcanic cones: Kibo, Mawenzi, and Shira.

Standing proudly at 19,341 feet (5,895 meters), Mount Kilimanjaro offers something you rarely find in a single mountain: an incredible variety of ecosystems stacked one above the other. As The Travel Enthusiast says, this African giant hosts everything from lush rainforests and moorlands to alpine deserts, culminating in an arctic summit that seems almost impossible for its location.

Those who venture to climb Kilimanjaro are treated to more than just stunning vistas. Veranda notes that the mountain provides spectacular views of the surrounding savanna, while the journey up its slopes takes you through an impressive sequence of distinct ecological zones. It’s like traveling from the equator to the poles in a matter of days.

The mountain’s surroundings are just as remarkable as its height. According to Travel Triangle, this legendary peak – one of Africa’s Seven Summits – is crowned with glaciers and an ice field, though both are slowly shrinking. The surrounding Kilimanjaro National Park is a haven for wildlife, where visitors might spot everything from elegant black and white colobus monkeys to elephants and even the occasional leopard prowling through the forest.

3. Matterhorn in Switzerland and Italy
The famously pyramid-shaped Matterhorn straddles the border of Italy and Switzerland in the Alps. Considered one of the deadliest peaks to climb in the world, its beauty is breathtaking and unmistakable.

At 14,692 feet (4,478 meters), the Matterhorn might not be the Alps’ tallest peak, but it’s arguably its most mesmerizing. As Hostelworld notes, this pyramid-shaped giant earned its legendary status not just through its distinctive silhouette, but also through its dramatic history – including its first ascent in 1865 by British climber Edward Whymper.

As Exoticca points out, the mountain’s majesty is best appreciated from the charming Swiss town of Zermatt. This picturesque resort has become synonymous with the Matterhorn itself, offering visitors front-row seats to one of nature’s most impressive displays.

According to Earth & World, which also crowns it the world’s most beautiful mountain, the Matterhorn creates an unforgettable natural spectacle when its rocky peak catches the light, particularly when reflected in the nearby Stellisee Lake. The area around this “mountain of mountains” is also home to Europe’s highest summer skiing region, operating year-round as a paradise for winter sports enthusiasts.

4. Denali Peak in Alaska
Also known as Mount McKinley, Denali Peak is the crown jewel of Alaska’s Denali National Park and Preserve. It’s aptly named: Denali means “The High One,” and it is the tallest mountain in North America.

Rising to a staggering 20,310 feet (6,190 meters), Denali dominates the Alaskan landscape as one of the world’s most isolated and impressive peaks. As Beautiful World notes, this snow-crowned giant draws adventurers throughout the year, from mountaineers and backpackers in the warmer months to cross-country skiers who glide along its snow-blanketed paths in winter.

Among the world’s greatest climbing challenges, Denali stands as a formidable test of skill and endurance. Atlas & Boots ranks it as perhaps the most demanding of the Seven Summits after Everest, though its breathtaking beauty helps explain why climbers continue to be drawn to its unforgiving slopes.

The mountain’s appeal extends far beyond just climbing, according to Travel Triangle. Situated at the heart of the vast Denali National Park, this Alaskan masterpiece offers visitors a chance to experience nature in its most magnificent form. Its remarkable isolation and untamed character make it a perfect destination for those seeking to connect with the raw power of the natural world.

Source: https://studyfinds.org/most-beautiful-mountains/

Married Millennials Are Getting ‘Sleep Divorces’

Married millennials who are otherwise happy in their relationships are getting “sleep divorces,” a phenomenon in which mismatched sleeping habits make it impossible for the couple to continue sleeping in the same bed, or even in the same bedroom.

Watch an old episode of I Love Lucy and you’ll probably cock your head to the side like a confused dog when you see Lucy and Ricky’s sleeping arrangement: a married couple sleeping in the same bedroom but in two different beds, separated by a bedside table. That’s the way some couples used to sleep, and it’s the only way the FCC allowed TV shows to depict couples in their bedrooms back in the day.

Today’s 30-something married couples are living life like Lucy and Ricky. Whether it’s snoring, restless movements, or one or both having to get up to pee, there are simply too many issues to deal with that can disturb a partner’s sleep.

According to sleep scientist and psychologist Wendy Troxel, up to 30 percent of a person’s sleep quality is influenced by their partner’s sleepytime behavior. Sure, your own thoughts and anxieties make falling and staying asleep a nightmare, but add your partner’s sleep idiosyncrasies into the mix and you have a recipe for insomnia.

A study from Ohio State University found that couples who are not getting adequate sleep are more likely to exhibit negative behaviors when discussing their marital issues. A study of 48 British couples showed that men move around in their sleep a lot more than women, with women reporting being disturbed by their male partner’s movements.

Interestingly, the same study showed that most couples prefer to sleep together rather than apart despite the downsides.

Source : https://www.vice.com/en/article/married-millennials-sleep-divorces/

Friendship after 50: Why social support becomes a matter of life and death

(© Rawpixel.com – stock.adobe.com)

For adults over 50, maintaining close friendships isn’t just about having someone to chat with over coffee – it could be integral to their health and well-being. A new study reveals a stark reality: while 75% of older adults say they have enough close friends, those saying they’re in poor mental or physical health are significantly less likely to maintain these vital social connections. The findings paint a concerning picture of how health challenges can create a cycle of social isolation, potentially making health problems worse.

The University of Michigan’s National Poll on Healthy Aging, conducted in August 2024, surveyed 3,486 adults between 50 and 94, offering an in-depth look at how friendships evolve in later life and their crucial role in supporting health and well-being. The results highlight a complex relationship between health status and social connections that many may not realize exists.

“With growing understanding of the importance of social connection for older adults, it’s important to explore the relationship between friendship and health, and identify those who might benefit most from efforts to support more interaction,” explains University of Michigan demographer Sarah Patterson, in a statement.

Patterson, a research assistant professor at the UM Institute for Social Research’s Survey Research Center, emphasizes the critical nature of understanding these social connections. A robust 90% of adults over 50 said they have at least one close friend, with 48% maintaining one to three close friendships and 42% enjoying the company of four or more close friends. However, these numbers drop dramatically for those facing health challenges.

Among individuals reporting fair or poor mental health, 20% have no close friends at all – double the overall rate. Similarly, 18% of those with fair or poor physical health report having no close friends, suggesting that health challenges can significantly impact social connections.

The gender divide in friendship maintenance is notable: men are more likely than women to report having no close friends. Age also plays a role, with those 50 to 64 years old more likely to report no close friendships than their older counterparts 65 and older – a somewhat counterintuitive finding that challenges assumptions about social isolation increasing with age.

When it comes to staying in touch, modern technology has helped keep connections alive. In the month before the survey, 78% of older adults had in-person contact with close friends, while 73% connected over the phone, and 71% used text messages. This multi-channel approach to maintaining friendships suggests that older adults are adapting to new ways of staying connected.

The findings resonate particularly with AARP, one of the study’s supporters.

“This poll underscores the vital role friendships play in the health and well-being of older adults,” says Indira Venkat, Senior Vice President of Research at AARP. “Strong social connections can encourage healthier choices, provide emotional support, and help older adults navigate health challenges, particularly for those at greater risk of isolation.”

Perhaps most striking is the role that close friends play in supporting health and well-being. Among those with at least one close friend, 79% say they can “definitely count on these friends for emotional support in good times or bad,” and 70% feel confident turning to their friends to discuss health concerns. These aren’t just casual relationships – they’re vital support systems that can influence health behaviors and outcomes.

Consider this: 50% of older adults say that their close friends encouraged them to make healthy choices, such as exercising more or eating a healthier diet. Another 35% say friends motivated them to get concerning symptoms or health issues checked out by a healthcare provider, and 29% received encouragement to stop unhealthy behaviors like poor eating habits or excessive drinking.

The practical support is equally impressive: 32% had friends who helped them when sick or injured, 17% had friends pick up medications for them, and 15% had friends attend medical appointments with them. These statistics underscore how friendship networks can function as informal healthcare support systems.

However, the study reveals a challenging paradox: making and maintaining friendships becomes more difficult precisely when people might need them most. Among those reporting fair or poor mental health, 65% say making new friends is harder now than when they were younger, compared to 42% of the overall population. Similarly, 61% of those with fair or poor mental health find it harder to maintain existing friendships, compared to 34% of the general over-50 population.

A desire to form new friendships remains high, with 75% of older adults expressing interest in developing new friendships (14% very interested, 61% somewhat interested). This interest is particularly strong among those who live alone and those who report feeling lonely, suggesting a recognition of the importance of social connections.

The study also reveals an interesting trend among friendships between people from different age groups. Among those with at least one close friend, 46% have a friend from a different generation (defined as being at least 15 years older or younger). Of these, 52% have friends from both older and younger generations, while 35% have friends only from younger generations, and 13% have friends only from older generations. This diversity in friendship age ranges suggests that meaningful connections can transcend generational boundaries.

The implications of these findings extend beyond individual relationships. Healthcare providers are encouraged to recognize the vital role that friends play in their patients’ health journeys, from encouraging preventive care to supporting healthy behaviors. Community organizations are urged to create more opportunities for social connection, particularly those that are inclusive and accessible to people with varying health status.

“When health care providers see older adults, we should also ask about their social support network, including close friends, especially for those with more serious health conditions,” says Dr. Jeffrey Kullgren, the poll director and primary care physician at the VA Ann Arbor Healthcare System.

As one considers the cycle of health and friendship revealed in this study, it becomes clear that the old adage about friendship being the best medicine might have more truth to it than we realized. In an age where healthcare increasingly focuses on holistic well-being, perhaps it’s time to add “friendship prescription” to the standard of care.

Source : https://studyfinds.org/friendship-after-50-social-support/

If the universe is already infinite, what is it expanding into?

NASA’s James Webb Space Telescope has produced the deepest and sharpest infrared image of the distant universe to date. Known as Webb’s First Deep Field, this image of galaxy cluster SMACS 0723 is overflowing with detail.Thousands of galaxies – including the faintest objects ever observed in the infrared – have appeared in Webb’s view for the first time. (Credits: NASA, ESA, CSA, and STScI)

When you bake a loaf of bread or a batch of muffins, you put the dough into a pan. As the dough bakes in the oven, it expands into the baking pan. Any chocolate chips or blueberries in the muffin batter become farther away from each other as the muffin batter expands.

The expansion of the universe is, in some ways, similar. But this analogy gets one thing wrong – while the dough expands into the baking pan, the universe doesn’t have anything to expand into. It just expands into itself.

It can feel like a brain teaser, but the universe is, by definition, everything there is. In the expanding universe, there is no pan. Just dough. Even if there were a pan, it would be part of the universe, and therefore it would expand along with everything else.

Even for me, a teaching professor in physics and astronomy who has studied the universe for years, these ideas are hard to grasp. You don’t experience anything like this in your daily life. It’s like asking what direction is farther north of the North Pole.

Another way to think about the universe’s expansion is by thinking about how other galaxies are moving away from our galaxy, the Milky Way. Scientists know the universe is expanding because they can track other galaxies as they move away from ours. They define expansion using the rate that other galaxies move away from us. This definition allows them to imagine expansion without needing something to expand into.

The expanding universe
The universe started with the Big Bang 13.8 billion years ago. The Big Bang describes the origin of the universe as an extremely dense, hot singularity. This tiny point suddenly went through a rapid expansion called inflation, where every place in the universe expanded outward. But the name Big Bang is misleading. It wasn’t a giant explosion, as the name suggests, but a time where the universe expanded rapidly.

The universe then quickly cooled down, and it started making matter and light. Eventually, it evolved into what we know today as our universe.

The idea that our universe was not static and could be expanding or contracting was first published by the physicist Alexander Friedmann in 1922. He showed mathematically that an expanding universe was possible.

While Friedmann showed that the universe could be expanding, it was Edwin Hubble who looked deeper into the expansion rate. Many other scientists had observed that other galaxies are moving away from the Milky Way, but in 1929, Hubble published his famous paper confirming that the entire universe was expanding. Decades later, astronomers discovered that the rate of expansion is increasing.

This discovery continues to puzzle astrophysicists. What phenomenon allows the universe to overcome the gravity holding it together and instead expand, pulling objects in the universe apart? And on top of all that, why is its expansion rate speeding up over time?

Many scientists use a visual called the expansion funnel to describe how the universe’s expansion has sped up since the Big Bang. Imagine a deep funnel with a wide brim. The left side of the funnel – the narrow end – represents the beginning of the universe. As you move toward the right, you are moving forward in time. The cone widening represents the universe’s expansion.

Scientists haven’t been able to determine where the energy causing this accelerating expansion comes from; they haven’t been able to detect it or measure it. Because they can’t see or directly measure this type of energy, they call it dark energy.

According to researchers’ models, dark energy must be the most common form of energy in the universe, making up about 68% of the total energy in the universe. The energy from everyday matter, which makes up the Earth, the Sun and everything we can see, accounts for only about 5% of all energy.

Outside the expansion funnel
So, what is outside the expansion funnel?

Scientists don’t have evidence of anything beyond our known universe. However, some predict that there could be multiple universes. A model that includes multiple universes could fix some of the problems scientists encounter with the current models of our universe.

One major problem with our current physics is that researchers can’t integrate quantum mechanics, which describes how physics works on a very small scale, and gravity, which governs large-scale physics.

The rules for how matter behaves at the small scale depend on probability and quantized, or fixed, amounts of energy. At this scale, objects can come into and pop out of existence. Matter can behave as a wave. The quantum world is very different from how we see the world.

At large scales, in the regime physicists call classical mechanics, objects behave how we expect them to behave on a day-to-day basis. Objects are not quantized and can have continuous amounts of energy. Objects do not pop in and out of existence.

The quantum world behaves kind of like a light switch, where energy has only an on-off option. The world we see and interact with behaves like a dimmer switch, allowing for all levels of energy.

But researchers run into problems when they try to study gravity at the quantum level. At the small scale, physicists would have to assume gravity is quantized. But the research many of them have conducted doesn’t support that idea.

Source: https://studyfinds.org/universe-infinite-still-expanding/

Human settlement of Mars isn’t as far off as we might think

Illustration of human colony on Mars. (© Anastasiya – stock.adobe.com)

Could humans expand out beyond their homeworld and establish settlements on the planet Mars? The idea of settling the Red Planet has been around for decades. However, it has been seen by skeptics as a delusion at best and mere bluster at worst.

Mars might seem superficially similar to Earth in a number of ways. But its atmosphere is thin and humans would need to live within pressurized habitats on the surface.

Yet in an era where space tourism has become possible, the Red Planet has emerged as a dreamland for rich eccentrics and techno-utopians. As is often the case with science communication, there’s a gulf between how close we are to this ultimate goal and where the general public understands it to be.

However, I believe there is a rationale for settling Mars and that this objective is not as far off as some would believe. There are actually a few good reasons to be optimistic about humanity’s future on the Red Planet.

First, Mars is reachable. During an optimal alignment between Earth and Mars as the two planets orbit the Sun, it’s possible to travel there in a spacecraft in six to eight months. Some very interesting new engine designs suggest that it could be done in two months. But based on technology that’s ready to go, it would take astronauts six months to travel to Mars and six months back to Earth.

Astronauts have already stayed for this long on the International Space Station (ISS) and on the Soviet orbiting lab Mir. We can get there safely and we have already shown that we can reliably land robots on the surface. There’s no technical reason why we couldn’t do the same with humans.

Second, Mars is abundant in the raw materials required for humans to “live off the land”; in other words, achieve a level of self-sufficiency. The Red Planet has plentiful carbon, nitrogen, hydrogen and oxygen which can be separated and isolated, using processes developed on Earth. Mars is interesting and useful in a multitude of ways that the moon isn’t. And we have technology on Earth to enable us to stay and settle Mars by making use of its materials.

A third reason for Mars optimism is the radical new technology that we can put to use on a crewed mission to the planet. For example, Moxie (Mars Oxygen In-Situ Resource Utilization Experiment) is a project developed by scientists at the Massachusetts Institute of Technology (MIT) that takes in Martian atmosphere and extracts oxygen from it. Byproducts of the process – carbon monoxide, nitrogen and argon – can be vented.

When scaled up, similar machines would be able to produce breathable air, rocket fuel and water. This makes it easier to travel to the planet and live on the surface because it’s not necessary to bring these commodities from Earth – they can be made once on Mars. Generating fuel on the surface would also make any future habitat less reliant on electric or solar-powered vehicles.

But how would we build the habitats for our Mars settlers? Space architect Melodie Yasher has developed ingenious plans for using robots to 3D print the habitats, landing pads and everything needed for human life on Mars. Using robots means that these could all be manufactured on Mars before humans landed. 3D printed homes have already been demonstrated on Earth.

Volunteers have also spent time living in simulated Mars habitats here on Earth. These are known as Mars analogues. The emergency medicine doctor Beth Healey spent a year overwintering in Antarctica (which offers many parallels with living on another planet) for the European Space Agency (ESA) and communicates her experience regularly.

She is not alone, as each year sees new projects in caves, deserts and other extreme environments, where long term studies can explore the physical and psychological demands on humans living in such isolated environments.

Finally, the Mars Direct plan devised by Dr. Robert Zubrin has existed for more than 30 years, and has been modified to account for modern technology as the private sector has grown. The original plan was based on using a Saturn V rocket (used for the Apollo missions in the 1960s and 1970s) to launch people. However, this can now be accomplished using the SpaceX Falcon 9 rocket and a SpaceX Dragon capsule to carry crew members.

Several uncrewed launches from Earth could ferry necessary equipment to Mars. These could include a vehicle for crew members to return on. This means that everything could be ready for the first crew once they arrived.

Source : https://studyfinds.org/human-settlement-of-mars-closer/

It’s Friday the 13th. Why is this number feared worldwide?

(Credit: Prazis Images/Shutterstock)

Of all the days to stay in bed, Friday the 13th is surely the best. It’s the title of a popular (if increasingly corny) horror movie series; it’s associated with bad luck, and it’s generally thought to be a good time not to take any serious risks.

Even if you try to escape it, you might fail, as happened to New Yorker Daz Baxter. On Friday 13th in 1976, he decided to just stay in bed for the day, only to be killed when the floor of his apartment block collapsed under him. There’s even a term for the terror the day evokes: Paraskevidekatriaphobia was coined by the psychotherapist Donald Dossey, a specialist in phobias, to describe an intense and irrational fear of the date.

Unfortunately, there is always at least one Friday the 13th in a year, and sometimes there are as many as three. Today is one of them. But no matter how many times the masked killer Jason Voorhees from Friday the 13th returns to haunt our screens, this fear has its basis in our own minds rather than in science.
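That calendar claim is easy to verify. The short snippet below (an illustration, not part of the original article) counts the Friday the 13ths in each year using Python's standard library:

```python
import datetime

def friday_13ths(year: int) -> int:
    """Count how many months of `year` have a Friday the 13th."""
    return sum(
        1
        for month in range(1, 13)
        if datetime.date(year, month, 13).weekday() == 4  # 4 = Friday
    )

# Across two centuries, every year has between one and three of them.
counts = {friday_13ths(y) for y in range(1900, 2100)}
print(counts)  # → {1, 2, 3}
```

Because the 13ths of the twelve months always cover every day of the week, no year in the Gregorian calendar can escape having at least one.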

One study did show a small rise in accidents on that day for women drivers in Finland, but much of the problem was due to anxiety rather than general bad luck. Follow-up research found no consistent evidence of a rise in accidents on the day but suggested that if you’re superstitious, it might be better not to get behind the wheel of a car on it anyway.

The stigma against Friday the 13th likely comes from a merging of two different superstitions. In the Christian tradition, Jesus died on a Friday, after 13 people gathered at the Last Supper. In Teutonic legend, the god Loki arrives at a dinner party set for 12 gods, making him the outcast 13th at the table and leading to the death of another guest.

Elsewhere in the world, 13 is less unlucky. In Hinduism, people fast to worship Lord Shiva and Parvati on Trayodashi, the 13th day of the Hindu lunar month. There are 13 Buddhas in the Shingon sect of Buddhism, and The Tibetan Book of the Great Liberation mentions 13 signs as lucky rather than unlucky.

In Italy, it is more likely to be “heptadecaphobia”, or fear of the number 17, that leads to a change of plans. In Greece, Spain, and Mexico, the “unlucky” day is not Friday 13th, but Tuesday 13th.

In China, the number four is considered significantly unlucky, as it is nearly homophonous with the word “death”. In a multicultural country like Australia, you may find hotels and cinemas missing both the 13th and fourth floors, out of respect for the trepidation people can have about those numbers.

The lure of superstition

Superstitions were one of the first elements of paranormal beliefs studied in the early 1900s. While many are now just social customs rather than a genuine conviction, their persistence is remarkable.

If you cross your fingers, feel alarmed at breaking a mirror, find a “lucky” horseshoe, or throw spilled salt over your shoulder, you are engaging in long-held practices that can have a powerful impact on your emotions. Likewise, many students now heading into their semester exams may bring lucky charms, such as a particular pen or a favorite pair of socks, into the lecture room.

In sports, baseball star Nomar Garciaparra is known for his elaborate batting ritual. Other athletes wear “lucky gear” or put on their gloves in a particular order. The great cricket umpire David Shepherd stood on one leg whenever the score reached 111. These sorts of superstitions are humorously depicted in the film Silver Linings Playbook. Interestingly, it is often the most successful athletes who hold these superstitions and stick to them.

Source : https://studyfinds.org/friday-the-13th-number-feared/

Scientists close to creating ‘simple pill’ that cures diabetes

Diabetes with insulin, syringe, vials, pills (© Sherry Young – stock.adobe.com)

Imagine a world where diabetes could be treated with a simple pill that essentially reprograms your body to produce insulin again. Researchers at Mount Sinai have taken a significant step toward making this possibility a reality, uncovering a groundbreaking approach that could potentially help over 500 million people worldwide living with diabetes.

Diabetes, affecting 537 million people globally, develops when cells in the pancreas known as beta cells become unable to produce insulin, a hormone essential for regulating blood sugar levels. In both Type 1 and Type 2 diabetes, patients experience a marked reduction in functional, insulin-producing beta cells. While current treatments help manage symptoms, researchers have been searching for ways to replenish these crucial cells.

The journey to this latest discovery began in 2015 when Mount Sinai researchers identified harmine, a drug belonging to a class called DYRK1A inhibitors, as the first compound capable of stimulating insulin-producing human beta cell regeneration. The research team continued to build on this foundation, reporting in 2019 and 2020 that harmine could work synergistically with other medications, including GLP-1 receptor agonists like semaglutide and exenatide, to enhance beta cell regeneration.

In July 2024, researchers reported remarkable results: harmine alone increased human beta cell mass by 300 percent in their studies, and when combined with a GLP-1 receptor agonist like Ozempic, that increase reached 700 percent.

However, there’s an even more exciting part of this discovery. These new cells might come from an unexpected source. Researchers discovered that alpha cells, another type of pancreatic cell that’s abundant in both Type 1 and Type 2 diabetes, could potentially be transformed into insulin-producing beta cells.

“This is an exciting finding that shows harmine-family drugs may be able to induce lineage conversion in human pancreatic islets,” says Dr. Esra Karakose, Assistant Professor of Medicine at Mount Sinai and the study’s corresponding author, in a statement. “It may mean that people with all forms of diabetes have a large potential ‘reservoir’ for future beta cells, just waiting to be activated by drugs like harmine.”

Using single-cell RNA sequencing technology, the researchers analyzed 109,881 individual cells from human pancreatic islets donated by four adults. This technique allowed them to study each cell’s genetic activity in detail, suggesting that “cycling alpha cells” may have the potential to transform into insulin-producing beta cells. Alpha cells, being the most abundant cell type in pancreatic islets, could serve as an important source of new beta cells if this transformation can be successfully controlled.

The Mount Sinai team is now moving these studies toward human trials.

“A simple pill, perhaps together with a GLP1RA like semaglutide, is affordable and scalable to the millions of people with diabetes,” says Dr. Andrew F. Stewart, director of the Mount Sinai Diabetes, Obesity, and Metabolism Institute.

While the research is still in its early stages, it offers hope to millions of people who currently manage diabetes through daily insulin injections or complex medication regimens. The possibility of a treatment that could essentially restart the body’s insulin production is nothing short of revolutionary.

The study, published in the journal Cell Reports Medicine, represents a significant step forward in diabetes research. By potentially turning one type of pancreatic cell into another, researchers may have found a way to essentially reprogram the body’s own cellular mechanisms to combat diabetes.

Source : https://studyfinds.org/simple-pill-cure-diabetes/

Polio was supposedly wiped out – Now the virus has been found in Europe’s wastewater

(Credit: Babul Hosen/Shutterstock)

In 1988, the World Health Organization (WHO) called for the global eradication of polio. Within a decade, one of the three poliovirus strains was already virtually eradicated — meaning a permanent reduction of the disease to zero new cases worldwide.

Polio, also known as poliomyelitis, is an extremely contagious disease caused by the poliovirus. It attacks the nervous system and can lead to full paralysis within hours. The virus enters through the mouth and multiplies in the intestine. Infected people shed poliovirus into the environment, where it spreads to others via the fecal-oral route.

About one in every 200 infections results in irreversible paralysis (usually affecting the legs). Of those who become paralyzed, 5–10% die due to immobilized breathing muscles.
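A back-of-the-envelope calculation makes these rates concrete. The sketch below (in Python) uses only the figures quoted above; the 100,000-infection cohort size is an arbitrary illustration, not a number from any study:

```python
# Illustrative arithmetic only: the rates come from the text above,
# the 100,000-infection cohort is a made-up example size.
infections = 100_000
paralysis_rate = 1 / 200              # ~1 in 200 infections cause paralysis
death_low, death_high = 0.05, 0.10    # 5-10% of paralyzed patients die

paralyzed = infections * paralysis_rate
deaths_low, deaths_high = paralyzed * death_low, paralyzed * death_high
print(f"{paralyzed:.0f} paralytic cases, "
      f"{deaths_low:.0f}-{deaths_high:.0f} deaths per {infections:,} infections")
# → 500 paralytic cases, 25-50 deaths per 100,000 infections
```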

Since 1988, the global number of poliovirus cases has decreased by over 99%. Today, only two countries — Pakistan and Afghanistan — are considered “endemic” for polio, meaning the disease is still regularly transmitted there.

Yet in recent months, poliovirus has been detected in wastewater in Germany, Spain and Poland. This discovery does not confirm infections in the population, but it is a wake-up call for Europe, which was declared polio-free in 2002. Any gaps in vaccination coverage could see a resurgence of the disease.

Poliovirus strains originating from regions where the virus remained in circulation led to outbreaks among unvaccinated people in Tajikistan and Ukraine in 2021, and Israel in 2022. By contrast, in the UK — where poliovirus was detected in wastewater in 2022 — no cases of paralytic disease were recorded.

This contrast highlights how varied the impact of poliovirus detection can be. Why? In areas with under-immunized populations, the virus can circulate widely and cause paralysis. But in communities with strong vaccination coverage, the virus often remains limited to symptomless (“asymptomatic”) infections or is detectable only in wastewater.

In this sense, the mere detection of the virus in the environment can serve as a canary in the coal mine. It warns public health officials to check vaccination coverage and take measures such as boosting vaccination campaigns, improving access to healthcare and enhancing disease surveillance to prevent outbreaks.

Rich source of information

Wastewater surveillance, an approach reinvigorated during the COVID pandemic, has proven invaluable for early detection of disease outbreaks. Wastewater is a rich source of information: it contains a blend of human excrement, including viruses, bacteria, fungi and chemical traces. Analyzing this mixture offers valuable insights for public health officials.

Routine wastewater testing in the three countries revealed a specific vaccine-derived strain. No polio cases were reported in any of the three countries.

Vaccine-derived poliovirus strains emerge from the weakened live poliovirus contained in oral polio vaccines. If this weakened virus circulates long enough among under-immunized or unimmunized groups or in people with weakened immune systems (such as transplant recipients or those undergoing chemotherapy), it can genetically shift back into a form capable of causing disease.

In this case, it is possible that the virus was shed into the sewage by an infected but asymptomatic person. It is also possible that a person recently vaccinated with the oral vaccine (which contains the weakened virus) shed it into the wastewater, where it subsequently evolved and re-acquired the mutations that cause paralysis.

A different type of vaccine exists. The inactivated polio vaccine (IPV) cannot revert to a dangerous form. However, it is more expensive and harder to deliver, since it must be injected by trained health workers. This can limit the feasibility of deploying it in poor countries, often where the need to vaccinate is greatest.

This does not mean the oral polio vaccine is not worthwhile. On the contrary, it has been instrumental in eradicating certain poliovirus strains globally. The real issue arises when vaccination coverage is insufficient.

In 2023, polio immunization coverage among one-year-olds in Europe stood at around 95%. This is well above the 80% “herd immunity” threshold — the point at which enough people in a population are vaccinated that vulnerable groups are protected from the disease.

In Spain, Germany and Poland, coverage with three doses ranges from 85–93%, protecting most people from severe disease. Yet under-immunized groups and those with weakened immune systems remain at risk.

The massive progress in polio eradication that happened over the past three decades is the result of the global effort to fight the disease. But mounting humanitarian crises — sparked by conflict, natural disasters and climate change — are significantly disrupting vaccination programs essential for safeguarding public health.

Source : https://studyfinds.org/polio-found-in-europe-wastewater/

Universe expanding faster than physics can explain: Webb telescope confirms mysterious growth spurt

Primordial creation: The universe begins with the Big Bang, an extraordinary moment of immense energy, igniting formation of everything in existence. (© Alla – stock.adobe.com)

When two of humanity’s most powerful eyes on the cosmos agree something strange is happening, astronomers tend to pay attention. Now, the James Webb Space Telescope has backed up what Hubble has been telling us for years: the universe is expanding faster than our best physics can explain, and nobody knows why.

Scientists have long known that our universe is expanding, but exactly how fast it’s growing is an ongoing and fascinating debate in the astronomy world. The expansion rate, known as the “Hubble constant,” helps scientists map the universe’s structure and understand its state billions of years after the Big Bang. This latest discovery suggests we may need to rethink our understanding of the universe itself.

“The discrepancy between the observed expansion rate of the universe and the predictions of the standard model suggests that our understanding of the universe may be incomplete,” says Nobel laureate and lead author Adam Riess, a Bloomberg Distinguished Professor at Johns Hopkins University, in a statement. “With two NASA flagship telescopes now confirming each other’s findings, we must take this problem very seriously—it’s a challenge but also an incredible opportunity to learn more about our universe.”

This research, published in The Astrophysical Journal, builds on Riess’ Nobel Prize-winning discovery that the universe’s expansion is accelerating due to a mysterious “dark energy” that permeates the vast stretches of space between stars and galaxies. Think of this expanding universe like a loaf of raisin bread rising in the oven. As the dough expands, the raisins (representing galaxies) move farther apart from each other. While this force pushes galaxies apart, exactly how fast this is happening remains hotly debated.

For over a decade, scientists have used two different methods to measure this expansion rate. One method looks at ancient light from the early universe, like examining a baby photo to understand how someone grew. The other method, using telescopes to observe nearby galaxies, looks at more recent cosmic events. These two methods give significantly different answers about how fast the universe is expanding – and not just slightly different.

While theoretical models predict the universe should be expanding at about 67-68 kilometers per second per megaparsec (a unit of cosmic distance), telescope observations consistently show a faster rate of 70-76 kilometers per second per megaparsec, averaging around 73. This significant discrepancy is what scientists call the “Hubble tension.”
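To make the size of that gap concrete, here is a minimal Python sketch using the round numbers quoted above; the 100-megaparsec galaxy at the end is a made-up example, included only to show how Hubble's law (v = H0 × d) turns the constant into a recession velocity:

```python
# Figures taken from the text above; everything else is illustrative.
predicted_h0 = 67.5   # km/s/Mpc, midpoint of the 67-68 model prediction
measured_h0 = 73.0    # km/s/Mpc, average of the 70-76 observed range

# The "Hubble tension" as a percentage discrepancy.
tension_pct = (measured_h0 - predicted_h0) / predicted_h0 * 100
print(f"Hubble tension: {tension_pct:.1f}%")   # roughly the 8-9% gap

# Hubble's law: a hypothetical galaxy 100 megaparsecs away recedes at
# v = H0 * d, so the two H0 values imply measurably different speeds.
distance_mpc = 100
print(f"v = {measured_h0 * distance_mpc:.0f} km/s (measured H0)")
print(f"v = {predicted_h0 * distance_mpc:.0f} km/s (predicted H0)")
```

With the measured value, the example galaxy recedes about 550 km/s faster than the model predicts, which is why precise distance yardsticks like Cepheids matter so much.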

To help resolve this mystery, researchers turned to the James Webb Space Telescope, the most powerful space observatory ever built. “The Webb data is like looking at the universe in high definition for the first time and really improves the signal-to-noise of the measurements,” says Siyang Li, a graduate student at Johns Hopkins University who worked on the study.

Webb’s super-sharp vision allowed it to examine these cosmic distances in unprecedented detail. The telescope looked at about one-third of the galaxies that Hubble had previously studied, using a nearby galaxy called NGC 4258 as a reference point – like using a well-known landmark to measure other distances.

The researchers used three different methods to measure these cosmic distances, each acting as an independent check on the others. First, they observed special pulsating stars called Cepheid variables, which astronomers consider the “gold standard” for measuring distances in space. These stars brighten and dim in a precise pattern that reveals their true brightness, making them reliable cosmic yardsticks. The team also looked at the brightest red giant stars in each galaxy and observed special carbon-rich stars, providing two additional ways to verify their measurements.

When they combined all these observations, they found something remarkable: All three methods pointed to nearly identical results, with Webb’s measurements matching Hubble’s almost exactly. The differences between measurements were less than 2% – far smaller than the roughly 8-9% discrepancy that creates the Hubble tension.

This agreement might seem like a simple confirmation, but it actually deepens one of astronomy’s biggest mysteries. Scientists now believe this discrepancy might point to missing pieces in our understanding of the cosmos. Recent research has revealed that mysterious components called dark matter and dark energy make up about 96% of the universe’s content and drive its accelerated expansion. Yet even these exotic components don’t fully explain the Hubble tension.

“One possible explanation for the Hubble tension would be if there was something missing in our understanding of the early universe, such as a new component of matter—early dark energy—that gave the universe an unexpected kick after the big bang,” explains Marc Kamionkowski, a Johns Hopkins cosmologist. “And there are other ideas, like funny dark matter properties, exotic particles, changing electron mass, or primordial magnetic fields that may do the trick. Theorists have license to get pretty creative.”

Whether this cosmic puzzle leads us to discover new forms of energy, exotic particles, or completely novel physics, one thing is certain: the universe is expanding our understanding just as surely as it’s expanding itself. And thanks to Webb and Hubble, we’re along for the ride.

Source : https://studyfinds.org/universe-expansion-rate-physics-webb-telescope/
