Gen Z Are More Anxious Than Any Other Generation

Gen Z students are experiencing poor mental health and a lack of hope for the future. (To be honest, I think most generations are.)

According to professors who teach Gen Zers, the generation appears even more anxious than their Millennial counterparts and has completely lost hope in the American Dream. Gen Z also reports the poorest mental health of any generation, and only 44 percent of Gen Zers say they feel prepared for the future.

“The biggest change that I’ve seen is they have this fear of failure or making the wrong decision, and I think it’s because they just don’t want to go through more mental anguish,” Matt Prince, an adjunct professor at Chapman University, told Fortune.

Prince added that Gen Z seems to have a “huge weight on their shoulders.”

Millennials have a long-held reputation for being lazy complainers who spend too much money on avocado toast. For a while, that’s why we couldn’t afford homes… it had nothing to do with the insane housing market, of course. Gen Zers tend to wear similar labels, receiving criticisms for things like being “chronically online” and “easily offended.”

Gen Z doesn’t believe in the American Dream

But let’s be honest: all that time spent on social media (and from such a young age) can’t be healthy for your brain.

Alyssa Mancao, a Los Angeles therapist with a Gen Z client base, told Axios that because this generation grew up with social media, they tend to compare themselves to others more, which often leads to feelings of inadequacy.

Plus, given the state of the world right now, I empathize with Gen Zers who are trying to make a name for themselves in this economy.

It’s also not shocking that so many Gen Zers are losing faith in the American Dream. I mean, it’s hard to imagine living a comfortable life filled with love, family, and freedom when many of us work long hours and still can’t afford groceries or a home.

“I think there is just an overarching fear of failure or making mistakes or making the wrong turn in their career trajectory that would emotionally or physically set them back years,” Prince told Fortune. “And so I think that anguish is just an anchor that’s holding them back.”

Another California-based therapist, Erica Basso, pointed out that there’s a ton of uncertainty plaguing Gen Zers today.

Source : https://www.vice.com/en/article/gen-z-are-more-anxious-than-any-other-generation/

Beauty bias? Attractive people land better jobs, higher salaries

(© deagreez – stock.adobe.com)

Think your next promotion depends purely on your skills and experience? A recent study suggests your appearance might matter more than you’d expect. Research looking at over 43,000 business school graduates found that attractive professionals earn thousands more each year than their equally qualified colleagues — and this advantage grows stronger over time.

The study, conducted by researchers at the University of Southern California and Carnegie Mellon University, tracked MBA graduates for 15 years after they left business school. What they discovered was eye-opening: People rated as attractive were 52.4% more likely to land prestigious positions, leading to an average bump in salary of $2,508 per year. For the most attractive individuals — those in the top 10% — that yearly advantage jumped to over $5,500.

This advantage, which researchers call the “attractiveness premium,” shows up across different industries but not always in the same way. Fields involving lots of face-time with clients and colleagues, like management and consulting, showed the biggest benefits for attractive individuals. Meanwhile, technical roles like IT and engineering, where work often happens behind the scenes, showed much smaller effects.

This disparity may explain why attractive professionals tend to gravitate away from technical fields and toward management positions, a phenomenon the researchers termed “horizontal sorting.”

Even more remarkable was the “extreme attractiveness premium.” Individuals in the top 10% of the attractiveness scale enjoyed an 11% advantage in career outcomes compared to those in the bottom 10%.

What makes these findings particularly noteworthy is that the benefits of being attractive don’t fade over time, even after people have proven their abilities. Each year, attractive professionals gained a small but consistent advantage over their peers, which added up significantly over the course of their careers. For perspective, the salary difference linked to attractiveness was about one-third the size of the gender pay gap among the same group of graduates.
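
To put that accumulation in concrete terms, here is a rough back-of-envelope sketch, not the study's model, that simply repeats the reported average yearly figures across the 15-year window the researchers tracked; the flat-accumulation assumption is purely illustrative.

```python
# Back-of-envelope illustration only (not the study's statistical model):
# accumulate the reported average yearly salary advantages across the
# 15-year post-MBA window, assuming each figure repeats as a flat annual bump.
YEARS_TRACKED = 15
average_premium = 2_508      # reported average yearly advantage, USD
top_decile_premium = 5_500   # reported yearly advantage for the top 10%, USD

for label, yearly in [("average attractive graduate", average_premium),
                      ("top 10% for attractiveness", top_decile_premium)]:
    total = yearly * YEARS_TRACKED
    print(f"{label}: ~${yearly:,}/year -> ~${total:,} over {YEARS_TRACKED} years")
```

Under that simple assumption, the gap works out to roughly $38,000 for the average attractive graduate and over $80,000 for the most attractive group, before any promotions or raises compound it further.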

“This study shows how appearance shapes not just the start of a career, but its trajectory over decades,” explains Nikhil Malik, who led the study at USC, in a statement. “These findings reveal a persistent and compounding effect of beauty in professional settings.”

To reach these conclusions, the researchers used advanced computer programs to analyze professional profile pictures and career progression data. Unlike previous studies that only looked at short-term effects or specific jobs, this research followed real careers across many industries and positions over 15 years.

The attractiveness premium was particularly evident among graduates from top-tier MBA programs, where competition for advancement is especially intense. In these high-stakes environments, where candidates already possess strong qualifications, appearance appeared to play a notable role in determining who reached senior leadership positions.

“It’s a stark reminder that success is influenced not just by skills and qualifications but also by societal perceptions of beauty,” observes Kannan Srinivasan, another researcher from Carnegie Mellon University.

The findings raise important questions about fairness in the workplace. While many companies now offer training to address unconscious bias related to gender and race, appearance-based advantages may be harder to tackle. These biases often operate through subtle social preferences rather than obvious discrimination.

“This research underscores how biases tied to physical appearance persist in shaping career outcomes, even for highly educated professionals,” notes Param Vir Singh, one of the study’s co-authors from Carnegie Mellon University.

Creating more equitable workplaces has been a top priority for corporations in recent years, yet these findings suggest that appearance-based advantages may require new approaches to workplace policy and practice. The persistent nature of the attractiveness premium indicates that simple awareness or training programs may be insufficient to address this form of bias.

Source : https://studyfinds.org/beauty-bias-attractive-people-better-jobs-salary/

Aging ‘hot spot’: Where the brain first starts showing signs of getting older

(© vegefox.com – stock.adobe.com)

What if we could pinpoint exactly where aging begins in the brain? Scientists at the Allen Institute have done just that, creating the first detailed cellular atlas of brain aging by analyzing millions of individual cells and identifying key regions where age-related changes first emerge.

The brain is like a massive city with thousands of different neighborhoods, each populated by unique types of cells performing specific jobs. Until now, researchers haven’t had a detailed “census” showing how each neighborhood changes as the city ages. This study, published in Nature, provides exactly that, examining cells from young adult mice (2 months old) and aged mice (18 months old). While mice age differently than humans, this comparison roughly mirrors the differences between young adult and older adult human brains.

Researchers analyzed 16 different brain regions, covering about 35% of the mouse brain’s total volume. They identified 847 distinct types of cells and discovered that certain cell populations, particularly support cells called glia, were especially sensitive to aging. They found significant changes around the third ventricle in the hypothalamus, which is the brain’s master control center that regulates essential functions like hunger, body temperature, sleep, and hormone production.

As the brain ages, it shows increased immune activity across various cell types. The researchers observed this, particularly in microglia, which are specialized cells that act as the brain’s maintenance and immune defense system. They also found this in border-associated macrophages, another type of immune cell. These cells showed signs of increased inflammatory activity in aged mice, suggesting they were working harder to maintain brain health.

The research team discovered fascinating changes in specialized cells called tanycytes and ependymal cells that line fluid-filled chambers in the brain, particularly around the third ventricle.

“Our hypothesis is that those cell types are getting less efficient at integrating signals from our environment or from things that we’re consuming,” says lead author Kelly Jin, Ph.D., in a statement. This inefficiency might contribute to broader aging effects throughout the body.

The study revealed changes in cells that produce myelin, the crucial insulating material around nerve fibers. Like the protective coating around electrical wires, myelin helps neurons communicate effectively. The researchers found that aging affects these insulator-producing cells, which could impact how well brain circuits function.

Most intriguingly, the researchers identified specific groups of neurons in the hypothalamus that showed dramatic changes with age. These neurons, which help control appetite, metabolism, and energy use throughout the body, showed signs of both decreased function and increased immune activity. This finding aligns with previous research suggesting that dietary factors, like intermittent fasting or calorie restriction, might influence lifespan.

“Aging is the most important risk factor for Alzheimer’s disease and many other devastating brain disorders. These results provide a highly detailed map for which brain cells may be most affected by aging,” says Dr. Richard J. Hodes, director of NIH’s National Institute on Aging.

While this research was conducted in mice, the findings provide a crucial roadmap for understanding human brain aging. The identification of specific vulnerable cell types and regions gives scientists clear targets for future development of therapies to maintain brain health throughout life.

Source : https://studyfinds.org/where-your-brain-first-starts-aging/

Smartphone use leads to hallucinations, detachment from reality, aggression in teens as young as 13: Study

Smartphones are making teenagers more aggressive, detached from reality and causing them to hallucinate, according to new research.

Scientists at Sapien Labs concluded that the younger a person starts using a phone, the more likely they are to be crippled by a whole host of psychological ills, after surveying 10,500 teens between 13 and 17 in both the US and India for the study.

“People don’t fully appreciate that hyper-real and hyper-immersive screen experiences can blur reality at key stages of development,” addiction psychologist Dr. Nicholas Kardaras, who was not part of the team who did the study, told The Post.

More than a third of 13-year-olds surveyed said they feel aggression, while a fifth experience hallucinations, the survey by Sapien Labs showed.

“Their digital world can compromise their ability to distinguish between what’s real and what’s not. A hallucination by any other name.

“Screen time essentially acts as a toxin that stunts both brain development and social development,” Kardaras explained. “The younger a kid is when given a device, the higher the likelihood of mental health issues later on.”

The teens surveyed for “The Youth Mind: Rising Aggression and Anger” were significantly worse off than the older Gen Zers in Sapien Labs’ database, and the youngest respondents were more likely to suffer aggression, anger, and hallucinations than their older counterparts.

A staggering 37% of 13-year-olds reported experiencing aggression, compared with 27% of 17-year-olds.

Frighteningly, 20% of 13-year-olds say they suffer from hallucinations, compared to 12% of 17-year-olds.

“Whereas today’s 17-year-olds typically got a phone at age 11 or 12, today’s 13-year-olds got their phones at age 10,” the report noted.

Respondents also reported that they could be a danger to themselves: 42% of American girls and 27% of boys aged 13 to 17 admitted to struggling with suicidal thoughts.

The majority of teens polled said they had feelings of hopelessness, guilt, anxiety, and unwanted strange thoughts. More than 40% reported a sense of detachment from reality, mood swings, withdrawal, and traumatic flashbacks.

The researchers also warned phones are making kids withdraw from society.

“Once you have a phone, you spend a lot less time with in-person interaction, and the less you have in-person interaction, the less integrated you are into the real social fabric,” Sapien Labs chief scientist Tara Thiagarajan told The Post.

“You’re no longer connected in the way humans have been wired for hundreds of thousands of years.”

Kardaras, author of “Glow Kids,” also wasn’t surprised that aggression was associated with phone use.

He runs Omega Recovery, a tech addiction recovery center in Austin, where teens are often admitted after violently attacking their parents for taking their phones away.

Kids around the country have also been assaulting their teachers at school after having their devices confiscated, with one Tennessee teacher even pepper-sprayed by a female student after he took her cell phone.

The CDC also warned in 2023 that teen girls are at risk of increased violence — often at the hands of one another. Sapien Labs likewise flagged that the uptick in aggression is disproportionately occurring among females, according to its research.

“There’s a fairly rapid rise now in kids experiencing actual violence in school, and kids are fearing for their safety,” Thiagarajan said. “That is something that everyone should sit up and take note of.”

She pointed to a December school shooting in Wisconsin that was, unusually, carried out by a teenage girl. It had been 45 years since a female juvenile perpetrated a school shooting.

That shooter, Natalie “Samantha” Rupnow, 15, was known to have spent a great deal of her life online and had exhibited extremist views on the internet, but authorities are still looking for a motive for her shooting, after which she turned her gun on herself.

Source : https://nypost.com/2025/01/23/lifestyle/smartphone-use-leads-to-hallucinations-aggression-in-teens-study/

Adults with ADHD die 7 to 9 years sooner, alarming study shows

(© Rainer Hendla | Dreamstime.com)

Seven years. That’s how much sooner men with ADHD are dying compared to their neurotypical peers, and for women, the outlook is even bleaker at nearly nine years. These sobering numbers emerge from a new study examining life expectancy in adults with ADHD, painting a picture far more serious than the familiar narrative of forgotten appointments and misplaced keys.

The research, published in The British Journal of Psychiatry, analyzed data from nearly 10 million people across UK general practices, identifying over 30,000 adults with diagnosed ADHD. This represents just one in nine of the estimated ADHD population, as most cases remain undiagnosed.

“It is deeply concerning that some adults with diagnosed ADHD are living shorter lives than they should,” says Professor Josh Stott, senior author from University College London Psychology & Language Sciences, in a statement. “People with ADHD have many strengths and can thrive with the right support and treatment. However, they often lack support and are more likely to experience stressful life events and social exclusion, negatively impacting their health and self-esteem.”

Living with ADHD extends far beyond difficulties with focus and organization. People with the condition often experience differences in how they focus their attention. While they may possess high energy and an ability to focus intensely on their interests, they frequently struggle with mundane tasks. This can lead to challenges with impulsiveness, restlessness, and differences in planning and time management, potentially impacting success at school and work.

The study revealed that adults with ADHD were more likely to develop physical health conditions like diabetes, heart disease, chronic respiratory problems, and epilepsy. Mental health challenges were particularly prevalent: anxiety, depression, and self-harm occurred at notably higher rates than in the general population.

Treatment access remains a critical issue. A national survey found that while a third of adults with ADHD traits received medication or counseling for mental health issues (compared to 11% without ADHD), nearly 8% reported being denied requested mental health treatment. That rate is eight times higher than among those without ADHD.

“Only a small percentage of adults with ADHD have been diagnosed, meaning this study covers just a segment of the entire community,” explains lead author Dr. Liz O’Nions. “More of those who are diagnosed may have additional health problems compared to the average person with ADHD. Therefore, our research may over-estimate the life expectancy gap for people with ADHD overall, though more community-based research is needed to test whether this is the case.”

The research carries particular weight because it drew from the UK’s primary care system, where almost everyone is registered. This comprehensive dataset allowed researchers to track real health outcomes rather than relying on self-reported information or smaller samples.

The gender disparity, with women losing even more years of life than men, raises important questions about how ADHD manifests and is treated across genders. Historically, the condition has been better recognized in males, potentially leaving many women undiagnosed until later in life, if at all.

“Although many people with ADHD live long and healthy lives,” Dr. O’Nions notes, “our finding that on average they are living shorter lives than they should indicates unmet support needs. It is crucial that we find out the reasons behind premature deaths so we can develop strategies to prevent these in future.”

These findings demand immediate attention from healthcare providers and policymakers. Treatment and support for ADHD is associated with better outcomes, including reduced mental health problems and substance use.

The numbers speak for themselves: 7 years, 9 years, 3% of adults affected. But behind these statistics are real lives being cut short by a condition that’s often dismissed as a simple attention problem. This study doesn’t just highlight a gap in life expectancy, it exposes a gap in our understanding of what ADHD truly means for those living with it.

Source : https://studyfinds.org/adults-with-adhd-die-7-to-9-years-sooner-alarming-study-shows/

Why camel’s milk will be the next big immune-boosting dairy alternative

Camel milk may be better for our immune health than cow’s milk. (Leo Morgan/Shutterstock)

Move over almond milk. There’s a new dairy alternative in town, and it comes from camels. While that might sound strange to Western ears, new research from Edith Cowan University (ECU) in Australia suggests camel milk could offer some impressive health benefits, especially for our immune systems.

The study, published in Food Chemistry, presented an in-depth analysis comparing cow and camel milk, focusing particularly on proteins that affect immune function and digestion. While cow’s milk dominates global dairy production at over 81%, camel milk currently accounts for just 0.4% of global milk production. Unlike cow’s milk, it contains distinctive proteins that could make it especially valuable for immune system support and gut health.

When examining the cream portion of both milk types, scientists identified 1,143 proteins in camel milk compared to 851 in cow’s milk. The cream fraction proved particularly rich in immune system-supporting proteins and bioactive peptides that can help fight harmful bacteria and potentially protect against certain diseases. However, researchers emphasize that further testing is needed to confirm their potency.

“These bioactive peptides can selectively inhibit certain pathogens, and by doing so, create a healthy gut environment and also has the potential to decrease the risk of developing cardiovascular disease in future,” explains study researcher Manujaya Jayamanna Mohittige, a Ph.D. student at ECU, in a statement.

For people who struggle with dairy sensitivities, the study confirms that camel milk naturally lacks beta-lactoglobulin, the primary protein that triggers allergic reactions to cow’s milk. Additionally, camel milk contains lower lactose levels than cow’s milk, potentially making it easier to digest for some individuals.

Composition-wise, camel milk is slightly different from cow’s milk. Cow’s milk typically contains 85-87% water, with 3.8-5.5% fat, 2.9-3.5% protein, and 4.6% lactose. Camel milk, meanwhile, consists of 87-90% water, with protein content varying from 2.15-4.90%, fat ranging from 1.2-4.5%, and lactose levels between 3.5-4.5%.

Camel milk production currently ranks fifth globally behind cow, buffalo, goat, and sheep milk. Given Australia’s semi-arid climate and existing camel population, expanding production is an increasingly viable option.

“Camel milk is gaining global attention, in part because of environmental conditions. Arid or semi-arid areas can be challenging for traditional cattle farming, but perfect for camels,” adds Mohittige.

However, there are practical challenges to overcome. While dairy cows can produce up to 28 liters of milk daily, camels typically yield only about 5 liters. Several camel dairies already operate in Australia, but their production volumes remain relatively low.

This doesn’t mean everyone should rush out and switch to camel milk. It’s still relatively hard to find in many places and typically costs more than cow’s milk. But for people looking for alternatives to traditional dairy, especially those with certain milk sensitivities, camel milk might offer an interesting option.

While camel milk may not appear in your local supermarket just yet, this research reveals why it deserves attention beyond its novelty value. Its unique protein profile and immune-supporting properties may help explain why this unconventional dairy source has persisted for millennia in cultures worldwide.

Source : https://studyfinds.org/camel-milk-immune-boosting-alternative/

Gender shock: Study reveals men, not women, make more emotional money choices

(Credit: © Yuri Arcurs | Dreamstime.com)

When it comes to making financial decisions, conventional wisdom suggests keeping emotions out of the equation. But new research reveals that men, contrary to traditional gender stereotypes, may be significantly more susceptible to letting emotions influence their financial choices than women.

A study led by the University of Essex challenges long-held assumptions about gender and emotional decision-making. The research explores how emotions generated in one context can influence decisions in completely unrelated situations – a phenomenon known as the emotional carryover effect.

“These results challenge the long-held stereotype that women are more emotional and open new avenues for understanding how emotions influence decision-making across genders,” explains lead researcher Dr. Nikhil Masters from Essex’s Department of Economics.

Working with colleagues from the Universities of Bournemouth and Nottingham, Masters designed an innovative experiment comparing how different types of emotional stimuli affect people’s willingness to take financial risks. They contrasted a traditional laboratory approach targeting a single emotion (fear) with a more naturalistic stimulus based on real-world events that could trigger multiple emotional responses.

The researchers recruited 186 university students (100 women and 86 men) and randomly assigned them to one of three groups. One group watched a neutral nature documentary about the Great Barrier Reef. Another group viewed a classic fear-inducing clip from the movie “The Shining,” showing a boy searching for his mother in an empty corridor with tense background music. The third group watched actual news footage about the BSE crisis (commonly known as “mad cow disease”) from the 1990s, a real food safety scare that generated widespread public anxiety.

After watching their assigned videos, participants completed decision-making tasks involving both risky and ambiguous financial choices using real money. In the risky scenario, they had to decide between taking guaranteed amounts of money or gambling on a lottery with known 50-50 odds. The ambiguous scenario was similar, but participants weren’t told the odds of winning.

The results revealed striking gender differences. Men who watched either the horror movie clip or the BSE footage subsequently made more conservative financial choices compared to those who watched the neutral nature video. This effect was particularly pronounced for those who saw the BSE news footage, and even stronger when the odds were ambiguous rather than clearly defined.

Perhaps most surprisingly, women’s financial decisions remained remarkably consistent regardless of which video they watched. The researchers found that while women reported experiencing similar emotional responses to the videos as men did, these emotions didn’t carry over to influence their subsequent financial choices.

The study challenges previous assumptions about how specific emotions like fear influence risk-taking behavior. While earlier studies suggested that fear directly leads to more cautious decision-making, this new research indicates the relationship may be more complex. Even when the horror movie clip successfully induced fear in participants, individual variations in reported fear levels didn’t correlate with their financial choices.

Instead, the researchers discovered that changes in positive emotions may play a more important role than previously thought. When positive emotions decreased after watching either the horror clip or BSE footage, male participants became more risk-averse in their financial decisions.

The study also demonstrated that emotional effects on decision-making can be even stronger when using realistic stimuli that generate multiple emotions simultaneously, compared to artificial laboratory conditions designed to induce a single emotion. This suggests that real-world emotional experiences may have more powerful influences on our financial choices than controlled laboratory studies have indicated.

The research team is now investigating why only men appear to be affected by these carryover effects. “Previous research has shown that emotional intelligence helps people to manage their emotions more effectively. Since women generally score higher on emotional intelligence tests, this could explain the big differences we see between men and women,” explains Dr. Masters.

These findings could have significant implications for understanding how major news events or crises might affect financial markets differently across gender lines. They also suggest the potential value of implementing “cooling-off” periods for important financial decisions, particularly after exposure to emotionally charged events or information.

“We don’t make choices in a vacuum and a cooling-off period might be crucial after encountering emotionally charged situations,” says Dr. Masters, “especially for life-changing financial commitments like buying a home or large investments.”

Source : https://studyfinds.org/study-men-not-women-make-more-emotional-money-choices/

Having a bigger waist could help some diabetes patients live longer

(© spaskov – stock.adobe.com)

Most health professionals would likely raise an eyebrow at the suggestion that a larger waist circumference might benefit some diabetes patients. Yet that’s exactly what researchers discovered when they analyzed survival rates among more than 6,600 American adults with diabetes, finding that the relationship between waist size and mortality follows unexpected patterns that vary significantly between men and women.

Medical professionals have long preached the dangers of excess belly fat, particularly for people with diabetes. However, the new analysis found that the relationship between waist circumference and mortality follows distinct U-shaped and J-shaped patterns for women and men, respectively, suggesting that both too little and too much belly fat could be problematic.

Researchers from Northern Jiangsu People’s Hospital in China analyzed data from the National Health and Nutrition Examination Survey (NHANES), a massive health study of Americans conducted between 2003 and 2018. They tracked the survival outcomes of 3,151 women and 3,473 men with diabetes, following them for roughly six to seven years on average.

The findings challenge conventional wisdom: women with diabetes actually showed the lowest mortality risk when their waist circumference hit 107 centimeters (about 42 inches), well above what’s typically considered healthy. For men, the sweet spot was 89 centimeters (around 35 inches), closer to traditional recommendations but still surprising in its implications.

The relationship manifested differently between the sexes. For women, the association between waist size and mortality formed a U-shaped curve – meaning death rates were higher among those with both smaller and larger waists than the optimal point. Men showed a J-shaped pattern, with mortality risk rising more steeply as waist sizes increased beyond the optimal point.

This phenomenon, dubbed the “obesity paradox,” isn’t entirely new to medical research. Similar patterns have been observed with body mass index (BMI) in various populations. However, this study is among the first to demonstrate it specifically with waist circumference in people with diabetes.

The findings were consistent across different causes of death. Whether looking at overall mortality or deaths specifically from cardiovascular disease, the patterns held steady. For every centimeter increase in waist size below the optimal point, women saw their mortality risk decrease by 3%, while men saw a 6% reduction. Above these thresholds, each additional centimeter increased mortality risk by 4% in women and 3% in men.
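
As a purely arithmetic illustration of how those per-centimeter figures play out, here is a minimal Python sketch. The multiplicative compounding with distance from the sex-specific optimum, the function name, and the example waist size are assumptions made for illustration; this is not the study's fitted model, and it is certainly not clinical guidance.

```python
# Rough illustration only: treat the reported per-centimeter percentages as
# compounding multiplicatively with distance from the sex-specific optimum.
def relative_mortality_risk(waist_cm: float, sex: str) -> float:
    """Approximate mortality risk relative to the reported optimum waist size."""
    optimum = {"women": 107.0, "men": 89.0}[sex]        # reported optima, cm
    above_per_cm = {"women": 0.04, "men": 0.03}[sex]    # added risk per cm above
    below_per_cm = {"women": 0.03, "men": 0.06}[sex]    # added risk per cm below
    distance = waist_cm - optimum
    rate = above_per_cm if distance > 0 else below_per_cm
    return (1 + rate) ** abs(distance)

# Example: a woman with a 95 cm waist versus the reported 107 cm optimum
print(round(relative_mortality_risk(95, "women"), 2))   # ~1.43x the optimum-point risk
```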

What makes these findings particularly intriguing is their persistence even after researchers accounted for numerous other factors that could influence survival, including age, education, ethnicity, smoking status, drinking habits, physical activity, and various health conditions.

While these findings might seem counterintuitive, they align with a growing body of research suggesting that optimal health parameters might vary more widely between individuals than previously thought. For diabetes patients and their healthcare providers, this study offers compelling evidence that when it comes to waist circumference, the relationship with survival is more complex than simply “less is more.”

Source : https://studyfinds.org/bigger-waist-helps-diabetes-patients-live-longer/

Ancient tooth enamel shatters long-held beliefs about early human diet

Model depicting Australopithecus afarensis. (Credit: © Procyab | Dreamstime.com)

Breaking new ground in our understanding of early human diet and evolution, scientists have discovered that our ancient relatives may not have been the avid meat-eaters previously believed. Research reveals that Australopithecus, one of humanity’s earliest ancestors who lived in South Africa between 3.7 and 3.3 million years ago, primarily maintained a plant-based diet rather than regularly consuming meat.

Scientists have long debated when our ancestors began regularly consuming meat, as this dietary shift has been linked to several crucial evolutionary developments, including increased brain size and reduced gut size. Many researchers believed meat-eating began with early human ancestors like Australopithecus, partly because stone tools and cut marks on animal bones have been found dating back to this period.

“Tooth enamel is the hardest tissue of the mammalian body and can preserve the isotopic fingerprint of an animal’s diet for millions of years,” says geochemist Tina Lüdecke, the study’s lead author, in a statement. As head of the Emmy-Noether Junior Research Group for Hominin Meat Consumption at the Max Planck Institute for Chemistry and Honorary Research Fellow at the University of the Witwatersrand, Lüdecke regularly travels to Africa to collect fossilized teeth samples for analysis.

When living things digest food and process nutrients, they create a kind of chemical signature involving different forms of nitrogen. Think of it like leaving footprints in sand. Herbivores leave one type of print, while meat-eaters leave another. By examining these ancient chemical footprints preserved in tooth enamel, scientists can determine what kinds of foods an animal ate. Meat-eaters consistently show higher levels of a specific form of nitrogen compared to plant-eaters.

The research, published in Science, focused on specimens from the Sterkfontein cave near Johannesburg, part of South Africa’s “Cradle of Humankind,” an area renowned for its abundant early hominin fossils. Using innovative chemical analysis techniques, researchers examined fossilized teeth from seven Australopithecus specimens, comparing them with teeth from other animals that lived alongside them, including ancient relatives of antelopes, cats, dogs, and hyenas.

Source : https://studyfinds.org/ancient-tooth-enamel-early-human-diet-meat

Love bacon? Just one slice is all it takes to raise your risk of dementia

(© Boris Ryzhkov | Dreamstime.com)

If you could see inside your brain after eating processed meats, you might think twice about that morning bacon ritual. An eye-opening new study has revealed that even modest consumption of processed red meat could be aging your brain faster than normal.

Doctors from Brigham and Women’s Hospital and the Harvard T.H. Chan School of Public Health followed over 133,000 healthcare professionals for up to 43 years, finding that people who ate just a quarter serving or more of processed red meat per day had a 13% higher risk of developing dementia compared to those who consumed minimal amounts. For perspective, a serving of red meat is about three ounces – roughly the size of a deck of cards.

Most previous studies exploring the connection between red meat consumption and brain health have been relatively small or short-term, making this extensive research particularly noteworthy. The study, published in Neurology, carefully defined its terms: processed red meat included products like bacon, hot dogs, sausages, salami and bologna, while unprocessed red meat encompassed beef, pork, lamb and hamburger.

While both types of red meat have been previously linked to conditions like Type 2 diabetes and cardiovascular disease, processed meats carry additional risks due to their high levels of sodium, nitrites, and other potentially harmful compounds. These substances can trigger inflammation, oxidative stress, and vascular problems that may contribute to cognitive decline.

Participants were divided into three consumption groups for processed meat: those eating fewer than 0.10 servings per day (low), between 0.10 and 0.24 servings daily (medium), and 0.25 or more servings per day (high).

Beyond just tracking dementia diagnoses, researchers also assessed participants’ cognitive function through telephone interviews and questionnaires. Those who regularly consumed processed red meat showed signs of accelerated brain aging – approximately 1.6 years of additional cognitive aging for each daily serving. In practical terms, this means their brain function declined as if they were over a year and a half older than their actual age.
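
As a back-of-envelope illustration of how daily servings map onto the study's consumption groups and the reported 1.6-year figure, here is a small sketch; the linear scaling and the helper names are assumptions for illustration, not part of the published analysis.

```python
# Illustrative only: map daily processed-meat servings to the study's
# consumption categories and scale the reported ~1.6 years of additional
# cognitive aging per daily serving linearly (an assumption, not the model).
def processed_meat_category(servings_per_day: float) -> str:
    if servings_per_day < 0.10:
        return "low"
    if servings_per_day < 0.25:
        return "medium"
    return "high"

def extra_cognitive_aging_years(servings_per_day: float) -> float:
    return 1.6 * servings_per_day

# Someone averaging half a serving of processed meat per day
print(processed_meat_category(0.5), extra_cognitive_aging_years(0.5))  # high 0.8
```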

To assess cognitive decline from multiple angles, the researchers examined both subjective and objective measures. A group of nearly 44,000 participants with an average age of 78 completed surveys rating their own memory and thinking skills. This self-reported assessment revealed that those consuming 0.25 or more servings of processed meat daily had a 14% higher risk of subjective cognitive decline compared to minimal consumers.

Intriguingly, the study found that replacing processed red meat with healthier protein sources could help protect brain health. Swapping out that daily serving of bacon or hot dogs for nuts and legumes was associated with a 19% lower risk of dementia. Fish proved even more beneficial, with a 28% reduction in dementia risk when substituted for processed meat.

The research team focused on two large cohorts of health professionals: the Nurses’ Health Study and the Health Professionals Follow-Up Study. These groups were ideal for long-term research as they were already completing detailed dietary questionnaires every 2-4 years and had high rates of follow-up participation. The participants’ professional backgrounds also meant they were likely to provide accurate health information.

Women made up about two-thirds of the study population, with an average starting age of 49 years. By following participants for several decades, researchers could observe how dietary patterns in middle age influenced cognitive health later in life. This long-term perspective is crucial, as cognitive decline often begins subtly, years before noticeable symptoms appear.

“Dietary guidelines tend to focus on reducing risks of chronic conditions like heart disease and diabetes, while cognitive health is less frequently discussed, despite being linked to these diseases,” said corresponding author Dr. Daniel Wang, of the Channing Division of Network Medicine at Brigham and Women’s Hospital, in a statement. “Reducing how much red meat a person eats and replacing it with other protein sources and plant-based options could be included in dietary guidelines to promote cognitive health.”

Having that hot dog at the baseball game or bacon at Sunday brunch are certainly delicious traditions in the American diet. With dementia rates expected to soar in the next 30 years, it seems that developing the devastating condition could eventually be a tradition too. Taking the right steps to protect your brain can rewrite that fate.

Source : https://studyfinds.org/love-bacon-just-one-slice-dementia-risk/

Obesity redefined: Why doctors are ditching BMI for these key health markers

(© Feng Yu – stock.adobe.com)

When the issue is obesity, the questions are many, and the routes to answers anything but straight. What is abundantly clear is a need for consensus on two foundational matters:

  • What is a useful definition for obesity?
  • Is obesity a disease?

To answer these questions and standardize the concepts, a group of 58 experts, representing multiple medical specialties and countries, convened a commission and participated in a consensus development process. They were careful to include people who have experienced obesity to ensure consideration of patients’ perspectives. The commission’s report was just published in The Lancet Diabetes & Endocrinology.

The commission recognized that the current measure of obesity, which is body-mass index (BMI), can both overestimate and underestimate adiposity (how much of the body is fat). The global commission determined that to reduce misclassification, it is necessary to use other measures of body fat. Some of these included waist circumference, waist-to-hip ratio, waist-to-height ratio, direct fat measurement, and signs and symptoms of poor health that could be attributed to excess adiposity.

The experts proposed two distinct types of obesity:

  • Clinical obesity: A systemic chronic illness directly and specifically caused by excess adiposity
  • Preclinical obesity: Excess adiposity with preserved tissue and organ function, accompanied by an increased risk of progression to clinical or other noncommunicable disease

The commission’s leader, Dr. Francesco Rubino of King’s College London, explained the importance of the distinction drawn by these new definitions. The group acknowledges the subtleties of obesity and supports timely access to treatment for patients diagnosed with clinical obesity, as is appropriate for people with any chronic disease. For people with preclinical obesity, the definition instead points to risk-reduction strategies.

Clinical vs. preclinical obesity

Under the commission’s definition, clinical obesity is a state of chronic illness in which some tissues or organs show reduced function attributable to excess fat, affecting daily activities. Examples include breathlessness; joint pain or reduced mobility, often in the knees and hips; metabolic dysfunction; and impaired function of organ systems.

Applying the proposed definition, the diagnosis of clinical obesity requires two main criteria: confirmation of excess adiposity plus chronic organ dysfunction and/or limitations on mobility or daily living.

Confirming a diagnosis of clinical obesity in someone with excess body fat requires a healthcare provider to evaluate the individual’s medical history and to conduct a physical exam, the usual laboratory tests, and additional diagnostic tests as indicated.

The commission authors stated that, “A diagnosis of clinical obesity should have the same implications as other chronic disease diagnoses. Patients diagnosed with clinical obesity should, therefore, have timely and equitable access to comprehensive care and evidence-based treatments.”

Preclinical obesity is more of a spectrum of risk. Excess fat is confirmed, but these individuals don’t have ongoing illness attributed to adiposity. They can perform daily activities and have no or mild organ dysfunction. These patients are at higher risk for diseases like clinical obesity, cardiovascular disease, some cancers, Type 2 diabetes, and other illnesses.

“Preclinical obesity is different from metabolically healthy obesity because it is defined by the preserved function of all organs potentially affected by obesity,” the authors write, “not only those involved in metabolic regulation.”

What these changes mean for you, if you have excess fat, is that your condition is treated like any other medical condition. It isn’t something you just “get over” with diet and exercise. The effects of your fat are clearly identified, including the consequences without intervention. Specific fat-mediated dysfunctions have specific protocols for intervention. You work side-by-side with your healthcare provider to manage risk and consequences, hopefully even reducing risks and possibly reversing some consequences.

Source : https://studyfinds.org/obesity-redefined-why-doctors-are-ditching-bmi-for-these-key-health-markers/

Yes, parents really do have a ‘favorite’ child. Study reveals how to tell if it’s you

(Photo by New Africa on Shutterstock)

Ever wondered if your parents really did have a favorite child? That nagging suspicion might not be all in your head. A study analyzing data from over 19,400 participants concludes that parents do indeed treat their children differently, and the way they choose their “favorites” is more systematic than you might think.

“For decades, researchers have known that differential treatment from parents can have lasting consequences for children,” said lead author Alexander Jensen, PhD, an associate professor at Brigham Young University, in a statement. “This study helps us understand which children are more likely to be on the receiving end of favoritism, which can be both positive and negative.”

So what makes a child more likely to receive the coveted “favorite” status? The research team discovered several fascinating patterns. First, contrary to what many might expect, both mothers and fathers tend to favor daughters. Children who demonstrate responsibility and organization in their daily lives, from completing homework on time to keeping their rooms tidy, also typically receive more favorable treatment from their parents.

The study, published in Psychological Bulletin, examined five key areas of parent-child interaction: overall treatment, positive interactions (such as displays of affection or praise), negative interactions (like conflicts or criticism), resource allocation (including time spent with each child and material resources), and behavioral control (rules and expectations).

Birth order influences how parents interact with their children, particularly regarding independence and rules. Parents tend to grant older siblings more autonomy, such as later curfews or more decision-making freedom. However, the researchers note this may reflect appropriate developmental adjustments rather than favoritism.

Personality characteristics emerged as significant predictors of parental treatment. Children who demonstrate conscientiousness — showing responsibility through behaviors like completing chores without reminders or planning ahead for school assignments — typically experience more positive interactions and fewer conflicts with parents.

Similarly, agreeable children who show cooperation and consideration in family life often receive more positive parental responses.

One particularly noteworthy finding involves the disconnect between parents’ and children’s perceptions. While parents acknowledged treating daughters more favorably, children themselves didn’t report noticing significant gender-based differences in treatment. This suggests that some aspects of parental favoritism operate so subtly that children may not consciously recognize them.

Research has shown that children who receive less favorable treatment may face increased challenges with mental health and family relationships. “Understanding these nuances can help parents and clinicians recognize potentially damaging family patterns,” Jensen explained. “It is crucial to ensure all children feel loved and supported.”

The researchers emphasize that their findings show correlation rather than causation. “It is important to note that this research is correlational, so it doesn’t tell us why parents favor certain children,” Jensen said. “However, it does highlight potential areas where parents may need to be more mindful of their interactions with their children.”

For families navigating these dynamics, Jensen offers this perspective: “The next time you’re left wondering whether your sibling is the golden child, remember there is likely more going on behind the scenes than just a preference for the eldest or youngest. It might be about responsibility, temperament or just how easy or hard you are to deal with.”

Source : https://studyfinds.org/parents-really-do-have-favorite-child/

Nightmare: Your dreams are for sale — and companies are already buying

(Image by Shutterstock AI Generator)

Shocking new survey reveals 54% of young Americans report ads infiltrating their dreams

Remember when sleep offered an escape from endless advertising? That era may be ending. While U.S. citizens already face up to 4,000 advertisements daily in their waking hours, research suggests that even our dreams are no longer safe from commercial messaging. A new study reveals that 54% of young Americans report experiencing dreams influenced by ads—and some companies might be doing it intentionally.

The findings come at a critical time, as the American Marketing Association previously reported that 77% of companies surveyed in 2021 expressed intentions to experiment with “dream ads” by this year. What was once considered science fiction may now be becoming reality, with major implications for consumer protection and marketing ethics.

According to The Media Image’s newly released consumer survey focusing on Gen Z and Millennials, 54% of Americans aged 18-35 report having experienced dreams that appeared to be influenced by advertisements or contained ad-like content. Even more striking, 61% of respondents report having such dreams within the past year, with 38% experiencing them regularly—ranging from daily occurrences to monthly episodes.

Conducted by SurveyMonkey on behalf of The Media Image between January 2nd and 3rd, 2025, the research included a representative sample of 1,101 American respondents aged 18-35. While the sample skewed slightly female (62%), the findings are considered reflective of broader perspectives within this age group.

The data shows a striking pattern: 22% of respondents experience ad-like content in their dreams between once a week to daily, while another 17% report such occurrences between once a month to every couple of months.

The phenomenon isn’t merely passive. The survey reveals that these dream-based advertisements may be influencing consumer behavior in tangible ways. While two-thirds of consumers (66%) report resistance to making purchases based on their dreams, the other third admit that their dreams have encouraged them to buy products or services over the past year—a conversion rate that rivals or exceeds many traditional advertising campaigns.

The presence of major brands in dreams appears to be particularly prevalent, with 48% of young Americans reporting encounters with well-known companies such as Coca-Cola, Apple, or McDonald’s during their sleep. Harvard experts suggest this may be due to memory “reactivation” during sleep, where frequent exposure to brands in daily life increases their likelihood of appearing in dreams.

Perhaps most troubling is the apparent willingness of many consumers to accept this new frontier of advertising. The survey found that 41% of respondents would be open to seeing ads in their dreams if it meant receiving discounts on products or services. This raises serious ethical questions about the commercialization of human consciousness and the potential exploitation of vulnerable mental states for marketing purposes.

Despite these concerns, there appears to be limited interest in protecting dreams from commercial influence. Over two-thirds of respondents (68%) indicated they would not be willing to pay to keep their dreams ad-free, even if such technology existed. However, a significant minority (32%) expressed interest in a hypothetical “dream-ad blocker,” suggesting growing awareness and concern about this issue among some consumers.

The research comes in the wake of dream researchers issuing an open letter warning the public about corporate attempts to infiltrate dreams with advertisements, sparked by Coors Light’s experimental campaign that achieved notable success. This confluence of corporate interest and technological capability raises serious questions about the future of personal privacy and mental autonomy.

The potential manipulation of dreams for advertising purposes raises serious concerns about psychological well-being and the need for protective regulations. As companies explore ways to influence our subconscious minds, the lack of existing safeguards becomes increasingly problematic.

These results emerge against a backdrop of increasing advertising saturation in daily life. Current estimates suggest that U.S. citizens are exposed to up to 4,000 advertisements daily, making sleep one of the last remaining refuges from commercial messaging. The potential erosion of this final sanctuary raises important questions about consumer rights and mental well-being in an increasingly commercialized world.

The research presents a clear warning: without immediate attention to the ethical and regulatory challenges of dream-based advertising, we risk losing the last advertisement-free space in modern life. As companies develop new technologies to influence our dreams, the choice between consumer protection and commercial interests becomes increasingly pressing.

Source : https://studyfinds.org/your-dreams-are-for-sale-and-companies-are-already-buying/

How smoking cigarettes could sabotage your career and income

(Photo credit: © Alem Bradic | Dreamstime.com)

Most people know smoking is bad for their health, but a new study suggests it could also be bad for their wealth. Research from Finland reveals that smoking in early adulthood can significantly impact your career trajectory and earning potential, with effects that ripple through decades of working life.

Living in an age where smoking rates have declined significantly since the 1990s, you might wonder why this matters. Despite the downward trend, smoking remains surprisingly prevalent in high-income countries, with 18% of women and 27% of men still lighting up as of 2019. While most smokers are aware of the health risks, they might not realize how their habit could be affecting their professional lives and financial future.

The study, published in Nicotine and Tobacco Research, analyzed data from nearly 2,000 Finnish adults to explore how smoking habits in early adulthood influenced their long-term success in the job market. What they found was striking: for each pack-year of smoking (equivalent to smoking one pack of cigarettes daily for a year), people experienced an average 1.8% decrease in earnings and were employed for 0.5% fewer years over the study period.

“Smoking in early adulthood is closely linked to long-term earnings and employment, with lower-educated individuals experiencing the most severe consequences,” said the paper’s lead author, Jutta Viinikainen, from the University of Jyväskylä, in a statement. “These findings highlight the need for policies that address smoking’s hidden economic costs and promote healthier behaviors.”

Research from the Cardiovascular Risk in Young Finns Study tracked participants’ smoking habits and career trajectories from 2001 to 2019, providing a long-term look at how tobacco use influences professional success over time. The study focused on adults who were between 24 and 39 years old at the start of the study period. Beyond just counting cigarettes, researchers calculated “pack-years” – a measure that considers both how much and how long someone has smoked – to understand the cumulative impact of smoking on career outcomes.

Particularly interesting was how smoking’s impact varied across different demographic groups. Young smokers with lower education levels faced the steepest penalties in terms of reduced earnings, while older smokers in this educational bracket saw the most significant drops in employment years. This pattern suggests that smoking’s effects on career success evolve differently across age groups and education levels.

For younger workers, smoking appeared to create immediate barriers to earning potential, possibly due to reduced productivity or unconscious bias from employers. Meanwhile, older workers faced growing challenges maintaining steady employment as the long-term health effects of smoking began to manifest, particularly in physically demanding jobs that are more common among those with less formal education.

Consider this: reducing smoking by just five pack-years (equivalent to smoking one pack daily for five years) could potentially boost earnings by 9%. That’s a substantial difference in earning power that could compound significantly over a career span, affecting everything from lifestyle choices to retirement savings.
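
For readers who want to translate their own habit into the study's terms, here is a minimal sketch of the pack-year arithmetic and the earnings effect it implies; the linear scaling of the reported 1.8%-per-pack-year average (the same scaling behind the five-pack-year, 9% example above) and the function names are illustrative assumptions, not the researchers' model.

```python
# Minimal sketch: standard pack-year arithmetic plus the earnings effect
# implied by the reported ~1.8% average reduction per pack-year, assuming
# the effect scales linearly (an illustrative assumption).
def pack_years(packs_per_day: float, years_smoked: float) -> float:
    """Pack-years = packs smoked per day multiplied by years of smoking."""
    return packs_per_day * years_smoked

def estimated_earnings_reduction(packs_per_day: float, years_smoked: float,
                                 per_pack_year: float = 0.018) -> float:
    """Fractional earnings reduction implied by the study's average figure."""
    return pack_years(packs_per_day, years_smoked) * per_pack_year

# Half a pack a day for ten years is 5 pack-years -> roughly a 9% earnings gap
print(f"{estimated_earnings_reduction(0.5, 10):.0%}")
```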

Of particular concern is how these effects might create a potential feedback loop of disadvantage. While the study found that those with lower education levels appeared to face greater economic consequences from smoking, it’s important to note that this relationship is complex and influenced by many factors. This suggests that smoking could be amplifying existing socioeconomic disparities, making it harder for people to climb the economic ladder.

Smoking’s impact on physical fitness and performance may explain part of this effect, particularly in jobs requiring manual labor or physical stamina. When you’re constantly short of breath or taking more frequent breaks for cigarettes, it’s harder to maintain the same level of productivity as non-smoking colleagues. Over time, these small differences in daily performance can translate into significant gaps in career advancement and earning potential.

Perhaps most encouraging was the finding that quitting smoking could help mitigate these negative effects, particularly regarding employment stability among less-educated workers. This suggests it’s never too late to improve your career prospects by putting out that last cigarette.

In a world where career success increasingly depends on maintaining peak performance and adaptability, smoking may be more than just a health risk – it could be a career liability that many can’t afford to ignore. As the costs of smoking continue to mount, both in terms of health and wealth, the message becomes clear: your wallet, not just your lungs, might breathe easier if you quit.

Source : https://studyfinds.org/smoking-cigarettes-career-income/

Glass of milk a day keeps colorectal cancer away, massive study reveals

(© Goran – stock.adobe.com)

What if reducing your cancer risk was as simple as adding a glass of milk to your daily diet? A study of over half a million women concludes that dairy products, particularly those rich in calcium, may help protect against colorectal cancer, while alcohol and processed meats continue to pose significant risks. The massive research project tracked the eating habits and health outcomes of 542,778 British women for over 16 years, identifying key foods and nutrients that could help prevent one of the world’s most common cancers.

Colorectal cancer shows striking differences between regions, with higher rates in wealthy nations like the United States, European countries, and Japan, compared to lower rates in much of Africa and South Asia. However, when people move to countries with higher rates, their risk begins matching that of their new home within about a decade, suggesting that lifestyle factors, particularly diet, play a crucial role.

The international research team analyzed 97 different dietary components, from specific foods to nutrients. During the study period, 12,251 women developed colorectal cancer, allowing scientists to identify clear patterns between eating habits and cancer risk.

Among the strongest protective factors was calcium intake. Women who consumed more calcium-rich foods showed a significantly lower risk of developing colorectal cancer. The benefit appeared consistent whether the calcium came from dairy products or other sources.

Dairy milk emerged as another powerful player in cancer prevention. Regular milk drinkers showed notably lower cancer risk, and other dairy products like yogurt demonstrated similar protective effects. Several nutrients commonly found in dairy — including riboflavin, magnesium, phosphorus, and potassium — also showed benefits.

On the flip side, alcohol consumption stood out as the strongest risk factor. Having about two standard drinks more per day was linked to a 15% higher risk of developing colorectal cancer. The risk appeared particularly pronounced for rectal cancer compared to colon cancer.

Red and processed meats maintained their concerning reputation. Each additional daily serving, about the size of a slice of ham, was associated with an 8% higher risk. This finding supports previous research that led international health organizations to classify processed meat as cancer-causing and red meat as probably cancer-causing in humans.

The researchers took an innovative approach to confirm dairy’s protective effects by examining genetic differences that affect how well people can digest milk products. This analysis provided additional evidence that dairy foods help protect against colorectal cancer, as people who were genetically better able to digest dairy had lower cancer rates.

While breakfast cereals, fruits, whole grains, and high-fiber foods showed some protective effects, these benefits became less pronounced when researchers accounted for overall lifestyle habits. This suggests that people who eat these foods might have generally healthier lifestyles that contribute to lower cancer risk.

Scientists believe calcium helps prevent cancer in several ways: by binding to harmful substances in the digestive system, promoting healthy cell development in the colon, and reducing inflammation. While dairy products aren’t suitable for everyone, particularly those with lactose intolerance or milk allergies, the research suggests that for many people, including more dairy in their diet might help reduce their cancer risk.

These findings provide compelling evidence that simple dietary changes, like having more dairy products while limiting alcohol and processed meats, could help reduce the risk of developing one of the world’s most common cancers. However, no single food acts as a magic bullet: it’s the overall pattern of dietary choices that matters most for cancer prevention.

Source : https://studyfinds.org/dairy-milk-keeps-colorectal-cancer-away/

Could AI replace politicians? A philosopher maps out three possible futures

(© jon – stock.adobe.com)

From business and public administration to daily life, artificial intelligence is reshaping the world – and politics may be next. While the idea of AI politicians might make some people uneasy, survey results tell a different story. A poll conducted by my university in 2021, during the early surge of AI advancements, found broad public support for integrating AI into politics across many countries and regions.

A majority of Europeans said they would like to see at least some of their politicians replaced by AI. Chinese respondents were even more bullish about AI agents making public policy, while normally innovation-friendly Americans were more circumspect.

As a philosopher who researches the moral and political questions raised by AI, I see three main pathways for integrating AI into politics, each with its own mixture of promises and pitfalls.

While some of these proposals are more outlandish than others, weighing them up makes one thing certain: AI’s involvement in politics will force us to reckon with the value of human participation in politics, and with the nature of democracy itself.

Chatbots running for office?

Prior to ChatGPT’s explosive arrival in 2022, efforts to replace politicians with chatbots were already well underway in several countries. As far back as 2017, a chatbot named Alisa challenged Vladimir Putin for the Russian presidency, while a chatbot named Sam ran for office in New Zealand. Denmark and Japan have also experimented with chatbot-led political initiatives.

These efforts, while experimental, reflect a longstanding curiosity about AI’s role in governance across diverse cultural contexts.

The appeal of replacing flesh and blood politicians with chatbots is, on some levels, quite clear. Chatbots lack many of the problems and limitations typically associated with human politics. They are not easily tempted by desires for money, power, or glory. They don’t need rest, can engage virtually with everyone at once, and offer encyclopedic knowledge along with superhuman analytic abilities.

However, chatbot politicians also inherit the flaws of today’s AI systems. These chatbots, powered by large language models, are often black boxes, limiting our insight into their reasoning. They frequently generate inaccurate or fabricated responses, known as hallucinations. They face cybersecurity risks, require vast computational resources, and need constant network access. They are also shaped by biases derived from training data, societal inequalities, and programmers’ assumptions.

Additionally, chatbot politicians would be ill-suited to what we expect from elected officials. Our institutions were designed for human politicians, with human bodies and moral agency. We expect our politicians to do more than answer prompts – we also expect them to supervise staff, negotiate with colleagues, show genuine concern for their constituents, and take responsibility for their choices and actions.

Without major improvements in the technology, or a more radical reimagining of politics itself, chatbot politicians remain an uncertain prospect.

AI-powered direct democracy

Another approach seeks to completely do away with politicians, at least as we know them. Physicist César Hidalgo believes that politicians are troublesome middlemen that AI finally allows us to cut out. Instead of electing politicians, Hidalgo wants each citizen to be able to program an AI agent with their own political preferences. These agents can then negotiate with each other automatically to find common ground, resolve disagreements, and write legislation.

Hidalgo hopes that this proposal can unleash direct democracy, giving citizens more direct input into politics while overcoming the traditional barriers of time commitment and legislative expertise. The proposal seems especially attractive in light of widespread dissatisfaction with conventional representative institutions.

However, eliminating representation may be more difficult than it seems. In Hidalgo’s “avatar democracy,” the de facto kingmakers would be the experts who design the algorithms. Since the only way to legitimately authorize their power would likely be through voting, we might merely replace one form of representation with another.

The specter of algocracy

One even more radical idea involves eliminating humans from politics altogether. The logic is simple enough: if AI technology advances to the point where it makes reliably better decisions than humans, what would be the point of human input?

An algocracy is a political regime run by algorithms. While few have argued outright for a total handover of political power to machines (and the technology for doing so is still far off), the specter of algocracy forces us to think critically about why human participation in politics matters. What values – such as autonomy, responsibility, or deliberation – must we preserve in an age of automation, and how?

Source : https://studyfinds.org/could-ai-replace-politicians/

Obesity label is medically flawed, says global report

People with excess body fat can still be active and healthy, experts say

Calling people obese is medically “flawed” – and the definition should be split into two, a report from global experts says.

The term “clinical obesity” should be used for patients with a medical condition caused by their weight, while “pre-clinically obese” should be applied to those who remain fat but fit, though still at risk of disease.

This is better for patients than relying only on body mass index (BMI) – which measures whether they are a healthy weight for their height – to determine obesity.

More than a billion people are estimated to be living with obesity worldwide and prescription weight-loss drugs are in high demand.

The report, published in The Lancet Diabetes & Endocrinology journal, is supported by more than 50 medical experts around the world.

“Some individuals with obesity can maintain normal organ function and overall health, even long term, whereas others display signs and symptoms of severe illness here and now,” Prof Francesco Rubino, from King’s College London, who chaired the expert group, said.

“Obesity is a spectrum,” he added.

The current, blanket definition means too many people are being diagnosed as obese but not receiving the most appropriate care, the report says.

Natalie, from Crewe, goes to the gym four times a week and has a healthy diet, but is still overweight.

“I would consider myself on the larger side, but I’m fit,” she told the BBC 5 Live phone-in with Nicky Campbell.

“If you look at my BMI I’m obese, but if I speak to my doctor they say that I’m fit, healthy and there’s nothing wrong with me.

“I’m doing everything I can to stay fit and have a long healthy life,” she said.

Richard, from Falmouth, said there is a lot of confusion around BMI.

“When they did my test, it took me to a level of borderline obesity, but my body fat was only 4.9% – the problem is I had a lot of muscle mass,” he says.

In Mike’s opinion, you cannot be fat and fit – he says it is all down to diet.

“All these skinny jabs make me laugh, if you want to lose weight stop eating – it’s easy.”

Currently, in many countries, obesity is defined as having a BMI over 30 – a measurement that estimates body fat based on height and weight.

How is BMI calculated?

It is calculated by dividing an adult’s weight in kilograms by their height in metres squared.

For example, if they are 70kg (about 11 stone) and 1.70m (about 5ft 7in):

square their height in metres: 1.70 x 1.70 = 2.89
divide their weight in kilograms by this amount: 70 ÷ 2.89 = 24.22
display the result to one decimal place: 24.2
Find out what your body mass index (BMI) means on the NHS website
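
Those three steps translate directly into a few lines of code. Here is a minimal sketch in Python using the worked example above (the function name is ours, purely illustrative):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return round(weight_kg / (height_m ** 2), 1)

# Worked example from the text: 70 kg (about 11 stone) and 1.70 m (about 5ft 7in)
print(bmi(70, 1.70))  # 24.2 -- below the BMI-over-30 threshold commonly used to define obesity
```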

But BMI has limitations.

It measures whether someone is carrying too much weight – but not too much fat.

So very muscular people, such as athletes, tend to have a high BMI but not much fat.

The report says BMI is useful on a large scale, to work out the proportion of a population who are a healthy weight, overweight or obese.

But it reveals nothing about an individual patient’s overall health, whether they have heart problems or other illnesses, for example, and fails to distinguish between different types of body fat or measure the more dangerous fat around the waist and organs.

Measuring a patient’s waist or the amount of fat in their body, along with a detailed medical history, can give a much clearer picture than BMI, the report says.

Source: https://www.bbc.com/news/articles/c79dz14d30ro

Keeping the thermostat between these temperatures is best for seniors’ brains

(Credit: © Lopolo | Dreamstime.com)

That perfect thermostat setting might be more important than you think, especially at grandma and grandpa’s house. A new study finds that indoor temperature significantly affects older adults’ ability to concentrate, even in their own homes where they control the climate. The research suggests that as climate change brings more extreme temperatures, elderly individuals may face increased cognitive challenges unless their indoor environments are properly regulated.

Researchers at the Hinda and Arthur Marcus Institute for Aging Research, the research arm of Hebrew SeniorLife affiliated with Harvard Medical School, conducted a year-long study monitoring 47 community-dwelling adults aged 65 and older. The study tracked both their home temperatures and their self-reported ability to maintain attention throughout the day. What they discovered was a clear U-shaped relationship between room temperature and cognitive function. In other words, attention spans were optimal within a specific temperature range and declined when rooms became either too hot or too cold.

The sweet spot for cognitive function appeared to be between 20-24°C (68-75°F). When temperatures deviated from this range by just 4°C (7°F) in either direction, participants were twice as likely to report difficulty maintaining attention on tasks. This finding is particularly concerning given that many older adults live on fixed incomes and may struggle to maintain optimal indoor temperatures, especially during extreme weather events.
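
For anyone juggling Celsius and Fahrenheit settings, here is a quick sketch of the conversion and of the band reported above (the 20-24°C limits are simply the study's figures as quoted; the helper names are ours):

```python
def c_to_f(celsius: float) -> float:
    """Standard Celsius-to-Fahrenheit conversion."""
    return celsius * 9 / 5 + 32

def in_reported_sweet_spot(temp_c: float) -> bool:
    """True if a room temperature falls inside the 20-24°C band reported by the study."""
    return 20.0 <= temp_c <= 24.0

print(c_to_f(20), c_to_f(24))        # 68.0 75.2
print(in_reported_sweet_spot(22.5))  # True
print(in_reported_sweet_spot(28.0))  # False -- 4°C above the upper bound, where attention lapses doubled
```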

Many previous studies have examined temperature’s effects on cognition in controlled laboratory settings, but this research breaks new ground by studying people in their natural home environments over an extended period. The research team used smart sensors placed in participants’ primary living spaces to continuously monitor temperature and humidity levels, while participants completed twice-daily smartphone surveys about their thermal comfort and attention levels.

The study’s findings revealed an interesting asymmetry in how people responded to temperature variations. While both hot and cold conditions impaired attention, participants seemed particularly sensitive to cold temperatures. When reporting feeling cold, they showed greater cognitive difficulties across a wider range of actual temperatures compared to when they felt hot. This suggests that maintaining adequate heating may be especially crucial for preserving cognitive function in older adults during winter months.

“Our findings underscore the importance of understanding how environmental factors, like indoor temperature, impact cognitive health in aging populations,” said lead author Dr. Amir Baniassadi, an assistant scientist at the Marcus Institute, in a statement. “This research highlights the need for public health interventions and housing policies that prioritize climate resilience for older adults. As global temperatures rise, ensuring access to temperature-controlled environments will be crucial for protecting their cognitive well-being.”

This study follows a 2023 investigation measuring how temperature affected older adults’ sleep and cognitive ability, adding to a growing body of evidence that climate change impacts extend beyond physical health. While much attention has been paid to the direct health impacts of heat waves and cold snaps, this research suggests that even moderate temperature variations inside homes could affect older adults’ daily cognitive functioning.

The participant group, while relatively small, was carefully monitored. With an average age of 79 years, the cohort completed over 17,000 surveys during the study period. Most participants lived in private, market-rate housing (34 participants) rather than subsidized housing (13 participants), suggesting they had reasonable control over their home environments. This makes the findings particularly striking: if even relatively advantaged older adults experience cognitive effects from temperature variations, more vulnerable populations may face even greater challenges.

The connection between temperature and cognition isn’t entirely surprising. As we age, our bodies become less efficient at regulating temperature, a problem often compounded by chronic conditions like diabetes or medications that affect thermoregulation. What’s novel about this research is its demonstration that these physiological vulnerabilities may extend to cognitive function in real-world settings.

As winter gives way to spring and thermostats across the country get adjusted, this research suggests we might want to pay closer attention to those settings — especially in homes where older adults reside. The cognitive sweet spot of 68-75°F might just be the temperature range where wisdom flourishes.

Source : https://studyfinds.org/cold-homes-linked-to-attention-problems-in-older-adults/

Process this: 50,000 grocery products reveal shocking truth about America’s food supply

(Credit: © Photopal604 | Dreamstime.com)

Minimally processed foods make up just a small percentage of what’s available in U.S. supermarkets

Next time you walk down the aisles of your local grocery store, take a closer look at what’s actually available on those shelves. A stunning report reveals the majority of food products sold at major U.S. grocery chains are highly processed, with most of them priced significantly cheaper than less processed alternatives.

In what may be the most comprehensive analysis of food processing in American grocery stores to date, researchers examined over 50,000 food items sold at Walmart, Target, and Whole Foods to understand just how processed our food supply really is. Using sophisticated machine learning techniques, they developed a database called GroceryDB that scores foods based on their degree of processing.

What exactly makes a food “processed”? While nearly all foods undergo some form of processing (like washing and packaging), ultra-processed foods are industrial formulations made mostly from substances extracted from foods or synthesized in laboratories. Think instant soups, packaged snacks, and soft drinks – products that often contain additives like preservatives, emulsifiers, and artificial colors.

Research has suggested that diets high in ultra-processed foods can contribute to health issues like obesity, diabetes and heart disease. Over-processing can also strip foods of beneficial nutrients. Despite these risks, there has been no easy way for consumers to identify what foods are processed, highly processed, or ultra-processed.

“There are a lot of mixed messages about what a person should eat. Our work aims to create a sort of translator to help people look at food information in a more digestible way,” explains Giulia Menichetti, PhD, an investigator in the Channing Division of Network Medicine at Brigham and Women’s Hospital and the study’s corresponding author, in a statement.

The findings paint a concerning picture of American food retail. Across all three stores, minimally processed products made up a relatively small fraction of available items, while ultra-processed foods dominated the shelves. Even more troubling, the researchers found that for every 10% increase in processing scores, the price per calorie dropped by 8.7% on average. This means highly processed foods tend to be substantially cheaper than their less processed counterparts.
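
As a rough back-of-the-envelope reading of that figure, here is a small sketch (not the authors' statistical model: the starting price is invented, and treating the 8.7% drop as compounding across successive 10% increases is our assumption):

```python
def estimated_price_per_calorie(base_price: float, processing_increase_pct: float) -> float:
    """Apply the reported average relationship -- roughly an 8.7% drop in price per
    calorie for every 10% increase in processing score -- multiplicatively."""
    steps = processing_increase_pct / 10.0
    return base_price * (1 - 0.087) ** steps

# Hypothetical product at $0.010 per calorie, compared with one scoring 20% higher on processing
print(round(estimated_price_per_calorie(0.010, 20), 4))  # ~0.0083, i.e. about 17% cheaper per calorie
```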

However, the degree of processing varied significantly between stores. Whole Foods offered more minimally processed options and fewer ultra-processed products compared to Walmart and Target. The researchers also found major differences between food categories. Some categories, like jerky, popcorn, chips, bread, and mac and cheese, showed little variation in processing levels – meaning consumers have limited choices if they want less processed versions of these foods. Other categories, like cereals, milk alternatives, pasta, and snack bars, displayed wider ranges of processing levels.

Looking at specific examples helps illustrate these differences. When examining breads, researchers found that Manna Organics multi-grain bread from Whole Foods scored low on the processing scale since it’s made primarily from whole wheat kernels and basic ingredients. In contrast, certain breads from Walmart and Target scored much higher due to added ingredients like resistant corn starch, soluble corn fiber, and various additives.

The research team also developed a novel way to analyze individual ingredients’ contributions to food processing. They found that certain oils, like brain octane oil, flaxseed oil, and olive oil, contributed less to ultra-processing compared to palm oil, vegetable oil, and soybean oil. This granular analysis helps explain why seemingly similar products can have very different processing scores.

Study authors have made their findings publicly accessible through a website called TrueFood.tech, where consumers can look up specific products and find less processed alternatives within the same category.

“When people hear about the dangers of ultra-processed foods, they ask, ‘OK, what are the rules? How can we apply this knowledge?’” Menichetti notes. “We are building tools to help people implement changes to their diet based on information currently available about food processing. Given the challenging task of transforming eating behaviors, we want to nudge them to eat something that is within what they currently want but a less-processed option.”

As Americans increasingly rely on grocery stores for their food — with over 60% of U.S. food consumption coming from retail establishments — understanding what’s actually available on store shelves becomes crucial for public health. While this research doesn’t definitively prove that ultra-processed foods are harmful, it does demonstrate that avoiding them may require both conscious effort and deeper pockets.

Source : https://studyfinds.org/ultra-processed-foods-america-grocery-stores-target-walmart/


Age 13 rule isn’t working — Most pre-teens already deep in social media

(Credit: Child Social Media © Andrii Iemelianenko | Dreamstime.com)

Ages 11 and 12 represent a pivotal transition from childhood to adolescence — a time traditionally marked by first crushes, growing independence, and deepening friendships. But according to new research, this age group is also grappling with something more troubling: widespread social media addiction. The study of over 10,000 American youth reveals that most pre-teens are active on platforms they’re technically too young to use.

As the U.S. Supreme Court prepares to hear arguments against Congress’ TikTok ban, the research pulls back the curtain on what many parents have long suspected: nearly 64% of pre-teens have at least one social media account, flouting minimum age requirements and raising concerns about online safety and mental health impacts.

Drawing from a diverse sample of adolescents aged 11 to 15, researchers found that TikTok reigns supreme among young users, with 67% of social media-using teens maintaining an account on the short-form video platform. YouTube and Instagram followed closely behind at around 65% and 66% respectively.

“Policymakers need to look at TikTok as a systemic social media issue and create effective measures that protect children online,” said Dr. Jason Nagata, a pediatrician at UCSF Benioff Children’s Hospitals and the lead author of the study, in a statement. “TikTok is the most popular social media platform for children, yet kids reported having more than three different social media accounts, including Instagram and Snapchat.”

Notable gender differences emerged in platform preferences. Female adolescents gravitated toward TikTok, Snapchat, Instagram, and Pinterest, while their male counterparts showed stronger affinity for YouTube and Reddit. This digital divide hints at how social media may be shaping different aspects of adolescent development and socialization between genders.

Among the study’s more concerning findings was that 6.3% of young social media users admitted to maintaining “secret” accounts hidden from parental oversight. These covert profiles, sometimes dubbed “Finstas” (fake Instagram accounts), represent a digital double life that could put vulnerable youth at risk while hampering parents’ ability to protect their children online.

Signs of problematic use and potential addiction emerged as significant concerns. Twenty-five percent of children with social media accounts reported often thinking about social media apps, and another 25% said they use the apps to forget about their problems. Moreover, 17% of users tried to reduce their social media use but couldn’t, while 11% reported that excessive use had negatively impacted their schoolwork.

“Our study revealed a quarter of children reported elements of addiction while using social media, with some as young as eleven years old,” Nagata explained. “The research shows underage social media use is linked with greater symptoms of depression, eating disorders, ADHD, and disruptive behaviors. When talking about social media usage and policies, we need to prioritize the health and safety of our children.”

Recent legislative efforts, including the federal Protecting Kids on Social Media Act and various state-level initiatives, aim to strengthen safeguards around youth social media use. The U.S. Surgeon General has called for more robust age verification systems and warning labels on social media platforms, highlighting the growing recognition of this issue as a public health concern.

To address these challenges, medical professionals recommend structured approaches to managing screen time. The American Academy of Pediatrics has developed the Family Media Plan, providing families with tools to schedule both online and offline activities effectively.

“Every parent and family should have a family media plan to ensure children and adults stay safe online and develop a healthy relationship with screens and social media,” said Nagata, who practices this approach with his own children. “Parents can create strong relationships with their children by starting open conversations and modeling good behaviors.”

As social media continues evolving at breakneck speed, this research, published in Academic Pediatrics, provides a crucial snapshot of how the youngest generation navigates the digital landscape. The timing proves particularly relevant as the Supreme Court prepares to hear arguments about Congress’ TikTok ban, set to take effect January 19th. While the case primarily centers on national security concerns, the study’s findings suggest that children’s welfare should be an equally important consideration in platform regulation.

Source : https://studyfinds.org/most-pre-teens-already-deep-in-social-media/

Warning: Your pooch’s smooches really could make you quite sick

(Credit: © Natalia Skripnikova | Dreamstime.com)

39% of infected dogs carried a gene that lets them spread Salmonella for long periods without appearing sick, researchers warn
UNIVERSITY PARK, Pa. — Next time your furry friend gives you those irresistible puppy dog eyes, you might want to think twice before sharing your snack. That’s because scientists say that household dogs could be silent carriers of dangerous antibiotic-resistant Salmonella bacteria, potentially putting their human families at risk.

Most pet owners know to wash their hands after handling raw pet food or cleaning up after their dogs, but researchers at Pennsylvania State University have uncovered a concerning trend: household dogs can carry and spread drug-resistant strains of Salmonella even when they appear perfectly healthy. This finding is particularly worrisome because these resistant bacteria can make treating infections much more challenging in both animals and humans.

The research takes on added significance considering that over half of U.S. homes include dogs. “We have this close bond with companion animals in general, and we have a really close interface with dogs,” explains Sophia Kenney, the study’s lead author and doctoral candidate at Penn State, in a statement. “We don’t let cows sleep in our beds or lick our faces, but we do dogs.”

To investigate this concerning possibility, the research team employed a clever detective-like approach. They first tapped into an existing network of veterinary laboratories that regularly test animals for various diseases. They identified 87 cases where dogs had tested positive for Salmonella between May 2017 and March 2023. These weren’t just random samples: they came from real cases where veterinarians had submitted samples for testing, whether the dogs showed symptoms or not.

The scientists then did something akin to matching fingerprints. For each dog case they found, they searched a national database of human Salmonella infections, looking for cases that occurred in the same geographic areas around the same times. This database, maintained by the National Institutes of Health, is like a library of bacterial information collected from patients across the country. Through this matching process, they identified 77 human cases that could potentially be connected to the dog infections.

The research team then used advanced DNA sequencing technology to analyze each bacterial sample. This allowed them to not only identify different varieties of Salmonella but also determine how closely related the bacteria from dogs were to those found in humans. They specifically looked for two key things: genes that make the bacteria resistant to antibiotics, and genes that help the bacteria cause disease.

What they found was eye-opening. Among the dog samples, they discovered 82 cases of the same type of Salmonella that commonly causes human illness. More concerning was that many of these bacterial strains carried genes making them resistant to important antibiotics, the same medicines doctors rely on to treat serious infections.

In particular, 16 of the human cases were found to be very closely related to six different dog-associated strains. While this doesn’t definitively prove the infections spread from dogs to humans, it’s like finding matching puzzle pieces that suggest a connection. The researchers also discovered that 39% of the dog samples contained a special gene called shdA, which allows the bacteria to survive longer in the dog’s intestines. This means infected dogs could potentially spread the bacteria through their waste for extended periods without appearing sick themselves.

The bacteria showed impressive diversity, with researchers identifying 31 different varieties in dogs alone. Some common types found in both dogs and humans included strains known as Newport, Typhimurium, and Enteritidis — names that might not mean much to the average person but are well-known to health officials for causing human illness.

The research has highlighted real-world implications. Study co-author Nkuchia M’ikanatha, lead epidemiologist for the Pennsylvania Department of Health, points to a recent outbreak where pig ear pet treats sickened 154 people across 34 states with multidrug-resistant Salmonella. “This reminds us that simple hygiene practices such as hand washing are needed to protect both our furry friends and ourselves — our dogs are family but even the healthiest pup can carry Salmonella,” he notes.

The historical context adds another layer to the findings. According to researchers, Salmonella has been intertwined with human history since agriculture began, potentially shadowing humanity for around 10,000 years alongside animal domestication.

While the study reveals concerning patterns about antibiotic resistance and disease transmission, lead researcher Erika Ganda emphasizes that not all bacteria are harmful. “Bacteria are never entirely ‘bad’ or ‘good’ — their role depends on the context,” she explains. “While some bacteria, like Salmonella, can pose serious health risks, others are essential for maintaining our health and the health of our pets.”

Of course, this doesn’t mean we should reconsider having dogs as pets. Instead, scientists say just be smart, and maybe try not to let your pooch kiss you on the lips.

“Several studies highlight the significant physical and mental health benefits of owning a dog, including reduced stress and increased physical activity,” Ganda notes. “Our goal is not to discourage pet ownership but to ensure that people are aware of potential risks and take simple steps, like practicing good hygiene, to keep both their families and their furry companions safe.”

Source : https://studyfinds.org/dogs-drug-resistant-salmonella/

‘Super Scoopers’ dumping ocean water on the Los Angeles fires: Why using saltwater is typically a last resort

A Croatian Air Force CL-415 Super Scooper firefighting aircraft in flight. (Photo by crordx on Shutterstock)

Firefighters battling the deadly wildfires that raced through the Los Angeles area in January 2025 have been hampered by a limited supply of freshwater. So, when the winds are calm enough, skilled pilots flying planes aptly named Super Scoopers are skimming off 1,500 gallons of seawater at a time and dumping it with high precision on the fires.

Using seawater to fight fires can sound like a simple solution – the Pacific Ocean has a seemingly endless supply of water. In emergencies like Southern California is facing, it’s often the only quick solution, though the operation can be risky amid ocean swells.

But seawater also has downsides.

Saltwater corrodes firefighting equipment and may harm ecosystems, especially those like the chaparral shrublands around Los Angeles that aren’t normally exposed to seawater. Gardeners know that small amounts of salt – added, say, as fertilizer – do not harm plants, but excessive salts can stress and kill plants.

While the consequences of adding seawater to ecosystems are not yet well understood, we can gain insights on what to expect by considering the effects of sea-level rise.

A seawater experiment in a coastal forest

As an ecosystem ecologist at the Smithsonian Environmental Research Center, I lead a novel experiment called TEMPEST that was designed to understand how and why historically salt-free coastal forests react to their first exposures to salty water.

Sea level has risen by an average of about 8 inches globally over the past century, and that water has pushed salty water into U.S. forests, farms and neighborhoods that had previously known only freshwater. As the rate of sea-level rise accelerates, storms push seawater ever farther onto the dry land, eventually killing trees and creating ghost forests, a result of climate change that is widespread in the U.S. and globally.

In our TEMPEST test plots, we pump salty water from the nearby Chesapeake Bay into tanks, then sprinkle it on the forest soil surface fast enough to saturate the soil for about 10 hours at a time. This simulates a surge of salty water during a big storm.

Our coastal forest showed little effect from the first 10-hour exposure to salty water in June 2022 and grew normally for the rest of the year. We increased the exposure to 20 hours in June 2023, and the forest still appeared mostly unfazed, although the tulip poplar trees were drawing water from the soil more slowly, which may be an early warning signal.

Things changed after a 30-hour exposure in June 2024. The leaves of tulip poplar in the forests started to brown in mid-August, several weeks earlier than normal. By mid-September the forest canopy was bare, as if winter had set in. These changes did not occur in a nearby plot that we treated the same way, but with freshwater rather than seawater.

The initial resilience of our forest can be explained in part by the relatively low amount of salt in the water in this estuary, where water from freshwater rivers and a salty ocean mix. Rain that fell after the experiments in 2022 and 2023 washed salts out of the soil.

But a major drought followed the 2024 experiment, so salts lingered in the soil then. The trees’ longer exposure to salty soils after our 2024 experiment may have exceeded their ability to tolerate these conditions.

Seawater being dumped on the Southern California fires is full-strength, salty ocean water. And conditions there have been very dry, particularly compared with our East Coast forest plot.

Changes evident in the ground

Our research group is still trying to understand all the factors that limit the forest’s tolerance to salty water, and how our results apply to other ecosystems such as those in the Los Angeles area.

Tree leaves turning from green to brown well before fall was a surprise, but there were other surprises hidden in the soil below our feet.

Rainwater percolating through the soil is normally clear, but about a month after the first and only 10-hour exposure to salty water in 2022, the soil water turned brown and stayed that way for two years. The brown color comes from carbon-based compounds leached from dead plant material. It’s a process similar to making tea.

Our lab experiments suggest that salt was causing clay and other particles to disperse and move about in the soil. Such changes in soil chemistry and structure can persist for many years.

Source : https://studyfinds.org/super-scoopers-dumping-ocean-water-los-angeles-fires/

An eye for an eye: People agree about the values of body parts across cultures and eras

(Credit: © Kateryna Chyzhevska | Dreamstime.com)

The Bible’s lex talionis – “Eye for eye, tooth for tooth, hand for hand, foot for foot” (Exodus 21:24-27) – has captured the human imagination for millennia. This idea of fairness has been a model for ensuring justice when bodily harm is inflicted.

Thanks to the work of linguists, historians, archaeologists and anthropologists, researchers know a lot about how different body parts are appraised in societies both small and large, from ancient times to the present day.

But where did such laws originate?

According to one school of thought, laws are cultural constructions – meaning they vary across cultures and historical periods, adapting to local customs and social practices. By this logic, laws about bodily damage would differ substantially between cultures.

Our new study explored a different possibility – that laws about bodily damage are rooted in something universal about human nature: shared intuitions about the value of body parts.

Do people across cultures and throughout history agree on which body parts are more or less valuable? Until now, no one had systematically tested whether body parts are valued similarly across space, time and levels of legal expertise – that is, among laypeople versus lawmakers.

We are psychologists who study evaluative processes and social interactions. In previous research, we have identified regularities in how people evaluate different wrongful actions, personal characteristics, friends and foods. The body is perhaps a person’s most valuable asset, and in this study we analyzed how people value its different parts. We investigated links between intuitions about the value of body parts and laws about bodily damage.

How critical is a body part or its function?

We began with a simple observation: Different body parts and functions have different effects on the odds that a person will survive and thrive. Life without a toe is a nuisance. But life without a head is impossible. Might people intuitively understand that different body parts have different values?

Knowing the value of body parts gives you an edge. For example, if you or a loved one has suffered multiple injuries, you could treat the most valuable body part first, or allocate a greater share of limited resources to its treatment.

This knowledge could also play a role in negotiations when one person has injured another. When person A injures person B, B or B’s family can claim compensation from A or A’s family. This practice appears around the world: among the Mesopotamians, the Chinese during the Tang dynasty, the Enga of Papua New Guinea, the Nuer of Sudan, the Montenegrins and many others. The Anglo-Saxon word “wergild,” meaning “man price,” now designates in general the practice of paying for body parts.

But how much compensation is fair? Claiming too little leads to loss, while claiming too much risks retaliation. To walk the fine line between the two, victims would claim compensation in Goldilocks fashion: just right, based on the consensus value that victims, offenders and third parties in the community attach to the body part in question.

This Goldilocks principle is readily apparent in the exact proportionality of the lex talionis – “eye for eye, tooth for tooth.” Other legal codes dictate precise values of different body parts but do so in money or other goods. For example, the Code of Ur-Nammu, written 4,100 years ago in ancient Nippur, present-day Iraq, states that a man must pay 40 shekels of silver if he cuts off another man’s nose, but only 2 shekels if he knocks out another man’s tooth.

Testing the idea across cultures and time

If people have intuitive knowledge of the values of different body parts, might this knowledge underpin laws about bodily damage across cultures and historical eras?

To test this hypothesis, we conducted a study involving 614 people from the United States and India. The participants read descriptions of various body parts, such as “one arm,” “one foot,” “the nose,” “one eye” and “one molar tooth.” We chose these body parts because they were featured in legal codes from five different cultures and historical periods that we studied: the Law of Æthelberht from Kent, England, in 600 C.E., the Guta lag from Gotland, Sweden, in 1220 C.E., and modern workers’ compensation laws from the United States, South Korea and the United Arab Emirates.

Participants answered one question about each body part they were shown. We asked some how difficult it would be for them to function in daily life if they lost various body parts in an accident. Others we asked to imagine themselves as lawmakers and determine how much compensation an employee should receive if that person lost various body parts in a workplace accident. Still others we asked to estimate how angry another person would feel if the participant damaged various parts of the other’s body. While these questions differ, they all rely on assessing the value of different body parts.

To determine whether untutored intuitions underpin laws, we didn’t include people who had college training in medicine or law.

Then we analyzed whether the participants’ intuitions matched the compensations established by law.

Our findings were striking. The values placed on body parts by both laypeople and lawmakers were largely consistent. The more highly American laypeople tended to value a given body part, the more valuable this body part seemed also to Indian laypeople, to American, Korean and Emirati lawmakers, to King Æthelberht and to the authors of the Guta lag. For example, laypeople and lawmakers across cultures and over centuries generally agree that the index finger is more valuable than the ring finger, and that one eye is more valuable than one ear.

But do people value body parts accurately, in a way that corresponds with their actual functionality? There are some hints that, yes, they do. For example, laypeople and lawmakers regard the loss of a single part as less severe than the loss of multiples of that part. In addition, laypeople and lawmakers regard the loss of a part as less severe than the loss of the whole; the loss of a thumb is less severe than the loss of a hand, and the loss of a hand is less severe than the loss of an arm.

Additional evidence of accuracy can be gleaned from ancient laws. For example, linguist Lisi Oliver notes that in Barbarian Europe, “wounds that may cause permanent incapacitation or disability are fined higher than those which may eventually heal.”

Although people generally agree in valuing some body parts more than others, some sensible differences may arise. For instance, sight would be more important for someone making a living as a hunter than as a shaman. The local environment and culture might also play a role. For example, upper body strength could be particularly important in violent areas, where one needs to defend oneself against attacks. These differences remain to be investigated.

Source : https://studyfinds.org/values-of-body-parts-across-cultures-and-eras/

One juice, three benefits: How elderberry could transform metabolism in just 7 days

(Photo credit: © Anna Komisarenko | Dreamstime.com)

Small study demonstrates the enormous fat-burning and gut-boosting powers of an ‘underappreciated’ berry

In an era where 74% of Americans are considered overweight and 40% have obesity, scientists have discovered that an ancient berry might offer modern solutions. Research from Washington State University reveals that elderberry juice could help regulate blood sugar levels and improve the body’s ability to burn fat, while also promoting beneficial gut bacteria.

Elderberries have long been used in traditional medicine, but this new research provides scientific evidence for their metabolic benefits. The study, published in the journal Nutrients, demonstrates that consuming elderberry juice for just one week led to significant improvements in how the body processes sugar and burns fat.

“Elderberry is an underappreciated berry, commercially and nutritionally,” says Patrick Solverson, an assistant professor in WSU’s Department of Nutrition and Exercise Physiology, in a statement. “We’re now starting to recognize its value for human health, and the results are very exciting.”

Solverson and his team recruited 18 overweight but otherwise healthy adults for this carefully controlled experiment. Most participants were women, with an average age of 40 years and an average body mass index (BMI) of 29.12, placing them in the overweight category.

This wasn’t your typical “drink this and tell us how you feel” study. Instead, the researchers implemented a sophisticated crossover design where participants served as their own control group. Each person completed two one-week periods: one drinking elderberry juice and another drinking a placebo beverage that looked and tasted similar but lacked the active compounds. A three-week “washout” period separated these phases to ensure no carryover effects.

During the study, participants consumed 355 grams (about 12 ounces) of either elderberry juice or placebo daily, split between morning and evening doses. The elderberry juice provided approximately 720 milligrams of beneficial compounds called anthocyanins, which give the berries their deep purple color.

Perhaps most remarkably, after just one week of elderberry juice consumption, participants showed a 24% reduction in blood glucose response following a high-carbohydrate meal challenge. This suggests that elderberry juice might help the body better regulate blood sugar levels, a crucial factor in metabolic health and weight management.

The study also revealed that participants burned more fat both while resting and during exercise when consuming elderberry juice. Using specialized equipment to measure breath gases, researchers found that those drinking elderberry juice burned 27% more fat compared to when they drank the placebo. This increased fat-burning occurred not only during rest but also persisted during a 30-minute moderate-intensity walking test.

But the benefits didn’t stop there. The research team also examined participants’ gut bacteria through stool samples and found that elderberry juice promoted the growth of beneficial bacterial species while reducing less desirable ones. Specifically, it increased levels of bacteria known for producing beneficial compounds called short-chain fatty acids, which play essential roles in metabolism and gut health.

What makes elderberry particularly special is its exceptionally high concentration of anthocyanins. According to Solverson, a person would need to consume four cups of blackberries to match the anthocyanin content found in just 6 ounces of elderberry juice. These compounds are believed to be responsible for the berry’s anti-inflammatory, anti-diabetic, and antimicrobial effects.

While further research is needed to confirm these effects over longer periods and in larger populations, this study suggests that elderberry juice might offer a practical dietary strategy for supporting metabolic health. It’s worth noting that participants reported no adverse effects from consuming the juice, suggesting it’s both safe and well-tolerated.

The timing of this research coincides with growing consumer interest in elderberry products. While these purple berries have long been popular in European markets, demand in the United States surged during the COVID-19 pandemic and continues to rise. This increasing market presence could make it easier for consumers to access elderberry products if further research continues to support their health benefits.

“Food is medicine, and science is catching up to that popular wisdom,” Solverson notes. “This study contributes to a growing body of evidence that elderberry, which has been used as a folk remedy for centuries, has numerous benefits for metabolic as well as prebiotic health.”

The research team isn’t stopping here. With an additional $600,000 in funding from the U.S. Department of Agriculture, they plan to investigate whether elderberry juice might help people maintain their weight after discontinuing weight loss medications. This could provide a natural solution for one of the most challenging aspects of weight management – maintaining weight loss over time.

As obesity rates continue to climb and are projected to reach 48-55% of American adults by 2050, finding natural, food-based approaches to support metabolic health becomes increasingly important. While elderberry juice shouldn’t be viewed as a magic bullet, this research suggests it might be a valuable addition to a healthy diet and lifestyle approach for managing weight and metabolic health.

Source : https://studyfinds.org/how-elderberry-might-transform-metabolism-in-just-7-days/

From first breath: Male and female brains really do differ at birth

(Credit: © Katrina Trninich | Dreamstime.com)

The age-old debate about differences between male and female brains has taken a dramatic turn with new evidence suggesting these variations begin before a baby’s first cry. In the largest study of its kind, researchers at Cambridge University’s Autism Research Centre have discovered that structural brain differences between the sexes don’t gradually emerge through childhood — they’re already established at birth.

Brain development during the first few weeks of life occurs at a remarkably rapid pace, making this period particularly crucial for understanding how sex differences in the brain emerge and evolve. Previous research has primarily focused on older infants, children, and adults, leaving a significant gap in our understanding of the earliest stages of brain development.

The research team analyzed brain scans of 514 newborns (236 females and 278 males) aged 0-28 days using data from the developing Human Connectome Project. The study, published in the journal Biology of Sex Differences, represents one of the largest and most comprehensive investigations of sex differences in neonatal brain structure to date, addressing a common limitation of past research: small sample sizes.

Male newborns showed larger overall brain volumes compared to females, even after accounting for differences in birth weight. This finding was particularly significant because the research team carefully controlled for body size differences between sexes, a factor that has complicated previous studies in this field.

When controlling for total brain volume, female babies exhibited greater amounts of gray matter — the outer brain tissue containing nerve cell bodies and dendrites responsible for processing and interpreting information, such as sensation, perception, learning, speech, and cognition. Meanwhile, male infants had higher volumes of white matter, which consists of long nerve fibers (axons) that connect different brain regions together.

“Our study settles an age-old question of whether male and female brains differ at birth,” says lead author Yumnah Khan, a PhD student at the Autism Research Centre, in a statement. “We know there are differences in the brains of older children and adults, but our findings show that they are already present in the earliest days of life.”

Several specific brain regions showed notable differences between males and females. Female newborns had larger volumes in areas related to memory and emotional regulation, while male infants showed greater volume in regions involved in sensory processing and motor control.

Dr. Alex Tsompanidis, who supervised the study, emphasizes its methodological rigor: “This is the largest such study to date, and we took additional factors into account, such as birth weight, to ensure that these differences are specific to the brain and not due to general size differences between the sexes.”

The research team is now investigating potential prenatal factors that might contribute to these differences. “To understand why males and females show differences in their relative grey and white matter volume, we are now studying the conditions of the prenatal environment, using population birth records, as well as in vitro cellular models of the developing brain,” explains Dr. Tsompanidis.

Importantly, the researchers stress that these findings represent group averages rather than individual characteristics.

“The differences we see do not apply to all males or all females, but are only seen when you compare groups of males and females together,” says Dr. Carrie Allison, Deputy Director of the Autism Research Centre. “There is a lot of variation within, and a lot of overlap between, each group.”

These findings mark a significant step forward in understanding early brain development, while raising new questions about the role of prenatal factors in shaping neurological differences. The research team’s ongoing investigations into prenatal conditions and cellular models may soon provide even more insights into how these sex-based variations emerge.

“These differences do not imply the brains of males and females are better or worse. It’s just one example of neurodiversity,” says Professor Simon Baron-Cohen, Director of the Autism Research Centre. “This research may be helpful in understanding other kinds of neurodiversity, such as the brain in children who are later diagnosed as autistic, since this is diagnosed more often in males.”

Source : https://studyfinds.org/how-male-and-female-brains-differ-at-birth/

Gender shock: Study reveals men, not women, make more emotional money choices

(Credit: © Yuri Arcurs | Dreamstime.com)

When it comes to making financial decisions, conventional wisdom suggests keeping emotions out of the equation. But new research reveals that men, contrary to traditional gender stereotypes, may be significantly more susceptible to letting emotions influence their financial choices than women.

A study led by the University of Essex challenges long-held assumptions about gender and emotional decision-making. The research explores how emotions generated in one context can influence decisions in completely unrelated situations – a phenomenon known as the emotional carryover effect.

“These results challenge the long-held stereotype that women are more emotional and open new avenues for understanding how emotions influence decision-making across genders,” explains lead researcher Dr. Nikhil Masters from Essex’s Department of Economics.

Working with colleagues from the Universities of Bournemouth and Nottingham, Masters designed an innovative experiment comparing how different types of emotional stimuli affect people’s willingness to take financial risks. They contrasted a traditional laboratory approach targeting a single emotion (fear) with a more naturalistic stimulus based on real-world events that could trigger multiple emotional responses.

The researchers recruited 186 university students (100 women and 86 men) and randomly assigned them to one of three groups. One group watched a neutral nature documentary about the Great Barrier Reef. Another group viewed a classic fear-inducing clip from the movie “The Shining,” showing a boy searching for his mother in an empty corridor with tense background music. The third group watched actual news footage about the BSE crisis (commonly known as “mad cow disease”) from the 1990s, a real food safety scare that generated widespread public anxiety.

After watching their assigned videos, participants completed decision-making tasks involving both risky and ambiguous financial choices using real money. In the risky scenario, they had to decide between taking guaranteed amounts of money or gambling on a lottery with known 50-50 odds. The ambiguous scenario was similar, but participants weren’t told the odds of winning.

The results revealed striking gender differences. Men who watched either the horror movie clip or the BSE footage subsequently made more conservative financial choices compared to those who watched the neutral nature video. This effect was particularly pronounced for those who saw the BSE news footage, and even stronger when the odds were ambiguous rather than clearly defined.

Perhaps most surprisingly, women’s financial decisions remained remarkably consistent regardless of which video they watched. The researchers found that while women reported experiencing similar emotional responses to the videos as men did, these emotions didn’t carry over to influence their subsequent financial choices.

The study challenges previous assumptions about how specific emotions like fear influence risk-taking behavior. While earlier studies suggested that fear directly leads to more cautious decision-making, this new research indicates the relationship may be more complex. Even when the horror movie clip successfully induced fear in participants, individual variations in reported fear levels didn’t correlate with their financial choices.

Instead, the researchers discovered that changes in positive emotions may play a more important role than previously thought. When positive emotions decreased after watching either the horror clip or BSE footage, male participants became more risk-averse in their financial decisions.

The study also demonstrated that emotional effects on decision-making can be even stronger when using realistic stimuli that generate multiple emotions simultaneously, compared to artificial laboratory conditions designed to induce a single emotion. This suggests that real-world emotional experiences may have more powerful influences on our financial choices than controlled laboratory studies have indicated.

The research team is now investigating why only men appear to be affected by these carryover effects. “Previous research has shown that emotional intelligence helps people to manage their emotions more effectively. Since women generally score higher on emotional intelligence tests, this could explain the big differences we see between men and women,” explains Dr. Masters.

These findings could have significant implications for understanding how major news events or crises might affect financial markets differently across gender lines. They also suggest the potential value of implementing “cooling-off” periods for important financial decisions, particularly after exposure to emotionally charged events or information.

“We don’t make choices in a vacuum and a cooling-off period might be crucial after encountering emotionally charged situations,” says Dr. Masters, “especially for life-changing financial commitments like buying a home or large investments.”

Source : https://studyfinds.org/study-men-not-women-make-more-emotional-money-choices/

Danger in drinking water? Fluoride linked to lower IQ scores in children

(Photo by Tatevosian Yana on Shutterstock)

In a discovery that could reshape how we think about water fluoridation, researchers have uncovered a troubling pattern across 10 countries and nearly 21,000 children: higher fluoride exposure consistently correlates with lower IQ scores. The meta-analysis raises critical questions about the balance between preventing tooth decay and protecting cognitive development.

While fluoride has long been added to public drinking water systems to prevent tooth decay, this research suggests the need to carefully weigh the dental health benefits against potential developmental risks. In the United States, the recommended fluoride concentration for community water systems is 0.7 mg/L, with regulatory limits set at 4.0 mg/L by the Environmental Protection Agency (EPA).

The research team, led by scientists from the National Institute of Environmental Health Sciences, examined studies from ten different countries, though notably none from the United States. The majority of the research (45 studies) came from China, with others from Canada, Denmark, India, Iran, Mexico, New Zealand, Pakistan, Spain, and Taiwan.

Published in JAMA Pediatrics, the findings paint a consistent picture across different types of analyses. When comparing groups with higher versus lower fluoride exposure, children in the higher exposure groups showed significantly lower IQ scores. For every 1 mg/L increase in urinary fluoride levels, researchers observed an average decrease of 1.63 IQ points.

This effect size might seem small, but population-level impacts can be substantial. The researchers note that a five-point decrease in population IQ would nearly double the number of people classified as intellectually disabled, highlighting the potential public health significance of their findings.
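To see why a small average shift can have such an outsized effect at the tail, here is a minimal back-of-the-envelope sketch. It assumes IQ follows a normal distribution with mean 100 and standard deviation 15 and uses IQ below 70 as a rough proxy for intellectual disability; these distributional assumptions are ours for illustration and are not taken from the study.

```python
from statistics import NormalDist

# Illustrative assumptions (not from the study): IQ ~ Normal(100, 15),
# and "intellectually disabled" approximated as IQ < 70 (two SDs below the mean).
baseline = NormalDist(mu=100, sigma=15)
shifted = NormalDist(mu=95, sigma=15)   # a hypothetical 5-point population-wide decrease

p_before = baseline.cdf(70)
p_after = shifted.cdf(70)

print(f"Share below IQ 70 at baseline: {p_before:.2%}")        # ~2.3%
print(f"Share below IQ 70 after 5-point shift: {p_after:.2%}")  # ~4.8%
print(f"Relative increase: {p_after / p_before:.1f}x")           # roughly doubles
```

Under these assumptions, the share of the population below the IQ-70 cutoff rises from about 2.3% to about 4.8%, which is the "nearly double" effect the researchers describe.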

The study employed three different analytical approaches to examine the relationship between fluoride and IQ. First, they compared mean IQ scores between groups with different exposure levels. Second, they analyzed dose-response relationships to understand how IQ scores changed with increasing fluoride concentrations. Finally, they examined individual-level data to calculate precise estimates of IQ changes per unit increase in fluoride exposure.

Of particular concern, the inverse relationship between fluoride exposure and IQ remained significant even at relatively low exposure levels. When researchers restricted their analysis to studies with fluoride concentrations below 2 mg/L (closer to levels found in fluoridated water systems), they still found evidence of cognitive impacts.

The implications of these findings are especially relevant for the United States, where fluoridated water serves about 75% of people using community water systems. While no U.S. studies were included in this analysis, the researchers note that significant inequalities exist in American water fluoride levels, particularly affecting Hispanic and Latino communities.

The study’s findings arrive at a crucial moment in public health policy. While water fluoridation has been hailed as one of the great public health achievements of the 20th century for its role in preventing tooth decay, this research suggests the need for a careful reassessment of fluoride exposure guidelines, particularly for vulnerable populations like pregnant women and young children.

Source : https://studyfinds.org/danger-in-drinking-water-flouride-linked-to-lower-iq-scores-in-children/

The disturbing trend discovered in 166,534 movies over the past 50 years

(Credit: Prostock-studio on Shutterstock)

Movies are getting deadlier – at least in terms of their dialogue. A new study analyzing over 160,000 English-language films has revealed a disturbing trend: characters are talking about murder and killing more frequently than ever before, even in movies that aren’t focused on crime.

Researchers from the University of Maryland, University of Pennsylvania, and The Ohio State University examined movie subtitles spanning five decades, from 1970 to 2020, to track how often characters used words related to murder and killing. What they found was a clear upward trajectory that mirrors previous findings about increasing visual violence in films.

“Characters in noncrime movies are also talking more about killing and murdering today than they did 50 years ago,” says Brad Bushman, corresponding author of the study and professor of communication at The Ohio State University, in a statement. “Not as much as characters in crime movies, and the increase hasn’t been as steep. But it is still happening. We found increases in violence across all genres.”

By applying sophisticated natural language processing techniques, the team calculated the percentage of “murderous verbs” – variations of words like “kill” and “murder” – compared to the total number of verbs used in movie dialogue. They deliberately took a conservative approach, excluding passive phrases like “he was killed,” negations such as “she didn’t kill,” and questions like “did he murder someone?” to focus solely on characters actively discussing committing violent acts.
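As a rough illustration of the kind of filtering described above, the sketch below counts active, non-negated, non-interrogative uses of "kill" and "murder" as a share of all verbs, using spaCy's part-of-speech and dependency tags. The two-word verb list, the exclusion rules, and the model choice are illustrative assumptions; the study's actual pipeline and full verb set are not reproduced here.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

MURDEROUS_LEMMAS = {"kill", "murder"}  # illustrative list, not the study's full set

def murderous_verb_rate(dialogue: str) -> float:
    """Share of verbs that are active, non-negated uses of 'kill'/'murder',
    skipping questions and passive constructions, roughly as the article describes."""
    doc = nlp(dialogue)
    total_verbs = 0
    murderous = 0
    for sent in doc.sents:
        is_question = sent.text.strip().endswith("?")
        for token in sent:
            if token.pos_ != "VERB":
                continue
            total_verbs += 1
            if token.lemma_.lower() not in MURDEROUS_LEMMAS:
                continue
            has_negation = any(child.dep_ == "neg" for child in token.children)
            is_passive = any(child.dep_ in ("nsubjpass", "auxpass") for child in token.children)
            if not (is_question or has_negation or is_passive):
                murderous += 1
    return murderous / total_verbs if total_verbs else 0.0

# Only the first clause should count as a "murderous verb" under these rules.
print(murderous_verb_rate(
    "I'm going to kill him. He was killed. She didn't kill anyone. Did he murder someone?"
))
```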

“Our findings suggest that references to killing and murder in movie dialogue not only occur far more frequently than in real life but are also increasing over time,” explains Babak Fotouhi, lead author of the study and adjunct assistant research professor in the College of Information at the University of Maryland.

“We focused exclusively on murderous verbs in our analysis to establish a lower bound in our reporting,” notes Amir Tohidi, a postdoctoral researcher at the University of Pennsylvania. “Including less extreme forms of violence would result in a higher overall count of violence.”

Nearly 7% of all movies analyzed contained these murderous verbs in their dialogue. The findings demonstrate a steady increase in such language over time, particularly in crime-focused films. Male characters showed the strongest upward trend in violent dialogue, though female characters also demonstrated a significant increase in non-crime movies.

This rising tide of violent speech wasn’t confined to obvious genres like action or thriller films. Even movies not centered on crime showed a measurable uptick in murder-related dialogue over the 50-year period studied. This suggests that casual discussion of lethal violence has become more normalized across all types of movies, potentially contributing to what researchers call “mean world syndrome” – where heavy media consumption leads people to view the world as more dangerous and threatening than it actually is.

The findings align with previous research showing that gun violence in top movies has more than doubled since 1950, and more than tripled in PG-13 films since that rating was introduced in 1985. What makes this new study particularly noteworthy is its massive scale – examining dialogue from more than 166,000 films provides a much more comprehensive picture than earlier studies that looked at smaller samples.

Movie studios operate in an intensely competitive market where they must fight for audience attention. “Movies are trying to compete for the audience’s attention and research shows that violence is one of the elements that most effectively hooks audiences,” Fotouhi explains.

“The evidence suggests that it is highly unlikely we’ve reached a tipping point,” Bushman warns. Decades of research have demonstrated that exposure to media violence can influence aggressive behavior and mental health in both adults and children. This can manifest in various ways, from direct imitation of observed violent acts to a general desensitization toward violence and decreased empathy for others.

As content platforms continue to multiply and screen time increases, particularly among young people, these findings raise important questions about the cumulative impact of exposure to violent dialogue in entertainment media. The researchers emphasize that their results highlight the crucial need for promoting mindful consumption and media literacy, especially among vulnerable populations like children.

Source : https://studyfinds.org/movie-violence-dialogue-disturbing-trend/

Even small diet tweaks can lead to sustainable weight loss – here’s how

Woman stepping on scale (© Siam – stock.adobe.com)

It’s a well-known fact that to lose weight, you either need to eat less or move more. But how many calories do you really need to cut out of your diet each day to lose weight? It may be less than you think.

To determine how much energy (calories) your body requires, you need to calculate your total daily energy expenditure (TDEE). This is made up of your basal metabolic rate (BMR) – the energy needed to sustain your body’s metabolic processes at rest – and your physical activity level. Many online calculators can help determine your daily calorie needs.

If you reduce your energy intake (or increase the amount you burn through exercise) by 500-1,000 calories per day, you can expect a weekly weight loss of around one to two pounds (0.45-0.9kg).

But studies show that even small calorie deficits (of 100-200 calories daily) can lead to long-term, sustainable weight-loss success. And although you might not lose as much weight in the short-term by only decreasing calories slightly each day, these gradual reductions are more effective than drastic cuts as they tend to be easier to stick with.
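For readers who want to put numbers on this, here is a minimal sketch of the arithmetic. The article does not specify a BMR formula, so this example assumes the Mifflin-St Jeor equation and a standard activity multiplier, along with the rough rule of thumb that a pound of body fat corresponds to about 3,500 calories; the person in the example is hypothetical.

```python
def bmr_mifflin_st_jeor(weight_kg: float, height_cm: float, age: int, sex: str) -> float:
    """Basal metabolic rate via the Mifflin-St Jeor equation (one common formula;
    the article does not specify which estimate to use)."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base + 5 if sex == "male" else base - 161

def tdee(bmr: float, activity_multiplier: float = 1.55) -> float:
    """Total daily energy expenditure = BMR x activity level
    (roughly 1.2 for sedentary up to 1.9 for very active; 1.55 = moderately active)."""
    return bmr * activity_multiplier

# Hypothetical example: a moderately active 35-year-old woman, 70 kg, 165 cm
maintenance = tdee(bmr_mifflin_st_jeor(70, 165, 35, "female"))
for deficit in (200, 500, 1000):
    weekly_loss_lb = deficit * 7 / 3500   # rule of thumb: ~3,500 kcal per pound of fat
    print(f"Eat ~{maintenance - deficit:.0f} kcal/day ({deficit} kcal deficit): "
          f"~{weekly_loss_lb:.1f} lb (~{weekly_loss_lb * 0.45:.2f} kg) per week")
```

Swapping in the 100-200 calorie deficits mentioned above projects to roughly a fifth to two-fifths of a pound per week, slower, but often easier to sustain.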

Hormonal changes

When you decrease your calorie intake, the body’s BMR often decreases. This phenomenon is known as adaptive thermogenesis. This adaptation slows down weight loss so the body can conserve energy in response to what it perceives as starvation. This can lead to a weight-loss plateau – even when calorie intake remains reduced.

Caloric restriction can also lead to hormonal changes that influence metabolism and appetite. For instance, thyroid hormones, which regulate metabolism, can decrease – leading to a slower metabolic rate. Additionally, leptin levels drop, reducing satiety, increasing hunger and decreasing metabolic rate.

Ghrelin, known as the “hunger hormone,” also increases when caloric intake is reduced, signaling the brain to stimulate appetite and increase food intake. Higher ghrelin levels make it challenging to maintain a reduced-calorie diet, as the body constantly feels hungrier.

Insulin, which helps regulate blood sugar levels and fat storage, often becomes more effective when we reduce calorie intake (insulin sensitivity improves). But sometimes insulin levels decrease instead, affecting metabolism and reducing daily energy expenditure. Cortisol, the stress hormone, can also spike – especially during a significant caloric deficit. This may break down muscle and promote fat retention, particularly around the stomach.

Lastly, hormones such as peptide YY and cholecystokinin, which make us feel full when we’ve eaten, can decrease when we lower calorie intake. This may make us feel hungrier.

Fortunately, there are many things we can do to address these metabolic adaptations so we can continue losing weight.

Weight loss strategies

Maintaining muscle mass (either through resistance training or eating plenty of protein) is essential to counteract the physiological adaptations that slow weight loss down. This is because muscle burns more calories at rest compared to fat tissue – which may help mitigate decreased metabolic rate.

Gradual caloric restriction (reducing daily calories by only around 200-300 a day), focusing on nutrient-dense foods (particularly those high in protein and fibre), and eating regular meals can all also help to mitigate these hormonal challenges.

Source : https://studyfinds.org/small-diet-tweaks-sustainable-weight-loss/

 

A proven way to stay younger longer — and all it takes is an hour each week

(© New Africa – stock.adobe.com)

Could you find an hour a week to devote to slowing your biological aging? You’ll get plenty of other benefits too – adding not just more years to your life but more life to your years. That hour can also create a sense of purpose, improve your mental health, give you a psychological lift, boost your social connectedness, and remind you that you’re making the world a better place. All you have to do is volunteer. If you can find a few hours a week, the benefits are even greater.

A study published in this month’s issue of Social Science & Medicine found that volunteering for as little as an hour a week is linked to slower biological aging.

Biologic age

Biologic age refers to the age of a body’s cells and tissues, and how quickly they are aging over time, compared with the body’s chronologic age. The most common way to assess biological age, called epigenetic testing, examines how your behaviors and environment change the expression of your DNA.

Why volunteering is associated with slower aging

Experts explain that volunteering’s significant effect on biologic aging is multifactorial, with physical, social, and psychological benefits.

Volunteering often includes physical activity, like walking. Social connections are vital; we’re programmed for connectedness. Social connections decrease stress and improve cognitive function. According to the study authors, volunteering can also create a sense of purpose, improve mental health, and buffer any loss of important roles, like spouse or parent, as we age.

Family Volunteering

When my son was six, we volunteered at a soup kitchen in a less-affluent part of Detroit. On the Saturday after Thanksgiving, he was right in the thick of making gallons of turkey soup and hundreds of cheese or peanut butter and jelly sandwiches. Finally, he grabbed his own PB&J and munched out with our guests. It’s one of my favorite memories.

Family volunteering (whatever “family” means to you) is a win for everyone. It strengthens families and communities. When family members unite for a worthy cause, their collective power is greater than just adding together the strengths of individuals.

Children will develop compassion and tolerance. They may acquire new skills. More importantly, volunteering provides models from which children learn to respect and serve others. They discover the gratitude that flows only from giving. Children who volunteer are more likely to volunteer as adults and, later on in life, create their own traditions with their children.

Parents get to spend more time with their kids, instilling important values with action; those values run deeper than words could ever reach. Include your kids in planning. You may discover what’s truly important to them.

Nonprofit agencies, understaffed and overstressed, can do little without volunteers. Virtually everyone can find a nonprofit that matches their passion.

Getting started

To decide if volunteering is right for your family, consider:

  • About what issues are you passionate?
  • What are your children’s ages?
  • Who would you like to help?
  • What does your family enjoy doing together?
  • How frequently can you volunteer?
  • What skills and talents can your family offer?
  • What do you want your family to learn from the experience?

There are innumerable causes in which you can make a difference. About 3.5 million people a year will experience homelessness; about 40 percent are kids. Since 1989, the number of beds available in shelters has tripled. Collect toiletries. Give art and school supplies. Provide clothing and transportation.

Every day, 10% of Americans are hungry. Have a canned food drive. Make bag lunches for kids in a homeless shelter. Have a party – with an entrance fee of a can of food.

The elderly often need help the most. Adopt a grandparent. Deliver food – drive for Meals on Wheels. Look at photos and listen to stories. Give manicures and pedicures. Do seasonal yard work, rake leaves or shovel snow. Write letters. Play board games. Read books or newspapers. Bring your pet to visit. Write life stories. Provide transportation for medical appointments. Run errands. Make small home repairs.

I had elderly neighbors next door. When I cleared snow and ice (which was plentiful) from my car, I’d clear their car as well. Mrs. Neighbor watched through the living room window. Sometime later, she told me that she had a remote device to start and clear her car from inside her home! What can you do but laugh?

Source : https://studyfinds.org/volunteering-proven-way-stay-younger-longer/

‘Simple nasal swab’ could revolutionize childhood asthma treatment

(Credit: © Alena Stalmashonak | Dreamstime.com)

A novel diagnostic test using just a nasal swab could transform how doctors diagnose and treat childhood asthma. Researchers at the University of Pittsburgh have developed this non-invasive approach that, for the first time, allows physicians to precisely identify different subtypes of asthma in children without requiring invasive procedures.

Until now, determining the specific type of asthma a child has typically required bronchoscopy, an invasive procedure performed under general anesthesia to collect lung tissue samples. This limitation has forced doctors to rely on less accurate methods like blood tests and allergy screenings, potentially leading to suboptimal treatment choices.

“Because asthma is a highly variable disease with different endotypes, which are driven by different immune cells and respond differently to treatments, the first step toward better therapies is accurate diagnosis of endotype,” says senior author Dr. Juan Celedón, a professor of pediatrics at the University of Pittsburgh and chief of pulmonary medicine at UPMC Children’s Hospital of Pittsburgh, in a statement.

3 subtypes of asthma

The new nasal swab test analyzes the activity of eight specific genes associated with different types of immune responses in the airways. This genetic analysis reveals which of three distinct asthma subtypes, or endotypes, a patient has: T2-high (involving allergic inflammation), T17-high (showing a different type of inflammatory response), or low-low (exhibiting minimal inflammation of either type).
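To make the idea of endotype classification concrete, here is a purely hypothetical sketch: it averages normalized expression values for two made-up pathway signatures and applies an arbitrary cutoff. The study's actual eight-gene panel, scoring method, and thresholds are not reproduced here; every name and number below is illustrative.

```python
from statistics import mean

def classify_endotype(expression: dict[str, float],
                      t2_genes: list[str],
                      t17_genes: list[str],
                      threshold: float = 1.0) -> str:
    """Hypothetical illustration only: average the (normalized) expression of genes in
    each pathway signature and compare against an arbitrary cutoff. The real test uses
    a specific, validated eight-gene panel that is not reproduced here."""
    t2_score = mean(expression[g] for g in t2_genes)
    t17_score = mean(expression[g] for g in t17_genes)
    if t2_score >= threshold and t2_score >= t17_score:
        return "T2-high"
    if t17_score >= threshold:
        return "T17-high"
    return "low-low"

# Made-up gene labels and values, purely for illustration
sample = {"t2_gene_1": 0.4, "t2_gene_2": 0.3, "t17_gene_1": 1.6, "t17_gene_2": 1.8}
print(classify_endotype(sample, ["t2_gene_1", "t2_gene_2"], ["t17_gene_1", "t17_gene_2"]))
# -> "T17-high"
```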

The research team validated their approach across three separate studies involving 459 young people with asthma, focusing particularly on Puerto Rican and African American youth, populations that experience disproportionately higher rates of asthma-related emergency room visits and complications. According to the researchers, Puerto Rican children have emergency department and urgent care visit rates of 23.5% for asthma, while Black children have rates of 26.6% — both significantly higher than the 12.1% rate among non-Hispanic white youth.

The findings, published in JAMA, challenge long-held assumptions about childhood asthma. While doctors have traditionally believed that most cases were T2-high, the nasal swab analysis revealed this type appears in only 23-29% of participants. Instead, T17-high asthma accounted for 35-47% of cases, while the low-low type represented 30-38% of participants.

“These tests allow us to presume whether a child has T2-high disease or not,” explained Celedón. “But they are not 100% accurate, and they cannot tell us whether a child has T17-high or low-low disease. There is no clinical marker for these two subtypes. This gap motivated us to develop better approaches to improve the accuracy of asthma endotype diagnosis.”

Precision medicine for patients

This breakthrough carries significant implications for treatment. Currently, powerful biological medications exist for T2-high asthma, but no available treatments specifically target T17-high or low-low types. The availability of this new diagnostic test could accelerate research into treatments for these previously understudied forms of asthma.

“We have better treatments for T2-high disease, in part, because better markers have propelled research on this endotype,” said Celedón. “But now that we have a simple nasal swab test to detect other endotypes, we can start to move the needle on developing biologics for T17-high and low-low disease.”

The test could also help researchers understand how asthma evolves throughout childhood and adolescence. Celedón noted that one of the “million-dollar questions in asthma” involves understanding why the condition affects children differently as they age.

“Before puberty, asthma is more common in boys, but the incidence of asthma goes up in females in adulthood. Is this related to endotype? Does endotype change over time or in response to treatments? We don’t know,” he says. “But now that we can easily measure endotype, we can start to answer these questions.”

Dr. Gustavo Matute-Bello, acting director of the Division of Lung Diseases at the National Heart, Lung, and Blood Institute, emphasizes the potential impact of this diagnostic advancement. “Having tools to test which biological pathways have a major role in asthma in children, especially those who have a disproportionate burden of disease, may help achieve our goal of improving asthma outcomes,” he says. “This research has the potential to pave the way for more personalized treatments, particularly in minority communities.”

Source : https://studyfinds.org/simple-nasal-swab-could-revolutionize-childhood-asthma-treatment/

Do You Believe in Life After Death? These Scientists Study It.

In an otherwise nondescript office in downtown Charlottesville, Va., a small leather chest sits atop a filing cabinet. Within it lies a combination lock, unopened for more than 50 years. The man who set it is dead.

On its own, the lock is unremarkable — the kind you might use at the gym. The code, a mnemonic of a six-letter word converted into numbers, was known only to the psychiatrist Dr. Ian Stevenson, who set it long before he died, and years before he retired as director of the Division of Perceptual Studies, or DOPS, a parapsychology research unit he founded in 1967 within the University of Virginia’s school of medicine.

Dr. Stevenson called this experiment the Combination Lock Test for Survival. He reasoned that if he could transmit the code to someone from the grave, it might help answer the questions that had consumed him in life: Is communication from the “beyond” possible? Can the personality survive bodily death? Or, simply: Is reincarnation real?

This last conundrum — the survival of consciousness after death — continues to be at the forefront of the division’s research. The team has logged hundreds of cases of children who claim to remember past lives from all continents except Antarctica. “And that’s only because we haven’t looked for cases there,” said Dr. Jim Tucker, who has been investigating claims of past lives for more than two decades. He recently retired after having been the director of DOPS since 2015.

It was an unexpected career path to begin with.

“As far as reincarnation itself goes, I never had any particular interest in it,” said Dr. Tucker, who set out to solely become a child psychiatrist and was, at one point, the head of U.Va.’s Child and Family Psychiatry Clinic. “Even when I was training, it never occurred to me that I’d end up doing this work.”

Now, at 64 years old, after traveling the world to record cases of possible past life recollections, and with books and papers of his own on the subject of past lives, he has left the position.

“There’s a level of stress in medicine, and in academics,” he reflected. “There are always things you should be doing, papers you should be writing, prescriptions you should be giving. I enjoyed my day to day work, both in the clinic and at DOPS, but you reach a point where you’re ready not to have so many responsibilities and demands.”

According to a job listing issued by the medical school, on top of their academic reputation, the ideal candidate to replace Dr. Tucker must have “a track record of rigorous investigation of extraordinary human experiences, such as the mind’s relationship to the body and the possibility of consciousness surviving physical death.”

None of the eight principal team members have the required academic status to undertake the role, making it necessary to find someone externally.

“I think there’s a feeling that it would be rejuvenating for the group to have an outside person come in,” said Dr. Jennifer Payne, vice-chair of research at the department of psychiatry, who leads the selection committee.

Scientists That Have Strayed From the Usual Path

Dr. Tucker was running a busy practice when he first learned about DOPS. It was 1996 and a local newspaper, The Daily Progress in Charlottesville, had profiled Dr. Stevenson after he received funding to interview individuals about their near-death experiences. Entranced by the pioneering work, Dr. Tucker began volunteering at the division before joining as a permanent researcher.

Each of the division’s researchers has committed their career, and to some extent risked their professional reputation, to the study of the so-called paranormal. This includes near-death and out-of-body experiences, altered states of consciousness, and past lives research, all of which fall under the umbrella term “parapsychology.” They are scientists who have strayed from the usual path.

DOPS is a curious institution. There are only a few other labs in the world undertaking similar lines of research — the Koestler Parapsychology Unit at the University of Edinburgh, for instance — with DOPS being by far the most prominent. The only other major parapsychology unit in the United States was Princeton’s Engineering Anomalies Research Laboratory, or PEAR, which focused on telekinesis and extrasensory perception. That unit was shuttered in 2007.

While it is technically part of U.Va., DOPS occupies four spacious would-be condominiums inside a residential building. It is notably distant from the university’s leafy main campus, and at least a couple of miles from the medical school.

“Nobody knows we’re here,” said Dr. Bruce Greyson, 78, a former director of DOPS and a professor emeritus of psychiatry and neurobehavioral sciences at U.Va., who started working with Dr. Stevenson in the late 1970s. “Ian was very cautious about that, because he had faced a lot of prejudice,” Dr. Greyson said. “He kept a very low profile.”

Dr. Greyson received a lot of pushback before joining DOPS. He had worked at the University of Michigan for eight years early in his career, but his interest in near-death experiences began to ruffle feathers, much like it had for Dr. Stevenson.

“They told me, point blank, that I wouldn’t have a future there if I did near-death research, because you can’t measure that in a test tube,” he said. “Unless I could quantify it by a biological measure, they didn’t want to hear about it.” He left Michigan for the University of Connecticut, where he spent 11 years, and then found his way to DOPS.

The atmosphere within DOPS is one of studious calm, and there are only a few signs of the team’s activities. In the basement laboratory one finds a copper-lined Faraday cage used to assess subjects reporting out-of-body experiences, and foam mannequin heads sporting electroencephalogram (EEG) caps. Upstairs, running the full length of the wall in the Ian Stevenson Memorial Library, which holds over 5,000 books and papers pertaining to past lives research, is a glass display case containing a collection of knives, swords and mallets: weapons described by children who recalled a violent end in their previous life.

“It’s not the actual weapon, but the kind of weapon used,” explained Dr. Tucker. Each object is labeled with intricate, sometimes gory, detail. One display told the story of a young girl from Burma, Ma Myint Thein, who was born with deformities of her fingers and birthmarks across her back and neck. “According to villagers,” the label reads, “the man whose life she remembered being had been murdered, his fingers chopped off and his throat slashed by a sword.” It is accompanied by a photograph of the girl’s hands, her right missing two fingers.

That children who claim to remember past lives are most frequently found in South Asia, where reincarnation is a core tenet of many religious beliefs, has been used by critics to debunk the studies. After all, surely it’s all too easy to find corroborative evidence in places with a pre-existing belief in reincarnation.

The question of life after death has been an existential preoccupation for humans throughout time, however, and reincarnation is a central tenet of belief in many cultures. Buddhism, where there is thought to be a 49-day journey between death and rebirth; Hinduism, with its concept of samsara, the endless cycle; and Native American and West African nations, all share similar core concepts of the soul or spirit moving from one life to the next. Meanwhile, a 2023 Pew Research survey found that a quarter of Americans believe it is “definitely or probably true” that people who have died can be reincarnated.

When it comes to past life claims, the DOPS team works on cases that almost always have come directly from parents.

Common features in children who claim to have led a previous life include verbal precocity and mannerisms at odds with those of the rest of the family. Unexplained phobias or aversions are also thought to be carried over from a past existence. In some cases, the remembrances are strikingly clear: the names, professions and quirks of a different set of relatives, the particularities of the streets they used to live on, and sometimes even obscure historical events, details the child couldn’t possibly have known about.

One of the most famous cases the team worked on was that of James Leininger, an American boy who remembered being a fighter pilot in Japan. The case drew a great deal of attention to DOPS, but also brought with it numerous detractors.

Ben Radford, the deputy editor of Skeptical Inquirer, a magazine dedicated to scientific research, believes that wishful thinking and general death anxiety has fueled an increased interest in reincarnation, and finds flaws in the DOPS research methodology, which he often dissects in his blog. He said, “The fact is, no matter how sincere the person is, often recovered memories are false.”

‘The Evidence Is Not Flawless’

Remembered by many as a dignified man with a penchant for three-piece suits, Dr. Stevenson lived for his research. He almost never took time off. “I had to swing by the office once on New Year’s Eve and there was one car in the lot, and it was his,” Dr. Tucker recalled.

Born in 1918, Dr. Stevenson, who was Canadian and graduated from St. Andrews with a degree in history before studying biochemistry and psychiatry at McGill University, had served as chair of the department of psychiatry at U.Va. for 10 years until 1967.

By the early 1960s he had become disillusioned by conventional medicine. In an interview with The New York Times in 1999, he said that he had been drawn to studying past lives through his “discontent with other explanations of human personality. I wasn’t satisfied with psychoanalysis or behaviorism or, for that matter, neuroscience. Something seemed to be missing.”

And so he began recording potential cases of reincarnation, which he would come to call “cases of the reincarnation type,” or CORT. It was one of his initial CORT research papers, from a 1966 trip to India, that caught the attention of Chester F. Carlson, the inventor of the technology behind Xerox photocopying machines. It was Mr. Carlson’s generous financial assistance that enabled Dr. Stevenson to leave his role at the medical school and focus full-time on past lives research.

The dean of the medical school at the time, Kenneth Crispell, didn’t approve of this foray into the paranormal. He was happy to see Dr. Stevenson resign from his spot in the department of psychiatry, and, believing in academic freedom, agreed to the formation of a small research division. However, any hope Dr. Crispell had that Dr. Stevenson and his unorthodox ideas would disappear into the academic shadows was quickly dashed: Mr. Carlson died of a heart attack in 1968 and in his will he bequeathed $1 million to Dr. Stevenson’s endeavor.

While not all of the attention was positive in the division’s early years, some individuals in the science community were intrigued. “Either Dr. Stevenson is making a colossal mistake, or he will be known as the Galileo of the 20th century,” the psychiatrist Harold Lief wrote in a 1977 article for the Journal of Nervous and Mental Disease.

To this day, DOPS is still financed entirely by private donations. In October it was announced that the division had received the first installment of a $1 million estate gift from The Philip B. Rothenberg Legacy Fund, which will be used to finance early-career researchers. Other supporters have included the Bonner sisters, Priscilla Bonner-Woolfan and Margerie Bonner-Lowry — silent screen actresses of the 1920s, whose endowment continues to fund the DOPS directorship. Another unlikely supporter is the actor John Cleese, who first encountered the division at the Esalen Institute, a retreat and intentional community located in Big Sur, Calif.

“These people are behaving like good scientists,” Mr. Cleese said in a phone interview. “Good scientists are after the truth: they don’t just want to be right. I think it is absolutely astonishing and quite disgraceful, the way that orthodox contemporary, materialistic reductionist theory treats all the things — and there are so many of them — that they can’t begin to explain.”

In the early years of the department, Dr. Stevenson traveled the world extensively, recording more than 2,500 cases of children recalling past lives. In this pre-internet era, discovering so many similar accounts and trends served to strengthen his thesis. The findings from these excursions, collected in Dr. Stevenson’s neat handwriting, are stored by country in filing cabinets and are slowly being digitized.

This database has yielded findings the researchers consider noteworthy. The strongest cases, according to the DOPS researchers, involve children under the age of 10, and most remembrances occur between the ages of 2 and 6, after which they appear to fade. The median time between death and rebirth is about 16 months, a period the researchers see as a form of intermission. Very often, the child’s memories match up with the life of a deceased relative.

And yet for all of this meticulous work, Dr. Stevenson was aware of the limitations of past lives research. “The evidence is not flawless and it certainly does not compel such a belief,” he explained in a lecture at The University of Southwestern Louisiana (now the University of Louisiana at Lafayette) in 1989. “Even the best of it is open to alternative interpretations, and one can only censure those who say there is no evidence whatsoever.”

“Ian thought reincarnation was the best explanation, but he wasn’t positive,” said Dr. Greyson. “He thought a lot of the cases may be something else. It might be a kind of possession, it might even be delusion. There are lots of different possibilities. It may be clairvoyance, or picking up the information from some other sources that you’re not aware of.”

After spending more than half his life studying past lives, Dr. Stevenson retired from DOPS in 2002, handing the directorial baton to Dr. Greyson. Though he kept a watchful eye on proceedings from afar, offering guidance when solicited, he never set foot in the division again. He died of pneumonia five years later, at 88 years old.

‘Many of the Memories Are Difficult’

Each year DOPS receives more than 100 emails from parents regarding something their child has said. Reaching out to the division is often an attempt at clarity, but the researchers never promise answers. Their only promise is to take these claims seriously, “but as far as the case having enough to investigate, enough to potentially verify that it matches with a past life, those are very few,” said Dr. Tucker.

This summer, Dr. Tucker drove to the rural town of Amherst, Va., to visit a case of possible past life remembrance. He was joined by his colleagues Marieta Pehlivanova and Philip Cozzolino, who would be taking over his research in the new year.

Ms. Pehlivanova, 43, who specializes in near-death experiences and children who remember past lives, has been at DOPS for seven years and is launching a study of women who have had near-death experiences during childbirth. When she tells people what she does, they find the subject matter both fascinating and disturbing. “We’ve had emails from people saying we’re doing the work of the devil,” she said.

Upon arrival at the family’s home, the team was shown into the kitchen. A three-year-old girl, the youngest of four home-schooled siblings, peeked from behind her mother’s legs, looking up shyly. She wore a baggy Minnie Mouse shirt and went to perch between her grandparents on a banquette, watching everyone take their seats around the dining table.

“Let’s start from the very beginning,” Dr. Tucker said after the paperwork had been signed by Misty, the child’s 28-year-old mother. “It all began with the puzzle piece?”

A few months earlier, mother and child had been looking at a wooden puzzle of the United States, with each state represented by a cartoon of a person or object. Misty’s daughter pointed excitedly at the jagged piece representing Illinois, which had an abstract illustration of Abraham Lincoln.

“That’s Pom,” her daughter exclaimed. “He doesn’t have his hat on.”

This was indeed a drawing of Abraham Lincoln without his hat, but more important, there was no name under the image indicating who he was. Following weeks of endless talk about “Pom” bleeding out after being hurt and being carried to a too-small bed — which the family had started to think could be related to Lincoln’s assassination — they began to consider that their daughter had been present for the historical moment. This was despite the family having no prior belief in reincarnation, nor any particular interest in Lincoln.

On the drive to Amherst, Dr. Tucker confessed his hesitation in taking on this particular case — or any case connected to a famous individual. “If you say your child was Babe Ruth, for example, there would be lots of information online,” he said. “When we get those cases, usually it’s that the parents are into it. Still, it’s all a little strange to be coming out of a three-year-old’s mouth. Now if she had said her daughter was Lincoln, I probably wouldn’t have made the trip.”

Lately, Dr. Tucker has been giving the children picture tests. “Where we think we know the person they’re talking about, we’ll show them a picture from that life, and then show them another picture — a dummy picture — from somewhere else, to see if they can pick out the right one,” he said. “You have to have a few pictures for it to mean anything. I had one where the kid remembered dying in Vietnam. I showed him eight pairs of pictures and a couple of them he didn’t make any choice on, but the others he was six out of six. So, you know, that makes you think. But this girl is so young, that I don’t think we can do that.”

On this occasion, the little girl decided not to engage, and pretended to be asleep. Then she actually fell asleep.

“She’ll come around to it soon,” Misty assured the researchers. As the minutes ticked by, Dr. Tucker decided the picture test would be best left for another time. The child was still asleep when the researchers returned to their car.

After the first meeting, the only course of action is to do nothing and wait, and see whether the memories develop into something more concrete. Because past lives research focuses on spontaneous recollections, the team is largely unconvinced by hypnotic regression. “People will be hypnotized and told to go back to their past lives and all that, which we’re quite skeptical about,” said Dr. Tucker. “You can also make up a lot of stuff, even if you’re talking about memories from this life.”

DOPS rarely takes accounts from adults into consideration. “They’re not our primary interest, partly because, as an adult, you’ve been exposed to a lot,” Dr. Tucker explained. “You may think that you don’t know things from history, but you may well have been exposed to it. But also, the phenomenon typically happens in young kids. It’s as if they carry the memories with them, and they are typically very young when they start talking.”

There is also the concern that parents are looking for attention. “There are people who say, ‘Well, the parents are just doing it to have their 15 minutes of fame or whatever,’” said Dr. Tucker. “But most of them have no interest in anyone knowing about it, you know, because it’s kind of embarrassing, or they worry people will think their kid is weird.”

For a child, recalling a past life can be trying. “They might be missing people, or have a sense of unfinished business,” he said. After a silence, he continued, his voice contemplative. “Frankly it’s probably better for the child that they don’t have these memories, because so many of the memories are difficult. The majority of kids who remember how they died perished in some kind of violent, unnatural death.”

Source : https://dnyuz.com/2025/01/03/do-you-believe-in-life-after-death-these-scientists-study-it/

Why your couch could be killing you: Sedentary lifestyle linked to 19 chronic conditions

(Credit: © Tracy King | Dreamstime.com)

In an era where many of us spend our days hunched over computers or scrolling through phones, mounting evidence suggests our sedentary lifestyles may be quietly damaging our health. A new study from the University of Iowa reveals that physically inactive individuals face significantly higher risks for up to 19 different chronic health conditions, ranging from obesity and diabetes to depression and heart problems.

Medical researchers have long known that regular physical activity helps prevent disease and promotes longevity. However, this comprehensive study, which analyzed electronic medical records from over 40,000 patients at a major Midwestern hospital system, provides some of the most detailed evidence yet about just how extensively physical inactivity can impact overall health.

Leading the study, now published in the journal Preventing Chronic Disease, was a team of researchers from various departments at the University of Iowa, including pharmacy practice, family medicine, and human physiology. Their mission was to examine whether screening patients for physical inactivity during routine medical visits could help identify those at higher risk for developing chronic diseases.

The simple 30-second exercise survey

When patients at the University of Iowa Health Care Medical Center arrived for their annual wellness visits, they received a tablet during the standard check-in process. Researchers implemented the Exercise Vital Sign (EVS), which asks two straightforward questions: how many days per week they engaged in moderate to vigorous exercise (like a brisk walk) and for how many minutes per session. Based on their responses, patients were categorized into three groups: inactive (0 minutes per week), insufficiently active (1-149 minutes per week), or active (150+ minutes per week).
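The categorization itself is simple arithmetic. Below is a minimal sketch of the two-question calculation as described above (days per week times minutes per session, then the 0 / 1-149 / 150+ minute cutoffs); the function name and return format are ours, not the study's.

```python
def exercise_vital_sign(days_per_week: int, minutes_per_session: int) -> tuple[int, str]:
    """Weekly moderate-to-vigorous activity from the two EVS questions,
    bucketed into the study's three categories."""
    weekly_minutes = days_per_week * minutes_per_session
    if weekly_minutes == 0:
        category = "inactive"
    elif weekly_minutes < 150:
        category = "insufficiently active"
    else:
        category = "active"
    return weekly_minutes, category

print(exercise_vital_sign(0, 0))    # (0, 'inactive')
print(exercise_vital_sign(3, 30))   # (90, 'insufficiently active')
print(exercise_vital_sign(5, 40))   # (200, 'active')
```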

“This two-question survey typically takes fewer than 30 seconds for a patient to complete, so it doesn’t interfere with their visit. But it can tell us a whole lot about that patient’s overall health,” says Lucas Carr, associate professor in the Department of Health and Human Physiology and the study’s corresponding author, in a statement.

Study authors discovered clear patterns when they analyzed responses from 7,261 screened patients. About 60% met the recommended guidelines by exercising moderately for 150 or more minutes per week. However, 36% fell short of these guidelines, exercising less than 150 minutes weekly, and 4% reported no physical activity whatsoever. When the team examined the health records of these groups, they found remarkable differences in health outcomes.

Consequences of a sedentary lifestyle

The data painted a compelling picture of how physical activity influences overall health. Active patients showed significantly lower rates of depression (15% compared to 26% in inactive patients), obesity (12% versus 21%), and hypertension (20% versus 35%). Their cardiovascular health markers were also notably better, including lower resting pulse rates and more favorable cholesterol profiles.

Perhaps most revealing was the relationship between activity levels and chronic disease burden. Patients reporting no physical activity carried a median of 2.16 chronic conditions. This number dropped to 1.49 conditions among insufficiently active patients and fell further to just 1.17 conditions among those meeting exercise guidelines. This clear progression suggests that even small increases in physical activity might help reduce disease risk.

To provide context for their findings, the researchers compared the screened group against 33,445 unscreened patients from other areas of the hospital. This comparison revealed an important pattern: patients who completed the survey tended to be younger and healthier than the general patient population. As Carr notes, “We believe this finding is a result of those patients who take the time to come in for annual wellness exams also are taking more time to engage in healthy behaviors, such as being physically active.”

Based on the study’s findings, physical inactivity was associated with higher rates of:

  1. Obesity
  2. Liver disease
  3. Psychoses
  4. Chronic lung disease
  5. Neurological seizures
  6. Coagulopathy (blood clotting disorders)
  7. Depression
  8. Weight loss issues
  9. Uncontrolled hypertension (high blood pressure)
  10. Controlled hypertension
  11. Uncontrolled diabetes
  12. Deficiency anemia
  13. Neurological disorders affecting movement
  14. Peripheral vascular disease
  15. Autoimmune disease
  16. Drug abuse
  17. Hypothyroidism
  18. Congestive heart failure
  19. Valvular disease (heart valve problems)

Need for better exercise counseling

The findings highlight a crucial gap in healthcare delivery that needs addressing. “In our healthcare environment, there’s no easy pathway for a doctor to be reimbursed for helping patients become more physically active,” Carr explains. “And so, for these patients, many of whom report insufficient activity, we need options to easily connect them with supportive services like exercise prescriptions and/or community health specialists.”

However, there’s encouraging news about the financial feasibility of exercise counseling. A related study by Carr’s team found that when healthcare providers billed for exercise counseling services, insurance companies reimbursed these claims nearly 95% of the time. This suggests that expanding physical activity screening and counseling services could be both beneficial for patients and financially viable for healthcare providers.

Source : https://studyfinds.org/couch-potato-sedentary-lifestyle-chronic-diseases/

Science confirms: ‘Know-it-alls’ typically know less than they think

(Credit: © Robert Byron | Dreamstime.com)

The next time you find yourself in a heated argument, absolutely certain of your position, consider this: researchers have discovered that the more confident you feel about your stance, the more likely you are to be working with incomplete information. It’s a psychological quirk that might explain everything from family disagreements to international conflicts.

We’ve all been there: stuck in traffic, grumbling about the “idiot” driving too slowly in front of us or the “maniac” who just zoomed past. But what if that slow driver is carefully transporting a wedding cake, or the speeding car is rushing someone to the hospital? A fascinating new study published in PLOS ONE suggests that these snap judgments stem from what researchers call “the illusion of information adequacy” — our tendency to believe we have enough information to make sound decisions, even when we’re missing crucial details.

“We found that, in general, people don’t stop to think whether there might be more information that would help them make a more informed decision,” explains study co-author Angus Fletcher, a professor of English at The Ohio State University and member of the university’s Project Narrative, in a statement. “If you give people a few pieces of information that seems to line up, most will say ‘that sounds about right’ and go with that.”

In today’s polarized world, where debates rage over everything from vaccines to climate change, understanding why people maintain opposing viewpoints despite access to the same information has never been more critical. This research, conducted by Fletcher, Hunter Gehlbach of Johns Hopkins University, and Carly Robinson of Stanford University, reveals that we rarely pause to consider what information we might be missing before making judgments.

The researchers conducted an experiment with 1,261 American participants recruited through the online platform Prolific. The study centered around a hypothetical scenario about a school facing a critical decision: whether to merge with another school due to a drying aquifer threatening their water supply.

The participants were divided into three groups. One group received complete information about the situation, including arguments both for and against the merger. The other two groups only received partial information – either pro-merger or pro-separation arguments. The remarkable finding? Those who received partial information felt just as competent to make decisions as those who had the full picture.

“Those with only half the information were actually more confident in their decision to merge or remain separate than those who had the complete story,” Fletcher notes. “They were quite sure that their decision was the right one, even though they didn’t have all the information.”

Social media users might recognize this pattern in their own behavior: confidently sharing or commenting on articles after reading only headlines or snippets, feeling fully informed despite missing crucial context. It’s a bit like trying to review a movie after watching only the first half, yet feeling qualified to give it a definitive rating.

The study revealed an interesting finding regarding the influence of new information. When participants who initially received only one side of the story were later presented with opposing arguments, about 55% maintained their original position on the merger decision. That rate is comparable to that of the control group, which had received all information from the start.

Fletcher notes that this openness to new information might not apply to deeply entrenched ideological issues, where people may either distrust new information or try to reframe it to fit their existing beliefs. “But most interpersonal conflicts aren’t about ideology,” he points out. “They are just misunderstandings in the course of daily life.”

Beyond personal relationships, this finding has profound implications for how we navigate complex social and political issues. When people engage in debates about controversial topics, each side might feel fully informed while missing critical pieces of the puzzle. It’s like two people arguing about a painting while looking at it from different angles: each sees only their perspective but assumes they’re seeing the whole picture.

Fletcher, who studies how people are influenced by the power of stories, emphasizes the importance of seeking complete information before taking a stand. “Your first move when you disagree with someone should be to think, ‘Is there something that I’m missing that would help me see their perspective and understand their position better?’ That’s the way to fight this illusion of information adequacy.”

Source : https://studyfinds.org/science-confirms-know-it-alls-typically-know-less-than-they-think/

Ants smarter than humans? Watch as tiny insects outperform grown adults in solving puzzle

Longhorn Crazy Ants (Paratrechina longicornis) swarming and attacking a much larger ant. They are harmless to humans and found in the world's tropical regions. (Credit: © Brett Hondow | Dreamstime.com)

Scientists have long been fascinated by collective intelligence, the idea that groups can solve problems better than individuals. Now, an interesting new study reveals some unexpected findings about group problem-solving abilities across species, specifically comparing how ants and humans tackle complex spatial challenges.

Researchers at the Weizmann Institute of Science designed an ingenious experiment pitting groups of longhorn crazy ants against groups of humans in solving the same geometric puzzle at different scales. The puzzle, known as a “piano-movers’ problem,” required moving a T-shaped load through a series of tight spaces and around corners. Imagine trying to maneuver a couch through a narrow doorway, but with more mathematical precision involved.

What makes this study, published in PNAS, particularly fascinating is that both ants and humans are among the few species known to cooperatively transport large objects in nature. In fact, of the approximately 15,000 ant species on Earth, only about 1% engage in cooperative transport of heavy loads, making this shared behavior between humans and ants especially remarkable.

The species chosen for this evolutionary competition was Paratrechina longicornis, commonly known as “crazy ants” due to their erratic movement patterns. These black ants, measuring just 3 millimeters in length, are widespread globally but particularly prevalent along Israel’s coast and southern regions. The species name, longicornis, refers to their distinctive long antennae, while their frenetic movements earned them their more colorful nickname.

Recruiting participants for the study presented different challenges across species. While human volunteers readily joined when asked, likely motivated by the competitive aspect, the ants required a bit of deception. Researchers had to trick them into thinking the T-shaped load was food that needed to be transported to their nest.

In experiments spanning three years and involving over 1,250 human participants and multiple ant colonies, researchers tested different group sizes tackling scaled versions of the same puzzle. For the ants, they used both individual ants and small groups of about 7 ants, as well as larger groups averaging 80 ants. Human participants were divided into single solvers and groups of 6-9 or 16-26 people.

Perhaps most intriguingly, the researchers found that while larger groups of ants performed significantly better than smaller groups or individuals, the opposite was true for humans when their communication was restricted. When human groups were not allowed to speak or use gestures and had to wear masks and sunglasses, their performance actually deteriorated compared to individuals working alone.

This counterintuitive finding speaks to fundamental differences in how ants and humans approach collective problem-solving. Individual ants cannot grasp the global nature of the puzzle, but their collective motion translates into emergent cognitive abilities; in other words, they develop new problem-solving skills simply by working together. The large ant groups showed impressive persistence and coordination, maintaining their direction even after colliding with walls and efficiently scanning their environment until finding openings.

The study highlights a crucial distinction between ant and human societies. “An ant colony is actually a family. All the ants in the nest are sisters, and they have common interests. It’s a tightly knit society in which cooperation greatly outweighs competition,” explains study co-author Prof. Ofer Feinerman in a statement. “That’s why an ant colony is sometimes referred to as a super-organism, sort of a living body composed of multiple ‘cells’ that cooperate with one another.”

This familial structure appears to enhance the ants’ collective problem-solving abilities. Their findings validated this “super-organism” vision, demonstrating that ants acting as a group are indeed smarter, with the whole being greater than the sum of its parts. In contrast, human groups showed no such enhancement of cognitive abilities, challenging popular notions about the “wisdom of crowds” in the social media age.

Source : https://studyfinds.org/ants-smarter-than-humans/

5 consumer myths to ditch in 2025

(© Ivan Kruk – stock.adobe.com)

Over the past year, books like Less by Patrick Grant and documentaries like Buy Now: The Shopping Conspiracy have encouraged consumers to rethink their internalized beliefs that more consumption equals better living.

As we enter a new year, it’s the perfect time to reflect on and leave behind some consumer myths that are detrimental to ourselves and to the planet.

Myth 1: Buying more is better for consumers and society

Retail therapy is a common practice for coping with negative emotions and might seem easier than actual therapy. However, research has consistently shown that materialistic consumption leads to lower individual and societal well-being. In fact, emerging studies are pointing out that low-consumption lifestyles might bring greater personal satisfaction and higher environmental benefits.

Some might argue that buying more stimulates the economy, creates jobs and supports public services through taxes. However, the positive impact on local communities is often overstated due to globalized supply chains and corporate tax avoidance.

To ensure that your spending really does support your community and does not contribute to economic inequalities, it is helpful to learn more about the story behind the labels and the businesses you support with your money.

Myth 2: New is always better

While certain cutting-edge tech may indeed offer improvements over older versions, for most items new might not always be better. As Grant argues in his book Less, product quality has declined over the past few decades as manufacturers prioritize affordability and engage in planned obsolescence practices. That is, they purposely design products that will break after a certain number of uses to keep the cycle of consumption going and hit their sales targets.

But older products were often built to last, so choosing secondhand or repairing older items can save you money and actually secure you better-quality products.

Myth 3: Being sustainable is expensive

It’s true that some brands have used the term “sustainable” to justify premium prices. However, adopting sustainable consumer practices can often be free or even bring in some extra cash if you sell or donate the things you no longer need.

Instead of “buying new,” consider swapping unused items with others by hosting a “swapping party” for things like toys or clothes with your friends, family, or neighbours. Decluttering your home could free up space, bring you some joy, and could also help you to connect with others by exchanging items.

Myth 4: Buying experiences is better than buying material things

Previous research has found that spending money on experiences brings more happiness primarily because these purchases are better at bringing people together. But material purchases that help you to connect with others, such as a board game, could bring as much joy as an experience.

My research has shown that when spending money, the key is to understand whether the purchase will help you to connect with others, learn new things, or help your community. It’s not about whether we spend our money on material items or experiences.

It is also worth remembering that there are plenty of activities that can help you to achieve those goals with no spending required. So, instead of instinctively reaching for our wallets, perhaps in the new year we could think about whether a non-consumer activity, like a winter hike or some volunteering, could bring us closer to intrinsic goals like personal growth or developing relationships. These goals have been consistently linked to better well-being.

Source : https://studyfinds.org/5-consumer-myths-to-ditch-in-2025/

The rise of the intention economy: How AI could turn your thoughts into currency

(Image by Shutterstock AI Generator)

Imagine scrolling through your social media feed when your AI assistant chimes in: “I notice you’ve been feeling down lately. Should we book that beach vacation you’ve been thinking about?” The eerie part isn’t that it knows you’re sad — it’s that it predicted your desire for a beach vacation before you consciously formed the thought yourself. Welcome to what some experts believe will be known as the “intention economy,” a way of life for consumers in the not-too-distant future.

A new paper by researchers at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence warns that large language models (LLMs) like ChatGPT aren’t just changing how we interact with technology; they’re laying the groundwork for a new marketplace where our intentions could become commodities to be bought and sold.

“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve,” says co-author Dr. Yaqub Chaudhary, a visiting scholar at the Centre, in a statement.

For decades, tech companies have profited from what’s known as the attention economy, where our eyeballs and clicks are the currency. Social media platforms and websites compete for our limited attention spans, serving up endless streams of content and ads. But according to researchers Chaudhary and Dr. Jonnie Penn, we’re witnessing early signs of something potentially more invasive: an economic system that could treat our motivations and plans as valuable data to be captured and traded.

What makes this potential new economy particularly concerning is its intimate nature. “What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions,” Chaudhary explains.

Early signs of this emerging marketplace are already visible. Apple’s new “App Intents” developer framework for Siri includes protocols to “predict actions someone might take in future” and suggest apps based on these predictions. OpenAI has openly called for “data that expresses human intention… across any language, topic, and format.” Meanwhile, Meta has been researching “Intentonomy,” developing datasets for understanding human intent.

Consider Meta’s AI system CICERO, which achieved human-level performance in the strategy game Diplomacy by predicting players’ intentions and engaging in persuasive dialogue. While currently limited to gaming, this technology demonstrates the potential for AI systems to understand and influence human intentions through natural conversation.

Major tech companies are positioning themselves for this potential future. Microsoft has partnered with OpenAI in what the researchers describe as “the largest infrastructure buildout that humanity has ever seen,” investing over $50 billion annually from 2024 onward. The researchers suggest that future AI assistants could have unprecedented access to psychological and behavioral data, often collected through casual conversation.

The researchers warn that unless regulated, this developing intention economy “will treat your motivations as the new currency” in what amounts to “a gold rush for those who target, steer, and sell human intentions.” This isn’t just about selling products — it could have implications for democracy itself, potentially affecting everything from consumer choices to voting behavior.

Source: https://studyfinds.org/rise-of-intention-economy-ai-assistant/

Farewell, 2024: You were just a so-so year for most Americans

(ID 327257589 | 2024 © Penchan Pumila | Dreamstime.com)

Americans may be divided on many issues, but when it comes to rating 2024, they’ve reached a surprising consensus: it was decidedly average. In a nationwide survey of 2,000 people, the year earned a 6.1 out of 10—though beneath this seemingly tepid score lies a heartening discovery about what truly matters to Americans: personal connections topped the list of memorable moments.

The comprehensive study, conducted by Talker Research, surveyed 2,000 Americans about their experiences throughout the year. Perhaps most touching was the discovery that the most memorable moment for many Americans wasn’t a grand achievement or milestone, but rather the simple joy of reconnecting with old friends and family members, with 17% of respondents citing this as their standout experience.

Overall, a notable 30% of Americans rated their year as exceptional, scoring it eight or higher on the ten-point scale.

Personal development emerged as a dominant theme in 2024, with an overwhelming 67% of Americans reporting some form of growth over the past year. This growth manifested in various aspects of their lives: more than half (52%) saw improvements in their personal relationships, while 38% experienced positive changes in their mental and emotional well-being. Physical health gains were reported by 29% of respondents, and a quarter celebrated advances in their financial situation.

The year proved transformative for many Americans in unexpected ways. Tied for second place among memorable experiences were three distinct life changes: creative and personal growth, welcoming a new pet, and mastering a new skill or hobby, each cited by 12% of respondents. Close behind, 11% found meaning in volunteering or contributing to causes they care about.

The survey revealed that 17% of respondents rated the year a seven out of ten, matched by another 17% giving it a five, while 16% scored it an eight. At the extremes, 8% of Americans had a fantastic year worthy of a perfect ten, while 5% rated it a disappointing one out of ten.

The survey highlighted how Americans found joy and achievement in various pursuits, from visiting new places (10%) to overcoming major health challenges (9%). Some celebrated financial victories, with 8% paying off significant debts and 7% reaching important savings goals. Others embraced adventure, with 6% embarking on dream vacations or relocating to new homes.

Source: https://studyfinds.org/americans-rate-2024-six-out-of-ten/

Unlock the Power of Manifestation: How to Achieve What You Truly Desire


Manifestation is the art of turning your dreams into reality by aligning your thoughts, beliefs, and actions toward achieving them. It’s a combination of positive thinking and purposeful action. Here’s a comprehensive guide on how to manifest your aspirations with clarity and confidence.

1. Understand Manifestation and How It Works

Manifestation is rooted in the idea that your thoughts and energy can influence your reality. It’s driven by two powerful principles:

The Power of Positive Thinking

Your mindset shapes your outcomes. Positive thinking helps you:
• Overcome fears and doubts.
• Channel your energy toward your goals.
• Take actions that bring you closer to success.

When you believe in your ability to achieve something, you’re more likely to focus your efforts and persist through challenges.

The Law of Attraction

The law of attraction states that what you focus on is what you attract. By immersing yourself in your interests and goals:
• You gain knowledge and expertise in the area.
• You build networks with like-minded individuals.
• Opportunities naturally come your way, making success more attainable.

2. Key Techniques to Manifest Your Goals

Practice Visualization

Visualization is a powerful tool to make your dreams feel real. Spend a few minutes daily imagining your goals and the steps you’ll take to achieve them.
• Morning visualization can motivate you for the day.
• Evening visualization allows you to reflect on your progress.

Create a Vision Board

A vision board is a physical or digital collage of images and notes representing your goals.
• For instance, if your dream is a perfect home, include pictures of the decor, layout, or neighborhood.
• Seeing your vision board daily reinforces your commitment to your goals.

Maintain a Future Box

A future box (or manifestation box) holds items that represent your goals.
• Collect objects or notes related to your dreams, such as travel accessories for a future vacation or a letter to your future self.
• This tangible collection keeps your aspirations alive and close.

Use the 3-6-9 Method

Write down or repeat your goal:
• 3 times in the morning,
• 6 times in the afternoon,
• 9 times in the evening.

This repetition focuses your thoughts and reinforces your intent.

Try the 777 Method

Write your goal seven times in the morning and evening for seven days.
• This method is particularly effective for short-term objectives.
• It keeps your mind engaged with your aspirations consistently.

Make a 10-10-10 Worksheet

List out:
• 10 things you desire.
• 10 things you’re grateful for.
• 10 things you enjoy doing.

This worksheet offers a holistic view of your goals, strengths, and passions, helping you stay positive and self-aware.

Keep a Journal

Document your dreams, fears, and progress in a journal.
• Journaling helps identify obstacles and find solutions.
• Regular updates keep your journey organized and inspiring.

3. Strategies for Effective Manifestation

Be Clear About What You Want

Clarity is essential. Define your goals in detail to create a focused path toward achieving them.

Make Positive Affirmations

Speak positively about your goals. Examples include:
• “I am capable and deserving of this promotion.”
• “I am grateful for my growing success and the abundance it brings.”
• “I will live in my dream home within five years.”

Take Action Toward Your Goal

Manifestation requires action. Dedicate time to your goals every day or week.
• Example: If you want a new job, apply to at least one opening weekly.

Step Out of Your Comfort Zone

Growth often involves discomfort. Start small, like sharing your work with friends, then gradually take on bigger challenges.

Build Your Confidence

Confidence is key to success. Begin your day with affirmations such as, “I am strong, capable, and ready to succeed.”

Practice Gratitude

Gratitude fosters positivity. Appreciate what you have while working toward your dreams.

4. The Bottom Line: Manifest Your Best Life

Manifestation isn’t magic; it’s a combination of belief, focus, and consistent action. By practicing these techniques and strategies, you can align your thoughts and energy with your goals and transform them into reality.

Your journey begins with a single thought. Dream big, believe in yourself, and take the steps needed to turn your vision into life-changing success.


Scientists crack the code of how gold reaches Earth’s surface

(Credit: Aleksandrkozak/Shutterstock)

In a breakthrough that reads like alchemy, scientists at the University of Geneva have solved a long-standing mystery about how gold travels through the Earth’s crust to form valuable deposits of this precious metal. Their discovery reveals that a particular form of sulfur acts as nature’s gold courier, challenging previous theories about how precious metal deposits form.

The journey of gold from deep within the Earth to mineable deposits has long puzzled geologists. Now, researchers have identified that bisulphide, a specific form of sulfur, plays a crucial role in transporting gold through superhot fluids released by magma – the molten rock that eventually becomes the volcanic formations we see at the surface.

“Due to the drop in pressure, magmas rising towards the Earth’s surface saturate a water-rich fluid, which is then released as magmatic fluid bubbles, leaving a silicate melt behind,” explains Stefan Farsang, lead author of the study published in Nature Geoscience.

The team’s experimental approach allowed the scientists to observe something previous researchers couldn’t: the exact chemical form of sulfur present in these magmatic fluids. Using laser analysis techniques, they discovered that bisulphide, hydrogen sulfide, and sulfur dioxide are the main forms of sulfur present at these extreme temperatures.

The findings overturn a 2011 study that had suggested different sulfur compounds were responsible for gold transport.

“By carefully choosing our laser wavelengths, we also showed that in previous studies, the amount of sulfur radicals in geologic fluids was severely overestimated and that the results of the 2011 study were in fact based on a measurement artifact,” says Farsang, effectively settling a decade-long debate in the geological community.

Since much of the world’s gold and copper comes from deposits formed by these magma-derived fluids, understanding exactly how they form could also aid in future mineral exploration efforts.

Think of it as understanding nature’s own delivery system: just as a postal service needs specific vehicles and routes to deliver packages, gold needs specific chemical compounds and conditions to move through Earth’s crust. By identifying bisulphide as the primary “delivery vehicle,” scientists have mapped out one of nature’s most valuable transportation networks.

The deposits themselves emerge from the complex interaction between tectonic plates – the massive sections of Earth’s crust that slowly move against each other. When one plate slides beneath another, it generates magma rich in volatile elements like water, sulfur, and chlorine. As this magma rises toward the surface, it releases fluids that carry dissolved metals with them – a process that ultimately leads to the formation of the gold deposits that humans have prized throughout history.

This new understanding of gold’s journey through the Earth not only helps explain how existing deposits formed but could also guide future exploration efforts, potentially making gold mining more efficient and targeted.

Source : https://studyfinds.org/how-gold-reaches-earths-surface/

Single cigarette takes 20 minutes off life expectancy, study finds

The study found having a single cigarette reduces life expectancy by 17 minutes in men and 22 minutes in women. Photograph: Yui Mok/PA

Smokers are being urged to kick the habit for 2025 after a fresh assessment of the harms of cigarettes found they shorten life expectancy even more than doctors thought.

Researchers at University College London found that on average a single cigarette takes about 20 minutes off a person’s life, meaning that a typical pack of 20 cigarettes can shorten a person’s life by nearly seven hours.

According to the analysis, if a smoker on 10 cigarettes a day quits on 1 January, they could prevent the loss of a full day of life by 8 January. They could boost their life expectancy by a week if they stay smoke-free until 20 February, and by a whole month if they stay off cigarettes until 5 August. By the end of the year, they could have avoided losing 50 days of life, the assessment found.
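These dates follow directly from the 20-minutes-per-cigarette estimate. The short Python sketch below reproduces the arithmetic for a 10-a-day smoker who quits on 1 January; it is an illustration of the numbers quoted above, not code or data from the study, and the year 2025 is assumed.

```python
from datetime import date, timedelta

MIN_PER_CIG = 20        # minutes of life lost per cigarette (the study's average estimate)
CIGS_PER_DAY = 10       # the example smoker above
saved_per_day = MIN_PER_CIG * CIGS_PER_DAY   # 200 minutes of life saved per smoke-free day

quit_day = date(2025, 1, 1)   # the article only says "1 January"; 2025 is assumed
for label, minutes in [("a full day", 24 * 60),
                       ("a week", 7 * 24 * 60),
                       ("a month", 30 * 24 * 60)]:
    days_needed = minutes / saved_per_day
    # prints 2025-01-08, 2025-02-20 and 2025-08-05 respectively
    print(f"{label} of life saved by {quit_day + timedelta(days=days_needed)}")

# A pack of 20 cigarettes: 20 * 20 = 400 minutes, i.e. nearly seven hours of life
print(f"Life lost per pack: {20 * MIN_PER_CIG / 60:.1f} hours")
# A full smoke-free year saves roughly 50 days of life
print(f"Days of life saved in a year: {365 * saved_per_day / (24 * 60):.1f}")
```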

“People generally know that smoking is harmful but tend to underestimate just how much,” said Dr Sarah Jackson, a principal research fellow at UCL’s alcohol and tobacco research group. “On average, smokers who don’t quit lose around a decade of life. That’s 10 years of precious time, life moments, and milestones with loved ones.”

Smoking is one of the world’s leading preventable causes of disease and death, killing up to two-thirds of long-term users. It causes about 80,000 deaths a year in the UK and a quarter of all cancer deaths in England.

The study, commissioned by the Department of Health, draws on the latest data from the British Doctors Study, which began in 1951 as one of the world’s first large studies into the effects of smoking, and the Million Women Study, which has tracked women’s health since 1996.

While an earlier assessment in the BMJ in 2000 found that on average a single cigarette reduced life expectancy by about 11 minutes, the latest analysis published in the Journal of Addiction nearly doubles the figure to 20 minutes – 17 minutes for men and 22 minutes for women.

“Some people might think they don’t mind missing out on a few years of life, given that old age is often marked by chronic illness or disability. But smoking doesn’t cut short the unhealthy period at the end of life,” Jackson told the Guardian. “It primarily eats into the relatively healthy years in midlife, bringing forward the onset of ill-health. This means a 60-year-old smoker will typically have the health profile of a 70-year-old non-smoker.”

Although some smokers live long lives, others develop smoking-related diseases and even die from them in their 40s. The variation is driven by differences in smoking habits such as the type of cigarette used, the number of puffs taken and how deeply smokers inhale. People also differ in how susceptible they are to the toxic substances in cigarette smoke.

The authors stress that smokers must quit completely to get the full benefits to health and life expectancy. Previous work has shown there is no safe level of smoking: the risk of heart disease and stroke is only about 50% lower for people who smoke one cigarette a day compared with those who smoke 20 a day. “Stopping smoking at every age is beneficial, but the sooner smokers get off this escalator of death the longer and healthier they can expect their lives to be,” they write.

Source : https://www.theguardian.com/society/2024/dec/30/single-cigarette-takes-20-minutes-off-life-expectancy-study

Tea bags release shocking number of plastic particles into your drink

Photo by Charlotte May from Pexels

In a concerning discovery for tea lovers everywhere, scientists have found that a simple cup of tea might come with an unwanted extra ingredient: billions of microscopic plastic particles. A new study reveals that common tea bags can release substantial amounts of micro and nanoplastics (MNPLs) into your brew during the steeping process.

The research, conducted by a team of scientists from Spain, Egypt, and Germany, and published in Chemosphere, examined three different types of commercial tea bags: those made from nylon-6, polypropylene, and cellulose. What they found was startling. A single tea bag can release anywhere from 8 million to 1.2 billion nanoplastic particles into your cup, with polypropylene bags being the worst offenders.

These plastic particles are incredibly tiny – most are smaller than a human hair’s width – and can be readily absorbed by the cells in our digestive system. The researchers discovered that different types of intestinal cells interact with these particles in varying ways, with some cells taking up more particles than others. Of particular concern was the finding that these nanoplastics can interact with cell nuclei, where our genetic material is stored.

“We have managed to innovatively characterize these pollutants with a set of cutting-edge techniques, which is a very important tool to advance research on their possible impacts on human health,” says Universitat Autònoma de Barcelona researcher Alba Garcia in a media release.

The study’s findings add to growing concerns about our daily exposure to microplastics through food and beverages. While plastic tea bags have become increasingly popular due to their durability and convenience, this research suggests we might be paying an unexpected health price for this modern convenience.

When examining the tea bags under powerful microscopes, the researchers found various surface irregularities, including scales, spheres, and irregular particles. These imperfections, which can appear during the manufacturing process, may contribute to the release of plastic particles during steeping.

The study raises particular concerns about how these particles interact with our digestive system. The researchers tested three different types of human intestinal cells, including ones that produce protective mucus similar to our gut lining. Interestingly, cells that produced more mucus tended to accumulate more plastic particles, suggesting that our body’s natural defensive barriers might actually trap these unwanted materials.

While the immediate health implications of consuming these particles remain unclear, the research highlights an important source of plastic exposure that many people might not be aware of. With tea being one of the world’s most popular beverages, the cumulative exposure to these particles could be significant for regular tea drinkers.

“As the use of plastic in food packaging continues to increase, it is vital to address MNPLs contamination to ensure food safety and protect public health,” the researchers conclude.

Source : https://studyfinds.org/tea-bags-plastic-particles/

150 years under the sea? Whale lifespans are much longer than we thought

Animals with long lifespans tend to reproduce extremely slowly. Els Vermeulen

Southern right whales have lifespans that reach well past 100 years, and 10% may live past 130 years, according to our new research published in the journal Science Advances. Some of these whales may live to 150. This lifespan is almost double the 70-80 years they are conventionally believed to live.

North Atlantic right whales were also thought to have a maximum lifespan of about 70 years. We found, however, that this critically endangered species’ current average lifespan is only 22 years, and they rarely live past 50.

These two species are very closely related – only 25 years ago they were considered to be one species – so we’d expect them to have similarly long lifespans. We attribute the stark difference in longevity in North Atlantic right whales to human-caused mortality, mostly from entanglements in fishing gear and ship strikes.

We made these new age estimates using photo identification of individual female whales over several decades. Individual whales can be recognized year after year from photographs. When they die, they stop being photographically “resighted” and disappear. Using these photos, we developed what scientists call “survivorship curves” by estimating the probability whales would disappear from the photographic record as they aged. From these survivorship curves, we could estimate maximum potential lifespans.
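The authors’ full statistical model is not reproduced here, but the general idea of a survivorship curve can be sketched with a standard Kaplan–Meier-style estimate. In the minimal Python sketch below, the resighting records are invented purely for illustration: each entry gives the age at which a whale was last photographed and whether the record is censored (i.e., the whale was still being resighted when the study ended).

```python
from collections import Counter

# Hypothetical photo-ID histories: (age in years at last sighting, censored?)
records = [(12, False), (35, False), (35, True), (60, False),
           (74, True), (88, False), (101, False), (110, True)]

def kaplan_meier(records):
    """Return (age, probability of surviving past that age) pairs."""
    # Ages at which whales permanently disappear from the photographic record
    deaths = Counter(age for age, censored in records if not censored)
    survival, curve = 1.0, []
    for age in sorted(deaths):
        # Whales still "at risk" (not yet disappeared or censored) at this age
        at_risk = sum(1 for a, _ in records if a >= age)
        survival *= 1 - deaths[age] / at_risk
        curve.append((age, survival))
    return curve

for age, s in kaplan_meier(records):
    print(f"P(survive past {age:>3} yr) = {s:.2f}")
```

The resulting curve estimates the probability of surviving past each age at which a disappearance is observed; extrapolating its tail is, in spirit, how maximum potential lifespans can be inferred from resighting data.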

Twenty-five years ago, scientists working with Indigenous whale hunters in the Arctic showed that bowhead whales could live up to and even over 200 years. Their evidence included finding stone harpoon points that hadn’t been used since the mid-1800s embedded in the blubber of whales recently killed by traditional whalers. Analysis of proteins from the eyes of hunted whales provided further evidence of their long lifespan. Like right whales, before that analysis, researchers thought bowhead whales lived to about 80 years, and that humans were the mammals that lived the longest.

In the years following that report, scientists tried to figure out what was unique about bowhead whales that allowed them to live so long. But our new analysis of the longevity of two close relatives of bowheads shows that other whale species also have potentially extremely long lives.

Why it matters

Understanding how long wild animals live has major implications for how to best protect them. Animals that have very long lifespans usually reproduce extremely slowly and can go many years between births. Baleen whales’ life history – particularly the age when females start breeding and the interval between calves – is strongly influenced by their potential lifespan. Conservation and management strategies that do not plan accordingly will have a higher chance of failure. This is especially important given the expected impacts of climate disruption.

What still isn’t known

There are many other large whales, including blue, fin, sei, humpback, gray and sperm whales. Like bowhead and right whales, these were also almost wiped out by whaling. Scientists currently assume they live about 80 or 90 years, but that’s what we believed about bowhead and right whales until data proved they can live much longer.

How long can these other whale species live? Industrial whaling, which ended only in the 1960s, removed old whales from the world’s whale populations. Though many whale populations are recovering in number, there hasn’t been enough time for whales born after the end of industrial whaling to become old.

It’s possible, even likely, that many other whale species will also prove to have long lifespans.

What other research is being done

Other research finds the loss of older individuals from populations is a phenomenon occurring across most large animal species. It diminishes the reproductive potential of many species. Researchers also argue this represents a real loss of culture and wisdom in animals that degrades their potential for survival in the face of changing conditions.

Source : https://studyfinds.org/whale-lifespans-much-longer/

Evolution: What Will Humans Look Like in 50,000 Years?

Many people hold the view that evolution in modern humans has come to a halt. But while modern medicine and technologies have changed the environment in which evolution operates, many scientists are in agreement that the phenomenon is still occurring.

This evolution may be less about survival and more about reproductive success in our current environment. Changes in gene frequencies because of factors like cultural preferences, geographic migration and even random events continue to shape the human genome.

But what might humans look like in 50,000 years’ time? Such a question is clearly speculative in nature. Nevertheless, experts that Newsweek spoke to gave their predictions for how evolution might affect the appearance of our species in the future.

“Evolution is part deterministic—there are rules for how systems evolve—and part random—mutations and environmental changes are primarily unpredictable,” Thomas Mailund, an associate professor of bioinformatics at Aarhus University in Denmark, told Newsweek.

“In some rare cases, we can observe evolution in action, but over a time span of tens or hundreds of years, it is mostly guesswork. We can make somewhat qualified guesses, but the predictive power is low, so think of it as thought experiments more than anything else.”

Something we can say with certainty is that 50,000 years is more than enough time for several evolutionary changes to occur, albeit on a relatively minor scale, according to Mailund.

“Truly dramatic changes require a longer time, of course. We are not going to grow wings or gills in less than millions of years, and 50,000 years ago, we were anatomically modern humans.”

Jason Hodgson, an anthropologist and evolutionary geneticist at Anglia Ruskin University in the United Kingdom, told Newsweek that 50,000 years is an “extremely long time” in the course of human evolution, representing roughly 1,667 human generations given a 30-year generation time.

A 3D illustration of a facial recognition system. What will humans look like in 50,000 years? Design Cells/iStock/Getty Images Plus

“Within the past 50,000 years most of the variation that is seen among human populations evolved,” Hodgson said. “This includes all of the skin color variation seen across the globe, all of the stature variation, all of the hair color and texture variation, etc. In fact, most of the variation we are so familiar with evolved within the past 10,000 years.”

In the more immediate future, Hodgson predicts that global populations will become more homogenous and less structured when it comes to genetics and phenotype—an individual’s observable traits.

“Currently the phenotypes that we associate with geographic regions—for example, dark skin in Africans, light skin in Scandinavians, short stature in African pygmy hunter-gatherers, tall stature in Dutch, etc.—is maintained by assortative mating. People are much more likely to choose mates who are similar to themselves,” he said.

“Part of this is due to the human history of migration and culture which means people tend to live by and be exposed to people who are more similar to themselves with respect to global variation. And some of this is due to preference for similarity within local populations for reasons that we still do not really understand.

“However, admixture—mating between distantly related groups—is increasing, and this will result in less structure and a more homogenous global population. As an analogy, if you stick a bunch of poodles, rottweilers, chihuahuas and St. Bernards on an island and let them breed randomly, within a few generations everything would be a medium sized brown dog.”

When distinct populations mix, so do their traits. Some traits are determined by a few gene variants. But many traits result from a combination of different genes, and these will blend together to some degree, according to Mailund.

“So there will be some changes, not caused by selection, but because previously isolated groups are now mixing,” he said.

It is still possible though that despite increasing homogeneity, not everyone would evolve in the same direction, according to Nick Longrich, a paleontologist and evolutionary biologist at the University of Bath in the United Kingdom.

“You could imagine that in distinct subpopulations you could get people evolving in different ways,” he said.

If there are strong, consistent pressures toward certain characteristics, our species could experience “very rapid evolution” in a matter of thousands—or possibly even hundreds—of years, Longrich said.

While we do not know what the selective pressures will be like going forward, Longrich said he expects a number of developments, extrapolating from past trends and current conditions.

For example, we might get taller, because of sexual selection. And we might also become more attractive on average, since sexual selection plays more of a role in modern society than natural selection.

“Attractiveness is relative, so maybe we’d look like movie stars but if everyone looked that way, it wouldn’t be exceptional,” he said.

As time passes and technology evolves, it is also possible that humans will begin to direct our own evolution in a targeted fashion through gene editing tools such as CRISPR—potentially aided by artificial intelligence.

“Applying genetic techniques to humans that alter phenotypes is highly controversial and ethically fraught. Indeed, 20th century eugenicists thought they could improve the human species by only allowing the ‘right’ people to breed,” Hodgson said.

Source : https://www.newsweek.com/evolution-what-will-humans-look-like-50000-years-2006894

The human brain processes thoughts 5,000,000 times slower than the average internet connection

The brain may not be as powerful as previously thought, according to the research (Picture: Getty Images)

People think many millions of times slower than the average internet connection, scientists have found.

The body’s sensory systems, including the eyes, ears, skin, and nose, gather data about our environments at a rate of a billion bits per second.

But the brain processes these signals at only about 10 bits per second, millions of times slower than the inputs, according to author Markus Meister.

A bit is the unit of information in computing. A typical Wi-Fi connection processes about 50 million bits per second.

Despite the brain having over 85 billion neurons, researchers found that humans think at around 10 bits per second – a number they called ‘extremely low’.
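For context, the headline comparison is simple arithmetic on the figures quoted above. The short Python sketch below uses the article’s round numbers and adds no new data:

```python
# All figures are the round numbers quoted in the article.
brain_rate = 10                   # bits per second of conscious thought
wifi_rate = 50_000_000            # "typical" Wi-Fi connection, about 50 Mbps
sensory_rate = 1_000_000_000      # rate at which the senses gather data, about 1 Gbps

print(wifi_rate // brain_rate)     # 5,000,000  -> "5 million times slower" than Wi-Fi
print(sensory_rate // brain_rate)  # 100,000,000 -> how heavily sensory input is compressed
```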

Writing in the scientific journal Neuron, research co-author Markus Meister said: ‘Every moment, we are extracting just 10 bits from the trillion that our senses are taking in and using those 10 to perceive the world around us and make decisions.

‘This raises a paradox: What is the brain doing to filter all this information?’

Individual nerve cells in the brain are capable of transmitting over 10 bits per second.

However, the new findings suggest they don’t help process thoughts at such high speeds.

This makes humans relatively slow thinkers, who are unable to process many thoughts in parallel, the research suggests.

This rules out scenarios like a chess player envisioning a whole set of future moves at once; instead, people can explore only one possible sequence at a time rather than several in parallel.

The discovery of this ‘speed limit’ paradox in the brain warrants further neuroscience research, scientists say.

They speculated that this speed limit likely emerged in the first animals with a nervous system.

These creatures likely used their brains primarily for navigation to move toward food and away from predators.

Since human brains evolved from these, it could be that we can only follow one ‘path’ of thought at a time, according to researchers.

Source : https://metro.co.uk/2024/12/27/human-brain-processes-thoughts-5-000-000-times-slower-average-internet-connection-22258645/

Young and restless: 37% of Gen Z skipping the gym, going straight to Ozempic

Overweight woman applying medicine injection (© Mauricio – stock.adobe.com)

CORONA DEL MAR, Calif. — Is your New Year’s resolution to lose some weight? A new poll finds many people may actually achieve their goals in 2025 — with a little help from their pharmacist. More than a quarter of Americans are planning to turn to GLP-1 medications like Ozempic and Wegovy to reach their 2025 weight loss goals.

According to researchers with Tebra, who surveyed over 1,000 Americans in November 2024, there’s now a growing acceptance of pharmaceutical interventions for weight management, particularly among younger people.

Specifically, Gen Z is skipping the gym and going straight to the pharmacy, with 37% planning to add these medications to their wellness strategy in the coming year. Women are leading the charge, with 30% intending to use GLP-1 drugs to reach their weight loss goals, compared to 20% of men. On average, women are setting more ambitious weight loss targets, aiming to shed 23 pounds in 2025, while men are looking to lose 19 pounds.

Despite the growing enthusiasm for weight loss shortcuts, the path to accessing these medications remains complicated. Nearly eight in 10 people believe GLP-1 weight loss medications are out of reach for the average person due to their skyrocketing cost. In fact, 64% of those interested in using these medications cite high costs as their main concern, followed by worries about potential side-effects (59%).

For those who have already taken the plunge, the results appear to justify the costs. An overwhelming 86% of current GLP-1 users report that the health risks are worth the results they’re seeing. This satisfaction may explain why 66% of Americans now believe these medications are more effective than traditional weight loss routes like diet and exercise.

Baby boomers show the strongest confidence in these drugs’ effectiveness, with 72% believing they outperform traditional methods, followed by Gen X at 70%, millennials at 64%, and Gen Z at 58%. The gender gap is even more pronounced, with 75% of women believing in the superior effectiveness of GLP-1 medications compared to 53% of men.

Despite the growing trust in popular weight loss drugs, nearly one in four current users are taking these medications without a doctor’s oversight, raising questions about safety and proper usage. This statistic becomes particularly alarming when you consider that 41% of Americans are uncertain about the long-term effectiveness of these drugs, and 39% worry about developing an addiction to them.

The timing of this shift toward pharmaceutical weight loss solutions may not be coincidental. The survey reveals that nearly half (49%) of Americans have previously abandoned their New Year’s resolution wellness goals, with 31% giving up as early as February. This history of frustration with traditional approaches might explain the growing openness to medical shortcuts for weight loss.

Source : https://studyfinds.org/gen-z-ozempic/


The effects of ‘brain rot’: How junk content is damaging our minds

Recent research has found that Internet use and abuse is associated with a decrease in gray matter in the prefrontal regions of the brain.
Photographer, Basak Gurbuz Derman (Getty Images)

“Brain rot” was named the Oxford Word of the Year for 2024 after a public vote involving more than 37,000 people. Oxford University Press defines the concept as “the supposed deterioration of a person’s mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging.”

According to Oxford’s language experts, the term reflects growing concerns about “the impact of consuming excessive amounts of low-quality online content, especially on social media.” The term increased in usage frequency by 230% between 2023 and 2024.

But brain rot is not just a linguistic quirk. Over the past decade, scientific studies have shown that consuming excessive amounts of junk content — including sensationalist news, conspiracy theories and vacuous entertainment — can profoundly affect our brains. In other words, “rot” may not be that big of an exaggeration when it comes to describing the impact of low-quality online content.

Research from prestigious institutions such as Harvard Medical School, Oxford University, and King’s College London — cited by The Guardian — reveals that social media consumption can reduce grey matter, shorten attention spans, weaken memory, and distort core cognitive functions.

A 2023 study highlighted these effects, showing how internet addiction causes structural changes in the brain that influence behavior and cognitive abilities. Michoel Moshel, a researcher at Macquarie University and co-author of the study, explains that compulsive content consumption — popularly known as doomscrolling — “takes advantage of our brain’s natural tendency to seek out new things, especially when it comes to potentially harmful or alarming information, a trait that once helped us survive.”

Moshel explains that features like “infinite scrolling,” which are designed to keep users glued to their screens, can trap people — especially young individuals — in a cycle of content consumption for hours. “This can significantly impair attention and executive functions by overwhelming our focus and altering the way we perceive and respond to the world,” says the researcher.

Eduardo Fernández Jiménez, a clinical psychologist at Hospital La Paz in Madrid, explains that the brain activates different neural networks to manage various types of attention. He notes that excessive use of smartphones and the internet is causing issues with sustained attention, which “allows you to concentrate on the same task for a more or less extended period of time.” He adds: “It is the one that is linked to academic learning processes.”

The problem, says the researcher, is that social media users are constantly exposed to rapidly changing and variable stimuli — such as Instagram notifications, WhatsApp messages, or news alerts — that have addictive potential. This means users are constantly switching their focus, which undermines their ability to concentrate effectively.

The first warning came with email

Experts have been sounding the alarm about this issue since the turn of the century, when email became a common tool. In 2005, The Guardian ran the headline: “Emails pose ‘threat to IQ.’” The article reported that a team of scientists at the University of London investigated the impact of the constant influx of information on the brain. After conducting 80 clinical trials, they found that participants who used email and cellphones daily experienced an average IQ drop of 10 points. The researchers concluded that this constant demand for attention had a more detrimental effect than cannabis use.

This was before the rise of tweets, Instagram reels, TikTok challenges, and push notifications. The current situation, however, is even more concerning. Recent research has found that excessive internet use is linked to a decrease in grey matter in the prefrontal regions of the brain — areas responsible for problem-solving, emotional regulation, memory, and impulse control.

The research conducted by Moshel and his colleagues supports these findings. Their latest study, which reviewed 27 neuroimaging studies, revealed that excessive internet use is associated with a reduction in the volume of grey matter in brain regions involved in reward processing, impulse control, and decision-making. “These changes reflect patterns observed in substance addictions,” says Moshel, comparing them to the effects of methamphetamines and alcohol.

That’s not all. The research also found that “these neuroanatomical changes in adolescents coincide with disruptions in processes such as identity formation and social cognition — critical aspects of development during this stage.” This creates a kind of feedback loop, where the most vulnerable individuals are often the most affected. According to a study published in Nature in November, people with poorer mental health are more likely to engage with junk content, which further exacerbates their symptoms.

Source : https://english.elpais.com/technology/2024-12-26/the-effects-of-brain-rot-how-junk-content-is-damaging-our-minds.html

Which infectious disease is most likely to be biggest emerging problem in 2025?

(Credit: Melnikov Dmitriy/Shutterstock)

COVID emerged suddenly, spread rapidly and killed millions of people around the world. Since then, I think it’s fair to say that most people have been nervous about the emergence of the next big infectious disease – be that a virus, bacterium, fungus or parasite.

With COVID in retreat (thanks to highly effective vaccines), the three infectious diseases causing public health officials the greatest concern are malaria (a parasite), HIV (a virus) and tuberculosis (a bacterium). Between them, they kill around 2 million people each year.

And then there are the watchlists of priority pathogens – especially those that have become resistant to the drugs usually used to treat them, such as antibiotics and antivirals.

Scientists must also constantly scan the horizon for the next potential problem. While this could come in any form of pathogen, certain groups are more likely than others to cause swift outbreaks, and that includes influenza viruses.

One influenza virus is causing great concern right now and is teetering on the edge of being a serious problem in 2025. This is influenza A subtype H5N1, sometimes referred to as “bird flu.” This virus is widely spread in both wild and domestic birds, such as poultry. Recently, it has also been infecting dairy cattle in several U.S. states and found in horses in Mongolia.

When influenza cases start increasing in animals such as birds, there is always a worry that it could jump to humans. Indeed, bird flu can infect humans, with 61 cases in the U.S. this year already, mostly resulting from farm workers coming into contact with infected cattle and from people drinking raw milk.

Compared with only two cases in the Americas in the previous two years, this is quite a large increase. Coupling this with a 30% mortality rate from human infections, bird flu is quickly jumping up the list of public health officials’ priorities.

Luckily, H5N1 bird flu doesn’t seem to transmit from person to person, which greatly reduces its likelihood of causing a pandemic in humans. Influenza viruses have to attach to molecular structures called sialic receptors on the outside of cells in order to get inside and start replicating.

Flu viruses that are highly adapted to humans recognise these sialic receptors very well, making it easy for them to get inside our cells, which contributes to their spread between humans. Bird flu, on the other hand, is highly adapted to bird sialic receptors and has some mismatches when “binding” (attaching) to human ones. So, in its current form, H5N1 can’t easily spread in humans.

However, a recent study showed that a single mutation in the flu genome could make H5N1 adept at spreading from human to human, which could jump-start a pandemic.

If this strain of bird flu makes that switch and can start transmitting between humans, governments must act quickly to control the spread. Centers for disease control around the world have drawn up pandemic preparedness plans for bird flu and other diseases that are on the horizon.

For example, the UK has bought 5 million doses of H5 vaccine that can protect against bird flu, in preparation for that risk in 2025.

Even without the potential ability to spread between humans, bird flu is likely to affect animal health even more in 2025. This not only has large animal welfare implications but also the potential to disrupt food supply and have economic effects as well.

Source : https://studyfinds.org/which-infectious-disease-is-most-likely-to-be-biggest-emerging-problem-in-2025/


Why human civilization may be on the brink of a ‘planetary phase shift’

(Credit: © Aleksandr Zamuruev | Dreamstime.com)

Systems theorist suggests the ‘next giant leap in evolution’ is nearing, but authoritarian politics could get in the way

Picture a caterpillar transforming into a butterfly. At a certain point, the creature enters a critical phase where its old form breaks down before emerging as something entirely new. According to a thought-provoking paper by renowned systems theorist Dr. Nafeez Ahmed, human civilization may be approaching a similar transformative moment, or what researchers call a “planetary phase shift.” And while the potential for positive transformation is enormous, Ahmed warns that rising authoritarianism could derail this evolutionary leap.

Ahmed, founding director of the System Shift Lab, presents compelling evidence in the journal Foresight that we’re living through an unprecedented period of change. Multiple global crises — from climate change to economic instability to technological disruption — aren’t just separate problems, but symptoms of an entire civilization undergoing metamorphosis.

“An amazing new possibility space is emerging, where humanity could provide itself superabundant energy, transport, food and knowledge without hurting the earth,” Ahmed says in a statement. “This could be the next giant leap in human evolution.”

The paper synthesizes research across natural and social sciences to develop a new theory of how civilizations rise and fall. It introduces the concept of “adaptive cycles,” a pattern observed in everything from forest ecosystems to ancient civilizations. These cycles move through four phases: rapid growth, conservation (stability), release (creative destruction), and reorganization. Think of it like the seasons: spring growth, summer abundance, autumn release, and winter renewal.

According to Ahmed, industrial civilization is now entering the “release” phase, where old structures begin breaking down. This explains why we’re seeing simultaneous crises across multiple systems. The fossil fuel economy is faltering, evidenced by a global decrease in Energy Return on Investment (EROI) for oil, gas, and coal. Meanwhile, renewable energy technologies are experiencing exponentially improving EROI rates.
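The article doesn’t define EROI; for readers unfamiliar with the term, it is simply the ratio of energy delivered to energy spent obtaining it. The tiny Python sketch below uses hypothetical round numbers, not figures from the paper:

```python
def eroi(energy_delivered, energy_invested):
    """Energy Return on Investment: units of energy delivered per unit invested."""
    return energy_delivered / energy_invested

print(eroi(100, 20))  # 5.0  -> e.g. an ageing fossil fuel source with falling returns
print(eroi(100, 4))   # 25.0 -> e.g. a renewable installation with improving returns
```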

But here’s where it gets interesting: these breakdowns aren’t necessarily catastrophic. They’re creating space for radical new possibilities. The study points to major technological innovations expected between the 2030s and 2060s, including clean energy, cellular agriculture, electric vehicles, artificial intelligence, and 3D printing. When combined, these technologies could enable what the researcher calls “networked superabundance” — a world where clean energy, transportation, food, and knowledge become universally accessible at near-zero cost while protecting Earth’s systems.

“This planetary renewable energy system will potentially enable citizens everywhere to produce clean energy ‘superabundance’ at near-zero marginal costs for most times of the year. This huge energy surplus – as much as ten times what we produce today – could power a global ‘circular economy’ system in which materials are rigorously recycled, with the system overall requiring 300 times less materials by weight than the fossil fuel system,” Ahmed writes. “[C]ost and performance improvements in autonomous driving technology could enable a new model called transport-as-a-service, leading private car ownership to collapse by about 90% – replaced by fleets of privately or publicly-owned autonomous taxis and buses up to ten times cheaper than transport today – as early as the 2030s.”

However, Ahmed emphasizes that technology alone won’t determine our fate. The key challenge is whether we can evolve our “operating system” — our social, economic, and cultural structures — to harness these capabilities for the common good. There’s a growing gulf between the old “industrial operating system” and emerging new systems that are inherently distributed and decentralized. This mismatch is driving major political and cultural disruptions globally.

Source : https://studyfinds.org/human-civilization-planetary-phase-shift/

 

Are we moral blank slates at birth? New study offers intriguing clues

(Photo by Ana Tablas on Unsplash)

What does a baby know about right and wrong? A foundational finding in moral psychology suggested that even infants have a moral sense, preferring “helpers” over “hinderers” before uttering their first word. Now, nearly 20 years later, a study that tried to replicate these findings calls this result into question.

In the original study, Kiley Hamlin and her colleagues showed a puppet show to six- and ten-month-old babies. During the show, the babies would see a character — which was really just a shape with googly eyes — struggling to reach the top of a hill.

Next, a new character would either help the struggling individual reach the top (acting as a “helper”) or push the character back down to the bottom of the hill (acting as a “hinderer”).

By gauging babies’ behavior — specifically, watching how their eyes moved during the show and whether they preferred to hold a specific character after the show ended — it seemed that the infants had basic moral preferences. Indeed, in the first study, 88% of the ten-month-olds – and 100% of the six-month-olds – chose to reach for the helper.

But psychology, and developmental psychology in particular, is no stranger to replicability concerns, cases where it is difficult or impossible to reproduce the results of a scientific study. After all, the original study sampled only a few dozen infants.

This isn’t the fault of the researchers; it’s just really hard to collect data from babies. But what if it was possible to run the same study again — with say, hundreds or even thousands of babies? Would researchers find the same result?

This is the chief aim of ManyBabies, a consortium of developmental psychologists spread around the world. By combining resources across individual research labs, ManyBabies can robustly test findings in developmental science, like Hamlin’s original “helper-hinderer” effect. And as of last month, the results are in.

With a final sample of 567 babies, tested in 37 research labs across five continents, babies did not show evidence of an early-emerging moral sense. Across the ages tested, babies showed no preference for the helpful character.

Blank slate?

John Locke, an English philosopher, argued that the human mind is a “tabula rasa” or “blank slate.” Everything that we, as humans, know comes from our experiences in the world. So should people take the most recent ManyBabies result as evidence of this? My answer, however underwhelming, is “perhaps.”

This is not the first attempted replication of the helper-hinderer effect (nor is it the first “failure to replicate”). In fact, there have been a number of successful replications. It can be hard to know what underlies differences in results. For example, a previous “failure” seemed to come from the characters’ “googly eyes” not being oriented the right way.

The ManyBabies experiment also had an important change in how the “show” was presented to infants. Rather than a puppet show performed live for baby participants, researchers instead presented a video with digital versions of the characters. This approach has its strengths; for example, it ensures that the exact same presentation occurs across every trial, in every lab. But it could also shift how babies engage with the show and its characters.

Source : https://studyfinds.org/are-we-moral-blank-slates-at-birth-new-study-offers-intriguing-clues/

Make it personal: Customized gifts trigger this unique psychological response in recipients

Man and woman opening Christmas gifts (© luckybusiness – stock.adobe.com)

When Nike launched its customization platform NikeID, few could have predicted it would reveal profound insights about human psychology. Now, research spanning four countries shows that personalized products trigger a fascinating emotional phenomenon called “vicarious pride.” That is, recipients of customized gifts experience the same pride their friends felt while creating them.

The study, published in the journal Psychology & Marketing, explores the psychological dynamics at play when someone receives a personalized gift.

“Gift-giving is an age-old tradition, but in today’s world, personalization has become a powerful way to make gifts stand out,” explains Dr. Diletta Acuti, a marketing expert at the University of Bath School of Management, in a statement.

When someone receives a customized gift, such as a chocolate bar with personally selected flavors or a leather journal with their name inscribed, they don’t just appreciate the thought behind it.

“You don’t just appreciate the care and intention they put into crafting that gift; you feel them,” Dr. Acuti explains.

This emotional mirroring stems from a psychological concept called simulation theory, where people mentally recreate others’ experiences and emotions. It’s similar to how sports fans feel their team’s victories and defeats as if they were on the field themselves, or how parents beam with pride at their children’s achievements. When it comes to customized gifts, recipients essentially piggyback on the gift-giver’s sense of creative accomplishment.

Through four carefully designed studies, the researchers examined this phenomenon from different angles. In their first experiment with 74 participants, they studied how people responded to customized clothing gifts. To measure appreciation objectively, recipients were asked to indicate which items, if any, they would change – a novel approach to gauging satisfaction. Those who received customized gifts wanted to make fewer changes to their presents, suggesting higher appreciation.

The second study took a different approach, showing 134 participants videos of two different gift-selection processes: one showing the customization of a T-shirt, and another showing standard gift selection through website browsing. Even when controlling for the time spent selecting the gift, customized presents consistently generated more appreciation.

In the third and fourth studies, conducted online using a mug and wristwatch as gifts, the researchers confirmed that customization not only increased appreciation but also enhanced recipients’ self-esteem. This suggests that receiving a personalized gift makes people feel more valued and special.

Interestingly, the research revealed that the time and effort spent on customization didn’t significantly impact the recipient’s appreciation. Whether the giver spent considerable time or just a few minutes personalizing the gift, recipients experienced similar levels of vicarious pride. This finding challenges common assumptions about the relationship between time invested and gift appreciation.

The study also uncovered an important caveat: relationship anxiety can diminish these positive effects. When recipients feel insecure about their relationship with the gift-giver, the benefits of customization – including vicarious pride and enhanced self-esteem – may not materialize.

For businesses, these insights suggest new opportunities in the growing customized gift market, which is projected to reach $13 billion by 2027 according to Technavio. “Using ‘made by’ signals – such as including the giver’s name, a short message about the process or a visual representation of the customization – can make things even more impactful,” suggests Dr. Acuti. “These small additions reinforce the emotional connection between the giver and the recipient.”

The research also has implications for sustainability, as the study found that recipients tend to take better care of gifts they value more. This suggests that personalization might contribute to longer product lifespans and reduced waste.

“When choosing a gift, personalization can be a game-changer. But it’s not just about selecting a customizable option: you also need to communicate that effort to your recipient. Sharing why you chose elements of the gift or the thought that went into it will make the recipient appreciate it even more. Indeed, this additional effort helps them to connect with the pride you felt in your choices, making the gift even more meaningful,” Dr. Acuti advises.

Perhaps the true magic of customized gifts isn’t in the personalization itself, but in their ability to create invisible bridges between people – emotional connections forged through shared pride and mutual recognition. In a world increasingly mediated by screens and algorithms, these moments of genuine human connection might be the most valuable gift of all.

Source : https://studyfinds.org/customized-gifts-psychological/

A growth mindset protects mental health during hard times

(© Татьяна Макарова – stock.adobe.com)

When the world turned upside down during the COVID-19 pandemic, some people seemed to weather the storm better than others. Though many struggled with depression and loneliness during lockdowns, others maintained their mental well-being and even thrived. What made the difference? According to new research, one key factor may be something called a “growth mindset” – the belief that our abilities and attributes can change and develop over time.

This fascinating study, conducted by researchers at the University of California, Riverside and the University of the Pacific, followed 454 adults ages 19 to 89 over two years of the pandemic, from June 2020 to September 2022. Their findings suggest that people who believe their capabilities are malleable rather than fixed were better equipped to handle the psychological challenges of the pandemic.

Growth mindset represents a fundamental belief about human potential – that we can develop our abilities through effort, good strategies, and input from others. During the pandemic, this mindset appeared to help people view adversity as an opportunity for adaptation and learning.

Looking at adults from diverse backgrounds in Southern California, the researchers examined how growth mindset related to three key aspects of mental health during the pandemic: depression levels, overall well-being, and how well people adjusted their daily routines to accommodate physical distancing requirements.

The results, published in PLOS Mental Health, were striking. People with stronger growth mindsets reported lower levels of depression and higher levels of well-being, even after accounting for various demographic factors like age, income, and education level. They were also more likely to successfully adapt their daily routines to pandemic restrictions.

The study included a unique group of older adults who had participated in a special learning intervention before the pandemic. These individuals had spent three months learning multiple new skills – from painting to using iPads to speaking Spanish. Not only did this group show increased growth mindset after their learning experience, but they also demonstrated better mental health outcomes during the pandemic compared to their peers who hadn’t participated in the intervention.

This finding suggests that actively engaging in learning new skills might help build mental resilience for challenging circumstances. The combination of growth mindset with actual learning experiences appeared to create stronger psychological benefits during the pandemic.

Age played a fascinating role in the results. While older adults generally showed more resilience in terms of emotional well-being and lower depression rates compared to younger participants, they were less likely to adjust their daily routines during the pandemic. This suggests that while age may bring emotional stability, it might also be associated with less behavioral flexibility.

Source : https://studyfinds.org/growth-mindset-protects-mental-health-during-hard-times/

What sleep paralysis feels like: Terrifying, like you’re trapped with a demon on your chest

The Nightmare, a 1781 oil painting by Swiss artist Henry Fuseli of a woman in deep sleep, arms flung over her head and an incubus, a male demon, on her belly, has been taken as a symbol of sleep paralysis. Photo by Detroit Institute of Arts.

This feature is part of a National Post series by health reporter Sharon Kirkey on what is keeping us up at night. In the series, Kirkey talks to sleep scientists and brain researchers to explore our obsession with sleep, the seeming lack of it and how we can rest easier.

Psychologist Brian Sharpless has been a horror movie buff since watching 1974’s It’s Alive! on HBO, a cult classic about a fanged and sharp-clawed mutant baby with a proclivity to kill whenever it got upset.

In his new book, Monsters on the Couch: The Real Psychological Disorders Behind Your Favorite Horror Movies, Sharpless devotes a full chapter to a surprisingly common human sleep experience that has been worked into so many movie plots “it now constitutes its own sub-genre of horror.”

Not full sleep, exactly, but rather a state stuck between sleep and wakefulness that follows a reliable pattern: People suddenly wake but cannot move because all major muscles are paralyzed.

The paralysis is often accompanied by the sensed presence of another, human or otherwise. The most eerie episodes involve striking hallucinations. Sharpless once hallucinated a “serpentine-necked monstrosity” lurking in the silvery moonlight seeping through the slats of his bedroom window blind.

Feeling pressure on the chest or a heavy weight on the ribs is also common. People feel as if they’re being smothered. They might also sweat, tremble or shake, but are “trapped,” unable to move their arms or legs, yell or scream. The experience can last seconds, or up to 20 minutes, “with a mean duration of six minutes,” Sharpless shared with non-sleep specialists in his doctor’s guide to sleep paralysis.

Sleep paralysis is a parasomnia, a sleep disorder that at least eight per cent of the general population will experience at least once in their lifetime. That low-ball estimate is higher still among university students (28 per cent) and those with a psychiatric condition (32 per cent). It’s usually harmless, but the combination of a waking nightmare and temporary paralysis can make for a “very unpleasant experience,” Sharpless advised clinicians, “one that may not be easily understood by patients.”

“Patients may instead use other non-medical explanations to make sense of it,” such as, say, some kind of alien, spiritual or demonic attack.

Eight years into studying sleep paralysis and with hundreds of interviews with experiencers under his belt, Sharpless had never once experienced the phenomenon himself, until 2015, the year he published his first book, Sleep Paralysis, with Dr. Karl Doghramji, a professor of psychiatry at Thomas Jefferson University. Sharpless woke at 2 a.m. and saw shadows in the hallway mingling and melding into a snake-like form with a freakishly long neck and eyes that glowed red. When he attempted to lift his head to get a better look, “I came to the uncomfortable realization I couldn’t move,” Sharpless recounts in Monsters on the Couch. “Oh my God, you’re having sleep paralysis,” he remembers thinking when he began to think rationally again.

“It’s an unusual experience that a lot of folks have,” Sharpless said in an interview with the National Post. The hallucinatory elements “that tap into a lot of paranormal and supernatural beliefs” are partly what makes it so fascinating, he said. Several celebrities — supermodel Kendall Jenner, American singer-songwriter and Apple Music’s 2024 Artist of the Year Billie Eilish, English actor and Spider-Man star Tom Holland — have also been open about their sleep paralysis.

You’re seeing, smelling, hearing something that isn’t there but feels like it is

It has a role in culture and folklore as well. In Brazilian folklore, the “Pisadeira” is a long-finger-nailed crone “who lurks on rooftops at night” and tramples on people lying belly up. Newfoundlanders called sleep paralysis an attack of the “Old Hag.” Sleep paralysis has been recognized by scholars and doctors since the ancient Greeks, Sharpless said. Too much blood, different lunar phases, upset gastrointestinal tracts — all were thought to trigger bouts of sleep paralysis. Episodes were described during the Salem Witch Trials in 1692. The Nightmare, a 1781 oil painting by Swiss artist Henry Fuseli of a woman in deep sleep, arms flung over her head and an incubus, a male demon, on her belly, has been taken as a symbol of sleep paralysis, among other interpretations. Sleep paralysis figures in numerous scary films and docu-horrors, including Shadow People, Dead Awake, Haunting of Hill House, Be Afraid, Slumber and The Nightmare.

The wildest story Sharpless has heard involved an undergrad student at Pennsylvania State University who was sleeping in her dormitory bunk bed when she woke suddenly, moved her eyes to the left and saw a child vampire with blood coming out of her mouth.

“The vampire girl ripped her covers off, grabbed her by the leg and started screaming, ‘I’m dragging you to hell, I’m dragging you to hell,’ pulling her out of the bed, all the while blood is coming out of her mouth,” Sharpless recalled the student telling him.

When she was able to move again, she found herself fully covered, her leg still under the blankets and not hanging off the ledge of the bunk bed as she imagined.

With sleep paralysis, hallucinations evaporate the moment movement returns, Sharpless said.

People are immobile in the first place because, during REM sleep, when dreams tend to be the most vivid and emotion-rich, the muscles that move the eyes and control breathing keep working, but most other muscles do not. The relaxed muscle tone keeps people from acting out their dreams and potentially injuring themselves or a bedmate.

“In REM, if you’re dreaming that you’re running or playing the piano, the brain is sending commands to your muscles as if you were awake,” said Antonio Zadra, a professor of psychology and a sleep scientist at Université de Montréal.

More than a decade ago, University of Toronto neuroscientists Patricia Brooks and John Peever found that two distinct brain chemicals worked together to switch off motor neurons communicating those brain messages to move. The result: muscle atonia or that REM sleep muscle paralysis. With REM sleep behaviour disorder, another parasomnia, the circuit isn’t switched off to inhibit muscle movement. People can act out their dreams, flailing, kicking, sitting up or even leaving the bed.

Normally, when people wake out of REM sleep, the paralysis that accompanies REM also stops. With sleep paralysis, the atonia carries over into wakefulness.

“You’re experiencing two aspects of REM sleep, namely, the paralysis and any dream activity, but now going on while the person is fully awake,” Sharpless said. People have normal waking consciousness. They think just like they can when fully awake. But they’re also experiencing “dreams” and because they’re awake, the dreams are hallucinations that feel just as real as anything in waking life.

Sleep paralysis tends to happen most often when people sleep on their backs, in the supine position. And while Sharpless and colleagues found that about 23 per cent of 172 adults with recurrent sleep paralysis surveyed reported always, mostly or sometimes pleasant experiences — some felt as if they were floating or flying — the hallucinations, like the beast Sharpless conjured up, are almost always threatening and bizarre.

Why so negative?

Evolution primed humans to be afraid of the dark and, in general, when we wake up, “It’s not usual for us to be paralyzed,” Sharpless said. “That’s an unusual experience from the get-go.” Sometimes people have catastrophizing thoughts like, “Oh my god, I’m having a stroke,” or they fear they’re going to die or be forever paralyzed.

“If you start having the hallucinatory REM sleep-dream activity going on, then it can get even worse,” Sharpless said.

Should people sense a presence in the room, the brain organizes that sensed presence into an actual shape or object, usually an intruder, attacker or something else scary, like an evil force. “If it goes on, you might actually make physical contact with the hallucination: You could feel that you’re being touched. You might smell it; you might hear it,” Sharpless said.

These aren’t nightmares. With nightmares, people aren’t aware of their bedroom surroundings and they certainly can’t move their eyes around the room.

What might explain that dense pressure on the chest, like you’re being suffocated or smothered? People are more likely to experience breathing disruptions when they’re sleeping on their backs. People with sleep apnea are also more likely to experience bouts of sleep paralysis because of disrupted oxygen levels, and being awake, temporarily paralyzed and in a not-so-positive state can itself affect respiration. Rates of sleep paralysis are higher in other sleep disorders as well, narcolepsy especially.

While sleep paralysis can be weird and seriously uncomfortable, Sharpless marvels in Monsters on the Couch at how often people have asked him how one might be able to induce sleep paralysis.

One way is to have messed up sleep. Anything that disrupts sleep seems to increase the odds, Sharpless said, like sleep deprivation, jet lag, erratic sleep schedules. Sleep paralysis has also been linked to “exploding head syndrome,” a sleep disorder Sharpless has published a good bit on. People experience auditory hallucinations — loud bangs or explosions that last a mere second — during sleep-wake transitions.

How can people snap out of sleep paralysis?

In a survey of 156 university students with sleep paralysis, some of the more effective “disruption techniques” involved trying to move smaller body parts like fingers or toes, and trying to calm down or relax in the moment.

One review of 42 studies linked a history of trauma, a higher body mass index and chronic pain with episodes of fearful sleep paralysis. Excessive daytime sleepiness, excessively short (fewer than six hours) or excessively long (longer than nine hours) sleep duration have also been implicated.

To reduce the risk, Sharpless recommends good sleep hygiene, including going to bed and waking up at the same time, not drinking alcohol or caffeine too close to bedtime and “taking care of any issues you’ve been avoiding,” especially anxiety, depression or trauma. One simple suggestion: try to sleep on your side. “If you have a partner, have them gently roll you over,” Sharpless said. Zadra, author, with Robert Stickgold, of When Brains Dream: Exploring the Science and Mystery of Sleep, recommends trying to move the tongue to disengage motor paralysis. “The tongue is not paralyzed in REM sleep. Technically, you can move it,” Zadra said. Even thinking about moving the tongue or toes can put people into a whole different mindset “rather [than] this feeling of panic and not being able to move at all,” said Zadra.

Source : https://nationalpost.com/longreads/sleep-paralysis-terrors

 

Weight loss drugs help with fat loss – but they cause bone and muscle loss too

Patient injecting themself in the stomach with an Ozempic (semaglutide) needle. (Photo by Douglas Cliff on Shutterstock)

For a long time, dieting and exercise were the only realistic options for many people who wanted to lose weight, but recent pharmaceutical advances have led to the development of weight loss drugs. These are based on natural hormones from the intestine that help control food intake, such as GLP-1 and GIP.

GLP-1-based drugs such as semaglutide (Wegovy and Ozempic) and tirzepatide (Mounjaro) work by helping people to feel less hungry. This results in them eating less – leading to weight loss.

Studies show that these drugs are very effective in helping people lose weight. In clinical trials of people with obesity, these drugs lead to a weight loss of up to 20% body weight in some instances.

But it’s important to note that not all the weight lost is fat. Research shows that up to one-third of this weight loss is so-called “non-fat mass” – this includes muscle and bone mass. This also happens when someone goes on a diet, and after weight loss surgery.

Muscle and bone play very important roles in our health. Muscle matters for a number of reasons, including helping us control our blood sugar. Blood sugar control isn’t as good in people who have lower levels of muscle mass.

High blood sugar levels are also linked to health conditions such as Type 2 diabetes – where having high blood sugar levels can lead to blindness, nerve damage, foot ulcers and infections, and circulation problems such as heart attacks and strokes.

We need our bones to be strong so that we can carry out our everyday activities. Losing bone mass can increase our risk of fractures.

Researchers aren’t completely sure why people lose fat-free mass during weight loss – though there are a couple of theories.

It’s thought that during weight loss, muscle proteins are broken down faster than they can be built. And because there’s less stress on the bones once the weight has been lost, normal bone turnover – the process where old bone is removed and new bone is formed – may be disrupted, leaving less bone mass being built than before weight loss.

Because GLP-1 drugs are so new, we don’t yet know the longer-term effects of weight loss achieved by using them. So, we can’t be completely sure how much non-fat mass someone will lose while using these drugs or why it happens.

It’s hard to say whether the loss of non-fat mass could cause problems in the longer term or if this would outweigh the many benefits that are associated with these drugs.

Maintaining muscle and bone

There are many things you can do while taking GLP-1 drugs for weight loss to maintain your muscle and bone mass.

Research tells us that eating enough protein and staying physically active can be helpful in reducing the amount of non-fat mass that is lost when losing weight. One of the best types of exercise is doing resistance training or weight training. This will help to preserve muscle mass, and protein will help us maintain and build muscle.

Source : https://studyfinds.org/weight-loss-drugs-bone-muscle/

Content overload: Streaming audiences plagued by far too many options

(Credit: DANIEL CONSTANTE/Shutterstock)

New survey finds the average viewer spends 110 hours each year just figuring out what to watch.

In an era of endless entertainment options, streaming subscribers are drowning in choices — and not in a good way. A new survey reveals a startling paradox: despite having more content at their fingertips than ever before, viewers are struggling to find something worth watching.

Commissioned by UserTesting and conducted by Talker Research, the survey exposes the growing frustration with the current streaming landscape. The research paints a vivid picture of entertainment exhaustion, revealing that the average person now spends a staggering 110 hours per year — nearly five full days — simply scrolling through streaming platforms in search of something to watch.

One in five subscribers believe finding something to watch is harder now than a decade ago, a sentiment rooted in the overwhelming abundance of content. Forty-one percent of respondents struggle with increasingly large content libraries, while 26% feel there’s an overproduction of original content.

“The streaming landscape has evolved from solving the problem of content access to creating a new challenge of content discovery,” says Bobby Meixner, Senior Director of Industry Solutions at UserTesting, in a statement.

This observation is backed by intriguing revelations that highlight the complexity of our modern entertainment landscape. While 75% appreciate streaming service algorithms for providing accurate recommendations, 51% simultaneously admit feeling overwhelmed by the sheer quantity of suggested content.

Traditional TV is rapidly transforming too

Researchers found that 48% of subscribers have already abandoned cable television. TV viewers have been drawn to streaming platforms for various reasons, including content variety (43%), access to shows not available on cable (34%), and the convenience of on-the-go viewing (29%). However, the audience’s satisfaction remains elusive. In fact, 51% of subscribers would welcome more streaming options, even if those options include advertisements.

When envisioning their ideal streaming service, subscribers prioritized some specific features. Four in 10 desired premium channels and networks at no additional cost, while 39% emphasized the importance of an easy-to-navigate interface. The average subscriber believes a comprehensive streaming service should cost no more than $46 per month, though 11% would be willing to pay over $100 for the right service.

Hidden fees and content availability present significant challenges to subscriber loyalty. Seventy-nine percent expressed frustration with streaming services requiring additional subscription fees for select content. When encountering these fees, viewers respond dramatically: 73% look for the content on another platform, 77% give up and watch something else, and 37% consider canceling their subscription altogether. One in five would even resort to signing up for a free trial just to watch a specific show.

What do loyal customers want?

The study also revealed the precarious nature of content loyalty. Two in three people have opened a streaming service only to find the show they signed up to watch had been removed from the platform. Forty-four percent would switch services to continue watching a favorite show, with 56% planning to cancel their subscription immediately after finishing that show. The cancellation process itself becomes another point of friction, with 23% of subscribers reporting difficulties, including challenges in finding the cancellation option (39%) and overly complicated multi-step processes (36%).

Source : https://studyfinds.org/content-overload-streaming/

Microplastics are invading our bodies — 5 ways to keep them out

Microplastics on the beach. (© David Pereiras – stock.adobe.com)

Most people know by now that microplastics are building up in our environment and within our bodies. However, according to Dr. Leonardo Trasande, director of environmental pediatrics at NYU School of Medicine, there are ways to reduce the influx of plastics into our bodies. It starts with avoiding canned foods.

Plastic is everywhere. It’s in our food packaging, our homes, and our clothing. You can’t avoid it completely. Much of it serves important purposes in everything from computers to cars, but it’s also overwhelming our environment.

It affects our health. Minute bits of plastic, called microplastics or nanoplastics, are shed from larger products. These particles have invaded our brains, glands, reproductive organs, and cardiovascular systems.

CNN Chief Medical Correspondent Dr. Sanjay Gupta spoke with Trasande about his last two decades studying environmental effects on our health. Trasande said that we eat a lot of plastic and also inhale it as dust. It’s even in cosmetics we absorb into our skin.

The concern isn’t just the plastic itself but also what’s in it: chemicals that cause inflammation and irritation. Polyvinyl chloride, a plastic used in food packaging, contains added chemicals called phthalates, which make it softer.

Dr. Trasande worries about phthalates (an ingredient in personal care items and food packaging), bisphenols (lining aluminum cans and thermal paper receipts), and perfluoroalkyl and polyfluoroalkyl substances (PFAS) – called “forever chemicals” because they last for centuries in the environment.

Many of these added chemicals are especially concerning due to their effects on the endocrine system – glands and the hormones they secrete. The endocrine system controls many of our bodies’ functions, such as metabolism and reproduction. Hormones are signaling molecules, acting as expert conductors of the body’s communication within itself.

5 things you can do to avoid exposure

Avoid canned foods

While bisphenol A (BPA) — a chemical that was commonly used in the lining of many metal food and drink cans, lids, and caps — is no longer present in the packaging for most products (canned tuna, soda, and tomatoes), industry data shows that it is still used about 5% of the time, possibly more.

Also, it is unclear if BPA’s replacement is safer. One of the common substitutes, bisphenol S, is as toxic as BPA. It has seeped into our environment as well.

Keep plastic containers away from heat and harsh cleaners

The “microwave and dishwasher-safe” labeling on some plastics refers only to whether a container will warp or grossly deform. If, however, you examine the container microscopically, you can see damage. Bits of chemical additives and/or plastic are shed and absorbed into the food, which you then ingest.

If the plastic is etched, like a well-used plastic cutting board, it should be discarded. Etching increases the leaching of chemicals into your food.

Source : https://studyfinds.org/microplastics-5-ways-to-keep-them-out/

Heart tissue can regenerate — How Cold War nuclear tests led to major discovery

(ID 328527023 © Dmitry Buksha | Dreamstime.com)

Study reveals extraordinary self-healing potential in advanced heart failure patients
TUCSON, Ariz. — For decades, medical science has insisted that the human heart cannot repair itself in any meaningful way. This dogma, as fundamental to cardiology as a heartbeat itself, is now being challenged by game-changing research that reveals our hearts may possess an extraordinary power of regeneration—provided they’re given the right conditions to heal.

The study, published in Circulation, offers potential new directions for treating heart failure, a condition that affects nearly 7 million U.S. adults and accounts for 14% of deaths annually, according to the Centers for Disease Control and Prevention.

Traditionally, the medical community has viewed the human heart as having minimal regenerative capabilities. Unlike skeletal muscles that can heal after injury, cardiac muscle tissue has been thought to have very limited repair capacity.

“When a heart muscle is injured, it doesn’t grow back. We have nothing to reverse heart muscle loss,” says Dr. Hesham Sadek, director of the Sarver Heart Center at the University of Arizona College of Medicine – Tucson, in a statement.

However, this new research, conducted by an international team of scientists, demonstrates that hearts supported by mechanical assist devices can achieve cellular renewal rates significantly higher than previously observed. The study examined tissue samples from 52 patients with advanced heart failure, including 28 who received left ventricular assist devices (LVADs) – mechanical pumps surgically implanted to help weakened hearts pump blood more effectively.

The research methodology centered on an innovative approach to tracking cell renewal. Using a technique that measures carbon-14 levels in cellular DNA – taking advantage of elevated atmospheric levels from Cold War nuclear testing – researchers could effectively date when cardiac cells were created. This method provided unprecedented insight into the heart’s regenerative processes.
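To make the bomb-pulse idea concrete, here is a minimal sketch of how such dating can work in principle, assuming you have a table of atmospheric carbon-14 levels by year. The numbers and the simple interpolation below are illustrative placeholders, not the study’s data or its actual statistical model.

```python
# Minimal sketch of "bomb-pulse" dating (illustrative values only).
# Atmospheric C-14 spiked during Cold War weapons tests and has declined since,
# so the C-14 level locked into a cell's DNA when it was created points back
# to that cell's approximate birth year.
import numpy as np

# Hypothetical atmospheric Delta-14C values (per mil) by year -- not real data.
years = np.array([1955, 1963, 1970, 1980, 1990, 2000, 2010])
delta_c14 = np.array([50, 800, 550, 280, 150, 90, 40])

def estimate_birth_year(measured_delta: float) -> float:
    """Map a DNA C-14 measurement onto the declining limb of the bomb curve."""
    decline_years = years[1:]       # after the 1963 peak, the curve falls steadily
    decline_vals = delta_c14[1:]
    # np.interp expects increasing x values, so interpolate on the reversed series
    return float(np.interp(measured_delta, decline_vals[::-1], decline_years[::-1]))

print(round(estimate_birth_year(200)))  # roughly the mid-1980s on this toy curve
```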

The findings revealed a stark contrast between different patient groups. In healthy hearts, cardiac muscle cells (cardiomyocytes) naturally renew at approximately 0.5% per year. However, in failing hearts, this renewal rate drops dramatically – to 0.03% in cases of non-ischemic cardiomyopathy (heart failure not caused by blocked arteries) and 0.01% in ischemic cardiomyopathy (heart failure from blocked arteries).

The most significant finding emerged from patients who responded positively to LVAD support. These “responders,” who showed improved cardiac function, demonstrated cardiomyocyte renewal rates more than six times higher than those seen in healthy hearts. This observation provides what Dr. Sadek calls “the strongest evidence we have, so far, that human heart muscle cells can actually regenerate.”

The study builds upon previous research, including Dr. Sadek’s 2011 publication in Science showing that heart muscle cells actively divide during fetal development but cease shortly after birth to focus solely on pumping blood. His 2014 research provided initial evidence of cell division in artificial heart patients, laying the groundwork for the current study.

The mechanism behind this increased regeneration may be linked to the unique way LVADs support heart function. These devices effectively provide cardiac muscle with periods of reduced workload by assisting with blood pumping, potentially creating conditions that enable regeneration. This observation aligns with established knowledge about how other tissues in the body heal and regenerate when given adequate rest.

The research team found that in failing hearts, most cellular DNA synthesis is directed toward making existing cells larger or more complex through processes called polyploidization and multinucleation, rather than creating new cells. However, in LVAD patients who showed improvement, a significant portion of DNA synthesis was dedicated to generating entirely new cardiac muscle cells – a more beneficial form of cardiac adaptation.

Approximately 25% of LVAD patients demonstrate this enhanced regenerative response, raising important questions about why some patients respond while others do not. Understanding these differences could be crucial for developing new therapeutic approaches. “The exciting part now is to determine how we can make everyone a responder,” says Sadek.

The implications of this research are particularly promising because LVADs are already an established treatment option. As Dr. Sadek points out, “The beauty of this is that a mechanical heart is not a therapy we hope to deliver to our patients in the future – these devices are tried and true, and we’ve been using them for years.”

Source: https://studyfinds.org/heart-muscle-regeneration-cold-war-tests/

The dark side of digital work: ‘Always on’ culture creating new type of anxiety for employees

(© Maridav – stock.adobe.com)

Think about the last time you checked your work email after hours. Do you find yourself having the urge to scan your inbox frequently while on vacation? A new study from the University of Nottingham suggests these digital intrusions may be taking a significant toll on employee wellbeing.

The research, published in Frontiers in Organizational Psychology, explores what researchers call the “dark side” of digital workplaces: the hidden psychological and physical costs that come with being constantly connected to work through technology. While digital tools have enabled greater flexibility and collaboration, they’ve also created new challenges that organizations need to address.

The researchers identified a phenomenon they term “Digital Workplace Technology Intensity” (DWTI). This is the mental and emotional effort required to navigate constant connectivity, handle information overload, deal with technical difficulties, and cope with the fear of missing important updates or connections in the digital workplace.

“Digital workplaces benefit both organizations and employees, for example, by enabling collaborative and flexible work,” explains Elizabeth Marsh, ESRC PhD student from the School of Psychology who led the qualitative study, in a statement. “However, what we have found in our research is that there is a potential dark side to digital working, where employees can feel fatigue and strain due to being overburdened by the demands and intensity of the digital work environment. A sense of pressure to be constantly connected and keeping up with messages can make it hard to psychologically detach from work.”

Rise of ‘productivity anxiety’

To understand these challenges, the research team conducted in-depth interviews with 14 employees across various roles and industries. The participants, aged 27 to 60, included store managers, software engineers, and other professionals, providing insights into how digital workplace demands affect different types of work.

The researchers identified five key themes that characterize the challenges of digital work. The first is “hyperconnectivity.” They define this as a state of constant connection to work through digital devices that erodes the boundaries between professional and personal life. As one participant explained: “You kind of feel like you have to be there all the time. You have to be a little green light.”

This always-on culture has given rise to what the study reveals as “productivity anxiety,” or workers’ fear of being perceived as unproductive when working remotely. One participant described this pressure directly: “It’s that pressure to respond […] I’ve received an e-mail, I’ve gotta do this quickly because if not, someone might think ‘What is she doing from home?’”

FOMO leading to workplace overload

The study also identified “techno-overwhelm,” where workers struggle with the sheer volume of digital communications and platforms they must manage. Participants described feeling bombarded by emails and overwhelmed by the proliferation of messages, applications, and meetings in the digital workplace.

Technical difficulties, which the researchers termed “digital workplace hassles,” emerged as another significant source of stress. The study found these challenges were particularly significant for older workers and those with disabilities, highlighting important accessibility concerns that organizations need to address.

The research also revealed an interesting pattern around the Fear of Missing Out (FoMO) in professional settings. While digital tools are meant to improve communication, many participants expressed anxiety about potentially missing important updates or opportunities for connection with colleagues.

“This research extends the Job Demands-Resources literature by clarifying digital workplace job demands including hyperconnectivity and overload,” says Dr. Alexa Spence, Professor of Psychology at Nottingham. “It also contributes a novel construct of digital workplace technology intensity which adds new insight on the causes of technostress in the digital workplace. In doing so, it highlights the potential health impacts, both mental and physical, of digital work.”

Disconnecting from the connected world

The study’s findings are particularly relevant in our post-pandemic era, where the boundaries between office and home have become increasingly blurred. As one participant noted: “[It’s] just more difficult to leave it behind when it’s all online and you can kind of jump on and do work at any time of the day or night.”

Source : https://studyfinds.org/the-dark-side-of-digital-work-productivity-anxiety/

80% of adults carry this virus — For some, it could trigger Alzheimer’s

The brain’s immune cells, or microglia (light blue/purple), are shown interacting with amyloid plaques (red) — harmful protein clumps linked to Alzheimer’s disease. The illustration highlights the microglia’s role in monitoring brain health and clearing debris. (Illustration by Jason Drees/Arizona State University)

In the gut of some Alzheimer’s patients lies an unexpected culprit: a common virus that may be silently contributing to their disease. While scientists have long suspected microbes might play a role in Alzheimer’s disease, new research has uncovered a surprising link between a virus that infects most humans and a distinct subtype of the devastating neurological condition.

The research suggests that human cytomegalovirus (HCMV) — a virus that infects roughly 80% of adults over 80 — may play a more significant role in Alzheimer’s disease than previously thought, particularly when combined with specific immune system responses.

The study, led by researchers at Arizona State University and multiple collaborating institutions, focused on a specific type of brain cell, microglia, marked by a protein called CD83. These CD83-positive microglia were found in 47% of Alzheimer’s patients compared to 25% of unaffected individuals.

This study, published in the journal Alzheimer’s and Dementia, is particularly notable because it examines multiple body systems, including the gut, the vagus nerve (which connects the gut to the brain), and the brain itself. The researchers found that subjects with CD83-positive microglia in their brains were more likely to have both HCMV and increased levels of an antibody called immunoglobulin G4 (IgG4) in their colon, vagus nerve, and brain tissue.

“We think we found a biologically unique subtype of Alzheimer’s that may affect 25% to 45% of people with this disease,” says study co-author Dr. Ben Readhead, a research associate professor with ASU-Banner Neurodegenerative Disease Research Center, in a statement. “This subtype of Alzheimer’s includes the hallmark amyloid plaques and tau tangles—microscopic brain abnormalities used for diagnosis—and features a distinct biological profile of virus, antibodies and immune cells in the brain.”

For their research, the team examined tissue samples from multiple areas of the body in both Alzheimer’s patients and healthy controls, allowing them to trace the virus and the antibody response from the gut, along the vagus nerve, to the brain.

“It was critically important for us to have access to different tissues from the same individuals. That allowed us to piece the research together,” says Readhead, who also serves as the Edson Endowed Professor of Dementia Research at the center.

To further investigate the potential impact of HCMV on brain cells, the team conducted experiments using cerebral organoids – simplified versions of human brain tissue grown in the laboratory. When these organoids were infected with HCMV, they showed accelerated development of two key markers of Alzheimer’s disease: amyloid beta-42 and phosphorylated Tau-212. The infected organoids also showed increased rates of neuronal death.

The researchers emphasize that while HCMV infection is common, only a subset of individuals showed evidence of intestinal HCMV infection, which appears to be the relevant factor in the virus’s presence in the brain.

Study authors suggest that in some individuals, HCMV infection might trigger a cascade of events involving the immune system that could contribute to the development or progression of Alzheimer’s disease. This is particularly interesting because it might help explain why some people develop Alzheimer’s while others don’t, despite HCMV being so common in the general population.

Looking ahead, the research team is developing a blood test to identify individuals with chronic intestinal HCMV infection. They hope to use this in combination with emerging Alzheimer’s blood tests to evaluate whether existing antiviral drugs could be beneficial for this subtype of Alzheimer’s disease.

“We are extremely grateful to our research participants, colleagues, and supporters for the chance to advance this research in a way that none of us could have done on our own,” notes Dr. Eric Reiman, Executive Director of Banner Alzheimer’s Institute and the study’s senior author. “We’re excited about the chance to have researchers test our findings in ways that make a difference in the study, subtyping, treatment and prevention of Alzheimer’s disease.”

With the development of a blood test to identify patients with chronic HCMV infection on the horizon, this research might not just explain why some people develop Alzheimer’s – it might also point the way toward preventing it. In the end, the key to understanding this devastating brain disease may have been hiding in our gut all along.

Source : https://studyfinds.org/gut-virus-trigger-alzheimers/

You shouldn’t have! Holiday shoppers spending $10.1 billion on gifts nobody wants

(Credit: Asier Romero/Shutterstock)

This holiday season, take a moment to ask yourself, “Does this person really want what I’m buying them?” A new survey finds the answer is likely no! Researchers have found that more than half of Americans (53%) will receive a gift they don’t want.

As Elon Musk and Vivek Ramaswamy go looking for waste in Washington, it turns out that everyday Americans are throwing away tons of money too. According to the new forecast from Finder, unwanted presents will reach an all-time high in both volume and cost this year, with an estimated $10.1 billion being spent on gifts headed for the regifting pile.

Overall, the annual holiday spending forecast finds that roughly 140 million Americans will receive at least one unwanted present in 2024. Shockingly, one in 20 people expect to receive at least five gifts they won’t want to keep. The average cost of these unwanted items is expected to rise to $72 this holiday season, up from $66 last year. At roughly 140 million recipients and $72 apiece, that works out to the forecast’s $10.1 billion total, a billion-dollar surge in wasteful holiday spending.

Saying “you shouldn’t have…” might be a more truthful statement than ever when it comes to certain gift ideas. Clothing and accessories top 2024’s list of the most unwanted gifts people receive. Specifically, 43% hope to avoid these personal items. However, that number is actually down from the 49% who didn’t want clothes for Christmas in 2022. So, maybe some Americans need a new pair of socks this year.

Household items follow clothing as the least popular holiday gifts (33%), while cosmetics and fragrances round out the top three at 26%. Interestingly, technology gifts are skyrocketing in unpopularity. Since 2022, the dislike for tech gifts has risen by a whopping 10 percentage points, going from 15% in 2022 to 25% this holiday season. So, maybe think twice before getting your friend their eighth pair of headphones.

The season of re-giving

So, what happens to all these well-intentioned but unwanted presents? The survey found that regifting is the most popular solution in 2024. Nearly four in 10 Americans (39%) plan to pass their unwanted gifts along to someone else. That’s the most popular option this year, surpassing the awkward choice of keeping a bad gift. Interestingly, a staggering 43% of Americans kept their unwanted presents in 2022, but that number has now fallen to 35%.

Another 32% take advantage of post-holiday exchange policies to swap their unwanted items for something more desirable. However, more and more people are just opting to sell their sub-par presents for cold hard cash. Over one in four (27%) plan to sell unwanted gifts after the holidays, up significantly from 17% in 2022.

So, if you’re still looking for last-minute gifts this holiday season, choose wisely. There’s a very good chance the person you’re buying for won’t like your choices anyway.

Source : https://studyfinds.org/holiday-shoppers-unwanted-gifts/

See how Google Gemini 2.0 Flash can perform hours of business analysis in minutes

Anyone who has had a job that required intensive amounts of analysis will tell you that any speed gain they can find is like getting an extra 30, 60, or 90 minutes back out of their day.

Automation tools in general, and AI tools specifically, can assist business analysts who need to crunch massive amounts of data and succinctly communicate it.

In fact, a recent Gartner analysis, “An AI-First Strategy Leads to Increasing Returns,” states that the most advanced enterprises rely on AI to increase the accuracy, speed, and scale of analytical work to fuel three core objectives — business growth, customer success, and cost efficiency — with competitive intelligence being core to each.

Google’s newly released Gemini 2.0 Flash provides business analysts with greater speed and flexibility in defining Python scripts for complex analysis, giving analysts more precise control over the results they generate.

Google claims that Gemini 2.0 Flash builds on the success of 1.5 Flash, its most adopted model yet for developers.

Gemini 2.0 Flash outperforms 1.5 Pro on key benchmarks, delivering twice the speed, according to Google. 2.0 Flash also supports multimodal inputs, including images, video, and audio, as well as multimodal output, including natively generated images mixed with text and steerable text-to-speech (TTS) multilingual audio. It can also natively call tools like Google Search, code execution, and third-party user-defined functions.

Taking Gemini 2.0 Flash for a test drive
VentureBeat gave Gemini 2.0 Flash a series of increasingly complex Python scripting requests to test its speed, accuracy, and precision in dealing with the nuances of the cybersecurity market.

Using Google AI Studio to access the model, VentureBeat started with simple scripting requests, working up to more complex ones centered on the cybersecurity market.

What’s immediately noticeable about Python scripting with Gemini 2.0 Flash is how fast it is — nearly instantaneous, in fact — at providing Python scripts, generating them in seconds. It’s noticeably faster than 1.5 Pro, Claude, and ChatGPT when handling increasingly complex prompts.

VentureBeat asked Gemini 2.0 Flash to perform a typical task that a business or market analyst would be requested to do: Create a matrix comparing a series of vendors and analyze how AI is used across each company’s products.

Analysts often have to create tables quickly in response to sales, marketing, or strategic planning requests, and they usually need to include unique advantages or insights into each company. This can take hours and even days to get done manually, depending on an analyst’s experience and knowledge.

VentureBeat wanted to make the prompt request realistic by having the script encompass an analysis of 13 XDR vendors, also providing insights into how AI helps the listed vendors handle telemetry data. As is the case with many requests analysts receive, VentureBeat asked for the Python script to produce an Excel file of the results.

Here is the prompt we gave Gemini 2.0 Flash to execute:

Write a Python script to analyze the following cybersecurity vendors who have AI integrated into their XDR platform and build a table showing how they differ from each other in implementing AI. Have the first column be the company name, the second column the company’s products that have AI integrated into them, the third column being what makes them unique and the fourth column being how AI helps handle their XDR platforms’ telemetry data in detail with an example. Don’t web scrape. Produce an Excel file of the result and format the text in the Excel file so it is clear of any brackets ({}), quote marks (‘) and any HTML code to improve readability. Name the Excel file. Gemini 2 flash test.
Cato Networks, Cisco, CrowdStrike, Elastic Security XDR, Fortinet, Google Cloud (Mandiant Advantage XDR), Microsoft (Microsoft 365 Defender XDR), Palo Alto Networks, SentinelOne, Sophos, Symantec, Trellix, VMware Carbon Black Cloud XDR

Using Google AI Studio, VentureBeat submitted this AI-powered XDR vendor comparison request, and Gemini 2.0 Flash generated the Python code in seconds.

Next, VentureBeat saved the code and loaded it into Google Colab. The goal was to see how bug-free the Python code was outside of Google AI Studio and to measure how quickly it executed. The code ran flawlessly with no errors and produced the Microsoft Excel file Gemini_2_flash_test.xlsx.
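Gemini’s generated script isn’t reproduced in the article, but a script satisfying that prompt would look broadly like the sketch below, assuming pandas and openpyxl are available (both come preinstalled in Google Colab). The two vendor rows shown are illustrative placeholders rather than the model’s actual analysis; a complete script would list all 13 vendors from the prompt.

```python
# Minimal sketch of a script fulfilling the prompt above -- not Gemini's actual output.
# Assumes pandas and openpyxl are installed (both ship with Google Colab).
import re
import pandas as pd

# Placeholder rows; a complete script would include all 13 vendors from the prompt.
vendors = [
    {
        "Company": "CrowdStrike",
        "AI-Integrated Products": "Placeholder product list",
        "What Makes Them Unique": "Placeholder differentiator",
        "How AI Handles XDR Telemetry": "Placeholder example of telemetry analysis",
    },
    {
        "Company": "SentinelOne",
        "AI-Integrated Products": "Placeholder product list",
        "What Makes Them Unique": "Placeholder differentiator",
        "How AI Handles XDR Telemetry": "Placeholder example of telemetry analysis",
    },
]

def clean(text: str) -> str:
    """Strip brackets, quote marks and stray HTML tags, as the prompt requests."""
    text = re.sub(r"<[^>]+>", "", text)  # drop any HTML tags
    return text.replace("{", "").replace("}", "").replace("'", "").strip()

# Build the four-column table, tidy every cell, and export it to Excel.
df = pd.DataFrame(vendors).apply(lambda col: col.map(clean))
df.to_excel("Gemini_2_flash_test.xlsx", index=False, engine="openpyxl")
print("Wrote Gemini_2_flash_test.xlsx with", len(df), "vendor rows")
```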

The results speak for themselves
Within seconds, the script ran, and Colab signaled no errors. It also provided a message at the end of the script that the Excel file was done.

VentureBeat downloaded the Excel file, which had been generated in less than two seconds, and gave the table the Python script delivered a quick formatting pass for readability.

The total time needed to get this table done was less than four minutes, from submitting the prompt, getting the Python script, running it in Colab, downloading the Excel file, and doing some quick formatting.

Source: https://venturebeat.com/ai/google-gemini-2-0-flash-test-drive-reveals-why-every-analyst-needs-to-know-this-modelgoogle-gemini-2-0-flash-test-drive-why-every-analyst-needs-to-know-this-model/

‘Big Brother’ isn’t just watching — He’s changing how your brain works

Surveillance cameras are seemingly everywhere. (ID 192949897 © Aleksandr Koltyrin | Dreamstime.com)

Every time you walk down a city street, electronic eyes are watching. From security systems to traffic cameras, surveillance is ubiquitous in modern society. Yet these cameras might be doing more than just recording our movements: according to a new study that peers into the psychology of surveillance, they could be fundamentally altering how our brains process visual information.

While previous research has shown that surveillance cameras can modify our conscious behavior – making us less likely to steal or more inclined to follow rules – a new study published in Neuroscience of Consciousness suggests that being watched affects something far more fundamental: the unconscious way our brains perceive the world around us.

“We found direct evidence that being conspicuously monitored via CCTV markedly impacts a hardwired and involuntary function of human sensory perception – the ability to consciously detect a face,” explains Associate Professor Kiley Seymour, lead author of the study, in a statement.

Putting surveillance to the test
The research team at the University of Technology Sydney, led by Seymour, designed an ingenious experiment to test how surveillance affects our unconscious visual processing. They recruited 54 undergraduate students and split them into two groups: one group completed a visual task while being conspicuously monitored by multiple surveillance cameras, while the control group performed the same task without cameras present.

The monitored group was shown the surveillance setup beforehand, including a live feed of themselves from the adjacent room, and had to sign additional consent forms acknowledging they would be watched. To ensure participants felt the full weight of surveillance, cameras were positioned to capture their whole body, face, and even their hands as they performed the task.

The visual task itself employed a clever technique called continuous flash suppression (CFS), which temporarily prevents images shown to one eye from reaching conscious awareness while the brain still processes them unconsciously. Participants viewed different images through each eye: one eye saw rapidly changing colorful patterns, while the other saw faces that were either looking directly at them or away from them.

‘Ancient survival mechanisms’ turn on when being watched
The results were remarkable: “Our surveilled participants became hyper-aware of face stimuli almost a second faster than the control group. This perceptual enhancement also occurred without participants realizing it,” says Seymour. This held true whether the faces were looking directly at them or away, though both groups detected direct-gazing faces more quickly overall.

This heightened awareness appears to tap into ancient survival mechanisms. “It’s a mechanism that evolved for us to detect other agents and potential threats in our environment, such as predators and other humans, and it seems to be enhanced when we’re being watched on CCTV,” Seymour explains.

Importantly, this wasn’t simply due to participants trying harder or being more alert under surveillance. When the researchers ran the same experiment using simple geometric patterns instead of faces, there was no difference between the watched and unwatched groups. The enhancement was specific to social stimuli – faces – suggesting that surveillance taps into fundamental neural circuits evolved for processing social information.

Effects on mental health and consciousness
The findings have particular relevance for mental health. “We see hyper-sensitivity to eye gaze in mental health conditions like psychosis and social anxiety disorder where individuals hold irrational beliefs or preoccupations with the idea of being watched,” notes Seymour. This suggests that surveillance might interact with these conditions in ways we don’t yet fully understand.

Perhaps most unsettling was the disconnect between participants’ conscious experience and their brain’s response. “We had a surprising yet unsettling finding that despite participants reporting little concern or preoccupation with being monitored, its effects on basic social processing were marked, highly significant and imperceptible to the participants,” Seymour reveals.

These findings arrive at a crucial moment in human history, as we grapple with unprecedented levels of technological surveillance. From CCTV cameras and facial recognition systems to trackable devices and the “Internet of Things,” our activities are increasingly monitored and recorded. The study suggests that this constant observation may be affecting us on a deeper level than previously realized, modifying basic perceptual processes that normally operate outside our awareness.

The implications extend beyond individual privacy concerns to questions about public mental health and the subtle ways surveillance might be reshaping human cognition and social interaction. As surveillance technology continues to advance, including emerging neurotechnology that could potentially monitor our mental activity, understanding these unconscious effects becomes increasingly crucial.

Source: https://studyfinds.org/big-brother-watching-surveillance-changing-how-brain-works/

 

The Most Beautiful Mountains on Earth | International Mountain Day

Denali Mountain (Photo by Bryson Beaver on Unsplash)

From the snow-capped peaks of the Himalayas to the dramatic spires of Patagonia, Earth’s mountains stand as nature’s most awe-inspiring monuments. On International Mountain Day, we celebrate these colossal formations that have shaped cultures, inspired religions, and challenged adventurers throughout human history. These geological giants aren’t just spectacular viewpoints – they’re vital ecosystems that provide water, shelter diverse wildlife, and influence global weather patterns. In this visual journey, join us to explore the most beautiful mountains on our planet, each telling its own story of natural forces, cultural significance, and unparalleled beauty that continues to captivate millions of visitors and photographers from around the world.

Most Beautiful Mountains in the World, According to Experts
1. Mount Fuji in Japan
This active volcano on the island of Honshu is a sight to behold. A site of pilgrimage for centuries among Buddhists, followers of Shinto, and others, Mount Fuji is the highest peak in Japan. It last erupted in the early 18th century.

Mount Fuji soars to an impressive height of 12,389 feet (3,775 meters) and is particularly stunning when adorned with its signature snowy cap. As Hostelworld points out, while many visitors are eager to get up close to this legendary mountain, its true majesty is often best appreciated from a distance – though you’ll need some patience, as this shy giant has a habit of playing hide-and-seek behind the clouds.

The mountain’s significance runs far deeper than its physical beauty. According to Exoticca, Mount Fuji’s perfect conical shape has made it not just a national symbol, but a deeply spiritual place. Its slopes have long been intertwined with Shinto traditions, and by the early 12th century, followers of the Shugendō faith had even established a temple at its summit, marking its importance in Japanese religious life.

There’s a fascinating irony to Mount Fuji’s allure. Atlas & Boots shares a telling Japanese proverb: climbing it once makes you wise, but twice makes you a fool. While around 300,000 people make the trek annually, the immediate mountain environment is surprisingly stark. The real magic lies in viewing Fuji from afar, where its serene symmetry and majestic presence have rightfully earned it a place among the world’s most beautiful mountains.

2. Mount Kilimanjaro in Tanzania
As the highest freestanding mountain in the world, Kilimanjaro is also the highest mountain in Africa. It is made up of three dormant volcanic cones: Kibo, Mawenzi, and Shira.

Standing proudly at 19,341 feet (5,895 meters), Mount Kilimanjaro offers something you rarely find in a single mountain: an incredible variety of ecosystems stacked one above the other. As The Travel Enthusiast says, this African giant hosts everything from lush rainforests and moorlands to alpine deserts, culminating in an arctic summit that seems almost impossible for its location.

Those who venture to climb Kilimanjaro are treated to more than just stunning vistas. Veranda notes that the mountain provides spectacular views of the surrounding savanna, while the journey up its slopes takes you through an impressive sequence of distinct ecological zones. It’s like traveling from the equator to the poles in a matter of days.

The mountain’s surroundings are just as remarkable as its height. According to Travel Triangle, this legendary peak – one of Africa’s Seven Summits – is crowned with glaciers and an ice field, though both are slowly shrinking. The surrounding Kilimanjaro National Park is a haven for wildlife, where visitors might spot everything from elegant black and white colobus monkeys to elephants and even the occasional leopard prowling through the forest.

3. Matterhorn in Switzerland and Italy
The famously pyramid-shaped Matterhorn straddles the border of Italy and Switzerland in the Alps. Considered one of the deadliest peaks to climb in the world, its beauty is breathtaking and unmistakable.

At 14,692 feet (4,478 meters), the Matterhorn might not be the Alps’ tallest peak, but it’s arguably its most mesmerizing. As Hostelworld notes, this pyramid-shaped giant earned its legendary status not just through its distinctive silhouette, but also through its dramatic history – including its first ascent in 1865 by British climber Edward Whymper.

As Exoticca points out, the mountain’s majesty is best appreciated from the charming Swiss town of Zermatt. This picturesque resort has become synonymous with the Matterhorn itself, offering visitors front-row seats to one of nature’s most impressive displays.

According to Earth & World, which also crowns it the world’s most beautiful mountain, the Matterhorn creates an unforgettable natural spectacle when its rocky peak catches the light, particularly when reflected in the nearby Stellisee Lake. The area around this “mountain of mountains” is also home to Europe’s highest summer skiing region, operating year-round as a paradise for winter sports enthusiasts.

4. Denali Peak in Alaska
Also known as Mount McKinley, Denali Peak is the crown jewel of Alaska’s Denali National Park and Preserve. It’s aptly named: Denali means “The High One,” and it is the tallest mountain in North America.

Rising to a staggering 20,310 feet (6,190 meters), Denali dominates the Alaskan landscape as one of the world’s most isolated and impressive peaks. As Beautiful World notes, this snow-crowned giant draws adventurers throughout the year, from mountaineers and backpackers in the warmer months to cross-country skiers who glide along its snow-blanketed paths in winter.

Among the world’s greatest climbing challenges, Denali stands as a formidable test of skill and endurance. Atlas & Boots ranks it as perhaps the most demanding of the Seven Summits after Everest, though its breathtaking beauty helps explain why climbers continue to be drawn to its unforgiving slopes.

The mountain’s appeal extends far beyond just climbing, according to Travel Triangle. Situated at the heart of the vast Denali National Park, this Alaskan masterpiece offers visitors a chance to experience nature in its most magnificent form. Its remarkable isolation and untamed character make it a perfect destination for those seeking to connect with the raw power of the natural world.

Source: https://studyfinds.org/most-beautiful-mountains/

Married Millennials Are Getting ‘Sleep Divorces’

Married millennials who are otherwise happy in their relationships are getting “sleep divorces,” a phenomenon in which mismatched sleeping habits make it impossible for the couple to continue sleeping in the same bed, or even in the same bedroom.

Watch an old episode of I Love Lucy and you’ll probably cock your head to the side like a confused dog when you see Lucy and Ricky’s sleeping arrangement: a married couple sleeping in the same bedroom but in two different beds, separated by a bedside table. That’s the way some couples used to sleep, and it’s the only way the FCC allowed TV shows to depict couples in their bedrooms back in the day.

Today’s 30-something married couples are living life like Lucy and Ricky. Whether it’s snoring, restless movements, or one or both having to get up to pee, there are simply too many issues to deal with that can disturb a partner’s sleep.

According to sleep scientist and psychologist Wendy Troxel, up to 30 percent of a person’s sleep quality is influenced by their partner’s sleepytime behavior. Sure, your own thoughts and anxieties make falling and staying asleep a nightmare, but add your partner’s sleep idiosyncrasies into the mix and you have a recipe for insomnia.

A study from Ohio State University found that couples who are not getting adequate sleep are more likely to exhibit negative behaviors when discussing their marital issues. A study of 48 British couples showed that men move around in their sleep a lot more than women, with women reporting being disturbed by their male partner’s movements.

Interestingly, the same study showed that most couples prefer to sleep together rather than apart despite the downsides.

Source : https://www.vice.com/en/article/married-millennials-sleep-divorces/

Friendship after 50: Why social support becomes a matter of life and death

(© Rawpixel.com – stock.adobe.com)

For adults over 50, maintaining close friendships isn’t just about having someone to chat with over coffee – it could be integral to their health and well-being. A new study reveals a stark reality: while 75% of older adults say they have enough close friends, those saying they’re in poor mental or physical health are significantly less likely to maintain these vital social connections. The findings paint a concerning picture of how health challenges can create a cycle of social isolation, potentially making health problems worse.

The University of Michigan’s National Poll on Healthy Aging, conducted in August 2024, surveyed 3,486 adults between 50 and 94, offering an in-depth look at how friendships evolve in later life and their crucial role in supporting health and well-being. The results highlight a complex relationship between health status and social connections that many may not realize exists.

“With growing understanding of the importance of social connection for older adults, it’s important to explore the relationship between friendship and health, and identify those who might benefit most from efforts to support more interaction,” explains University of Michigan demographer Sarah Patterson, in a statement.

Patterson, a research assistant professor at the UM Institute for Social Research’s Survey Research Center, emphasizes the critical nature of understanding these social connections. A robust 90% of adults over 50 said they have at least one close friend, with 48% maintaining one to three close friendships and 42% enjoying the company of four or more close friends. However, these numbers drop dramatically for those facing health challenges.

Among individuals reporting fair or poor mental health, 20% have no close friends at all – double the overall rate. Similarly, 18% of those with fair or poor physical health report having no close friends, suggesting that health challenges can significantly impact social connections.

The gender divide in friendship maintenance is notable: men are more likely than women to report having no close friends. Age also plays a role: those 50 to 64 years old are more likely to report having no close friends than those 65 and older – a somewhat counterintuitive finding that challenges assumptions about social isolation increasing with age.

When it comes to staying in touch, modern technology has helped keep connections alive. In the month before the survey, 78% of older adults had in-person contact with close friends, while 73% connected over the phone, and 71% used text messages. This multi-channel approach to maintaining friendships suggests that older adults are adapting to new ways of staying connected.

The findings resonate particularly with AARP, one of the study’s supporters.

“This poll underscores the vital role friendships play in the health and well-being of older adults,” says Indira Venkat, Senior Vice President of Research at AARP. “Strong social connections can encourage healthier choices, provide emotional support, and help older adults navigate health challenges, particularly for those at greater risk of isolation.”

Perhaps most striking is the role that close friends play in supporting health and well-being. Among those with at least one close friend, 79% say they can “definitely count on these friends for emotional support in good times or bad,” and 70% feel confident turning to their friends to discuss health concerns. These aren’t just casual relationships – they’re vital support systems that can influence health behaviors and outcomes.

Consider this: 50% of older adults say that their close friends encouraged them to make healthy choices, such as exercising more or eating a healthier diet. Another 35% say friends motivated them to get concerning symptoms or health issues checked out by a healthcare provider, and 29% received encouragement to stop unhealthy behaviors like poor eating habits or excessive drinking.

The practical support is equally impressive: 32% had friends who helped them when sick or injured, 17% had friends pick up medications for them, and 15% had friends attend medical appointments with them. These statistics underscore how friendship networks can function as informal healthcare support systems.

However, the study reveals a challenging paradox: making and maintaining friendships becomes more difficult precisely when people might need them most. Among those reporting fair or poor mental health, 65% say making new friends is harder now than when they were younger, compared to 42% of the overall population. Similarly, 61% of those with fair or poor mental health find it harder to maintain existing friendships, compared to 34% of the general over-50 population.

A desire to form new friendships remains high, with 75% of older adults expressing interest in developing new friendships (14% very interested, 61% somewhat interested). This interest is particularly strong among those who live alone and those who report feeling lonely, suggesting a recognition of the importance of social connections.

The study also reveals an interesting trend among friendships between people from different age groups. Among those with at least one close friend, 46% have a friend from a different generation (defined as being at least 15 years older or younger). Of these, 52% have friends from both older and younger generations, while 35% have friends only from younger generations, and 13% have friends only from older generations. This diversity in friendship age ranges suggests that meaningful connections can transcend generational boundaries.

The implications of these findings extend beyond individual relationships. Healthcare providers are encouraged to recognize the vital role that friends play in their patients’ health journeys, from encouraging preventive care to supporting healthy behaviors. Community organizations are urged to create more opportunities for social connection, particularly those that are inclusive and accessible to people with varying health status.

“When health care providers see older adults, we should also ask about their social support network, including close friends, especially for those with more serious health conditions,” says Dr. Jeffrey Kullgren, the poll director and primary care physician at the VA Ann Arbor Healthcare System.

As one considers the cycle of health and friendship revealed in this study, it becomes clear that the old adage about friendship being the best medicine might have more truth to it than we realized. In an age where healthcare increasingly focuses on holistic well-being, perhaps it’s time to add “friendship prescription” to the standard of care.

Source : https://studyfinds.org/friendship-after-50-social-support/

If the universe is already infinite, what is it expanding into?

NASA’s James Webb Space Telescope has produced the deepest and sharpest infrared image of the distant universe to date. Known as Webb’s First Deep Field, this image of galaxy cluster SMACS 0723 is overflowing with detail.Thousands of galaxies – including the faintest objects ever observed in the infrared – have appeared in Webb’s view for the first time. (Credits: NASA, ESA, CSA, and STScI)

When you bake a loaf of bread or a batch of muffins, you put the dough into a pan. As the dough bakes in the oven, it expands into the baking pan. Any chocolate chips or blueberries in the muffin batter become farther away from each other as the muffin batter expands.

The expansion of the universe is, in some ways, similar. But this analogy gets one thing wrong – while the dough expands into the baking pan, the universe doesn’t have anything to expand into. It just expands into itself.

It can feel like a brain teaser, but the universe is, by definition, everything there is. In the expanding universe, there is no pan. Just dough. Even if there were a pan, it would be part of the universe, and it would expand along with everything else.

Even for me, a teaching professor in physics and astronomy who has studied the universe for years, these ideas are hard to grasp. You don’t experience anything like this in your daily life. It’s like asking what direction is farther north of the North Pole.

Another way to think about the universe’s expansion is by thinking about how other galaxies are moving away from our galaxy, the Milky Way. Scientists know the universe is expanding because they can track other galaxies as they move away from ours. They define expansion using the rate that other galaxies move away from us. This definition allows them to imagine expansion without needing something to expand into.
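
One way to make that definition concrete is Hubble's law, which ties a galaxy's recession speed to its distance. As a purely illustrative calculation, using a present-day expansion rate of roughly 70 kilometers per second per megaparsec:

v = H_0 \, d \approx \left( 70\ \tfrac{\text{km/s}}{\text{Mpc}} \right) \times \left( 100\ \text{Mpc} \right) = 7000\ \text{km/s}

In words: a galaxy 100 megaparsecs away recedes from us at roughly 7,000 kilometers per second, and the definition never has to refer to anything "outside" the universe.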

The expanding universe
The universe started with the Big Bang 13.8 billion years ago. The Big Bang describes the origin of the universe as an extremely dense, hot singularity. This tiny point suddenly went through a rapid expansion called inflation, where every place in the universe expanded outward. But the name Big Bang is misleading. It wasn’t a giant explosion, as the name suggests, but a time where the universe expanded rapidly.

The universe then cooled as it continued to expand, and matter and light began to form. Eventually, it evolved into what we know today as our universe.

The idea that our universe was not static and could be expanding or contracting was first published by the physicist Alexander Friedman in 1922, who showed mathematically that an expanding universe was possible.

While Friedman established the possibility of expansion on paper, it was Edwin Hubble who looked deeper into the expansion rate. Many other scientists had observed that other galaxies are moving away from the Milky Way, but in 1929, Hubble published his famous paper confirming that the entire universe is expanding and quantifying how fast galaxies recede.

This discovery continues to puzzle astrophysicists. What phenomenon allows the universe to overcome the force of gravity keeping it together while also expanding by pulling objects in the universe apart? And on top of all that, its expansion rate is speeding up over time.

Many scientists use a visual called the expansion funnel to describe how the universe’s expansion has sped up since the Big Bang. Imagine a deep funnel with a wide brim. The left side of the funnel – the narrow end – represents the beginning of the universe. As you move toward the right, you are moving forward in time. The cone widening represents the universe’s expansion.

Scientists haven’t been able to determine where the energy causing this accelerating expansion comes from; they haven’t been able to detect it or measure it directly. Because this type of energy can’t be seen or measured, they call it dark energy.

According to researchers’ models, dark energy must be the most common form of energy in the universe, making up about 68% of the total energy in the universe. The energy from everyday matter, which makes up the Earth, the Sun and everything we can see, accounts for only about 5% of all energy.

Outside the expansion funnel
So, what is outside the expansion funnel?

Scientists don’t have evidence of anything beyond our known universe. However, some predict that there could be multiple universes. A model that includes multiple universes could fix some of the problems scientists encounter with the current models of our universe.

One major problem with our current physics is that researchers can’t integrate quantum mechanics, which describes how physics works on a very small scale, and gravity, which governs large-scale physics.

The rules for how matter behaves at the small scale depend on probability and quantized, or fixed, amounts of energy. At this scale, objects can come into and pop out of existence. Matter can behave as a wave. The quantum world is very different from how we see the world.

At large scales, which physicists describe with classical mechanics, objects behave how we expect them to behave on a day-to-day basis. Objects are not quantized and can have continuous amounts of energy. Objects do not pop in and out of existence.

The quantum world behaves kind of like a light switch, where energy has only an on-off option. The world we see and interact with behaves like a dimmer switch, allowing for all levels of energy.

But researchers run into problems when they try to study gravity at the quantum level. At the small scale, physicists would have to assume gravity is quantized. But the research many of them have conducted doesn’t support that idea.

Source: https://studyfinds.org/universe-infinite-still-expanding/

Human settlement of Mars isn’t as far off as we might think

Illustration of human colony on Mars. (© Anastasiya – stock.adobe.com)

Could humans expand out beyond their homeworld and establish settlements on the planet Mars? The idea of settling the Red Planet has been around for decades. However, it has been seen by skeptics as a delusion at best and mere bluster at worst.

Mars might seem superficially similar to Earth in a number of ways. But its atmosphere is thin and humans would need to live within pressurized habitats on the surface.

Yet in an era where space tourism has become possible, the Red Planet has emerged as a dreamland for rich eccentrics and techno-utopians. As is often the case with science communication, there’s a gulf between how close we are to this ultimate goal and where the general public understands it to be.

However, I believe there is a rationale for settling Mars and that this objective is not as far off as some would believe. There are actually a few good reasons to be optimistic about humanity’s future on the Red Planet.

First, Mars is reachable. During an optimal alignment between Earth and Mars as the two planets orbit the Sun, it’s possible to travel there in a spacecraft in six to eight months. Some very interesting new engine designs suggest that the trip could be done in two months. But based on technology that’s ready to go, it would take astronauts six months to travel to Mars and six months to return to Earth.

Astronauts have already stayed for this long on the International Space Station (ISS) and on the Soviet orbiting lab Mir. We can get there safely and we have already shown that we can reliably land robots on the surface. There’s no technical reason why we couldn’t do the same with humans.

Second, Mars is abundant in the raw materials required for humans to “live off the land”; in other words, achieve a level of self-sufficiency. The Red Planet has plentiful carbon, nitrogen, hydrogen and oxygen which can be separated and isolated, using processes developed on Earth. Mars is interesting and useful in a multitude of ways that the moon isn’t. And we have technology on Earth to enable us to stay and settle Mars by making use of its materials.

A third reason for Mars optimism is the radical new technology that we can put to use on a crewed mission to the planet. For example, Moxie (Mars Oxygen In-Situ Resource Utilization Experiment), a project developed by scientists at the California Institute of Technology (Caltech), draws in the Martian atmosphere and extracts oxygen from it. Byproducts of the process – carbon monoxide, nitrogen and argon – can be vented.
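
For reference, the underlying reaction splits pairs of carbon dioxide molecules into carbon monoxide and breathable oxygen:

2\,\mathrm{CO_2} \rightarrow 2\,\mathrm{CO} + \mathrm{O_2}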

When scaled up, similar machines would be able to separate out oxygen and hydrogen to produce breathable air, rocket fuel and water. This makes it easier to travel to the planet and live on the surface because it’s not necessary to bring these commodities from Earth – they can be made on Mars. Generating fuel on the surface would also make any future habitat less reliant on electric or solar-powered vehicles.

But how would we build the habitats for our Mars settlers? Space architect Melodie Yasher has developed ingenious plans for using robots to 3D print the habitats, landing pads and everything needed for human life on Mars. Using robots means that these could all be manufactured on Mars before humans landed. 3D printed homes have already been demonstrated on Earth.

Volunteers have also spent time living in simulated Mars habitats here on Earth. These are known as Mars analogues. The emergency medicine doctor Beth Healey spent a year overwintering in Antarctica (which offers many parallels with living on another planet) for the European Space Agency (Esa) and communicates her experience regularly.

She is not alone, as each year sees new projects in caves, deserts and other extreme environments, where long term studies can explore the physical and psychological demands on humans living in such isolated environments.

Finally, the Mars Direct plan devised by Dr. Robert Zubrin has existed for more than 30 years, and has been modified to account for modern technology as the private sector has grown. The original plan was based on using a Saturn V rocket (used for the Apollo missions in the 1960s and 1970s) to launch people. However, this can now be accomplished using the SpaceX Falcon 9 rocket and a SpaceX Dragon capsule to carry crew members.

Several uncrewed launches from Earth could ferry necessary equipment to Mars. These could include a vehicle for crew members to return on. This means that everything could be ready for the first crew once they arrived.

Source : https://studyfinds.org/human-settlement-of-mars-closer/

It’s Friday the 13th. Why is this number feared worldwide?

(Credit: Prazis Images/Shutterstock)

Of all the days to stay in bed, Friday the 13th is surely the best. It’s the title of a popular (if increasingly corny) horror movie series; it’s associated with bad luck, and it’s generally thought to be a good time not to take any serious risks.

Even if you try to escape it, you might fail, as happened to New Yorker Daz Baxter. On Friday 13th in 1976, he decided to just stay in bed for the day, only to be killed when the floor of his apartment block collapsed under him. There’s even a term for the terror the day evokes: Paraskevidekatriaphobia was coined by the psychotherapist Donald Dossey, a specialist in phobias, to describe an intense and irrational fear of the date.

Unfortunately, there is always at least one Friday 13th in a year, and sometimes there are as many as three. Today is one of them. But no matter how many times the masked killer Jason Voorhees from Friday the 13th returns to haunt our screens, this fear has its basis in our own minds rather than in science.

One study did show a small rise in accidents on that day for women drivers in Finland, but much of the problem was due to anxiety rather than general bad luck. Follow-up research found no consistent evidence of a rise in accidents on the day but suggested that if you’re superstitious, it might be better not to get behind the wheel of a car on it anyway.

The stigma against Friday 13th likely comes from a merging of two different superstitions. In the Christian tradition, the death of Jesus took place on a Friday, following the presence of 13 people at the Last Supper. In Teutonic legend, the god Loki arrives at a dinner party set for 12 gods, making him the outcast 13th at the table and leading to the death of another guest.

Elsewhere in the world, 13 is less unlucky. In Hinduism, people fast to worship Lord Shiva and Parvati on Trayodashi, the 13th day of the Hindu lunar month. There are 13 Buddhas in the Shingon sect of Buddhism, and The Tibetan Book of the Great Liberation mentions 13 signs as lucky rather than unlucky.

In Italy, it is more likely to be “heptadecaphobia”, or fear of the number 17, that leads to a change of plans. In Greece, Spain, and Mexico, the “unlucky” day is not Friday 13th, but Tuesday 13th.

In China, the number four is considered significantly unlucky, as it is nearly homophonous to the word “death”. In a multicultural country like Australia you may find hotels and cinemas missing both 13th and fourth floors, out of respect for the trepidation people can have about those numbers.

The lure of superstition

Superstitions were one of the first elements of paranormal beliefs studied in the early 1900s. While many are now just social customs rather than a genuine conviction, their persistence is remarkable.

If you cross your fingers, feel alarmed at breaking a mirror, find a “lucky” horseshoe, or throw spilled salt over your shoulder, you are engaging in long-held practices that can have a powerful impact on your emotions. Likewise, many students now heading toward their semester exams may bring lucky charms, such as a particular pen or favorite socks, into the lecture rooms.

In sports, baseballer Nomar Garciaparra is known for his elaborate batting ritual. Other sports people wear “lucky gear” or put on their gloves in a particular order. The great cricket umpire David Shepherd stood on one leg whenever the score reached 111. These sorts of superstitions are humorously depicted in the film Silver Linings Playbook. It’s interesting to note that it’s often the successful athletes who have these superstitions and stick to them.

Source : https://studyfinds.org/friday-the-13th-number-feared/

Scientists close to creating ‘simple pill’ that cures diabetes

Diabetes with insulin, syringe, vials, pills (© Sherry Young – stock.adobe.com)

Imagine a world where diabetes could be treated with a simple pill that essentially reprograms your body to produce insulin again. Researchers at Mount Sinai have taken a significant step toward making this possibility a reality, uncovering a groundbreaking approach that could potentially help over 500 million people worldwide living with diabetes.

Diabetes, affecting 537 million people globally, develops when cells in the pancreas known as beta cells become unable to produce insulin, a hormone essential for regulating blood sugar levels. In both Type 1 and Type 2 diabetes, patients experience a marked reduction in functional, insulin-producing beta cells. While current treatments help manage symptoms, researchers have been searching for ways to replenish these crucial cells.

The journey to this latest discovery began in 2015 when Mount Sinai researchers identified harmine, a drug belonging to a class called DYRK1A inhibitors, as the first compound capable of stimulating insulin-producing human beta cell regeneration. The research team continued to build on this foundation, reporting in 2019 and 2020 that harmine could work synergistically with other medications, including GLP-1 receptor agonists like semaglutide and exenatide, to enhance beta cell regeneration.

In July 2024, researchers reported remarkable results: harmine alone increased human beta cell mass by 300 percent in their studies, and when combined with a GLP-1 receptor agonist like Ozempic, that increase reached 700 percent.

However, there’s an even more exciting part of this discovery. These new cells might come from an unexpected source. Researchers discovered that alpha cells, another type of pancreatic cell that’s abundant in both Type 1 and Type 2 diabetes, could potentially be transformed into insulin-producing beta cells.

“This is an exciting finding that shows harmine-family drugs may be able to induce lineage conversion in human pancreatic islets,” says Dr. Esra Karakose, Assistant Professor of Medicine at Mount Sinai and the study’s corresponding author, in a statement. “It may mean that people with all forms of diabetes have a large potential ‘reservoir’ for future beta cells, just waiting to be activated by drugs like harmine.”

Using single-cell RNA sequencing technology, the researchers analyzed 109,881 individual cells from human pancreatic islets donated by four adults. This technique allowed them to study each cell’s genetic activity in detail, suggesting that “cycling alpha cells” may have the potential to transform into insulin-producing beta cells. Alpha cells, being the most abundant cell type in pancreatic islets, could potentially serve as an important source for new beta cells if this transformation process can be successfully controlled.

The Mount Sinai team is now moving these studies toward human trials.

“A simple pill, perhaps together with a GLP1RA like semaglutide, is affordable and scalable to the millions of people with diabetes,” says Dr. Andrew F. Stewart, director of the Mount Sinai Diabetes, Obesity, and Metabolism Institute.

While the research is still in its early stages, it offers hope to millions of people who currently manage diabetes through daily insulin injections or complex medication regimens. The possibility of a treatment that could essentially restart the body’s insulin production is nothing short of revolutionary.

The study, published in the journal Cell Reports Medicine, represents a significant step forward in diabetes research. By potentially turning one type of pancreatic cell into another, researchers may have found a way to essentially reprogram the body’s own cellular mechanisms to combat diabetes.

Source : https://studyfinds.org/simple-pill-cure-diabetes/

Polio was supposedly wiped out – Now the virus has been found in Europe’s wastewater

(Credit: Babul Hosen/Shutterstock)

In 1988, the World Health Organization (WHO) called for the global eradication of polio. Within a decade, one of the three poliovirus strains was already virtually eradicated — meaning a permanent reduction of the disease to zero new cases worldwide.

Polio, also known as poliomyelitis, is an extremely contagious disease caused by the poliovirus. It attacks the nervous system and can lead to full paralysis within hours. The virus enters through the mouth and multiplies in the intestine. Infected people shed poliovirus into the environment by the fecal-oral route.

About one in every 200 infections results in irreversible paralysis (usually affecting the legs). Of those who become paralyzed, 5–10% die due to immobilized breathing muscles.

Since 1988, the global number of poliovirus cases has decreased by over 99%. Today, only two countries — Pakistan and Afghanistan — are considered “endemic” for polio. This means that the disease is regularly transmitted in the country.

Yet in recent months, poliovirus has been detected in wastewater in Germany, Spain and Poland. This discovery does not confirm infections in the population, but it is a wake-up call for Europe, which was declared polio-free in 2002. Any gaps in vaccination coverage could see a resurgence of the disease.

Poliovirus strains originating from regions where the virus remained in circulation led to outbreaks among unvaccinated people in Tajikistan and Ukraine in 2021, and Israel in 2022. By contrast, in the UK — where poliovirus was detected in wastewater in 2022 — no cases of paralytic disease were recorded.

This information highlights the varied effect of poliovirus detection. Why? In areas with under-immunized populations, the virus can circulate widely and cause paralysis. But in communities with strong vaccination coverage, the virus often remains limited to symptomless (“asymptomatic”) infections or is detectable only in wastewater.

In this sense, the mere detection of the virus in the environment can serve as a canary in the coal mine. It warns public health officials to check vaccination coverage and take measures such as boosting vaccination campaigns, improving access to healthcare and enhancing disease surveillance to prevent outbreaks.

Rich source of information

Wastewater surveillance, an approach reinvigorated during the COVID pandemic, has proven invaluable for early detection of disease outbreaks. Wastewater is a rich source of information. It contains a blend of human excrement, including viruses, bacteria, fungi and chemical traces. Analysing this mixture offers valuable insights for public health officials.

Routine wastewater testing in the three countries revealed a specific vaccine-derived strain. No polio cases were reported in any of the three countries.

Vaccine-derived poliovirus strains emerge from the weakened live poliovirus contained in oral polio vaccines. If this weakened virus circulates long enough among under-immunized or unimmunized groups or in people with weakened immune systems (such as transplant recipients or those undergoing chemotherapy), it can genetically shift back into a form capable of causing disease.

In this case, it is possible that the virus was shed in the sewage by an infected asymptomatic person. But it is also possible that a person who was recently vaccinated with the oral vaccine (with the weakened virus) shed the virus in the wastewater, which subsequently evolved until re-acquiring the mutations that cause paralysis.

A different type of vaccine exists. The inactivated polio vaccine (IPV) cannot revert to a dangerous form. However, it is more expensive and harder to deliver, requiring trained health workers and more involved procedures to administer. This can limit the feasibility of deploying it in poor countries – often where the need to vaccinate is greater.

This does not mean that the oral polio vaccine is no good. On the contrary, it has been instrumental in eradicating certain poliovirus strains globally. The real issue arises when vaccination coverage is insufficient.

In 2023, polio immunization coverage among one-year-olds in Europe stood at around 95%. This is well above the 80% “herd immunity” threshold — the point at which enough people in a population are vaccinated that vulnerable groups are protected from the disease.
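
A standard back-of-the-envelope formula shows where a threshold like that comes from: if each infected person would, in a fully susceptible population, infect R_0 others on average, transmission dies out once the immune fraction reaches

p_c = 1 - \frac{1}{R_0}

Taking an illustrative basic reproduction number of 5 for poliovirus gives p_c = 1 - 1/5 = 0.8, or 80%; a more transmissible pathogen (higher R_0) pushes the threshold higher.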

In Spain, Germany and Poland, coverage with three doses ranges from 85–93%, protecting most people from severe disease. Yet under-immunized groups and those with weakened immune systems remain at risk.

The massive progress in polio eradication that happened over the past three decades is the result of the global effort to fight the disease. But mounting humanitarian crises — sparked by conflict, natural disasters and climate change — are significantly disrupting vaccination programs essential for safeguarding public health.

Source : https://studyfinds.org/polio-found-in-europe-wastewater/

Universe expanding faster than physics can explain: Webb telescope confirms mysterious growth spurt

Primordial creation: The universe begins with the Big Bang, an extraordinary moment of immense energy, igniting formation of everything in existence. (© Alla – stock.adobe.com)

When two of humanity’s most powerful eyes on the cosmos agree something strange is happening, astronomers tend to pay attention. Now, the James Webb Space Telescope has backed up what Hubble has been telling us for years: the universe is expanding faster than our best physics can explain, and nobody knows why.

Scientists have long known that our universe is expanding, but exactly how fast it’s growing is an ongoing and fascinating debate in the astronomy world. The expansion rate, known as the “Hubble constant,” helps scientists map the universe’s structure and understand its state billions of years after the Big Bang. This latest discovery suggests we may need to rethink our understanding of the universe itself.

“The discrepancy between the observed expansion rate of the universe and the predictions of the standard model suggests that our understanding of the universe may be incomplete,” says Nobel laureate and lead author Adam Riess, a Bloomberg Distinguished Professor at Johns Hopkins University, in a statement. “With two NASA flagship telescopes now confirming each other’s findings, we must take this problem very seriously—it’s a challenge but also an incredible opportunity to learn more about our universe.”

This research, published in The Astrophysical Journal, builds on Riess’ Nobel Prize-winning discovery that the universe’s expansion is accelerating due to a mysterious “dark energy” that permeates the vast stretches of space between stars and galaxies. Think of this expanding universe like a loaf of raisin bread rising in the oven. As the dough expands, the raisins (representing galaxies) move farther apart from each other. While this force pushes galaxies apart, exactly how fast this is happening remains hotly debated.

For over a decade, scientists have used two different methods to measure this expansion rate. One method looks at ancient light from the early universe, like examining a baby photo to understand how someone grew. The other method, using telescopes to observe nearby galaxies, looks at more recent cosmic events. These two methods give significantly different answers about how fast the universe is expanding – and not just slightly different.

While theoretical models predict the universe should be expanding at about 67-68 kilometers per second per megaparsec (a unit of cosmic distance), telescope observations consistently show a faster rate of 70-76 kilometers per second per megaparsec, averaging around 73. This significant discrepancy is what scientists call the “Hubble tension.”
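
As a quick illustrative calculation (not a figure from the study), taking 67.5 as the midpoint of the predicted range and 73 as the observed average gives a relative gap of

\frac{73 - 67.5}{67.5} \approx 0.08

or roughly 8 percent, in line with the 8-9% discrepancy mentioned below.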

To help resolve this mystery, researchers turned to the James Webb Space Telescope, the most powerful space observatory ever built. “The Webb data is like looking at the universe in high definition for the first time and really improves the signal-to-noise of the measurements,” says Siyang Li, a graduate student at Johns Hopkins University who worked on the study.

Webb’s super-sharp vision allowed it to examine these cosmic distances in unprecedented detail. The telescope looked at about one-third of the galaxies that Hubble had previously studied, using a nearby galaxy called NGC 4258 as a reference point – like using a well-known landmark to measure other distances.

The researchers used three different methods to measure these cosmic distances, each acting as an independent check on the others. First, they observed special pulsating stars called Cepheid variables, which astronomers consider the “gold standard” for measuring distances in space. These stars brighten and dim in a precise pattern that reveals their true brightness, making them reliable cosmic yardsticks. The team also looked at the brightest red giant stars in each galaxy and observed special carbon-rich stars, providing two additional ways to verify their measurements.

When they combined all these observations, they found something remarkable: All three methods pointed to nearly identical results, with Webb’s measurements matching Hubble’s almost exactly. The differences between measurements were less than 2% – far smaller than the roughly 8-9% discrepancy that creates the Hubble tension.

This agreement might seem like a simple confirmation, but it actually deepens one of astronomy’s biggest mysteries. Scientists now believe this discrepancy might point to missing pieces in our understanding of the cosmos. Recent research has revealed that mysterious components called dark matter and dark energy make up about 96% of the universe’s content and drive its accelerated expansion. Yet even these exotic components don’t fully explain the Hubble tension.

“One possible explanation for the Hubble tension would be if there was something missing in our understanding of the early universe, such as a new component of matter—early dark energy—that gave the universe an unexpected kick after the big bang,” explains Marc Kamionkowski, a Johns Hopkins cosmologist. “And there are other ideas, like funny dark matter properties, exotic particles, changing electron mass, or primordial magnetic fields that may do the trick. Theorists have license to get pretty creative.”

Whether this cosmic puzzle leads us to discover new forms of energy, exotic particles, or completely novel physics, one thing is certain: the universe is expanding our understanding just as surely as it’s expanding itself. And thanks to Webb and Hubble, we’re along for the ride.

Source : https://studyfinds.org/universe-expansion-rate-physics-webb-telescope/

AI Jesus can ‘listen’ to your confession, but here’s why it can’t absolve your sins

(Credit: New Africa/Shutterstock)

This autumn, a Swiss Catholic church installed an AI Jesus in a confessional to interact with visitors.

The installation was a two-month project in religion, technology, and art titled “Deus in Machina,” created at the University of Lucerne. The Latin title literally means “god from the machine”; it refers to a plot device used in Greek and Roman plays, introducing a god to resolve an impossible problem or conflict facing the characters.

This hologram of Jesus Christ on a screen was animated by an artificial intelligence program. The AI’s programming included theological texts, and visitors were invited to pose questions to the AI Jesus, viewed on a monitor behind a latticework screen. Users were advised not to disclose any personal information and had to confirm that they knew they were engaging with the avatar at their own risk.

Some headlines stated that the AI Jesus was actually engaged in the ritual act of hearing people’s confessions of their sins, but this wasn’t the case. Even so, as a specialist in the history of Christian worship, I was disturbed by the decision to place the AI project in a real confessional that parishioners would ordinarily use.

A confessional is a booth where Catholic priests hear parishioners’ confessions of their sins and grant them absolution, forgiveness, in the name of God. Confession and repentance always take place within the human community that is the church. Human believers confess their sins to human priests or bishops.

Early history

The New Testament scriptures clearly stress a human, communal context for admitting and repenting for sins.

In the Gospel of John, for example, Jesus speaks to his apostles, saying, “Whose sins you shall forgive, they are forgiven, and whose sins you shall retain they are retained.” And in the Epistle of James, Christians are urged to confess their sins to one another.

Churches in the earliest centuries encouraged public confession of more serious sins, such as fornication or idolatry. Church leaders, called bishops, absolved sinners and welcomed them back into the community.

From the third century on, the process of forgiving sins became more ritualized. Most confessions of sins remained private – one-on-one with a priest or bishop. Sinners would express their sorrow by doing penance individually through prayer and fasting.

However, some Christians guilty of certain major offenses, such as murder, idolatry, apostasy or sexual misconduct, would be treated very differently.

These sinners would do public penance as a group. Some were required to stand on the steps of the church and ask for prayers. Others might be admitted in for worship but were required to stand in the back or be dismissed before the scriptures were read. Penitents were expected to fast and pray, sometimes for years, before being ritually reconciled to the church community by the bishop.

Medieval developments

During the first centuries of the Middle Ages, public penance fell into disuse, and emphasis was increasingly placed on verbally confessing sins to an individual priest. After privately completing the penitential prayers or acts assigned by the confessor, the penitent would return for absolution.

The concept of Purgatory also became a widespread part of Western Christian spirituality. It was understood to be a stage of the afterlife where the souls of those who died with minor unconfessed sins, or who had not completed their penance, would be cleansed by spiritual suffering before being admitted to heaven.

Living friends or family of the deceased were encouraged to offer prayers and undertake private penitential acts, such as giving alms – gifts of money or clothes – to the poor to reduce the time these souls would have to spend in this interim state.

Other developments took place in the later Middle Ages. Based on the work of the theologian Peter Lombard, penance was declared a sacrament, one of the major rites of the Catholic Church. In 1215, a new church document mandated that every Catholic go to confession and receive Holy Communion at least once a year.

Priests who revealed the identity of any penitent faced severe penalties. Guidebooks for priests, generally called Handbooks for Confessors, listed various types of sins and suggested appropriate penances for each.

The first confessionals

Until the 16th century, those wishing to confess their sins had to arrange meeting places with their clergy, sometimes just inside the local church when it was empty.

But the Catholic Council of Trent changed this. The 14th session in 1551 addressed penance and confession, stressing the importance of privately confessing to priests ordained to forgive in Christ’s name.

Soon after, Charles Borromeo, the cardinal archbishop of Milan, installed the first confessionals along the walls of his cathedral. These booths were designed with a physical barrier between priest and penitent to preserve anonymity and prevent other abuses, such as inappropriate sexual conduct.

Similar confessionals appeared in Catholic churches over the following centuries: The main element was a screen or veil between the priest confessor and the layperson, kneeling at his side. Later, curtains or doors were added to increase privacy and ensure confidentiality.

Rites of penance in contemporary times

In 1962, Pope John XXIII opened the Second Vatican Council. Its first document, issued in December 1963, set new norms for promoting and reforming Catholic liturgy.

Since 1975, Catholics have had three forms of the rite of penance and reconciliation. The first form structures private confession, while the second and third forms apply to groups of people in special liturgical rites. The second form, often used at set times during the year, offers those attending the opportunity to go to confession privately with one of the many priests present.

The third form can be used in special circumstances when death threatens and there is no time for individual confession, such as during a natural disaster or pandemic. Those assembled are given general absolution, and survivors confess privately afterward.

In addition, these reforms prompted the development of a second location for confession: Instead of being restricted to the confessional booth, Catholics now had the option of confessing their sins face-to-face with the priest.

To facilitate this, some Catholic communities added a reconciliation room to their churches. Upon entering the room, the penitent could choose anonymity by using the kneeler in front of a traditional screen or walk around the screen to a chair set facing the priest.

Over the following decades, the Catholic experience of penance changed. Catholics went to confession less often or stopped altogether. Many confessionals remained empty or were used for storage. Many parishes began to schedule confessions by appointment only. Some priests might insist on face-to-face confession, and some penitents might prefer the anonymous form only. The anonymous form takes priority, since the confidentiality of the sacrament must be maintained.

Source : https://studyfinds.org/ai-jesus-cant-absolve-your-sins/

 

The 7 Best Ski Resorts In The World | From Colorado To Switzerland

From the powdery slopes of the French Alps to Japan’s legendary snowfields, the world’s elite ski resorts offer far more than pristine runs and breathtaking views. These winter wonderlands combine challenging terrain, luxurious amenities, and rich cultural experiences that make them bucket-list destinations for both serious athletes and leisure travelers. Whether you’re seeking champagne powder in Colorado, traditional Alpine charm in Switzerland, or the untouched backcountry of British Columbia, the best ski resorts define the pinnacle of winter sport destinations.

Pic: https://www.tripadvisor.in/

Best Ski Resorts in the World, According to Experts
1. Zermatt, Switzerland

Zermatt is one of the best-known ski resorts in the world, famed for the iconic Matterhorn peak. But besides the endless slopes, Zermatt is a skiing vacation heaven, with boutique shops, restaurants, skating rinks, and hotel rooms with a gorgeous view. Far and Wide says it all – this resort is legendary, practically trademarked by the iconic Matterhorn peak that graces Toblerone chocolate and even a Disneyland ride. But Zermatt’s true magic lies beyond the photo ops. Sure, snapping that perfect Matterhorn picture is a must, but it’s the vast, snow-covered slopes and charming car-free village that truly steal the show, keeping visitors coming back for more.

Serious skiers can ski all the way down to Zermatt from the top of the Matterhorn for a descent of 2200m, but there are lifts all over to take you to different sectors for intermediates. Speaking of the Matterhorn, PureWow calls it one of the most recognizable ski mountains in the world, and for good reason! Towering at almost 15,000 feet, it’s a skier’s dream backdrop and even more awe-inspiring in person than on a chocolate bar. But Zermatt offers so much more than just epic views. PureWow also recommends taking a ride on the Gornergrat Bahn, a legendary train that takes you up the mountain for breathtaking panoramas. Feeling adventurous? Book a lesson with a SkiBro instructor or a mountain guide to learn the best lines down and conquer those slopes like a pro.

Oyster Worldwide knows what’s up – they simply can’t create a “best ski destinations” list without mentioning Zermatt. As the highest resort in the Alps, the views are unbeatable, with the Matterhorn stealing the show from practically any angle on the slopes. Plus, it boasts the greatest vertical drop in all of Switzerland, meaning long, exhilarating runs for all skill levels. And for those who crave a true adrenaline rush, Zermatt offers incredible off-piste terrain – a playground for powderhounds and adventure seekers. Whether you’re a seasoned skier or a first-timer yearning for snowy bliss, Zermatt has something for everyone. So ditch the crowds and ordinary slopes, Zermatt might just be your perfect winter escape!

2. Aspen Snowmass, United States

Aspen, Colorado is home to four different mountains: historic Aspen Mountain, lively Snowmass, uncrowded Aspen Highlands, and the beginner-friendly Buttermilk. Located about four hours from Denver, it can be a little harder to get to, but has been a hub for ski culture since the early 1900s. U.S. News calls it synonymous with North American skiing, and for good reason. Imagine – over 5,600 acres of pristine slopes to explore, all accessible with a single lift ticket – skier’s paradise, anyone?

Aspen is home to luxurious resorts where you can often find celebrities vacationing, as well as many other award-winning restaurants, bars, spas, and other amenities. But Aspen’s charm goes beyond the slopes. Qantas says the heart of the town throbs with designer stores and hidden consignment gems, chic bars buzzing with après-ski energy, and world-class restaurants like Matsuhisa and Element 47 serving up culinary delights. And for those seeking a cultural fix, the Shigeru Ban-designed art museum offers a stunning backdrop for a dose of inspiration.

On the Snow acknowledges Aspen’s reputation as a luxurious ski resort, but emphasizes the true star of the show – the four mountains themselves. Each peak boasts its own character, with Aspen Mountain, nicknamed “Ajax” by the locals, rising from the heart of the town. Steep runs, challenging bumps, powdery glades, and perfectly groomed trails – Ajax has it all, making it a haven for experienced skiers and snowboarders. So, whether you’re a seasoned pro or a first-timer seeking a luxurious winter wonderland, Aspen promises an unforgettable experience.

3. Whistler Blackcomb, Canada

Whistler is only a two-hour drive from Vancouver and offers versatile terrain that’s perfect for families. Hit the slopes or take part in activities at the full-service resort. Ever dreamed of a ski vacation with endless slopes and Olympic-worthy thrills? Then Whistler Blackcomb in British Columbia, Canada, might be your perfect match! The Culture Trip calls it Canada’s most famous ski resort, and for good reason – it hosted the 2010 Winter Olympics and welcomes over two million visitors a year. With a massive skiable area of over 8,000 acres, it’s one of the biggest on the planet. Plus, thanks to those Olympics, the facilities are state-of-the-art, ensuring a smooth and luxurious experience.

Whistler has great terrain for all levels of skiers and snowboarders, and even enough snow that you can ski year round in some areas. The views on the slopes stretch all the way to the Pacific Ocean, and there are over 200 runs serviced by over 30 lifts. PlanetWare says that the resort’s two incredible mountains, Whistler and Blackcomb, are practically begging to be explored. Imagine over 200 marked runs, a combined 8,171 acres of skiable terrain, and not just one, but three glaciers to conquer. An average snowfall of 465 inches a year practically guarantees pristine slopes throughout the season. And with so many lifts to whisk you up the mountain, you’ll spend less time waiting and more time carving epic turns. Don’t miss the legendary Peak 2 Peak Gondola, a must-do aerial experience that takes you between the two mountains for breathtaking panoramas.

The Independent spills the tea on Whistler’s après-ski scene, calling it legendary. After a day of conquering those slopes, unwind in style at a high-end bar, sip on fresh oysters, and soak in the vibrant atmosphere – pure bliss! And for those seeking ultimate relaxation, luxurious chalets with world-class spas await. Whether you’re a hardcore skier craving challenging terrain or a luxury lover seeking a glamorous winter escape, Whistler Blackcomb has something for everyone. So pack your warmest coat, your sense of adventure, and get ready to experience winter wonderland magic at its finest.

Source: https://studyfinds.org/best-ski-resorts-in-the-world/

61% of shoppers say the holiday season is financially terrifying

(© Paolese – stock.adobe.com)

Many people “shop ’til they drop” during the holidays — but a new survey finds that may not be such a great thing. Researchers find one in four people grapple with compulsive overspending during the holiday season.

The research, commissioned by Beyond Finance and conducted by Talker Research among 2,000 people who celebrate a winter holiday, paints a stark picture of financial vulnerability. An overwhelming 56% of respondents feel pressured to spend money during the holidays, with family emerging as the primary source of financial strain (71%).

However, the challenges run far deeper than simple spending pressures. More than three-quarters of respondents (76%) experience what researchers call “money wounds” — emotional difficulties stemming from financial challenges that cut to the core of personal well-being.

“In my weekly therapy sessions with clients burdened by credit card debt, I regularly hear about the same challenges and mental health struggles highlighted in these survey findings, especially as they intensify during the holiday season,” says Dr. Erika Rasure, chief financial wellness advisor at Beyond Finance, in a statement. “It’s crucial to remember you’re not alone. Acknowledging these struggles and seeking support are key steps toward managing financial stress and finding peace.”

The study reveals a complex landscape of financial trauma. Low self-esteem (26%), compulsive overspending (21%), shame from past financial mistakes (21%), and a scarcity mindset (20%) emerge as the most common “money wounds.” During the holiday season specifically, compulsive overspending becomes the most prominent financial issue, affecting 25% of respondents.

The financial stress takes a significant emotional toll. Sixty-eight percent of those experiencing money wounds report that these challenges hold them back from feeling fulfilled and successful. This year, more than six in 10 respondents (61%) say they’re facing their finances with anxiety, and for good reason.

Shoppers’ coping mechanisms are equally telling. Fifty-four percent of those with money wounds admit to avoiding their financial troubles during the holidays. This avoidance manifests in various ways: 37% refrain from buying gifts, 33% decline party invitations, and 29% avoid checking their bank account balances.

Perhaps most heartbreaking is the social isolation that follows. Forty-two percent of respondents say they’ll become distant from others to avoid feeling “less than” or experiencing spending pressure. This distancing comes at an emotional cost, with participants reporting feelings of shame (38%), guilt (39%), and loneliness (40%).

There is a glimmer of hope. Sixty-one percent of respondents are actively trying to embrace the philosophy that “money and spending don’t equal happiness.” Some are taking concrete steps toward healing, with 27% discussing their financial stress with a therapist or mental health expert, and 26% working with financial professionals to improve their habits.

However, the road to recovery is long. On average, respondents believe it takes six years for a money wound to heal. More sobering still, 37% don’t believe financial trauma ever completely resolves.

As the holiday season approaches, the study serves as a powerful reminder of the emotional complexity behind financial stress, urging compassion, understanding, and support for those struggling with money-related challenges.

Source: https://studyfinds.org/holidays-financially-terrifying/

Why do we exist? Invisible particles passing through our bodies could solve greatest mystery

(Photo by Labutin Art on Shutterstock)

Every second, trillions of invisible particles are passing through your body at nearly the speed of light. These ghostly travelers, called neutrinos, might hold the key to some of science’s biggest questions – including why we exist at all. Now, a global team of scientists has mapped out an ambitious decade-long plan to unlock their secrets.

“It might not make a difference in your daily life, but we’re trying to understand why we’re here,” explains Alexandre Sousa, a physics professor at the University of Cincinnati and one of the white paper’s editors, in a statement. “Neutrinos seem to hold the key to answering these very deep questions.”

These mysterious particles are born in various cosmic cookpots: the nuclear fusion powering our sun, radioactive decay in Earth’s crust and nuclear reactors, and specialized particle accelerator laboratories. As they zoom through space, neutrinos can shape-shift between three different “flavors” – electron, muon, and tau neutrinos.

For over two decades, however, something strange has been happening in neutrino experiments, leaving physicists scratching their heads. Several major studies have observed patterns that don’t match our current understanding of how these particles should behave.

The most famous puzzle emerged from the Liquid Scintillator Neutrino Detector (LSND) experiment at Los Alamos National Laboratory, which detected more electron antineutrinos than their theories predicted. This unexpected excess was later supported by similar findings at Fermilab’s MiniBooNE experiment. Meanwhile, measurements of neutrinos from nuclear reactors and radioactive sources have consistently shown fewer electron antineutrinos than expected.

These anomalies have led scientists to propose an intriguing possibility: there might be a fourth type of neutrino, dubbed “sterile” because it appears immune to three of the four fundamental forces of nature.

“Theoretically, it interacts with gravity, but it has no interaction with the others, weak nuclear force, strong nuclear force or electromagnetic force,” Sousa explains.

However, fitting all the experimental data together into a coherent picture has proven challenging. Some results seem to conflict with others, and observations of the early universe place strict limits on additional neutrino types. This has pushed theorists to consider more exotic explanations, from unknown forces to particle decay to quantum effects we don’t yet understand.

To crack these mysteries, physicists are deploying an arsenal of sophisticated new experiments. One of the most ambitious is DUNE (Deep Underground Neutrino Experiment) at Fermilab. Teams have excavated caverns in a former gold mine 5,000 feet underground – so deep it takes 10 minutes just to reach by elevator – to house massive neutrino detectors shielded from cosmic rays and background radiation.

“With these two detector modules and the most powerful neutrino beam ever we can do a lot of science,” says Sousa. “DUNE coming online will be extremely exciting. It will be the best neutrino experiment ever.”

Another major project called Hyper-Kamiokande is under construction in Japan.

“That should hold very interesting results, especially when you put them together with DUNE,” Sousa notes. “The two experiments combined will advance our knowledge immensely.”

According to the research published in the Journal of Physics G Nuclear and Particle Physics, the stakes couldn’t be higher. Beyond potentially discovering new fundamental particles or forces, neutrino research might help explain one of the universe’s greatest mysteries: why there is more matter than antimatter when the Big Bang should have created equal amounts of both. This asymmetry is the reason galaxies, planets, and we ourselves exist.

The new roadmap for neutrino research represents an extraordinary collaborative effort, bringing together more than 170 scientists from 118 institutions worldwide. Their vision will help guide funding decisions for these ambitious projects through the U.S. government’s Particle Physics Project Prioritization Panel.

As researchers venture deeper into the coming decade of discovery, these ethereal particles continue to surprise and perplex us – much as they did when Wolfgang Pauli first proposed their existence in 1930. Perhaps soon, through the combined power of modern technology and global scientific cooperation, neutrinos will finally reveal their full nature and help us understand not just the smallest scales of physics but the greatest mysteries of our cosmic existence.

Source : https://studyfinds.org/invisible-particles-could-solve-mystery/

 

Arctic could be ‘ice-free’ by 2027 — Scientists warn we’re closer to disaster than we thought

Melting icebergs by the coast of Greenland. (Photo by muratart on Shutterstock)

The Arctic Ocean’s pristine white ice cap, a defining feature of our planet visible even from space, could undergo a historic transformation in the next few years. A new study reveals that while most projections show the first ice-free day occurring within nine to 20 years after 2023, there’s an unlikely but significant possibility this milestone could arrive as soon as 2026-2027.

While scientists have long studied when the Arctic might become ice-free during September (typically when sea ice reaches its annual minimum), this is the first research to examine when we might see the very first day without significant ice cover. The distinction is crucial – like the difference between a lake being ice-free for an entire month versus experiencing its first ice-free day during an unusually warm spell.

The study, led by researchers Céline Heuzé from the University of Gothenburg and Alexandra Jahn from the University of Colorado Boulder, defines “ice-free” as less than one million square kilometers of sea ice remaining. For perspective, that’s about four times the size of the United Kingdom – a small fraction of the Arctic Ocean’s typical ice coverage. It mainly accounts for ice that tends to persist along northern coastlines even during extensive melting.

“The first ice-free day in the Arctic won’t change things dramatically,” says Jahn, an associate professor in the Department of Atmospheric and Oceanic Sciences and a fellow at CU Boulder’s Institute of Arctic and Alpine Research, in a statement. “But it will show that we’ve fundamentally altered one of the defining characteristics of the natural environment in the Arctic Ocean, which is that it is covered by sea ice and snow year-round, through greenhouse gas emissions.”

This transformation is already well underway. The National Snow and Ice Data Center reported that September 2023’s sea ice minimum – 4.28 million square kilometers – was one of the lowest measurements since satellite monitoring began in 1978. While this figure exceeded the record low set in September 2012, it represents a dramatic decline from the 1979-1992 average of 6.85 million square kilometers. Scientists have observed Arctic ice disappearing at an unprecedented rate of more than 12% each decade.

“Because the first ice-free day is likely to happen earlier than the first ice-free month, we want to be prepared,” says Heuzé. “It’s also important to know what events could lead to the melting of all sea ice in the Arctic Ocean.”

To understand when this threshold might be crossed, the researchers analyzed 366 simulations from 11 carefully selected climate models. These models were chosen based on their accuracy in reproducing historical Arctic conditions and seasonal patterns. The simulations explored various future scenarios, from optimistic cases with reduced emissions (SSP1-1.9) to pessimistic ones with continued high emissions (SSP5-8.5). Nine of these simulations suggested the possibility of an ice-free day occurring within just three to six years – an extreme but plausible scenario.

Recent events demonstrate how quickly Arctic conditions can change. In March 2022, parts of the Arctic experienced temperatures 50°F above average, with areas around the North Pole approaching melting point – an unprecedented warm spell that hints at the kind of extreme events that could accelerate ice loss. The researchers found that such warming events, particularly when they occur in sequence, could trigger rapid ice decline.

These rapid transitions typically follow a pattern: an unusually warm fall weakens the ice, followed by a warm winter and spring that prevent normal ice formation. When these conditions persist for three or more years, they create the perfect environment for an ice-free day to occur in late summer. As climate change progresses, these warm spells are expected to become more frequent and intense.

The loss of Arctic sea ice creates a troubling feedback loop. Ice and snow reflect most incoming sunlight back to space, while dark ocean water absorbs it. As more ice melts, more solar energy is absorbed, further warming the region and potentially accelerating ice loss. This process could have far-reaching effects on global weather patterns and ecosystems.

Source: https://studyfinds.org/arctic-ice-free-by-2027/

Your heart has a hidden brain, game-changing study discovers

(© natali_mis – stock.adobe.com)

Inside your chest lies not just a muscular pump, but a sophisticated neural network that scientists are just beginning to fully understand. A new study reveals that the heart’s internal nervous system – known as the intracardiac nervous system (IcNS) – is far more complex than previously thought, challenging long-held views about how our hearts maintain their life-sustaining rhythm.

For decades, scientists believed this system was simply a relay station, passing along signals from the brain to the heart muscle. This new research, led by scientists from Karolinska Institutet in Sweden and Columbia University in New York, demonstrates that the IcNS is more like a local control center, capable of processing information and even generating rhythmic patterns independently.

A Microscopic Marvel
The researchers made these discoveries using zebrafish, which might seem like an unusual choice for studying human heart function. However, zebrafish hearts share remarkable similarities with human hearts in terms of rate, electrical patterns, and basic structure, making them invaluable models for cardiovascular research. Although the zebrafish heart has just two chambers rather than four, it depends, like the human heart, on precise coordination between various types of cells to maintain proper function.

Working with young adult zebrafish (8-12 weeks old), the research team focused on a crucial region called the sinoatrial plexus (SAP) – essentially the heart’s pacemaker region. Using a combination of cutting-edge techniques, including single-cell RNA sequencing, detailed imaging, and electrical recordings, they uncovered an unexpectedly diverse population of neurons.

The team found that these neurons use different chemical messengers, or neurotransmitters, to communicate. The majority – about 81% – are cholinergic neurons, using acetylcholine as their primary messenger. The remaining neurons use a variety of other neurotransmitters: 8% are glutamatergic, 6% use GABA, 5% are serotonergic, and 4.6% are catecholaminergic. This diversity suggests a level of local control and fine-tuning previously unrecognized in cardiac function.

‘Complex Nervous System Within The Heart’
Perhaps the most intriguing discovery was a subset of neurons that display “pacemaker-like” or “rhythmogenic” properties. These neurons can generate rhythmic patterns of activity similar to those found in central pattern generator networks – neural circuits in the brain and spinal cord that produce rhythmic behaviors like breathing or walking. This finding suggests that the heart’s nervous system might play a more active role in maintaining cardiac rhythm than previously thought.

“We were surprised to see how complex the nervous system within the heart is,” says Dr. Konstantinos Ampatzis, the study’s lead researcher, in a university release.

To understand how these neurons affect heart function, the researchers developed an innovative experimental approach. They created a preparation that allowed them to study the intact heart while recording from individual neurons, using a compound called blebbistatin to temporarily stop the heart’s muscular contractions while keeping the neurons active. This technique revealed four distinct types of neurons with different firing patterns, from single spikes to rhythmic bursts of activity.

When the researchers manipulated these neurons, they could directly influence the heart’s beating pattern. For example, when they triggered the release of neurotransmitters from these cells using a specific chemical solution, they observed changes in heart rate and rhythm. This demonstrated that the IcNS actively participates in controlling cardiac function, rather than simply relaying signals from the brain.

“This ‘little brain’ has a key role in maintaining and controlling the heartbeat, similar to how the brain regulates rhythmic functions such as locomotion and breathing,” Dr. Ampatzis explains.

In other words, the heart isn’t just a passive recipient of brain commands – it’s an active participant in its own functioning.

More Research Ahead
These findings open up intriguing possibilities for treating heart conditions. Many cardiac problems involve disruptions to the heart’s rhythm, and understanding this local neural network could potentially lead to new therapeutic approaches.

“We aim to identify new therapeutic targets by examining how disruptions in the heart’s neuronal network contribute to different heart disorders,” Dr. Ampatzis notes.

The study, published in Nature Communications, also raises fascinating questions about how organs regulate themselves. The presence of such a sophisticated neural network in the heart suggests that individual organs might have more autonomous control over their function than previously recognized, with the brain providing overall coordination rather than micromanaging every aspect of organ function. This could represent an efficient biological design, allowing for rapid local responses while maintaining central oversight.

Source: https://studyfinds.org/your-heart-has-a-hidden-brain/

What Is Body Roundness Index? Everything You Need To Know

Whether you’re finally getting to your annual physical with your primary care physician or seeing a specialist, you’ve likely learned where you fall on the body mass index (BMI), a calculated measurement of weight relative to height. But some researchers are paving the way for another measurement to join the conversation in assessing a person’s health risks.

What Is Body Roundness Index?

A recent study published in JAMA Network Open makes the argument that a newer tool – body roundness index (BRI) – may be more effective at assessing a person’s health. Whereas BMI is determined using one’s height and weight only, BRI also includes an individual’s waist circumference (or roundness) in the calculation. The result is a clearer picture of how body fat is distributed: The more fat in the middle of your body, the more at-risk you are for some health conditions.

“Visceral fat protects organs, but too much can lead to health problems. Excess fat can also lead to chronic inflammation which is also linked to many diseases including cardiovascular disease, diabetes, arthritis, and cancer,” said Geralyn Plomitallo, registered dietitian and clinical nutrition manager at Stamford Health.

How Can I Calculate My BRI?

Introduced in 2013, the equation for BRI is complicated. It considers your height, as BMI does, but swaps body weight for your waist circumference.

A healthy BRI generally falls below 10 on the scale of 1 – 20. The higher the score, the rounder the body and the more at risk the person is for diseases and obesity.
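
For readers who want to try the math themselves, below is a minimal sketch of the most widely cited form of the BRI equation, which comes from the 2013 paper that introduced the index and can be computed from waist circumference and height alone. The function name and example values are ours, and this is an illustration rather than a clinical tool:

```python
import math

def body_roundness_index(waist_cm: float, height_cm: float) -> float:
    """Widely cited BRI formula: models the body as an ellipse whose
    minor axis is the waist radius and whose major axis is half the height."""
    waist_radius = waist_cm / (2 * math.pi)
    half_height = height_cm / 2
    eccentricity = math.sqrt(1 - (waist_radius / half_height) ** 2)
    return 364.2 - 365.5 * eccentricity

# Hypothetical example: a person 170 cm tall with a 90 cm waist
print(round(body_roundness_index(waist_cm=90, height_cm=170), 1))  # roughly 3.9
```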

If you have a measuring tape handy, you can get an idea of where you fall on the BRI scale. “Your waist measurement should be no more than 35 inches for women and no more than 40 inches for men,” said Plomitallo.

Or, in simpler terms, you can guesstimate by looking at your body shape. “Apple shape typically means you have more fat around the middle,” said Plomitallo. “With a pear shape, the fat is higher around the hips and less in the middle.”

Is BRI Better Than BMI?

BMI, while a helpful gauge of risk of disease, has long been criticized as incomplete or even misleading. “It just uses your height and weight and could overestimate body fat in athletes,” explained Plomitallo. “If using the BMI alone, Arnold Schwarzenegger would be considered obese.” A person who has a lower body weight, but no muscle mass would fall into the healthy BMI zone but could still be at high risk for diseases.

Source : https://www.stamfordhealth.org/healthflash-blog/primary-care/bmi-versus-bri/

One simple meal swap may significantly boost your heart health

(Credit: Panji Dwi Risantoro/Shutterstock)

What if the key to protecting your heart was as simple as rethinking what’s on your dinner plate? A 30-year study by Harvard researchers suggests just that — finding that the secret to preventing cardiovascular disease may be as simple as swapping your sources of protein.

Specifically, scientists revealed how the balance between plant and animal proteins could significantly reduce your risk of heart disease. The study, tracking nearly 203,000 health professionals, uncovered a compelling nutritional strategy: the more you move toward plant-based sources of protein, the better your heart may fare.

Results published in The American Journal of Clinical Nutrition show participants who consumed a diet with the highest ratio of plant to animal protein saw a remarkable 19% lower risk of cardiovascular disease (CVD) and a 27% lower risk of coronary heart disease.

Currently, the average American diet features a 1:3 ratio of plant to animal protein. The new research recommends a dramatic shift.

“The average American eats a 1:3 plant to animal protein ratio. Our findings suggest a ratio of at least 1:2 is much more effective in preventing CVD,” says lead author Andrea Glenn in a media release.
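
As a back-of-the-envelope illustration of what that ratio means in practice, the sketch below simply divides a day’s plant protein by its animal protein; the gram figures are hypothetical examples, not values from the study:

```python
def plant_to_animal_ratio(plant_protein_g: float, animal_protein_g: float) -> float:
    """Daily plant-to-animal protein ratio (0.33 is roughly 1:3, 0.5 is 1:2)."""
    return plant_protein_g / animal_protein_g

# Hypothetical day at the 1:3 U.S. average: 25 g plant vs. 75 g animal protein
print(round(plant_to_animal_ratio(25, 75), 2))  # 0.33, i.e. about 1:3
# Shifting some red meat toward legumes and nuts: 33 g plant vs. 66 g animal
print(round(plant_to_animal_ratio(33, 66), 2))  # 0.5, i.e. the suggested 1:2
```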

The research isn’t just about cutting meat — it’s about strategic replacement. Swapping red and processed meats for protein-rich plant alternatives like nuts and legumes appears to be the sweet spot. These plant proteins come packed with additional health bonuses: fiber, antioxidant vitamins, minerals, and healthy fats that contribute to improved blood pressure and reduced inflammation.

The study’s most intriguing finding is that more plant protein continues to provide benefits, particularly for coronary heart disease prevention. While cardiovascular disease risk levels off around a 1:2 plant-to-animal protein ratio, heart disease risk keeps declining with even higher plant protein intake.

“Most of us need to begin shifting our diets toward plant-based proteins. We can do so by cutting down on meat, especially red and processed meats, and eating more legumes and nuts,” explains senior author Frank Hu.

Source : https://studyfinds.org/simple-meal-swap-heart-health/

Oceans may never have existed on Venus, says new research

Instead of condensing on the planet’s surface, any water in Venus’ atmosphere likely remained as steam, suggests the research from the University of Cambridge.

Venus’ northern hemisphere as seen by NASA’s Magellan spacecraft. Pic: NASA

Venus may never have hosted oceans on its surface, according to new research.

A scientific debate has raged for years over the history of Venus and whether it ever held liquid oceans, but new research by astrochemists from the University of Cambridge suggests the planet has always been dry.

“Two very different histories of water on Venus have been proposed: one where Venus had a temperate climate for billions of years with surface liquid water and the other where a hot early Venus was never able to condense surface liquid water,” said the report’s authors Tereza Constantinou, Oliver Shorttle and Paul B. Rimmer.

Ms Constantinou and her colleagues modelled the current chemical makeup of Venus’ atmosphere and discovered “the planet has never been liquid-water habitable”.

“Venus today is a hellish world,” suggests NASA. It has an average surface temperature of around 465C (869F) and a pressure 90 times greater than Earth’s at sea level, as well as being permanently shrouded in thick, toxic clouds of sulfuric acid.

In their study, the scientists found the planet’s interior lacks hydrogen, which suggests it is much drier than Earth’s interior.

Instead of condensing on the planet’s surface, any water in Venus’ atmosphere likely remained as steam, suggests the research.

Back in 2016, a team of scientists working for NASA’s Goddard Institute for Space Studies (GISS) in New York suggested the planet may once have been habitable.

The team used a computer model similar to the type used to predict climate change on Earth.

“Many of the same tools we use to model climate change on Earth can be adapted to study climates on other planets, both past and present,” said Michael Way at the time, a researcher at GISS and the paper’s lead author.

Source : https://news.sky.com/story/oceans-may-never-have-existed-on-venus-says-new-research-13265362

High-dose vitamin C: Promising treatment may extend survival of pancreatic cancer patients

(Credit: Numstocker/Shutterstock)

A study published in the November issue of Redox Biology has found that adding intravenous, high-dose vitamin C to a chemotherapy regimen doubled the survival of patients with late-stage, metastatic pancreatic cancer from eight months to 16 months.

“This is a deadly disease with very poor outcomes for patients. The median survival is eight months with treatment, probably less without treatment, and the five-year survival is tiny. When we started the trial, we thought it would be a success if we got to 12 months survival, but we doubled overall survival to 16 months. The results were so strong in showing the benefit of this therapy for patient survival that we were able to stop the trial early,” explains Joseph J. Cullen, MD, FACS, a professor of Surgery and Radiation Oncology at the University of Iowa, in a statement to StudyFinds.

The study consisted of 34 patients with stage 4 pancreatic cancer who were randomized to two groups. One group received standard chemotherapy (gemcitabine and nab-paclitaxel). The other group received the same chemotherapy plus intravenous infusions of 75 grams of vitamin C three times a week.

The average survival for patients who received chemotherapy and vitamin C was 16 months. Patients who received only chemotherapy survived an average of just eight months.

“Not only does it increase overall survival, but the patients seem to feel better with the treatment. They have fewer side effects, and appear to be able to tolerate more treatment, and we’ve seen that in other trials, too,” Cullen says.

There is additional evidence of the benefit of intravenous high-dose vitamin C in cancer treatment. Bryan Allen, MD, PhD, a professor and chief of Radiation Oncology at the University of Iowa, and Cullen collaborated on a trial of high dose vitamin C with chemotherapy and radiation for glioblastoma, a deadly brain cancer. These patients also showed a significant increase in survival.

Cullen, Allen, and their colleagues have been conducting research on the anti-cancer effect of high-dose, IV vitamin C for two decades. They demonstrated that IV vitamin C produces high levels in the blood that cannot be achieved by taking vitamin C orally. The high concentration causes changes in cancer cells that make them more vulnerable to chemotherapy and radiation. Cullen describes the results of their innovation and perseverance as highly encouraging.

Source : https://studyfinds.org/vitamin-c-cancer-survival/

3 reasons why kids stick toys up their nose

(Credit: zeljkodan/Shutterstock)

Children, especially toddlers and preschoolers, have an uncanny ability to surprise adults. And one of the more alarming discoveries parents can make is that their child has stuck a small object, such as a Lego piece, up their nose.

Queensland Children’s Hospital recently reported more than 1,650 children with foreign objects up their nose had presented to its emergency department over the past decade. Lego, beads, balls, batteries, buttons, and crayons were among the most common objects.

With the Christmas season approaching, it’s likely more of these small objects will be brought into our homes as toys, gifts or novelty items.

But why do children stick things like these up their nose? Here’s how natural curiosity, developing motor skills, and a limited understanding of risk can be a dangerous combination.

1. Kids are curious creatures

Toddlers are naturally curious creatures. During the toddler and preschool years, children explore their environment by using their senses. They touch, taste, smell, listen to and look at everything around them. It’s a natural part of their development and a big part of how they learn about the world.

Researchers call this “curiosity-based learning”. They say children are more likely to explore objects that are unfamiliar or that they don’t completely understand. This may explain why toddlers tend to gravitate towards new or unfamiliar objects at home.

Unfortunately, this healthy developmental curiosity sometimes leads to them putting things in places they shouldn’t, such as their nose.

2. Kids are great mimics

Young children often mimic what they see. Studies that tracked the same group of children over time confirm imitation plays a vital role in a child’s development. This activates certain critical neural pathways in the brain. Imitation is particularly important when learning to use and understand language and when learning motor skills such as walking, clapping, catching a ball, waving, and writing.

Put simply, when a child imitates, it strengthens brain connections and helps them learn new skills faster. Anecdotally, parents of toddlers will relate to seeing their younger children copying older siblings’ phrases or gestures.

Inserting items into their nose is no different. Toddlers see older children and adults placing items near their face – when they blow their nose, put on makeup or eat – and decide to try it themselves.

3. Kids don’t yet understand risk

Toddlers might be curious. But they don’t have the cognitive capacity or reasoning ability to comprehend the consequences of placing items in their nose or mouth. This can be a dangerous combination. So, supervising your toddler is essential.

Small, bright-colored objects, items with interesting textures, or items that resemble food are especially tempting for little ones.

What can I do?

Sometimes, it’s obvious when a child has put something up their nose, but not always. Your child might have pain or itchiness around the nose, discharge or bleeding from the nose, and be upset or uncomfortable.

If your child has difficulty breathing, or you suspect your child has inserted a sharp object or button battery, seek immediate medical care. Button batteries can burn and damage tissues in as little as 15 minutes, which can lead to infection and injury.

If your child inserts an object where they shouldn’t:

stay calm: your child will react to your emotions, so try to remain calm and reassuring
assess the situation: can you see the object? Is your child in distress?
encourage your child to blow their nose gently. This may help dislodge the object
take your child outside in the Sun: brief exposure for a minute or two might prompt a “Sun sneeze”, which may dislodge the object. But avoid sniffing, which may cause the object to travel further in the airways and into the lungs
never try to remove the object yourself using tweezers, cotton swabs or other tools. This can push the object further into the nose, causing more damage.
If these methods don’t dislodge the item, your child is not distressed, and you don’t suspect a sharp object or button battery, go to your GP. They may be able to see and remove the item.

Source : https://studyfinds.org/why-kids-stick-toys-up-nose/

 

American Nightmare: Only 31% think they’ve financially ‘made it’

(© alfa27 – stock.adobe.com)

Despite being the land of opportunity, the American Dream remains frustratingly out of reach for most Americans, with a mere 31% believing they’ve financially “made it” in life. The surprising twist? Millennials are leading the pack in financial confidence, with 34% claiming they’ve achieved financial success – the highest percentage among all generations.

The comprehensive survey of 2,000 employed Americans, conducted by Talker Research for BOK Financial, reveals a complex landscape where traditional markers of success are evolving, and external factors weigh heavily on financial aspirations. For those still climbing the corporate ladder, there’s hope: 54% believe they’re well on their way to financial success in their lifetime.

However, the picture becomes less optimistic with age. Only 27% of baby boomers feel they’ve reached financial success, and among those who haven’t, just one-third believe they ever will. The survey found that Americans consider their path to financial success threatened by various external factors, including presidential elections (46%), interest rate changes (45%), and the job market (42%).

What exactly does it mean to ‘make it’ financially in today’s America?

The goalposts have shifted significantly, with 79% of respondents saying their definition has evolved over time. The magic number appears to be around $234,000 in net worth – though reaching that milestone faces modern obstacles like the high cost of living (42%) and inflation (26%), with some citing their own spending habits (7%) as a barrier.

“The uncertainty around the economy, politics and other external factors can weigh heavily on people — and are right now,” says Jessica Jones with BOK Financial Advisors, an affiliate of BOK Financial, in a statement. “And financial headwinds like high inflation and interest rates can make it feel like it’s harder to get ahead, but baby steps are key. If someone is struggling to see success in their financial future, it’s important to just get started, even with a small savings account.”

Nearly half of baby boomers (48%) and Gen X respondents (47%) point to higher cost of living as a major obstacle, compared to just 34% of Gen Z. Meanwhile, younger generations – Gen Z (28%) and millennials (30%) – are more likely to cite inflation as their primary concern.

The markers of financial success have also undergone a dramatic shift. Today’s Americans consider owning a home (78%) and a vehicle (64%) as necessary indicators of financial success, while traditional milestones like having children (40%) or getting married (34%) – which were crucial for their parents’ generation – have become less significant. Modern indicators now include having an established long-standing career (48%) and earning a college degree (30%).

When it comes to spending, Gen Z (27%) and millennials (31%) direct the largest portion of their money toward family expenses, while Gen X (43%) and baby boomers (50%) prioritize retirement savings. Younger generations are planning ahead too, with Gen Z expecting to start retirement planning at around age 41, and millennials at age 46.

Interestingly, Gen Z shows both practical and personal financial priorities. While they’re the most confident about planning their financial future without professional help (70%), they also lead in prioritizing purchases that make them happy (20%). In contrast, baby boomers express the least confidence in their financial future during retirement (33%) and their ability to plan without professional assistance (49%).

Source : https://studyfinds.org/financially-made-it/

1.5 million years ago, two human species shared the same morning commute

A footprint hypothesized to have been created by a Paranthropus boisei individual. (Photo credit: Kevin Hatala/Chatham University)

Study of ancient footprints first ever to show that early ancestors coexisted in shared space

In the arid landscapes of northern Kenya, a remarkable discovery is reshaping our understanding of human evolution. Scientists have uncovered 1.5-million-year-old footprints that provide the first direct evidence that two different early human species likely encountered one another, potentially sharing the same territories and resources.

The exciting research, published in Science, centers on a series of fossilized footprints found at a site called ET-2022-103-FE22 (abbreviated as FE22) near Lake Turkana. What makes these tracks extraordinary isn’t just their age, but what they reveal about our ancient relatives’ coexistence and movement patterns.

The site preserves a continuous trackway made by one individual and three isolated footprints from different individuals, all pressed into what was once wet, muddy ground near an ancient lakeshore. Alongside the human footprints are tracks from various animals, including massive bird prints likely left by ancient marabou storks, as well as tracks from bovids (antelope-like animals) and equids (horse family members).

But the real story lies in the distinct differences between the footprints. The research team, led by Kevin Hatala from Chatham University, found two clearly different walking patterns preserved in these ancient tracks. One set of prints shows characteristics very similar to modern human footprints, while the other set reveals a notably different way of walking.

“Fossil footprints are exciting because they provide vivid snapshots that bring our fossil relatives to life,” says Kevin Hatala, the study’s first author, and an associate professor of biology at Chatham University, in a statement. “With these kinds of data, we can see how living individuals, millions of years ago, were moving around their environments and potentially interacting with each other, or even with other animals. That’s something that we can’t really get from bones or stone tools.”

Through careful analysis using advanced 3D imaging technologies, the research team identified two distinct patterns of movement in the human footprints. The continuous trackway shows evidence of someone walking at a brisk pace of 1.81 meters per second, but with notably different foot mechanics than modern humans. These tracks are flatter and show signs of a more mobile big toe. In contrast, the isolated footprints more closely match the arch patterns and toe alignment seen in modern human feet.

“In biological anthropology, we’re always interested in finding new ways to extract behavior from the fossil record, and this is a great example,” says Rebecca Ferrell, a program director at the National Science Foundation. “The team used cutting-edge 3D imaging technologies to create an entirely new way to look at footprints, which helps us understand human evolution and the roles of cooperation and competition in shaping our evolutionary journey.”

This distinction is crucial because it suggests these tracks were made by two different early human species: Homo erectus and Paranthropus boisei. H. erectus is often considered our direct ancestor and is thought to have walked very similarly to modern humans. P. boisei, meanwhile, was a more robust species with a distinctly different body build and, as these footprints suggest, a different way of walking.

The lake margin environment where these tracks were preserved offers a rare snapshot of ancient life, frozen in time. The footprints were made within hours or days of each other, suggesting these two species weren’t just living in the same general region, but were actively using the same spaces at nearly the same time.

What’s particularly intriguing is that this pattern of coexistence shows up repeatedly in the fossil record of this region between 1.4 and 1.6 million years ago. Multiple sites preserve evidence of these two distinct walking styles, indicating this wasn’t a one-time encounter but rather a sustained pattern of shared habitat use.

“This proves beyond any question that not only one, but two different hominins were walking on the same surface, literally within hours of each other,” says co-author Craig Feibel, a professor in the Department of Earth and Planetary Sciences and Department of Anthropology in the Rutgers School of Arts and Sciences. “The idea that they lived contemporaneously may not be a surprise. But this is the first time demonstrating it. I think that’s really huge.”

Interestingly, Feibel notes that Homo erectus survived for about 1 million years longer than Paranthropus boisei. Why the latter went extinct so much sooner remains a mystery.

This discovery challenges previous limitations in studying ancient human coexistence. While fossilized bones can tell us different species lived in the same general area over thousands of years, footprints provide a much more intimate window into their daily lives and interactions. These tracks show that different human species weren’t just inhabiting the same general region – they were walking the same paths, perhaps even encountering one another face to face.

The findings suggest that despite their differences, these two species found ways to share resources without excessive competition. The lake margin environments where they left their tracks would have provided various food sources and other resources that could have supported both species’ needs. This peaceful coexistence might help explain how multiple human species managed to survive alongside each other for hundreds of thousands of years.

The implications of this research extend beyond just understanding ancient human behavior. It provides insights into how species adapt to share environments and resources, a topic that remains relevant today as we grapple with questions of human impact on other species and their habitats.

While we may never know if these different species exchanged greetings or avoided each other’s gaze, their footprints tell us something profound: Long before we built cities or drew maps, different kinds of humans were already figuring out how to share their world. Perhaps that’s the most human trait of all.

Source : https://studyfinds.org/ancient-footprints-two-human-species/

Do crabs feel pain? Study shells out answer to burning question

(Credit: TasfotoNL/Shutterstock)

For the first time, scientists have directly observed pain signals being transmitted to the brains of shore crabs, providing the strongest evidence yet that these creatures can sense and process pain. This discovery, made by researchers at the University of Gothenburg, could revolutionize how we treat crustaceans – from seafood restaurants to research laboratories.

The study, published in the journal Biology, represents the first time scientists have used EEG-style measurements to record pain responses directly from a crab’s brain.

Shore crabs, those small greenish-brown crustaceans you might spot scuttling along beaches, were the focus of the investigation. The study examined whether these creatures possess what scientists call “nociceptors” – specialized sensory neurons that detect potentially harmful stimuli and send warning signals to the brain. Think of nociceptors as your body’s built-in alarm system: when you touch something too hot or sharp, these neurons quickly fire off signals saying, “Danger! Pull away!”

“We could see that the crab has some kind of pain receptors in its soft tissues, because we recorded an increase in brain activity when we applied a potentially painful chemical, a form of vinegar, to the crab’s soft tissues. The same happened when we applied external pressure to several of the crab’s body parts,” explains lead author Eleftherios Kasiouras, a PhD student at the University of Gothenburg, in a statement.

Unlike previous research that only observed how crustaceans behave when exposed to harmful stimuli, this study directly measured their neural responses – similar to how doctors use an EEG to monitor human brain activity. The research team examined 20 shore crabs, focusing on how their nervous systems responded to both physical pressure and chemical irritants.

The research team used sophisticated equipment to record electrical activity in different parts of the crabs’ nervous systems. They tested various body parts, including the eyes, antennae, claws, and leg joints, applying either gentle pressure with fine instruments or small amounts of acetic acid (similar to vinegar).

The results revealed fascinating differences in how crabs respond to different types of potentially harmful stimuli. When touched with pressure-testing instruments, their nervous systems produced short, intense bursts of activity. However, when exposed to acetic acid, the response was more prolonged but less intense – suggesting crabs can distinguish between different types of threats.

Particularly striking was the discovery that different body parts showed varying levels of sensitivity. The eyes and soft tissues between leg joints were incredibly responsive to touch, detecting pressure as light as 0.008 grams – about 75 times more sensitive than human skin. Meanwhile, their antennae and antennules appeared specialized for detecting chemical threats rather than physical pressure.

The antennae and antennules (smaller antenna-like structures) showed a fascinating specialization: they responded strongly to chemical stimuli but showed no response to mechanical pressure. This suggests these appendages may be specifically tuned to detect harmful chemicals in their environment, similar to how our nose can alert us to dangerous fumes.

“It is a given that all animals need some kind of pain system to cope by avoiding danger. I don’t think we need to test all species of crustaceans, as they have a similar structure and therefore similar nervous systems. We can assume that shrimps, crayfish and lobsters can also send external signals about painful stimuli to their brain which will process this information,” says Kasiouras.

The findings have significant implications for animal welfare practices. Currently, crustaceans aren’t protected under European Union animal welfare legislation, meaning they can legally be cut up while still alive – a practice that would be unthinkable with mammals. As researcher Lynne Sneddon notes, “We need to find less painful ways to kill shellfish if we are to continue eating them. Because now we have scientific evidence that they both experience and react to pain.”

The study builds on previous research showing that crustaceans exhibit protective behaviors when injured, such as rubbing affected areas or avoiding situations that previously caused them harm. However, this is the first time scientists have directly observed the neural signals that drive these behaviors.

Previous studies relied mainly on observing how crustaceans reacted to various stimuli – including mechanical impacts, electric shocks, and acids applied to soft tissues like their antennae. While these crustaceans showed defensive behaviors like touching the affected areas or trying to avoid the threatening stimulus, scientists couldn’t definitively say these responses indicated pain sensation until now.

Whether this research will change how we treat crustaceans remains to be seen, but one thing’s clear: these sideways-walking creatures might deserve a second look – and perhaps a more humane perspective.

Source : https://studyfinds.org/do-crabs-feel-pain/

Why people routinely dismiss (and miss) life’s meaningful moments

(© deagreez – stock.adobe.com)

Imagine skipping Thanksgiving dinner with your family or passing up a chance to write a heartfelt thank-you note, believing these moments aren’t worth your time. Think again. A researcher from the University of Florida finds that people consistently underestimate the profound emotional impact of life’s seemingly mundane experiences.

Dr. Erin Westgate, an assistant professor of psychology leading the research, has uncovered a curious human tendency: we’re remarkably bad at predicting how meaningful our experiences will be.

“We don’t make sense of events until they actually happen,” Westgate explains in a university release. “We don’t process events until we need to, when they actually happen and not before.”

The research began with a simple yet provocative question during Westgate’s graduate school days: Do people accurately anticipate the emotional significance of future events? Her initial study with University of Virginia undergraduates provided a surprising answer. Students consistently misjudged how meaningful their Thanksgiving holiday would be, underestimating the emotional depth of the experience.

Intrigued by these initial findings, Westgate expanded her research during the pandemic, replicating the study with a larger group of University of Florida students. The results were consistent: people systematically fail to recognize the potential meaning in upcoming experiences.

This isn’t just about holiday gatherings. The three-year National Science Foundation-funded study will explore how this psychological blind spot affects major life decisions — from career choices and volunteer work to personal milestones like starting a family. Perhaps most intriguingly, the research will examine how people might avoid potentially transformative experiences that involve discomfort, missing out on opportunities for personal growth and resilience.

“We want to live meaningful lives, we want to do meaningful things,” Westgate notes. “If we are not realizing that an experience is going to be meaningful, we may be less likely to do it and miss out on these potential sources of meaning in our own lives.”

Source: https://studyfinds.org/shouldnt-dismiss-meaningful-moments/

Breaking the scale: Study finds 208 million Americans are now overweight or obese

Obesity problem in United States (© andriano_cz – stock.adobe.com)

Nearly half of adolescents and three-quarters of adults in the U.S. were classified as being clinically overweight or obese in 2021. The rates have more than doubled compared with 1990.

Without urgent intervention, our study forecasts that more than 80% of adults and close to 60% of adolescents will be classified as overweight or obese by 2050. These are the key findings of our recent study, published in the journal The Lancet.

Synthesizing body mass index data from 132 unique sources in the U.S., including national and state-representative surveys, we examined historical trends in overweight and obesity from 1990 to 2021 and forecast estimates through 2050.

For people 18 and older, the condition health researchers refer to as “overweight” was defined as having a body mass index, or BMI, of 25 kilograms per square meter (kg/m²) to less than 30 kg/m² and obesity as a BMI of 30 kg/m² or higher. For those younger than 18, we based definitions on the International Obesity Task Force criteria.
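To make those adult cutoffs concrete, here is a minimal Python sketch; it is illustrative only (the function name and example values are not from the study) and applies solely to people 18 and older, since younger ages use the separate International Obesity Task Force criteria mentioned above.

```python
def classify_adult_bmi(weight_kg: float, height_m: float) -> str:
    """Classify an adult (18+) using the cutoffs described above:
    overweight = BMI of 25 to <30 kg/m^2, obesity = BMI of 30 kg/m^2 or higher.
    (Children and adolescents use the separate IOTF criteria, not handled here.)"""
    bmi = weight_kg / (height_m ** 2)
    if bmi >= 30:
        category = "obesity"
    elif bmi >= 25:
        category = "overweight"
    else:
        category = "neither overweight nor obese"
    return f"BMI {bmi:.1f} kg/m^2 -> {category}"

# Illustrative example: an adult 1.75 m tall weighing 82 kg
print(classify_adult_bmi(82, 1.75))  # BMI 26.8 kg/m^2 -> overweight
```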

This study was conducted by the Global Burden of Disease Study 2021 U.S. Obesity Forecasting Collaborator Group, which comprises over 300 experts and researchers specializing in obesity.

Why it matters

The U.S. already has one of the highest rates of overweight and obesity in the world. Our study estimated that in 2021, a total of 208 million people in the U.S. were medically classified as overweight or obese.

Obesity has slowed health improvements and life expectancy in the U.S. compared with other high-income nations. Previous research showed that obesity accounted for 335,000 deaths in 2021 alone and is one of the most dominant and fastest-growing risk factors for poor health and early death. Obesity increases the risk of diabetes, heart attack, stroke, cancer and mental health disorders.

The economic implications of obesity are also profound. A report by Republican members of the Joint Economic Committee of the U.S. Congress, published in 2024, predicted that obesity-related health care costs will rise to US$9.1 trillion over the next decade.

The rise in childhood and adolescent obesity is particularly concerning, with the rate of obesity more than doubling among adolescents ages 15 to 24 since 1990. Data from the National Health and Nutrition Examination Survey revealed that nearly 20% of children and adolescents in the U.S. ages 2 to 19 live with obesity.

By 2050, our forecast results suggest that 1 in 5 children and 1 in 3 adolescents will experience obesity. The increase in obesity among children and adolescents not only triggers the early onset of chronic diseases but also negatively affects mental health, social interactions and physical functioning.

What other research is being done

Our research highlighted substantial geographical disparities in overweight and obesity prevalence across states, with southern U.S. states showing some of the highest rates.

Other studies on obesity in the United States have also underscored significant socioeconomic, racial and ethnic disparities. Previous studies suggest that Black and Hispanic populations exhibit higher obesity rates compared with their white counterparts. These disparities are further exacerbated by systemic barriers, including discrimination, unequal access to education and health care, and economic inequities.

Another active area of research involves identifying effective obesity interventions, including a recent study in Seattle demonstrating that taxation on sweetened beverages reduced average body mass index among children. Various community-based studies also investigated initiatives aimed at increasing access to physical activity and healthy foods, particularly in underserved areas.

Clinical research has been actively exploring new anti-obesity medications and continuously monitoring the effectiveness and safety of current medications.

Furthermore, there is a growing body of research examining technology-driven behavioral interventions, such as mobile health apps, to support weight management. However, whether many of these programs are scalable and sustainable is not yet clear. This gap hinders the broader adoption and adaptation of effective interventions, limiting their potential impact at the population level.

Source : https://studyfinds.org/208-million-americans-obese/

 

Why do diets fail? Study discovers your fat cells remember being fat

(Photo by Towfiqu Barbhuiya on Unsplash)

If you’ve ever lost weight only to watch the pounds creep back on, you’re not alone. Now, scientists have uncovered a biological explanation for this frustrating phenomenon known as the “yo-yo effect” – and it turns out our fat cells have a surprisingly long memory.

Researchers at ETH Zurich have discovered that being overweight leaves a lasting imprint on our fat cells through a process called epigenetics – chemical markers that act like tiny switches controlling which genes are turned on or off in our cells. These markers can persist for years, making it easier for the body to regain weight even after successful dieting.

“The fat cells remember the overweight state and can return to this state more easily,” explains Professor Ferdinand von Meyenn, who led the study published in Nature.

To reach this conclusion, the research team first studied mice, examining fat cells from both overweight mice and those that had successfully lost weight through dieting. They found that obesity created distinctive epigenetic “stamps” on the fat cells that stubbornly remained even after weight loss. When these mice were later given access to high-fat foods, they regained weight more quickly than mice without these cellular memories.

The findings weren’t limited to mice. The team also analyzed fat tissue samples from formerly overweight people who had undergone weight loss surgery, using samples from medical centers in Sweden and Germany. While they looked at different cellular markers in the human samples, the results aligned with their mouse studies, suggesting that human fat cells also “remember” their previous size. Perhaps most striking is how long this cellular memory might last.

“Fat cells are long-lived cells. On average, they live for ten years before our body replaces them with new cells,” says Laura Hinte, a doctoral student involved in the research.

Currently, there’s no way to erase these cellular memories with medication, though that could change in the future. For now, the researchers emphasize that prevention is key, particularly for young people.

“It’s precisely because of this memory effect that it’s so important to avoid being overweight in the first place. Because that’s the simplest way to combat the yo-yo phenomenon,” von Meyenn notes.

The team is now investigating whether other types of cells, such as those in the brain or blood vessels, might also harbor memories of previous weight gain. If so, this could help explain why maintaining weight loss is such a complex challenge for so many people.

This breakthrough research not only helps explain a frustrating aspect of weight loss that millions have experienced but also underscores the importance of preventing weight gain in the first place – our cells, it seems, never quite forget.

Source : https://studyfinds.org/fat-cells-remember-being-fat/

Climate change after the Ice Age: How CO2 and ‘reverse tsunamis’ created a ‘slushy’ Earth

The beginning of the ice age on the Ob River with snow and ice hummocks off the coast. Berdsk, Novosibirsk region, Western Siberia of Russia. (Photo by Starover Sibiriak on Shutterstock)

The Earth underwent a complete makeover after the last Ice Age, turning from a frozen wasteland to a slushy planet surrounded by oceans. In a new study, researchers looked at how it was possible for the once snowball Earth to rapidly melt and enter its “plumeworld ocean” era.

The surface ocean remained deeply frozen for several million years during the Ice Ages, which occurred about 635 to 650 million years ago. Scientists believe global temperatures dropped, causing the polar ice caps to spread around the hemispheres. More ice meant more sunlight reflected away from the Earth, contributing further to the frigid temperatures.

In a new study published in the Proceedings of the National Academy of Sciences journal, researchers present the first geochemical evidence of how Earth set the conditions for its climate to change, with rising atmospheric carbon dioxide eventually thawing out the ice.

“Our results have important implications for understanding how Earth’s climate and ocean chemistry changed after the extreme conditions of the last global ice age,” says Tian Gan, a former Virginia Tech postdoctoral researcher and lead author of the study, in a press release.

Along with sunlight reflected off the polar ice caps, a quarter of the ocean stayed deeply frozen because of low carbon dioxide levels. The frozen ocean stopped several chain reactions. The water cycle locked up, preventing evaporation, rain, and snow. With no water available, chemical weathering declined. This carbon dioxide-consuming process involves rocks breaking down because they interact with environmental chemicals. A lack of weathering and erosion causes carbon dioxide to build up in the atmosphere, trapping heat.

“It was just a matter of time until the carbon dioxide levels were high enough to break the pattern of ice,” says Shuhai Xiao, a geologist at Virginia Tech and study coauthor. “When it ended, it probably ended catastrophically.”

Over time, the accumulating carbon dioxide trapped more and more heat in the atmosphere. The ice caps melted, and Earth’s climate turned from frozen to slushy. Over 10 million years, average global temperatures climbed from -50 to 120 degrees Fahrenheit.

In the current study, researchers analyzed lithium isotopes from carbonate rocks formed after the Ice Age ended. The rocks’ geochemical signatures would give researchers a better idea of what the climate was like after the Ice Age.

Source: https://studyfinds.org/climate-change-after-ice-age-slushy-earth/

Surprising study claims being ‘fat but fit’ is a real thing

CHARLOTTESVILLE, Va. — Forget everything you thought you knew about weight and health. A head-turning study suggests that being physically fit might matter more than how much you weigh when it comes to your risk of dying from heart disease or other causes.

A team led by researchers from the University of Virginia has turned traditional wisdom about health on its head, finding that people who are overweight or obese but physically fit have essentially the same risk of death as those at a “normal” weight.

The real killer? Being unfit, regardless of body size.

This comprehensive analysis, published in the British Journal of Sports Medicine, examined nearly 400,000 individuals and found that people who were out of shape faced a dramatically higher risk of death – being roughly two to three times more likely to die from cardiovascular disease or other causes compared to their physically fit counterparts.

“Fitness, it turns out, is far more important than fatness when it comes to mortality risk,” says Siddhartha Angadi, an associate professor of exercise physiology at the University of Virginia School of Education and Human Development, in a media release.

“Exercise is more than just a way to expend calories. It is excellent ‘medicine’ to optimize overall health and can largely reduce the risk of cardiovascular disease and all-cause death for people of all sizes.”

The study tracked participants across multiple groups, with average ages ranging from 42 to 64 years. Importantly, the research included a more diverse group than previous studies, with 33% of participants being women – a significant improvement over earlier research that was dominated by male participants.

Participants were categorized into groups based on two key measurements: body mass index (BMI) and cardiorespiratory fitness. BMI is a standard measure that uses height and weight to estimate body fat, while cardiorespiratory fitness measures how efficiently your body can transport and use oxygen during exercise.

Remarkably, the researchers discovered that being “fit” appeared to neutralize the traditionally understood health risks associated with being overweight or obese. Individuals who were overweight or obese but maintained good fitness levels showed no statistically significant increase in mortality risk compared to those at a normal weight.

“The largest reduction in all-cause and cardiovascular disease mortality risk occurs when completely sedentary individuals increase their physical activity modestly,” Angadi reports. “This could be achieved with activities such as brisk walking several times per week with the goal of accumulating approximately 30 minutes per day.”

Conversely, individuals who were unfit – regardless of their weight – faced substantially higher risks. The unfit group, across all weight categories, showed a two to three-fold increase in the likelihood of dying from all causes, including heart disease.

This doesn’t mean weight doesn’t matter at all. Instead, the study suggests that physical activity and fitness might be more critical to long-term health than previously understood. The researchers propose a radical shift in approach: instead of focusing exclusively on weight loss, public health strategies should emphasize improving physical fitness.

Source : https://studyfinds.org/fat-but-fit-is-a-real-thing/

Breathe deeply: Research finds you can absorb nutrients and vitamins from fresh air

(Photo by Rido on Shutterstock)

You know that feeling you get when you take a breath of fresh air in nature? There may be more to it than a simple lack of pollution.

When we think of nutrients, we think of things we obtain from our diet. But a careful look at the scientific literature shows there is strong evidence humans can also absorb some nutrients from the air.

In a new perspective article published in Advances in Nutrition, we call these inhaled nutrients “aeronutrients” – to differentiate them from the “gastronutrients” that are absorbed by the gut.

We propose that breathing supplements our diet with essential nutrients such as iodine, zinc, manganese, and some vitamins. This idea is strongly supported by published data. So, why haven’t you heard about this until now?

Breathing is constant
We breathe in about 9,000 liters of air a day and 438 million liters in a lifetime. Unlike eating, breathing never stops. Our exposure to the components of air, even in very small concentrations, adds up over time.

To date, much of the research around the health effects of air has been centered on pollution. The focus is on filtering out what’s bad rather than what could be beneficial. Also, because a single breath contains minuscule quantities of nutrients, it hasn’t seemed meaningful.

For millennia, different cultures have valued nature and fresh air as healthful. Our concept of aeronutrients shows these views are underpinned by science. Oxygen, for example, is technically a nutrient – a chemical substance “required by the body to sustain basic functions”.

We just don’t tend to refer to it that way because we breathe it rather than eat it.

How do aeronutrients work, then?
Aeronutrients enter our body by being absorbed through networks of tiny blood vessels in the nose, lungs, olfactory epithelium (the area where smell is detected), and the oropharynx (the back of the throat).

The lungs can absorb far larger molecules than the gut – 260 times larger, to be exact. These molecules are absorbed intact into the bloodstream and brain.

Drugs that can be inhaled (such as cocaine, nicotine, and anesthetics, to name a few) will enter the body within seconds. They are effective at far lower concentrations than would be needed if they were being consumed by mouth.

In comparison, the gut breaks substances down into their smallest parts with enzymes and acids. Once these enter the bloodstream, they are metabolized and detoxified by the liver.

The gut is great at taking up starches, sugars, and amino acids, but it’s not so great at taking up certain classes of drugs. In fact, scientists are continuously working to improve medicines so we can effectively take them by mouth.

The evidence has been around for decades
Many of the scientific ideas that are obvious in retrospect have been beneath our noses all along. Research from the 1960s found that laundry workers exposed to iodine in the air had higher iodine levels in their blood and urine.

More recently, researchers in Ireland studied schoolchildren living near seaweed-rich coastal areas, where atmospheric iodine gas levels were much higher. These children had significantly more iodine in their urine and were less likely to be iodine-deficient than those living in lower-seaweed coastal areas or rural areas. There were no differences in iodine in their diet.

This suggests that airborne iodine – especially in places with lots of seaweed – could help supplement dietary iodine. That makes it an aeronutrient our bodies might absorb through breathing.

Source: https://studyfinds.org/nutrients-and-vitamins-from-air/

The egg came before the chicken! Billion-year-old clue answers epic question

A cell of the ichthyosporean C. perkinsii showing distinct signs of polarity, with clear cortical localization of the nucleus before the first cleavage. Microtubules are shown in magenta, DNA in blue, and the nuclear envelope in yellow. © DudinLab

Which came first, the chicken or the egg? It’s been a puzzle that has stumped humanity for ages. However, an ancient cellular clue may have finally answered this timeless question!

A team in Switzerland says that long before chickens clucked or embryos developed, a microscopic marine creature was rehearsing the intricate dance of cellular division. This served as a billion-year-old preview of life’s most fundamental magic.

Specifically, scientists at the University of Geneva discovered something extraordinary in Chromosphaera perkinsii, a single-celled organism that seems to preview animal embryonic development. It turns out that the genetic machinery for creating eggs — the fundamental starting point of complex life — existed over a billion years before animals emerged.

“It’s fascinating, a species discovered very recently allows us to go back in time more than a billion years,” says Marine Olivetta, the study’s first author, in a university release.

In other words, the “egg” came before the “chicken” — but not in the way you might think. The cellular processes that allow an egg to develop into a complex organism were already developing in simple, single-celled life forms. This tiny organism shows that the blueprint for creating life — the ability to divide, specialize, and develop — predates animals by hundreds of millions of years. This research is published in the journal Nature.

The organism undergoes a process called palintomy — synchronized cell divisions without growth — creating multicellular colonies that bear a striking resemblance to early embryonic stages. These colonies persist for about a third of the organism’s life cycle and contain at least two distinct cell types, an unprecedented complexity for a single-celled creature.

Intriguingly, when C. perkinsii reaches its maximum size, it divides into three types of free-living cells: flagellates, amoeboflagellates, and dividing cells. Like a microscopic dress rehearsal for animal life, these cells activate different genes in successive waves, mimicking early embryonic development.

“Although C. perkinsii is a unicellular species, this behavior shows that multicellular coordination and differentiation processes are already present in the species, well before the first animals appeared on Earth,” explains lead researcher Omaya Dudin.

The discovery doesn’t just solve this age-old scientific puzzle — it challenges our understanding of life’s complexity. It also suggests that the genetic tools for creating sophisticated organisms existed far earlier than previously thought, waiting in the wings of evolutionary history.

Who knew the secret to understanding life’s grand performance was hiding in a tiny marine organism, patiently waiting to tell its story?

Source: https://studyfinds.org/egg-came-before-chicken/

How aging men can maintain a satisfying love life, according to a doctor

An older couple in bed (© pikselstock – stock.adobe.com)

For men, sex in their 40s or 60s is different from their 20s. However, it can still be healthy and enjoyable. Sex isn’t just for young men. Seniors can enjoy sex into their 80s and beyond. Moreover, it’s good for their physical health and self-esteem.

Although sex can be healthy for adults of all ages, there are some changes to take note of as men get older:

  • Lower sex drive
  • Erection changes
  • Ejaculation changes (premature or delayed)
  • Discomfort or pain
  • Body, hair, and genital changes
  • Less stamina or strength
  • Depression or stress
  • Lower fertility
  • Fatigue
  • Changes in your partner’s ability or sexual desire

With these challenges in mind, working with your body is key for ongoing and fulfilling sexual enjoyment.

What health problems can disrupt sexual ability?
Age-related changes, long-term health conditions, and drugs can affect you sexually. Blood pressure drugs, antidepressants, antihistamines, and acid-blocking drugs can also affect sexual functioning. So can heart disease, diabetes, cancer, and prostate problems.

However, these don’t have to end sexual functioning. There are different ways to be intimate. Start by talking with your primary healthcare provider. Often, medication dosages can be modified, or different drugs that cause fewer side effects can be substituted.

Arthritis
Different sexual positions may relieve pain or discomfort during intimacy. Try using heat to lessen joint pain before or after sex. Sexual partners dealing with arthritis should focus on what works rather than what doesn’t work.

Heart disease
Following a heart attack or heart disease diagnosis, talk with your healthcare provider about your concerns regarding sexual activity and how to engage in intimacy safely.

Emotional issues
Feelings affect sex at any age. Being older can actually work in your favor. There may be fewer distractions, more privacy, more time, and fewer concerns about pregnancy. Many older couples say their sex life is the best it’s ever been.

Other couples feel stressed by health conditions, money troubles, or other lifestyle changes. If either of you feel depressed, consult your healthcare provider.

Sex tips for seniors
Talk with your partner. Talking about sex is difficult for many people. Being vulnerable can be uncomfortable. Remember, however, that your partner is probably feeling vulnerable, too. You need to discuss your and your partner’s needs, wants, and worries. If necessary, include a sex therapist in your discussions.

Talk with your healthcare provider. By age 65, you’ll probably be seeing your provider about every six months. They manage your chronic health conditions and medications. Erection problems may be your earliest sign of heart disease. Your doctor can check your testosterone level. Tell them about any smoking, alcohol misuse, or illicit drug use.

Change your routine. Try sex in the morning, when you are fully rested, and testosterone is usually at its peak.

Expand how you define sex. Intercourse is not the only way to have sex. Oral contact and touching in intimate ways can be satisfying too.

Consult a sex therapist. Your healthcare professional can provide a referral. A therapist can educate you about sexuality, suggest new behaviors, recommend reading material and devices, and address your personal concerns. If your partner refuses to see a sex therapist, going by yourself can still be enlightening.

Laugh together. A sense of humor, especially about the foibles of senior sex, can ease the counterproductive stress that can inhibit functioning.

Reignite romance. Find a way to romance your partner. There are plenty of books with ideas. If you lose your partner, begin socializing. People never lose their need for emotional closeness and intimacy. If you have a new partner, use a condom. Sexually transmitted infections have skyrocketed in older men and women.

Source: https://studyfinds.org/maintain-satisfying-sex-men-age/

Add 11 years to your life? Science says it’s as simple as a daily walk

(© RawPixel.com – stock.adobe.com)

In what might be the best return on investment since Bitcoin’s early days, scientists have discovered that every hour of walking could yield up to six hours of additional life. Unlike cryptocurrency, however, this investment is guaranteed by the laws of human biology. An exciting modeling study reveals that if every American over the age of 40 was as physically active as the most active quarter of the population, they could expect to live an extra five years on average.

While scientists have long known that physical inactivity increases the risk of diseases like heart disease and stroke, this study is the first to quantify exactly how many years of life Americans might be losing due to insufficient physical activity. The findings suggest that the impact of physical inactivity on life expectancy may be substantially larger than previously estimated.

The study, led by researchers from Griffith University in Australia and various institutions worldwide, challenges previous estimates of physical activity’s benefits, which were largely based on self-reported data. By using more accurate device-based measurements, the researchers found that the relationship between physical activity and mortality is about twice as strong as earlier studies suggested.

Consider this: The most active 25% of Americans over age 40 engage in physical activity equivalent to about 160 minutes of normal-paced walking (at 3 miles per hour) every day. If all Americans over 40 matched this level of activity, it would boost the national life expectancy at birth from 78.6 years to nearly 84 years.

For the least active quarter of the population to match the most active group, they would need to add about 111 minutes of daily walking (or equivalent activity) to their routine. While this might sound challenging, the potential reward is significant: nearly 11 additional years of life expectancy.

To put this in perspective, that’s roughly equivalent to eliminating half the life expectancy gap between the U.S. and countries with the highest life expectancy globally.

Not All Talk: How Scientists Walked The Walk
Study authors analyzed data from the National Health and Nutritional Examination Survey (NHANES), focusing on Americans aged 40 and older who wore activity monitors for at least four days. Unlike previous studies that relied on participants’ memory and honesty about their activity levels, these monitors provided objective measurements of every movement throughout the day.

The results showed a striking “diminishing returns” effect in the relationship between physical activity and longevity. The greatest benefits were seen among the least active individuals: moving from the lowest activity quarter to the second-lowest required just 28.5 minutes of additional walking per day but could add 6.3 years to life expectancy. That means every hour of walking for this group translated to an extra 6.3 hours of life — an impressive return on investment.
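As a rough back-of-the-envelope check on that figure, here is a small Python sketch; it is our own illustration, not the study’s method, and the 50-year horizon and variable names are assumptions made purely for the arithmetic.

```python
# Rough back-of-the-envelope check of the "hours of life per hour walked" figure.
# Assumption (not from the study): the extra walking is kept up daily for about
# 50 years, roughly from age 40 onward.

extra_walk_min_per_day = 28.5   # extra daily walking, lowest -> second-lowest quartile
life_gain_years = 6.3           # reported life expectancy gain for that shift
years_of_walking = 50           # assumed number of years the habit is maintained

hours_walked = extra_walk_min_per_day / 60 * 365.25 * years_of_walking
hours_gained = life_gain_years * 365.25 * 24

print(f"Extra walking:  {hours_walked:,.0f} hours")
print(f"Life gained:    {hours_gained:,.0f} hours")
print(f"Hours of life per hour walked: {hours_gained / hours_walked:.1f}")
# Prints roughly 6.4, in line with the article's ~6.3 hours of life per hour of walking.
```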

As people became more active, the additional benefits per hour of activity decreased but remained significant. For those in the second-lowest activity quarter, reaching the activity level of the most active group would require about 83 additional minutes of walking per day and could add 4.6 years to their life expectancy.

Never Too Late

These findings, published in the British Journal of Sports Medicine, have important implications for public health policy and urban planning. Creating walkable neighborhoods, maintaining safe parks and green spaces, and designing cities that encourage active transportation could help populations achieve these higher activity levels naturally. The researchers emphasize that increasing physical activity at the population level requires a comprehensive approach that considers social determinants and addresses inequalities in access to activity-promoting environments.

The study also highlighted significant disparities in physical activity levels across socioeconomic groups. In 2020, only 16.2% of men and 9.9% of women in the lowest income group met the guidelines for aerobic and muscle-strengthening activities, compared to 32.4% and 25.9% in the highest-income group, respectively. This suggests that initiatives to promote physical activity could help reduce health inequalities.

Source: https://studyfinds.org/add-11-years-to-your-life/

Yes, the universe began with the Big Bang – Here’s how scientists know for sure

NASA’s Goddard Space Flight Center/CI Lab

How did everything begin? It’s a question that humans have pondered for thousands of years. Over the last century or so, science has homed in on an answer: the Big Bang.

This describes how the Universe was born in a cataclysmic explosion almost 14 billion years ago. In a tiny fraction of a second, the observable universe grew by the equivalent of a bacterium expanding to the size of the Milky Way. The early universe was extraordinarily hot and extremely dense. But how do we know this happened?

Let’s look first at the evidence. In 1929, the American astronomer Edwin Hubble discovered that distant galaxies are moving away from each other, leading to the realization that the universe is expanding. If we were to wind the clock back to the birth of the cosmos, the expansion would reverse and the galaxies would fall on top of each other 14 billion years ago. This age agrees nicely with the ages of the oldest astronomical objects we observe.

The idea was initially met with skepticism – and it was actually a skeptic, the English astronomer Fred Hoyle, who coined the name. Hoyle sarcastically dismissed the hypothesis as a “Big Bang” during an interview with BBC radio on March 28, 1949.

Then, in 1964, Arno Penzias and Robert Wilson detected a particular type of radiation that fills all of space. This became known as the cosmic microwave background (CMB) radiation. It is a kind of afterglow of the Big Bang explosion, released when the cosmos was a mere 380,000 years old.

The CMB provides a window into the hot, dense conditions at the beginning of the universe. Penzias and Wilson were awarded the 1978 Nobel Prize in Physics for their discovery.

More recently, experiments at particle accelerators like the Large Hadron Collider (LHC) have shed light on conditions even closer to the time of the Big Bang. Our understanding of physics at these high energies suggests that, in the very first moments after the Big Bang, the four fundamental forces of physics that exist today were initially combined in a single force.

The present day four forces are gravity, electromagnetism, the strong nuclear force and the weak nuclear force. As the universe expanded and cooled down, a series of dramatic changes, called phase transitions (like the boiling or freezing of water), separated these forces.

Experiments at particle accelerators suggest that a few billionths of a second after the Big Bang, the latest of these phase transitions took place. This was the breakdown of electroweak unification, when electromagnetism and the weak nuclear force ceased to be combined. This is when all the matter in the Universe assumed its mass.

Source: https://studyfinds.org/universe-began-big-bang/

Breakthrough stem cell surgery restores vision among several human patients

(Photo of an eye by Vanessa Bumbeers on Unsplash.com)

In a remarkable medical breakthrough, doctors have successfully used stem cells to treat a debilitating eye condition that can lead to vision loss. The world-first procedure, which involves transplanting lab-grown corneal cells derived from human stem cells, has the potential to restore sight for those suffering from a condition called limbal stem cell deficiency.

LSCD is a devastating disorder that occurs when the stem cells responsible for maintaining the cornea’s outer layer are damaged or depleted. This can lead to the growth of fibrous tissue over the cornea, clouding vision and causing pain, inflammation, and even blindness. Until now, treatments have been limited, often involving complex surgeries or risky immunosuppressant drugs.

However, the new research published in the medical journal The Lancet shows remarkable success with a novel approach using induced pluripotent stem cells (iPSCs) — adult cells that have been reprogrammed to behave like embryonic stem cells. Researchers in Japan were able to generate corneal epithelial cell sheets from iPSCs and successfully transplant them into the eyes of four patients with LSCD.

These patients — three men and one woman — ranged in age from 39 to 72. All had been diagnosed with LSCD stemming from various causes, including chemical burns, immune disorders, and a rare skin condition. After undergoing a procedure to remove the clouded corneal tissue, the research team carefully transplanted the lab-grown stem cell-derived corneal cell sheets onto the patients’ eyes.

Slit-lamp microscopy images of the treated eyes (Credit: The Lancet)

Remarkably, the transplanted cells were able to successfully integrate and restore the corneal surface in all four patients, with no serious side-effects reported over a two-year follow-up period. Three of the patients experienced significant improvements in visual acuity, corneal clarity, and overall eye health. Even the fourth patient, who had the most severe condition, showed some improvement initially, though this was not sustained long-term.

The researchers hypothesize that the transplanted cells either directly regenerate the corneal epithelium or prompt the patient’s own conjunctival cells to take on a corneal-like function, a process called “conjunctival transdifferentiation.” Further research will still be necessary to fully understand the underlying mechanisms of this vision-saving process.

Source : https://studyfinds.org/stem-cell-surgery-restores-vision

12,000-year-old discovery forces experts to spin new story about wheel’s origins

(Credit: Marijana Batinic/Shutterstock)

While most of us learned that the wheel was invented around 3500 BCE for transportation, a groundbreaking discovery in Israel suggests we need to roll back our understanding of rotational technology by several thousand years. Researchers have unearthed over 100 perforated stone discs from a 12,000-year-old village that may represent humanity’s first experiments with wheel-like objects – not for moving carts or chariots, but for spinning thread.

The archaeological site of Nahal Ein-Gev II, located near the Sea of Galilee in Israel, has yielded an extraordinary collection of 113 limestone pebbles, each carefully drilled through the center. While such perforated stones are not uncommon in ancient sites, this collection is special because of its age, quantity, and the careful way the holes were made. These weren’t just random rocks with holes – they appeared to be carefully selected and modified tools that served a specific purpose.

Think of them as prehistoric fidget spinners but with a practical application. The researchers believe these perforated stones served as spindle whorls – weighted discs that, when attached to a wooden stick, helped transform plant or animal fibers into thread through spinning. It’s similar to how a modern spinning wheel works, just more primitive and portable.

The research team, led by Talia Yashuv and Leore Grosman from the Hebrew University of Jerusalem, used cutting-edge 3D scanning technology to analyze these ancient artifacts in unprecedented detail. They discovered that despite their seemingly simple appearance, these tools showed remarkable sophistication in their design and creation.

The stones weren’t just randomly selected – most were made from soft limestone, weighed between 1 and 34 grams (with most falling between 2 and 15 grams), and had holes drilled precisely through their centers. This central positioning was crucial for the spinning process to work effectively, much like how a modern fidget spinner needs perfect balance to rotate smoothly.

What’s particularly fascinating is how these holes were created. In 95% of the stones, the holes were drilled from both sides to meet in the middle – a more complex but more effective technique than drilling straight through. This bi-directional drilling created a distinctive hourglass-shaped hole that, as experimental archaeology would later prove, actually helped secure the wooden spindle in place.

Spinning methods: (a) manual thigh-spinning; (b) spindle-and-whorl “supported spinning”; (c) “drop spinning”; (d) the experimental spindles and whorls, with 3D scans of the pebbles and their negative perforations. The bottom pictures show Yonit Kristal experimenting with spinning fibers using replicas of the perforated pebbles, in both supported and drop spinning techniques (photographed by Talia Yashuv). (Credit: Yashuv & Grosman, 2024, PLOS ONE, CC-BY 4.0)

To test their theory about these objects being spindle whorls, the researchers created replicas and enlisted the help of a traditional craft expert, Yonit Kristal. Using these reconstructed tools, they successfully spun both wool and flax into thread, though flax proved more effective. The experiments showed that while these ancient tools weren’t as efficient as modern spinning wheels, they represented a significant technological advancement over hand-spinning techniques.

This study, published in PLOS ONE, challenges our understanding of when humans first began experimenting with rotational technology. While the wheel-and-axle system is commonly associated with transportation in the Bronze Age (around 5,000 years ago), these spindle whorls show that humans were already manipulating rotational motion for practical purposes thousands of years earlier.

Source : https://studyfinds.org/12000-year-old-discovery-wheels

Frozen sabre-toothed kitten reveals ‘significant differences’ with modern lion cub

The frozen sabre-toothed cub. Pic: A V Lopatin/Scientific Reports

The frozen remains of a sabre-toothed cat thought to be about 31,800 years old have been studied for the first time in history, according to a study.

The cub’s mummified remains, including its head, front arms and paws, and part of its chest, were found well-preserved in Arctic permafrost on the banks of the Badyarikha River in Yakutia, in Russia’s Siberia region, in 2020.

“Findings of frozen mummified remains of the Late Pleistocene mammals are very rare,” the researchers explained, referring to the period in which it lived.

They added: “For the first time in the history of palaeontology, the appearance of an extinct mammal that has no analogues in the modern fauna has been studied.”

A modern day lion cub. Pic: A V Lopatin/Scientific Reports

When compared to the remains of a modern lion cub of a similar age, there were “significant differences”, said the experts.

The kitten, which was about three weeks old, has wider paws, their width almost equal to their length.

It also lacks carpal pads (shock absorbers), which is thought to be an adaptation to low temperatures and walking in snow.

‘Large mouth, small ears and massive neck’

The prehistoric animal also has a “large mouth opening”, small ears and a “very massive neck region” along with elongated forelimbs.

Pics A, B and C are of the prehistoric animal. Pic D shows the modern lion cub, including 1 – the first digital pad, and 2 – the carpal pad. Pic: A V Lopatin/Scientific Reports

Its neck is “longer and more than twice as thick” as the modern cub’s, and the mouth opening is about 11% to 19% bigger.

“The difference in (neck) thickness is explained by the large volume of muscles, which is visually observed at the site of separation of the skin from the mummified flesh,” said the study, which was carried out by A V Lopatin of the Russian Academy of Sciences in Moscow and colleagues.

Source : https://news.sky.com/story/frozen-sabre-toothed-kitten-reveals-significant-differences-with-modern-lion-cub-13254961

New Therapeutic Vaccine Gives Hope Against Super-Aggressive Triple-Negative Breast Cancer

A new therapeutic vaccine offers fresh hope for women battling super-aggressive triple-negative breast cancer. The vaccine teaches patients’ immune systems to identify and attack cancer cells. Sixteen of the 18 patients in the trial remained cancer-free three years after receiving the vaccine. The researchers from Washington University School of Medicine now want larger clinical trials to prove the vaccine’s effectiveness.

Triple-negative breast cancer is characterized by cancer cells that lack or have low levels of estrogen receptor, progesterone receptor, and human epidermal growth factor receptor 2

An experimental vaccine could be the best hope for women battling an aggressive and hard-to-treat breast cancer, a new study has revealed. The shots, according to experts, appear safe and highly effective against triple-negative breast cancer – a kind that cannot be treated with hormone therapy.

Triple-negative breast cancer is characterized by cancer cells that lack or have low levels of estrogen receptor, progesterone receptor, and human epidermal growth factor receptor 2.

The new vaccine trains the immune system to kill remaining cancer cells

However, in the latest trial, 16 of the 18 patients remained cancer-free three years after receiving the vaccine, which trained their immune systems to destroy any remaining cancer cells, according to results published Nov. 13 in the journal Genome Medicine.

For comparison, with traditional surgery-based treatment for this breast cancer, only about half of patients usually remain cancer-free after three years, according to historical data. “These results were better than we expected,” said Dr. William Gillanders, senior researcher and professor of surgery at Washington University School of Medicine in St. Louis.

How was the trial conducted? 

An early clinical trial for the vaccine was conducted with 18 patients with triple-negative breast cancer that had not spread elsewhere in the body, the scientists said. Around 10 to 15 percent of breast cancers that occur in the United States are triple-negative, according to the National Breast Cancer Foundation.

To date, triple-negative breast cancer has no targeted therapies. It must be treated with traditional approaches like surgery, chemotherapy, and radiation therapy, researchers said in background notes.

Source: https://www.timesnownews.com/health/new-therapeutic-vaccine-gives-hope-against-super-aggressive-triple-negative-breast-cancer-article-115319099

Scientists discover life in the most uninhabitable place on Earth

(Credit: PositiveTravelArt/Shutterstock)

In a remarkable discovery, researchers have found evidence of living microbes thriving in one of the most inhospitable environments on Earth – the Atacama Desert of Chile. This vast, arid landscape is often described as the driest place on the planet, making it seemingly impossible for any life to exist. Yet, the new study reveals a diverse microbial community actively colonizing this extreme wasteland.

The key to this breakthrough was a novel technique developed by an international team of scientists led by geomicrobiologist Dirk Wagner, Ph.D., from the GFZ German Research Centre for Geosciences. Their method allows researchers to separate the genetic material of living microbes from the fragments of dead cells, providing a clearer picture of the active microbial community.

“Microbes are the pioneers colonizing this kind of environment and preparing the ground for the next succession of life,” explains Dr. Wagner in a media release.

This newfound understanding could have implications far beyond the Atacama, as similar processes may occur in other extreme environments, such as areas affected by natural disasters or even on other planets.

The researchers, who published their research in the journal Applied and Environmental Microbiology, collected soil samples from the Atacama Desert, stretching from the Pacific coast to the foothills of the Andes mountains. By using their innovative separation technique, they were able to identify a diverse array of living and potentially active microbes, including Actinobacteria and Proteobacteria, in even the most arid regions.

Study sites along the Atacama Desert, spanning a moisture gradient from Coastal Sand, Alluvial Fan, Red Sands, and Yungay to two hyperarid reference sites, Maria Elena and Lomas Bayas, together with bacterial abundances at each site based on 16S rRNA gene copy numbers and phospholipid fatty acids. (Credit: Applied and Environmental Microbiology)

Interestingly, the team found that in the shallow soil samples (less than 5 centimeters deep), Chloroflexota bacteria dominated the intracellular DNA pool – the genetic material of living cells. This suggests that these microbes are the most active members of the community, constantly replenishing the pool of genetic material.

“If a community is really active, then a constant turnover is taking place, and that means the 2 pools should be more similar to each other,” Wagner notes.

The researchers plan to further investigate the active microbial processes in the Atacama Desert through metagenomic sequencing of the intracellular DNA. This approach, they believe, will provide deeper insights into the microbes thriving in this extreme environment, paving the way for a better understanding of life’s resilience in the most inhospitable corners of our planet.

Source: https://studyfinds.org/life-most-uninhabitable-place/
