Prof: We’ve already become too reliant on AI, and it’s ruining ‘real intelligence’

Older man using a smartphone (© Prostock-studio – stock.adobe.com)

Stop Googling, start napping among 9 key habits for preserving brainpower into old age
BOCA RATON, Fla. — In an age where people constantly reach for their smartphones to look up information, a leading Canadian academic is urging the public to exercise their brains instead. Professor Mohamed I. Elmasry, an expert in microchip design and artificial intelligence (AI), believes that 9 simple daily habits like taking afternoon naps and engaging in memory “workouts” can significantly reduce the risk of age-related dementia.

In his new book, “iMind: Artificial and Real Intelligence,” Elmasry argues that we’ve become too reliant on AI at the expense of our natural, or “real” intelligence (RI). He’s calling for a return to nurturing our human minds, which he compares to smartphones but describes as far more powerful and longer-lasting with proper care.

“Your brain-mind is the highest-value asset you have, or will ever have,” Elmasry writes in a media release. “Increase its potential and longevity by caring for it early in life, keeping it and your body healthy so it can continue to develop.”

The inspiration for Elmasry’s book came from personal experience. After losing his brother-in-law to Alzheimer’s and witnessing others close to him, including his mother, suffer from various forms of dementia, he felt compelled to share his insights on brain health.

While Elmasry acknowledges that smart devices are becoming increasingly advanced, he maintains that they pale in comparison to the human brain.

“The useful life expectancy for current smartphones is around 10 years, while a healthy brain-mind inside a healthy human body can live for 100 years or longer,” Elmasry explains.

One of the key issues Elmasry highlights is our growing dependence on technology for basic information recall. He shares an anecdote about his grandchildren needing to use a search engine to name Cuba’s capital despite having just spent a week in the country. This story serves as a stark reminder of how younger generations are increasingly relying on AI smartphone apps instead of exercising their own mental faculties.

“A healthy memory goes hand-in-hand with real intelligence,” Elmasry emphasizes. “Our memory simply can’t reach its full potential without RI.”

In an age where people constantly reach for their smartphones to look up information, a leading Canadian academic is urging the public to exercise their brains instead. (© ikostudio – stock.adobe.com)

So, what can we do to keep our brains sharp and reduce the risk of cognitive decline? Elmasry offers several practical tips, from memory “workouts” to afternoon naps.

Elmasry’s book goes beyond just offering tips for brain health. It delves into the history of microchip design, machine learning, and AI, explaining how these technologies work in smartphones and other devices. He also explores how human intelligence functions and how brain activity connects to our mind and memory.

Interestingly, Elmasry draws parallels between the human mind and smartphones, comparing our brain’s “hardware,” “software,” and “apps” to those of our digital devices. However, he stresses that the human brain far surpasses current AI in terms of speed, accuracy, storage capacity, and other functions.

The book also touches on broader societal issues related to brain health. Elmasry argues that healthy aging is as crucial as climate change but receives far less attention. He calls for policymakers to implement reforms that promote cognitive well-being, such as transforming bingo halls from sedentary entertainment venues into active learning centers.

Source: https://studyfinds.org/brain-health-googling-napping/

Smell of human stress affects dogs’ emotions leading them to make more pessimistic choices

(Photo by Meruyert Gonullu from Pexels)

Dogs experience emotional contagion from the smell of human stress, leading them to make more ‘pessimistic’ choices, new research finds. The University of Bristol-led study, published in Scientific Reports on 22 July, is the first to test how human stress odours affect dogs’ learning and emotional state.

Evidence in humans suggests that the smell of a stressed person subconsciously affects the emotions and choices made by others around them. Bristol Veterinary School researchers wanted to find out whether dogs also experience changes in their learning and emotional state in response to human stress or relaxation odours.

The team used a test of ‘optimism’ or ‘pessimism’ in animals, which is based on findings that ‘optimistic’ or ‘pessimistic’ choices by people indicate positive or negative emotions, respectively.

The researchers recruited 18 dog-owner partnerships to take part in a series of trials with different human smells present. During the trials, dogs were trained that when a food bowl was placed in one location, it contained a treat, but when placed in another location, it was empty. Once a dog learned the difference between these bowl locations, they were faster to approach the location with a treat than the empty location. Researchers then tested how quickly the dog would approach new, ambiguous bowl locations positioned between the original two.

A quick approach reflected ‘optimism’ about food being present in these ambiguous locations – a marker of a positive emotional state – whilst a slow approach indicated ‘pessimism’ and negative emotion. These trials were repeated whilst each dog was exposed to either no odour or the odours of sweat and breath samples from humans in either a stressed (arithmetic test) or relaxed (listening to soundscapes) state.
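To make the ‘optimism’/‘pessimism’ measure concrete, here is a minimal sketch of how an approach-latency score might be normalized between the trained rewarded and unrewarded bowl locations. The scoring function and the numbers are illustrative assumptions for this article, not the study’s actual analysis code.

```python
# Illustrative judgment-bias ("optimism") score from approach latencies.
# Assumption: faster approaches to an ambiguous bowl location indicate a
# more optimistic expectation of food. Not the study's actual analysis.

def optimism_score(latency_ambiguous, latency_rewarded, latency_unrewarded):
    """Return 1.0 if the dog approaches the ambiguous bowl as fast as the
    rewarded location, 0.0 if as slowly as the unrewarded (empty) location."""
    span = latency_unrewarded - latency_rewarded
    score = (latency_unrewarded - latency_ambiguous) / span
    return max(0.0, min(1.0, score))  # clamp to the 0-1 range

# Hypothetical example: rewarded bowl reached in 2 s, empty bowl in 10 s,
# ambiguous middle location in 7 s -> 0.375, leaning "pessimistic".
print(optimism_score(7.0, 2.0, 10.0))
```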

Researchers discovered that the stress smell made dogs slower to approach the ambiguous bowl location nearest the trained location of the empty bowl – an effect that was not seen with the relaxed smell. These findings suggest that the stress smell may have increased the dogs’ expectations that this new location contained no food, similar to the nearby empty bowl location.

Source: https://studyfinds.org/smell-human-stress-affects-dogs/

Nearly half of all cancer deaths are preventable by making simple life changes

(Photo by Diego Indriago from Pexels)

The best way to end cancer is to stop it from forming in the first place. However, did you know your life choices may be the biggest reason you’re at risk for cancer? In fact, a new study reveals that four in 10 cancer cases are preventable. Published in CA: A Cancer Journal for Clinicians, researchers with the American Cancer Society also report that nearly half of all cancer deaths among U.S. adults 30 years and older are the result of controllable risk factors such as cigarette smoking, physical inactivity, obesity, and excessive drinking.

Out of all of the modifiable lifestyle choices, cigarette smoking was the greatest contributor to cancer.

“Despite considerable declines in smoking prevalence during the past few decades, the number of lung cancer deaths attributable to cigarette smoking in the United States is alarming. This finding underscores the importance of implementing comprehensive tobacco control policies in each state to promote smoking cessation, as well as heightened efforts to increase screening for early detection of lung cancer, when treatment could be more effective,” says Dr. Farhad Islami, the senior scientific director of cancer disparity research at the American Cancer Society, in a media release.

Methodology
Researchers collected data on rates of cancer diagnosis, cancer deaths, and risk factors to estimate the number of cases and deaths caused by modifiable risk factors (excluding non-melanoma skin cancers). The study authors looked at 30 different cancer types.

The risk factors assessed included current or former cigarette smoking, routine exposure to secondhand smoke, excess body weight, heavy alcohol drinking, eating red and processed meat, low consumption of fruits and vegetables, low dietary calcium, physical inactivity, ultraviolet radiation, and viral infections.

Key Results
Cigarette smoking caused a disproportionate amount of cancer cases, contributing to 19.3% of new diagnoses. Additionally, cigarette smoking contributed to 56% of all potentially preventable cancers in men and 39.9% of preventable cancers in women.

Obesity was the second most influential modifiable risk factor contributing to the formation of new cancers at 7.6%. This was followed by alcohol consumption, UV radiation exposure, and physical inactivity.

“Interventions to help maintain healthy body weight and diet can also substantially reduce the number of cancer cases and deaths in the country, especially given the increasing incidence of several cancer types associated with excess body weight, particularly in younger individuals,” explains Dr. Islami.

These lifestyle choices significantly increased the risk for certain types of cancers in staggering proportions. Modifiable risk factors contributed to 100% of cervical cancer and Kaposi sarcoma cases. For 19 of the 30 cancers studied, these factors played a part in over 50% of new diagnoses.

Source: https://studyfinds.org/cancer-deaths-preventable/

Scientists close to creating ‘one and done’ universal flu shot

A young child gets his annual flu shot – something scientists believe could soon be a thing of the past.

The annual flu shot could soon be a thing of the past. Scientists are working on a revolutionary formula that could mean needing just one flu shot in your lifetime – no more annual vaccinations, and no more worrying about whether this year’s shot will match the flu strains circulating around the world.

Scientists at Oregon Health & Science University (OHSU) have developed a promising approach to creating this universal influenza vaccine — one that could provide lifelong protection against the ever-changing flu virus. Their study, published in Nature Communications, tested a new vaccine platform against H5N1, a bird flu strain considered most likely to cause the next pandemic.

Here’s where it gets interesting: instead of using the current H5N1 virus, researchers vaccinated monkeys against the infamous 1918 flu virus – the same one that caused millions of deaths worldwide over a century ago. Surprisingly, this approach showed remarkable results.

“It’s exciting because in most cases, this kind of basic science research advances the science very gradually; in 20 years, it might become something,” says senior author Jonah Sacha, Ph.D., chief of the Division of Pathobiology at OHSU’s Oregon National Primate Research Center, in a media release. “This could actually become a vaccine in five years or less.”

So, How Does This New Vaccine Work?

Unlike traditional flu shots that target the virus’s outer surface – which constantly changes – this approach focuses on the virus’s internal structure. Think of it like targeting the engine of a car instead of its paint job. The internal parts of the virus don’t change much over time, providing a stable target for our immune system.

The researchers used a clever trick to deliver this vaccine. They inserted small pieces of the target flu virus into a common herpes virus called cytomegalovirus (CMV). Don’t worry – CMV is harmless for most people and often causes no symptoms at all. This modified CMV acts like a Trojan horse, sneaking into our bodies and teaching our immune system’s T cells how to recognize and fight off flu viruses.

To test their theory, the team exposed vaccinated non-human primates to the H5N1 bird flu virus. The results were impressive: six out of 11 vaccinated animals survived exposure to one of the deadliest viruses in the world today. In contrast, all unvaccinated primates succumbed to the disease.

“Should a deadly virus such as H5N1 infect a human and ignite a pandemic, we need to quickly validate and deploy a new vaccine,” says co-corresponding author Douglas Reed, Ph.D., associate professor of immunology at the University of Pittsburgh Center for Vaccine Research.

The study tested a new vaccine platform against H5N1, a bird flu strain considered most likely to cause the next pandemic. (Photo by Felipe Caparros on Shutterstock)

What makes this approach even more exciting is its potential to work against other mutating viruses, including the one that causes COVID-19.

“For viruses of pandemic potential, it’s critical to have something like this. We set out to test influenza, but we don’t know what’s going to come next,” Dr. Sacha believes.

The success of this vaccine lies in its ability to target parts of the virus that remain consistent over time.

“It worked because the interior protein of the virus was so well preserved,” Sacha continues. “So much so, that even after almost 100 years of evolution, the virus can’t change those critically important parts of itself.”

Source: https://studyfinds.org/one-and-done-flu-shot

Single drop of blood could accurately reveal your overall health

(Photo by Love the wind on Shutterstock)

Could the future of healthcare be as simple as going to your doctor for a routine checkup and giving a single drop of blood to screen you for multiple health conditions at once? This futuristic scenario may soon become reality, thanks to groundbreaking research combining infrared spectroscopy with machine learning.

A team of researchers from Germany developed a new method that can detect multiple health conditions from a single drop of blood plasma. Their study, published in Cell Reports Medicine, demonstrates how this technique could revolutionize health screening and early disease detection.

The method, called infrared molecular fingerprinting, works by shining infrared light through a blood plasma sample and measuring how different molecules in the sample absorb the light. This creates a unique “fingerprint” of the sample’s molecular composition. By applying advanced machine learning algorithms to these fingerprints, the researchers were able to detect various health conditions with impressive accuracy.
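To illustrate the machine-learning side of that description, the sketch below fits a generic classifier to absorbance “fingerprints” using a standard pipeline. The arrays are synthetic placeholders and the model choice is an assumption made for this example; the study’s actual spectra and algorithms are not reproduced here.

```python
# Toy version of "fingerprint + machine learning": classify a health
# condition from infrared absorbance spectra. Data are random placeholders,
# so the cross-validated AUC will hover near chance (~0.5); with real
# spectra it would reflect how separable the condition actually is.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 500, 300                  # 300 points per infrared spectrum
X = rng.normal(size=(n_samples, n_wavenumbers))      # stand-in absorbance "fingerprints"
y = rng.integers(0, 2, size=n_samples)               # 1 = condition present, 0 = absent

# Standardize each wavenumber, then fit a linear classifier on the fingerprints.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f}")
```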

Led by Mihaela Žigman of Ludwig Maximilian University of Munich (LMU), the research team also included scientists from the Max Planck Institute of Quantum Optics (MPQ), and Helmholtz Munich.

Scientists say a single drop of blood can accurately screen for various health conditions including diabetes and hypertension. (Photo by KinoMasterskaya on Shutterstock)

What does the test screen for?

The study analyzed over 5,000 blood samples from more than 3,000 individuals, screening for four common health conditions – dyslipidemia (abnormal cholesterol levels), hypertension (high blood pressure), prediabetes, and Type 2 diabetes – as well as overall health status. Remarkably, the technique was able to correctly identify these conditions simultaneously with high accuracy.

One of the most exciting aspects of this research is its potential for early disease detection. The method was able to predict which individuals would develop metabolic syndrome – a cluster of conditions that increase the risk of heart disease, stroke, and diabetes – up to 6.5 years before onset. This could allow for earlier interventions and potentially prevent or delay the development of serious health problems.

The approach offers a cost-effective, efficient way to screen for multiple health conditions with a single blood test. It could potentially transform how we approach preventive healthcare and disease management.

The technique also showed promise in estimating levels of various clinical markers typically measured in standard blood tests, such as cholesterol, glucose, and triglycerides. This suggests that infrared fingerprinting could potentially replace multiple conventional blood tests with a single, more comprehensive analysis.

Perhaps most intriguingly, the method was able to detect subtle differences between healthy individuals and those with early-stage or pre-disease conditions. For example, it could distinguish between people with normal blood sugar levels and those with prediabetes, a condition that often goes undiagnosed but significantly increases the risk of developing type 2 diabetes.


When will the blood test be available?

The implications of this research are far-reaching. If implemented in clinical practice, this technique could make health screening more accessible and comprehensive. It could enable doctors to catch potential health problems earlier, when they’re often easier to treat or manage. For patients, it could mean fewer blood draws and a more holistic view of their health status from a single test.

The researchers believe this study lays the groundwork for infrared molecular fingerprinting to become a routine part of health screening. As they continue to refine the system and expand its capabilities, they hope to add even more health conditions and their combinations to the diagnostic repertoire. This could lead to personalized health monitoring, where individuals regularly check their health status and catch potential issues long before they become serious.

However, study authors caution that more work is needed before this method can be widely adopted in clinical settings. The current study was conducted on a specific population in southern Germany, and further research is needed to confirm its effectiveness across diverse populations.

Nevertheless, this study represents a significant step forward in the field of medical diagnostics. As we move towards more personalized and preventive healthcare, tools like infrared molecular fingerprinting could play a crucial role in keeping us healthier for longer.

Source: https://studyfinds.org/single-drop-of-blood-test-screens-health

Why are some people happy when they are dying?

(Credit: anatoliy_gleb/Shutterstock)

Simon Boas, who wrote a candid account of living with cancer, passed away on July 15 at the age of 47. In a recent BBC interview, the former aid worker told the reporter: “My pain is under control and I’m terribly happy – it sounds weird to say, but I’m as happy as I’ve ever been in my life.”

It may seem odd that a person could be happy as the end draws near, but in my experience as a clinical psychologist working with people at the end of their lives, it’s not that uncommon.

There is quite a lot of research suggesting that fear of death is at the unconscious center of being human. William James, an American philosopher, called the knowledge that we must die “the worm at the core” of the human condition.

But a study in Psychological Science shows that people nearing death use more positive language to describe their experience than those who just imagine death. This suggests that the experience of dying is more pleasant – or, at least, less unpleasant – than we might picture it.

In the BBC interview, Boas shared some of the insights that helped him come to accept his situation. He mentioned the importance of enjoying life and prioritizing meaningful experiences, suggesting that acknowledging death can enhance our appreciation for life.

Despite the pain and difficulties, Boas seemed cheerful, hoping his attitude would support his wife and parents during the difficult times ahead.

Boas’s words echo the Roman philosopher Seneca, who advised: “To have lived long enough depends neither upon our years nor upon our days, but upon our minds.”

A more recent thinker expressing similar sentiments is the psychiatrist Viktor Frankl who, after surviving Auschwitz, wrote Man’s Search for Meaning (1946), in which he laid the groundwork for a form of existential psychotherapy focused on discovering meaning in any kind of circumstance. Its most recent adaptation is meaning-centered psychotherapy, which offers people with cancer a way to improve their sense of meaning.

How happiness and meaning relate
In two recent studies, in Palliative and Supportive Care and the American Journal of Hospice and Palliative Care, people approaching death were asked what constitutes happiness for them. Common themes in both studies were social connections, enjoying simple pleasures such as being in nature, having a positive mindset, and a general shift in focus from seeking pleasure to finding meaning and fulfillment as their illness progressed.

In my work as a clinical psychologist, I sometimes meet people who have – or eventually arrive at – a similar outlook on life as Boas. One person especially comes to mind – let’s call him Johan.

The first time I met Johan, he came to the clinic by himself, with a slight limp. We talked about life, about interests, relationships and meaning. Johan appeared to be lucid, clear and articulate.

The second time, he came with crutches. One foot had begun to lag and he couldn’t trust his balance. He said it was frustrating to lose control of his foot, but still hoped to cycle around Mont Blanc.

When I asked him what his concerns were, he burst into tears. He said: “That I won’t get to celebrate my birthday next month.” We sat quietly for a while and took in the situation. It wasn’t the moment of death itself that weighed on him the most, it was all the things he wouldn’t be able to do again.

Source: https://studyfinds.org/happy-when-dying/

Study finds most lung cancer patients in India have never smoked in their life; so what is the cause?

Public awareness campaigns about lung cancer symptoms and risk factors are essential for early detection and prevention

Discover the alarming rise of lung cancer among non-smokers in India (Source: Pexels)

The landscape of lung cancer in India is undergoing a dramatic shift. Traditionally linked to smoking, the disease is increasingly affecting individuals with no history of tobacco use.

A recent narrative review, published in The Lancet, unveiled a startling finding: a substantial portion of lung cancer patients, particularly women, are non-smokers. The trend is alarming because it reflects a complex interplay of factors contributing to lung cancer development in the country, and it points to a need to re-evaluate risk factors and prevention strategies beyond tobacco control.

Dr Vikas Mittal, pulmonologist at CK Birla Hospital, Delhi, emphasises the role of environmental factors, particularly air pollution. Exposure to particulate matter (PM2.5) is a significant contributor to lung cancer in non-smokers. The prevalence of tuberculosis, another public health challenge in India, can also exacerbate lung damage and increase the risk.

Passive smoking, occupational exposures, and genetic predisposition further contribute to the disease burden, explained Dr Neeraj Goel, Director of Oncology Services at CK Birla Hospital, Delhi. He underscores the importance of early detection through regular health check-ups and awareness about lung cancer symptoms.

The implications of these findings are profound. India’s battle against lung cancer requires a comprehensive approach. Reducing air pollution through stricter regulations, promoting clean energy sources, and improving public transportation are crucial steps. Strengthening tuberculosis control programs and investing in research to understand the genetic factors involved are equally important.

Moreover, public awareness campaigns about lung cancer symptoms and risk factors are essential for early detection and prevention. Encouraging healthy lifestyles, including smoking cessation and avoiding exposure to air pollution, can significantly reduce the risk.

Here are some warning signs of lung cancer you should watch out for, Dr Mittal advised.

What are the warning symptoms and signs of lung cancer?

The warning symptoms and signs of lung cancer include:

* Persistent Cough: A long-standing cough that does not go away.
* Blood in Sputum: Presence of blood in the spit.
* Breathing Difficulty: Trouble breathing or shortness of breath.
* Hoarseness of Voice: Changes in the voice, such as becoming hoarse.
* Chest Pain: Pain in the chest that may worsen with deep breathing, coughing, or laughing.
* Loss of Appetite and Weight Loss: Unexplained loss of appetite and significant weight loss.

Experimental drug extends the lifespan of ‘middle-aged’ mice by 25% – and could work on humans too, scientists say

Mice injected with the antibody anti-IL-11 lived longer and suffered from fewer diseases caused by fibrosis, chronic inflammation and poor metabolism – which are the hallmarks of ageing.

An experimental drug that extends the lifespan of mice by 25% could also work in humans, according to the scientist who ran the trials.

The treatment – an injection of an antibody called anti-IL-11 that was given to the mice when they were ‘middle-aged’ – reduced deaths from cancer.

It also lowered the incidence of diseases caused by fibrosis, chronic inflammation and poor metabolism, which are the hallmarks of ageing.

Professor Stuart Cook, a senior scientist on the study, said: “These findings are very exciting.

“While these findings are only in mice, it raises the tantalising possibility that the drugs could have a similar effect in elderly humans.

“The treated mice had fewer cancers, and were free from the usual signs of ageing and frailty, but we also saw reduced muscle wasting and improvement in muscle strength.

“In other words, the old mice receiving anti-IL-11 were healthier.”

Videos released by the scientists show untreated mice had greying patches on their fur, with hair loss and weight gain.

But those receiving the injection had glossy coats and were more active.

The two female mice – one of which has received the antibody injection. (Pic: PA)

The researchers, from the Medical Research Council Laboratory of Medical Science (MRC LMS), Imperial College London and Duke-NUS Medical School in Singapore, gave the mice the antibody injection when they were 75 weeks old – equivalent to a human age of 55 years.

The mice went on to live to an average of 155 weeks, 35 weeks longer than mice who were not treated, according to results published in the journal Nature.

The drug appeared to have very few side effects.

“Previously proposed life-extending drugs and treatments have either had poor side-effect profiles, or don’t work in both sexes, or could extend life, but not healthy life – however this does not appear to be the case for IL-11,” Professor Cook said.

The antibody blocked the action of the IL-11 protein, which is thought to play a role in the ageing of cells and body tissues – in humans as well as mice.

Source: https://news.sky.com/story/experimental-drug-extends-the-lifespan-of-middle-aged-mice-by-25-and-could-work-on-humans-too-scientists-say-13179979

Is pooping every day necessary? Timing of bowel movements has surprising links to health

(© nito – stock.adobe.com)

We all do it, but how often should we? A groundbreaking study from the Institute for Systems Biology (ISB) has uncovered fascinating links between how frequently we poop and our long-term health. It turns out that your bathroom habits might be more important than you think!

The research team, led by Johannes Johnson-Martinez, examined over 1,400 healthy adults, analyzing everything from their gut microbes to blood chemistry. Their findings, published in Cell Reports Medicine, shed new light on the complex relationship between our bowel movements and overall well-being.

Interestingly, age, sex, and body mass index (BMI) all affected how often people visited the bathroom. Younger individuals, women, and those with lower BMIs tended to have less frequent bowel movements.

So, why does it matter?
“Prior research has shown how bowel movement frequency can have a big impact on gut ecosystem function. Specifically, if stool sticks around too long in the gut, microbes use up all of the available dietary fiber, which they ferment into beneficial short-chain fatty acids. After that, the ecosystem switches to fermentation of proteins, which produces several toxins that can make their way into the bloodstream,” Johnson-Martinez explains in a media release.

In other words, when you’re constipated, your gut bacteria run out of their preferred food (fiber) and start breaking down proteins instead. This process creates potentially harmful substances that can enter your bloodstream and affect other organs.

The study revealed a “Goldilocks zone” for optimal gut health – pooping 1-2 times per day. In this sweet spot, beneficial fiber-fermenting bacteria thrived. However, those with constipation or diarrhea showed higher levels of less desirable bacteria associated with protein fermentation or upper digestive tract issues.

However, this is not just about gut bugs. The researchers found that bowel movement frequency (BMF) has a link to various blood markers and even potential chronic disease risks. For instance, people with constipation had higher levels of substances like p-cresol-sulfate and indoxyl-sulfate in their blood. These compounds, produced by gut bacteria breaking down proteins, are known to be harmful to kidneys.

“Here, in a generally healthy population, we show that constipation, in particular, is associated with blood levels of microbially derived toxins known to cause organ damage, prior to any disease diagnosis,” says Dr. Sean Gibbons, the study’s corresponding author.

The research also hinted at connections between bowel habits and mental health, suggesting that how often you poop might be related to anxiety and depression.

So, what can you do to hit that bathroom sweet spot? Unsurprisingly, the study found that a fiber-rich diet, staying well-hydrated, and regular exercise were associated with healthier bowel movement patterns.

“Overall, this study shows how bowel movement frequency can influence all body systems, and how aberrant bowel movement frequency may be an important risk factor in the development of chronic diseases. These insights could inform strategies for managing bowel movement frequency, even in healthy populations, to optimize health and wellness,” Dr. Gibbons concludes.

While more research is necessary to fully understand these connections, this study highlights the importance of paying attention to your bathroom habits. They might just be a window into your overall health!

Source: https://studyfinds.org/pooping-every-day-necessary/

Parents ‘skipping meals’ and children ‘going without essentials’ – UNICEF UK calls for urgent help

UNICEF UK calls on the new government to scrap the two-child benefit cap as it warns of severe pressure on parents of young children.

File pic: iStock

Mounting debt and expensive childcare are putting children at risk, UNICEF UK has warned, as the charity claims 87% of parents of children under five worry about their future.

Based on findings from its annual survey, the charity said parents are not getting the support they need – particularly in lower-income households – and called on the government to do more.

One respondent to the survey said educational toys and books are too expensive for them, while they can’t afford days out and can just about buy second-hand clothes.

Joanna Rea, the charity’s director of advocacy, said support for parents must become an “urgent national priority”.

“This is the moment to start making the UK one of the best places to raise a child and reverse the years of underinvestment and austerity which contributed to the UK having the highest increase in child poverty of any rich country,” she said.

“With a quarter of parents borrowing money to pay for the essentials for their children – supporting them must be an urgent national priority for the new government.”

Other findings show:

• 38% dread the holidays because of the financial strain they put on the family;
• 25% have had to borrow money or gone into debt to make ends meet;
• 66% said the cost of living crisis had negatively impacted their family;
• 63% report struggling with their mental health while being a parent;
• 62% said childcare is one of the biggest challenges facing parents.

The charity is calling for the new government to introduce what it calls a “National Baby and Toddler Guarantee” to ensure children under five get the support and services they need.

But as a “matter of urgency”, UNICEF UK recommends ending the two-child limit and removing the benefits cap.

The two-child cap, introduced by the Conservatives in 2017, prevents parents claiming Universal Credit or child tax credits for a third child, except in very limited circumstances.

Source: https://news.sky.com/story/parents-skipping-meals-and-children-going-without-essentials-unicef-uk-calls-for-urgent-help-13178265

Why cats meow at humans more than each other

A cat closes its eyes and meows. (Credit: Amir Ghoorchiani from Pexels)

This is a story that goes back thousands of years.

Originally, cats were solitary creatures. This means they preferred to live and hunt alone rather than in groups. Most of their social behavior was restricted to mother-kitten interactions. Outside of this relationship, cats rarely meow at each other.

However, as cats began to live alongside humans, these vocalizations took on new meanings. In many ways, when a cat meows at us, it’s as if they see us as their caregivers, much like their feline mothers.

Cats probably first encountered humans roughly 10,000 years ago, when people began establishing permanent settlements. These settlements attracted rodents, which in turn drew cats looking for prey. The less fearful and more adaptable cats thrived, benefiting from a consistent food supply. Over time, these cats developed closer bonds with humans.

Unlike dogs, which were bred by humans for specific traits, cats essentially domesticated themselves. Those that could tolerate and communicate with humans had a survival advantage, leading to a population well-suited to living alongside people.

To understand this process, we can look at the Russian farm-fox experiments. Beginning in the 1950s, Soviet scientist Dmitry Belyaev and his team selectively bred silver foxes, mating those that were less fearful and aggressive toward humans.

Over generations, these foxes became more docile and friendly, developing physical traits similar to domesticated dogs, such as floppy ears and curly tails. Their vocalizations changed too, shifting from aggressive “coughs” and “snorts” to more friendly “cackles” and “pants,” reminiscent of human laughter.

These experiments demonstrated that selective breeding for tameness could lead to a range of behavioral and physical changes in animals, achieving in a few decades what would usually take thousands of years. Though less obvious than the differences between dogs and the ancestral wolf, cats have also changed since their days as African wildcats. They now have smaller brains and more varied coat colors, traits common among many domestic species.

Cats’ vocal adaptations
Like the silver foxes, cats have adapted their vocalizations, albeit over a much longer period of time. Human babies are altricial at birth, meaning they are entirely dependent on their parents. This dependency has made us particularly attuned to distress calls – ignoring them would be costly for human survival.

Cats have altered their vocalizations to tap into this sensitivity. A 2009 study by animal behavior researcher Karen McComb and her team gives evidence of this adaptation. Participants in the study listened to two types of purrs. One type was recorded when cats were seeking food (solicitation purr) and another recorded when they were not (non-solicitation purr). Both cat owners and non-cat owners rated the solicitation purrs as more urgent and less pleasant.

An acoustic analysis revealed a high-pitch component in these solicitation purrs, resembling a cry. This hidden cry taps into our innate sensitivity to distress sounds, making it nearly impossible for us to ignore.

Source: https://studyfinds.org/why-cats-meow-at-humans-more/

Why people run from bears: Study explains how the brain takes action upon what we see

Coming upon a bear in the forest will immediately put your brain into fight or flight mode. (© Татьяна Макарова – stock.adobe.com)

From a menacing bear in the forest to a smiling friend at a party, our brains are constantly processing emotional stimuli and guiding our responses. But how exactly does our brain transform what we see into appropriate actions? A new study sheds new light on this complex process, revealing the sophisticated ways our brains encode emotional information to guide behavior.

Led by Prof. Sonia Bishop, now Chair of Psychology at Trinity College Dublin, and Samy Abdel-Ghaffar, a researcher at Google, the study delves into how a specific brain region called the occipital temporal cortex (OTC) plays a crucial role in processing emotional visual information. Its findings are published in Nature Communications.

“It is hugely important for all species to be able to recognize and respond appropriately to emotionally salient stimuli, whether that means not eating rotten food, running from a bear, approaching an attractive person in a bar or comforting a tearful child,” Bishop explains in a statement.

The researchers used advanced brain imaging techniques to analyze how the OTC responds to a wide range of emotional images. They discovered that this brain region doesn’t just categorize what we see – it also encodes information about the emotional content of images in a way that’s particularly well-suited for guiding behavior.

The brain is hard at work when we see emotional stimuli. (© Татьяна Макарова – stock.adobe.com)

One of the study’s key insights is that our brains don’t simply process emotional stimuli in terms of “approach” or “avoid.” Instead, the OTC appears to represent emotional information in a more nuanced way that allows for a diverse range of responses.

“Our research reveals that the occipital temporal cortex is tuned not only to different categories of stimuli, but it also breaks down these categories based on their emotional characteristics in a way that is well suited to guide selection between alternate behaviors,” says Bishop.

For instance, the brain’s response to a large, threatening bear would be different from its response to a weak, diseased animal – even though both might generally fall under the category of “avoid.” Similarly, the brain’s representation of a potential mate would differ from its representation of a cute baby, despite both being positive stimuli.

The study employed a technique called voxel-wise modeling, which allowed the researchers to examine brain activity at a very fine-grained level. “This approach let us explore the intertwined representation of categorical and emotional scene features, and opened the door to novel understanding of how OTC representations predict behavior,” says Abdel-Ghaffar.

By applying machine learning techniques to the brain imaging data, the researchers found that the patterns of activity in the OTC were remarkably good at predicting what kinds of behavioral responses people would associate with each image. Intriguingly, these predictions based on brain activity were more accurate than predictions based solely on the objective features of the images themselves.

This suggests that the OTC is doing more than just passively representing what we see – it’s actively transforming visual information into a format that’s optimized for guiding our actions in emotionally-charged situations.

These findings not only advance our understanding of how the brain processes emotional information but could also have important implications for mental health research. As Prof. Bishop points out, “The paradigm used does not involve a complex task, making this approach suitable in the future, for example, to further understanding of how individuals with a range of neurological and psychiatric conditions differ in processing emotional natural stimuli.”

By unraveling the ways our brains encode emotional information, the study brings us one step closer to understanding how we navigate the complex emotional landscape of our world. From everyday social interactions to life-or-death situations, our brains are constantly working behind the scenes, using sophisticated neural representations to help us respond appropriately to the emotional stimuli we encounter.

Source: https://studyfinds.org/why-people-run-from-bears-study-explains-how-the-brain-takes-action-upon-what-we-see/

Brain-imaging study reveals curiosity as it emerges

(Photo by Paula Corberan on Unsplash)

You look up into the clear blue sky and see something you can’t quite identify. Is it a balloon? A plane? A UFO? You’re curious, right?

A research team based at Columbia’s Zuckerman Institute has for the first time witnessed what is happening in the human brain when feelings of curiosity like this arise. In a study published in the Journal of Neuroscience, the scientists revealed brain areas that appear to assess the degree of uncertainty in visually ambiguous situations, giving rise to subjective feelings of curiosity.

“Curiosity has deep biological origins,” said corresponding author Jacqueline Gottlieb, PhD, a principal investigator at the Zuckerman Institute. The primary evolutionary benefit of curiosity, she added, is to encourage living things to explore their world in ways that help them survive.

“What distinguishes human curiosity is that it drives us to explore much more broadly than other animals, and often just because we want to find things out, not because we are seeking a material reward or survival benefit,” said Dr. Gottlieb, who is also a professor of neuroscience at Columbia’s Vagelos College of Physicians and Surgeons. “This leads to a lot of our creativity.”

Joining Dr. Gottlieb on the research were Michael Cohanpour, PhD, a former graduate student at Columbia (now a data scientist with dsm-firmenich), and Mariam Aly, PhD, also previously at Columbia and now an acting associate professor of psychology at the University of California, Berkeley.

Human brain-scan images show regions toward the back and front that are active for a person who is feeling curious. (Credit: Gottlieb Lab/Columbia’s Zuckerman Institute)

In the study, researchers employed a noninvasive, widely used technology to measure changes in the blood-oxygen levels in the brains of 32 volunteers. Called functional magnetic resonance imaging, or fMRI, the technology enabled the scientists to record how much oxygen different parts of the subjects’ brains consumed as they viewed images. The more oxygen a brain region consumes, the more active it is.

To unveil those brain areas involved in curiosity, the research team presented participants with special images known as texforms. These are images of objects, such as a walrus, frog, tank or hat, that have been distorted to various degrees to make them more or less difficult to recognize.

The researchers asked participants to rate their confidence and curiosity about each texform, and found that the two ratings were inversely related. The more confident subjects were that they knew what the texform depicted, the less curious they were about it. Conversely, the less confident subjects were that they could guess what the texform was, the more curious they were about it.

Three pairs of texforms showing unrecognizable and clear versions of objects. (Credit: Gottlieb Lab/Columbia’s Zuckerman Institute)

Using fMRI, the researchers then viewed what was happening in the brain as the subjects were presented with texforms. The brain-scan data showed high activity in the occipitotemporal cortex (OTC), a region located just above your ears, which has long been known to be involved in vision and in recognizing categories of objects. Based on previous studies, the researchers expected that when they presented participants with clear images, this brain region would show distinct activity patterns for animate and inanimate objects. “You can think of each pattern as a ‘barcode’ identifying the texform category,” Dr. Gottlieb said.

The researchers used these patterns to develop a measure, which they dubbed “OTC uncertainty,” of how uncertain this cortical area was about the category of a distorted texform. They showed that, when subjects were less curious about a texform, their OTC activity corresponded to only one barcode, as if it clearly identified whether the image belonged to the animate or the inanimate category. In contrast, when subjects were more curious, their OTC had characteristics of both barcodes, as if it could not clearly identify the image category.
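One way to picture that “barcode” logic is as a pattern-similarity comparison: if a trial’s activity pattern matches the animate and inanimate templates about equally well, the category is ambiguous. The toy metric below is an illustrative assumption, not the paper’s actual “OTC uncertainty” computation.

```python
# Toy "barcode" comparison: correlate one trial's (simulated) voxel pattern
# with animate and inanimate template patterns; when the two correlations are
# similar, treat the category as uncertain. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
animate_template = rng.normal(size=200)      # stand-in voxel "barcode" for animate images
inanimate_template = rng.normal(size=200)    # stand-in voxel "barcode" for inanimate images

# A trial pattern that mixes both templates, as for a hard-to-recognize texform.
trial_pattern = 0.5 * animate_template + 0.5 * inanimate_template \
    + rng.normal(scale=0.5, size=200)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

r_animate = corr(trial_pattern, animate_template)
r_inanimate = corr(trial_pattern, inanimate_template)
uncertainty = 1.0 - abs(r_animate - r_inanimate)  # high when the matches are similar
print(f"animate r={r_animate:.2f}, inanimate r={r_inanimate:.2f}, uncertainty={uncertainty:.2f}")
```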

Source: https://studyfinds.org/brain-imaging-study-reveals-curiosity-as-it-emerges/

Saving just 1.2% of Earth’s surface may prevent next mass extinction event

(© herraez – stock.adobe.com)

In a world where the boundaries between human communities and the wilderness blur with each passing year, a groundbreaking study explains how preserving just a tiny fraction of the Earth’s surface could prevent the next mass extinction event. An international team warns that the last unprotected havens of rare and endangered species across the globe are concentrated in relatively minuscule spaces that make up just over one percent of the planet’s surface.

The study, published in the journal Frontiers in Science, calls these last refuges of biodiversity irreplaceable, and their loss would trigger the sixth major extinction event in Earth’s history.

“Most species on Earth are rare, meaning that species either have very narrow ranges or they occur at very low densities or both,” says Dr. Eric Dinerstein of the NGO Resolve, the lead author of the report, in a media release. “And rarity is very concentrated. In our study, zooming in on this rarity, we found that we need only about 1.2% of the Earth’s surface to head off the sixth great extinction of life on Earth.”

Methodology: Mapping the Conservation Imperatives
The research team combined six key biodiversity datasets to map out 16,825 vital areas, covering approximately 164 million hectares, that are currently unprotected. These areas were identified by overlaying existing protected zones with regions known for their rare and endangered species.

The team then refined this data using fractional land cover analysis, which allowed them to pinpoint the exact regions still harboring significant natural habitats that require immediate conservation efforts.
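In spirit, the overlay step resembles the toy example below: take grid cells flagged as habitat for rare or endangered species, remove those already inside protected areas, and keep only the cells that still retain enough natural land cover. The cells, cover values, and threshold are invented for illustration and are not the study’s GIS workflow.

```python
# Toy overlay: unprotected rare-species habitat that still has natural cover.
# All values are made up for illustration.
rare_species_cells = {(1, 1), (1, 2), (2, 2), (3, 5), (4, 4)}     # cells with rare species
protected_cells = {(1, 2), (4, 4)}                                 # cells inside existing reserves
natural_cover_fraction = {(1, 1): 0.9, (2, 2): 0.7, (3, 5): 0.1}   # remaining habitat per cell

unprotected = rare_species_cells - protected_cells
priority_cells = {cell for cell in unprotected
                  if natural_cover_fraction.get(cell, 0.0) >= 0.5}
print(priority_cells)  # cells that would still need protection
```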

Results: Revealing a Dire Need for Protection

The results are alarming. A vast majority of these critical areas are located in the tropics, which are hotspots for biodiversity but also highly vulnerable to human interference and climate change.

Despite covering only 1.22% of the Earth’s terrestrial surface, protecting these areas could prevent a disproportionately large number of species extinctions. Furthermore, the study highlights that recent global efforts to expand protected areas have largely overlooked these crucial habitats, with only 7% of what the scientists call “conservation imperatives” being identified and currently safeguarded.

“These sites are home to over 4,700 threatened species in some of the world’s most biodiverse yet threatened ecosystems,” says study co-author Andy Lee of Resolve. “These include not only mammals and birds that rely on large intact habitats, like the tamaraw in the Philippines and the Celebes crested macaque in Sulawesi, Indonesia, but also range-restricted amphibians and rare plant species.”

Source: https://studyfinds.org/earths-surface-mass-extinction/

This bad habit may be the main reason people suffer cognitive decline

(Credit: Laurent T/Shutterstock)

Smoking may be the most influential factor in whether older adults go on to develop dementia. That’s the concerning takeaway from a groundbreaking study spanning 14 European countries. Researchers in London have found that when it comes to maintaining cognitive function as we age, the biggest impact may come from a single lifestyle choice: not smoking.

The study, published in Nature Communications, followed over 32,000 adults aged 50 to 104 for up to 15 years. While previous research has often lumped various healthy behaviors together, making it difficult to pinpoint which ones truly matter, this study took a different approach. By examining 16 different lifestyle combinations, the researchers were able to isolate the effects of smoking, alcohol consumption, physical activity, and social contact on cognitive decline.

The results were striking. Regardless of other lifestyle factors, non-smokers consistently showed slower rates of cognitive decline compared to smokers. This finding suggests that quitting smoking – or never starting in the first place – could be the most crucial step in preserving brain function as we age.

“Our findings suggest that among the healthy behaviors we examined, not smoking may be among the most important in terms of maintaining cognitive function,” says Dr. Mikaela Bloomberg from University College London in a media release.

“For people who aren’t able to stop smoking, our results suggest that engaging in other healthy behaviors such as regular exercise, moderate alcohol consumption and being socially active may help offset adverse cognitive effects associated with smoking.”

Methodology: Unraveling the Cognitive Puzzle
To understand how the researchers arrived at this conclusion, let’s break down their methodology. The study drew data from two major aging studies: the English Longitudinal Study of Ageing (ELSA) and the Survey of Health, Ageing and Retirement in Europe (SHARE). These studies are treasure troves of information, following thousands of older adults over many years and collecting data on their health, lifestyle, and cognitive function.

The researchers focused on 4 key lifestyle factors:

  1. Smoking (current smoker or non-smoker)
  2. Alcohol consumption (no-to-moderate or heavy)
  3. Physical activity (weekly moderate-plus-vigorous activity or less)
  4. Social contact (weekly or less than weekly)

By combining these factors, they created 16 distinct lifestyle profiles. For example, one profile might be a non-smoker who drinks moderately, exercises weekly, and has frequent social contact, while another might be a smoker who drinks heavily, doesn’t exercise regularly, and has limited social interaction.
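Because each factor is binary, the 16 profiles follow directly from enumerating every combination (2 × 2 × 2 × 2 = 16). The short sketch below shows that enumeration; the labels mirror the list above, but the code is illustrative rather than the study’s analysis.

```python
# Enumerate the 16 lifestyle profiles implied by four binary factors.
# Labels follow the article's description; the grouping itself is illustrative.
from itertools import product

factors = {
    "smoking": ["non-smoker", "smoker"],
    "alcohol": ["no-to-moderate", "heavy"],
    "physical activity": ["weekly moderate-plus-vigorous", "less"],
    "social contact": ["weekly", "less than weekly"],
}

profiles = list(product(*factors.values()))
print(len(profiles))  # 16 distinct profiles
for p in profiles[:2]:
    print(dict(zip(factors, p)))  # e.g. the first two combinations
```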

To measure cognitive function, the researchers used two tests:

  1. A memory test, where participants had to recall a list of words immediately and after a delay
  2. A verbal fluency test, where participants named as many animals as they could in one minute

These tests were repeated at multiple time points over the years, allowing the researchers to track how cognitive function changed over time for each lifestyle profile. To ensure they were capturing the effects of lifestyle rather than early signs of dementia, the researchers excluded anyone who showed signs of cognitive impairment at the start of the study or who was diagnosed with dementia during the follow-up period.

Source: https://studyfinds.org/smoking-cognitive-decline/

The Caesar salad was born 100 years ago in Mexico — on the Fourth of July!

Photo by Chris Tweten from Unsplash

The most seductive culinary myths have murky origins, with a revolutionary discovery created by accident or out of necessity.

For the Caesar salad, these classic ingredients are spiced up with a family food feud and a spontaneous recipe invention on the Fourth of July, across the border in Mexico, during Prohibition.

Our story is set during the era when America banned the production and sale of alcohol, from 1919 to 1933.

Two brothers, Caesar (Cesare) and Alex (Alessandro) Cardini, moved to the United States from Italy. Caesar opened a restaurant in California in 1919. In the 1920s, he opened another in the Mexican border town of Tijuana, serving food and liquor to Americans looking to circumvent Prohibition.

Tijuana’s Main Street, packed with saloons, became a popular destination for southern Californians looking for a drink. It claimed to have the “world’s longest bar” at the Ballena – 215 feet (66 meters) long, with 10 bartenders and 30 waitresses.

The story of the Caesar salad, allegedly 100 years old, is one of a Prohibition-era myth born on a national holiday across the border, a brotherly battle for the claim to fame, and celebrity chef endorsements.

Necessity is the mother of invention

On July 4, 1924, so the story goes, Caesar Cardini was hard at work in the kitchen of his restaurant, Caesar’s Place, packed with holiday crowds from across the border looking to celebrate with food and drink.

He was confronted with a chef’s worst nightmare: running out of ingredients in the middle of service.

As supplies for regular menu items dwindled, Caesar decided to improvise with what he had on hand.

He took ingredients in the pantry and cool room and combined the smaller leaves from hearts of cos lettuce with a dressing made from coddled (one-minute boiled) eggs, olive oil, black pepper, lemon juice, a little garlic, and Parmesan cheese.

The novel combination was a huge success with the customers and became a regular menu item: the Caesar salad.

Et tu, Alex?
There is another version of the origin of the famous salad, made by Caesar’s brother, Alex, at his restaurant in Tijuana.

Alex claims Caesar’s “inspiration” was actually a menu item at his place, the “aviator’s salad”, named because he made it as a morning-after pick-me-up for American pilots after a long night drinking.

His version had many of the same ingredients, but used lime juice, not lemon, and was served with large croutons covered with mashed anchovies.

When Caesar’s menu item later became famous, Alex asserted his claim as the true inventor of the salad, now named for his brother.

Enter the celebrity chefs
To add to the intrigue, two celebrity chefs championed the opposing sides of this feud. Julia Child backed Caesar, and Diana Kennedy (not nearly as famous, but known for her authentic Mexican cookbooks) supported Alex’s claim.

By entering the fray, each of these culinary heavyweights added credence to different elements of each story and made the variations more popular in the US.

While Child reached more viewers in print and on television, Kennedy had local influence, known for promoting regional Mexican cuisine.

While they chose different versions, the influence of major media figures contributed to the evolution of the Caesar salad beyond its origins.

The original had no croutons and no anchovies. As the recipe was codified into an “official” version, garlic was included in the form of an infused olive oil. Newer versions either mashed anchovies directly into the dressing or added Worcestershire sauce, which has anchovies in the mix.

Caesar’s daughter, Rosa, always maintained her father was the original inventor of the salad. She continued to market her father’s trademarked recipe after his death in 1954.

Ultimately, she won the battle for her father’s claim as the creator of the dish, but elements from Alex’s recipe have become popular inclusions that deviate from the purist version, so his influence is present – even if his contribution is less visible.

No forks required – but a bit of a performance
If this weren’t enough, there is also a tasty morsel that got lost along the way.

Caesar salad was originally meant to be eaten as finger food, with your hands, using the baby leaves as scoops for the delicious dressing ingredients.

Source: https://studyfinds.org/caesar-salad-fourth-of-july/

Age of newly-discovered cave painting rewrites human history

Photo-stitched panorama of the rock art panel (with photographs enhanced using DStretch_ac_lds_cb). (Credit: Nature/Griffith University)

The tale of early humans hunting pigs roughly 50,000 years ago may be the first recorded story in human history. Deep in the limestone caves of Indonesia’s Sulawesi island, this remarkable cave painting discovery has pushed back the origins of narrative art by thousands of years.

Researchers in Australia have found that our ancestors were creating complex scenes of human-animal interaction at least 51,200 years ago – making this the oldest known example of visual storytelling in the world. The groundbreaking study focuses on two cave sites in the Maros-Pangkep region of South Sulawesi. Using advanced dating techniques, researchers have determined that a hunting scene at Leang Bulu’ Sipong 4 cave is at least 48,000 years old, while a newly discovered composition at Leang Karampuang cave dates back at least 51,200 years.

“Our findings show that figurative portrayals of anthropomorphic figures and animals have a deeper origin in the history of modern human (Homo sapiens) image-making than recognized to date, as does their representation in composed scenes,” the study authors write in the journal Nature.

The Leang Karampuang artwork depicts at least three human-like figures interacting with a large pig, likely a Sulawesi warty pig. This scene predates the next oldest known narrative art by over 20,000 years, fundamentally altering our understanding of early human brain and artistic development.

a, Photo-stitched panorama of the rock art panel (with photographs enhanced using DStretch_ac_lds_cb). b, Tracing of the rock art panel showing the results of LA-U-series dating. c, Tracing of the painted scene showing the human-like figures (H1, H2 and H3) interacting with the pig. d, Transect view of the coralloid speleothem, sample LK1, removed from the rock art panel, showing the paint layer and the three integration zones (regions of interest, ROIs), as well as the associated age calculations. e, LA-MC-ICP-MS imaging of the LK1 232Th/238U isotopic activity ratio. (Credit: Nature/Griffith University)

A New Way to Date Ancient Art

Key to this discovery was the research team’s novel approach to dating cave art. Previous studies relied on analyzing calcium carbonate deposits that form on top of paintings using a method called uranium-series dating. While effective, this technique had limitations when dealing with thin or complex mineral layers.

The researchers developed an innovative laser-based method that allows for much more precise analysis. By using laser ablation to create detailed maps of the calcium carbonate layers, they could pinpoint the oldest deposits directly on top of the pigments.

“This method provides enhanced spatial accuracy, resulting in older minimum ages for previously dated art,” the researchers explain.
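The article doesn’t spell out the underlying math, but uranium-series dating rests on a standard closed-system age equation: the measured (230Th/238U) activity ratio grows predictably as 230Th accumulates from uranium decay in the calcite. The Python sketch below solves that textbook equation for age by bisection; the decay constants are approximate published values and the activity ratios at the end are hypothetical, purely for illustration – none of these numbers come from the study.

```python
import math

# Approximate decay constants (1/year), from published half-lives of
# roughly 75,600 years (230Th) and 245,600 years (234U).
LAMBDA_230 = math.log(2) / 75_600
LAMBDA_234 = math.log(2) / 245_600

def th_u_ratio(t, u234_u238):
    """Predicted (230Th/238U) activity ratio after t years, closed system."""
    ingrowth = 1 - math.exp(-LAMBDA_230 * t)
    excess_234 = (u234_u238 - 1) * LAMBDA_230 / (LAMBDA_230 - LAMBDA_234)
    decay_back = 1 - math.exp(-(LAMBDA_230 - LAMBDA_234) * t)
    return ingrowth + excess_234 * decay_back

def solve_age(th_u_measured, u234_u238, lo=0.0, hi=500_000.0):
    """Bisection solve for the age that reproduces the measured ratio."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if th_u_ratio(mid, u234_u238) < th_u_measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical activity ratios, for illustration only:
print(f"age ≈ {solve_age(0.38, 1.10):,.0f} years")
```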

The team applied this technique to re-date the hunting scene at Leang Bulu’ Sipong 4, which was previously thought to be around 44,000 years old. The new analysis revealed it to be at least 48,000 years old – 4,000 years older than initially believed.

Pushing Back the Timeline of Human Art
Armed with this refined dating method, the researchers turned their attention to Leang Karampuang cave. There, they discovered and dated a previously unknown composition showing human-like figures apparently interacting with a pig.

Three small human figures are arrayed around a much larger pig painted in red ochre. Two of the human figures appear to be holding objects, possibly spears or ropes, while a third is depicted upside-down with its arms outstretched towards the pig’s head.

Using their laser ablation technique, the team dated calcium carbonate deposits on top of these figures. The results were astounding – the artwork was at least 51,200 years old, making it the earliest known example of narrative art in the world.

“This enigmatic scene may represent a hunting narrative, while the prominent portrayal of therianthropic figures implies that the artwork reflects imaginative storytelling (for example, a myth),” the international team writes.

Rewriting the History of Human Creativity
These findings have profound implications for our understanding of human brain development in ancient times. Previously, the oldest known figurative art was a painting of a Sulawesi warty pig from the same region, dated to 45,500 years ago. The oldest known narrative scene was thought to be the Leang Bulu’ Sipong 4 hunting tableau, originally dated to 44,000 years ago.

The new dates push back the origins of both figurative and narrative art by thousands of years. They suggest that early humans in this region were engaging in complex symbolic thinking and visual storytelling (drawing and painting) far earlier than previously believed.

Source: https://studyfinds.org/cave-painting-rewrites-history/

Here’s why most people are right-handed but actually left-eye dominant

(Credit: Krakenimages.com/Shutterstock)

Whether you’re left, right or ambidextrous, “handedness” is part of our identity. But a lot of people don’t realize that we have other biases too, and that they are not unique to humans. My colleagues and I have published a new study that shows aligning our biases in the same way as other people may have social benefits.

Across different cultures, human populations have high levels of right-handedness (around 90%). We also have a strong population bias in how we recognize faces and their emotions.

A significant majority of the population are faster and more accurate at recognizing identities and emotions when they fall within the left visual field compared with the right visual field.

These types of biases develop in our brains in early childhood. The left and right hemispheres of the brain control motor action on the opposite sides of the body. If your left visual field is dominant, that means the right side of your brain is dominant for recognizing faces and emotions.

Until recently, scientists thought behavioral biases were unique to humans. But animal research over the last several decades shows there are behavioral biases across all branches of the vertebrate tree of life.

For example, chicks that peck for food with an eye bias are better at telling grain from pebbles. Also, chicks with an eye bias for monitoring predators are less likely to be eaten than unlateralized chicks. Studies show that animals with biases tend to perform better at survival-related tasks in laboratory experiments, which probably translates to a better survival rate in the wild.

But the chicks with the best advantage are ones that favor one eye to the ground (to find food) and the other eye to the sky (to look out for threats). A benefit of the “divided brain” is that wild animals can forage for food and look out for predators – important multitasking.

So why do animals have behavioral biases?
Research suggests that brain hemisphere biases evolved because they allow the two sides of the brain to control different behaviors concurrently. They also protect animals from becoming muddled: if both sides of the brain had equal control over critical functions, they might simultaneously direct the body to carry out incompatible responses.

So biases free up some resources or “neural capacity”, making animals more efficient at finding food and keeping safe from predators.

Animal studies suggest it is the presence, not the direction (left or right) of our biases that matters for performance. But that doesn’t explain why so many people are right-handed for motor tasks and left-visual field biased for face processing.

If direction didn’t matter, every person would have a 50-50 chance of being left- or right-biased. Yet, across the animal kingdom, the majority of individuals in a species align in the same direction.

This suggests that aligning biases with others in your group might have a social advantage. For example, animals that align with the population during cooperative behavior (shoaling, flocking) dilute the possibility of being picked off by a predator. The few that turn away from the flock or shoal become clear targets.

People tend to be left or right-eye dominant. (Photo by Gayatri Malhotra)

Although humans are highly lateralized regardless of ethnic or geographic background, there is always a significant minority biased the other way, suggesting that this alternative bias has its own merits.

The prevailing theory is that deviating from the population offers animals an advantage during competitive interactions, by creating an element of surprise. It may explain why left-handedness is overrepresented in professional interactive sports like cricket and baseball.

In the first study of its kind, scientists from the universities of Sussex, Oxford, Westminster, London (City, Birkbeck), and Kent put our human behavioral biases to the test. We investigated associations between strength of hand bias and performance as well as direction of biases and social ability. We chose behavior that aligns with animal research.

Over 1,600 people of all ages and ethnicities participated in this investigation.

You don’t always use your preferred hand: some people are mildly, moderately, or strongly handed. So, we measured handedness in our participants using a timed color-matching pegboard task. Not everyone knows whether they have a visual field bias, so we evaluated this for participants using images of faces expressing different emotions (such as anger and surprise) presented on a screen.

People with a mild to moderate hand bias (left or right) placed more color-matched pegs correctly than those with a strong or weak bias. These results suggest that, in humans, extremes may limit performance flexibility, unlike what is seen in wild animals.

The majority of the participants had a standard bias (right-handedness for motor tasks, left visual field bias for face processing). But not everyone.

Source: https://studyfinds.org/right-handed-but-left-eyed/

Ozempic linked to blindness? New study says sudden vision loss a side-effect of semaglutide

(Photo by myskin on Shutterstock)

A groundbreaking new study has uncovered a potential link between a popular weight loss and diabetes medication and an increased risk of sudden vision loss. The drug in question, semaglutide, sold under brand names like Ozempic and Wegovy, has been hailed as a game-changer in the fight against obesity and Type 2 diabetes. However, this research suggests it may come with an unexpected and serious side-effect.

Semaglutide belongs to a class of drugs called GLP-1 receptor agonists. These medications mimic a hormone that helps regulate blood sugar and appetite. Since its approval by the FDA in 2017 for diabetes and later for weight loss, semaglutide has skyrocketed in popularity. By early 2023, it accounted for the highest number of new prescriptions among similar drugs in the United States.

But as more people turn to semaglutide for its benefits, researchers at Massachusetts Eye and Ear and Harvard Medical School have raised a red flag. Their study, published in JAMA Ophthalmology, suggests that patients taking semaglutide may face a significantly higher risk of developing a condition called nonarteritic anterior ischemic optic neuropathy, or NAION for short.

NAION is a serious eye condition that occurs when blood flow to the optic nerve is suddenly reduced or blocked. This can lead to rapid and often permanent vision loss, typically in one eye. While it’s the second most common cause of optic nerve-related vision loss in adults, it’s still relatively rare, affecting only two to 10 people per 100,000 in the general population.

The study’s findings are striking. Among patients with Type 2 diabetes, those taking semaglutide were over four times more likely to develop NAION compared to those on other diabetes medications. The risk was even higher for overweight or obese patients using semaglutide for weight loss – they were more than seven times more likely to experience NAION than those using other weight loss drugs.

Overweight or obese patients using semaglutide for weight loss were more than 7 times more likely to experience NAION than those using other weight loss drugs. (© Mauricio – stock.adobe.com)

These numbers are certainly attention-grabbing, but what do they mean in real-world terms? To put it in perspective, over a three-year period, about 9% of diabetes patients on semaglutide developed NAION, compared to less than 2% of those on other medications. For overweight or obese patients, the numbers were about 7% for semaglutide users versus less than 1% for those on other drugs.
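As a rough sanity check, the headline risk ratios can be approximated from these rounded three-year figures alone. The snippet below simply divides the cumulative percentages quoted above; the study’s actual hazard ratios come from matched survival analyses, so this crude division only roughly tracks them.

```python
# Crude risk ratios from the rounded three-year cumulative incidences quoted above.
# The published hazard ratios come from matched survival models, so this simple
# division is only an approximation of them.
def risk_ratio(exposed_pct, comparison_pct):
    return exposed_pct / comparison_pct

diabetes = risk_ratio(9.0, 2.0)      # semaglutide vs. other diabetes drugs
weight_loss = risk_ratio(7.0, 1.0)   # semaglutide vs. other weight-loss drugs

print(f"Type 2 diabetes cohort: ~{diabetes:.1f}x")    # ~4.5x, in line with "over four times"
print(f"Weight-loss cohort:     ~{weight_loss:.1f}x") # ~7.0x, in line with "more than seven times"
```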

“The use of these drugs has exploded throughout industrialized countries and they have provided very significant benefits in many ways, but future discussions between a patient and their physician should include NAION as a potential risk,” says study co-author Dr. Joseph Rizzo, the study’s corresponding author and director of the Neuro-Ophthalmology Service at Mass Eye and Ear, in a statement. “It is important to appreciate, however, that the increased risk relates to a disorder that is relatively uncommon.”

The timing of NAION onset is also noteworthy: the study found that the risk was highest in the first year after starting semaglutide, which is when most cases occurred.

It’s important to note that this study doesn’t prove that semaglutide directly causes NAION. Rather, it highlights the need for increased awareness and careful monitoring among both patients and healthcare providers.

The potential link between semaglutide and NAION is particularly concerning given the drug’s widespread use and growing popularity. As obesity rates continue to climb and Type 2 diabetes remains a major public health concern, medications like semaglutide play a crucial role in managing these conditions. The benefits of these drugs – including improved blood sugar control, significant weight loss, and reduced risk of heart disease – are well-documented and potentially life-changing for many patients.

As research continues, patients currently taking semaglutide should not panic or discontinue their medication without consulting their doctor. Instead, they should be aware of the potential risk and report any sudden changes in vision immediately. Healthcare providers may need to consider more frequent eye exams for patients on these medications, especially in the first year of treatment.

“Our findings should be viewed as being significant but tentative, as future studies are needed to examine these questions in a much larger and more diverse population,” says Rizzo, who is also the Simmons Lessell Professor of Ophthalmology at Harvard Medical School. “This is information we did not have before and it should be included in discussions between patients and their doctors, especially if patients have other known optic nerve problems like glaucoma or if there is preexisting significant visual loss from other causes.”

Source: https://studyfinds.org/ozempic-linked-to-blindness/

Concerning link discovered between heart disease and disappearance of the Y chromosome

Y-Chromosomes. (©YustynaOlha – stock.adobe.com)

The spontaneous loss of the Y chromosome has been a medical mystery among aging men for quite some time. Now, a new study is linking this condition to an even more concerning problem — death from heart disease. Researchers at Boston Medical Center (BMC) and Boston University (BU) Chobanian & Avedisian School of Medicine have found that men who are losing their Y chromosomes are at a much higher risk of dying from heart disease.

Specifically, the study published in Circulation: Heart Failure explored the risk factors for transthyretin cardiac amyloidosis (ATTR-CA), a common cause of heart disease among older men. Transthyretin amyloidosis occurs when a person’s liver produces faulty transthyretin proteins. Clumps of these abnormal proteins build up in the heart’s main pumping chamber, causing the left ventricle to become stiff and weak.

The analysis uncovered a connection to the spontaneous loss of the Y chromosome (LOY). The more blood cells missing their Y chromosomes, the greater the odds were that a person would die from ATTR-CA.

Study authors note that LOY is one of the most common genetic mutations among men. Over half of men who make it to age 90 will have lost the Y chromosome from at least some of their blood cells. Previous studies have also linked the disappearance of the Y chromosome to poorer heart health, but these reports did not examine LOY’s link to ATTR-CA.

“Our study suggests that spontaneous LOY in circulating white blood cells contributes both to the development of ATTR-CA in men and influences the severity of disease,” says Dr. Frederick Ruberg, the Chief of Cardiovascular Medicine at BMC and Professor of Medicine at BU Chobanian & Avedisian School of Medicine, in a media release. “Additionally, our study’s findings indicate that elevated LOY may be an important reason why some patients do not respond to the ATTR-CA therapy that is typically effective.”

Methodology & Results
In total, researchers examined 145 men from the United States and Japan with ATTR-CA and another 91 dealing with heart failure due to an issue other than transthyretin cardiac amyloidosis. Results revealed that men who had lost more than 21% of their Y chromosomes were over two and a half times more likely to die of heart disease than men with intact blood cells.

Source: https://studyfinds.org/heart-disease-y-chromosome/

Sixty-million-year-old grape seeds reveal how the death of the dinosaurs may have paved the way for grapes to spread

Fabiany Herrera (left) and Mónica Carvalho (right) at the fossil plant locality, holding the newly-discovered earliest grape from the Western Hemisphere. (Photos courtesy of Fabiany Herrera.)

If you’ve ever snacked on raisins or enjoyed a glass of wine, you may, in part, have the extinction of the dinosaurs to thank for it. In a discovery described in the journal Nature Plants, researchers found fossil grape seeds that range from 60 to 19 million years old in Colombia, Panama, and Peru. One of these species represents the earliest known example of plants from the grape family in the Western Hemisphere. These fossil seeds help show how the grape family spread in the years following the death of the dinosaurs.

“These are the oldest grapes ever found in this part of the world, and they’re a few million years younger than the oldest ones ever found on the other side of the planet,” says Fabiany Herrera, an assistant curator of paleobotany at the Field Museum in Chicago’s Negaunee Integrative Research Center and the lead author of the Nature Plants paper. “This discovery is important because it shows that after the extinction of the dinosaurs, grapes really started to spread across the world.”

It’s rare for soft tissues like fruits to be preserved as fossils, so scientists’ understanding of ancient fruits often comes from the seeds, which are more likely to fossilize. The earliest known grape seed fossils were found in India and are 66 million years old. It’s not a coincidence that grapes appeared in the fossil record 66 million years ago–that’s around when a huge asteroid hit the Earth, triggering a massive extinction that altered the course of life on the planet. “We always think about the animals, the dinosaurs, because they were the biggest things to be affected, but the extinction event had a huge impact on plants too,” says Herrera. “The forest reset itself, in a way that changed the composition of the plants.”

Herrera and his colleagues hypothesize that the disappearance of the dinosaurs might have helped alter the forests. “Large animals, such as dinosaurs, are known to alter their surrounding ecosystems. We think that if there were large dinosaurs roaming through the forest, they were likely knocking down trees, effectively maintaining forests more open than they are today,” says Mónica Carvalho, a co-author of the paper and assistant curator at the University of Michigan’s Museum of Paleontology. But without large dinosaurs to prune them, some tropical forests, including those in South America, became more crowded, with layers of trees forming an understory and a canopy.

Lithouva – the earliest fossil grape from the Western Hemisphere, ~60 million years old from Colombia. Top figure shows fossil accompanied with CT scan reconstruction. Bottom shows artist reconstruction. (Photos by Fabiany Herrera, art by Pollyanna von Knorring.)

These new, dense forests provided an opportunity. “In the fossil record, we start to see more plants that use vines to climb up trees, like grapes, around this time,” says Herrera. The diversification of birds and mammals in the years following the mass extinction may have also aided grapes by spreading their seeds.

In 2013, Herrera’s PhD advisor and senior author of the new paper, Steven Manchester, published a paper describing the oldest known grape seed fossil, from India. While no fossil grapes had ever been found in South America, Herrera suspected that they might be there too.

“Grapes have an extensive fossil record that starts about 50 million years ago, so I wanted to discover one in South America, but it was like looking for a needle in a haystack,” says Herrera. “I’ve been looking for the oldest grape in the Western Hemisphere since I was an undergrad student.”

But in 2022, Herrera and his co-author Mónica Carvalho were conducting fieldwork in the Colombian Andes when a fossil caught Carvalho’s eye. “She looked at me and said, ‘Fabiany, a grape!’ And then I looked at it, I was like, ‘Oh my God.’ It was so exciting,” recalls Herrera. The fossil was in a 60-million-year-old rock, making it not only the first South American grape fossil, but among the world’s oldest grape fossils as well.

The fossil seed itself is tiny, but Herrera and Carvalho were able to identify it based on its particular shape, size, and other morphological features. Back in the lab, they conducted CT scans showing its internal structure that confirmed its identity. The team named the fossil Lithouva susmanii, “Susman’s stone grape,” in honor of Arthur T. Susman, a supporter of South American paleobotany at the Field Museum. “This new species is also important because it supports a South American origin of the group in which the common grape vine Vitis evolved,” says co-author Gregory Stull of the National Museum of Natural History.

Internet addiction: What is it doing to teen brains?

(© olly – stock.adobe.com)

Internet addiction is the problematic, compulsive use of the Internet that results in significant impairments in an individual’s functioning in various aspects of life, including social, work, and academic arenas.

Internet addiction is becoming a worldwide problem. Individual screen time averages have risen to about three hours daily. Many people declare that their internet use is “compulsive.” In fact, more than 30 million of the United Kingdom’s 50 million internet users acknowledge that their compulsive, habitual use of the Internet is adversely affecting their personal lives by disrupting relationships and neglecting responsibilities.

Teens addicted to their internet-connected devices show significant alterations in their brain function, worsening addictive behaviors and impeding normal development. Internet addiction, powered by uncontrollable urges, disrupts their development, psychological well-being, and every aspect of their lives – mental, emotional, social, and physical.

A study by scientists at UCLA identified extensive changes in young brains, especially those of children aged 10 to 19 years. The ten-year study, which concluded in 2023, gathered findings on 237 adolescents who had been formally diagnosed with Internet addiction.

Teens addicted to their internet-connected devices show significant alterations in their brain function, worsening addictive behaviors and impeding normal development. (© Monkey Business – stock.adobe.com)

Effects on brain function

Using functional magnetic resonance imaging (fMRI), scientists examined different areas of the brain and diverse types of brain function both at rest and while performing tasks. Some parts of the brain showed increased activity, and some parts showed decreased activity. The most significant changes occurred in the connectivity in the part of the brain critical for active thinking and decision-making.

Alterations in brain function show up as addictive behaviors and deterioration in both thought and physical capabilities. The teens’ still immature brains suffered changes that adversely affected intellectual function, physical coordination, mental health, development, and overall well-being.

The adolescent brain is at an especially vulnerable stage of development, which makes it more susceptible to internet-associated compulsions such as nonstop mouse clicking and constant consumption of social media. The damage can be profound, with dire consequences: problems maintaining relationships, lying about online activities, and disturbed eating and sleeping patterns. The sleep disruption, in turn, undermines daytime concentration and leads to chronic fatigue.

Brain function is not the only thing altered in teens with internet addiction. Anxiety, depression, and social isolation are all severe consequences of their irresistible compulsions. Additional significant concerns include cyberbullying and exposure to inappropriate material, resulting in emotional distress and a distorted perception of reality.

Source: https://studyfinds.org/internet-addiction-what-its-doing-to-teen-brains/

Scientists discover what really causes us to procrastinate

(Credit: ntkris/Shutterstock)

Chronic procrastinators are often seen as lazy, but a new study suggests that it’s more than just a lack of motivation. The study, published in the Proceedings of the Annual Meeting of the Cognitive Science Society, examined the cost-benefit calculations the brain runs when deciding to put off tasks, especially in the face of serious consequences or failure. According to researchers in Germany, understanding why people wait until the last minute to finish important tasks could help create more effective productivity strategies.

Procrastination is a complex issue, especially when you consider that most people have been guilty of doing this at least once. Whether it’s filing taxes, meeting a project deadline for work, or simply cleaning out the garage, procrastination causes people to delay tasks despite having the time to do them right away. Given the stress, anxiety, and guilt that can come with procrastination, it’s surprising the human brain continues to support this bad habit.

Procrastination is also about more than waiting until the last minute to complete a task. While they might look alike from the outside, there are different forms of procrastination.

“Procrastination is an umbrella term for different behaviors,” explains Sahiti Chebolu, a computational neuroscientist from the Max Planck Institute for Biological Cybernetics, in a media release. “If we want to understand it, we need to differentiate between its various types.”

A common pattern of procrastination, for example, is not following through on a decision. You might have set aside time to do laundry in the evening, but when the time comes, you decide to watch a movie instead. Usually, something stops the person from committing to the original plan, and they end up waiting for the right conditions or motivation to start the work.

In the current study, Chebolu categorized each type of procrastination and narrowed it down to two explanations: misjudging the time needed to complete the task and protecting the ego from prospective failure.

Researchers narrowed down procrastination to two explanations: misjudging the time needed to complete the task and protecting the ego from prospective failure. (Photo by Pedro Forester Da Silva from Unsplash)

The Theory Behind a Distracted Brain

One theory frames procrastination as a series of temporal decisions – choices made now that carry consequences later. For example, deciding to file taxes on Friday but then choosing to watch a new show on TV when the time comes. Obviously, missing the Tax Day deadline results in penalties and other financial consequences — yet people do it anyway.

According to the authors, the brain weighs all the rewards and penalties of choosing an alternative behavior. However, the brain is biased and prefers immediate gratification over delayed pleasure. The joy of watching television right now is a more appealing option to the brain than the relief of filing taxes three weeks later. It’s too long of a wait for the reward, so the brain prefers the quicker option.

Now, if this were the case all the time, no one would get anything done. That’s why the brain also considers the penalties for making a different decision. However, the study finds the negative outcomes have less weight than the option that gives immediate pleasure. The brain will always try to find the easiest and most immediately pleasurable option.
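This weighing of a small pleasure now against a larger payoff later resembles the temporal-discounting models common in the decision-making literature. As a minimal sketch – using a generic hyperbolic discount function and made-up numbers, not anything taken from the study – here is how a delayed reward can lose to a smaller immediate one:

```python
# Hyperbolic discounting: subjective value = amount / (1 + k * delay).
# The function and every number here are illustrative, not from the study.
def discounted_value(amount, delay_days, k=0.2):
    return amount / (1 + k * delay_days)

movie_now  = discounted_value(amount=5, delay_days=0)    # small pleasure, right away
tax_relief = discounted_value(amount=20, delay_days=21)  # bigger relief, three weeks out

print(movie_now, tax_relief)  # 5.0 vs ~3.8 -> the immediate option wins despite being "smaller"
```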

Evolutionarily, this makes sense. The distant future is always full of uncertainties, so the emphasis should be on helping yourself in the present moment. Procrastination comes when this mental process becomes maladaptive. Chebolu says people’s decision-making skills become flawed as they put too much emphasis on experiences in the present and not enough on the future.

Source: https://studyfinds.org/what-causes-procrastinate/

Is silver worse than bronze? Here’s why many Olympic athletes shockingly think it is

RIO DE JANEIRO, BRAZIL – AUGUST 12, 2016: Laszlo Cseh HUN (L), Chad le Clos RSA , Michael Phelps USA and Joseph Schooling SGP during medal ceremony after Men’s 100m butterfly of the Rio 2016 Olympics (Credit: Leonard Zhukovsky/Shutterstock)

At the 2022 Beijing Olympics, a distraught Alexandra Trusova won silver and promptly declared, “I will never skate again.” Swimmer Michael Phelps displayed a mix of frustration and disappointment at the 2012 London Olympics when he added a silver to his trove of gold medals. At those same games, gymnast McKayla Maroney’s grim expression on the medal stand went viral.

These moments, caught by the camera’s unblinking eye, reveal a surprising pattern: Silver medalists often appear less happy than those winning bronze.

In a 2021 study, which we conducted with our research assistant, Raelyn Rouner, we investigated whether there’s any truth to this phenomenon.

Detecting disappointment
When the athletes of the world convene in Paris this summer for the games of the 33rd Olympiad, many will march in the opening ceremonies, dreaming of gold.

But what happens when they fall just short?

We studied photos of 413 Olympic athletes taken during medal ceremonies between 2000 and 2016. The photos came from the Olympic World Library and Getty Images and included athletes from 67 countries. We also incorporated Sports Illustrated’s Olympic finish predictions because we wanted to see whether athletes’ facial expressions would be affected if they had exceeded expectations or underperformed.

To analyze the photos, we used a form of artificial intelligence that detects facial expressions. By using AI to quantify the activation of facial muscles, we eliminated the need for research assistants to manually code the expressions, reducing the possibility of personal bias. The algorithm identified the shapes and positions of the athletes’ mouths, eyes, eyebrows, nose, and other parts of the face that indicate a smile.
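The article doesn’t name the software the team used, but a rough sense of how faces and smiles can be located automatically comes from off-the-shelf tools. The sketch below uses OpenCV’s bundled Haar cascades as a stand-in; real expression-analysis models quantify facial muscle activation in far more detail than this simple detector.

```python
import cv2  # OpenCV's bundled Haar cascades; a rough stand-in for the study's unnamed tool

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

def count_smiling_faces(image_path):
    """Detect faces in a photo, then look for a smile inside each face region."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    smiling = 0
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        roi = gray[y:y + h, x:x + w]  # restrict the smile search to the detected face
        if len(smile_cascade.detectMultiScale(roi, scaleFactor=1.7, minNeighbors=20)) > 0:
            smiling += 1
    return smiling

# e.g. count_smiling_faces("medal_ceremony.jpg")
```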

Even though second-place finishers had just performed objectively better than third-place finishers, the AI found that bronze medalists, on average, appeared happier than silver medalists.

Close but no cigar
So why does this happen?

The answer has to do with what psychologists call “counterfactual thinking,” which refers to when people envision what didn’t occur but could have happened.

With this thought process in mind, there are two main explanations for this medal stand phenomenon.

Beijing, China – February 10, 2022: Close-up of the silver medal of the Winter Olympic Games in Beijing (Credit: Andrew Will/Shutterstock)

First, silver medalists and bronze medalists form different points of comparison – what are called category-based counterfactuals.

Silver medalists form an upward comparison, imagining a different outcome – “I almost won gold.” Bronze medalists, on the other hand, form a downward comparison: “At least I won a medal” or “It could have been worse.”

The direction of this comparison shows how happiness can be relative. For silver medalists, almost winning gold is a cause for disappointment, while simply being on the medal stand can gratify a bronze medalist.

We also point to a second reason for this phenomenon: Medalists form something called expectation-based counterfactuals.

Some silver medalists are disappointed because they expected to do better. Maroney’s famous grimace is an example of this. Sports Illustrated predicted she would win the gold medal by a wide margin. In other words, for Maroney, anything other than gold was a big disappointment.

We found evidence consistent with both category-based and expectation-based counterfactual accounts of Olympic medalists’ expressions. Unsurprisingly, our analysis also found that gold medalists are far more likely to smile than the other two medalists, and people who finished better than expected were also more likely to smile, regardless of their medal.

Source: https://studyfinds.org/silver-worse-bronze-olympics/

Memory expert: Triple your recall skills using this simple method

(Credit: Marko Aliaksandr on Shutterstock)

We’re all forgetful from time to time, but for some of us, forgetfulness is a real problem. From little things like items on our grocery list to bigger things like important work meetings or anniversaries, the tendency to forget is not only annoying, but it can be detrimental to our relationships, work, and general ability to function well in a structured, fast-paced society.

There are many ways to combat memory loss and decrease an individual’s risk for conditions such as Alzheimer’s disease and other forms of dementia. Playing cognitively stimulating games and engaging in educational classes or activities have been shown to delay the onset of memory decline.

But what if you could do more than delay memory loss? What if we told you that it is possible to triple your memory with one simple method? Memory expert Dave Farrow, author of the book “Brainhacker,” has developed a method that does just that. Farrow believes that our minds have slowed because individuals are no longer asked to remember things. Phone numbers are programmed into our phones and we are able to “ask” our phones to remind us of important dates or events.

“We have become better at sifting through information and searching for information. Looking through search engines and such, and much worse at remembering information. And the reason is because we have a device that remembers it for us,” Farrow tells StudyFinds. “We don’t need phone numbers. We don’t need to hold it in our heads, things like that. Just a little bit of brain training and actually exercising your brain makes a difference there.”

That’s why he suggests what he calls “The Farrow Memory Method,” which he claims can triple an individual’s ability to remember. Here’s how it works:

The Farrow Memory Method

1. Make A List Of Random Objects

Select six or seven random objects and make a list. Focus on the order of the objects. You will need to repeat the objects in order at the end of the test.

2. Use Visual Association

Make connections between the objects. “Essentially what you would do is you’d get a list of random objects and you use visual association.” Instead of memorizing the entire list at once, focus on two at a time. Farrow continues, “the way I would memorize that is, you want to connect two items together at a time. The mistake people make when they’re trying to memorize a list of items is they try to hold it all in their head, and that’s why you have a limit of six to seven items or so. But what you should do is just focus on two at a time and making a connection.”

Farrow uses an example list to explain: shoe, tree, rubber ball, money, and movie. After he makes his list, he begins connecting the items by visualizing silly pictures or actions. “So the first item was a shoe. I would imagine a shoe connected to a tree. Maybe a tree is growing shoes like some miracle of genetic engineering. I love that. I actually pictured like a tree growing out of a giant shoe and it’s just like sitting on the ground and some art project,” he says.

He connects the tree to a rubber ball by visualizing balls coming out of the tree and hitting kids nearby. The kids discover money inside the rubber balls. He says, “Some of the kids, they pick up the ball, and they open it up and they realize it’s actually money inside. So they’re all excited.  After money, I believe I had a movie, and I just imagined like you go to a movie and just dollar bills are raining from the sky in the movie, like you just won the lottery or something.”

By making unique, visual connections, individuals are more likely to remember the list. Objects are no longer random, but part of a story.

3. Take Some Time

Read a book, watch a movie, or go out with a friend. Walk away from the list for a period of time. Then, come back to the list.

4. Recall The List

Using the visual connections, restate your list. The images should help you tie the seemingly random objects together.

Will you always need the silly pictures to help you remember? Farrow says no.

“With just a few repetitions most of these links will fade, but the information will stay. That is, you won’t remember that there was a tree growing out of a shoe. It’s just, whenever you think of shoe, it’ll remind you of tree,” he explains. “By the third or fourth repetition, the links would fade, and you would just remember the information. That’s really the goal. You don’t want to have to come up with silly pictures all the time just to remember your parent’s phone number. So it’s a means to an end. The picture fades and the information stays.”
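For what it’s worth, the two-at-a-time linking Farrow describes behaves like a simple chain in which each item cues only the next. This toy sketch (not Farrow’s material – just an illustration of the structure) walks such a chain using the example list above:

```python
# Toy illustration of the two-at-a-time linking: each item cues only the next one,
# so recall is a walk along the chain rather than holding the whole list in mind.
items = ["shoe", "tree", "rubber ball", "money", "movie"]  # example list from the article

links = {a: b for a, b in zip(items, items[1:])}  # shoe -> tree, tree -> rubber ball, ...

def recall(first_item):
    chain = [first_item]
    while chain[-1] in links:          # each cue brings back exactly one new item
        chain.append(links[chain[-1]])
    return chain

print(recall("shoe"))  # ['shoe', 'tree', 'rubber ball', 'money', 'movie']
```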

Other Memory Tips

Of course, previous studies point to other easy ways we can improve our recall.

In one study, scientists in Australia found that simple mental activities strengthen the brain by improving a person’s cognitive reserve. Activities such as adult literacy courses were found to reduce dementia risk by 11 percent, while playing intelligence-testing games led to a nine percent reduction. Engaging in painting, drawing, or other artistic hobbies was associated with a seven percent decrease in dementia risk.

And following a healthy lifestyle with a nutritious diet is also beneficial in warding off memory loss. A decade-long study of Chinese adults over the age of 60 shows that the benefits of healthy living even positively impact those with a gene making them susceptible to Alzheimer’s disease. The study followed carriers of the apolipoprotein E (APOE) gene — the strongest known risk factor for Alzheimer’s and other types of dementia. Compared with participants who had unfavorable lifestyles, those with favorable lifestyles were nearly 90 percent less likely, and those with average lifestyles almost 30 percent less likely, to develop dementia or mild cognitive impairment.

Are today’s teens more content being single? Study reveals surprising trends

Teenagers (Photo by Tim Mossholder on Unsplash)

Maybe romance really is just for adults after all. A new study suggests that teenagers today are not only more likely to be single, but also happier about it compared to previous generations. It’s an interesting shift in attitudes towards romantic relationships among young people considering rising levels of loneliness across the world today.

The research, conducted by a team of psychologists in Germany and published in the journal Personality and Social Psychology Bulletin, examines how satisfaction with being single has changed over time for different age groups. Their most striking finding was that adolescents born between 2001 and 2003 reported significantly higher satisfaction with singlehood compared to those born just a decade earlier.

This trend appears to be unique to teenagers, as the study found no similar increases in singlehood satisfaction among adults in their 20s and 30s. The results suggest that broader societal changes in how relationships and individual autonomy are viewed may be having a particularly strong impact on the youngest generation.

“Adolescents nowadays may be postponing entering relationships, prioritizing personal autonomy and individual fulfillment over romantic involvement, and embracing singlehood more openly,” the researchers speculate. However, they caution that more investigation is needed to understand the exact reasons behind this shift.

Beyond the generational differences, the study also uncovered several factors that were associated with higher satisfaction among singles across age groups. Younger singles tended to be more content than older ones, and those with lower levels of the personality trait neuroticism also reported greater satisfaction with singlehood.

Interestingly, the research found that singles’ satisfaction tends to decline over time, both with being single specifically and with life in general. This suggests that while attitudes may be changing, there are still challenges associated with long-term singlehood for many people.

“It seems that today’s adolescents are less inclined to pursue a romantic relationship. This could well be the reason for the increased singlehood satisfaction,” said psychologist and lead author Dr. Tita Gonzalez Avilés, of the Institute of Psychology at Johannes Gutenberg University Mainz, in a statement.

Methodology

The study utilized data from a large, nationally representative longitudinal survey in Germany called the Panel Analysis of Intimate Relationships and Family Dynamics (pairfam). This ongoing project has been collecting annual data on romantic relationships and family dynamics since 2008.

The researchers employed a cohort-sequential design, allowing them to compare different birth cohorts at similar ages. They focused on four birth cohorts (1971-1973, 1981-1983, 1991-1993, and 2001-2003) and three age groups: adolescents (14-20 years), emerging adults (24-30 years), and established adults (34-40 years).

For their main analyses, the team included 2,936 participants who remained single throughout the study period. These individuals provided annual data on their satisfaction with singlehood and overall life satisfaction over three consecutive years.

The researchers used sophisticated statistical techniques, including multilevel growth-curve models, to examine how satisfaction changed over time and how it differed between cohorts, age groups, and based on individual characteristics like gender and personality traits.
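The paper’s data aren’t reproduced here, but a multilevel growth-curve model of the kind described can be sketched with standard tools. In the hypothetical example below, the file and column names are invented; the idea is simply repeated satisfaction ratings nested within participants, with fixed effects for survey wave, birth cohort, and age group, and a random slope over time for each person.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per person per annual wave.
# Column names are invented for illustration; the pairfam variables differ.
df = pd.read_csv("singles_long.csv")  # columns: pid, wave, cohort, age_group, satisfaction

# Random intercept and slope per participant; fixed effects for wave, cohort,
# and age group approximate a multilevel growth-curve model.
model = smf.mixedlm(
    "satisfaction ~ wave * cohort + age_group",
    data=df,
    groups=df["pid"],
    re_formula="~wave",
)
result = model.fit()
print(result.summary())
```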

Results Breakdown

The study’s findings can be broken down into several key areas:

  1. Prevalence of singles: Adolescents born in 2001-2003 were about 3% more likely to be single compared to those born in 1991-1993. This difference was not observed for older age groups.
  2. Satisfaction with singlehood: Later-born adolescents (2001-2003) reported significantly higher satisfaction with being single compared to earlier-born adolescents (1991-1993). This difference was not found among emerging or established adults.
  3. Life satisfaction: There were no significant cohort differences in overall life satisfaction for singles.
  4. Age effects: Across cohorts, adolescent singles reported higher satisfaction (both with singlehood and life in general) compared to adult singles.
  5. Gender differences: Contrary to expectations, single women in established adulthood (34-40 years) reported higher satisfaction with singlehood than single men in the same age group.
  6. Personality effects: Higher levels of neuroticism were associated with lower satisfaction among singles, while the effects of extraversion were less consistent.
  7. Changes over time: On average, satisfaction with singlehood tended to decline over the two-year study period for all age groups.

Limitations

The researchers acknowledge several limitations to their study:

  1. Time frame: The study compared cohorts separated by only 10 years. Longer time periods might reveal more pronounced effects of historical changes.
  2. Period vs. cohort effects: It’s challenging to completely separate the effects of being born in a certain time period from the effects of experiencing certain events (like the COVID-19 pandemic) at a particular age.
  3. Age range: The study focused on individuals up to age 40, so the findings may not generalize to older singles.
  4. Cultural context: The research was conducted in Germany, and the results might differ in countries with more traditional views on marriage and family.
  5. Limited factors: While the study examined several individual characteristics, there are many other factors that could influence singles’ satisfaction that were not included in this analysis.

Discussion and Takeaways

The study’s findings offer several important insights and raise intriguing questions for future research:

Changing norms: The higher prevalence and satisfaction with singlehood among recent cohorts of adolescents suggests that societal norms around romantic relationships may be shifting. This could have implications for future patterns of partnership, marriage, and family formation.

Age-specific effects: The fact that historical changes were only observed among adolescents, not adults, indicates that this age group may be particularly responsive to shifting social norms. This aligns with developmental theories suggesting adolescence is a key period for identity formation and susceptibility to societal influences.

Individual differences matter: While cohort effects were observed, individual factors like age and personality traits emerged as stronger predictors of singles’ satisfaction. This highlights the importance of considering both societal and personal factors in understanding relationship experiences.

Declining satisfaction over time: Researchers say the general trend of decreasing satisfaction with singlehood over time suggests that there may still be challenges associated with long-term singlehood, even as social acceptance increases.

Gender dynamics: The finding that older single women reported higher satisfaction than older single men contradicts some previous assumptions and warrants further investigation into changing gender roles and expectations.

Neuroticism’s impact: The consistent negative relationship between neuroticism and satisfaction among singles points to the importance of emotional stability and coping skills in navigating singlehood.

Adolescent well-being: The higher overall satisfaction reported by adolescent singles compared to adult singles raises questions about the pressures and expectations that may emerge in adulthood regarding romantic relationships.

Source: https://studyfinds.org/are-todays-teens-more-content-being-single-study-reveals-surprising-trends/

Average young adult predicts they’ll be dead by 76!

(© Syda Productions – stock.adobe.com)

Have you ever wondered how long you’ll live? A recent study has revealed some intriguing insights into how different age groups perceive their own mortality. Buckle up, because the results might surprise you! The research surveyed 2,000 adults across the United Kingdom. It turns out that millennials (those in the 35-44 age bracket) believe they’ll reach the ripe old age of 81. Conversely, their younger Gen Z counterparts (the under-24 crowd) are a bit more pessimistic, expecting to only make it to 76. In fact, 1 in 6 Gen Z participants aren’t even sure they’ll be alive in time for retirement!

But here’s the kicker: those over 65 are the most optimistic of all, anticipating they’ll live until 84 – the highest estimate of any age group.

And what about the battle of the sexes? Well, men seem to think they’ll outlast women, predicting an average lifespan of 82 compared to women’s 80. However, the joke might be on them, as women typically have a longer life expectancy than men.

The study, commissioned by UK life insurance brand British Seniors, also found that a whopping 65% of respondents sometimes or often contemplate their own mortality. As a spokesperson from British Seniors put it, “The research has revealed a fascinating look into these predictions and differences between gender, location, and age group. Such conversations are becoming more open than ever – as well as discussion of how you’d like your funeral to look.”

Speaking of funerals, 23% of adults have some or all of their funeral plans in place. A quarter don’t want any fuss for their send-off, while 20% are happy with whatever their friends and family decide on. The report revealed that 21% have discussed their own funeral with someone else, and 35% of those over 65 have explained their preferences to someone.

So, what’s the secret to a long life? According to the respondents, leading an active lifestyle, not smoking, keeping the brain ticking, and having good genetics and family history on their side are all key factors. And when it comes to approaching life, 37% believe in being balanced, 20% want to live it to the fullest, and 16% think slow and steady wins the race.

Source: https://studyfinds.org/young-adult-lifespan-prediction/

Survey says it takes nearly 2 months of exercise before you’ll start to look more fit

(© rangizzz – stock.adobe.com)

The poll of 2,000 adults reveals what goals people prioritize when it comes to their fitness. Above all, they’re aiming to lose a certain amount of weight (43%), increase their general strength (43%) and increase their general mobility (35%).

However, 48 percent are worried about potentially losing the motivation to get fit and 65 percent believe the motivation to increase their level of physical fitness wanes over time.

According to respondents, the motivation to keep going lasts for about four weeks before needing a new push.

The survey, commissioned by Optimum Nutrition and conducted by TalkerResearch, finds that for a large majority of Americans (89%), diet affects their level of fitness motivation.

Nearly three in 10 (29%) believe they don’t get enough protein in their diet, lacking it either “all the time” (19%) or often (40%).

Gen X respondents feel like they are lacking protein the most out of all generations (35%), compared to millennials (34%), Gen Z (27%) and baby boomers (21%). Plus, over one in three (35%) women don’t think they get enough protein vs. 23 percent of men.

The average person has two meals per day that don’t include protein, but 61 percent would be more likely to increase their protein intake to help achieve their fitness goals.

As people reflect on health and wellness goals, the most common experiences that make people feel out of shape include running out of breath often (49%) and trying on clothing that no longer fits (46%).

Over a quarter (29%) say they realized they were out of shape after not being able to walk up a flight of stairs without feeling winded.

Source: https://studyfinds.org/survey-says-it-takes-nearly-2-months-of-exercise-before-youll-start-to-look-more-fit/

What it’s like to have aphantasia, the condition that turns off the mind’s eye

Concept of aphantasia, inability to visualize and create mental images. (© Studio Light & Shade – stock.adobe.com)

Close your eyes and try to picture a loved one’s face or your childhood home. For most people, this conjures up a mental image, perhaps fuzzy but still recognizable. But for a small percentage of the population, this simple act of imagination draws a complete blank. No colors, no shapes, no images at all – just darkness. This condition, known as aphantasia, is shedding new light on the nature of human imagination and consciousness.

A recent review published in Trends in Cognitive Sciences explores the fascinating world of aphantasia and its opposite extreme, hyperphantasia – imagery so vivid it rivals actual perception. These conditions, affecting roughly 1% and 3% of the population, respectively, are opening up new avenues for understanding how our brains create and manipulate mental images.

Aphantasia, from the Greek “a” (without) and “phantasia” (imagination), was only recently named in 2015 by Adam Zeman, a professor at the University of Exeter, though the phenomenon was first noted by Sir Francis Galton in the 1880s. People with aphantasia report being unable to voluntarily generate visual images in their mind’s eye. This doesn’t mean they lack imagination altogether – many excel in abstract or spatial thinking – but they can’t “see” things in their mind the way most people can.

On the flip side, those with hyperphantasia experience incredibly vivid mental imagery, sometimes indistinguishable from actual perception. These individuals might be able to recall a scene from a movie in perfect detail or manipulate complex visual scenarios in their minds with ease.

What’s particularly intriguing about these conditions is that they often affect multiple senses. Many people with aphantasia report difficulty imagining sounds, smells, or tactile sensations as well. This suggests that the ability to generate mental imagery might be a fundamental aspect of how our brains process and represent information.

The review, authored by Zeman, delves into the growing body of research on these conditions. Some key findings include the apparent genetic component of aphantasia, as it seems to run in families. People with aphantasia often have reduced autobiographical memory – they can recall facts about their past but struggle to “relive” experiences in their minds. Interestingly, many people with aphantasia still experience visual dreams, suggesting different neural mechanisms for voluntary and involuntary imagery.

There’s a higher prevalence of aphantasia among people in scientific and technical fields, while hyperphantasia is more common in creative professions. Additionally, aphantasia is associated with some difficulties in face recognition and a higher likelihood of having traits associated with autism spectrum disorders.

These findings paint a complex picture of how mental imagery relates to other cognitive processes and even career choices. But perhaps most importantly, they’re challenging our assumptions about what it means to “imagine” something.

Methodology: Peering into the Mind’s Eye
Studying something as subjective as mental imagery poses unique challenges. How do you measure something that exists only in someone’s mind? Zeman reviewed about 50 previous studies to reach his takeaways about the condition.

Researchers across these studies developed several clever approaches to better understand aphantasia. The most common method is simply asking people to rate the vividness of their mental images using self-report questionnaires like the Vividness of Visual Imagery Questionnaire (VVIQ).

Researchers also use behavioral tasks that typically require mental imagery and compare performance between those with and without aphantasia. For example, participants might be asked to compare the colors of two objects without seeing them. Some studies have looked at physical responses that correlate with mental imagery, such as pupil dilation in response to imagining bright or dark scenes.

Brain imaging techniques, particularly functional MRI, allow researchers to see which brain areas activate during imagery tasks in people with different imagery abilities. Another interesting technique is binocular rivalry, which uses the tendency for mental imagery to bias subsequent perception. It’s been used to objectively measure imagery strength.

These varied approaches help researchers triangulate on the nature of mental imagery and its absence in aphantasia, providing a more comprehensive understanding of these phenomena.

Results: A World Without Pictures
The review synthesizes findings from numerous studies, revealing a complex picture of how aphantasia affects cognition and behavior. While general memory function is largely intact, people with aphantasia often report less vivid and detailed autobiographical memories. They can recall facts about events but struggle to “relive” them mentally.

Contrary to what one might expect, aphantasia doesn’t necessarily impair creativity. Many successful artists and writers have aphantasia, suggesting alternative routes to creative thinking. Some studies suggest that people with aphantasia have a reduced emotional response to written scenarios, possibly because they can’t visualize the described scenes.

Surprisingly, many people with aphantasia report normal visual dreams. This dissociation between voluntary and involuntary imagery is a puzzle for researchers. There’s also a higher prevalence of face recognition difficulties among those with aphantasia, though the connection isn’t fully understood.

While object imagery is impaired, spatial imagery abilities are often preserved in aphantasia. This suggests different neural underpinnings for these two types of mental representation. Neuroimaging studies show differences in connectivity between frontal and visual areas of the brain in people with aphantasia, potentially explaining the difficulty in generating voluntary mental images.

“Despite the profound contrast in subjective experience between aphantasia and hyperphantasia, effects on everyday functioning are subtle – lack of imagery does not imply lack of imagination. Indeed, the consensus among researchers is that neither aphantasia nor hyperphantasia is a disorder. These are variations in human experience with roughly balanced advantages and disadvantages. Further work should help to spell these out in greater detail,” Prof. Zeman says in a media release.

Source: https://studyfinds.org/what-its-like-to-have-aphantasia/

Gold goes 2D: Scientists create ultra-thin ‘goldene’ sheets

Lars Hultman, professor of thin film physics, and Shun Kashiwaya, researcher at the Materials Design Division at Linköping University. (Credit: Olov Planthaber)

In a remarkable feat of nanoscale engineering, scientists have created the world’s thinnest gold sheets at just one atom thick. This new material, dubbed “goldene,” could revolutionize fields from electronics to medicine, offering unique properties that bulk gold simply can’t match.

The research team, led by scientists from Linköping University in Sweden, managed to isolate single-atom layers of gold by cleverly manipulating the metal’s atomic structure. Their findings, published in the journal Nature Synthesis, represent a significant breakthrough in the rapidly evolving field of two-dimensional (2D) materials.

Since the discovery of graphene — single-atom-thick sheets of carbon — in 2004, researchers have been racing to create 2D versions of other elements. While 2D materials made from carbon, boron, and even iron have been achieved, gold has proven particularly challenging. Previous attempts resulted in gold sheets several atoms thick or required the gold to be supported by other materials.

The Swedish team’s achievement is particularly noteworthy because they created free-standing sheets of gold just one atom thick. This ultra-thin gold, or goldene, exhibits properties quite different from its three-dimensional counterpart. For instance, the atoms in goldene are packed more tightly together, with about 9% less space between them compared to bulk gold. This compressed structure leads to changes in the material’s electronic properties, which could make it useful for a wide range of applications.

One of the most exciting potential uses for goldene is in catalysis, which is the process of speeding up chemical reactions. Gold nanoparticles are already used as catalysts in various industrial processes, from converting harmful vehicle emissions into less dangerous gases to producing hydrogen fuel. The researchers believe that goldene’s extremely high surface-area-to-volume ratio could make it an even more efficient catalyst.

The creation of goldene also opens up new possibilities in fields like electronics, photonics, and medicine. For example, the material’s unique optical properties could lead to improved solar cells or new types of sensors. In medicine, goldene might be used to create ultra-sensitive diagnostic tools or to deliver drugs more effectively within the body.

How They Did It: Peeling Gold Atom by Atom
The process of creating goldene is almost as fascinating as the material itself. The researchers used a technique that might be described as atomic-scale sculpting, carefully removing unwanted atoms to leave behind a single layer of gold.

They started with a material called Ti3AuC2, which is part of a family of compounds known as MAX phases. These materials have a layered structure, with sheets of titanium carbide (Ti3C2) alternating with layers of gold atoms. The challenge was to remove the titanium carbide layers without disturbing the gold.

To accomplish this, the team used a chemical etching process. They immersed the Ti3AuC2 in a carefully prepared solution containing potassium hydroxide and potassium ferricyanide, known as Murakami’s reagent. This solution selectively attacks the titanium carbide layers, gradually dissolving them away.

However, simply etching away the titanium carbide wasn’t enough. Left to their own devices, the freed gold atoms would quickly clump together, forming 3D nanoparticles instead of 2D sheets. To prevent this, the researchers added surfactants — molecules that help keep the gold atoms spread out in a single layer.

Two key surfactants were used: cetrimonium bromide (CTAB) and cysteine. These molecules attach to the surface of the gold, creating a protective barrier that prevents the atoms from coalescing. The entire process took about a week, with the researchers carefully controlling the concentration of the etching solution and surfactants to achieve the desired result.

For the first time, scientists have managed to create sheets of gold only a single atom layer thick. (Credit: Olov Planthaber)

Results: A New Form of Gold Emerges

The team’s efforts resulted in sheets of gold just one atom thick, confirmed through high-resolution electron microscopy. These goldene sheets showed several interesting properties:

  1. Compressed structure: The gold atoms in goldene are packed about 9% closer together than in bulk gold. This compression changes how the electrons in the material behave, potentially leading to new electronic and optical properties.
  2. Increased binding energy: X-ray photoelectron spectroscopy revealed that the electrons in goldene are more tightly bound to their atoms compared to bulk gold. This shift in binding energy could affect the material’s chemical reactivity.
  3. Rippling and curling: Unlike perfectly flat sheets, the goldene layers showed some rippling and curling, especially at the edges. This behavior is common in 2D materials and can influence their properties.
  4. Stability: Computer simulations suggested that goldene should be stable at room temperature, although the experimental samples showed some tendency to form blobs or clump together over time.

The researchers also found that they could control the thickness of the gold sheets by adjusting their process. Using slightly different conditions, they were able to create two- and three-atom-thick sheets of gold as well.

Limitations and Challenges

  1. Scale: The current process produces relatively small sheets of goldene, typically less than 100 nanometers across. Scaling up production to create larger sheets will be crucial for many potential applications.
  2. Stability: Although computer simulations suggest goldene should be stable, the experimental samples showed some tendency to curl and form blobs, especially at the edges. Finding ways to keep the sheets flat and prevent them from clumping together over time will be important.
  3. Substrate dependence: The goldene sheets were most stable when still partially attached to the original Ti3AuC2 material or when supported on a substrate. Creating large, free-standing sheets of goldene remains a challenge.
  4. Purity: The etching process leaves some residual titanium and carbon atoms mixed in with the gold. While these impurities are minimal, they could affect the material’s properties in some applications.
  5. Reproducibility: The process of creating goldene is quite sensitive to the exact conditions used. Ensuring consistent results across different batches and scaling up production will require further refinement of the technique.

The surprising cure for chronic back pain? Just take a walk

(© glisic_albina – stock.adobe.com)

For anyone who has experienced the debilitating effects of low back pain, the results of an eye-opening new study may be a game-changer. Researchers have found that a simple, accessible program of progressive walking and education can significantly reduce the risk of recurring low back pain flare-ups in adults. The implications are profound — managing this pervasive condition no longer requires costly equipment or specialized rehab facilities. Instead, putting on a pair of sneakers and taking a daily stroll could be one of the best preventative therapies available.

Australian researchers, publishing their work in The Lancet, recruited over 700 adults across the country who had recently recovered from an episode of non-specific low back pain, which lasted at least 24 hours and interfered with their daily activities. The participants were divided into two groups: one received an individualized walking and education program guided by a physiotherapist over six months, and the other received no treatment at all during the study.

Participants were then carefully followed for at least one year, and up to nearly three years in some cases. The researchers meticulously tracked any recurrences of low back pain that were severe enough to limit daily activities.

“Our study has shown that this effective and accessible means of exercise has the potential to be successfully implemented at a much larger scale than other forms of exercise,” says lead author Dr. Natasha Pocovi in a media release. “It not only improved people’s quality of life, but it reduced their need both to seek healthcare support and the amount of time taken off work by approximately half.”

Methodology: A Step-by-Step Approach

So, what did this potentially back-saving intervention involve? It utilized the principles of health coaching, where physiotherapists worked one-on-one with participants to design and progressively increase a customized walking plan based on the individual’s age, fitness level, and objectives.

The process began with a 45-minute consultation to understand each participant’s history, conduct an examination, and prescribe an initial “walking dose,” which was then gradually ramped up. The guiding target was to work up to walking at least 30 minutes per day, five times per week, by the six-month mark.

During this period, participants also attended education sessions to help overcome fears about back pain while learning easy strategies to self-manage any recurrences. They were provided with a pedometer and a walking diary to track their progress. After the first 12 weeks, they could choose whether to keep using those motivational tools. Follow-up sessions with the physiotherapist every few weeks, either in person or via video call, focused on monitoring progress, adjusting walking plans when needed, and providing encouragement to keep participants engaged over the long haul.

Results: Dramatic Improvement & A Manageable Approach
The impact of this straightforward intervention was striking. Compared to the control group, participants in the walking program experienced a significantly lower risk of suffering a recurrence of low back pain that limited daily activities. Overall, the risk fell by 28%.

Even more impressive, the average time before a recurrence struck was nearly double for the walking group (208 days) versus the control group (112 days). The results for any recurrence of low back pain, regardless of its impact on daily activities, and for recurrences requiring medical care showed similarly promising reductions in risk. Simply put, people in the walking program stayed pain-free for nearly twice as long as those who left their low back pain untreated.

Source: https://studyfinds.org/back-pain-just-take-a-walk/

Intermittent fasting may supercharge ‘natural killer’ cells to destroy cancer

Could skipping a few meals each week help you fight cancer? It might sound far-fetched, but new research suggests that one type of intermittent fasting could actually boost your body’s natural ability to defeat cancer.

(Credit: MIA Studio/Shutterstock)

A team of scientists at Memorial Sloan Kettering Cancer Center (MSK) has uncovered an intriguing link between fasting and the body’s immune system. Their study, published in the journal Immunity, focuses on a particular type of immune cell called natural killer (NK) cells. These cells are like the special forces of your immune system, capable of taking out cancer cells and virus-infected cells without needing prior exposure.

So, what’s the big deal about these NK cells? Well, they’re pretty important when it comes to battling cancerous tumors. Generally speaking, the more NK cells you have in a tumor, the better your chances of beating the disease. However, there’s a catch: the environment inside and around tumors is incredibly harsh. It’s like a battlefield where resources are scarce, and many immune cells struggle to survive.

This is where fasting enters the picture. The researchers found that periods of fasting actually “reprogrammed” these NK cells, making them better equipped to survive in the dangerous tumor environment and more effective at fighting cancer.

“Tumors are very hungry,” says immunologist Joseph Sun, PhD, the study’s senior author, in a media release. “They take up essential nutrients, creating a hostile environment often rich in lipids that are detrimental to most immune cells. What we show here is that fasting reprograms these natural killer cells to better survive in this suppressive environment.”

Illustration of a group of cancer cells. Researchers found that periods of fasting actually “reprogrammed” these NK cells, making them better equipped to survive in the dangerous tumor environment. (© fotoyou – stock.adobe.com)

How exactly does intermittent fasting achieve this?
The study, which was conducted on mice, involved denying the animals food for 24 hours twice a week, with normal eating in between. This intermittent fasting approach had some pretty remarkable effects on the NK cells.

First off, fasting caused the mice’s glucose levels to drop and their levels of free fatty acids to rise. Free fatty acids are a type of lipid (fat) that can be used as an alternative energy source when other nutrients are scarce. The NK cells learned to use these fatty acids as fuel instead of glucose, which is typically their primary energy source.

“During each of these fasting cycles, NK cells learned to use these fatty acids as an alternative fuel source to glucose,” says Dr. Rebecca Delconte, the lead author of the study. “This really optimizes their anti-cancer response because the tumor microenvironment contains a high concentration of lipids, and now they’re able to enter the tumor and survive better because of this metabolic training.”

The fasting also caused the NK cells to move around the body in interesting ways. Many of them traveled to the bone marrow, where they were exposed to high levels of a protein called Interleukin-12. This exposure primed the NK cells to produce more of another protein called Interferon-gamma, which plays a crucial role in fighting tumors. Meanwhile, NK cells in the spleen were undergoing their own transformation, becoming even better at using lipids as fuel. The result? NK cells were pre-primed to produce more cancer-fighting substances and were better equipped to survive in the harsh tumor environment.

 

Source: https://studyfinds.org/intermittent-fasting-fight-cancer/

There are 6 different types of depression, brain pattern study shows

(Image by Feng Yu on Shutterstock)

Depression and anxiety disorders are among the most common mental health issues worldwide, yet current treatments often fail to provide relief for many sufferers. A major challenge has been the heterogeneity of these conditions. Patients with the same diagnosis can have vastly different symptoms and underlying brain dysfunctions. Now, a team of researchers at Stanford University has developed a novel approach to parse this heterogeneity, identifying six distinct “biotypes” of depression and anxiety based on specific patterns of brain circuit dysfunction.

The study, published in Nature Medicine, analyzed brain scans from over 800 patients with depression and anxiety disorders. By applying advanced computational techniques to these scans, the researchers were able to quantify the function of key brain circuits involved in cognitive and emotional processing at the individual level. This allowed them to group patients into biotypes defined by shared patterns of circuit dysfunction, rather than relying solely on symptoms.
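To give a feel for the general idea, here is a minimal sketch: score each patient’s brain-circuit measures against a healthy reference, then cluster patients by the shape of those profiles. The circuit names, the synthetic data, and the choice of k-means with six clusters are assumptions for illustration, not the study’s actual pipeline.

```python
# Minimal sketch: z-score each patient's circuit measures against a healthy
# reference, then group patients by shared dysfunction profiles. All data
# below are synthetic; the circuit list and clustering choices are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
circuits = ["default_mode", "salience", "attention",
            "negative_affect", "positive_affect", "cognitive_control"]

healthy = rng.normal(0.0, 1.0, size=(200, len(circuits)))   # reference group
patients = rng.normal(0.3, 1.2, size=(800, len(circuits)))  # synthetic patients

# Normalize each patient's circuit scores relative to the healthy reference.
mu, sd = healthy.mean(axis=0), healthy.std(axis=0)
z = (patients - mu) / sd

# Cluster patients into putative "biotypes" by their circuit z-score profiles.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(z)
for label in range(6):
    profile = z[kmeans.labels_ == label].mean(axis=0)
    top = circuits[int(np.abs(profile).argmax())]
    print(f"biotype {label}: n={np.sum(kmeans.labels_ == label)}, "
          f"most deviant circuit = {top}")
```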

Intriguingly, the six biotypes showed marked differences not just in their brain function, but also in their clinical profiles. Patients in each biotype exhibited distinct constellations of symptoms, cognitive impairments, and critically, responses to different treatments. For example, one biotype characterized by hyperconnectivity in circuits involved in self-referential thought and salience processing responded particularly well to behavioral therapy. Another, with heightened activity in circuits processing sadness and reward, was distinguished by prominent anhedonia (inability to feel pleasure).

These findings represent a significant step towards a more personalized, brain-based approach to diagnosing and treating depression and anxiety. By moving beyond one-size-fits-all categories to identify subgroups with shared neural mechanisms, this work opens the door to matching patients with the therapies most likely to help them based on the specific way their brain is wired. It suggests that brain circuit dysfunction may be a more meaningful way to stratify patients than symptoms alone. In the future, brain scans could be used to match individual patients with the treatments most likely to work for them, based on their specific neural profile.

More broadly, this study highlights the power of a transdiagnostic, dimensional approach to understanding mental illness. By focusing on neural circuits that cut across traditional diagnostic boundaries, we may be able to develop a more precise, mechanistic framework for classifying these conditions.

“To our knowledge, this is the first time we’ve been able to demonstrate that depression can be explained by different disruptions to the functioning of the brain,” says the study’s senior author, Dr. Leanne Williams, a professor of psychiatry and behavioral sciences, and the director of Stanford Medicine’s Center for Precision Mental Health and Wellness. “In essence, it’s a demonstration of a personalized medicine approach for mental health based on objective measures of brain function.”

The 6 Biotypes Of Depression

  1. The Overwhelmed Ruminator: This biotype has overactive brain circuits involved in self-reflection, detecting important information, and controlling attention. People in this group tend to have slowed-down emotional reactions and attention, but respond well to talk therapy.
  2. The Distracted Impulsive: This biotype has underactive brain circuits that control attention. They tend to have trouble concentrating and controlling impulses, and don’t respond as well to talk therapy.
  3. The Sensitive Worrier: This biotype has overactive brain circuits that process sadness and reward. They tend to have trouble experiencing pleasure and positive emotions.
  4. The Overcontrolled Perfectionist: This biotype has overactive brain circuits involved in regulating behavior and thoughts. They tend to have excessive negative emotions and threat sensitivity, trouble with working memory, but respond well to certain antidepressant medications.
  5. The Disconnected Avoider: This biotype has reduced connectivity in emotion circuits when viewing threatening faces, and reduced activity in behavior control circuits. They tend to have less rumination and faster reaction times to sad faces.
  6. The Balanced Coper: This biotype doesn’t show any major overactivity or underactivity in the brain circuits studied compared to healthy people. Their symptoms are likely due to other factors not captured by this analysis.

Of course, much work remains to translate these findings into clinical practice. The biotypes need to be replicated in independent samples and their stability over time needs to be established. We need to develop more efficient and scalable ways to assess circuit function that could be deployed in routine care. And ultimately, we will need prospective clinical trials that assign patients to treatments based on their biotype.

Nevertheless, this study represents a crucial proof of concept. It brings us one step closer to a future where psychiatric diagnosis is based not just on symptoms, but on an integrated understanding of brain, behavior, and response to interventions. As we continue to map the neural roots of mental illness, studies like this light the way towards more personalized and effective care for the millions of individuals struggling with these conditions.

“To really move the field toward precision psychiatry, we need to identify treatments most likely to be effective for patients and get them on that treatment as soon as possible,” says Dr. Jun Ma, the Beth and George Vitoux Professor of Medicine at the University of Illinois Chicago. “Having information on their brain function, in particular the validated signatures we evaluated in this study, would help inform more precise treatment and prescriptions for individuals.”

Source: https://studyfinds.org/there-are-6-different-types-of-depression-brain-pattern-study-shows/

Super dads, super kids: Science uncovers how the magic of fatherly care boosts child development

(Photo by Ketut Subiyanto from Pexels)

The crucial early years of a child’s life lay the foundation for their lifelong growth and happiness. Spending quality time with parents during these formative stages can lead to substantial positive changes in children. With that in mind, researchers have found an important link between a father’s involvement and their child’s successful development, both mentally and physically. Simply put, being a “super dad” results in raising super kids.

However, in Japan, where this study took place, a historical gender-based division of labor has limited fathers’ participation in childcare-related activities, impacting the development of children. Traditionally, Japanese fathers, especially those in their 20s to 40s, have been expected to prioritize work commitments over family responsibilities.

This cultural norm has resulted in limited paternal engagement in childcare, regardless of individual inclinations. The increasing number of mothers entering full-time employment further exacerbates the issue, leaving a void in familial support for childcare. With the central government advocating for paternal involvement in response to low fertility rates, Japanese fathers are now urged to become co-caregivers, shifting away from their traditional role as primary breadwinners.

While recent trends have found a rise in paternal childcare involvement, the true impact of this active participation on a child’s developmental outcomes has remained largely unexplored. This groundbreaking study published in Pediatric Research, utilizing data from the largest birth cohort in Japan, set out to uncover the link between paternal engagement and infant developmental milestones. Led by Dr. Tsuguhiko Kato from the National Center for Child Health and Development and Doshisha University Center for Baby Science, the study delved into this critical aspect of modern parenting.

“In developed countries, the time fathers spend on childcare has increased steadily in recent decades. However, studies on the relationship between paternal care and child outcomes remain scarce. In this study, we examined the association between paternal involvement in childcare and children’s developmental outcomes,” explains Dr. Kato in a media release.

Leveraging data from the Japan Environment and Children’s Study, the research team assessed developmental milestones in 28,050 Japanese children. Paternal involvement in childcare was measured when the children were six months old, and the children were evaluated on various developmental markers at age three. Additionally, the study explored whether maternal parenting stress, assessed at 18 months, mediates these outcomes.

“The prevalence of employed mothers has been on the rise in Japan. As a result, Japan is witnessing a paradigm shift in its parenting culture. Fathers are increasingly getting involved in childcare-related parental activities,” Dr. Kato says.

The study measured paternal childcare involvement through seven key questions, gauging tasks like feeding, diaper changes, bathing, playtime, outdoor activities, and dressing. Each father’s level of engagement was scored accordingly. The research findings were then correlated with the extent of developmental delay in infants, as evaluated using the Ages and Stages questionnaire.
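As a rough illustration of how such a score might be tallied, here is a minimal sketch. The article names six of the seven tasks, so a generic seventh item stands in; the frequency scale and grouping cutoffs are assumptions, not the study’s actual coding.

```python
# Illustrative sketch of tallying a seven-item paternal-involvement score.
# "other_care" is a placeholder for the unspecified seventh task; the 0-3
# frequency scale and the low/middle/high cutoffs are assumptions.
TASKS = ["feeding", "diaper_changes", "bathing", "playtime",
         "outdoor_activities", "dressing", "other_care"]

def involvement_score(answers: dict[str, int]) -> int:
    """Sum frequency ratings (0 = never ... 3 = usually) across the seven tasks."""
    return sum(answers[task] for task in TASKS)

def involvement_group(score: int) -> str:
    """Rough split of the 0-21 range into low / middle / high involvement."""
    if score <= 7:
        return "low"
    if score <= 14:
        return "middle"
    return "high"

example = {task: 2 for task in TASKS}  # father reports "often" on every task
total = involvement_score(example)
print(total, involvement_group(total))  # -> 14 middle
```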

Source: https://studyfinds.org/super-dads-super-kids/

Women are losing their X chromosomes — What’s causing it?

(Credit: ustas7777777/Shutterstock)

A groundbreaking new study has uncovered genetic factors that may help explain why some women experience a phenomenon called mosaic loss of the X chromosome (mLOX) as they age. With mLOX, some of a woman’s blood cells randomly lose one of their two X chromosomes over time. Concerningly, scientists believe this genetic oddity may lead to the development of several diseases, including cancer.

Researchers with the National Institutes of Health found that certain inherited gene variants make some women more susceptible to developing mLOX in the first place. Other genetic variations they identified seem to give a selective growth advantage to the blood cells that retain one X chromosome over the other after mLOX occurs.

Importantly, the study published in the journal Nature confirmed that women with mLOX have an elevated risk of developing blood cancers like leukemia and increased susceptibility to infections like pneumonia. This underscores the potential health implications of this chromosomal abnormality.

As some women age, their white blood cells can lose a copy of chromosome X. A new study sheds light on the potential causes and consequences of this phenomenon. (Credit: Created by Linda Wang with Biorender.com)

Paper Summary

Methodology

To uncover the genetic underpinnings of mLOX, the researchers conducted a massive analysis of nearly 900,000 women’s blood samples from eight different biobanks around the world. About 12% of these women showed signs of mLOX in their blood cells.

Results

By comparing the DNA of women with and without mLOX, the team pinpointed 56 common gene variants associated with developing the condition. Many of these genes are known to influence processes like abnormal cell division and cancer susceptibility. The researchers also found that rare mutations in a gene called FBXO10 could double a woman’s risk of mLOX. This gene likely plays an important role in the cellular processes that lead to randomly losing an X chromosome.
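As a rough illustration of the kind of comparison involved, the sketch below runs a single-variant association test on invented counts. Real biobank analyses at this scale rely on logistic regression with covariates across millions of variants; the numbers here are not from the study.

```python
# Toy sketch of a single-variant association test: compare how often a variant
# appears in women with mLOX versus women without it. Counts are invented.
from scipy.stats import fisher_exact

#                       variant carrier   non-carrier
women_with_mlox =      [1200,             8800]
women_without_mlox =   [6000,             84000]

table = [women_with_mlox, women_without_mlox]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
# An odds ratio well above 1 would flag the variant for follow-up analysis.
```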

Source: https://studyfinds.org/women-losing-x-chromosomes/

Facially expressive people are more well-liked, socially successful

(Photo by airdone on Shutterstock)

Are you an open book, your face broadcasting every passing emotion, or more of a stoic poker face? Scientists at Nottingham Trent University say that wearing your heart on your sleeve (or rather, your face) could actually give you a significant social advantage. Their research shows that people who are more facially expressive are more well-liked by others, considered more agreeable and extraverted, and even fare better in negotiations if they have an amenable personality.

The study, led by Eithne Kavanagh, a research fellow at NTU’s School of Social Sciences, is the first large-scale systematic exploration of individual differences in facial expressivity in real-world social interactions. Across two studies involving over 1,300 participants, Kavanagh and her team found striking variations in how much people moved their faces during conversations. Importantly, this expressivity emerged as a stable individual trait. People displayed similar levels of facial expressiveness across different contexts, with different social partners, and even over time periods up to four months.

Connecting facial expressions with social success
So what drives these differences in facial communication styles and why do they matter? The researchers say that facial expressivity is linked to personality, with more agreeable, extraverted and neurotic individuals displaying more animated faces. But facial expressiveness also translated into concrete social benefits above and beyond the effects of personality.

In a negotiation task, more expressive individuals were more likely to secure a larger slice of a reward, but only if they were also agreeable. The researchers suggest that for agreeable folks, dynamic facial expressions may serve as a tool for building rapport and smoothing over conflicts.

Across the board, the results point to facial expressivity serving an “affiliative function,” or a social glue that fosters liking, cooperation and smoother interactions. Third-party observers and actual conversation partners consistently rated more expressive people as more likable.

Expressivity was also linked to being seen as more “readable,” suggesting that an animated face makes one’s intentions and mental states easier for others to decipher. Beyond frequency of facial movements, people who deployed facial expressions more strategically to suit social goals, such as looking friendly in a greeting, were also more well-liked.

“This is the first large scale study to examine facial expression in real-world interactions,” Kavanagh says in a media release. “Our evidence shows that facial expressivity is related to positive social outcomes. It suggests that more expressive people are more successful at attracting social partners and in building relationships. It also could be important in conflict resolution.”

Taking our faces at face value
The study, published in Scientific Reports, represents a major step forward in understanding the dynamics and social significance of facial expressions in everyday life. Moving beyond the traditional focus on static, stylized emotional expressions, it highlights facial expressivity as a consequential and stable individual difference.

The findings challenge the “poker face” intuition that a still, stoic demeanor is always most advantageous. Instead, they suggest that for most people, allowing one’s face to mirror inner states and intentions can invite warmer reactions and reap social rewards. The authors propose that human facial expressions evolved largely for affiliative functions, greasing the wheels of social cohesion and cooperation.

The results also underscore the importance of studying facial behavior situated in real-world interactions to unveil its true colors and consequences. Emergent technologies like automated facial coding now make it feasible to track the face’s mercurial movements in the wild, opening up new horizons for unpacking how this ancient communication channel shapes human social life.

Far from mere emotional readouts, our facial expressions appear to be powerful tools in the quest for interpersonal connection and social success. As the researchers conclude, “Being facially expressive is socially advantageous.” So the next time you catch yourself furrowing your brow or flashing a smile, know that your face just might be working overtime on your behalf to help you get ahead.

 

Source: https://studyfinds.org/facially-expressive-people-well-liked-socially-successful/

Can indie games inspire a creative boom from Indian developers?

Visai Games’ Venba won a Bafta Games Award this year

India might not be the first country that springs to mind when someone mentions video games, but it’s one of the fastest-growing markets in the world.
Analysts believe there could be more than half a billion players there by the end of this year.
Most of them are playing on mobile phones and tablets, and fans will tell you the industry is mostly known for fantasy sports games that let you assemble imaginary teams based on real players.
Despite concerns over gambling and possible addiction, they’re big business.
The country’s three largest video game startups – Game 24X7, Dream11 and Mobile Premier League – all provide some kind of fantasy sport experience and are valued at over $1bn.
But there’s hope that a crop of story-driven games making a splash worldwide could inspire a new wave of creativity and investment.
During the recent Summer Game Fest (SGF) – an annual showcase of new and upcoming titles held in Los Angeles and watched by millions – audiences saw previews of a number of story-rich titles from South Asian teams.

Detective Dotson will also have a companion TV series produced

One of those was Detective Dotson by Masala Games, based in Gujarat, about a failed Bollywood actor turned detective.
Industry veteran Shalin Shodhan is behind the game and tells BBC Asian Network this focus on unique stories is “bucking the trend” in India’s games industry.
He wants video games to become an “interactive cultural export” but says he’s found creating new intellectual property difficult.
“There really isn’t anything in the marketplace to make stories about India,” he says, despite the strength of some of the country’s other cultural industries.
“If you think about how much intellectual property there is in film in India, it is really surprising to think nothing indigenous exists as an original entertainment property in games,” he says.
“It’s almost like the Indian audience accepted that we’re just going to play games from outside.”
Another game shown during SGF was The Palace on the Hill – a “slice-of-life” farming sim set in rural India.
Mala Sen, from developer Niku Games, says games like this and Detective Dotson are what “India needed”.
“We know that there are a lot of people in India who want games where characters and setting are relatable to them,” she says.

Games developed by South Asian teams based in western countries have been finding critical praise and commercial success in recent years.

Venba, a cooking sim that told the story of a migrant family reconnecting with their heritage through food, became the first game of its kind to take home a Bafta Game Award this year.

Canada-based Visai Games, which developed the title, was revealed during SGF as one of the first beneficiaries of a new fund set up by Among Us developer Innersloth to boost fellow indie developers.

That will go towards their new, unnamed project based on ancient Tamil legends.

Another title awarded funding by the scheme was Project Dosa, from developer Outerloop, that sees players pilot giant robots, cook Indian food and fight lawyers.

Its previous game, Thirsty Suitors, was also highly praised and nominated for a Bafta award this year.

Games such as these resonating with players worldwide helps change perceptions across the wider industry, says Mumbai-based Indrani Ganguly, of Duronto Games.

“Finally, people are starting to see we’re not just a place for outsource work,” she says.

“We’re moving from India being a technical space to more of a creative hub.

“I’m not 100% seeing a shift but that’s more of a mindset thing.

“People who are able to make these kinds of games have always existed but now there is funding and resource opportunities available to be able to act on these creative visions.”

Earth’s inner core rotation slows down and reverses direction. What does this mean for the planet?

(Image by DestinaDesign on Shutterstock)

Earth’s inner core, a solid iron sphere nestled deep within our planet, has slowed its rotation, according to new research. Scientists from the University of Southern California say their discovery challenges previous notions about the inner core’s behavior and raises intriguing questions about its influence on Earth’s dynamics.

The inner core, a mysterious realm located nearly 3,000 miles beneath our feet, has long been known to rotate independently of the Earth’s surface. Scientists have spent decades studying this phenomenon, believing it to play a crucial role in generating our planet’s magnetic field and shaping the convection patterns in the liquid outer core. Until now, it was widely accepted that the inner core was gradually spinning faster than the rest of the Earth, a process known as super-rotation. However, this latest study, published in the journal Nature, reveals a surprising twist in this narrative.

“When I first saw the seismograms that hinted at this change, I was stumped,” says John Vidale, Dean’s Professor of Earth Sciences at the USC Dornsife College of Letters, Arts and Sciences, in a statement. “But when we found two dozen more observations signaling the same pattern, the result was inescapable. The inner core had slowed down for the first time in many decades. Other scientists have recently argued for similar and different models, but our latest study provides the most convincing resolution.”

Slowing Spin, Reversing Rhythm
By analyzing seismic waves generated by repeating earthquakes in the South Sandwich Islands from 1991 to 2023, the researchers discovered that the inner core’s rotation had not only slowed down but had actually reversed direction. The team focused on a specific type of seismic wave called PKIKP, which traverses the inner core and is recorded by seismic arrays in northern North America. By comparing the waveforms of these waves from 143 pairs of repeating earthquakes, they noticed a peculiar pattern.

Many of the earthquake pairs exhibited seismic waveforms that changed over time, but remarkably, they later reverted to match their earlier counterparts. This observation suggests that the inner core, after a period of super-rotation from 2003 to 2008, had begun to sub-rotate, or spin more slowly than the Earth’s surface, essentially retracing its previous path. The researchers found that from 2008 to 2023, the inner core sub-rotated two to three times more slowly than its prior super-rotation.

The inner core began to decrease its speed around 2010, moving slower than the Earth’s surface. (Credit: USC Graphic/Edward Sotelo)

The study’s findings paint a captivating picture of the inner core’s rotational dynamics. The matching waveforms observed in numerous earthquake pairs indicate moments when the inner core returned to positions it had occupied in the past, relative to the mantle. This pattern, combined with insights from previous studies, reveals that the inner core’s rotation is far more complex than a simple, steady super-rotation.

The researchers discovered that the inner core’s super-rotation from 2003 to 2008 was faster than its subsequent sub-rotation, suggesting an asymmetry in its behavior. This difference in rotational rates implies that the interactions between the inner core, outer core, and mantle are more intricate than previously thought.

Limitations: Pieces Of The Core Puzzle
While the study offers compelling evidence for the inner core’s slowing and reversing rotation, it does have some limitations. The seismic data come from a limited set of repeating earthquakes and recording stations, so spatial coverage of the inner core is relatively sparse. Furthermore, the model used to interpret the waveforms, despite its sophistication, is still a simplified representation of the complex dynamics at play.

The authors emphasize the need for additional high-resolution data from a broader range of locations to strengthen their findings. They also call for ongoing refinement of these models to better capture the intricacies of the inner core’s behavior and its interactions with the outer core and mantle.

Source: https://studyfinds.org/earth-inner-core-rotation-slows/

Mars missions likely impossible for astronauts without kidney dialysis

Photo by Mike Kiev from Unsplash

New study shows damage from cosmic radiation, microgravity could be ‘catastrophic’ for human body
LONDON — As humanity sets its sights on deep space missions to the Moon, Mars, and beyond, a team of international researchers has uncovered a potential problem lurking in the shadows of these ambitious plans: spaceflight-induced kidney damage.

The findings, in a nutshell
In a new study that integrated a dizzying array of cutting-edge scientific techniques, researchers from University College London found that exposure to the unique stressors of spaceflight — such as microgravity and galactic cosmic radiation — can lead to serious, potentially irreversible kidney problems in astronauts.

This sobering discovery, published in Nature Communications, not only highlights the immense challenges of long-duration space travel but also underscores the urgent need for effective countermeasures to protect the health of future space explorers.

“If we don’t develop new ways to protect the kidneys, I’d say that while an astronaut could make it to Mars they might need dialysis on the way back,” says the study’s first author, Dr. Keith Siew, from the London Tubular Centre, based at the UCL Department of Renal Medicine, in a media release. “We know that the kidneys are late to show signs of radiation damage; by the time this becomes apparent it’s probably too late to prevent failure, which would be catastrophic for the mission’s chances of success.”

New research shows that exposure to the unique stressors of spaceflight — such as microgravity and galactic cosmic radiation — can lead to serious, potentially irreversible kidney problems in astronauts. (© alonesdj – stock.adobe.com)

Methodology

To unravel the complex effects of spaceflight on the kidneys, the researchers analyzed a treasure trove of biological samples and data from 11 different mouse missions, five human spaceflights, one simulated microgravity experiment in rats, and four studies exposing mice to simulated galactic cosmic radiation on Earth.

The team left no stone unturned, employing a comprehensive “pan-omics” approach that included epigenomics (studying changes in gene regulation), transcriptomics (examining gene expression), proteomics (analyzing protein levels), epiproteomics (investigating protein modifications), metabolomics (measuring metabolite profiles), and metagenomics (exploring the microbiome). They also pored over clinical chemistry data (electrolytes, hormones, biochemical markers), assessed kidney function, and scrutinized kidney structure and morphology using advanced histology, 3D imaging, and in situ hybridization techniques.

By integrating and cross-referencing these diverse datasets, the researchers were able to paint a remarkably detailed and coherent picture of how spaceflight stressors impact the kidneys at multiple biological levels, from individual molecules to whole organ structure and function.

Results
The study’s findings are as startling as they are sobering. Exposure to microgravity and simulated cosmic radiation induced a constellation of detrimental changes in the kidneys of both humans and animals.

First, the researchers discovered that spaceflight alters the phosphorylation state of key kidney transport proteins, suggesting that the increased kidney stone risk in astronauts is not solely a secondary consequence of bone demineralization but also a direct result of impaired kidney function.

Second, they found evidence of extensive remodeling of the nephron – the basic structural and functional unit of the kidney. This included the expansion of certain tubule segments but an overall loss of tubule density, hinting at a maladaptive response to the unique stressors of spaceflight.

Perhaps most alarmingly, exposing mice to a simulated galactic cosmic radiation dose equivalent to a round trip to Mars led to overt signs of kidney damage and dysfunction, including vascular injury, tubular damage, and impaired filtration and reabsorption.

Piecing together the diverse “omics” datasets, the researchers identified several convergent molecular pathways and biological processes that were consistently disrupted by spaceflight, causing mitochondrial dysfunction, oxidative stress, inflammation, fibrosis, and senescence (cells permanently losing the ability to divide) — all hallmarks of chronic kidney disease.

Source: https://studyfinds.org/mars-missions-catastrophic-astronauts-kidneys/

Being more optimistic can keep you from procrastinating

(© chinnarach – stock.adobe.com)

We’ve all been there — a big task is looming over our heads, but we choose to put it off for another day. Procrastination is so common that researchers have spent years trying to understand what drives some people to chronically postpone important chores until the last possible moment. Now, researchers from the University of Tokyo have found a fascinating factor that may be the cause of procrastination: people’s view of the future.

The findings, in a nutshell
Researchers found evidence that holding a pessimistic view of how stressful the future will be could increase the likelihood of falling into a pattern of severe procrastination. Moreover, the study published in Scientific Reports reveals that having an optimistic view of the future wards off the urge to procrastinate.

“Our research showed that optimistic people — those who believe that stress does not increase as we move into the future — are less likely to have severe procrastination habits,” explains Saya Kashiwakura from the Graduate School of Arts and Sciences at the University of Tokyo, in a media release. “This finding helped me adopt a more light-hearted perspective on the future, leading to a more direct view and reduced procrastination.”

Researchers from the University of Tokyo have found a fascinating factor that may be the cause of procrastination: people’s view of the future. (Credit: Ground Picture/Shutterstock)

Methodology
To examine procrastination through the lens of people’s perspectives on the past, present, and future, the researchers introduced new measures they dubbed the “chronological stress view” and “chronological well-being view.” Study participants were asked to rate their levels of stress and well-being across nine different timeframes: the past 10 years, past year, past month, yesterday, now, tomorrow, next month, next year, and the next 10 years.

The researchers then used clustering analysis to group participants based on the patterns in their responses over time – for instance, whether their stress increased, decreased or stayed flat as they projected into the future. Participants were also scored on a procrastination scale, allowing the researchers to investigate whether certain patterns of future perspective were associated with more or less severe procrastination tendencies.
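As a rough illustration of that clustering step, the sketch below groups synthetic nine-point stress curves by their shape. The simulated ratings and the choice of k-means with four clusters are stand-ins for the study’s actual analysis, used here only to show the mechanics.

```python
# Minimal sketch: each participant rates stress across nine timeframes, and
# participants are grouped by the shape of that curve. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

TIMEFRAMES = ["past 10y", "past 1y", "past month", "yesterday", "now",
              "tomorrow", "next month", "next 1y", "next 10y"]

rng = np.random.default_rng(1)
n = 300
base = rng.uniform(2, 4, size=(n, 1))            # each person's typical stress level
slope = rng.normal(0, 0.3, size=(n, 1))          # falling, flat, or rising over time
t = np.linspace(-1, 1, len(TIMEFRAMES))
noise = rng.normal(0, 0.2, size=(n, len(TIMEFRAMES)))
ratings = np.clip(base + slope * t + noise, 1, 5)

# Group participants by the overall shape of their stress trajectory.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(ratings)
for k in range(4):
    curve = ratings[labels == k].mean(axis=0)
    trend = "descending" if curve[-1] < curve[0] else "ascending or flat"
    print(f"cluster {k}: n={np.sum(labels == k)}, future vs past looks {trend}")
```

Cluster membership could then be cross-tabulated against each participant’s procrastination score, which is the comparison the researchers report.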

Results: Procrastination is All About Mindset
When examining the chronological stress view patterns, the analysis revealed four distinct clusters: “descending” (stress decreases over time), “ascending” (stress increases), “V-shaped” (stress is lowest in the present), and a “skewed mountain” shape where stress peaked in the past and declined toward the future.

Intriguingly, the researchers found a significant relationship between cluster membership and level of procrastination. The percentage of severe procrastinators was significantly lower in the “descending” cluster – those who believed their stress levels would stay flat or decrease as they projected into the future.

Source: https://studyfinds.org/being-more-optimistic-can-keep-you-from-procrastinating/

Who’s most vulnerable to scams? Psychologists reveal who criminals target and why

(Credit: fizkes/Shutterstock)

About 1 in 6 Americans are age 65 or older, and that percentage is projected to grow. Older adults often hold positions of power, have retirement savings accumulated over the course of their lifetimes, and make important financial and health-related decisions – all of which makes them attractive targets for financial exploitation.

In 2021, there were more than 90,000 older victims of fraud, according to the FBI. These cases resulted in US$1.7 billion in losses, a 74% increase compared with 2020. Even so, that may be a significant undercount since embarrassment or lack of awareness keeps some victims from reporting.

Financial exploitation represents one of the most common forms of elder abuse. Perpetrators are often individuals in the victims’ inner social circles – family members, caregivers, or friends – but can also be strangers.

When older adults experience financial fraud, they typically lose more money than younger victims. Those losses can have devastating consequences, especially since older adults have limited time to recoup – dramatically reducing their independence, health, and well-being.

But older adults have been largely neglected in research on this burgeoning type of crime. We are psychologists who study social cognition and decision-making, and our research lab at the University of Florida is aimed at understanding the factors that shape vulnerability to deception in adulthood and aging.

Defining vulnerability
Financial exploitation involves a variety of exploitative tactics, such as coercion, manipulation, undue influence, and, frequently, some sort of deception.

The majority of current research focuses on people’s ability to distinguish between truth and lies during interpersonal communication. However, deception occurs in many contexts – increasingly, over the internet.

Our lab conducts laboratory experiments and real-world studies to measure susceptibility under various conditions: investment games, lie/truth scenarios, phishing emails, text messages, fake news and deepfakes – fabricated videos or images that are created by artificial intelligence technology.

To study how people respond to deception, we use measures like surveys, brain imaging, behavior, eye movement, and heart rate. We also collect health-related biomarkers, such as being a carrier of gene variants that increase risk for Alzheimer’s disease, to identify individuals with particular vulnerability.

And our work shows that an older adult’s ability to detect deception is not just about their individual characteristics. It also depends on how they are being targeted.

Individual risk factors
Better cognition, social and emotional capacities, and brain health are all associated with less susceptibility to deception.

Cognitive functions, such as how quickly our brain processes information and how well we remember it, decline with age and impact decision-making. For example, among people around 70 years of age or older, declines in analytical thinking are associated with reduced ability to detect false news stories.

Additionally, low memory function in aging is associated with greater susceptibility to email phishing. Further, according to recent research, this correlation is specifically pronounced among older adults who carry a gene variant that is a genetic risk factor for developing Alzheimer’s disease later in life. Indeed, some research suggests that greater financial exploitability may serve as an early marker of disease-related cognitive decline.

Social and emotional influences are also crucial. Negative mood can enhance somebody’s ability to detect lies, while positive mood in very old age can impair a person’s ability to detect fake news.

Lack of support and loneliness exacerbate susceptibility to deception. Social isolation during the COVID-19 pandemic has led to increased reliance on online platforms, and older adults with lower digital literacy are more vulnerable to fraudulent emails and robocalls.

Isolation during the COVID-19 pandemic has increased aging individuals’ vulnerability to online scams. (© Andrey Popov – stock.adobe.com)

Finally, an individual’s brain and body responses play a critical role in susceptibility to deception. One important factor is interoceptive awareness: the ability to accurately read our own body’s signals, like a “gut feeling.” This awareness is correlated with better lie detection in older adults.

In one early study, financially exploited older adults had a significantly smaller insula – a brain region key to integrating bodily signals with environmental cues – than older adults who had been exposed to the same threat but avoided it. Reduced insula activity is also related to greater difficulty picking up on cues that make someone appear less trustworthy.

Types of effective fraud
Not all deception is equally effective on everyone.

Our findings show that email phishing that relies on reciprocation – people’s tendency to repay what another person has provided them – was more effective on older adults. Younger adults, on the other hand, were more likely to fall for phishing emails that employed scarcity: people’s tendency to perceive an opportunity as more valuable if they are told its availability is limited. For example, an email might alert you that a coin collection from the 1950s has become available for a special reduced price if purchased within the next 24 hours.

There is also evidence that as we age, we have greater difficulty detecting the “wolf in sheep’s clothing”: someone who appears trustworthy, but is not acting in a trustworthy way. In a card-based gambling game, we found that compared with their younger counterparts, older adults are more likely to select decks presented with trustworthy-looking faces, even though those decks consistently resulted in negative payouts. Even after learning about untrustworthy behavior, older adults showed greater difficulty overcoming their initial impressions.

Reducing vulnerability
Identifying who is especially at risk for financial exploitation in aging is crucial for preventing victimization.

We believe interventions should be tailored instead of a one-size-fits-all approach. For example, perhaps machine learning algorithms could someday determine the most dangerous types of deceptive messages that certain groups encounter – such as in text messages, emails, or social media platforms – and provide on-the-spot warnings. Black and Hispanic consumers are more likely to be victimized, so there is also a dire need for interventions that resonate with their communities.

Prevention efforts would benefit from taking a holistic approach to help older adults reduce their vulnerability to scams. Training in financial, health, and digital literacy are important, but so are programs to address loneliness.

People of all ages need to keep these lessons in mind when interacting with online content or strangers – but not only then. Unfortunately, financial exploitation often comes from individuals close to the victim.

Source: https://studyfinds.org/whos-most-vulnerable-to-scams/

Mushroom-infused ‘microdosing’ chocolate bars are sending people to the hospital, prompting investigation: FDA

The Food and Drug Administration (FDA) is warning consumers about a mushroom-infused chocolate bar that has reportedly sent some people to the hospital.

The FDA released an advisory message about Diamond Shruumz “microdosing” chocolate bars on June 7. The chocolate bars contain a “proprietary nootropics blend” that is said to give a “relaxed euphoric experience without psilocybin,” according to its website.

“The FDA and CDC, in collaboration with America’s Poison Centers and state and local partners, are investigating a series of illnesses associated with eating Diamond Shruumz-brand Microdosing Chocolate Bars,” the FDA’s website reads.

“Do not eat, sell, or serve Diamond Shruumz-Brand Microdosing Chocolate Bars,” the site warns. “FDA’s investigation is ongoing.”

The FDA is warning consumers against Diamond Shruumz chocolate bars. (FDA | iStock)

“Microdosing” is a practice where one takes a very small amount of psychedelic drugs with the intent of increasing productivity, inspiring creativity and boosting mood. According to Diamond Shruumz’s website, the brand said its products help achieve “a subtle, sumptuous experience and a more creative state of mind.”

“We’re talkin’ confections with a kick,” the brand said. “So if you like mushroom chocolate bars and want to mingle with some microdosing, check us out. We just might change how you see the world.”

But government officials warn that the products have caused seizures in some consumers and vomiting in others.

“People who became ill after eating Diamond Shruumz-brand Microdosing Chocolate Bars reported a variety of severe symptoms including seizures, central nervous system depression (loss of consciousness, confusion, sleepiness), agitation, abnormal heart rates, hyper/hypotension, nausea, and vomiting,” the FDA reported.

Six people reportedly experienced reactions severe enough to require hospitalization.

At least eight people have suffered a variety of medical symptoms from the chocolates, including nausea. (iStock)

“All eight people have reported seeking medical care; six have been hospitalized,” the FDA’s press release said. “No deaths have been reported.”

Diamond Shruumz says on its website that its products are not necessarily psychedelic. Although the chocolate is marketed as promising a psilocybin-like experience, there is no psilocybin in it.

“There is no presence of psilocybin, amanita or any scheduled drugs, ensuring a safe and enjoyable experience,” the website claims. “Rest assured, our treats are not only free from psychedelic substances but our carefully crafted ingredients still offer an experience.”

“This allows you to indulge in a uniquely crafted blend designed for your pleasure and peace of mind.”

Officials warn consumers to keep the products out of the reach of minors, as kids and teens may be tempted to eat the chocolate bars.

Source: https://www.foxnews.com/health/mushroom-infused-microdosing-chocolate-bars-sending-people-hospital-prompting-investigation-fda

 

Elephants give each other ‘names,’ just like humans

(Photo by Unsplash+ in collaboration with Getty Images)

They say elephants never forget a face, and now as it turns out, they seem to remember names too. That is, the “names” they have for one another. Yes, believe it or not, a new study shows that elephants actually have the rare ability to identify one another through unique calls, essentially giving one another human-like names when they converse.

Scientists from Colorado State University, along with a team of researchers from Save the Elephants and ElephantVoices, used machine learning to make this fascinating discovery. Their work suggests that elephants possess a level of communication and abstract thought that is more similar to ours than previously believed.

In the study, published in Nature Ecology and Evolution, the researchers analyzed hundreds of recorded elephant calls from Kenya’s Samburu National Reserve and Amboseli National Park. By training a sophisticated model to identify the intended recipient of each call based on its unique acoustic features, they could confirm that elephant calls contain a name-like component, a behavior they had suspected based on observation.

“Dolphins and parrots call one another by ‘name’ by imitating the signature call of the addressee. By contrast, our data suggest that elephants do not rely on imitation of the receiver’s calls to address one another, which is more similar to the way in which human names work,” says lead author Michael Pardo, who conducted the study as a postdoctoral researcher at CSU and Save the Elephants, in a statement.

Once the team pinpointed the specific calls to the corresponding elephants, the scientists played back the recordings and observed their reactions. When the calls were addressed to them, the elephants responded positively by calling back or approaching the speaker. In contrast, calls meant for other elephants elicited less enthusiasm, demonstrating that the elephants recognized their own “names.”

Two juvenile elephants greet each other in Samburu National Reserve in Kenya. (Credit: George Wittemyer)

Elephants’ Brains Even More Complex Than Realized

The ability to learn and produce new sounds, a prerequisite for naming individuals, is uncommon in the animal kingdom. This form of arbitrary communication, where a sound represents an idea without imitating it, is considered a higher-level cognitive skill that greatly expands an animal’s capacity to communicate.

Co-author George Wittemyer, a professor at CSU’s Warner College of Natural Resources and chairman of the scientific board of Save the Elephants, elaborated on the implications of this finding: “If all we could do was make noises that sounded like what we were talking about, it would vastly limit our ability to communicate.” He adds that the use of arbitrary vocal labels suggests that elephants may be capable of abstract thought.

To arrive at these conclusions, the researchers embarked on a four-year study that included 14 months of intensive fieldwork in Kenya. They followed elephants in vehicles, recording their vocalizations and capturing approximately 470 distinct calls from 101 unique callers and 117 unique receivers.

Kurt Fristrup, a research scientist in CSU’s Walter Scott, Jr. College of Engineering, developed a novel signal processing technique to detect subtle differences in call structure. Together with Pardo, he trained a machine-learning model to correctly identify the intended recipient of each call based solely on its acoustic features. This innovative approach allowed the researchers to uncover the hidden “names” within the elephant calls.
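
For readers curious about the mechanics, the statistical core of the claim can be pictured as a classification problem: summarize each call as a vector of acoustic features and test whether a model can predict the intended receiver better than chance. The sketch below is a minimal illustration on synthetic data, not the study's actual pipeline; the feature count, the random-forest model, and the permutation baseline are all assumptions made for the example.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data: ~470 calls, each summarized by a
# vector of acoustic features and labeled with the intended receiver.
n_calls, n_features, n_receivers = 470, 20, 10
receiver_id = rng.integers(0, n_receivers, size=n_calls)

# Give each receiver a faint acoustic "signature" so the toy data mimics
# the hypothesis that calls carry a name-like component.
signatures = rng.normal(size=(n_receivers, n_features))
features = signatures[receiver_id] * 0.5 + rng.normal(size=(n_calls, n_features))

# Can a model guess the receiver from the call's acoustics alone?
model = RandomForestClassifier(n_estimators=300, random_state=0)
observed = cross_val_score(model, features, receiver_id, cv=5).mean()

# Chance baseline: shuffle the receiver labels and repeat.
chance = cross_val_score(model, features, rng.permutation(receiver_id), cv=5).mean()

print(f"observed accuracy: {observed:.2f}  vs  chance: {chance:.2f}")

If the observed accuracy reliably beats the shuffled baseline, the calls carry receiver-specific information, which is the pattern the researchers then confirmed behaviorally with the playback experiments described above.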

Source: https://studyfinds.org/elephants-give-each-other-names/

Baby talk explained! All those sounds mean more than you think

Mother and baby laying down together (Photo by Ana Tablas on Unsplash)

From gurgling “goos” to squealing “wheees!”, the delightful symphony of sounds emanating from a baby’s crib may seem like charming gibberish to the untrained ear. However, a new study suggests that these adorable vocalizations are far more than just random noise — they’re actually a crucial stepping stone on the path to language development.

The research, published in PLOS One, took a deep dive into the vocal patterns of 130 typically developing infants over the course of their first year of life. Their discoveries challenge long-held assumptions about how babies learn to communicate.

Traditionally, many experts believed that infants start out making haphazard sounds, gradually progressing to more structured “baby talk” as they listen to and imitate the adults around them. This new study paints a different picture, one where babies are actively exploring and practicing different categories of sounds in what might be thought of as a precursor to speech.

Think of it like a baby’s very first music lesson. Just as a budding pianist might spend time practicing scales and chords, it seems infants devote chunks of their day to making specific types of sounds, almost as if they’re trying to perfect their technique.

The researchers reached this conclusion after sifting through an enormous trove of audio data captured by small recording devices worn by the babies as they went about their daily lives. In total, they analyzed over 1,100 daylong recordings, adding up to nearly 14,500 hours – or about 1.6 years – of audio.

Using special software to isolate the infant vocalizations, the research team categorized the sounds into three main types: squeals (high-pitched, often excited-sounding noises), growls (low-pitched, often “rumbly” sounds), and vowel-like utterances (which the researchers dubbed “vocants”).

Next, they zoomed in on five-minute segments from each recording, hunting for patterns in how these sound categories were distributed. The results were striking: 40% of the recordings showed significant “clustering” of squeals, with a similar percentage showing clustering of growls. In other words, the babies weren’t randomly mixing their sounds, but rather, they seemed to focus on one type at a time, practicing it intensively.
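
As a rough illustration of what "clustering" means here: take the sequence of labeled sounds in a segment and ask whether one category, say squeals, bunches together more than it would if the same sounds were shuffled randomly in time. The snippet below is a hypothetical sketch of that logic on made-up data; the study's actual statistic and thresholds differ.

import random

def longest_squeal_run(labels):
    """Longest consecutive run of 'squeal' in a segment's label sequence."""
    best = run = 0
    for lab in labels:
        run = run + 1 if lab == "squeal" else 0
        best = max(best, run)
    return best

# A made-up five-minute segment: the infant appears to be "practicing" squeals.
segment = ["vocant"] * 5 + ["squeal"] * 8 + ["growl"] * 3 + ["vocant"] * 4

observed = longest_squeal_run(segment)

# Null model: shuffle the same sounds in time and see how large a run
# arises by chance alone.
random.seed(0)
null_runs = []
for _ in range(10_000):
    shuffled = segment[:]
    random.shuffle(shuffled)
    null_runs.append(longest_squeal_run(shuffled))

p_value = sum(r >= observed for r in null_runs) / len(null_runs)
print(f"observed run = {observed}, p = {p_value:.4f}")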

Source: https://studyfinds.org/baby-talk-explained/

Why do giraffes have long necks? Researchers may finally have the answer

Photo by Krish Radhakrishna from Unsplash

Everything in biology ultimately boils down to food and sex. To survive as an individual, you need food. To survive as a species, you need sex.

Not surprisingly, then, the age-old question of why giraffes have long necks has centered around food and sex. After debating this question for the past 150 years, biologists still cannot agree on which of these two factors was the most important in the evolution of the giraffe’s neck. In the past three years, my colleagues and I have been trying to get to the bottom of this question.

Necks for sex
In the 19th century, biologists Charles Darwin and Jean Baptiste Lamarck both speculated that giraffes’ long necks helped them reach acacia leaves high up in the trees, though they likely weren’t observing actual giraffe behavior when they came up with this theory. Several decades later, when scientists started observing giraffes in Africa, a group of biologists came up with an alternative theory based on sex and reproduction.

These pioneering giraffe biologists noticed how male giraffes, standing side by side, used their long necks to swing their heads and club each other. The researchers called this behavior “neck-fighting” and guessed that it helped the giraffes prove their dominance over each other and woo mates. Males with the longest necks would win these contests and, in turn, boost their reproductive success. That favorability, the scientists predicted, drove the evolution of long necks.

Since its inception, the necks-for-sex sexual selection hypothesis has overshadowed Darwin’s and Lamarck’s necks-for-food hypothesis.

The necks-for-sex hypothesis predicts that males should have longer necks than females since only males use them to fight, and indeed, they do. But adult male giraffes are also about 30% to 50% larger than female giraffes. All of their body components are bigger. So, my team wanted to find out if males have proportionally longer necks when accounting for their overall stature, comprised of their head, neck, and forelegs.

Necks not for sex?
But it’s not easy to measure giraffe body proportions. For one, their necks grow disproportionately faster during the first six to eight years of their life. And in the wild, you can’t tell exactly how old an individual animal is. To get around these problems, we measured body proportions in captive Masai giraffes in North American zoos. Here, we knew the exact age of the giraffes and could then compare this data with the body proportions of wild giraffes that we knew confidently were older than 8 years.

To our surprise, we found that adult female giraffes have proportionally longer necks than males, which contradicts the necks-for-sex hypothesis. We also found that adult female giraffes have proportionally longer body trunks, while adult males have proportionally longer forelegs and thicker necks.
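
To make "proportionally longer" concrete: the comparison is not raw neck length, which larger males would win by default, but neck length relative to the animal's overall stature (head plus neck plus foreleg, as defined above). The numbers below are invented purely for illustration and are not measurements from the study.

# Hypothetical measurements in centimeters; invented for illustration only.
male   = {"head": 72, "neck": 210, "foreleg": 255}
female = {"head": 60, "neck": 185, "foreleg": 205}

def neck_proportion(giraffe):
    """Neck length as a share of overall stature (head + neck + foreleg)."""
    stature = giraffe["head"] + giraffe["neck"] + giraffe["foreleg"]
    return giraffe["neck"] / stature

print(f"male:   {neck_proportion(male):.3f}")    # raw neck is longer...
print(f"female: {neck_proportion(female):.3f}")  # ...yet the proportion favors the female

In this toy example the male's neck is longer in absolute terms, but the female's neck makes up a larger share of her stature, which is the kind of pattern the study reports.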

Giraffe babies don’t have any of these sex-specific body proportion differences. They only appear as giraffes are reaching adulthood.
Finding that female giraffes have proportionally both longer necks and longer body trunks led us to propose that females, and not males, drove the evolution of the giraffe’s long neck, and not for sex but for food and reproduction. Our theory is in agreement with Darwin and Lamarck that food was the major driver for the evolution of the giraffe’s neck but with an emphasis on female reproductive success.

A shape to die for
Giraffes are notoriously picky eaters and browse on fresh leaves, flowers, and seed pods. Female giraffes especially need enough to eat because they spend most of their adult lives either pregnant or providing milk to their calves.

Females tend to use their long necks to probe deep into bushes and trees to find the most nutritious food. By contrast, males tend to feed high in trees by fully extending their necks vertically. Females need proportionally longer trunks to grow calves that can be well over 6 feet tall at birth.

For males, I’d guess that their proportionally longer forelegs are an adaptation that allows them to mount females more easily during sex. While we found that their necks might not be as proportionally long as females’ necks are, they are thicker. That’s probably an adaptation that helps them win neck fights.

Source: https://studyfinds.org/why-do-giraffes-have-long-necks/

Eleven tonnes of rubbish taken off Himalayan peaks

Fewer permits were issued and fewer climbers died on Mount Everest in 2024 than in 2023.

The Nepalese army says it has removed eleven tonnes of rubbish, four corpses and one skeleton from Mount Everest and two other Himalayan peaks this year.
It took troops 55 days to recover the rubbish and bodies from Everest, Nuptse and Lhotse mountains.
It is estimated that more than fifty tonnes of waste and more than 200 bodies cover Everest.
The army began conducting an annual clean-up of the mountain, which is often described as the world’s highest garbage dump, in 2019 amid concerns about overcrowding and climbers queueing in dangerous conditions to reach the summit.
The five clean-ups have collected 119 tonnes of rubbish, 14 human corpses and some skeletons, the army says.
This year, authorities aimed to reduce rubbish and improve rescues by making climbers wear tracking devices and bring back their own poo.

In the future, the government plans to create a mountain rangers team to monitor rubbish and put more money toward its collection, Nepal’s Department of Tourism director of mountaineering Rakesh Gurung told the BBC.
For the spring climbing season that ended in May, the government issued permits to 421 climbers, down from a record-breaking 478 last year. Those numbers do not include Nepalese guides. In total, an estimated 600 people climbed the mountain this year.
This year, eight climbers died or went missing, compared to 19 last year.
A Brit, Daniel Paterson, and his Nepalese guide, Pastenji Sherpa, are among those missing after being hit by falling ice on 21 May.
Mr Paterson’s family started a fundraiser to hire a search team to find them, but said in an update on 4 June that recovery “is not possible at this time” because of the location and danger of the operation.
Mr Gurung said the number of permits was lower this year because of the global economic situation, because China is also issuing permits, and because of the national election in India, which reduced the number of climbers from that country.
Source: https://www.bbc.com/news/articles/cq5539lj1pqo

Women experience greater mental agility during menstruation

For female athletes, the impact of the menstrual cycle on physical performance has been a topic of much discussion. But what about the mental side of the game? A groundbreaking new study suggests that certain cognitive abilities, particularly those related to spatial awareness and anticipation, may indeed ebb and flow with a woman’s cycle.

(© Denisismagilov | Dreamstime.com)

The findings, in a nutshell
Researchers from University College London tested nearly 400 participants on a battery of online cognitive tasks designed to measure reaction times, attention, visuospatial functions (like 3D mental rotation), and timing anticipation. The study, published in Neuropsychologia, included men, women on hormonal contraception, and naturally cycling women.

Fascinatingly, the naturally cycling women exhibited better overall cognitive performance during menstruation compared to any other phase of their cycle. This held true even though these women reported poorer mood and more physical symptoms during their period. In contrast, performance dipped during the late follicular phase (just before ovulation) and the luteal phase (after ovulation).

“What is surprising is that the participant’s performance was better when they were on their period, which challenges what women, and perhaps society more generally, assume about their abilities at this particular time of the month,” says Dr. Flaminia Ronca, first author of the study from UCL, in a university release.

“I hope that this will provide the basis for positive conversations between coaches and athletes about perceptions and performance: how we feel doesn’t always reflect how we perform.”

This study provides compelling preliminary evidence that sport-relevant cognitive skills may indeed fluctuate across the menstrual cycle, with a surprising boost during menstruation itself. If confirmed in future studies, this could have implications for understanding injury risk and optimizing mental training in female athletes.

Importantly, there was a striking mismatch between women’s perceptions and their actual performance. Many felt their thinking was impaired during their period when, in fact, it was enhanced. This points to the power of negative expectations and the importance of educating athletes about their unique physiology.

Source: https://studyfinds.org/womens-brains-show-more-mental-agility-during-their-periods/

Colon cancer crisis in young people could be fueled by booming drinks brands adored by teens

They are used by millions of workers to power through afternoon slumps.

But highly caffeinated energy drinks could be partly fueling the explosion of colorectal cancers in young people, US researchers warn.

They believe an ingredient in Red Bull and other top brands such as Celsius and Monster may be linked to bacteria in the gut that speeds up tumor growth.

Researchers in Florida theorize that cancer cells use taurine – an amino acid thought to improve mental clarity – as their ‘primary energy source.’

At the world’s biggest cancer conference this week, the team announced a new human trial that will test their hypothesis, which so far is based on animal studies.

They plan to discover whether drinking an energy drink every day causes levels of cancer-causing gut bacteria to rise.

Highly caffeinated energy drinks could be partly fueling the explosion of colorectal cancers in young people, US researchers warn, based on a new hypothesis.

DailyMail.com revealed earlier this week how diets high in sugar and low in fiber may also be contributing to the epidemic of colon cancers in under-50s.

The University of Florida researchers are recruiting around 60 people aged 18 to 40 to be studied for four weeks.

Half of the group will consume at least one original Red Bull or Celsius, a sugar-free energy drink, per day, and their gut bacteria will be compared with those of a control group who don’t.

The upcoming trial is ‘one of the earliest’ studies to evaluate potential factors contributing to the meteoric rise in colorectal cancer, the researchers say.

Early-onset cancers are still uncommon. About 90 percent of all cancers affect people over the age of 50.

But rates in younger age groups have soared around 70 percent since the 1990s, with around 17,000 new cases diagnosed in the US each year.

Source: https://www.dailymail.co.uk/health/article-13493163/red-bull-colon-cancer-crisis-young-people.html

Here’s why sugar wreaks havoc on gut health, worsens inflammatory bowel disease

(Photo by Alexander Grey from Unsplash)

There can be a lot of inconsistent dietary advice when it comes to gut health, but the advice that eating lots of sugar is harmful tends to be the most consistent of all. Scientists from the University of Pittsburgh are now showing that consuming excess sugar disrupts cells that keep the colon healthy in mice with inflammatory bowel disease (IBD).

“The prevalence of IBD is rising around the world, and it’s rising the fastest in cultures with industrialized, urban lifestyles, which typically have diets high in sugar,” says senior author Timothy Hand, Ph.D., associate professor of pediatrics and immunology at Pitt’s School of Medicine and UPMC Children’s Hospital of Pittsburgh. “Too much sugar isn’t good for a variety of reasons, and our study adds to that evidence by showing how sugar may be harmful to the gut. For patients with IBD, high-density sugar — found in things like soda and candy — might be something to stay away from.”

In this study, researchers fed mice either a standard or high-sugar diet, and then mimicked IBD symptoms by exposing them to a chemical called DSS, which damages the colon.

Shockingly, all of the mice that ate a high-sugar diet died within nine days. All of the animals that ate a standard diet lived until the end of the 14-day experiment. To figure out where things went wrong, the team looked for answers inside the colon. Typically, the colon is lined with a layer of epithelial cells that are arranged with finger-like projections called crypts. They are frequently replenished by dividing stem cells to keep the colon healthy.

“The colon epithelium is like a conveyor belt,” explains Hand in a media release. “It takes five days for cells to travel through the circuit from the bottom to the top of the crypt, where they are shed into the colon and defecated out. You essentially make a whole new colon every five days.”

(© T. L. Furrer – stock.adobe.com)

This system collapsed in mice fed a high-sugar diet
In fact, the protective layer of cells was completely gone in some animals, filling the colon with blood and immune cells. This shows that sugar may directly impact the colon, rather than the harm being dependent on the gut microbiome, which is what the team originally thought.

To compare the findings to human colons, the researchers used poppy seed-sized intestinal cultures that could be grown in a lab dish. They found that as sugar concentrations increased, fewer cultures developed, which suggests that sugar hinders cell division.

“We found that stem cells were dividing much more slowly in the presence of sugar — likely too slow to repair damage to the colon,” says Hand. “The other strange thing we noticed was that the metabolism of the cells was different. These cells usually prefer to use fatty acids, but after being grown in high-sugar conditions, they seemed to get locked into using sugar.”

Hand adds that these findings may be key to strengthening existing links between sweetened drinks and worse IBD outcomes.

Source: https://studyfinds.org/sugar-wreaks-havoc-gut-health/

Shocking study claims pollution causes more deaths than war, disease, and drugs combined

(Credit: aappp/Shutterstock)

We often think of war, terrorism, and deadly diseases as the greatest threats to human life. But what if the real danger is something we encounter every day, something that’s in the air we breathe, the water we drink, and even in the noise that surrounds us? A new study published in the Journal of the American College of Cardiology reveals a startling truth: pollution, in all its forms, is now a greater health threat than war, terrorism, malaria, HIV, tuberculosis, drugs, and alcohol combined. Specifically, researchers estimate that manmade pollutants and climate change contribute to a staggering seven million deaths globally each year.

“Every year around 20 million people worldwide die from cardiovascular disease with pollutants playing an ever-increasing role,” explains Professor Jason Kovacic, Director and CEO of the Victor Chang Cardiac Research Institute in Australia, in a media release.

The findings, in a nutshell
The culprits behind this global death toll aren’t just the obvious ones like air pollution from car exhausts or factory chimneys. The study, conducted by researchers from prestigious institutions worldwide, shines a light on lesser-known villains: soil pollution, noise pollution, light pollution, and even exposure to toxic chemicals in our homes.

Think about your daily life. You wake up after a night’s sleep disrupted by the glow of streetlights and the hum of late-night traffic. On your way to work, you’re exposed to car fumes and the blaring horns of impatient drivers. At home, you might be unknowingly using products containing untested chemicals. All these factors, the study suggests, are chipping away at your heart health.

“Pollutants have reached every corner of the globe and are affecting every one of us,” Prof. Kovacic warns. “We are witnessing unprecedented wildfires, soaring temperatures, unacceptable road noise and light pollution in our cities and exposure to untested toxic chemicals in our homes.”

Specifically, researchers estimate that manmade pollutants and climate change contribute to a staggering 7 million deaths globally each year. (© Quality Stock Arts – stock.adobe.com)

How do these pollutants harm our hearts?
Air Pollution: When you inhale smoke from a wildfire or exhaust fumes, these toxins travel deep into your lungs, enter your bloodstream, and then circulate throughout your body. It’s like sending tiny invaders into your system, causing damage wherever they go, including your heart.

Noise and Light Pollution: Ever tried sleeping with a streetlight shining through your window or with noisy neighbors? These disruptions do more than just annoy you—they mess up your sleep patterns. Poor sleep can lead to inflammation in your body, raise your blood pressure, and even cause weight gain. All of these are risk factors for heart disease.

Extreme Heat: Think of your heart as a car engine. On a scorching hot day, your engine works harder to keep cool. Similarly, during a heatwave, your heart has to work overtime. This extra strain, coupled with dehydration and reduced blood volume from sweating, can lead to serious issues like acute kidney failure.

Chemical Exposure: Many household items — from non-stick pans to water-resistant clothing — contain chemicals that haven’t been thoroughly tested for safety. Prof. Kovacic points out, “There are hundreds of thousands of chemicals that haven’t even been tested for their safety or toxicity, let alone their impact on our health.”

The statistics are alarming. Air pollution alone is linked to over seven million premature deaths per year, with more than half due to heart problems. During heatwaves, the risk of heat-related cardiovascular deaths can spike by over 10%. In the U.S., exposure to wildfire smoke has surged by 77% since 2002.

Source: https://studyfinds.org/pollution-causes-more-deaths/

Never-before-seen blue ants discovered in India

In the lush forests of India’s Arunachal Pradesh, a team of intrepid researchers has made a startling discovery: a never-before-seen species of ant that sparkles like a brilliant blue gemstone. The remarkable find marks the first new species of its genus to be identified in India in over 120 years.

The species, dubbed Paraparatrechina neela, was discovered by entomologists Dr. Priyadarsanan Dharma Rajan and Ramakrishnaiah Sahanashree from the Ashoka Trust for Research in Ecology and the Environment (ATREE) in Bengaluru, along with Aswaj Punnath from the University of Florida. The name “neela” comes from Indian languages, meaning the color blue. And for good reason – this ant sports an eye-catching iridescent blue exoskeleton, unlike anything seen before in its genus.

Paraparatrechina is a widespread group of ants found across Asia, Africa, Australia and the Pacific. They are typically small, measuring just a few millimeters in length. Before this discovery, India was home to only one known species in the genus, Paraparatrechina aseta, which was described way back in 1902.

The researchers collected the dazzling P. neela specimens during an expedition in 2022 to the Siang Valley in the foothills of the Eastern Himalayas. Fittingly, this trip was part of a series called the “Siang Expeditions” – a project aiming to retrace the steps of a historic 1911-12 expedition that documented the region’s biodiversity.

Paraparatrechina neela — the blue ant discovered in India’s Himalayas. (Credit: Sahanashree R)

Over a century later, the area still holds surprises. The team found the ants living in a tree hole in a patch of secondary forest, at an altitude of around 800 meters. After carefully extracting a couple of specimens with an aspirator device, they brought them back to the lab for a closer look under the microscope. Their findings are published in the journal ZooKeys.

Beyond its “captivating metallic-blue color,” a unique combination of physical features distinguishes P. neela from its relatives. The body is largely blue, but the legs and antennae fade to a brownish-white. Compared to the light brown, rectangular head of its closest Indian relative, P. aseta, the sapphire ant has a subtriangular head. It also has one less tooth on its mandibles and a distinctly raised section on its propodeum (the first abdominal segment that’s fused to the thorax).

So what’s behind the blue? While pigments provide color for some creatures, in insects, hues like blue are usually the result of microscopic structural arrangements that reflect light in particular ways. Different layers and shapes of the exoskeleton can interact with light to produce shimmering, iridescent effects. This has evolved independently in many insect groups, but is very rare in ants.

The function of the blue coloration remains a mystery for now. In other animals, such striking hues can serve many possible roles – from communication and camouflage to thermoregulation.

“This vibrant feature raises intriguing questions. Does it help in communication, camouflage, or other ecological interactions? Delving into the evolution of this conspicuous coloration and its connections to elevation and the biology of P. neela presents an exciting avenue for research,” the authors write.

A view of Siang Valley. (Credit: Ranjith AP)

The Eastern Himalayas are known to be a biodiversity hotspot, but remain underexplored by scientists. Finding a new species of ant, in a genus that specializes in tiny, inconspicuous creatures, hints at the many more discoveries that likely await in the region’s forests. Who knows – maybe there are entire rainbow-hued colonies of ants hidden in the treetops!

Source: https://studyfinds.org/blue-ants-discovered/

Prenatal stress hormones may finally explain why infants won’t sleep at night

(Photo by Laura Garcia on Unsplash)

Babies exposed to higher levels of stress hormones late in their mother’s pregnancy can end up having trouble falling asleep, researchers explain. The sleep research suggests that measuring cortisol during the third trimester can predict infant sleep patterns up to seven months after a baby’s birth.

Babies often wake up in the middle of the night and have trouble falling asleep. A team from the University of Denver says one possible but unexplored reason for this is how well the baby’s hypothalamic-pituitary-adrenal (HPA) system is working. The HPA system is well-known for regulating the stress response and has previously been linked with sleep disorders when it’s not working properly. Cortisol is the end product of the HPA axis.

What is cortisol?

Cortisol is a steroid hormone produced by the adrenal glands, which are located on top of each kidney. It plays a crucial role in several body functions, including:

Regulation of metabolism: Cortisol helps regulate the metabolism of proteins, fats, and carbohydrates, releasing energy and managing how the body uses these macronutrients.

Stress response: Often referred to as the “stress hormone,” cortisol is released in response to stress and low blood-glucose concentration. It helps the body manage and cope with stress by altering immune system responses and suppressing non-essential functions in a fight-or-flight situation.

Anti-inflammatory effects: Cortisol has powerful anti-inflammatory capabilities, helping to reduce inflammation and assist in healing.

Blood pressure regulation: It helps in maintaining blood pressure and cardiovascular function.

Circadian rhythm influence: Cortisol levels fluctuate throughout the day, typically peaking in the morning and gradually falling to their lowest level at night.

Collecting hair samples is one way to measure fetal cortisol levels in the final trimester of pregnancy.

“Although increases in cortisol across pregnancy are normal and important for preparing the fetus for birth, our findings suggest that higher cortisol levels during late pregnancy could predict the infant having trouble falling asleep,” says lead co-author Melissa Nevarez-Brewster in a media release. “We are excited to conduct future studies to better understand this link.”

The team collected hair cortisol samples from 70 infants during the first few days after birth. Approximately 57% of the infants were girls. When each child was seven months old, parents completed a sleep questionnaire with questions such as how long it took on average for the children to fall asleep, how long the babies stayed awake at night, and the number of times the infants woke up in the middle of the night. The researchers also collected data on each infant’s gestational age at birth and their family’s income.

Source: https://studyfinds.org/prenatal-stress-hormones-may-finally-explain-why-infants-wont-sleep-at-night/

How much stress is too much?

Pedro Figueras / pexels.com

COVID-19 taught most people that the line between tolerable and toxic stress – defined as persistent demands that lead to disease – varies widely. But some people will age faster and die younger from toxic stressors than others.

So, how much stress is too much, and what can you do about it?

I’m a psychiatrist specializing in psychosomatic medicine, which is the study and treatment of people who have physical and mental illnesses. My research is focused on people who have psychological conditions and medical illnesses, as well as those whose stress exacerbates their health issues.

I’ve spent my career studying mind-body questions and training physicians to treat mental illness in primary care settings. My forthcoming book is titled “Toxic Stress: How Stress is Killing Us and What We Can Do About It.”

A 2023 study of stress and aging over the life span – one of the first studies to confirm this piece of common wisdom – found that four measures of stress all speed up the pace of biological aging in midlife. It also found that persistent high stress ages people in a comparable way to the effects of smoking and low socioeconomic status, two well-established risk factors for accelerated aging.

The difference between good stress and the toxic kind

Good stress – a demand or challenge you readily cope with – is good for your health. In fact, the rhythm of these daily challenges, including feeding yourself, cleaning up messes, communicating with one another, and carrying out your job, helps to regulate your stress response system and keep you fit.

Toxic stress, on the other hand, wears down your stress response system in ways that have lasting effects, as psychiatrist and trauma expert Bessel van der Kolk explains in his bestselling book “The Body Keeps the Score.”

The earliest effects of toxic stress are often persistent symptoms such as headache, fatigue, or abdominal pain that interfere with overall functioning. After months of initial symptoms, a full-blown illness with a life of its own – such as migraine headaches, asthma, diabetes, or ulcerative colitis – may surface.

When we are healthy, our stress response systems are like an orchestra of organs that miraculously tune themselves and play in unison without our conscious effort – a process called self-regulation. But when we are sick, some parts of this orchestra struggle to regulate themselves, which causes a cascade of stress-related dysregulation that contributes to other conditions.

For instance, in the case of diabetes, the hormonal system struggles to regulate sugar. With obesity, the metabolic system has a difficult time regulating energy intake and consumption. With depression, the central nervous system develops an imbalance in its circuits and neurotransmitters that makes it difficult to regulate mood, thoughts and behaviors.

‘Treating’ stress
Though stress neuroscience in recent years has given researchers like me new ways to measure and understand stress, you may have noticed that in your doctor’s office, the management of stress isn’t typically part of your treatment plan.

Most doctors don’t assess the contribution of stress to a patient’s common chronic diseases such as diabetes, heart disease, and obesity, partly because stress is complicated to measure and partly because it is difficult to treat. In general, doctors don’t treat what they can’t measure.

Stress neuroscience and epidemiology have also taught researchers recently that the chances of developing serious mental and physical illnesses in midlife rise dramatically when people are exposed to trauma or adverse events, especially during vulnerable periods such as childhood.

Over the past 40 years in the U.S., the alarming rise in rates of diabetes, obesity, depression, PTSD, suicide, and addictions points to one contributing factor that these different illnesses share: toxic stress.

Toxic stress increases the risk for the onset, progression, complications, or early death from these illnesses.

Suffering from toxic stress
Because the definition of toxic stress varies from one person to another, it’s hard to know how many people struggle with it. One starting point is the fact that about 16% of adults report having been exposed to four or more adverse events in childhood. This is the threshold for higher risk for illnesses in adulthood.

Research dating back to before the COVID-19 pandemic also shows that about 19% of adults in the U.S. have four or more chronic illnesses. If you have even one chronic illness, you can imagine how stressful four must be.

And about 12% of the U.S. population lives in poverty, the epitome of a life in which demands exceed resources every day. For instance, if a person doesn’t know how they will get to work each day or doesn’t have a way to fix a leaking water pipe or resolve a conflict with their partner, their stress response system can never rest. One or any combination of threats may keep them on high alert or shut them down in a way that prevents them from trying to cope at all.

Add to these overlapping groups all those who struggle with harassing relationships, homelessness, captivity, severe loneliness, living in high-crime neighborhoods, or working in or around noise or air pollution. It seems conservative to estimate that about 20% of people in the U.S. live with the effects of toxic stress.

Source: https://studyfinds.org/how-much-stress-is-too-much/

Eye Stroke Cases Surge During Heatwave: Symptoms, Prevention Tips

The extreme heat can affect overall health, increasing the risk of heart diseases, brain disorders, and other organ issues.

How to take care of your eyes in summer | Image: Freepik

As heatwaves sweep across various regions, there has been a noticeable increase in eye stroke cases. This condition, also known as retinal artery occlusion, can cause sudden vision loss and is comparable to a brain stroke in its seriousness.

Impact of heatwaves on eye health 

The extreme heat can affect overall health, increasing the risk of heart diseases, brain disorders, and other organ issues. Notably, it can also lead to eye strokes due to dehydration and heightened blood pressure. Dehydration during hot weather makes the blood more prone to clotting, while high temperatures can exacerbate cardiovascular problems, raising the risk of arterial blockages.

Eye stroke

An eye stroke occurs when blood flow to the retina is obstructed, depriving it of oxygen and nutrients. This can cause severe retinal damage in minutes. Dehydration from heatwaves thickens the blood, making clots more likely, while heat stress can worsen cardiovascular conditions, further increasing eye stroke risk.

Signs and symptoms

Sudden Vision Loss: The most common symptom, this can be partial or complete, and typically painless.

Visual Disturbances: Sudden dimming or blurring of vision, where central vision is affected but peripheral vision remains intact.

Preventive measures

Stay Hydrated: Ensure adequate fluid intake to prevent dehydration.

Avoid Peak Sun Hours: Limit exposure to the sun during the hottest parts of the day.

Manage Chronic Conditions: Keep blood pressure and other chronic conditions under control.

Treatment options

Immediate Medical Attention: Urgency is crucial, as delays can lead to permanent vision loss.

Source: https://www.republicworld.com/health/eye-stroke-cases-surge-during-heatwave-symptoms-prevention-tips/?amp=1

5 Hidden Effects Of Childhood Neglect

(Photo by Volurol on Shutterstock)

Trauma, abuse, and neglect — in the current cultural landscape, it’s not hard to find a myriad of discussions on these topics. But with so many people chiming in on the conversation, it’s more important now than ever to listen to what experts on the topic have to say. As we begin to understand more and more about the effects of growing up experiencing trauma and abuse, we also begin to understand that the effects of these experiences are more complex and wide-ranging than we had ever imagined.

Recent studies in the field of childhood trauma and abuse have found that these experiences can affect a wide range of aspects of our adult lives. In fact, even seemingly disparate topics, ranging from your stance on vaccinations to how often you experience headaches to the types of judgments you make about others, are impacted by histories of abuse, trauma, or neglect.

Clearly, the effects of a traumatic childhood go far beyond the time when you are living in an abusive or unhealthy environment. A recent study reports that early childhood traumas can impact health outcomes decades later, potentially following you for the rest of your life. With many new and surprising effects of childhood trauma being discovered every day, it’s no wonder that so many people are interested in what exactly trauma is and how it can affect us.

So, what are the long-term ramifications of childhood neglect? For an answer to that question, StudyFinds sat down with Michael Menard, inventor-turned-author of the upcoming book, “The Kite That Couldn’t Fly: And Other May Avenue Stories,” to discuss the lesser-understood side of trauma and how it can affect us long into our adult lives.

Here is his list of five hidden effects of trauma, and some of them just might surprise you.

1. Unstable Relationships
For individuals with childhood trauma, attachment issues are an often overlooked form of collateral damage. Through infancy and early childhood, a person’s attachment style is developed largely through familial bonds and is then carried into every relationship from platonic peers to romantic partners. When this is lovingly and healthily developed, this is usually a positive thing. But for children and adults with a background of neglect, it often leads to difficulty in finding, developing, and keeping healthy relationships.

As Menard explains it, a childhood spent feeling invisible left scars on his adult relationship patterns. “As a child, I felt that I didn’t exist. No matter what I did, it was not recognized, so there was no reinforcement,” he says. “As a young adult, I panicked when I got ignored. I was afraid that everyone was going to leave. I also felt that I would drive people away in relationships. I would only turn to others when I needed emotional support, never when things were good. When things were good, I could handle them myself. I didn’t need anybody.”

Childhood trauma often creates adults who struggle to be emotionally vulnerable, to process feelings of anger and disappointment, and to accept support from others. And with trust as one of the most vital components of long-term, healthy relationships, it’s clear where difficulty may arise. But Menard emphasizes that a childhood of neglect should not have to mean a lifetime of distant or unstable relationships. “A large percentage of the people that I’ve talked to about struggles in their life, they think it’s life. But we were born to be healthy, happy, prosperous, and anything that is taking away from that is not good,” he says.

“The lesser known [effects] I would say are the things that cause disruption in relationships,” Menard adds. “The divorce rate is about 60%. Where does that come from? It comes from disruption and unhappiness between two people. Lack of respect, love, trust, sacrifice. And if you come into that relationship broken from childhood trauma and you don’t even know it, I’d say that’s not well known.”

2. Physical Health Issues
The most commonly discussed long-term effects of childhood neglect are usually mental and emotional ones. But believe it or not, a background of trauma can actually impact your physical health. From diabetes to cardiac disease, the toll of childhood trauma can turn distinctly physical. “Five of the top 10 diseases that kill us have been scientifically proven to come from childhood trauma,” says Menard. “I’ve got high blood pressure. I go to the doctor, and they can’t figure it out. I have diabetes, hypertension, obesity, cardiac disease, COPD—it’s now known that they have a high probability that they originated from childhood trauma or neglect. Silent killers.”

In some cases, the physical ramifications of childhood trauma may be due to long-term medical neglect. What was once a treatable issue can become a much larger and potentially permanent problem. In Menard’s case, untreated illness in his childhood meant open heart surgery in his adult years. “I’m now 73. When I was 70, my aortic valve closed. I had to have four open heart surgeries in two months — almost died three times,” he explains. “Now, can I blame that on childhood trauma? I can, because I had strep throat repeatedly as a child without medication. One episode turned into rheumatic fever that damaged my aortic valve. 50 years later, I’m having my chest opened up.”

From loss of sleep to chronic pain, the physical manifestations of a neglectful childhood can be painful and difficult. But beyond that, they often go entirely overlooked. For many people, this can feel frustrating and invalidating. For others, they may not know themselves that their emotional pain could be having physical ramifications. As Menard puts it, “things are happening to people that they think [are just] part of life, and [they’re] not.”

3. Mental Health Struggles
Growing up in an abusive or neglectful environment can have a variety of negative effects on children. However, one of the most widely discussed and understood consequences is that of their mental health. “Forty-one percent of all depression in the United States is caused by childhood trauma or comes from childhood trauma,” notes Menard. And this connection between trauma and mental illness goes far beyond just depression. In fact, a recent study found a clear link between experiences of childhood trauma and various mental illnesses including anxiety, depression, and substance use disorders.

Of course, depression and anxiety are also compounded when living in an environment lacking the proper love, support, and encouragement that a child deserves to grow up in. For Menard, growing up in a home with 16 people did little to keep the loneliness at bay. “I just thought it was normal—being left out,” Menard says. “We all need to trust, and we need to rely on people. But if you become an island and self-reliant, not depending on others, you become isolated.”

In some cases, the impact of mental health can also do physical damage. In one example, Menard notes an increased likelihood for eating disorders. “Mine came from not having enough food,” he says. “I get that, but there are all types of eating disorders that come from emotional trauma.”

4. Acting Out

For most children, the model set by the behavior of their parents lays the foundation for their own personal growth and development. However, kids who lack these positive examples of healthy behavior are less likely to develop important traits like empathy, self-control, and responsibility. Menard is acutely aware of this, stating, “Good self-care and self-discipline are taught. It goes down the drain when you experience emotional trauma.” Children who are not given proper role models for behavior will often instead mimic the anger and aggressive behaviors prevalent in emotionally neglectful or abusive households.

“My wife is a school teacher and she could tell immediately through the aggressive behavior of even a first grader that there were multiple problems,” adds Menard. However, his focus is less on pointing fingers at the person who is displaying these negative behaviors, and more about understanding what made them act this way in the first place. “It’s not about what’s wrong with you, it’s about what happened to you.”

However, for many, the negative influence extends beyond simple bad behavior. Menard also describes being taught by his father to steal steaks from the restaurant where he worked at the age of 12. This was not only what his father encouraged him to do, but also what seemed completely appropriate to him because of how he had been raised. “I’d bring steaks home for him, and when he got off the factory shift at midnight, that seemed quite okay,” Menard says. “It seemed quite normal. And it’s horrible. Everybody’s searching to try to heal that wound and they don’t know why they’re doing it.”

Source: https://studyfinds.org/5-hidden-effects-of-childhood-neglect/

You won’t believe how fast people adapt to having an extra thumb

The Third Thumb worn by different users (CREDIT: Dani Clode Design / The Plasticity Lab)

Will human evolution eventually give us a sixth finger? If it does, a new study is showing that we’ll have no trouble using an extra thumb! It may sound like science fiction, but researchers have shown that people of all ages can quickly learn how to use an extra, robotic third thumb.

The findings, in a nutshell
A team at the University of Cambridge developed a wearable, prosthetic thumb device and had nearly 600 people from diverse backgrounds try it out. The results in the journal Science Robotics were astonishing: 98% of participants could manipulate objects using the third thumb within just one minute of picking it up and getting brief instructions.

The researchers put people through simple tasks like moving pegs from a board into a basket using only the robotic thumb. They also had people use the device along with their real hand to manipulate oddly-shaped foam objects, testing hand-eye coordination. People, both young and old, performed similarly well on the tasks after just a little practice. This suggests we may be surprisingly adept at integrating robotic extensions into our sense of body movement and control.

While you might expect those with hand-intensive jobs or hobbies to excel, that wasn’t really the case. Most everyone caught on quickly, regardless of gender, handedness, age, or experience with manual labor. The only groups that did noticeably worse were the very youngest children under age 10 and the oldest seniors. Even so, the vast majority in those age brackets still managed to use the third thumb effectively with just brief training.

Professor Tamar Makin and designer Dani Clode have been working on Third Thumb for several years. One of their initial tests in 2021 demonstrated that the 3D-printed prosthetic thumb could be a helpful extension of the human hand. In a test with 20 volunteers, it even helped participants complete tasks while blindfolded!

Designer Dani Clode with her ‘Third Thumb’ device. (Credit: Dani Clode)

How did scientists test the third thumb?
For their inclusive study, the Cambridge team recruited a wide range of 596 participants between the ages of three and 96. The group comprised an intentionally diverse mix of demographics to ensure the robotic device could be effectively used by all types of people.

The Third Thumb device itself consists of a rigid, controllable robotic digit worn on the opposite side of the hand from the normal thumb. It’s operated by foot sensors – pressing with the right foot pulls the robotic thumb inward across the palm while the left foot pushes it back out toward the fingertips. Releasing foot pressure returns the thumb to its resting position.
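
In control terms, the pedals act like a simple push-pull command: one pedal drives the thumb in across the palm, the other drives it out toward the fingertips, and releasing both lets it return to rest. The function below is a hypothetical sketch of that mapping, not the device's actual firmware; the calibration, smoothing, and value ranges are assumptions made for illustration.

def thumb_position(right_pressure: float, left_pressure: float) -> float:
    """Map two foot-pressure readings (each 0..1) to a thumb position.

    +1.0 = pulled fully in across the palm (right pedal),
    -1.0 = pushed fully out toward the fingertips (left pedal),
     0.0 = resting position when neither pedal is pressed.
    """
    drive = right_pressure - left_pressure          # net command
    return max(-1.0, min(1.0, drive))               # clamp to the physical range

print(thumb_position(0.0, 0.0))   # 0.0  -> rest
print(thumb_position(0.8, 0.0))   # 0.8  -> moving in across the palm
print(thumb_position(0.0, 1.0))   # -1.0 -> extended toward the fingertips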

During testing at a science exhibition, each participant received up to one minute of instructions on how to control the device and perform one of two simple manual tasks. The first had them individually pick up pegs from a board using just the third thumb and drop as many as possible into a basket within 60 seconds. The second required them to manipulate a set of irregularly-shaped foam objects using the robotic thumb in conjunction with their real hand and fingers.

Detailed data was collected on every participant’s age, gender, handedness, and even occupations or hobbies that could point to exceptional manual dexterity skills. This allowed the researchers to analyze how user traits and backgrounds affected performance with the third thumb device after just a minute’s practice. The stark consistency across demographics proved its intuitive usability.

Source: https://studyfinds.org/people-adapt-to-extra-thumb/

Mysterious layer inside Earth may come from another planet!

3D illustration showing layers of the Earth in space. (© Destina – stock.adobe.com)

From the surface to the inner core, Earth has several layers that continue to be a mystery to science. Now, it turns out one of these layers may consist of material from an entirely different planet!

Deep within our planet lies a mysterious, patchy layer known as the D” layer. Located a staggering 3,000 kilometers (1,860 miles) below the surface, this zone sits just above the boundary separating Earth’s molten outer core from its solid mantle. Unlike a perfect sphere, the D” layer’s thickness varies drastically around the globe, with some regions completely lacking this layer altogether – much like how continents poke through the oceans on Earth’s surface.

These striking variations have long puzzled geophysicists, who describe the D” layer as heterogeneous, meaning non-uniform in its composition. However, a new study might finally shed light on this deep enigma, proposing that the D” layer could be a remnant of another planet that collided with Earth during its early days, billions of years ago.

The findings, in a nutshell
The research, published in National Science Review and led by Dr. Qingyang Hu from the Center for High Pressure Science and Technology Advanced Research and Dr. Jie Deng from Princeton University, draws upon the widely accepted Giant Impact hypothesis. This hypothesis suggests that a Mars-sized object violently collided with the proto-Earth, creating a global ocean of molten rock, or magma, in the aftermath.

Hu and Deng believe the D” layer’s unique composition may be the leftover fallout from this colossal impact, potentially holding valuable clues about our planet’s formation. A key aspect of their theory involves the presence of substantial water within this ancient magma ocean. While the origin of this water remains up for debate, the researchers are focusing on what happened as the molten rock began to cool.

“The prevailing view,” Dr. Deng explains in a media release, “suggests that water would have concentrated towards the bottom of the magma ocean as it cooled. By the final stages, the magma closest to the core could have contained water volumes comparable to Earth’s present-day oceans.”

Is there a hidden ocean inside the Earth?
This water-rich environment at the bottom of the magma ocean would have created extreme pressure and temperature conditions, fostering unique chemical reactions between water and minerals.

“Our research suggests this hydrous magma ocean favored the formation of an iron-rich phase called iron-magnesium peroxide,” Dr. Hu elaborates.

This peroxide, which has a chemical formula of (Fe,Mg)O2, has an even stronger affinity for iron compared to other major components expected in the lower mantle.

“According to our calculation, its affinity to iron could have led to the accumulation of iron-dominant peroxide in layers ranging from several to tens of kilometers thick,” Hu explains.

The presence of such an iron-rich peroxide phase would alter the mineral composition of the D” layer, deviating from our current understanding. According to the new model proposed by Hu and Deng, minerals in the D” layer would be dominated by an assemblage of iron-poor silicate, iron-rich (Fe,Mg) peroxide, and iron-poor (Fe,Mg) oxide. Interestingly, this iron-dominant peroxide also possesses unique properties that could explain some of the D” layer’s puzzling geophysical features, such as ultra-low velocity zones and layers of high electrical conductance — both of which contribute to the D” layer’s well-known compositional heterogeneity.

Source: https://studyfinds.org/layer-inside-earth-another-planet/

Targeting ‘monster cells’ may keep cancer from returning after treatment

Targeted Cancer Therapy Illustration (© Riz – stock.adobe.com)

Cancer can sometimes come back, even after undergoing chemotherapy or radiation treatments. Why does this happen? Researchers at the MUSC Hollings Cancer Center may have unlocked part of the mystery. They discovered that cancer cells can transform into monstrous “polyploid giant cancer cells” or PGCCs when under extreme stress from treatment. With that in mind, scientists believe targeting these cells could be the key to preventing recurrences of cancer.

The findings, in a nutshell
Study authors, who published their work in the Journal of Biological Chemistry, found that these bizarre, monster-like cells have multiple nuclei crammed into a single, enlarged cell body. At first, the researchers thought PGCCs were doomed freaks headed for cellular destruction. However, they realized PGCCs could actually spawn new “offspring” cancer cells after the treatment ended. It’s these rapidly dividing daughter cells that likely drive cancer’s resurgence in some patients. Blocking PGCCs from reverting and generating these daughter cells could be the strategy that keeps the disease from returning.

The scientists identified specific genes that cancer cells crank up to become PGCCs as a survival mechanism against harsh therapy. One gene called p21 seems particularly important. In healthy cells it stops DNA replication if damage occurs, but in cancer cells lacking p53 regulation, p21 allows replication of damaged DNA to continue, facilitating PGCC formation.

PGCCs could actually spawn new “offspring” cancer cells after treatments like chemotherapy have ended. (© RFBSIP – stock.adobe.com)

How did scientists make the discovery?
Originally, the Hollings team was studying whether an experimental drug inhibitor could boost cancer cell death when combined with radiation therapy. However, their initial experiments showed no extra killing benefit from the combination treatment. Discouraged, they extended the experiment timeline, and that’s when they noticed something very strange.

While the inhibitor made no difference in the short term, over a longer period, the scientists observed the emergence of bizarre, bloated “monster” cancer cells containing multiple nuclei. At first, they assumed these polyploid giant cancer cells (PGCCs) were doomed mutations that would naturally die off in the patient’s body. Then, researchers saw the PGCCs were generating rapidly dividing offspring cells around themselves, mimicking tumor recurrence.

This made the team rethink the inhibitor’s effects. It didn’t increase cancer cell killing, but it did seem to stop PGCCs from reverting to a state where they could spawn proliferating daughter cells. Blocking this reversion to divisible cells could potentially prevent cancer relapse after treatment.

The researchers analyzed gene expression changes as cancer cells transformed into PGCCs and then back into dividing cells. They identified molecular pathways involved, like p21 overexpression, which allows duplication of damaged DNA. Ultimately, combining their inhibitor with radiation prevented PGCC reversion and daughter cell generation, providing a possible novel strategy against treatment-resistant cancers.

What do the researchers say?
“We initially thought that combination of radiation with the inhibitor killed cancer cells better,” says research leader Christina Voelkel-Johnson, Ph.D., in a media release. “It was only when the inhibitor failed to make a difference in short-term experiments that the time frame was extended, which allowed for an unusual observation.”

Source: https://studyfinds.org/monster-cells-cancer-returning/

Average person wastes more than 2 hours ‘dreamscrolling’ every day!

(Photo by Perfect Wave on Shutterstock)

NEW YORK — The average American spends nearly two and a half hours a day “dreamscrolling” — looking at dream purchases or things they’d like to one day own. While some might think you’re just wasting your day, a whopping 71% say it’s time well spent, as the habit motivates them to reach their financial goals.

In a recent poll of 2,000 U.S. adults, more than two in five respondents say they spend more time dreamscrolling when the economy is uncertain (43%). Over a full year, the daily habit adds up to about 873 hours, or nearly 36 days, spent scrolling.
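For readers who want to double-check those figures, here is a minimal back-of-the-envelope sketch in Python (the 873-hour yearly total is the survey's; the daily average and day count simply divide it out):

```python
# Quick sanity check on the survey's yearly dreamscrolling total.
REPORTED_YEARLY_HOURS = 873          # figure cited by the survey
DAYS_PER_YEAR = 365
HOURS_PER_DAY = 24

hours_per_day_scrolling = REPORTED_YEARLY_HOURS / DAYS_PER_YEAR   # ~2.4 hours ("nearly two and a half")
full_days_per_year = REPORTED_YEARLY_HOURS / HOURS_PER_DAY        # ~36.4 full days

print(f"{hours_per_day_scrolling:.2f} hours a day, {full_days_per_year:.1f} full days a year")
# -> 2.39 hours a day, 36.4 full days a year
```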

Conducted by OnePoll on behalf of financial services company Empower, the survey reveals half of the respondents say they dreamscroll while at work. Of those daydreaming employees, one in five admit to spending between three and four hours a day multitasking while at their job.

Gen Zers spend the most time dreamscrolling at just over three hours per day, while boomers spend the least, clocking in at around an hour of browsing fantasy purchases and filling wish lists. Americans say looking at dream purchases makes it easier for them to be smart with their money (56%), avoid making unplanned purchases or going into debt (30%), and better plan to achieve their financial goals (25%).

Nearly seven in 10 see dreamscrolling as an investment in themselves (69%) and an outlet for them to envision what they want out of life (67%). Four in 10 respondents (42%) say they regularly spend time picturing their ideal retirement — including their retirement age, location, and monthly expenses.

A whopping 71% say dreamscrolling is time well spent, as the habit motivates them to reach their financial goals. (© Antonioguillem – stock.adobe.com)

Many respondents are now taking the American dream online, with one in five respondents scrolling through listings of dream homes or apartments. Meanwhile, some are just browsing through vacation destinations (25%), beauty or self-care products (23%), and items for their pets (19%). Many others spend time looking at clothing, shoes, and accessories (49%), gadgets and technology (30%), and home décor or furniture (29%).

More than half (56%) currently have things left open in tabs and windows or saved in shopping carts that they’d like to purchase or own in the future. Those respondents estimate it would cost about $86,593.40 to buy everything they currently have saved.

Almost half of Americans say they are spending more time dreamscrolling now than in previous years (45%), and 56% plan on buying something on their dream list before this year wraps. While 65% are optimistic they’ll be able to one day buy everything on their list, nearly one in four say they don’t think they’ll ever be able to afford the majority of items (23%).

More than half (51%) say owning their dream purchases would make them feel more financially secure, and close to half say working with a financial professional would help them reach their goals (47%). Others feel they have more work to do: 34% say they’ve purchased fewer things on their dream list than they should at their age, with millennials feeling the most behind (39%).

Rising prices (54%), the inability to save money (29%), and growing debt (21%) are the top economic factors that may be holding some Americans back. Instead of doom spending, dreamscrolling has had a positive impact on Americans’ money habits: respondents say they better understand their financial goals (24%) as a result.

Source: https://studyfinds.org/shopping-browsing-cant-afford/

Who really was Mona Lisa? 500+ years on, there’s good reason to think we got it wrong

Visitors looking at the Mona Lisa (Credit: pixabay.com)

In the pantheon of Renaissance art, Leonardo da Vinci’s Mona Lisa stands as an unrivalled icon. This half-length portrait is more than just an artistic masterpiece; it embodies the allure of an era marked by unparalleled cultural flourishing.

Yet, beneath the surface of the Mona Lisa’s elusive smile lies a debate that touches the very essence of the Renaissance, its politics and the role of women in history.

A mystery woman

The intrigue of the Mona Lisa, also known as La Gioconda, isn’t solely due to Leonardo’s revolutionary painting techniques. It’s also because the identity of the subject is unconfirmed to this day. More than five centuries after it was first painted, the real identity of the Mona Lisa’s sitter remains one of art’s greatest mysteries, intriguing scholars and enthusiasts alike.

A Mona Lisa painting from the workshop of Leonardo da Vinci, held in the collection of the Museo del Prado in Madrid, Spain. Collection of the Museo del Prado

The painting has traditionally been associated with Lisa Gherardini, the wife of Florentine silk merchant Francesco del Giocondo. But another compelling theory suggests a different sitter: Isabella of Aragon.

Isabella of Aragon was born into the illustrious House of Aragon in Naples, in 1470. She was a princess who was deeply entwined in the political and cultural fabric of the Renaissance.

Her 1490 marriage to Gian Galeazzo Sforza, Duke of Milan, positioned Isabella at the heart of Italian politics. And this role was both complicated and elevated by the ambitions and machinations of Ludovico Sforza (also called Ludovico il Moro), her husband’s uncle and usurper of the Milanese dukedom.

In The Virgin and Child with Four Saints and Twelve Devotees, by (unknown) Master of the Pala Sforzesca, circa 1490, Gian Galeazzo Sforza is shown in prayer facing his wife, Isabella of Aragon (identified by her heraldic red and gold). National Gallery

Scholarly perspectives
The theory that Isabella is the real Mona Lisa is supported by a combination of stylistic analyses, historical connections and reinterpretations of Leonardo’s intent as an artist.

In his biography of Leonardo, author Robert Payne points to preliminary studies by the artist that bear a striking resemblance to Isabella at around age 20. Payne suggests Leonardo captured Isabella across different life stages, including during widowhood, as depicted in the Mona Lisa.

U.S. artist Lillian F. Schwartz’s 1988 study used x-rays to reveal an initial sketch of a woman hidden beneath Leonardo’s painting. This sketch was then painted over with Leonardo’s own likeness.

Schwartz believes the woman in the sketch is Isabella, because of its similarity with a cartoon Leonardo made of the princess. She proposes the work was made by integrating specific features of the initial model with Leonardo’s own features.

An illustration of Isabella of Aragon from the Story of Cremona by Antonio Campi. Library of Congress

This hypothesis is further supported by art historians Jerzy Kulski and Maike Vogt-Luerssen.

According to Vogt-Luerssen’s detailed analysis of the Mona Lisa, the symbols of the Sforza house and the depiction of mourning garb both align with Isabella’s known life circumstances. These details suggest the Mona Lisa isn’t a commissioned portrait, but a nuanced representation of a woman’s journey through triumph and tragedy.

Similarly, Kulski highlights the portrait’s heraldic designs, which would be atypical for a silk merchant’s wife. He, too, suggests the painting shows Isabella mourning her late husband.

The Mona Lisa’s enigmatic expression also captures Isabella’s self-described state post-1500 of being “alone in misfortune.” Contrary to representing a wealthy, recently married woman, the portrait exudes the aura of a virtuous widow.

Late professor of art history Joanna Woods-Marsden suggested the Mona Lisa transcends traditional portraiture and embodies Leonardo’s ideal, rather than being a straightforward commission.

This perspective frames the work as a deeply personal project for Leonardo, possibly signifying a special connection between him and Isabella. Leonardo’s reluctance to part with the work also indicates a deeper, personal investment in it.

Beyond the canvas
The theory that Isabella of Aragon could be the true Mona Lisa is a profound reevaluation of the painting’s context, opening up new avenues through which to appreciate the work.

It elevates Isabella from a figure overshadowed by the men in her life, to a woman of courage and complexity who deserves recognition in her own right.

Source: https://studyfinds.org/who-really-was-mona-lisa-500-years-on-theres-good-reason-to-think-we-got-it-wrong/

Scientists discover what gave birth to Earth’s unbreakable continents

Photo by Brett Zeck from Unsplash

The Earth beneath our feet may feel solid, stable, and seemingly eternal. But the continents we call home are unique among our planetary neighbors, and their formation has long been a mystery to scientists. Now, researchers believe they may have uncovered a crucial piece of the puzzle: the role of ancient weathering in shaping Earth’s “cratons,” the most indestructible parts of our planet’s crust.

Cratons are the old souls of the continents, forming roughly half of Earth’s continental crust. Some date back over three billion years and have remained largely unchanged ever since. They form the stable hearts around which the rest of the continents have grown. For decades, geologists have wondered what makes these regions so resilient, even as the plates shift and collide around them.

It turns out that the key may lie not in the depths of the Earth but on its surface. A new study out of Penn State and published in Nature suggests that subaerial weathering – the breakdown of rocks exposed to air – may have triggered a chain of events that led to the stabilization of cratons billions of years ago, during the Neoarchaean era, around 2.5 to 3 billion years ago.

These ancient metamorphic rocks called gneisses, found on the Arctic Coast, represent the roots of the continents now exposed at the surface. The scientists said sedimentary rocks interlayered in these types of rocks would provide a heat engine for stabilizing the continents. Credit: Jesse Reimink. All Rights Reserved.

To understand how this happened, let’s take a step way back in time. In the Neoarchaean, Earth was a very different place. The atmosphere contained little oxygen, and the continents were mostly submerged beneath a global ocean. But gradually, land began to poke above the waves – a process called continental emergence.

As more rock was exposed to air, weathering rates increased dramatically. When rocks weather, they release their constituent minerals, including radioactive elements like uranium, thorium, and potassium. These heat-producing elements, or HPEs, are crucial because their decay generates heat inside the Earth over billions of years.

The researchers propose that as the HPEs were liberated by weathering, they were washed into sediments that accumulated in the oceans. Over time, plate tectonic processes would have carried these sediments deep into the crust, where the concentrated HPEs could really make their presence felt.

Buried at depth and heated from within, the sediments would have started to melt. This would have driven what geologists call “crustal differentiation” – the separation of the continental crust into a lighter, HPE-rich upper layer and a denser, HPE-poor lower layer. It’s this layering, the researchers argue, that gave cratons their extraordinary stability.

The upper crust, enriched in HPEs, essentially acted as a thermal blanket, keeping the lower crust and the mantle below relatively cool and strong. This prevented the kind of large-scale deformation and recycling that affected younger parts of the continents.

Interestingly, the timing of craton stabilization around the globe supports this idea. The researchers point out that in many cratons, the appearance of HPE-enriched sedimentary rocks precedes the formation of distinctive Neoarchaean granites – the kinds of rocks that would form from the melting of HPE-rich sediments.

The rocks on the left are old rocks that have been deformed and altered many times. They are juxtaposed next to an Archean granite on the right side. The granite is the result of melting that led to the stabilization of the continental crust. Credit: Matt Scott. All Rights Reserved.

Furthermore, metamorphic rocks – rocks transformed by heat and pressure deep in the crust – also record a history consistent with the model. Many cratons contain granulite terranes, regions of the deep crust uplifted to the surface that formed in the Neoarchaean. These granulites often have compositions that suggest they formed from the melting of sedimentary rocks.

So, the sequence of events – the emergence of continents, increased weathering, burial of HPE-rich sediments, deep crustal melting, and finally, craton stabilization – all seem to line up.

Source: https://studyfinds.org/earths-unbreakable-continents/

The 7 Fastest Animals In The World: Can You Guess Them All?

Cheetah (Photo by David Groves on Unsplash)

Move over Usain Bolt, because in the animal kingdom, speed takes on a whole new meaning! Forget sprinting at a measly 28 mph – these record-breaking creatures can leave you in the dust (or water, or sky) with their mind-blowing velocity. From lightning-fast cheetahs hunting down prey on the African savanna to majestic peregrine falcons diving from incredible heights, these animals rely on their extreme speed to survive and thrive in the wild. So, buckle up as we explore the top seven fastest animals on Earth.

The animal kingdom is brimming with speedsters across different habitats. We’re talking about fish that can zoom by speedboats, birds that plummet from the sky at breakneck speeds, and even insects with lightning-fast reflexes. Below is our list of the consensus top seven fastest animals in the world. We want to hear from you too! Have you ever encountered an animal with incredible speed? Share your stories in the comments below, and let’s celebrate the awe-inspiring power of nature’s speed demons!

The List: Fastest Animals in the World, Per Wildlife Experts

1. Peregrine Falcon – 242 MPH

Peregrine Falcon (Photo by Vincent van Zalinge on Unsplash)

The peregrine falcon takes the title of the fastest animal in the world, able to reach speeds of 242 miles per hour. These birds don’t hit such blistering speeds by flapping their wings like crazy. Instead, they use gravity as their accomplice, raves The Wild Life. In the blink of an eye, the falcon can plummet towards its prey like a fighter jet in a vertical dive. These dives can exceed 200 miles per hour, which the site likens to a human running at over 380 mph! That’s fast enough to make even the speediest sports car look like a snail.

That prominent bulge of this falcon’s chest cavity isn’t just for show – it’s a keel bone, and it acts like a supercharged engine for their flight muscles. A bigger keel bone translates to more powerful wing strokes, propelling the falcon forward with incredible force, explains A-Z Animals. These birds also boast incredibly stiff, tightly packed feathers that act like a high-performance suit, reducing drag to an absolute minimum. And the cherry on top? Their lungs and air sacs are designed for one-way airflow, meaning they’re constantly topped up with fresh oxygen, even when exhaling. This ensures they have the fuel they need to maintain their breakneck dives.

These fast falcons might be the ultimate jet setters of the bird world, but they’re not picky about their digs. The sky-dwelling predators are comfortable calling a variety of landscapes home, as long as there’s open space for hunting, writes One Kind Planet. They can be found soaring over marshes, estuaries, and even skyscrapers, always on the lookout for unsuspecting prey.

2. Golden Eagle – 200 MPH

Golden Eagle (Photo by Mark van Jaarsveld on Unsplash)

The golden eagle is a large bird that is well known for its powerful and fast flight. These majestic birds can reach speeds of up to 199 mph during a hunting dive, says List 25. Just like the peregrine falcon, the golden eagle uses a hunting technique called a stoop. With a powerful tuck of its wings, the eagle plummets towards its target in a breathtaking dive.

They are undeniably impressive birds, with a wingspan that can stretch up to eight feet wide. Imagine an athlete being able to run at 179 miles per hour! That’s the relative pace a golden eagle achieves in a dive, reaching up to 87 body lengths per second, mentions The Wild Life. The air rushes past its feathers, creating a whistling sound as the bird picks up speed, hurtling toward its prey.
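As a rough illustration of that body-lengths-per-second framing, the sketch below converts the cited 199 mph dive speed, assuming a golden eagle body length of about one metre (the body-length figure is our assumption, not the article's):

```python
# Rough check of the "body lengths per second" comparison.
MPH_TO_METRES_PER_SECOND = 0.44704

dive_speed_mph = 199                 # dive speed cited above
assumed_body_length_m = 1.0          # assumed golden eagle body length (not from the article)

speed_mps = dive_speed_mph * MPH_TO_METRES_PER_SECOND
body_lengths_per_second = speed_mps / assumed_body_length_m

print(f"{body_lengths_per_second:.0f} body lengths per second")
# -> 89, in the same ballpark as the ~87 figure cited above
```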

They also use these impressive dives during courtship rituals and even playful moments, states Live Science. Picture two golden eagles soaring in tandem, one diving after the other in a dazzling aerial ballet. It’s a display of both power and grace that reaffirms their status as the ultimate rulers of the skies. Their habitat range stretches across the northern hemisphere, including North America, Europe, Africa, and Asia, according to the International Union for Conservation of Nature (IUCN). So next time you see a golden eagle circling above, remember – it’s more than just a bird, it’s a living embodiment of speed, skill, and breathtaking beauty.

3. Black Marlin – 80 MPH

A Black Marlin jumping out of the water (Photo by Finpat on Shutterstock)

The ocean is a vast and mysterious realm, teeming with incredible creatures. And when it comes to raw speed, the black marlin is a high-performance athlete of the sea. They have a deep, muscular body built for cutting through water with minimal resistance, informs Crosstalk. Think of a sleek racing yacht compared to a clunky rowboat. Plus, their dorsal fin is lower and rounder, acting like a spoiler on a race car, reducing drag and allowing for a smoother ride through the water. Their “spears,” those sharp protrusions on their snouts, are thicker and more robust than other marlins. These aren’t just for show – they’re used to slash and stun prey during a hunt.

Some scientists estimate their burst speed at a respectable 22 mph. That’s impressive, but here’s where the debate gets interesting. Some reports claim black marlin can pull fishing line at a staggering 120 feet per second! When you do the math, that translates to a whopping 82 mph, according to Story Teller. This magnificent fish calls shallow, warm waters home; its ideal habitat boasts water temperatures between 59 and 86 degrees Fahrenheit – basically, a permanent summer vacation!
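The arithmetic behind that "do the math" line is a straightforward unit conversion; here is a minimal sketch (the 120 feet-per-second figure is the one reported above, the rest is standard conversion factors):

```python
# Converting the reported line speed from feet per second to miles per hour.
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

line_speed_fps = 120                 # feet per second, as reported above
line_speed_mph = line_speed_fps * SECONDS_PER_HOUR / FEET_PER_MILE

print(f"{line_speed_mph:.1f} mph")
# -> 81.8 mph, which rounds to the 82 mph cited above
```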

The secret behind its impressive swimming prowess lies in its tail. Unlike the rounded tails of many fish, black marlin possess crescent-shaped tails, explains A-Z Animals. With a powerful flick, they can propel themselves forward with incredible bursts of speed. This marlin also boasts a long, thin, and sharp bill that cuts through water, offering minimal resistance as it surges forward. But that’s not all. Black marlin also have rigid pectoral fins that act like perfectly sculpted wings. These fins aren’t for flapping – they provide stability and lift, allowing the marlin to maintain a streamlined position in the water.

4. Cheetah – 70 MPH

Adult and cheetah pup on green grass during daytime (Photo by Sammy Wong on Unsplash)

The cheetah is Africa’s most endangered large cat and also the world’s fastest land animal. Their bodies are built for pure velocity, with special adaptations that allow them to go from zero to sixty in a mind-blowing three seconds, shares Animals Around The Globe. Each stride stretches an incredible seven meters, eating up the ground with astonishing speed. But they can only maintain their high speeds for short bursts.

Unlike its stockier lion and tiger cousins, the cheetah boasts a lean, streamlined physique that makes it aerodynamic. But the real innovation lies in the cheetah’s spine. It’s not just a rigid bone structure – it’s a flexible marvel, raves A-Z Animals. With each powerful push, this springy spine allows the cheetah to extend its strides to incredible lengths, propelling it forward with tremendous force. And finally, we come to the engine room: the cheetah’s muscles. Packed with a high concentration of “fast-twitch fibers,” these muscles are specifically designed for explosive bursts of speed. Think of them as tiny, built-in turbochargers that give the cheetah that extra surge of power when it needs it most.

These magnificent cats haven’t always been confined to the dry, open grasslands of sub-Saharan Africa. Cheetahs were once widespread across both Africa and Asia, but their range has shrunk dramatically due to habitat loss and dwindling prey populations, says One Kind Planet. Today, most cheetahs call protected natural reserves and parks home.

Source: https://studyfinds.org/fastest-animals-in-the-world/