Average young adult predicts they’ll be dead by 76!

(© Syda Productions – stock.adobe.com)

Have you ever wondered how long you’ll live? A recent study has revealed some intriguing insights into how different age groups perceive their own mortality. Buckle up, because the results might surprise you! The research surveyed 2,000 adults across the United Kingdom. It turns out that millennials (those in the 35-44 age bracket) believe they’ll reach the ripe old age of 81. Conversely, their younger Gen Z counterparts (the under-24 crowd) are a bit more pessimistic, expecting to only make it to 76. In fact, 1 in 6 Gen Z participants aren’t even sure they’ll be alive in time for retirement!

But here’s the kicker: those over 65 are the most optimistic of all, anticipating they’ll live until 84 – the highest estimate of any age group.

And what about the battle of the sexes? Well, men seem to think they’ll outlast women, predicting an average lifespan of 82 compared to women’s 80. However, the joke might be on them, as women typically have a longer life expectancy than men.

The study, commissioned by UK life insurance brand British Seniors, also found that a whopping 65% of respondents sometimes or often contemplate their own mortality. As a spokesperson from British Seniors put it, “The research has revealed a fascinating look into these predictions and differences between gender, location, and age group. Such conversations are becoming more open than ever – as well as discussion of how you’d like your funeral to look.”

Speaking of funerals, 23% of adults have some or all of their funeral plans in place. A quarter don’t want any fuss for their send-off, while 20% are happy with whatever their friends and family decide on. The report revealed that 21% have discussed their own funeral with someone else, and 35% of those over 65 have explained their preferences to someone.

So, what’s the secret to a long life? According to the respondents, leading an active lifestyle, not smoking, keeping the brain ticking, and having good genetics and family history on their side are all key factors. And when it comes to approaching life, 37% believe in being balanced, 20% want to live it to the fullest, and 16% think slow and steady wins the race.

Source: https://studyfinds.org/young-adult-lifespan-prediction/

Survey says it takes nearly 2 months of exercise before you’ll start to look more fit

(© rangizzz – stock.adobe.com)

The poll of 2,000 adults reveals what goals people prioritize when it comes to their fitness. Above all, they’re aiming to lose a certain amount of weight (43%), increase their general strength (43%) and increase their general mobility (35%).

However, 48 percent are worried about potentially losing the motivation to get fit and 65 percent believe the motivation to increase their level of physical fitness wanes over time.

According to respondents, the motivation to keep going lasts for about four weeks before needing a new push.

The survey, commissioned by Optimum Nutrition and conducted by Talker Research, finds that for the vast majority of Americans (89%), diet affects their level of fitness motivation.

Nearly three in 10 (29%) believe they don’t get enough protein in their diet; among those respondents, some say they lack it “all the time” (19%) while others say they often do (40%).

Gen X respondents feel like they are lacking protein the most out of all generations (35%), compared to millennials (34%), Gen Z (27%) and baby boomers (21%). Plus, over one in three (35%) women don’t think they get enough protein vs. 23 percent of men.

The average person has two meals per day that don’t include protein, but 61 percent would be more likely to increase their protein intake to help achieve their fitness goals.

As people reflect on health and wellness goals, the most common experiences that make people feel out of shape include running out of breath often (49%) and trying on clothing that no longer fits (46%).

Over a quarter (29%) say they realized they were out of shape after not being able to walk up a flight of stairs without feeling winded.

Source: https://studyfinds.org/survey-says-it-takes-nearly-2-months-of-exercise-before-youll-start-to-look-more-fit/

What it’s like to have aphantasia, the condition that turns off the mind’s eye

Concept of aphantasia, inability to visualize and create mental images. (© Studio Light & Shade – stock.adobe.com)

Close your eyes and try to picture a loved one’s face or your childhood home. For most people, this conjures up a mental image, perhaps fuzzy but still recognizable. But for a small percentage of the population, this simple act of imagination draws a complete blank. No colors, no shapes, no images at all – just darkness. This condition, known as aphantasia, is shedding new light on the nature of human imagination and consciousness.

A recent review published in Trends in Cognitive Sciences explores the fascinating world of aphantasia and its opposite extreme, hyperphantasia – imagery so vivid it rivals actual perception. These conditions, affecting roughly 1% and 3% of the population, respectively, are opening up new avenues for understanding how our brains create and manipulate mental images.

Aphantasia, from the Greek “a” (without) and “phantasia” (imagination), was only recently named in 2015 by Adam Zeman, a professor at the University of Exeter, though the phenomenon was first noted by Sir Francis Galton in the 1880s. People with aphantasia report being unable to voluntarily generate visual images in their mind’s eye. This doesn’t mean they lack imagination altogether – many excel in abstract or spatial thinking – but they can’t “see” things in their mind the way most people can.

On the flip side, those with hyperphantasia experience incredibly vivid mental imagery, sometimes indistinguishable from actual perception. These individuals might be able to recall a scene from a movie in perfect detail or manipulate complex visual scenarios in their minds with ease.

What’s particularly intriguing about these conditions is that they often affect multiple senses. Many people with aphantasia report difficulty imagining sounds, smells, or tactile sensations as well. This suggests that the ability to generate mental imagery might be a fundamental aspect of how our brains process and represent information.

The review, authored by Zeman, delves into the growing body of research on these conditions. Some key findings include the apparent genetic component of aphantasia, as it seems to run in families. People with aphantasia often have reduced autobiographical memory – they can recall facts about their past but struggle to “relive” experiences in their minds. Interestingly, many people with aphantasia still experience visual dreams, suggesting different neural mechanisms for voluntary and involuntary imagery.

There’s a higher prevalence of aphantasia among people in scientific and technical fields, while hyperphantasia is more common in creative professions. Additionally, aphantasia is associated with some difficulties in face recognition and a higher likelihood of having traits associated with autism spectrum disorders.

These findings paint a complex picture of how mental imagery relates to other cognitive processes and even career choices. But perhaps most importantly, they’re challenging our assumptions about what it means to “imagine” something.

Methodology: Peering into the Mind’s Eye
Studying something as subjective as mental imagery poses unique challenges. How do you measure something that exists only in someone’s mind? Zeman reviewed about 50 previous studies to reach his takeaways about the condition.

Researchers across these studies developed several clever approaches to better understand aphantasia. The most common method is simply asking people to rate the vividness of their mental images using self-report questionnaires like the Vividness of Visual Imagery Questionnaire (VVIQ).
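
To make that concrete, here is a minimal, hypothetical sketch of VVIQ-style scoring in Python. The 16-item count and the 1-to-5 vividness scale match the real questionnaire, but the cutoffs below are illustrative conventions only; actual thresholds vary across studies.

```python
# Hypothetical VVIQ-style scoring sketch (not the official instrument).
# Each of 16 items is rated 1 ("no image at all") to 5 ("perfectly
# clear, as vivid as normal vision"); totals range from 16 to 80.
def classify_imagery(ratings: list[int]) -> str:
    """Classify self-reported imagery vividness from 16 item ratings."""
    assert len(ratings) == 16 and all(1 <= r <= 5 for r in ratings)
    total = sum(ratings)
    if total <= 32:    # illustrative cutoff, not a clinical threshold
        return "low imagery (consistent with aphantasia)"
    if total >= 75:    # illustrative cutoff, not a clinical threshold
        return "very vivid imagery (consistent with hyperphantasia)"
    return "typical imagery"

print(classify_imagery([1] * 16))  # a complete blank -> aphantasia range
print(classify_imagery([5] * 16))  # maximal vividness -> hyperphantasia range
```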

Researchers also use behavioral tasks that typically require mental imagery and compare performance between those with and without aphantasia. For example, participants might be asked to compare the colors of two objects without seeing them. Some studies have looked at physical responses that correlate with mental imagery, such as pupil dilation in response to imagining bright or dark scenes.

Brain imaging techniques, particularly functional MRI, allow researchers to see which brain areas activate during imagery tasks in people with different imagery abilities. Another interesting technique is binocular rivalry, which uses the tendency for mental imagery to bias subsequent perception. It’s been used to objectively measure imagery strength.

These varied approaches help researchers triangulate on the nature of mental imagery and its absence in aphantasia, providing a more comprehensive understanding of these phenomena.

Results: A World Without Pictures
The review synthesizes findings from numerous studies, revealing a complex picture of how aphantasia affects cognition and behavior. While general memory function is largely intact, people with aphantasia often report less vivid and detailed autobiographical memories. They can recall facts about events but struggle to “relive” them mentally.

Contrary to what one might expect, aphantasia doesn’t necessarily impair creativity. Many successful artists and writers have aphantasia, suggesting alternative routes to creative thinking. Some studies suggest that people with aphantasia have a reduced emotional response to written scenarios, possibly because they can’t visualize the described scenes.

Surprisingly, many people with aphantasia report normal visual dreams. This dissociation between voluntary and involuntary imagery is a puzzle for researchers. There’s also a higher prevalence of face recognition difficulties among those with aphantasia, though the connection isn’t fully understood.

While object imagery is impaired, spatial imagery abilities are often preserved in aphantasia. This suggests different neural underpinnings for these two types of mental representation. Neuroimaging studies show differences in connectivity between frontal and visual areas of the brain in people with aphantasia, potentially explaining the difficulty in generating voluntary mental images.

“Despite the profound contrast in subjective experience between aphantasia and hyperphantasia, effects on everyday functioning are subtle – lack of imagery does not imply lack of imagination. Indeed, the consensus among researchers is that neither aphantasia nor hyperphantasia is a disorder. These are variations in human experience with roughly balanced advantages and disadvantages. Further work should help to spell these out in greater detail,” Prof. Zeman says in a media release.

Source: https://studyfinds.org/what-its-like-to-have-aphantasia/

Gold goes 2D: Scientists create ultra-thin ‘goldene’ sheets

Lars Hultman, professor of thin film physics and Shun Kashiwaya, researcher at the Materials Design Division at Linköping University. (Credit: Olov Planthaber)

In a remarkable feat of nanoscale engineering, scientists have created the world’s thinnest gold sheets at just one atom thick. This new material, dubbed “goldene,” could revolutionize fields from electronics to medicine, offering unique properties that bulk gold simply can’t match.

The research team, led by scientists from Linköping University in Sweden, managed to isolate single-atom layers of gold by cleverly manipulating the metal’s atomic structure. Their findings, published in the journal Nature Synthesis, represent a significant breakthrough in the rapidly evolving field of two-dimensional (2D) materials.

Since the discovery of graphene — single-atom-thick sheets of carbon — in 2004, researchers have been racing to create 2D versions of other elements. While 2D materials made from carbon, boron, and even iron have been achieved, gold has proven particularly challenging. Previous attempts resulted in gold sheets several atoms thick or required the gold to be supported by other materials.

The Swedish team’s achievement is particularly noteworthy because they created free-standing sheets of gold just one atom thick. This ultra-thin gold, or goldene, exhibits properties quite different from its three-dimensional counterpart. For instance, the atoms in goldene are packed more tightly together, with about 9% less space between them compared to bulk gold. This compressed structure leads to changes in the material’s electronic properties, which could make it useful for a wide range of applications.
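
For a rough sense of scale: the article doesn’t give absolute distances, but taking the textbook nearest-neighbor spacing of bulk gold (about 2.88 Å, a standard reference value rather than a figure from the study) as a baseline, a 9% contraction works out to roughly

$$ d_{\text{goldene}} \approx (1 - 0.09) \times 2.88\,\text{Å} \approx 2.62\,\text{Å} $$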

One of the most exciting potential uses for goldene is in catalysis, which is the process of speeding up chemical reactions. Gold nanoparticles are already used as catalysts in various industrial processes, from converting harmful vehicle emissions into less dangerous gases to producing hydrogen fuel. The researchers believe that goldene’s extremely high surface-area-to-volume ratio could make it an even more efficient catalyst.

The creation of goldene also opens up new possibilities in fields like electronics, photonics, and medicine. For example, the material’s unique optical properties could lead to improved solar cells or new types of sensors. In medicine, goldene might be used to create ultra-sensitive diagnostic tools or to deliver drugs more effectively within the body.

How They Did It: Peeling Gold Atom by Atom
The process of creating goldene is almost as fascinating as the material itself. The researchers used a technique that might be described as atomic-scale sculpting, carefully removing unwanted atoms to leave behind a single layer of gold.

They started with a material called Ti3AuC2, which is part of a family of compounds known as MAX phases. These materials have a layered structure, with sheets of titanium carbide (Ti3C2) alternating with layers of gold atoms. The challenge was to remove the titanium carbide layers without disturbing the gold.

To accomplish this, the team used a chemical etching process. They immersed the Ti3AuC2 in a carefully prepared solution containing potassium hydroxide and potassium ferricyanide, known as Murakami’s reagent. This solution selectively attacks the titanium carbide layers, gradually dissolving them away.

However, simply etching away the titanium carbide wasn’t enough. Left to their own devices, the freed gold atoms would quickly clump together, forming 3D nanoparticles instead of 2D sheets. To prevent this, the researchers added surfactants — molecules that help keep the gold atoms spread out in a single layer.

Two key surfactants were used: cetrimonium bromide (CTAB) and cysteine. These molecules attach to the surface of the gold, creating a protective barrier that prevents the atoms from coalescing. The entire process took about a week, with the researchers carefully controlling the concentration of the etching solution and surfactants to achieve the desired result.

For the first time, scientists have managed to create sheets of gold only a single atom layer thick. (Credit: Olov Planthaber)

Results: A New Form of Gold Emerges

The team’s efforts resulted in sheets of gold just one atom thick, confirmed through high-resolution electron microscopy. These goldene sheets showed several interesting properties:

  1. Compressed structure: The gold atoms in goldene are packed about 9% closer together than in bulk gold. This compression changes how the electrons in the material behave, potentially leading to new electronic and optical properties.
  2. Increased binding energy: X-ray photoelectron spectroscopy revealed that the electrons in goldene are more tightly bound to their atoms compared to bulk gold. This shift in binding energy could affect the material’s chemical reactivity.
  3. Rippling and curling: Unlike perfectly flat sheets, the goldene layers showed some rippling and curling, especially at the edges. This behavior is common in 2D materials and can influence their properties.
  4. Stability: Computer simulations suggested that goldene should be stable at room temperature, although the experimental samples showed some tendency to form blobs or clump together over time.

The researchers also found that they could control the thickness of the gold sheets by adjusting their process. Using slightly different conditions, they were able to create two- and three-atom-thick sheets of gold as well.

Limitations and Challenges

  1. Scale: The current process produces relatively small sheets of goldene, typically less than 100 nanometers across. Scaling up production to create larger sheets will be crucial for many potential applications.
  2. Stability: Although computer simulations suggest goldene should be stable, the experimental samples showed some tendency to curl and form blobs, especially at the edges. Finding ways to keep the sheets flat and prevent them from clumping together over time will be important.
  3. Substrate dependence: The goldene sheets were most stable when still partially attached to the original Ti3AuC2 material or when supported on a substrate. Creating large, free-standing sheets of goldene remains a challenge.
  4. Purity: The etching process leaves some residual titanium and carbon atoms mixed in with the gold. While these impurities are minimal, they could affect the material’s properties in some applications.
  5. Reproducibility: The process of creating goldene is quite sensitive to the exact conditions used. Ensuring consistent results across different batches and scaling up production will require further refinement of the technique.

The surprising cure for chronic back pain? Just take a walk

(© glisic_albina – stock.adobe.com)

For anyone who has experienced the debilitating effects of low back pain, the results of an eye-opening new study may be a game-changer. Researchers have found that a simple, accessible program of progressive walking and education can significantly reduce the risk of constant low back pain flare-ups in adults. The implications are profound — no longer does managing this pervasive condition require costly equipment or specialized rehab facilities. Instead, putting on a pair of sneakers and taking a daily stroll could be one of the best preventative therapies available.

Australian researchers, publishing their work in The Lancet, recruited over 700 adults across the country who had recently recovered from an episode of non-specific low back pain, which lasted at least 24 hours and interfered with their daily activities. The participants were divided into two groups: one received an individualized walking and education program guided by a physiotherapist over six months, and the other received no treatment at all during the study.

Participants were then carefully followed for at least one year, up to a maximum of nearly three years for some participants. The researchers meticulously tracked any recurrences of low back pain that were severe enough to limit daily activities.

“Our study has shown that this effective and accessible means of exercise has the potential to be successfully implemented at a much larger scale than other forms of exercise,” says lead author Dr. Natasha Pocovi in a media release. “It not only improved people’s quality of life, but it reduced their need both to seek healthcare support and the amount of time taken off work by approximately half.”

Methodology: A Step-by-Step Approach

So, what did this potentially back-saving intervention involve? It utilized the principles of health coaching, where physiotherapists worked one-on-one with participants to design and progressively increase a customized walking plan based on the individual’s age, fitness level, and objectives.

The process began with a 45-minute consultation to understand each participant’s history, conduct an examination, and prescribe an initial “walking dose,” which was then gradually ramped up. The guiding target was to work up to walking at least 30 minutes per day, five times per week, by the six-month mark.
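
As a toy illustration of what “gradually ramped up” might look like, here is a short Python sketch that linearly interpolates from a starting dose to the 30-minutes-a-day target over 26 weeks. The real walking doses were individually prescribed and adjusted by physiotherapists, not generated by a formula.

```python
# Hypothetical linear ramp toward the study's six-month walking target.
def weekly_targets(start_minutes: int, weeks: int = 26,
                   goal_minutes: int = 30) -> list[int]:
    """Minutes per walking day for each week, ramping start -> goal."""
    step = (goal_minutes - start_minutes) / (weeks - 1)
    return [round(start_minutes + step * w) for w in range(weeks)]

plan = weekly_targets(start_minutes=10)
print(plan[:4], "...", plan[-2:])  # [10, 11, 12, 12] ... [29, 30]
```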

During this period, participants also took part in lessons to help overcome fears about back pain while learning easy strategies to self-manage any recurrences. They were provided with a pedometer and a walking diary to track their progress. After the first 12 weeks, they could choose whether to keep using those motivational tools. Follow-up sessions with the physiotherapist every few weeks, either in-person or via video calls, were focused on monitoring progress, adjusting walking plans when needed, and providing encouragement to keep participants engaged over the long haul.

Results: Dramatic Improvement & A Manageable Approach
The impact of this straightforward intervention was striking. Compared to the control group, participants in the walking program experienced a significantly lower risk of suffering a recurrence of low back pain that limited daily activities. Overall, the risk fell by 28%.

Even more impressive, the average time for a recurrence to strike was nearly double for those in the walking group (208 days) versus the control group (112 days). The results for any recurrence of low back pain (regardless of its impact on activities) and for recurrences requiring medical care showed similarly promising reductions in risk. Simply put, people engaging in the walking program stayed pain-free for nearly twice as long as others not treating their lower back pain.

Source: https://studyfinds.org/back-pain-just-take-a-walk/

Intermittent fasting may supercharge ‘natural killer’ cells to destroy cancer

Could skipping a few meals each week help you fight cancer? It might sound far-fetched, but new research suggests that one type of intermittent fasting could actually boost your body’s natural ability to defeat cancer.

(Credit: MIA Studio/Shutterstock)

A team of scientists at Memorial Sloan Kettering Cancer Center (MSK) has uncovered an intriguing link between fasting and the body’s immune system. Their study, published in the journal Immunity, focuses on a particular type of immune cell called natural killer (NK) cells. These cells are like the special forces of your immune system, capable of taking out cancer cells and virus-infected cells without needing prior exposure.

So, what’s the big deal about these NK cells? Well, they’re pretty important when it comes to battling cancerous tumors. Generally speaking, the more NK cells you have in a tumor, the better your chances of beating the disease. However, there’s a catch: the environment inside and around tumors is incredibly harsh. It’s like a battlefield where resources are scarce, and many immune cells struggle to survive.

This is where fasting enters the picture. The researchers found that periods of fasting actually “reprogrammed” these NK cells, making them better equipped to survive in the dangerous tumor environment and more effective at fighting cancer.

“Tumors are very hungry,” says immunologist Joseph Sun, PhD, the study’s senior author, in a media release. “They take up essential nutrients, creating a hostile environment often rich in lipids that are detrimental to most immune cells. What we show here is that fasting reprograms these natural killer cells to better survive in this suppressive environment.”

Illustration of a group of cancer cells. Researchers found that periods of fasting actually “reprogrammed” these NK cells, making them better equipped to survive in the dangerous tumor environment. (© fotoyou – stock.adobe.com)

How exactly does intermittent fasting achieve this?
The study, which was conducted on mice, involved denying the animals food for 24 hours twice a week, with normal eating in between. This intermittent fasting approach had some pretty remarkable effects on the NK cells.

First off, fasting caused the mice’s glucose levels to drop and their levels of free fatty acids to rise. Free fatty acids are a type of lipid (fat) that can be used as an alternative energy source when other nutrients are scarce. The NK cells learned to use these fatty acids as fuel instead of glucose, which is typically their primary energy source.

“During each of these fasting cycles, NK cells learned to use these fatty acids as an alternative fuel source to glucose,” says Dr. Rebecca Delconte, the lead author of the study. “This really optimizes their anti-cancer response because the tumor microenvironment contains a high concentration of lipids, and now they’re able to enter the tumor and survive better because of this metabolic training.”

The fasting also caused the NK cells to move around the body in interesting ways. Many of them traveled to the bone marrow, where they were exposed to high levels of a protein called Interleukin-12. This exposure primed the NK cells to produce more of another protein called Interferon-gamma, which plays a crucial role in fighting tumors. Meanwhile, NK cells in the spleen were undergoing their own transformation, becoming even better at using lipids as fuel. The result? NK cells were pre-primed to produce more cancer-fighting substances and were better equipped to survive in the harsh tumor environment.


Source: https://studyfinds.org/intermittent-fasting-fight-cancer/

There are 6 different types of depression, brain pattern study shows

(Image by Feng Yu on Shutterstock)

Depression and anxiety disorders are among the most common mental health issues worldwide, yet current treatments often fail to provide relief for many sufferers. A major challenge has been the heterogeneity of these conditions. Patients with the same diagnosis can have vastly different symptoms and underlying brain dysfunctions. Now, a team of researchers at Stanford University has developed a novel approach to parse this heterogeneity, identifying six distinct “biotypes” of depression and anxiety based on specific patterns of brain circuit dysfunction.

The study, published in Nature Medicine, analyzed brain scans from over 800 patients with depression and anxiety disorders. By applying advanced computational techniques to these scans, the researchers were able to quantify the function of key brain circuits involved in cognitive and emotional processing at the individual level. This allowed them to group patients into biotypes defined by shared patterns of circuit dysfunction, rather than relying solely on symptoms.
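
As a loose illustration of the grouping step, the sketch below clusters per-patient circuit-dysfunction scores with k-means. Everything here is hypothetical: the actual pipeline derived circuit scores from task and resting-state fMRI, normalized them against a healthy reference sample, and validated the biotypes clinically.

```python
# Hypothetical sketch of biotype discovery by clustering circuit scores.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Rows = patients (roughly the study's sample size); columns = made-up
# dysfunction scores for six illustrative brain circuits.
circuit_scores = rng.normal(size=(800, 6))

features = StandardScaler().fit_transform(circuit_scores)
biotypes = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(features)

for b in range(6):
    print(f"biotype {b}: {(biotypes == b).sum()} patients")
```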

Intriguingly, the six biotypes showed marked differences not just in their brain function, but also in their clinical profiles. Patients in each biotype exhibited distinct constellations of symptoms, cognitive impairments, and critically, responses to different treatments. For example, one biotype characterized by hyperconnectivity in circuits involved in self-referential thought and salience processing responded particularly well to behavioral therapy. Another, with heightened activity in circuits processing sadness and reward, was distinguished by prominent anhedonia (inability to feel pleasure).

These findings represent a significant step towards a more personalized, brain-based approach to diagnosing and treating depression and anxiety. By moving beyond one-size-fits-all categories to identify subgroups with shared neural mechanisms, this work opens the door to matching patients with the therapies most likely to help them based on the specific way their brain is wired. It suggests that brain circuit dysfunction may be a more meaningful way to stratify patients than symptoms alone. In the future, brain scans could be used to match individual patients with the treatments most likely to work for them, based on their specific neural profile.

More broadly, this study highlights the power of a transdiagnostic, dimensional approach to understanding mental illness. By focusing on neural circuits that cut across traditional diagnostic boundaries, we may be able to develop a more precise, mechanistic framework for classifying these conditions.

“To our knowledge, this is the first time we’ve been able to demonstrate that depression can be explained by different disruptions to the functioning of the brain,” says the study’s senior author, Dr. Leanne Williams, a professor of psychiatry and behavioral sciences, and the director of Stanford Medicine’s Center for Precision Mental Health and Wellness. “In essence, it’s a demonstration of a personalized medicine approach for mental health based on objective measures of brain function.”

The 6 Biotypes Of Depression

  1. The Overwhelmed Ruminator: This biotype has overactive brain circuits involved in self-reflection, detecting important information, and controlling attention. People in this group tend to have slowed-down emotional reactions and attention, but respond well to talk therapy.
  2. The Distracted Impulsive: This biotype has underactive brain circuits that control attention. They tend to have trouble concentrating and controlling impulses, and don’t respond as well to talk therapy.
  3. The Sensitive Worrier: This biotype has overactive brain circuits that process sadness and reward. They tend to have trouble experiencing pleasure and positive emotions.
  4. The Overcontrolled Perfectionist: This biotype has overactive brain circuits involved in regulating behavior and thoughts. They tend to have excessive negative emotions and threat sensitivity, trouble with working memory, but respond well to certain antidepressant medications.
  5. The Disconnected Avoider: This biotype has reduced connectivity in emotion circuits when viewing threatening faces, and reduced activity in behavior control circuits. They tend to have less rumination and faster reaction times to sad faces.
  6. The Balanced Coper: This biotype doesn’t show any major overactivity or underactivity in the brain circuits studied compared to healthy people. Their symptoms are likely due to other factors not captured by this analysis.

Of course, much work remains to translate these findings into clinical practice. The biotypes need to be replicated in independent samples and their stability over time needs to be established. We need to develop more efficient and scalable ways to assess circuit function that could be deployed in routine care. And ultimately, we will need prospective clinical trials that assign patients to treatments based on their biotype.

Nevertheless, this study represents a crucial proof of concept. It brings us one step closer to a future where psychiatric diagnosis is based not just on symptoms, but on an integrated understanding of brain, behavior, and response to interventions. As we continue to map the neural roots of mental illness, studies like this light the way towards more personalized and effective care for the millions of individuals struggling with these conditions.

“To really move the field toward precision psychiatry, we need to identify treatments most likely to be effective for patients and get them on that treatment as soon as possible,” says Dr. Jun Ma, the Beth and George Vitoux Professor of Medicine at the University of Illinois Chicago. “Having information on their brain function, in particular the validated signatures we evaluated in this study, would help inform more precise treatment and prescriptions for individuals.”

Source: https://studyfinds.org/there-are-6-different-types-of-depression-brain-pattern-study-shows/

Super dads, super kids: Science uncovers how the magic of fatherly care boosts child development

(Photo by Ketut Subiyanto from Pexels)

The crucial early years of a child’s life lay the foundation for their lifelong growth and happiness. Spending quality time with parents during these formative stages can lead to substantial positive changes in children. With that in mind, researchers have found an important link between a father’s involvement and their child’s successful development, both mentally and physically. Simply put, being a “super dad” results in raising super kids.

However, in Japan, where this study took place, a historical gender-based division of labor has limited fathers’ participation in childcare-related activities, impacting the development of children. Traditionally, Japanese fathers, especially those in their 20s to 40s, have been expected to prioritize work commitments over family responsibilities.

This cultural norm has resulted in limited paternal engagement in childcare, regardless of individual inclinations. The increasing number of mothers entering full-time employment further exacerbates the issue, leaving a void in familial support for childcare. With the central government advocating for paternal involvement in response to low fertility rates, Japanese fathers are now urged to become co-caregivers, shifting away from their traditional role as primary breadwinners.

While recent trends have found a rise in paternal childcare involvement, the true impact of this active participation on a child’s developmental outcomes has remained largely unexplored. This groundbreaking study published in Pediatric Research, utilizing data from the largest birth cohort in Japan, set out to uncover the link between paternal engagement and infant developmental milestones. Led by Dr. Tsuguhiko Kato from the National Center for Child Health and Development and Doshisha University Center for Baby Science, the study delved into this critical aspect of modern parenting.

“In developed countries, the time fathers spend on childcare has increased steadily in recent decades. However, studies on the relationship between paternal care and child outcomes remain scarce. In this study, we examined the association between paternal involvement in childcare and children’s developmental outcomes,” explains Dr. Kato in a media release.

Leveraging data from the Japan Environment and Children’s Study, the research team assessed developmental milestones in 28,050 Japanese children. These children received paternal childcare at six months of age and were evaluated for various developmental markers at three years. Additionally, the study explored whether maternal parenting stress mediates these outcomes at 18 months.

“The prevalence of employed mothers has been on the rise in Japan. As a result, Japan is witnessing a paradigm shift in its parenting culture. Fathers are increasingly getting involved in childcare-related parental activities,” Dr. Kato says.

The study measured paternal childcare involvement through seven key questions, gauging tasks like feeding, diaper changes, bathing, playtime, outdoor activities, and dressing. Each father’s level of engagement was scored accordingly. The research findings were then correlated with the extent of developmental delay in infants, as evaluated using the Ages and Stages questionnaire.
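
A minimal sketch of that scoring idea appears below; the seven task names and the 0-3 frequency scale are stand-ins, since the paper’s exact items and response options aren’t reproduced here.

```python
# Illustrative paternal-involvement score over seven childcare tasks.
CARE_TASKS = ["feeding", "diaper_change", "bathing", "indoor_play",
              "outdoor_play", "dressing", "putting_to_bed"]

def involvement_score(responses: dict[str, int]) -> int:
    """Sum hypothetical 0-3 frequency ratings (0=never .. 3=always)."""
    return sum(responses[task] for task in CARE_TASKS)

father = {task: 2 for task in CARE_TASKS}  # answers "often" on every task
print(involvement_score(father))           # -> 14 out of a possible 21
```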

Source: https://studyfinds.org/super-dads-super-kids/

Women are losing their X chromosomes — What’s causing it?

(Credit: ustas7777777/Shutterstock)

A groundbreaking new study has uncovered genetic factors that may help explain why some women experience a phenomenon called mosaic loss of the X chromosome (mLOX) as they age. With mLOX, some of a woman’s blood cells randomly lose one of their two X chromosomes over time. Concerningly, scientists believe this genetic oddity may lead to the development of several diseases, including cancer.

Researchers with the National Institutes of Health found that certain inherited gene variants make some women more susceptible to developing mLOX in the first place. Other genetic variations they identified seem to give a selective growth advantage to the blood cells that retain one X chromosome over the other after mLOX occurs.

Importantly, the study published in the journal Nature confirmed that women with mLOX have an elevated risk of developing blood cancers like leukemia and increased susceptibility to infections like pneumonia. This underscores the potential health implications of this chromosomal abnormality.

As some women age, their white blood cells can lose a copy of chromosome X. A new study sheds light on the potential causes and consequences of this phenomenon. (Credit: Created by Linda Wang with Biorender.com)

Paper Summary

Methodology

To uncover the genetic underpinnings of mLOX, the researchers conducted a massive analysis of nearly 900,000 women’s blood samples from eight different biobanks around the world. About 12% of these women showed signs of mLOX in their blood cells.

Results

By comparing the DNA of women with and without mLOX, the team pinpointed 56 common gene variants associated with developing the condition. Many of these genes are known to influence processes like abnormal cell division and cancer susceptibility. The researchers also found that rare mutations in a gene called FBXO10 could double a woman’s risk of mLOX. This gene likely plays an important role in the cellular processes that lead to randomly losing an X chromosome.
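
The core statistical move in a study like this is an association test run variant by variant. The toy example below uses a chi-square test on a single 2x2 carrier table with invented counts; a real genome-wide analysis would use regression models with covariates across millions of variants and a far stricter significance threshold.

```python
# Toy single-variant association test (all counts are invented).
from scipy.stats import chi2_contingency

#           carrier  non-carrier
table = [[  3_200,   104_800],   # women with mLOX
         [ 18_400,   773_600]]   # women without mLOX

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio = (table[0][0] * table[1][1]) / (table[0][1] * table[1][0])
print(f"odds ratio ~ {odds_ratio:.2f}, p = {p:.1e}")
```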

Source: https://studyfinds.org/women-losing-x-chromosomes/

Facially expressive people are more well-liked, socially successful

(Photo by airdone on Shutterstock)

Are you an open book, your face broadcasting every passing emotion, or more of a stoic poker face? Scientists at Nottingham Trent University say that wearing your heart on your sleeve (or rather, your face) could actually give you a significant social advantage. Their research shows that people who are more facially expressive are more well-liked by others, considered more agreeable and extraverted, and even fare better in negotiations if they have an amenable personality.

The study, led by Eithne Kavanagh, a research fellow at NTU’s School of Social Sciences, is the first large-scale systematic exploration of individual differences in facial expressivity in real-world social interactions. Across two studies involving over 1,300 participants, Kavanagh and her team found striking variations in how much people moved their faces during conversations. Importantly, this expressivity emerged as a stable individual trait. People displayed similar levels of facial expressiveness across different contexts, with different social partners, and even over time periods up to four months.

Connecting facial expressions with social success
So what drives these differences in facial communication styles and why do they matter? The researchers say that facial expressivity is linked to personality, with more agreeable, extraverted and neurotic individuals displaying more animated faces. But facial expressiveness also translated into concrete social benefits above and beyond the effects of personality.

In a negotiation task, more expressive individuals were more likely to secure a larger slice of a reward, but only if they were also agreeable. The researchers suggest that for agreeable folks, dynamic facial expressions may serve as a tool for building rapport and smoothing over conflicts.

Across the board, the results point to facial expressivity serving an “affiliative function,” or a social glue that fosters liking, cooperation and smoother interactions. Third-party observers and actual conversation partners consistently rated more expressive people as more likable.

Expressivity was also linked to being seen as more “readable,” suggesting that an animated face makes one’s intentions and mental states easier for others to decipher. Beyond frequency of facial movements, people who deployed facial expressions more strategically to suit social goals, such as looking friendly in a greeting, were also more well-liked.

“This is the first large scale study to examine facial expression in real-world interactions,” Kavanagh says in a media release. “Our evidence shows that facial expressivity is related to positive social outcomes. It suggests that more expressive people are more successful at attracting social partners and in building relationships. It also could be important in conflict resolution.”

Taking our faces at face value
The study, published in Scientific Reports, represents a major step forward in understanding the dynamics and social significance of facial expressions in everyday life. Moving beyond the traditional focus on static, stylized emotional expressions, it highlights facial expressivity as a consequential and stable individual difference.

The findings challenge the “poker face” intuition that a still, stoic demeanor is always most advantageous. Instead, they suggest that for most people, allowing one’s face to mirror inner states and intentions can invite warmer reactions and reap social rewards. The authors propose that human facial expressions evolved largely for affiliative functions, greasing the wheels of social cohesion and cooperation.

The results also underscore the importance of studying facial behavior situated in real-world interactions to unveil its true colors and consequences. Emergent technologies like automated facial coding now make it feasible to track the face’s mercurial movements in the wild, opening up new horizons for unpacking how this ancient communication channel shapes human social life.

Far from mere emotional readouts, our facial expressions appear to be powerful tools in the quest for interpersonal connection and social success. As the researchers conclude, “Being facially expressive is socially advantageous.” So the next time you catch yourself furrowing your brow or flashing a smile, know that your face just might be working overtime on your behalf to help you get ahead.


Source: https://studyfinds.org/facially-expressive-people-well-liked-socially-successful/

Can indie games inspire a creative boom from Indian developers?

Visai Games’ Venba won a Bafta Games Award this year

India might not be the first country that springs to mind when someone mentions video games, but it’s one of the fastest-growing markets in the world.
Analysts believe there could be more than half a billion players there by the end of this year.
Most of them are playing on mobile phones and tablets, and fans will tell you the industry is mostly known for fantasy sports games that let you assemble imaginary teams based on real players.
Despite concerns over gambling and possible addiction, they’re big business.
The country’s three largest video game startups – Games24x7, Dream11 and Mobile Premier League – all provide some kind of fantasy sport experience and are valued at over $1bn.
But there’s hope that a crop of story-driven games making a splash worldwide could inspire a new wave of creativity and investment.
During the recent Summer Game Fest (SGF) – an annual showcase of new and upcoming titles held in Los Angeles and watched by millions – audiences saw previews of a number of story-rich titles from South Asian teams.

Detective Dotson will also have a companion TV series produced

One of those was Detective Dotson by Masala Games, based in Gujarat, about a failed Bollywood actor turned detective.
Industry veteran Shalin Shodhan is behind the game and tells BBC Asian Network this focus on unique stories is “bucking the trend” in India’s games industry.
He wants video games to become an “interactive cultural export” but says he’s found creating new intellectual property difficult.
“There really isn’t anything in the marketplace to make stories about India,” he says, despite the strength of some of the country’s other cultural industries.
“If you think about how much intellectual property there is in film in India, it is really surprising to think nothing indigenous exists as an original entertainment property in games,” he says.
“It’s almost like the Indian audience accepted that we’re just going to play games from outside.”
Another game shown during SGF was The Palace on the Hill – a “slice-of-life” farming sim set in rural India.
Mala Sen, from developer Niku Games, says games like this and Detective Dotson are what “India needed”.
“We know that there are a lot of people in India who want games where characters and setting are relatable to them,” she says.

Games developed by South Asian teams based in western countries have been finding critical praise and commercial success in recent years.

Venba, a cooking sim that told the story of a migrant family reconnecting with their heritage through food, became the first game of its kind to take home a Bafta Game Award this year.

Canada-based Visai Games, which developed the title, was revealed during SGF as one of the first beneficiaries of a new fund set up by Among Us developer Innersloth to boost fellow indie developers.

That will go towards their new, unnamed project based on ancient Tamil legends.

Another title awarded funding by the scheme was Project Dosa, from developer Outerloop, that sees players pilot giant robots, cook Indian food and fight lawyers.

Its previous game, Thirsty Suitors, was also highly praised and nominated for a Bafta award this year.

Games such as these resonating with players worldwide helps shift perceptions across the wider industry, says Mumbai-based Indrani Ganguly, of Duronto Games.

“Finally, people are starting to see we’re not just a place for outsource work,” she says.

“We’re moving from India being a technical space to more of a creative hub.

“I’m not 100% seeing a shift but that’s more of a mindset thing.

“People who are able to make these kinds of games have always existed but now there is funding and resource opportunities available to be able to act on these creative visions.”

Earth’s inner core rotation slows down and reverses direction. What does this mean for the planet?

(Image by DestinaDesign on Shutterstock)

Earth’s inner core, a solid iron sphere nestled deep within our planet, has slowed its rotation, according to new research. Scientists from the University of Southern California say their discovery challenges previous notions about the inner core’s behavior and raises intriguing questions about its influence on Earth’s dynamics.

The inner core, a mysterious realm located nearly 3,000 miles beneath our feet, has long been known to rotate independently of the Earth’s surface. Scientists have spent decades studying this phenomenon, believing it to play a crucial role in generating our planet’s magnetic field and shaping the convection patterns in the liquid outer core. Until now, it was widely accepted that the inner core was gradually spinning faster than the rest of the Earth, a process known as super-rotation. However, this latest study, published in the journal Nature, reveals a surprising twist in this narrative.

“When I first saw the seismograms that hinted at this change, I was stumped,” says John Vidale, Dean’s Professor of Earth Sciences at the USC Dornsife College of Letters, Arts and Sciences, in a statement. “But when we found two dozen more observations signaling the same pattern, the result was inescapable. The inner core had slowed down for the first time in many decades. Other scientists have recently argued for similar and different models, but our latest study provides the most convincing resolution.”

Slowing Spin, Reversing Rhythm
By analyzing seismic waves generated by repeating earthquakes in the South Sandwich Islands from 1991 to 2023, the researchers discovered that the inner core’s rotation had not only slowed down but had actually reversed direction. The team focused on a specific type of seismic wave called PKIKP, which traverses the inner core and is recorded by seismic arrays in northern North America. By comparing the waveforms of these waves from 143 pairs of repeating earthquakes, they noticed a peculiar pattern.
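
The comparison step boils down to asking how similar two recordings are. The snippet below is a bare-bones sketch of that idea, scoring a pair of synthetic traces with a zero-lag normalized cross-correlation; the actual analysis of PKIKP waveforms involves far more careful alignment and quality control.

```python
# Sketch: similarity of two waveforms via normalized cross-correlation.
import numpy as np

def similarity(wave_a: np.ndarray, wave_b: np.ndarray) -> float:
    """Zero-lag correlation coefficient of two equal-length traces."""
    a = (wave_a - wave_a.mean()) / wave_a.std()
    b = (wave_b - wave_b.mean()) / wave_b.std()
    return float(np.dot(a, b) / len(a))

t = np.linspace(0.0, 1.0, 500)
event_a = np.sin(40 * t) * np.exp(-3 * t)  # synthetic "repeating quake" trace
event_b = event_a + 0.05 * np.random.default_rng(1).normal(size=500)
print(f"waveform similarity: {similarity(event_a, event_b):.3f}")  # near 1.0
```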

Many of the earthquake pairs exhibited seismic waveforms that changed over time, but remarkably, they later reverted to match their earlier counterparts. This observation suggests that the inner core, after a period of super-rotation from 2003 to 2008, had begun to sub-rotate, or spin more slowly than the Earth’s surface, essentially retracing its previous path. The researchers found that from 2008 to 2023, the inner core sub-rotated two to three times more slowly than its prior super-rotation.

The inner core began to decrease its speed around 2010, moving slower than the Earth’s surface. (Credit: USC Graphic/Edward Sotelo)

The study’s findings paint a captivating picture of the inner core’s rotational dynamics. The matching waveforms observed in numerous earthquake pairs indicate moments when the inner core returned to positions it had occupied in the past, relative to the mantle. This pattern, combined with insights from previous studies, reveals that the inner core’s rotation is far more complex than a simple, steady super-rotation.

The researchers discovered that the inner core’s super-rotation from 2003 to 2008 was faster than its subsequent sub-rotation, suggesting an asymmetry in its behavior. This difference in rotational rates implies that the interactions between the inner core, outer core, and mantle are more intricate than previously thought.

Limitations: Pieces Of The Core Puzzle
While the study offers compelling evidence for the inner core’s slowing and reversing rotation, it does have some limitations. The seismic data come from a single source region (repeating earthquakes in the South Sandwich Islands recorded by arrays in northern North America), so spatial coverage of the inner core is relatively sparse. Furthermore, the models used to interpret the waveform changes, despite their sophistication, are still simplified representations of the complex dynamics at play.

The authors emphasize the need for additional high-resolution data from a broader range of locations to strengthen their findings. They also call for ongoing refinement of these models to better capture the intricacies of the inner core’s behavior and its interactions with the outer core and mantle.

Source: https://studyfinds.org/earth-inner-core-rotation-slows/

Mars missions likely impossible for astronauts without kidney dialysis

Photo by Mike Kiev from Unsplash

New study shows damage from cosmic radiation, microgravity could be ‘catastrophic’ for human body
LONDON — As humanity sets its sights on deep space missions to the Moon, Mars, and beyond, a team of international researchers has uncovered a potential problem lurking in the shadows of these ambitious plans: spaceflight-induced kidney damage.

The findings, in a nutshell
In a new study that integrated a dizzying array of cutting-edge scientific techniques, researchers from University College London found that exposure to the unique stressors of spaceflight — such as microgravity and galactic cosmic radiation — can lead to serious, potentially irreversible kidney problems in astronauts.

This sobering discovery, published in Nature Communications, not only highlights the immense challenges of long-duration space travel but also underscores the urgent need for effective countermeasures to protect the health of future space explorers.

“If we don’t develop new ways to protect the kidneys, I’d say that while an astronaut could make it to Mars they might need dialysis on the way back,” says the study’s first author, Dr. Keith Siew, from the London Tubular Centre, based at the UCL Department of Renal Medicine, in a media release. “We know that the kidneys are late to show signs of radiation damage; by the time this becomes apparent it’s probably too late to prevent failure, which would be catastrophic for the mission’s chances of success.”

New research shows that exposure to the unique stressors of spaceflight — such as microgravity and galactic cosmic radiation — can lead to serious, potentially irreversible kidney problems in astronauts. (© alonesdj – stock.adobe.com)

Methodology

To unravel the complex effects of spaceflight on the kidneys, the researchers analyzed a treasure trove of biological samples and data from 11 different mouse missions, five human spaceflights, one simulated microgravity experiment in rats, and four studies exposing mice to simulated galactic cosmic radiation on Earth.

The team left no stone unturned, employing a comprehensive “pan-omics” approach that included epigenomics (studying changes in gene regulation), transcriptomics (examining gene expression), proteomics (analyzing protein levels), epiproteomics (investigating protein modifications), metabolomics (measuring metabolite profiles), and metagenomics (exploring the microbiome). They also pored over clinical chemistry data (electrolytes, hormones, biochemical markers), assessed kidney function, and scrutinized kidney structure and morphology using advanced histology, 3D imaging, and in situ hybridization techniques.

By integrating and cross-referencing these diverse datasets, the researchers were able to paint a remarkably detailed and coherent picture of how spaceflight stressors impact the kidneys at multiple biological levels, from individual molecules to whole organ structure and function.

Results
The study’s findings are as startling as they are sobering. Exposure to microgravity and simulated cosmic radiation induced a constellation of detrimental changes in the kidneys of both humans and animals.

First, the researchers discovered that spaceflight alters the phosphorylation state of key kidney transport proteins, suggesting that the increased kidney stone risk in astronauts is not solely a secondary consequence of bone demineralization but also a direct result of impaired kidney function.

Second, they found evidence of extensive remodeling of the nephron – the basic structural and functional unit of the kidney. This included the expansion of certain tubule segments but an overall loss of tubule density, hinting at a maladaptive response to the unique stressors of spaceflight.

Perhaps most alarmingly, exposing mice to a simulated galactic cosmic radiation dose equivalent to a round trip to Mars led to overt signs of kidney damage and dysfunction, including vascular injury, tubular damage, and impaired filtration and reabsorption.

Piecing together the diverse “omics” datasets, the researchers identified several convergent molecular pathways and biological processes that were consistently disrupted by spaceflight, causing mitochondrial dysfunction, oxidative stress, inflammation, fibrosis, and senescence (cells permanently losing the ability to divide) — all hallmarks of chronic kidney disease.

Source: https://studyfinds.org/mars-missions-catastrophic-astronauts-kidneys/

Being more optimistic can keep you from procrastinating

(© chinnarach – stock.adobe.com)

We’ve all been there — a big task is looming over our heads, but we choose to put it off for another day. Procrastination is so common that researchers have spent years trying to understand what drives some people to chronically postpone important chores until the last possible moment. Now, researchers from the University of Tokyo have found a fascinating factor that may be the cause of procrastination: people’s view of the future.

The findings, in a nutshell
Researchers found evidence that having a pessimistic view about how stressful the future will be could increase the likelihood of falling into a pattern of severe procrastination. Moreover, the study published in Scientific Reports reveals that having an optimistic view on the future wards off the urge to procrastinate.

“Our research showed that optimistic people — those who believe that stress does not increase as we move into the future — are less likely to have severe procrastination habits,” explains Saya Kashiwakura from the Graduate School of Arts and Sciences at the University of Tokyo, in a media release. “This finding helped me adopt a more light-hearted perspective on the future, leading to a more direct view and reduced procrastination.”

Researchers from the University of Tokyo have found a fascinating factor that may be the cause of procrastination: people’s view of the future. (Credit: Ground Picture/Shutterstock)

Methodology
To examine procrastination through the lens of people’s perspectives on the past, present, and future, the researchers introduced new measures they dubbed the “chronological stress view” and “chronological well-being view.” Study participants were asked to rate their levels of stress and well-being across nine different timeframes: the past 10 years, past year, past month, yesterday, now, tomorrow, next month, next year, and the next 10 years.

The researchers then used clustering analysis to group participants based on the patterns in their responses over time – for instance, whether their stress increased, decreased or stayed flat as they projected into the future. Participants were also scored on a procrastination scale, allowing the researchers to investigate whether certain patterns of future perspective were associated with more or less severe procrastination tendencies.
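
To make the trajectory idea concrete, here is a crude rule-based classifier for one participant’s nine ratings. It is only an illustration: the study let clustering discover the shapes from the data (including the “skewed mountain” described below) rather than imposing rules like these.

```python
# Rough shape labels for nine stress ratings ordered from "past 10
# years" through "now" to "next 10 years" (the rating scale is
# hypothetical; only the timeframe structure mirrors the study).
def trajectory_shape(ratings: list[float]) -> str:
    past, present, future = ratings[:4], ratings[4], ratings[5:]
    past_mean = sum(past) / len(past)
    future_mean = sum(future) / len(future)
    if present < past_mean and present < future_mean:
        return "V-shaped"        # stress lowest right now
    if future_mean < past_mean:
        return "descending"      # stress expected to ease off
    if future_mean > past_mean:
        return "ascending"       # stress expected to climb
    return "flat"

print(trajectory_shape([8, 7, 6, 6, 5, 4, 4, 3, 3]))  # -> descending
```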

Results: Procrastination is All About Mindset
When examining the chronological stress view patterns, the analysis revealed four distinct clusters: “descending” (stress decreases over time), “ascending” (stress increases), “V-shaped” (stress is lowest in the present), and a “skewed mountain” shape where stress peaked in the past and declined toward the future.

Intriguingly, the researchers found a significant relationship between cluster membership and level of procrastination. The percentage of severe procrastinators was significantly lower in the “descending” cluster – those who believed their stress levels would decrease as they projected into the future.

Source: https://studyfinds.org/being-more-optimistic-can-keep-you-from-procrastinating/

Who’s most vulnerable to scams? Psychologists reveal who criminals target and why

(Credit: fizkes/Shutterstock)

About 1 in 6 Americans are age 65 or older, and that percentage is projected to grow. Older adults often hold positions of power, have retirement savings accumulated over the course of their lifetimes, and make important financial and health-related decisions – all of which makes them attractive targets for financial exploitation.

In 2021, there were more than 90,000 older victims of fraud, according to the FBI. These cases resulted in US$1.7 billion in losses, a 74% increase compared with 2020. Even so, that may be a significant undercount since embarrassment or lack of awareness keeps some victims from reporting.

Financial exploitation represents one of the most common forms of elder abuse. Perpetrators are often individuals in the victims’ inner social circles – family members, caregivers, or friends – but can also be strangers.

When older adults experience financial fraud, they typically lose more money than younger victims. Those losses can have devastating consequences, especially since older adults have limited time to recoup – dramatically reducing their independence, health, and well-being.

But older adults have been largely neglected in research on this burgeoning type of crime. We are psychologists who study social cognition and decision-making, and our research lab at the University of Florida is aimed at understanding the factors that shape vulnerability to deception in adulthood and aging.

Defining vulnerability
Financial exploitation involves a variety of exploitative tactics, such as coercion, manipulation, undue influence, and, frequently, some sort of deception.

The majority of current research focuses on people’s ability to distinguish between truth and lies during interpersonal communication. However, deception occurs in many contexts – increasingly, over the internet.

Our lab conducts laboratory experiments and real-world studies to measure susceptibility under various conditions: investment games, lie/truth scenarios, phishing emails, text messages, fake news and deepfakes – fabricated videos or images that are created by artificial intelligence technology.

To study how people respond to deception, we use measures like surveys, brain imaging, behavior, eye movement, and heart rate. We also collect health-related biomarkers, such as being a carrier of gene variants that increase risk for Alzheimer’s disease, to identify individuals with particular vulnerability.

And our work shows that an older adult’s ability to detect deception is not just about their individual characteristics. It also depends on how they are being targeted.

Individual risk factors
Better cognition, social and emotional capacities, and brain health are all associated with less susceptibility to deception.

Cognitive functions, such as how quickly our brain processes information and how well we remember it, decline with age and impact decision-making. For example, among people around 70 years of age or older, declines in analytical thinking are associated with reduced ability to detect false news stories.

Additionally, low memory function in aging is associated with greater susceptibility to email phishing. Further, according to recent research, this correlation is especially pronounced among older adults who carry a gene variant that is a genetic risk factor for developing Alzheimer’s disease later in life. Indeed, some research suggests that greater financial exploitability may serve as an early marker of disease-related cognitive decline.

Social and emotional influences are also crucial. Negative mood can enhance somebody’s ability to detect lies, while positive mood in very old age can impair a person’s ability to detect fake news.

Lack of support and loneliness exacerbate susceptibility to deception. Social isolation during the COVID-19 pandemic has led to increased reliance on online platforms, and older adults with lower digital literacy are more vulnerable to fraudulent emails and robocalls.

Isolation during the COVID-19 pandemic has increased aging individuals’ vulnerability to online scams. (© Andrey Popov – stock.adobe.com)

Finally, an individual’s brain and body responses play a critical role in susceptibility to deception. One important factor is interoceptive awareness: the ability to accurately read our own body’s signals, like a “gut feeling.” This awareness is correlated with better lie detection in older adults.

According to a first-of-its-kind study, financially exploited older adults had a significantly smaller insula – a brain region key to integrating bodily signals with environmental cues – than older adults who had been exposed to the same threat but avoided it. Reduced insula activity is also related to greater difficulty picking up on cues that make someone appear less trustworthy.

Types of effective fraud
Not all deception is equally effective on everyone.

Our findings show that email phishing that relies on reciprocation – people’s tendency to repay what another person has provided them – was more effective on older adults. Younger adults, on the other hand, were more likely to fall for phishing emails that employed scarcity: people’s tendency to perceive an opportunity as more valuable if they are told its availability is limited. For example, an email might alert you that a coin collection from the 1950s has become available for a special reduced price if purchased within the next 24 hours.

There is also evidence that as we age, we have greater difficulty detecting the “wolf in sheep’s clothing”: someone who appears trustworthy, but is not acting in a trustworthy way. In a card-based gambling game, we found that compared with their younger counterparts, older adults are more likely to select decks presented with trustworthy-looking faces, even though those decks consistently resulted in negative payouts. Even after learning about untrustworthy behavior, older adults showed greater difficulty overcoming their initial impressions.

Reducing vulnerability
Identifying who is especially at risk for financial exploitation in aging is crucial for preventing victimization.

We believe interventions should be tailored rather than one-size-fits-all. For example, perhaps machine learning algorithms could someday determine the most dangerous types of deceptive messages that certain groups encounter – such as in text messages, emails, or social media platforms – and provide on-the-spot warnings. Black and Hispanic consumers are more likely to be victimized, so there is also a dire need for interventions that resonate with their communities.

Prevention efforts would benefit from taking a holistic approach to helping older adults reduce their vulnerability to scams. Training in financial, health, and digital literacy is important, but so are programs to address loneliness.

People of all ages need to keep these lessons in mind when interacting with online content or strangers – but not only then. Unfortunately, financial exploitation often comes from individuals close to the victim.

Source: https://studyfinds.org/whos-most-vulnerable-to-scams/

Mushroom-infused ‘microdosing’ chocolate bars are sending people to the hospital, prompting investigation: FDA

The Food and Drug Administration (FDA) is warning consumers about a mushroom-infused chocolate bar that has reportedly sent some people to the hospital.

The FDA released an advisory message about Diamond Shruumz “microdosing” chocolate bars on June 7. The chocolate bars contain a “proprietary nootropics blend” that is said to give a “relaxed euphoric experience without psilocybin,” according to its website.

“The FDA and CDC, in collaboration with America’s Poison Centers and state and local partners, are investigating a series of illnesses associated with eating Diamond Shruumz-brand Microdosing Chocolate Bars,” the FDA’s website reads.

“Do not eat, sell, or serve Diamond Shruumz-Brand Microdosing Chocolate Bars,” the site warns. “FDA’s investigation is ongoing.”

The FDA is warning consumers against Diamond Shruumz chocolate bars. (FDA | iStock)

“Microdosing” is a practice where one takes a very small amount of psychedelic drugs with the intent of increasing productivity, inspiring creativity and boosting mood. According to Diamond Shruumz’s website, the brand said its products help achieve “a subtle, sumptuous experience and a more creative state of mind.”

“We’re talkin’ confections with a kick,” the brand said. “So if you like mushroom chocolate bars and want to mingle with some microdosing, check us out. We just might change how you see the world.”

But government officials warn that the products have caused seizures in some consumers and vomiting in others.

“People who became ill after eating Diamond Shruumz-brand Microdosing Chocolate Bars reported a variety of severe symptoms including seizures, central nervous system depression (loss of consciousness, confusion, sleepiness), agitation, abnormal heart rates, hyper/hypotension, nausea, and vomiting,” the FDA reported.

Six people reportedly experienced reactions so severe that they had to be hospitalized.

At least eight people have suffered a variety of medical symptoms from the chocolates, including nausea. (iStock)

“All eight people have reported seeking medical care; six have been hospitalized,” the FDA’s press release said. “No deaths have been reported.”

Diamond Shruumz says on its website that its products are not necessarily psychedelic. Although the chocolate is marketed as promising a psilocybin-like experience, there is no psilocybin in it.

“There is no presence of psilocybin, amanita or any scheduled drugs, ensuring a safe and enjoyable experience,” the website claims. “Rest assured, our treats are not only free from psychedelic substances but our carefully crafted ingredients still offer an experience.”

“This allows you to indulge in a uniquely crafted blend designed for your pleasure and peace of mind.”

Officials warn consumers to keep the products out of the reach of minors, as kids and teens may be tempted to eat the chocolate bars.

Source: https://www.foxnews.com/health/mushroom-infused-microdosing-chocolate-bars-sending-people-hospital-prompting-investigation-fda

 

Elephants give each other ‘names,’ just like humans

(Photo by Unsplash+ in collaboration with Getty Images)

They say elephants never forget a face, and now as it turns out, they seem to remember names too. That is, the “names” they have for one another. Yes, believe it or not, a new study shows that elephants actually have the rare ability to identify one another through unique calls, essentially giving one another human-like names when they converse.

Scientists from Colorado State University, along with a team of researchers from Save the Elephants and ElephantVoices, used machine learning to make this fascinating discovery. Their work suggests that elephants possess a level of communication and abstract thought that is more similar to ours than previously believed.

In the study, published in Nature Ecology and Evolution, the researchers analyzed hundreds of recorded elephant calls from Kenya’s Samburu National Reserve and Amboseli National Park. By training a sophisticated model to identify the intended recipient of each call based on its unique acoustic features, they could confirm that elephant calls contain a name-like component, a behavior they had suspected based on observation.

“Dolphins and parrots call one another by ‘name’ by imitating the signature call of the addressee. By contrast, our data suggest that elephants do not rely on imitation of the receiver’s calls to address one another, which is more similar to the way in which human names work,” says lead author Michael Pardo, who conducted the study as a postdoctoral researcher at CSU and Save the Elephants, in a statement.

Once the team had matched specific calls to the corresponding elephants, the scientists played back the recordings and observed the animals’ reactions. When the calls were addressed to them, the elephants responded positively by calling back or approaching the speaker. In contrast, calls meant for other elephants elicited less enthusiasm, demonstrating that the elephants recognized their own “names.”

Two juvenile elephants greet each other in Samburu National Reserve in Kenya. (Credit: George Wittemyer)

Elephants’ Brains Even More Complex Than Realized

The ability to learn and produce new sounds, a prerequisite for naming individuals, is uncommon in the animal kingdom. This form of arbitrary communication, where a sound represents an idea without imitating it, is considered a higher-level cognitive skill that greatly expands an animal’s capacity to communicate.

Co-author George Wittemyer, a professor at CSU’s Warner College of Natural Resources and chairman of the scientific board of Save the Elephants, elaborated on the implications of this finding: “If all we could do was make noises that sounded like what we were talking about, it would vastly limit our ability to communicate.” He adds that the use of arbitrary vocal labels suggests that elephants may be capable of abstract thought.

To arrive at these conclusions, the researchers embarked on a four-year study that included 14 months of intensive fieldwork in Kenya. They followed elephants in vehicles, recording their vocalizations and capturing approximately 470 distinct calls from 101 unique callers and 117 unique receivers.

Kurt Fristrup, a research scientist in CSU’s Walter Scott, Jr. College of Engineering, developed a novel signal processing technique to detect subtle differences in call structure. Together with Pardo, he trained a machine-learning model to correctly identify the intended recipient of each call based solely on its acoustic features. This innovative approach allowed the researchers to uncover the hidden “names” within the elephant calls.
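
The paper's actual model isn't reproduced here, but the core logic, testing whether calls predict their intended recipient better than chance, can be sketched in a few lines of Python. Everything below is hypothetical: random stand-in features and a random-forest classifier chosen purely for illustration.

```python
# Hypothetical sketch, not the authors' model or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_calls, n_features = 470, 20                # ~470 calls; feature count invented
X = rng.normal(size=(n_calls, n_features))   # stand-in acoustic features per call
y = rng.integers(0, 117, size=n_calls)       # stand-in IDs for the 117 receivers

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# On real recordings, accuracy meaningfully above chance (about 1/117
# here) would indicate a receiver-specific, name-like signal in the calls.
print(model.score(X_te, y_te))
```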

Source: https://studyfinds.org/elephants-give-each-other-names/

Baby talk explained! All those sounds mean more than you think

Mother and baby lying down together (Photo by Ana Tablas on Unsplash)

From gurgling “goos” to squealing “wheees!”, the delightful symphony of sounds emanating from a baby’s crib may seem like charming gibberish to the untrained ear. However, a new study suggests that these adorable vocalizations are far more than just random noise — they’re actually a crucial stepping stone on the path to language development.

The research, published in PLOS One, took a deep dive into the vocal patterns of 130 typically developing infants over the course of their first year of life. Their discoveries challenge long-held assumptions about how babies learn to communicate.

Traditionally, many experts believed that infants start out making haphazard sounds, gradually progressing to more structured “baby talk” as they listen to and imitate the adults around them. This new study paints a different picture, one where babies are actively exploring and practicing different categories of sounds in what might be thought of as a precursor to speech.

Think of it like a baby’s very first music lesson. Just as a budding pianist might spend time practicing scales and chords, it seems infants devote chunks of their day to making specific types of sounds, almost as if they’re trying to perfect their technique.

The researchers reached this conclusion after sifting through an enormous trove of audio data captured by small recording devices worn by the babies as they went about their daily lives. In total, they analyzed over 1,100 daylong recordings, adding up to nearly 14,500 hours – or about 1.6 years – of audio.

Using special software to isolate the infant vocalizations, the research team categorized the sounds into three main types: squeals (high-pitched, often excited-sounding noises), growls (low-pitched, often “rumbly” sounds), and vowel-like utterances (which the researchers dubbed “vocants”).

Next, they zoomed in on five-minute segments from each recording, hunting for patterns in how these sound categories were distributed. The results were striking: 40% of the recordings showed significant “clustering” of squeals, with a similar percentage showing clustering of growls. In other words, the babies weren’t randomly mixing their sounds, but rather, they seemed to focus on one type at a time, practicing it intensively.
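
One way to formalize that notion of "clustering" is a permutation test: measure how concentrated squeals are within five-minute segments, then compare against random shufflings of the labels. The sketch below is a hedged illustration with invented data and a simple variance statistic, not the published analysis.

```python
# Hedged illustration: invented data, simple variance statistic.
import numpy as np

rng = np.random.default_rng(0)
# One hypothetical recording day: each vocalization gets a five-minute
# segment index and a flag for whether it was a squeal.
segments = rng.integers(0, 12, size=200)   # which of 12 segments
is_squeal = rng.random(200) < 0.2          # stand-in squeal labels

def concentration(seg, squeal, n_seg=12):
    # Variance of squeal counts across segments; higher = more bunched.
    counts = np.bincount(seg[squeal], minlength=n_seg)
    return counts.var()

observed = concentration(segments, is_squeal)
# Null distribution: shuffle which vocalizations were squeals.
null = [concentration(segments, rng.permutation(is_squeal)) for _ in range(2000)]
p_value = np.mean(np.array(null) >= observed)
print(p_value)  # a small p-value would suggest genuine clustering
```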

Source: https://studyfinds.org/baby-talk-explained/

Why do giraffes have long necks? Researchers may finally have the answer

Photo by Krish Radhakrishna from Unsplash

Everything in biology ultimately boils down to food and sex. To survive as an individual, you need food. To survive as a species, you need sex.

Not surprisingly, then, the age-old question of why giraffes have long necks has centered around food and sex. After debating this question for the past 150 years, biologists still cannot agree on which of these two factors was the most important in the evolution of the giraffe’s neck. In the past three years, my colleagues and I have been trying to get to the bottom of this question.

Necks for sex
In the 19th century, biologists Charles Darwin and Jean Baptiste Lamarck both speculated that giraffes’ long necks helped them reach acacia leaves high up in the trees, though they likely weren’t observing actual giraffe behavior when they came up with this theory. Several decades later, when scientists started observing giraffes in Africa, a group of biologists came up with an alternative theory based on sex and reproduction.

These pioneering giraffe biologists noticed how male giraffes, standing side by side, used their long necks to swing their heads and club each other. The researchers called this behavior “neck-fighting” and guessed that it helped the giraffes prove their dominance over each other and woo mates. Males with the longest necks would win these contests and, in turn, boost their reproductive success. That favorability, the scientists predicted, drove the evolution of long necks.

Since its inception, the necks-for-sex sexual selection hypothesis has overshadowed Darwin’s and Lamarck’s necks-for-food hypothesis.

The necks-for-sex hypothesis predicts that males should have longer necks than females since only males use them to fight, and indeed, they do. But adult male giraffes are also about 30% to 50% larger than female giraffes. All of their body components are bigger. So, my team wanted to find out whether males have proportionally longer necks when accounting for their overall stature, composed of their head, neck, and forelegs.

Necks not for sex?
But it’s not easy to measure giraffe body proportions. For one, their necks grow disproportionately faster during the first six to eight years of their life. And in the wild, you can’t tell exactly how old an individual animal is. To get around these problems, we measured body proportions in captive Masai giraffes in North American zoos. Here, we knew the exact age of the giraffes and could then compare this data with the body proportions of wild giraffes that we knew confidently were older than 8 years.

To our surprise, we found that adult female giraffes have proportionally longer necks than males, which contradicts the necks-for-sex hypothesis. We also found that adult female giraffes have proportionally longer body trunks, while adult males have proportionally longer forelegs and thicker necks.

Giraffe babies don’t have any of these sex-specific body proportion differences. They only appear as giraffes are reaching adulthood.

Finding that female giraffes have proportionally both longer necks and longer body trunks led us to propose that females, and not males, drove the evolution of the giraffe’s long neck, and not for sex but for food and reproduction. Our theory agrees with Darwin and Lamarck that food was the major driver of the evolution of the giraffe’s neck, but with an emphasis on female reproductive success.

A shape to die for
Giraffes are notoriously picky eaters and browse on fresh leaves, flowers, and seed pods. Female giraffes especially need enough to eat because they spend most of their adult lives either pregnant or providing milk to their calves.

Females tend to use their long necks to probe deep into bushes and trees to find the most nutritious food. By contrast, males tend to feed high in trees by fully extending their necks vertically. Females need proportionally longer trunks to grow calves that can be well over 6 feet tall at birth.

For males, I’d guess that their proportionally longer forelegs are an adaptation that allows them to mount females more easily during sex. While we found that their necks might not be as proportionally long as females’ necks are, they are thicker. That’s probably an adaptation that helps them win neck fights.

Source: https://studyfinds.org/why-do-giraffes-have-long-necks/

Eleven tonnes of rubbish taken off Himalayan peaks

Fewer permits were issued and fewer climbers died on Mount Everest in 2024 than 2023.

The Nepalese army says it has removed eleven tonnes of rubbish, four corpses and one skeleton from Mount Everest and two other Himalayan peaks this year.

It took troops 55 days to recover the rubbish and bodies from the Everest, Nuptse and Lhotse mountains.

It is estimated that more than fifty tonnes of waste and more than 200 bodies remain on Everest.

The army began conducting an annual clean-up of the mountain, which is often described as the world’s highest garbage dump, in 2019 amid concerns about overcrowding and climbers queueing in dangerous conditions to reach the summit.

The five clean-ups have collected 119 tonnes of rubbish, 14 human corpses and some skeletons, the army says.

This year, authorities aimed to reduce rubbish and improve rescues by making climbers wear tracking devices and bring back their own poo.

In the future, the government plans to create a mountain rangers team to monitor rubbish and put more money toward its collection, Nepal’s Department of Tourism director of mountaineering Rakesh Gurung told the BBC.

For the spring climbing season that ended in May, the government issued permits to 421 climbers, down from a record-breaking 478 last year. Those numbers do not include Nepalese guides. In total, an estimated 600 people climbed the mountain this year.

This year, eight climbers died or went missing, compared to 19 last year.

A Brit, Daniel Paterson, and his Nepalese guide, Pastenji Sherpa, are among those missing after being hit by falling ice on 21 May.

Mr Paterson’s family started a fundraiser to hire a search team to find them, but said in an update on 4 June that recovery “is not possible at this time” because of the location and danger of the operation.

Mr Gurung said the number of permits was lower this year because of the global economic situation, China also issuing permits, and the national election in India, which reduced the number of climbers from that country.

Source: https://www.bbc.com/news/articles/cq5539lj1pqo

Women experience greater mental agility during menstruation

For female athletes, the impact of the menstrual cycle on physical performance has been a topic of much discussion. But what about the mental side of the game? A groundbreaking new study suggests that certain cognitive abilities, particularly those related to spatial awareness and anticipation, may indeed ebb and flow with a woman’s cycle.

(Photo 102762325 | Black Teen Brain © Denisismagilov | Dreamstime.com)

The findings, in a nutshell
Researchers from University College London tested nearly 400 participants on a battery of online cognitive tasks designed to measure reaction times, attention, visuospatial functions (like 3D mental rotation), and timing anticipation. The study, published in Neuropsychologia, included men, women on hormonal contraception, and naturally cycling women.

Fascinatingly, the naturally cycling women exhibited better overall cognitive performance during menstruation compared to any other phase of their cycle. This held true even though these women reported poorer mood and more physical symptoms during their period. In contrast, performance dipped during the late follicular phase (just before ovulation) and the luteal phase (after ovulation).

“What is surprising is that the participant’s performance was better when they were on their period, which challenges what women, and perhaps society more generally, assume about their abilities at this particular time of the month,” says Dr. Flaminia Ronca, first author of the study from UCL, in a university release.

“I hope that this will provide the basis for positive conversations between coaches and athletes about perceptions and performance: how we feel doesn’t always reflect how we perform.”

This study provides compelling preliminary evidence that sport-relevant cognitive skills may indeed fluctuate across the menstrual cycle, with a surprising boost during menstruation itself. If confirmed in future studies, this could have implications for understanding injury risk and optimizing mental training in female athletes.

Importantly, there was a striking mismatch between women’s perceptions and their actual performance. Many felt their thinking was impaired during their period when, in fact, it was enhanced. This points to the power of negative expectations and the importance of educating athletes about their unique physiology.

Source: https://studyfinds.org/womens-brains-show-more-mental-agility-during-their-periods/

Colon cancer crisis in young people could be fueled by booming drinks brands adored by teens

They are used by millions of workers to power through afternoon slumps.

But highly caffeinated energy drinks could be partly fueling the explosion of colorectal cancers in young people, US researchers warn.

They believe an ingredient in Red Bull and other top brands such as Celsius and Monster may be linked to bacteria in the gut that speeds up tumor growth.

Researchers in Florida theorize that cancer cells use taurine – an amino acid thought to improve mental clarity – as their ‘primary energy source.’

At the world’s biggest cancer conference this week, the team announced a new human trial that will test their hypothesis, which so far is based on animal studies.

They plan to discover whether drinking an energy drink every day causes levels of cancer-causing gut bacteria to rise.

Highly caffeinated energy drinks could be partly fueling the explosion of colorectal cancers in young people, US researchers warn, based on a new hypothesis.

DailyMail.com revealed earlier this week how diets high in sugar and low in fiber may also be contributing to the epidemic of colon cancers in under-50s.

The University of Florida researchers are recruiting around 60 people aged 18 to 40 to be studied for four weeks.

Half of the group will consume at least one original Red Bull or Celsius (a sugar-free energy drink) per day; their gut bacteria will then be compared with those of a control group who don’t.

The upcoming trial is ‘one of the earliest’ studies to evaluate potential factors contributing to the meteoric rise in colorectal cancer, the researchers say.

Early-onset cancers are still uncommon. About 90 percent of all cancers affect people over the age of 50.

But rates in younger age groups have soared around 70 percent since the 1990s, with around 17,000 new cases diagnosed in the US each year.

Source: https://www.dailymail.co.uk/health/article-13493163/red-bull-colon-cancer-crisis-young-people.html

Here’s why sugar wreaks havoc on gut health, worsens inflammatory bowel disease

(Photo by Alexander Grey from Unsplash)

There can be a lot of inconsistent dietary advice when it comes to gut health, but the advice that eating lots of sugar is harmful tends to be the most consistent of all. Scientists from the University of Pittsburgh are now showing that consuming excess sugar disrupts the cells that keep the colon healthy in mice with inflammatory bowel disease (IBD).

“The prevalence of IBD is rising around the world, and it’s rising the fastest in cultures with industrialized, urban lifestyles, which typically have diets high in sugar,” says senior author Timothy Hand, Ph.D., associate professor of pediatrics and immunology at Pitt’s School of Medicine and UPMC Children’s Hospital of Pittsburgh. “Too much sugar isn’t good for a variety of reasons, and our study adds to that evidence by showing how sugar may be harmful to the gut. For patients with IBD, high-density sugar — found in things like soda and candy — might be something to stay away from.”

In this study, researchers fed mice either a standard or high-sugar diet, and then mimicked IBD symptoms by exposing them to a chemical called DSS, which damages the colon.

Shockingly, all of the mice that ate a high-sugar diet died within nine days. All of the animals that ate a standard diet lived until the end of the 14-day experiment. To figure out where things went wrong, the team looked for answers inside the colon. Typically, the colon is lined with a layer of epithelial cells that are arranged with finger-like projections called crypts. They are frequently replenished by dividing stem cells to keep the colon healthy.

“The colon epithelium is like a conveyor belt,” explains Hand in a media release. “It takes five days for cells to travel through the circuit from the bottom to the top of the crypt, where they are shed into the colon and defecated out. You essentially make a whole new colon every five days.”

(© T. L. Furrer – stock.adobe.com)

This system collapsed in mice fed a high-sugar diet
In fact, the protective layer of cells was completely gone in some animals, leaving the colon filled with blood and immune cells. This suggests that sugar may directly impact the colon, rather than the harm being dependent on the gut microbiome, which is what the team originally thought.

To compare the findings to human colons, the researchers used poppy seed-sized intestinal cultures grown in a lab dish. They found that as sugar concentrations increased, fewer cultures developed, which suggests that sugar hinders cell division.

“We found that stem cells were dividing much more slowly in the presence of sugar — likely too slow to repair damage to the colon,” says Hand. “The other strange thing we noticed was that the metabolism of the cells was different. These cells usually prefer to use fatty acids, but after being grown in high-sugar conditions, they seemed to get locked into using sugar.”

Hand adds that these findings may be key to strengthening existing links between sweetened drinks and worse IBD outcomes.

Source: https://studyfinds.org/sugar-wreaks-havoc-gut-health/

Shocking study claims pollution causes more deaths than war, disease, and drugs combined

(Credit: aappp/Shutterstock)

We often think of war, terrorism, and deadly diseases as the greatest threats to human life. But what if the real danger is something we encounter every day, something that’s in the air we breathe, the water we drink, and even in the noise that surrounds us? A new study published in the Journal of the American College of Cardiology reveals a startling truth: pollution, in all its forms, is now a greater health threat than war, terrorism, malaria, HIV, tuberculosis, drugs, and alcohol combined. Specifically, researchers estimate that manmade pollutants and climate change contribute to a staggering seven million deaths globally each year.

“Every year around 20 million people worldwide die from cardiovascular disease with pollutants playing an ever-increasing role,” explains Professor Jason Kovacic, Director and CEO of the Victor Chang Cardiac Research Institute in Australia, in a media release.

The findings, in a nutshell
The culprits behind this global death toll aren’t just the obvious ones like air pollution from car exhausts or factory chimneys. The study, conducted by researchers from prestigious institutions worldwide, shines a light on lesser-known villains: soil pollution, noise pollution, light pollution, and even exposure to toxic chemicals in our homes.

Think about your daily life. You wake up after a night’s sleep disrupted by the glow of streetlights and the hum of late-night traffic. On your way to work, you’re exposed to car fumes and the blaring horns of impatient drivers. At home, you might be unknowingly using products containing untested chemicals. All these factors, the study suggests, are chipping away at your heart health.

“Pollutants have reached every corner of the globe and are affecting every one of us,” Prof. Kovacic warns. “We are witnessing unprecedented wildfires, soaring temperatures, unacceptable road noise and light pollution in our cities and exposure to untested toxic chemicals in our homes.”

Specifically, researchers estimate that manmade pollutants and climate change contribute to a staggering 7 million deaths globally each year. (© Quality Stock Arts – stock.adobe.com)

How do these pollutants harm our hearts?
Air Pollution: When you inhale smoke from a wildfire or exhaust fumes, these toxins travel deep into your lungs, enter your bloodstream, and then circulate throughout your body. It’s like sending tiny invaders into your system, causing damage wherever they go, including your heart.

Noise and Light Pollution: Ever tried sleeping with a streetlight shining through your window or with noisy neighbors? These disruptions do more than just annoy you—they mess up your sleep patterns. Poor sleep can lead to inflammation in your body, raise your blood pressure, and even cause weight gain. All of these are risk factors for heart disease.

Extreme Heat: Think of your heart as a car engine. On a scorching hot day, your engine works harder to keep cool. Similarly, during a heatwave, your heart has to work overtime. This extra strain, coupled with dehydration and reduced blood volume from sweating, can lead to serious issues like acute kidney failure.

Chemical Exposure: Many household items — from non-stick pans to water-resistant clothing — contain chemicals that haven’t been thoroughly tested for safety. Prof. Kovacic points out, “There are hundreds of thousands of chemicals that haven’t even been tested for their safety or toxicity, let alone their impact on our health.”

The statistics are alarming. Air pollution alone is linked to over seven million premature deaths per year, with more than half due to heart problems. During heatwaves, the risk of heat-related cardiovascular deaths can spike by over 10%. In the U.S., exposure to wildfire smoke has surged by 77% since 2002.

Source: https://studyfinds.org/pollution-causes-more-deaths/

Never-before-seen blue ants discovered in India

In the lush forests of India’s Arunachal Pradesh, a team of intrepid researchers has made a startling discovery: a never-before-seen species of ant that sparkles like a brilliant blue gemstone. The remarkable find marks the first new species of its genus to be identified in India in over 120 years.

Dubbed Paraparatrechina neela, the species was discovered by entomologists Dr. Priyadarsanan Dharma Rajan and Ramakrishnaiah Sahanashree, from the Ashoka Trust for Research in Ecology and the Environment (ATREE) in Bengaluru, along with Aswaj Punnath from the University of Florida. The name “neela” comes from Indian languages, meaning the color blue. And for good reason – this ant sports an eye-catching iridescent blue exoskeleton, unlike anything seen before in its genus.

Paraparatrechina is a widespread group of ants found across Asia, Africa, Australia and the Pacific. They are typically small, measuring just a few millimeters in length. Before this discovery, India was home to only one known species in the genus, Paraparatrechina aseta, which was described way back in 1902.

The researchers collected the dazzling P. neela specimens during an expedition in 2022 to the Siang Valley in the foothills of the Eastern Himalayas. Fittingly, this trip was part of a series called the “Siang Expeditions” – a project aiming to retrace the steps of a historic 1911-12 expedition that documented the region’s biodiversity.

Paraparatrechina neela — the blue ant discovered in India’s Himalayas. (Credit: Sahanashree R)

Over a century later, the area still holds surprises. The team found the ants living in a tree hole in a patch of secondary forest, at an altitude of around 800 meters. After carefully extracting a couple of specimens with an aspirator device, they brought them back to the lab for a closer look under the microscope. Their findings are published in the journal ZooKeys.

Beyond its “captivating metallic-blue color,” a unique combination of physical features distinguishes P. neela from its relatives. The body is largely blue, but the legs and antennae fade to a brownish-white. Compared to the light brown, rectangular head of its closest Indian relative, P. aseta, the sapphire ant has a subtriangular head. It also has one less tooth on its mandibles and a distinctly raised section on its propodeum (the first abdominal segment that’s fused to the thorax).

So what’s behind the blue? While pigments provide color for some creatures, in insects, hues like blue are usually the result of microscopic structural arrangements that reflect light in particular ways. Different layers and shapes of the exoskeleton can interact with light to produce shimmering, iridescent effects. This has evolved independently in many insect groups, but is very rare in ants.

The function of the blue coloration remains a mystery for now. In other animals, such striking hues can serve many possible roles – from communication and camouflage to thermoregulation.

“This vibrant feature raises intriguing questions. Does it help in communication, camouflage, or other ecological interactions? Delving into the evolution of this conspicuous coloration and its connections to elevation and the biology of P. neela presents an exciting avenue for research,” the authors write.

A view of Siang Valley. (Credit: Ranjith AP)

The Eastern Himalayas are known to be a biodiversity hotspot, but remain underexplored by scientists. Finding a new species of ant, in a genus that specializes in tiny, inconspicuous creatures, hints at the many more discoveries that likely await in the region’s forests. Who knows – maybe there are entire rainbow-hued colonies of ants hidden in the treetops!

Source: https://studyfinds.org/blue-ants-discovered/

Prenatal stress hormones may finally explain why infants won’t sleep at night

(Photo by Laura Garcia on Unsplash)

Babies with higher stress hormone levels late in their mother’s pregnancy can end up having trouble falling asleep, researchers explain. The sleep research suggests that measuring cortisol during the third trimester can predict infant sleep patterns up to seven months after a baby’s birth.

Babies often wake up in the middle of the night and have trouble falling asleep. A team from the University of Denver says one possible but unexplored reason for this is how well the baby’s hypothalamic-pituitary-adrenal (HPA) axis is working. The HPA axis is well-known for regulating the stress response and has previously been linked with sleep disorders when it’s not working properly. Cortisol is the end product of the HPA axis.

What is cortisol?

Cortisol is a steroid hormone produced by the adrenal glands, which are located on top of each kidney. It plays a crucial role in several body functions, including:

Regulation of metabolism: Cortisol helps regulate the metabolism of proteins, fats, and carbohydrates, releasing energy and managing how the body uses these macronutrients.

Stress response: Often referred to as the “stress hormone,” cortisol is released in response to stress and low blood-glucose concentration. It helps the body manage and cope with stress by altering immune system responses and suppressing non-essential functions in a fight-or-flight situation.

Anti-inflammatory effects: Cortisol has powerful anti-inflammatory capabilities, helping to reduce inflammation and assist in healing.

Blood pressure regulation: It helps in maintaining blood pressure and cardiovascular function.

Circadian rhythm influence: Cortisol levels fluctuate throughout the day, typically peaking in the morning and gradually falling to their lowest level at night.

Collecting hair samples is one way to measure fetal cortisol levels in the final trimester of pregnancy.

“Although increases in cortisol across pregnancy are normal and important for preparing the fetus for birth, our findings suggest that higher cortisol levels during late pregnancy could predict the infant having trouble falling asleep,” says lead co-author Melissa Nevarez-Brewster in a media release. “We are excited to conduct future studies to better understand this link.”

The team collected hair cortisol samples from 70 infants during the first few days after birth. Approximately 57% of the infants were girls. When each child was seven months old, parents completed a sleep questionnaire including questions such as how long it took on average for the children to fall asleep, how long babies stayed awake at night, and the number of times the infants woke up in the middle of the night. The researchers also collected data on each infant’s gestational age at birth and their family’s income.
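
As a rough illustration of the kind of analysis this design supports (not the study's actual statistical model), one could regress infant sleep outcomes on cortisol while adjusting for the covariates mentioned. All numbers below are fabricated for the sketch.

```python
# Fabricated data; a simple regression as a stand-in for the study's model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 70                                    # 70 infants, as in the study
cortisol = rng.normal(10, 2, size=n)      # hair cortisol, arbitrary units
gest_age = rng.normal(39, 1.5, size=n)    # gestational age at birth, weeks
income = rng.normal(60, 20, size=n)       # family income, in $1,000s (invented)
# Stand-in outcome: minutes to fall asleep, loosely tied to cortisol.
latency = 15 + 1.5 * cortisol + rng.normal(0, 5, size=n)

X = np.column_stack([cortisol, gest_age, income])
model = LinearRegression().fit(X, latency)
print(model.coef_[0])  # a positive coefficient would match the reported link
```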

Source: https://studyfinds.org/prenatal-stress-hormones-may-finally-explain-why-infants-wont-sleep-at-night/

How much stress is too much?

Pedro Figueras / pexels.com

COVID-19 taught most people that the line between tolerable and toxic stress – defined as persistent demands that lead to disease – varies widely. But some people will age faster and die younger from toxic stressors than others.

So, how much stress is too much, and what can you do about it?

I’m a psychiatrist specializing in psychosomatic medicine, which is the study and treatment of people who have physical and mental illnesses. My research is focused on people who have psychological conditions and medical illnesses, as well as those whose stress exacerbates their health issues.

I’ve spent my career studying mind-body questions and training physicians to treat mental illness in primary care settings. My forthcoming book is titled “Toxic Stress: How Stress is Killing Us and What We Can Do About It.”

A 2023 study of stress and aging over the life span – one of the first studies to confirm this piece of common wisdom – found that four measures of stress all speed up the pace of biological aging in midlife. It also found that persistent high stress ages people in a comparable way to the effects of smoking and low socioeconomic status, two well-established risk factors for accelerated aging.

The difference between good stress and the toxic kind

Good stress – a demand or challenge you readily cope with – is good for your health. In fact, the rhythm of these daily challenges, including feeding yourself, cleaning up messes, communicating with one another, and carrying out your job, helps to regulate your stress response system and keep you fit.

Toxic stress, on the other hand, wears down your stress response system in ways that have lasting effects, as psychiatrist and trauma expert Bessel van der Kolk explains in his bestselling book “The Body Keeps the Score.”

The earliest effects of toxic stress are often persistent symptoms such as headache, fatigue, or abdominal pain that interfere with overall functioning. After months of initial symptoms, a full-blown illness with a life of its own – such as migraine headaches, asthma, diabetes, or ulcerative colitis – may surface.

When we are healthy, our stress response systems are like an orchestra of organs that miraculously tune themselves and play in unison without our conscious effort – a process called self-regulation. But when we are sick, some parts of this orchestra struggle to regulate themselves, which causes a cascade of stress-related dysregulation that contributes to other conditions.

For instance, in the case of diabetes, the hormonal system struggles to regulate sugar. With obesity, the metabolic system has a difficult time regulating energy intake and consumption. With depression, the central nervous system develops an imbalance in its circuits and neurotransmitters that makes it difficult to regulate mood, thoughts and behaviors.

‘Treating’ stress
Though stress neuroscience in recent years has given researchers like me new ways to measure and understand stress, you may have noticed that in your doctor’s office, the management of stress isn’t typically part of your treatment plan.

Most doctors don’t assess the contribution of stress to a patient’s common chronic diseases such as diabetes, heart disease, and obesity, partly because stress is complicated to measure and partly because it is difficult to treat. In general, doctors don’t treat what they can’t measure.

Stress neuroscience and epidemiology have also taught researchers recently that the chances of developing serious mental and physical illnesses in midlife rise dramatically when people are exposed to trauma or adverse events, especially during vulnerable periods such as childhood.

Over the past 40 years in the U.S., the alarming rise in rates of diabetes, obesity, depression, PTSD, suicide, and addictions points to one contributing factor that these different illnesses share: toxic stress.

Toxic stress increases the risk for the onset, progression, complications, or early death from these illnesses.

Suffering from toxic stress
Because the definition of toxic stress varies from one person to another, it’s hard to know how many people struggle with it. One starting point is the fact that about 16% of adults report having been exposed to four or more adverse events in childhood. This is the threshold for higher risk for illnesses in adulthood.

Research dating back to before the COVID-19 pandemic also shows that about 19% of adults in the U.S. have four or more chronic illnesses. If you have even one chronic illness, you can imagine how stressful four must be.

And about 12% of the U.S. population lives in poverty, the epitome of a life in which demands exceed resources every day. For instance, if a person doesn’t know how they will get to work each day or doesn’t have a way to fix a leaking water pipe or resolve a conflict with their partner, their stress response system can never rest. One or any combination of threats may keep them on high alert or shut them down in a way that prevents them from trying to cope at all.

Add to these overlapping groups all those who struggle with harassing relationships, homelessness, captivity, severe loneliness, living in high-crime neighborhoods, or working in or around noise or air pollution. It seems conservative to estimate that about 20% of people in the U.S. live with the effects of toxic stress.

Source: https://studyfinds.org/how-much-stress-is-too-much/

Eye Stroke Cases Surge During Heatwave: Symptoms, Prevention Tips

The extreme heat can affect overall health, increasing the risk of heart diseases, brain disorders, and other organ issues.

How to take care of your eyes in summer | Image: Freepik

As heatwaves sweep across various regions, there has been a noticeable increase in eye stroke cases. This condition, also known as retinal artery occlusion, can cause sudden vision loss and is comparable to a brain stroke in its seriousness.

Impact of heatwaves on eye health 

The extreme heat can affect overall health, increasing the risk of heart diseases, brain disorders, and other organ issues. Notably, it can also lead to eye strokes due to dehydration and heightened blood pressure. Dehydration during hot weather makes the blood more prone to clotting, while high temperatures can exacerbate cardiovascular problems, raising the risk of arterial blockages.

Eye stroke

An eye stroke occurs when blood flow to the retina is obstructed, depriving it of oxygen and nutrients. This can cause severe retinal damage in minutes. Dehydration from heatwaves thickens the blood, making clots more likely, while heat stress can worsen cardiovascular conditions, further increasing eye stroke risk.

Signs and symptoms

Sudden Vision Loss: The most common symptom, this can be partial or complete, and typically painless.

Visual Disturbances: Sudden dimming or blurring of vision, where central vision is affected but peripheral vision remains intact.

Preventive measures

Stay Hydrated: Ensure adequate fluid intake to prevent dehydration.

Avoid Peak Sun Hours: Limit exposure to the sun during the hottest parts of the day.

Manage Chronic Conditions: Keep blood pressure and other chronic conditions under control.

Treatment options

Immediate Medical Attention: Urgency is crucial, as delays can lead to permanent vision loss.

Source: https://www.republicworld.com/health/eye-stroke-cases-surge-during-heatwave-symptoms-prevention-tips/?amp=1

5 Hidden Effects Of Childhood Neglect

(Photo by Volurol on Shutterstock)

Trauma, abuse, and neglect — in the current cultural landscape, it’s not hard to find a myriad of discussions on these topics. But with so many people chiming in on the conversation, it’s more important now than ever to listen to what experts on the topic have to say. As we begin to understand more and more about the effects of growing up experiencing trauma and abuse, we also begin to understand that the effects of these experiences are more complex and wide-ranging than we had ever imagined.

Recent studies in the field of childhood trauma and abuse have found that these experiences can affect a wide range of aspects of our adult lives. In fact, even seemingly disparate things, ranging from our stance on vaccinations to the frequency with which we experience headaches to the types of judgments we make about others, are impacted by histories of abuse, trauma, or neglect.

Clearly, the effects of a traumatic childhood go far beyond the time when you are living in an abusive or unhealthy environment. A recent study reports that early childhood traumas can impact health outcomes decades later, potentially following you for the rest of your life. With many new and surprising effects of childhood trauma being discovered every day, it’s no wonder that so many people are interested in what exactly trauma is and how it can affect us.

So, what are the long-term ramifications of childhood neglect? For an answer to that question, StudyFinds sat down with Michael Menard, inventor-turned-author of the upcoming book, “The Kite That Couldn’t Fly: And Other May Avenue Stories,” to discuss the lesser-understood side of trauma and how it can affect us long into our adult lives.

Here is his list of five hidden effects of trauma, and some of them just might surprise you.

1. Unstable Relationships
For individuals with childhood trauma, attachment issues are an often overlooked form of collateral damage. Through infancy and early childhood, a person’s attachment style is developed largely through familial bonds and is then carried into every relationship from platonic peers to romantic partners. When this is lovingly and healthily developed, this is usually a positive thing. But for children and adults with a background of neglect, it often leads to difficulty in finding, developing, and keeping healthy relationships.

As Menard explains it, a childhood spent feeling invisible left scars on his adult relationship patterns. “As a child, I felt that I didn’t exist. No matter what I did, it was not recognized, so there was no reinforcement,” he says. “As a young adult, I panicked when I got ignored. I was afraid that everyone was going to leave. I also felt that I would drive people away in relationships. I would only turn to others when I needed emotional support, never when things were good. When things were good, I could handle them myself. I didn’t need anybody.”

Childhood trauma often creates adults who struggle to be emotionally vulnerable, to process feelings of anger and disappointment, and to accept support from others. And with trust as one of the most vital components of long-term, healthy relationships, it’s clear where difficulty may arise. But Menard emphasizes that a childhood of neglect should not have to mean a lifetime of distant or unstable relationships. “A large percentage of the people that I’ve talked to about struggles in their life, they think it’s life. But we were born to be healthy, happy, prosperous, and anything that is taking away from that is not good,” he says.

“The lesser known [effects] I would say are the things that cause disruption in relationships,” Menard adds. “The divorce rate is about 60%. Where does that come from? It comes from disruption and unhappiness between two people. Lack of respect, love, trust, sacrifice. And if you come into that relationship broken from childhood trauma and you don’t even know it, I’d say that’s not well known.”

2. Physical Health Issues
The most commonly discussed long-term effects of childhood neglect are usually mental and emotional ones. But believe it or not, a background of trauma can actually impact your physical health. From diabetes to cardiac disease, the toll of childhood trauma can turn distinctly physical. “Five of the top 10 diseases that kill us have been scientifically proven to come from childhood trauma,” says Menard. “I’ve got high blood pressure. I go to the doctor, and they can’t figure it out. I have diabetes, hypertension, obesity, cardiac disease, COPD—it’s now known that they have a high probability that they originated from childhood trauma or neglect. Silent killers.”

In some cases, the physical ramifications of childhood trauma may be due to long-term medical neglect. What was once a treatable issue can become a much larger and potentially permanent problem. In Menard’s case, untreated illness in his childhood meant open heart surgery in his adult years. “I’m now 73. When I was 70, my aortic valve closed. I had to have four open heart surgeries in two months — almost died three times,” he explains. “Now, can I blame that on childhood trauma? I can, because I had strep throat repeatedly as a child without medication. One episode turned into rheumatic fever that damaged my aortic valve. 50 years later, I’m having my chest opened up.”

From loss of sleep to chronic pain, the physical manifestations of a neglectful childhood can be painful and difficult. But beyond that, they often go entirely overlooked. For many people, this can feel frustrating and invalidating. For others, they may not know themselves that their emotional pain could be having physical ramifications. As Menard puts it, “things are happening to people that they think [are just] part of life, and [they’re] not.”

3. Mental Health Struggles
Growing up in an abusive or neglectful environment can have a variety of negative effects on children. However, one of the most widely discussed and understood consequences is that of their mental health. “Forty-one percent of all depression in the United States is caused by childhood trauma or comes from childhood trauma,” notes Menard. And this connection between trauma and mental illness goes far beyond just depression. In fact, a recent study found a clear link between experiences of childhood trauma and various mental illnesses including anxiety, depression, and substance use disorders.

Of course, depression and anxiety are also compounded when living in an environment lacking the proper love, support, and encouragement that a child deserves to grow up in. For Menard, growing up in a home with 16 people did little to keep the loneliness at bay. “I just thought it was normal—being left out,” Menard says. “We all need to trust, and we need to rely on people. But if you become an island and self-reliant, not depending on others, you become isolated.”

In some cases, the impact of mental health can also do physical damage. In one example, Menard notes an increased likelihood for eating disorders. “Mine came from not having enough food,” he says. “I get that, but there are all types of eating disorders that come from emotional trauma.”

4. Acting Out

For most children, the model set by the behavior of their parents lays the foundation for their own personal growth and development. However, kids who lack these positive examples of healthy behavior are less likely to develop important traits like empathy, self-control, and responsibility. Menard is acutely aware of this, stating, “Good self-care and self-discipline are taught. It goes down the drain when you experience emotional trauma.” Children who are not given proper role models for behavior will often instead mimic the anger and aggressive behaviors prevalent in emotionally neglectful or abusive households.

“My wife is a school teacher and she could tell immediately through the aggressive behavior of even a first grader that there were multiple problems,” adds Menard. However, his focus is less on pointing fingers at the person displaying these negative behaviors and more on understanding what made them act this way in the first place. “It’s not about what’s wrong with you, it’s about what happened to you.”

For many, however, the negative influence extends beyond simple bad behavior. Menard also describes being taught by his father to steal steaks from the restaurant where he worked as a 12-year-old. This was not only what his father encouraged him to do, but also what seemed completely appropriate to him because of how he had been raised. “I’d bring steaks home for him, and when he got off the factory shift at midnight, that seemed quite okay,” Menard says. “It seemed quite normal. And it’s horrible. Everybody’s searching to try to heal that wound and they don’t know why they’re doing it.”

Source: https://studyfinds.org/5-hidden-effects-of-childhood-neglect/

You won’t believe how fast people adapt to having an extra thumb

The Third Thumb worn by different users (CREDIT: Dani Clode Design / The Plasticity Lab)

Will human evolution eventually give us a sixth finger? If it does, a new study suggests we’ll have no trouble using an extra thumb! It may sound like science fiction, but researchers have shown that people of all ages can quickly learn how to use an extra, robotic third thumb.

The findings, in a nutshell
A team at the University of Cambridge developed a wearable, prosthetic thumb device and had nearly 600 people from diverse backgrounds try it out. The results, published in the journal Science Robotics, were astonishing: 98% of participants could manipulate objects using the third thumb within just one minute of picking it up and receiving brief instructions.

The researchers put people through simple tasks like moving pegs from a board into a basket using only the robotic thumb. They also had people use the device along with their real hand to manipulate oddly-shaped foam objects, testing hand-eye coordination. People, both young and old, performed similarly well on the tasks after just a little practice. This suggests we may be surprisingly adept at integrating robotic extensions into our sense of body movement and control.

While you might expect those with hand-intensive jobs or hobbies to excel, that wasn’t really the case. Almost everyone caught on quickly, regardless of gender, handedness, age, or experience with manual labor. The only groups that did noticeably worse were the very youngest children (under age 10) and the oldest seniors. Even so, the vast majority in those age brackets still managed to use the third thumb effectively with just brief training.

Professor Tamar Makin and designer Dani Clode have been working on Third Thumb for several years. One of their initial tests in 2021 demonstrated that the 3D-printed prosthetic thumb could be a helpful extension of the human hand. In a test with 20 volunteers, it even helped participants complete tasks while blindfolded!

Designer Dani Clode with her ‘Third Thumb’ device. (Credit: Dani Clode)

How did scientists test the third thumb?
For their inclusive study, the Cambridge team recruited 596 participants ranging in age from three to 96. The group comprised an intentionally diverse mix of demographics to ensure the robotic device could be effectively used by all types of people.

The Third Thumb device itself consists of a rigid, controllable robotic digit worn on the opposite side of the hand from the normal thumb. It’s operated by foot sensors – pressing with the right foot pulls the robotic thumb inward across the palm while the left foot pushes it back out toward the fingertips. Releasing foot pressure returns the thumb to its resting position.
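
To make that control scheme concrete, here is a minimal, hypothetical sketch of the foot-to-thumb mapping in Python. It is purely illustrative, not the team’s actual control software, and every name, range, and scale factor in it is an assumption:

    # Hypothetical sketch of the Third Thumb's foot-pressure control scheme,
    # as described above. Not the real firmware; names and scaling are assumed.

    def thumb_position(right_foot: float, left_foot: float) -> float:
        """Map two foot-pressure readings (each clamped to 0..1) to a position:
        +1.0 = fully flexed inward across the palm,
        -1.0 = fully extended out toward the fingertips,
         0.0 = resting position (no pressure on either sensor)."""
        flex = max(0.0, min(1.0, right_foot))    # right foot pulls the thumb inward
        extend = max(0.0, min(1.0, left_foot))   # left foot pushes it back out
        return flex - extend

    print(thumb_position(0.8, 0.0))  # 0.8 -> mostly flexed across the palm
    print(thumb_position(0.0, 0.0))  # 0.0 -> back at rest once pressure is released

The mapping is deliberately simple and proportional, which may help explain why a minute of instruction was enough for most users.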

During testing at a science exhibition, each participant received up to one minute of instructions on how to control the device and perform one of two simple manual tasks. The first had them individually pick up pegs from a board using just the third thumb and drop as many as possible into a basket within 60 seconds. The second required them to manipulate a set of irregularly-shaped foam objects using the robotic thumb in conjunction with their real hand and fingers.

Detailed data was collected on every participant’s age, gender, handedness, and even occupations or hobbies that could point to exceptional manual dexterity. This allowed the researchers to analyze how user traits and backgrounds affected performance with the third thumb device after just a minute’s practice. The striking consistency across demographics points to the device’s intuitive usability.

Source: https://studyfinds.org/people-adapt-to-extra-thumb/

Mysterious layer inside Earth may come from another planet!

3D illustration showing layers of the Earth in space. (© Destina – stock.adobe.com)

From the surface to the inner core, Earth has several layers that continue to be a mystery to science. Now, it turns out one of these layers may consist of material from an entirely different planet!

Deep within our planet lies a mysterious, patchy layer known as the D” layer. Located a staggering 3,000 kilometers (1,860 miles) below the surface, this zone sits just above the boundary separating Earth’s molten outer core from its solid mantle. Far from forming a uniform shell, the D” layer varies drastically in thickness around the globe, and some regions lack it altogether – much like how continents poke through the oceans on Earth’s surface.

These striking variations have long puzzled geophysicists, who describe the D” layer as heterogeneous, meaning non-uniform in its composition. However, a new study might finally shed light on this deep enigma, proposing that the D” layer could be a remnant of another planet that collided with Earth during its early days, billions of years ago.

The findings, in a nutshell
The research, published in National Science Review and led by Dr. Qingyang Hu from the Center for High Pressure Science and Technology Advanced Research and Dr. Jie Deng from Princeton University, draws upon the widely accepted Giant Impact hypothesis. This hypothesis suggests that a Mars-sized object violently collided with the proto-Earth, creating a global ocean of molten rock, or magma, in the aftermath.

Hu and Deng believe the D” layer’s unique composition may be the leftover fallout from this colossal impact, potentially holding valuable clues about our planet’s formation. A key aspect of their theory involves the presence of substantial water within this ancient magma ocean. While the origin of this water remains up for debate, the researchers are focusing on what happened as the molten rock began to cool.

“The prevailing view,” Dr. Deng explains in a media release, “suggests that water would have concentrated towards the bottom of the magma ocean as it cooled. By the final stages, the magma closest to the core could have contained water volumes comparable to Earth’s present-day oceans.”

Is there a hidden ocean inside the Earth?
This water-rich environment at the bottom of the magma ocean would have created extreme pressure and temperature conditions, fostering unique chemical reactions between water and minerals.

“Our research suggests this hydrous magma ocean favored the formation of an iron-rich phase called iron-magnesium peroxide,” Dr. Hu elaborates.

This peroxide, which has a chemical formula of (Fe,Mg)O2, has an even stronger affinity for iron than the other major components expected in the lower mantle.

“According to our calculation, its affinity to iron could have led to the accumulation of iron-dominant peroxide in layers ranging from several to tens of kilometers thick,” Hu explains.

The presence of such an iron-rich peroxide phase would alter the mineral composition of the D” layer, deviating from our current understanding. According to the new model proposed by Hu and Deng, minerals in the D” layer would be dominated by an assemblage of iron-poor silicate, iron-rich (Fe,Mg) peroxide, and iron-poor (Fe,Mg) oxide. Interestingly, this iron-dominant peroxide also possesses unique properties that could explain some of the D” layer’s puzzling geophysical features, such as ultra-low velocity zones and layers of high electrical conductance — both of which contribute to the D” layer’s well-known compositional heterogeneity.

Source: https://studyfinds.org/layer-inside-earth-another-planet/

Targeting ‘monster cells’ may keep cancer from returning after treatment

Targeted Cancer Therapy Illustration (© Riz – stock.adobe.com)

Cancer can sometimes come back, even after undergoing chemotherapy or radiation treatments. Why does this happen? Researchers at the MUSC Hollings Cancer Center may have unlocked part of the mystery. They discovered that cancer cells can transform into monstrous “polyploid giant cancer cells” or PGCCs when under extreme stress from treatment. With that in mind, scientists believe targeting these cells could be the key to preventing recurrences of cancer.

The findings, in a nutshell
Study authors, who published their work in the Journal of Biological Chemistry, found that these bizarre, monster-like cells have multiple nuclei crammed into a single, enlarged cell body. At first, the researchers thought PGCCs were doomed freaks headed for cellular destruction. However, they realized PGCCs could actually spawn new “offspring” cancer cells after the treatment ended. It’s these rapidly dividing daughter cells that likely drive cancer’s resurgence in some patients. Blocking PGCCs from reverting and generating these daughter cells could be the strategy that keeps the disease from returning.

The scientists identified specific genes that cancer cells crank up to become PGCCs as a survival mechanism against harsh therapy. One gene called p21 seems particularly important. In healthy cells it stops DNA replication if damage occurs, but in cancer cells lacking p53 regulation, p21 allows replication of damaged DNA to continue, facilitating PGCC formation.

PGCCs could actually spawn new “offspring” cancer cells after treatments like chemotherapy have ended. (© RFBSIP – stock.adobe.com)

How did scientists make the discovery?
Originally, the Hollings team was studying whether an experimental drug inhibitor could boost cancer cell death when combined with radiation therapy. However, their initial experiments showed no extra killing benefit from the combination treatment. Discouraged, they extended the experiment timeline, and that’s when they noticed something very strange.

While the inhibitor made no difference in the short term, over a longer period, the scientists observed the emergence of bizarre, bloated “monster” cancer cells containing multiple nuclei. At first, they assumed these polyploid giant cancer cells (PGCCs) were doomed mutations that would naturally die off in the patient’s body. Then, researchers saw the PGCCs were generating rapidly dividing offspring cells around themselves, mimicking tumor recurrence.

This made the team rethink the inhibitor’s effects. It didn’t increase cancer cell killing, but it did seem to stop PGCCs from reverting to a state where they could spawn proliferating daughter cells. Blocking this reversion to divisible cells could potentially prevent cancer relapse after treatment.

The researchers analyzed gene expression changes as cancer cells transformed into PGCCs and then back into dividing cells. They identified molecular pathways involved, like p21 overexpression, which allows duplication of damaged DNA. Ultimately, combining their inhibitor with radiation prevented PGCC reversion and daughter cell generation, providing a possible novel strategy against treatment-resistant cancers.

What do the researchers say?
“We initially thought that combination of radiation with the inhibitor killed cancer cells better,” says research leader Christina Voelkel-Johnson, Ph.D., in a media release. “It was only when the inhibitor failed to make a difference in short-term experiments that the time frame was extended, which allowed for an unusual observation.”

Source: https://studyfinds.org/monster-cells-cancer-returning/

Average person wastes more than 2 hours ‘dreamscrolling’ every day!

(Photo by Perfect Wave on Shutterstock)

NEW YORK — The average American spends nearly two and a half hours a day “dreamscrolling” — looking at dream purchases or things they’d like to one day own. While some might think you’re just wasting your day, a whopping 71% say it’s time well spent, as the habit motivates them to reach their financial goals.

In a recent poll of 2,000 U.S. adults, more than two in five respondents say they spend more time dreamscrolling when the economy is uncertain (43%). Over a full year, that amounts to about 873 hours or nearly 36 days spent scrolling.
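
For the curious, the yearly total follows directly from the daily figure (taking “nearly two and a half hours” as about 2.39 hours a day):

\[
2.39\ \text{h/day} \times 365\ \text{days} \approx 873\ \text{hours} = \frac{873}{24}\ \text{days} \approx 36.4\ \text{days}
\]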

Conducted by OnePoll on behalf of financial services company Empower, the survey reveals half of the respondents say they dreamscroll while at work. Of those daydreaming employees, one in five admit to spending between three and four hours a day multitasking while at their job.

Gen Zers spend the most time dreamscrolling at just over three hours per day, while boomers spend the least, clocking in around an hour of fantasy purchases and filling wish lists. Americans say looking at dream purchases makes it easier for them to be smart with their money (56%), avoid making unplanned purchases or going into debt (30%), and better plan to achieve their financial goals (25%).

Nearly seven in 10 see dreamscrolling as an investment in themselves (69%) and an outlet for them to envision what they want out of life (67%). Four in 10 respondents (42%) say they regularly spend time picturing their ideal retirement — including their retirement age, location, and monthly expenses.

A whopping 71% say dreamscrolling is time well spent, as the habit motivates them to reach their financial goals. (© Antonioguillem – stock.adobe.com)

Many respondents are now taking the American dream online, with one in five scrolling through listings of dream homes or apartments. Meanwhile, some are just browsing through vacation destinations (25%), beauty or self-care products (23%), and items for their pets (19%). Many others spend time looking at clothing, shoes, and accessories (49%), gadgets and technology (30%), and home décor or furniture (29%).

More than half (56%) currently have things left open in tabs and windows or saved in shopping carts that they’d like to purchase or own in the future. Those respondents estimate it would cost about $86,593.40 to afford everything they currently have saved.

Almost half of Americans say they are spending more time dreamscrolling now than in previous years (45%), and 56% plan on buying something on their dream list before this year wraps. While 65% are optimistic they’ll be able to one day buy everything on their list, nearly one in four say they don’t think they’ll ever be able to afford the majority of items (23%).

More than half (51%) say owning their dream purchases would make them feel more financially secure, and close to half say working with a financial professional would help them reach their goals (47%). Others feel they have more work to do: 34% say they’ve purchased fewer things on their dream list than they should at their age, with millennials feeling the most behind (39%).

Rising prices (54%), the inability to save money (29%), and growing debt (21%) are the top economic factors that may be holding some Americans back. Far from being doom spending, though, dreamscrolling has had a positive impact on Americans’ money habits: respondents say they better understand their financial goals (24%) as a result.

Source: https://studyfinds.org/shopping-browsing-cant-afford/

Who really was Mona Lisa? 500+ years on, there’s good reason to think we got it wrong

Visitors looking at the Mona Lisa (Credit: pixabay.com)

In the pantheon of Renaissance art, Leonardo da Vinci’s Mona Lisa stands as an unrivalled icon. This half-length portrait is more than just an artistic masterpiece; it embodies the allure of an era marked by unparalleled cultural flourishing.

Yet, beneath the surface of the Mona Lisa’s elusive smile lies a debate that touches the very essence of the Renaissance, its politics and the role of women in history.

A mystery woman

The intrigue of the Mona Lisa, also known as La Gioconda, isn’t solely due to Leonardo’s revolutionary painting techniques. It’s also because the identity of the subject is unconfirmed to this day. More than half a millennium after it was painted, the sitter’s true identity remains one of art’s greatest mysteries, intriguing scholars and enthusiasts alike.

A Mona Lisa painting from the workshop of Leonardo da Vinci, held in the collection of the Museo del Prado in Madrid, Spain. (Credit: Museo del Prado)

The painting has traditionally been associated with Lisa Gherardini, the wife of Florentine silk merchant Francesco del Giocondo. But another compelling theory suggests a different sitter: Isabella of Aragon.

Isabella of Aragon was born into the illustrious House of Aragon in Naples, in 1470. She was a princess who was deeply entwined in the political and cultural fabric of the Renaissance.

Her 1490 marriage to Gian Galeazzo Sforza, Duke of Milan, positioned Isabella at the heart of Italian politics. And this role was both complicated and elevated by the ambitions and machinations of Ludovico Sforza (also called Ludovico il Moro), her husband’s uncle and usurper of the Milanese dukedom.

In The Virgin and Child with Four Saints and Twelve Devotees, by the (unknown) Master of the Pala Sforzesca, circa 1490, Gian Galeazzo Sforza is shown in prayer facing his wife, Isabella of Aragon (identified by her heraldic red and gold). (Credit: National Gallery)

Scholarly perspectives
The theory that Isabella is the real Mona Lisa is supported by a combination of stylistic analyses, historical connections and reinterpretations of Leonardo’s intent as an artist.

In his biography of Leonardo, author Robert Payne points to preliminary studies by the artist that bear a striking resemblance to Isabella at around age 20. Payne suggests Leonardo captured Isabella across different life stages, including during widowhood, as depicted in the Mona Lisa.

U.S. artist Lillian F. Schwartz’s 1988 study used x-rays to reveal an initial sketch of a woman hidden beneath Leonardo’s painting. This sketch was then painted over with Leonardo’s own likeness.

Schwartz believes the woman in the sketch is Isabella, because of its similarity with a cartoon Leonardo made of the princess. She proposes the work was made by integrating specific features of the initial model with Leonardo’s own features.

An illustration of Isabella of Aragon from the Story of Cremona by Antonio Campi. (Credit: Library of Congress)

This hypothesis is further supported by art historians Jerzy Kulski and Maike Vogt-Luerssen.

According to Vogt-Luerssen’s detailed analysis of the Mona Lisa, the symbols of the Sforza house and the depiction of mourning garb both align with Isabella’s known life circumstances. She suggests the Mona Lisa isn’t a commissioned portrait, but a nuanced representation of a woman’s journey through triumph and tragedy.

Similarly, Kulski highlights the portrait’s heraldic designs, which would be atypical for a silk merchant’s wife. He, too, suggests the painting shows Isabella mourning her late husband.

The Mona Lisa’s enigmatic expression also captures Isabella’s self-described state post-1500 of being “alone in misfortune.” Rather than representing a wealthy, recently married woman, the portrait exudes the aura of a virtuous widow.

Late professor of art history Joanna Woods-Marsden suggested the Mona Lisa transcends traditional portraiture and embodies Leonardo’s ideal, rather than being a straightforward commission.

This perspective frames the work as a deeply personal project for Leonardo, possibly signifying a special connection between him and Isabella. Leonardo’s reluctance to part with the work also indicates a deeper, personal investment in it.

Beyond the canvas
The theory that Isabella of Aragon could be the true Mona Lisa is a profound reevaluation of the painting’s context, opening up new avenues through which to appreciate the work.

It elevates Isabella from a figure overshadowed by the men in her life, to a woman of courage and complexity who deserves recognition in her own right.

Source: https://studyfinds.org/who-really-was-mona-lisa-500-years-on-theres-good-reason-to-think-we-got-it-wrong/

Scientists discover what gave birth to Earth’s unbreakable continents

Photo by Brett Zeck from Unsplash

The Earth beneath our feet may feel solid, stable, and seemingly eternal. But the continents we call home are unique among our planetary neighbors, and their formation has long been a mystery to scientists. Now, researchers believe they may have uncovered a crucial piece of the puzzle: the role of ancient weathering in shaping Earth’s “cratons,” the most indestructible parts of our planet’s crust.

Cratons are the old souls of the continents, forming roughly half of Earth’s continental crust. Some date back over three billion years and have remained largely unchanged ever since. They form the stable hearts around which the rest of the continents have grown. For decades, geologists have wondered what makes these regions so resilient, even as the plates shift and collide around them.

It turns out that the key may lie not in the depths of the Earth but on its surface. A new study out of Penn State and published in Nature suggests that subaerial weathering – the breakdown of rocks exposed to air – may have triggered a chain of events that led to the stabilization of cratons billions of years ago, during the Neoarchaean era, around 2.5 to 3 billion years ago.

These ancient metamorphic rocks called gneisses, found on the Arctic Coast, represent the roots of the continents now exposed at the surface. The scientists said sedimentary rocks interlayered in these types of rocks would provide a heat engine for stabilizing the continents. Credit: Jesse Reimink. All Rights Reserved.

To understand how this happened, let’s take a step way back in time. In the Neoarchaean, Earth was a very different place. The atmosphere contained little oxygen, and the continents were mostly submerged beneath a global ocean. But gradually, land began to poke above the waves – a process called continental emergence.

As more rock was exposed to air, weathering rates increased dramatically. When rocks weather, they release their constituent minerals, including radioactive elements like uranium, thorium, and potassium. These heat-producing elements, or HPEs, are crucial because their decay generates heat inside the Earth over billions of years.
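
To put rough numbers on that, geophysicists commonly estimate radiogenic heat production from uranium, thorium, and potassium concentrations with an empirical relation of the following form. This is a generic illustration rather than anything from the study itself; the coefficients are the widely used Rybach (1988) values, and the sample concentrations are assumptions broadly typical of upper continental crust:

    # Generic sketch (not from the study): radiogenic heat production per unit
    # volume from U, Th, and K, using the widely cited Rybach (1988) relation.
    # rho in kg/m^3, U and Th in ppm, K in weight percent; result in microW/m^3.

    def radiogenic_heat(rho: float, u_ppm: float, th_ppm: float, k_wtpct: float) -> float:
        return 1e-5 * rho * (9.52 * u_ppm + 2.56 * th_ppm + 3.48 * k_wtpct)

    # Assumed, broadly typical upper-crust values (for illustration only):
    print(radiogenic_heat(2700.0, 2.8, 10.7, 2.8))  # roughly 1.7 microW/m^3

Concentrating the HPEs into a sediment-derived upper layer, as the study proposes, raises this number near the surface and lowers it at depth, which is exactly the thermal layering the authors argue stabilized the cratons.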

The researchers propose that as the HPEs were liberated by weathering, they were washed into sediments that accumulated in the oceans. Over time, plate tectonic processes would have carried these sediments deep into the crust, where the concentrated HPEs could really make their presence felt.

Buried at depth and heated from within, the sediments would have started to melt. This would have driven what geologists call “crustal differentiation” – the separation of the continental crust into a lighter, HPE-rich upper layer and a denser, HPE-poor lower layer. It’s this layering, the researchers argue, that gave cratons their extraordinary stability.

The upper crust, enriched in HPEs, essentially acted as a thermal blanket, keeping the lower crust and the mantle below relatively cool and strong. This prevented the kind of large-scale deformation and recycling that affected younger parts of the continents.

Interestingly, the timing of craton stabilization around the globe supports this idea. The researchers point out that in many cratons, the appearance of HPE-enriched sedimentary rocks precedes the formation of distinctive Neoarchaean granites – the kinds of rocks that would form from the melting of HPE-rich sediments.

The rocks on the left are old rocks that have been deformed and altered many times, juxtaposed with an Archean granite on the right. The granite is the result of the melting that led to the stabilization of the continental crust. Credit: Matt Scott. All Rights Reserved.

Furthermore, metamorphic rocks – rocks transformed by heat and pressure deep in the crust – also record a history consistent with the model. Many cratons contain granulite terranes, regions of the deep crust uplifted to the surface that formed in the Neoarchaean. These granulites often have compositions that suggest they formed from the melting of sedimentary rocks.

So, the sequence of events – the emergence of continents, increased weathering, burial of HPE-rich sediments, deep crustal melting, and finally, craton stabilization – all seem to line up.

Source: https://studyfinds.org/earths-unbreakable-continents/

The 7 Fastest Animals In The World: Can You Guess Them All?

Cheetah (Photo by David Groves on Unsplash)

Move over Usain Bolt, because in the animal kingdom, speed takes on a whole new meaning! Forget sprinting at a measly 28 mph – these record-breaking creatures can leave you in the dust (or water, or sky) with their mind-blowing velocity. From lightning-fast cheetahs hunting down prey on the African savanna to majestic peregrine falcons diving from incredible heights, these animals rely on their extreme speed to survive and thrive in the wild. So, buckle up as we explore the top seven fastest animals on Earth.

The animal kingdom is brimming with speedsters across different habitats. We’re talking about fish that can zoom by speedboats, birds that plummet from the sky at breakneck speeds, and even insects with lightning-fast reflexes. Below is our list of the consensus top seven fastest animals in the world. We want to hear from you too! Have you ever encountered an animal with incredible speed? Share your stories in the comments below, and let’s celebrate the awe-inspiring power of nature’s speed demons!

The List: Fastest Animals in the World, Per Wildlife Experts

1. Peregrine Falcon – 242 MPH

Peregrine Falcon (Photo by Vincent van Zalinge on Unsplash)

The peregrine falcon takes the title of the fastest animal in the world, able to achieve speeds of 242 miles per hour. These birds don’t break the sound barrier by flapping their wings like crazy. Instead, they use gravity as their accomplice, raves The Wild Life. In the blink of an eye, the falcon can plummet toward its prey like a fighter jet in a vertical dive. These dives can exceed 200 miles per hour, which is the equivalent of a human running at over 380 mph! That’s fast enough to make even the speediest sports car look like a snail.

The prominent bulge on this falcon’s chest isn’t just for show – it’s a keel bone, and it acts like a supercharged engine for the bird’s flight muscles. A bigger keel bone translates to more powerful wing strokes, propelling the falcon forward with incredible force, explains A-Z Animals. These birds also boast incredibly stiff, tightly packed feathers that act like a high-performance suit, reducing drag to an absolute minimum. And the cherry on top? Their lungs and air sacs are designed for one-way airflow, meaning they’re constantly topped up with fresh oxygen, even when exhaling. This ensures they have the fuel they need to maintain their breakneck dives.

These fast falcons might be the ultimate jet setters of the bird world, but they’re not picky about their digs. The sky-dwelling predators are comfortable calling a variety of landscapes home, as long as there’s open space for hunting, writes One Kind Planet. They can be found soaring over marshes, estuaries, and even skyscrapers, always on the lookout for unsuspecting prey.

2. Golden Eagle – 200 MPH

Golden Eagle (Photo by Mark van Jaarsveld on Unsplash)

The golden eagle is a large bird that is well known for its powerful and fast flight. These majestic birds can reach speeds of up to 199 mph during a hunting dive, says List 25. Just like the peregrine falcon, the golden eagle uses a hunting technique called a stoop. With a powerful tuck of its wings, the eagle plummets towards its target in a breathtaking dive.

They are undeniably impressive birds, with a wingspan that can stretch up to eight feet wide! Imagine an athlete being able to run at 179 miles per hour! That’s what a golden eagle achieves in a dive, reaching speeds of up to 87 body lengths per second, mentions The Wild Life. The air rushes past its feathers, creating a whistling sound as it picks up speed, hurtling toward its prey.
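
Those two figures line up if you assume a body length of roughly 0.9 meters, which is typical for a golden eagle (an illustrative assumption; the source doesn’t state the length it used):

\[
87\ \text{body lengths/s} \times 0.92\ \text{m/body length} \approx 80\ \text{m/s} \approx 179\ \text{mph}
\]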

They also use these impressive dives during courtship rituals and even playful moments, states Live Science. Picture two golden eagles soaring in tandem, one diving after the other in a dazzling aerial ballet. It’s a display of both power and grace that reaffirms their status as the ultimate rulers of the skies. Their habitat range stretches across the northern hemisphere, including North America, Europe, Africa, and Asia, according to the International Union for Conservation of Nature (IUCN). So next time you see a golden eagle circling above, remember – it’s more than just a bird, it’s a living embodiment of speed, skill, and breathtaking beauty.

3. Black Marlin – 80 MPH

A Black Marlin jumping out of the water (Photo by Finpat on Shutterstock)

The ocean is a vast and mysterious realm, teeming with incredible creatures. And when it comes to raw speed, the black marlin is a high-performance athlete of the sea. They have a deep, muscular body built for cutting through water with minimal resistance, informs Crosstalk. Think of a sleek racing yacht compared to a clunky rowboat. Plus, their dorsal fin is lower and rounder, acting like a spoiler on a race car, reducing drag and allowing for a smoother ride through the water. Their “spears,” those sharp protrusions on their snouts, are thicker and more robust than other marlins. These aren’t just for show – they’re used to slash and stun prey during a hunt.

Some scientists estimate their burst speed at a respectable 22 mph. That’s impressive, but here’s where the debate gets interesting. Some reports claim black marlin can pull fishing line at a staggering 120 feet per second! When you do the math, that translates to a whopping 82 mph, according to Story Teller. This magnificent fish calls shallow, warm shores home; its ideal habitat boasts water temperatures between 59 and 86 degrees Fahrenheit – basically, a permanent summer vacation!
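
For the record, the fishing-line math checks out (1 mile = 5,280 feet):

\[
120\ \text{ft/s} \times \frac{3600\ \text{s/h}}{5280\ \text{ft/mi}} \approx 81.8\ \text{mph}
\]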

The secret behind its impressive swimming prowess lies in its tail. Unlike the rounded tails of many fish, black marlin possess crescent-shaped tails, explains A-Z Animals. With a powerful flick, they can propel themselves forward with incredible bursts of speed. This marlin also boasts a long, thin, and sharp bill that cuts through water, offering minimal resistance as it surges forward. But that’s not all. Black marlin also have rigid pectoral fins that act like perfectly sculpted wings. These fins aren’t for flapping – they provide stability and lift, allowing the marlin to maintain a streamlined position in the water.

4. Cheetah – 70 MPH

Adult cheetah and cub on green grass during daytime (Photo by Sammy Wong on Unsplash)

The cheetah is Africa’s most endangered large cat and also the world’s fastest land animal. Their bodies are built for pure velocity, with special adaptations that allow them to go from zero to sixty in a mind-blowing three seconds, shares Animals Around The Globe. Each stride stretches an incredible seven meters, eating up the ground with astonishing speed. But they can only maintain their high speeds for short bursts.

Unlike its stockier lion and tiger cousins, the cheetah boasts a lean, streamlined physique that makes it highly aerodynamic. But the real innovation lies in the cheetah’s spine. It’s not just a rigid bone structure – it’s a flexible marvel, raves A-Z Animals. With each powerful push, this springy spine allows the cheetah to extend its strides to incredible lengths, propelling it forward with tremendous force. And finally, we come to the engine room: the cheetah’s muscles. Packed with a high concentration of “fast-twitch fibers,” these muscles are specifically designed for explosive bursts of speed. Think of them as tiny, built-in turbochargers that give the cheetah that extra surge of power when it needs it most.

These magnificent cats haven’t always been confined to the dry, open grasslands of sub-Saharan Africa. Cheetahs were once widespread across both Africa and Asia, but their range has shrunk dramatically due to habitat loss and dwindling prey populations, says One Kind Planet. Today, most cheetahs call protected natural reserves and parks home.

Source: https://studyfinds.org/fastest-animals-in-the-world/
