Inside the attention spans of young kids: Why curiosity is mistaken for lack of focus

(Credit: August de Richelieu from Pexels)

Picture this: You’re playing a game of “Guess Who?” with a five-year-old. You’ve narrowed it down to the character with the red hat, but instead of triumphantly declaring their guess, the child keeps flipping over cards, examining every detail from mustaches to earrings. Frustrating? Maybe. But according to new research, this seemingly inefficient behavior might be a key feature of how young minds learn about the world.

A study published in Psychological Science by researchers at The Ohio State University has shed new light on a longstanding puzzle in child development: Why do young children seem to pay attention to everything, even when it doesn’t help them complete a task? The answer, it turns out, is more complex and fascinating than anyone expected.

For years, scientists have observed that children tend to distribute their attention broadly, taking in information that adults would consider irrelevant or distracting. This “distributed attention” has often been chalked up to immature brain development or a simple lack of focus. But Ohio State psychology professor Vladimir Sloutsky and his team suspected there might be more to the story.

“Children can’t seem to stop themselves from gathering more information than they need to complete a task, even when they know exactly what they need,” Sloutsky explains in a media release.

This over-exploration persists even when children are motivated by rewards to complete tasks quickly.

To investigate this question, Sloutsky and lead author Qianqian Wan designed clever experiments involving four- to six-year-old children and adults. Participants were shown images of cartoon creatures and asked to sort them into two made-up categories called “Hibi” and “Gora.” Each creature had seven features like horns, wings, and tails. Importantly, only one feature perfectly predicted which category the creature belonged to, while the other features were only somewhat helpful for categorizing.

The key twist was that all the features were initially hidden behind “bubbles” on a computer screen. Participants could reveal features one at a time by tapping or clicking on the bubbles. This setup allowed the researchers to see exactly which features people chose to look at before making their category decision.

“Children can’t seem to stop themselves from gathering more information than they need to complete a task, even when they know exactly what they need,” researchers explain. (Credit: Kamaji Ogino from Pexels)

If children’s broad attention was simply due to an inability to filter out distractions, the researchers reasoned that hiding irrelevant features should help them focus only on the most important one. However, that’s not what happened. Even when they quickly figured out which feature was the perfect predictor of category, children – especially younger ones – continued to uncover and examine multiple features on each trial. Adults, on the other hand, quickly zeroed in on the key feature and mostly ignored the rest.
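To make the design concrete, here is a minimal sketch of how a single trial could be simulated, contrasting the adult-like and child-like sampling strategies described above. The feature names, the 90% reliability of the imperfect features, and the function names are illustrative assumptions; the study did not publish its task in this form.

```python
import random

# Illustrative simulation of one trial of the "Hibi"/"Gora" sorting task.
# Assumptions (not from the paper): seven named features, one of which always
# matches the true category, while the others match it only 90% of the time.
FEATURES = ["horns", "wings", "tail", "ears", "feet", "spots", "antennae"]
PERFECT_FEATURE = "tail"    # the single perfectly predictive feature
NOISY_MATCH_RATE = 0.9      # how often the other features agree with the category

def make_creature(category):
    """Map each hidden feature to the category it appears to signal."""
    other = "Gora" if category == "Hibi" else "Hibi"
    return {
        f: category if f == PERFECT_FEATURE or random.random() < NOISY_MATCH_RATE else other
        for f in FEATURES
    }

def focused_strategy(creature):
    """Adult-like: reveal only the perfectly predictive feature, then answer."""
    return creature[PERFECT_FEATURE], 1  # (guess, bubbles opened)

def exploratory_strategy(creature):
    """Child-like: keep opening bubbles even after the answer is already known."""
    order = random.sample(FEATURES, k=len(FEATURES))   # random reveal order
    needed = order.index(PERFECT_FEATURE) + 1          # bubbles opened before the key one
    extra = random.randint(0, len(FEATURES) - needed)  # redundant bubbles opened afterwards
    return creature[PERFECT_FEATURE], needed + extra

creature = make_creature("Hibi")
print(focused_strategy(creature))      # e.g. ('Hibi', 1)
print(exploratory_strategy(creature))  # e.g. ('Hibi', 5)
```

In the study’s terms, adults behaved like the focused strategy once they identified the key feature, while four- and five-year-olds looked more like the exploratory one.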

Interestingly, by age six, children started to show a mix of strategies. About half the six-year-olds behaved more like adults, focusing mostly on the key feature. The other half continued to explore broadly like younger children. This suggests the study may have captured a key transition point in how children learn to focus their attention.

To rule out the possibility that children just enjoyed the action of tapping to reveal features, the researchers ran a second experiment. This time, they gave children the option to either reveal all features at once with one tap or uncover them one by one. Children of all ages strongly preferred the single-tap option, indicating their goal was indeed to gather information rather than simply tapping for fun.

So, why do children persist in this seemingly inefficient exploration? Sloutsky proposes two intriguing possibilities. The first is simple curiosity – an innate drive to learn about the world that overrides task efficiency. The second, which Sloutsky favors, relates to the development of working memory.

“The children learned that one body part will tell them what the creature is, but they may be concerned that they don’t remember correctly. Their working memory is still under development,” Sloutsky suggests. “They want to resolve this uncertainty by continuing to sample, by looking at other body parts to see if they line up with what they think.”

Source: https://studyfinds.org/over-exploring-minds-attention-kids/?nab=0

Just 10 seconds of light exercise boosts brain activity in kids

(Photo by Yan Krukov from Pexels)

What if the secret to unlocking your child’s cognitive potential was as simple as a 10-second stretch? It may sound too good to be true, but a revolutionary study from Japan suggests that brief, light exercises could be the key to boosting brain activity in children, challenging our understanding of the mind-body connection.

The findings, published in Scientific Reports, suggest that these quick, low-intensity activities could be a valuable tool for enhancing cognitive function and potentially improving learning in school settings.

The research, led by Takashi Naito and colleagues, focuses on a part of the brain called the prefrontal cortex (PFC). This area, located at the front of the brain, is crucial for many important mental tasks. It helps us plan, make decisions, control our impulses, and pay attention – all skills that are vital for success in school and life.

As children grow, their prefrontal cortex continues to develop. This means that childhood is a critical time for building strong mental abilities. However, many children today aren’t getting enough physical activity. In fact, a whopping 81% of children worldwide don’t get enough exercise. This lack of movement could potentially hinder their brain development and cognitive skills.

While previous studies have shown that moderate to intense exercise can improve brain function, less was known about the effects of light, easy activities – the kind that could be done quickly in a classroom or during short breaks. This study aimed to fill that gap by examining how simple exercises affect blood flow in the prefrontal cortex of children.

“Our goal is to develop a light-intensity exercise program that is accessible to everyone, aiming to enhance brain function and reduce children’s sedentary behavior,” Naito explains in a statement. “We hope to promote and implement this program in schools through collaborative efforts.”

The researchers recruited 41 children between the ages of 10 and 15 to participate in the study. These kids performed seven different types of light exercises, each lasting either 10 or 20 seconds. The exercises included things like stretching, hand movements, and balancing on one leg – all activities that could be easily done in a classroom without special equipment.

To measure brain activity, the researchers used a technique called functional near-infrared spectroscopy (fNIRS). This non-invasive method uses light to detect changes in blood flow in the brain, which can indicate increased brain activity. The children wore a special headband with sensors while doing the exercises, allowing the researchers to see how their brain activity changed during each movement.
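For readers curious about the measurement itself: fNIRS analyses commonly convert the detected change in light attenuation into changes in oxygenated and deoxygenated hemoglobin using the modified Beer-Lambert law. The relation below is the standard textbook form, not an equation reported in this particular study.

```latex
\Delta A(\lambda) = \left( \varepsilon_{\mathrm{HbO_2}}(\lambda)\,\Delta[\mathrm{HbO_2}]
                  + \varepsilon_{\mathrm{HbR}}(\lambda)\,\Delta[\mathrm{HbR}] \right) d \cdot \mathrm{DPF}(\lambda)
```

Here ΔA is the change in light attenuation at wavelength λ, the ε terms are the extinction coefficients of oxy- and deoxyhemoglobin, d is the distance between the light source and detector on the headband, and DPF is a differential pathlength factor that accounts for light scattering in tissue. A rise in oxygenated hemoglobin over the prefrontal cortex is the signal researchers interpret as increased activity.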

Most of the exercises led to significant increases in blood flow to the prefrontal cortex, suggesting increased brain activity in this important region. Interestingly, not all exercises had the same effect. Simple, static stretches didn’t show much change, but exercises that required more thought or physical effort – like twisting movements, hand exercises, and balancing – showed the biggest increases in brain activity.

These findings suggest that even short bursts of light activity can “wake up” the prefrontal cortex in children. This could potentially lead to improved focus, better decision-making, and enhanced learning abilities. The best part is that these exercises are quick and easy to do, making them perfect for incorporating into a school day or study routine.

Source: https://studyfinds.org/10-seconds-exercise-brain-activity/?nab=0

Tourist dies after ice collapse in Icelandic glacier

An aerial view of the Breidamerkurjökull glacier in 2021

A foreign tourist has died in south Iceland after ice collapsed during a visit their group was making to a glacier, local media report.

A second tourist was injured and taken to hospital, but their life is not in danger, while two others are still missing.

Rescuers have suspended the search for the missing in the Breidamerkurjökull glacier until morning because of difficult conditions.

The ice collapsed on Sunday as the group of 25 people were visiting an ice cave along with a guide.

Emergency crews worked by hand to try to rescue those missing.

First responders received a call just before 15:00 on Sunday about the collapse.

“The conditions are very difficult on the ground,” said local police chief Sveinn Kristján Rúnarsson. “It’s in the glacier. It’s hard to get equipment there… It’s bad. Everything is being done by hand.”

Local news outlets reported that 200 people were working on the rescue operation at one point on Sunday.

Speaking on Icelandic TV, Chief Superintendent Rúnarsson said police had been unable to contact the two missing people.

While the conditions were “difficult”, the weather was “fair”, he said.

Confirming that all those involved were foreign tourists, he said there was nothing to suggest that the trip to the cave should not have taken place.

“Ice cave tours happen almost the whole year,” he said.

“These are experienced and powerful mountain guides who run these trips. It’s always possible to be unlucky. I trust these people to assess the situation – when it’s safe or not safe to go, and good work has been done there over time. This is a living land, so anything can happen.”

The police chief was quoted as saying that people had been standing in a ravine between cave mouths when an ice wall collapsed.

Source : https://www.bbc.com/news/articles/cp8ny80e6lyo

Mental menu: Your food choices may be causing anxiety and depression

(Credit: Prostock-studio/Shutterstock)

The proverbial “sugar high” that follows the ingestion of a sweet treat is a familiar example of the potentially positive effects of food on mood.

On the flip side, feeling “hangry” – the phenomenon where hunger manifests in the form of anger or irritability – illustrates how what we eat or don’t eat can also provoke negative emotions.

The latest research suggests that blood sugar fluctuations are partly responsible for the connection between what we eat and how we feel. Through their effects on our hormones and nervous system, blood sugar fluctuations can fuel anxiety and depression.

Mental health is complex. There are countless social, psychological, and biological factors that ultimately determine any one person’s experience. However, numerous randomized controlled trials have demonstrated that diet is one biological factor that can significantly influence risk for symptoms of depression and anxiety, especially in women.

As a family medicine resident with a Ph.D. in nutrition, I have witnessed the fact that antidepressant medications work for some patients but not others. Thus, in my view, mental health treatment strategies should target every risk factor, including nutrition.

The role of the glycemic index
Many of the randomized controlled trials that have proven the link between diet and mental health have tested the Mediterranean diet or a slightly modified version of it. The Mediterranean diet is typically characterized by lots of vegetables – especially dark green, leafy vegetables – fruit, olive oil, whole grains, legumes and nuts, with small amounts of fish, meat and dairy products. One of the many attributes of the Mediterranean diet that may be responsible for its effect on mood is its low glycemic index.

The glycemic index is a system that ranks foods and diets according to their potential to raise blood sugar. Thus, in keeping with the observation that blood sugar fluctuations affect mood, high glycemic index diets that produce drastic spikes in blood sugar have been associated with increased risk for depression and to some extent anxiety.
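For reference, the glycemic index itself is defined by a simple ratio; this is the standard definition rather than a formula given in the article.

```latex
\mathrm{GI}_{\text{food}} = 100 \times \frac{\mathrm{iAUC}_{\text{test food}}}{\mathrm{iAUC}_{\text{glucose reference}}}
```

Here iAUC is the incremental area under the two-hour blood glucose curve after eating a portion containing 50 grams of available carbohydrate, averaged across test subjects. By convention, foods scoring 70 or above are classed as high glycemic index, and those at 55 or below as low.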

High glycemic index carbohydrates include white rice, white bread, crackers and baked goods. Therefore, diets high in these foods may increase risk for depression and anxiety. Meanwhile, low glycemic index carbs, such as parboiled rice and al dente pasta, that are more slowly absorbed and produce a smaller blood sugar spike are associated with decreased risk.

Diets high in legumes and dark green vegetables produce lower spikes in blood sugar. (Credit: Jacqueline Howell from Pexels)

How diet affects mood

Many scientific mechanisms have been proposed to explain the connection between diet and mental health. One plausible explanation links blood sugar fluctuations with mood through their effect on our hormones.

Every time we eat sugar or carbohydrates such as bread, rice, pasta, potatoes, and crackers, the resulting rise in blood sugar triggers a cascade of hormones and signaling molecules. One example, dopamine – our brain’s pleasure signal – is the reason we can experience a “sugar high” following the consumption of dessert or baked goods. Dopamine is the body’s way of rewarding us for procuring the calories, or energy, necessary for survival.

Insulin is another hormone triggered by carbohydrates and sugar. Insulin’s job is to lower blood sugar levels by escorting the ingested sugar into our cells and tissues so that it can be used for energy. However, when we eat too much sugar, too many carbs, or high glycemic index carbs, the rapid increase in blood sugar prompts a drastic rise in insulin. This can result in blood sugar levels that dip below where they started.

This dip in blood sugar sparks the release of adrenaline and its cousin noradrenaline. Both of these hormones prompt the release of glucose into the bloodstream to restore blood sugar to an appropriate level.

However, adrenaline influences more than just blood sugar levels. It also affects how we feel, and its release can manifest as anxiety, fear, or aggression. Hence, diet affects mood through its effect on blood sugar levels, which trigger the hormones that dictate how we feel.

Interestingly, the rise in adrenaline that follows sugar and carbohydrate consumption doesn’t happen until four to five hours after eating. Thus, when eating sugar and carbs, dopamine makes us feel good in the short term; but in the long term, adrenaline can make us feel bad.

However, not everyone is equally affected. Identical meals can produce widely varying blood sugar responses in different people, depending on one’s sex, as well as genetics, sedentariness, and the gut microbiome.

And it’s important to keep in mind that, as previously mentioned, mental health is complicated. So in certain circumstances, no amount of dietary optimization will overcome the social and psychological factors that may underpin one’s experience.

Nevertheless, a poor diet could certainly make a person’s experience worse and is thus relevant for anyone, especially women, hoping to optimize mental health. Research has shown that women, in particular, are more sensitive to the effects of the glycemic index and diet overall.

Source: https://studyfinds.org/food-choices-anxiety-depression/

As climate change raises global temperatures, a massive chunk of Antarctica’s ice sheet is expected to melt and raise sea levels in the coming decades. Considering that Antarctica’s ice sheet is the largest ice mass on Earth, the rising sea levels would be catastrophic for island nations and populations living near coastlines.

Fortunately, not all hope is lost. A new study published in Science Advances suggests Earth’s natural forces could significantly reduce ice loss, but only if humans reduce carbon emissions.

“With nearly 700 million people living in coastal areas and the potential cost of sea-level rise reaching trillions of dollars by the end of the century, understanding the domino effect of Antarctic ice melt is crucial,” says lead author Natalya Gomez, an associate professor in McGill University’s Department of Earth and Planetary Sciences and Canada Research Chair in Ice sheet-Sea level interactions, in a media release.

While the damage already caused by climate change has made rising sea levels inevitable, minimizing carbon emissions is expected to reduce the harm coastal communities will face.

When ice melts, the weight pressing down on the land beneath it decreases, allowing the land to rise like a decompressing sponge – a phenomenon known as post-glacial uplift. On one hand, post-glacial uplift helps slow ice mass loss: the rebounding land lifts the ice, acting as nature’s brake on the flow of ice from land to ocean. However, the current study found that this uplift would not be enough to slow the rapidly thawing ice if carbon emissions continue.

The ANET-POLENET team flew to remote field sites on Antarctica’s Backer Islands to record bedrock uplift. Ohio State University co-author Terry Wilson is second from the left. (Credit: Nicolas Bayou)

The study authors created a 3D model of Earth’s interior to study how Antarctica’s ice sheet interacts with the land and how carbon emissions influence that relationship. The model included geophysical field measurements from the U.S. ANET-POLENET project, which records any changes in land shifts across Antarctica.

“Our 3-D model peels back Earth’s layers like an onion, revealing dramatic variations in thickness and consistency of the mantle below. This knowledge helps us better predict how different areas will respond to melting,” says study co-author Maryam Yousefi, a geodesist at Natural Resources Canada.

According to the researchers, this is the first model to study in detail the dynamics between Antarctica’s ice sheet and the earth underneath.

The results show that post-glacial uplift decreases Antarctica’s contribution to sea levels by 40%. The study also found that if carbon emissions continue at their current pace, the post-glacial uplift effect would not be enough to slow down rising sea levels.

Source  : https://studyfinds.org/antarctica-rising-as-it-melts/?nab=0

In just 10 minutes, new app gives you a mental health makeover

(Credit: Microgen/Shutterstock)

Just 10 minutes of daily mindfulness practice, delivered through a free smartphone app, could be the key to unlocking a healthier, happier you. It sounds almost too good to be true, but that’s exactly what researchers from the Universities of Bath and Southampton have discovered.

In one of the largest and most diverse studies of its kind, 1,247 adults from 91 countries embarked on a 30-day mindfulness journey using the free Medito app. The results were nothing short of remarkable. Participants who completed the mindfulness program reported a 19.2% greater reduction in depression symptoms compared to the control group. They also experienced a 6.9% greater improvement in well-being and a 12.6% larger decrease in anxiety.

The benefits didn’t stop there. The study, published in the British Journal of Health Psychology, uncovered an intriguing link between mindfulness practice and healthier lifestyle choices. Participants who used the mindfulness app reported more positive attitudes towards health maintenance (7.1% higher than the control group) and stronger intentions to look after their health (6.5% higher). It’s as if the simple act of tuning into the present moment created a ripple effect, influencing not just mental health but also motivating healthier behaviors.

What makes this study particularly exciting is its accessibility. Unlike traditional mindfulness programs that might require significant time commitments or expensive retreats, this intervention was delivered entirely through a free mobile app. Participants, most of whom had no prior mindfulness experience, were asked to complete just 10 minutes of practice daily. The sessions included relaxation exercises, intention-setting, body scans, focused breathing, and self-reflection.

“This study highlights that even short, daily practices of mindfulness can offer benefits, making it a simple yet powerful tool for enhancing mental health,” says Masha Remskar, the lead researcher from the University of Bath, in a media release.

Participants who completed the mindfulness program reported a 19.2% greater reduction in depression symptoms. (Credit: Ground Picture/Shutterstock)

Perhaps even more impressive than the immediate effects were the long-term benefits. In follow-up surveys conducted 30 days after the intervention ended, participants in the mindfulness group continued to report improved well-being, reduced depression symptoms, and better sleep quality compared to the control group.

The study also shed light on why mindfulness might be so effective.

“The research underscores how digital technology – in this case, a freely available app – can help people integrate behavioral and psychological techniques into their lives, in a way that suits them,” notes Dr. Ben Ainsworth from the University of Southampton.

Source : https://studyfinds.org/10-minute-app-mental-health/?nab=0

Wow! Scientists may have finally decoded mysterious signal from space

The “Wow!” signal was originally captured in 1977 by the Ohio State University’s Big Ear radio telescope (Credit: Big Ear Radio Observatory and North American AstroPhysical Observatory)

For nearly half a century, astronomers have been puzzled by a brief and unexplainable radio signal detected in 1977 that seemed to hint at the existence of alien life. Known as the “Wow! Signal,” this tantalizing cosmic transmission has remained one of the most intriguing mysteries in the search for signs of intelligent life in outer space. Now, scientists may finally know where it came from!

A team of researchers may have uncovered a potential astrophysical explanation for the Wow! Signal that could reshape our understanding of this enduring enigma. Their findings, currently available as a preprint on arXiv, suggest the signal may have been the result of a rare and dramatic event involving a burst of energy from a celestial object interacting with clouds of cold hydrogen gas in the Milky Way galaxy.

“Our latest observations, made between February and May 2020, have revealed similar narrowband signals near the hydrogen line, though less intense than the original Wow! Signal,” explains Abel Méndez, lead author of the study from the Planetary Habitability Laboratory at the University of Puerto Rico at Arecibo, in a media release.

“Our study suggests that the Wow! Signal was likely the first recorded instance of maser-like emission of the hydrogen line.”

Cold hydrogen clouds in the galaxy emit faint narrowband radio signals similar to those shown here, detected by the Arecibo Observatory in 2020. A sudden brightening of one of these clouds, triggered by a strong emission from another stellar source, may explain the Wow! Signal. (Credit: University of Puerto Rico at Arecibo)

The Wow! Signal was detected by the Big Ear radio telescope at The Ohio State University on August 15, 1977. It exhibited several intriguing characteristics, including a narrow bandwidth, high signal strength, and a frequency tantalizingly close to the natural radio emission of neutral hydrogen — an element abundant throughout the universe. These properties led many to speculate the signal could be of artificial origin, perhaps a deliberate message from an extraterrestrial intelligence.

This passing burst of activity in space led Dr. Jerry Ehman to famously write “Wow!” next to the print-out of the signal, which was like nothing else astronomers were seeing in space at the time. However, the signal was never detected again, despite numerous attempts to locate its source over the ensuing decades.

This has posed a major challenge for the SETI (Search for Extraterrestrial Intelligence) community, as repetition is considered essential for verifying the authenticity of a potential extraterrestrial signal — also known as a technosignature.

This new study, however, is pushing the conversation away from an alien radio transmission and closer to a once-in-a-lifetime natural occurrence in deep space. The researchers’ key insight stems from observations made using the now-decommissioned Arecibo Observatory in Puerto Rico, one of the world’s most powerful radio telescopes until its collapse in 2020.

For now, the Wow! Signal remains shrouded in mystery, but there is now at least a plausible explanation for its existence — one that does not involve aliens.

Source: https://studyfinds.org/wow-signal-decoded/?nab=0

Do ambitious people really make the best leaders? New study raises doubts

(Credit: fizkes/Shutterstock)

Leadership is a critical component in every aspect of human activity, from business and education to government and healthcare. We often assume that those who aspire to leadership positions are the most qualified for the job. However, a new study challenges this assumption, revealing a striking disconnect between ambition and actual leadership effectiveness.

The study, conducted by researchers Shilaan Alzahawi, Emily S. Reit, and Francis J. Flynn from Stanford University’s Graduate School of Business, explores the relationship between ambition and leadership evaluations. Their findings suggest that while ambitious individuals are more likely to pursue and obtain leadership roles, they may not necessarily be more effective leaders than their less ambitious counterparts.

At the heart of this research is the concept of ambition, defined as a persistent striving for success, attainment, and accomplishment. Ambitious individuals are typically drawn to leadership positions, motivated by the promise of power, status, and financial rewards. However, the study, published in PNAS Nexus, raises an important question: Does this ambition translate into better leadership skills?

To investigate this question, the researchers conducted a large-scale study involving 472 executives enrolled in a leadership development program. These executives were evaluated on 10 leadership competencies by their peers, subordinates, managers, and themselves. In total, the study analyzed 3,830 ratings, providing a comprehensive view of each leader’s effectiveness from multiple perspectives.

Perhaps the most thought-provoking finding of the study is the significant discrepancy between how ambitious leaders view themselves and how others perceive them. Highly ambitious individuals consistently rated themselves as more effective leaders across various competencies. However, this positive self-assessment was not corroborated by the evaluations from their peers, subordinates, or managers.

For instance, ambitious leaders believed they were better at motivating others, managing collaborative work, and coaching and developing people. They also thought they had a stronger growth orientation and were more accountable for results. Yet, their colleagues and subordinates did not observe these superior abilities in practice.

While ambitious individuals are more likely to pursue and obtain leadership roles, they may not necessarily be more effective leaders than their less ambitious counterparts. (Credit: fauxels from Pexels)

This disconnect between self-perception and reality has significant implications for how we select and develop leaders. Many organizations rely on self-selection processes, where individuals actively choose to be considered for leadership roles. The assumption is that those who step forward are the most capable candidates. However, this study suggests that such an approach may be flawed, potentially promoting individuals based on their ambition rather than their actual leadership skills.

The researchers propose that ambitious individuals may be drawn to leadership roles for reasons unrelated to their aptitude. The allure of higher salaries, greater authority, and increased social status may drive them to pursue these positions, regardless of their actual leadership capabilities. To justify this pursuit, ambitious individuals may unconsciously inflate their self-perceptions of leadership effectiveness.

This phenomenon aligns with psychological concepts such as motivated reasoning and cognitive dissonance. Essentially, people tend to interpret information in a way that confirms their existing beliefs or desires. In this case, ambitious individuals may convince themselves of their superior leadership skills to justify their pursuit of higher positions.

Organizations and individuals may need to rethink their approach to leadership selection and development. Rather than relying solely on self-selection and ambitious individuals dominating candidate pools, companies might benefit from actively identifying and encouraging individuals who possess leadership potential but may lack the confidence or ambition to pursue such roles.

Moreover, the research highlights the importance of gathering diverse perspectives when evaluating leadership effectiveness. Relying solely on self-assessments or the opinions of a single group (e.g., only peers or only subordinates) may provide an incomplete or biased picture of a leader’s true capabilities.

This study urges us to look beyond ambition when selecting and developing leaders. By focusing on actual leadership skills rather than mere drive for power, we can cultivate leaders who are truly capable of guiding us through the challenges of the 21st century.

Source: https://studyfinds.org/the-ambitious-leaders-dilemma/?nab=0

Sea snail’s deadly venom may hold the key to a diabetes cure

A freshly-collected batch of venomous cone snails. (Credit: Safavi Lab)

In the vast, mysterious depths of the ocean, where some of the planet’s deadliest creatures reside, scientists have discovered an unexpected ally in the fight against diabetes and hormone disorders. A new study finds that the geography cone, a venomous marine snail known for its lethal sting, harbors a powerful secret: a toxin that could revolutionize the way we treat certain diseases.

The geography cone (Conus geographus) isn’t your typical predator. Instead of using brute force to capture its prey, it employs a more insidious method — a cocktail of venomous toxins that disrupt the bodily functions of its victims, leaving them helpless and easy to consume. However, within this deadly arsenal lies a remarkable substance, one that mimics a human hormone and holds the potential to create groundbreaking medications.

Publishing their work in the journal Nature Communications, scientists from the University of Utah and their international collaborators have identified a component in the snail’s venom that acts like somatostatin, a human hormone responsible for regulating blood sugar and various other bodily processes. What’s truly astonishing is that this snail-produced toxin, known as consomatin, doesn’t just mimic the hormone — it surpasses it in stability and specificity, making it an extraordinary candidate for drug development.

How can a deadly venom become a life-saving drug?
Somatostatin in humans serves as a kind of master regulator, ensuring that levels of blood sugar, hormones, and other critical molecules don’t spiral out of control. However, consomatin, the snail’s version of this hormone, has some unique advantages. Unlike human somatostatin, which interacts with multiple proteins in the body, consomatin targets just one specific protein with pinpoint accuracy. This precise targeting means that consomatin could potentially be used to regulate blood sugar and hormone levels with fewer side-effects than existing medications.

Consomatin is also more stable than the human hormone, lasting longer in the body due to the presence of an unusual amino acid that makes it resistant to breakdown. For pharmaceutical researchers, this feature is a goldmine — it could lead to the development of drugs that offer longer-lasting benefits to patients, reducing the frequency of doses and improving overall treatment outcomes.

Ho Yan Yeung, PhD, first author on the study (left), and Thomas Koch, PhD, also an author on the study (right), examine a freshly-collected batch of cone snails. (Credit: Safavi Lab)

While it may seem counterintuitive to look to venom for inspiration in drug development, this approach is proving to be incredibly fruitful. As Dr. Helena Safavi, an associate professor of biochemistry at the University of Utah and the senior author of the study, explains, venomous animals like the geography cone have had millions of years to fine-tune their toxins to target specific molecules in their prey. This evolutionary precision is exactly what makes these toxins so valuable in the search for new medicines.

“Venomous animals have, through evolution, fine-tuned venom components to hit a particular target in the prey and disrupt it,” says Safavi in a media release. “If you take one individual component out of the venom mixture and look at how it disrupts normal physiology, that pathway is often really relevant in disease.”

In other words, nature’s own designs can offer shortcuts to discovering new therapeutic pathways.

In its natural environment, consomatin works alongside another toxin in the cone snail’s venom, which mimics insulin, to drastically lower the blood sugar of the snail’s prey. This one-two punch leaves the fish in a near-comatose state, unable to escape the snail’s deadly grasp. By studying consomatin and its insulin-like partner, researchers believe they can uncover new ways to control blood sugar levels in humans, potentially leading to better treatments for diabetes.

“We think the cone snail developed this highly selective toxin to work together with the insulin-like toxin to bring down blood glucose to a really low level,” explains Ho Yan Yeung, a postdoctoral researcher in biochemistry at the University of Utah and the study’s first author.

What’s even more exciting is the possibility that the cone snail’s venom contains additional yet undiscovered toxins that also regulate blood sugar.

“It means that there might not only be insulin and somatostatin-like toxins in the venom,” Yeung adds. “There could potentially be other toxins that have glucose-regulating properties too.”

Source: https://studyfinds.org/sea-snail-venom-diabetes/?nab=0

Franchise Faces: The Most Iconic Fast Food Mascots of All Time

Step right up, folks, and feast your eyes on the colorful cast of characters that have been tempting our taste buds and raiding our wallets for decades! We’re talking about those lovable (and sometimes slightly unnerving) fast food mascots that are as much a part of our culture as the greasy, delicious food they’re hawking. From the golden arches of McDonald’s to the finger-lickin’ goodness of KFC, these animated pitchmen have wormed their way into our hearts faster than you can say “supersize me.” They’ve made us laugh, occasionally made us cringe, and more often than not, made us inexplicably crave a burger at 2 AM. So, grab your favorite value meal and get ready for a nostalgic trip down fast food memory lane as we rank the best fast food mascots. Trust us, this list is more stacked than a triple-decker burger!

If fast food mascots feel like old friends, you aren’t alone. That’s why we’ve put together a list of the best fast food mascots, drawn from the rankings of 10 expert websites. Did your favorite make our list? As always, we’d like to see your own recommendations in the comments below!

The Consensus Best Fast Food Mascots, Ranked
1. Colonel Sanders – KFC
Who doesn’t love a heaping bucket of fried chicken? “One of the most popular and recognizable fast food mascots is KFC’s Colonel Sanders. Not only is this a mascot and symbol for the brand, it directly represents the founder of Kentucky Fried Chicken — Colonel Harland David Sanders,” notes Restaurant Clicks.

What makes him stand out? “As a character, Colonel Sanders is a lovable, sweet old man with plenty of personal ties to KFC. He’s often portrayed by comedians, which gives the brand plenty of room to create funny and innovative commercials,” adds Ranker.

“Dressed in a white suit and black bow tie, accessorized with glasses and a cane, the Colonel’s image has become synonymous with the brand’s finger-licking good fried chicken. His face, etched in the memories of countless fried chicken fans, carries an aura of professionalism, quality, and trustworthiness,” suggests Sixstoreys.

2. Ronald McDonald – McDonald’s
One of the most recognizable fast food mascots, Ronald McDonald even has his own balloon in the Macy’s Thanksgiving Day parade. The mascot “was first introduced to audiences in 1963, when actor Willard Scott (who played the immensely popular Bozo the Clown at the time) took on the persona of the red-haired clown for three TV ads promoting McDonald’s. He was referred to as ‘Ronald McDonald – the hamburger-happy clown’ and sported a drink cup on his nose as well as a food tray as a hat,” according to Lovefood.com.

Ronald is the perfect combination of fun and odd for a mascot. “He has Wendy’s red hair, The King’s freaky appearance, and the Colonel’s kindly character. Put it all together and you have a master of the mascots,” adds WatchMojo.

Thrillist writes: “Ronald is without a doubt the most polemic fast-food mascot. He’s friendly and instantly recognizable, but he’s also a clown. Most normal people are terrified by clowns regardless of nostalgia, so whether he reminds you of Saturday mornings spent watching cartoons and eating Happy Meals or the scariest moments of Stephen King’s ‘It’ is all on you.”

3. The King – Burger King
Who remembers going into Burger King as a kid and getting one of those paper crowns? “The first iteration of the Burger King was an unsuspecting fellow with a lopsided crown sitting atop his burger throne, cradling a soda. Today, he’s a life-size dude with a massive plastic head. He’s always smiling, giving him an almost menacing air — he might be outside your bedroom window right now,” points out The Daily Meal.

You know who we are talking about. “That unsettling-yet-unforgettable maniacal grin has been producing nightmares across the U.S. since 2004, when the current, plastic-costumed incarnation was introduced to the world,” says Mashed.

Restaurant Clicks writes: “Sometimes creepy and odd is what restaurants need to make people pay attention. It’s also fitting that he’s wearing a paper crown, similar to the ones kids can get in-store.”

I had to ask my 9-year-old if she thought The King was creepy. Her response? “A little, but I like him.”

4. Wendy – Wendy’s

Consider Wendy’s founder Dave Thomas the ultimate girl dad. His daughter, Melinda, was the inspiration behind the smiling, freckled, red-headed girl that the fast food chain still embraces.

You don’t think of Wendy’s without conjuring up an image of this red-haired sweetheart. “She’s been the primary logo of Wendy’s since the beginning and her image is irrevocably tied to the restaurant chain. Her personality is a central part of the fast food chain – that of a sweet young girl with plenty of pep and enthusiasm. Plus, her association with her father gives the brand a family feel, even though it has grown into a huge corporation,” notes Ranker.

Sixstoreys adds, “the character has remained a consistent symbol of the all-American, wholesome cuisine that Wendy’s seeks to provide. Her warm and approachable demeanor instantly evokes a sense of familiarity and family, resonating with customers who appreciate the brand’s commitment to quality, freshness, and friendliness.”

“She isn’t animatronic, she doesn’t have any particular peculiarities, but she is one of the most famous faces in all of fast food,” points out WatchMojo.

5. Jack Box – Jack in the Box
Rounding out our top five is Jack Box, from (you guessed it) Jack in the Box. “An adaptation of the fast food chain’s original clown head mascot, the geometrical character has become a classic American mascot. The franchise has employed Jack in its advertising since 1994 – part of a larger rebranding effort after a 1993 food contamination scandal,” according to The Drum.

Source: https://studyfinds.org/best-fast-food-mascots/?nab=0

Gen Z blames social media for ruining their mental health — but no one’s signing off

(Photo by DimaBerlin on Shutterstock)

Three in four Gen Z Americans blame social media for having a negative impact on their mental health.

The survey, commissioned by LG Electronics and conducted by Talker Research, offers compelling insights into the digital habits and emotional responses of 2,000 Gen Z social media users. In a startling revelation, 20% of Gen Zers cite Instagram and TikTok as detrimental to their well-being, followed by Facebook at 13%.

Despite these concerns, social media remains an integral part of Gen Z’s daily life. The average user spends a whopping five-and-a-half hours per day on social media apps, with 45% believing they outpace their friends in usage time. Boredom (66%), seeking laughter (59%), staying informed (49%), and keeping tabs on friends (44%) are the primary motivators for their online engagement.

However, this digital immersion comes at a cost. Nearly half of those polled (49%) report experiencing negative emotions from social media use, with stress and anxiety affecting 30% of respondents. Even more alarming, those who experience these negative feelings say it takes only 38 minutes of scrolling before their mood begins to sour.

“We spend a significant portion of our lives online and often these experiences may leave us feeling drained and not mentally stimulated,” says Louis Giagrande, head of U.S. marketing at LG Electronics, in a statement. “We encourage everyone to be more conscious about the social media content they choose to engage with, bringing stronger balance, inspiration, and happiness to their lives. If we focus on optimism, we will be better equipped to deal with life’s challenges and build a happier life.”

The study also uncovered a desire for change among Gen Z users. In fact, 62% wish they could “reset” their social media feeds and start anew. Over half (53%) express frustration with content misalignment, feeling that their feeds don’t reflect their interests. Moreover, 54% believe they have limited or no control over the content populating their feeds, with only 16% claiming total control.

Yet, it’s not all doom and gloom. Four in five respondents (80%) associate social media with positive impacts on their mood. Comedy (65%), animal content (48%), beauty-related posts (40%), and prank videos (34%) are among the top mood boosters. Two-thirds of users say that social media has turned a bad day into a good one, and 44% believe it positively impacts their outlook on life.

Source: https://studyfinds.org/gen-z-blames-social-media-for-ruining-their-mental-health-but-no-ones-signing-off/?nab=0

The superstorms from space that could end modern life

A sudden solar superstorm is thought to be behind a devastating bombardment of high-energy particles around 14,000 years ago (Credit: Nasa)

The Sun is going through a period of high activity, but it is nothing compared to an enormous solar event that slammed into our planet 14,000 years ago. If one were to occur today, the effect on Earth could be devastating.

The oldest trees on Earth date back a whopping 5,000 years, living through all manner of events. They have stood through the rise and fall of the Roman Empire, the birth of Christianity, the European discovery of the Americas and the first Moon landing. Trees can even be fossilised in soil underground, giving us a connection to the last 30,000 years.

At first glance, these long-lived specimens might just appear to be static observers, but not so. They are doing something extraordinary as they grow – recording the activity of our Sun.

As trees photosynthesise throughout the year, they change in colouration depending on the season, appearing lighter in spring and darker by autumn. The result is a year-on-year record contained within the growth “rings” of the tree. “This gives us this really valuable archive of time capsules,” says Charlotte Pearson, a dendrochronologist – someone who studies tree rings – at the Laboratory of Tree-Ring Research at the University of Arizona, US.

For most of the past century, dendrochronologists have largely used tree rings to investigate change across wide chunks of history – a decade or more. Yet at certain points in time, the change they document has been more sudden and cataclysmic. What they are finding evidence of are massive solar events that reveal disturbing insights into the turbulent recent past of the star at the centre of our Solar System.

“Nobody was expecting a brief event to appear,” says Edouard Bard, a climatologist at the College de France in Paris. But in 2012 a then-PhD student called Fusa Miyake, now a cosmic ray physicist at Nagoya University in Japan, made an astonishing discovery. Studying Japanese cedar trees, she discovered a huge spike in a type of carbon known as carbon-14 in a single year some 1,250 years ago, in 774 CE. “I was so excited,” says Miyake.

After doubting the data at first, Miyake and her colleagues soon came to an unnerving conclusion. The spike in carbon-14 must have come from something injecting huge numbers of particles into our atmosphere, since this radioactive isotope of carbon is produced when high-energy particles strike nitrogen in the atmosphere. Such spikes were once tentatively linked to cosmic events like supernovae, but studies have since suggested another probable cause: a monster burst of particles thrown out by the Sun. These would be generated by superflares, far bigger than anything seen in the modern era.
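The production pathway behind that reasoning is well established: energetic particles striking the upper atmosphere generate secondary neutrons, which transmute nitrogen into radiocarbon.

```latex
n + {}^{14}\mathrm{N} \;\longrightarrow\; {}^{14}\mathrm{C} + p
```

The newly formed carbon-14 oxidizes to carbon dioxide, is taken up by trees during photosynthesis, and is locked into that year’s growth ring – which is why a single-year spike can show up in the tree-ring record.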

“They require an event that’s at least ten times bigger than anything we’ve observed,” says Mathew Owens, a space physicist at the University of Reading in the UK. The first recorded solar flare sighting dates back to the middle of the 19th Century and is associated with the great geomagnetic storm of 1859, which has become known as the Carrington Event, after one of the astronomers who observed it, Richard Carrington.

Spikes in the level of carbon-14 isotope in tree rings have revealed past spikes in high-energy particles bombarding the Earth (Credit: Getty Images)

Miyake’s discovery was confirmed by other studies of tree rings and analysis of ancient ice in cores collected from places such as Antarctica and Greenland. The latter contained correlated signatures of beryllium-10 and chlorine-36, which are produced in a similar atmospheric process to carbon-14. Since then, more Miyake events, as these massive bursts of cosmic radiation and particles are now known, have been unearthed. In total, seven well-studied events are known to have occurred over the past 15,000 years, while there are several other spikes in carbon-14 that have yet to be confirmed as Miyake events.

The most recent occurred just over 1,000 years ago in 993 CE. Researchers believe these events occur rarely – but at somewhat regular intervals, perhaps every 400 to 2,400 years.

The most powerful known Miyake event was discovered as recently as 2023 when Bard and his colleagues announced the discovery of a carbon-14 spike in fossilised Scots pine trees in Southern France dating back 14,300 years. The spike they saw was twice as powerful as any Miyake event seen before, suggesting these already-suspected monster events could be even bigger than previously thought.

The team behind the discovery of this superstorm from space had scoured the Southern French Alps for fossilised trees and found some that had been exposed by rivers. Using a chainsaw, they collected samples and examined them back in a laboratory, discovering evidence for an enormous carbon-14 spike. “We dreamed of finding a new Miyake event, and we were very, very happy to find this,” says Cécile Miramont, a dendrochronologist at Aix-Marseille University in France and a co-author on the study.

Source : https://www.bbc.com/future/article/20240815-miyake-events-the-giant-solar-superstorms-that-could-rock-earth

South American lungfish has largest genome of any animal

A South American lungfish, whose scientific name is Lepidosiren paradoxa, is seen at a laboratory at Louisiana State University in Baton Rouge, Louisiana, U.S., March 18, 2024. Katherine Seghers, Louisiana State University/Handout via REUTERS/File Photo

The South American lungfish is an extraordinary creature – in some sense, a living fossil. Inhabiting slow-moving and stagnant waters in Brazil, Argentina, Peru, Colombia, Venezuela, French Guiana and Paraguay, it is the nearest living relative to the first land vertebrates and closely resembles its primordial ancestors dating back more than 400 million years.

This freshwater species, called Lepidosiren paradoxa, also has another distinction: the largest genome – all the genetic information of an organism – of any animal on Earth. Scientists have now sequenced its genome, finding it to be about 30 times the size of the human genetic blueprint.

The metric for genome size was the number of base pairs, the fundamental units of DNA, in an organism’s cellular nuclei. If stretched out like yarn unwound from a ball, the DNA in each cell of this lungfish would extend almost 200 feet (60 meters). The human genome would extend a mere 6-1/2 feet (2 meters).
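The back-of-the-envelope arithmetic behind those lengths is straightforward. The sketch below uses round-number assumptions – about 0.34 nanometers per base pair, a haploid human genome of roughly 3.1 billion base pairs, a lungfish genome about 30 times larger, and two genome copies per cell – rather than values quoted from the paper.

```python
# Rough check of the "stretched-out DNA" lengths quoted above.
# Assumed round numbers (not taken from the Nature paper):
BP_LENGTH_M = 0.34e-9          # ~0.34 nm between base pairs in B-form DNA
HUMAN_BP = 3.1e9               # ~3.1 billion base pairs per haploid human genome
LUNGFISH_BP = 30 * HUMAN_BP    # article: about 30 times the human genome
COPIES_PER_CELL = 2            # a diploid cell carries two copies of the genome

def stretched_length_m(base_pairs):
    """Total DNA length per cell, in meters, if laid end to end."""
    return base_pairs * BP_LENGTH_M * COPIES_PER_CELL

print(f"human:    {stretched_length_m(HUMAN_BP):.1f} m")     # ~2.1 m  (~6.5 ft)
print(f"lungfish: {stretched_length_m(LUNGFISH_BP):.0f} m")   # ~63 m   (~200 ft)
```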

“Our analyses revealed that the South American lungfish genome grew massively during the past 100 million years, adding the equivalent of one human genome every 10 million years,” said evolutionary biologist Igor Schneider of Louisiana State University, one of the authors of the study published this week in the journal Nature.

In fact, 18 of the 19 South American lungfish chromosomes – the threadlike structures that carry an organism’s genomic information – are each individually larger than the entire human genome, Schneider said.

Huge as the lungfish genome is, there are plants whose genomes are larger. The current record holder is a fork fern species, called Tmesipteris oblanceolata, found in the French overseas territory of New Caledonia in the Pacific. Its genome is more than 50 times the human genome’s size.

Until now, the largest-known animal genome was that of another lungfish, the Australian lungfish, Neoceratodus forsteri. The South American lungfish’s genome is more than twice as big. The world’s four other lungfish species, which also have large genomes, live in Africa.

Lungfish genomes are largely composed of repetitive elements – about 90% of the genome. The researchers said the massive genome expansion documented in lungfish genomes seems to be related to a reduction in these species of a mechanism that ordinarily suppresses such genomic repetition.

“Animal genome sizes vary greatly, but the significance and causes of genome size variation remain unclear. Our study advances our understanding of genome biology and structure by identifying mechanisms that control genome size while maintaining chromosome stability,” Schneider said.

The South American lungfish reaches up to about 4 feet (1.25 meters) long. While other fish rely upon gills to breathe, lungfish also possess a pair of lung-like organs. It lives in oxygen-starved, swampy environs of the Amazon and Parana-Paraguay River basins, supplementing the oxygen obtained from the water by breathing air.

Lungfish first appeared during the Devonian Period. It was during the Devonian that one of the most important moments in the history of life on Earth occurred – when fish possessing lungs and muscular fins evolved into the first tetrapods, the four-limbed land vertebrates that now include amphibians, reptiles, birds and mammals.

Source : https://www.reuters.com/science/south-american-lungfish-has-largest-genome-any-animal-2024-08-16

Why are more young adults getting colorectal cancer? The answer may be their diet

3D Rendered Medical Illustration of Male Anatomy showing Colorectal Cancer (© SciePro – stock.adobe.com)

Colorectal cancer rates are rising at an alarming rate among young adults, but the reason behind the increased diagnoses has been a medical mystery. However, the Cleveland Clinic has released a study that pinpoints a major cause for the spike in cases: diet.

When looking at the microbiomes of adults 60 years and younger with colorectal cancer, researchers found an unusually high level of diet-derived molecules called metabolites. The metabolites involved in colorectal cancer usually come from eating red and processed meat.

“Researchers—ourselves included—have begun to focus on the gut microbiome as a primary contributor to colon cancer risk. But our data clearly shows that the main driver is diet,” says Dr. Naseer Sangwan, a director at the Microbial Sequencing & Analytics Resource Core at the Cleveland Clinic and study co-author, in a media release. “We already know the main metabolites associated with young-onset risk, so we can now move our research forward in the correct direction.”

The study is published in the journal npj Precision Oncology.

This is a map of colorectal cancer hotspots in the United States. (Image credit: Rogers et al. American Journal of Cancer Research)

The researchers created an artificial intelligence algorithm to examine a wide range of datasets in published studies to determine what factors contributed most to colorectal cancer risk. One crucial area to explore was the gut microbiome. Previous research showed significant differences in gut composition between younger and older adults with colorectal cancer.

One of the most striking differences between younger and older adults with colorectal cancer was in diet, reflected in the types of metabolites present in the gut microbiome. Younger people showed higher levels of metabolites involved in producing and metabolizing an amino acid called arginine, along with metabolites involved in the urea cycle.

According to the authors, these metabolites likely result from overeating red meat and processed foods. They are currently examining national datasets to confirm their findings.

Of the two, it is much simpler to change a person’s diet than to completely reset their microbiome. The findings suggest that eating less red and processed meat could lower a person’s risk of colorectal cancer.

“Even though I knew before this study that diet is an important factor in colon cancer risk, I didn’t always discuss it with my patients during their first visit. There is so much going on, it can already be so overwhelming,” says Dr. Suneel Kamath, a gastrointestinal oncologist at the Cleveland Clinic and senior author of the study. “Now, I always make sure to bring it up to my patients, and to any healthy friends or family members they may come in with, to try and equip them with the tools they need to make informed choices about their lifestyle.”

Making healthier dietary choices is also a more accessible method for preventing colorectal cancer. While screening is an important tool, Dr. Kamath notes it is impractical for doctors to give every person in the world a colonoscopy. In the future, simple tests that count specific metabolites as a marker for colorectal cancer risk may help with increased monitoring. On the research side, the authors plan to test whether particular diets or drugs involved in regulating arginine production and the urea cycle can help prevent or treat colorectal cancer in young adults.

Source: https://studyfinds.org/colorectal-cancer-diet/?nab=0

Shocking brain scans reveal consciousness remains among vegetative patients

(© Photographee.eu – stock.adobe.com)

For years, families of brain-injured patients have insisted their unresponsive loved ones were still “in there.” Now, a groundbreaking study on consciousness suggests they may have been right all along.

Researchers have discovered that approximately one in four patients who appear completely unresponsive may actually be conscious and aware but physically unable to show it. This phenomenon, known as cognitive motor dissociation, challenges long-held assumptions about disorders of consciousness and could have profound implications for how we assess and care for brain-injured patients.

The study, published in the New England Journal of Medicine, represents the largest and most comprehensive investigation of cognitive motor dissociation to date. An international team of researchers used advanced brain imaging and electrophysiological techniques to detect signs of consciousness in patients who seemed entirely unresponsive based on standard behavioral assessments.

The findings suggest that cognitive motor dissociation is far more common than previously thought. This has major implications for clinical care, end-of-life decision-making, and our fundamental understanding of consciousness itself.

The study examined 353 adult patients with disorders of consciousness resulting from various types of brain injuries. These conditions exist on a spectrum, ranging from coma (where patients are completely unresponsive and show no signs of awareness) to the vegetative state (where patients may open their eyes and have sleep-wake cycles but show no signs of awareness) to the minimally conscious state (where patients show some inconsistent but reproducible signs of awareness).

Traditionally, doctors have relied on bedside behavioral assessments to diagnose a patient’s level of consciousness. However, this approach assumes that if a patient can’t physically respond to commands or stimuli, they must not be aware. The new study challenges this assumption, revealing signs of consciousness that may not be outwardly visible.

Strikingly, the study found that 25% of patients who showed no behavioral signs of consciousness demonstrated brain activity consistent with awareness and the ability to follow commands. In other words, one in four patients who appeared to be in a vegetative state, or in a minimally conscious state without the ability to follow commands, was actually conscious and able to understand and respond mentally to instructions.

“Some patients with severe brain injury do not appear to be processing their external world. However, when they are assessed with advanced techniques such as task-based fMRI and EEG, we can detect brain activity that suggests otherwise,” says lead study author Yelena Bodien, PhD, in a statement.

Bodien is an investigator for the Spaulding-Harvard Traumatic Brain Injury Model Systems and Massachusetts General Hospital’s Center for Neurotechnology and Neurorecovery.

“These results bring up critical ethical, clinical, and scientific questions – such as how can we harness that unseen cognitive capacity to establish a system of communication and promote further recovery?”

The study also found that cognitive motor dissociation was more common in younger patients, those with traumatic brain injuries, and those who were assessed later after their initial injury. This suggests that some patients may recover cognitive abilities over time, even if they remain unable to communicate behaviorally.

Interestingly, even among patients who could follow commands behaviorally, more than 60% did not show responses on the brain imaging tests. This highlights the complex nature of consciousness and the limitations of current detection methods.

The findings raise challenging questions about how we diagnose disorders of consciousness, make end-of-life decisions, and allocate resources for long-term care and rehabilitation. They also open up new avenues for potential therapies aimed at restoring communication with these patients.

While the study represents a significant advance, the authors caution that the techniques used are not yet widely available and require further refinement before they can be routinely used in clinical practice.

“To continue our progress in this field, we need to validate our tools and to develop approaches for systematically and pragmatically assessing unresponsive patients so that the testing is more accessible,” adds Bodien. “We know that cognitive motor dissociation is not uncommon, but resources and infrastructure are required to optimize detection of this condition and provide adequate support to patients and their families.”

Source: https://studyfinds.org/brain-consciousness-vegetative/?nab=0

AI model 98% accurate in detecting diseases — just by looking at your tongue

(vladimirfloyd – stock.adobe.com)

This technology could be aah-mazing!

Researchers in Iraq and Australia say they have developed a computer algorithm that can analyze the color of a person’s tongue to detect their medical condition in real time — with 98% accuracy.

“Typically, people with diabetes have a yellow tongue; cancer patients a purple tongue with a thick greasy coating; and acute stroke patients present with an unusually shaped red tongue,” explained senior study author Ali Al-Naji, who teaches at Middle Technical University in Baghdad and the University of South Australia.

Examining the tongue for signs of disease has long been commonplace in Chinese medicine. (Image credit: MDPI)

“A white tongue can indicate anemia; people with severe cases of COVID-19 are likely to have a deep-red tongue,” Al-Naji continued. “An indigo- or violet-colored tongue indicates vascular and gastrointestinal issues or asthma.”
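
For illustration only, here is a rough sketch of how software might map a tongue photo’s dominant color to the categories described above. The hue thresholds, the color-to-condition mapping, and the synthetic sample image are my own assumptions; the researchers’ algorithm, which reports 98% accuracy, is a trained model rather than hand-set rules like these.

```python
# Illustrative sketch only: estimate a cropped tongue image's dominant hue and
# map it to the color categories described above. Thresholds are assumptions.
import cv2
import numpy as np

def dominant_hue(bgr_image: np.ndarray) -> float:
    """Median hue (0-179 on OpenCV's scale) over all pixels of a cropped tongue image."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    return float(np.median(hsv[:, :, 0]))

def rough_category(hue: float) -> str:
    if hue < 15 or hue > 165:
        return "red-leaning (flagged above for stroke and severe COVID-19)"
    if hue < 35:
        return "yellow-leaning (flagged above for diabetes)"
    if hue > 120:
        return "purple/violet-leaning (flagged above for cancer and vascular issues)"
    return "other / inconclusive"

# Stand-in for a cropped tongue photo: a 100x100 patch of a single reddish BGR color.
sample = np.full((100, 100, 3), (60, 60, 200), dtype=np.uint8)
print(rough_category(dominant_hue(sample)))  # prints the red-leaning category
```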

Source: https://nypost.com/2024/08/13/lifestyle/ai-model-98-accurate-in-detecting-diseases-just-by-looking-at-your-tongue/

Paradise Found: Experts Rank the West Coast’s Most Beautiful Beaches

A pelican in front of Haystack Rock on Cannon Beach in Oregon (Photo by Hank Vermote on Shutterstock)

The West Coast of the United States is home to some of the most breathtaking beaches in the world. From California’s dramatic cliffs to Oregon and Washington’s peaceful shores, there’s a beach for every vibe. With nearly 8,000 miles of shoreline, it would take years to get to every beach. That’s why we’ve created a list of the best West Coast beaches that travel experts across seven websites recommend adding to your bucket list. So, grab your sunscreen and towel, and discover what could become your new favorite spot. Is there a beach you love that’s not on the list? Let us know!

The List: Top 6 Must-Visit Beaches on the Left Coast

1. Cannon Beach, Oregon

Cannon Beach, Oregon (Photo by Tim Mossholder on Unsplash)

You won’t be able to put your camera down when visiting Cannon Beach along the Oregon coast. The breathtaking sunsets rival those in far-off lands, with the towering Haystack Rock providing an incredible backdrop. Nestled in the charming town of Cannon Beach, this coastal gem is a favorite spot for day-trippers from Portland, says Roam The Northwest.

You’ll also encounter plenty of wildlife at the beach, from tide pools to unique marine life. Nearby, Ecola State Park offers hiking trails and lookouts – perfect for selfies. Sixt recommends a stroll through Cannon Beach’s downtown area, filled with unique boutiques and galleries. It’s a full-day adventure!

Swimming might not be the main attraction at Cannon Beach, but that doesn’t bother visitors, reports Your Tango. There’s plenty to do, from exploring vibrant nature trails to walking to Haystack Rock at low tide. The beach promises fun times for everyone, even if you don’t take a dip!

2. La Jolla Beach, California

La Jolla Beach, California (Photo by Danita Delimont on Shutterstock)

According to USA Today, this popular San Diego beach is a dream come true with its miles of turquoise water and gentle waves, making it the perfect place to learn to surf. The swimming and snorkeling are unbeatable with plenty of colorful fish and marine life.

The beach is an ideal destination for families looking to enjoy the sand, surf, and explore the nearby State Marine Reserve. Roam the Northwest recommends bringing a bike to tour the Coast Walk Trail or visit other nearby beaches.

If you enjoy kayaking or snorkeling, Your Tango says that this beach offers some of the most secluded caves to explore. Just a short walk from La Jolla Cove is the Children’s Pool, an ideal spot for families with small children.

3. Rialto Beach, Washington

Sunset on Rialto Beach, Washington (Photo by Jay Yuan on Shutterstock)

The highlight of this stunning Olympic Coast beach is the aptly named Hole-in-the-Wall, located about a mile north of the parking lot. Accessible only at low tide, this natural rock formation is perfect for exploring and taking photos. It frames the sea stacks that line this stretch of beach beautifully, according to Roam the Northwest.

The water can be chilly in the summer, around 59 degrees, but Cheapism says the stunning scenery more than makes up for it. You’ll find massive sea stacks just offshore, piles of driftwood scattered along the beach and the famous Hole-in-the-Wall sea cave arch. The wildlife here is a real treat too—otters, whales, and seals are regulars!

This beach is one of the rare spots where you can bring your pets, but Yard Barker says you must keep them leashed and not let them go past Ellen Creek. It’s a popular place for beach camping too, though sadly, your four-legged friends have to sit that one out.

4. Ruby Beach, Washington

Ruby Beach (Photo by Sean Pavone on Shutterstock)

This is one of Washington State’s best-kept secrets according to Yard Barker. Adjacent to Olympic National Park, this spot is more “beachy” than Rialto. If it weren’t for the cooler weather, you might think you were in California. This spot is a must-add to any vacation itinerary.

Ruby Beach is conveniently located off Highway 101 and is perfect for day-trippers. With its towering sea stacks and cobbled stones, Roam the Northwest guarantees you’ll spend hours beachcombing and soaking in the wild beauty. Its remote charm and stunning landscapes keep people coming back for more.

During low tide, visitors can explore rocky areas and discover marine life in the tide pools, while photographers capture the scenic sea stacks and driftwood. For a more active experience, the nearby Olympic National Park offers coastal trails with stunning views of the Pacific Ocean. As Sixt highlights, the 1.5-mile hike to the Hoh River is breathtaking, featuring sea stacks, driftwood, and the chance to spot eagles and other wildlife.

Source: https://studyfinds.org/best-west-coast-beaches/?nab=0

Going vegan vs. Mediterranean diet: Surprising study reveals which is healthier

(© Mustafa – stock.adobe.com)

The Mediterranean diet has long been touted as the gold standard for healthy eating, but a new contender has emerged from an unexpected corner. Recent research shows that a low-fat vegan diet not only promotes more weight loss but also dramatically reduces harmful substances in our food.

The study, conducted by researchers at the Physicians Committee for Responsible Medicine, a nonprofit organization that promotes plant-based foods, compared the effects of a Mediterranean diet and a low-fat vegan diet on overweight adults. Participants on the vegan diet lost an average of 6 kilograms (about 13 pounds) more than those on the Mediterranean diet, with no change in their physical activity.

But the benefits, published in the journal Frontiers in Nutrition, didn’t stop at weight loss. The vegan diet also led to a dramatic 73% reduction in dietary advanced glycation end-products (AGEs). These harmful compounds, formed when proteins or fats combine with sugars, have been linked to various health issues, including inflammation, oxidative stress, and an increased risk of chronic diseases like Type 2 diabetes and cardiovascular disease.

Why you should eliminate AGEs from your diet

To understand AGEs, imagine them as unwanted houseguests that overstay their welcome in your body. They form naturally during normal metabolism, but they also sneak in through our diet, especially in animal-based and highly processed foods. AGEs are particularly abundant in foods cooked at high temperatures, such as grilled meats or fried foods. They can accumulate in our bodies over time, causing damage to tissues and contributing to the aging process – hence their nickname, “glycotoxins.”

The Mediterranean diet, long praised for its health benefits, surprisingly showed no significant change in dietary AGE levels. This finding challenges the perception that the Mediterranean diet is the gold standard for healthy eating. The vegan diet, on the other hand, achieved its AGE-busting effects primarily by eliminating meat consumption (which accounted for 41% of the AGE reduction), minimizing added fats (27% of the reduction), and avoiding dairy products (14% of the reduction).

These results suggest that a low-fat vegan diet could be a powerful tool in the fight against obesity and its related health issues. By reducing both body weight and harmful AGEs, this dietary approach may offer a two-pronged attack on factors that contribute to chronic diseases.

Mediterranean diet not best for weight loss?
The study’s lead author, Dr. Hana Kahleova, says that the vegan diet’s benefits extended beyond just numbers on a scale. The reduction in AGEs could have far-reaching implications for overall health, potentially lowering the risk of various age-related diseases.

“The study helps bust the myth that a Mediterranean diet is best for weight loss,” says Kahleova, the director of clinical research at the Physicians Committee for Responsible Medicine, in a statement. “Choosing a low-fat vegan diet that avoids the dairy and oil so common in the Mediterranean diet helps reduce intake of harmful advanced glycation end-products leading to significant weight loss.”

This research adds to a growing body of evidence supporting the benefits of plant-based diets. Previous studies have shown that vegetarian and vegan diets can reduce the risk of developing metabolic syndrome and Type 2 diabetes by about 50%. The dramatic reduction in dietary AGEs observed in this study may help explain some of these protective effects.

Source: https://studyfinds.org/vegan-vs-mediterranean-diet-which-is-healthier/?nab=0

How to know when it’s time to start therapy

(© Prostock-studio – stock.adobe.com)

People go to therapy for many reasons. A challenging life event, trauma, volatile emotions, relationship problems, poor mental health: all can prompt someone to seek it out.

Whatever the reason, it can be difficult to decide when and if therapy is right for you.

If you’re reading this, now’s probably the right time. If you’re considering therapy, something is likely bothering you and you want help. Consider this your sign to reach out.

If you’re still unsure, keep reading.

Why therapy?
Sometimes, our minds work against us. Therapy can help you understand why you think, feel, or act how you do and give you the skills you need to think, feel, or act in healthier ways.

This includes helping you:

  • identify, understand, and overcome internal obstacles
  • identify and challenge thought patterns and beliefs that are holding you back
  • improve your mental health
  • cope with mental illness
  • and create lasting changes to your thoughts and behavior that can improve all areas of your life.

When your mental health is suffering

Everyone experiences negative emotions in difficult situations — like sadness after a breakup or anxiety before a big life event. But when do these feelings become problematic? When you have poor mental health.

Mental health and mental illness are distinct, but related, concepts. Mental health refers to the inner resources you have to handle life’s ups and downs. You have good mental health if you enjoy life; feel connected to others; cope well with stress; and have a sense of purpose, a sense of self and strong relationships.

If you have poor mental health, it can be hard to adapt to changes like a breakup, move, loss or parenthood. Therapy can help you improve your mental health, develop resilience and maintain a state of well-being.

Mental illness refers to distressing disturbances in thoughts, feelings and perceptions that interfere with daily life. There are different kinds of mental illness, each characterized by different thoughts, feelings and behaviors.

Mental illness may feel like:

  • Hopelessness — feeling stuck, unmotivated or helpless.
  • Apathy — feeling uninterested in things that used to give you satisfaction or pleasure.
  • Anger — feeling rage or resentment, especially frequently or disproportionately.
  • Stress — feeling overwhelmed, unable to cope, unwilling to rest or like everything is hard (even if you know it shouldn’t be).
  • Guilt — feeling ashamed, undeserving of good things or deserving of bad things.
  • Anxiety — worrying about what has or might happen or having disturbing intrusive thoughts.
  • Exhaustion — sleeping more than usual, having difficulty getting out of bed or lacking energy during the day.
  • Insomnia — having difficulty falling or staying asleep.

Poor mental health and mental illness are equally good reasons to seek therapy.

Ask yourself: Am I having trouble dealing with life challenges?

If the answer is yes, therapy might be for you.

Therapy is a process that requires time, effort and the right psychologist for you. Don’t let mental health stigma hold you back. (Credit: cottonbro from Pexels)

People often cope with the feelings listed above in different ways. Some gain or lose a lot of weight. Others might seek out or do things that are unhealthy for them, like entering a toxic relationship, engaging in dangerous activities, developing an unhealthy habit or procrastinating. Others might isolate themselves from friends and family, or catastrophize and ruminate on negative experiences.

However it manifests, mental illness often gets worse if left untreated. It can have very real impacts on your life, potentially leading to unemployment, broken relationships, poor physical health, substance abuse, homelessness, incarceration or even suicide.

Ask yourself: Is mental illness negatively affecting my functioning or well-being?

If the answer is yes, therapy might be for you.

What if therapy didn’t work before?
Many people put off going to therapy because they don’t think their problems are serious enough, but you don’t need a big, deep reason to start therapy.

Some people go to therapy to learn more about themselves. Some, to improve their skills, relationships or productivity. Others go for help reaching their goals or because they aren’t happy and don’t know why. Any of these are good reasons to start therapy, even if they don’t seem like “problems” in a traditional sense. You can go to therapy just because there’s something about yourself or your life you’d like to explore.

Therapy is a process. Whether psychotherapy works for you depends on many factors, such as time, effort and your psychologist.

There’s no quick fix for mental health. Symptoms can take weeks, months or even years to improve. Although this can be frustrating or disheartening, for therapy to work, you have to give it time.

Source: https://studyfinds.org/when-to-start-therapy/?nab=0

Study Reveals Priorities Of Indian Gen Z, How Hard They're Willing To Work

Gen Z usually refers to those born between 1997 and 2012.

One in four GenZ respondents in India is more inclined towards new-age job fields. (Representational)

One in every four Gen Z respondents in India is more inclined towards new-age job fields like artificial intelligence, cybersecurity and content creation, while 43 per cent are willing to sacrifice work-life balance to succeed in their career, a study has found.

The study, the Quest Report 2024, which examines Gen Z traits and trends around dreams, careers, and aspirations, also found that only 9 per cent of respondents want to pursue entrepreneurship, as they seek stability and security in work life.

“One out of 4 Indian respondents are more inclined towards new-age job fields like content creation, data analysis, AI, and cybersecurity,” said the study, commissioned by iQOO in association with CyberMedia Research. iQOO is a smartphone sub-brand of the vivo group.

“43 per cent of respondents in India and 46 per cent globally are willing to give up work-life balance to succeed in their career,” it said, adding that around 62 per cent of the Indian youth are willing to give up their hobbies and other interests to achieve their dreams.

“Recent debate on work-life balance due to deliberation on 14-hour work day & 70-hour workweek have stirred conversations amongst the Gen Z,” the study noted.

The study of 6,700 Gen Z respondents, aged between 20 and 24 years, from seven countries including the US, the UK, Malaysia, Brazil and India, stated that 19 per cent of Indians surveyed prefer career advancement in big organisations, while 84 per cent of Indian respondents believe their jobs align with their goals, compared to 72 per cent globally.

Your high school friend’s genes may have a surprising effect on your life

(© Monkey Business – stock.adobe.com)

Have you ever wondered why some groups of friends all seem to follow similar life paths? Rutgers University researchers suggest that the answer might be hidden in our friends’ DNA. This fascinating research reveals how the genetic makeup of our teenage pals could influence our risk of developing mental health and substance abuse issues later in life.

The Power of Peer Genetics
Dr. Jessica Salvatore, an associate professor at Rutgers Robert Wood Johnson Medical School, led a team investigating an emerging field called socio-genomics. This area of study looks at how one person’s genes can affect the observable traits of another – in this case, how our friends’ genetic predispositions might shape our own health outcomes.

“Peers’ genetic predispositions for psychiatric and substance use disorders are associated with an individual’s own risk of developing the same disorders in young adulthood,” Salvatore explains in a media release. “What our data exemplifies is the long reach of social genetic effects.”

So, what does this mean in everyday terms? Essentially, the study suggests that if your teenage friends have a genetic tendency towards certain mental health or addiction issues, you might be more likely to experience similar problems as an adult – even if you don’t share those genetic risks yourself.

Methodology

To uncover these connections, Salvatore and her team dove into a treasure trove of data from Sweden. They analyzed information from over 1.5 million people born between 1980 and 1998, mapping out where these individuals went to school and lived during their teenage years.

The researchers then looked at medical records, pharmacy data, and legal registries to see who developed substance abuse issues, major depression, or anxiety disorders as adults. By comparing this information with what they knew about the genetic risks of each person’s peer group, they uncovered some startling patterns.

Key Results: School Ties Have the Strongest Impact
Even after accounting for an individual’s own genetic predispositions and family background, there was a clear link between the genetic makeup of their peer group and their likelihood of developing these disorders later in life.

Interestingly, the study found that these “social genetic effects” were strongest among school-based peers, particularly for those in upper secondary school (ages 16-19). This suggests that the friends we make during our late teens might have an especially powerful influence on our future health.

The effects were most pronounced for drug and alcohol use disorders, compared to depression and anxiety. This finding highlights the potential long-term impact of the social environment during those crucial teenage years.

Why Does This Happen?
You might be wondering how your friends’ genes could possibly affect your own health. Dr. Salvatore admits that more research is necessary to fully understand the mechanisms at play.

“The most obvious explanation for why peers’ genetic predispositions might be associated with our own well-being is the idea our peers’ genetic predispositions influence their phenotype, or the likelihood that peers are also affected by the disorder,” Dr. Salvatore says.

However, the study found that the connection persisted even when controlling for whether the peers themselves had these disorders. This suggests that something more complex might be happening – perhaps involving shared environments, social norms, or other factors we don’t yet fully understand.

Source: https://studyfinds.org/friends-genes-effect-on-your-life/?nab=0

Is knowledge a ‘curse’ threatening society?

(© Petrova-Apostolova – stock.adobe.com)

From ancient philosophers to modern-day scientists, the pursuit of knowledge has been seen as inherently good. But what if that assumption is wrong? A new study presents a counterintuitive idea that’s shaking up how we think about progress.

Economists Kaushik Basu of Cornell University and Jörgen Weibull of the Stockholm School of Economics have uncovered what they call a “knowledge curse” – situations where increased understanding of a problem can paradoxically reduce overall welfare. Their findings challenge our intuitions about the inherent value of information and raise thought-provoking questions about the downsides of scientific progress.

“Greater knowledge is always an advantage for a rational individual,” the researchers note in their paper, published in Royal Society Open Science. “However, this article shows that for a group of rational individuals greater knowledge can backfire, leading to a worse outcome for all.”

How is this possible? The key insight is that in certain types of interactions between people, having more information can change behavior in ways that end up hurting everyone involved.

The Mask Dilemma
To understand this counterintuitive concept, consider a simplified example: Imagine a society where everyone wears masks during flu season because they know it generally reduces transmission, even if it’s a bit uncomfortable. Overall, this leads to fewer illnesses and better public health.

Now imagine scientists discover a way to precisely measure the contagiousness of different flu strains each day. Armed with this new knowledge, people only wear masks on the most contagious days. While this seems rational for each individual, the result is that mask-wearing declines overall and more people get sick in the end.

This scenario illustrates how enhanced knowledge about an existing reality – such as the cost-benefit of wearing a face mask to help prevent the spread of disease – may hinder cooperation among purely self-interested individuals. When everyone acts on their own self-interest with more perfect information, it can sometimes lead to worse collective outcomes.

The Prisoner’s Dilemma of Progress

The researchers also demonstrate this effect using game theory – mathematical models of strategic interactions between rational decision-makers. They show how in certain types of “games” or scenarios, players with more knowledge about the situation will make choices that leave everyone worse off compared to when they had less information.

Basu and Weibull build their case using a theoretical two-player “Base Game” where each player has two actions to choose from, with expected payoffs for each combination. They then show how introducing new options or deeper understanding of the payoffs can lead to situations similar to the famous Prisoner’s Dilemma, where individual rationality leads to collectively suboptimal outcomes.
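
As a rough illustration of that logic, the sketch below uses made-up payoffs rather than the paper’s own numbers. Once new information unlocks the extra action of skipping the mask on measured “low-risk” days, that action dominates always masking, and the only equilibrium leaves both players worse off than before the knowledge existed.

```python
# Illustrative payoffs (not the paper's): C = "always wear a mask",
# S = "skip the mask on measured low-risk days", an action that only exists
# once the new information is available.
from itertools import product

def pure_nash(actions, payoff):
    """Pure-strategy Nash equilibria of a two-player game."""
    equilibria = []
    for a, b in product(actions, repeat=2):
        a_is_best = all(payoff[(a, b)][0] >= payoff[(alt, b)][0] for alt in actions)
        b_is_best = all(payoff[(a, b)][1] >= payoff[(a, alt)][1] for alt in actions)
        if a_is_best and b_is_best:
            equilibria.append((a, b))
    return equilibria

# Without the new knowledge, masking is effectively the only option.
print(pure_nash(["C"], {("C", "C"): (3, 3)}))   # [('C', 'C')] -> both get 3

# With the new knowledge, S strictly dominates C for each player.
informed = {
    ("C", "C"): (3, 3),   # everyone masks: minor hassle, little illness
    ("S", "C"): (4, 1),   # I skip while you mask: I free-ride on your caution
    ("C", "S"): (1, 4),
    ("S", "S"): (2, 2),   # nobody masks on "low-risk" days: more illness overall
}
print(pure_nash(["C", "S"], informed))          # [('S', 'S')] -> both get only 2
```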

“What is summed up in the above parable is that in some interactions, perverse outcomes can be the result of science and rationality,” Basu and Weibull write.

A Race to the Bottom?
The implications of this work extend far beyond simple games. The researchers explore how evolutionary processes might play out if only some people initially gain access to new information. They find that knowledgeable individuals tend to outcompete the ignorant over time, potentially leading to a “race to the bottom” where everyone ends up worse off as the information spreads.

This raises challenging questions: Should potentially harmful information sometimes be restricted? Do we need stronger collective decision-making processes to override individual incentives in some cases? How can we reap the benefits of increased knowledge while mitigating its occasional downsides?

“Science can yield huge benefits, but we need safeguards,” Basu says in a statement. “What those are, we do not know. But the paper urges us to pay attention to this.”

The Power of Preemption
While the study paints a potentially grim picture, the authors also offer hope. They point to examples of successful preemptive actions throughout history, such as the drafting of constitutions that anticipate and address future problems. “Such preemptive laws have conferred large benefits to humankind,” they write.

The researchers suggest that cultivating stronger social norms and moral motivations could help align individual and collective interests, potentially dissolving the knowledge curse. By encouraging people to consider the greater good, not just their own immediate interests, we might be able to harness the power of knowledge without falling victim to its pitfalls.

As we continue to push the boundaries of human understanding, this work serves as a timely reminder that information isn’t always benign. Managing the power of knowledge responsibly may be one of the great challenges of our information age – a challenge we must meet to ensure that our quest for knowledge truly does lead to a better world for all.

“We assume that a scientific breakthrough that gives us a deeper understanding of the world can only help,” concludes Basu. “Our paper shows that in the real world, where many people live and strive individually or in small groups to do well for themselves, this intuition may not hold. Science may not be the panacea we take it to be.”

Source: https://studyfinds.org/is-knowledge-a-curse-threatening-society/?nab=0

No drink is safe: Studies show alcohol’s link to growing list of cancers

(Credit: Photo by Karolina Grabowska from Pexels)

It’s become common knowledge that alcohol is a carcinogen — otherwise known as a cancer-causing substance. A new report paints a concerning picture of just how many cancers and related deaths may be the result of drinking alcohol.

The study in CA: A Cancer Journal for Clinicians, published by the American Cancer Society, lists alcohol as the third most common carcinogen, with 5% of cancer cases in people over 30 attributable to imbibing. The new numbers may come as a shock to the public. In 2020, a national survey of adults showed that fewer than a third of respondents knew that alcohol increases cancer risk. About 10% thought that drinking actually reduced their risk of developing cancer.

In a single year just before the COVID-19 pandemic (during which cancer was less likely to be diagnosed due to interruptions in care), there were about 95,000 cancer cases and 24,000 cancer deaths attributed to alcohol. Seven cancers were associated with alcohol: female breast, colorectal, mouth, throat, larynx, esophagus, and liver. However, the report states that evidence is accumulating that alcohol can cause other cancers, such as pancreatic cancer.

An alcoholic beverage in the United States has about 14 grams of pure ethanol (the type of alcohol in beverages). That includes one 12-ounce serving of 5% ABV (alcohol by volume) beer, 8 to 10 ounces of 7% ABV hard seltzer, a 5-ounce serving of 12% ABV wine, or 1.5 ounces of 40% ABV liquor. The greater a person’s consumption, the higher the risk of cancer.
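
For readers who want to check the arithmetic: grams of ethanol are roughly fluid ounces times 29.57 mL per ounce times ABV times 0.789 g/mL (ethanol’s density). The short sketch below runs those numbers, using 9 ounces as a midpoint for the hard seltzer range.

```python
# Quick check of the standard-drink arithmetic described above.
ML_PER_OZ = 29.57          # milliliters per US fluid ounce
ETHANOL_DENSITY = 0.789    # grams of ethanol per milliliter

def ethanol_grams(volume_oz: float, abv: float) -> float:
    return volume_oz * ML_PER_OZ * abv * ETHANOL_DENSITY

for drink, oz, abv in [("12 oz beer, 5% ABV", 12, 0.05),
                       ("9 oz hard seltzer, 7% ABV", 9, 0.07),
                       ("5 oz wine, 12% ABV", 5, 0.12),
                       ("1.5 oz liquor, 40% ABV", 1.5, 0.40)]:
    print(f"{drink}: {ethanol_grams(oz, abv):.1f} g ethanol")
# Each works out to roughly 14 grams (the hard seltzer midpoint is about 14.7 g).
```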

(Photo by Fred Moon on Unsplash)

Lesser amounts of alcohol may also pose a risk to your health
A recent study at the Centers for Disease Control and Prevention (CDC) found that about 17% of cancer deaths were attributable to low levels of alcohol consumption — less than the former national dietary guidelines’ recommended cap of two drinks per day for men and one drink per day for women. The current national dietary guidelines state that no amount of alcohol is safe or beneficial to your health.

Other research attributes tens of thousands of cancer cases to alcohol consumption. In the rising tide of disease and death associated with drinking — especially among women and younger people — cancer is a major component of concern.

Breast cancer is strongly associated with drinking alcohol, according to ongoing research. Female breast cancer is the type of cancer with the greatest number of cases that can be attributed to alcohol: about 44,000 cases (16%) in 2019 were linked to ethanol use. About 18,000 cases (13%) of colorectal cancers in both men and women were linked to drinking.

The proportion of cancer cases attributable to alcohol was about three times higher in men than in women (23% versus 8%). The exception was esophageal cancer, in which 24% of cases among women were attributable to alcohol, compared to 17% of cases in men.

Source: https://studyfinds.org/no-drink-is-safe-alcohol-cancer/?nab=0

Robot dentist performs world’s first fully automated procedure

It is hoped the technology could be faster and more accurate than conventional procedures.

A patient undergoing a dental procedure from a robot. Pic: Perceptive

A robot has completed a fully-automated dental procedure on a human, in a world first.

The technology features a robotic arm – along with artificial intelligence and 3D imaging – for performing dental work.

The US-based company Perceptive says its technology aims to be more accurate and faster in completing procedures including fillings and crowns.

Chief executive and founder Dr Chris Ciriello said: “This medical breakthrough enhances precision and efficiency of dental procedures.”

The company has received $30m (£23.5m) in funding and is backed by dentist Edward Zuckerberg, the father of Meta boss Mark Zuckerberg.

The firm claims that, in the future, crown placements could be completed in just 15 minutes.

That compares with the current method, which requires two hour-long visits to the dentist.

Part of the process begins with a 3D scan of a patient’s tooth and mouth, capturing images beneath the gum line.

The robotic device is still a work in progress though.

It is not currently on sale in the US and does not have clearance from the American regulator, the Food and Drug Administration (FDA).

Source: https://news.sky.com/story/robot-dentist-performs-worlds-first-fully-automated-procedure-13191514

Scientists discover new type of wood in iconic tulip trees

Liriodendron tulipifera wood ultrastructure observed under a cryo-SEM reveals enlarged macrofibril structures. (Credit: Jan J Lyczakowski and Raymond Wightman)

In a fascinating study that reads like a botanical detective story, researchers have uncovered a secret long hidden within the wood of some of the world’s most beloved trees. Scientists from Jagiellonian University and the University of Cambridge have discovered an entirely new type of wood in tulip trees, a finding that could reshape our understanding of plant evolution and perhaps greatly aid our efforts to combat climate change.

The study, published in New Phytologist, set out to explore the microscopic structure of wood across various tree species. But what they found in the tulip tree (Liriodendron tulipifera) and its close relative, the Chinese tulip tree (Liriodendron chinense), was truly unexpected – a wood structure that defies traditional categories.

The key to this discovery lies in tiny structures called macrofibrils – long fibers aligned in layers within the secondary cell wall of wood. In tulip trees, these macrofibrils are much larger than those found in their hardwood relatives. This unique structure, which the researchers have dubbed “midwood” or “accumulator-wood,” may explain the tulip tree’s remarkable ability to capture and store carbon.

“We show Liriodendrons have an intermediate macrofibril structure that is significantly different from the structure of either softwood or hardwood,” explains lead author Dr. Jan Łyczakowski, from Jagiellonian University, in a statement. This discovery challenges the long-held binary classification of wood as either softwood (from gymnosperms like pines) or hardwood (from angiosperms like oaks).

Tulip Tree (Liriodendron tulipifera) in the Cambridge University Botanic Garden. View from ground looking up into the canopy. (Credit: Kathy Grube)

The timing of this adaptation is particularly intriguing. Tulip trees diverged from their magnolia relatives around 30-50 million years ago, coinciding with a dramatic decrease in atmospheric CO2 levels. Dr. Łyczakowski suggests that this enlarged macrofibril structure could be an evolutionary response to more efficiently capture carbon in a changing environment.

This discovery opens up exciting possibilities for climate change mitigation. Tulip trees are already known for their rapid growth and efficient carbon sequestration. Now, with a better understanding of their unique wood structure, there’s potential to leverage these trees in large-scale carbon capture initiatives.

“Tulip trees may end up being useful for carbon capture plantations,” Dr. Łyczakowski notes. “Some east Asian countries are already using Liriodendron plantations to efficiently lock in carbon, and we now think this might be related to its novel wood structure.”

The study’s findings extend beyond tulip trees. In their survey of 33 tree species from the Cambridge University Botanic Garden, the researchers also found that certain gymnosperms among the gnetophytes have independently evolved a hardwood-like structure typically seen only in angiosperms. This convergent evolution adds another layer of complexity to our understanding of plant adaptation.

“We analyzed some of the world’s most iconic trees like the giant sequoia, Wollemi pine and so-called ‘living fossils’ such as Amborella trichopoda, which is the sole surviving species of a family of plants that was the earliest still existing group to evolve separately from all other flowering plants,” says Dr. Raymond Wightman, Microscopy Core Facility Manager at the Sainsbury Laboratory Cambridge University.

This research not only advances our understanding of plant evolution but also highlights the crucial role of wood ultrastructure in carbon sequestration. As we grapple with the challenges of climate change, insights like these could prove invaluable in developing more effective strategies for carbon capture and storage.

The discovery of this new type of wood in tulip trees serves as a reminder that even in well-studied areas of biology, there are still surprises waiting to be uncovered. It also underscores the importance of preserving diverse plant collections, like those found in botanic gardens, which continue to yield new scientific insights centuries after their establishment.

Source: https://studyfinds.org/new-type-of-wood-tulip-trees/?nab=0

Some ALS patients miraculously recover almost entirely. Doctors may have figured out why

(© PH alex aviles – stock.adobe.com)

Imagine losing the ability to move your arms, to walk, to speak, and eventually even to breathe on your own. This is the grim reality faced by patients with amyotrophic lateral sclerosis (ALS), often known as Lou Gehrig’s disease. Incredibly, a handful of patients wind up defying this devastating prognosis. Now, in a study published in the journal Neurology, researchers have uncovered a genetic clue that may explain why these rare individuals experience a remarkable reversal of their ALS symptoms.

ALS is a neurological condition that progressively destroys motor neurons, the nerve cells responsible for controlling voluntary muscle movement. As these neurons die, patients gradually lose control of their bodies. The disease is usually unforgiving, with most patients left trapped in failing bodies before succumbing within 2-5 years of diagnosis.

However, in an intriguing twist that has puzzled researchers for at least 60 years, a handful of patients somehow survive and regain their lost faculties. These individuals, dubbed “ALS Reversals,” initially display all the hallmarks of the disease but then experience a substantial and sustained improvement in their condition. It’s as if their bodies have found a way to hit the pause button on ALS, or even rewind the clock.

“With other neurological diseases, there are now effective treatments,” says Dr. Richard Bedlack, the Stewart, Hughes, and Wendt Professor in the Department of Neurology at Duke University School of Medicine, in a statement. “But we still don’t have great options for these patients, and we desperately need to find things. This work provides a starting point to explore how biological reversals of ALS occur and how we might be able to harness that effect therapeutically.”

To unravel this medical mystery, researchers at Duke Health and St. Jude Children’s Research Hospital conducted a genome-wide association study (GWAS), comparing the genetic makeup of 22 ALS Reversal patients with that of typical ALS patients. The results were striking.
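
For readers unfamiliar with the method, a GWAS boils down to running an association test at each genetic variant and asking whether one allele is over-represented in one group. The sketch below shows that kind of per-variant test with entirely hypothetical allele counts; it is not the study’s data or its exact statistical pipeline.

```python
# Hypothetical allele counts, shown only to illustrate the kind of per-variant
# test a GWAS repeats across the genome (not the study's data).
from scipy.stats import fisher_exact

# Rows: ALS Reversal patients, typical ALS comparisons.
# Columns: copies of the variant allele, copies of the reference allele.
counts = [[18, 26],     # 22 Reversal patients contribute 44 alleles
          [210, 790]]   # a larger comparison group

odds_ratio, p_value = fisher_exact(counts)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
# A real GWAS runs this sort of test at millions of variants, so a hit must
# clear a genome-wide significance threshold (commonly p < 5e-8).
```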

The team identified a specific genetic variant that was significantly more common in the Reversal group. This variant, known as a single nucleotide polymorphism (SNP), is associated with reduced levels of a protein that blocks the IGF-1 signaling pathway. In simpler terms, it’s like turning down the volume on a gene that might be contributing to the progression of ALS.

What makes this finding particularly exciting is its potential implications for treatment. IGF-1, or insulin-like growth factor 1, has long been a target of interest in ALS research due to its role in protecting motor neurons. Previous studies have shown that ALS patients with rapid disease progression tend to have lower levels of IGF-1 protein. However, clinical trials aimed at increasing IGF-1 levels have yielded disappointing results.

This new discovery suggests a fresh approach to targeting the IGF-1 pathway.

“While it may not be effective to simply give people IGF-1, our study indicates we might have a way to go about it differently by reducing the levels of this inhibiting protein,” says Dr. Jesse Crayle, co-lead author of the study. “It is also possible that the prior studies with IGF-1 were just not adequately dosed or need to be dosed in a different way.”

Source: https://studyfinds.org/als-reversals-miraculous-recovery/?nab=0

Lung Cancer: 5 Warning Signs You Can Spot On Face And Neck

Symptoms of lung cancer can go unnoticed, according to experts. A few early signs mimic other conditions, and some cases cause no symptoms at all. However, if you notice a few indications on your face – changes in skin texture, facial pain, and a change in the colour of your lips – you need to see your doctor immediately. Read on to know more.

Lung cancer is usually caused by uncontrolled cell division in your lungs. Lung cells normally divide and make more copies of themselves as part of their routine function, but damaged cells divide uncontrollably, creating masses, or tumours, of tissue that eventually keep your organs from working properly. According to experts, there are many signs and symptoms of this cancer, and recognizing them can help you prevent its further spread.

Knowing the warning signs of lung cancer is important so you can get tested early. Getting tested for cancer may feel scary, but an early diagnosis helps improve treatment outcomes and can prolong your life. A few early signs of this cancer – which killed an estimated 1.8 million people across the world – that you can spot include:

Swollen face

According to studies, if you spot a recurrent swelling on your face, it can be a symptom of lung cancer as it can be caused by a blockage in the superior vena cava (SVC). This blockage occurs when tumours near the SVC press on the vein or surrounding area, slowing or stopping blood flow.

Doctors say the swelling also affects the neck, arms, and upper chest, and the skin may appear bluish-red. Other symptoms of SVC syndrome include breathlessness, headaches, dizziness, and changes in consciousness.

Change in skin texture

Experts say even though it is not common, lung cancer can cause skin changes – including hyperpigmentation – a common condition that makes some areas of the skin darker than others. The spots are sometimes called age spots, sun spots, or liver spots. Doctors say you may notice flat brown, black, pink, or red spots or patches.

Facial pain 

According to doctors, facial pain is a rare symptom of non-metastatic lung cancer. It can be caused by several factors, including compression of the vagus nerve, where the tumour or mediastinal adenopathy presses on the nerve, leading to pain that is usually unilateral, typically on the right side of the face around the ear.

Also, a few malignant tumour cells can produce humoral factors that lead to paraneoplastic syndrome – which can also cause facial pain.

Source: https://www.timesnownews.com/health/lung-cancer-5-warning-signs-you-can-spot-on-face-and-neck-article-112152769

7 Best Careers, According To Experts

A nurse taking a patient’s blood pressure (© M. Business – stock.adobe.com)

Choosing the best career can be a daunting task, given the myriad of options available and the ever-changing job market. Whether you’re just starting your professional journey or considering a mid-life career switch, understanding which careers offer the most promise, satisfaction, and growth opportunities is crucial. In this article, we will explore the top careers across various industries, highlighting roles that not only offer competitive salaries and job security but also provide personal fulfillment and the chance to make a meaningful impact. From tech and healthcare to creative fields and emerging industries, we’ll delve into the best career paths that can help you achieve your professional and personal goals. We compared 10 different expert lists to come away with the seven most recommended career paths. Let us know your thoughts in the comments!

The List: Best Careers, According to Experts
1. Nurse Practitioner

Nurse practitioner wrapping patient’s arm with a bandage (© dusanpetkovic1 – stock.adobe.com)

Nursing and healthcare as a whole are growing rapidly. Being a healthcare practitioner is a great job for anyone who wants plenty of opportunity in their future. “Nurse practitioners are registered nurses with additional education. Extra schooling allows these professionals to take patient histories, perform physical exams, order labs, analyze lab results, prescribe medicines, authorize treatments and educate patients and families on continued care,” writes U.S. News.

Employment of nurse practitioners is projected to change massively from 2021 to 2031, with as many as 112,700 new jobs created, according to Insider. In other words, you don’t have to worry about the longevity of a career in this field.

2. Software Engineer

Software engineer (Credit: PeopleImages.com – Yuri A/Shutterstock)

As technology continues to wrap itself around everything we do, we need engineers to design and improve the software behind it all. As it turns out, you could make a good living doing this work. According to Glassdoor, software engineers’ median base salary is $116,638.

“Software engineers build computer software, a broad category that could apply to computer games, mobile apps, web browsers or any other computer-based system. Most jobs require a degree in computer science or computer engineering, plus proficiency in programming languages like Java and Python,” writes CBS News.

3. Information Systems Manager

Information systems manager speaking with an employee (Photo by Zivica Kerkez on Shutterstock)

You’ll notice tech and healthcare jobs sort of dominate this list. Another top candidate for best career is an IS manager. According to the Bureau of Labor Statistics, information systems managers make $102,690 per year.

“Before becoming IS managers, individuals generally have several years of experience under their belt in a related field. In general, larger organizations require more-seasoned IT managers than smaller companies or startups do. According to the BLS, a chief technology officer (CTO), who supervises the entire technology function at a larger organization, will often need more than 15 years of IT experience,” according to Investopedia.

4. Health Specialties Teachers

High school teacher high-fiving teen students (© LIGHTFIELD STUDIOS – stock.adobe.com)

Teaching is a calling for some. If you’re interested in health or the human body, educating the next generation is a hot job right now and will only grow as the healthcare field does. Employment of health specialties teachers is projected to grow by 59,400 jobs from 2021 to 2031, according to Insider.

“The majority of health specialty teachers work in colleges or universities. However, some work in medical hospitals, trade schools, or junior colleges. Depending on the field that you are teaching in, additional licensure and certifications will be required. Health specialties include pharmacists, social workers, psychologists, veterinarians, dentists, and others. And, you can get paid to teach them!” according to CareerFitter.

Source: https://studyfinds.org/best-careers/?nab=0

Vegan diet slows down biological aging after just 8 weeks

(© rh2010 – stock.adobe.com)

The study, published in BMC Medicine and conducted by researchers from Stanford University and TruDiagnostic, focused on a unique group of participants: 21 pairs of identical twins. By comparing twins who followed either a vegan or omnivorous diet for eight weeks, the scientists were able to control for genetic factors and isolate the impact of diet alone on biological aging.

The most striking finding? Participants who followed a vegan diet showed significant decreases in their “epigenetic age” – a measure of biological aging based on chemical modifications to DNA. These modifications, known as DNA methylation, can affect how our genes are expressed without changing the underlying genetic code. Previous research has linked increased DNA methylation to the aging process, making this discovery particularly intriguing.

But the benefits didn’t stop there. The vegan group also showed improvements in the estimated biological age of several organ systems, including the heart, liver, and metabolic systems. These changes were not observed in the omnivorous group, suggesting that a plant-based diet might offer unique anti-aging benefits.

So, what does this mean for you? While the study was short-term and involved a small sample size, it provides compelling evidence that even a brief switch to a vegan diet could have measurable impacts on your biological age. This doesn’t necessarily mean you need to give up animal products entirely, but it does suggest that incorporating more plant-based meals into your diet could potentially slow down the aging process at a cellular level.

It’s important to note that the vegan group also lost more weight on average than the omnivorous group, which could have contributed to the observed anti-aging effects. The researchers provided meals for the first four weeks of the study, with the vegan meals containing fewer calories. This highlights the complex interplay between diet, weight loss, and aging, and underscores the need for further research to tease apart these factors.

The study utilized cutting-edge epigenetic analysis techniques to measure biological age. These “epigenetic clocks” are based on patterns of DNA methylation and provide a more accurate picture of how quickly a person is aging at a cellular level compared to their chronological age. By applying multiple epigenetic clocks and other analytical tools, the researchers were able to get a comprehensive view of how diet affected various aspects of biological aging.
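
Conceptually, most epigenetic clocks are linear models: a weighted sum of methylation levels (beta values between 0 and 1) at selected CpG sites, plus an intercept. The sketch below illustrates that idea with made-up CpG names, weights, and intercept rather than any published clock’s coefficients.

```python
# Made-up CpG sites, weights, and intercept, shown only to illustrate the
# "weighted sum of methylation levels" idea; not any published clock.
CPG_WEIGHTS = {"cg0000001": 12.4, "cg0000002": -8.7, "cg0000003": 5.1}
INTERCEPT = 30.0

def epigenetic_age(beta_values: dict) -> float:
    """Predicted biological age = intercept + sum(weight_i * methylation_i).

    Beta values are methylation fractions between 0 (unmethylated) and 1.
    """
    return INTERCEPT + sum(w * beta_values[cpg] for cpg, w in CPG_WEIGHTS.items())

sample = {"cg0000001": 0.62, "cg0000002": 0.35, "cg0000003": 0.80}
print(f"Estimated epigenetic age: {epigenetic_age(sample):.1f} years")  # about 38.7
```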

While the results are promising, the researchers caution that more work is needed to understand the long-term effects of a vegan diet and to clarify the relationship between dietary composition, weight loss, and aging. They also stress the importance of proper nutrient supplementation for those following a vegan diet, as deficiencies in certain nutrients like vitamin B12 could potentially have negative effects on epigenetic processes.

As always, it’s important to consult with a healthcare professional before making significant changes to your diet, especially if you’re considering a fully vegan approach.

Source: https://studyfinds.org/vegan-diet-slows-down-biological-aging/?nab=0

Sit down, doc: Patients happier when physicians talk to them at eye level

female wheelchair patient with specialist (© Spotmatik – stock.adobe.com)

When you’re lying on a hospital bed, how your doctor interacts with you can make a big difference in your experience as a patient. A new study suggests that one simple change by physicians — having them sit down instead of standing over patients — may significantly improve how patients perceive their care.

Researchers from the University of Michigan and Veterans Affairs Ann Arbor Healthcare System reviewed 14 studies examining how a doctor’s posture affects patient perceptions in hospital settings. Their findings, published in the Journal of General Internal Medicine, indicate that patients tend to view seated doctors more favorably.

“The studies measured many different things, from length of the patient encounter and patient impressions of empathy and compassion, to hospitals’ overall patient evaluation scores as measured by standardized surveys like the federal HCAHPS survey,” notes Nathan Houchens, MD, a U-M Medical School faculty member and VA hospitalist who worked with U-M medical students to review this evidence, in a media release.

This may seem like a small detail, but in the high-stress environment of a hospital, these perceptions can have meaningful impacts. Patients who feel their doctors communicate well and care about them tend to be more satisfied with their care overall. They may also be more likely to follow treatment plans and have better health outcomes.

A new study suggests that one simple change by physicians — having them sit down instead of standing over patients — may significantly improve how patients perceive their care. (Photo by MART PRODUCTION from Pexels)

The idea behind this effect is rooted in nonverbal communication and social psychology. When a doctor stands over a patient’s bed, it can create a sense of hierarchy or intimidation. By sitting down, the doctor puts themselves at eye level with the patient, which can feel more equal and collaborative.

This doesn’t mean doctors need to pull up a chair for every brief interaction. The studies reviewed looked at more substantial conversations, like discussing diagnosis and treatment plans. For these important talks, taking a seat could make a real difference.

However, getting doctors to consistently sit down may be easier said than done. Several studies noted that even when instructed to sit, doctors often remained standing. Reasons included lack of available seating, concerns about efficiency, and worries about hygiene for patients in isolation.

Houchens suggests hospitals could encourage sitting by ensuring each patient room has a dedicated chair for clinicians and by creating a culture where sitting with patients is the norm. With minimal cost and effort, this small change in body language could lead to more positive hospital experiences for patients.

Source: https://studyfinds.org/sit-down-doc-patient-satisfaction/?nab=0

Why this creepy, jawless fish may ‘hold key to understanding’ human evolution

Closeup of a river lamprey. (Photo by Gena Melendrez on Shutterstock)

In the realm of evolutionary biology, an unlikely hero has emerged: the sea lamprey. This ancient, jawless fish, often viewed as a pest in Midwestern fisheries, is helping scientists unlock the secrets of our own evolutionary past. A jaw-dropping study from Northwestern University reveals fascinating insights into the origins of two crucial types of stem cells that played a pivotal role in vertebrate evolution.

The study, led by Professor Carole LaBonne, focuses on two types of stem cells: pluripotent blastula cells (also known as embryonic stem cells) and neural crest cells. Both of these cell types have the remarkable ability to develop into any other cell type in the body, a property known as pluripotency. By comparing the genetic makeup of lampreys with that of Xenopus, a jawed aquatic frog, the researchers have uncovered striking similarities in the gene networks that regulate these stem cells across both jawless and jawed vertebrates.

This discovery is particularly intriguing because lampreys represent one of only two living groups of jawless vertebrates, making them invaluable for understanding our evolutionary roots.

“Lampreys may hold the key to understanding where we came from. In evolutionary biology, if you want to understand where a feature came from, you can’t look forward to more complex vertebrates that have been evolving independently for 500 million years,” LaBonne says in a statement. “You need to look backwards to whatever the most primitive version of the type of animal you’re studying is, which leads us back to hagfish and lampreys — the last living examples of jawless vertebrates.”

One of just two vertebrates without a jaw, sea lampreys that are wreaking havoc in Midwestern fisheries are simultaneously helping scientists understand the origins of two important stem cells. (Credit: T. Lawrence, Great Lakes Fishery Commission)

One of the most fascinating aspects of the study, published in Nature Ecology & Evolution, is the revelation about a gene called pou5. This gene, which plays a crucial role in regulating stem cells, is present in both lampreys and jawed vertebrates. However, while it’s expressed in the blastula cells of both groups, the researchers found that lampreys don’t express pou5 in their neural crest cells. This absence may explain why lampreys lack jaws and other skeletal features found in jawed vertebrates.

The implications of this discovery are profound. It suggests that the basic genetic toolkit for creating neural crest cells – the “evolutionary Lego set” as LaBonne calls them – was present in the earliest vertebrates. However, different lineages then modified how they used this toolkit over millions of years of evolution. This finding challenges our understanding of how complex features evolve, suggesting that innovation in nature often comes from repurposing existing genetic programs rather than inventing entirely new ones.

“Another remarkable finding of the study is that even though these animals are separated by 500 million years of evolution, there are stringent constraints on expression levels of genes needed to promote pluripotency. The big unanswered question is, why?” asks LaBonne.

This research not only sheds light on our evolutionary past but also has potential implications for understanding human development and disease. Neural crest cells, with their ability to form diverse cell types, play crucial roles in human embryonic development. Abnormalities in neural crest development can lead to a variety of congenital disorders. By understanding the ancient origins and regulation of these cells, we may gain new insights into these conditions and potential therapeutic approaches.

Sea lamprey making the nest in the river to lay eggs. (Photo by Manuel E. Garci on Shutterstock)

The study also highlights the importance of preserving and studying diverse species, even those we might consider pests. Though the lamprey is often seen as a nuisance in Great Lakes fisheries, it has proven to be a treasure trove of evolutionary information. It serves as a reminder that every species, no matter how seemingly insignificant, may hold clues to our own biological history and future.

From the jawless lamprey to humans with our complex brains and versatile hands, scientific research like this continues to show the common evolutionary heritage written in our genes. The story of vertebrate evolution, it seems, is one of both conservation and innovation, with ancient genetic programs serving as the foundation for the diversity of life we see today.

Source: https://studyfinds.org/lamprey-may-hold-key-to-understanding-human-evolution/?nab=0

65 million people have the same reason for owning guns — they don’t feel safe in America

(Credit: RomanR/Shutterstock)

In recent years, the landscape of gun ownership in the United States has undergone a significant transformation. A new study reveals that protection has become the dominant reason why Americans own firearms, surpassing traditional motivations like hunting or sport shooting. This shift is reshaping not only why people own guns but also who owns them, with more women and racial minorities joining the ranks of gun owners.

The research, conducted in 2023 and published in the journal Injury Prevention, paints a picture of a nation where nearly 80% of gun owners cite protection as their primary reason for possessing a firearm. This translates to an estimated 65 million Americans owning guns for self-defense purposes – a number that has been steadily climbing over the past two decades.

What’s driving this change? The study suggests that a combination of factors, including evolving societal attitudes, changing laws, and a sense of uncertainty about personal safety, may be contributing to this trend.

One of the most striking findings is the changing demographics of gun owners. Traditionally, gun ownership in the U.S. has been associated with White males, often for recreational purposes like hunting. However, the new data shows that women and racial minorities are increasingly likely to own firearms, almost exclusively for protection. For instance, the study found that nearly 99% of Black and Asian women who own guns do so for self-defense.

This shift isn’t just changing who owns guns but also how they’re used. The research indicates that gun owners motivated by protection are more likely to carry their firearms outside the home. This behavior is particularly prevalent in states with “Stand Your Ground” (SYG) laws, which allow individuals to use deadly force in self-defense without first attempting to retreat from a dangerous situation.

“SYG laws specifically affect the legal right to use deadly force for self-defense in public places, and therefore, increased firearm carriage might be a mechanism by which states with SYG laws have contributed to higher rates of firearm violence,” the researchers suggest in a media release.

The study also delves into the psychological factors that might be influencing this trend. Interestingly, it found that a general feeling of distrust in society – not knowing who to rely on – was more closely associated with owning guns for protection than actual experiences of gun violence.

This changing landscape of gun ownership presents new challenges and considerations for policymakers and public health officials. As the motivations and demographics of gun owners evolve, researchers say so too must the approaches to gun safety and violence prevention.

The team from the University of Michigan emphasizes the need for ongoing monitoring of these trends and adaptive policies that ensure safe gun ownership practices across all segments of the population. They argue that understanding these shifts is crucial for developing effective strategies to address gun violence while respecting the rights of gun owners.

Source: https://studyfinds.org/65-million-owning-guns/?nab=0

Butterflies collect pollen without even touching flowers using this special power

(Photo by Unsplash+ in collaboration with Mohamed Nohassi)

Butterflies and moths possess a unique superpower when it comes to pollinating flowers. New research reveals the insects carry an electric charge that may significantly boost their pollination abilities. This discovery not only sheds light on the intricate mechanisms of plant reproduction but also challenges our understanding of how these beloved insects interact with their environment.

When you think of static electricity, you might recall the shock you get from touching a doorknob after shuffling across a carpet. But in the natural world, this same phenomenon could be playing a crucial role in one of nature’s most important processes: pollination.

Researchers from the University of Bristol have found that butterflies and moths accumulate electric charges as they fly, much like how we build up static when we walk across a carpet. This charge, while imperceptible to us, can be strong enough to help pollen grains leap from flowers onto the insects’ bodies without any physical contact.

The study’s lead author, Sam England, explains that this electrostatic attraction works because most insects tend to accumulate positive charges, while pollen grains often carry negative charges. Just as opposite poles of a magnet attract each other, this difference in charge creates a force that can pull pollen towards the insect.
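
To get a feel for why such tiny charges matter, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption rather than a measurement from the study; the point is simply that, under Coulomb’s law, an oppositely charged pollen grain a few millimeters away can feel an electrostatic pull that rivals or exceeds its own weight.

```python
# Rough estimate of the electrostatic pull on a pollen grain near a charged insect.
# All values below are illustrative assumptions, not measurements from the study.

K = 8.99e9               # Coulomb constant, N*m^2/C^2
g = 9.81                 # gravitational acceleration, m/s^2

insect_charge = 30e-12   # assumed net charge on the butterfly (~30 pC)
pollen_charge = -1e-14   # assumed opposite charge on a pollen grain (~-10 fC)
distance = 3e-3          # separation of 3 mm, roughly the "jump" distance described
pollen_mass = 5e-12      # assumed pollen grain mass of ~5 nanograms

# Coulomb's law: F = k * |q1 * q2| / r^2
electrostatic_force = K * abs(insect_charge * pollen_charge) / distance**2
weight = pollen_mass * g

print(f"Electrostatic pull: {electrostatic_force:.2e} N")
print(f"Pollen grain weight: {weight:.2e} N")
print(f"Pull-to-weight ratio: {electrostatic_force / weight:.1f}x")
```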

Butterflies have the amazing ability to attract pollen from flowers without touching them, thanks to static electricity. (Photo by Birger Strahl on Unsplash)

To put this into perspective, imagine a butterfly approaching a flower. Even before it lands, the electric field generated by its charge is strong enough to make pollen grains jump several millimeters through the air – a significant distance for such tiny particles. This means that butterflies and moths can pick up pollen even if they don’t directly touch the part of the flower where it’s produced.

This discovery, published in the Journal of the Royal Society Interface, is particularly exciting because it challenges previous doubts about butterflies’ effectiveness as pollinators. Some researchers had suggested that butterflies might be “nectar thieves,” taking the sweet reward from flowers without providing much pollination in return. However, this new evidence of electrostatic pollination suggests that butterflies might be more helpful to plants than we previously thought.

“We’ve known for a long time that pollinators such as bees and hoverflies have electric charge, but this is the first time we’ve measured the charge on Lepidoptera,” England says in a statement. “These findings suggest that electrostatic pollination may be much more widespread than previously thought, as it is likely that all flying pollinators carry electric charge.”

Intriguingly, the researchers found that different species of butterflies and moths carry different amounts and types of electric charge. Some consistently carry positive charges, while others tend to be negatively charged. These differences seem to be related to factors like the size of the insect, whether it’s active during the day or night, and even the climate where it lives.

Scientists believe that moths, like butterflies, are able to generate static electricity thanks to natural selection. (Photo by Unsplash+ in collaboration with Zdeněk Macháček)

For example, tropical butterflies and moths generally carried lower charges than their temperate counterparts. This could be because the humid air in tropical environments makes it harder for insects to build up and maintain static electricity. Night-flying moths that visit flowers also tended to have negative charges more often than daytime pollinators, which might help them avoid predators that could detect their electric fields.

These variations suggest that the ability to accumulate and control electric charge might be an evolutionary adaptation. Just as some butterflies have evolved bright colors to attract mates or camouflage to avoid predators, their electrical properties might have been shaped by natural selection to improve their survival and reproductive success.

“We think that this could be a possible explanation for why some species of moth, for example, are able to pollinate highly specialized orchids that other insects can’t pollinate,” explains England. “Also, the fact that some species are able to build up substantial electric charge suggests this has some evolutionary benefit to them, but what that is we don’t yet know.”

The implications of this research extend beyond just understanding how pollination works. It opens up new questions about how electric fields might influence other aspects of insect ecology. Could these charges affect how insects find food, avoid predators, or even communicate with each other? The study hints at a hidden world of electrical interactions in nature that we’re only beginning to understand.

Source: https://studyfinds.org/butterflies-moths-pollinate-static-electricity/?nab=0

‘New car smell’ could become a dangerous source of cancer on hot days

(Credit: Benedek Alpar/Shutterstock)

Many people have taken a sniff of that distinctive scent when getting into a brand-new car. It’s a smell that many associate with freshness, luxury, and even excitement. However, that pleasant “new car smell” could be seriously harmful to your health.

A new study published in PNAS Nexus reveals that the source of new car smell — volatile organic compounds (VOCs) — may pose significant health risks to both drivers and passengers. Researchers from the Beijing Institute of Technology and Harvard T.H. Chan School of Public Health analyzed the VOC emissions inside a new car during the hot summer months. Their findings paint a concerning picture of in-cabin air quality and highlight the need for better monitoring and control of these dangerous and possibly cancer-causing chemicals.

VOCs are a group of chemicals that easily become vapors or gases at room temperature. In cars, they’re emitted from various materials like plastics, synthetic fibers, leather, and adhesives. While some VOCs are harmless, others can cause health issues ranging from headaches and eye irritation to more serious conditions like lung disease.

The study found that formaldehyde, a known carcinogen, was the most prevalent VOC in the new car’s cabin. Alarmingly, over one-third of the measurements exceeded China’s air quality standards for vehicle interiors. Other concerning chemicals detected included acetaldehyde and hexaldehyde, both of which were present at levels that could potentially impact health.

However, it’s not just about what’s in the air — it’s also about what influences these emissions. Contrary to popular belief, the study found that the temperature of the car’s interior surfaces, rather than the air temperature itself, was the most significant factor affecting VOC emissions. This finding is particularly relevant for new cars in hot summer weather, explaining why that new car smell can be especially strong on a sunny day.

That pleasant “new car smell” could be seriously harmful to your health. (Credit: DimaBerlin/Shutterstock)

To address the challenge of predicting and monitoring these emissions, the research team developed an innovative deep-learning model. This artificial intelligence-based approach, named LSTM-A-E, showed promising results in accurately forecasting VOC concentrations inside the vehicle. Such a tool could be invaluable for car manufacturers and health authorities in assessing and mitigating risks associated with in-cabin air pollution.
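
The paper’s exact LSTM-A-E architecture is not described here, but the general idea of forecasting the next in-cabin VOC reading from a short window of recent measurements can be sketched with a small, generic LSTM. Everything below, from the choice of input channels to the synthetic training data, is an assumption made for illustration rather than the authors’ implementation.

```python
# Minimal sketch of an LSTM-based forecaster for in-cabin VOC concentrations.
# Inputs and data are hypothetical stand-ins, not the study's LSTM-A-E model.
import torch
import torch.nn as nn

class VOCForecaster(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        # Encode a window of recent readings (e.g. VOC level, surface temp, air temp).
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # predict the next VOC concentration

    def forward(self, x):                  # x: (batch, time, features)
        _, (h, _) = self.encoder(x)        # h: (1, batch, hidden)
        return self.head(h[-1])            # (batch, 1)

model = VOCForecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic placeholder data: 64 windows of 24 time steps, 3 sensor channels each.
x = torch.randn(64, 24, 3)
y = torch.randn(64, 1)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```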

The implications of this study extend beyond just new cars. As we spend more time in our vehicles — an average of 5.5% of our lives — understanding and managing the air quality inside them becomes increasingly important. This research not only sheds light on a hidden health concern but also paves the way for smarter, healthier transportation solutions.

Source: https://studyfinds.org/new-car-smell-cancer/

Cravings for ice cubes may signal a bigger health problem

(Credit: yamasan0708/Shutterstock)

I’m sure almost everyone has chewed on a piece of ice at some point in time. Shaved ice on a summer day is a classic heat wave treat, but crunching on actual ice cubes is a very different thing. A deep, persistent craving for ice, however, can signal an underlying health problem. If this sounds like you, let’s look at the potential causes and options for treatment.

The medical term for compulsively eating ice is pagophagia. Gnawing on the pieces can not only be a sign of a dangerous underlying health condition, but it can also damage your teeth. Here are a few reasons why you might get the urge.

Iron deficiency anemia
There’s ample evidence supporting a link between iron deficiency and cravings for ice. Without enough iron, your body can’t build enough red blood cells, and without enough red blood cells, your tissues don’t get the oxygen they need. Ice craving doesn’t affect everyone with anemia, but it’s common enough that it may give your doctor a clue to order the blood work needed for a proper diagnosis.

Pica
Pica is an eating disorder that involves compulsively eating things with no nutritional value, and ice falls into this category. Pica can involve relatively benign items like ice or far more dangerous ones like paint, hair, or metal.

Since the severity of this condition can vary significantly and can quickly change, getting help in the initial phases is really important. Children and pregnant women are most at risk for developing it. Doctors and scientists don’t know exactly what causes pica, but it’s more commonly seen in people with autism, those experiencing malnutrition, and those under high stress.

The problem with chronically chewing ice
Ice is just frozen water, so chewing it frequently does keep you well-hydrated. That’s the only real benefit, though. In severe cases, the habit causes people to miss out on important nutrition, leading to nutrient deficiencies.

Ice has no calories, protein, fat, vitamins, or minerals. For some people, chewing ice turns into a replacement for meals and snacks, which is when the risk of nutrient deficiencies is at its highest. Additionally, you risk damaging your teeth. Even though ice melts, the act of chewing and crunching on it can lead to heightened cold sensitivity, tooth pain, or even structural damage.

Source: https://studyfinds.org/cravings-ice-cubes-problem/

Prof: We’ve already become too reliant on AI, and it’s ruining ‘real intelligence’

Older man using a smartphone (© Prostock-studio – stock.adobe.com)

Stop Googling, start napping among 9 key habits for preserving brainpower into old age

BOCA RATON, Fla. — In an age where people constantly reach for their smartphones to look up information, a leading Canadian academic is urging the public to exercise their brains instead. Professor Mohamed I. Elmasry, an expert in microchip design and artificial intelligence (AI), believes that 9 simple daily habits like taking afternoon naps and engaging in memory “workouts” can significantly reduce the risk of age-related dementia.

In his new book, “iMind: Artificial and Real Intelligence,” Elmasry argues that we’ve become too reliant on AI at the expense of our natural, or “real” intelligence (RI). He’s calling for a return to nurturing our human minds, which he compares to smartphones but describes as far more powerful and longer-lasting with proper care.

“Your brain-mind is the highest-value asset you have, or will ever have,” Elmasry writes in a media release. “Increase its potential and longevity by caring for it early in life, keeping it and your body healthy so it can continue to develop.”

The inspiration for Elmasry’s book came from personal experience. After losing his brother-in-law to Alzheimer’s and witnessing others close to him, including his mother, suffer from various forms of dementia, he felt compelled to share his insights on brain health.

While Elmasry acknowledges that smart devices are becoming increasingly advanced, he maintains that they pale in comparison to the human brain.

“The useful life expectancy for current smartphones is around 10 years, while a healthy brain-mind inside a healthy human body can live for 100 years or longer,” Elmasry explains.

One of the key issues Elmasry highlights is our growing dependence on technology for basic information recall. He shares an anecdote about his grandchildren needing to use a search engine to name Cuba’s capital despite having just spent a week in the country. This story serves as a stark reminder of how younger generations are increasingly relying on AI smartphone apps instead of exercising their own mental faculties.

“A healthy memory goes hand-in-hand with real intelligence,” Elmasry emphasizes. “Our memory simply can’t reach its full potential without RI.”

In an age where people constantly reach for their smartphones to look up information, a leading Canadian academic is urging the public to exercise their brains instead. (© ikostudio – stock.adobe.com)

So, what can we do to keep our brains sharp and reduce the risk of cognitive decline? Elmasry offers several practical tips, from taking afternoon naps to giving the memory regular “workouts.”

Elmasry’s book goes beyond just offering tips for brain health. It delves into the history of microchip design, machine learning, and AI, explaining how these technologies work in smartphones and other devices. He also explores how human intelligence functions and how brain activity connects to our mind and memory.

Interestingly, Elmasry draws parallels between the human mind and smartphones, comparing our brain’s “hardware,” “software,” and “apps” to those of our digital devices. However, he stresses that the human brain far surpasses current AI in terms of speed, accuracy, storage capacity, and other functions.

The book also touches on broader societal issues related to brain health. Elmasry argues that healthy aging is as crucial as climate change but receives far less attention. He calls for policymakers to implement reforms that promote cognitive well-being, such as transforming bingo halls from sedentary entertainment venues into active learning centers.

Source: https://studyfinds.org/brain-health-googling-napping/

Smell of human stress affects dogs’ emotions leading them to make more pessimistic choices

(Photo by Meruyert Gonullu from Pexels)

Dogs experience emotional contagion from the smell of human stress, leading them to make more ‘pessimistic’ choices, new research finds. The University of Bristol-led study, published July 22 in Scientific Reports, is the first to test how human stress odours affect dogs’ learning and emotional state.

Evidence in humans suggests that the smell of a stressed person subconsciously affects the emotions and choices made by others around them. Bristol Veterinary School researchers wanted to find out whether dogs also experience changes in their learning and emotional state in response to human stress or relaxation odours.

The team used a test of ‘optimism’ or ‘pessimism’ in animals, which is based on findings that ‘optimistic’ or ‘pessimistic’ choices by people indicate positive or negative emotions, respectively.

The researchers recruited 18 dog-owner partnerships to take part in a series of trials with different human smells present. During the trials, dogs were trained that when a food bowl was placed in one location, it contained a treat, but when placed in another location, it was empty. Once a dog learned the difference between these bowl locations, they were faster to approach the location with a treat than the empty location. Researchers then tested how quickly the dog would approach new, ambiguous bowl locations positioned between the original two.

A quick approach reflected ‘optimism’ about food being present in these ambiguous locations – a marker of a positive emotional state – whilst a slow approach indicated ‘pessimism’ and negative emotion. These trials were repeated whilst each dog was exposed to either no odour or the odours of sweat and breath samples from humans in either a stressed (arithmetic test) or relaxed (listening to soundscapes) state.

Researchers discovered that the stress smell made dogs slower to approach the ambiguous bowl location nearest the trained location of the empty bowl, an effect that was not seen with the relaxed smell. These findings suggest that the stress smell may have increased the dogs’ expectations that this new location contained no food, similar to the nearby empty bowl location.

Source: https://studyfinds.org/smell-human-stress-affects-dogs/

Nearly half of all cancer deaths are preventable by making simple life changes

(Photo by Diego Indriago from Pexels)

The best way to end cancer is to stop it from forming in the first place. However, did you know your life choices may be the biggest reason you’re at risk for cancer? In fact, a new study reveals that four in 10 cancer cases are preventable. Published in CA: A Cancer Journal for Clinicians, researchers with the American Cancer Society also report that nearly half of all cancer deaths among U.S. adults 30 years and older are the result of controllable risk factors such as cigarette smoking, physical inactivity, obesity, and excessive drinking.

Out of all of the modifiable lifestyle choices, cigarette smoking was the greatest contributor to cancer.

“Despite considerable declines in smoking prevalence during the past few decades, the number of lung cancer deaths attributable to cigarette smoking in the United States is alarming. This finding underscores the importance of implementing comprehensive tobacco control policies in each state to promote smoking cessation, as well as heightened efforts to increase screening for early detection of lung cancer, when treatment could be more effective,” says Dr. Farhad Islami, the senior scientific director of cancer disparity research at the American Cancer Society, in a media release.

Methodology
Researchers collected data on rates of cancer diagnosis, cancer deaths, and risk factors to estimate the number of cases and deaths caused by modifiable risk factors (excluding non-melanoma skin cancers). The study authors looked at 30 different cancer types.

The risk factors assessed included current or former cigarette smoking, routine exposure to secondhand smoke, excess body weight, heavy alcohol drinking, eating red and processed meat, low consumption of fruits and vegetables, low dietary calcium, physical inactivity, ultraviolet radiation, and viral infections.

Key Results
Cigarette smoking caused a disproportionate number of cancer cases, contributing to 19.3% of new diagnoses. Additionally, cigarette smoking contributed to 56% of all potentially preventable cancers in men and 39.9% of preventable cancers in women.

Obesity was the second most influential modifiable risk factor contributing to the formation of new cancers at 7.6%. This was followed by alcohol consumption, UV radiation exposure, and physical inactivity.

“Interventions to help maintain healthy body weight and diet can also substantially reduce the number of cancer cases and deaths in the country, especially given the increasing incidence of several cancer types associated with excess body weight, particularly in younger individuals,” explains Dr. Islami.

These lifestyle choices increased the risk for certain types of cancer by staggering proportions. Modifiable risk factors contributed to 100% of cervical cancer and Kaposi sarcoma cases. For 19 of the 30 cancers studied, these factors played a part in over 50% of new diagnoses.

Source: https://studyfinds.org/cancer-deaths-preventable/

Scientists close to creating ‘one and done’ universal flu shot

A young child gets his annual flu shot, something scientists believe could soon be a thing of the past.

The annual flu shot could soon be a thing of the past. Scientists are working on a revolutionary formula that would require only one flu shot in your lifetime. Simply put, this would mean no more annual vaccinations and no more worrying about whether this year’s shot will match the flu strains circulating around the world.

Scientists at Oregon Health & Science University (OHSU) have developed a promising approach to creating this universal influenza vaccine — one that could provide lifelong protection against the ever-changing flu virus. Their study, published in Nature Communications, tested a new vaccine platform against H5N1, a bird flu strain considered most likely to cause the next pandemic.

Here’s where it gets interesting: instead of using the current H5N1 virus, researchers vaccinated monkeys against the infamous 1918 flu virus – the same one that caused millions of deaths worldwide over a century ago. Surprisingly, this approach showed remarkable results.

“It’s exciting because in most cases, this kind of basic science research advances the science very gradually; in 20 years, it might become something,” says senior author Jonah Sacha, Ph.D., chief of the Division of Pathobiology at OHSU’s Oregon National Primate Research Center, in a media release. “This could actually become a vaccine in five years or less.”

So, How Does This New Vaccine Work?

Unlike traditional flu shots that target the virus’s outer surface – which constantly changes – this approach focuses on the virus’s internal structure. Think of it like targeting the engine of a car instead of its paint job. The internal parts of the virus don’t change much over time, providing a stable target for our immune system.

The researchers used a clever trick to deliver this vaccine. They inserted small pieces of the target flu virus into a common herpes virus called cytomegalovirus (CMV). Don’t worry – CMV is harmless for most people and often causes no symptoms at all. This modified CMV acts like a Trojan horse, sneaking into our bodies and teaching our immune system’s T cells how to recognize and fight off flu viruses.

To test their theory, the team exposed vaccinated non-human primates to the H5N1 bird flu virus. The results were impressive: six out of 11 vaccinated animals survived exposure to one of the deadliest viruses in the world today. In contrast, all unvaccinated primates succumbed to the disease.

“Should a deadly virus such as H5N1 infect a human and ignite a pandemic, we need to quickly validate and deploy a new vaccine,” says co-corresponding author Douglas Reed, Ph.D., associate professor of immunology at the University of Pittsburgh Center for Vaccine Research.

The study tested a new vaccine platform against H5N1, a bird flu strain considered most likely to cause the next pandemic. (Photo by Felipe Caparros on Shutterstock)

What makes this approach even more exciting is its potential to work against other mutating viruses, including the one that causes COVID-19.

“For viruses of pandemic potential, it’s critical to have something like this. We set out to test influenza, but we don’t know what’s going to come next,” Dr. Sacha believes.

The success of this vaccine lies in its ability to target parts of the virus that remain consistent over time.

“It worked because the interior protein of the virus was so well preserved,” Sacha continues. “So much so, that even after almost 100 years of evolution, the virus can’t change those critically important parts of itself.”

Source: https://studyfinds.org/one-and-done-flu-shot

Single drop of blood could accurately reveal your overall health

(Photo by Love the wind on Shutterstock)

Could the future of healthcare be as simple as going to your doctor for a routine checkup and giving a single drop of blood to screen you for multiple health conditions at once? This futuristic scenario may soon become reality, thanks to groundbreaking research combining infrared spectroscopy with machine learning.

A team of researchers from Germany developed a new method that can detect multiple health conditions from a single drop of blood plasma. Their study, published in Cell Reports Medicine, demonstrates how this technique could revolutionize health screening and early disease detection.

The method, called infrared molecular fingerprinting, works by shining infrared light through a blood plasma sample and measuring how different molecules in the sample absorb the light. This creates a unique “fingerprint” of the sample’s molecular composition. By applying advanced machine learning algorithms to these fingerprints, the researchers were able to detect various health conditions with impressive accuracy.
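
Conceptually, that pipeline treats each plasma fingerprint as a long vector of absorbance values and hands it to a supervised classifier. The toy sketch below, with synthetic spectra, a planted “disease” signal, and a plain logistic regression, is meant only to illustrate the workflow; it is not the researchers’ model or data.

```python
# Toy illustration: classify synthetic "infrared fingerprints" into two groups.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 1000, 500

spectra = rng.normal(size=(n_samples, n_wavenumbers))  # absorbance per wavenumber
labels = rng.integers(0, 2, size=n_samples)            # e.g. condition present / absent
# Plant a weak signal at a few wavenumbers so the toy classifier has something to learn.
spectra[labels == 1, 100:110] += 0.5

X_train, X_test, y_train, y_test = train_test_split(spectra, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```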

Led by Mihaela Žigman of Ludwig Maximilian University of Munich (LMU), the research team also included scientists from the Max Planck Institute of Quantum Optics (MPQ), and Helmholtz Munich.

Scientists say a single drop of blood can accurately screen for various health conditions including diabetes and hypertension. (Photo by KinoMasterskaya on Shutterstock)

What does the test screen for?

The study analyzed over 5,000 blood samples from more than 3,000 individuals, looking for five common health conditions: dyslipidemia (abnormal cholesterol levels), hypertension (high blood pressure), prediabetes, Type 2 diabetes, and overall health status. Remarkably, the technique was able to correctly identify these conditions simultaneously with high accuracy.

One of the most exciting aspects of this research is its potential for early disease detection. The method was able to predict which individuals would develop metabolic syndrome – a cluster of conditions that increase the risk of heart disease, stroke, and diabetes – up to 6.5 years before onset. This could allow for earlier interventions and potentially prevent or delay the development of serious health problems.

The approach offers a cost-effective, efficient way to screen for multiple health conditions with a single blood test. It could potentially transform how we approach preventive healthcare and disease management.

The technique also showed promise in estimating levels of various clinical markers typically measured in standard blood tests, such as cholesterol, glucose, and triglycerides. This suggests that infrared fingerprinting could potentially replace multiple conventional blood tests with a single, more comprehensive analysis.

Perhaps most intriguingly, the method was able to detect subtle differences between healthy individuals and those with early-stage or pre-disease conditions. For example, it could distinguish between people with normal blood sugar levels and those with prediabetes, a condition that often goes undiagnosed but significantly increases the risk of developing type 2 diabetes.

(© bernardbodo – stock.adobe.com)

When will the blood test be available?

The implications of this research are far-reaching. If implemented in clinical practice, this technique could make health screening more accessible and comprehensive. It could enable doctors to catch potential health problems earlier, when they’re often easier to treat or manage. For patients, it could mean fewer blood draws and a more holistic view of their health status from a single test.

The researchers believe this study lays the groundwork for infrared molecular fingerprinting to become a routine part of health screening. As they continue to refine the system and expand its capabilities, they hope to add even more health conditions and their combinations to the diagnostic repertoire. This could lead to personalized health monitoring, where individuals regularly check their health status and catch potential issues long before they become serious.

However, study authors caution that more work is needed before this method can be widely adopted in clinical settings. The current study was conducted on a specific population in southern Germany, and further research is needed to confirm its effectiveness across diverse populations.

Nevertheless, this study represents a significant step forward in the field of medical diagnostics. As we move towards more personalized and preventive healthcare, tools like infrared molecular fingerprinting could play a crucial role in keeping us healthier for longer.

Source: https://studyfinds.org/single-drop-of-blood-test-screens-health

Why are some people happy when they are dying?

(Credit: anatoliy_gleb/Shutterstock)

Simon Boas, who wrote a candid account of living with cancer, passed away on July 15 at the age of 47. In a recent BBC interview, the former aid worker told the reporter: “My pain is under control and I’m terribly happy – it sounds weird to say, but I’m as happy as I’ve ever been in my life.”

It may seem odd that a person could be happy as the end draws near, but in my experience as a clinical psychologist working with people at the end of their lives, it’s not that uncommon.

There is quite a lot of research suggesting that fear of death is at the unconscious center of being human. William James, an American philosopher, called the knowledge that we must die “the worm at the core” of the human condition.

But a study in Psychological Science shows that people nearing death use more positive language to describe their experience than those who just imagine death. This suggests that the experience of dying is more pleasant – or, at least, less unpleasant – than we might picture it.

In the BBC interview, Boas shared some of the insights that helped him come to accept his situation. He mentioned the importance of enjoying life and prioritizing meaningful experiences, suggesting that acknowledging death can enhance our appreciation for life.

Despite the pain and difficulties, Boas seemed cheerful, hoping his attitude would support his wife and parents during the difficult times ahead.

Boas’s words echo the Roman philosopher Seneca who advised that: “To have lived long enough depends neither upon our years nor upon our days, but upon our minds.”

A more recent thinker expressing similar sentiments is the psychiatrist Viktor Frankl who, after surviving Auschwitz, wrote Man’s Search for Meaning (1946), in which he laid the groundwork for a form of existential psychotherapy focused on discovering meaning in any kind of circumstance. Its most recent adaptation is meaning-centered psychotherapy, which offers people with cancer a way to strengthen their sense of meaning.

How happiness and meaning relate
In two recent studies, in Palliative and Supportive Care and the American Journal of Hospice and Palliative Care, people approaching death were asked what constitutes happiness for them. Common themes in both studies were social connections, enjoying simple pleasures such as being in nature, having a positive mindset, and a general shift in focus from seeking pleasure to finding meaning and fulfillment as their illness progressed.

In my work as a clinical psychologist, I sometimes meet people who have – or eventually arrive at – a similar outlook on life as Boas. One person especially comes to mind – let’s call him Johan.

The first time I met Johan, he came to the clinic by himself, with a slight limp. We talked about life, about interests, relationships and meaning. Johan appeared to be lucid, clear and articulate.

The second time, he came with crutches. One foot had begun to lag and he couldn’t trust his balance. He said it was frustrating to lose control of his foot, but still hoped to cycle around Mont Blanc.

When I asked him what his concerns were, he burst into tears. He said: “That I won’t get to celebrate my birthday next month.” We sat quietly for a while and took in the situation. It wasn’t the moment of death itself that weighed on him the most, it was all the things he wouldn’t be able to do again.

Source: https://studyfinds.org/happy-when-dying/

Study finds most lung cancer patients in India have never smoked in their life; so what is the cause?

Public awareness campaigns about lung cancer symptoms and risk factors are essential for early detection and prevention

Discover the alarming rise of lung cancer among non-smokers in India (Source: Pexels)

The landscape of lung cancer in India is undergoing a dramatic shift. Traditionally linked to smoking, the disease is increasingly affecting individuals with no history of tobacco use.

A recent narrative review, published in The Lancet, unveiled a startling finding: a substantial portion of lung cancer patients, particularly women, are non-smokers. The trend is alarming because it reflects a complex interplay of factors driving lung cancer in the country, and it signals the need to re-evaluate risk factors and prevention strategies beyond tobacco control.

Dr Vikas Mittal, pulmonologist at CK Birla Hospital, Delhi, emphasises the role of environmental factors, particularly air pollution. Exposure to particulate matter (PM2.5) is a significant contributor to lung cancer in non-smokers. The prevalence of tuberculosis, another public health challenge in India, can also exacerbate lung damage and increase the risk.

Passive smoking, occupational exposures, and genetic predisposition further contribute to the disease burden, explained Dr Neeraj Goel, Director of Oncology Services at CK Birla Hospital, Delhi. He underscores the importance of early detection through regular health check-ups and awareness about lung cancer symptoms.

The implications of these findings are profound. India’s battle against lung cancer requires a comprehensive approach. Reducing air pollution through stricter regulations, promoting clean energy sources, and improving public transportation are crucial steps. Strengthening tuberculosis control programs and investing in research to understand the genetic factors involved are equally important.

Moreover, public awareness campaigns about lung cancer symptoms and risk factors are essential for early detection and prevention. Encouraging healthy lifestyles, including smoking cessation and avoiding exposure to air pollution, can significantly reduce the risk.

Here are some warning signs of lung cancer you should watch out for, Dr Mittal advised.

What are the warning symptoms and signs of lung cancer?

The warning symptoms and signs of lung cancer include:

* Persistent Cough: A long-standing cough that does not go away.
* Blood in Sputum: Presence of blood in the spit.
* Breathing Difficulty: Trouble breathing or shortness of breath.
* Hoarseness of Voice: Changes in the voice, such as becoming hoarse.
* Chest Pain: Pain in the chest that may worsen with deep breathing, coughing, or laughing.
* Loss of Appetite and Weight Loss: Unexplained loss of appetite and significant weight loss.

Experimental drug extends the lifespan of ‘middle-aged’ mice by 25% – and could work on humans too, scientists say

Mice injected with the antibody anti-IL-11 lived longer and suffered from fewer diseases caused by fibrosis, chronic inflammation and poor metabolism – which are the hallmarks of ageing.

An experimental drug that extends the lifespan of mice by 25% could also work in humans, according to the scientist who ran the trials.

The treatment – an injection of an antibody called anti-IL-11 that was given to the mice when they were ‘middle-aged’ – reduced deaths from cancer.

It also lowered the incidence of diseases caused by fibrosis, chronic inflammation and poor metabolism, which are the hallmarks of ageing.

Professor Stuart Cook, a senior scientist on the study, said: “These findings are very exciting.

“While these findings are only in mice, it raises the tantalising possibility that the drugs could have a similar effect in elderly humans.

“The treated mice had fewer cancers, and were free from the usual signs of ageing and frailty, but we also saw reduced muscle wasting and improvement in muscle strength.

“In other words, the old mice receiving anti-IL-11 were healthier.”

Videos released by the scientists show untreated mice had greying patches on their fur, with hair loss and weight gain.

But those receiving the injection had glossy coats and were more active.

The two female mice – one of which has received the antibody injection. Pic: PA

The researchers, from the Medical Research Council Laboratory of Medical Science (MRC LMS), Imperial College London and Duke-NUS Medical School in Singapore, gave the mice the antibody injection when they were 75 weeks old – equivalent to a human age of 55 years.

The mice went on to live to an average of 155 weeks, 35 weeks longer than mice who were not treated, according to results published in the journal Nature.

The drug appeared to have very few side effects.

“Previously proposed life-extending drugs and treatments have either had poor side-effect profiles, or don’t work in both sexes, or could extend life, but not healthy life – however this does not appear to be the case for IL-11,” Professor Cook said.

The antibody blocked the action of the IL-11 protein, which is thought to play a role in the ageing of cells and body tissues – in humans as well as mice.

Source: https://news.sky.com/story/experimental-drug-extends-the-lifespan-of-middle-aged-mice-by-25-and-could-work-on-humans-too-scientists-say-13179979

Is pooping every day necessary? Timing of bowel movements has surprising links to health

(© nito – stock.adobe.com)

We all do it, but how often should we? A groundbreaking study from the Institute for Systems Biology (ISB) has uncovered fascinating links between how frequently we poop and our long-term health. It turns out that your bathroom habits might be more important than you think!

The research team, led by Johannes Johnson-Martinez, examined over 1,400 healthy adults, analyzing everything from their gut microbes to blood chemistry. Their findings, published in Cell Reports Medicine, shed new light on the complex relationship between our bowel movements and overall well-being.

Interestingly, age, sex, and body mass index (BMI) all affected how often people visited the bathroom. Younger individuals, women, and those with lower BMIs tended to have less frequent bowel movements.

So, why does it matter?
“Prior research has shown how bowel movement frequency can have a big impact on gut ecosystem function. Specifically, if stool sticks around too long in the gut, microbes use up all of the available dietary fiber, which they ferment into beneficial short-chain fatty acids. After that, the ecosystem switches to fermentation of proteins, which produces several toxins that can make their way into the bloodstream,” Johnson-Martinez explains in a media release.

In other words, when you’re constipated, your gut bacteria run out of their preferred food (fiber) and start breaking down proteins instead. This process creates potentially harmful substances that can enter your bloodstream and affect other organs.

The study revealed a “Goldilocks zone” for optimal gut health – pooping 1-2 times per day. In this sweet spot, beneficial fiber-fermenting bacteria thrived. However, those with constipation or diarrhea showed higher levels of less desirable bacteria associated with protein fermentation or upper digestive tract issues.

However, this is not just about gut bugs. The researchers found that bowel movement frequency (BMF) has a link to various blood markers and even potential chronic disease risks. For instance, people with constipation had higher levels of substances like p-cresol-sulfate and indoxyl-sulfate in their blood. These compounds, produced by gut bacteria breaking down proteins, are known to be harmful to kidneys.

“Here, in a generally healthy population, we show that constipation, in particular, is associated with blood levels of microbially derived toxins known to cause organ damage, prior to any disease diagnosis,” says Dr. Sean Gibbons, the study’s corresponding author.

The research also hinted at connections between bowel habits and mental health, suggesting that how often you poop might be related to anxiety and depression.

So, what can you do to hit that bathroom sweet spot? Unsurprisingly, the study found that a fiber-rich diet, staying well-hydrated, and regular exercise were associated with healthier bowel movement patterns.

“Overall, this study shows how bowel movement frequency can influence all body systems, and how aberrant bowel movement frequency may be an important risk factor in the development of chronic diseases. These insights could inform strategies for managing bowel movement frequency, even in healthy populations, to optimize health and wellness,” Dr. Gibbons concludes.

While more research is necessary to fully understand these connections, this study highlights the importance of paying attention to your bathroom habits. They might just be a window into your overall health!

Source: https://studyfinds.org/pooping-every-day-necessary/

Parents ‘skipping meals’ and children ‘going without essentials’ – UNICEF UK calls for urgent help

UNICEF UK calls on the new government to scrap the two-child benefit cap as it warns of severe pressure on parents of young children.

File pic: iStock

Mounting debt and expensive childcare are putting children at risk, UNICEF UK has warned, as the charity claims 87% of parents of children under five worry about their future.

Based on findings from its annual survey, the charity said parents are not getting the support they need – particularly in lower-income households – and called on the government to do more.

One respondent to the survey said educational toys and books are too expensive for them, while they can’t afford days out and can just about buy second-hand clothes.

Joanna Rea, the charity’s director of advocacy, said support for parents must become an “urgent national priority”.

“This is the moment to start making the UK one of the best places to raise a child and reverse the years of underinvestment and austerity which contributed to the UK having the highest increase in child poverty of any rich country,” she said.

“With a quarter of parents borrowing money to pay for the essentials for their children – supporting them must be an urgent national priority for the new government.”

Other findings show:

• 38% dread the holidays because of the financial strain they put on the family;
• 25% have had to borrow money or gone into debt to make ends meet;
• 66% said the cost of living crisis had negatively impacted their family;
• 63% report struggling with their mental health while being a parent;
• 62% said childcare is one of the biggest challenges facing parents.

The charity is calling for the new government to introduce what it calls a “National Baby and Toddler Guarantee” to ensure children under five get the support and services they need.

But as a “matter of urgency”, UNICEF UK recommends ending the two-child limit and removing the benefits cap.

The two-child cap, introduced by the Conservatives in 2017, prevents parents claiming Universal Credit or child tax credits for a third child, except in very limited circumstances.

Source: https://news.sky.com/story/parents-skipping-meals-and-children-going-without-essentials-unicef-uk-calls-for-urgent-help-13178265

Why cats meow at humans more than each other

A cat closes its eyes and meows. (Credit: Amir Ghoorchiani from Pexels)

This is a story that goes back thousands of years.

Originally, cats were solitary creatures. This means they preferred to live and hunt alone rather than in groups. Most of their social behavior was restricted to mother-kitten interactions. Outside of this relationship, cats rarely meow at each other.

However, as cats began to live alongside humans, these vocalizations took on new meanings. In many ways, when a cat meows at us, it’s as if they see us as their caregivers, much like their feline mothers.

Cats probably first encountered humans roughly 10,000 years ago, when people began establishing permanent settlements. These settlements attracted rodents, which in turn drew cats looking for prey. The less fearful and more adaptable cats thrived, benefiting from a consistent food supply. Over time, these cats developed closer bonds with humans.

Unlike dogs, which were bred by humans for specific traits, cats essentially domesticated themselves. Those that could tolerate and communicate with humans had a survival advantage, leading to a population well-suited to living alongside people.

To understand this process, we can look at the Russian farm-fox experiments. Beginning in the 1950s, Soviet scientist Dmitry Belyaev and his team selectively bred silver foxes, mating those that were the least fearful of and least aggressive toward humans.

Over generations, these foxes became more docile and friendly, developing physical traits similar to domesticated dogs, such as floppy ears and curly tails. Their vocalizations changed too, shifting from aggressive “coughs” and “snorts” to more friendly “cackles” and “pants,” reminiscent of human laughter.

These experiments demonstrated that selective breeding for tameness could lead to a range of behavioral and physical changes in animals, achieving in a few decades what would usually take thousands of years. Though the changes are less obvious than those separating dogs from the ancestral wolf, cats have also changed since their days as African wildcats. They now have smaller brains and more varied coat colors, traits common among many domestic species.

Cats’ vocal adaptations
Like the silver foxes, cats have adapted their vocalizations, albeit over a much longer period of time. Human babies are altricial at birth, meaning they are entirely dependent on their parents. This dependency has made us particularly attuned to distress calls – ignoring them would be costly for human survival.

Cats have altered their vocalizations to tap into this sensitivity. A 2009 study by animal behavior researcher Karen McComb and her team gives evidence of this adaptation. Participants in the study listened to two types of purrs. One type was recorded when cats were seeking food (solicitation purr) and another recorded when they were not (non-solicitation purr). Both cat owners and non-cat owners rated the solicitation purrs as more urgent and less pleasant.

An acoustic analysis revealed a high-pitch component in these solicitation purrs, resembling a cry. This hidden cry taps into our innate sensitivity to distress sounds, making it nearly impossible for us to ignore.
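
For readers curious what such an acoustic analysis might look like in practice, the sketch below compares spectral energy in a low “purr” band with energy in a higher, cry-like band for a synthetic signal. The frequencies and band edges are placeholders chosen for illustration, not the values reported in the 2009 study.

```python
# Toy spectral analysis: how much energy sits in a high "cry-like" band of a purr?
import numpy as np

def band_energy(signal, sample_rate, low_hz, high_hz):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mask = (freqs >= low_hz) & (freqs < high_hz)
    return spectrum[mask].sum()

sample_rate = 44100
t = np.arange(0, 2.0, 1.0 / sample_rate)
# Synthetic stand-in for a solicitation purr: a low rumble plus a faint high-pitched component.
purr = np.sin(2 * np.pi * 27 * t) + 0.2 * np.sin(2 * np.pi * 400 * t)

low = band_energy(purr, sample_rate, 20, 100)    # "purr" band
high = band_energy(purr, sample_rate, 200, 600)  # embedded "cry" band
print(f"High-to-low energy ratio: {high / low:.3f}")
```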

Source: https://studyfinds.org/why-cats-meow-at-humans-more/

Why people run from bears: Study explains how the brain takes action upon what we see

Coming upon a bear in the forest will immediately put your brain into fight or flight mode. (© Татьяна Макарова – stock.adobe.com)

From a menacing bear in the forest to a smiling friend at a party, our brains are constantly processing emotional stimuli and guiding our responses. But how exactly does our brain transform what we see into appropriate actions? A new study sheds new light on this complex process, revealing the sophisticated ways our brains encode emotional information to guide behavior.

Led by Prof. Sonia Bishop, now Chair of Psychology at Trinity College Dublin, and Samy Abdel-Ghaffar, a researcher at Google, the study delves into how a specific brain region called the occipital temporal cortex (OTC) plays a crucial role in processing emotional visual information. Its findings are published in Nature Communications.

“It is hugely important for all species to be able to recognize and respond appropriately to emotionally salient stimuli, whether that means not eating rotten food, running from a bear, approaching an attractive person in a bar or comforting a tearful child,” Bishop explains in a statement.

The researchers used advanced brain imaging techniques to analyze how the OTC responds to a wide range of emotional images. They discovered that this brain region doesn’t just categorize what we see – it also encodes information about the emotional content of images in a way that’s particularly well-suited for guiding behavior.

The brain is hard at work when we see emotional stimuli. (© Татьяна Макарова – stock.adobe.com)

One of the study’s key insights is that our brains don’t simply process emotional stimuli in terms of “approach” or “avoid.” Instead, the OTC appears to represent emotional information in a more nuanced way that allows for a diverse range of responses.

“Our research reveals that the occipital temporal cortex is tuned not only to different categories of stimuli, but it also breaks down these categories based on their emotional characteristics in a way that is well suited to guide selection between alternate behaviors,” says Bishop.

For instance, the brain’s response to a large, threatening bear would be different from its response to a weak, diseased animal – even though both might generally fall under the category of “avoid.” Similarly, the brain’s representation of a potential mate would differ from its representation of a cute baby, despite both being positive stimuli.

The study employed a technique called voxel-wise modeling, which allowed the researchers to examine brain activity at a very fine-grained level. “This approach let us explore the intertwined representation of categorical and emotional scene features, and opened the door to novel understanding of how OTC representations predict behavior,” says Abdel-Ghaffar.

By applying machine learning techniques to the brain imaging data, the researchers found that the patterns of activity in the OTC were remarkably good at predicting what kinds of behavioral responses people would associate with each image. Intriguingly, these predictions based on brain activity were more accurate than predictions based solely on the objective features of the images themselves.

This suggests that the OTC is doing more than just passively representing what we see – it’s actively transforming visual information into a format that’s optimized for guiding our actions in emotionally-charged situations.

These findings not only advance our understanding of how the brain processes emotional information but could also have important implications for mental health research. As Prof. Bishop points out, “The paradigm used does not involve a complex task, making this approach suitable in the future, for example, to further understanding of how individuals with a range of neurological and psychiatric conditions differ in processing emotional natural stimuli.”

By unraveling the ways our brains encode emotional information, the study brings us one step closer to understanding how we navigate the complex emotional landscape of our world. From everyday social interactions to life-or-death situations, our brains are constantly working behind the scenes, using sophisticated neural representations to help us respond appropriately to the emotional stimuli we encounter.

Source: https://studyfinds.org/why-people-run-from-bears-study-explains-how-the-brain-takes-action-upon-what-we-see/

Brain-imaging study reveals curiosity as it emerges

(Photo by Paula Corberan on Unsplash)

You look up into the clear blue sky and see something you can’t quite identify. Is it a balloon? A plane? A UFO? You’re curious, right?

A research team based at Columbia’s Zuckerman Institute has for the first time witnessed what is happening in the human brain when feelings of curiosity like this arise. In a study published in the Journal of Neuroscience, the scientists revealed brain areas that appear to assess the degree of uncertainty in visually ambiguous situations, giving rise to subjective feelings of curiosity.

“Curiosity has deep biological origins,” said corresponding author Jacqueline Gottlieb, PhD, a principal investigator at the Zuckerman Institute. The primary evolutionary benefit of curiosity, she added, is to encourage living things to explore their world in ways that help them survive.

“What distinguishes human curiosity is that it drives us to explore much more broadly than other animals, and often just because we want to find things out, not because we are seeking a material reward or survival benefit,” said Dr. Gottlieb, who is also a professor of neuroscience at Columbia’s Vagelos College of Physicians and Surgeons. “This leads to a lot of our creativity.”

Joining Dr. Gottlieb on the research were Michael Cohanpour, PhD, a former graduate student at Columbia (now a data scientist with dsm-firmenich), and Mariam Aly, PhD, also previously at Columbia and now an acting associate professor of psychology at the University of California, Berkeley.

Human brain-scan images show regions toward the back and front that are active for a person who is feeling curious. (Credit: Gottlieb Lab/Columbia’s Zuckerman Institute)

In the study, researchers employed a noninvasive, widely used technology to measure changes in the blood-oxygen levels in the brains of 32 volunteers. Called functional magnetic resonance imaging, or fMRI, the technology enabled the scientists to record how much oxygen different parts of the subjects’ brains consumed as they viewed images. The more oxygen a brain region consumes, the more active it is.

To unveil those brain areas involved in curiosity, the research team presented participants with special images known as texforms. These are images of objects, such as a walrus, frog, tank or hat, that have been distorted to various degrees to make them more or less difficult to recognize.

The researchers asked participants to rate their confidence and curiosity about each texform, and found that the two ratings were inversely related. The more confident subjects were that they knew what the texform depicted, the less curious they were about it. Conversely, the less confident subjects were that they could guess what the texform was, the more curious they were about it.

Three pairs of texforms showing unrecognizable and clear versions of objects. (Credit: Gottlieb Lab/Columbia’s Zuckerman Institute)

Using fMRI, the researchers then viewed what was happening in the brain as the subjects were presented with texforms. The brain-scan data showed high activity in the occipitotemporal cortex (OTC), a region located just above your ears, which has long been known to be involved in vision and in recognizing categories of objects. Based on previous studies, the researchers expected that when they presented participants with clear images, this brain region would show distinct activity patterns for animate and inanimate objects. “You can think of each pattern as a ‘barcode’ identifying the texform category,” Dr. Gottlieb said.

The researchers used these patterns to develop a measure, which they dubbed “OTC uncertainty,” of how uncertain this cortical area was about the category of a distorted texform. They showed that, when subjects were less curious about a texform, their OTC activity corresponded to only one barcode, as if it clearly identified whether the image belonged to the animate or the inanimate category. In contrast, when subjects were more curious, their OTC had characteristics of both barcodes, as if it could not clearly identify the image category.
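
The article does not spell out how this uncertainty score was calculated, so the snippet below is only one plausible, simplified reading of the “barcode” idea: compare a trial’s activity pattern to two category templates and treat a mix of both as high uncertainty. The data and the correlation-plus-entropy measure are assumptions, not the study’s method.

```python
import numpy as np

# Made-up activity patterns: an "animate" template, an "inanimate" template,
# and one trial whose pattern is a blend of both (an ambiguous texform).
rng = np.random.default_rng(0)
animate_template = rng.normal(size=200)
inanimate_template = rng.normal(size=200)
trial_pattern = 0.5 * animate_template + 0.5 * inanimate_template + rng.normal(scale=0.5, size=200)

def correlation(a, b):
    """Pearson correlation between two activity patterns."""
    return np.corrcoef(a, b)[0, 1]

# Similarity of the trial to each category "barcode"
sims = np.array([
    correlation(trial_pattern, animate_template),
    correlation(trial_pattern, inanimate_template),
])

# Convert similarities to pseudo-probabilities and take their entropy:
# one dominant barcode -> low uncertainty; a mix of both -> high uncertainty.
probs = np.exp(sims) / np.exp(sims).sum()
uncertainty = -np.sum(probs * np.log2(probs))  # ranges from 0 to 1 bit
print(f"similarities: {np.round(sims, 2)}, uncertainty: {uncertainty:.2f} bits")
```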

Source: https://studyfinds.org/brain-imaging-study-reveals-curiosity-as-it-emerges/

Saving just 1.2% of Earth’s surface may prevent next mass extinction event

(© herraez – stock.adobe.com)

In a world where the boundaries between human communities and the wilderness blur with each passing year, a groundbreaking study explains how preserving just a tiny fraction of the Earth’s surface could prevent the next mass extinction event. An international team warns that the last unprotected havens of rare and endangered species across the globe occupy relatively minuscule spaces that together make up just over one percent of the planet’s surface.

The study, published in the journal Frontiers in Science, calls these last refuges of biodiversity irreplaceable, and their loss would trigger the sixth major extinction event in Earth’s history.

“Most species on Earth are rare, meaning that species either have very narrow ranges or they occur at very low densities or both,” says Dr. Eric Dinerstein of the NGO Resolve, the lead author of the report, in a media release. “And rarity is very concentrated. In our study, zooming in on this rarity, we found that we need only about 1.2% of the Earth’s surface to head off the sixth great extinction of life on Earth.”

Methodology: Mapping the Conservation Imperatives
The research team combined six key biodiversity datasets to map out 16,825 vital areas, covering approximately 164 million hectares, that are currently unprotected. These areas were identified by overlaying existing protected zones with regions known for their rare and endangered species.

The team then refined this data using fractional land cover analysis, which allowed them to pinpoint the exact regions still harboring significant natural habitats that require immediate conservation efforts.

Results: Revealing a Dire Need for Protection

The results are alarming. A vast majority of these critical areas are located in the tropics, which are hotspots for biodiversity but also highly vulnerable to human interference and climate change.

Despite covering only 1.22% of the Earth’s terrestrial surface, protecting these areas could prevent a disproportionately large number of species extinctions. Furthermore, the study highlights that recent global efforts to expand protected areas have largely overlooked these crucial habitats, with only 7% of what the scientists call “conservation imperatives” being identified and currently safeguarded.

“These sites are home to over 4,700 threatened species in some of the world’s most biodiverse yet threatened ecosystems,” says study co-author Andy Lee of Resolve. “These include not only mammals and birds that rely on large intact habitats, like the tamaraw in the Philippines and the Celebes crested macaque in Sulawesi Indonesia, but also range-restricted amphibians and rare plant species.”

Source: https://studyfinds.org/earths-surface-mass-extinction/

This bad habit may be the main reason people suffer cognitive decline

(Credit: Laurent T/Shutterstock)

Smoking may be the most influential factor in whether older adults go on to develop dementia. That’s the concerning takeaway from a groundbreaking study spanning 14 European countries. Researchers in London have found that when it comes to maintaining cognitive function as we age, the biggest impact may come from a single lifestyle choice: not smoking.

The study, published in Nature Communications, followed over 32,000 adults between 50 and 104 for up to 15 years. While previous research has often lumped various healthy behaviors together, making it difficult to pinpoint which ones truly matter, this study took a different approach. By examining 16 different lifestyle combinations, the researchers were able to isolate the effects of smoking, alcohol consumption, physical activity, and social contact on cognitive decline.

The results were striking. Regardless of other lifestyle factors, non-smokers consistently showed slower rates of cognitive decline compared to smokers. This finding suggests that quitting smoking – or never starting in the first place – could be the most crucial step in preserving brain function as we age.

“Our findings suggest that among the healthy behaviors we examined, not smoking may be among the most important in terms of maintaining cognitive function,” says Dr. Mikaela Bloomberg from University College London in a media release.

“For people who aren’t able to stop smoking, our results suggest that engaging in other healthy behaviors such as regular exercise, moderate alcohol consumption and being socially active may help offset adverse cognitive effects associated with smoking.”

Methodology: Unraveling the Cognitive Puzzle
To understand how the researchers arrived at this conclusion, let’s break down their methodology. The study drew data from two major aging studies: the English Longitudinal Study of Ageing (ELSA) and the Survey of Health, Ageing and Retirement in Europe (SHARE). These studies are treasure troves of information, following thousands of older adults over many years and collecting data on their health, lifestyle, and cognitive function.

The researchers focused on four key lifestyle factors:

  1. Smoking (current smoker or non-smoker)
  2. Alcohol consumption (no-to-moderate or heavy)
  3. Physical activity (weekly moderate-plus-vigorous activity or less)
  4. Social contact (weekly or less than weekly)

By combining these factors, they created 16 distinct lifestyle profiles. For example, one profile might be a non-smoker who drinks moderately, exercises weekly, and has frequent social contact, while another might be a smoker who drinks heavily, doesn’t exercise regularly, and has limited social interaction.
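
The arithmetic behind those 16 profiles is simply two options for each of the four factors, 2 × 2 × 2 × 2 = 16. A short illustrative snippet (not the study’s code) makes the crossing explicit:

```python
from itertools import product

# Two options for each of four factors: 2 x 2 x 2 x 2 = 16 profiles.
factors = {
    "smoking": ["non-smoker", "smoker"],
    "alcohol": ["no-to-moderate", "heavy"],
    "physical activity": ["weekly moderate-plus-vigorous", "less than weekly"],
    "social contact": ["weekly", "less than weekly"],
}

profiles = list(product(*factors.values()))
print(len(profiles))  # 16

# A couple of example profiles, labeled by factor
for combo in profiles[:2]:
    print(dict(zip(factors, combo)))
```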

To measure cognitive function, the researchers used two tests:

  1. A memory test, where participants had to recall a list of words immediately and after a delay
  2. A verbal fluency test, where participants named as many animals as they could in one minute

These tests were repeated at multiple time points over the years, allowing the researchers to track how cognitive function changed over time for each lifestyle profile. To ensure they were capturing the effects of lifestyle rather than early signs of dementia, the researchers excluded anyone who showed signs of cognitive impairment at the start of the study or who was diagnosed with dementia during the follow-up period.

Source: https://studyfinds.org/smoking-cognitive-decline/

The Caesar salad was born 100 years ago in Mexico — on the Fourth of July!

Photo by Chris Tweten from Unsplash

The most seductive culinary myths have murky origins: a revolutionary dish created by accident or out of necessity.

For the Caesar salad, these classic ingredients are spiced up with a family food feud and a spontaneous recipe invention on the Fourth of July, across the border in Mexico, during Prohibition.

Our story is set during the era when America banned the production and sale of alcohol, from 1919 to 1933.

Two brothers, Caesar (Cesare) and Alex (Alessandro) Cardini, moved to the United States from Italy. Caesar opened a restaurant in California in 1919. In the 1920s, he opened another in the Mexican border town of Tijuana, serving food and liquor to Americans looking to circumvent Prohibition.

Tijuana’s Main Street, packed with saloons, became a popular destination for southern Californians looking for drink. It claimed to have the “world’s longest bar” at the Ballena, 215 feet (66 meters) long with ten bartenders and 30 waitresses.

The story of the Caesar salad, allegedly 100 years old, is one of a cross-border, Prohibition-era holiday myth, a brotherly battle for the claim to fame, and celebrity chef endorsements.

Necessity is the mother of invention

On July 4, 1924, so the story goes, Caesar Cardini was hard at work in the kitchen of his restaurant, Caesar’s Place, packed with holiday crowds from across the border looking to celebrate with food and drink.

He was confronted with a chef’s worst nightmare: running out of ingredients in the middle of service.

As supplies for regular menu items dwindled, Caesar decided to improvise with what he had on hand.

He took ingredients in the pantry and cool room and combined the smaller leaves from hearts of cos lettuce with a dressing made from coddled (one-minute boiled) eggs, olive oil, black pepper, lemon juice, a little garlic, and Parmesan cheese.

The novel combination was a huge success with the customers and became a regular menu item: the Caesar salad.

Et tu, Alex?
There is another version of the origin of the famous salad, made by Caesar’s brother, Alex, at his restaurant in Tijuana.

Alex claims Caesar’s “inspiration” was actually a menu item at his place, the “aviator’s salad”, named because he made it as a morning-after pick-me-up for American pilots after a long night drinking.

His version had many of the same ingredients, but used lime juice, not lemon, and was served with large croutons covered with mashed anchovies.

When Caesar’s menu item later became famous, Alex asserted his claim as the true inventor of the salad, now named for his brother.

Enter the celebrity chefs
To add to the intrigue, two celebrity chefs championed the opposing sides of this feud. Julia Child backed Caesar, and Diana Kennedy (not nearly as famous, but known for her authentic Mexican cookbooks) supported Alex’s claim.

By entering the fray, each of these culinary heavyweights added credence to different elements of each story and made the variations more popular in the US.

While Child reached more viewers in print and on television, Kennedy had local influence and was known for promoting regional Mexican cuisine.

While they chose different versions, the influence of major media figures contributed to the evolution of the Caesar salad beyond its origins.

The original had no croutons and no anchovies. As the recipe was codified into an “official” version, garlic was included in the form of an infused olive oil. Newer versions either mashed anchovies directly into the dressing or added Worcestershire sauce, which has anchovies in the mix.

Caesar’s daughter, Rosa, always maintained her father was the original inventor of the salad. She continued to market her father’s trademarked recipe after his death in 1954.

Ultimately, she won the battle for her father’s claim as the creator of the dish, but elements from Alex’s recipe have become popular inclusions that deviate from the purist version, so his influence is present – even if his contribution is less visible.

No forks required – but a bit of a performance
If this weren’t enough, there is also a tasty morsel that got lost along the way.

Caesar salad was originally meant to be eaten as finger food, with your hands, using the baby leaves as scoops for the delicious dressing ingredients.

Source: https://studyfinds.org/caesar-salad-fourth-of-july/

Age of newly-discovered cave painting rewrites human history

Photo-stitched panorama of the rock art panel (with photographs enhanced using DStretch_ac_lds_cb). (Credit: Nature/Griffith University)

The tale of early humans hunting pigs roughly 50,000 years ago may be the first recorded story in human history. Deep in the limestone caves of Indonesia’s Sulawesi island, this remarkable cave painting discovery has pushed back the origins of narrative art by thousands of years.

Researchers in Australia have found that our ancestors were creating complex scenes of human-animal interaction at least 51,200 years ago – making this the oldest known example of visual storytelling in the world. The groundbreaking study focuses on two cave sites in the Maros-Pangkep region of South Sulawesi. Using advanced dating techniques, researchers have determined that a hunting scene at Leang Bulu’ Sipong 4 cave is at least 48,000 years old, while a newly discovered composition at Leang Karampuang cave dates back at least 51,200 years.

“Our findings show that figurative portrayals of anthropomorphic figures and animals have a deeper origin in the history of modern human (Homo sapiens) image-making than recognized to date, as does their representation in composed scenes,” the study authors write in the journal Nature.

The Leang Karampuang artwork depicts at least three human-like figures interacting with a large pig, likely a Sulawesi warty pig. This scene predates the next oldest known narrative art by over 20,000 years, fundamentally altering our understanding of early human brain and artistic development.

a, Photostitched panorama of the rock art panel (with photographs enhanced using DStretch_ac_lds_cb). b, Tracing of the rock art panel showing the results of LA-U-series dating. c, Tracing of the painted scene showing the human-like figures (H1, H2 and H3) interacting with the pig. d, Transect view of the coralloid speleothem, sample LK1, removed from the rock art panel, showing the paint layer and the three regions of interest (ROIs), as well as the associated age calculations. e, LA-MC-ICP-MS imaging of the LK1 232Th/238U isotopic activity ratio. (Credit: Nature/Griffith University)

A New Way to Date Ancient Art

Key to this discovery was the research team’s novel approach to dating cave art. Previous studies relied on analyzing calcium carbonate deposits that form on top of paintings using a method called uranium-series dating. While effective, this technique had limitations when dealing with thin or complex mineral layers.

The researchers developed an innovative laser-based method that allows for much more precise analysis. By using laser ablation to create detailed maps of the calcium carbonate layers, they could pinpoint the oldest deposits directly on top of the pigments.

“This method provides enhanced spatial accuracy, resulting in older minimum ages for previously dated art,” the researchers explain.

The team applied this technique to re-date the hunting scene at Leang Bulu’ Sipong 4, which was previously thought to be around 44,000 years old. The new analysis revealed it to be at least 48,000 years old – 4,000 years older than initially believed.

Pushing Back the Timeline of Human Art
Armed with this refined dating method, the researchers turned their attention to Leang Karampuang cave. There, they discovered and dated a previously unknown composition showing human-like figures apparently interacting with a pig.

Three small human figures are arrayed around a much larger pig painted in red ochre. Two of the human figures appear to be holding objects, possibly spears or ropes, while a third is depicted upside-down with its arms outstretched towards the pig’s head.

Using their laser ablation technique, the team dated calcium carbonate deposits on top of these figures. The results were astounding – the artwork was at least 51,200 years old, making it the earliest known example of narrative art in the world.

“This enigmatic scene may represent a hunting narrative, while the prominent portrayal of therianthropic figures implies that the artwork reflects imaginative storytelling (for example, a myth),” the international team writes.

Rewriting the History of Human Creativity
These findings have profound implications for our understanding of human brain development in ancient times. Previously, the oldest known figurative art was a painting of a Sulawesi warty pig from the same region, dated to 45,500 years ago. The oldest known narrative scene was thought to be the Leang Bulu’ Sipong 4 hunting tableau, originally dated to 44,000 years ago.

The new dates push back the origins of both figurative and narrative art by thousands of years. They suggest that early humans in this region were engaging in complex symbolic thinking and visual storytelling (drawing and painting) far earlier than previously believed.

Source: https://studyfinds.org/cave-painting-rewrites-history/

Here’s why most people are right-handed but actually left-eye dominant

(Credit: Krakenimages.com/Shutterstock)

Whether you’re left, right or ambidextrous, “handedness” is part of our identity. But a lot of people don’t realize that we have other biases too, and they are not unique to humans. My colleagues and I have published a new study that shows aligning our biases in the same way as other people may have social benefits.

Across different cultures, human populations have high levels of right-handedness (around 90%). We also have a strong population bias in how we recognize faces and their emotions.

A significant majority of the population are faster and more accurate at recognizing identities and emotions when they fall within the left visual field compared with the right visual field.

These types of biases develop in our brains in early childhood. The left and right hemispheres of the brain control motor action on the opposite sides of the body. If your left visual field is dominant, that means the right side of your brain is dominant for recognizing faces and emotions.

Until recently, scientists thought behavioral biases were unique to humans. But animal research over the last several decades shows there are behavioral biases across all branches of the vertebrate tree of life.

For example, chicks that peck for food with an eye bias are better at telling grain from pebbles. Also, chicks with an eye bias for monitoring predators are less likely to be eaten than unlateralized chicks. Studies show that animals with biases tend to perform better at survival-related tasks in laboratory experiments, which probably translates to a better survival rate in the wild.

But the chicks with the best advantage are ones that favor one eye to the ground (to find food) and the other eye to the sky (to look out for threats). A benefit of the “divided brain” is that wild animals can forage for food and look out for predators – important multitasking.

So why do animals have behavioral biases?
Research suggests that brain hemisphere biases evolved because they allow the two sides of the brain to concurrently control different behaviors. This division also protects animals from becoming muddled. If both sides of the brain had equal control over critical functions, they might simultaneously direct the body to carry out incompatible responses.

So biases free up some resources or “neural capacity”, making animals more efficient at finding food and keeping safe from predators.

Animal studies suggest it is the presence, not the direction (left or right) of our biases that matters for performance. But that doesn’t explain why so many people are right-handed for motor tasks and left-visual field biased for face processing.

If the direction were random, every person would have a 50-50 chance of being left- or right-biased. Yet, across the animal kingdom, the majority of individuals in a species align in the same direction.

This suggests that aligning biases with others in your group might have a social advantage. For example, animals that align with the population during cooperative behavior (shoaling, flocking) dilute the possibility of being picked off by a predator. The few that turn away from the flock or shoal become clear targets.

People tend to be left or right-eye dominant. (Photo by Gayatri Malhotra)

Although humans are highly lateralized regardless of ethnic or geographic background, there is always a significant minority with the opposite bias, suggesting that this alternative bias has its own merits.

The prevailing theory is that deviating from the population offers animals an advantage during competitive interactions, by creating an element of surprise. It may explain why left-handedness is overrepresented in professional interactive sports like cricket and baseball.

In the first study of its kind, scientists from the universities of Sussex, Oxford, Westminster, London (City, Birkbeck), and Kent put our human behavioral biases to the test. We investigated associations between strength of hand bias and performance as well as direction of biases and social ability. We chose behavior that aligns with animal research.

Over 1,600 people of all ages and ethnicities participated in this investigation.

You don’t always use your preferred hand: some people are mildly, moderately, or strongly handed. So, we measured handedness in our participants using a timed color-matching pegboard task. Not everyone knows whether they have a visual field bias, so we evaluated this for participants using images of faces expressing different emotions (such as anger and surprise) presented on a screen.
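
The article does not give a scoring formula, but a common way to turn left- versus right-hand performance on a task like the pegboard into a single bias score is a laterality index; the sketch below uses hypothetical peg counts purely for illustration.

```python
def laterality_index(right_score, left_score):
    """Return a bias score: +1 = fully right-biased, -1 = fully left-biased, 0 = no bias.
    Scores here could be pegs placed correctly per minute with each hand."""
    return (right_score - left_score) / (right_score + left_score)

# Hypothetical participants (numbers invented for illustration)
print(laterality_index(right_score=22, left_score=18))  # 0.10 -> mild right bias
print(laterality_index(right_score=30, left_score=10))  # 0.50 -> strong right bias
print(laterality_index(right_score=15, left_score=25))  # -0.25 -> moderate left bias
```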

People with mild to moderate strength hand bias (left or right) placed more color-matched pegs correctly than those with a strong or weak bias. These results suggest that, in humans, extremes may limit our performance flexibility, unlike wild animals.

The majority of the participants had a standard bias (right-handedness for motor tasks, left visual field bias for face processing). But not everyone.

Source: https://studyfinds.org/right-handed-but-left-eyed/

Ozempic linked to blindness? New study says sudden vision loss a side-effect of semaglutide

(Photo by myskin on Shutterstock)

A groundbreaking new study has uncovered a potential link between a popular weight loss and diabetes medication and an increased risk of sudden vision loss. The drug in question, semaglutide, sold under brand names like Ozempic and Wegovy, has been hailed as a game-changer in the fight against obesity and Type 2 diabetes. However, this research suggests it may come with an unexpected and serious side-effect.

Semaglutide belongs to a class of drugs called GLP-1 receptor agonists. These medications mimic a hormone that helps regulate blood sugar and appetite. Since its approval by the FDA in 2017 for diabetes and later for weight loss, semaglutide has skyrocketed in popularity. By early 2023, it accounted for the highest number of new prescriptions among similar drugs in the United States.

But as more people turn to semaglutide for its benefits, researchers at Massachusetts Eye and Ear and Harvard Medical School have raised a red flag. Their study, published in JAMA Ophthalmology, suggests that patients taking semaglutide may face a significantly higher risk of developing a condition called nonarteritic anterior ischemic optic neuropathy, or NAION for short.

NAION is a serious eye condition that occurs when blood flow to the optic nerve is suddenly reduced or blocked. This can lead to rapid and often permanent vision loss, typically in one eye. While it’s the second most common cause of optic nerve-related vision loss in adults, it’s still relatively rare, affecting only two to 10 people per 100,000 in the general population.

The study’s findings are striking. Among patients with Type 2 diabetes, those taking semaglutide were over four times more likely to develop NAION compared to those on other diabetes medications. The risk was even higher for overweight or obese patients using semaglutide for weight loss – they were more than seven times more likely to experience NAION than those using other weight loss drugs.

Overweight or obese patients using semaglutide for weight loss were more than 7 times more likely to experience NAION than those using other weight loss drugs. (© Mauricio – stock.adobe.com)

These numbers are certainly attention-grabbing, but what do they mean in real-world terms? To put it in perspective, over a three-year period, about 9% of diabetes patients on semaglutide developed NAION, compared to less than 2% of those on other medications. For overweight or obese patients, the numbers were about 7% for semaglutide users versus less than 1% for those on other drugs.

“The use of these drugs has exploded throughout industrialized countries and they have provided very significant benefits in many ways, but future discussions between a patient and their physician should include NAION as a potential risk,” says study co-author Dr. Joseph Rizzo, the study’s corresponding author and director of the Neuro-Ophthalmology Service at Mass Eye and Ear, in a statement. “It is important to appreciate, however, that the increased risk relates to a disorder that is relatively uncommon.”

The timing of NAION onset is also noteworthy. The study found that the risk was highest in the first year after starting semaglutide, with most cases occurring within the initial 12 months of treatment.

It’s important to note that this study doesn’t prove that semaglutide directly causes NAION. Rather, it highlights the need for increased awareness and careful monitoring among both patients and healthcare providers.

The potential link between semaglutide and NAION is particularly concerning given the drug’s widespread use and growing popularity. As obesity rates continue to climb and Type 2 diabetes remains a major public health concern, medications like semaglutide play a crucial role in managing these conditions. The benefits of these drugs – including improved blood sugar control, significant weight loss, and reduced risk of heart disease – are well-documented and potentially life-changing for many patients.

As research continues, patients currently taking semaglutide should not panic or discontinue their medication without consulting their doctor. Instead, they should be aware of the potential risk and report any sudden changes in vision immediately. Healthcare providers may need to consider more frequent eye exams for patients on these medications, especially in the first year of treatment.

“Our findings should be viewed as being significant but tentative, as future studies are needed to examine these questions in a much larger and more diverse population,” says Rizzo, who is also the Simmons Lessell Professor of Ophthalmology at Harvard Medical School. “This is information we did not have before and it should be included in discussions between patients and their doctors, especially if patients have other known optic nerve problems like glaucoma or if there is preexisting significant visual loss from other causes.”

Source: https://studyfinds.org/ozempic-linked-to-blindness/

Concerning link discovered between heart disease and disappearance of the Y chromosome

Y-Chromosomes. (©YustynaOlha – stock.adobe.com)

The spontaneous loss of the Y chromosome has been a medical mystery among aging men for quite some time. Now, a new study is linking this condition to an even more concerning problem — death from heart disease. Researchers at Boston Medical Center (BMC) and Boston University (BU) Chobanian & Avedisian School of Medicine have found that men who are losing their Y chromosomes are at a much higher risk of dying from heart disease.

Specifically, the study published in Circulation: Heart Failure explored the risk factors for transthyretin cardiac amyloidosis (ATTR-CA), a common cause of heart disease among older men. Transthyretin amyloidosis occurs when a person’s liver produces faulty transthyretin proteins. Clumps of these abnormal proteins build up in the heart’s main pumping chamber, causing the left ventricle to become stiff and weak.

The analysis uncovered a connection to the spontaneous loss of the Y chromosome (LOY). The more blood cells missing their Y chromosomes, the greater the odds were that a person would die from ATTR-CA.

Study authors note that LOY is one of the most common genetic mutations among men. Over half of the male population who make it to age 90 will lose their Y chromosomes from at least some of their blood cells. Previous studies have also linked the disappearance of the Y chromosome to poorer heart health, but these reports did not examine LOY’s link to ATTR-CA.

“Our study suggests that spontaneous LOY in circulating white blood cells contributes both to the development of ATTR-CA in men and influences the severity of disease,” says Dr. Frederick Ruberg, the Chief of Cardiovascular Medicine at BMC and Professor of Medicine at BU Chobanian & Avedisian School of Medicine, in a media release. “Additionally, our study’s findings indicate that elevated LOY may be an important reason why some patients do not respond to the ATTR-CA therapy that is typically effective.”

Methodology & Results
In total, researchers examined 145 men from the United States and Japan with ATTR-CA and another 91 dealing with heart failure due to an issue other than transthyretin cardiac amyloidosis. Results revealed that men who had lost more than 21% of their Y chromosomes were over two and a half times more likely to die of heart disease than men with intact blood cells.

Source: https://studyfinds.org/heart-disease-y-chromosome/

Sixty-million-year-old grape seeds reveal how the death of the dinosaurs may have paved the way for grapes to spread

Fabiany Herrera (left) and Mónica Carvalho (right) at the fossil plant locality, holding the newly-discovered earliest grape from the Western Hemisphere. (Photos courtesy of Fabiany Herrera.)

If you’ve ever snacked on raisins or enjoyed a glass of wine, you may, in part, have the extinction of the dinosaurs to thank for it. In a discovery described in the journal Nature Plants, researchers found fossil grape seeds that range from 60 to 19 million years old in Colombia, Panama, and Peru. One of these species represents the earliest known example of plants from the grape family in the Western Hemisphere. These fossil seeds help show how the grape family spread in the years following the death of the dinosaurs.

“These are the oldest grapes ever found in this part of the world, and they’re a few million years younger than the oldest ones ever found on the other side of the planet,” says Fabiany Herrera, an assistant curator of paleobotany at the Field Museum in Chicago’s Negaunee Integrative Research Center and the lead author of the Nature Plants paper. “This discovery is important because it shows that after the extinction of the dinosaurs, grapes really started to spread across the world.”

It’s rare for soft tissues like fruits to be preserved as fossils, so scientists’ understanding of ancient fruits often comes from the seeds, which are more likely to fossilize. The earliest known grape seed fossils were found in India and are 66 million years old. It’s not a coincidence that grapes appeared in the fossil record 66 million years ago–that’s around when a huge asteroid hit the Earth, triggering a massive extinction that altered the course of life on the planet. “We always think about the animals, the dinosaurs, because they were the biggest things to be affected, but the extinction event had a huge impact on plants too,” says Herrera. “The forest reset itself, in a way that changed the composition of the plants.”

Herrera and his colleagues hypothesize that the disappearance of the dinosaurs might have helped alter the forests. “Large animals, such as dinosaurs, are known to alter their surrounding ecosystems. We think that if there were large dinosaurs roaming through the forest, they were likely knocking down trees, effectively maintaining forests more open than they are today,” says Mónica Carvalho, a co-author of the paper and assistant curator at the University of Michigan’s Museum of Paleontology. But without large dinosaurs to prune them, some tropical forests, including those in South America, became more crowded, with layers of trees forming an understory and a canopy.

Lithouva – the earliest fossil grape from the Western Hemisphere, ~60 million years old from Colombia. Top figure shows fossil accompanied with CT scan reconstruction. Bottom shows artist reconstruction. (Photos by Fabiany Herrera, art by Pollyanna von Knorring.)

These new, dense forests provided an opportunity. “In the fossil record, we start to see more plants that use vines to climb up trees, like grapes, around this time,” says Herrera. The diversification of birds and mammals in the years following the mass extinction may have also aided grapes by spreading their seeds.

In 2013, Herrera’s PhD advisor and senior author of the new paper, Steven Manchester, published a paper describing the oldest known grape seed fossil, from India. While no fossil grapes had ever been found in South America, Herrera suspected that they might be there too.

“Grapes have an extensive fossil record that starts about 50 million years ago, so I wanted to discover one in South America, but it was like looking for a needle in a haystack,” says Herrera. “I’ve been looking for the oldest grape in the Western Hemisphere since I was an undergrad student.”

But in 2022, Herrera and his co-author Mónica Carvalho were conducting fieldwork in the Colombian Andes when a fossil caught Carvalho’s eye. “She looked at me and said, ‘Fabiany, a grape!’ And then I looked at it, I was like, ‘Oh my God.’ It was so exciting,” recalls Herrera. The fossil was in a 60-million-year-old rock, making it not only the first South American grape fossil, but among the world’s oldest grape fossils as well.

The fossil seed itself is tiny, but Herrera and Carvalho were able to identify it based on its particular shape, size, and other morphological features. Back in the lab, they conducted CT scans showing its internal structure that confirmed its identity. The team named the fossil Lithouva susmanii, “Susman’s stone grape,” in honor of Arthur T. Susman, a supporter of South American paleobotany at the Field Museum. “This new species is also important because it supports a South American origin of the group in which the common grape vine Vitis evolved,” says co-author Gregory Stull of the National Museum of Natural History.

Internet addiction: What is it doing to teen brains?

(© olly – stock.adobe.com)

Internet addiction is the problematic, compulsive use of the Internet that results in significant impairments in an individual’s functioning in various aspects of life, including social, work, and academic arenas.

Internet addiction is becoming a worldwide problem. Individual screen time averages have risen to about three hours daily. Many people declare that their internet use is “compulsive.” In fact, more than 30 million of the United Kingdom’s 50 million internet users acknowledge that their compulsive, habitual use of the Internet is adversely affecting their personal lives by disrupting relationships and neglecting responsibilities.

Teens addicted to their internet-connected devices have significant alterations in their brain function, worsening addictive behaviors and impeding normal development. Internet addiction, powered by uncontrollable urges, disrupts their development, psychological well-being, and every aspect of their lives – mental, emotional, social, and physical.

A study by scientists at UCLA identified the extensive changes to young brains, especially those of children aged 10 to 19 years. A ten-year study, which concluded in 2023, collected the findings from 237 adolescents who had been officially diagnosed with Internet addiction.

Teens addicted to their internet-connected devices have significant alterations in their brain function, worsening addictive behaviors and impeding normal development. (© Monkey Business – stock.adobe.com)

Effects on brain function

Using functional magnetic resonance imaging (fMRI), scientists examined different areas of the brain and diverse types of brain function both at rest and while performing tasks. Some parts of the brain showed increased activity, and some parts showed decreased activity. The most significant changes occurred in the connectivity in the part of the brain critical for active thinking and decision-making.

Alterations in brain function show up as addictive behaviors and deterioration in both thought and physical capabilities. The teens’ still immature brains suffered changes that adversely affected intellectual function, physical coordination, mental health, development, and overall well-being.

The brain is in an especially vulnerable stage of development in adolescence, making it more susceptible to internet-associated compulsions such as nonstop mouse clicking and constant consumption of social media. The damage can be profound, with dire consequences. It can manifest as problems in maintaining relationships, lying about online activities, and disturbed eating and sleeping patterns. The sleep disruption interferes with daytime concentration and contributes to chronic fatigue.

Brain function is not the only thing altered in teens with internet addiction. Anxiety, depression, and social isolation are all severe consequences of their irresistible compulsions. Additional significant concerns include cyberbullying and exposure to inappropriate material, resulting in emotional distress and a distorted perception of reality.

Source: https://studyfinds.org/internet-addiction-what-its-doing-to-teen-brains/

Scientists discover what really causes us to procrastinate

(Credit: ntkris/Shutterstock)

Chronic procrastinators are often seen as lazy, but new research suggests there’s more to it than a lack of motivation. A study published in the Proceedings of the Annual Meeting of the Cognitive Science Society examined the cost-benefit calculations the brain makes when deciding to put off tasks, especially in the face of serious consequences or failure. According to researchers in Germany, understanding why people wait until the last minute to finish important tasks could help create more effective productivity strategies.

Procrastination is a complex issue, especially when you consider that most people have been guilty of doing this at least once. Whether it’s filing taxes, meeting a project deadline for work, or simply cleaning out the garage, procrastination causes people to delay tasks despite having the time to do them right away. Given the stress, anxiety, and guilt that can come with procrastination, it’s surprising the human brain continues to support this bad habit.

Procrastination is more than simply waiting until the last minute to complete a task. While its forms might look alike, there are different types of procrastination.

“Procrastination is an umbrella term for different behaviors,” explains Sahiti Chebolu, a computational neuroscientist from the Max Planck Institute for Biological Cybernetics, in a media release. “If we want to understand it, we need to differentiate between its various types.”

A common pattern of procrastination, for example, is not following through on a decision. You might have set aside time to do laundry in the evening, but when the time comes, you decide to watch a movie instead. Usually, something stops a person from committing to the original task, leaving them waiting for the right conditions or motivation to start the work.

In the current study, Chebolu categorized each type of procrastination and narrowed it down to two explanations: misjudging the time needed to complete the task and protecting the ego from prospective failure.

Researchers narrowed down procrastination to two explanations: misjudging the time needed to complete the task and protecting the ego from prospective failure. (Photo by Pedro Forester Da Silva from Unsplash)

The Theory Behind a Distracted Brain

One way to think about procrastination is as a series of temporal decisions: choices made now that carry consequences later. For example, deciding to file taxes on Friday but then choosing to watch a new show on TV when the time comes. Obviously, missing the Tax Day deadline results in penalties and other financial consequences — yet people do it anyway.

According to the authors, the brain weighs all the rewards and penalties of choosing an alternative behavior. However, the brain is biased and prefers immediate gratification over delayed pleasure. The joy of watching television right now is a more appealing option to the brain than the relief of filing taxes three weeks later. It’s too long of a wait for the reward, so the brain prefers the quicker option.

Now, if this were the case all the time, no one would get anything done. That’s why the brain also considers the penalties for making a different decision. However, the study finds the negative outcomes have less weight than the option that gives immediate pleasure. The brain will always try to find the easiest and most immediately pleasurable option.
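
The underlying computation is not reproduced in the article, but the intuition matches standard temporal discounting, in which a delayed reward is worth less the further away it is. The sketch below, using invented values and a hyperbolic discount function, shows how a stronger bias toward the present (a larger k) flips the choice from the taxes to the movie.

```python
def discounted_value(value, delay_days, k):
    """Present value of a reward arriving after `delay_days`, hyperbolically discounted."""
    return value / (1 + k * delay_days)

# Invented numbers: a small pleasure available now vs. a larger relief three weeks away.
for k in (0.1, 0.3):  # k controls how steeply the future is devalued
    movie_now = discounted_value(value=5, delay_days=0, k=k)
    taxes_later = discounted_value(value=20, delay_days=21, k=k)
    choice = "file the taxes" if taxes_later > movie_now else "watch the movie"
    print(f"k={k}: movie={movie_now:.2f}, taxes={taxes_later:.2f} -> {choice}")
```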

Evolutionarily, this makes sense. The distant future is always full of uncertainties, so the emphasis should be on helping yourself in the present moment. Procrastination comes when this mental process becomes maladaptive. Chebolu says people’s decision-making skills become flawed as they put too much emphasis on experiences in the present and not enough on the future.

Source: https://studyfinds.org/what-causes-procrastinate/

Is silver worse than bronze? Here’s why many Olympic athletes shockingly think it is

RIO DE JANEIRO, BRAZIL – AUGUST 12, 2016: Laszlo Cseh HUN (L), Chad le Clos RSA, Michael Phelps USA and Joseph Schooling SGP during the medal ceremony after the Men’s 100m butterfly of the Rio 2016 Olympics (Credit: Leonard Zhukovsky/Shutterstock)

At the 2022 Beijing Olympics, a distraught Alexandra Trusova won silver and promptly declared, “I will never skate again.” Swimmer Michael Phelps displayed a mix of frustration and disappointment at the 2012 London Olympics when he added a silver to his trove of gold medals. At those same games, gymnast McKayla Maroney’s grim expression on the medal stand went viral.

These moments, caught by the camera’s unblinking eye, reveal a surprising pattern: Silver medalists often appear less happy than those winning bronze.

In a 2021 study, which we conducted with our research assistant, Raelyn Rouner, we investigated whether there’s any truth to this phenomenon.

Detecting disappointment
When the athletes of the world convene in Paris this summer for the games of the 33rd Olympiad, many will march in the opening ceremonies, dreaming of gold.

But what happens when they fall just short?

We studied photos of 413 Olympic athletes taken during medal ceremonies between 2000 and 2016. The photos came from the Olympic World Library and Getty Images and included athletes from 67 countries. We also incorporated Sports Illustrated’s Olympic finish predictions because we wanted to see whether athletes’ facial expressions would be affected if they had exceeded expectations or underperformed.

To analyze the photos, we used a form of artificial intelligence that detects facial expressions. By using AI to quantify the activation of facial muscles, we eliminated the need for research assistants to manually code the expressions, reducing the possibility of personal bias. The algorithm identified the shapes and positions of the athletes’ mouths, eyes, eyebrows, nose, and other parts of the face that indicate a smile.

Even though second-place finishers had just performed objectively better than third-place finishers, the AI found that bronze medalists, on average, appeared happier than silver medalists.

Close but no cigar
So why does this happen?

The answer has to do with what psychologists call “counterfactual thinking,” which refers to when people envision what didn’t occur but could have happened.

With this thought process in mind, there are two main explanations for this medal stand phenomenon.

Beijing, China – February 10, 2022: Close-up of the silver medal of the Winter Olympic Games in Beijing (Credit: Andrew Will/Shutterstock)

First, silver medalists and bronze medalists form different points of comparison – what are called category-based counterfactuals.

Silver medalists form an upward comparison, imagining a different outcome – “I almost won gold.” Bronze medalists, on the other hand, form a downward comparison: “At least I won a medal” or “It could have been worse.”

The direction of this comparison shows how happiness can be relative. For silver medalists, almost winning gold is a cause for disappointment, while simply being on the medal stand can gratify a bronze medalist.

We also point to a second reason for this phenomenon: Medalists form something called expectation-based counterfactuals.

Some silver medalists are disappointed because they expected to do better. Maroney’s famous grimace is an example of this. Sports Illustrated predicted she would win the gold medal by a wide margin. In other words, for Maroney, anything other than gold was a big disappointment.

We found evidence consistent with both category-based and expectation-based counterfactual accounts of Olympic medalists’ expressions. Unsurprisingly, our analysis also found that gold medalists are far more likely to smile than the other two medalists, and people who finished better than expected were also more likely to smile, regardless of their medal.

Source: https://studyfinds.org/silver-worse-bronze-olympics/

Memory expert: Triple your recall skills using this simple method

(Credit: Marko Aliaksandr on Shutterstock)

We’re all forgetful from time to time, but for some of us, forgetfulness is a real problem. From little things like items on our grocery list to bigger things like important work meetings or anniversaries, the tendency to forget is not only annoying, but it can be detrimental to our relationships, work, and general ability to function well in a structured, fast-paced society.

There are many ways to combat memory loss and decrease an individual’s risk for conditions such as Alzheimer’s disease and other forms of dementia. Playing cognitively stimulating games and engaging in educational classes or activities have been shown to delay the onset of memory decline.

But what if you could do more than delay memory loss? What if we told you that it is possible to triple your memory with one simple method? Memory expert Dave Farrow, author of the book “Brainhacker,” has developed a method that does just that. Farrow believes that our minds have slowed because individuals are no longer asked to remember things. Phone numbers are programmed into our phones and we are able to “ask” our phones to remind us of important dates or events.

“We have become better at sifting through information and searching for information. Looking through search engines and such, and much worse at remembering information. And the reason is because we have a device that remembers it for us,” Farrow tells StudyFinds. “We don’t need phone numbers. We don’t need to hold it in our heads, things like that. Just a little bit of brain training and actually exercising your brain makes a difference there.”

That’s why he suggests what he calls “The Farrow Memory Method,” which he claims can triple an individual’s ability to remember. Here’s how it works:

The Farrow Memory Method

1. Make A List Of Random Objects

Select six or seven random objects and make a list. Focus on the order of the objects. You will need to repeat the objects in order at the end of the test.

2. Use Visual Association

Make connections between the objects. “Essentially what you would do is you’d get a list of random objects and you use visual association.” Instead of memorizing the entire list at once, focus on two at a time. Farrow continues, “the way I would memorize that is, you want to connect two items together at a time. The mistake people make when they’re trying to memorize a list of items is they try to hold it all in their head, and that’s why you have a limit of six to seven items or so. But what you should do is just focus on two at a time and making a connection.”

Farrow uses an example list to explain: shoe, tree, rubber ball, money, and movie. After he makes his list, he begins connecting the items by visualizing silly pictures or actions. “So the first item was a shoe. I would imagine a shoe connected to a tree. Maybe a tree is growing shoes like some miracle of genetic engineering. I love that. I actually pictured like a tree growing out of a giant shoe and it’s just like sitting on the ground and some art project,” he says.

He connects the tree to a rubber ball by visualizing balls coming out of the tree and hitting kids nearby. The kids discover money inside the rubber balls. He says, “Some of the kids, they pick up the ball, and they open it up and they realize it’s actually money inside. So they’re all excited.  After money, I believe I had a movie, and I just imagined like you go to a movie and just dollar bills are raining from the sky in the movie, like you just won the lottery or something.”

By making unique, visual connections, individuals are more likely to remember the list. Objects are no longer random, but part of a story.

3. Take Some Time

Read a book, watch a movie, or go out with a friend. Walk away from the list for a period of time. Then, come back to the list.

4. Recall The List

Using the visual connections, restate your list. The images should help you tie the seemingly random objects together.

Will you always need the silly pictures to help you remember? Farrow says no.

“With just a few repetitions most of these links will fade, but the information will stay. That is, you won’t remember that there was a tree growing out of a shoe. It’s just, whenever you think of shoe, it’ll remind you of tree,” he explains. “By the third or fourth repetition, the links would fade, and you would just remember the information. That’s really the goal. You don’t want to have to come up with silly pictures all the time just to remember your parent’s phone number. So it’s a means to an end. The picture fades and the information stays.”

Other Memory Tips

Of course, previous studies point to other easy ways we can improve our recall.

In one study, scientists in Australia found that simple mental activities strengthen the brain by improving a person’s cognitive reserve. Activities such as adult literacy courses were found to reduce dementia risk by 11 percent, while playing intelligence-testing games led to a nine-percent reduction. Engaging in painting, drawing, or other artistic hobbies displayed an association with a seven-percent decrease in dementia risk.

And following a healthy lifestyle with a nutritious diet is also beneficial in warding off memory loss. A decade-long study of Chinese adults over the age of 60 shows that the benefits of healthy living even positively impact those with a gene making them susceptible to Alzheimer’s disease. The study followed carriers of the apolipoprotein E (APOE) gene — the strongest known risk factor for Alzheimer’s and other types of dementia. Those with favorable or average lifestyles were nearly 90 percent and almost 30 percent less likely to develop dementia or mild cognitive impairment in comparison to unfavorable lifestyle participants, respectively.

Are today’s teens more content being single? Study reveals surprising trends

Teenagers (Photo by Tim Mossholder on Unsplash)

Maybe romance really is just for adults after all. A new study suggests that teenagers today are not only more likely to be single, but also happier about it compared to previous generations. It’s an interesting shift in attitudes toward romantic relationships among young people, considering rising levels of loneliness across the world today.

The research, conducted by a team of psychologists in Germany and published in the journal Personality and Social Psychology Bulletin, examines how satisfaction with being single has changed over time for different age groups. Their most striking finding was that adolescents born between 2001 and 2003 reported significantly higher satisfaction with singlehood compared to those born just a decade earlier.

This trend appears to be unique to teenagers, as the study found no similar increases in singlehood satisfaction among adults in their 20s and 30s. The results suggest that broader societal changes in how relationships and individual autonomy are viewed may be having a particularly strong impact on the youngest generation.

“Adolescents nowadays may be postponing entering relationships, prioritizing personal autonomy and individual fulfillment over romantic involvement, and embracing singlehood more openly,” the researchers speculate. However, they caution that more investigation is needed to understand the exact reasons behind this shift.

Beyond the generational differences, the study also uncovered several factors that were associated with higher satisfaction among singles across age groups. Younger singles tended to be more content than older ones, and those with lower levels of the personality trait neuroticism also reported greater satisfaction with singlehood.

Interestingly, the research found that singles’ satisfaction tends to decline over time, both with being single specifically and with life in general. This suggests that while attitudes may be changing, there are still challenges associated with long-term singlehood for many people.

“It seems that today’s adolescents are less inclined to pursue a romantic relationship. This could well be the reason for the increased singlehood satisfaction,” said psychologist and lead author Dr. Tita Gonzalez Avilés, of the Institute of Psychology at Johannes Gutenberg University Mainz, in a statement.

Methodology

The study utilized data from a large, nationally representative longitudinal survey in Germany called the Panel Analysis of Intimate Relationships and Family Dynamics (pairfam). This ongoing project has been collecting annual data on romantic relationships and family dynamics since 2008.

The researchers employed a cohort-sequential design, allowing them to compare different birth cohorts at similar ages. They focused on four birth cohorts (1971-1973, 1981-1983, 1991-1993, and 2001-2003) and three age groups: adolescents (14-20 years), emerging adults (24-30 years), and established adults (34-40 years).

For their main analyses, the team included 2,936 participants who remained single throughout the study period. These individuals provided annual data on their satisfaction with singlehood and overall life satisfaction over three consecutive years.

The researchers used sophisticated statistical techniques, including multilevel growth-curve models, to examine how satisfaction changed over time and how it differed between cohorts, age groups, and based on individual characteristics like gender and personality traits.
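
For readers curious what such a model looks like in practice, below is a minimal sketch of a multilevel growth-curve analysis of the kind described, written in Python with statsmodels. The file name and column names (person_id, wave, cohort, satisfaction) are illustrative assumptions rather than the pairfam variable names, and the specification used in the study may differ.

```python
# Hedged sketch of a multilevel growth-curve model, not the authors' code.
# Assumes a long-format dataset: one row per participant per annual wave.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("singles_panel.csv")  # hypothetical file name

# Random intercept and slope per participant; the wave-by-cohort interaction
# tests whether later-born cohorts change differently over time.
model = smf.mixedlm(
    "satisfaction ~ wave * C(cohort)",
    data=df,
    groups=df["person_id"],
    re_formula="~wave",
)
result = model.fit()
print(result.summary())
```

The random slope lets each participant follow their own satisfaction trajectory, while the fixed effects capture average change over time and how that change differs between birth cohorts.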

Results Breakdown

The study’s findings can be broken down into several key areas:

  1. Prevalence of singles: Adolescents born in 2001-2003 were about 3% more likely to be single compared to those born in 1991-1993. This difference was not observed for older age groups.
  2. Satisfaction with singlehood: Later-born adolescents (2001-2003) reported significantly higher satisfaction with being single compared to earlier-born adolescents (1991-1993). This difference was not found among emerging or established adults.
  3. Life satisfaction: There were no significant cohort differences in overall life satisfaction for singles.
  4. Age effects: Across cohorts, adolescent singles reported higher satisfaction (both with singlehood and life in general) compared to adult singles.
  5. Gender differences: Contrary to expectations, single women in established adulthood (34-40 years) reported higher satisfaction with singlehood than single men in the same age group.
  6. Personality effects: Higher levels of neuroticism were associated with lower satisfaction among singles, while the effects of extraversion were less consistent.
  7. Changes over time: On average, satisfaction with singlehood tended to decline over the two-year study period for all age groups.

Limitations

The researchers acknowledge several limitations to their study:

  1. Time frame: The study compared cohorts separated by only 10 years. Longer time periods might reveal more pronounced effects of historical changes.
  2. Period vs. cohort effects: It’s challenging to completely separate the effects of being born in a certain time period from the effects of experiencing certain events (like the COVID-19 pandemic) at a particular age.
  3. Age range: The study focused on individuals up to age 40, so the findings may not generalize to older singles.
  4. Cultural context: The research was conducted in Germany, and the results might differ in countries with more traditional views on marriage and family.
  5. Limited factors: While the study examined several individual characteristics, there are many other factors that could influence singles’ satisfaction that were not included in this analysis.

Discussion and Takeaways

The study’s findings offer several important insights and raise intriguing questions for future research:

Changing norms: The higher prevalence and satisfaction with singlehood among recent cohorts of adolescents suggests that societal norms around romantic relationships may be shifting. This could have implications for future patterns of partnership, marriage, and family formation.

Age-specific effects: The fact that historical changes were only observed among adolescents, not adults, indicates that this age group may be particularly responsive to shifting social norms. This aligns with developmental theories suggesting adolescence is a key period for identity formation and susceptibility to societal influences.

Individual differences matter: While cohort effects were observed, individual factors like age and personality traits emerged as stronger predictors of singles’ satisfaction. This highlights the importance of considering both societal and personal factors in understanding relationship experiences.

Declining satisfaction over time: Researchers say the general trend of decreasing satisfaction with singlehood over time suggests that there may still be challenges associated with long-term singlehood, even as social acceptance increases.

Gender dynamics: The finding that older single women reported higher satisfaction than older single men contradicts some previous assumptions and warrants further investigation into changing gender roles and expectations.

Neuroticism’s impact: The consistent negative relationship between neuroticism and satisfaction among singles points to the importance of emotional stability and coping skills in navigating singlehood.

Adolescent well-being: The higher overall satisfaction reported by adolescent singles compared to adult singles raises questions about the pressures and expectations that may emerge in adulthood regarding romantic relationships.

Source: https://studyfinds.org/are-todays-teens-more-content-being-single-study-reveals-surprising-trends/

Average young adult predicts they’ll be dead by 76!

(© Syda Productions – stock.adobe.com)

Have you ever wondered how long you’ll live? A recent study has revealed some intriguing insights into how different age groups perceive their own mortality. Buckle up, because the results might surprise you! The research surveyed 2,000 adults across the United Kingdom. It turns out that millennials (those in the 35-44 age bracket) believe they’ll reach the ripe old age of 81. Conversely, their younger Gen Z counterparts (the under-24 crowd) are a bit more pessimistic, expecting to only make it to 76. In fact, 1 in 6 Gen Z participants aren’t even sure they’ll be alive in time for retirement!

But here’s the kicker: those over 65 are the most optimistic of all, anticipating they’ll live until 84 – the highest estimate of any age group.

And what about the battle of the sexes? Well, men seem to think they’ll outlast women, predicting an average lifespan of 82 compared to women’s 80. However, the joke might be on them, as women typically have a longer life expectancy than men.

The study, commissioned by UK life insurance brand British Seniors, also found that a whopping 65% of respondents sometimes or often contemplate their own mortality. As a spokesperson from British Seniors put it, “The research has revealed a fascinating look into these predictions and differences between gender, location, and age group. Such conversations are becoming more open than ever – as well as discussion of how you’d like your funeral to look.”

Speaking of funerals, 23% of adults have some or all of their funeral plans in place. A quarter don’t want any fuss for their send-off, while 20% are happy with whatever their friends and family decide on. The report revealed that 21% have discussed their own funeral with someone else, and 35% of those over 65 have explained their preferences to someone.

So, what’s the secret to a long life? According to the respondents, leading an active lifestyle, not smoking, keeping the brain ticking, and having good genetics and family history on their side are all key factors. And when it comes to approaching life, 37% believe in being balanced, 20% want to live it to the fullest, and 16% think slow and steady wins the race.

Source: https://studyfinds.org/young-adult-lifespan-prediction/

Survey says it takes nearly 2 months of exercise before you’ll start to look more fit

(© rangizzz – stock.adobe.com)

The poll of 2,000 adults reveals what goals people prioritize when it comes to their fitness. Above all, they’re aiming to lose a certain amount of weight (43%), increase their general strength (43%) and increase their general mobility (35%).

However, 48 percent are worried about potentially losing the motivation to get fit and 65 percent believe the motivation to increase their level of physical fitness wanes over time.

According to respondents, the motivation to keep going lasts for about four weeks before needing a new push.

The survey, commissioned by Optimum Nutrition and conducted by TalkerResearch, finds that the vast majority of Americans (89%) say their diet affects their level of fitness motivation.

Nearly three in 10 (29%) believe they don’t get enough protein in their diet, lacking it either “all the time” (19%) or often (40%).

Gen X respondents feel like they are lacking protein the most out of all generations (35%), compared to millennials (34%), Gen Z (27%) and baby boomers (21%). Plus, over one in three women (35%) don’t think they get enough protein vs. 23 percent of men.

The average person has two meals per day that don’t include protein, but 61 percent would be more likely to increase their protein intake to help achieve their fitness goals.

As people reflect on health and wellness goals, the most common experiences that make people feel out of shape include running out of breath often (49%) and trying on clothing that no longer fits (46%).

Over a quarter (29%) say they realized they were out of shape after not being able to walk up a flight of stairs without feeling winded.

Source: https://studyfinds.org/survey-says-it-takes-nearly-2-months-of-exercise-before-youll-start-to-look-more-fit/

What it’s like to have aphantasia, the condition that turns off the mind’s eye

Concept of aphantasia, inability to visualize and create mental images. (© Studio Light & Shade – stock.adobe.com)

Close your eyes and try to picture a loved one’s face or your childhood home. For most people, this conjures up a mental image, perhaps fuzzy but still recognizable. But for a small percentage of the population, this simple act of imagination draws a complete blank. No colors, no shapes, no images at all – just darkness. This condition, known as aphantasia, is shedding new light on the nature of human imagination and consciousness.

A recent review published in Trends in Cognitive Sciences explores the fascinating world of aphantasia and its opposite extreme, hyperphantasia – imagery so vivid it rivals actual perception. These conditions, affecting roughly 1% and 3% of the population, respectively, are opening up new avenues for understanding how our brains create and manipulate mental images.

Aphantasia, from the Greek “a” (without) and “phantasia” (imagination), was only recently named in 2015 by Adam Zeman, a professor at the University of Exeter, though the phenomenon was first noted by Sir Francis Galton in the 1880s. People with aphantasia report being unable to voluntarily generate visual images in their mind’s eye. This doesn’t mean they lack imagination altogether – many excel in abstract or spatial thinking – but they can’t “see” things in their mind the way most people can.

On the flip side, those with hyperphantasia experience incredibly vivid mental imagery, sometimes indistinguishable from actual perception. These individuals might be able to recall a scene from a movie in perfect detail or manipulate complex visual scenarios in their minds with ease.

What’s particularly intriguing about these conditions is that they often affect multiple senses. Many people with aphantasia report difficulty imagining sounds, smells, or tactile sensations as well. This suggests that the ability to generate mental imagery might be a fundamental aspect of how our brains process and represent information.

The review, authored by Zeman, delves into the growing body of research on these conditions. Some key findings include the apparent genetic component of aphantasia, as it seems to run in families. People with aphantasia often have reduced autobiographical memory – they can recall facts about their past but struggle to “relive” experiences in their minds. Interestingly, many people with aphantasia still experience visual dreams, suggesting different neural mechanisms for voluntary and involuntary imagery.

There’s a higher prevalence of aphantasia among people in scientific and technical fields, while hyperphantasia is more common in creative professions. Additionally, aphantasia is associated with some difficulties in face recognition and a higher likelihood of having traits associated with autism spectrum disorders.

These findings paint a complex picture of how mental imagery relates to other cognitive processes and even career choices. But perhaps most importantly, they’re challenging our assumptions about what it means to “imagine” something.

Methodology: Peering into the Mind’s Eye
Studying something as subjective as mental imagery poses unique challenges. How do you measure something that exists only in someone’s mind? Zeman reviewed about 50 previous studies to reach his takeaways about the condition.

Researchers across these studies developed several clever approaches to better understand aphantasia. The most common method is simply asking people to rate the vividness of their mental images using self-report questionnaires like the Vividness of Visual Imagery Questionnaire (VVIQ).
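
For a concrete sense of the self-report approach, here is a small, hypothetical scoring sketch in Python. The 16-item, 1-to-5 structure mirrors the standard VVIQ, but the scale direction and the cutoff used to flag possible aphantasia below are assumptions for illustration, not clinical criteria.

```python
# Illustrative VVIQ-style scoring sketch; the item count matches the real VVIQ,
# but the rating direction (1 = no image, 5 = perfectly clear) and the
# flagging threshold are assumptions for this example.
from typing import List

def score_vviq(ratings: List[int]) -> int:
    """Sum 16 vividness ratings, each from 1 to 5."""
    assert len(ratings) == 16 and all(1 <= r <= 5 for r in ratings)
    return sum(ratings)

ratings = [1] * 16            # a respondent reporting no imagery on any item
total = score_vviq(ratings)   # minimum possible score of 16
print(total, "possible aphantasia" if total <= 32 else "typical imagery")
```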

Researchers also use behavioral tasks that typically require mental imagery and compare performance between those with and without aphantasia. For example, participants might be asked to compare the colors of two objects without seeing them. Some studies have looked at physical responses that correlate with mental imagery, such as pupil dilation in response to imagining bright or dark scenes.

Brain imaging techniques, particularly functional MRI, allow researchers to see which brain areas activate during imagery tasks in people with different imagery abilities. Another interesting technique is binocular rivalry, which uses the tendency for mental imagery to bias subsequent perception. It’s been used to objectively measure imagery strength.

These varied approaches help researchers triangulate on the nature of mental imagery and its absence in aphantasia, providing a more comprehensive understanding of these phenomena.

Results: A World Without Pictures
The review synthesizes findings from numerous studies, revealing a complex picture of how aphantasia affects cognition and behavior. While general memory function is largely intact, people with aphantasia often report less vivid and detailed autobiographical memories. They can recall facts about events but struggle to “relive” them mentally.

Contrary to what one might expect, aphantasia doesn’t necessarily impair creativity. Many successful artists and writers have aphantasia, suggesting alternative routes to creative thinking. Some studies suggest that people with aphantasia have a reduced emotional response to written scenarios, possibly because they can’t visualize the described scenes.

Surprisingly, many people with aphantasia report normal visual dreams. This dissociation between voluntary and involuntary imagery is a puzzle for researchers. There’s also a higher prevalence of face recognition difficulties among those with aphantasia, though the connection isn’t fully understood.

While object imagery is impaired, spatial imagery abilities are often preserved in aphantasia. This suggests different neural underpinnings for these two types of mental representation. Neuroimaging studies show differences in connectivity between frontal and visual areas of the brain in people with aphantasia, potentially explaining the difficulty in generating voluntary mental images.

“Despite the profound contrast in subjective experience between aphantasia and hyperphantasia, effects on everyday functioning are subtle – lack of imagery does not imply lack of imagination. Indeed, the consensus among researchers is that neither aphantasia nor hyperphantasia is a disorder. These are variations in human experience with roughly balanced advantages and disadvantages. Further work should help to spell these out in greater detail,” Prof. Zeman says in a media release.

Source: https://studyfinds.org/what-its-like-to-have-aphantasia/

Gold goes 2D: Scientists create ultra-thin ‘goldene’ sheets

Lars Hultman, professor of thin film physics and Shun Kashiwaya, researcher at the Materials Design Division at Linköping University. (Credit: Olov Planthaber)

In a remarkable feat of nanoscale engineering, scientists have created the world’s thinnest gold sheets at just one atom thick. This new material, dubbed “goldene,” could revolutionize fields from electronics to medicine, offering unique properties that bulk gold simply can’t match.

The research team, led by scientists from Linköping University in Sweden, managed to isolate single-atom layers of gold by cleverly manipulating the metal’s atomic structure. Their findings, published in the journal Nature Synthesis, represent a significant breakthrough in the rapidly evolving field of two-dimensional (2D) materials.

Since the discovery of graphene — single-atom-thick sheets of carbon — in 2004, researchers have been racing to create 2D versions of other elements. While 2D materials made from carbon, boron, and even iron have been achieved, gold has proven particularly challenging. Previous attempts resulted in gold sheets several atoms thick or required the gold to be supported by other materials.

The Swedish team’s achievement is particularly noteworthy because they created free-standing sheets of gold just one atom thick. This ultra-thin gold, or goldene, exhibits properties quite different from its three-dimensional counterpart. For instance, the atoms in goldene are packed more tightly together, with about 9% less space between them compared to bulk gold. This compressed structure leads to changes in the material’s electronic properties, which could make it useful for a wide range of applications.

One of the most exciting potential uses for goldene is in catalysis, which is the process of speeding up chemical reactions. Gold nanoparticles are already used as catalysts in various industrial processes, from converting harmful vehicle emissions into less dangerous gases to producing hydrogen fuel. The researchers believe that goldene’s extremely high surface-area-to-volume ratio could make it an even more efficient catalyst.

The creation of goldene also opens up new possibilities in fields like electronics, photonics, and medicine. For example, the material’s unique optical properties could lead to improved solar cells or new types of sensors. In medicine, goldene might be used to create ultra-sensitive diagnostic tools or to deliver drugs more effectively within the body.

How They Did It: Peeling Gold Atom by Atom
The process of creating goldene is almost as fascinating as the material itself. The researchers used a technique that might be described as atomic-scale sculpting, carefully removing unwanted atoms to leave behind a single layer of gold.

They started with a material called Ti3AuC2, which is part of a family of compounds known as MAX phases. These materials have a layered structure, with sheets of titanium carbide (Ti3C2) alternating with layers of gold atoms. The challenge was to remove the titanium carbide layers without disturbing the gold.

To accomplish this, the team used a chemical etching process. They immersed the Ti3AuC2 in a carefully prepared solution containing potassium hydroxide and potassium ferricyanide, known as Murakami’s reagent. This solution selectively attacks the titanium carbide layers, gradually dissolving them away.

However, simply etching away the titanium carbide wasn’t enough. Left to their own devices, the freed gold atoms would quickly clump together, forming 3D nanoparticles instead of 2D sheets. To prevent this, the researchers added surfactants — molecules that help keep the gold atoms spread out in a single layer.

Two key surfactants were used: cetrimonium bromide (CTAB) and cysteine. These molecules attach to the surface of the gold, creating a protective barrier that prevents the atoms from coalescing. The entire process took about a week, with the researchers carefully controlling the concentration of the etching solution and surfactants to achieve the desired result.

For the first time, scientists have managed to create sheets of gold only a single atom layer thick. (Credit: Olov Planthaber)

Results: A New Form of Gold Emerges

The team’s efforts resulted in sheets of gold just one atom thick, confirmed through high-resolution electron microscopy. These goldene sheets showed several interesting properties:

  1. Compressed structure: The gold atoms in goldene are packed about 9% closer together than in bulk gold. This compression changes how the electrons in the material behave, potentially leading to new electronic and optical properties.
  2. Increased binding energy: X-ray photoelectron spectroscopy revealed that the electrons in goldene are more tightly bound to their atoms compared to bulk gold. This shift in binding energy could affect the material’s chemical reactivity.
  3. Rippling and curling: Unlike perfectly flat sheets, the goldene layers showed some rippling and curling, especially at the edges. This behavior is common in 2D materials and can influence their properties.
  4. Stability: Computer simulations suggested that goldene should be stable at room temperature, although the experimental samples showed some tendency to form blobs or clump together over time.

The researchers also found that they could control the thickness of the gold sheets by adjusting their process. Using slightly different conditions, they were able to create two- and three-atom-thick sheets of gold as well.

Limitations and Challenges

  1. Scale: The current process produces relatively small sheets of goldene, typically less than 100 nanometers across. Scaling up production to create larger sheets will be crucial for many potential applications.
  2. Stability: Although computer simulations suggest goldene should be stable, the experimental samples showed some tendency to curl and form blobs, especially at the edges. Finding ways to keep the sheets flat and prevent them from clumping together over time will be important.
  3. Substrate dependence: The goldene sheets were most stable when still partially attached to the original Ti3AuC2 material or when supported on a substrate. Creating large, free-standing sheets of goldene remains a challenge.
  4. Purity: The etching process leaves some residual titanium and carbon atoms mixed in with the gold. While these impurities are minimal, they could affect the material’s properties in some applications.
  5. Reproducibility: The process of creating goldene is quite sensitive to the exact conditions used. Ensuring consistent results across different batches and scaling up production will require further refinement of the technique.

The surprising cure for chronic back pain? Just take a walk

(© glisic_albina – stock.adobe.com)

For anyone who has experienced the debilitating effects of low back pain, the results of an eye-opening new study may be a game-changer. Researchers have found that a simple, accessible program of progressive walking and education can significantly reduce the risk of constant low back pain flare-ups in adults. The implications are profound — no longer does managing this pervasive condition require costly equipment or specialized rehab facilities. Instead, putting on a pair of sneakers and taking a daily stroll could be one of the best preventative therapies available.

Australian researchers, publishing their work in The Lancet, recruited over 700 adults across the country who had recently recovered from an episode of non-specific low back pain, which lasted at least 24 hours and interfered with their daily activities. The participants were divided into two groups: one received an individualized walking and education program guided by a physiotherapist over six months, and the other received no treatment at all during the study.

Participants were then carefully followed for at least one year, up to a maximum of nearly three years for some participants. The researchers meticulously tracked any recurrences of low back pain that were severe enough to limit daily activities.

“Our study has shown that this effective and accessible means of exercise has the potential to be successfully implemented at a much larger scale than other forms of exercise,” says lead author Dr. Natasha Pocovi in a media release. “It not only improved people’s quality of life, but it reduced their need both to seek healthcare support and the amount of time taken off work by approximately half.”

Methodology: A Step-by-Step Approach

So, what did this potentially back-saving intervention involve? It utilized the principles of health coaching, where physiotherapists worked one-on-one with participants to design and progressively increase a customized walking plan based on the individual’s age, fitness level, and objectives.

The process began with a 45-minute consultation to understand each participant’s history, conduct an examination, and prescribe an initial “walking dose,” which was then gradually ramped up. The guiding target was to work up to walking at least 30 minutes per day, five times per week, by the six-month mark.

During this period, participants also participated in lessons to help overcome fears about back pain while learning easy strategies to self-manage any recurrences. They were provided with a pedometer and a walking diary to track their progress. After the first 12 weeks, they could choose whether to keep using those motivational tools. Follow-up sessions with the physiotherapist every few weeks, either in-person or via video calls, were focused on monitoring progress, adjusting walking plans when needed, and providing encouragement to keep participants engaged over the long haul.

Results: Dramatic Improvement & A Manageable Approach
The impact of this straightforward intervention was striking. Compared to the control group, participants in the walking program experienced a significantly lower risk of suffering a recurrence of low back pain that limited daily activities. Overall, the risk fell by 28%.

Even more impressive, the average time before a recurrence struck was nearly double for those in the walking group (208 days) versus the control group (112 days). Recurrences of any kind, as well as recurrences requiring medical care, showed similarly promising reductions in risk. Simply put, people engaging in the walking program stayed pain-free for nearly twice as long as those not treating their lower back pain.
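
As a rough illustration of how such time-to-recurrence results can be summarized, the sketch below simulates two groups whose median recurrence times match the reported 208 and 112 days and fits Kaplan-Meier curves with the lifelines library. The simulated data, group sizes, and censoring window are assumptions; this is not the trial dataset or the authors' analysis.

```python
# Hypothetical Kaplan-Meier sketch, not the trial data or the authors' code.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
n = 350  # assumed participants per arm (the trial enrolled just over 700 in total)

# Exponential recurrence times scaled so the medians match the reported values.
walking = rng.exponential(scale=208 / np.log(2), size=n)
control = rng.exponential(scale=112 / np.log(2), size=n)

follow_up = 3 * 365  # censor at roughly three years of follow-up (assumption)
for label, times in [("walking", walking), ("control", control)]:
    observed = times <= follow_up
    durations = np.minimum(times, follow_up)
    kmf = KaplanMeierFitter()
    kmf.fit(durations, event_observed=observed, label=label)
    print(label, "median days to recurrence:", round(kmf.median_survival_time_))
```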

Source: https://studyfinds.org/back-pain-just-take-a-walk/

Intermittent fasting may supercharge ‘natural killer’ cells to destroy cancer

Could skipping a few meals each week help you fight cancer? It might sound far-fetched, but new research suggests that one type of intermittent fasting could actually boost your body’s natural ability to defeat cancer.

(Credit: MIA Studio/Shutterstock)

A team of scientists at Memorial Sloan Kettering Cancer Center (MSK) has uncovered an intriguing link between fasting and the body’s immune system. Their study, published in the journal Immunity, focuses on a particular type of immune cell called natural killer (NK) cells. These cells are like the special forces of your immune system, capable of taking out cancer cells and virus-infected cells without needing prior exposure.

So, what’s the big deal about these NK cells? Well, they’re pretty important when it comes to battling cancerous tumors. Generally speaking, the more NK cells you have in a tumor, the better your chances of beating the disease. However, there’s a catch: the environment inside and around tumors is incredibly harsh. It’s like a battlefield where resources are scarce, and many immune cells struggle to survive.

This is where fasting enters the picture. The researchers found that periods of fasting actually “reprogrammed” these NK cells, making them better equipped to survive in the dangerous tumor environment and more effective at fighting cancer.

“Tumors are very hungry,” says immunologist Joseph Sun, PhD, the study’s senior author, in a media release. “They take up essential nutrients, creating a hostile environment often rich in lipids that are detrimental to most immune cells. What we show here is that fasting reprograms these natural killer cells to better survive in this suppressive environment.”

Illustration of a group of cancer cells. Researchers found that periods of fasting actually “reprogrammed” these NK cells, making them better equipped to survive in the dangerous tumor environment. (© fotoyou – stock.adobe.com)

How exactly does intermittent fasting achieve this?
The study, which was conducted on mice, involved denying the animals food for 24 hours twice a week, with normal eating in between. This intermittent fasting approach had some pretty remarkable effects on the NK cells.

First off, fasting caused the mice’s glucose levels to drop and their levels of free fatty acids to rise. Free fatty acids are a type of lipid (fat) that can be used as an alternative energy source when other nutrients are scarce. The NK cells learned to use these fatty acids as fuel instead of glucose, which is typically their primary energy source.

“During each of these fasting cycles, NK cells learned to use these fatty acids as an alternative fuel source to glucose,” says Dr. Rebecca Delconte, the lead author of the study. “This really optimizes their anti-cancer response because the tumor microenvironment contains a high concentration of lipids, and now they’re able to enter the tumor and survive better because of this metabolic training.”

The fasting also caused the NK cells to move around the body in interesting ways. Many of them traveled to the bone marrow, where they were exposed to high levels of a protein called Interleukin-12. This exposure primed the NK cells to produce more of another protein called Interferon-gamma, which plays a crucial role in fighting tumors. Meanwhile, NK cells in the spleen were undergoing their own transformation, becoming even better at using lipids as fuel. The result? NK cells were pre-primed to produce more cancer-fighting substances and were better equipped to survive in the harsh tumor environment.

 

Source: https://studyfinds.org/intermittent-fasting-fight-cancer/

There are 6 different types of depression, brain pattern study shows

(Image by Feng Yu on Shutterstock)

Depression and anxiety disorders are among the most common mental health issues worldwide, yet current treatments often fail to provide relief for many sufferers. A major challenge has been the heterogeneity of these conditions. Patients with the same diagnosis can have vastly different symptoms and underlying brain dysfunctions. Now, a team of researchers at Stanford University has developed a novel approach to parse this heterogeneity, identifying six distinct “biotypes” of depression and anxiety based on specific patterns of brain circuit dysfunction.

The study, published in Nature Medicine, analyzed brain scans from over 800 patients with depression and anxiety disorders. By applying advanced computational techniques to these scans, the researchers were able to quantify the function of key brain circuits involved in cognitive and emotional processing at the individual level. This allowed them to group patients into biotypes defined by shared patterns of circuit dysfunction, rather than relying solely on symptoms.
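
As a rough sketch of how patients might be grouped into biotypes from per-person circuit measures, the snippet below standardizes hypothetical circuit-dysfunction scores and clusters them into six groups. The input file and the choice of k-means are assumptions made for illustration; the Stanford team's actual pipeline may well differ.

```python
# Hedged sketch of biotype grouping, not the authors' pipeline.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Rows = patients, columns = circuit-dysfunction measures (hypothetical file).
circuit_scores = np.load("circuit_scores.npy")

z = StandardScaler().fit_transform(circuit_scores)
biotypes = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(z)
print(np.bincount(biotypes))  # number of patients assigned to each biotype
```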

Intriguingly, the six biotypes showed marked differences not just in their brain function, but also in their clinical profiles. Patients in each biotype exhibited distinct constellations of symptoms, cognitive impairments, and critically, responses to different treatments. For example, one biotype characterized by hyperconnectivity in circuits involved in self-referential thought and salience processing responded particularly well to behavioral therapy. Another, with heightened activity in circuits processing sadness and reward, was distinguished by prominent anhedonia (inability to feel pleasure).

These findings represent a significant step towards a more personalized, brain-based approach to diagnosing and treating depression and anxiety. By moving beyond one-size-fits-all categories to identify subgroups with shared neural mechanisms, this work opens the door to matching patients with the therapies most likely to help them based on the specific way their brain is wired. It suggests that brain circuit dysfunction may be a more meaningful way to stratify patients than symptoms alone. In the future, brain scans could be used to match individual patients with the treatments most likely to work for them, based on their specific neural profile.

More broadly, this study highlights the power of a transdiagnostic, dimensional approach to understanding mental illness. By focusing on neural circuits that cut across traditional diagnostic boundaries, we may be able to develop a more precise, mechanistic framework for classifying these conditions.

“To our knowledge, this is the first time we’ve been able to demonstrate that depression can be explained by different disruptions to the functioning of the brain,” says the study’s senior author, Dr. Leanne Williams, a professor of psychiatry and behavioral sciences, and the director of Stanford Medicine’s Center for Precision Mental Health and Wellness. “In essence, it’s a demonstration of a personalized medicine approach for mental health based on objective measures of brain function.”

The 6 Biotypes Of Depression

  1. The Overwhelmed Ruminator: This biotype has overactive brain circuits involved in self-reflection, detecting important information, and controlling attention. People in this group tend to have slowed-down emotional reactions and attention, but respond well to talk therapy.
  2. The Distracted Impulsive: This biotype has underactive brain circuits that control attention. They tend to have trouble concentrating and controlling impulses, and don’t respond as well to talk therapy.
  3. The Sensitive Worrier: This biotype has overactive brain circuits that process sadness and reward. They tend to have trouble experiencing pleasure and positive emotions.
  4. The Overcontrolled Perfectionist: This biotype has overactive brain circuits involved in regulating behavior and thoughts. They tend to have excessive negative emotions and threat sensitivity, trouble with working memory, but respond well to certain antidepressant medications.
  5. The Disconnected Avoider: This biotype has reduced connectivity in emotion circuits when viewing threatening faces, and reduced activity in behavior control circuits. They tend to have less rumination and faster reaction times to sad faces.
  6. The Balanced Coper: This biotype doesn’t show any major overactivity or underactivity in the brain circuits studied compared to healthy people. Their symptoms are likely due to other factors not captured by this analysis.

Of course, much work remains to translate these findings into clinical practice. The biotypes need to be replicated in independent samples and their stability over time needs to be established. We need to develop more efficient and scalable ways to assess circuit function that could be deployed in routine care. And ultimately, we will need prospective clinical trials that assign patients to treatments based on their biotype.

Nevertheless, this study represents a crucial proof of concept. It brings us one step closer to a future where psychiatric diagnosis is based not just on symptoms, but on an integrated understanding of brain, behavior, and response to interventions. As we continue to map the neural roots of mental illness, studies like this light the way towards more personalized and effective care for the millions of individuals struggling with these conditions.

“To really move the field toward precision psychiatry, we need to identify treatments most likely to be effective for patients and get them on that treatment as soon as possible,” says Dr. Jun Ma, the Beth and George Vitoux Professor of Medicine at the University of Illinois Chicago. “Having information on their brain function, in particular the validated signatures we evaluated in this study, would help inform more precise treatment and prescriptions for individuals.”

Source: https://studyfinds.org/there-are-6-different-types-of-depression-brain-pattern-study-shows/

Super dads, super kids: Science uncovers how the magic of fatherly care boosts child development

(Photo by Ketut Subiyanto from Pexels)

The crucial early years of a child’s life lay the foundation for their lifelong growth and happiness. Spending quality time with parents during these formative stages can lead to substantial positive changes in children. With that in mind, researchers have found an important link between a father’s involvement and their child’s successful development, both mentally and physically. Simply put, being a “super dad” results in raising super kids.

However, in Japan, where this study took place, a historical gender-based division of labor has limited fathers’ participation in childcare-related activities, impacting the development of children. Traditionally, Japanese fathers, especially those in their 20s to 40s, have been expected to prioritize work commitments over family responsibilities.

This cultural norm has resulted in limited paternal engagement in childcare, regardless of individual inclinations. The increasing number of mothers entering full-time employment further exacerbates the issue, leaving a void in familial support for childcare. With the central government advocating for paternal involvement in response to low fertility rates, Japanese fathers are now urged to become co-caregivers, shifting away from their traditional role as primary breadwinners.

While recent trends have found a rise in paternal childcare involvement, the true impact of this active participation on a child’s developmental outcomes has remained largely unexplored. This groundbreaking study published in Pediatric Research, utilizing data from the largest birth cohort in Japan, set out to uncover the link between paternal engagement and infant developmental milestones. Led by Dr. Tsuguhiko Kato from the National Center for Child Health and Development and Doshisha University Center for Baby Science, the study delved into this critical aspect of modern parenting.

“In developed countries, the time fathers spend on childcare has increased steadily in recent decades. However, studies on the relationship between paternal care and child outcomes remain scarce. In this study, we examined the association between paternal involvement in childcare and children’s developmental outcomes,” explains Dr. Kato in a media release.

Leveraging data from the Japan Environment and Children’s Study, the research team assessed developmental milestones in 28,050 Japanese children. These children received paternal childcare at six months of age and were evaluated for various developmental markers at three years. Additionally, the study explored whether maternal parenting stress mediates these outcomes at 18 months.

“The prevalence of employed mothers has been on the rise in Japan. As a result, Japan is witnessing a paradigm shift in its parenting culture. Fathers are increasingly getting involved in childcare-related parental activities,” Dr. Kato says.

The study measured paternal childcare involvement through seven key questions, gauging tasks like feeding, diaper changes, bathing, playtime, outdoor activities, and dressing. Each father’s level of engagement was scored accordingly. The research findings were then correlated with the extent of developmental delay in infants, as evaluated using the Ages and Stages questionnaire.

Source: https://studyfinds.org/super-dads-super-kids/

Women are losing their X chromosomes — What’s causing it?

(Credit: ustas7777777/Shutterstock)

A groundbreaking new study has uncovered genetic factors that may help explain why some women experience a phenomenon called mosaic loss of the X chromosome (mLOX) as they age. With mLOX, some of a woman’s blood cells randomly lose one of their two X chromosomes over time. Concerningly, scientists believe this genetic oddity may lead to the development of several diseases, including cancer.

Researchers with the National Institutes of Health found that certain inherited gene variants make some women more susceptible to developing mLOX in the first place. Other genetic variations they identified seem to give a selective growth advantage to the blood cells that retain one X chromosome over the other after mLOX occurs.

Importantly, the study published in the journal Nature confirmed that women with mLOX have an elevated risk of developing blood cancers like leukemia and increased susceptibility to infections like pneumonia. This underscores the potential health implications of this chromosomal abnormality.

As some women age, their white blood cells can lose a copy of chromosome X. A new study sheds light on the potential causes and consequences of this phenomenon. (Credit: Created by Linda Wang with Biorender.com)

Paper Summary

Methodology

To uncover the genetic underpinnings of mLOX, the researchers conducted a massive analysis of nearly 900,000 women’s blood samples from eight different biobanks around the world. About 12% of these women showed signs of mLOX in their blood cells.

Results

By comparing the DNA of women with and without mLOX, the team pinpointed 56 common gene variants associated with developing the condition. Many of these genes are known to influence processes like abnormal cell division and cancer susceptibility. The researchers also found that rare mutations in a gene called FBXO10 could double a woman’s risk of mLOX. This gene likely plays an important role in the cellular processes that lead to randomly losing an X chromosome.
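
For illustration, a single-variant association test of the kind run genome-wide might look like the hedged Python sketch below, in which the number of copies of a tested allele is used to predict mLOX status with logistic regression. The input arrays are hypothetical, and this is not the study's actual pipeline.

```python
# Hedged single-variant association sketch, not the study's code.
import numpy as np
import statsmodels.api as sm

genotype = np.load("genotype.npy")    # hypothetical: 0, 1, or 2 copies of the allele per woman
mlox = np.load("mlox_status.npy")     # hypothetical: 1 if mLOX detected, else 0

X = sm.add_constant(genotype.astype(float))
fit = sm.Logit(mlox, X).fit(disp=False)
print("per-allele odds ratio:", np.exp(fit.params[1]), "p-value:", fit.pvalues[1])
```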

Source: https://studyfinds.org/women-losing-x-chromosomes/

Facially expressive people are more well-liked, socially successful

(Photo by airdone on Shutterstock)

Are you an open book, your face broadcasting every passing emotion, or more of a stoic poker face? Scientists at Nottingham Trent University say that wearing your heart on your sleeve (or rather, your face) could actually give you a significant social advantage. Their research shows that people who are more facially expressive are more well-liked by others, considered more agreeable and extraverted, and even fare better in negotiations if they have an amenable personality.

The study, led by Eithne Kavanagh, a research fellow at NTU’s School of Social Sciences, is the first large-scale systematic exploration of individual differences in facial expressivity in real-world social interactions. Across two studies involving over 1,300 participants, Kavanagh and her team found striking variations in how much people moved their faces during conversations. Importantly, this expressivity emerged as a stable individual trait. People displayed similar levels of facial expressiveness across different contexts, with different social partners, and even over time periods up to four months.

Connecting facial expressions with social success
So what drives these differences in facial communication styles and why do they matter? The researchers say that facial expressivity is linked to personality, with more agreeable, extraverted and neurotic individuals displaying more animated faces. But facial expressiveness also translated into concrete social benefits above and beyond the effects of personality.

In a negotiation task, more expressive individuals were more likely to secure a larger slice of a reward, but only if they were also agreeable. The researchers suggest that for agreeable folks, dynamic facial expressions may serve as a tool for building rapport and smoothing over conflicts.

Across the board, the results point to facial expressivity serving an “affiliative function,” or a social glue that fosters liking, cooperation and smoother interactions. Third-party observers and actual conversation partners consistently rated more expressive people as more likable.

Expressivity was also linked to being seen as more “readable,” suggesting that an animated face makes one’s intentions and mental states easier for others to decipher. Beyond frequency of facial movements, people who deployed facial expressions more strategically to suit social goals, such as looking friendly in a greeting, were also more well-liked.

“This is the first large scale study to examine facial expression in real-world interactions,” Kavanagh says in a media release. “Our evidence shows that facial expressivity is related to positive social outcomes. It suggests that more expressive people are more successful at attracting social partners and in building relationships. It also could be important in conflict resolution.”

Taking our faces at face value
The study, published in Scientific Reports, represents a major step forward in understanding the dynamics and social significance of facial expressions in everyday life. Moving beyond the traditional focus on static, stylized emotional expressions, it highlights facial expressivity as a consequential and stable individual difference.

The findings challenge the “poker face” intuition that a still, stoic demeanor is always most advantageous. Instead, they suggest that for most people, allowing one’s face to mirror inner states and intentions can invite warmer reactions and reap social rewards. The authors propose that human facial expressions evolved largely for affiliative functions, greasing the wheels of social cohesion and cooperation.

The results also underscore the importance of studying facial behavior situated in real-world interactions to unveil its true colors and consequences. Emergent technologies like automated facial coding now make it feasible to track the face’s mercurial movements in the wild, opening up new horizons for unpacking how this ancient communication channel shapes human social life.

Far from mere emotional readouts, our facial expressions appear to be powerful tools in the quest for interpersonal connection and social success. As the researchers conclude, “Being facially expressive is socially advantageous.” So the next time you catch yourself furrowing your brow or flashing a smile, know that your face just might be working overtime on your behalf to help you get ahead.

 

Source: https://studyfinds.org/facially-expressive-people-well-liked-socially-successful/

Can indie games inspire a creative boom from Indian developers?

Visai Games’ Venba won a Bafta Games Award this year

India might not be the first country that springs to mind when someone mentions video games, but it’s one of the fastest-growing markets in the world.
Analysts believe there could be more than half a billion players there by the end of this year.
Most of them are playing on mobile phones and tablets, and fans will tell you the industry is mostly known for fantasy sports games that let you assemble imaginary teams based on real players.
Despite concerns over gambling and possible addiction, they’re big business.
The country’s three largest video game startups – Game 24X7, Dream11 and Mobile Premier League – all provide some kind of fantasy sport experience and are valued at over $1bn.
But there’s hope that a crop of story-driven games making a splash worldwide could inspire a new wave of creativity and investment.
During the recent Summer Game Fest (SGF) – an annual showcase of new and upcoming titles held in Los Angeles and watched by millions – audiences saw previews of a number of story-rich titles from South Asian teams.

Detective Dotson will also have a companion TV series produced

One of those was Detective Dotson by Masala Games, based in Gujarat, about a failed Bollywood actor turned detective.
Industry veteran Shalin Shodhan is behind the game and tells BBC Asian Network this focus on unique stories is “bucking the trend” in India’s games industry.
He wants video games to become an “interactive cultural export” but says he’s found creating new intellectual property difficult.
“There really isn’t anything in the marketplace to make stories about India,” he says, despite the strength of some of the country’s other cultural industries.
“If you think about how much intellectual property there is in film in India, it is really surprising to think nothing indigenous exists as an original entertainment property in games,” he says.
“It’s almost like the Indian audience accepted that we’re just going to play games from outside.”
Another game shown during SGF was The Palace on the Hill – a “slice-of-life” farming sim set in rural India.
Mala Sen, from developer Niku Games, says games like this and Detective Dotson are what “India needed”.
“We know that there are a lot of people in India who want games where characters and setting are relatable to them,” she says.

Games developed by South Asian teams based in western countries have been finding critical praise and commercial success in recent years.

Venba, a cooking sim that told the story of a migrant family reconnecting with their heritage through food, became the first game of its kind to take home a Bafta Game Award this year.

Canada-based Visai Games, which developed the title, was revealed during SGF as one of the first beneficiaries of a new fund set up by Among Us developer Innersloth to boost fellow indie developers.

That will go towards their new, unnamed project based on ancient Tamil legends.

Another title awarded funding by the scheme was Project Dosa, from developer Outerloop, that sees players pilot giant robots, cook Indian food and fight lawyers.

Its previous game, Thirsty Suitors, was also highly praised and nominated for a Bafta award this year.

Games such as these resonating with players worldwide helps shift perceptions across the wider industry, says Mumbai-based Indrani Ganguly, of Duronto Games.

“Finally, people are starting to see we’re not just a place for outsource work,” she says.

“We’re moving from India being a technical space to more of a creative hub.

“I’m not 100% seeing a shift but that’s more of a mindset thing.

“People who are able to make these kinds of games have always existed but now there is funding and resource opportunities available to be able to act on these creative visions.”

Earth’s inner core rotation slows down and reverses direction. What does this mean for the planet?

(Image by DestinaDesign on Shutterstock)

Earth’s inner core, a solid iron sphere nestled deep within our planet, has slowed its rotation, according to new research. Scientists from the University of Southern California say their discovery challenges previous notions about the inner core’s behavior and raises intriguing questions about its influence on Earth’s dynamics.

The inner core, a mysterious realm located nearly 3,000 miles beneath our feet, has long been known to rotate independently of the Earth’s surface. Scientists have spent decades studying this phenomenon, believing it to play a crucial role in generating our planet’s magnetic field and shaping the convection patterns in the liquid outer core. Until now, it was widely accepted that the inner core was gradually spinning faster than the rest of the Earth, a process known as super-rotation. However, this latest study, published in the journal Nature, reveals a surprising twist in this narrative.

“When I first saw the seismograms that hinted at this change, I was stumped,” says John Vidale, Dean’s Professor of Earth Sciences at the USC Dornsife College of Letters, Arts and Sciences, in a statement. “But when we found two dozen more observations signaling the same pattern, the result was inescapable. The inner core had slowed down for the first time in many decades. Other scientists have recently argued for similar and different models, but our latest study provides the most convincing resolution.”

Slowing Spin, Reversing Rhythm
By analyzing seismic waves generated by repeating earthquakes in the South Sandwich Islands from 1991 to 2023, the researchers discovered that the inner core’s rotation had not only slowed down but had actually reversed direction. The team focused on a specific type of seismic wave called PKIKP, which traverses the inner core and is recorded by seismic arrays in northern North America. By comparing the waveforms of these waves from 143 pairs of repeating earthquakes, they noticed a peculiar pattern.

Many of the earthquake pairs exhibited seismic waveforms that changed over time, but remarkably, they later reverted to match their earlier counterparts. This observation suggests that the inner core, after a period of super-rotation from 2003 to 2008, had begun to sub-rotate, or spin more slowly than the Earth’s surface, essentially retracing its previous path. The researchers found that from 2008 to 2023, the inner core sub-rotated two to three times more slowly than its prior super-rotation.
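
To give a sense of the kind of waveform comparison involved, here is a minimal Python sketch that measures how similar two PKIKP traces from a repeating earthquake pair are using normalized cross-correlation. The function, sampling interval, and trace names are assumptions for illustration; the authors' processing is considerably more involved.

```python
# Minimal waveform-similarity sketch, not the authors' code.
import numpy as np

def waveform_similarity(wave_a: np.ndarray, wave_b: np.ndarray, dt: float):
    """Return peak normalized cross-correlation and the lag (seconds) where it occurs."""
    a = (wave_a - wave_a.mean()) / wave_a.std()
    b = (wave_b - wave_b.mean()) / wave_b.std()
    xcorr = np.correlate(a, b, mode="full") / len(a)
    best = int(np.argmax(xcorr))
    lag_samples = best - (len(a) - 1)
    return xcorr[best], lag_samples * dt

# Usage with two hypothetical traces of equal length sampled at 20 Hz:
# similarity, lag = waveform_similarity(trace_2004, trace_2016, dt=0.05)
```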

The inner core began to decrease its speed around 2010, moving slower than the Earth’s surface. (Credit: USC Graphic/Edward Sotelo)

The study’s findings paint a captivating picture of the inner core’s rotational dynamics. The matching waveforms observed in numerous earthquake pairs indicate moments when the inner core returned to positions it had occupied in the past, relative to the mantle. This pattern, combined with insights from previous studies, reveals that the inner core’s rotation is far more complex than a simple, steady super-rotation.

The researchers discovered that the inner core’s super-rotation from 2003 to 2008 was faster than its subsequent sub-rotation, suggesting an asymmetry in its behavior. This difference in rotational rates implies that the interactions between the inner core, outer core, and mantle are more intricate than previously thought.

Limitations: Pieces Of The Core Puzzle
While the study offers compelling evidence for the inner core’s slowing and reversing rotation, it does have some limitations. The spatial coverage of the seismic data is relatively sparse, since the analysis relies on repeating earthquakes in the South Sandwich Islands recorded by seismic arrays in northern North America. Furthermore, any model of how the inner core interacts with the outer core and mantle, however sophisticated, remains a simplified representation of the complex dynamics at play.

The authors emphasize the need for additional high-resolution data from a broader range of locations to strengthen their findings. They also call for ongoing refinement of Earth system models to better capture the intricacies of the inner core’s behavior and its interactions with the outer core and mantle.

Source: https://studyfinds.org/earth-inner-core-rotation-slows/

Mars missions likely impossible for astronauts without kidney dialysis

Photo by Mike Kiev from Unsplash

New study shows damage from cosmic radiation, microgravity could be ‘catastrophic’ for human body
LONDON — As humanity sets its sights on deep space missions to the Moon, Mars, and beyond, a team of international researchers has uncovered a potential problem lurking in the shadows of these ambitious plans: spaceflight-induced kidney damage.

The findings, in a nutshell
In a new study that integrated a dizzying array of cutting-edge scientific techniques, researchers from University College London found that exposure to the unique stressors of spaceflight — such as microgravity and galactic cosmic radiation — can lead to serious, potentially irreversible kidney problems in astronauts.

This sobering discovery, published in Nature Communications, not only highlights the immense challenges of long-duration space travel but also underscores the urgent need for effective countermeasures to protect the health of future space explorers.

“If we don’t develop new ways to protect the kidneys, I’d say that while an astronaut could make it to Mars they might need dialysis on the way back,” says the study’s first author, Dr. Keith Siew, from the London Tubular Centre, based at the UCL Department of Renal Medicine, in a media release. “We know that the kidneys are late to show signs of radiation damage; by the time this becomes apparent it’s probably too late to prevent failure, which would be catastrophic for the mission’s chances of success.”

New research shows that exposure to the unique stressors of spaceflight — such as microgravity and galactic cosmic radiation — can lead to serious, potentially irreversible kidney problems in astronauts. (© alonesdj – stock.adobe.com)

Methodology

To unravel the complex effects of spaceflight on the kidneys, the researchers analyzed a treasure trove of biological samples and data from 11 different mouse missions, five human spaceflights, one simulated microgravity experiment in rats, and four studies exposing mice to simulated galactic cosmic radiation on Earth.

The team left no stone unturned, employing a comprehensive “pan-omics” approach that included epigenomics (studying changes in gene regulation), transcriptomics (examining gene expression), proteomics (analyzing protein levels), epiproteomics (investigating protein modifications), metabolomics (measuring metabolite profiles), and metagenomics (exploring the microbiome). They also pored over clinical chemistry data (electrolytes, hormones, biochemical markers), assessed kidney function, and scrutinized kidney structure and morphology using advanced histology, 3D imaging, and in situ hybridization techniques.

By integrating and cross-referencing these diverse datasets, the researchers were able to paint a remarkably detailed and coherent picture of how spaceflight stressors impact the kidneys at multiple biological levels, from individual molecules to whole organ structure and function.

Results
The study’s findings are as startling as they are sobering. Exposure to microgravity and simulated cosmic radiation induced a constellation of detrimental changes in the kidneys of both humans and animals.

First, the researchers discovered that spaceflight alters the phosphorylation state of key kidney transport proteins, suggesting that the increased kidney stone risk in astronauts is not solely a secondary consequence of bone demineralization but also a direct result of impaired kidney function.

Second, they found evidence of extensive remodeling of the nephron – the basic structural and functional unit of the kidney. This included the expansion of certain tubule segments but an overall loss of tubule density, hinting at a maladaptive response to the unique stressors of spaceflight.

Perhaps most alarmingly, exposing mice to a simulated galactic cosmic radiation dose equivalent to a round trip to Mars led to overt signs of kidney damage and dysfunction, including vascular injury, tubular damage, and impaired filtration and reabsorption.

Piecing together the diverse “omics” datasets, the researchers identified several convergent molecular pathways and biological processes that were consistently disrupted by spaceflight, causing mitochondrial dysfunction, oxidative stress, inflammation, fibrosis, and senescence (cellular aging) — all hallmarks of chronic kidney disease.

Source: https://studyfinds.org/mars-missions-catastrophic-astronauts-kidneys/

Being more optimistic can keep you from procrastinating

(© chinnarach – stock.adobe.com)

We’ve all been there — a big task is looming over our heads, but we choose to put it off for another day. Procrastination is so common that researchers have spent years trying to understand what drives some people to chronically postpone important chores until the last possible moment. Now, researchers from the University of Tokyo have found a fascinating factor that may be the cause of procrastination: people’s view of the future.

The findings, in a nutshell
Researchers found evidence that having a pessimistic view about how stressful the future will be could increase the likelihood of falling into a pattern of severe procrastination. Moreover, the study published in Scientific Reports reveals that having an optimistic view on the future wards off the urge to procrastinate.

“Our research showed that optimistic people — those who believe that stress does not increase as we move into the future — are less likely to have severe procrastination habits,” explains Saya Kashiwakura from the Graduate School of Arts and Sciences at the University of Tokyo, in a media release. “This finding helped me adopt a more light-hearted perspective on the future, leading to a more direct view and reduced procrastination.”

Researchers from the University of Tokyo have found a fascinating factor that may be the cause of procrastination: people’s view of the future. (Credit: Ground Picture/Shutterstock)

Methodology
To examine procrastination through the lens of people’s perspectives on the past, present, and future, the researchers introduced new measures they dubbed the “chronological stress view” and “chronological well-being view.” Study participants were asked to rate their levels of stress and well-being across nine different timeframes: the past 10 years, past year, past month, yesterday, now, tomorrow, next month, next year, and the next 10 years.

The researchers then used clustering analysis to group participants based on the patterns in their responses over time – for instance, whether their stress increased, decreased or stayed flat as they projected into the future. Participants were also scored on a procrastination scale, allowing the researchers to investigate whether certain patterns of future perspective were associated with more or less severe procrastination tendencies.
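As a concrete, purely illustrative sketch of that approach, the snippet below clusters made-up stress ratings over the nine timeframes with k-means and then compares the share of severe procrastinators in each cluster. It is not the study’s actual pipeline; the data, the four-cluster choice, and the severity cutoff are all assumptions for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up ratings: rows = participants, columns = the nine timeframes
# (past 10 years ... next 10 years), each rated for stress.
rng = np.random.default_rng(42)
ratings = rng.integers(1, 11, size=(300, 9)).astype(float)

# Group participants by the shape of their stress trajectory over time.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(ratings)

# Made-up procrastination scores; compare severe-procrastinator rates per cluster.
procrastination = rng.integers(0, 61, size=300)   # e.g., a 0-60 scale
severe = procrastination >= 45                    # arbitrary severity cutoff
for c in range(4):
    members = kmeans.labels_ == c
    print(f"cluster {c}: {severe[members].mean():.0%} severe procrastinators")
```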

Results: Procrastination is All About Mindset
When examining the chronological stress view patterns, the analysis revealed four distinct clusters: “descending” (stress decreases over time), “ascending” (stress increases), “V-shaped” (stress is lowest in the present), and a “skewed mountain” shape where stress peaked in the past and declined toward the future.

Intriguingly, the researchers found a significant relationship between cluster membership and level of procrastination. The percentage of severe procrastinators was significantly lower in the “descending” cluster – those who believed their stress levels would stay flat or decrease as they projected into the future.

Source: https://studyfinds.org/being-more-optimistic-can-keep-you-from-procrastinating/

Who’s most vulnerable to scams? Psychologists reveal who criminals target and why

(Credit: fizkes/Shutterstock)

About 1 in 6 Americans are age 65 or older, and that percentage is projected to grow. Older adults often hold positions of power, have retirement savings accumulated over the course of their lifetimes, and make important financial and health-related decisions – all of which makes them attractive targets for financial exploitation.

In 2021, there were more than 90,000 older victims of fraud, according to the FBI. These cases resulted in US$1.7 billion in losses, a 74% increase compared with 2020. Even so, that may be a significant undercount since embarrassment or lack of awareness keeps some victims from reporting.

Financial exploitation represents one of the most common forms of elder abuse. Perpetrators are often individuals in the victims’ inner social circles – family members, caregivers, or friends – but can also be strangers.

When older adults experience financial fraud, they typically lose more money than younger victims. Those losses can be devastating, dramatically reducing their independence, health, and well-being, especially since older adults have limited time to recoup what they lost.

But older adults have been largely neglected in research on this burgeoning type of crime. We are psychologists who study social cognition and decision-making, and our research lab at the University of Florida is aimed at understanding the factors that shape vulnerability to deception in adulthood and aging.

Defining vulnerability
Financial exploitation involves a variety of exploitative tactics, such as coercion, manipulation, undue influence, and, frequently, some sort of deception.

The majority of current research focuses on people’s ability to distinguish between truth and lies during interpersonal communication. However, deception occurs in many contexts – increasingly, over the internet.

Our lab conducts laboratory experiments and real-world studies to measure susceptibility under various conditions: investment games, lie/truth scenarios, phishing emails, text messages, fake news and deepfakes – fabricated videos or images that are created by artificial intelligence technology.

To study how people respond to deception, we use measures like surveys, brain imaging, behavior, eye movement, and heart rate. We also collect health-related biomarkers, such as being a carrier of gene variants that increase risk for Alzheimer’s disease, to identify individuals with particular vulnerability.

And our work shows that an older adult’s ability to detect deception is not just about their individual characteristics. It also depends on how they are being targeted.

Individual risk factors
Better cognition, social and emotional capacities, and brain health are all associated with less susceptibility to deception.

Cognitive functions, such as how quickly our brain processes information and how well we remember it, decline with age and impact decision-making. For example, among people around 70 years of age or older, declines in analytical thinking are associated with reduced ability to detect false news stories.

Additionally, low memory function in aging is associated with greater susceptibility to email phishing. Further, according to recent research, this correlation is specifically pronounced among older adults who carry a gene variant that is a genetic risk factor for developing Alzheimer’s disease later in life. Indeed, some research suggests that greater financial exploitability may serve as an early marker of disease-related cognitive decline.

Social and emotional influences are also crucial. Negative mood can enhance somebody’s ability to detect lies, while positive mood in very old age can impair a person’s ability to detect fake news.

Lack of support and loneliness exacerbate susceptibility to deception. Social isolation during the COVID-19 pandemic has led to increased reliance on online platforms, and older adults with lower digital literacy are more vulnerable to fraudulent emails and robocalls.

Isolation during the COVID-19 pandemic has increased aging individuals’ vulnerability to online scams. (© Andrey Popov – stock.adobe.com)

Finally, an individual’s brain and body responses play a critical role in susceptibility to deception. One important factor is interoceptive awareness: the ability to accurately read our own body’s signals, like a “gut feeling.” This awareness is correlated with better lie detection in older adults.

According to a first-of-its-kind study, financially exploited older adults had a significantly smaller insula – a brain region key to integrating bodily signals with environmental cues – than older adults who had been exposed to the same threat but avoided it. Reduced insula activity is also related to greater difficulty picking up on cues that make someone appear less trustworthy.

Types of effective fraud
Not all deception is equally effective on everyone.

Our findings show that email phishing that relies on reciprocation – people’s tendency to repay what another person has provided them – was more effective on older adults. Younger adults, on the other hand, were more likely to fall for phishing emails that employed scarcity: people’s tendency to perceive an opportunity as more valuable if they are told its availability is limited. For example, an email might alert you that a coin collection from the 1950s has become available for a special reduced price if purchased within the next 24 hours.

There is also evidence that as we age, we have greater difficulty detecting the “wolf in sheep’s clothing”: someone who appears trustworthy, but is not acting in a trustworthy way. In a card-based gambling game, we found that compared with their younger counterparts, older adults are more likely to select decks presented with trustworthy-looking faces, even though those decks consistently resulted in negative payouts. Even after learning about untrustworthy behavior, older adults showed greater difficulty overcoming their initial impressions.

Reducing vulnerability
Identifying who is especially at risk for financial exploitation in aging is crucial for preventing victimization.

We believe interventions should be tailored rather than one-size-fits-all. For example, perhaps machine learning algorithms could someday determine the most dangerous types of deceptive messages that certain groups encounter – such as in text messages, emails, or social media platforms – and provide on-the-spot warnings. Black and Hispanic consumers are more likely to be victimized, so there is also a dire need for interventions that resonate with their communities.

Prevention efforts would benefit from taking a holistic approach to help older adults reduce their vulnerability to scams. Training in financial, health, and digital literacy are important, but so are programs to address loneliness.

People of all ages need to keep these lessons in mind when interacting with online content or strangers – but not only then. Unfortunately, financial exploitation often comes from individuals close to the victim.

Source: https://studyfinds.org/whos-most-vulnerable-to-scams/

Mushroom-infused ‘microdosing’ chocolate bars are sending people to the hospital, prompting investigation: FDA

The Food and Drug Administration (FDA) is warning consumers about a mushroom-infused chocolate bar that has reportedly sent some people to the hospital.

The FDA released an advisory message about Diamond Shruumz “microdosing” chocolate bars on June 7. The chocolate bars contain a “proprietary nootropics blend” that is said to give a “relaxed euphoric experience without psilocybin,” according to its website.

“The FDA and CDC, in collaboration with America’s Poison Centers and state and local partners, are investigating a series of illnesses associated with eating Diamond Shruumz-brand Microdosing Chocolate Bars,” the FDA’s website reads.

“Do not eat, sell, or serve Diamond Shruumz-Brand Microdosing Chocolate Bars,” the site warns. “FDA’s investigation is ongoing.”

The FDA is warning consumers against Diamond Shruumz chocolate bars. (FDA | iStock)

“Microdosing” is a practice where one takes a very small amount of psychedelic drugs with the intent of increasing productivity, inspiring creativity, and boosting mood. On its website, Diamond Shruumz says its products help achieve “a subtle, sumptuous experience and a more creative state of mind.”

“We’re talkin’ confections with a kick,” the brand said. “So if you like mushroom chocolate bars and want to mingle with some microdosing, check us out. We just might change how you see the world.”

But government officials warn that the products have caused seizures in some consumers and vomiting in others.

“People who became ill after eating Diamond Shruumz-brand Microdosing Chocolate Bars reported a variety of severe symptoms including seizures, central nervous system depression (loss of consciousness, confusion, sleepiness), agitation, abnormal heart rates, hyper/hypotension, nausea, and vomiting,” the FDA reported.

Six people reportedly experienced such severe reactions that they had to be hospitalized.

At least eight people have suffered a variety of medical symptoms from the chocolates, including nausea. (iStock)

“All eight people have reported seeking medical care; six have been hospitalized,” the FDA’s press release said. “No deaths have been reported.”

Diamond Shruumz says on its website that its products are not necessarily psychedelic. Although the chocolate is marketed as promising a psilocybin-like experience, there is no psilocybin in it.

“There is no presence of psilocybin, amanita or any scheduled drugs, ensuring a safe and enjoyable experience,” the website claims. “Rest assured, our treats are not only free from psychedelic substances but our carefully crafted ingredients still offer an experience.”

“This allows you to indulge in a uniquely crafted blend designed for your pleasure and peace of mind.”

Officials warn consumers to keep the products out of the reach of minors, as kids and teens may be tempted to eat the chocolate bars.

Source: https://www.foxnews.com/health/mushroom-infused-microdosing-chocolate-bars-sending-people-hospital-prompting-investigation-fda

 

Elephants give each other ‘names,’ just like humans

(Photo by Unsplash+ in collaboration with Getty Images)

They say elephants never forget a face, and now, as it turns out, they seem to remember names too. That is, the “names” they have for one another. Yes, believe it or not, a new study shows that elephants actually have the rare ability to identify one another through unique calls, essentially giving one another human-like names when they converse.

Scientists from Colorado State University, along with a team of researchers from Save the Elephants and ElephantVoices, used machine learning to make this fascinating discovery. Their work suggests that elephants possess a level of communication and abstract thought that is more similar to ours than previously believed.

In the study, published in Nature Ecology and Evolution, the researchers analyzed hundreds of recorded elephant calls from Kenya’s Samburu National Reserve and Amboseli National Park. By training a sophisticated model to identify the intended recipient of each call based on its unique acoustic features, they could confirm that elephant calls contain a name-like component, a behavior they had suspected based on observation.

“Dolphins and parrots call one another by ‘name’ by imitating the signature call of the addressee. By contrast, our data suggest that elephants do not rely on imitation of the receiver’s calls to address one another, which is more similar to the way in which human names work,” says lead author Michael Pardo, who conducted the study as a postdoctoral researcher at CSU and Save the Elephants, in a statement.

Once the team pinpointed the specific calls to the corresponding elephants, the scientists played back the recordings and observed their reactions. When the calls were addressed to them, the elephants responded positively by calling back or approaching the speaker. In contrast, calls meant for other elephants elicited less enthusiasm, demonstrating that the elephants recognized their own “names.”

Two juvenile elephants greet each other in Samburu National Reserve in Kenya. (Credit: George Wittemyer)

Elephants’ Brains Even More Complex Than Realized

The ability to learn and produce new sounds, a prerequisite for naming individuals, is uncommon in the animal kingdom. This form of arbitrary communication, where a sound represents an idea without imitating it, is considered a higher-level cognitive skill that greatly expands an animal’s capacity to communicate.

Co-author George Wittemyer, a professor at CSU’s Warner College of Natural Resources and chairman of the scientific board of Save the Elephants, elaborated on the implications of this finding: “If all we could do was make noises that sounded like what we were talking about, it would vastly limit our ability to communicate.” He adds that the use of arbitrary vocal labels suggests that elephants may be capable of abstract thought.

To arrive at these conclusions, the researchers embarked on a four-year study that included 14 months of intensive fieldwork in Kenya. They followed elephants in vehicles, recording their vocalizations and capturing approximately 470 distinct calls from 101 unique callers and 117 unique receivers.

Kurt Fristrup, a research scientist in CSU’s Walter Scott, Jr. College of Engineering, developed a novel signal processing technique to detect subtle differences in call structure. Together with Pardo, he trained a machine-learning model to correctly identify the intended recipient of each call based solely on its acoustic features. This innovative approach allowed the researchers to uncover the hidden “names” within the elephant calls.
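To give a flavor of what such a test looks like in code, here is a minimal, purely hypothetical sketch: random numbers stand in for acoustic features, and a standard classifier is scored on held-out calls. It is not the researchers’ actual model; the point is only the logic that, if calls carry a “name,” out-of-sample accuracy should rise above chance.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score

# Hypothetical stand-in data: each call is summarized by a vector of acoustic
# features; the label is the identity of the elephant the call was addressed to.
rng = np.random.default_rng(1)
n_calls, n_features, n_receivers = 470, 20, 117
X = rng.normal(size=(n_calls, n_features))       # placeholder acoustic features
y = rng.integers(0, n_receivers, size=n_calls)   # placeholder receiver labels

# With real features that encode the receiver, held-out accuracy should beat
# chance (about 1/117 here); with these random placeholders it stays near chance.
model = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
print(f"mean held-out accuracy: {scores.mean():.3f} (chance ~ {1 / n_receivers:.3f})")
```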

Source: https://studyfinds.org/elephants-give-each-other-names/

Baby talk explained! All those sounds mean more than you think

Mother and baby laying down together (Photo by Ana Tablas on Unsplash)

From gurgling “goos” to squealing “wheees!”, the delightful symphony of sounds emanating from a baby’s crib may seem like charming gibberish to the untrained ear. However, a new study suggests that these adorable vocalizations are far more than just random noise — they’re actually a crucial stepping stone on the path to language development.

The research, published in PLOS One, took a deep dive into the vocal patterns of 130 typically developing infants over the course of their first year of life. Their discoveries challenge long-held assumptions about how babies learn to communicate.

Traditionally, many experts believed that infants start out making haphazard sounds, gradually progressing to more structured “baby talk” as they listen to and imitate the adults around them. This new study paints a different picture, one where babies are actively exploring and practicing different categories of sounds in what might be thought of as a precursor to speech.

Think of it like a baby’s very first music lesson. Just as a budding pianist might spend time practicing scales and chords, it seems infants devote chunks of their day to making specific types of sounds, almost as if they’re trying to perfect their technique.

The researchers reached this conclusion after sifting through an enormous trove of audio data captured by small recording devices worn by the babies as they went about their daily lives. In total, they analyzed over 1,100 daylong recordings, adding up to nearly 14,500 hours – or about 1.6 years – of audio.

Using special software to isolate the infant vocalizations, the research team categorized the sounds into three main types: squeals (high-pitched, often excited-sounding noises), growls (low-pitched, often “rumbly” sounds), and vowel-like utterances (which the researchers dubbed “vocants”).

Next, they zoomed in on five-minute segments from each recording, hunting for patterns in how these sound categories were distributed. The results were striking: 40% of the recordings showed significant “clustering” of squeals, with a similar percentage showing clustering of growls. In other words, the babies weren’t randomly mixing their sounds, but rather, they seemed to focus on one type at a time, practicing it intensively.
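One simple way to ask whether a sound type is “clustered” rather than spread evenly is a permutation test, sketched below with made-up data. This is only an illustration of the idea, not the study’s actual statistics: it checks whether squeals pile up in particular segments more than random shuffling of the same sounds would produce.

```python
import numpy as np

def clustering_pvalue(labels, segments, target="squeal", n_perm=5000, seed=0):
    """Permutation test: is `target` more concentrated in a few segments than
    chance? The statistic is the variance of the target's per-segment counts."""
    rng = np.random.default_rng(seed)
    labels, segments = np.asarray(labels), np.asarray(segments)
    seg_ids = np.unique(segments)

    def statistic(lab):
        counts = np.array([(lab[segments == s] == target).sum() for s in seg_ids])
        return counts.var()

    observed = statistic(labels)
    perms = np.array([statistic(rng.permutation(labels)) for _ in range(n_perm)])
    return (perms >= observed).mean()

# Made-up recording: squeals bunched into the first segment -> small p-value.
labels = ["squeal"] * 12 + ["vocant"] * 28 + ["growl"] * 10 + ["vocant"] * 30
segments = [0] * 20 + [1] * 20 + [2] * 20 + [3] * 20
print(clustering_pvalue(labels, segments))
```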

Source: https://studyfinds.org/baby-talk-explained/

Why do giraffes have long necks? Researchers may finally have the answer

Photo by Krish Radhakrishna from Unsplash

Everything in biology ultimately boils down to food and sex. To survive as an individual, you need food. To survive as a species, you need sex.

Not surprisingly, then, the age-old question of why giraffes have long necks has centered around food and sex. After debating this question for the past 150 years, biologists still cannot agree on which of these two factors was the most important in the evolution of the giraffe’s neck. In the past three years, my colleagues and I have been trying to get to the bottom of this question.

Necks for sex
In the 19th century, biologists Charles Darwin and Jean Baptiste Lamarck both speculated that giraffes’ long necks helped them reach acacia leaves high up in the trees, though they likely weren’t observing actual giraffe behavior when they came up with this theory. Several decades later, when scientists started observing giraffes in Africa, a group of biologists came up with an alternative theory based on sex and reproduction.

These pioneering giraffe biologists noticed how male giraffes, standing side by side, used their long necks to swing their heads and club each other. The researchers called this behavior “neck-fighting” and guessed that it helped the giraffes prove their dominance over each other and woo mates. Males with the longest necks would win these contests and, in turn, boost their reproductive success. That favorability, the scientists predicted, drove the evolution of long necks.

Since its inception, the necks-for-sex sexual selection hypothesis has overshadowed Darwin’s and Lamarck’s necks-for-food hypothesis.

The necks-for-sex hypothesis predicts that males should have longer necks than females since only males use them to fight, and indeed, they do. But adult male giraffes are also about 30% to 50% larger than female giraffes. All of their body components are bigger. So, my team wanted to find out if males have proportionally longer necks when accounting for their overall stature, comprised of their head, neck, and forelegs.

Necks not for sex?
But it’s not easy to measure giraffe body proportions. For one, their necks grow disproportionately faster during the first six to eight years of their life. And in the wild, you can’t tell exactly how old an individual animal is. To get around these problems, we measured body proportions in captive Masai giraffes in North American zoos. Here, we knew the exact age of the giraffes and could compare these data with the body proportions of wild giraffes that we could confidently say were older than 8 years.

To our surprise, we found that adult female giraffes have proportionally longer necks than males, which contradicts the necks-for-sex hypothesis. We also found that adult female giraffes have proportionally longer body trunks, while adult males have proportionally longer forelegs and thicker necks.

Giraffe babies don’t have any of these sex-specific body proportion differences. They only appear as giraffes are reaching adulthood.

Finding that female giraffes have proportionally both longer necks and longer body trunks led us to propose that females, and not males, drove the evolution of the giraffe’s long neck, and not for sex but for food and reproduction. Our theory is in agreement with Darwin and Lamarck that food was the major driver for the evolution of the giraffe’s neck, but with an emphasis on female reproductive success.

A shape to die for
Giraffes are notoriously picky eaters and browse on fresh leaves, flowers, and seed pods. Female giraffes especially need enough to eat because they spend most of their adult lives either pregnant or providing milk to their calves.

Females tend to use their long necks to probe deep into bushes and trees to find the most nutritious food. By contrast, males tend to feed high in trees by fully extending their necks vertically. Females need proportionally longer trunks to grow calves that can be well over 6 feet tall at birth.

For males, I’d guess that their proportionally longer forelegs are an adaptation that allows them to mount females more easily during sex. While we found that their necks might not be as proportionally long as females’ necks are, they are thicker. That’s probably an adaptation that helps them win neck fights.

Source: https://studyfinds.org/why-do-giraffes-have-long-necks/

Eleven tonnes of rubbish taken off Himalayan peaks

Fewer permits were issued and fewer climbers died on Mount Everest in 2024 than in 2023.

The Nepalese army says it has removed eleven tonnes of rubbish, four corpses and one skeleton from Mount Everest and two other Himalayan peaks this year.
It took troops 55 days to recover the rubbish and bodies from Everest, Nuptse and Lhotse mountains.
It is estimated that more than fifty tonnes of waste and more than 200 bodies cover Everest.
The army began conducting an annual clean-up of the mountain, which is often described as the world’s highest garbage dump, in 2019 amid concerns about overcrowding and climbers queueing in dangerous conditions to reach the summit.
The five clean-ups have collected 119 tonnes of rubbish, 14 human corpses and some skeletons, the army says.
This year, authorities aimed to reduce rubbish and improve rescues by making climbers wear tracking devices and bring back their own poo.

In the future, the government plans to create a mountain rangers team to monitor rubbish and put more money toward its collection, Nepal’s Department of Tourism director of mountaineering Rakesh Gurung told the BBC.
For the spring climbing season that ended in May, the government issued permits to 421 climbers, down from a record-breaking 478 last year. Those numbers do not include Nepalese guides. In total, an estimated 600 people climbed the mountain this year.
This year, eight climbers died or went missing, compared to 19 last year.
A Brit, Daniel Paterson, and his Nepalese guide, Pastenji Sherpa, are among those missing after being hit by falling ice on 21 May.
Mr Paterson’s family started a fundraiser to hire a search team to find them, but said in an update on 4 June that recovery “is not possible at this time” because of the location and danger of the operation.
Mr Gurung said the number of permits was lower this year because of the global economic situation, because China is also issuing permits, and because the national election in India reduced the number of climbers from that country.
Source: https://www.bbc.com/news/articles/cq5539lj1pqo

Women experience greater mental agility during menstruation

For female athletes, the impact of the menstrual cycle on physical performance has been a topic of much discussion. But what about the mental side of the game? A groundbreaking new study suggests that certain cognitive abilities, particularly those related to spatial awareness and anticipation, may indeed ebb and flow with a woman’s cycle.

(Photo 102762325 | Black Teen Brain © Denisismagilov | Dreamstime.com)

The findings, in a nutshell
Researchers from University College London tested nearly 400 participants on a battery of online cognitive tasks designed to measure reaction times, attention, visuospatial functions (like 3D mental rotation), and timing anticipation. The study, published in Neuropsychologia, included men, women on hormonal contraception, and naturally cycling women.

Fascinatingly, the naturally cycling women exhibited better overall cognitive performance during menstruation compared to any other phase of their cycle. This held true even though these women reported poorer mood and more physical symptoms during their period. In contrast, performance dipped during the late follicular phase (just before ovulation) and the luteal phase (after ovulation).

“What is surprising is that the participant’s performance was better when they were on their period, which challenges what women, and perhaps society more generally, assume about their abilities at this particular time of the month,” says Dr. Flaminia Ronca, first author of the study from UCL, in a university release.

“I hope that this will provide the basis for positive conversations between coaches and athletes about perceptions and performance: how we feel doesn’t always reflect how we perform.”

This study provides compelling preliminary evidence that sport-relevant cognitive skills may indeed fluctuate across the menstrual cycle, with a surprising boost during menstruation itself. If confirmed in future studies, this could have implications for understanding injury risk and optimizing mental training in female athletes.

Importantly, there was a striking mismatch between women’s perceptions and their actual performance. Many felt their thinking was impaired during their period when, in fact, it was enhanced. This points to the power of negative expectations and the importance of educating athletes about their unique physiology.

Source: https://studyfinds.org/womens-brains-show-more-mental-agility-during-their-periods/

Colon cancer crisis in young people could be fueled by booming drinks brands adored by teens

They are used by millions of workers to power through afternoon slumps.

But highly caffeinated energy drinks could be partly fueling the explosion of colorectal cancers in young people, US researchers warn.

They believe an ingredient in Red Bull and other top brands such as Celsius and Monster may be linked to bacteria in the gut that speed up tumor growth.

Researchers in Florida theorize that cancer cells use taurine – an amino acid thought to improve mental clarity – as their ‘primary energy source.’

At the world’s biggest cancer conference this week, the team announced a new human trial that will test their hypothesis, which so far is based on animal studies.

They plan to discover whether drinking an energy drink every day causes levels of cancer-causing gut bacteria to rise.

Highly caffeinated energy drinks could be partly fueling the explosion of colorectal cancers in young people, US researchers warn – based on a new hypothesis

DailyMail.com revealed earlier this week how diets high in sugar and low in fiber may also be contributing to the epidemic of colon cancers in under-50s.

The University of Florida researchers are recruiting around 60 people aged 18 to 40 to be studied for four weeks.

Half of the group will consume at least one original Red Bull or Celsius, a sugar-free energy drink, per day, and their gut bacteria will be compared with those of a control group who don’t.

The upcoming trial is ‘one of the earliest’ studies to evaluate potential factors contributing to the meteoric rise in colorectal cancer, the researchers say.

Early onset cancers are still uncommon. About 90 percent of all cancers affect people over the age of 50.

But rates in younger age groups have soared around 70 percent since the 1990s, with around 17,000 new cases diagnosed in the US each year.

Source: https://www.dailymail.co.uk/health/article-13493163/red-bull-colon-cancer-crisis-young-people.html

Here’s why sugar wreaks havoc on gut health, worsens inflammatory bowel disease

(Photo by Alexander Grey from Unsplash)

There can be a lot of inconsistent dietary advice when it comes to gut health, but the advice that eating lots of sugar is harmful tends to be the most consistent of all. Scientists from the University of Pittsburgh are now showing that consuming excess sugar disrupts cells that keep the colon healthy in mice with inflammatory bowel disease (IBD).

“The prevalence of IBD is rising around the world, and it’s rising the fastest in cultures with industrialized, urban lifestyles, which typically have diets high in sugar,” says senior author Timothy Hand, Ph.D., associate professor of pediatrics and immunology at Pitt’s School of Medicine and UPMC Children’s Hospital of Pittsburgh. “Too much sugar isn’t good for a variety of reasons, and our study adds to that evidence by showing how sugar may be harmful to the gut. For patients with IBD, high-density sugar — found in things like soda and candy — might be something to stay away from.”

In this study, researchers fed mice either a standard or high-sugar diet, and then mimicked IBD symptoms by exposing them to a chemical called DSS, which damages the colon.

Shockingly, all of the mice that ate a high-sugar diet died within nine days. All of the animals that ate a standard diet lived until the end of the 14-day experiment. To figure out where things went wrong, the team looked for answers inside the colon. Typically, the colon is lined with a layer of epithelial cells that are arranged with finger-like projections called crypts. They are frequently replenished by dividing stem cells to keep the colon healthy.

“The colon epithelium is like a conveyor belt,” explains Hand in a media release. “It takes five days for cells to travel through the circuit from the bottom to the top of the crypt, where they are shed into the colon and defecated out. You essentially make a whole new colon every five days.”

(© T. L. Furrer – stock.adobe.com)

This system collapsed in mice fed a high-sugar diet
In fact, the protective layer of cells was completely gone in some animals, filling the colon with blood and immune cells. This shows that sugar may directly impact the colon, rather than the harm being dependent on the gut microbiome, which is what the team originally thought.

To compare the findings to human colons, the researchers used poppy seed-sized intestinal cultures that could be grown in a lab dish. They found that as sugar concentrations increased, fewer cultures developed, which suggests that sugar hinders cell division.

“We found that stem cells were dividing much more slowly in the presence of sugar — likely too slow to repair damage to the colon,” says Hand. “The other strange thing we noticed was that the metabolism of the cells was different. These cells usually prefer to use fatty acids, but after being grown in high-sugar conditions, they seemed to get locked into using sugar.”

Hand adds that these findings may be key to strengthening existing links between sweetened drinks and worse IBD outcomes.

Source: https://studyfinds.org/sugar-wreaks-havoc-gut-health/

Shocking study claims pollution causes more deaths than war, disease, and drugs combined

(Credit: aappp/Shutterstock)

We often think of war, terrorism, and deadly diseases as the greatest threats to human life. But what if the real danger is something we encounter every day, something that’s in the air we breathe, the water we drink, and even in the noise that surrounds us? A new study published in the Journal of the American College of Cardiology reveals a startling truth: pollution, in all its forms, is now a greater health threat than war, terrorism, malaria, HIV, tuberculosis, drugs, and alcohol combined. Specifically, researchers estimate that manmade pollutants and climate change contribute to a staggering seven million deaths globally each year.

“Every year around 20 million people worldwide die from cardiovascular disease with pollutants playing an ever-increasing role,” explains Professor Jason Kovacic, Director and CEO of the Victor Chang Cardiac Research Institute in Australia, in a media release.

The findings, in a nutshell
The culprits behind this global death toll aren’t just the obvious ones like air pollution from car exhausts or factory chimneys. The study, conducted by researchers from prestigious institutions worldwide, shines a light on lesser-known villains: soil pollution, noise pollution, light pollution, and even exposure to toxic chemicals in our homes.

Think about your daily life. You wake up after a night’s sleep disrupted by the glow of streetlights and the hum of late-night traffic. On your way to work, you’re exposed to car fumes and the blaring horns of impatient drivers. At home, you might be unknowingly using products containing untested chemicals. All these factors, the study suggests, are chipping away at your heart health.

“Pollutants have reached every corner of the globe and are affecting every one of us,” Prof. Kovacic warns. “We are witnessing unprecedented wildfires, soaring temperatures, unacceptable road noise and light pollution in our cities and exposure to untested toxic chemicals in our homes.”

Specifically, researchers estimate that manmade pollutants and climate change contribute to a staggering 7 million deaths globally each year. (© Quality Stock Arts – stock.adobe.com)

How do these pollutants harm our hearts?
Air Pollution: When you inhale smoke from a wildfire or exhaust fumes, these toxins travel deep into your lungs, enter your bloodstream, and then circulate throughout your body. It’s like sending tiny invaders into your system, causing damage wherever they go, including your heart.

Noise and Light Pollution: Ever tried sleeping with a streetlight shining through your window or with noisy neighbors? These disruptions do more than just annoy you—they mess up your sleep patterns. Poor sleep can lead to inflammation in your body, raise your blood pressure, and even cause weight gain. All of these are risk factors for heart disease.

Extreme Heat: Think of your heart as a car engine. On a scorching hot day, your engine works harder to keep cool. Similarly, during a heatwave, your heart has to work overtime. This extra strain, coupled with dehydration and reduced blood volume from sweating, can lead to serious issues like acute kidney failure.

Chemical Exposure: Many household items — from non-stick pans to water-resistant clothing — contain chemicals that haven’t been thoroughly tested for safety. Prof. Kovacic points out, “There are hundreds of thousands of chemicals that haven’t even been tested for their safety or toxicity, let alone their impact on our health.”

The statistics are alarming. Air pollution alone is linked to over seven million premature deaths per year, with more than half due to heart problems. During heatwaves, the risk of heat-related cardiovascular deaths can spike by over 10%. In the U.S., exposure to wildfire smoke has surged by 77% since 2002.

Source: https://studyfinds.org/pollution-causes-more-deaths/

Never-before-seen blue ants discovered in India

In the lush forests of India’s Arunachal Pradesh, a team of intrepid researchers has made a startling discovery: a never-before-seen species of ant that sparkles like a brilliant blue gemstone. The remarkable find marks the first new species of its genus to be identified in India in over 120 years.

Dubbed Paraparatrechina neela, the fascinating species was discovered by entomologists Dr. Priyadarsanan Dharma Rajan and Ramakrishnaiah Sahanashree, from the Ashoka Trust for Research in Ecology and the Environment (ATREE) in Bengaluru, along with Aswaj Punnath from the University of Florida. The name “neela” comes from Indian languages, meaning the color blue. And for good reason – this ant sports an eye-catching iridescent blue exoskeleton, unlike anything seen before in its genus.

Paraparatrechina is a widespread group of ants found across Asia, Africa, Australia and the Pacific. They are typically small, measuring just a few millimeters in length. Before this discovery, India was home to only one known species in the genus, Paraparatrechina aseta, which was described way back in 1902.

The researchers collected the dazzling P. neela specimens during an expedition in 2022 to the Siang Valley in the foothills of the Eastern Himalayas. Fittingly, this trip was part of a series called the “Siang Expeditions” – a project aiming to retrace the steps of a historic 1911-12 expedition that documented the region’s biodiversity.

Paraparatrechina neela — the blue ant discovered in India’s Himalayas. (Credit: Sahanashree R)

Over a century later, the area still holds surprises. The team found the ants living in a tree hole in a patch of secondary forest, at an altitude of around 800 meters. After carefully extracting a couple of specimens with an aspirator device, they brought them back to the lab for a closer look under the microscope. Their findings are published in the journal ZooKeys.

Beyond its “captivating metallic-blue color,” a unique combination of physical features distinguishes P. neela from its relatives. The body is largely blue, but the legs and antennae fade to a brownish-white. Compared to the light brown, rectangular head of its closest Indian relative, P. aseta, the sapphire ant has a subtriangular head. It also has one less tooth on its mandibles and a distinctly raised section on its propodeum (the first abdominal segment that’s fused to the thorax).

So what’s behind the blue? While pigments provide color for some creatures, in insects, hues like blue are usually the result of microscopic structural arrangements that reflect light in particular ways. Different layers and shapes of the exoskeleton can interact with light to produce shimmering, iridescent effects. This has evolved independently in many insect groups, but is very rare in ants.

The function of the blue coloration remains a mystery for now. In other animals, such striking hues can serve many possible roles – from communication and camouflage to thermoregulation.

“This vibrant feature raises intriguing questions. Does it help in communication, camouflage, or other ecological interactions? Delving into the evolution of this conspicuous coloration and its connections to elevation and the biology of P. neela presents an exciting avenue for research,” the authors write.

A view of Siang Valley. (Credit: Ranjith AP)

The Eastern Himalayas are known to be a biodiversity hotspot, but remain underexplored by scientists. Finding a new species of ant, in a genus that specializes in tiny, inconspicuous creatures, hints at the many more discoveries that likely await in the region’s forests. Who knows – maybe there are entire rainbow-hued colonies of ants hidden in the treetops!

Source: https://studyfinds.org/blue-ants-discovered/

Prenatal stress hormones may finally explain why infants won’t sleep at night

(Photo by Laura Garcia on Unsplash)

Babies with higher stress hormone levels late in their mother’s pregnancy can end up having trouble falling asleep, researchers explain. The sleep research suggests that measuring cortisol during the third trimester can predict infant sleep patterns up to seven months after a baby’s birth.

Babies often wake up in the middle of the night and have trouble falling asleep. A team from the University of Denver says one possible but unexplored reason for this is how well the baby’s hypothalamic-pituitary-adrenal (HPA) system is working. The HPA system is well-known for regulating the stress response and has previously been linked with sleep disorders when it’s not working properly. Cortisol is the end product of the HPA axis.

What is cortisol?

Cortisol is a steroid hormone produced by the adrenal glands, which are located on top of each kidney. It plays a crucial role in several body functions, including:

Regulation of metabolism: Cortisol helps regulate the metabolism of proteins, fats, and carbohydrates, releasing energy and managing how the body uses these macronutrients.

Stress response: Often referred to as the “stress hormone,” cortisol is released in response to stress and low blood-glucose concentration. It helps the body manage and cope with stress by altering immune system responses and suppressing non-essential functions in a fight-or-flight situation.

Anti-inflammatory effects: Cortisol has powerful anti-inflammatory capabilities, helping to reduce inflammation and assist in healing.

Blood pressure regulation: It helps in maintaining blood pressure and cardiovascular function.

Circadian rhythm influence: Cortisol levels fluctuate throughout the day, typically peaking in the morning and gradually falling to their lowest level at night.

Collecting hair samples is one way to measure fetal cortisol levels in the final trimester of pregnancy.

“Although increases in cortisol across pregnancy are normal and important for preparing the fetus for birth, our findings suggest that higher cortisol levels during late pregnancy could predict the infant having trouble falling asleep,” says lead co-author Melissa Nevarez-Brewster in a media release. “We are excited to conduct future studies to better understand this link.”

The team collected hair cortisol samples from 70 infants during the first few days after birth. Approximately 57% of the infants were girls. When each child was seven months old, parents completed a sleep questionnaire including questions such as how long it took on average for the children to fall asleep, how long babies stayed awake at night, and the number of times the infants woke up in the middle of the night. The researchers also collected data on each infant’s gestational age at birth and their family’s income.
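The core analysis this sets up is a prediction question: do the newborn hair-cortisol values relate to the sleep measures reported at seven months? Purely as an illustration, with invented numbers and ignoring the covariates such as gestational age and income that the team also collected, a simple regression captures the shape of that test.

```python
import numpy as np
from scipy import stats

# Invented stand-in data: one hair-cortisol value per infant and one
# "minutes to fall asleep" value from the seven-month parent questionnaire.
rng = np.random.default_rng(7)
n_infants = 70
cortisol = rng.lognormal(mean=1.0, sigma=0.4, size=n_infants)
sleep_onset_minutes = 15 + 4 * cortisol + rng.normal(scale=5, size=n_infants)

# Does higher late-pregnancy cortisol predict longer time to fall asleep?
result = stats.linregress(cortisol, sleep_onset_minutes)
print(f"slope = {result.slope:.2f} min per unit cortisol, p = {result.pvalue:.4f}")
```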

Source: https://studyfinds.org/prenatal-stress-hormones-may-finally-explain-why-infants-wont-sleep-at-night/

How much stress is too much?

Pedro Figueras / pexels.com

COVID-19 taught most people that the line between tolerable and toxic stress – defined as persistent demands that lead to disease – varies widely. But some people will age faster and die younger from toxic stressors than others.

So, how much stress is too much, and what can you do about it?

I’m a psychiatrist specializing in psychosomatic medicine, which is the study and treatment of people who have physical and mental illnesses. My research is focused on people who have psychological conditions and medical illnesses, as well as those whose stress exacerbates their health issues.

I’ve spent my career studying mind-body questions and training physicians to treat mental illness in primary care settings. My forthcoming book is titled “Toxic Stress: How Stress is Killing Us and What We Can Do About It.”

A 2023 study of stress and aging over the life span – one of the first studies to confirm this piece of common wisdom – found that four measures of stress all speed up the pace of biological aging in midlife. It also found that persistent high stress ages people in a way comparable to the effects of smoking and low socioeconomic status, two well-established risk factors for accelerated aging.

The difference between good stress and the toxic kind

Good stress – a demand or challenge you readily cope with – is good for your health. In fact, the rhythm of these daily challenges, including feeding yourself, cleaning up messes, communicating with one another, and carrying out your job, helps to regulate your stress response system and keep you fit.

Toxic stress, on the other hand, wears down your stress response system in ways that have lasting effects, as psychiatrist and trauma expert Bessel van der Kolk explains in his bestselling book “The Body Keeps the Score.”

The earliest effects of toxic stress are often persistent symptoms such as headache, fatigue, or abdominal pain that interfere with overall functioning. After months of initial symptoms, a full-blown illness with a life of its own – such as migraine headaches, asthma, diabetes, or ulcerative colitis – may surface.

When we are healthy, our stress response systems are like an orchestra of organs that miraculously tune themselves and play in unison without our conscious effort – a process called self-regulation. But when we are sick, some parts of this orchestra struggle to regulate themselves, which causes a cascade of stress-related dysregulation that contributes to other conditions.

For instance, in the case of diabetes, the hormonal system struggles to regulate sugar. With obesity, the metabolic system has a difficult time regulating energy intake and consumption. With depression, the central nervous system develops an imbalance in its circuits and neurotransmitters that makes it difficult to regulate mood, thoughts and behaviors.

‘Treating’ stress
Though stress neuroscience in recent years has given researchers like me new ways to measure and understand stress, you may have noticed that in your doctor’s office, the management of stress isn’t typically part of your treatment plan.

Most doctors don’t assess the contribution of stress to a patient’s common chronic diseases such as diabetes, heart disease, and obesity, partly because stress is complicated to measure and partly because it is difficult to treat. In general, doctors don’t treat what they can’t measure.

Stress neuroscience and epidemiology have also taught researchers recently that the chances of developing serious mental and physical illnesses in midlife rise dramatically when people are exposed to trauma or adverse events, especially during vulnerable periods such as childhood.

Over the past 40 years in the U.S., the alarming rise in rates of diabetes, obesity, depression, PTSD, suicide, and addictions points to one contributing factor that these different illnesses share: toxic stress.

Toxic stress increases the risk for the onset, progression, complications, or early death from these illnesses.

Suffering from toxic stress
Because the definition of toxic stress varies from one person to another, it’s hard to know how many people struggle with it. One starting point is the fact that about 16% of adults report having been exposed to four or more adverse events in childhood. This is the threshold for higher risk for illnesses in adulthood.

Research dating back to before the COVID-19 pandemic also shows that about 19% of adults in the U.S. have four or more chronic illnesses. If you have even one chronic illness, you can imagine how stressful four must be.

And about 12% of the U.S. population lives in poverty, the epitome of a life in which demands exceed resources every day. For instance, if a person doesn’t know how they will get to work each day or doesn’t have a way to fix a leaking water pipe or resolve a conflict with their partner, their stress response system can never rest. One or any combination of threats may keep them on high alert or shut them down in a way that prevents them from trying to cope at all.

Add to these overlapping groups all those who struggle with harassing relationships, homelessness, captivity, severe loneliness, living in high-crime neighborhoods, or working in or around noise or air pollution. It seems conservative to estimate that about 20% of people in the U.S. live with the effects of toxic stress.

Source: https://studyfinds.org/how-much-stress-is-too-much/

Eye Stroke Cases Surge During Heatwave: Symptoms, Prevention Tips

The extreme heat can affect overall health, increasing the risk of heart diseases, brain disorders, and other organ issues.

How to take care of your eyes in summer | Image: Freepik

As heatwaves sweep across various regions, there has been a noticeable increase in eye stroke cases. This condition, also known as retinal artery occlusion, can cause sudden vision loss and is comparable to a brain stroke in its seriousness.

Impact of heatwaves on eye health 

The extreme heat can affect overall health, increasing the risk of heart diseases, brain disorders, and other organ issues. Notably, it can also lead to eye strokes due to dehydration and heightened blood pressure. Dehydration during hot weather makes the blood more prone to clotting, while high temperatures can exacerbate cardiovascular problems, raising the risk of arterial blockages.

Eye stroke

An eye stroke occurs when blood flow to the retina is obstructed, depriving it of oxygen and nutrients. This can cause severe retinal damage in minutes. Dehydration from heatwaves thickens the blood, making clots more likely, while heat stress can worsen cardiovascular conditions, further increasing eye stroke risk.

Signs and symptoms

Sudden Vision Loss: The most common symptom, this can be partial or complete, and typically painless.

Visual Disturbances: Sudden dimming or blurring of vision, where central vision is affected but peripheral vision remains intact.

Preventive measures

Stay Hydrated: Ensure adequate fluid intake to prevent dehydration.

Avoid Peak Sun Hours: Limit exposure to the sun during the hottest parts of the day.

Manage Chronic Conditions: Keep blood pressure and other chronic conditions under control.

Treatment options

Immediate Medical Attention: Urgency is crucial as delays can lead to permanent vision loss.

Source: https://www.republicworld.com/health/eye-stroke-cases-surge-during-heatwave-symptoms-prevention-tips/?amp=1

5 Hidden Effects Of Childhood Neglect

(Photo by Volurol on Shutterstock)

Trauma, abuse, and neglect — in the current cultural landscape, it’s not hard to find a myriad of discussions on these topics. But with so many people chiming in on the conversation, it’s more important now than ever to listen to what experts on the topic have to say. As we begin to understand more and more about the effects of growing up experiencing trauma and abuse, we also begin to understand that the effects of these experiences are more complex and wide-ranging than we had ever imagined.

Recent studies in the field of childhood trauma and abuse have found that these experiences can affect many aspects of our adult lives. In fact, seemingly disparate things, from our stance on vaccinations to how often we experience headaches to the judgments we make about others, can all be shaped by histories of abuse, trauma, or neglect.

Clearly, the effects of a traumatic childhood go far beyond the time when you are living in an abusive or unhealthy environment. A recent study reports that early childhood traumas can impact health outcomes decades later, potentially following you for the rest of your life. With many new and surprising effects of childhood trauma being discovered every day, it’s no wonder that so many people are interested in what exactly trauma is and how it can affect us.

So, what are the long-term ramifications of childhood neglect? For an answer to that question, StudyFinds sat down with Michael Menard, inventor-turned-author of the upcoming book, “The Kite That Couldn’t Fly: And Other May Avenue Stories,” to discuss the lesser-understood side of trauma and how it can affect us long into our adult lives.

Here is his list of five hidden effects of trauma, and some of them just might surprise you.

1. Unstable Relationships
For individuals with childhood trauma, attachment issues are an often overlooked form of collateral damage. Through infancy and early childhood, a person’s attachment style develops largely through familial bonds and is then carried into every relationship, from platonic peers to romantic partners. When that attachment develops in a loving, healthy environment, it usually serves a person well. But for children and adults with a background of neglect, it often leads to difficulty finding, developing, and keeping healthy relationships.

As Menard explains it, a childhood spent feeling invisible left scars on his adult relationship patterns. “As a child, I felt that I didn’t exist. No matter what I did, it was not recognized, so there was no reinforcement,” he says. “As a young adult, I panicked when I got ignored. I was afraid that everyone was going to leave. I also felt that I would drive people away in relationships. I would only turn to others when I needed emotional support, never when things were good. When things were good, I could handle them myself. I didn’t need anybody.”

Childhood trauma often creates adults who struggle to be emotionally vulnerable, to process feelings of anger and disappointment, and to accept support from others. And with trust as one of the most vital components of long-term, healthy relationships, it’s clear where difficulty may arise. But Menard emphasizes that a childhood of neglect should not have to mean a lifetime of distant or unstable relationships. “A large percentage of the people that I’ve talked to about struggles in their life, they think it’s life. But we were born to be healthy, happy, prosperous, and anything that is taking away from that is not good,” he says.

“The lesser known [effects] I would say are the things that cause disruption in relationships,” Menard adds. “The divorce rate is about 60%. Where does that come from? It comes from disruption and unhappiness between two people. Lack of respect, love, trust, sacrifice. And if you come into that relationship broken from childhood trauma and you don’t even know it, I’d say that’s not well known.”

2. Physical Health Issues
The most commonly discussed long-term effects of childhood neglect are usually mental and emotional ones. But believe it or not, a background of trauma can actually impact your physical health. From diabetes to cardiac disease, the toll of childhood trauma can turn distinctly physical. “Five of the top 10 diseases that kill us have been scientifically proven to come from childhood trauma,” says Menard. “I’ve got high blood pressure. I go to the doctor, and they can’t figure it out. I have diabetes, hypertension, obesity, cardiac disease, COPD—it’s now known that they have a high probability that they originated from childhood trauma or neglect. Silent killers.”

In some cases, the physical ramifications of childhood trauma may be due to long-term medical neglect. What was once a treatable issue can become a much larger and potentially permanent problem. In Menard’s case, untreated illness in his childhood meant open heart surgery in his adult years. “I’m now 73. When I was 70, my aortic valve closed. I had to have four open heart surgeries in two months — almost died three times,” he explains. “Now, can I blame that on childhood trauma? I can, because I had strep throat repeatedly as a child without medication. One episode turned into rheumatic fever that damaged my aortic valve. 50 years later, I’m having my chest opened up.”

From loss of sleep to chronic pain, the physical manifestations of a neglectful childhood can be painful and difficult. But beyond that, they often go entirely overlooked. For many people, this can feel frustrating and invalidating. For others, they may not know themselves that their emotional pain could be having physical ramifications. As Menard puts it, “things are happening to people that they think [are just] part of life, and [they’re] not.”

3. Mental Health Struggles
Growing up in an abusive or neglectful environment can have a variety of negative effects on children. However, one of the most widely discussed and understood consequences is that of their mental health. “Forty-one percent of all depression in the United States is caused by childhood trauma or comes from childhood trauma,” notes Menard. And this connection between trauma and mental illness goes far beyond just depression. In fact, a recent study found a clear link between experiences of childhood trauma and various mental illnesses including anxiety, depression, and substance use disorders.

Of course, depression and anxiety are also compounded when living in an environment lacking the proper love, support, and encouragement that a child deserves to grow up in. For Menard, growing up in a home with 16 people did little to keep the loneliness at bay. “I just thought it was normal—being left out,” Menard says. “We all need to trust, and we need to rely on people. But if you become an island and self-reliant, not depending on others, you become isolated.”

In some cases, the impact of mental health can also do physical damage. In one example, Menard notes an increased likelihood for eating disorders. “Mine came from not having enough food,” he says. “I get that, but there are all types of eating disorders that come from emotional trauma.”

4. Acting Out

For most children, the model set by the behavior of their parents lays the foundation for their own personal growth and development. However, kids who lack these positive examples of healthy behavior are less likely to develop important traits like empathy, self-control, and responsibility. Menard is acutely aware of this, stating, “Good self-care and self-discipline are taught. It goes down the drain when you experience emotional trauma.” Children who are not given proper role models for behavior will often instead mimic the anger and aggressive behaviors prevalent in emotionally neglectful or abusive households.

“My wife is a school teacher and she could tell immediately through the aggressive behavior of even a first grader that there were multiple problems,” adds Menard. However, his focus is less on pointing fingers at the person who is displaying these negative behaviors, and more about understanding what made them act this way in the first place. “It’s not about what’s wrong with you, it’s about what happened to you.”

However, for many, the negative influence extends beyond simple bad behavior. Menard also describes being taught by his father to steal steaks from the restaurant where he worked at the age of 12. This was not only what his father encouraged him to do, but also what seemed completely appropriate to him because of how he had been raised. “I’d bring steaks home for him, and when he got off the factory shift at midnight, that seemed quite okay,” Menard says. “It seemed quite normal. And it’s horrible. Everybody’s searching to try to heal that wound and they don’t know why they’re doing it.”

Source: https://studyfinds.org/5-hidden-effects-of-childhood-neglect/

You won’t believe how fast people adapt to having an extra thumb

The Third Thumb worn by different users (CREDIT: Dani Clode Design / The Plasticity Lab)

Will human evolution eventually give us a sixth finger? If it does, a new study is showing that we’ll have no trouble using an extra thumb! It may sound like science fiction, but researchers have shown that people of all ages can quickly learn how to use an extra, robotic third thumb.

The findings, in a nutshell
A team at the University of Cambridge developed a wearable, prosthetic thumb device and had nearly 600 people from diverse backgrounds try it out. The results in the journal Science Robotics were astonishing: 98% of participants could manipulate objects using the third thumb within just one minute of picking it up and getting brief instructions.

The researchers put people through simple tasks like moving pegs from a board into a basket using only the robotic thumb. They also had people use the device along with their real hand to manipulate oddly-shaped foam objects, testing hand-eye coordination. People, both young and old, performed similarly well on the tasks after just a little practice. This suggests we may be surprisingly adept at integrating robotic extensions into our sense of body movement and control.

While you might expect those with hand-intensive jobs or hobbies to excel, that wasn’t really the case. Most everyone caught on quickly, regardless of gender, handedness, age, or experience with manual labor. The only groups that did noticeably worse were the very youngest children under age 10 and the oldest seniors. Even so, the vast majority in those age brackets still managed to use the third thumb effectively with just brief training.

Professor Tamar Makin and designer Dani Clode have been working on Third Thumb for several years. One of their initial tests in 2021 demonstrated that the 3D-printed prosthetic thumb could be a helpful extension of the human hand. In a test with 20 volunteers, it even helped participants complete tasks while blindfolded!

Designer Dani Clode with her ‘Third Thumb’ device. (Credit: Dani Clode)

How did scientists test the third thumb?
For their inclusive study, the Cambridge team recruited 596 participants between the ages of three and 96. The group comprised an intentionally diverse mix of demographics to ensure the robotic device could be effectively used by all types of people.

The Third Thumb device itself consists of a rigid, controllable robotic digit worn on the opposite side of the hand from the normal thumb. It’s operated by foot sensors – pressing with the right foot pulls the robotic thumb inward across the palm while the left foot pushes it back out toward the fingertips. Releasing foot pressure returns the thumb to its resting position.

During testing at a science exhibition, each participant received up to one minute of instructions on how to control the device and perform one of two simple manual tasks. The first had them individually pick up pegs from a board using just the third thumb and drop as many as possible into a basket within 60 seconds. The second required them to manipulate a set of irregularly-shaped foam objects using the robotic thumb in conjunction with their real hand and fingers.

Detailed data was collected on every participant’s age, gender, handedness, and even occupations or hobbies that could point to exceptional manual dexterity skills. This allowed the researchers to analyze how user traits and backgrounds affected performance with the third thumb device after just a minute’s practice. The stark consistency across demographics proved its intuitive usability.

Source: https://studyfinds.org/people-adapt-to-extra-thumb/

Mysterious layer inside Earth may come from another planet!

3D illustration showing layers of the Earth in space. (© Destina – stock.adobe.com)

From the surface to the inner core, Earth has several layers that continue to be a mystery to science. Now, it turns out one of these layers may consist of material from an entirely different planet!

Deep within our planet lies a mysterious, patchy layer known as the D” layer. Located a staggering 3,000 kilometers (1,860 miles) below the surface, this zone sits just above the boundary separating Earth’s molten outer core from its solid mantle. Far from forming a uniform shell, the D” layer varies drastically in thickness around the globe, and some regions lack it altogether – much like how continents poke through the oceans on Earth’s surface.

These striking variations have long puzzled geophysicists, who describe the D” layer as heterogeneous, meaning non-uniform in its composition. However, a new study might finally shed light on this deep enigma, proposing that the D” layer could be a remnant of another planet that collided with Earth during its early days, billions of years ago.

The findings, in a nutshell
The research, published in National Science Review and led by Dr. Qingyang Hu from the Center for High Pressure Science and Technology Advanced Research and Dr. Jie Deng from Princeton University, draws upon the widely accepted Giant Impact hypothesis. This hypothesis suggests that a Mars-sized object violently collided with the proto-Earth, creating a global ocean of molten rock, or magma, in the aftermath.

Hu and Deng believe the D” layer’s unique composition may be the leftover fallout from this colossal impact, potentially holding valuable clues about our planet’s formation. A key aspect of their theory involves the presence of substantial water within this ancient magma ocean. While the origin of this water remains up for debate, the researchers are focusing on what happened as the molten rock began to cool.

“The prevailing view,” Dr. Deng explains in a media release, “suggests that water would have concentrated towards the bottom of the magma ocean as it cooled. By the final stages, the magma closest to the core could have contained water volumes comparable to Earth’s present-day oceans.”

Is there a hidden ocean inside the Earth?
This water-rich environment at the bottom of the magma ocean would have created extreme pressure and temperature conditions, fostering unique chemical reactions between water and minerals.

“Our research suggests this hydrous magma ocean favored the formation of an iron-rich phase called iron-magnesium peroxide,” Dr. Hu elaborates.

This peroxide, which has a chemical formula of (Fe,Mg)O2, has an even stronger affinity for iron compared to other major components expected in the lower mantle.

“According to our calculation, its affinity to iron could have led to the accumulation of iron-dominant peroxide in layers ranging from several to tens of kilometers thick,” Hu explains.

The presence of such an iron-rich peroxide phase would alter the mineral composition of the D” layer, deviating from our current understanding. According to the new model proposed by Hu and Deng, minerals in the D” layer would be dominated by an assemblage of iron-poor silicate, iron-rich (Fe,Mg) peroxide, and iron-poor (Fe,Mg) oxide. Interestingly, this iron-dominant peroxide also possesses unique properties that could explain some of the D” layer’s puzzling geophysical features, such as ultra-low velocity zones and layers of high electrical conductance — both of which contribute to the D” layer’s well-known compositional heterogeneity.

Source: https://studyfinds.org/layer-inside-earth-another-planet/

Targeting ‘monster cells’ may keep cancer from returning after treatment

Targeted Cancer Therapy Illustration (© Riz – stock.adobe.com)

Cancer can sometimes come back, even after undergoing chemotherapy or radiation treatments. Why does this happen? Researchers at the MUSC Hollings Cancer Center may have unlocked part of the mystery. They discovered that cancer cells can transform into monstrous “polyploid giant cancer cells” or PGCCs when under extreme stress from treatment. With that in mind, scientists believe targeting these cells could be the key to preventing recurrences of cancer.

The findings, in a nutshell
Study authors, who published their work in the Journal of Biological Chemistry, found that these bizarre, monster-like cells have multiple nuclei crammed into a single, enlarged cell body. At first, the researchers thought PGCCs were doomed freaks headed for cellular destruction. However, they realized PGCCs could actually spawn new “offspring” cancer cells after the treatment ended. It’s these rapidly dividing daughter cells that likely drive cancer’s resurgence in some patients. Blocking PGCCs from reverting and generating these daughter cells could be the strategy that keeps the disease from returning.

The scientists identified specific genes that cancer cells crank up to become PGCCs as a survival mechanism against harsh therapy. One gene called p21 seems particularly important. In healthy cells it stops DNA replication if damage occurs, but in cancer cells lacking p53 regulation, p21 allows replication of damaged DNA to continue, facilitating PGCC formation.

PGCCs could actually spawn new “offspring” cancer cells after treatments like chemotherapy have ended. (© RFBSIP – stock.adobe.com)

How did scientists make the discovery?
Originally, the Hollings team was studying whether an experimental drug inhibitor could boost cancer cell death when combined with radiation therapy. However, their initial experiments showed no extra killing benefit from the combination treatment. Discouraged, they extended the experiment timeline, and that’s when they noticed something very strange.

While the inhibitor made no difference in the short term, over a longer period, the scientists observed the emergence of bizarre, bloated “monster” cancer cells containing multiple nuclei. At first, they assumed these polyploid giant cancer cells (PGCCs) were doomed mutations that would naturally die off in the patient’s body. Then, researchers saw the PGCCs were generating rapidly dividing offspring cells around themselves, mimicking tumor recurrence.

This made the team rethink the inhibitor’s effects. It didn’t increase cancer cell killing, but it did seem to stop PGCCs from reverting to a state where they could spawn proliferating daughter cells. Blocking this reversion to divisible cells could potentially prevent cancer relapse after treatment.

The researchers analyzed gene expression changes as cancer cells transformed into PGCCs and then back into dividing cells. They identified molecular pathways involved, like p21 overexpression, which allows duplication of damaged DNA. Ultimately, combining their inhibitor with radiation prevented PGCC reversion and daughter cell generation, providing a possible novel strategy against treatment-resistant cancers.

What do the researchers say?
“We initially thought that combination of radiation with the inhibitor killed cancer cells better,” says research leader Christina Voelkel-Johnson, Ph.D., in a media release. “It was only when the inhibitor failed to make a difference in short-term experiments that the time frame was extended, which allowed for an unusual observation.”

Source: https://studyfinds.org/monster-cells-cancer-returning/

Average person wastes more than 2 hours ‘dreamscrolling’ every day!

(Photo by Perfect Wave on Shutterstock)

NEW YORK — The average American spends nearly two and a half hours a day “dreamscrolling” — looking at dream purchases or things they’d like to one day own. While some might think you’re just wasting your day, a whopping 71% say it’s time well spent, as the habit motivates them to reach their financial goals.

In a recent poll of 2,000 U.S. adults, more than two in five respondents (43%) say they spend more time dreamscrolling when the economy is uncertain. Over a full year, that daily habit adds up to about 873 hours, or nearly 36 days, spent scrolling.
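(As a quick check of the math, assuming the nearly two-and-a-half-hour daily average reported above: roughly 2.4 hours × 365 days ≈ 873 hours, and 873 hours ÷ 24 ≈ 36 days.)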

Conducted by OnePoll on behalf of financial services company Empower, the survey reveals half of the respondents say they dreamscroll while at work. Of those daydreaming employees, one in five admit to spending between three and four hours a day multitasking while at their job.

Gen Zers spend the most time dreamscrolling at just over three hours per day, while boomers spend the least, clocking in around an hour of fantasy purchases and filling wish lists. Americans say looking at dream purchases makes it easier for them to be smart with their money (56%), avoid making unplanned purchases or going into debt (30%), and better plan to achieve their financial goals (25%).

Nearly seven in 10 see dreamscrolling as an investment in themselves (69%) and an outlet for them to envision what they want out of life (67%). Four in 10 respondents (42%) say they regularly spend time picturing their ideal retirement — including their retirement age, location, and monthly expenses.

A whopping 71% say dreamscrolling is time well spent, as the habit motivates them to reach their financial goals. (© Antonioguillem – stock.adobe.com)

Many respondents are now taking the American dream online, with one in five respondents scrolling through listings of dream homes or apartments. Meanwhile, some are just browsing through vacation destinations (25%), beauty or self-care products (23%), and items for their pets (19%). Many others spend time looking at clothing, shoes, and accessories (49%), gadgets and technology (30%), and home décor or furniture (29%).

More than half (56%) currently have things left open in tabs and windows or saved in shopping carts that they’d like to purchase or own in the future. Those respondents estimate it would cost about $86,593.40 to afford everything they currently have saved.

Almost half of Americans say they are spending more time dreamscrolling now than in previous years (45%), and 56% plan on buying something on their dream list before this year wraps. While 65% are optimistic they’ll be able to one day buy everything on their list, nearly one in four say they don’t think they’ll ever be able to afford the majority of items (23%).

More than half (51%) say owning their dream purchases would make them feel more financially secure, and close to half say working with a financial professional would help them reach their goals (47%). Others feel they have more work to do: 34% say they’ve purchased fewer things on their dream list than they should at their age, with millennials feeling the most behind (39%).

Rising prices (54%), the inability to save money (29%), and growing debt (21%) are the top economic factors that may be holding some Americans back. Instead of doom spending, dreamscrolling has had a positive impact on Americans’ money habits: respondents say they better understand their financial goals (24%) as a result.

Source: https://studyfinds.org/shopping-browsing-cant-afford/

Who really was Mona Lisa? 500+ years on, there’s good reason to think we got it wrong

Visitors looking at the Mona Lisa (Credit: pixabay.com)

In the pantheon of Renaissance art, Leonardo da Vinci’s Mona Lisa stands as an unrivalled icon. This half-length portrait is more than just an artistic masterpiece; it embodies the allure of an era marked by unparalleled cultural flourishing.

Yet, beneath the surface of the Mona Lisa’s elusive smile lies a debate that touches the very essence of the Renaissance, its politics and the role of women in history.

A mystery woman

The intrigue of the Mona Lisa, also known as La Gioconda, isn’t solely due to Leonardo’s revolutionary painting techniques. It’s also because the identity of the subject is unconfirmed to this day. More than five centuries after it was first painted, the real identity of the Mona Lisa remains one of art’s greatest mysteries, intriguing scholars and enthusiasts alike.

A Mona Lisa painting from the workshop of Leonardo da Vinci, held in the collection of the Museo del Prado in Madrid, Spain. Collection of the Museo del Prado

The painting has traditionally been associated with Lisa Gherardini, the wife of Florentine silk merchant Francesco del Giocondo. But another compelling theory suggests a different sitter: Isabella of Aragon.

Isabella of Aragon was born into the illustrious House of Aragon in Naples, in 1470. She was a princess who was deeply entwined in the political and cultural fabric of the Renaissance.

Her 1490 marriage to Gian Galeazzo Sforza, Duke of Milan, positioned Isabella at the heart of Italian politics. And this role was both complicated and elevated by the ambitions and machinations of Ludovico Sforza (also called Ludovico il Moro), her husband’s uncle and usurper of the Milanese dukedom.

In The Virgin and Child with Four Saints and Twelve Devotees, by (unknown) Master of the Pala Sforzesca, circa 1490, Gian Galeazzo Sforza is shown in prayer facing his wife, Isabella of Aragon (identified by her heraldic red and gold). National Gallery

Scholarly perspectives
The theory that Isabella is the real Mona Lisa is supported by a combination of stylistic analyses, historical connections and reinterpretations of Leonardo’s intent as an artist.

In his biography of Leonardo, author Robert Payne points to preliminary studies by the artist that bear a striking resemblance to Isabella around age 20. Payne suggests Leonardo captured Isabella across different life stages, including during widowhood, as depicted in the Mona Lisa.

U.S. artist Lillian F. Schwartz’s 1988 study used x-rays to reveal an initial sketch of a woman hidden beneath Leonardo’s painting. This sketch was then painted over with Leonardo’s own likeness.

Schwartz believes the woman in the sketch is Isabella, because of its similarity with a cartoon Leonardo made of the princess. She proposes the work was made by integrating specific features of the initial model with Leonardo’s own features.

An illustration of Isabella of Aragon from the Story of Cremona by Antonio Campi. Library of Congress

This hypothesis is further supported by art historians Jerzy Kulski and Maike Vogt-Luerssen.

According to Vogt-Luerssen’s detailed analysis of the Mona Lisa, the symbols of the Sforza house and the depiction of mourning garb both align with Isabella’s known life circumstances. They suggest the Mona Lisa isn’t a commissioned portrait, but a nuanced representation of a woman’s journey through triumph and tragedy.

Similarly, Kulski highlights the portrait’s heraldic designs, which would be atypical for a silk merchant’s wife. He, too, suggests the painting shows Isabella mourning her late husband.

The Mona Lisa’s enigmatic expression also captures Isabella’s self-described state post-1500 of being “alone in misfortune.” Contrary to representing a wealthy, recently married woman, the portrait exudes the aura of a virtuous widow.

Late professor of art history Joanna Woods-Marsden suggested the Mona Lisa transcends traditional portraiture and embodies Leonardo’s ideal, rather than being a straightforward commission.

This perspective frames the work as a deeply personal project for Leonardo, possibly signifying a special connection between him and Isabella. Leonardo’s reluctance to part with the work also indicates a deeper, personal investment in it.

Beyond the canvas
The theory that Isabella of Aragon could be the true Mona Lisa is a profound reevaluation of the painting’s context, opening up new avenues through which to appreciate the work.

It elevates Isabella from a figure overshadowed by the men in her life, to a woman of courage and complexity who deserves recognition in her own right.

Source: https://studyfinds.org/who-really-was-mona-lisa-500-years-on-theres-good-reason-to-think-we-got-it-wrong/

Scientists discover what gave birth to Earth’s unbreakable continents

Photo by Brett Zeck from Unsplash

The Earth beneath our feet may feel solid, stable, and seemingly eternal. But the continents we call home are unique among our planetary neighbors, and their formation has long been a mystery to scientists. Now, researchers believe they may have uncovered a crucial piece of the puzzle: the role of ancient weathering in shaping Earth’s “cratons,” the most indestructible parts of our planet’s crust.

Cratons are the old souls of the continents, forming roughly half of Earth’s continental crust. Some date back over three billion years and have remained largely unchanged ever since. They form the stable hearts around which the rest of the continents have grown. For decades, geologists have wondered what makes these regions so resilient, even as the plates shift and collide around them.

It turns out that the key may lie not in the depths of the Earth but on its surface. A new study out of Penn State and published in Nature suggests that subaerial weathering – the breakdown of rocks exposed to air – may have triggered a chain of events that led to the stabilization of cratons billions of years ago, during the Neoarchaean era, around 2.5 to 3 billion years ago.

These ancient metamorphic rocks called gneisses, found on the Arctic Coast, represent the roots of the continents now exposed at the surface. The scientists said sedimentary rocks interlayered in these types of rocks would provide a heat engine for stabilizing the continents. Credit: Jesse Reimink. All Rights Reserved.

To understand how this happened, let’s take a step way back in time. In the Neoarchaean, Earth was a very different place. The atmosphere contained little oxygen, and the continents were mostly submerged beneath a global ocean. But gradually, land began to poke above the waves – a process called continental emergence.

As more rock was exposed to air, weathering rates increased dramatically. When rocks weather, they release their constituent minerals, including radioactive elements like uranium, thorium, and potassium. These heat-producing elements, or HPEs, are crucial because their decay generates heat inside the Earth over billions of years.

The researchers propose that as the HPEs were liberated by weathering, they were washed into sediments that accumulated in the oceans. Over time, plate tectonic processes would have carried these sediments deep into the crust, where the concentrated HPEs could really make their presence felt.

Buried at depth and heated from within, the sediments would have started to melt. This would have driven what geologists call “crustal differentiation” – the separation of the continental crust into a lighter, HPE-rich upper layer and a denser, HPE-poor lower layer. It’s this layering, the researchers argue, that gave cratons their extraordinary stability.

The upper crust, enriched in HPEs, essentially acted as a thermal blanket, keeping the lower crust and the mantle below relatively cool and strong. This prevented the kind of large-scale deformation and recycling that affected younger parts of the continents.

Interestingly, the timing of craton stabilization around the globe supports this idea. The researchers point out that in many cratons, the appearance of HPE-enriched sedimentary rocks precedes the formation of distinctive Neoarchaean granites – the kinds of rocks that would form from the melting of HPE-rich sediments.

The rocks on the left are old rocks that have been deformed and altered many times. They are juxtaposed next to an Archean granite on the right side. The granite is the result of melting that led to the stabilization of the continental crust. Credit: Matt Scott. All Rights Reserved.

Furthermore, metamorphic rocks – rocks transformed by heat and pressure deep in the crust – also record a history consistent with the model. Many cratons contain granulite terranes, regions of the deep crust uplifted to the surface that formed in the Neoarchaean. These granulites often have compositions that suggest they formed from the melting of sedimentary rocks.

So, the sequence of events – the emergence of continents, increased weathering, burial of HPE-rich sediments, deep crustal melting, and finally, craton stabilization – all seem to line up.

Source: https://studyfinds.org/earths-unbreakable-continents/

The 7 Fastest Animals In The World: Can You Guess Them All?

Cheetah (Photo by David Groves on Unsplash)

Move over Usain Bolt, because in the animal kingdom, speed takes on a whole new meaning! Forget sprinting at a measly 28 mph – these record-breaking creatures can leave you in the dust (or water, or sky) with their mind-blowing velocity. From lightning-fast cheetahs hunting down prey on the African savanna to majestic peregrine falcons diving from incredible heights, these animals rely on their extreme speed to survive and thrive in the wild. So, buckle up as we explore the top seven fastest animals on Earth.

The animal kingdom is brimming with speedsters across different habitats. We’re talking about fish that can zoom by speedboats, birds that plummet from the sky at breakneck speeds, and even insects with lightning-fast reflexes. Below is our list of the consensus top seven fastest animals in the world. We want to hear from you too! Have you ever encountered an animal with incredible speed? Share your stories in the comments below, and let’s celebrate the awe-inspiring power of nature’s speed demons!

The List: Fastest Animals in the World, Per Wildlife Experts

1. Peregrine Falcon – 242 MPH

Peregrine Falcon (Photo by Vincent van Zalinge on Unsplash)

The peregrine falcon takes the title of the fastest animal in the world, able to achieve speeds of 242 miles per hour. These birds don’t break the sound barrier by flapping their wings like crazy. Instead, they use gravity as their accomplice, raves The Wild Life. In the blink of an eye, the falcon can plummet towards its prey, like a fighter jet in a vertical dive. These dives can exceed 200 miles per hour – relative to body size, that’s the equivalent of a human running at over 380 mph! That’s fast enough to make even the speediest sports car look like a snail.

That prominent bulge of this falcon’s chest cavity isn’t just for show – it’s a keel bone, and it acts like a supercharged engine for their flight muscles. A bigger keel bone translates to more powerful wing strokes, propelling the falcon forward with incredible force, explains A-Z Animals. These birds also boast incredibly stiff, tightly packed feathers that act like a high-performance suit, reducing drag to an absolute minimum. And the cherry on top? Their lungs and air sacs are designed for one-way airflow, meaning they’re constantly topped up with fresh oxygen, even when exhaling. This ensures they have the fuel they need to maintain their breakneck dives.

These fast falcons might be the ultimate jet setters of the bird world, but they’re not picky about their digs. The sky-dwelling predators are comfortable calling a variety of landscapes home, as long as there’s open space for hunting, writes One Kind Planet. They can be found soaring over marshes, estuaries, and even skyscrapers, always on the lookout for unsuspecting prey.

2. Golden Eagle – 200 MPH

Golden Eagle (Photo by Mark van Jaarsveld on Unsplash)

The golden eagle is a large bird that is well known for its powerful and fast flight. These majestic birds can reach speeds of up to 199 mph during a hunting dive, says List 25. Just like the peregrine falcon, the golden eagle uses a hunting technique called a stoop. With a powerful tuck of its wings, the eagle plummets towards its target in a breathtaking dive.

They are undeniably impressive birds, with a wingspan that can stretch up to eight feet wide! Imagine an athlete being able to run at 179 miles per hour! That’s what a golden eagle achieves in a dive, reaching speeds of up to 87 body lengths per second, mentions The Wild Life. The air rushes past its feathers, creating a whistling sound as it picks up speed, hurtling toward its prey.

They also use these impressive dives during courtship rituals and even playful moments, states Live Science. Picture two golden eagles soaring in tandem, one diving after the other in a dazzling aerial ballet. It’s a display of both power and grace that reaffirms their status as the ultimate rulers of the skies. Their habitat range stretches across the northern hemisphere, including North America, Europe, Africa, and Asia, according to the International Union for Conservation of Nature (IUCN). So next time you see a golden eagle circling above, remember – it’s more than just a bird, it’s a living embodiment of speed, skill, and breathtaking beauty.

3. Black Marlin – 80 MPH

A Black Marlin jumping out of the water (Photo by Finpat on Shutterstock)

The ocean is a vast and mysterious realm, teeming with incredible creatures. And when it comes to raw speed, the black marlin is a high-performance athlete of the sea. They have a deep, muscular body built for cutting through water with minimal resistance, informs Crosstalk. Think of a sleek racing yacht compared to a clunky rowboat. Plus, their dorsal fin is lower and rounder, acting like a spoiler on a race car, reducing drag and allowing for a smoother ride through the water. Their “spears,” those sharp protrusions on their snouts, are thicker and more robust than other marlins. These aren’t just for show – they’re used to slash and stun prey during a hunt.

Some scientists estimate their burst speed at a respectable 22 mph. That’s impressive, but here’s where the debate gets interesting. Some reports claim black marlin can pull fishing line at a staggering 120 feet per second! When you do the math, that translates to a whopping 82 mph, according to Story Teller. This magnificent fish calls shallow, warm waters home; its ideal habitat boasts water temperatures between 59 and 86 degrees Fahrenheit – basically, a permanent summer vacation!
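(The conversion behind that figure: 120 feet per second × 3,600 seconds per hour ÷ 5,280 feet per mile ≈ 82 mph.)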

The secret behind its impressive swimming prowess lies in its tail. Unlike the rounded tails of many fish, black marlin possess crescent-shaped tails, explains A-Z Animals. With a powerful flick, they can propel themselves forward with incredible bursts of speed. This marlin also boasts a long, thin, and sharp bill that cuts through water, offering minimal resistance as it surges forward. But that’s not all. Black marlin also have rigid pectoral fins that act like perfectly sculpted wings. These fins aren’t for flapping – they provide stability and lift, allowing the marlin to maintain a streamlined position in the water.

4. Cheetah – 70 MPH

Adult and cheetah pup on green grass during daytime (Photo by Sammy Wong on Unsplash)

The cheetah is Africa’s most endangered large cat and also the world’s fastest land animal. Their bodies are built for pure velocity, with special adaptations that allow them to go from zero to sixty in a mind-blowing three seconds, shares Animals Around The Globe. Each stride stretches an incredible seven meters, eating up the ground with astonishing speed. But they can only maintain their high speeds for short bursts.

Unlike its stockier lion and tiger cousins, the cheetah boasts a lean, streamlined physique that makes them aerodynamic. But the real innovation lies in the cheetah’s spine. It’s not just a rigid bone structure – it’s a flexible marvel, raves A-Z Animals. With each powerful push, this springy spine allows the cheetah to extend its strides to incredible lengths, propelling it forward with tremendous force. And finally, we come to the engine room: the cheetah’s muscles. Packed with a high concentration of “fast-twitch fibers,” these muscles are specifically designed for explosive bursts of speed. Think of them as tiny, built-in turbochargers that give the cheetah that extra surge of power when it needs it most.

These magnificent cats haven’t always been confined to the dry, open grasslands of sub-Saharan Africa. Cheetahs were once widespread across both Africa and Asia, but their range has shrunk dramatically due to habitat loss and dwindling prey populations, says One Kind Planet. Today, most cheetahs call protected natural reserves and parks home.

Source: https://studyfinds.org/fastest-animals-in-the-world/
